Roman Jakobson's Conception of «Sprachbund»

Helmut SCHALLER, Univ. de Marburg: «The Conception of the Sprachbund in R. Jakobson»

At the First International Congress of Linguists in 1928, Trubetzkoy for the first time developed the conception of the Sprachbund, in which he took into consideration not only the phonic point of view, but also the morphological, syntactic and lexical points of view on the proximity of languages belonging to different families; above all, he took as his object of reflection the proximity of the Slavic and non-Slavic Balkan languages. For Jakobson, however, it is the phonological point of view that lies at the heart of the Sprachbund concept, with the consequence that, on the basis of phonological correlations, he was able to gather vast zones of neighbouring languages into Sprachbünde. It was in this way that he developed the concept of the Eurasian Sprachbund, whose common mark is the correlation of palatalization. What we find between Trubetzkoy and Jakobson are particularities within differences, but certainly not contradictions in the conception of the Sprachbund.

To illustrate the vagueness of the notion «Sprachbund» since Trubetzkoy and Jakobson, I should like to make a survey of its usage and then attempt to come to some definition with special reference to the «Balkansprachbund». The notion «Sprachbund» was first mooted by N. Trubetzkoy (first of all known as the founder of the phonological method) in 1923 in «Vavilonskaja bašnja i smešenie jazykov», then at the First International Congress of Linguists in The Hague in 1928, in order to add to language families and groups another term, one which takes into account the linguistic peculiarities which have arisen from mutual influences between languages. Trubetzkoy writes:

Viele Missverständnisse und Fehler entstehen dadurch, dass die Sprachforscher die Ausdrücke Sprachgruppe und Sprachfamilie ohne genügende Vorsicht und in zu wenig bestimmter Bedeutung gebrauchen.

Trubetzkoy therefore made the following suggestions:

Unter den Sprachgruppen sind zwei Typen zu unterscheiden: Gruppen, bestehend aus Sprachen, die eine große Ähnlichkeit in syntaktischer Hinsicht, eine Ähnlichkeit in den Grundsätzen des morphologischen Baus aufweisen, und eine große Zahl gemeinsamer Kulturwörter bieten, manchmal auch äußere Ähnlichkeit im Bestande der Lautsysteme, - dabei aber keine systematischen Lautentsprechungen, keine Übereinstimmungen in der lautlichen Gestalt der morphologischen Elemente und keine gemeinsamen Elementarwörter besitzen, - solche Sprachgruppen nennen wir Sprachbünde. Gruppen, bestehend aus Sprachen, die eine beträchtliche Anzahl von Elementarwörtern besitzen, Übereinstimmungen im lautlichen Ausdruck morphologischer Kategorien aufweisen, und vor allem konstante Lautentsprechungen bieten, - solche Sprachgruppen nennen wir Sprachfamilien. (Trubetzkoy, 1928, p. 17-18)

Among the Sprachbünde that have since been proposed is a Baltic Sprachbund comprising Lithuanian, Latvian, Estonian and North Kashubian, and finally also some of the North German dialects. The German Baltist Viktor Falkenhahn (1963) attempted to establish a Lithuanian-Polish Sprachbund on the grounds of the similarities between the two languages in verbal rection. The two languages belong to different families, yet there is only one pattern which they share. If one compares the various Sprachbünde within and outside Europe contrastively, above all the question arises what the common characteristics of the constituent languages of a Sprachbund are.
Thus there are postulated Sprachbünde which have only one linguistic characteristic, e.g. polytony in the Baltic Sprachbund as a phonological conformity, or verbal rection in the Polish-Lithuanian Sprachbund as a syntactic conformity. In contrast to these, there are quite a number of linguistic similarities in the Balkansprachbund. But strictly speaking, all the Sprachbünde mentioned above share only one characteristic, namely that they consist of languages of various families, as Trubetzkoy laid down as early as 1923 and 1928. The question which arises again and again as to how many similarities are required to constitute a Sprachbund has led to a subdivision into two kinds of Sprachbünde, namely the intensive and the extensive. As an example of the extensive Sprachbund we may take the Baltic Sprachbund, with polytony as its single characteristic, whereas for the intensive Sprachbund the Balkansprachbund may serve as the prime example, standing out on account of its various correspondences, in the phonetic field as well as in other linguistic fields. Unlike «language», which is a fixed concept, the word Sprachbund can be replaced by more or less synonymous terms like «Sprachverband» or «Sprachenbund», at least with regard to German terminology. If we try to find the genus proximum, I would suggest the notion of the family of languages, in which the languages are closer to each other in their genetic similarities than in a Sprachbund, whose similarities are of a typological order: the differentia specifica. The characteristics of a Sprachbund, as mentioned above, arise from mutual influences. Therefore a definition of the term Sprachbund could be made as follows: In contrast to the genetically defined family of languages (genus proximum), the Sprachbund comprises a typologically defined group of geographically neighbouring languages whose common features are derived from mutual influences (differentia specifica). Neither an extensive nor an intensive Sprachbund can consist of only two languages, with the exception of the Lithuanian-Polish Sprachbund. It is questionable whether the extensive Sprachbund with only one common feature is in line with the definition of a Sprachbund. Therefore one might arrive at the following extended definition: In contrast to genetically defined families of languages, the Sprachbund comprises a typologically defined group of at least three geographically neighbouring languages, whose common features are derived from mutual influences. Not only Trubetzkoy's, but also Jakobson's influence on the development of linguistics has been very great. The latter was one of the founders and movers of the Prague Linguistic Circle. On the basis of the new structuralist concepts, he set forth theories like that of an extensive Sprachbund and illustrated it with demonstrations based on Slavic and other languages. Thus he examined prosodic problems of languages as diverse as Ancient Greek, Norwegian and Chinese. Also, Slavic accentological evidence plays a small or secondary role in his works on phonological «convergence areas», Sprachbünde, particularly the «Eurasian linguistic alliance», said to be characterized by the combination of accentual monotony and distinctive palatalization in consonants. Yet, although the phenomenon is familiar, the term «Sprachbund», introduced by Trubetzkoy and Jakobson, is admittedly unsatisfactory.
Its fundamental fault seems to be that it implies a unit, as if a language either were or were not a member of a given Sprachbund. U. Weinreich (1948, p. 378) proposes that it would be preferable to abandon these terms and speak simply of cases of convergent development and, if necessary, of convergence areas. He would then say that in the Caribbean area, as for example in the Balkans, a number of Indo-European languages have undergone intensive convergent developments. So we can sum up, in the sense of N. Trubetzkoy, that many misunderstandings and mistakes originated from the fact that linguists used the notions «language group» and «language family» without sufficient examination and with insufficiently defined meaning. Within language groups we have to distinguish two different types. Groups consisting of languages which show a great similarity in syntax, similarity in the principles of morphological structure, and also a great number of common cultural words, sometimes an external similarity in the stock of their phonetic systems, but no systematic phonetic correspondences, no identity in the phonetic shape of morphological elements and no common elementary words: these groups of languages are named 'Sprachbund'. Groups consisting of languages which show a great number of common words, identity of morphological categories, and last but not least fixed phonetic correspondences: these groups of languages are named language families. So we have two categories of Sprachbund: the intensive one, constituted by N. Trubetzkoy, and the extensive one, constituted by Roman Jakobson and based on phonological marks alone, in contrast to the phonological, morphological, syntactic and even lexical marks of the Balkansprachbund. Both concepts of Sprachbund, the intensive and the extensive one, are discussed up to today, and so in 1996 we remember the great ideas of Roman Jakobson and the Prague School of linguistics.
Cavity tests of parity-odd Lorentz violations in electrodynamics

Electromagnetic resonant cavities form the basis for a number of modern tests of Lorentz invariance. The geometry of most of these experiments implies unsuppressed sensitivities to parity-even Lorentz violations only. Parity-odd violations typically enter through suppressed boost effects, causing a reduction in sensitivity by roughly four orders of magnitude. Here we discuss possible techniques for achieving unsuppressed sensitivities to parity-odd violations by using asymmetric resonators.

I. INTRODUCTION

In recent years, renewed interest in precision tests of relativity has resulted in a number of modern versions of the classic Michelson-Morley [1] and Kennedy-Thorndike [2] experiments [3]. These tests are motivated in part by the observation that attempts to quantize gravity may lead to tiny violations of Lorentz invariance at attainable energies. While originally conceived within the context of spontaneous symmetry breaking in string theory [4,5], a number of other possible origins have been proposed [6,7,8,9]. Remarkably, naive estimates suggest that these violations may be within reach of contemporary experiment [10,11,12]. Lorentz invariance includes covariance under both rotations and boosts. Traditionally, Michelson-Morley-type experiments test rotational invariance, while Kennedy-Thorndike experiments focus on boost symmetry. Modern versions are normally sensitive to both types of violations, but sensitivities to boost effects are usually suppressed relative to rotational violations due to the small velocities involved in most experiments. The symmetry of most resonators implies that only parity-even violations of Lorentz invariance are observable in Michelson-Morley tests and are detectable at unsuppressed levels. Parity-breaking and isotropic Lorentz violations enter at first and second order in small velocities, respectively, causing reduced sensitivities. While sensitivities to Lorentz violations in photons continue to improve, a substantial increase in sensitivity to parity-odd violations may be possible in experiments that do not respect parity symmetry. In this work, we focus on resonator experiments, exploring the reasons behind the suppression and the potential for parity-asymmetric resonators to yield higher sensitivities to parity-odd Lorentz violations. Other suggestions for improving sensitivity to parity-odd violations include searches for mixing between the electric-field vector and the magnetic-field pseudovector in electromagnetostatic experiments [13] and the use of interferometers or traveling-wave resonators [14]. General violations of Lorentz invariance are described by a field-theoretic framework known as the Standard-Model Extension (SME) [11,12]. The SME provides a systematic theoretical basis for studies of Lorentz invariance in many systems, including those involving photons [3,13,14,15,16,17,18,19], baryons [20,21], hadrons [22,23], electrons [24,25], muons [26], neutrinos [27], Higgs bosons [28], and gravitation [29,30]. Here we work within the renormalizable, gauge-invariant, CPT-even photon sector of the minimal SME, but many of the symmetry arguments presented here may be extended to more general violations. The structure of this paper is as follows. Section II A gives a review of the theory and notation used in this paper. The general theory behind resonator experiments is described in Sec. II B. The possibility of using resonators with parity-breaking geometries is discussed in Sec. III.
Section IV presents a numerical example of a parity-asymmetric resonant cavity. Some concluding remarks are given in Sec. V.

II. BASIC THEORY

This section provides some basic theory and definitions. Lorentz violation in the photon sector of the minimal SME is reviewed, and the characterization of potential sensitivities in general resonator experiments is given.

A. Framework

The violations of interest are described by a modified Maxwell lagrangian [11],

L = −¼ F_μν F^μν − ¼ (k_F)_κλμν F^κλ F^μν, (1)

where the tensor coefficients (k_F)_κλμν characterize the extent to which Lorentz symmetry is violated. The tensor (k_F)_κλμν is real and obeys the symmetries of the Riemann tensor. In addition, the double trace is usually assumed to be zero since it only contributes to a scaling of the theory. This leaves a total of 19 independent coefficients for Lorentz violation. These coefficients are taken to be constant in the minimal SME, but may depend on spacetime location in more general contexts, including those involving gravitation [12,29,30]. Other forms of Lorentz violation that could also be considered include the CPT-odd k_AF term of the minimal SME [11], and nonrenormalizable terms of the general SME [31,32]. The equations of motion associated with lagrangian (1) provide modified inhomogeneous Maxwell equations. It turns out that these can be cast into the familiar form ∇ × H − ∂₀D = 0, ∇ · D = 0, provided we define [15]

D = (ǫ_DE + κ_DE)E + (ǫ_DB + κ_DB)B, H = (ǫ_HE + κ_HE)E + (ǫ_HB + κ_HB)B. (2)

Here we allow for the possibility of general linear passive magnetoelectric media with constituent matrices ǫ_DE, ǫ_DB, ǫ_HE, and ǫ_HB [33]. For harmonic fields, these matrices are complex and may depend on frequency. Losslessness implies that ǫ_DE and ǫ_HB are hermitian and ǫ_HE = −ǫ†_DB. In many applications these reduce to a simple isotropic permittivity and permeability: ǫ_DE = ǫ, ǫ_HB = µ⁻¹, and ǫ_DB = ǫ_HE = 0. In this language, Lorentz violation in photons is controlled by the real 3 × 3 matrices κ_DE, κ_DB, κ_HE, and κ_HB, which result from a 1+3 decomposition of the (k_F)_κλμν tensor. These matrices obey the same lossless conditions as their ǫ counterparts. Note that κ_DE and κ_HB are parity conserving, while κ_DB = −κᵀ_HE mixes vectors and pseudovectors, introducing parity violations. Also note that it is usually assumed that the ǫ matrices are not significantly altered by Lorentz violation in photons. A subset of the coefficients for Lorentz violation cause vacuum birefringence, which can be tested with extreme precision by polarimetry of light from sources at cosmological distances [15,16,17]. It is therefore useful to decompose the κ matrices into coefficients that cause birefringence and those that do not:

κ̃_e+ = ½(κ_DE + κ_HB), κ̃_e− = ½(κ_DE − κ_HB) − κ̃_tr · 1,
κ̃_o+ = ½(κ_DB + κ_HE), κ̃_o− = ½(κ_DB − κ_HE), κ̃_tr = ⅓ tr(κ_DE).

Here κ̃_e+, κ̃_e−, κ̃_o+, and κ̃_o− are 3 × 3 real traceless matrices. The matrix κ̃_o+ is antisymmetric, and the other three are symmetric. The remaining trace component κ̃_tr represents a single real coefficient and is associated with isotropic violations. The coefficients in κ̃_e−, κ̃_o+, and κ̃_tr mimic a small distortion in the spacetime metric, resulting in a distorted version of the usual electrodynamics. In contrast, the coefficients κ̃_e+ and κ̃_o− break the usual two-fold degeneracy that occurs in electrodynamics, causing light to propagate as the superposition of two modes that differ in speed and polarization. This causes birefringence and results in a change in the net polarization of light as it propagates. Searches for birefringence in light from astrophysical sources have resulted in stringent constraints at the level of 10⁻³² or less on the 10 coefficients in κ̃_e+ and κ̃_o− [16].
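The decomposition above is easy to carry out numerically. The following is a minimal sketch in Python/numpy (the function name and interface are illustrative, not from the paper; the inputs are assumed to already satisfy the SME symmetry and trace conditions):

```python
import numpy as np

def birefringent_decomposition(k_DE, k_DB, k_HE, k_HB):
    """Split the 3x3 kappa matrices into the birefringent and
    nonbirefringent combinations described in the text. Inputs are
    assumed to satisfy the SME symmetry/trace conditions."""
    I3 = np.eye(3)
    k_tr = np.trace(k_DE) / 3.0                   # single isotropic coefficient
    k_e_plus = 0.5 * (k_DE + k_HB)                # symmetric, birefringent
    k_e_minus = 0.5 * (k_DE - k_HB) - k_tr * I3   # symmetric traceless, nonbirefringent
    k_o_plus = 0.5 * (k_DB + k_HE)                # antisymmetric, nonbirefringent
    k_o_minus = 0.5 * (k_DB - k_HE)               # symmetric, birefringent
    return k_e_plus, k_e_minus, k_o_plus, k_o_minus, k_tr
```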
Consequently, resonator experiments normally focus on the 8 coefficients in κ̃_o+ and κ̃_e−, which do not cause birefringence. The isotropic coefficient κ̃_tr is not usually considered because it is doubly suppressed. However, in principle, resonator experiments can test all 19 coefficients.

B. Resonator experiments

Equation (2) suggests that the effects of Lorentz violation are similar to those of linear media. This analogy provides an intuitive understanding of the basic principle behind resonant-cavity experiments. The matter effects from the ǫ matrices generally depend on the orientation of the media within the cavity. However, since the location and orientation of media are typically fixed with respect to the apparatus, the frequency does not change with changes in the orientation or velocity of the resonator. In contrast, the κ matrices can be viewed as constant background fields pervading all of space. The cavities are immersed in these background fields, and changing the orientation or velocity of the cavity with respect to these fields can lead to a change in resonant frequency. To test for these effects, experiments search for small variations in resonant frequencies with changes in orientation or velocity. Rotations of the resonator are normally achieved through either the sidereal motion of the Earth or, more actively, through the use of turntables. Experiments monitor the frequency, searching for rotation-violating Michelson-Morley-type signals. To date, this method has yielded sensitivity to parity-even coefficients only. At present, κ̃_e− is constrained at the level of ∼10⁻¹⁶ by Michelson-Morley techniques [3]. Sensitivity to the parity-odd κ̃_o+ has only been obtained through Kennedy-Thorndike tests, resulting in less stringent constraints. The reason for this stems from the fact that frequency is a parity-even quantity. In parity-symmetric resonators, parity-odd violations can only affect the frequency if they contribute in conjunction with another parity-odd quantity. Boost effects allow for this since they involve a parity-odd velocity. As a result, Kennedy-Thorndike effects are usually suppressed by a factor of β ∼ 10⁻⁴, the typical velocity of the apparatus. Consequently, current constraints on parity-odd κ̃_o+ coefficients are near 10⁻¹². Similar symmetry arguments apply to the isotropic violations associated with κ̃_tr. Isotropic effects are difficult to observe, but κ̃_tr does cause observable boost violations. However, arguments similar to those given above imply that these effects enter suppressed by two factors of velocity, giving a suppression factor of ∼10⁻⁸ in parity-symmetric experiments. While searching for these effects in resonators is feasible, current bounds on this coefficient use other techniques [19]. For resonators, the effects of Lorentz violation are characterized by the leading-order shifts in resonant frequencies, given by the generic expression

δν/ν = (M_DE)^jk (κ_DE)^jk + (M_HB)^jk (κ_HB)^jk + (M_DB)^jk (κ_DB)^jk, (4)

where (M_DE)^jk, (M_HB)^jk, and (M_DB)^jk are experiment-dependent factors. Typically one begins an analysis by calculating these dimensionless factors in a frame that is fixed to the resonator. In this frame, the M matrices are experiment-specific numerical constants. In contrast, the κ matrices are constant only in inertial frames. By convention, a standard Sun-centered inertial frame is used, and all measurements are reported in terms of coefficients in this frame. A coordinate transformation is used to relate the resonator-frame κ matrices to constant Sun-frame matrices.
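The contraction in Eq. (4) and the rotation relating Sun-frame and resonator-frame coefficients can be sketched as follows. This is illustrative Python only: the dictionary keys and helper names are our own, and any overall sign convention is taken to be absorbed into the M factors:

```python
import numpy as np

def fractional_frequency_shift(M, kappa):
    """Contract the experiment-dependent M factors with the lab-frame
    kappa matrices, as in Eq. (4). M and kappa are dicts of 3x3 arrays
    keyed by 'DE', 'HB', 'DB'."""
    return sum(np.sum(M[k] * kappa[k]) for k in ('DE', 'HB', 'DB'))

def Rz(theta):
    """Rotation about the z axis, e.g. theta = omega_sidereal * t."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def sun_to_lab(kappa_sun, R):
    """Neglecting boosts, a Sun-frame kappa matrix seen in the rotating
    lab frame: kappa_lab = R kappa_sun R^T."""
    return R @ kappa_sun @ R.T
```

Scanning the rotation angle over a sidereal day and feeding the rotated matrices through the contraction reproduces the orientation-dependent signal described in the text.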
This transformation introduces the orientation and velocity dependence that constitute the signals for Lorentz violation. Neglecting boost effects, the resonator-frame and Sun-frame κ's are related by a rotation. This implies that the unsuppressed Michelson-Morley-type sensitivity to a particular Sun-frame κ matrix is completely determined by the corresponding resonator-frame M matrix. For example, an experiment with nonzero M_DB would be sensitive to rotational effects associated with a nonzero κ_DB. In contrast, zero M_DB implies that only suppressed boost effects arise from nonzero κ_DB. The M matrices can be calculated perturbatively in terms of the fields in the absence of Lorentz violation, E₀, D₀, B₀, and H₀, normalized by ⟨U⟩, the time-averaged energy stored in the resonator [15]. In what follows, it will be useful to have a birefringent decomposition of these matrices into M_e+, M_e−, M_o+, M_o−, and M_tr, paralleling the decomposition of the κ matrices. These M matrices characterize the dependence on the κ̃ matrices through an expression analogous to Eq. (4). Here we want to explore sensitivity to parity-odd violations, so our primary focus will be on the M_o+ and M_o− matrices. Note that the M matrices are calculated using conventional solutions. Therefore, to determine the effects of Lorentz violation on resonator frequencies, we need only to solve for the fields in the Lorentz-invariant case. Consequently, we drop the subscript 0 on all fields in what follows, with the understanding that we are working within the usual Lorentz-invariant electrodynamics, and all fields are conventional.

III. PARITY-BREAKING RESONATORS

Mathematically, the reason parity-odd Lorentz violations do not typically contribute at unsuppressed levels is that the solutions can be split into solutions of definite parity. In parity-symmetric cavities with parity-conserving media, the boundary conditions and the Maxwell equations normally admit conventional nondegenerate resonances of definite parity. The result is a zero M_DB, which implies no sensitivity to parity-odd Lorentz violations. So, in order to access parity-odd violations, we should construct resonators that admit solutions of indefinite parity. Resonators could be constructed that break parity symmetry by using asymmetric geometries or by introducing parity-breaking media. Below we demonstrate this with an explicit example, but we first show that, in either case, one cannot achieve unsuppressed sensitivity to certain combinations of Lorentz violations using a single lossless resonator. We begin by noting that the average flow of electromagnetic energy within a volume V can be split into terms representing the energy flowing through the surface of V and the energy from sources and sinks within V. The explicit expression, in terms of the harmonic Poynting vector S = ½ Re E* × H, is given by the integral identity

∫_V S_j d³x = ∮_∂V x_j S · da − ∫_V x_j (∇ · S) d³x. (7)

For harmonic fields, the source term vanishes since ∇ · S = 0 in regions without current [34]. This is simply the statement that there are no sources or sinks of energy within the resonator. This leaves the surface term and the energy flowing through ∂V. This term vanishes immediately if the fields are sufficiently confined to the interior of V. It also vanishes if we impose perfect-conductor boundary conditions at ∂V. In this case, E is perpendicular to the surface ∂V, so S is parallel to ∂V. This implies that, on average, no energy is exchanged with any portion of the conductor, and the average-energy-flow lines are confined to the volume of the cavity. Equation (7) then implies that ∫_V S d³x = 0.
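The vanishing of the net energy flow is straightforward to check numerically for a sampled cavity mode. A minimal sketch, assuming complex harmonic field arrays on a uniform grid (the function names are illustrative):

```python
import numpy as np

def average_poynting(E, H):
    """Time-averaged Poynting vector S = (1/2) Re(E* x H) for complex
    harmonic field arrays of shape (..., 3)."""
    return 0.5 * np.real(np.cross(np.conj(E), H))

def net_energy_flow(E, H, dV):
    """Volume integral of S over sampled interior points; for a lossless
    cavity mode this should vanish up to discretization error,
    illustrating the consequence of Eq. (7)."""
    S = average_poynting(E, H)
    return S.reshape(-1, 3).sum(axis=0) * dV
```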
It follows that a certain 3 × 3 antisymmetric combination of the M matrices must vanish. This antisymmetric matrix equation places three real constraints on the M matrices. This implies that, for a given lossless resonator, regardless of geometry, there are at least three combinations of coefficients for Lorentz violation that are inaccessible at unsuppressed levels. As an example, consider a cavity filled with a frequency-independent medium. In this case, the ǫ constituent matrices are real, and the above discussion implies an analogous matrix constraint, assuming that the constituent matrices ǫ_HB and ǫ_DB are uniform throughout the volume V. This matrix equation places three constraints on the M matrices, implying that three combinations of κ matrices are inaccessible. A particularly relevant simple case is a cavity containing a simple magnetic medium with ǫ_DB = 0 and ǫ_HB = µ⁻¹, where µ is a homogeneous isotropic permeability. The constraint then implies that the antisymmetric component of M_DB vanishes. Consequently, M_o+ is zero and κ̃_o+ no longer contributes to the fractional frequency shift at unsuppressed levels. The conclusion is that resonators incorporating simple isotropic magnetic media have no sensitivity to nonbirefringent parity-odd violations. Sensitivity to the three components of κ̃_o+ can only be obtained by the introduction of more complicated magnetic materials. This is of particular interest because the 3 coefficients in κ̃_o+ are the least constrained of the 18 anisotropic coefficients for Lorentz violation.

IV. NUMERICAL EXAMPLE

In this section, we give a numerical example of a resonant cavity with parity-breaking geometry. A numerical method for solving the Maxwell equations in curvilinear coordinates is given and used to illustrate some of the conclusions of the previous section.

A. Geometry

One way to ensure a breakdown of parity symmetry is to introduce a net chirality in the cavity geometry. We do this here by considering a helical cavity, as illustrated in Fig. 1. While we will assume that the cavity is empty, the technique described here is readily adapted to cases involving linear media. The geometry of this cavity can be characterized using helical coordinates x^a, a = 1, 2, 3, related to standard cartesian coordinates x^j, j = x, y, z, through

x¹ = x^x cos(αx^z) − x^y sin(αx^z),
x² = x^x sin(αx^z) + x^y cos(αx^z),
x³ = x^z.

Here we consider cavities with perfectly conducting boundaries at x^a = ±X^a, where the X^a are positive constants that specify the cross-sectional and length dimensions of the cavity. The parameter α determines the amount of rotation in the cavity about the x³ central axis. For example, in Sec. IV C we take X¹ = 1/2, X² = 1, X³ = 1, and α = 45°. This gives a cavity with a rectangular cross section and a quarter left-handed turn from end to end, as in Fig. 1. In curvilinear coordinates, the conventional Maxwell equations take the form

∂₀B^a = −ǫ^abc ∇_b E_c, ∂₀E^a = ǫ^abc ∇_b B_c,

where E^a and B^a are contravariant field components, and E_a = g_ab E^b and B_a = g_ab B^b are covariant components. Here, g_ab is the metric in curvilinear coordinates, and ∇_a is the associated covariant derivative. Note that the determinant g = det g_ab = 1 in the helical coordinates used here. One advantage of using curvilinear coordinates is that the boundary conditions become relatively simple. Perfect-conductor boundary conditions imply that E is perpendicular and B parallel to the conducting surfaces of the cavity. As an example, consider a conducting boundary whose surface is represented by constant x¹.
The contravariant basis vector e¹ is perpendicular to this surface, and the covariant vectors e₂ and e₃ are parallel to the surface. So, we must have E = E₁e¹ and B = B²e₂ + B³e₃ at this boundary. In our case, this implies that E₁,₂ vanish at the ends (x³ = ±X³), E₂,₃ = 0 at x¹ = ±X¹, and E₃,₁ = 0 at x² = ±X². For B, we get vanishing B³ on the ends, B¹ = 0 at x¹ = ±X¹, and B² = 0 at x² = ±X².

B. Discrete solutions

In order to show that the chiral geometry described above does in fact produce sensitivity to parity-odd Lorentz violations, we perform a numerical analysis of its lowest-frequency resonances. Finite-difference time-domain (FDTD) methods [35] provide a straightforward procedure for estimating the M matrices over a range of frequencies. In this section, we develop an FDTD procedure for curvilinear coordinates. We begin by defining discrete time by taking t_N = δt · N, where δt is a small time interval and N is an integer. The discrete fields are then taken as E^N = E(t_N) and B^N = B(t_N − δt/2). This leads to the discrete Maxwell equations

(B^a)^{N+1} = (B^a)^N − δt ǫ^abc ∂_b (E_c)^N, (12)
(E^a)^{N+1} = (E^a)^N + δt ǫ^abc ∂_b (B_c)^{N+1}, (13)

where we have assumed g = 1. This result allows us to "leapfrog" through time by iteratively applying Eq. (12) followed by (13). For the spatial dimensions, we construct a grid in helical coordinates, x^a_{JKL} = (−X¹ + Jδx¹, −X² + Kδx², −X³ + Lδx³), where the δx^a are small spatial intervals, J, K, L are integers, and −X^a represent the low edges of the cavity. We then construct a pair of lattices containing field values (E_a)^N_{JKL} and (B_a)^N_{JKL} defined at these spatial points. In order to apply Eqs. (12) and (13), we need estimates for the spatial derivatives. Whenever possible, we use the symmetric forms

∂₁f|_{JKL} ≈ (f_{J+1,K,L} − f_{J−1,K,L}) / (2δx¹), (15)

and similarly for ∂₂ and ∂₃. These can be used at each of the interior nodes, but boundary nodes must be treated more carefully since the derivatives (15) are not always defined at these points. Also, some care must be taken to ensure that the boundary conditions are satisfied at these nodes. The method proceeds by stepping the B field forward in time using Eqs. (12) and (15) for interior nodes. Next we propagate the boundary nodes. Here we illustrate the procedure for boundary nodes on the J = 0, x¹ = −X¹ surface. The generalization to other boundary surfaces is straightforward. For J = 0 boundary nodes, the boundary conditions imply vanishing E₂, E₃, and B¹. To propagate B at one of these nodes using Eq. (12), we need the partial derivatives ∂_a E_b for a ≠ b. Since E₂,₃ = 0 on this surface, the derivatives ∂₂E₃ and ∂₃E₂ vanish. This implies that B¹ remains zero provided that it vanished to begin with, as required by the boundary conditions. The derivatives ∂₂E₁ and ∂₃E₁ can be estimated using the symmetric form (15). In contrast, Eq. (15) fails for ∂₁E₂ and ∂₁E₃, since it would require field values at nodes outside of the cavity. So, in these cases we use a one-sided derivative, taking advantage of the boundary conditions E₂,₃ = 0 at J = 0. We now have estimates for all six spatial derivatives ∂_a E_b, a ≠ b, at these nodes and can use Eq. (12) to propagate B at this boundary. The other boundary surfaces are then propagated one step in time using similar methods. Next, we propagate E at interior points using Eqs. (13) and (15). Again, we will illustrate the procedure for boundary nodes by considering the x¹ = −X¹, J = 0 surface. Since E₂ and E₃ vanish on this surface, we only need to calculate the change in E₁. However, since Eq. (13) propagates contravariant components, some care is needed in developing a procedure that updates E₁ but leaves E₂ and E₃ unaltered. We do this by noting that, since E₂,₃ = 0 on this surface, we can write E¹ = g^{1a}E_a = g^{11}E₁. Using this result, we can step the E₁ field in time at J = 0 nodes with the relation

(E₁)^{N+1} = (E₁)^N + (δt/g^{11}) ǫ^{1bc} ∂_b (B_c)^{N+1}.

Here, Eq. (15) is used to estimate the spatial derivatives without difficulty. Again, the other boundary surfaces are propagated using the generalization of this method. By repeating the above procedure, we can propagate the B and E fields in time indefinitely. Note that at each time step, the E (B) fields at a given node depend on the prior E (B) fields at that node and the prior B (E) fields at adjacent nodes. This implies that E and B need not be defined at every node, and we may adopt a lattice of fields in which E and B are only defined at alternate nodes. For example, in this work we take E defined at nodes with J + K + L = even, and B defined at nodes with J + K + L = odd, forming two interlaced E^N_{JKL} and B^N_{JKL} lattices. There is nothing preventing us from defining E and B at each node but, in doing so, the calculation would essentially decouple into the propagation of two independent sets of fields like the ones used here, doubling the amount of information that is necessary. A similar observation in the cartesian case led to the "Yee cell", in which different field components are defined at different spatial points [35]. In our case, the mixing of components resulting from the raising and lowering of indices makes such a construction less convenient. To initialize the calculation, we must first seed the cavity with divergenceless fields satisfying the boundary conditions. A convenient set of initial fields is obtained by taking the expressions for the usual transverse-magnetic (TM) and transverse-electric (TE) B fields associated with a rectangular cavity with α = 0 and reinterpreting the cartesian coordinates as their helical counterparts. For simplicity, we set the initial E fields to zero. The resulting initial fields obey the correct boundary conditions and can be shown to be divergenceless. Once the fields are set to these valid initial values, we can then propagate the fields in time using the above procedure.

C. Results

We next apply the method described in the previous section to the cavity shown in Fig. 1. The dimensions of the cavity, in arbitrary spacetime units, are taken as X¹ = 1/2, X² = 1, and X³ = 1. Taking α = 45° gives a quarter left-handed twist as shown in the figure. Applying the initialization method described above, we set initial-field values using the conventional expressions for the magnetic fields associated with the TM₁₁₀ mode for the analogous rectangular cavity with α = 0. We use a spatial lattice 50 nodes wide in each of the three helical coordinates. We take a total time interval of 100 and calculate a total of 20,000 time steps. In order to reduce the amount of data saved to disk, we only record the field values for every fiftieth step. A fast Fourier transform is performed on the saved field values at each spatial node, yielding frequency-domain data. Using these, we determine the energy U and the M matrices, in cartesian coordinates, as a function of frequency. The results near the two lowest resonances are shown in Fig. 2. As expected, this parity-breaking configuration gives rise to a nonzero M_o− at both of the resonances shown in Fig. 2, demonstrating sensitivity to the parity-odd violations associated with κ̃_o−. Furthermore, we find that M_o+ = 0 to within the errors of the calculation. This confirms the predictions of Sec.
III, showing that while sensitivities to parity-odd Lorentz violations are possible, sensitivity to the nonbirefringent parity-odd violations cannot be achieved in resonators with simple isotropic magnetic media. We also note that |M_o−| appears to be significantly smaller in the lower-frequency resonance, suggesting that sensitivities to parity-odd Lorentz violations are likely to be strongly dependent on the resonant mode excited in the cavity. For both of the resonances in Fig. 2, we find nonzero M_e+ and M_e− matrices, demonstrating sensitivity to the violations associated with κ̃_e+ and κ̃_e−, as in parity-even cavities. We note that the sensitivities to parity-odd violations in this example are smaller by roughly an order of magnitude relative to those for parity-even violations. This geometric suppression shows that even in resonators with significant parity asymmetries, sensitivity to parity-odd violations may be small compared to that for parity-even violations. Nevertheless, this demonstrates the potential for at least a thousand-fold improvement in sensitivity to parity-odd Lorentz violations, assuming cavities of this type could be constructed and achieve stabilities comparable to their symmetric counterparts.

V. SUMMARY AND OUTLOOK

At present, resonant-cavity experiments have achieved sensitivities near 10⁻¹⁶ to parity-even coefficients for Lorentz violation [3]. The parity-odd coefficients enter through suppressed boost effects, resulting in constraints that are larger by approximately four orders of magnitude. Here we have shown that resonant cavities that do not respect parity symmetry can provide unsuppressed sensitivity to parity-odd Lorentz violations. Parity asymmetries can be introduced through the geometry of the cavity or by incorporating parity-breaking media. In principle, the parity-odd coefficients κ̃_o+ and κ̃_o− can cause observable violations of rotation symmetry in parity-breaking cavities, leading to improved sensitivities. In particular, this idea could be used to place significantly tighter constraints on the nonbirefringent κ̃_o+ coefficients. However, some thought must go into the design of a resonator to ensure sensitivity to κ̃_o+. In Sec. III, we have shown that a given resonator will be insensitive to certain combinations of coefficients. More specifically, we have shown that sensitivity to the three coefficients in κ̃_o+ is not possible in cavities incorporating only simple isotropic magnetic media. Better sensitivities to κ̃_o+ could be achieved in resonators utilizing a combination of anisotropic magnetic media, with nondegenerate ǫ_HB, in conjunction with asymmetric geometries, or by using parity-violating media with nonzero ǫ_DB. Chiral media [36] provide another interesting possibility. Assuming stabilities comparable to those in current experiments, parity-asymmetric resonators have the potential to improve the constraints on κ̃_o+ coefficients by four orders of magnitude by circumventing the boost suppression associated with Kennedy-Thorndike tests. Resonators of this type could also be used to place improved laboratory bounds on the five parity-odd coefficients in κ̃_o−. While cavity tests are not likely to achieve the same kind of sensitivities that are obtained in searches for birefringence, these experiments could provide a valuable laboratory-based check on astrophysical bounds. As illustrated in Sec. IV, sensitivities to κ̃_o− can be improved simply by using parity-breaking geometries.
Finding geometries and media that maximize sensitivities to parity-odd effects remains an interesting open problem. The construction of high-Q asymmetric cavities may also pose a technological challenge. However, the development of parity-breaking resonators would provide another avenue for high-precision tests of Lorentz invariance that would complement the current parity-symmetric experiments. They have the potential to yield significant improvements in sensitivities to parity-odd Lorentz violation and could rival the best tests in any sector.
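To make the leapfrog scheme of Sec. IV B concrete, the sketch below implements the simplest cartesian case (α = 0, g_ab = δ_ab) of the updates in Eqs. (12) and (13); the helical case adds the metric factors and boundary treatment described in the text. All names are our own, and the boundary layer is simply left untouched here rather than propagated with one-sided derivatives:

```python
import numpy as np

def partial(A, axis, d):
    """Symmetric finite difference, as in Eq. (15), on interior nodes of a
    scalar array A; the boundary layer is left at zero here, standing in
    for the one-sided treatment described in the text."""
    out = np.zeros_like(A)
    c, p, m = [slice(None)] * 3, [slice(None)] * 3, [slice(None)] * 3
    c[axis], p[axis], m[axis] = slice(1, -1), slice(2, None), slice(None, -2)
    out[tuple(c)] = (A[tuple(p)] - A[tuple(m)]) / (2.0 * d)
    return out

def curl(F, d):
    """Curl of a field F with shape (3, Nx, Ny, Nz) on a uniform grid."""
    Fx, Fy, Fz = F
    return np.stack([
        partial(Fz, 1, d) - partial(Fy, 2, d),
        partial(Fx, 2, d) - partial(Fz, 0, d),
        partial(Fy, 0, d) - partial(Fx, 1, d),
    ])

def leapfrog_step(E, B, dt, d):
    """One cycle of Eqs. (12) and (13) in vacuum cartesian coordinates:
    B (stored at half-integer time steps) is advanced with the curl of E,
    then E is advanced with the curl of the updated B."""
    B = B - dt * curl(E, d)
    E = E + dt * curl(B, d)
    return E, B
```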
Extending LHC Coverage to Light Pseudoscalar Mediators and Coy Dark Sectors

Many dark matter models involving weakly interacting massive particles (WIMPs) feature new, relatively light pseudoscalars that mediate dark matter pair annihilation into Standard Model fermions. In particular, simple models of this type can explain the gamma ray excess originating in the Galactic Center as observed by the Fermi Large Area Telescope. In many cases the pseudoscalar's branching ratio into WIMPs is suppressed, making these states challenging to detect at colliders through standard dark matter searches. Here, we study the prospects for observing these light mediator states at the LHC without exploiting missing energy techniques. While existing searches effectively probe pseudoscalars with masses between 5 - 14 GeV and above 90 GeV, the LHC reach can be extended to cover much of the interesting parameter space in the intermediate 20 - 80 GeV mass range in which the mediator can have appreciable Yukawa-like couplings to Standard Model fermions but would have escaped detection by LEP and other experiments. Models explaining the Galactic Center excess via a light pseudoscalar mediator can give rise to a promising signal in this regime through the associated production of the mediator with bottom quarks while satisfying all other existing constraints. We perform an analysis of the backgrounds and trigger efficiencies, detailing the cuts that can be used to extract the signal. A significant portion of the otherwise unconstrained parameter space of these models can be conclusively tested at the 13 TeV LHC with 100 fb$^{-1}$, and we encourage the ATLAS and CMS collaborations to extend their existing searches to this mass range.

Introduction

Light, weakly-interacting massive particles (WIMPs) are a particularly compelling class of particle dark matter (DM) candidates. The case for WIMPs with masses close to the electroweak scale has been strengthened by recent observations of an excess in gamma rays originating from the Galactic Center (GC) by the Fermi Large Area Telescope [1][2][3][4][5][6][7][8][9][10]. This signal has garnered much recent attention, since its morphology closely resembles that expected from dark matter pair annihilation into bottom quarks [9,11], though other final states can also provide a good fit when systematics are properly taken into account [12]. Moreover, the signal suggests a WIMP annihilation rate close to that required in the early universe for a thermal relic to saturate the observed dark matter density [9], and the excess is difficult to explain in terms of astrophysical backgrounds alone [9,13]. This has led many to believe that the Fermi GC signal may represent the first (indirect) observation of dark matter to date. A common and well-motivated class of models that can explain the observed excess features dark matter annihilating through a light pseudoscalar with Yukawa-like couplings to Standard Model fermions [14][15][16][17]. For example, these states appear generically in two Higgs doublet models and their extensions [18], as well as arising as pseudo-Nambu-Goldstone bosons associated with the spontaneous breaking of a new global symmetry [19][20][21]. Their couplings to Standard Model fermions can arise at tree- or loop-level (see e.g. Ref. [22] for an example with heavy vector-like fermions).
Since they couple to the visible sector, such pseudoscalars can constitute a portal to the dark sector, mediating the annihilation of dark matter (DM) into SM final states [14,[23][24][25][26][27][28]]. Understanding how dark matter interacts with the visible sector is a crucial part of the current dark matter program. Direct detection experiments [29][30][31][32] and the observation of a Standard Model-like 125 GeV Higgs with a small invisible decay width [33,34] have severely constrained Z- and Higgs-boson-mediated scenarios [21]. As a result, much recent work has been devoted to studying various possibilities for new mediator particles coupling weakly to the Standard Model degrees of freedom. Of these possibilities, pseudoscalars stand apart for several reasons. For one, they do not predict sizable spin-independent direct detection signals, in contrast with scalar and vector mediators. Furthermore, current collider constraints on new pseudoscalar particles are generally weaker than those on new scalar and vector degrees of freedom [35,36]. If the GC excess is indeed a signal of dark matter annihilation, and if the annihilation is mediated by a new pseudoscalar particle, it is both important and timely to consider how one might probe such scenarios at colliders. Much progress has already been made on this front. Based on the topology and kinematics of the dominant dark matter annihilation channel, scenarios explaining the GC excess with pseudoscalar mediators can be grouped into roughly three types, each with distinct prospects for collider discovery:

1. Models which rely on dark matter annihilating into on-shell mediators [37][38][39][40][41][42]. In this case, the annihilation rate into SM fermions factorizes and the coupling of the pseudoscalar mediator to SM degrees of freedom can be very small. Prospects for direct collider searches are often dim in this case, but there may be other handles on these models provided by direct detection, as well as fixed target and other precision experiments [37][38][39][40].

2. Scenarios featuring a pseudoscalar mediator with a significant invisible branching fraction [22,25,26,[43][44][45][46]]. This results in distinctive missing energy signatures at the LHC which can be effectively probed by bb+MET, mono-jet, and other existing and planned LHC searches, as studied in detail in e.g. Refs. [22,25,26,43].

3. Scenarios in which the pseudoscalar mediator is expected to have a small branching fraction into dark matter particles [14-17, 27, 28]. This can occur when the coupling between the dark matter and the mediator is small relative to the coupling of the mediator to Standard Model degrees of freedom, or when on-shell decays of the mediator into WIMP pairs are not kinematically allowed. Such scenarios can be more difficult to probe directly at the LHC than case 2, since they lack a distinctive missing energy signature [14]. In concrete models of this type, rare Higgs decays can be constraining; however, the resulting limits can be straightforwardly avoided in many instances, as can limits from LEP, the Tevatron, and B-physics experiments (see e.g. Refs. [15,16]). While a signal would arise in indirect detection experiments, it has been shown that the dark matter and mediator in this case might avoid detection elsewhere [14]. This rather grim scenario is appropriately known as "Coy dark matter".

In this study we will focus our attention on case 3 above, as it is a generic yet largely unconstrained possibility, as we discuss below.
We will restrict our attention to light mediators, with masses below 90 GeV, as pseudoscalars with larger masses are already probed by existing LHC Higgs searches. Furthermore, light pseudoscalars are very attractive from the standpoint of the Galactic Center excess, since they can provide an efficient resonant annihilation channel for the light dark matter masses suggested by the signal and, in some cases, allow for a p-wave annihilation channel into pairs of mediators to drive down the relic abundance without violating constraints from dwarf spheroidal observations [21]. In this situation, on-shell decays of the pseudoscalar to pairs of dark matter particles are suppressed and WIMP production at the LHC through the mediator will be negligible. Our strategy will be to extend LHC coverage to such scenarios by probing the light pseudoscalar directly through its interactions with the Standard Model degrees of freedom. The discovery of such a new particle would constitute a great step forward in our understanding of the dark sector and open up many possibilities for further study, including more dedicated experiments to probe its coupling to dark matter directly. As we discuss below, the GC excess can suggest an appreciable mediator coupling to down-type fermions. Consequently, we focus on the associated production of the mediator with a b-jet or bb pair. We will assume that the mediator couples to Standard Model fermions with strength proportional to their mass, as in models with minimal flavor violation (MFV). We find that, for a significant range of mediator masses and couplings consistent with the GC excess, a promising signal is predicted in the 1-2 b+a production modes, with a → τ⁺τ⁻. We also explore the possibility of a → µ⁺µ⁻ decays, which is more promising for low masses and likely features lower systematic uncertainties. Existing searches for pseudoscalars motivated by the Minimal Supersymmetric Standard Model (MSSM) and Next-to-MSSM (NMSSM) currently probe mediator masses down to 90 GeV and in the low-mass region between 5-14 GeV. However, we find that coverage can be extended to pseudoscalars in the intermediate mass range (between 20-80 GeV), which are promising for explaining the GC excess and would have evaded detection by LEP. We encourage both ATLAS and CMS to expand their analyses to include this region. In this study, we detail the cuts and kinematic variables that can be used to reduce the large backgrounds and show the extent to which the parameter space in these models can be conclusively tested at the 13 TeV LHC with 100 fb⁻¹ of integrated luminosity. We demonstrate this using a simplified model and show the application of our results to the otherwise unconstrained parameter space of the NMSSM that is consistent with the excess (the NMSSM can be mapped directly onto our model). It is important to emphasize that, although we will focus on pseudoscalars mediating dark matter annihilation consistent with the GC signal, our study can be applied much more generally to any model featuring light mediators with significant coupling to isospin-down Standard Model fermions. Since we assume that the invisible branching fraction of the pseudoscalar is small, our analysis of the predicted collider signatures does not depend on the pseudoscalar's coupling to dark matter, nor on the nature of the dark matter itself. This study is organized as follows: in Sec.
2, we discuss the simplified model used for our analysis, its relationship to the GC excess, and the existing constraints on light pseudoscalars. The following section (Sec. 3) details the collider signatures of the new mediator, as well as the backgrounds and trigger efficiencies relevant for our analysis. Our results for the LHC discovery potential of light pseudoscalar mediators are presented and discussed in Sec. 4, with further details of the analysis contained in Appendices A, B, and C. We then apply these results to the NMSSM in Sec. 5, showing that the searches we propose here can cover much of the parameter space consistent with the excess that is currently unconstrained by other experimental searches. Finally, we summarize and conclude in Sec. 6.

A Simplified Model

For our analysis, we follow Ref. [14] and consider a light pseudoscalar that couples to Dirac fermion dark matter, χ, and to Standard Model fermions, with an effective Lagrangian (Eq. 2.1) containing interaction terms of the form −i g_DM a χ̄γ⁵χ − i a Σ_i g_i y_i f̄_i γ⁵ f_i, where y_i = m_i/v are the SM Yukawa couplings, with v = 174 GeV. We have assumed that the pseudoscalar a couples to the SM fermions with strength proportional to their masses. The pseudoscalar couplings to up- and down-type fermions are further assumed to depend on the overall scaling factors g_u,d, which we take to be the same for all down- or up-type fermions. These factors appear e.g. in Two Higgs Doublet Models (2HDMs) and their extensions; in a Type II 2HDM, g_d = 1/g_u = tan β, where tan β is the ratio between the two SU(2) Higgs vacuum expectation values. With the addition of a singlet that mixes with the SU(2) doublets, the effective couplings become g_u = cot β cos θ and g_d = tan β cos θ, where θ is the mixing angle between the SU(2) and singlet pseudoscalars. Note that Ref. [14] considered the case in which g_d = g_u = 1. This situation is very difficult to probe at colliders. Explaining the Fermi GC signal with g_u = g_d = 1 can require rather large values of g_DM, unless the annihilation is quite close to the s-channel resonance. Often in ultra-violet (UV) complete models, a sizable value for g_DM occurs together with low mass WIMPs in parametric regions featuring a large invisible branching fraction of the Standard Model-like Higgs [23], which is not observed. On the other hand, for pseudoscalar-WIMP couplings that are not too large, the Galactic Center excess suggests enhanced couplings to down-type fermions, as we show below. This situation is much more promising from the standpoint of LHC searches and, in some cases, is not probed by existing searches.

Explaining the Excess

Given the Lagrangian in Eq. 2.1, the zero-temperature s-channel annihilation rate (Eq. 2.2) for dark matter annihilating through a pseudoscalar into an SM fermion pair f_i f̄_i is proportional to N_{C,i} g_DM² g_i² y_i² m_χ², with the resonant denominator [(4m_χ² − m_a²)² + m_a²Γ_a²] characteristic of s-channel exchange, where m_i, N_{C,i} are the mass and color factor of the final-state fermions, g_i is either g_u or g_d depending on the fermion, and Γ_a is the total width of the mediator. Throughout this study we will assume that the dominant DM annihilation channel is χχ → bb. This mode has received the most attention in explaining the GC excess, and although a recent analysis has pointed out that other channels can also explain the signal [12], annihilation into a bb pair still provides a very good fit to the data. There have been several recent developments in determining which annihilation channels, WIMP masses, and annihilation rates best fit the Fermi data.
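The coupling relations quoted above are simple enough to encode directly. The following sketch (our own illustrative helper, not from Ref. [14]) evaluates g_u and g_d in the Type II 2HDM and its singlet-extended variant:

```python
import math

def effective_couplings(tan_beta, cos_theta=1.0):
    """Scaling factors for the pseudoscalar's Yukawa-like couplings.
    cos_theta = 1 recovers the Type II 2HDM relation g_d = 1/g_u = tan(beta);
    a singlet admixture multiplies both factors by cos(theta)."""
    g_u = cos_theta / tan_beta
    g_d = cos_theta * tan_beta
    return g_u, g_d

# Example: tan(beta) = 20 with a sizable singlet component, cos(theta) = 0.3,
# still gives an appreciable down-type coupling g_d = 6.
print(effective_couplings(20.0, 0.3))  # -> (0.015, 6.0)
```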
For the bb channel, most previous analyses had suggested that m_χ should fall roughly in the range 35 GeV ≲ m_χ ≲ 50 GeV with annihilation rate ⟨σv⟩ ≈ 2-6 × 10⁻²⁶ cm³/s [9,14] (the required annihilation rate for self-conjugate dark matter would be reduced by a factor of two relative to these values). However, there are large systematic uncertainties associated with the propagation of gamma rays in the Galactic Center that must be taken into account. The impact of these systematics was first studied in Ref. [13], and subsequently by Ref. [12], which performed a detailed analysis incorporating several different models for the diffuse gamma ray background supplied by the Fermi collaboration. The end result is that the range of WIMP masses and annihilation modes statistically consistent with the excess increased significantly once these systematic uncertainties were taken into account. In particular, the range for WIMP masses annihilating primarily into bb was extended to [12] 35 GeV ≲ m_χ ≲ 165 GeV, χχ → bb, for specific values of the annihilation rate. Across this mass window, the signal from the Galactic Center suggests a clear range of values for the coupling constants of the mediator to SM states for a given g_DM in this setup. Since the annihilation cross-section and pseudoscalar width are dominated by down-type interactions (for BR(a → χχ) ≪ 1), the only significant parametric dependence is on g_DM, g_d, m_χ and m_a. The down-type scale factor g_d required to explain the GC excess for m_χ = 45 GeV and m_χ = 145 GeV (close to the best fit mass for the Fermi model (d) from Ref. [12]) is shown by the bands on the left and right hand sides of Fig. 1, respectively, as a function of m_a for various values of g_DM. (In Fig. 1, the shaded regions are compatible with the signal, with the red (upper) regions in each band excluded by the recent dwarf spheroidal constraints from Fermi [48], and the yellow (lower) regions corresponding to an annihilation rate compatible with both the excess and the constraints; the upper bound on g_d from existing LHC searches for a → τ⁺τ⁻ is shown in blue.) The range of annihilation rates allowed in the low mass case (LHS) is taken from Ref. [13], while the allowed values in the high mass case (RHS) are taken from Ref. [12]. In both cases the local dark matter density is assumed to be ρ = 0.4 GeV/cm³. The preferred regions depend on J ≡ J̄/J̄₀, the ratio of the angularly-averaged line-of-sight integral of the squared dark matter density ρ_DM(r) to the canonical value J̄₀. For the low mass (m_χ = 45 GeV) case we take J = 1, while for m_χ = 145 GeV we take J = 0.3, which is within the systematic uncertainties discussed in Ref. [12]. The latter choice allows for an annihilation rate close to the canonical thermal freeze-out value (⟨σv⟩ ≈ 4.4 × 10⁻²⁶ cm³/s for Dirac fermion dark matter [49]) and consistent with the Fermi signal while evading the dwarf spheroidal constraints, discussed below. For reasonable choices of g_DM, the value of g_d must be quite large to account for the GC excess, unless the masses are tuned to fall very close to the resonance. In addition, reducing the χ abundance has the effect of increasing the preferred value of g_d for a given g_DM. The regions of parameter space with large g_d, in many cases preferred by the signal, predict a significant mediator production cross section at the LHC in association with bottom quarks. Also, the pseudoscalar's invisible branching fraction is small across the entire parameter space, except for low g_d and large g_DM.
For m_a < 2m_χ an on-shell pseudoscalar cannot decay to a pair of WIMPs, while for m_a > 2m_χ we find that BR(a → χχ) > 0.1 only for g_d ≲ 4 with g_DM = 0.1 in the m_χ = 45 GeV case, since everywhere else g_d (g_DM) is too large (small) for this decay to contribute appreciably to the total width. It is important to note that the Fermi collaboration recently released updated limits on the dark matter annihilation rate from observations of dwarf spheroidal (dSph) galaxies [48]. The resulting constraints are in mild tension with a dark matter explanation of the excess; however, there is still a large amount of parameter space capable of explaining the GC excess that survives this constraint. This is shown in Fig. 1, in which the red bands show the impact of the dwarf spheroidal limits (points in these bands could potentially explain the excess but are excluded at 95% C.L.). Meanwhile the yellow bands show points consistent with both the GC excess and dSph limits. Note that in the high mass case all points consistent with the excess are compatible with the dSph constraints for our particular choice of J. One concern may be that, since the recent dSph constraints disfavor larger annihilation rates, some points with light WIMP masses consistent with the GC excess and dSph limits will tend to produce too large a relic abundance. The dark matter relic density is set by the annihilation rate at finite temperature, which can differ from that at T = 0. In particular, for s-channel annihilation through a pseudoscalar with m_a < 2m_χ, the annihilation rate at T = 0 is greater than that at freeze-out (T_f.o. ∼ m_χ/20). The upper limit on the annihilation rate, set by the Fermi dSph results, is below the required annihilation rate at freeze-out for m_χ ≲ 100 GeV, naively disfavoring this region. However, there are several well-known and straightforward exceptions to this reasoning [53]. For example, p-wave processes with contributions to the total annihilation rate scaling as v²_DM (with v_DM the relative dark matter velocity) will become important at freeze-out, increasing the annihilation rate at T_f.o. but not altering the T = 0 prediction. An example of such a process generically expected along with light mediators is χχ → aa (this is another virtue of the light pseudoscalar scenario). Other scenarios allowing for an enhanced annihilation rate at T_f.o. relative to that at late times include those with additional co-annihilation channels or featuring m_a > 2m_χ so that ⟨σv⟩_{T=0} < ⟨σv⟩_{T=T_f.o.}. Thus, although in some cases the dSph limits may result in requiring some additional tuning or model-building to achieve the correct DM relic abundance, dark matter explanations of the excess, particularly those involving s-channel annihilation through a relatively light pseudoscalar, are alive and well. Note that this tension largely disappears above m_χ ∼ 100 GeV, since the dSph upper bound is above the canonical WIMP cross-section in this region (although one should verify that contributions to the annihilation rate at freeze-out from the other states in the theory do not over-dilute the relic density). In summary, dark matter annihilating through a relatively light pseudoscalar can explain the Galactic Center excess and be compatible with the recent dwarf spheroidal limits from Fermi. In almost all cases discussed and shown above we expect BR(a → χχ) ≪ 1, either because m_a < 2m_χ, g_DM ≪ 1, or both. This implies a low likelihood of observing the pseudoscalar through missing energy signals at the LHC.
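The smallness of BR(a → χχ) over most of the parameter space can be illustrated with the standard tree-level pseudoscalar partial widths, Γ(a → f f̄) = N_C g_f² y_f² m_a β_f / 8π with β_f = (1 − 4m_f²/m_a²)^{1/2}, and the analogous expression for the χχ mode. A rough sketch follows, keeping only the dominant bb visible mode; the numerical values and names are illustrative, and the normalization is the generic tree-level result rather than a formula quoted from this paper:

```python
import math

V_HIGGS = 174.0  # vev normalization used for y_i = m_i / v (GeV)

def width_to_fermions(g_f, m_f, m_a, n_c):
    """Tree-level width for a -> f fbar with coupling -i g_f y_f a fbar g5 f:
    Gamma = N_c g_f^2 y_f^2 m_a beta / (8 pi), with a single power of the
    velocity beta, as appropriate for a pseudoscalar."""
    if m_a <= 2.0 * m_f:
        return 0.0
    y_f = m_f / V_HIGGS
    beta = math.sqrt(1.0 - 4.0 * m_f**2 / m_a**2)
    return n_c * (g_f * y_f)**2 * m_a * beta / (8.0 * math.pi)

def br_invisible(g_dm, m_chi, g_d, m_a, m_b=4.18):
    """Rough BR(a -> chi chi), keeping only the dominant b bbar visible mode."""
    if m_a <= 2.0 * m_chi:
        return 0.0
    beta_chi = math.sqrt(1.0 - 4.0 * m_chi**2 / m_a**2)
    gamma_chi = g_dm**2 * m_a * beta_chi / (8.0 * math.pi)
    gamma_bb = width_to_fermions(g_d, m_b, m_a, 3)
    return gamma_chi / (gamma_chi + gamma_bb)
```

With large g_d and moderate g_DM, the bb width dominates and the invisible branching fraction drops well below 0.1, consistent with the discussion above.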
In the following subsection, we describe some of the other existing constraints on the parameter space and highlight the need for direct coverage of these scenarios at the LHC.

Existing Constraints

Our goal is to ascertain to what extent LHC searches can cover the parameter space shown in Fig. 1 that is not currently probed by LHC searches [54-58]. To our knowledge, there are currently no direct constraints on the parameter space of our simplified model with $15\ \mathrm{GeV} \lesssim m_a \lesssim 90\ \mathrm{GeV}$. By this, we mean that there exist no constraints depending only on the pseudoscalar's coupling to SM fermions in this mass range. There are several indirect constraints, but these are inherently dependent on other degrees of freedom in the UV-complete theory and can be straightforwardly avoided in many cases. We will present explicit examples of points evading all of the searches discussed below but still predicting an observable LHC signal in the NMSSM in Sec. 5. Pseudoscalar mediators with GeV-scale masses predict highly suppressed direct dark matter detection cross-sections. At tree level, the pseudoscalar only interacts spin-dependently with nuclei. Using the expressions and results found in Refs. [14,59], we find that the spin-dependent scattering cross-section for dark matter off nuclei via the pseudoscalar is far below the reach of current and planned experiments ($\sigma_{SD} \lesssim 10^{-48}\ \mathrm{cm^2}$) across the parameter space we consider, thanks to the $1/m_a^4$ suppression of $\sigma_{SD}$ in this regime. While spin-independent scattering can occur via one-loop diagrams, this contribution is also much too small to be observed. The difficulty of observing dark matter that interacts with the visible sector primarily through a pseudoscalar in direct detection experiments is indeed one of the main reasons such models are understood to be coy. Light pseudoscalars can also be constrained by flavor observables. Loop diagrams involving the pseudoscalar can generate effective flavor-changing vertices [21,61]. The limits are severe for pseudoscalars lighter than the $B$ and $\Upsilon$ meson scale simply because the mediator can be produced on-shell in decays. For $m_a \gtrsim 10$ GeV, the constraints are very significantly relaxed, with the most stringent arising from LHCb [62] and CMS [63] measurements of $\mathrm{BR}(B_s\to\mu^+\mu^-)$. For $m_a \gg m_B$, the pseudoscalar contribution alone implies an approximate upper bound on $g_d$ (Eq. 2.5) [21]. This constraint would naively appear to directly constrain some of the parameter space shown in Fig. 1; however, the new contributions to $B_s\to\mu^+\mu^-$ are strongly model-dependent [64]. For example, in supersymmetric UV completions of our model, such as the NMSSM, there are several new contributions that enter with opposite sign to that from the $a$-induced vertex. Thus, cancellations can occur over large portions of the parameter space, allowing for light pseudoscalars with large couplings to SM fermions (i.e. above the naive upper bound of Eq. 2.5) [16], once again highlighting the need for direct probes of this parameter space. For light mediators with $2m_a < m_h$ ($h$ is the 125 GeV SM-like Higgs), exotic Higgs decays to pseudoscalar pairs can affect the Higgs width and signal rates [65,66], which are constrained by both ATLAS [67] and CMS [68]. Evidence for $h\to aa$ decays was also searched for at LEP [69] and the Tevatron [70]. Such decay modes can also be very effectively probed at the High-Luminosity LHC [66].
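The quoted direct-detection suppression is dominated by the $1/m_a^4$ propagator factor. A one-line scaling sketch, where the reference normalization $\sigma_{\mathrm{ref}}$ at $m_{a,\mathrm{ref}} = 50$ GeV is a purely illustrative anchor and not a computed value:

```python
def sigma_sd(m_a, sigma_ref=1e-48, m_a_ref=50.0):
    """Relative spin-dependent cross-section (cm^2), keeping only the
    1/m_a^4 propagator suppression discussed in the text."""
    return sigma_ref * (m_a_ref / m_a) ** 4
```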
Indeed, this has long been recognized as an important potential discovery channel for NMSSM pseudoscalars at colliders [71-75]. However, these constraints depend on the $haa$ coupling, which, in some cases, can be made appropriately small in realistic models [16,23], especially those in which the pseudoscalar coupling to Standard Model fermions does not arise through mixing with the SM-like Higgs [22]. Alternatively, simply taking $m_a > m_h/2$ avoids these constraints altogether. Another indirect constraint arises from LEP searches for $e^+e^-\to ha$ production [76]. While these results prohibit MSSM-like pseudoscalars lighter than 90 GeV for all values of $\tan\beta$, the bounds depend on the $Zha$ coupling, which is model-dependent and can again be straightforwardly avoided [16]. For example, in a Type II 2HDM with an additional singlet (2HDM+S), the $Zh_ia$ coupling is proportional to $\cos\theta$ times a combination of $S_{i1}$ and $S_{i2}$, the corresponding entries of the matrix diagonalizing the $3\times3$ CP-even mass matrix with the Higgs bosons ordered in mass (see e.g. Eq. 2.22 in Ref. [77]). Contrasting this with $g_d \sim \cos\theta\tan\beta$, we see that the simple limit $\cos\theta \ll 1$, $\tan\beta \gg 1$ can result in an appreciable $g_d$ with a significantly suppressed $g_{Zha}$. Finally, existing MSSM Higgs boson searches at the Tevatron [78-82] and the LHC [54,55,57] constrain $g_d$ for $m_a > 90$ GeV, but, in an effort to avoid the large backgrounds encountered at lighter masses, and because LEP had already ruled out MSSM-like pseudoscalars with masses below 90 GeV, the published limits do not extend below the $Z$ mass. There are also searches for light ($m_a \lesssim 15$ GeV) pseudoscalars at CMS [56], motivated by certain limits of the NMSSM. However, the $15\ \mathrm{GeV} \lesssim m_a \lesssim 90\ \mathrm{GeV}$ mass range remains untested. Although the collider limits on a light pseudoscalar can be avoided, one might also be concerned about the consistency of this scenario once the model is UV-completed. Our Lagrangian is not invariant under $SU(2)_L\times U(1)_Y$, and so, given a particular UV completion, one should also check that constraints on the other states can be satisfied while demanding a light pseudoscalar. In 2HDM+S models, for example, most constraints on the rest of the Higgs sector can be satisfied by simply taking the charged Higgs mass to be moderately heavy (a few hundred GeV) with an appropriate choice of $\tan\beta$ [54,55]. Such requirements are consistent with light pseudoscalars and sizable $g_d$, as shown e.g. in Ref. [16] and in Sec. 5 for the NMSSM. Perhaps surprisingly, then, there is a significant gap in coverage for light pseudoscalars with appreciable couplings to SM fermions, as arise in models explaining the GC excess or otherwise, and this situation has considerable room for improvement. In the remainder of this paper, we investigate to what extent searches similar to those already existing for heavy MSSM Higgs bosons and for light NMSSM pseudoscalars can directly probe the parameter space motivated by the Galactic Center excess. This task requires a careful treatment of the backgrounds below the $Z$ mass. As we show below, the backgrounds can be substantially reduced by a suitably chosen sequence of kinematic cuts.

Production and Signals

Heavy neutral Higgs bosons in two-Higgs-doublet models are searched for via a variety of experimental signatures, including gluon-fusion (ggF) production and production in association with top or bottom quarks [54,55].
These canonical Higgs-type searches become much more difficult below the $Z$ threshold, where the backgrounds increase dramatically. Fortunately, as shown in Sec. 2 above, light pseudoscalar mediators consistent with the Galactic Center excess can have enhanced couplings to down-type Standard Model fermions relative to those expected for a Standard Model-like Higgs boson of the same mass. This results in an enhanced production cross-section in modes involving $b$ quarks, and (potentially) in the gluon-fusion channel, relative to the Standard Model-like case. This situation is depicted on the left-hand side of Fig. 2, which shows as an example the enhancement of both the inclusive $b\bar ba$ (black) and gluon-fusion (red) production cross-sections with $g_d = g_u^{-1}$ (i.e. $\cos\theta = 1$), relative to those with $g_d = g_u = 1$ ($\sigma_0$), as a function of $g_d$. The enhancement of the $b\bar ba$ cross-section is independent of $m_a$, as it depends only on $g_d$ for a given $m_a$, while the differently styled red curves correspond to $\sigma_{ggF}/\sigma_{ggF,0}$ for different values of the pseudoscalar mass. The enhancement is substantially larger in the $b\bar ba$ mode across the parameter space, which suggests focusing on production processes involving $b$ quarks rather than on gluon fusion. We consider the branching ratio of the pseudoscalar into various final states, assuming $\mathrm{BR}(a\to\chi\chi)$ is negligible, on the right-hand side of Fig. 2.

Figure 2. Left: enhancement of the production cross-sections relative to those with $g_d = g_u = 1$ as a function of $g_d$. The dotted, dash-dotted, dashed, and solid red lines correspond to the enhancement in ggF production for $m_a$ = 20, 40, 60, and 80 GeV, respectively. The corresponding enhancement for $b\bar b$-associated production is shown by the solid black curve (this enhancement is independent of $m_a$). Right: branching fraction of the pseudoscalar into various final states (assuming $\mathrm{BR}(a\to\chi\chi)$ is negligible). Note that the branching ratios into fermions are nearly independent of $g_d$ (since the total width is set primarily by $a\to b\bar b,\ \tau^+\tau^-$ decays), while the $a\to\gamma\gamma$ partial width is substantially suppressed for $g_d > 1$.

The pseudoscalar's branching fraction into photons is small and is further suppressed for $g_d > 1$, which, combined with the increased backgrounds for $m_a < m_Z$, suggests that diphoton searches will likely be unable to probe the low-mass pseudoscalar mediators we are interested in. On the other hand, while the favored decay is into a $b\bar b$ pair, searches for such resonances would have to contend with large, pure-QCD backgrounds to exploit this mode. Thus, to avoid large backgrounds while maintaining a reasonable signal, and to maximize the enhancement of the production cross-section, we propose a search for the pseudoscalar in second- and third-generation dilepton ($\tau^+\tau^-$ and $\mu^+\mu^-$) pair production in association with one or two $b$-jets. Of course, this strategy requires that the pseudoscalar couples to leptons, which is typical in extended two-Higgs-doublet models but need not be the case [21]. Similar searches have been considered by both ATLAS [54] and CMS [55], but they focus on higher-mass resonances motivated by two-Higgs-doublet models and the MSSM, where the mass region of interest is greater than about 90 GeV [76] due to LEP searches and precision constraints on heavy Higgs bosons. Also, previous theoretical studies in the context of the NMSSM have investigated the potential for the LHC to probe light pseudoscalars with somewhat similar searches [83-88].
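The statements about Fig. 2 (right) follow from the tree-level widths alone. A minimal Python sketch, assuming couplings proportional to fermion mass scaled by $g_d$ ($g_u$), pseudoscalar phase space $\beta = \sqrt{1-4m_f^2/m_a^2}$, and neglecting the loop-induced $gg$ and $\gamma\gamma$ modes; the quark mass values are illustrative inputs:

```python
import math

# Fermion masses (GeV) and whether each couples with g_d (down-type) or g_u.
M_F  = {"b": 4.18, "tau": 1.777, "c": 1.27, "mu": 0.10566}
TYPE = {"b": "d",  "tau": "d",   "c": "u",  "mu": "d"}
N_C  = {"b": 3,    "tau": 1,     "c": 3,    "mu": 1}

def partial_widths(m_a, g_d, g_u):
    """Tree-level a -> f fbar widths up to a common overall constant."""
    widths = {}
    for f, m_f in M_F.items():
        if m_a <= 2.0 * m_f:
            widths[f] = 0.0
            continue
        g = g_d if TYPE[f] == "d" else g_u
        beta = math.sqrt(1.0 - (2.0 * m_f / m_a) ** 2)
        widths[f] = N_C[f] * (g * m_f) ** 2 * beta
    return widths

def branching_ratios(m_a, g_d):
    # g_u = 1/g_d corresponds to the cos(theta) = 1 choice in the text.
    widths = partial_widths(m_a, g_d, 1.0 / g_d)
    total = sum(widths.values())
    return {f: w / total for f, w in widths.items()}

print(branching_ratios(60.0, 25.0))  # b bbar dominates; mu mu ~ few x 1e-4
# bb-associated production simply scales as sigma / sigma_0 = g_d**2.
```

The $\mu^+\mu^-$ fraction at the few-$10^{-4}$ level is consistent with the "$<0.1\%$" quoted for SR3 below.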
However, these previous investigations [83-88] did not incorporate trigger and detector effects, and did not analyze the effects of cuts on the signal and backgrounds in detail, which is a major component of this work and crucial for obtaining an observable signal. While Ref. [87] arrives at largely negative conclusions regarding $b\bar ba$ production (at least in the NMSSM with partial universality), our analysis suggests a much more positive picture once appropriate cuts are implemented. It is worth pointing out that the CMS search in Ref. [56] finds sensitivity down to $g_d \sim 3$ for masses up to $m_a \sim 14$ GeV in the gluon-fusion mode with $a\to\mu^+\mu^-$. One might be inclined to conclude that this search channel could simply be extrapolated to larger masses in the scenarios of interest. However, this is unlikely to be the case. Fig. 2 shows that the gluon-fusion production cross-section is actually suppressed for $1 < g_d \lesssim 10$ compared to its value at $g_d = 1$, given our assumptions about the couplings. The suppression increases with $m_a$ and is due to the decreased top-quark loop contribution that is otherwise dominant at heavier masses. In addition, due to the kinematic beta factor $\sqrt{1 - (2m_f/m_a)^2}$, the $b\bar b$ branching ratio is suppressed for smaller values of $m_a$, resulting in an increase in the $\mu^+\mu^-$ branching fraction. For example, $\mathrm{BR}(a\to\mu^+\mu^-)$ is enhanced by almost a factor of 2 at $m_a = 10$ GeV relative to $m_a > 20$ GeV. Thus, for the scenarios we consider, production modes involving down-type fermions at tree level appear more promising than those relying on gluon-fusion production and decays to muons, although different assumptions about the coupling structure could alter this conclusion. For a related analysis of the potential LHC reach in the 0b mode in $Z'$ models, see e.g. Ref. [89]. In the remainder of this section, we discuss the challenges and strategies for examining low-mass pseudoscalars with enhanced couplings to down-type fermions, $g_d > 1$. We implemented our simplified model in FeynRules 2.0 [90] and generated both our signals and backgrounds at leading order (LO) using MadGraph5+aMC@NLO [91]. We then used Pythia 6.4 [92] to decay the $\tau$ leptons and hadronize the $b$-jets, and incorporated initial- and final-state radiation, with an appropriate scale used for the MLM matching of matrix-element and radiated jets. Detector simulation for trigger and tagging was performed using Delphes 3.0 [93]. Trigger effects were implemented as step-function cuts at the analysis stage, though some minimum kinematic requirements were enforced at the generation phase. Diagrams for some of the primary production modes of the signal are shown in Figure 3. To avoid the appearance of potentially large logarithms arising from the phase-space integration over collinear final-state quarks, the semi-inclusive $b(\bar b)a$ events were generated with $b$ quarks included in the parton distribution functions (pdfs) of the proton. This is known as the "five flavor scheme" (5FS), which effectively resums the large logarithms [94-96]. Exclusive $b\bar ba$ events were generated without the inclusion of the $b$ pdfs, since the resulting contributions are doubly pdf-suppressed and subleading compared to the gluon-induced processes. To avoid double counting between the two-body, $b(\bar b)a$, production mode and the three-body, $b\bar ba$, production mode where one of the $b$-jets is collinear with the proton beam, the three-body mode was generated with a minimum $p_T^b > 5$ GeV.
There are several technical difficulties associated with accurately calculating the two-body $b(\bar b)a$ production cross-section at hadron colliders, which have received much attention in the literature [97-104]. In particular, the leading-order production cross-sections are known to exhibit a substantial dependence on the renormalization and factorization scales, $\mu_r$ and $\mu_f$, respectively [99,103]. For our signal generation, we consider dynamic scales proportional to the sum of the transverse masses of the final-state particles, $\mu_{r,f} \propto f\sum_i m_{T,i}$ (Eq. 3.1), where $f$ is an overall scaling factor and the index $i$ runs over the produced $b$'s and the $a$.

Figure 3. Some of the diagrams contributing to the production of the pseudoscalar, $a$, at the LHC. The two rightmost diagrams arise in the 5FS.

This choice is in keeping with previous analyses in the context of Standard Model-like Higgs production [99,104-107]. We assessed the impact of the scale dependence by varying the overall scaling factor in the range $[1/2, 2]$, which resulted in a 2-20% change in the production cross-section, with the larger effects occurring for smaller values of $m_a$. This is consistent with the range typically found in the literature [98,99,101]. To further validate our leading-order calculation, we compared our LO result for the dominant ($gb(\bar b)\to b(\bar b)a$) production mode to the next-to-leading-order (NLO) result calculated in the five flavor scheme as implemented in the program MCFM [108], for several choices of $\mu_{f,r}$ (we neglect the differences between scalar and pseudoscalar production, which are small [104]). We find that our LO results exhibit reasonable agreement with the NLO result, falling within a factor of 1-2 across the parameter space we consider. Additionally, there are theoretical uncertainties related to the specific choice of parton distribution functions, which have been shown to be of order $\sim 5\%$ for low masses [103], as well as some residual renormalization-scheme dependence (MadGraph uses an on-shell scheme, while e.g. MCFM uses $\overline{\mathrm{MS}}$). To account for these effects, Appendix C takes a conservative approach and explores the effect of a factor-of-2 over-estimation of our signal and, separately, a factor-of-2 under-estimation of the backgrounds. Our overall conclusions are not significantly affected by this re-scaling, and so we believe them to be quite robust. For an experimental search, we consider three possible leptonic tagging channels: SR1 requires one electron and one muon; SR2 requires one lepton ($e$ or $\mu$) and one hadronic $\tau$; SR3 requires two muons. SR1 is motivated by excellent trigger response, SR2 by the larger branching ratios, and SR3 by a resonance-search methodology in the di-muon invariant mass spectrum that allows for the use of data-driven backgrounds. In all three signal regions, we also require 1-2 $b$-jet tags and no light jets, where light jets are defined as jets with $p_T > 40$ GeV. The signals are therefore inclusive of light jets with $p_T < 40$ GeV, such as those commonly generated by ISR. These tagging requirements significantly suppress fake backgrounds arising from vector-boson production in association with light jets. We assume the default CMS tagging efficiencies implemented in Delphes 3.0, which are as follows. For tagging, electrons are required to have $p_T > 10$ GeV and $|\eta| < 2.5$. Within the inner region of the detector, $|\eta| < 1.5$, we assume a tagging efficiency of $\epsilon_e = 0.95$, while for the outer region with $|\eta| < 2.5$ we assume $\epsilon_e = 0.85$.
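For orientation, the tagging model just described (and completed in the next paragraph) can be mimicked with a few toy functions; this is a simplified stand-in for the Delphes 3.0 parametrization, not its actual implementation:

```python
import random

def tag_electron(pt, eta):
    """Electron tagging: pT > 10 GeV, |eta| < 2.5; efficiency 0.95 in the
    inner region (|eta| < 1.5), 0.85 in the outer region."""
    if pt < 10.0 or abs(eta) > 2.5:
        return False
    eff = 0.95 if abs(eta) < 1.5 else 0.85
    return random.random() < eff

def tag_muon(pt, eta):
    """Muon tagging: pT > 10 GeV, |eta| < 2.4, flat efficiency 0.95."""
    return pt > 10.0 and abs(eta) < 2.4 and random.random() < 0.95

def tag_hadronic_tau(eta):
    """Hadronic tau: |eta| < 2.5, flat efficiency 0.4."""
    return abs(eta) < 2.5 and random.random() < 0.40

def jet_fakes_electron():
    return random.random() < 1.0e-4  # uniform over the detector

def jet_fakes_tau():
    return random.random() < 1.0e-3
```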
The rate at which jets fake electrons is taken to be $\epsilon_{j\to e} = 0.0001$, uniform over the whole detector. For muons, we require candidates to have $p_T > 10$ GeV and $|\eta| < 2.4$. Since our analysis involves only low-$p_T$ muons, we take a fixed tagging efficiency of $\epsilon_\mu = 0.95$, which is appropriate for $p_T^\mu < 1000$ GeV. For the tagging of hadronic taus, we require $|\eta| < 2.5$ and take a fixed tagging efficiency of $\epsilon_\tau = 0.4$, with a fake rate for mistagging a light jet as a hadronic tau of $\epsilon_{j\to\tau} = 0.001$.

Trigger Effects

Since the signal typically produces very soft jets and leptons, trigger effects are very important to consider. To account for the effect of the trigger on our results, we have implemented a variety of triggers as step-function cuts based on what we believe are reasonable offline triggers for CMS. The following primary triggers are potentially relevant to our study:

• 1e: single electron with $p_T > 35$ GeV;
• 1µ: single muon with $p_T > 25$ GeV;
• 2µ: di-muon with leading $p_T > 17$ GeV, subleading $p_T > 10$ GeV;
• eτ_h: electron + hadronic tau with $p_T^\tau > 45$ GeV, $p_T^e > 19$ GeV;
• µτ_h: muon + hadronic tau with $p_T^\tau > 40$ GeV, $p_T^\mu > 15$ GeV;
• eµ: leading electron + muon with $p_T^e > 23$ GeV, $p_T^\mu > 10$ GeV;
• µe: electron + leading muon with $p_T^e > 12$ GeV, $p_T^\mu > 23$ GeV.

We also include other triggers, such as those involving photons, jets, τ_h plus MET, and $b$-jets, but these have a negligible effect on the signal events (i.e. $<0.3\%$ of signal events pass all the non-primary triggers combined) and so are not included in the list above. The non-primary triggers do pass a significant portion of the backgrounds, however, which necessitates their inclusion; this indicates that those events have distinctive signatures that can be eliminated from the analysis by kinematic cuts. Due to the low mass of the pseudoscalar in our search, a significant number of the production events will not pass the trigger. Since we are not privy to the details of the final triggers, we consider the effect of varying the muon $p_T$ thresholds of the triggers that include a primary muon. These triggers have the greatest likelihood of discretionary variation in a dedicated experimental search, and are the most important because they have the lowest inclusive cross-sections and thus the lowest $p_T$ thresholds. We analyzed the cross-section of signal events passing each of the primary trigger cuts ($\sigma^{T_y}_{SRx}$) as a fraction of the cross-section of generated events with the same tagging signature in each signal region, independently:

$$R^{T_y}_{SRx} = \frac{\sigma^{T_y}_{SRx}}{\sigma^{gen}_{SRx}}, \qquad \sigma^{gen}_{SRx} = \sigma_{gen}\times\mathrm{BR}(\tau^+\tau^-\to SRx)\times\epsilon_{SRx}, \qquad (3.2)$$

where $SRx$ refers to the signal region and $T_y$ to the specific trigger. This ratio can be regarded as a trigger efficiency. Of note, we found that the $e+\tau_h$ and $\mu+\tau_h$ triggers did not pass any of a preliminary 200k generated events, likely due to the hard cut on the $p_T$ of the $\tau_h$ and the low mass of the pseudoscalar. Since the hadronic tau has a large fake background from mistagged light jets, we do not anticipate that the trigger threshold for hadronic-tau $p_T$ will be improved enough to make these triggers worthwhile. While the fake rate of jets for electrons is smaller than for hadronic taus, we believe it is unlikely that any significant improvement in the electron trigger thresholds will be implemented, as the resulting increase in the inclusive cross-section would still be larger than for comparable changes in the muon trigger thresholds.
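The per-trigger efficiency ratio of Eq. (3.2) amounts to a step-function selection followed by a ratio of event counts. A minimal sketch, with events represented as dictionaries of descending-sorted lepton $p_T$ lists (a hypothetical event format; thresholds from the list above):

```python
# Primary triggers as step-function cuts (thresholds in GeV). Events are
# dicts like {"e_pt": [12.3], "mu_pt": [28.1, 11.0]}, pT sorted descending.
TRIGGERS = {
    "1e":  lambda ev: any(pt > 35 for pt in ev["e_pt"]),
    "1mu": lambda ev: any(pt > 25 for pt in ev["mu_pt"]),
    "2mu": lambda ev: len(ev["mu_pt"]) >= 2
           and ev["mu_pt"][0] > 17 and ev["mu_pt"][1] > 10,
    "emu": lambda ev: ev["e_pt"] and ev["mu_pt"]
           and ev["e_pt"][0] > 23 and ev["mu_pt"][0] > 10,
    "mue": lambda ev: ev["e_pt"] and ev["mu_pt"]
           and ev["e_pt"][0] > 12 and ev["mu_pt"][0] > 23,
}

def trigger_ratio(events, trigger):
    """R = (events passing the trigger) / (generated, tagged events),
    an event-count estimate of the cross-section ratio in Eq. (3.2)."""
    passed = sum(1 for ev in events if TRIGGERS[trigger](ev))
    return passed / len(events)
```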
The summary of the trigger efficiency ratios in Eq. (3.2) for the default implemented triggers is shown in Table 1, while an analysis of the effect of varying the muon $p_T$ threshold in the 1µ, 2µ and µe triggers for each of the three signal regions is shown in Figure 4. A naive interpretation of this figure suggests that the single-muon trigger includes a larger fraction of the signal than the µe or 2µ triggers, but it is important to note that the single-muon inclusive cross-section at the LHC is significantly larger than the muon+electron or di-muon inclusive cross-sections; the single-muon trigger will thus typically have a higher $p_T$ threshold than the other triggers and a lower trigger efficiency, as shown in Table 1.

Table 1. The ratio of the cross-section passing the trigger cut to the generated cross-section for 200k generated events. Kinematics-dependent tagging efficiencies are already incorporated into the cross-sections. All leptons ($e$, µ, τ) are generated with a minimum $p_T > 10$ GeV, but τ decays to leptons can result in $p_T^{e,\mu} < 10$ GeV. The columns in this table are not necessarily independent, as an event can simultaneously pass multiple triggers.

Backgrounds and Their Reduction

Since the QCD backgrounds at the LHC are significant, the rates at which jets fake electrons, hadronic τ-jets, and $b$-jets are important to take into account. Additionally, backgrounds with kinematics similar to our signal produce soft leptons that may not be identified as easily or may fall outside the central region of the detectors where tagging is possible. Thus, backgrounds producing more than two leptons, one of which is not tagged, may contribute to the signal regions. To account for these effects, we include backgrounds that produce between one and three leptons ($e$, µ, τ) and 0-2 $b$-jets, in association with 1-3 light jets (with $n_b + n_j \le 3$), since our signal is inclusive of low-$p_T$ light jets. The following background processes are generated:

• $pp \to \gamma^*/Z + b\bar b + (0,1)j$, $\gamma^*/Z \to \ell^+\ell^-$;
• $pp \to \gamma^*/Z + b(\bar b) + (0,1,2)j$, $\gamma^*/Z \to \ell^+\ell^-$;
• $pp \to \gamma^*/Z + (0,1,2,3)j$, $\gamma^*/Z \to \ell^+\ell^-$;
• $pp \to W^\pm + b\bar b + (0,1)j$, $W^\pm \to \ell^\pm\nu_\ell(\bar\nu_\ell)$;
• $pp \to W^\pm + b(\bar b) + (0,1,2)j$, $W^\pm \to \ell^\pm\nu_\ell(\bar\nu_\ell)$;
• $pp \to W^\pm + (0,1,2,3)j$, $W^\pm \to \ell^\pm\nu_\ell(\bar\nu_\ell)$;
• $pp \to t\bar t + (0,1)j$, with leptonic top decays;

where $\ell = (e, \mu, \tau)$ and $j$ denotes light jets ($u$, $d$, $s$, $c$, $g$) that can come from associated production. Each entry in the list above is produced with the quoted number of jets, and MLM matching and merging is incorporated to avoid double counting of light-jet production with initial-state radiation (MLM matching with XQCUT = 15 and QCUT = 20). The two largest contributions to our backgrounds are the inclusive $Z$ production modes (the first three entries) and $t\bar t$ (the seventh entry), but these are effectively reduced by kinematic cuts.

Figure 4. Trigger ratios for each signal region, normalized to the produced and tagged cross-section, based on varying the leading-muon $p_T$. The µe_a trigger assumes a subleading electron $p_T$ = 12 GeV, while µe_b assumes a subleading electron $p_T$ = 17 GeV. The 2µ trigger for SR3 assumes a subleading muon $p_T$ = 15 GeV rather than the $p_T$ = 10 GeV discussed in the text, as the trigger response for the lower subleading $p_T$ is very similar to the single-muon rate due to the minimum $p_T$ settings at the event-generation stage and the tagging thresholds.

The kinematic distributions of the signal and backgrounds are included in Appendix A. Based on the kinematic distributions we examined, we identified a number of cuts that improve the signal significance. These cuts focus on reducing the $t\bar t$ and $Z+nj$ backgrounds.
The $t\bar t$ and other backgrounds with $W^+W^-$ lepton production can be reduced with cuts involving the $\slashed E_T$ measurement, including a direct $\slashed E_T$ cut as well as a cut on the transverse mass built from the subleading lepton, $m_T = \sqrt{2\, p_T^{2nd}\, \slashed E_T\, (1-\cos\Delta\phi)}$. Backgrounds with a $Z$ resonance can be reduced by a cut on the dilepton invariant mass, $m_{\ell\ell}$. In addition, a large fraction of the backgrounds producing both leptons and jets have a large total $p_T$; we therefore also consider cuts on the scalar sum of the visible transverse momenta, $H_T$, and on the scalar sum of the lepton $p_T$'s and $\slashed E_T$, $\ell H_T$. In the case of SR3, dilepton invariant-mass cuts are implemented in a fixed range. While the branching ratio to di-muons is small ($<0.1\%$), the $a\to\tau^+\tau^-\to\mu^+\mu^- + \slashed E_T$ branching ratio is similarly small, and the invariant-mass peak of the direct decay is reconstructible with low smearing. Thus, it may be possible to observe the pseudoscalar with a resonance-search methodology. For SR3, we consider only events within a 2-3 GeV invariant-mass bin centred at the mass of the pseudoscalar. In contrast, the analyses for SR1 and SR2 are based on a cut-and-count methodology, since the dilepton peak is significantly smeared out by the loss of information to the neutrinos originating from the τ decays. For these signal regions, we do not employ a narrow invariant-mass window and instead use $m_{\ell\ell}$ cuts only to exclude backgrounds. The cuts for SR1 and SR2 are considered separately in each of two distinct scenarios: hard cuts are better for high-luminosity searches and have a greater overall reach, while soft cuts are better for low-luminosity searches. Kinematic threshold values for the considered cuts were chosen by maximizing

$$\frac{\sigma_{sig}\, L}{\sqrt{\sigma_{sig}\, L + \sigma_{bkg}\, L + \epsilon_{sys}^2\,\sigma_{bkg}^2\, L^2}},$$

for a systematic uncertainty of $\epsilon_{sys} = 0.2$ and luminosity $L = 100$ fb$^{-1}$, while maintaining $\sigma_{sig}^{cut}/\sigma_{sig}^{tot} \sim 0.5\ (0.8)$ for hard (soft) cuts for $m_a > 40$ GeV. The di-muon signal region, SR3, is analyzed assuming only a single cut scenario, as background events with $m_{\ell\ell} \sim m_a$ generally have acceptance rates similar to the signal's. The expected search reach using these cuts is given in the next section, and further details on the acceptance rates for each cut are provided in Appendix B. Alternative approaches to determining the cut regions, such as those incorporating repeated algorithmic refinements of the phase space, would optimize the cuts for a single mass value and be unable to account for the full range of parameters we explore. Maximizing the acceptance rate for $m_a = 40$ GeV, for example, would result in a poorer reach in $g_d$ for $m_a = 80$ GeV. We feel our approach is more appropriate for a general search strategy.

Results

We can now investigate the extent to which the light-pseudoscalar parameter space consistent with the Fermi signal can be probed by the searches we propose. Due to the low pseudoscalar masses of interest in this study, as well as the cut-and-count search method of SR1 and SR2, systematic uncertainties are a particularly challenging aspect of this search. To estimate their effect, we consider two scenarios in addition to our two cut (hard/soft) scenarios: low systematics, with $\epsilon_{sys} = 10\%$, and high systematics, with $\epsilon_{sys} = 30\%$. Our analysis of the discovery potential is based on the signal significance

$$\frac{N_s}{\sqrt{N_s + N_b + \epsilon_{sys}^2\, N_b^2}},$$

where $N_s = \sigma_s L$ and $N_b = \sigma_b L$ are the numbers of signal and background events, respectively, after cuts for a given integrated luminosity $L$. Contours of constant luminosity are plotted in Figures 5, 6 and 7.
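The significance and cut-optimization figure of merit above are a direct transcription into code (the cross-section inputs below are illustrative placeholders):

```python
import math

def significance(sigma_sig, sigma_bkg, lumi, eps_sys):
    """N_s / sqrt(N_s + N_b + eps_sys^2 N_b^2), with N = sigma * L.
    sigma_sig, sigma_bkg in fb after cuts; lumi in fb^-1."""
    n_s = sigma_sig * lumi
    n_b = sigma_bkg * lumi
    return n_s / math.sqrt(n_s + n_b + (eps_sys * n_b) ** 2)

# The systematic term grows as fast as the signal with luminosity, so the
# significance saturates at N_s / (eps_sys * N_b) for large L:
for lumi in (1.0, 10.0, 100.0, 1000.0):
    print(lumi, round(significance(5.0, 100.0, lumi, 0.2), 2))
```

This saturation is exactly the systematics-dominated behavior described in the next paragraph.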
For small enough values of $g_d$, systematic uncertainties dominate the signal, and greater luminosity will not be sufficient to reveal any signal. Note that we have also verified that each signal data set considered has at least 5 events after cuts. The soft-cut scenarios of SR1 and SR2 are optimized for early searches at low luminosity but suffer from a larger systematics-dominated region, since the total backgrounds are much larger; their ability to exclude parameter space therefore saturates at approximately $L = 10$ fb$^{-1}$ of integrated luminosity. The hard-cut scenarios, by contrast, have a better reach, with exclusions from $L = 100$ fb$^{-1}$, though larger luminosity is unlikely to push this boundary much further. As discussed, the expected sensitivities in each case are governed by three primary components: production, trigger, and cuts. Production rates decrease with increasing mass $m_a$, reducing the overall cross-section and number of events at the LHC. In contrast, the trigger response improves for heavier pseudoscalars but significantly penalizes the lighter-pseudoscalar scenarios. However, the pseudoscalar is produced in association with $b$ quarks, which boosts the $a$ and allows a large enough fraction of events to pass the trigger to make the search viable. Lastly, eliminating backgrounds from the $Z$ peak leads to a choice of cut thresholds that has a larger impact on events from heavier pseudoscalar masses, especially in the hard-cut scenarios. Combined, these effects produce the typical shape observed in Figures 5 and 6, with reduced exclusion reach for both the lowest- and highest-mass scenarios. The di-muon search uses a different approach, incorporating a pseudo-resonance-search methodology. While we do not fit a line shape over the background and compare to the signal, we employ a narrow invariant-mass window with a sliding center that effectively approximates the result of such an approach. In practice, an approach that fits a line shape to the continuum background would reduce the systematic uncertainties associated with the cut-and-count methodology, which requires simulations to estimate the backgrounds. As a result, we suspect that the low-systematics scenario in Figure 7 is potentially the more realistic case, in contrast to the other signal regions, where low systematics may be overly optimistic. As a result of the relatively large width of the SM $Z$, combined with detector smearing effects, a di-muon resonance at 80 GeV would contend with increased backgrounds from the $Z$ peak (which is why we do not consider heavier masses in this channel). If we assume similar systematic uncertainties for each signal type, then the most promising reach in the high-$m_a$ region is in the 1e1µ signal region, while the 1ℓ1τ signal regions are more promising in the low-$m_a$ regime. Note that, under the assumption of similar systematics, the reach of the di-muon signal region is not as promising as the others anywhere in the parameter space. As mentioned, however, systematic uncertainties in the di-muon search will likely be smaller than in the other modes, and so the signal regions combine into a complementary and robust search strategy. Comparing Figures 1 and 5-7, we see that the searches we propose would cover a significant portion of the otherwise unconstrained parameter space consistent with the Galactic Center excess in scenarios with light pseudoscalar mediators, even at rather low integrated luminosity.
This region is both theoretically and phenomenologically well-motivated, and we encourage both ATLAS and CMS to consider searches along the lines of those presented here.

Application to the NMSSM

To illustrate the usefulness of our results in a UV-complete model, we consider how our searches impact the $Z_3$-symmetric NMSSM parameter space consistent with the excess. To set our conventions, we take the superpotential to be of the standard $Z_3$-invariant form,

$$W \supset \lambda\, \hat S\, \hat H_u\cdot\hat H_d + \frac{\kappa}{3}\,\hat S^3 + (\text{MSSM Yukawa terms}),$$

with soft supersymmetry-breaking terms given by

$$-\mathcal{L}_{soft} \supset m_{H_u}^2 |H_u|^2 + m_{H_d}^2 |H_d|^2 + m_S^2 |S|^2 + \left(\lambda A_\lambda\, S H_u\cdot H_d + \frac{\kappa}{3} A_\kappa\, S^3 + \mathrm{h.c.}\right).$$

For sizable $\tan\beta$ and $\cos\theta$ not too small, $g_d$ will be larger than 1. Our conventions follow those of Refs. [23,77], to which we refer the Reader for further details regarding the spectrum. Two scenarios have been proposed in the $Z_3$-invariant NMSSM to explain the GC excess via neutralino annihilation into SM particles through a light singlet-like pseudoscalar [16,17] (see also Ref. [46] for an analysis of the general NMSSM, which in some cases may also be probed by the searches we present). The first involves a mixed singlino/Higgsino-like neutralino, which, to achieve a Standard Model-like (and not singlet-like) 125 GeV Higgs, requires the lightest pseudoscalar to be nearly pure singlet [16] (i.e. $\cos\theta \ll 1$). Since the pseudoscalar couplings to SM fermions are then suppressed, explaining the GC excess in this scenario requires $m_a \approx 2m_\chi$ to within about a GeV, as well as additional $Z$-mediated contributions to the annihilation rate in the early universe to drive down the relic abundance. This would seem quite finely tuned, requiring a fortunate conspiracy of parameters. Instead, we focus on the second possibility, namely that the neutralino is bino/Higgsino-like. In this case, the singlet component of the 125 GeV Higgs is naturally small, so the lightest pseudoscalar can feature a more significant amount of mixing between the singlet and $SU(2)$ states. As a result, the requirement that the neutralino annihilation be on resonance is relaxed, allowing one to consider a much larger range of masses not precisely tuned to $m_a \approx 2m_\chi$ [16]. It is worth mentioning that analyses of the NMSSM subsequent to Ref. [16] have found somewhat different results, favoring the singlino/Higgsino scenario [17,109]. However, taking the systematics into account in fitting the Fermi signal [12,13], we find that the bino/Higgsino scenario is fully compatible with both the GC signal and the Fermi dwarf spheroidal limits. Another reason the bino/Higgsino scenario may have been disfavored in Ref. [109] is that the large pseudoscalar couplings to SM fermions in this scenario are constrained by rare meson decays, in particular $B_s\to\mu^+\mu^-$. As pointed out in Ref. [16], these constraints can be avoided rather straightforwardly by taking advantage of mild cancellations between the various SUSY contributions to $\mathrm{BR}(B_s\to\mu^+\mu^-)$. Such points can be difficult to sample in a large scan of the parameter space, as employed in Refs. [17,109]. We have verified, however, that the bino/Higgsino scenario is in fact still viable when these constraints are taken into account, as claimed in Ref. [16]. The bino/Higgsino explanation of the GC excess maps directly onto our simplified model (except that the WIMP is a Majorana, instead of Dirac, fermion). To illustrate the effect of our searches on the viable bino/Higgsino parameter space of the NMSSM, we performed a Markov Chain Monte Carlo scan of the parameter space using NMSSMTools 4.4.0 [110], interfaced with micrOmegas 3.1 [111].
Motivated by the parameter space presented in Ref. [16], we fixed the remaining parameters as in Eq. 5.5, with all other soft masses and trilinear couplings at 1 TeV, while varying $\tan\beta$, $\kappa$, and $A_\kappa$. We required all points to satisfy all existing constraints discussed earlier and implemented in NMSSMTools. The results of the scans are shown, along with our results for the LHC reach across the parameter space, in Fig. 8.

Figure 8. Application of our results to the $Z_3$-symmetric NMSSM. The black (gray) contours correspond to the reach at 100 fb$^{-1}$ (1 fb$^{-1}$) for the hard (soft) cut scenarios and low systematics in the various search channels. The gray points are the result of a Markov Chain Monte Carlo scan of the parameter space (described in the text) consistent with all existing phenomenological constraints, with no requirements on the LSP relic abundance or annihilation rate, with parameters as in Eq. 5.5 and $m_A = 550$ GeV. The green, blue, and orange points are capable of explaining the Fermi signal and consistent with the recent dwarf spheroidal constraints for $m_A$ = 500, 550, and 600 GeV, respectively. The red band is an example of the NMSSM parameter space found to be consistent with the excess in Ref. [16]. The sample point of Table 2 below is indicated with a star. Note that it may be possible to choose parameters minimizing the $haa$ coupling so as to fill in the $m_a < m_h/2$, $g_d > 1$ region, which we did not attempt in our scan.

The gray points were generated without requiring the lightest supersymmetric particle (LSP) to explain the Galactic Center excess or to satisfy constraints on its relic abundance. The green, blue, and orange points correspond to $m_A$ = 500, 550, 600 GeV and feature a bino-like LSP with a relic abundance compatible with WMAP and Planck measurements (including a 2σ theoretical uncertainty) [23],

$$0.091 \le \Omega h^2 \le 0.138, \qquad (5.6)$$

and compatible with both the Galactic Center excess and the dwarf constraints for self-conjugate dark matter. Points satisfying these constraints typically have small, but non-negligible, p-wave-suppressed contributions at freeze-out, such as those involving the $Z$ (but still consistent with limits on the invisible $Z$ width). This slightly reduces the relic abundance relative to the value suggested by $\chi\chi\to a\to b\bar b$ annihilation alone and allows these points to circumvent the dSph limits. Note that we did not attempt to minimize the $haa$ coupling, and so no points were found with $2m_a < m_h$ and $g_d > 1$. It might, however, be possible to reach this parametric regime [23], as suggested in Ref. [16], whose results we show along with ours in Fig. 8 by the red band. Those values were taken from Fig. 6 of Ref. [16] for $m_\chi = 35$ GeV, while our scan was performed assuming $m_\chi \approx M_1 = 45$ GeV.

Table 2. Example NMSSM parameter-space point capable of explaining the GC excess and consistent with the Fermi dwarf spheroidal limits. All dimensionful parameters are in GeV unless otherwise stated. The remaining parameters are set to the values shown in Eq. 5.5. This point would likely be probed by the searches we propose at the 13 TeV LHC with 100 fb$^{-1}$ of integrated luminosity.

Table 2 provides the detailed spectrum for an example parameter-space point consistent with the GC excess that would be probed by $a\to\tau^+\tau^-,\ \mu^+\mu^-$ at the 13 TeV LHC. This point is marked by the black star in Fig. 8. Note also that our scan did not find points with $g_d > 18$: larger values of $g_d$ are typically excluded by LHC limits on the heavy MSSM-like pseudoscalar for the values of $\tan\beta$ sampled.
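Schematically, the colored points in Fig. 8 are selected by a filter of the following form applied to each scan point. The field names and per-point rate bounds are hypothetical placeholders, since the real analysis uses the NMSSMTools/micrOmegas outputs together with the likelihoods of Refs. [12] (GC excess) and [48] (dSph limits):

```python
def passes_selection(point):
    """Schematic selection for the colored points in Fig. 8."""
    ok_relic = 0.091 <= point["omega_h2"] <= 0.138       # Eq. (5.6)
    ok_gc    = point["sv_today"] >= point["sv_gc_min"]   # bright enough
    ok_dsph  = point["sv_today"] <= point["sv_dsph_max"] # dwarf limit
    return ok_relic and ok_gc and ok_dsph
```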
In theories that do not rely on mixing with the SM-like Higgs, the constraints from heavy-Higgs searches, as well as those from $h\to aa$ decays, are often significantly relaxed or absent. The contours in Fig. 8 show the sensitivity of our proposed searches to the NMSSM parameter space consistent with the Galactic Center excess at both 1 fb$^{-1}$ and 100 fb$^{-1}$. A significant portion of the favored region with sizable $g_d$ would be probed by the 13 TeV LHC at these luminosities, and even more reach would be expected at 14 TeV. Our searches are complementary to $h\to aa$ observations as well as to existing LHC searches for MSSM Higgs bosons, and would access regions of the parameter space not currently probed by other experiments, providing a potential window into a dark sector that is otherwise difficult to access.

Summary and Conclusions

Many dark matter models feature WIMPs that can be very difficult to observe at colliders. Scenarios of this type can be consistent with the Galactic Center excess observed by the Fermi Large Area Telescope. Exploring these "coy dark sectors" at the LHC suggests a shift away from missing-transverse-energy signals and towards direct signatures of the particle(s) mediating the interaction of the dark matter with the Standard Model. Models involving pseudoscalar mediators and consistent with the GC excess can be of the coy variety. A good fit to the Fermi signal can be provided by relatively light WIMPs annihilating through a pseudoscalar into $b$ quarks. In many realistic scenarios this implies substantial couplings of the mediator to down-type Standard Model fermions. The signal favors WIMP masses in excess of $\sim 35$ GeV, while current collider bounds often imply pseudoscalar masses below 90 GeV (provided they satisfy the constraints from LEP). An interesting and currently untested explanation of the GC signal thus involves a pseudoscalar with mass below about 90 GeV, sizable couplings to down-type fermions, and a small branching fraction into WIMPs. The latter is generically small in this scenario because the on-shell decay of the mediator into dark matter is often kinematically forbidden and because the pseudoscalar's coupling to WIMPs is relatively small. Our study has attempted to extend LHC coverage to this scenario by taking advantage of the mediator's enhanced couplings to Standard Model fermions (relative to those of a SM-like Higgs boson of the same mass) and studying the production and decays of the pseudoscalar in down-type final states. To this end, we explored signals that include one or two $b$-jets and either a $\tau$ or $\mu$ lepton pair in the final state. We employed a simplified model in which the couplings of the pseudoscalar to Standard Model fermions are proportional to their masses, modulo common scaling factors for down- and up-type fermions. While this need not be the case, this situation is common in UV completions involving Type II two-Higgs-doublet models, as in supersymmetry. Our results can be applied to models with different coupling structures by a straightforward re-scaling of the production cross-section and branching ratios. Due to the rather low pseudoscalar masses we consider, the trigger is an important factor in the search reach. We therefore analyzed the trigger response of the signal and explored cuts that were effective in improving the signal significance.
Our search strategy comprises a signal-excess analysis in the $1e1\mu + 1\!-\!2b$ and $1\ell1\tau + 1\!-\!2b$ modes, including low-luminosity (soft cuts) and high-luminosity (hard cuts) scenarios, and a dilepton resonance search in the $\mu^+\mu^- + 1\!-\!2b$ mode. Since signal-excess searches suffer from large systematics when compared against simulated rather than data-driven backgrounds, we also analyzed the impact of systematic uncertainties on the LHC reach in all three signal modes. In the most optimistic scenarios, we find that the LHC should be able to explore values of the reduced pseudoscalar coupling to down-type fermions as low as $g_d \sim 8$ with 100 fb$^{-1}$ of integrated luminosity at $\sqrt s = 13$ TeV. Even in more pessimistic scenarios with higher systematics, the LHC should be able to explore down to $g_d \sim 10$ for some values of $m_a$. This reach, however, depends strongly on the trigger settings, and so we strongly recommend that the experimental collaborations account for this type of signal when finalizing their trigger thresholds for leptons, particularly the muon triggers. The parameter space in the NMSSM not covered by $h\to aa$ searches, with $m_a \sim 60-80$ GeV, should be explorable to some extent, and further optimization of the search strategy could focus on this narrow mass region. More generally, the searches we propose are highly complementary to those already existing at the LHC and elsewhere, highlighting their importance for fully covering the parameter space in question. In summary, light pseudoscalars with significant couplings to Standard Model fermions are well-motivated mediators of dark matter annihilation and arise in many models, including those explaining the Fermi Galactic Center excess. In many cases, these new particles would have evaded previous searches but should be testable at the LHC. Significant regions of the parameter space can be explored even at low luminosity, and so this signal presents a possibility for ongoing examination throughout the full LHC program.

B Cut Flow Matrices

When examining the potential for enhancing the visibility of the signal through cuts, we considered a variety of kinematic-variable distributions, some of which are shown in Appendix A. Of those examined, we retained only those cuts for which the shape of the backgrounds was distinctly different from the shape of the signal in at least one of the signal regions, so that the cuts had a larger fractional effect on the backgrounds than on the signal. The variables that most effectively improved the signal significance were $\slashed E_T$, the $p_T$ of the leading lepton ($p_T^{\ell 1}$), the dilepton mass ($m_{\ell\ell}$), the total scalar sum of visible momenta ($H_T$), the transverse mass of the subleading lepton ($m_T^{2nd}$), and the scalar sum of the lepton $p_T$'s and $\slashed E_T$ ($\ell H_T$). Variables with $\slashed E_T$ components were most effective at eliminating backgrounds containing $W$-boson decays, including $m_T^{2nd}$, for which backgrounds containing intermediate $W$'s have a longer tail in the distribution. Of note, we found that the transverse-mass distribution based on the leading-lepton $p_T$ had a longer tail for the signal, and so was not quite as effective. Since $\slashed E_T$, for example, is a component of multiple cuts, we examine the correlations between the events passing each pair of cuts in the cut-flow matrices of Tables 3 through 7.
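The cut-flow matrices of Tables 3-7, described next, can be assembled directly from per-event boolean cut masks. A minimal numpy sketch (the construction of the masks from actual event records is omitted):

```python
import numpy as np

def cut_flow_matrix(cut_masks):
    """Build a cut-flow acceptance matrix from boolean per-event masks
    (one mask per named cut). Diagonal entries are single-cut acceptances;
    entry (r, c) with r != c is the acceptance of cut r among events that
    already pass cut c."""
    names = list(cut_masks)
    masks = {n: np.asarray(m, dtype=bool) for n, m in cut_masks.items()}
    n = len(names)
    A = np.zeros((n, n))
    for ci, c in enumerate(names):
        for ri, r in enumerate(names):
            if ri == ci:
                A[ri, ci] = masks[r].mean()
            else:
                base = masks[c]
                A[ri, ci] = masks[r][base].mean() if base.any() else 0.0
    return names, A
```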
Diagonal entries are the acceptance rates for the single cut labeled in both the column and row headers, where red text indicates background acceptance rates and black text indicates signal acceptance rates. Each off-diagonal entry represents the acceptance rate $A$ of the cut labeled by the row header ($r$) applied to the events remaining after performing the cut in the column header ($c$), such that each entry is given by

$$A_{(r|c)} = \frac{\sigma_{\text{pass } r \text{ and } c}}{\sigma_{\text{pass } c}}.$$

For example, the upper-right-most entry of Table 3 shows an 87.4% acceptance rate for background events and 84.3% for signal events when applying the $\slashed E_T$ cut to the pool of events that already passed the $\ell H_T$ cut. The lower-left-most entry shows that 17.2% of background events and 87.7% of signal events pass the $\ell H_T$ cut after applying the $\slashed E_T$ cut. Since the $\slashed E_T$ cut removes a similar number of signal and background events once the $\ell H_T$ cut has been applied, the $\slashed E_T$ cut is superfluous after the $\ell H_T$ cut and thus should not be included in the final set of cuts. In fact, the $\slashed E_T$ distributions of signal and backgrounds have similar shapes once the $\ell H_T$ cut has been applied, so no $\slashed E_T$ cut value would be effective.

Table 4. Cut-flow matrix for SR1: $1e1\mu + 1\!-\!2b + 0j$ signal with soft cuts. The cuts are: $\slashed E_T < 50$ GeV, $p_T^{\ell 1} < 40$ GeV, $12 < m_{\ell\ell} < 45$ GeV, $H_T < 140$ GeV, $m_T^{2nd} < 40$ GeV, $\ell H_T < 120$ GeV. The red/top entry in each cell is the acceptance rate for all backgrounds combined, while the black/bottom entry shows the acceptance rate for the signal. The total cross-sections for the $1e1\mu + 1\!-\!2b + 0j$ signal after applying the trigger cuts are $\sigma_{bkg} = 2187$ fb and $\sigma_{sig} = 60.4$ fb ($m_a = 60$ GeV, $g_d = 25$).

Table 5. Cut-flow matrix for SR2: $1\ell1\tau + 1\!-\!2b + 0j$ signal with hard cuts. The cuts are: $\slashed E_T < 30$ GeV, $p_T^{\ell 1} < 40$ GeV, $12 < m_{\ell\ell} < 45$ GeV, $H_T < 130$ GeV, $m_T^{2nd} < 25$ GeV, $\ell H_T < 100$ GeV. The red/top entry in each cell is the acceptance rate for all backgrounds combined, while the black/bottom entry shows the acceptance rate for the signal. The total cross-sections for the $1\ell1\tau + 1\!-\!2b + 0j$ signal after applying the trigger cuts are $\sigma_{bkg} = 742$ fb and $\sigma_{sig} = 84.3$ fb ($m_a = 60$ GeV, $g_d = 25$).

Table 6. Cut-flow matrix for SR2: $1\ell1\tau + 1\!-\!2b + 0j$ signal with soft cuts. The cuts are: $\slashed E_T < 55$ GeV, $p_T^{\ell 1} < 55$ GeV, $12 < m_{\ell\ell} < 60$ GeV, $H_T < 190$ GeV, $m_T^{2nd} < 45$ GeV, $\ell H_T < 140$ GeV. The red/top entry in each cell is the acceptance rate for all backgrounds combined, while the black/bottom entry shows the acceptance rate for the signal. The total cross-sections for the $1\ell1\tau + 1\!-\!2b + 0j$ signal after applying the trigger cuts are $\sigma_{bkg} = 742$ fb and $\sigma_{sig} = 84.3$ fb ($m_a = 60$ GeV, $g_d = 25$).

Table 7. Cut-flow matrix for SR3: $2\mu + 1\!-\!2b + 0j$ signal. The cuts are: $\slashed E_T < 60$ GeV, $p_T^{\ell 1} < 50$ GeV, $H_T < 120$ GeV, $m_T^{2nd} < 45$ GeV, $\ell H_T < 120$ GeV. The red/top entry in each cell is the acceptance rate for all backgrounds combined, while the black/bottom entry shows the acceptance rate for the signal. The total cross-sections for the $2\mu + 1\!-\!2b + 0j$ signal after applying the trigger cuts are $\sigma_{bkg} = 7249$ fb and $\sigma_{sig} = 108$ fb ($m_a = 60$ GeV, $g_d = 25$).

C Variation of Exclusions

As discussed in Section 3, calculations of signal events in the 5FS depend quite strongly on the factorization and renormalization scales used. In MadGraph5, we employed a dynamic scale scheme whose overall scaling factor we varied between 0.5 and 2.0 (see Eq. 3.1).
This factor had the largest effect on the low-mass pseudoscalar calculations, with a factor of 0.5 reducing the total cross-section by approximately 22% for $m_a = 20$ GeV, while reducing it by only about 4% at $m_a = 80$ GeV. Alternatively, the authors of Ref. [103] use a fixed renormalization- and factorization-scale scheme based on the sum of the masses of the pseudoscalar and the on-shell $b$ quarks; varying that scale by a factor between 0.5 and 2.0 changes the cross-section by as much as 50%. In addition, our background calculations were performed at leading order. Higher-order effects, as well as possible unaccounted-for experimental issues, may result in larger backgrounds than we predict. To address both of these concerns, we explore much more conservative contours obtained by repeating the same calculations with a factor of 2.0 larger backgrounds and, separately, with a factor of 0.5 smaller signal. Figures 18, 19 and 20 give these results. Of note, many regions of parameter space remain explorable at the LHC with 100 fb$^{-1}$ of integrated luminosity even in the more pessimistic scenarios.

Figures 18-20. Exclusion contours with the conservative factors applied to the signal (dotted) and to the backgrounds (dashed); solid lines show the original bounds without any factor applied. Contours correspond to constant values of $\log(L\times\mathrm{fb})$ needed to achieve a significance of $k = 3$. The black lines mark the boundary of the systematics-dominated region, the red lines the discovery potential at $L = 10$ fb$^{-1}$, and the yellow lines the discovery potential at $L = 1$ fb$^{-1}$.
28 GHz over-the-air measurement using an OTFS multi-user distributed MIMO under Doppler effect

Abstract

This paper describes an experimental investigation of orthogonal time-frequency space (OTFS) modulation using a 28 GHz multi-user distributed multiple-input multiple-output (D-MIMO) testbed in over-the-air (OTA) and mobility environments, aimed at enhancing cell throughput in a millimeter-wave band. We built a D-MIMO testbed with a newly developed OTFS modulator and demodulator, and measured OTFS signals and orthogonal frequency-division multiplexing (OFDM) signals with up to four simultaneous user connections on an actual office floor. Additionally, the Doppler effect on OTFS signals is analyzed mathematically and confirmed in the measurements. OTFS shows higher robustness in time-variant channels than OFDM. The error vector magnitude (EVM) and system throughput of OTFS are -22 dB and 1.9 Gbps with 100 MHz signal bandwidth, respectively. To our knowledge, this is the first paper describing OTA measurements of EVM, throughput, and spectral efficiency using OTFS modulation on a 28 GHz coherent beamforming system.

Introduction

The enhancement of cell throughput is a key requirement in recent mobile communications, such as 5G and beyond. Multiple-input multiple-output (MIMO) techniques, the use of millimeter-wave (mmWave) and sub-terahertz (sub-THz) bands, and new modulation schemes are important for enhancing cell throughput significantly. Spatial-division multiplexing (SDM) is a MIMO technique that can multiplex many layers. The benefit of using the mmWave and sub-THz bands is the wide available frequency range, whereas their difficulty is that propagation channels vary more rapidly than in sub-6 GHz bands. The modulation technique of 4G and 5G is orthogonal frequency-division multiplexing (OFDM). Although OFDM has high spectral efficiency and good robustness against multi-path fading, inter-carrier interference due to the Doppler effect of time-varying channels degrades OFDM performance in mobility environments; the Doppler effect in the mmWave and sub-THz bands is, moreover, larger than in sub-6 GHz. To suppress this degradation, an OFDM system must allocate reference signals (RS) more frequently and recalculate the SDM weight matrices for each OFDM symbol. However, increasing RS allocations decreases spectral efficiency, and the computational complexity of the weight calculation increases drastically. Orthogonal time-frequency space (OTFS) modulation has been proposed to tackle time-varying channels [1]. OFDM multiplexes information symbols in the time-frequency (TF) domain, whereas OTFS multiplexes them in the delay-Doppler (DD) domain. Because the OTFS modulation spreads each element of the DD grid over the entire TF domain, all OTFS elements experience the same, nearly constant propagation channel. Using simulation, previous works report that OTFS achieves a higher bit rate than OFDM in high-mobility environments [1-3]. Additionally, in the sub-THz band, direction-based beamforming will be used for SDM to enhance cell throughput, because propagation channels are too sensitive for null-steering methods such as zero-forcing (ZF) and minimum mean square error. Direction-based beamforming utilizes user equipment (UE) positions and velocities to predict near-future UE directions. OTFS distinguishes UEs in the DD domain, and can thus potentially estimate the UE positions and velocities.
We investigate the robustness of OTFS modulation in a Doppler environment with over-the-air (OTA) experiments using a 28 GHz multi-user distributed MIMO (D-MIMO) testbed. D-MIMO is a technique that maximizes SDM performance by using a geometrically distributed set of antennas [4,5]. This paper describes practical OTFS channel estimation and channel quality with simultaneous multi-user connections in mobility environments. An earlier version of this paper was presented at the 2021 51st European Microwave Conference and was published in its proceedings [6].

A distributed unit (DU) for UE, called UE-DU, generates the time-domain digital signal $s_i(t)$ from the DD-domain OTFS signal with the OFDM modulation. The numerology of the OFDM modulation is based on the specifications of the 3GPP TS 36.211 format [7], except for the subcarrier spacing and signal bandwidth, which are 60 kHz and 80 MHz, respectively. A radio unit (RU) for UE, called UE-RU, converts the time-domain digital signal $s_i(t)$ to an analog signal and then radiates it from the UE antennas.

OTFS modulation

Subframe 0 contains only RSs used to estimate a channel impulse response (CIR), called CIR-RS. CIR-RS uses the root Zadoff-Chu sequence with a length of 19, as defined in 3GPP TS 36.211 [7], and is allocated to the DD-domain grid as shown in Fig. 2. Table 1 shows the delay indexes $l_{c,u}$ and Doppler indexes $k_{c,u}$ of the CIR-RS center locations of the four UEs, UE0-UE3. The other grid points in subframe 0 are blank. Subframes 1-9 carry quadrature phase shift keying (QPSK) TX information and phase compensation reference signals (PCRSs). The PCRS of the $u$-th UE is a QPSK sequence allocated to $x^{DD}_{i(l_p,0)}$, where $l_p = 48v + u$, $v = 0, 1, \ldots, 24$, and $u = 0, 1, 2, 3$, as shown in Fig. 2. The PCRS and CIR-RS locations differ between UEs to prevent contamination of the RSs between UEs. The amplitude of the CIR-RS is 17 dB larger than those of the TX information sequence and PCRS, to equalize the peak power of the time-domain signal $s_i(t)$ between subframe 0 and the other subframes, as shown in Fig. 3.

OTFS demodulation

A DU for the access point (AP), called AP-DU, generates the TF-domain OTFS signal $y^{TF}_{i(m,n)} \in \mathbb{C}^{D\times1}$, where $D = 8$ is the number of distributed antennas (DAs), from the time-domain signal $r_i(t) \in \mathbb{C}^{D\times1}$ received by the RU for the AP (AP-RU) with the OFDM demodulation. The propagation channels are estimated using subframe 0, which carries the CIR-RS. First, the channel estimator in the OTFS postprocessor on a PC converts the TF-domain subframe $y^{TF}_{0(m,n)}$ to the DD-domain subframe $y^{DD}_{0(l,k)} \in \mathbb{C}^{D\times1}$ with the SFFT, and then extracts the CIR-RS from $y^{DD}_{0(l,k)}$. The extracted signal $g^{DD}_{u(l,k)} \in \mathbb{C}^{D\times1}$ of the $u$-th UE is obtained by windowing $y^{DD}_{0(l,k)}$ over the analysis ranges around the CIR-RS center location (Eq. (1)), where $l_{r,u}$ and $k_{r,u}$ are the CIR-RS analysis ranges of the delay domain and the Doppler domain, respectively, as shown in Table 1. The CIR-RS analysis range is larger than the CIR-RS allocation range because the CIR-RS is spread by delay and Doppler effects. When the Doppler indexes of the CIR-RS analysis range extend beyond subframe 0, the indexes are folded back into subframe 0, as shown in Fig. 2. The channel estimator converts the extracted signal $g^{DD}_{u(l,k)}$ to the TF-domain signal $g^{TF}_{u(m,n)} \in \mathbb{C}^{D\times1}$ with the inverse SFFT and calculates the propagation channel of the $u$-th UE as

$$h_{u(m,n)} = g^{TF}_{u(m,n)} / x^{TF}_{0,u(m,n)}, \qquad (2)$$

where $x^{TF}_{0,u(m,n)}$ is the TF-domain TX signal of the $u$-th UE in $x^{TF}_{0(m,n)}$. The channel estimator obtains the channel matrix $H_{(m,n)} \in \mathbb{C}^{D\times U}$ by stacking the per-UE channels $h_{u(m,n)}$. The first equalizer (EQ1) performs equalization in the TF domain, producing the equalized OTFS signal

$$z^{EQ1}_{i(m,n)} = W_{(m,n)}\, y^{TF}_{i(m,n)}, \qquad (3)$$

where $W_{(m,n)} \in \mathbb{C}^{U\times D}$ is the equalization weight calculated from the channel matrix $H_{(m,n)}$ by ZF.
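The SFFT/inverse-SFFT pair used above is simply a DFT along the delay axis combined with an inverse DFT along the Doppler axis. A minimal numpy sketch, assuming the grid is stored as an M x N array with delay on axis 0 and Doppler on axis 1 (axis conventions and normalizations vary between implementations):

```python
import numpy as np

def isfft(x_dd):
    """Inverse SFFT: DD grid (M delay x N Doppler) -> TF grid.
    DFT along the delay axis (0), inverse DFT along the Doppler axis (1)."""
    M, N = x_dd.shape
    return np.fft.fft(np.fft.ifft(x_dd, axis=1), axis=0) * np.sqrt(N / M)

def sfft(x_tf):
    """SFFT: TF grid -> DD grid; the exact inverse of isfft above."""
    M, N = x_tf.shape
    return np.fft.ifft(np.fft.fft(x_tf, axis=1), axis=0) * np.sqrt(M / N)

# Round-trip sanity check on a random grid:
x = np.random.randn(8, 4) + 1j * np.random.randn(8, 4)
assert np.allclose(sfft(isfft(x)), x)
```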
Subsequently, the OTFS postprocessor converts the equalized signal $z^{EQ1}_{i(m,n)}$ to the DD-domain signal $z^{SFFT}_{i(l,k)}$ with SFFT. The second equalizer (EQ2) corrects the DD-domain signal as

$$z^{EQ2}_{i(l,k)} = z^{SFFT}_{i(l,k)} \odot c_{i(l,k)}, \qquad (4)$$

where the correction parameters at the PCRS delay indexes are obtained from the known PCRS as

$$c_{i(l_p,k)} = x^{DD}_{i(l_p,0)} \oslash z^{SFFT}_{i(l_p,0)}. \qquad (5)$$

The symbols $\odot$ and $\oslash$ denote the Hadamard product and division, respectively. The correction parameters $c_{i(l \neq l_p,k)}$ at the delay indexes without PCRSs are linearly interpolated from the correction parameters $c_{i(l = l_p,k)}$ at the delay indexes having PCRSs.

Mathematical analysis of OTFS Doppler effect

This section analyzes OTFS signals under the Doppler effect and mathematically confirms that our demodulator corrects the effect. In this section, for simplicity, the number of UEs is 1, the number of DAs is 1, and the modulation in the UE-DU omits the cyclic prefix. The OTFS pre-processor calculates the TF-domain signal with inverse SFFT as

$$x^{TF}_{i(m,n)} = \frac{1}{\sqrt{MN}} \sum_{l=0}^{M-1} \sum_{k=0}^{N-1} x^{DD}_{i(l,k)} \, e^{j2\pi\left(\frac{nk}{N} - \frac{ml}{M}\right)}. \qquad (6)$$

The UE-DU pads the TF-domain signal with zeros on the basis of the 3GPP specification [7]. Subsequently, the zero-padded TF-domain signal, $x'^{TF}_{i(m',q)}$, is converted to the parallel time-domain signal with the inverse discrete Fourier transform (DFT) as

$$s_{i(p,q)} = \frac{1}{\sqrt{M_F}} \sum_{m'=0}^{M_F-1} x'^{TF}_{i(m',q)} \, e^{j2\pi \frac{m'p}{M_F}}, \qquad (7)$$

where $p = 0, 1, \ldots, M_F - 1$ is a time sample number, $M_F$ is 2048, $q = 0, 1, \ldots, N - 1$ is an OTFS symbol number, and $m' = 0, 1, \ldots, M_F - 1$ is a zero-padded frequency index. In (7), the DFT over $l$ in (6) and the inverse DFT over $m'$ are approximated as orthogonal except when $l = Mp/M_F$. The time-domain signal $s_i(t)$ is generated by parallel-to-serial conversion of $s_{i(p,q)}$. In subframe 0, CIR-RS is simplified to a pulse-shaped signal, and the transmitted DD-domain signal is expressed as

$$x^{DD}_{0(l,k)} = \begin{cases} 1 & (l = 0 \text{ and } k = 0) \\ 0 & (\text{otherwise}). \end{cases} \qquad (8)$$

Substituting (8) into (6) and (7) gives the TF-domain signal

$$x^{TF}_{0(m,n)} = \frac{1}{\sqrt{MN}}. \qquad (9)$$

The received (RX) signal radiated from a moving UE is affected by the Doppler effect, and its carrier frequency is shifted. The RX signal after serial-to-parallel conversion is expressed as

$$r_{i(p,q)} = h \, s_{i(p,q)} \, e^{j2\pi f_o t_s (M_F N i + M_F q + p)}, \qquad (11)$$

where $h$ is a propagation coefficient, $f_o$ is the frequency shift caused by the Doppler effect, and $t_s$ is the sampling interval. The term $t_s(M_F N i + M_F q + p)$ in (11) is the received time measured from the beginning of the frame. This analysis assumes a constant propagation coefficient and omits additive white Gaussian noise (AWGN). The AP-DU converts the RX signal with the DFT as

$$y'^{TF}_{i(m',q)} = \frac{1}{\sqrt{M_F}} \sum_{p=0}^{M_F-1} r_{i(p,q)} \, e^{-j2\pi \frac{m'p}{M_F}}. \qquad (12)$$

Subsequently, the AP-DU discards the zeros padded in the UE-DU and generates the TF-domain signal $y^{TF}_{i(m,n)}$. The TF-domain signal in subframe 0 is expressed as

$$y^{TF}_{0(m,n)} \approx h \, x^{TF}_{0(m,n)} \, e^{j2\pi f_o t_s M_F n}. \qquad (13)$$

The channel estimator estimates the propagation channel using $y^{TF}_{0(m,n)}$. Although the channel estimator extracts CIR-RS from $y^{TF}_{0(m,n)}$ in the DD domain, the extracted signal $g^{TF}_{u(m,n)}$ is equal to $y^{TF}_{0(m,n)}$ because $U = 1$ and AWGN is omitted in this analysis. Thus, substituting (9) and (13) into (2), the estimated propagation channel is given as

$$\hat{h}_{(m,n)} = h \, e^{j2\pi f_o t_s M_F n}. \qquad (14)$$

The estimated propagation channel includes the phase rotation $e^{j2\pi f_o t_s M_F n}$, depending on the symbol number $n$, which is caused by the Doppler shift. EQ1 calculates the equalization weight from the estimated propagation channel with ZF and equalizes the OTFS signal as

$$z^{EQ1}_{i(m,n)} = \hat{h}^{-1}_{(m,n)} \, y^{TF}_{i(m,n)}. \qquad (15)$$

The TF-domain signal after EQ1 is converted to the DD domain with SFFT as

$$z^{SFFT}_{i(l,k)} \approx x^{DD}_{i(l,k)} \, e^{j2\pi f_o t_s M_F N i} \, e^{j2\pi f_o t_s M_F l / M}. \qquad (16)$$

The phase rotation in (16) does not depend on the symbol number $k$ because of (15) in EQ1, whereas it depends on the subframe $i$ and the delay index $l$. EQ2 substitutes (16) into (5) and obtains the correction parameter as

$$c_{i(l,k)} = e^{-j2\pi f_o t_s M_F N i} \, e^{-j2\pi f_o t_s M_F l / M}. \qquad (17)$$

Subsequently, EQ2 corrects the phase rotation by multiplying by the correction parameter as shown in (4). Thus, our demodulator corrects the Doppler effect and obtains the TX signal.
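The EQ2 idea, estimating a common phase rotation from known pilots and derotating the data, can be reproduced in a few lines. In the sketch below, the pilot spacing of 48 echoes the PCRS allocation described above, but the sequence length and the 14° rotation are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 1200                                  # symbols in one DD row (illustrative)
pilot_idx = np.arange(0, L, 48)           # known pilots every 48 positions (PCRS-like)

tx = np.exp(1j * (rng.integers(0, 4, L) * np.pi / 2 + np.pi / 4))  # unit-modulus QPSK
phase = np.deg2rad(14.0)                  # residual rotation left after EQ1 (illustrative)
rx = tx * np.exp(1j * phase)

# EQ2-style correction parameter: conjugate of the mean pilot rotation.
c = np.conj(np.mean(rx[pilot_idx] * np.conj(tx[pilot_idx])))
c /= np.abs(c)                            # keep unit modulus, correct phase only
corrected = rx * c
print("residual error:", np.max(np.abs(corrected - tx)))  # ~1e-16: rotation removed
```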
Figure 4 shows the block diagram of the 28 GHz base station AP-RU, which consists of a mixed signal processing (MSP) unit and eight DAs. The MSP has a Xilinx ZU29DR field-programmable gate array that integrates eight analog-to-digital converters (ADCs) and eight digital-to-analog converters (DACs). The DACs generate TX intermediate frequency (IF) signals, and the ADCs receive RX IF signals directly. The MSP and DAs have six newly designed signal multiplexers, which multiplex a TX IF signal, an RX IF signal, a 3.3 GHz local oscillator (LO) signal, a time-division duplex control signal, a radio frequency (RF) integrated circuit (IC) control signal, and 24 V direct-current power. Thus, the MSP can connect to each DA using a single coaxial cable of up to 20 m in length. The frequency of the TX and RX IF signals is 1.5 GHz to decrease cable losses. Each DA converts between the IF signals and 28.25 GHz RF signals by mixing with an LO signal produced by multiplying the original 3.3 GHz signal by eight. Each DA has an eight-element waveguide array antenna with vertical polarization, aligned vertically with element spacing of half a wavelength at 28 GHz. The eight antenna elements connect to an eight-channel bidirectional transceiver IC, based on 65 nm CMOS, integrating gain and phase shifters [8]. The measured effective isotropic radiated power of a DA is 22 dBm. [...] are used for two-user multiplexing.

Our measurements have three parts, as follows. In part-0, UE0 is fixed at its initial location, and its carrier frequency is shifted by 4 kHz, which corresponds to the Doppler shift of a UE moving at 42 m/s, by changing the LO frequency, in order to confirm the Doppler effect of a high-speed UE. In part-1, UE0 moves to the left, as shown in Fig. 5, at walking speed, whereas the other UEs are fixed at their initial locations. Because the CIR is spread in the Doppler domain in mobility environments, the CIR-RS analysis range of the Doppler domain for UE0, $k_{r,u=0}$, is three, as shown in Table 1. The CIR-RS analysis ranges $k_{r,u}$ for the other UEs are zero, to decrease the AWGN contaminating the channel estimation. In part-2, all UEs are fixed at their initial locations, and their CIR-RS analysis ranges $k_{r,u}$ are zero.

Although the constellation before EQ2 rotates with the frequency offset as shown in (16), the constellation after EQ2 concentrates at the original QPSK positions. Figure 7 shows the UE0 OTFS constellations in the part-1 measurements. The constellation before EQ2, shown in Fig. 7(a), is rotated at $\omega = 14°$ per subframe by the Doppler effect, which is expressed as $\exp(j2\pi f_o t_s M_F N i)$ in (16). Thus, the frequency shift is estimated as $\omega/(2\pi t_s M_F N) = 150$ Hz, where $M_F N$ is 30,720 when the cyclic prefix is taken into account, which indicates a UE0 moving speed of 5.8 km/h in the direction of arrival (DOA). In a low-mobility environment, the UE moving speed in the DOA can therefore be estimated from the phase variation of the EQ2 correction parameter per subframe. EQ2 corrects the Doppler effect by using PCRS and (4), and obtains the RX DD-domain signal shown in Fig. 7(b). Figure 8 shows the measured error vector magnitudes (EVMs), calculated from the demodulated signal $z^{EQ2}_{i(l,k)}$ on the PC based on 3GPP TS 38.141 [9], as a function of the number of simultaneously connected UEs. The AP D-MIMO can demodulate the multi-user OTFS signals, which are emitted in the same frequency band at the same time, with ZF in actual OTA and Doppler environments.
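The 150 Hz estimate quoted above can be checked from the stated numbers. The sketch below assumes a sampling rate of $M_F \times 60\,\mathrm{kHz} = 122.88$ MHz, which is our inference from the numerology rather than a figure given in the paper.

```python
import math

omega_deg = 14.0            # measured rotation per subframe (degrees)
M_F_N = 30720               # samples per subframe including cyclic prefix
t_s = 1 / (2048 * 60e3)     # assumed sampling interval: M_F = 2048, 60 kHz spacing

f_o = math.radians(omega_deg) / (2 * math.pi * t_s * M_F_N)
print(f"frequency shift: {f_o:.1f} Hz")   # ~155.6 Hz, consistent with the ~150 Hz quoted

v = f_o * 3e8 / 28.25e9                   # corresponding speed at the 28.25 GHz carrier
print(f"UE speed: {v * 3.6:.1f} km/h")    # ~5.9 km/h, close to the quoted 5.8 km/h
```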
In the part-2 measurements, with all UEs fixed, the OTFS EVMs are about the same as the OFDM EVMs regardless of the number of connected UEs. In contrast, the EVMs of the moving OTFS UE0 in the part-1 measurements are several dB lower than those of the moving OFDM UE0, as shown by the solid black circles in Fig. 8. The solid gray circles in Fig. 8(b) show the EVMs of the moving OFDM UE0 using phase tracking reference signals (PTRSs). PTRS is based on the specifications in 3GPP TS 38.211 [10] and is allocated every 48 subcarriers in the 4th-14th OFDM symbols of each subframe. The OFDM EVMs of UE1-UE3 with PTRS are about the same as those without PTRS because these UEs are fixed. Although the moving OFDM UE0 with PTRS has a lower EVM than the moving OTFS UE0, the computational complexity of equalization increases when PTRS is used. The computational complexity of obtaining the equalization weight $W_{(m,n)}$ of one element is $O(U^3 + DU^2)$, and thus the computational complexity for all elements in a subframe is $O(MN(U^3 + DU^2))$. The OTFS demodulator calculates the equalization weights of all elements in a subframe once every 10 subframes, whereas the OFDM demodulator using PTRS calculates the weights every subframe. Thus, the computational complexity of obtaining the equalization weights of OFDM using PTRS is 10 times greater than that of OTFS.

Table 2 shows a comparison with previously reported OTFS and OFDM systems without PTRS. The system throughput (STP) and spectral efficiency of this work are estimated from the UE0 EVMs in the part-1 measurements by using the MATLAB 5G Toolbox. The STP is the sum of the four users' throughputs with 100 MHz bandwidth signals. Although the improvement in spectral efficiency from OFDM to OTFS is not large because of the low UE moving speed (walking pace), the experimentally estimated spectral efficiencies of this work are consistent with those of the previous work [11] calculated by simulation.

Conclusion

This paper presented an experimental investigation of OTFS performance using the 28 GHz D-MIMO testbed in OTA and mobility environments. We built the D-MIMO testbed, including the OTFS modulator and demodulator, and measured EVM with up to four simultaneous user connections on an actual office floor. To our knowledge, this is the first experimental verification of EVM, throughput, and spectral efficiency using OTFS modulation in 28 GHz OTA environments. The newly developed OTFS demodulator corrected both large and small frequency offsets and showed that the Doppler effect on OTFS signals corresponds to the mathematical analysis. OTFS has a lower EVM and higher spectral efficiency than OFDM without PTRS for a moving UE and has higher robustness in time-variant channels. Additionally, the OTFS demodulator estimated the frequency offset, which offers the possibility of estimating the UE velocity. These findings suggest that OTFS is one of the key technologies for realizing 5G and later mobile communication systems in high-mobility environments and high-frequency ranges such as mmWave and sub-THz.

Supplementary material. The supplementary material for this article can be found at https://doi.org/10.1017/S1759078722001209.
Universal in vivo Textural Model for Human Skin based on Optical Coherence Tomograms

Abstract

Currently, diagnosis of skin diseases is based primarily on the visual pattern recognition skills and expertise of the physician observing the lesion. Even though dermatologists are trained to recognize patterns of morphology, it is still a subjective visual assessment. Tools for automated pattern recognition can provide objective information to support clinical decision-making. Noninvasive skin imaging techniques provide complementary information to the clinician. In recent years, optical coherence tomography (OCT) has become a powerful skin imaging technique. According to specific functional needs, skin architecture varies across different parts of the body, as do the textural characteristics in OCT images. There is, therefore, a critical need to systematically analyze OCT images from different body sites, to identify their significant qualitative and quantitative differences. Sixty-three optical and textural features extracted from OCT images of healthy and diseased skin are analyzed and, in conjunction with decision-theoretic approaches, used to create computational models of the diseases. We demonstrate that these models provide objective information to the clinician to assist in the diagnosis of abnormalities of cutaneous microstructure and, hence, aid in the determination of treatment. Specifically, we demonstrate the performance of this methodology on differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) from healthy tissue.

Human skin consists of the epidermis, the dermis, and subcutaneous fat [20]. The epidermis is four to five layers of stratified epithelia with no blood vessels, the most superficial being the stratum corneum [21]. The epidermis connects to the dermis by a layer known as the dermal-epidermal junction (DEJ). Cutaneous appendages, including sensory receptors, nerves, glands, blood vessels and hair follicles, reside in the dermis. Skin varies in color, thickness, and texture in different parts of the body according to specific functional needs. Regional variations include the thickness of the stratum corneum, the presence of a stratum lucidum on palms and soles, epidermal thickness, and variable numbers of sebaceous glands, eccrine glands and hair follicles [21]. In this study, we have looked at the nose, preauricular area, neck, volar forearm, palm, back, thumb, dorsal forearm, sole, and calf as representative of the variety of skin architectures and epidermal thicknesses across the body. The most notable features of thick skin (palm, thumb and sole) are the thick stratum corneum, the presence of a stratum lucidum, an abundance of eccrine sweat glands and a lack of hair follicles, sebaceous glands and apocrine glands. In OCT images of skin from the palm and sole, the stratum corneum is the first visualized layer of the epidermis, appearing as a homogeneous layer of cells with scattered eccrine sweat ducts. The eccrine sweat ducts of thick skin have a recognizable spiral lumen when observed with high-intensity reflected light, a result of the large refractive-index mismatch between the sweat duct and the keratinocytes of the epidermis [22]. The stratum lucidum, a clear thin layer of dead cells found only on the thickened epidermis of palms and soles, lies just beneath the stratum corneum [20]. The prominent morphological features of the skin of the nose, preauricular area, volar forearm, neck, back, dorsal forearm and calf are a thinner epidermis, no stratum lucidum, and the presence of hair and sebaceous glands.
The stratum corneum of thick skin is about 300 µm thick, in contrast to an average of 14 µm in thin skin, where it is too thin to be visualized in detail by OCT. In thin skin, epidermal thickness fluctuates between 70 µm and 120 µm, with the full thickness of the epidermis plus the dermis varying between 1000 µm and 2000 µm [23].

Quantification of tissue cellular and architectural characteristics through the extraction of optical and textural features of skin tissue can be utilized in the analysis of OCT images [22-25]. Optical properties describe cellular characteristics of skin tissue that can be extracted by solving the light-matter equation, using single or multiple scattering models [26], combined with OCT image analysis algorithms. The single scattering model assumes that only the light undergoing single scattering (ballistic photons) preserves the coherence properties and contributes to the OCT signal. The multiple scattering model, however, is based on the extended Huygens-Fresnel (EHF) principle, where the shower-curtain effect is taken into account [24,27]. Both models have been used for investigating the optical properties of tissue [9,26]. Among the optical properties derived from OCT images, the attenuation coefficient, defined as the light intensity decay due to absorption and scattering, has been successfully used for the clinical characterization and diagnosis of skin abnormalities [28,29].

Textural features are formed from the variation in back-scattered light returned from micro-compartments of different sizes and densities [30,31]. Such variations are generated when a tissue with structures of the same scale as, or smaller than, the wavelength of the light source is illuminated by spatially coherent light [32]. First-order texture features are statistics calculated from the image histogram, which measures the probability of a certain pixel value occurring in the image; they do not consider pixel neighborhood relationships. To derive second-order statistics, the statistical texture features from the gray level co-occurrence matrix (GLCM) [33], which captures the spatial relationship between two pixels, are considered. The GLCM tabulates the number of times different combinations of pixel pairs, of a specific gray level, occur in an image, in four different directions (0°, 45°, 90° and 135°). To derive higher-order statistics, the statistical texture features from the gray level run length matrix (GLRLM) [34], which captures the spatial relationship between more than two pixels, are considered. In a given direction, the GLRLM measures the number of times there are runs of consecutive pixels with the same value.

Diagnosis of skin disease currently relies on the training, experience, visual perception and judgment of the clinician. Further diagnostic information is obtained from the histologic interpretation of tissue biopsies. Both visual and microscopic inspection of tissue rely on physicians analyzing visible patterns to guide the diagnosis. Issues arise when, for the same patient, dermatopathologists disagree on the clinical and histological diagnosis, due to variability in visual perception. Tools for automated pattern recognition and image analysis provide objective information to support clinical decision-making and may serve to reduce this variability. Previous studies have demonstrated that, utilizing OCT techniques such as polarization-sensitive OCT in conjunction with advanced image analysis methods, healthy and neoplastic tissues, particularly basal cell carcinoma, can be differentiated [15,35,36].
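To make the GLCM statistics described above concrete, the following scikit-image sketch computes second-order features in the four stated directions, plus GLCM entropy, on a synthetic patch standing in for an OCT ROI.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
patch = rng.integers(0, 64, size=(200, 200), dtype=np.uint8)  # stand-in for an OCT ROI

# Four directions: 0, 45, 90 and 135 degrees (scikit-image takes angles in radians).
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
glcm = graycomatrix(patch, distances=[1], angles=angles,
                    levels=64, symmetric=True, normed=True)

for prop in ("contrast", "energy", "homogeneity", "correlation"):
    print(prop, graycoprops(glcm, prop).ravel())  # one value per direction

# GLCM entropy is not provided by graycoprops; compute it from the matrix directly.
p = glcm + np.finfo(float).eps                    # avoid log(0)
entropy = -(p * np.log2(p)).sum(axis=(0, 1)).ravel()
print("entropy", entropy)
```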
However, some of those studies typically used qualitative and visual features [37] for structure identification. Other limitations of those studies include the use of in vitro data, the use of complex, expensive imaging techniques such as polarization-sensitive OCT [38], the use of less efficient features, and/or the use of inefficient analysis methods [15,35,36]. Other studies did not fully incorporate all available data acquisition and analysis techniques. This study attempts to address some of those limitations by using a clinical OCT machine, in vivo human samples, and extensive analysis techniques to accurately identify features of healthy tissue as well as BCC and SCC, and to classify them. We propose a model based on the analysis of optical and texture features to describe the gray-level patterns, pixel interrelationships, and spectral properties of an image, in order to provide an objective analysis of tissue samples in a noninvasive manner.

The aim of this study is to create comprehensive in vivo models of human skin diseases using numerical features extracted from OCT images and to use such models to assist in the diagnosis of common skin disorders. Our study is designed to be completed in two phases. In the first phase, optical and textural features extracted from OCT images of healthy skin at different body sites in vivo are analyzed and compared. In the second phase, the same features are extracted from OCT images of diseased skin and surrounding healthy tissue and used for computational modeling. The models are then tested on diseased images to identify possible dermatological conditions.

Dataset Construction

All imaging procedures and experimental protocols were approved and carried out according to the guidelines of the US National Institutes of Health and the Institutional Review Board (IRB) of Wayne State University, and informed consent was obtained from all subjects before enrollment in the study. Images for the skin conditions were collected in the Wayne State University Physician Group Dermatology Clinic, Dearborn, MI.

Healthy skin in OCT images. A stack of 170 images was taken from different transversal cross-sections for each of 10 body sites for each of the 10 healthy subjects, providing 17,000 images to develop a comprehensive analytical model of healthy tissue. A specialized holder was used for the OCT probe to make sure that we consistently imaged the same area of skin on each subject. The OCT B-scan images of the nose, preauricular area, volar forearm, neck, palm, back, thumb, dorsal forearm, sole, and calf were taken from male subjects aged between 25 and 52 years, none of whom had any skin conditions. From these images, 1000 collectively represented the data set for the first part of the study. See Fig. 2 for image examples. The images were despeckled and then segmented into two distinct layers using our semi-automatic DEJ detection algorithm, which is based on graph theory. The algorithm was performed in an interactive framework by graphical representation of the attenuation coefficient map through a uniform-cost search method [39] (see Figure Supplementary S3). The segmentation results were also verified by manual segmentation performed by two dermatologists. The main reason for studying healthy skin was to examine different regions of healthy skin in order to generate a small-scale atlas of OCT images.
This allows us to gain insight into textural and statistical features, to study feature variation prior to classification, and to better understand the details of the specific sites where we performed feature extraction for classification. This information is then used to compare features extracted from healthy and cancerous tissues in the classification workflow.

Diseased skin in OCT images. The characteristics of diseased skin, and hence the corresponding features in the OCT image, are altered compared to those of healthy skin. We studied epithelial skin tumors, i.e., basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). The diseased images in this study were taken from 11 subjects, aged between 25 and 52 years, with histopathologically confirmed diagnoses of BCC or SCC. Each patient had one tumor: 5 with BCC and 6 with SCC. We collected 170 2D images from each tumor at different transversal cross-sections and selected our sample images from among them. Although we collected many images, some were excluded. One reason for exclusion was the inability to confirm a match with histopathology. Another reason was to retain distinct SCC and BCC samples; in some cases, SCC and BCC were very similar and could not easily be distinguished. Our dermatologists with histopathology expertise evaluated the OCT images and compared the results with biopsied tissue samples from each site to identify the presence of BCC or SCC. Histology was not acquired for healthy images. The images were manually annotated (with confirmation from histology images), generating 242 diseased skin images, comprising 119 BCC and 123 SCC images, as our dataset. An additional 240 images were collected from locations at a sufficient distance from the tumor that they could be dermatologically confirmed tumor-free. Based on the histology results, our dataset comprised the nodular, superficial and infiltrative subtypes of BCC and invasive SCC. In Figure Supplementary S1, the OCT images and corresponding histology images for BCC are shown. In both the OCT and histology images of BCC, the central portion of the epidermis is ulcerated and covered with a crust (green arrow). SCC lesions develop from atypical cells with squamous cell characteristics proliferating in the dermis and underlying tissue. On the skin surface this appears as destruction of the epidermis and local thickening of the tissue due to hyperkeratosis and disordered epidermal layering. The criteria used to determine SCC in OCT images were changes to the tissue layer architecture and disruption of the basement membrane [15,40]. In Figure Supplementary S2, the OCT image and its corresponding histology image for an SCC sample are shown.

Results

Sixty-three optical, first-order statistical, and textural features were extracted for both the healthy and diseased image datasets.

OCT healthy skin. These features were investigated and compared for both the epidermis and dermis layers of healthy skin at the patients' ten body sites. We observed that the values of these features vary between the skin of different sites due to the composition and arrangement of cells and organelles. We used ANOVA analysis (interval plots) to analyze the variation of the features across different body sites, and t-tests to assess the differences between the features of the layers in both the dermis and epidermis. The optical feature, the attenuation coefficient, is determined from the light intensity decay.
The attenuation coefficient was computed for the different skin sites based on the single-scattering calculation algorithm. A simple block diagram of the computational algorithm, as well as the attenuation coefficient calculation algorithm, is explained in the Materials and Methods section. In Figure Supplementary S4, the attenuation coefficients of the dermis and epidermis are shown for the (a) nose, (b) preauricular area, (c) volar forearm, (d) neck, (e) palm, (f) back, (g) thumb, (h) dorsal forearm, (i) sole and (j) calf of ten healthy individuals. We observed that the palm and thumb are closely correlated in terms of attenuation coefficient. The attenuation coefficient is significantly different between the group of sole, palm and thumb and the other body sites (p < 0.05) in both the dermis and epidermis. Variation is also observed between the preauricular area and other sites for both the dermis and epidermis. For the dermal layer, differences were detected between the sole and nose as well as between the sole and volar forearm. Figure Supplementary S4 also shows the map of p-values for the epidermis and dermis of the different body sites. The first-order statistical (FOS) features extracted from the OCT images were the mean, standard deviation, variance, skewness, kurtosis, median and entropy. We observed slight differences for all FOS features extracted from the epidermis and dermis layers at all skin sites (Figure Supplementary S5).

OCT versus high-resolution ultrasound. High-frequency ultrasound is mainly used to estimate tumor thickness in melanoma, to plan one-step excisions with appropriate margins, and to help determine the necessity of sentinel lymph node biopsy [5]. Its penetration depth lies around 8 mm at 20 MHz. We imaged the skin of the same body sites with several OCT and ultrasound imaging systems in order to compare their resolutions and penetration depths. The modalities used were swept-source OCT (SS-OCT), clinical ultrasound (9 MHz), high-frequency (HF) ultrasound (48 MHz), ultra-high-frequency (UHF) ultrasound (70 MHz) and high-definition (HD) OCT. These images are shown in Figures Supplementary S9 to S12, and their histology images are given in Figure Supplementary S13. The speckle sizes in OCT and ultrasound images of a fabricated tissue-mimicking phantom, composed of TiO2 and polyurethane, are listed in Table 1 for comparison. The average speckle size is estimated by using the full width at half maximum (FWHM) of the auto-covariance function of the speckle pattern [46]. Theoretically, some high-frequency ultrasound systems have a resolution close to, or even better than, that of OCT. We, however, observed more distinct structures in the OCT images. In Table 1, we also compare the resolution, field of view and penetration depth of these imaging modalities. Comparing the results, OCT surpasses the other modalities in terms of speckle size. SS-OCT is the most favorable due to its moderate penetration depth, resolution, field of view, and speckle size.
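As an illustration of the single-scattering attenuation estimate used here, the sketch below fits $I(z) = I_0 e^{-2\mu z}$ to a synthetic averaged A-scan with SciPy's Levenberg-Marquardt solver; the depth axis and the true $\mu$ are invented placeholders, not tissue values.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
z = np.linspace(0, 0.5, 256)          # depth in mm (illustrative)
mu_true = 4.0                          # attenuation coefficient in mm^-1 (illustrative)
a_scan = np.exp(-2 * mu_true * z) * (1 + 0.05 * rng.standard_normal(z.size))

def model(z, i0, mu):
    # Single-scattering decay: I(z) = I0 * exp(-2 * mu * z)
    return i0 * np.exp(-2 * mu * z)

# curve_fit defaults to the Levenberg-Marquardt method for unconstrained problems.
(i0_hat, mu_hat), _ = curve_fit(model, z, a_scan, p0=(1.0, 1.0))
print(f"estimated mu = {mu_hat:.2f} mm^-1")   # close to mu_true
```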
Discussion

OCT is an effective imaging modality capable of aiding in the diagnosis of skin conditions, including inflammatory diseases and non-melanoma skin cancer. The diagnosis of skin disease is based primarily on the visual assessment of the dermatologist and on recognizing patterns of morphology. Noninvasive skin imaging techniques, including OCT, can provide further information to the clinician. Currently, clinicians rely on their visual pattern-recognition skills and expertise as physicians when viewing the images. Tools for automated pattern recognition and image analysis can provide objective information to support clinical decision-making. This study presents the incorporation of clinical and detailed quantitative textural assessment of OCT images to first generate a comprehensive morphological and computational atlas of healthy human skin in vivo. This reference system of in vivo healthy skin OCT images can then be used to assess a wide variety of skin disorders with the aim of improving diagnosis. We generated a small-scale OCT atlas of human skin from the sites shown in Fig. 2 (nose, preauricular area, volar forearm, neck, palm, back, thumb, dorsal forearm, sole, and calf), which covers the variations of skin tissues throughout the body. We imaged healthy skin from a variety of body sites on different individuals. The images were then segmented using our dermal-epidermal junction (DEJ) detection algorithm, which is based on a graph-theory representation of the attenuation coefficient map through a uniform-cost search method. Features, including the attenuation coefficient and statistical and textural features, were extracted from ten evenly distributed ROIs in both the epidermis and dermis at the different body sites. The average values and their corresponding 95% confidence intervals (CI) across the different skin sites were calculated. The derived features differed between the dermis and epidermis in healthy skin at different sites. These features were then extracted from OCT images of diseased and healthy skin and used for classification.

The epidermis and dermis vary across anatomic areas. Optical properties, and hence the corresponding numerical features in OCT images, vary based on the sizes, shapes, concentrations and orientations of tissue microstructures; cell membranes and blood vessel walls act as scatterers/reflectors and refractors. In texture analysis, the attribute 'contrast' of the GLCM shows the difference between the highest and lowest intensity values of a set of pixels. This parameter was significantly different between the values calculated from the palm/sole and the nose. The attribute 'energy' of the GLCM is a measure of the uniformity of pixel-pair recurrences and identifies disorder in texture. High energy values occur when the gray level distribution has a constant or periodic form. Significant variations of energy were measured in sole samples compared to all other sites, for both the epidermis and dermis. The attribute 'entropy' of the GLCM is an identifier of the disorder or complexity of an image; it is large when the image is not texturally uniform. The sole, palm and thumb showed a significant difference in entropy compared to the other sites, in both the dermis and epidermis. The attribute 'inverse difference moment' or 'homogeneity' of the GLCM, despite showing some dissimilarity, did not offer a significant distinction among the different sites.

With the numerical features extracted from OCT images, we successfully trained a classifier to differentiate between healthy tissue and abnormalities of dermal microstructure. Among the classifiers we examined, SVM offered the best accuracy for differentiating between normal and abnormal tissue samples. This objectively determined information could assist clinicians to diagnose, develop treatment plans, and determine individual prognoses more accurately. In this workflow, we used an efficient, limited number of features and a modified PCA algorithm for feature selection.
Thus, our algorithm might be limited as a result of PCA's limitations [41]. Although this selection of features covers an adequate variety in the projected space, their values may not linearly (or quadratically) discriminate between the two classes. Therefore, future directions for research include a larger data set, exploring other efficient features, and the investigation of more efficient feature selection and classification algorithms. Based on our data analysis in terms of recall and precision, we observed some examples where the proposed classifier failed, and BCC or SCC skin tissue was assessed as non-cancerous by the proposed workflow. The reason for this misinterpretation may be the similarity of the cancerous tissue to the surrounding texture.

In summary, we have extracted optical, textural, and statistical properties from OCT images of healthy skin to create a computational atlas of normal skin at different anatomic sites. We observed that skin cellular architecture varies across the body, and so do the textural and morphological characteristics in the OCT images. There is, therefore, a critical need to systematically analyze OCT images of different sites and identify their significant qualitative and quantitative differences. We demonstrated that the computational models can assist in the diagnosis of abnormalities of dermal microstructure, i.e., BCC vs. healthy, or SCC vs. healthy, and hence aid in the determination of treatment. The proposed workflow can be generalized to the detection of other tissue abnormalities. The results of this study can be extended into an interactive machine-learning kernel interface addable to OCT devices.

Materials and Methods

OCT system. It is worth mentioning that we introduced dynamic focus OCT [47], in which there is no need to decorrelate the effect of the confocal gate and the sensitivity drop-off, since the peaks of the confocal and coherence gates move simultaneously. Similarly, due to its multi-beam configuration, our Vivosight OCT can be considered approximately as a discrete dynamic focus OCT and, to a good approximation, these parameters can be neglected [48,49]. Therefore, compensation for the confocal parameter of the lens and for the fall in laser coherence was not performed.

Data Analysis. Healthy OCT images of skin are first segmented into two distinct layers using our semi-automatic DEJ detection algorithm [39]. The algorithm works by converting a border segmentation problem into a shortest-path problem using graph theory. It is performed in an interactive framework by graphical representation of an attenuation coefficient map through a uniform-cost search method. To smooth the borders, a fuzzy algorithm is introduced, enabling a closer match to manual segmentation. The details of this method have been reported previously [39]. The diseased parts of the OCT image are manually selected based on the histopathology images. A 200 × 200 pixel ROI was selected such that the tumorous region is within it. ROIs from the surrounding healthy skin were also chosen. The images then go through the procedure depicted in Fig. 5(a), where the optical, statistical and textural features are extracted. To suppress the speckle noise, a BM3D filter [50] was used. The despeckled images were used for better visualization as well as segmentation.

Optical feature. We calculated the attenuation coefficient as the optical property of the tissue. The A-scans in each ROI were averaged, and the Levenberg-Marquardt algorithm was used for curve fitting.
The attenuation coefficient of the ROI in the sample was then the slope of the curve fitted to the averaged A-scan (see Fig. 5(b)).

First-order statistical features: the mean, variance, standard deviation, skewness, median, entropy and kurtosis were calculated for each ROI. First-order measures are statistics calculated from the original image values and do not consider pixel neighborhood relationships. They are computed based on the intensity value concentrations in all or part of the histogram.

Second-order statistical features: we used statistical texture features from the gray level co-occurrence matrix (GLCM) to represent second-order statistics [33], i.e., the number of times pixel pairs of specific gray levels occur in an image in different directions. Homogeneity, contrast, energy, entropy and correlation in four directions, 0°, 45°, 90° and 135°, were calculated as second-order statistics.

Higher-order statistical features: we used statistical texture features from the gray level run length matrix (GLRLM) to represent higher-order statistics. These features capture the spatial relationship between more than two pixels. In a given direction, the GLRLM measures the number of times there are runs of consecutive pixels with the same value, including short run emphasis (SRE), long run emphasis (LRE), gray-level nonuniformity (GLN), run percentage (RP), run length nonuniformity (RLN), low gray-level run emphasis (LGRE), and high gray-level run emphasis (HGRE) [34].

We constructed a feature vector comprising the FOS textures, GLCM textures, and GLRLM features in the four angular directions, 0°, 45°, 90° and 135°. The means of the obtained features for the dermis and epidermis and their corresponding 95% confidence intervals (CI) across the different skin sites were estimated. ANOVA analysis (interval plots) was used to analyze the variation of these features across different body sites in both the dermis and epidermis. The differences in image features between sites were compared using t-tests. We used Minitab Statistical Software (version 17.0, Minitab Inc., Pennsylvania, USA) for the ANOVA analysis.

Classifiers. Prior to classification, features were normalized; then a feature selection algorithm was performed to obtain the most discriminative features. We used principal component analysis (PCA) as our feature selection method. PCA finds a linear map from the data in a high-dimensional space to a desired low-dimensional space while trying to preserve the data variance [39,41]. To perform PCA, we obtained the principal components and then kept the features which provided the greatest contribution to the first six principal components. After feature selection was performed, the images we had collected to fill the learning database were classified using machine learning classifiers. We tested SVM (with two different kernels: linear, LSVM, and 2nd-degree polynomial, QSVM), logistic regression (LR), k-nearest neighbor (KNN), linear discriminant analysis (LDA) and artificial neural networks (ANN). It has been shown previously that although SVM is designed to solve linear classification tasks, by using kernel tricks it can be applied to nonlinear classification tasks, and it is very well suited to binary (two-class) problems [42]. In LR classification, the probability that a binary target is true is modeled as a logistic function of a linear combination of features [43]. The KNN rule classifies each unlabeled sample by the majority label among its K nearest neighbors in the training set [44].
LDA searches for a linear combination of variables that best separates the binary targets. An ANN classifier consists of many neurons, i.e., highly interconnected processing components, that work constructively and coherently to solve specific problems [36,37]. The classifiers were validated using a 10 × 10-fold cross-validation method. In 10-fold cross-validation, the data is randomly split into 10 equal folds, and the classification procedure is implemented iteratively. In each run, nine folds are used for training and one fold is used for testing. The process is repeated ten times, and the final accuracy is the average of all the fold accuracies.

Implementation. The approaches described in this study were implemented in Matlab 2016, except for the segmentation algorithm, which was developed in Delphi. The experiments were carried out on a standard computer workstation (3.10 GHz Intel Core i7, 32 GB RAM). In addition to custom routines and semi-automatic ROI selection developed by the authors using Matlab's built-in functions, publicly available source code for BM3D was utilized [50].
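A minimal scikit-learn analogue of the classification workflow (normalization, PCA-based dimensionality reduction, an SVM, and 10-fold cross-validation) is sketched below on random stand-in data. Note that the paper keeps original features that load on the first six principal components rather than the components themselves; the sketch simplifies this to projection onto six components.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((240, 63))      # stand-in for 63 features per ROI
y = rng.integers(0, 2, 240)             # stand-in labels: healthy vs BCC/SCC

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=6),   # six components, echoing the paper's choice
                    SVC(kernel="linear"))  # LSVM; kernel="poly", degree=2 gives QSVM
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```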
Energy saving and energy efficiency of the Belarusian economy: analysis of concepts and evaluation criteria, proposed approaches to improving the energy efficiency of the housing stock

Abstract

The article analyzes the concepts and indicators for assessing energy saving and energy efficiency, revealing a clear terminological relationship between the studied concepts and the absence of evaluation criteria that characterize important components of energy efficiency by type of economic activity and economic sphere. The most important components of energy efficiency are identified, and calculations are made confirming the importance of the national policy to improve the energy efficiency of the housing stock as the sector with the highest energy saving potential. It is confirmed that, in order to achieve high indicators of real economic growth, consistent and constructive measures are needed both to create energy facilities of a new formation and to improve existing capacities and facilities, for example, in the form of major repairs of the housing stock.

Introduction

The relevance of energy efficiency issues at various levels of the economy is determined by the specific conditions of any economic system, the peculiarities of the functioning and development of the energy sector, and the current trends in the global economy amid global fuel-and-energy and environmental problems. Under these conditions, the state policy of transition of the Republic of Belarus to a path of innovative resource- and energy-saving development, which provides for the implementation of the strategy of energy efficiency, energy saving and energy substitution, is fully justified and has practically no alternative. This is enshrined in a number of legal acts, including the National Strategy for Sustainable Socio-Economic Development until 2030 [1], the Law of the Republic of Belarus "On Energy Saving" [2] and others. The priorities for resource- and energy-saving development were selected based on a number of quite obvious arguments formulated in the scientific literature:
- in terms of GDP energy intensity (at purchasing power parity), the country was in the category of countries with inefficient economies;
- electric energy has taken a key place in the material foundation of modern society;
- heat energy in the required amount is a fundamental condition for a comfortable life of the population;
- sustainable growth of society's welfare is possible only with a decrease in the growth rate of specific energy consumption;
- the large-scale replacement of non-renewable mineral energy resources with renewable ones, and of traditional electricity and heat production technologies with advanced ones, makes it possible to extend the availability of traditional energy resources and reduce man-made pressure on the environment.

Due to its high relevance, the problem of improving energy efficiency has been studied by individual scientists and research teams, including Russian scientists such as V.V. Efremov, G.Z. Markman, I.A. Bashmakov, R.F. Araslanov, A.A. Tupikina, A.S. Gorshkov, A.A. Gladkikh and others [3-7]. Other researchers [8-12] are engaged in the development of the theory, methodology and specific methodological tools to ensure energy efficiency in the Belarusian economy. In addition to domestic sources, the authors also turned to the publications of foreign scientists whose areas of interest are the optimal use of energy resources and energy efficiency analysis [13-16].
Objects, tasks and stages of the study

The object of the research is the processes and phenomena in the field of energy saving and energy efficiency, their assessment criteria and ways to improve them. The purpose of the study is to analyze and improve the concepts and indicators for assessing energy saving and energy efficiency, and to evaluate the effectiveness of the proposed measures to improve the energy efficiency of the housing stock of the Republic of Belarus within the framework of the national policy on the energy efficiency of residential buildings, in order to achieve high indicators of real economic growth. The objectives of the study are to identify the terminological relationships of the concepts of energy saving and energy efficiency and their constituent elements; to systematize the criteria for assessing energy efficiency at different levels of the economy, by type of activity and constituent element; to develop an algorithm for implementing an energy efficiency program in the form of a heat modernization project for the residential sector; and to conduct a feasibility study and calculate the economic effect of energy efficiency measures in the housing stock. The stages of the research are theoretical and methodological justification; financial and economic calculations of the proposed activities; and the formulation of conclusions and recommendations.

In research on the efficient use of energy resources, the terms "energy saving" and "energy efficiency" are both used. The authors of the article take the position that energy efficiency and energy saving are different concepts. In this study, they adhere to the view that energy saving refers to the conservation of energy or any resource, while energy efficiency should be understood as the process of optimal use of energy resources, taking into account at least the economic, environmental and social components over a certain time period. In scientific publications, "energy saving" is often interpreted as actions or measures aimed at reducing energy waste and the energy consumption of technological processes and industrial and household equipment. According to the EU Energy Efficiency Directive (2012) [17], the categories analyzed can be differentiated as follows: energy efficiency is the ratio of the output of performance, goods, services or energy to the energy input; energy saving is the amount of energy saved, determined by measuring and/or estimating consumption before and after the implementation of efficiency improvement measures, while normalizing for the external conditions that affect energy consumption. In European countries, the definition given by the Lawrence Berkeley National Laboratory (2010) [18] is generally accepted: energy efficiency is "less energy consumption to provide the same services". In accordance with the Law of the Republic of Belarus No. 239-Z of January 8, 2015 [19], energy saving is an organizational, practical, scientific, informational or other activity of entities in the field of energy saving, aimed at a more efficient and rational use of fuel and energy resources.
In the same legislative act, energy efficiency is defined as a characteristic reflecting the ratio of the effect obtained from the use of fuel and energy resources to the costs of the fuel and energy resources expended to obtain that effect; or through indicators reflecting the ratio of the useful effect from the use of energy resources to the expenditures made to obtain that effect, in relation to products, technological processes, legal entities and individual entrepreneurs. It should be noted that the efficient use of energy resources within the framework of the modern concept of the "green" economy should be considered as the achievement of economically justified efficiency in the use of energy resources (in the transformative sense, the result of actions), given the existing level of technological development and compliance with environmental protection requirements.

In order to study and analyze energy efficiency and energy saving, it is necessary to form systems of indicators, which are used for comparison and comparative analysis of data in dynamics and structure. Such a system of indicators makes it possible to compare the assessment result with the maximum achievable energy savings. The research by Prof. V.P. Samarina presents the following two approaches to energy efficiency measurement. "According to the first approach, only the result or effect is taken as an assessment of energy efficiency" (saving energy resources or reducing energy consumption) [20]. Thus, the cost of achieving the result is not taken into account, which, in Samarina's opinion (and we fully share this opinion), "cannot be called correct from an economic point of view" [20]. The second approach is to study "the ratio between economic outcomes (output, GDP, etc.) and energy costs (energy consumption, energy production costs, etc.). For example, in the international statistics of the United Nations and the World Bank, energy efficiency is considered as a ratio of GDP to energy consumption in oil equivalent units" [20]. In principle, the presented approaches to energy efficiency assessment correspond to the difference in the interpretation of the terms energy saving and energy efficiency analyzed above.

The global scientific community on energy efficiency issues has so far developed the following system of indicators, applied both at the level of the economy as a whole and at the regional level, as well as at the level of the energy efficiency of production complexes and processes. It includes the energy intensity of GDP, the energy efficiency of GDP, an integrated energy efficiency indicator (energy efficiency index), and the energy intensity of GVA (gross value added). Next, we consider the system of regional energy efficiency indicators using the systematization presented by Y.N. Akulova [21]. It is characterized by determining the energy intensity of GRP (gross regional product), the energy intensity of production, the energy intensity of organizations, the energy intensity of local budgets, the profitability of energy efficiency measures (policies), the energy-economic level of production in the region, etc. The first approach identified in that work divides energy efficiency indicators into economic (cost) indicators, technical-economic (physical) indicators, and indicators of the degree of implementation of energy-efficient technologies. Methods that implement this approach include, for example, one of the methods of the World Energy Council [18].
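For reference, the two macro-level indicators named above can be written compactly (the notation is ours, chosen for illustration):

```latex
\[
  e_{\mathrm{GDP}} = \frac{E}{\mathrm{GDP}}, \qquad
  \eta_{\mathrm{GDP}} = \frac{\mathrm{GDP}}{E} = \frac{1}{e_{\mathrm{GDP}}},
\]
```

where $E$ is total primary energy consumption (for example, in tonnes of oil equivalent) and GDP is measured at purchasing power parity; $e_{\mathrm{GDP}}$ is the energy intensity of GDP and $\eta_{\mathrm{GDP}}$ is the energy efficiency of GDP, one being the reciprocal of the other.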
Thus, the energy efficiency indicators at the macroeconomic and regional levels, as well as at the level of production complexes and processes, that are most significant and most common in scientific publications have been considered from the standpoint of various scientific approaches. It should be noted that most scientific publications lack indicators that characterize the components of energy efficiency, which in our opinion is an apparent gap in the research. In the authors' view, energy efficiency should be characterized through the following components: heat efficiency, electrical efficiency, renewable energy efficiency, etc. In addition, it is most reasonable to consider these indicators in relation to specific activities. This, in turn, relates to the specific activities of business entities at different levels of the national economy.

As foreign and domestic experience shows, in order to achieve high indicators of real economic growth, consistent and constructive measures are necessary, aimed not only at creating industrial enterprises of a new formation but also at improving existing production capacities in various sectors of the economy. In this direction, the national policy on improving the energy efficiency of the housing stock (the State Program "Energy Saving") plays an important role. The housing sector has the greatest potential for energy saving. It should be noted that the housing stock is a specific area of activity with its own inherent features. First, it provides the population with living space. Hence, the second feature is its direct impact on the quality of people's lives. In addition, the housing stock requires effective operation and timely maintenance, which is also a peculiarity. The housing stock is one of the largest energy consumers among the entities of the national economy of Belarus. Reducing the energy consumption of the housing stock has a significant impact on the use of fuel and energy resources for energy production, through increasing the energy efficiency of residential property. The energy efficiency of the housing stock is determined by improving the quality of energy production and use (primarily electricity and heat).

For a more detailed consideration, we propose to focus on one component of energy efficiency: the heat efficiency of the housing stock. On the basis of an analysis of the points of view presented in various scientific sources, we define the category "heat efficiency of the housing stock" as the rational use of heat energy, ensuring the maintenance of the indoor microclimate and the heat balance with the environment, with the purpose of reasonable consumption of the resources needed for its production by various methods. There are two main directions for improving the energy efficiency of facilities through heat energy: improving the efficiency of heat supply systems and reducing the heat loss of enclosing structures. The first direction involves the modernization of the city's engineering networks and only indirectly concerns the housing stock. It is the reduction of the heat loss of enclosing structures (for example, through thermal modernization) that has a direct impact on the energy consumption of residential property. We propose to consider the economic effect of the thermal modernization of the city's housing stock in order to reduce the consumption of heat energy.
Research methods

During the research, the authors used the following methods: analysis, synthesis, observation, comparison, inference by analogy, a systematic approach, as well as special methods and techniques of financial and economic analysis.

Results and discussion

To achieve the purpose of the research, the authors developed an algorithm for the implementation of a thermal modernization project for residential property in an urban settlement with a population of 98,452 people (according to the national census of 2019). In previous studies, the authors determined that these measures would allow saving up to 40% of the heat energy supplied to an entity and, consequently, produced. This, in turn, provides the expected economic effect [24]. Belarus has a system of cross-subsidizing the population's expenses for housing and communal services: a significant part of the expenses (about 70%) is covered by the State from budget funds. In the near future, the government plans to transfer the full cost of housing and communal services to the population. The implementation of the proposed algorithm will ease the payment burden on the population and attract additional funds to the local and republican budgets.

In general, the project implementation includes two cycles of thermal modernization works. The number of entities involved in the first and second cycles of thermal modernization was determined by the ratio 1:3. The total term of the project implementation is 20 years. This is determined by the government policy aimed at providing social and economic support to the population. Proceeding from the total duration, the first and second cycles will be completed in 5 and 15 years, respectively. The project implementation algorithm includes not only the direct implementation of the modernization cycles but also other stages ensuring the successful implementation of the initiative. It comprises the following stages:
1. Search for funding sources for the first project cycle.
2. Evaluation of project effectiveness.
3. Allocation of funds and implementation of the first project cycle.
4. Formation of the source of financing and works of the second cycle of modernization.

Let us look at the phased implementation of the project in more detail. The main problem that arises right at the start of the project is the identification of funding sources. Given the presence of cross-subsidization, the authors propose to divide the costs of thermal modernization in the ratio 1:1 between the government and the population. For the population, it is proposed to restructure the cost of housing and communal services under the item "Capital repair" to include the cost of thermal modernization of housing facilities. The system of organizing capital repairs in the Republic is distributed and does not imply the accumulation of the funds allocated by citizens to a particular residential building. These funds are accumulated in accounts and are constantly in circulation (when they are credited to the account, they are sent to settlements with contractors), which ensures the continuous implementation of capital repairs of the housing stock in the current year according to the capital repair schedule approved by the local authorities. Determining the project performance indicators at the initial stage of the proposed algorithm plays a crucial role.
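As a sketch of how the performance indicators used below (PP, NPV, IRR) can be computed, the following Python fragment evaluates an illustrative cash-flow profile at the study's 15% discount rate; the outlay and annual savings are invented numbers, not the project's figures.

```python
import numpy as np

r = 0.15                                   # discount rate used in the study
cash = np.array([-3.0e6] + [0.6e6] * 20)   # illustrative: outlay, then 20 years of savings (BYN)

t = np.arange(cash.size)
npv = np.sum(cash / (1 + r) ** t)          # net present value at rate r

# Simple payback period: first year when cumulative undiscounted flow turns non-negative.
pp = np.argmax(np.cumsum(cash) >= 0)

# IRR: the rate where NPV = 0, found as the positive real root of the NPV polynomial
# in x = 1/(1+r); np.roots expects coefficients from highest degree down.
roots = np.roots(cash[::-1])
real = roots[np.isreal(roots)].real
irr = max(1 / x - 1 for x in real if x > 0)

print(f"NPV @ 15%: {npv:,.0f} BYN, simple payback: {pp} years, IRR: {irr:.1%}")
```

With a conventional cash-flow profile (one outlay followed by positive savings), the NPV polynomial has exactly one positive real root, so the IRR found this way is unique.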
When appraising the project, a scenario approach was used: pessimistic (saving 10% of heat energy), optimistic (saving 40%) and most probable (saving 20%) variants of its implementation were considered. The options are based on determining the need for heat supply to the entities, changes in the cost of services, and other parameters. To illustrate the potential effectiveness of the project, the optimistic implementation option is considered below.

Based on the calculated data of the study, the payback period of the project (PP) is 11.98 years. According to the previous research [24], the net present value (NPV) of the project is 1,667,270 BYN, and the internal rate of return (IRR) is 33.3%. Since the calculation uses a discount rate of 15%, well below the IRR, we can conclude that the project is effective. Once the source of funding is formed and the project is recognized as effective, it is possible to proceed to the first project cycle. The estimated data are given in Table 1.

As can be seen from the calculation, in the first cycle of the project the reduction of heat supply and, as a result, of heat production yields savings of 6,479,834 BYN. This is the start-up capital for the implementation of the second cycle. Having spent the initial capital, the second cycle enters a stage of self-financing, which is possible by redistributing the funds saved on energy production to further thermal modernization. The calculation for the second cycle is shown in Table 2; as for the first cycle, it is based on the optimistic scenario.

Notes to the tables: * year from the start of the project (the basis for choosing the discount factor); ** the calculation is made considering the discount factor.

Thus, the subsequent income brought by the reduction in heat consumption will be 7,099,400 BYN annually (excluding growth in the cost of residential heating services). The income received after the proposed project is completed will go to the state budget and over time will cover all the expenses that the State bears during the first project cycle. In terms of the energy efficiency of the economy of the Republic of Belarus, the implementation of the proposed project will lead to a significant economic effect, calculated by the authors per capita of the settlement (about 150 BYN), which in terms of the total number of residents of Belarus amounts to 1,374,399,200 BYN.
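The self-financing mechanism of the second cycle can be illustrated with a toy simulation. The first-cycle savings and the eventual annual income are the figures quoted above; the per-building cost, the stock of 100 buildings and the linear ramp-up of savings are purely illustrative assumptions, not the authors' model.

```python
# Toy model of the second, self-financing modernization cycle: the fund
# seeded by first-cycle savings pays for buildings, and each completed
# building returns its share of the eventual annual savings to the fund.
# Per-building cost, building stock and the linear ramp-up are assumed.

FIRST_CYCLE_SAVINGS = 6_479_834   # BYN, quoted in the text (Table 1)
ANNUAL_INCOME_FULL = 7_099_400    # BYN per year once complete (Table 2)
COST_PER_BUILDING = 250_000       # BYN, illustrative assumption
TOTAL_BUILDINGS = 100             # second-cycle stock, assumed

fund, done = float(FIRST_CYCLE_SAVINGS), 0
for year in range(6, 21):                      # years 6..20 of the project
    n = min(int(fund // COST_PER_BUILDING), TOTAL_BUILDINGS - done)
    done += n
    fund -= n * COST_PER_BUILDING
    fund += ANNUAL_INCOME_FULL * done / TOTAL_BUILDINGS  # reinvested savings
    print(f"year {year:2d}: {done:3d} buildings modernized, "
          f"fund {fund:12,.0f} BYN")
```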
Conclusions

Thus, the authors have analyzed the concepts and indicators for assessing energy saving and energy efficiency, which revealed a clear terminological relationship and the absence of evaluation criteria characterizing important components of energy efficiency by type of economic activity and economic sphere. This made it possible to identify the most important components of energy efficiency and to perform calculations confirming the importance of the national policy of improving the energy efficiency of the housing stock, as the sector with the highest energy-saving potential. It is confirmed that achieving high indicators of real economic growth requires both consistent and constructive measures to create energy facilities of a new formation and the improvement of existing capacities and facilities (for example, in the form of thermal modernization of the housing stock).

The efficiency of the project of thermal modernization of the city housing stock proposed by the authors to reduce heat consumption under the state program "Energy Saving" can be further improved by a number of factors: the application of quality management systems, the motivation of employees, and the search for rational ways of financing, which will shorten the project implementation time. It should be noted that the approaches proposed by the authors for increasing energy efficiency are not the only ones possible in the national economy. However, a significant economic effect of the proposed measures is evident at both the micro and macro levels: energy saving in the domestic market may increase energy exports, and energy savings will reduce budget expenditures. In this regard, the proposed project also takes into account the environmental component, in the form of a reduced anthropogenic impact on the environment due to less heat released from residential premises into the atmosphere, as well as the social component, in the form of households' savings on heat energy under conditions of continuing growth of electricity and heat tariffs.
Clinical and genetic aspects of defects in the mitochondrial iron–sulfur cluster synthesis pathway

Iron–sulfur clusters are evolutionarily conserved biological structures which play an important role as cofactors for multiple enzymes in eukaryotic cells. The biosynthesis pathways of the iron–sulfur clusters are located in the mitochondria and in the cytosol. The mitochondrial iron–sulfur cluster biosynthesis pathway (ISC) can be divided into at least twenty enzymatic steps. Since the description of frataxin deficiency as the cause of Friedreich's ataxia, multiple other deficiencies in the ISC biosynthesis pathway have been reported. In this paper, an overview is given of the clinical, biochemical and genetic aspects reported in humans affected by a defect in iron–sulfur cluster biosynthesis.

Introduction

In eukaryotes, the Fe/S-cluster biosynthesis machinery is classically divided into the mitochondrial iron–sulfur cluster assembly (ISC) and export machinery, and the cytosolic iron–sulfur protein assembly (CIA) system. Nowadays, 9 CIA proteins and 20 ISC proteins are known to assist the major steps in biogenesis. All steps in these machineries are evolutionarily conserved from yeast to man. Once incorporated into their target protein, Fe/S clusters function as catalysts or take part in electron transfer. They also serve as sulfur donors in lipoate and biotin cofactor biosynthesis [1]. Fe/S-cluster-bearing proteins are located in the mitochondria and in the cell nucleus, where they play a role in gene expression regulation [2]. Moreover, proteins involved in DNA replication and DNA repair (Pol α, Pol ε, Pol δ, and Pol γ) and those functioning as DNA helicases harbor Fe/S clusters [3]. The cytosolic ABC protein ABCE1, required for ribosome assembly and protein translation, has two [4Fe-4S] clusters [4]. Apart from being synthesized and incorporated as cofactors into apoproteins, iron–sulfur clusters also serve as redox centers, as they are incorporated in complexes I, II, and III of the oxidative phosphorylation system (OXPHOS), embedded in the inner mitochondrial membrane. Considering the pleiotropic subcellular localization and the essential role of Fe/S-cluster-bearing enzymes in cell viability, it is easy to understand that faulty synthesis and lack of insertion of these inorganic elements can have detrimental effects on human health. An overview will be given of the genetic and clinical aspects of the molecular defects located in the biosynthesis pathway of iron–sulfur clusters in the mitochondria. Until now, diseases resulting from pathogenic mutations in proteins involved in CIA have not been reported.

Mitochondrial iron–sulfur biosynthesis (ISC)

ISC can be divided into three major steps. The first part encompasses formation of a [2Fe-2S] cluster on a scaffold protein. Subsequently, the cluster released from the scaffold protein by dedicated chaperones is maintained in a glutathione (GSH)-dependent fashion. The synthesized product is further processed intramitochondrially or exported into the cytosol to be processed by the CIA. The exact nature of the exported structure is unknown, but it might be a glutathione-stabilized [2Fe-2S] cluster, [2Fe-2S](GS)₄²⁻ [5]. The export of this structure is mediated by ABCB7 in cooperation with ALR, an FAD-dependent sulfhydryl oxidase.
The implication of ALR is, however, still a matter of debate, as a study in yeast could not show impaired cytosolic iron–sulfur cluster assembly [6]. The mitochondrial machinery can synthesize [2Fe-2S] or [4Fe-4S] clusters and incorporate these into the appropriate apoproteins (Fig. 1). For a detailed description of the iron–sulfur cluster pathway and the specific role of each enzyme within this pathway, we refer to other papers within this issue.

Faulty ISC synthesis can lead to mitochondrial failure, preferentially affecting organs with high energy consumption, i.e., the central nervous system, skeletal muscle, heart muscle, and liver. A deficiency in ISC synthesis can thus mimic clinical phenotypes of oxidative phosphorylation defects (Table 1). Considering the intricate relationship between ISC biosynthesis and cellular iron homeostasis, ISC deficiencies can lead to iron accumulation.

De novo synthesis of [2Fe-2S] clusters

One of the mitochondrial matrix proteins dedicated to Fe/S-cluster synthesis is ISCU [7]. The ferrous iron required for Fe/S synthesis is imported into the mitochondria through the mitochondrial solute carriers mitoferrin 1 (SLC25A37) and 2 (SLC25A28). In contrast to mitoferrin 1, which is exclusively expressed in developing erythroid cells, mitoferrin 2 is widely expressed [8]. How iron gets into the scaffold protein has not been totally elucidated yet, but frataxin (FXN) plays an important role in this process, at least in humans. The interaction of frataxin with ISCU is important for ISC biosynthesis and seems to be iron dependent [9]. The sulfur component is delivered through desulfuration of cysteine into alanine by cysteine desulfurase (NFS1), acting in a dimer conformation, in association with the cofactor pyridoxal 5'-phosphate. In addition, NFS1 requires association with a heterodimer composed of ISD11 (encoded by LYRM4) and acyl carrier protein (ACP) [10] for stabilization [11].

ISCU

As ISCU functions as a scaffold protein at the start of Fe/S-cluster synthesis, defective ISCU is predicted to impair overall cluster synthesis, resulting in large downstream effects, often with lethal consequences. This was confirmed in mice [12]. However, tissue-specific splicing and partial enzymatic impairment are found in subjects harboring pathogenic variants in ISCU. ISCU deficiency can cause myopathy with exercise intolerance and lactic acidosis, which is called the Swedish type myopathy. Subjects may experience cramps and show rhabdomyolysis. Interestingly, most of the affected subjects carry the common homozygous intronic mutation (7044G > C, or IVS5 + 382G-C), and all of them (except for one Norwegian) are natives of a region in Northern Sweden, explaining the denomination of Swedish myopathy [13]. In 2009, two siblings were identified with an exonic missense mutation (149G > A, Gly50Glu) in a compound heterozygous state together with the common intronic mutation. They had a more severe phenotype, with early onset (around the age of 2 years) of severe muscle weakness and muscle wasting, and hypertrophic cardiomyopathy [14]. Very recently, a dominant mode of inheritance (c.287G > T, p.Gly96Val) was reported in an Italian subject presenting with ptosis, hypotonia and exercise intolerance, showing worsening over time [15]. The biochemical features were different from previously reported cases, including complex IV deficiency in addition to the classically reported complex I, II, and III deficiencies [15].
Analysis of transcript expression showed that the highest level of mutant transcript was in skeletal muscle (80%), while liver and heart had lower levels (46% and 30%, respectively), explaining the tissue-specific phenotype of this disease [12].

SLC25A37 and SLC25A28

Disease-causing mutations have not yet been detected in the genes encoding these proteins. However, in subjects with refractory anemia with ring sideroblasts (RARS), increased expression of mitoferrin 1 (SLC25A37) in bone-marrow mononuclear cells was found [16]. RARS is a form of myelodysplastic syndrome leading to isolated anemia, hypochromic erythrocytes, hyperplastic ineffective erythropoiesis and mitochondrial ferritin accumulation in erythroid precursor cells.

FXN

Being involved early in the ISC synthesis pathway, a decrease of frataxin protein has an impact on overall Fe/S-cluster synthesis. Indeed, frataxin null mutations were shown to be lethal in mice [17]. Defective frataxin protein is seen in Friedreich's ataxia. The latter is caused by the presence of a triplet repeat expansion (GAA) in the first intron of the FXN gene in the homozygous state, or in a compound heterozygous state with a missense or nonsense mutation [18,19]. The disease is characterized by progressive ataxia, the absence of lower limb tendon reflexes, dysarthria, limb weakness leading to loss of ambulation after several years, decreased vibration sense, scoliosis, diabetes mellitus and cardiomyopathy. The neurological symptoms reflect the specific vulnerability of the dorsal root ganglia, sensory peripheral nerves, corticospinal tract and dentate nucleus [20]. The age of onset is before 20 years. The length of the triplet expansion correlates directly with left ventricular wall thickness [21] and inversely with the age of onset and the rate of exacerbation of symptoms [22]. The affected subjects ultimately become wheelchair bound, and cardiomyopathy is often the cause of fatal outcome, although it seldom causes death before the neurological symptoms are fully developed [23]. In accordance with the early involvement of the frataxin protein in the ISC biosynthesis pathway, deficiencies of aconitase and of the OXPHOS complexes I, II, and III have been reported in the subjects' cardiomyocytes [24]. Mitochondrial iron accumulation was another striking finding. In cultured skin fibroblasts from Friedreich's ataxia patients, the activities of complexes I and II were decreased [25].

Importantly, Friedreich's ataxia is the first iron–sulfur cluster deficiency for which therapeutic options are being developed. Currently, 51 clinical trials are ongoing or have recently been completed. These are studying different therapeutic approaches aiming (a) to reduce intramitochondrial oxidative stress (idebenone, coenzyme Q, vitamin E, iron chelators), (b) to enhance endogenous frataxin expression (erythropoietin, pioglitazone), or (c) to increase FRDA gene expression (HDAC inhibitors, interferon γ). For further detailed information on this topic, we refer to recently published papers [26,27]. Some of the proposed strategies, alone or combined, showed improvement on disease rating scales, but are not disease-modifying or curative. However, more promising results are emerging from gene therapy. In a conditional mouse model with complete Fxn deletion in cardiac muscle, intravenous administration of an adeno-associated virus (AAV) rh10 vector expressing human FXN prevented the occurrence of cardiomyopathy or completely restored heart function [28].
Increased frataxin expression in patient-derived lymphoblasts was observed after excising the GAA expansion repeat in one allele using a zinc finger nuclease [29].

NFS1

Not much is known about the clinical characteristics of an NFS1 protein defect in humans, as only one report has been published until now, describing three subjects of consanguineous descent all sharing the same homozygous missense mutation, c.215G > A, p.Arg72Gln [30]. This conserved residue was recognized to be important for the hydrogen bond formation between NFS1 and ISD11 [10,11]. The first subject presented at 7 months of age with lethargy, myocardial failure, and generalized seizures during an infectious episode, ultimately leading to fatal outcome 3 days later. The second subject presented with hypotonia and feeding problems, and developed multiple organ failure as well as focal seizures due to cerebral infarction; heart failure was the cause of death at the age of 7 months. The third subject, who was started on vitamin supplementation at the age of 6 months, was still alive at 11 years and suffered from mild developmental delay and truncal and limb hypotonia [30]. Biochemical features included increased lactate in body fluids and decreased complex II and III activity in skeletal muscle and liver (complex I was not tested individually) [30].

ISD11

LYRM4 encodes the iron–sulfur protein biogenesis desulfurase-interacting protein of 11 kDa (ISD11). Until now, only two subjects have been reported with a homozygous pathogenic missense variant in LYRM4. Although both harbored the same genotype, their phenotypes were different: one subject was alive at 20 years of age without symptoms, while the other died at the age of 2 months. The first presented with respiratory distress and hypotonia during the neonatal period; thereafter, he improved gradually and was lost to follow-up, but reevaluation at age 20 years showed no remarkable clinical anomalies. The latter suffered from neonatal respiratory distress and had hepatomegaly. She developed seizures, her clinical condition deteriorated and the outcome was ultimately fatal. Both subjects showed no anomalies on cerebral imaging [31]. The different outcomes in the two individuals could possibly be explained by the availability of sulfur sources during the first weeks of life. Indeed, the availability of cysteine, the major sulfur source, is restricted in the neonatal period due to reduced activity of hepatic cystathionase [32]. Affected individuals showed increased lactate in body fluids due to impaired functioning of the OXPHOS complexes I, II, and III in skeletal muscle and liver. In both tissues, cytosolic and mitochondrial aconitase as well as ferrochelatase showed decreased expression. Incorporation of the mutation in S. cerevisiae resulted in growth restriction. In E. coli, the variant had no impact on the oligomerization of ISD11 with its partner protein NFS1, but the enzymatic desulfurase activity was severely impaired in the same model [31].

FDXR

FDXR deficiency has only recently been reported. In one paper, eight individuals from four different families were described. The core clinical features were restricted to sensorineural hearing loss (auditory neuropathy) and optic atrophy. The age of onset ranged from 5 to 20 years for hearing impairment and from 2 to 36 years for visual impairment. Brain imaging revealed no abnormalities. All individuals had missense mutations, except for one who had a nonsense mutation in a compound heterozygous state.
Biochemical analysis in affected subjects was performed in cultured skin fibroblasts, showing impaired complexes I, III, IV, and V in one subject and impairment of complexes I and III in another. The absence of complex II involvement was remarkable, certainly considering the decreased expression of SDHB. Finally, iron overload, in combination with decreased IRP1 content, was noticed [33].

FDX2

Only one subject with a defect in FDX2 (encoded by FDX1L) has been reported so far. A homozygous missense mutation in the start codon resulted in a severe decrease of FDX1L expression. The subject suffered from a myopathy characterized by episodes of acute cramps, rhabdomyolysis, and myoglobinuria after moderate physical activity. During follow-up, a slowly progressive muscle weakness was noticed; the mental capacities were not altered. During acute episodes, serum lactate was increased. OXPHOS activity analysis in skeletal muscle showed typical features of Fe/S-cluster deficiency, with decreased activities of complexes I, II, and III, and a decreased activity of the Fe/S-cluster matrix enzyme aconitase [34]. Interestingly, the clinical presentation of FDX2 deficiency mimics the phenotype of individuals with ISCU deficiency. Tissue-specific splicing cannot explain the skeletal muscle-specific phenotype, as FDX2 is expressed ubiquitously. The authors suggest that FDX2 is not a vital component in Fe/S biogenesis and that FDX1 may partly take over its function in basal conditions, but not in extreme conditions [34].

[2Fe-2S] cluster release

Cluster release starts by binding of the J-type co-chaperone HSCB (HSC20, Jac1) and HSPA9 (mortalin, HSPA9B) to the ISCU-Fe/S complex, resulting in loosening of the [2Fe-2S] cluster in an ATP-dependent process [36]. In addition, GLRX5, itself possessing a [2Fe-2S] cluster, binds to the complex in a dimeric conformation and recruits GRPEL1 for final cluster release [35,36]. Synthesized [2Fe-2S] clusters or intermediate elements meant for further processing by the cytosolic iron–sulfur machinery are exported by the ABCB7 translocator. The FAD-dependent sulfhydryl oxidase ALR is thought to support the export [37]. It is generally accepted that proper cytosolic ISC synthesis relies on exported mitochondrial intermediates. Consequently, all protein deficiencies occurring before this stage can eventually lead to disruption of cytosolic ISC synthesis, and ultimately to mitochondrial iron overload.

HSPA9

This heat shock protein is associated with cluster release. It is supposed to have many other functions, including the correct folding of proteins after import into the mitochondria. Three subjects with a HSPA9 deficiency carried either a homozygous missense mutation or a missense mutation in a compound state with a nonsense mutation. Affected subjects presented overlapping symptoms captured in the acronym EVEN-PLUS syndrome, reflecting epiphyseal, vertebral, ear and nose malformation, plus associated findings [38]. Indeed, all subjects presented 'bifid' distal femurs and epiphyseal dysplasia of the femoral head resulting in short stature, bilateral microtia, and hypoplastic nasal bones. One subject showed vertebral coronal clefts and another lateral vertebral clefts. Other findings comprised arched eyebrows with mild synophrys and atrial septal defects in all reported patients. Two of them had anal atresia and a small area of aplasia cutis. One had hypodontia, which is another feature of ectodermal tissue involvement.
Only one subject presented with developmental delay and had abnormal cerebral imaging (dysgenesis of the corpus callosum); this subject also had vesico-ureteral reflux and kidney nephropathy [38]. Biochemical features in affected subjects were not provided, except for the reported anemia. Indeed, a HSPA9-deficient zebrafish, called 'crimsonless', showed ineffective hematopoiesis and also deleterious effects on the early development of musculature, fins and internal organs, leading to death at the 72 hpf stage [39]. An acquired interstitial deletion of the long arm of chromosome 5 [del(5q)], creating haploinsufficiency for a large set of genes including HSPA9, is a known cause of myelodysplastic syndrome characterized by ineffective hematopoiesis [40].

GLRX5

Although the number of reported patients is still limited, two clearly distinct phenotypes, characterized either by isolated spasticity of the lower limbs or by isolated sideroblastic anemia, can be found. An erythroblastoid phenotype is expected, considering the abundant expression of GLRX5 in erythroid cells (CD71+) and only minimal expression in other tissues [7]. However, in mice, apart from erythroblasts, high expression of GLRX5 was also demonstrated in the hippocampus and Purkinje cells of the cerebellum [7]. Three individuals with GLRX5 deficiency, caused by a homozygous in-frame deletion, or by an out-of-frame insertion leading to a premature stop codon in combination with the same in-frame deletion, were identified in a cohort of patients with non-ketotic hyperglycinemia (NKH). In contrast to the classical presentation of NKH, characterized by neonatal epileptic encephalopathy, these subjects developed symptoms much later and showed a milder disease course. One subject had only mild learning difficulties and two individuals had normal mental development, but one of them suffered from progressive deterioration of vision, in accordance with progressive optic nerve atrophy. Spasticity of the lower limbs occurred between the ages of 2 and 7 years. These symptoms correlated with varying degrees of diffuse and progressive white matter alterations seen on brain MRI in two subjects, and only mild alteration of the upper spinal cord in another. All patients were still alive at the time of publication of the paper, i.e., aged between 7 and 11 years [41]. Two adults, both harboring missense mutations, were described with congenital sideroblastic anemia and hepatosplenomegaly, without signs of spasticity. Later in the disease course, they developed diabetes mellitus type 1 [42,43]. One patient had cirrhosis and hypogonadism [42].

Biochemical analysis in the subjects with the spastic phenotype showed increased glycine concentrations in serum and cerebrospinal fluid (CSF). Accordingly, the activity of the glycine cleavage enzyme in liver tissue was low in all subjects, probably due to defective lipoylation of the H-protein moiety. Defective lipoylation of αKGDH and PDHC has already been demonstrated in cultured skin fibroblasts. The activity of the OXPHOS complexes was not tested in all patients, but OXPHOS activities were not defective in cultured skin fibroblasts and skeletal muscle, and lactate was normal in these subjects [41]. In the probands with sideroblastic anemia, a significant amount (> 15%) of ring sideroblasts was detected in bone-marrow smears. Apart from erythroid iron accumulation, transferrin saturation and ferritin concentration were increased in serum.
Further evidence of defective ISC biosynthesis was provided by the decreased amount of Fe/S incorporated in cytosolic aconitase and the decreased catalytic activity of this enzyme [42,43]. The activities of mitochondrial aconitase and complex II were normal in lymphoblasts [41]. However, complex I activity and expression were significantly decreased in cultured skin fibroblasts [7]. The cultured skin fibroblasts also showed increased iron accumulation, both in mitochondria and in the cytosol [7]. Interestingly, therapy with the iron chelator deferoxamine resulted in improvement of the anemia in both patients [42,43]. In HeLa cells depleted of GLRX5, a decreased activity of mitochondrial aconitase and xanthine oxidase was detected, confirming the essential role of GLRX5 for cytosolic and mitochondrial ISC, both [4Fe-4S] and [2Fe-2S] clusters. These cells also showed increased (approximately doubled) total non-heme mitochondrial iron content and lower ferritin expression [7].

ABCB7

Deficient export of ISC or ISC intermediates results in iron accumulation in mitochondria. As synthesis per se is not affected, the OXPHOS complexes are not deficient. Complementation studies in yeast showed the importance of the ABCB7 protein for cytosolic ISC maturation. A defect in ABCB7 resulted in abolition of the activity of cytosolic aconitase, which functions as an iron regulatory protein (IRP), leading to increased transferrin receptor synthesis and a subsequent increase in iron uptake. Anemia is explained by decreased ferritin and decreased erythroid 5-aminolevulinate synthase (ALAS2) synthesis. All reported subjects with either hemizygous or heterozygous missense mutations presented with sideroblastic anemia in combination with ataxia. Although classically a non-progressive ataxia was reported, some adult patients suffered from regression of motor function. Early motor development varied from normal to delayed. The age of onset of the central nervous symptoms varied from early childhood to late adulthood. Ocular symptoms with nystagmus and/or small saccades were reported in three adults [44]. Cerebral imaging is normal or shows isolated cerebellar atrophy. Heterozygous females may suffer from hypochromic microcytic anemia, but do not present with neurological symptoms [45][46][47][48]. More recently, isolated cerebellar hypoplasia without sideroblastic anemia was reported in the affected individuals of one family. These individuals also harbored a deletion on chromosome X affecting two other genes (ATP7A and PGAM4) that might have influenced the phenotype [49].

ALR

The ALR protein is encoded by the GFER gene. It is an oxidase essential for the mitochondrial disulfide relay system, which is extremely important for protein import into the mitochondrial intermembrane space [50]. ALR may be involved in the export of ISC intermediates into the cytosol [37]. Affected subjects, all harboring missense mutations, have variable degrees of developmental delay, hypotonia and congenital cataract. Serum lactate is increased. In the first report, three subjects of consanguineous origin were described, presenting with congenital cataracts, early onset progressive muscular hypotonia, sensorineural hearing loss, and delay of motor skills and speech development [51]. In a second paper, an adult subject was described with infantile-onset adrenal insufficiency, cataract and subsequently poor feeding, irritability and hepatomegaly. Cerebral imaging revealed mildly increased signals bilaterally in the globus pallidus, which resolved later on.
By the age of 18 months the clinical situation stabilized and the child had only a slightly delayed development. At an early adult age, truncal hypotonia and muscle wasting were noticed, leading to respiratory insufficiency [52]. Very recently, two families, each with two affected siblings, were reported. Two siblings presented with regression at the age of 9 months, associated with hypotonia evolving to severe developmental delay, dystonia, choreic movements and absent or minimal language development. Cerebral imaging showed moderate brain atrophy in one sibling. The other two subjects also presented with developmental delay associated with hypotonia and mild dystonic features. All reported subjects had congenital cataracts, and none had hearing loss [53]. When tested, serum lactate was found to be increased [51,52]. Interestingly, OXPHOS testing in skeletal muscle revealed a combined OXPHOS deficiency involving complexes I, II, and IV in one subject [51] and a deficiency of the OXPHOS complexes I, II, III, and IV in another [52]. A deficiency involving complexes I, III, and IV was reported by Nambot et al. (2017) in two subjects. In another subject, only an isolated complex IV deficiency was found in cultured skin fibroblasts [51]. The authors did not report anemia in the affected individuals, but three subjects were found to have low serum ferritin [51].

Fe/S-cluster targeting and further maturation

After [2Fe-2S] clusters are released from the scaffold protein, they can immediately be incorporated into apoproteins or into the OXPHOS complexes I, II, and III. Incorporation needs the assistance of chaperone proteins. For further maturation to [4Fe-4S] clusters, the assistance of chaperone proteins is also necessary, i.e., ISCA1 and ISCA2 as well as the folate-binding protein IBA57 [54]. ISCA1 and ISCA2 are important for iron incorporation onto the [2Fe-2S] cluster. NFU1 is a chaperone protein dedicated to [4Fe-4S] incorporation into complexes I and II as well as into LIAS (lipoic acid synthase), but not into mitochondrial aconitase. BOLA3 is another chaperone protein with a role similar to that of NFU1. IND1 (encoded by NUBPL), which stands for the iron–sulfur protein required by NADH dehydrogenase, was initially thought to be needed for the incorporation of [4Fe-4S] clusters into complex I [55]. Its function was, however, reconsidered after studies with the A. thaliana ortholog Ind1, which showed that it functions as a translation factor necessary for the expression of multiple complex I subunits [56]. Considering the uncertain nature of its action, we have included a discussion of the clinical features associated with NUBPL dysfunction.

ISCA1

ISCA1 is one of the most recently reported defects in Fe/S-cluster biogenesis. Together with ISCA2 and IBA57, it acts at a late stage of the ISC biosynthesis pathway and is required for [4Fe-4S] cluster assembly. In two unrelated families, each with two affected children, ISCA1 deficiency caused by homozygous missense mutations was reported. The affected individuals had normal prenatal development, but in the young infantile period showed developmental delay, poor head control and signs of spasticity. Extensive cerebral and cerebellar abnormalities, including pachygyria, enlarged lateral ventricles and abnormal cerebral and cerebellar white matter signals, were seen on brain imaging. Seizures appeared within the first months (2nd-5th) of life, and the children died between 11 months and 5 years of age.
When documented, blood lactate was increased and a lactate peak was seen on cerebral MR spectroscopy [57]. Measurements of OXPHOS complex activities were not reported.

ISCA2

Although only 16 subjects with ISCA2 deficiency have been reported until now, all caused by a missense mutation, an apparently uniform phenotype seems to segregate. Affected individuals presented a leukodystrophy characterized by diffuse white matter alterations seen on brain MRI, extending into the corpus callosum and the posterior limb of the internal capsule, with further alterations in the mesencephalon and cerebellar white matter. In some patients, an abnormal cervical spinal cord signal was also seen. The subjects became symptomatic after an uneventful pregnancy, between the ages of 3 and 7 months. Presenting symptoms were loss of fixation with or without nystagmus and loss of acquired motor and social skills. Eventually all developed spasticity of the upper and lower limbs and optic nerve atrophy. All subjects were considered to have a degenerative condition and died a few months to 2 years after the onset of the initial symptoms [58,59]. In some subjects, symptoms started after an inflammatory episode (infection or vaccination) or after mild head trauma [59]. CSF lactate and, to a milder extent, plasma lactate were increased, as were CSF and plasma glycine [59]. Analysis of OXPHOS complex activities in the subjects' cultured skin fibroblasts revealed that complex I was severely impaired, while complexes II and III were normal. In HeLa cells depleted of ISCA2 using siRNA, a decreased expression of several ISC-bearing proteins (mitochondrial aconitase, SDH, NDUFS3, NDUFA9, NDUFB4, NDUFA13, UQCRFS1) was found, but no decrease of ferrochelatase. Activity measurements showed impaired activity of complexes I and II and of mitochondrial aconitase. In accordance with its distal action in the ISC biosynthesis pathway, no deleterious effect on heme synthesis could be detected in ISCA2-depleted cells [58].

IBA57

Since the first publication in 2013 [60], four other papers have described subjects with IBA57 deficiency, in total now 28 subjects. Three different phenotypes can be discerned in IBA57 deficiency, all with involvement of the central nervous system. The severity and type of lesions and the age of onset of symptoms are variable. Two phenotypes are associated with early fatal or debilitating outcomes [60][61][62][63], and one with a milder course [64]. No genotype-phenotype correlations could be made. Most of the subjects harbored missense mutations; insertions with frameshifts were also reported [63]. The first described subjects were siblings born to consanguineous parents, presenting with intra-uterine growth retardation, polyhydramnios and microcephaly. At birth, they were hypotonic and presented with signs of encephalopathy. They had dysmorphic features, including retrognathia, high arched palate, widely spaced nipples, and arthrogryposis of elbows, wrists, fingers and knees. Despite prompt and adequate intensive support, their conditions deteriorated, leading to early death. Cerebral MRI was abnormal, with hypoplasia of the corpus callosum and medulla oblongata, bilateral frontoparietal polymicrogyria, and severely enlarged lateral ventricles [60]. Subjects with the mildest phenotype presented with spastic paraplegia, variably associated with optic nerve atrophy and peripheral neuropathy, abbreviated as SPOAN.
Subjects suffered from slowly progressive gait impairment due to spastic paraparesis together with peripheral neuropathy and superficial sensory loss. The age of onset of gait impairment varied between 3 and 12 years. Central nervous system lesions were minor, as all affected subjects led an independent adult life without cognitive impairment. Cerebral MRI in one subject showed, aside from bilateral optic nerve atrophy, scattered white matter alterations. Only one subject presented with mild cerebellar and cervical spinal cord atrophy [61]. In three reports, a third phenotype was described, altogether in 15 subjects. The children presented with loss of motor and mental skills between 4 and 15 months of age. These findings correlated with extensive white matter alterations in the cerebrum, cerebellum, mesencephalon and upper spinal cord; the corpus callosum and basal ganglia were not affected [61][62][63]. Increased lactate in serum and CSF and increased glycine were common biochemical features in most of the affected subjects. In all subjects, deficient activity and expression of complexes I and II and faulty lipoylation were detected in all analysed tissues (lymphoblasts, cultured skin fibroblasts and skeletal muscle) [60][61][62][63]. None of the described subjects had signs of anemia. As for ISCA2, HeLa cells depleted of IBA57 using siRNA showed decreased expression of the ISC-bearing subunits of complexes I, II, and III, but not of ferrochelatase.

NFU1

The role of NFU1 in human ISC biogenesis was deduced from the biochemical alterations detected in affected subjects. As complexes I and II and lipoylation were deficient, it was presumed that NFU1 acts as a chaperone dedicated to ISC incorporation into lipoic acid synthase, complex I and complex II. Expression studies demonstrated ubiquitous expression of the protein, with the highest expression in brain and heart, in parallel with mitochondrial content [65]. Missense [66][67][68] as well as splice site variants [68][69][70] have been reported. The first reported subjects presented with early onset encephalopathy and fatal outcome before the age of 1 month [66]. Ten other patients were reported with variable age of onset (between 1 and 9 months) and fatal outcome before the age of 15 months. This permitted a classification into three groups. In the first group, the affected subjects presented with failure to thrive, pulmonary hypertension, hypotonia and irritability; cerebral MRI showed bilateral extensive white matter alterations. Recently, another paper confirmed this severe clinical picture in two children aged 3 and 4 months [71]. In a second group, the affected individuals presented with pulmonary hypertension and regression of acquired skills after an intercurrent infection. A third group, with the mildest symptoms, showed only pulmonary hypertension and variable failure to thrive [70]. Ahting et al. (2015) described seven other subjects. All of them had central nervous system involvement with evolution into spastic tetraparesis and a declining clinical condition. Four of them suffered from pulmonary hypertension. One subject had dilated cardiomyopathy, which became fatal at the age of 3 years [68]. Two individuals initially presented with early onset decline, evolving to a stable spastic tetraparesis or paraparesis at adult age [69,72].
The common feature of all reported subjects was increased serum glycine and defective lipoylation, resulting in decreased activity of complex II and PDHC (pyruvate dehydrogenase complex). Biochemical testing was not performed in the two most recently published cases [71].

BOLA3

Most reported subjects developed neurological symptoms at an early age, including seizures, leading to death before the age of 1 year, together with cardiomyopathy. Optic nerve atrophy was a variably occurring symptom [41,66,73]. When available, cerebral MRI showed various degrees of white matter alterations. Homozygous missense mutations [41,73] and a homozygous duplication leading to a frameshift with a premature stop codon [66] have been reported. Only one subject had a milder presentation, with slowing of development starting at the age of 6 months. Subsequently, a slowly progressive spasticity and ataxia, as well as loss of language skills between 3 and 8 years of age, were observed. Acute regression was seen at the age of 9 years, with recurrent status epilepticus and worsening of spasticity. This subject finally died at the age of 11 years and was found to have only mild cerebral and cerebellar atrophy and no white matter changes [41]. As in NFU1-deficient subjects, increased lactate and glycine are a common feature of subjects with BOLA3 deficiency, together with defective lipoylation. Similarly, the OXPHOS profile in cultured skin fibroblasts and skeletal muscle displayed a decreased activity of complexes I and II. When tested, PDHC was also deficient, but mitochondrial aconitase was normal [66,73].

NUBPL

This protein is specifically dedicated to the incorporation of [4Fe-4S] clusters into complex I; a NUBPL defect therefore leads to an isolated deficiency of complex I. The reported patients were all compound heterozygous for exonic missense mutations, deletions or insertions leading to a frameshift and premature stop codon. One intronic mutation located in intron 9-10 introduced a cryptic acceptor splice site, leading to the insertion of an additional 72 bp and finally resulting in a frameshift and nonsense-mediated decay. This allele was found in a heterozygous state with a complex genomic rearrangement in seven of the reported subjects [74][75][76]. The first reported subjects had isolated encephalopathy [74]. Another subject was described with progressive nystagmus, cerebellar ataxia and pyramidal tract signs in adult life; cerebral MRI in the latter showed hyperintense lesions in the cerebellum, anterior mesencephalon and pyramidal tract [75]. A group of six patients was identified based on their cerebral MRI findings, showing extensive signal abnormalities in the cerebellar cortex, deep cerebral white matter and corpus callosum. Interestingly, the cerebral white matter and corpus callosum abnormalities improved or even disappeared in the course of the disease, while the cerebellar abnormalities became more extensive and abnormalities in the brainstem (basal pons and dorsal medulla oblongata) became visible. All subjects presented clinically with slowly progressive motor dysfunction and ataxia, and the majority of them with spasticity. Intellectual capacities varied from normal to severely impaired [76]. When reported, lactate was increased in plasma and CSF [76]. All described patients showed deficient complex I activity.
Conclusion

Considering the strong evolutionary conservation of iron–sulfur clusters and their biosynthetic pathway throughout eukaryotes, it is not surprising that deficiencies in this synthesis pathway can cause severe impairment of cellular functioning and affect the viability of the affected organisms. In this review we have compiled the acquired knowledge on the clinical and genetic features of ISC deficiencies. Strikingly, a strictly definable 'ISC biosynthesis deficiency' phenotype cannot be found. The most commonly affected organs are the brain, with developmental delay, epilepsy or regression, and skeletal muscle, with muscle weakness. Increased lactate and glycine in body fluids, as well as anemia, possibly with the presence of sideroblasts, are accompanying biochemical features associated with ISC deficiencies. This clinical and biochemical profile, except for the increased glycine, is reminiscent of the clinical characteristics reported in subjects with mitochondriopathies. The parallelism is not unexpected, as three OXPHOS complexes harbor iron–sulfur clusters that function as electron carriers. Deficiencies in the cytosolic iron–sulfur cluster synthesis pathway (CIA) have not been reported yet, and will probably result in clinical pictures different from those seen in the ISC deficiencies described here.
Stellar wind induced soft X-ray emission from close-in exoplanets

In this paper, we estimate the X-ray emission from close-in exoplanets. We show that the Solar/Stellar Wind Charge Exchange mechanism (SWCX), which produces soft X-ray emission, is very effective for hot Jupiters. In this mechanism, X-ray photons are emitted as a result of charge exchange between heavy ions in the solar wind and atmospheric neutral particles. In the Solar System, comets produce X-rays mostly through the SWCX mechanism, but it has also been shown to operate in the heliosphere, in the terrestrial magnetosheath, and on Mars, Venus and the Moon. Since the number of emitted photons is proportional to the solar wind mass flux, this mechanism is not very effective for the Solar System giants. Here we present a simple estimate of the X-ray emission intensity that can be produced by close-in extrasolar giant planets due to charge exchange with the heavy ions of the stellar wind. Using the example of HD 209458b, we show that this mechanism alone can be responsible for an X-ray emission of ≈ 10^22 erg s^-1, which is 10^6 times stronger than the emission from the Jovian aurora. We also discuss the possibility of observing the predicted soft X-ray flux of hot Jupiters and show that, despite the high emission intensities, they are unobservable with current facilities.

INTRODUCTION

X-ray emission has been observed for many Solar System objects, e.g. for Mars (Holmström et al. 2001; Gunell et al. 2004; Dennerl 2002), Venus (Bhardwaj et al. 2007; Dennerl et al. 2002), the Earth and the Moon (Collier et al. 2014; Bhardwaj et al. 2007), Jupiter and the Galilean satellites (Bhardwaj et al. 2007), Saturn (Branduardi-Raymont et al. 2010), comets (Cravens 2002; Lisse et al. 2004), and in the heliosphere (Cravens et al. 2001). For the Solar System planets, X-rays are known to be generated via different mechanisms. The main mechanisms are:

- continuum Bremsstrahlung emission due to collisions with electrons (produces mostly hard X-rays);
- excitation of neutral species and ions due to collisions, e.g. with electrons (charged particle impact), followed by line emission;
- stellar X-ray photon scattering from neutrals in planetary atmospheres (elastic scattering and K-shell fluorescent scattering; requires a significant column density);
- charge exchange between solar wind ions and neutrals (SWCX), followed by X-ray emission;
- X-ray production from the charge exchange of energetic (energies of about a MeV/amu) heavy ions of planetary magnetospheric origin with neutrals, or by direct excitation of ions in collisions with neutrals (this is known to be effective on Jupiter, e.g. Cravens et al. 2003; Bhardwaj et al. 2007).

The cross sections for charge exchange with the solar wind heavy ions at solar wind energies are several orders of magnitude larger than the cross sections for the excitation of neutral species by electrons (Bhardwaj et al. 2007), which makes the SWCX mechanism more effective. In the present article, we discuss the SWCX mechanism and X-ray scattering as applied to close-in giant exoplanets, in particular to HD 209458b. We discuss the observability of the X-ray emission from HD 209458b, the X-ray emission from other giant planets, and the influence of the host star's age. Other X-ray production mechanisms are beyond the scope of the present article, but will be the goal of a future study.
SOLAR WIND CHARGE EXCHANGE MECHANISM

In the SWCX mechanism, an electron is transferred from a neutral atom or molecule to a highly charged heavy ion of the solar wind. This mechanism is known to produce soft X-rays in cometary comas (Cravens 2002). In the case of a magnetized planet, these ions can enter the neutral atmosphere following the open field lines near the polar cusp. It is known from experimental and theoretical studies that solar wind heavy ions can undergo charge exchange reactions when they are within approximately 1 nm of a neutral atomic species (e.g., Lisse et al. 2004; Cravens 2002; Bhardwaj et al. 2007 and references therein):

    A^q+ + B → A^(q−1)+* + B^+,    (1)

where A is a charged heavy ion in the solar wind (the projectile), q is the projectile charge and B is a neutral component (the target). The product ion A^(q−1)+* is still highly charged and is almost always left in an excited state (marked by an asterisk). Then, the excited ion emits one or several X-ray photons in the following reaction:

    A^(q−1)+* → A^(q−1)+ + hν.    (2)

Although the de-excitation usually represents a number of cascading processes through intermediate states, if q is high then an X-ray photon (at least one, though usually several) is emitted (Cravens 2002). The composition of the solar wind by volume is 0.92 hydrogen, 0.08 helium, and ≈ 10^-3 heavier elements. Since the solar wind quickly becomes collisionless as it expands, the charge states that the heavy ions have in the hot solar corona are frozen in, and therefore the heavy elements are usually highly charged; the most common heavy ions in the solar wind are those of oxygen and carbon. The cross sections for such charge transfer collisions are very high at solar wind energies, exceeding 10^-15 cm^2 (e.g., Greenwood et al. 2001). The species that undergo the charge exchange define the energy of the emitted X-ray photons, which is usually in the range of 0.3-0.5 keV.

STUDY OF HD 209458B

In this section, we discuss the soft X-ray emission which can be produced by the SWCX mechanism on close-in exoplanets. As an example, we consider HD 209458b, a well-studied close-in gas giant orbiting a 4±2 Gyr old G-type star. The planetary and stellar wind parameters are summarized in Table 1. In our further estimates, we rely on the results of Kislyakova et al. (2014), who investigated the magnetosphere and stellar wind parameters in the vicinity of HD 209458b by means of modelling. Their results support a magnetic moment of HD 209458b of approximately 10% of Jupiter's and a stellar wind with a velocity of 4 × 10^7 cm s^-1 at the time of observation.

For a very simple estimate of the X-ray intensity, I, emitted in the region of the atmosphere exposed to heavy ion precipitation, one can use the following expression:

    4πI = 2 N f n_sw v_sw,    (3)

where f is the fraction of heavy ions in the wind, n_sw and v_sw are the stellar wind number density and velocity, and N is a factor of 2 or 3 representing the number of photons emitted per ion (below we assume N = 3). The additional factor of 2 on the right-hand side is a flank magnetosheath enhancement factor. For Jupiter, Equation 3 yields 4πI ≈ 10^5 cm^-2 s^-1, while values of 2 × 10^6 − 2 × 10^7 cm^-2 s^-1 are necessary to explain the observed auroral soft X-ray emission, which means that SWCX is not the main mechanism producing soft X-ray emission for Jupiter.
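As a quick sanity check of Equation 3, the sketch below evaluates 4πI for Jupiter and for HD 209458b. The wind densities are assumptions for illustration: a typical slow solar wind density of ≈ 0.2 cm^-3 diluted to 5.2 AU, and the ≈ 5 × 10^3 cm^-3 implied by the HD 209458b numbers quoted in the text.

```python
# Sketch: evaluate Eq. (3), 4*pi*I = 2 * N * f * n_sw * v_sw, the SWCX
# photon production rate per unit area. The densities are illustrative
# assumptions, not tabulated values from the paper's Table 1.

def swcx_intensity(n_sw_cm3, v_sw_cms, f=1e-3, n_photons=3):
    """Return 4*pi*I in photons cm^-2 s^-1 (factor 2: flank enhancement)."""
    return 2.0 * n_photons * f * n_sw_cm3 * v_sw_cms

# Jupiter: slow solar wind diluted to 5.2 AU (assumed n ~ 0.2 cm^-3).
print(f"Jupiter:    4piI = {swcx_intensity(0.2, 4e7):.1e} cm^-2 s^-1")

# HD 209458b at 0.047 AU (n ~ 5e3 cm^-3, inferred from the text's numbers).
print(f"HD 209458b: 4piI = {swcx_intensity(5e3, 4e7):.1e} cm^-2 s^-1")
```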
Given its proximity to its host star, it is unclear whether HD 209458b is located in the sub-Alfvénic or super-Alfvénic region of the wind. The exact regime depends on the magnetic moment of the host star and the stellar wind parameters. Although the results of Kislyakova et al. (2014) support that HD 209458b is rather in the super-Alfvénic regime, and thus outside the stellar plasma corotation region, we make an estimate also for the corotation case. HD 209458 has been observed to have a rotational velocity of 4.4 km/s, which corresponds to a rotational period of ≈ 11.5 days (Mazeh et al. 2000). This gives a corotational velocity of the plasma at 0.047 AU of v_cor ≈ 2.7 × 10^6 cm s^-1. Taking into account also the Keplerian orbital speed v_orb ≈ 1.4 × 10^7 cm s^-1, this corresponds to a plasma flow velocity in the vicinity of HD 209458b of v_flow ≈ 1.2 × 10^7 cm s^-1. Substituting this into Equation 3 instead of v_sw, one obtains an estimate of 4πI ≈ 3.6 × 10^8 cm^-2 s^-1, which is still 18-180 times the observed Jovian value.

To estimate the aurora size of HD 209458b we follow the approach of Vidotto et al. (2011). The fractional area of the planetary surface that has open magnetic field lines, counting both the north and south auroral caps, is

    F = 1 − cos α_0,  with α_0 = arcsin[(R_p/R_s)^(1/2)],    (4)

where R_p is the radius of the planet and R_s is the magnetosphere stand-off distance at the substellar point. Assuming R_s = 2.9 R_p, as estimated by Kislyakova et al. (2014), we obtain α_0 ≈ 0.63 and 1 − cos α_0 ≈ 0.19. This gives an aurora size of A ≈ 2.17 × 10^20 cm^2, or ≈ 217 times the Jovian aurora (A ≈ 10^18 cm^2).

Now we can estimate the power of the soft X-ray emission from HD 209458b in both the corotation and non-corotation regimes. For simplicity, we assume the energy of each emitted X-ray photon to be 0.3 keV. Using the solar value of f = 10^-3, we estimate the total X-ray power of HD 209458b to be ≈ 1.3 × 10^20 erg s^-1 in the non-corotation regime (point C in Fig. 1) and ≈ 2.3 × 10^19 erg s^-1 in the corotation regime (Fig. 1, point D). Note that these values may still represent a lower limit. If charge exchange occurs not only in the auroral regions of HD 209458b, but in the whole hemisphere with the radius R_s, the values should be multiplied by a factor of ≈ 88, which is the ratio of the interaction area sizes. This gives ≈ 1.1 × 10^22 erg s^-1 in the non-corotation regime (point A in Fig. 1) and ≈ 2.0 × 10^21 erg s^-1 in the corotation regime (Fig. 1, point B). Since the atmospheres of hot Jupiters in general, and of HD 209458b in particular, are highly inflated and are believed to extend beyond the magnetosphere (Kislyakova et al. 2014), this is a more realistic case than interaction only in the auroral region (Jupiter type). The X-ray production can be even larger if the region outside the magnetosphere (the volume between the magnetopause and the bow shock) is included (Robertson & Cravens 2003).
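The quoted powers follow from multiplying the photon flux of Equation 3 by the emitting area and the photon energy. The sketch below reproduces points C and A for the non-corotation case; the wind density is again the illustrative ≈ 5 × 10^3 cm^-3 assumed above.

```python
# Sketch: total SWCX X-ray power, P = (4*pi*I) * A * E_ph, for the
# non-corotation case of HD 209458b (points C and A in Fig. 1).
# The wind density entering 4*pi*I is an assumed illustrative value.
E_PH = 0.3 * 1.602e-9                  # 0.3 keV per photon, in erg
FOUR_PI_I = 2 * 3 * 1e-3 * 5e3 * 4e7   # Eq. (3), photons cm^-2 s^-1
AURORA = 2.17e20                       # auroral area, cm^2 (Eq. 4)

p_aurora = FOUR_PI_I * AURORA * E_PH
print(f"auroral power (point C):    {p_aurora:.1e} erg s^-1")        # ~1.3e20
print(f"hemisphere power (point A): {88 * p_aurora:.1e} erg s^-1")   # ~1.1e22
```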
Contribution of stellar X-ray photon scattering

Although only a small fraction of the incident stellar X-rays is reflected by the planetary atmosphere, in the Solar System this mechanism is known to contribute to the total soft X-ray luminosity of planets; it dominates, for example, the X-rays from Venus. Cravens et al. (2006) showed that the scattering albedo for the outer planets is quite small, equaling 10^-3 at 3 nm. We assume this albedo for HD 209458b as a crude estimate. The total X-ray luminosity of HD 209458 was first observed to be log L_X ≈ 27.02 ± 0.2 erg s^-1 (Kashyap et al. 2008). However, it was later reported that this result may represent the luminosity of a nearby star, and a new upper limit of log L_X ≲ 26.12 erg s^-1 was derived (Sanz-Forcada et al. 2010). Both values are close to the solar X-ray luminosity of log L_X⊙ ≈ 26-27 erg s^-1. The value of log L_X = 26.12 erg s^-1 yields an X-ray flux of ≈ 39 erg cm^-2 s^-1 in the vicinity of HD 209458b, and an X-ray luminosity of reflected soft X-rays from the planet of ≈ 2.3 × 10^19 erg s^-1, which is comparable only to our lowest estimate for the X-ray flux produced by the SWCX mechanism (point D in Fig. 1).

OBSERVABILITY

In order to see whether the soft X-ray exoplanetary emission described above could be observable with currently available facilities, we have considered the maximum X-ray luminosity estimated for HD 209458b (i.e., 1.1 × 10^22 erg s^-1) and that of its host star (log L_X = 27.02 erg s^-1; Kashyap et al. 2008). Note that Sanz-Forcada et al. (2010) reported only an upper limit of log L_X < 26.12 erg s^-1 on the X-ray luminosity of HD 209458 and concluded that Kashyap et al. (2008) might have confused the star with a nearby object. For our purposes we are interested in the best possible scenario and therefore assume the X-ray luminosity given by Kashyap et al. (2008); a lower stellar X-ray luminosity would make the planetary X-ray emission even harder to detect than described here. By rescaling both luminosities to the distance of HD 209458 (d = 49.6 pc; van Leeuwen 2007), we obtain a maximum X-ray flux of HD 209458b of 3.9 × 10^-20 erg cm^-2 s^-1, while from the star we get an X-ray flux of 3.6 × 10^-15 erg cm^-2 s^-1 (the slight difference from the stellar X-ray flux given by Kashyap et al. (2008) is due to the use of a different distance), resulting in a difference of about 5 orders of magnitude. The planetary X-ray emission could be observed using the secondary transit, as commonly done in the infrared, for example. The ratio between the in-transit and out-of-transit fluxes is expected to be of the order of 10^-5, which would require a signal-to-noise ratio (S/N) of the measurements of the order of 10^5 to be detected. Such a high precision is currently reached in the optical and infrared bands (mostly with space observations), but it is prohibitive at X-ray wavelengths. To highlight this, we used the publicly available count rate simulator for the XMM-Newton telescope which, among the facilities currently available, has the largest efficiency in the soft X-rays. Taking into account the 0.1-1.0 keV band, a plasma with a temperature of 10^6 K, and the count rate given for the pn detector and the "thin" filter, we obtained a count rate of 3.9 × 10^-3 counts s^-1 for the X-ray emission of HD 209458. As a result, the S/N obtained by exposing for 10^3 seconds is about 2, and it would require several thousand years of exposure time to reach the S/N required to detect the planetary X-ray emission. The situation might improve slightly if the planetary X-ray emission has a different spectral behaviour from that of the star, but the detection would probably remain unfeasible. Here we have not considered the intrinsic stellar X-ray variability, which will further hamper the detection of the planetary X-ray emission.
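The exposure-time argument is simple photon counting; the toy calculation below reproduces the S/N ≈ 2 in 1 ks quoted above assuming pure Poisson statistics. Under this naive quadratic scaling the required exposure comes out even longer than the several thousand years quoted in the text, which presumably rests on a less conservative criterion; either way, the observation is hopelessly out of reach.

```python
# Back-of-the-envelope observability check assuming Poisson statistics:
# S/N ~ sqrt(count_rate * exposure). The count rate and the target S/N
# are taken from the text; the scaling itself is a naive assumption.
import math

count_rate = 3.9e-3                     # counts s^-1 (XMM pn, 0.1-1.0 keV)
print(f"S/N in 1 ks: {math.sqrt(count_rate * 1e3):.1f}")   # ~2, as quoted

target_snr = 1e5                        # needed for the ~1e-5 transit dip
t_needed = target_snr**2 / count_rate   # seconds
print(f"Exposure for S/N = 1e5: {t_needed:.1e} s "
      f"({t_needed / 3.15e7:.0f} years)")
```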
We calculate the solar slow wind parameters as a function of distance from the star using a 1D hydrodynamic wind model that is constrained by in situ spacecraft measurements of the real solar wind. The model was developed by Johnstone et al. (2015b) and provides a very good description of the real solar wind outside of the solar corona. Although little is known about the properties of winds from other stars, it is suspected that more active stars have mass fluxes that are significantly higher than the mass flux of the current solar wind (Wood et al. 2005; Holzwarth & Jardine 2007; Suzuki et al. 2013). To scale the slow solar wind model to EK Dra, we use the scaling relation for the mass loss rate derived by Johnstone et al. (2015a) and the parameters for EK Dra given by Güdel (2007). We find a mass loss rate, and therefore corresponding values of n_sw·v_sw, for EK Dra that are approximately a factor of 15 higher than in the current solar wind. Based on the example of the solar wind, we might expect that the abundances of heavy ions are similar in the corona and in the wind. While it is known that the coronal abundances are correlated with coronal activity for Sun-like stars, the coronal abundances of the most active stars differ from the solar values only by approximately a factor of two (Telleschi et al. 2005). Since this is an insignificant difference compared to all other uncertainties, for simplicity we assume a solar value of f ≈ 10^-3.

Fig. 1. Dependence of the soft X-ray emission produced by the SWCX mechanism on the orbital semi-major axis of an HD 209458b-like giant planet. Left panel: the calculation using the stellar wind parameters f, n_sw, v_sw of the current solar wind (4.5 Gyr old G2V dwarf). The bright shaded area shows the locations where planets could potentially be tidally locked and the dark shaded area shows where planets will be tidally locked. The letters A, B, C, and D mark the estimates for HD 209458b. The upper and lower green lines are estimates for emission from the auroral region, with the aurora size calculated according to Eq. 4 and for a fixed Jovian aurora of A = 10^18 cm^2, respectively. The upper and lower red lines are estimates for a tidally locked planet for charge exchange in the whole hemisphere restricted by R_s and in the auroral regions only, respectively. The upper and lower blue lines illustrate the estimates for a tidally locked planet in a corotation regime. The vertical dashed blue lines show the semi-major axes of some known hot Jupiters orbiting G dwarf stars. The point labelled "Jupiter" marks the observed soft X-ray emission from the Jovian aurora, and "Jupiter (SWCX)" stands for the intensity produced via the SWCX mechanism only. Right panel: the same as the left panel, but assuming the stellar wind parameters of the young solar analogue EK Dra (100 Myr old G1.5V dwarf).

Magnetic moments of tidally locked gas giants are believed to be smaller than the Jovian value M_J because of their slower rotation (Grießmeier et al. 2004; Khodachenko et al. 2012). For HD 209458b, this hypothesis was lately also supported by a modelling result based on the Lyα transit observations (Kislyakova et al. 2014), which predicted a planetary magnetic moment of M_p ≈ 0.1 M_J. Although a prediction of magnetic moments of hot Jupiters ≥ M_J also exists (Christensen et al. 2009), in the present study we assume a moment value in the range of ≈ 0.05−0.5 M_J (see Fig. 2 in Khodachenko et al. 2012). The size of the aurora is calculated according to Eq.
4 and the magnetopause standoff distance following the relation (Baumjohann & Treumann 1996)

R_s = [µ0 f_0^2 M_p^2 / (8π^2 ρ_sw v_sw^2)]^(1/6),

where µ0 is the magnetic permeability of free space and f_0 ≈ 1.22 is the magnetosphere form factor. Fig. 1 presents the dependence of the emitted soft X-ray power on the orbital distance of the planet. The letters mark the emission levels for HD 209458b estimated above. We should note that we did not take into account the unknown rotation rate of the host star, which leads to an overestimate of the plasma flow speed and, correspondingly, of the emission level in the corotation regime. The letter marks for HD 209458b do not lie exactly on the lines because of the difference between the simple estimates used here for R_s and M_p and the values obtained via comprehensive modelling by Kislyakova et al. (2014). As a consequence, these plots describe the behaviour of the soft X-ray emission due to the SWCX mechanism only qualitatively. For every particular planet, an individual consideration should be made similar to the one above for HD 209458b. The main conclusion of our results is that the soft X-ray emission is highest for the closest hot Jupiters and strongly depends on the size of the interaction area (see the difference between the aurora and the whole-hemisphere case: the lower and upper red and blue lines, respectively). The simple Eq. 4 (Vidotto et al. 2011) can be used only for hot Jupiters and yields a significant overestimate for Jupiter (see the two green lines). For gas giants on wide orbits that are not tidally locked, the lower green line presents the most plausible estimate. We should also note that the corotation regime probably breaks down closer to the star than shown in Fig. 1. However, this is not easy to constrain because of the many unknown parameters. Soft X-ray emission from exoplanets orbiting a younger star with a denser stellar wind is always stronger than the emission from a planet embedded in the current solar wind (Eq. 3), which is confirmed by Fig. 1b, simply because the number of emitted photons is proportional to the wind mass flux assuming the same f.

CONCLUSIONS

In this work, we presented a simple estimate of the possible X-ray emission from close-in gas giants emitted due to the SWCX mechanism. We have shown for the example of HD 209458b that this mechanism alone can be responsible for an X-ray emission intensity of the order of 10^22 erg s^-1, which is ≈ 10^6 times higher than the X-ray emission from Jupiter. We have discussed the possibility of observing the soft X-ray flux from close-in extrasolar giant planets and have shown that although this emission exceeds the intensity of the Jovian soft X-ray emission by several orders of magnitude, it is unobservable with present-day facilities because of the large distances to the systems. The main conclusion of the study is that hot Jupiters should be bright X-ray sources in comparison to the Solar system giant planets. The spectrum of this emission as well as the influence of other X-ray producing mechanisms should be the subject of future study. This study was carried out with the support by the FWF NFN project S116601-N16 "Pathways to Habitability: From Disk to Active Stars, Planets and Life" and the related subprojects S116 604-N16 and S116 607-N16. L.F. acknowledges financial support from the Alexander von Humboldt Foundation. L.F. thanks Lorenzo Lovisari for useful discussions. V.Z. acknowledges support from MES RF project 14.Z50.31.0007 "Lab. Astrophysics".
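As a supplementary illustration of the scaling behind Fig. 1, the standoff relation above can be evaluated numerically. The wind values below (n ≈ 10 cm^-3 at 1 AU scaled as a^-2, v = 400 km/s) and the Jovian moment M_J ≈ 1.56 × 10^27 A m^2 are illustrative assumptions only, not the parameters of the Johnstone et al. wind model.

import numpy as np

MU0 = 4 * np.pi * 1e-7   # magnetic permeability of free space, SI
M_J = 1.56e27            # Jovian magnetic moment, A m^2 (assumed reference value)
M_P = 1.67e-27           # proton mass, kg
R_JUP = 7.15e7           # Jupiter radius, m

def standoff(Mp, n_sw, v_sw, f0=1.22):
    """Magnetopause standoff distance (m) from dipole/ram-pressure balance:
    R_s = [mu0 * f0^2 * Mp^2 / (8 pi^2 rho v^2)]^(1/6)."""
    rho = n_sw * M_P
    return (MU0 * f0**2 * Mp**2 / (8 * np.pi**2 * rho * v_sw**2)) ** (1.0 / 6.0)

# Illustrative slow-wind values: n ~ 10 cm^-3 (1e7 m^-3) at 1 AU, scaled as a^-2
n_1AU, v = 1e7, 4e5
for a in (0.047, 0.1, 1.0, 5.2):
    n = n_1AU / a**2
    Rs = standoff(0.1 * M_J, n, v)   # M_p = 0.1 M_J, as assumed for HD 209458b
    print(f"a = {a:5.3f} AU: R_s = {Rs / R_JUP:.1f} R_J")

With these particular inputs the standoff at 0.047 AU comes out near 3.9 R_J, i.e. ≈ 2.9 planetary radii for an assumed R_p ≈ 1.35 R_J, close to the value adopted above for HD 209458b, while growing slowly toward wider orbits as the wind ram pressure falls.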
2015-03-24T09:14:07.000Z
2015-01-23T00:00:00.000
{ "year": 2015, "sha1": "3cb42fdf38c73c710f8a0e6371a414b44ad21e9d", "oa_license": null, "oa_url": "https://iopscience.iop.org/article/10.1088/2041-8205/799/2/L15/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "3cb42fdf38c73c710f8a0e6371a414b44ad21e9d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
25424185
pes2o/s2orc
v3-fos-license
Dissociation of the Disilatricyclic Diallylic Dianion [(C4Ph4SiMe)2]^2− to the Silole Anion [MeSiC4Ph4]− by Halide Ion Coordination or Halide Ion Nucleophilic Substitution at the Silicon Atom

The reductive cleavage of the Si-Si bond in 1,1-bis(1-methyl-2,3,4,5-tetraphenyl-1-silacyclopentadiene) [(C4Ph4SiMe)2] (1) with either Li or Na in THF gives the silole anion [MeSiC4Ph4]− (2). The head-to-tail dimerization of the silole anion 2 gives crystals of the disilatricyclic diallylic dianion [(C4Ph4SiMe)2]^2− (3). The derivatization of 3 (crystals) with bromoethane (gas) under reduced pressure provides [(MeSiC4Ph4Et)2] (4) quantitatively. The reverse addition of 3 in THF to trimethylsilyl chloride, hydrogen chloride, and bromoethane in THF gives 1-methyl-1-trimethylsilyl-1-silole [Me3SiMeSiC4Ph4] (6), 1-methyl-2,3,4,5-tetraphenyl-1-silacyclo-3-pentenyl-1-methyl-1-silole [C4Ph4H2SiMe-MeSiC4Ph4] (7), and 1-methyl-2,5-diethyl-2,3,4,5-tetraphenyl-1-silacyclo-3-pentenyl-1-methyl-1-silole [C4Ph4Et2SiMe-MeSiC4Ph4] (8), respectively. The reaction products unambiguously suggest that the silole anion [MeSiC4Ph4]− is generated by coordination of the chloride ion at the silicon atom in 3 or by the nucleophilic substitution of either chloride or bromide ion at one of the two silicon atoms in 3. The quenching reaction of 3 dissolved in THF with water gives 1,2,3,4-tetraphenyl-2-butene, the disiloxane of 1-methyl-2,3,4,5-tetraphenyl-1-silacyclo-3-pentenyl [O(MeSiC4Ph4)2] (10) and methyl silicate. Interestingly, the addition of trimethylchlorosilane to 3 dissolved in THF provides only 1,1-bis(1-methylsilole) [(MeSiC4Ph4)2] (1) along with hexamethyldisilane (Scheme 2). Moreover, the reverse addition of 3b in THF to trimethylchlorosilane in THF for an extended reaction time gives [Me(Me3Si)SiC4Ph4] (6) in 65% yield (Scheme 2). These results indicate that the disilatricyclic diallylic dianion 3b is dissociated into the silole anions 2 in THF solution. A similar dissociation of the gallole dimer [(C4Me4Ga-t-Bu)2] in benzene was previously reported [36]. The minor product 4 unambiguously indicates that some of the unique disilatricyclic structure in 3 is sustained during the reaction. However, the major products, the disilanes 7 and 8, clearly suggest that in THF the silole anion 2 and the silyl halide [R2Ph4C4SiMeX] (R = H, X = Cl for 7; R = Et, X = Br for 8) are generated in a 1:1 ratio from 3, as indicated in Scheme 5.

Scheme 5. Suggested mechanism for the formation of the disilanes (7 and 8).

The coupling reaction of an allylic anion in one of the two 5-membered rings with RX releases the halide ion of RX above the C4Si ring, and this halide ion is easily associated with the vicinal silicon atom in 3. This association easily induces ring opening in the highly strained 1,3-disilacyclobutane via a pentacoordinated state, since the angles of the C-Si-C bonds in both the C4Si ring and the 1,3-disilacyclobutane ring are nearly 90 degrees [29]. The nucleophilic substitution at the silicon atom produces a [Si=C] double bond in the other 5-membered ring to generate the novel silole anion [C4Ph4SiMe]− (9) via a pentacoordinated anionic intermediate, in which the methyl group and one Si-C bond of the C4Si ring occupy pseudo-axial positions and the two bonds of X-Si-C and the other Si-C bond of the C4Si ring occupy pseudo-equatorial positions.
Simultaneously, the pushing of the electron pair in the other Si-C bond of the 1,3-disilacyclobutane ring onto the Cα carbon produces an allylic carbanion, which immediately reacts with RX. The just-generated silole anion 9 lies beneath the silicon atom of the C4Si ring, so coordination of the [Si=C] double bond to the silicon atom above the C4Si ring would be an alternative mechanism involving a pseudo-pentacoordinated intermediate to stabilize the [Si=C] moiety of the silole anion 9, such as in (η5-C5Me5)(PR3)RuH(η2-CH2=SiPh2) [37,38] and H2(PMe3)3Ru(η2-CH2=SiMe2) [39]. However, the preferred mechanism is a coupling reaction of the silole anion 9 with the halide. Then the nucleophilic substitution at the silicon provides the Si-Si bond of 7 or 8, while the allylic anion is rearranged to form a C=C double bond in the other C4Si ring to give the silole moiety [MeSiC4Ph4]. Similar pseudo-pentacoordinated silole intermediates lacking highly electronegative atoms on the silicon atom have been proposed, and are produced by the apical attack of methyl lithium, diphenylmethylsilyl lithium [40,41], sodium bis(trimethylsilyl)amide [42], and potassium hydride [35] on the less hindered SiC4 ring. Even the neutral pseudo-pentacoordinate silole [Me(Me2NNp)SiC4Ph4] has been reported [44]. In those pentacoordinated anionic intermediates, it was suggested that the SiC4 ring occupies one axial and one equatorial position during its pseudorotation and substitution reactions [45]. For some reason, the reverse addition of 3 in THF to an excess of trimethylchlorosilane in THF produces chloride ions from the trimethylchlorosilane. Then, the association of the chloride ion with the silicon atom of 3 induces ring-opening dissociation of the highly strained 1,3-disilacyclobutane to form [Si=C] bonds in two silole anions 9 via a pentacoordinated state, in which the methyl group and one Si-C bond of the C4Si ring occupy pseudo-axial positions and the two bonds of X-Si-C and the other Si-C bond of the C4Si ring occupy pseudo-equatorial positions. The cation coordination then enhances the stability by delocalization in the silole ring [32]. The silole anion 9 is not consumed instantly by the reaction with trimethylchlorosilane to produce [Me(Me3Si)SiC4Ph4] (6) due to the bulkiness of the trimethylsilyl group [35]. Therefore, 6 could be obtained only from the reverse addition of 3 in THF to an excess of trimethylchlorosilane with an extended reaction time (Scheme 6).

Scheme 6. Suggested mechanism for the formation of 1.

Only 1 is obtained when trimethylchlorosilane is added to 3 in THF, since the reaction rate of the silole anion 9 with trimethylchlorosilane is much slower than that of the anion 9 with the Si-Si bond of 6. Alternatively, the silole anion 9 is persistent in the solution for a while due to the lower reactivity of the bulky trimethylchlorosilane. A similar result, in the form of a radical reaction of 1,1-bis(1-phenylsilole) [(PhSiC4Ph4)2], has been reported by Jutzi and Karl [46]. The quenching reaction of 3 dissolved in THF with water gives no 1,2,3,4-tetraphenyl-1,3-butadiene, which should be obtained if the 1-methyl-2,3,4,5-tetraphenyl-1-silacyclopentadienyl anion were present. The disiloxane 10 is the condensation product of 1-methyl-1-hydroxy-2,3,4,5-tetraphenyl-1-silacyclo-3-pentene [MeHOSiC4Ph4H2] (11), which is produced from the protonation of the allylic anions and the hydrolysis cleavage of the two Si-C bonds in the 1,3-disilacyclobutane ring of 3.
1,2,3,4-Tetraphenyl-2-butene and methyl silicate are the hydrolysis products of 10 and/or 11. These reaction products unambiguously indicate that there is no 1-methyl-2,3,4,5-tetraphenyl-1-silacyclopentadienyl moiety, but rather the disilatricyclic diallylic dianion species, in the THF solution of 3 and/or the silole anion 9. However, the self-dissociation of 3 in THF to the silole anion 9 is not plausible, since there is no derivatization at the silicon atom from the reaction of 3 with hydrogen chloride. Therefore, it is suggested that the disilatricyclic diallylic dianion of 3 is sustained in THF solutions without halide ions to give its hydrolysis products (Scheme 8). An alternative mechanism would involve a bent η2-chloride bridge between the two silicon atoms of the 1,3-disilacyclobutane in 3 to lead to the silole anion 9 [48]. From the reaction products, the resonance form A (Scheme 9) can be proposed as a prominent contributor to the silole anion 2. The other four resonance forms are reduced to two sets of symmetric resonance forms (B and E, C and D), which have the allylic carbanion and silaethene moiety. The coordination of the cation, especially lithium, to the silole anion 2 enhances the delocalization in the silole ring [32]; the negative charge or electron density of the silicon atom then moves to the carbons in the ring to induce some silicon-carbon double-bond character. Consequently, the silicon atom becomes more electrophilic or less nucleophilic than that of the silyl anion A. It is reasoned that the silole anion 2 has considerable Si=C double bond character and dimerizes by head-to-tail [2 + 2] cycloaddition, as is known for silenes [49]. A t-Bu [25] or Si(SiMe3)3 [26] substituent on the silicon hinders the dimerization; in addition, the electronic effect of the substituents on the silicon and on the carbons may be critical for the planarity of the silole anion to induce a stable 2-silaallylic anion in it (Scheme 9). It is noteworthy that the novel silole anion 9 coincides with resonance forms B and D or E and C of the silole anion 2 and is an analogue of 2H- or 3H-silole.

General

All reactions were performed under a nitrogen atmosphere using standard Schlenk techniques. Air-sensitive reagents were transferred in a nitrogen-filled glove box. THF and diethyl ether were distilled from sodium benzophenone ketyl under nitrogen. Pentane was stirred over concentrated H2SO4 and distilled from CaH2. NMR spectra were recorded on Bruker WP SY and Bruker AM 200 FT-NMR spectrometers. MS data were obtained on a DMX 300 mass spectrometer. IR spectra were recorded as KBr pellets on a Shimadzu IR 440, and melting points were measured on a Wagner & Meunz Co. capillary-type apparatus. Elemental analyses were done using a Yanaco elemental analyzer at the Analytic Center of the College of Engineering, Seoul National University.

[(MeSiC4Ph4)2] (1): Stirring of 1-chloro-1-methyl-2,3,4,5-tetraphenyl-1-silacyclopentadiene [MeClSiC4Ph4] (4.58 g, 10 mmol) and sodium (0.23 g, 10 mmol) in THF (170 mL) at room temperature for 10 hrs produced a pale green precipitate. After evaporating THF from the mixture, the remaining solid was treated with water and dichloromethane. Removal of dichloromethane under reduced pressure from the organic suspension gave a pale green solid. The solid was washed with ether for purification. It was identified by the comparison of its analytical data with an authentic sample [50]. Yield: 3.69 g (90%); mp 322-329 °C (lit.
[50] mp > 300 °C).

[(MeSiC4Ph4Me)2] (5): Silole anion crystals (1.42 g, 1.7 mmol for the lithium salt; 1.48 g, 1.70 mmol for the sodium salt) were exposed to methyl iodide vapor for 2 hrs under reduced pressure. After evacuating the unreacted methyl iodide, the residual yellow solid was extracted with ether. Concentration and storage at −15 °C for 1 day gave yellow crystals of 5. Yield: 0.85 g (60%) for 3a, 1.22 g (87%) for 3b; mp 334−338 °C (lit. [29] 336−339 °C). The NMR data of 5 agreed with those reported earlier [30].

[(MeSiC4Ph4H2)2O] (10): Aqueous HCl (0.10 N) was added to a THF solution of 3 with stirring at room temperature until the pH of the mixture reached neutrality. Filtration of the hydrolyzed mixture gave a white solid, which was insoluble in organic solvents and water, did not melt when heated to over 300 °C, and showed a broad band at 1000−1100 cm^-1 in the IR spectrum. After THF was removed from the filtrate, the residue was extracted with diethyl ether. The concentrated ether solution was kept at −20 °C for 1 day and pale green crystals of 10 were obtained. The mother solution was concentrated and, after standing at −20 °C for 1 day, it yielded colorless crystals of 1,2,3,4-tetraphenyl-2-butene (yield: 0.72 g, 40%), whose spectral data agreed with those reported earlier [9]. Yield (10)

Conclusions

The pathway for generation of the silole anion 9 is dependent on the reactivity of the alkyl halide used. Two silole anions 9 are generated from the disilatricyclic diallylic
2014-10-01T00:00:00.000Z
2011-10-01T00:00:00.000
{ "year": 2011, "sha1": "ff474265b225dfab9a5a08975dcde3588c2d32a3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/16/10/8451/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ff474265b225dfab9a5a08975dcde3588c2d32a3", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
110052241
pes2o/s2orc
v3-fos-license
Simulation of an Improved Microactuator with Discrete MSM Elements

Magnetic Shape Memory (MSM) alloys are a new class of "smart" materials. In the martensite state, they exhibit a reversible strain due to a reorientation of twin variants, based on twin boundary motion driven by an external magnetic field. This effect allows for the development of linear microactuators. This work presents the simulation results for the fabrication of a microactuator based on an MSM alloy with an optimized design. A stator element consists of a NiFe45/55 flux guide, two poles, and double-layer Cu coils wound around each pole for generating the magnetic field. The MSM material applied is NiMnGa. The integrated microactuator is subjected to dynamic simulation, using a "checkerboard" pattern in which the magnetic properties are switched locally by changing the relative permeability µr. The model is described with the Ansys Parametric Design Language (APDL). Design, modeling, and simulation of the magnetic system, including the MSM material, are conducted by Finite Element Method (FEM) analysis using the software tool ANSYS™.

Introduction

Magnetic Shape Memory (MSM) alloys are recognized as promising and high-performance materials in the field of Micro Electro-mechanical System (MEMS) applications. The recent progress in designing a new class of MSM alloys is based on the martensite-martensite twin boundary motion driven by a magnetic field [1]. Material exposed to an external magnetic field shows a Magnetic Field Induced Strain (MFIS). The field-induced martensite twin reorientation is possible in materials with high magnetocrystalline anisotropy energy (MEA) and low energy of twin boundary motion. At 5 to 10 percent, the MFIS observed in MSM alloys is substantial, which allows this effect to be used in linear microactuators. For the investigation of a microactuator with discrete MSM elements, the Institute for Microtechnology (imt) at the Leibniz Universitaet Hannover received MSM stripes (NiMnGa) from the Hahn-Meitner Institute in Berlin. The samples feature a relative permeability µr of 2 before the reorientation and a permeability µr of 6 after the reorientation. Based on these data, a microactuator was simulated and optimized. In the first stage, the magnetic system was designed and modelled using the Finite Element Method (FEM). The simulations were executed applying the software tool ANSYS™. FEM analyses were used to determine the optimal design of the magnetic microsystem by changing the magnetic properties of the MSM material. The optimal microactuator consists of pairs of U-shaped thin-film cores, with each pole carrying a thin-film coil. The coil system consists of a double-layer coil, where the top coil straddles the left pole and the bottom coil the right pole. The fabrication steps and characterization of the optimal microactuator with discrete NiMnGa MSM bulk material are presented in [10]. This paper describes the 2-D and 3-D modeling as well as the simulation results of an improved MSM microactuator using thin-film technology for the stator fabrication and discrete MSM stripes as the actuating element. The goal of these simulations is to maximize the fraction of the microactuator's cross section exposed to the external magnetic field. In this model, the coil system consists of a left and a right double-layer coil with two turns each. The improved design differs from the first research stage in the coil arrangement.
By straddling the left and the right coil around the left and the right pole, respectively, the exposed fraction of the cross section can be increased. Furthermore, dynamic modeling of the improved microactuator was conducted. To model the local switch of the magnetic properties after a change between the MSM material's twin variants, a "checkerboard" pattern was used. It allows the material's relative permeability µr to be allocated locally.

Modeling of Magnetic Microsystems

Miniaturized magnetic actuators are key components in microsystems. Microactuation based on the electromagnetic principle provides rather high forces and high frequencies, and features a low driving voltage [11]. Optimizing the fraction of the cross section exposed to the external magnetic field in a magnetic microactuator is a key requirement for a highly efficient microactuator. For designing an improved microactuator, complex electromagnetic simulations are needed. An approach to design magnetic microactuators with discrete MSM elements is shown in Fig. 1. The magnetic microactuator with integrated MSM elements consists of a NiFe45/55 flux guide, two poles, and double-layer Cu coils wound around each pole for generating the magnetic field. For flux guides, a magnetic material with a high permeability µr (featuring a low magnetic reluctance) is used, while the air gap represents an element with a high reluctance, typically required to create force and motion. The first step of modeling magnetic microsystems is an analytical approach for defining a preliminary design with an analysis of the components. In our case, the components of the excitation system are a NiFe45/55 flux guide and Cu coils as the basic elements. The air gap between the basic elements and the discrete MSM elements is 5 µm. The width of the MSM stripes is 100 µm. Each system consists of double-layer spiral coils with 2 x 2 turns featuring an aspect ratio of 1.5 to 1. All geometries are presented in Table 1 and in Fig. 2. The basic actuator components are joined to create a complete actuator and a magnetic circuit. The next step was the generation of an FEM model. Fig. 2 shows the 2-D model of the MSM actuator used for the simulations. Table 2 presents the material properties for all dimensions that were used in the FEM simulations. The microactuator with discrete MSM elements was simulated using the software tool ANSYS™. When applying an MSM material, the critical magnetic field strength H_crit is an important parameter for the simulations. For NiMnGa, the magnetic field H_crit needed to initiate a switch between the twin variants is 50 kA/m. H_crit was determined by Vibrating Sample Magnetometer (VSM) measurements [12]. During the whole actuation process, the material remains in a martensitic state. For the stator coils, a nominal current density J in impulse mode of 2.0 × 10^9 A/m^2 is selected and applied. This value is dictated by the current-carrying capability of the selected micro coil cross section. To further investigate the microactuator behavior, a 3-D simulation was conducted. The 3-D simulation yields less favorable conditions than the 2-D simulation, which is typical. For the original design, the fraction of the cross section reaching the critical field strength H_crit is 40 percent [13]. For the new optimized design, the fraction of the cross section increases to 48 percent. Fig. 4 illustrates the simulation results.
Dynamical Simulation

The presented dynamical simulation of the improved microactuator with discrete MSM elements is used to approximately calculate the mechanical response (elongation of the actuator) as a function of time for an increasing magnetic field. It was conducted with the Ansys Parametric Design Language (APDL). The first of the ANSYS™ levels is the preprocessor, which executes the process modeling, defines the material properties, and generates the finite element model. Building a finite element model requires defining the element types, the material properties, and the model geometry. For the simulation, PLANE13 (2-D coupled-field solid) is used as the element type. The element has nonlinear magnetic capabilities for modeling the B-H curve, the relative permeability µr, and the demagnetization curves. After defining the element type, the material properties were determined. As mentioned before, to allow a local allocation of magnetic properties, a "checkerboard" system was used; it consists of 30 quadrants. For any of the "checkerboard" fields, the relative permeability µr can be chosen individually, thus representing the MSM material's actual state. The next step was defining the boundary conditions of the model. A major requirement for an improved microactuator is providing a sufficient magnetic field H_crit (50 kA/m for NiMnGa) in the vertical direction in the MSM area to achieve a switch from one twin variant to the other. This field, which generates a reorientation of the MSM material, was determined. Next, the model was solved and the results were postprocessed. Of particular interest is the implementation of a new simulation method indicating the change of the magnetic properties. This approach allows the relative permeability µr of the MSM material to be changed when switching between the twin variants. In this case, the magnetic permeability µr in the unmagnetized state is changed by the simulation tool automatically, as soon as the critical magnetic field H_crit in the exposed area of the MSM material is exceeded. The simulations show that, in a defined area of the MSM "checkerboard", the exposed area grows. In this case, the domain in the material that expands due to the external magnetic field spreads over all areas. In the last step of the dynamic simulations, the field strength H exceeds H_crit in all quadrants. Finally, the results can be observed and plotted. The algorithm to calculate the dynamic simulations is assembled with APDL. The arguments of ANSYS™ commands can use arithmetic equations and functions. The ANSYS™ software tool works with FORTRAN™ functions. In this case, the model is generated and solved automatically.

Conclusions

Previous work has demonstrated the importance of a dynamic FEM analysis step for the microactuator design. The main challenge was to determine whether a sufficient magnetic field strength required for the change from one twin variant to the other (50 kA/m for NiMnGa) was reached. The next goal was to maximize the fraction of the cross section of the MSM element exposed to the magnetic field, which was accomplished. Using a 3-D simulation, an increase of the fraction of the cross section exposed to an external magnetic field for the optimized microactuator using NiMnGa as the MSM element from 40 percent to 48 percent could be accomplished. Furthermore, by choosing a dynamical approach for the simulation of magnetic microactuators, a local change in magnetic properties could be modeled.
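The switching loop described above (solve the field, switch µr quadrant by quadrant once H exceeds H_crit, re-solve) is independent of the FEM package. The following Python sketch mimics the APDL logic; the field-solver function is a placeholder standing in for the actual ANSYS solution step, and the excitation profile inside it is invented purely for illustration.

import numpy as np

H_CRIT = 50e3                      # critical field for NiMnGa twin switching, A/m
MU_BEFORE, MU_AFTER = 2.0, 6.0     # relative permeability before/after reorientation

# 30 "checkerboard" quadrants, each carrying its own relative permeability
mu_r = np.full(30, MU_BEFORE)

def solve_field(mu_r):
    """Placeholder for the FEM solve (in the paper: ANSYS/APDL with PLANE13
    elements). Returns the field magnitude H in each quadrant; here the field
    is faked so that it rises as more quadrants have already switched."""
    base = np.linspace(60e3, 30e3, mu_r.size)            # illustrative excitation
    boost = 1.0 + 0.1 * (mu_r == MU_AFTER).cumsum() / mu_r.size
    return base * boost

# Iterate: solve, switch every quadrant exceeding H_crit, repeat until stable
for step in range(100):
    H = solve_field(mu_r)
    switch = (H > H_CRIT) & (mu_r == MU_BEFORE)
    if not switch.any():
        break                                           # no further reorientation
    mu_r[switch] = MU_AFTER
    print(f"step {step}: {switch.sum()} quadrant(s) switched, "
          f"{(mu_r == MU_AFTER).sum()}/30 reoriented")

Because switching a quadrant raises its permeability and so redistributes the flux, the reoriented region grows from step to step until either all quadrants exceed H_crit or the field everywhere else stays below the threshold, which is exactly the qualitative behaviour reported for the dynamic ANSYS™ runs.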
2019-04-13T13:07:08.612Z
2009-12-01T00:00:00.000
{ "year": 2009, "sha1": "28ede60b285e084fb65c6c6e41b3ec7e6fe3c059", "oa_license": "CCBY", "oa_url": "https://www.scientific.net/MSF.635.181.pdf", "oa_status": "HYBRID", "pdf_src": "ScientificNet", "pdf_hash": "6821c81b9ff30a35aef1e323d9dc2bedd73aa935", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
134659747
pes2o/s2orc
v3-fos-license
Oceans of opportunity: a review of Canadian aquaculture

Purpose – The world's population is expected to increase by 30 percent to 10bn people by 2050, and with 70 percent of the earth's surface covered by water, aquaculture will play an important role in producing food for the future. The paper aims to discuss this issue. Design/methodology/approach – While Canada has the longest coastline in the world by far (202,080 km), with 80,000 km of marine coastline capable of supporting aquaculture and fisheries, it ranks only 25th in terms of world aquaculture production. The reasons are many and varied, and this review examines statistical reports and publications to trace the beginnings of the aquaculture sector in Canada, and highlights some areas of strength and potential, and the challenges for future growth and expansion. Findings – Currently, less than 1 percent of the 3.8m hectares of freshwater and marine areas that are considered suitable for seafood (i.e. finfish, shellfish and aquatic plants) production are being farmed, so Canada has an ocean of opportunity to be a leader in world aquaculture production in the future. Originality/value – The review highlights the need for a national strategic plan to increase aquaculture production in Canada and the need to simplify the current complex regulatory framework that has resulted in significant uncertainties and delays that have limited growth in this sector. The review highlights the potential and interest to triple current production while fostering greater involvement of First Nation communities.

Introduction

The world's population in 2017 was approximately 7.5bn people, and by 2050, it is expected to increase by at least 30 percent to nearly 10bn people. Sustaining this growing population will require a commensurate increase in food production through agriculture, capture fisheries and/or aquaculture. While advances in technology, including the use of genetically modified plants and organisms, have and will continue to enhance food production, food quality, and food security in developed and developing countries (Herrera-Estrella and Alvarez-Morales, 2001; Bouis, 2007; Azadi and Ho, 2010), physical limits on available arable land are likely to temper any significant increases in traditional terrestrial plant and animal production. Climate change will also further challenge our ability to produce substantially more food from agriculture, both as a result of rising temperatures and increased demand on our limited water resources due to population growth and other competing uses. With more than 70 percent of the earth's surface covered in water, it is clear that the best opportunity to increase food production is likely through increases in fish, shellfish and aquatic plant production, and primarily through aquaculture, given the pressures facing wild fish stocks. Indeed, the demand for seafood production is expected to more than double by 2050, an increase that is disproportionately higher than the rate of population growth.
Fish and shellfish provide many valuable nutrients with numerous health benefits, and demand for more fish and shellfish in our diet has grown substantially over the past several decades. Per capita consumption of seafood globally has risen from approximately 8 kg per capita in 1950 to ~22 kg per capita in 2015, and is expected to increase in a similar fashion in the future (Noakes, 2014b). Currently, approximately 170m tonnes of marine and freshwater fish, shellfish, and aquatic plants are commercially harvested or grown annually, with the commercial catch of fish and shellfish holding relatively constant at 90m tonnes since the mid-1980s. The assessment of global fish stocks suggests that nearly 35 percent of all fish stocks are currently being overfished, 55 percent of the stocks are fished "sustainably", with the remaining 10 percent rated as underutilized (Noakes, 2014b). As such, there is little or no opportunity to increase catch from commercial fisheries, so any new growth in this sector must come from increased aquaculture production. Total aquaculture production is currently about 80m tonnes, and while this is less than the commercial catch, it represents more than 50 percent of all seafood consumed by humans due to waste and alternative uses of some commercial catch. Globally, aquaculture is the fastest growing food production sector, with world aquaculture output increasing by approximately 6.3 percent per year, or about three times the rate of increase for meat production (beef, poultry, pork), between 2001 and 2010 (Food and Agriculture Organization (FAO), 2012). Annual growth rates have moderated slightly in recent years but were still 4.4 percent in Asia and 3.6 percent in Africa, with little or no growth in the Americas in 2014 (FAO, 2016). Given the current and projected demand for seafood, production will need to increase by at least 30m tonnes by 2025 and increase substantially more by 2050, all of which must come from aquaculture since world fish stocks are unable to support increased catch (FAO, 2016). In addition to food production, the seafood and, in particular, the aquaculture sector also have substantial social and economic benefits, particularly for individuals living in rural communities (Noakes, 2014b). Globally, approximately one in ten people relied on the fisheries and aquaculture sector for their livelihood in 2014, so it influences the lives of a large segment of the world's population in a direct way. In 2014, about 56.6m people, including 10.8m women, were engaged in primary capture fisheries and aquaculture operations, with 18.8m involved in the aquaculture sector (FAO, 2016). While the vast majority were from Asian and African countries in coastal and rural areas, this is an important sector for many countries around the world. Globally, the value of seafood exports was approximately US$150bn in 2014, with China and Norway accounting for 20 percent of this total and Canada ranking tenth at US$4.5bn. The largest importers of seafood in 2014 were the USA and Japan (~25 percent), with many of the other top ten importers being European countries. It is difficult to overstate the economic and social benefits of this sector to rural and coastal communities, including those in Canada, given the level of employment and financial benefits and the contribution to food security.
Canada has a long history in both fisheries and aquaculture and, in particular, in providing global leadership in fisheries science and conservation, technological innovation and environmental sustainability. Canada has the longest coastline in the world by far (202,080 km), and even when areas in the Arctic are excluded, Canada still has nearly 80,000 km of marine coastline capable of supporting aquaculture and/or fisheries, or roughly the same coastline as Norway, Australia and Japan combined. It also has one of the largest renewable internal freshwater resources per capita (80,200 cubic meters in 2014, compared to 153,100 in 1962), a measure of our internal renewable resources related to river flows and groundwater from rainfall, although this has decreased by 47.6 percent since 1962 (https://data.worldbank.org/indicator/ER.H2O.INTR.PC). The reduction has been much larger (55.8 percent) on a global basis, reflecting the looming freshwater crisis the world faces over the next few decades. Despite substantial freshwater and marine natural resources and a long history in fisheries and aquaculture, Canada ranks only 20th in terms of its capture fisheries and 25th in terms of world aquaculture production. While further expansion in commercial fisheries is limited and likely to decrease, there are certainly opportunities for future growth in both finfish and shellfish aquaculture. The following is a brief history of aquaculture in Canada and views on significant opportunities and challenges in this area.

Historical overview

Aquaculture has been practiced in various forms around the world for more than 2,500 years, but its commercial beginnings in Canada date back to the mid-1800s. Historically, many of the early initiatives and developments in this sector were small-scale and to a large extent undocumented, but they nevertheless established the foundation for many of the key species cultured today. The aquaculture sector in Canada was in fact relatively small (a few thousand tonnes) until the last 30 or 40 years, when advances in fish and shellfish husbandry combined with more favorable market and regulatory conditions resulted in substantial growth in the number of species being farmed and production. From that perspective, the aquaculture sector is not unlike other industry sectors where a few key breakthroughs laid the foundation and then technological innovation and market forces resulted in the creation of a viable and sustainable industry. The following are examples of some of the initiatives and breakthroughs that helped move the development of the Canadian aquaculture sector forward, but the list is by no means intended to be comprehensive.
From a species perspective, hatchery operations began for Atlantic salmon (Salmo salar) and Brook trout (Salvelinus fontinalis) in Quebec in 1857, and Atlantic oyster (Crassostrea virginica) production began in Prince Edward Island two years before Confederation, in 1865 (Dunfield, 1985; Library of Parliament, 2010). In both Canada and the USA, Rainbow trout (Oncorhynchus mykiss) hatcheries became widespread in the early 1870s, and this species was propagated and introduced into waterways on every continent except Antarctica to support recreational fishing and aquaculture (Knight, 2007). Pacific oysters (Crassostrea gigas) were imported from Japan to British Columbia in 1913, and oyster seed imports from Japan continued until the 1930s (Bourne, 1979). Pacific oyster farming began in the 1920s and now supports a large commercial industry. Commercial Mussel (Mytilus edulis) culture using seed collected from the wild did not start in Prince Edward Island until the mid-1980s, but industry growth has been significant, from about 40 tonnes in 1980 to more than 18,000 tonnes in 2015. Finally, in 1889, a Norwegian scientist, Adolph Nielsen, established a Cod (Gadus morhua) hatchery in Newfoundland, and during its seven years of operation over a billion cod fry were released into the Newfoundland and Labrador coastal waters (Baker et al., 1992). While species such as Atlantic salmon, trout, oysters and mussels remain some of the key species grown in Canada today, lessons learned in early attempts to farm these fish and shellfish helped to improve efficiencies in many areas and diversify the number of species grown (Table I).

From a science perspective, aquaculture, like agriculture, depends on favorable environmental conditions for its success, but both rely heavily on good science and technology as well. The establishment of the Pacific Biological Station in Nanaimo, British Columbia and the St Andrews Biological Station in St Andrews, New Brunswick, both in 1908, led to significant advances in fisheries science that resulted in the development and expansion of the aquaculture industries on both the Atlantic and Pacific coasts. Both were established to study the biology of commercially harvested stocks, some of which were experiencing overfishing, but the focus expanded to include the study of life histories and disease as well as oceanography. Eventually, the Fisheries Research Board of Canada was established in 1937 to oversee the programs at these institutes as well as at other research centers across Canada. The work of these research stations now continues as part of Fisheries and Oceans Canada. In addition to increasing our understanding of the basic biology of fish and shellfish, research on husbandry practices, nutrition and diseases of fish and shellfish (and the management of the diseases) substantially advanced industry's ability to commercialize a wide number of species both within Canada and around the world. While research at university and government continues, there is also significant research capacity and ability within industry today, particularly where production-scale experiments and trials are required.
The final component required for the industry to grow was a regulatory framework to facilitate industry development and trade. While the management, harvest and trade aspects of "fishing" were well established, some unique aspects of aquaculture, including the tenure of crown land, stock ownership, overlap with fisheries involving the same species, and some operational details, posed some new and challenging problems for producers, regulators and others. An appropriate workable framework was needed to provide the certainty required for investors and farmers to build the infrastructure and systems to grow, process and sell their shellfish and fish in Canada and internationally. To that end, in 1984, the Prime Minister named the Department of Fisheries and Oceans as the lead agency for aquaculture, albeit without clarity around a number of key issues. In 1986, at the First Minister's meeting, the Prime Minister and Provincial Premiers agreed on a statement of broad national goals and principles for aquaculture development, and between 1986 and 1989, negotiations resulted in the development of a federal-provincial memorandum of understanding to clarify the delineation of responsibilities between the two levels of government. Over the next six years, subsequent consultations between government, industry and a broad spectrum of stakeholders led to the development of Canada's first Federal Aquaculture Development Strategy in 1995 (Fisheries and Oceans Canada, 1995). This concept and related documents have evolved over time, with the Canadian Council of Fisheries and Aquaculture Ministers producing the "Aquaculture Development Strategy 2016-2019" in 2016 (Fisheries and Oceans Canada, 2016). The expected long-term outcomes from this new strategy are an improved federal-provincial-territorial regulatory framework for aquaculture, improved coordination of aquaculture fish health management, and improved federal and regional support for regional economic growth through aquaculture. While these broad goals or objectives have not fundamentally changed since 1995, there have been substantial changes and improvements in the industry with respect to production and environmental sustainability, with First Nations also playing a greater role in various aspects of the industry. It is fair to say the different stakeholders involved have distinct views about what has and has not been accomplished over the last quarter century, but most would agree that improvements have been made slowly.
Canadian aquaculture and production trends

Until the early 1980s, the Canadian aquaculture industry could be described as developmental, with total production being less than 10,000 tonnes. Production began to increase sharply in 1984, and by 1991 Canadian aquaculture production was 49,500 tonnes with a farm gate value of about CAN$233.6m. Finfish, primarily Atlantic salmon, represented approximately 80 percent of the production by weight, and that ratio and the importance of Atlantic salmon aquaculture to this sector have not changed significantly over time. By 2016, 56 species (Table I: 27 finfish, 20 shellfish and 9 aquatic plant species) were being farmed commercially in Canada, with a total production of approximately 200,565 tonnes and an approximate value of CAN$1.347bn (Figure 1). In 2016, there were 917 aquaculture sites in Canada, and, not surprisingly, most were located on either the east or west coasts (Table II). Compared to the East coast, many of the West coast sites are associated with salmon farming, which contributes substantially to the overall economic benefit for Canada (Table II). Commercial aquaculture operations can be found in every province and the Yukon Territory (Table I), with the majority of the production in 2016 from British Columbia (51 percent - 102,325 tonnes), Newfoundland and Labrador (14.1 percent - 28,622 tonnes), and New Brunswick (13.8 percent - 28,082 tonnes) (Table III). For all three of these provinces, the predominant species farmed is Atlantic salmon. At 24,115 tonnes, Prince Edward Island is the largest producer of shellfish in Canada, with Mussels being the main species cultured. British Columbia is the second largest producer of shellfish, with approximately 10,000 tonnes of mostly Pacific oysters being grown annually. Ontario is the largest producer of trout (5,440 tonnes in 2016), primarily grown in land-based systems. While many species are grown in each region, there are, as expected, areas of concentration by species reflecting local growing conditions (Table IV). In 2016, direct employment in aquaculture production and subsequent processing was 25,000 full-time equivalents, with a payroll in excess of CAN$1.16bn. The contribution to GDP was in excess of CAN$2.0bn, with a net economic activity of CAN$5.16bn (Canadian Aquaculture Industry Alliance (CAIA), 2017 and Statistics Canada). The full value-chain economic activity of the Canadian aquaculture sector in 2016 was estimated at CAN$7.3bn, contributing CAN$3.75bn to Canada's GDP and providing employment for 54,000 people. The majority (75-80 percent) of our aquaculture production is exported, with the USA being our primary market (BC Ministry of Agriculture, 2017a; BC Salmon Farmers Association (BCSFA), 2017a; CAIA, 2017). Approximately 90,000 tonnes of farmed Atlantic salmon (the majority produced in British Columbia) valued at over CAN$900m is currently exported each year, making it Canada's third most important seafood export behind Atlantic lobster and snow crab (Table V). At CAN$524.2m, farmed Atlantic salmon was British Columbia's largest agricultural export in 2016 (and for the past several years), as well as being larger than all other seafood exports from British Columbia combined (BC Ministry of Agriculture, 2017a).
The nature of the industry has also changed substantially over time, with much more involvement with local communities. For instance, more than 40 indigenous (First Nation) communities are involved in aquaculture ventures across Canada, providing employment and associated financial and social benefits. In British Columbia, approximately 80 percent of farmed salmon are produced in partnership or under a collaborative agreement with local First Nations, a level not matched by any other industry sector in Canada. There have also been significant improvements with respect to the environmental performance of the aquaculture sector, in particular improvements in waste and fish health management and a reduction in its overall environmental footprint. Significantly, the Monterey Bay Aquarium Seafood Watch program recently rated BC Farmed Atlantic Salmon as a "Good Alternative", having conducted an extensive review of all of the available science and the policies and practices governing the industry. Their peer-reviewed process is comprehensive and their recommendations are widely respected and accepted. The BC salmon farming industry is also moving toward full certification by the Aquaculture Stewardship Council.

Challenges and opportunities

Like any industry, there are challenges associated with the aquaculture sector in Canada, some of which are the same as or similar to those in other countries. The challenges fall into three broad categories: environmental and/or technical, an uncertain and complex regulatory framework, and changing socio-economics. While significant scientific and technical advances have increased efficiencies in production and improved environmental sustainability, problems such as disease outbreaks and large escapes of farmed fish that occurred in the 1970s and early 1980s have tainted attitudes toward the industry and negatively influenced public confidence to this day. The past 30 years have seen significant improvements in many areas, including work on developing new vaccines that has reduced or eliminated the incidence of some diseases and significantly reduced the amount of antibiotics used, new diet formulations that reduce the amount of fishmeal used, and improved husbandry practices, including using cameras to monitor and control feeding, reducing the production of waste (BCSFA, 2017b; McPhee et al., 2017; Noakes, 2014a). While there is general acknowledgment, even by opponents of the aquaculture industry, of significant improvements in performance and a more positive view of this industry, the highly polarized debate has left a small segment of the population firmly opposed to the industry regardless of any scientific evidence to the contrary. As noted previously, the salmon farming industry in BC is moving toward full certification by the Aquaculture Stewardship Council in an effort to gain public confidence, and improvements in performance have been noted by international organizations such as the Monterey Bay Aquarium Seafood Watch (2017) Program. Similar issues with salmon farming exist on the East Coast of Canada, albeit with its own set of specific problems, given a different mix of species and unique oceanographic conditions. There are certainly far fewer environmental problems associated with shellfish aquaculture, but there are many technical and scientific problems to overcome in order to increase production and to develop advanced hatchery technologies for new species.
Problems associated with governance and regulatory issues largely apply equally to both finfish and shellfish aquaculture and are, in many respects, one of the significant impediments to industry development and growth in Canada (Government of Canada, Standing Senate Committee on Fisheries and Oceans, 2016a). Currently, as many as 17 federal departments and agencies are involved in the oversight and governance of aquaculture in Canada, in addition to a number of Provincial ministries and agencies, making it one of the most highly regulated industry sectors. This is not a situation that has developed recently but rather one that has remained largely unchanged since the mid-1990s, when the problem was first highlighted in the 1995 Federal Aquaculture Development Strategy (Fisheries and Oceans Canada, 1995). In addition, aquaculture is currently managed as a fishery under the Fisheries Act despite some very obvious differences between aquaculture and traditional commercial fisheries, and despite its much closer resemblance to agriculture. Both nationally and provincially, the industry has, for many years, proposed creating an Aquaculture Act, but this has not materialized despite some interest from the Canadian Council of Fisheries Ministers. The complexities and uncertainties associated with having so many diverse groups involved in the oversight and management of the aquaculture sector have significantly affected decision-making processes and make both short-term and long-term investment by industry and others much more, and needlessly, difficult. Adding to tensions are significant declines in wild stocks and associated fisheries at the same time that aquaculture production is increasing. This is particularly evident on Canada's west coast, where climate change, habitat loss and overfishing have resulted in significant declines in Pacific salmon stocks and the closure of many fisheries, including First Nations food fisheries (Noakes and Beamish, 2011). The expected changes in temperature and freshwater distribution and abundance over the next 30-50 years are so significant (BC Ministry of the Environment, 2016) that many Pacific salmon stocks are likely to see further declines, with some stocks in the interior of British Columbia disappearing completely (Noakes and Beamish, 2011). Despite the lack of any credible scientific evidence linking declines in Pacific salmon stocks at a population level to salmon farming (Noakes et al., 2000; Noakes, 2011), those opposed to salmon farming and those wishing for a simplistic solution to restore Pacific salmon stocks to historic high levels suggest that removing salmon farms will accomplish that goal, which it will not. This action would only serve to eliminate or significantly curtail the CAN$400m salmon farming industry in British Columbia, which would have significant negative economic and social consequences for coastal communities and in particular First Nation communities involved in aquaculture.
There are also significant changes in how industry and others are now expected to interact with First Nations (Indigenous peoples) in Canada, in particular with respect to the development or use of natural resources. Recent court decisions in Canada with respect to First Nations rights and titles, the United Nations Declaration on the Rights of Indigenous Peoples (United Nations, 2008) and recommendations from the Truth and Reconciliation Commission of Canada (2015) increase the need for improved consultation and cooperation. It will take some time to bring clarity and agreement among stakeholders as to what those expectations are, but the direction is clear. To that end, salmon farming companies in British Columbia have been building relationships and agreements with First Nations over the past two decades, to the point where today nearly 80 percent of the farmed salmon produced in the province is done so under agreements or partnerships with local First Nations. The shellfish aquaculture sector has not made as much progress, but there is significant interest for First Nations to become more involved in this sector. While progress in developing relationships can be time consuming and challenging, the short-term and long-term benefits of these agreements provide important opportunities, given farms are often located in remote areas in or adjacent to the traditional territories of local First Nations. There are of course many significant opportunities for aquaculture development in Canada. An estimated 3.8m hectares of freshwater and marine areas are considered suitable for seafood (finfish, shellfish and aquatic plants) production in Canada. However, currently less than 1 percent of this available area is used for aquaculture production, despite interest from many stakeholders to see growth in this sector. The usage varies by province, and shellfish aquaculture such as oyster and mussel culture typically takes more area per tonne of production. For instance, Prince Edward Island, which predominantly grows mussels, uses approximately 3.2 percent of its available area, while British Columbia uses only 0.8 percent of its available coastal area to produce approximately 80 percent of the farmed salmon and 75 percent of all oysters grown in Canada (CAIA, 2017). Compared with other nations, Canada lags far behind in terms of aquaculture production per km of coastline (Canada 2.1, USA 9.6, Norway 52.5 and Chile 157.8) (FAO, 2016). Recognizing the potential and importance of expanding the Canadian aquaculture industry, the Finance Minister's Economic Advisory Council (2017) has suggested tripling aquaculture production, with some provinces, such as Newfoundland and Labrador, supporting a doubling of their aquaculture production to 60,000 tonnes (CAIA, 2017). The challenge, of course, will be realizing this planned growth, given the complex regulatory framework in place that significantly lengthens the time for decision making and introduces considerable uncertainty for potential investors.
Discussion

On September 25, 2015, the UN adopted the 2030 Agenda for Sustainable Development, with 17 Sustainable Development Goals (SDGs) and 169 targets to guide governments and international agencies over the next 15 years (www.undp.org/content/undp/en/home/sustainable-development-goals.html). Two of the seventeen goals are directly related to aquaculture: SDG 2, end hunger, achieve food security and improved nutrition, and promote sustainable agriculture; and SDG 14, conserve and sustainably use the oceans, seas and marine resources for sustainable development. Both are part of the FAO's Blue Growth Initiative, which is intended to support the sustainable management of living aquatic resources, balancing their use and conservation in an economically, socially and environmentally responsible fashion (www.fao.org/policy-support/policy-themes/blue-growth/en/). The importance of these goals and initiatives cannot be overstated, given that the world's population is set to grow by 30 percent or more over the next 30-40 years. Aquaculture has played, and will continue to play, an ever-increasing role in contributing to world food production in a changing climate that will present its own set of challenges for decades.

At just over 200,000 tonnes, Canada ranks 25th in the world in terms of aquaculture production by volume and 20th in terms of value (FAO, 2016). While this contribution to seafood production is important and welcome, it does not reflect the potential for expanding the aquaculture sector, nor the interest and wishes of participants (CAIA, 2017; Finance Minister's Economic Advisory Council, 2017; Fisheries and Oceans Canada, 2016). Significant progress in forming partnerships and working relationships with First Nations has been realized over the past two decades, and the social and economic benefits of the aquaculture industry have been clearly demonstrated to these communities, governments and others. The environmental performance of the industry has also improved substantially over the past three decades, to the point where Canada's farming practices are now recognized as world class (MBASW, 2017). While there is still work to be done in these areas, there is general agreement among decision makers and others that Canada could easily double its production within a decade, with the potential for still further growth. What is still missing and needed is a modern regulatory framework (national and provincial) that is specifically designed to govern a responsible and sustainable aquaculture industry, together with a commitment to implement the required legislation and an aquaculture development strategy. Canada has provided leadership in the development of sustainable fisheries and aquaculture and can do so in the future.

(Table note: Atlantic salmon are farmed, while all other species are from capture fisheries. Source: www.dfo-mpo.gc.ca/stats/stats-eng.htm)
Measurement of ultrasonic echo intensity predicts the mass and strength of the tongue muscles in the elderly.

PURPOSE: The purpose of this study was to investigate the relationship between the echo intensity (EI) on ultrasound images of the tongue, tongue thickness, and tongue pressure to examine the effectiveness of EI measurement for assessing tongue function. METHODS: A total of 100 elderly outpatients were enrolled. Tongue thickness and EI were measured using ultrasonography. The distance from the mylohyoid muscle surface to the dorsal surface of the tongue was measured as the tongue thickness. Subsequently, this area was vertically divided into four areas: top of the tongue dorsal side (DT), bottom of the tongue dorsal side (DB), top of the basal tongue side (BT), and bottom of the basal tongue side (BB), and the EI was measured in each area. RESULTS: The mean EIs of DT and DB were lower than those of BT and BB. In the three areas apart from BB, the EI decreased with an increase in tongue thickness. In particular, a significant correlation between the EI in DB and tongue thickness was found. In all areas, the EI decreased with an increase in tongue pressure. CONCLUSION: The results of this study suggest that the measurement of EI could be an important indicator for assessing tongue function in the elderly.

Introduction

With the increase in the elderly population worldwide, rising medical and nursing care costs have become a major social problem [1,2]. Concomitantly, the concepts of "sarcopenia", a decline in the mass of the skeletal muscles of the limbs with aging together with the associated loss of muscle strength and function, and "frailty", which is classified into physical, mental, and social frailty, have also attracted considerable attention [3,4]. Regarding oral function, the relationship between the decline in oral function and systemic function has been examined [5-7], and the concept of oral hypofunction in elderly people has also been proposed [8]. In particular, it has been reported that tongue pressure, which indicates the muscle strength of the tongue, is related to grip strength and is significantly affected by the amount of skeletal muscle and nutritional status [9-11]. The tongue plays the most important role in oral functions such as mastication, swallowing, and speech; tongue function is therefore one of the oral factors most relevant to systemic function.

For the assessment of sarcopenia, computed tomography, magnetic resonance imaging, and ultrasonography have been used to assess muscle mass and quality [3,12]. Ultrasonography is especially popular because it is relatively easy to perform, minimally invasive, and places little burden on patients. Ultrasonography provides the echo intensity (EI) as well as morphological measurements, including the thickness, width, and area of the cross-sectional image. Regarding EI on ultrasound images, previous studies have reported that a higher EI is associated with lower muscle strength and lower muscle mass. Taniguchi et al. examined the relationship between the EI of the rectus femoris muscle and muscle strength in elderly women and reported that knee extensor strength showed a significant negative correlation with the EI [13]. Fukumoto et al. reported that the EI of the quadriceps femoris muscle increased with age in a comparative study of young and elderly people [14].
In addition, Watanabe et al. investigated the relationship between the EI of the anterior compartment of the right thigh and muscle strength, measured as the maximum isometric torque of knee extension, in elderly men, and found a negative correlation between the EI and muscle strength [15].

The EI of the tongue has also been investigated in several previous studies. Ogawa et al. examined the relationship between the prevalence of dysphagia due to sarcopenia, as classified by the food intake level scale, and the mean EI of the whole tongue, and proposed that the EI of the tongue was a significant independent variable predicting dysphagia [16]. Chantaramanee et al. demonstrated a significant correlation between the mean EI of the whole tongue and tongue thickness, and showed that the EI of the tongue was a significant independent variable predicting the rate of oral diadochokinesis [17]. Although EI assessment by ultrasonography is expected to be effective for assessing tongue function, and previous studies have suggested that the EIs of ultrasound images are related to muscle mass and function, little information exists about the detailed relationship of the EI with tongue morphology and tongue pressure. Consequently, a more detailed analysis is needed. In this study, the EI on ultrasound images of the tongue was focused on, and the relationship between the EI, tongue thickness, and tongue pressure was investigated to examine the effectiveness of EI measurement for assessing tongue function in elderly people.

Subjects

The study participants were 100 elderly individuals aged 65 years and over (52 men and 48 women, mean age = 75.4 ± 6.5 years), who visited the dental division of Tokushima University Hospital between January 2017 and April 2017 and who were able to participate in this study. Inclusion criteria were the ability to walk independently and to follow the instructions of the examiner. Individuals with a serious systemic disease or a maxillofacial defect were excluded. This study was approved by the clinical research institutional review board of the Ethics Committee of Tokushima University Hospital (Approval No. 2225) and was carried out in compliance with the Declaration of Helsinki. Measurements were taken after providing a sufficient explanation of the study to all participants and obtaining consent. In addition, age, sex, and body mass index (BMI) were recorded as baseline characteristics. The individuals in this manuscript gave written informed consent to the publication of these case details. Although historical data from a previous study [18] were used for the analysis in this study, the mean EIs of the images were uniquely analyzed.

A power analysis was performed to determine the number of subjects needed in this study using statistical power analysis software (G*Power 3.1.9.7, Heinrich-Heine-University, Düsseldorf, Germany; free software). The significance level (α), power (1 − β), and moderate effect size were set to 0.05, 0.80, and 0.3, respectively. A total of 84 subjects were found to be required, and the number of subjects in this study was confirmed to exceed this standard.
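The sample-size figure can be sanity-checked without G*Power. The sketch below uses the standard Fisher z-transform approximation for a two-tailed test of a correlation of 0.3 at α = 0.05 and power 0.80; it is an illustrative approximation of the calculation described above, not the authors' actual procedure, and it lands within one subject of the reported 84.

```python
import math
from scipy.stats import norm

def n_for_correlation(r=0.3, alpha=0.05, power=0.80):
    """Approximate n for a two-tailed test that a correlation differs
    from zero, via the Fisher z-transform (z = atanh(r))."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_power = norm.ppf(power)           # ~0.84 for power = 0.80
    z_r = math.atanh(r)                 # Fisher z of the effect size
    return math.ceil(((z_alpha + z_power) / z_r) ** 2 + 3)

print(n_for_correlation())  # 85, within one of the 84 reported above
```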
Measurements of tongue thickness and echo intensity

Tongue thickness and EI were measured to a depth of 80 mm under the same conditions using ultrasonography (Vscan with Dual Probe, GE Healthcare Japan, Tokyo, Japan) by a single examiner. During the measurements, the participants were seated in a dental chair with the headrest adjusted so that the Frankfurt plane was horizontal to the floor, and the head and back were secured against the backrest to prevent body movement. The ultrasound probe (frequency, 4.0-8.0 MHz; contact face size, 9 × 25 mm; mechanical index, 0.4; thermal index, 0.1) was positioned perpendicular to the Frankfurt plane and in the middle of the bilateral second premolars (Fig. 1a). After the patient had swallowed saliva and taken the mandibular rest position, three still images were recorded once the tongue was in a stable position. The distance from the mylohyoid muscle surface to the dorsal surface of the tongue was measured in the still images and was defined as the tongue thickness (Fig. 1b). The 40-pixel-wide (11.1 mm) area across the axis of the distance measurement was vertically divided into four areas: top of the tongue dorsal side (DT), bottom of the tongue dorsal side (DB), top of the basal tongue side (BT), and bottom of the basal tongue side (BB). The mean EI was measured in each area (Fig. 1c). Image analysis software (ImageJ, NIH, Bethesda, MD, USA) was used to analyze the tongue thickness and EI in the ultrasound images. The thickness was measured after calibration with a known distance in the image and setting the units to mm. Each measurement was performed on three still images for each participant, and the mean of the three measurements was used as the representative value. The intraclass correlation coefficients (ICCs) of the EI measurements in the three images were calculated for the four areas to evaluate the reliability of repeated measurements.

Measurement of tongue pressure

Tongue pressure was measured using a JMS tongue pressure measurement device (JMS, Hiroshima, Japan), as shown in Fig. 2. The measurement was performed after calibration outside the oral cavity and by placing the tongue pressure probe between the tongue and palate. The rigid ring of the tongue pressure probe was lightly held with the incisors, and the participants were instructed to raise the tongue with maximum force against the palate for 7 s. The value displayed as the maximum pressure by the digital tongue pressure measurement device was recorded as the maximum tongue pressure. Each measurement was performed three times and the mean value was used as the representative value.

Statistical analysis

The Mann-Whitney U test was used to analyze the differences in age, BMI, tongue thickness, tongue pressure, and mean EI between men and women. Spearman's correlation analysis was used to analyze the relationships between tongue thickness, tongue pressure, and mean EI. The participants were divided into two groups, with tongue pressure of 30 kPa and over and with tongue pressure of less than 30 kPa, according to the criteria for oral hypofunction of the Japanese Society of Gerodontology [8], and the EIs in the four areas were compared between the two groups. SPSS version 25.0 (IBM, Chicago, IL, USA) was used for all statistical analyses, and the significance level was set at 5%.

Results

The ICCs in the DT, DB, BT and BB were 0.779, 0.798, 0.752, and 0.925, respectively. ICC values of more than 0.75 indicate good reliability according to the guidelines of Koo et al., in which values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are indicative of poor, moderate, good, and excellent reliability, respectively [19].
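As a concrete illustration of the area segmentation described above, the sketch below reproduces the same bookkeeping in Python rather than ImageJ: a 40-pixel-wide column between the dorsal and mylohyoid surfaces is split into four equal vertical bands, and the mean grayscale value of each band is taken as its EI. The function name, the coordinate arguments, and the assumption that the row index increases from the dorsal surface toward the mylohyoid surface are all illustrative, not the authors' code.

```python
import numpy as np

AREAS = ("DT", "DB", "BT", "BB")  # ordered from the dorsal surface down

def mean_ei_four_areas(img, row_dorsal, row_mylohyoid, col_axis, half_width=20):
    """img: 2-D uint8 grayscale array exported from the scanner.
    The 40-pixel-wide column centred on col_axis, between the dorsal
    and mylohyoid surfaces, is split into four equal bands; the mean
    pixel value (0-255) of each band is returned as its echo intensity."""
    column = img[row_dorsal:row_mylohyoid,
                 col_axis - half_width:col_axis + half_width]
    bands = np.array_split(column, 4, axis=0)
    return {area: float(band.mean()) for area, band in zip(AREAS, bands)}

# Hypothetical usage on one exported still image (random stand-in here):
img = np.random.default_rng(1).integers(0, 256, (480, 512)).astype(np.uint8)
print(mean_ei_four_areas(img, row_dorsal=120, row_mylohyoid=420, col_axis=256))
```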
Table 1 shows the means and standard deviations of the measurements in men and women. There were no significant differences in age and BMI between men and women. Moreover, the values of tongue thickness and tongue pressure were comparable to those reported by Chantaramanee et al. [17] and Tamura et al. [20] for elderly people of the same age group. Therefore, the participants of this study can be considered to represent the common elderly population. No significant differences in tongue thickness and tongue pressure were found between men and women. Although the mean EIs of DT and BT in men were significantly lower than those of women, the EIs of the other areas were not significantly different. The mean EIs of the tongue on the dorsal side (DT and DB) were lower than those on the submandibular side (BT and BB).

Table 2 shows the mean EIs of the four areas in the two groups, with tongue pressure of 30 kPa and over and with tongue pressure of less than 30 kPa. In the DB and BT, the mean EIs in the group with tongue pressure of 30 kPa and over were significantly lower than those in the group with tongue pressure of less than 30 kPa.

Figure 3 demonstrates the relationship between tongue thickness and the mean EI of each area. In the three areas apart from BB, which includes a large part of the geniohyoid muscle, the EI decreased with an increase in tongue thickness. In particular, the EI in DB exhibited a significant correlation, with a Spearman's rank correlation coefficient of −0.454. Figure 4 shows the relationship between tongue pressure and the mean EI of each area. In all four areas, the EI decreased with an increase in tongue pressure, and the significant correlation coefficients for DT, DB, and BT were −0.198, −0.211, and −0.229, respectively. Age, sex, and BMI had little effect on the mean EI, tongue thickness, and tongue pressure. However, tongue thickness and BMI showed a slight correlation.

Discussion

In this study, the EI on ultrasound images of the tongue was focused on. Previous studies have reported that a higher EI is associated with lower muscle strength and lower muscle mass. Yamaguchi et al. reported a tendency toward a negative correlation between the EI of the tongue and tongue thickness in healthy young subjects [21]. Similar preliminary results were found in this study: a negative correlation (correlation coefficient: −0.149) between the EI of the whole tongue and tongue thickness, and a significant negative correlation (correlation coefficient: −0.217) between the EI and tongue pressure. In this study, the region from the mylohyoid muscle surface to the tongue dorsal surface was vertically divided into four areas and the EIs in the four areas were measured in more detail. Overall, the lower the EI, the greater the tongue thickness and tongue pressure. These results agreed with those reported by Chantaramanee et al. [17]. In particular, the EI of DB showed a higher and significant correlation with both tongue thickness and tongue pressure.
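The comparisons reported above are standard nonparametric tests; a minimal sketch of the analysis pipeline is shown below. The arrays are synthetic stand-ins (the study's raw data are not reproduced here), so the printed numbers are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(0)
n = 100
thickness = rng.normal(45.0, 5.0, n)                       # tongue thickness, mm
pressure = rng.normal(30.0, 8.0, n)                        # max tongue pressure, kPa
ei_db = 75.0 - 0.4 * thickness + rng.normal(0.0, 8.0, n)   # mean EI of the DB area

rho, p = spearmanr(ei_db, thickness)  # the study reports rho = -0.454 for DB
print(f"EI(DB) vs thickness: rho = {rho:.3f}, p = {p:.4f}")

# Group comparison at the 30 kPa oral-hypofunction criterion [8]
low, high = ei_db[pressure < 30.0], ei_db[pressure >= 30.0]
u, p = mannwhitneyu(low, high, alternative="two-sided")
print(f"Mann-Whitney: U = {u:.1f}, p = {p:.4f}")
```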
Considering the anatomical structures in the four areas of this study, the DT is just below the tongue dorsal surface and is mainly composed of the intrinsic muscles of the tongue, such as the superior and inferior longitudinal muscles of the tongue as well as the vertical and transverse tongue muscles. The DB is the deep layer under the DT and corresponds to the styloglossus muscle, an extrinsic muscle of the tongue. The BT predominantly includes the areas of the genioglossus and hyoglossus muscles, which are extrinsic muscles of the tongue, and the BB mainly corresponds to the geniohyoid muscle. Tongue pressure can be exerted by both the intrinsic and extrinsic muscles of the tongue. The function of the intrinsic muscles of the tongue is to change the shape of the tongue itself, while the styloglossus muscle can pull the tongue backwards and raise the dorsum of the tongue, and the genioglossus muscle can push the tongue forward [Brand RW et al., Anatomy of Orofacial Structures, 4th ed., C. V. Mosby Co., Maryland Heights, MO, USA, 1990]. The anatomical structure, as well as the results of this study, suggests that tongue pressure is affected by the intrinsic muscles of the tongue, as shown by the EI of DT, and by the extrinsic muscles of the tongue, as shown by the EIs of DB and BT.

The results of this study were also consistent with the findings of Chantaramanee et al.: the EI of the middle of the tongue, obtained by placing an ultrasound probe perpendicular to the Frankfurt plane as in this study, was significantly related to the number of oral diadochokinesis /ta/ repetitions, and the EI of the base of the tongue, obtained by placing the probe at a 45-degree angle to the Frankfurt plane, was significantly related to the number of oral diadochokinesis /ka/ repetitions [17]. This suggests that the mean EI of the whole tongue obtained with the ultrasound probe in a given orientation is related to the function of the muscles acting in that direction. Therefore, the results of the present study verify that the measurement of EI following area segmentation of the tongue can provide an assessment of the mass and performance of the tongue muscles based on the anatomical structure. The finding that the mean EIs in the group with tongue pressure of 30 kPa and over were significantly lower than those in the group with tongue pressure of less than 30 kPa in the DB and BT also supports the significance of EI measurements.

The finding that the EIs in men, especially of DT and BT, were lower than those of women might be explained by gender differences in muscle changes with aging. It has been reported that men show a greater change in muscle mass with aging, while women have more fat deposition and fat accumulation than men; thus, the EI in women would be higher. As the muscles in DB and BB involved in swallowing are maintained, fewer gender differences might occur there.

It is clinically known that the so-called "low tongue position", in which the tongue looks thin, flat, and large, is often found in elderly people [18,22] and is associated with a decline in the muscle strength of the tongue. However, the anatomical and functional meanings of the low tongue position remain unclear. The results of the present study suggest that a thin tongue is associated with a higher EI, especially of DB, which represents the intrinsic and extrinsic muscles of the tongue. Additionally, the height of the tongue may depend on the mass of the tongue muscles.

While the correlation between tongue pressure and EI on ultrasound images was significant, it was lower than that between tongue thickness and EI. Fujimoto et al. reported that there was no significant correlation between tongue thickness and tongue pressure [18]. Tongue pressure may be attributed to factors other than the tongue muscles. In other words, tongue pressure may be affected not only by the extrinsic and intrinsic muscles of the tongue but also by the fixation of the mandible.
Therefore, tongue pressure might not always be proportional only to the muscle mass of the area where the EI was measured in this study; the EI may be better suited for assessing the muscles of the tongue themselves. The measurement of tongue pressure, which was the principal examination used to estimate tongue function in the present study, is based on active behavior and patient-reliant effort. Thus, it is difficult to perform in patients with higher-level dysfunctions such as dementia and aphasia. It is also difficult in complete denture wearers and edentulous patients, because the measurement probe has to be held with the incisors. In contrast, since ultrasonography is a passive test for patients, it can be applied to such patients. Furthermore, ultrasonography, and especially the device used in the present study, can easily be set up at the side of the dental chair. Considering the future increase in the elderly population, EI measurement on ultrasound images can provide the muscle mass and strength of the tongue in patients who cannot undergo conventional tongue pressure measurement.

The present study has several limitations. Firstly, it is impossible to identify each muscle from the ultrasound images; in this study, the analysis area was geometrically divided into four areas as a substitute, and the EI values in each area need to be discussed with this point in mind. Secondly, the EI measurement should be validated using additional external criteria. The participants in this study were outpatients receiving regular maintenance care at a university hospital; therefore, they were not fully representative of the general elderly population in Japan. Further clinical research is required to examine the effects of confounding factors, such as the number of remaining teeth and systemic and nutritional status, and to confirm the effectiveness of ultrasound for the assessment of tongue function.

In conclusion, the EI on ultrasound images of the tongue, especially in the area of DB, was related to both tongue pressure and tongue thickness. Therefore, the measurement of EI can be considered an important indicator for assessing tongue function.
Mandibular Tori Limiting Treatment of Carcinoma of the Upper Aerodigestive Tract

Christopher M Low, Daniel L Price and Jan L Kasperbauer, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN, USA.

Background: Mandibular tori are a rare cause of difficult direct visualization of the upper aerodigestive tract. In the setting of aerodigestive tract pathology necessitating direct visualization, removal of mandibular tori may be required to facilitate treatment. Methods: In the first case, large bilateral symmetric mandibular tori were removed to facilitate access to the anterior commissure and removal of a T1 glottic squamous cell carcinoma (SCC). In the second case, large bilateral mandibular tori were removed to access a markedly exophytic SCC in the right vallecula; subsequently, the tumor was removed with robotic assistance with excellent exposure. Results: Both patients were free of recurrence at last follow-up. Conclusion: Mandibular tori are an uncommon cause of difficult direct laryngoscopy. In situations that require direct visualization of the anterior commissure or base of tongue for the diagnosis and management of lesions, surgical removal of the tori may be required, as in the cases presented here.

Introduction

Mandibular tori are exostoses with an estimated prevalence of 12% to 27% [1-3]. Their cause is unclear, but possible etiologies range from developmental anomalies representing functional adaptations to the forces of mastication [4] to a genetic predisposition. Mandibular tori are commonly found in the premolar and molar regions of the mandible [5], and in the American population a higher prevalence is observed in males and African-Americans [3]. Mandibular tori are a rare cause of difficult direct visualization of the upper aerodigestive tract [6,7]. In the setting of aerodigestive tract pathology necessitating direct visualization, restricted access consequent to mandibular tori may require intervention prior to treatment of the aerodigestive tract pathology. We report the first cases of mandibular tori limiting direct visualization in the setting of glottic and base of tongue carcinoma, with particular attention to management of the tori. Both patients provided written informed consent for patient information and images to be published.

Case 1

A 67-year-old male presented with 5 years of gradual dysphonia. Evaluation by an otolaryngologist found glottic abnormalities, and operative excision was planned. However, direct laryngoscopy was difficult and only a biopsy was performed, the results of which were concerning for malignancy. Physical examination at presentation to our clinic revealed large bilateral symmetric mandibular tori and a round, erythematous nodule at the anterior commissure. Computed tomography (CT) performed for evaluation of the laryngeal lesion confirmed the mandibular tori (Figure 1, left). The patient was counseled, and informed consent was obtained to remove the tori if necessary. In the operating room, attempts to visualize the upper border of the lesion with multiple laryngoscopes were limited. A burr was used to reduce the size of the tori to match the contour of the adjacent bone. Subsequently, the larynx and anterior commissure were visualized well and a CO2 laser was used to excise the nodule. Pathology revealed squamous cell carcinoma (SCC) with negative margins confirmed on frozen section, and the patient was free of recurrence at 22 months.

Case 2

A 46-year-old male presented with 2 months of dysphagia, globus, and a single self-limited episode of hemoptysis. Evaluation by an otolaryngologist revealed a right vallecular mass, and the patient was referred for management. At our clinic, physical examination revealed large bilateral mandibular tori and a markedly exophytic mass in the right vallecula. A CT exam with 3D reconstruction confirmed the bilateral mandibular tori (Figure 1, right). In the operating room, a biopsy was obtained showing SCC. It was noted that the inferior aspect of the lesion could not be exposed with multiple laryngoscopes, and that any future attempt at oncologic resection would require removal of the tori. The patient discussed his treatment options with a multidisciplinary team and decided to pursue surgical resection with adjuvant radiotherapy. In the operating room, the tori were removed using an osteotome and burr. Subsequently, good exposure of the base of tongue was obtained and the tumor was removed with robotic assistance. The patient underwent adjuvant radiotherapy and was free of recurrence at 26 months.

Discussion

Mandibular tori are an uncommon cause of difficult direct laryngoscopy. Reports of mandibular tori impeding direct visualization of the larynx during attempted intubation have been managed by utilizing other methods of laryngeal visualization, such as video laryngoscopy [8] or flexible fiberoptic endoscopy [9]. Other groups have reported their experience with blind nasotracheal intubation [10] or near-blind endotracheal intubation [11]. If a patient is asymptomatic, mandibular tori can be observed and need not be removed. The most common indications for surgical removal include the need for dental prosthetic treatment and use as a harvest site for cortical bone grafts [12]. However, in situations that require direct visualization of the anterior commissure or base of tongue for the diagnosis and management of lesions, indirect visualization of the larynx and base of tongue that bypasses the tori may not suffice. Instead, surgical removal of the tori may be required and can provide the anatomic changes necessary for direct visualization and subsequent management of the aerodigestive tract pathology, as in the cases presented here.
Estimating the storage of anthropogenic carbon in the subtropical Indian Ocean: a comparison of five different approaches

The subtropical Indian Ocean along 32° S was for the first time simultaneously sampled in 2002 for inorganic carbon and transient tracers. The vertical distribution and inventory of anthropogenic carbon (C_ANT) from five different methods, four data-based methods (ΔC*, TrOCA, TTD and IPSL) and a simulation from the OCCAM model, are compared and discussed along with the observed CFC-12 and CCl4 distributions. In the surface layer, where carbon-based methods are uncertain, TTD and OCCAM yield the same result (7 ± 0.2 mol C m−2), helping to specify the surface C_ANT inventory.

Introduction

The fate of anthropogenic CO2 (C_ANT) emissions to the atmosphere is one of the critical concerns in our attempts to better understand and possibly predict global change and its impact on society (IPCC, 2007). Quantifying the global carbon cycle is still the subject of much scientific effort, especially where new processes and carbon pathways have to be considered (Cole et al., 2007; Duarte et al., 2005; Prairie and Duarte, 2007). While being a well-known and significant process, the uptake of C_ANT by the world ocean is not well constrained at present, and its magnitude and variability may change in the future. Different approaches to estimate the global oceanic C_ANT uptake have arrived at essentially the same number, about 2 Pg C yr−1 (Wetzel et al., 2005). However, despite this general agreement, questions about the reliability of these estimates remain.

The Joint SOLAS-IMBER implementation plan has identified the research priorities for ocean carbon research, among them the separation of natural from anthropogenic carbon, the oceanic storage and transport of C_ANT, and the effect of decreasing pH (ocean acidification) on marine biogeochemical cycles, ecosystems and their interactions (www.imber.info/products/CarbonPlan final.pdf). Large impacts of ocean acidification are expected to occur at high latitudes, i.e. in the Southern (Bopp et al., 2001; Orr et al., 2005) and Arctic oceans.

The first attempts to estimate C_ANT from oceanic measurements were based on the back-calculation method proposed independently by Brewer (1978) and Chen and Millero (1979). This method was reformulated by Gruber et al. (1996), and specific improvements were proposed for the Atlantic Ocean (Pérez et al., 2002) and Southern Ocean (Lo Monaco et al., 2005a). Several other methods based upon completely different concepts also arose, such as one based on water mass mixing (the MIX method; Goyet et al., 1999), another based on estimating Transit Time Distributions (TTD) or ages from transient tracers such as CFCs, SF6 or CCl4 (TTD method; Hall et al., 2002), and one based on a composite tracer (TrOCA method; Touratier and Goyet, 2004a, b; Touratier et al., 2007). In addition there are simulations from three-dimensional Ocean General Circulation Models (OGCMs) (Orr et al., 2001). Despite these efforts, no clear conclusion has been reached about the best method after several comparison exercises (Coatanoan et al., 2001; Feely, 2001; Hall et al., 2004; Lo Monaco et al., 2005b; Sabine and Feely, 2001; Wanninkhof et al., 1999; Waugh et al., 2006; Touratier et al., 2007), which are still necessary and on-going, for example within the Integrated Project CARBOOCEAN for the Atlantic basin (Vázquez-Rodríguez et al., 2009).
The contribution of the Indian Ocean to global C_ANT storage was initially discussed by Chen (1993) and Sabine et al. (1999), and later by Sabine et al. (2004) from a global perspective. While the volume of the Indian Ocean is 20% less than that of the Atlantic, the total C_ANT inventory of the Indian Ocean is only half that of the Atlantic, and it contributes ~21% to the global ocean C_ANT inventory (Sabine et al., 2004). The relevant areas and processes introducing C_ANT into the Indian Ocean are:

1. full equilibration of the upper mixed layer;
2. the formation of Red Sea-Persian Gulf Intermediate Water (Papaud and Poisson, 1986; Mecking and Warner, 1999) in the northwestern Indian Ocean, spreading equatorward;
3. the formation of Subantarctic Mode Water (SAMW) north of the Subantarctic Front (McCartney, 1977), including the large volume of SAMW formed in the southeast Indian Ocean (e.g., Sloyan and Rintoul, 2001; Sallée et al., 2006) and transported equatorwards; and
4. the formation of Antarctic Intermediate Water (AAIW), usually delineating the lower limit of C_ANT penetration in the Indian Ocean (Sabine et al., 2004).

The thermostad associated with SAMW is formed by deep mixing in winter on the equatorward side of the Subantarctic Front (McCartney, 1977) and is found at about 400-600 m. SAMW is linked to AAIW in the Southeast Pacific (McCartney, 1977), circulating in this basin through subduction. AAIW of the South Atlantic and Indian oceans is produced in the confluence of the Malvinas and Brazil currents by injection of surface water into the subtropical gyre, which then circulates eastwards and northwards in the South Atlantic and Indian oceans, where no other AAIW sources are found (Talley, 1996; Hanawa and Talley, 2001).

Deep water formation is another important mechanism sequestering C_ANT into the ocean. The formation and transport of North Atlantic Deep Water (NADW) is associated with the high C_ANT inventories present in the North Atlantic (e.g., Álvarez et al., 2003; Sabine et al., 2004; Touratier and Goyet, 2004b). Another major pathway theoretically introducing C_ANT into the deep ocean could be the production of Antarctic Bottom Water (AABW). However, large discrepancies exist in estimates of the role of the Southern Ocean in C_ANT uptake and storage. Both OGCMs (Caldeira and Duffy, 2000; Orr et al., 2001) and inversion estimates based on OGCMs but constrained with data (Mikaloff Fletcher et al., 2006) find a high C_ANT uptake but low C_ANT storage in the Southern Ocean, with high uncertainties. These studies also find high C_ANT transport northwards toward the Antarctic convergence zone. The low storage is supported by data-based estimates (e.g., Poisson and Chen, 1987; Gruber, 1998; Hoppema et al., 2001) that rely on factors such as the very high Revelle factor of these waters, the relatively short contact time with the surface between upwelling and subduction, and decreased CO2 uptake due to the presence of sea ice. However, these findings are contradicted by the detection and accumulation of CFCs in Antarctic deep and bottom waters (e.g., Meredith et al., 2001; Orsi et al., 2002), by the C_ANT accumulation detected south of Australia (McNeil et al., 2001) and in the South Atlantic Ocean (Murata et al., 2008), and by TTD estimates for the whole Southern Ocean (Waugh et al., 2006). Recent carbon-based studies also detect significant C_ANT accumulation in the deep and bottom waters of the Southern Ocean (Lo Monaco et al., 2005a, b; Sandrini et al., 2007).
This study is a comparison exercise between different data-based (carbon-based and TTD) techniques for estimating C_ANT, applied along a transoceanic section at 32° S in the Indian Ocean. Differences among the C_ANT distributions and inventories are presented, and the strengths and weaknesses of the individual methods are discussed. To provide an independent and unrelated comparison, results from an OGCM are presented alongside the data-based techniques. The final aim is to obtain the best C_ANT inventory in the subtropical Indian Ocean, with new data and based on different approaches.

Data set

During March-April 2002, on cruise 139 of RRS Charles Darwin (CD139), a trans-Indian hydrographic section was occupied nominally along 32° S (Bryden et al., 2003). The section used here consisted of 133 full-depth stations (Fig. 1) with a typical spacing over the deep basins of 90 km and a maximum of 120 km. Over the shelf and other topography the station spacing was decreased.

CTD data were taken with a SeaBird 9/11 plus system. Discrete samples for dissolved oxygen (O2) were analysed by a semi-automated whole-bottle Winkler titration unit with spectrophotometric end-point detection. Inorganic nutrient concentrations were measured using a Skalar San Plus autoanalyser, configured according to the manufacturer's specifications (Kirkwood, 1995). The overall accuracy for O2, nitrate, phosphate and silicate is 1, 0.1, 0.01 and 0.6 µmol kg−1, respectively.

Chlorofluorocarbon (CFC) samples were collected from the same Niskin bottles sampled for Total Alkalinity and pH. Concentrations of CFC-11 and CFC-12 in seawater were measured in about 2100 water samples using shipboard electron capture gas chromatography (EC-GC) techniques similar to those described by Bullister and Weiss (1988). A subset (~540) of the water bottles sampled for CFCs was also sampled and analyzed for dissolved carbon tetrachloride (CCl4) on a separate analytical system using similar techniques. CFC and CCl4 concentrations are reported in picomoles per kilogram of seawater (pmol kg−1). The overall accuracy for dissolved CFC-11 and CFC-12 measurements was estimated to be 2% or 0.010 pmol kg−1 (whichever is greater), and 3% or 0.012 pmol kg−1 for CCl4 measurements.

The CFC-12 age (τ) of any water sample has been calculated following Doney and Bullister (1992), assuming 100% initial saturation. Reconstructed CFC-12 annual mean dry air mole fractions in the Southern Hemisphere were taken from Walker et al. (2000), extended with yearly mean values from the AGAGE sampling network. Note that the TTD method uses a different approach to estimate ages from CFCs.

The CD139 trans-Indian section was completely analysed for pH; 69 stations were analysed for Total Alkalinity (TA), typically every other station. Since these two variables allow the carbonate chemistry system to be fully constrained, Total Inorganic Carbon (C_T) was not routinely measured. However, C_T samples were collected at 4 stations and were analysed post-cruise on land as quality control. pH was measured spectrophotometrically following Clayton and Byrne (1993) using a seawater m-cresol purple dye solution. Replicate analyses from deep Niskin bottles show a reproducibility of ±0.0009. pH analyses on CRM samples were also performed. For more details about the pH analyses and quality control see Appendix A1.
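For readers unfamiliar with the concentration-age calculation mentioned above, the sketch below shows its logic: convert the measured CFC-12 concentration to an equivalent atmospheric partial pressure with the solubility function of Warner and Weiss (1985) (supplied here as a precomputed number), then invert the atmospheric history to find the formation year. The history array is a crude monotonic stand-in, not the Walker et al. (2000) record, and all values are illustrative only.

```python
import numpy as np

# Stand-in Southern Hemisphere CFC-12 history (ppt); real work would use
# Walker et al. (2000) extended with AGAGE annual means.
years = np.arange(1940.0, 2003.0)
atm_ppt = np.interp(years, [1940, 1960, 1975, 1990, 2002],
                    [0.0, 30.0, 200.0, 480.0, 540.0])

def cfc12_age(c_meas, solubility, obs_year=2002.3):
    """Concentration age tau assuming 100% saturation at formation.
    c_meas: measured CFC-12 (pmol kg-1); solubility: F(T, S) expressed
    in pmol kg-1 ppt-1 at the sample's potential temperature/salinity."""
    p_ppt = c_meas / solubility                 # equivalent atmospheric ppt
    t_form = np.interp(p_ppt, atm_ppt, years)   # invert the monotonic history
    return obs_year - t_form

print(cfc12_age(c_meas=1.2, solubility=0.005))  # illustrative values, ~25 yr
```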
TA was measured using a double end-point automatic potentiometric titration (Pérez and Fraga, 1987). Concentrations are given in µmol kg-sw−1. Determinations of TA on CRM were made during the cruise to monitor the titrator performance. At a test station, a whole set of bottles was closed at the same depth; the resulting TA standard deviation of a total of 24 analyses over 12 bottle samples was 1.04 µmol kg−1. For more details about the TA analyses and quality control see Appendix A2.

C_T samples were collected in 500 mL borosilicate bottles, immediately poisoned with HgCl2 and stored in the dark until analyzed on shore. C_T was measured in the lab using a coulometer with a SOMMA (Single Operator Multiparameter Metabolic Analyzer) inlet system (Johnson et al., 1993). CRM were used to correct any offset in the analysis. The above results indicate the high quality and internal consistency of the CD139 CO2 data base. Despite this, we have also performed a crossover analysis.

The following sections provide a basic description of the different back-calculation methodologies used in this comparison exercise: (1) ΔC* approach: developed by Gruber et al. (1996) and applied specifically to the Indian Ocean by Sabine et al. (1999, hereinafter SAB99); (2) IPSL approach: an improved version of the back-calculation technique proposed by Lo Monaco et al. (2005a, hereinafter LM05) for the Southern Ocean; (3) ΔC* combined (ΔC*Comb): several improvements to the ΔC* approach were suggested for the North Atlantic by Pérez et al. (2002), some of which can also be applied in the Indian Ocean.

ΔC* (SAB99) approach

C_ANT is obtained as

C_ANT = C_T − ΔC_bio − C_T^280 − ΔC_dis, (1)

where C_T represents the C_T measurements, ΔC_bio reflects the change in C_T due to biological activity, C_T^280 denotes the C_T in equilibrium with the pre-industrial atmosphere, and ΔC_dis reflects the air-sea CO2 disequilibrium when water masses are formed. The first three terms make up the quasi-conservative tracer ΔC* (ΔC* = C_T − ΔC_bio − C_T^280), which reflects both the anthropogenic signal and the air-sea CO2 disequilibrium (ΔC* = C_ANT + ΔC_dis) (Gruber et al., 1996).

The biological term is

ΔC_bio = R_C × AOU + 1/2 × (TA − TA0 + R_N × AOU) − 106/104 × N*,

where AOU is the Apparent Oxygen Utilization (oxygen saturation (Benson and Krause, 1984) minus measured oxygen), TA0 is preformed TA, R_C and R_N are the stoichiometric C/O2 and N/O2 ratios according to Anderson and Sarmiento (1994), and N* is a quasi-conservative tracer used to identify nitrogen excess or deficits relative to phosphorus. N* values are converted to carbon with a denitrification carbon-to-nitrogen ratio of 106:−104 (Gruber and Sarmiento, 1997). TA0 is parameterized from surface data (Eq. 3) as a function of S, PO and θ, where S is salinity, PO is a conservative tracer (PO = O2 + 170 × P; Broecker, 1974), θ is potential temperature, and P and N are the phosphate and nitrate concentrations, respectively.

C_T^280 was obtained from thermodynamic equations of C_T as a function of preformed alkalinity for a pre-industrial partial pressure of CO2 (pCO2) of 280 ppm (Gruber et al., 1996). In order to keep the definition of ΔC* conservative, Gruber et al. (1996) linearized C_T^280 about the mean values of temperature, salinity and alkalinity observed in surface waters of the Atlantic Ocean, which yields an uncertainty of 4 µmol kg−1. SAB99 used this linearized formula (Eq. 5), obtained by Gruber et al. (1996) for the Atlantic Ocean.
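To make the bookkeeping of the back-calculation explicit, here is a schematic sketch of the ΔC* computation. The Redfield-type ratios follow Anderson and Sarmiento (1994) (C:N:P:-O2 = 117:16:1:170); TA0, C_T^280 and ΔC_dis are assumed to be supplied by the parameterizations discussed in the text, so this illustrates the structure of Eq. (1) rather than reproducing SAB99's fitted coefficients.

```python
R_C = 117.0 / 170.0   # C/O2 ratio (Anderson and Sarmiento, 1994)
R_N = 16.0 / 170.0    # N/O2 ratio

def delta_c_star(ct, aou, ta, ta0, ct280, nstar=0.0):
    """Quasi-conservative tracer DeltaC* = C_T - DeltaC_bio - C_T^280.
    The biological term combines soft-tissue remineralization (R_C * AOU),
    CaCO3 dissolution (half the TA change, corrected for nitrate), and the
    106:-104 denitrification conversion of N* (Gruber and Sarmiento, 1997).
    All inputs in umol kg-1; ta0 and ct280 come from the parameterizations
    discussed in the text."""
    dc_bio = R_C * aou + 0.5 * (ta - ta0 + R_N * aou) - (106.0 / 104.0) * nstar
    return ct - dc_bio - ct280

def c_ant(dc_star_value, dc_dis):
    """Eq. (1): remove the air-sea disequilibrium term, estimated per
    sigma-theta interval, from DeltaC*."""
    return dc_star_value - dc_dis
```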
Finally, SAB99 obtained ΔC_dis following the technique proposed by Gruber et al. (1996): (i) for water masses younger than 40 years they used the ΔC*τ method, where C_Tτ is the C_T in equilibrium with atmospheric CO2 at the time of water mass formation, t_form = t_obs − τ, the water mass age τ being calculated from CFC-12 ages, with the atmospheric time history for CO2 taken from the South Pole SIO station (Keeling and Whorf, 2005); (ii) for old waters with CFC-12 concentrations lower than 0.005 pmol kg−1 they assumed no anthropogenic carbon, so that ΔC_dis is given by mean ΔC* values; and (iii) for waters older than 40 years with significant CFC-12 concentrations they used a combination of the two methods mentioned above. Values of ΔC_dis were determined along σθ intervals. One of the main assumptions of this method is that the effective disequilibrium values remain more or less constant within the outcrop region of each isopycnal surface. Consequently, we have used the SAB99 disequilibrium values in this work, taken from their Tables 2 and 3.

The ΔC* approach assumes that: (1) total alkalinity is not significantly affected by the CO2 increase in the atmosphere; (2) the effective CO2 air-sea disequilibrium has stayed constant within the outcrop region of a particular isopycnal surface; (3) water transport is mainly along isopycnal surfaces; (4) preformed O2 is in equilibrium with the atmosphere; and (5) the decomposition of organic matter follows a constant Redfield relationship. See Matsumoto and Gruber (2005) for a discussion of these assumptions.

IPSL approach

The improvements introduced in the ΔC_bio calculation by LM05 are (i) to account for the oxygen disequilibrium in waters formed under the ice, and (ii) a better characterization of TA0 by using two different relationships (depending on water mass origin), which were determined using either winter and early spring surface measurements (0-50 m) or subsurface measurements (50-150 m). In these relationships, k stands for the mixing ratio of ice-covered surface waters, determined using an optimum multiparameter method (OMP, see Appendix B), and α is the mean O2 undersaturation in ice-covered waters, 12% as justified in LM05. The stoichiometric term (1/R_C + 0.5/R_N) equals 0.8, following Körtzinger et al. (2001). Linear equations for the preformed values, TA0 and C_T^0,obs, were obtained from winter and early spring surface data from the South Atlantic and Indian oceans (WOCE, Key et al., 2004; and OISO, Metzl et al., 2006), where all the terms have been previously defined. The stoichiometric terms (R_N and R_P) are taken from Körtzinger et al. (2001).

In the subtropical Indian Ocean the only contribution of northern water is North Atlantic Deep Water (NADW) entering the Indian Ocean south of Africa. The mixing ratio of NADW (k_NADW) is obtained from the OMP analysis; the southern water contribution is then given by 1 − k_NADW. C_T^0,REF is calculated using a water mass formed before the industrial revolution, which serves as a reference (details are given in LM05). In 2002, along the CD139 cruise, NADW was detected in the Mozambique Basin between 2000 and 4000 m (Fig. B2) as a salinity maximum, old enough to be C_ANT-free, where no CFC-12 or CCl4 were found (Fig. 5b). Following LM05, and using data with a contribution of NADW higher than 50%, the mean value for C_T^0,REF is −58.6 ± 1.4 µmol kg−1 (37 samples). This value is not significantly different from that used by Vázquez-Rodríguez et al. (2009) when applying the IPSL method to the whole Atlantic Ocean.
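The ΔC*τ route for the disequilibrium term can be sketched as follows. The carbonate-system solver is left as a caller-supplied function (`ct_eq`, a hypothetical signature; in practice one would wrap a package such as PyCO2SYS), and the CO2 history is a smooth stand-in for the South Pole SIO record, so the snippet only illustrates the t_form = t_obs − τ logic described above.

```python
import numpy as np

# Smooth stand-in for the atmospheric CO2 record (ppm), 1850-2002;
# real work would use the South Pole SIO record (Keeling and Whorf, 2005).
yrs = np.arange(1850.0, 2003.0)
co2_ppm = 280.0 + 0.012 * (yrs - 1850.0) ** 1.6

def dc_dis_cstar_tau(ct, dc_bio, ct280, tau, obs_year, theta, s, ta0, ct_eq):
    """DeltaC_dis = DeltaC* - (C_T(tau) - C_T^280), where C_T(tau) is the
    equilibrium C_T for the atmospheric pCO2 at the formation time
    t_form = t_obs - tau. ct_eq(pco2, theta, s, ta) is a hypothetical
    carbonate-system solver supplied by the caller."""
    pco2_form = np.interp(obs_year - tau, yrs, co2_ppm)
    ct_tau = ct_eq(pco2_form, theta, s, ta0)    # C_T at formation
    dc_star = ct - dc_bio - ct280
    return dc_star - (ct_tau - ct280)
```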
The IPSL approach assumes that: (1) total alkalinity is not significantly affected by the CO2 increase in the atmosphere; (2) the oxygen disequilibrium in ice-covered waters is constant in space and time; and (3) the decomposition of organic matter follows a constant Redfield relationship.

ΔC* combined approach

C_ANT is obtained from Eq. (1) as in SAB99, where ΔC_bio is calculated as in SAB99 (Sect. 3.1); TA0 is calculated following the LM05 approach (Sect. 3.2, Eq. 12); and C_T^280 is calculated as a function of θ, S, TA0 and pCO2^280 using the constants from Lueker et al. (2000) instead of the linearized Eq. (5). pCO2^280 includes the water vapour correction term, as indicated by Pérez et al. (2002). ΔC_dis is calculated as in SAB99 (Sect. 3.1) but using the correspondingly modified TA0 and C_T^280. The ΔC* combined approach shares the same assumptions as ΔC* described above.

Regarding the uncertainty of each method, a detailed error assessment is given in the corresponding publications. Typical uncertainties converge to a common value of 6-10 µmol kg−1.

TrOCA approach

This carbon-based method uses the semi-conservative parameter TrOCA (Tracer combining Oxygen, inorganic Carbon and total Alkalinity). A detailed description of the TrOCA approach is given in Touratier and Goyet (2004a, b), with further improvements in Touratier et al. (2007):

C_ANT = (TrOCA − TrOCA0) / a,

where C_ANT is calculated as the difference between the current TrOCA = O2 + a (C_T − TA/2) (Eq. 15b) and the pre-industrial TrOCA0 = exp(b + c θ + d/TA²) (Eq. 15c) according to Touratier et al. (2007), divided by the stoichiometric coefficient a. TrOCA0 and the coefficient a were adjusted using 14C and CFC-11 data to identify water masses with particular ages. The parameter values used are a = 1.279 ± 7.3 × 10−3, b = 7.511 ± 5.2 × 10−3, c = −1.087 × 10−2 ± 2.5 × 10−5 °C−1 and d = −7.81 × 10^5 ± 2.9 × 10^4 (µmol kg−1)². The TrOCA approach assumes that, below the mixed layer, the decomposition of organic matter follows a constant Redfield relationship and that today's air-sea CO2 disequilibrium is the same as in pre-industrial times. No explicit assumptions are made about the preformed values of alkalinity or inorganic carbon. The estimated uncertainty of the TrOCA approach for estimating C_ANT is about 6 µmol kg−1 (Touratier et al., 2007).
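Because the TrOCA relations above are fully specified by the quoted coefficients, they translate directly into code. The sketch below implements them as written; the sample values in the example are illustrative, not data from the CD139 section.

```python
import numpy as np

# Central TrOCA coefficients from Touratier et al. (2007), as quoted above
A, B, C, D = 1.279, 7.511, -1.087e-2, -7.81e5

def c_ant_troca(o2, ct, ta, theta):
    """C_ANT = (TrOCA - TrOCA0) / a, with TrOCA = O2 + a (C_T - TA/2) and
    TrOCA0 = exp(b + c*theta + d / TA**2). O2, C_T, TA in umol kg-1,
    theta in degrees C; returns C_ANT in umol kg-1."""
    troca = o2 + A * (ct - 0.5 * ta)
    troca0 = np.exp(B + C * theta + D / ta**2)
    return (troca - troca0) / A

print(c_ant_troca(o2=200.0, ct=2250.0, ta=2350.0, theta=2.0))  # ~17
```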
TTD method

The Transit Time Distribution (TTD) method is a formal way of describing the history of the individual components (e.g. water molecules) making up a water sample. For any water sample collected at a given location in the ocean, the various water molecules making up the sample will have travelled different pathways to reach that point, with each molecule having its own "age", i.e. the time since it was last in contact with the atmosphere. The distribution of all these ages comprises the TTD of the water sample. Once the TTD is established, in principle the concentration of any other passive tracer (e.g. anthropogenic CO2) entering the ocean at the surface can be calculated. In several previous studies (following Waugh et al., 2004, 2006) the TTDs have been assumed to have an inverse Gaussian shape, with the mean age (Γ) and the width (Δ) of the TTD as fundamental descriptors. In these studies, and in this work, it is also assumed that the ratio Δ/Γ = 1, i.e. the mean age is equal to the width of the TTD. This is found to be a realistic description of the relation between advective and diffusive transport in the ocean (Waugh et al., 2004, 2006; Tanhua et al., 2008).

The TTD method used here to estimate C_ANT concentrations is that described by Waugh et al. (2004, 2006). We assume that C_ANT is an inert passive tracer (with a well-known atmospheric history) and that the transfer of inorganic carbon from the atmosphere to the ocean can be determined by using the empirical relations between surface salinity and alkalinity (e.g. Brewer et al., 1986) and the inorganic carbon chemistry. Thus, with only observations of salinity, temperature and tracer, the oceanic C_ANT input function for each water sample can be determined. We used CFC-12 data to determine the TTDs of the water samples, using the time-dependent saturation described in Tanhua et al. (2008), and we have assumed that the disequilibrium of carbon between the atmosphere and the surface ocean did not change during the last few hundred years. The latter assumption is possibly the single largest source of error for the C_ANT TTD calculation; other sources of error are discussed in Waugh et al. (2006) and Tanhua et al. (2008). For instance, uncertainties in the Δ/Γ ratio propagate to uncertainties in C_ANT TTD, depending on the CFC concentrations; the C_ANT TTD estimate is relatively insensitive to errors in the Δ/Γ ratio for CFC-12 levels higher than 0.5-0.6 pmol kg−1 and for moderate to large mixing (Δ/Γ ≥ 0.75). The TTD method is also sensitive to uncertainties in the CFC saturation state at the time of water mass formation; the biasing effect is larger for CFC-12 concentrations larger than about 450 ppt, due to the low atmospheric increase rate in recent times (Tanhua et al., 2008).
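The inverse-Gaussian TTD and its convolution with a surface history are compact enough to sketch directly. The snippet below builds the one-parameter (Δ = Γ) TTD used in the text and propagates an arbitrary surface tracer history into the interior; the surface history is left to the caller, so this is a schematic of the method rather than the authors' implementation (which also includes the time-dependent CFC saturation of Tanhua et al., 2008).

```python
import numpy as np

def ig_ttd(t, gamma, delta=None):
    """Inverse-Gaussian TTD with mean age gamma and width delta (years);
    delta defaults to gamma, i.e. the Delta/Gamma = 1 assumption."""
    delta = gamma if delta is None else delta
    return (np.sqrt(gamma**3 / (4.0 * np.pi * delta**2 * t**3))
            * np.exp(-gamma * (t - gamma)**2 / (4.0 * delta**2 * t)))

def interior_concentration(gamma, hist_years, hist_values, obs_year, dt=0.25):
    """Convolve the surface history with the TTD:
    c(obs_year) = sum over transit times t' of G(t') * c_surf(obs_year - t')."""
    t = np.arange(dt / 2.0, 500.0, dt)                       # transit times, yr
    c_surf = np.interp(obs_year - t, hist_years, hist_values, left=0.0)
    return float(np.sum(ig_ttd(t, gamma) * c_surf) * dt)

# Schematic use: pick gamma so the modelled pCFC-12 matches the sample,
# then reuse the same gamma with the surface C_ANT history.
```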
General ocean model C_ANT

The model used here is OCCAM (Ocean Circulation and Climate Advanced Modelling), a global, medium-resolution, primitive equation ocean general circulation model (Marsh et al., 2005, describe a high-resolution version). OCCAM's vertical resolution is 66 levels (5 m thickness at the surface, 200 m at depth), with a horizontal resolution of typically 1 degree. OCCAM's prognostic variables are temperature, salinity, velocity and free-surface height. The model includes an Elastic Viscous Plastic sea-ice scheme, a K-Profile Parameterization mixed layer and Gent-McWilliams eddy parameterisation. Advection is 4th-order accurate, and the model is time-integrated using a forward leapfrog scheme with a 1 h time-step. Surface fluxes of heat and freshwater are not specified but are calculated empirically using NCEP-derived atmospheric boundary quantities (Large and Yeager, 2004). In this way, simulations are forced for the period January 1958 to December 2004, and repeat cycles of this 47-year forcing are used to spin up the model. OCCAM incorporates an NPZD (Nitrate Phytoplankton Zooplankton Detritus) plankton ecosystem model (Oschlies, 2001; Yool et al., 2007), which drives the biogeochemical cycles of nitrogen, carbon, oxygen and alkalinity. Air-sea fluxes of CO2 and CFC tracers (for water-mass age) make use of the protocols developed for the OCMIP-2 project (see Dutay et al., 2002; Matsumoto et al., 2004). The simulation shown here was initialised from rest using physical and biogeochemical climatological fields (Conkright et al., 2002; Key et al., 2004) and underwent an initial 47-year cycle of pre-industrial spin-up. After this, the model used three 47-year cycles to simulate the period 1864 to 2004, during which the model atmospheric pCO2 followed the historical record. A duplicate C_T tracer that was not exposed to this record was used both to separate the natural cycle of carbon from the anthropogenic perturbation and to control for simulation drift.

ΔC* distributions

ΔC* is calculated here by two methods: SAB99 (Sect. 3.1) and the combined (Comb) approach (Sect. 3.4). The difference between these stems from their estimations of TA0, included in the ΔC_bio term, and of C_T^280, which is also dependent on TA0. In SAB99, TA0 is taken from Eq. (3) and adjusted as a function of Indian Ocean data shallower than 60 dbar; C_T^280 is calculated from Eq. (5), which was obtained by Gruber et al. (1996) for the Atlantic Ocean. In the case of the combined approach, TA0 distinguishes waters of southern and northern origin, taking the mixing into account, and C_T^280 is obtained using a thermodynamic formula as a function of pCO2^280 and TA0 (see Sect. 3.3).

These details in the ΔC* calculation have a significant impact on estimates of C_ANT. Figure 2 shows the vertical distributions of ΔC* SAB99 and ΔC* Comb: below 1000 dbar, ΔC* SAB99 is lower (more negative) than ΔC* Comb by about 10 µmol kg−1, with the maximum negative differences (SAB99 minus combined ΔC*) found in the NADW core (Fig. 3). This difference inverts in the upper 1000 dbar, where ΔC* Comb is higher by up to 5 µmol kg−1 (Figs. 2 and 3). A closer look at the contributions of ΔC_bio and C_T^280 to the ΔC* difference (Fig. 3) shows that C_T^280 is primarily responsible for the differences between the SAB99 and Comb ΔC* in the upper 1000 dbar: here the mean ± STD contribution of C_T^280 to the mean ± STD ΔC* difference is −4.5 ± 1.9 compared with 3.1 ± 2.5, while below 1000 dbar both terms have a similar contribution, 4.7 ± 1.8 µmol kg−1 from C_T^280 and 4.2 ± 1.7 µmol kg−1 from ΔC_bio (ΔC* difference −8.9 ± 2.4 µmol kg−1). This result points to the importance of using accurate approximations for the preformed values, especially where mixing of water masses of very different origin occurs, and also indicates that the complete thermodynamic equation for C_T^280 should be used in preference to linearized functions obtained for other basins.
ΔC_dis values

The next step in the ΔC* back-calculation approach is to obtain the ΔC_dis values by σθ intervals (Sect. 3.1). ΔC_dis values should ideally be calculated over a wide regional or age range; therefore, databases with ample coverage are needed. These also enable the identification and treatment of different water mass end-members. The database used by SAB99 covered the whole Indian Ocean, where most of the waters below the mixed layer have a southern origin, except for the intermediate waters formed in the Arabian Sea. Taking this into account, and given the assumption of a ΔC_dis constant in time, we used the SAB99 ΔC_dis values. No Arabian Sea end-member was considered. However, to check the impact of using different TA0 and C_T^280 approximations on ΔC_dis, we recalculated new values for the effective air-sea disequilibrium using our own CD139 data set for ΔC* and ΔC*τ: (i) from Sect. 3.1, the SAB99 approach; and (ii) from Sect. 3.3, the combined approach.

Figure 4 compares the resulting values. Our own data for σθ < 26.8 should be disregarded, due to the reduced density of samples with these characteristics (situated above 600 dbar along the CD139 section). Data between 27.55 > σθ > 27.25 should be weighted means between ΔC* and ΔC*τ; however, our own data have no points that fulfil the conventions in SAB99, and consequently they were linearly interpolated. ΔC_dis values between 27.2 > σθ > 26.8 are obtained from ΔC*τ values, and they show a clear consistency between the SAB99 (SAB99 tables and our own data) and combined (our data) approaches. For the ΔC*τ calculation the main source of error is the estimation of the age, calculated identically in the three ΔC*τ approaches; here the effect of a different preformed TA0 (for the combined method) is minor (Matsumoto and Gruber, 2005). In deep layers, σθ ≥ 27.6, the ΔC_dis disagreement between the two SAB99 methods and the combined one is obvious. The two SAB99 estimations coincide, as expected, with values from −11 to −18.6 µmol kg−1, while with the combined approach ΔC_dis varies from −4 to −6.5 µmol kg−1. This difference stems from the C_T^280 and TA0 terms (Fig. 3) included in the ΔC* formula.

C_ANT distributions

The vertical pCFC-12, pCCl4 and C_ANT distributions, along with some reference neutral density levels, are shown in Figs. 5 and 6. In deep layers below γ = 27.7, no C_ANT is expected according to the CFC-12 levels (Fig. 5a); here the ΔC* method is used to calculate ΔC_dis, and C_ANT SAB99 values are practically null, 0 ± 3 µmol kg−1, which is less than the limit of detection of the method. However, according to the CCl4 levels, C_ANT is expected in this layer. In this sense, C_ANT IPSL and TrOCA estimates range from 0 to 10 µmol kg−1, with slightly lower values estimated by the TTD method and values below 5 µmol kg−1 simulated by OCCAM. The western NADW core (Fig. B2) between 2000-3000 dbar presents consistently negative values for the SAB99 method (Fig. 5b). A combination of processes leads to these negative values: predominantly the erroneous estimation of TA0 in the biological correction, and also the application of a ΔC_dis that is too negative and more representative of the southern waters dominant in the rest of the section. The TrOCA, IPSL and TTD methods do detect this core, while the OCCAM results are quite homogeneous for deep waters (Figs. 5 and 6). The influence of water formed under ice is clearly detected by the C_ANT IPSL method below 4000 dbar (Fig. B2), where C_ANT slightly increases towards the bottom (Fig. 5f), as pCCl4 does (Fig. 5b).
The TrOCA and TTD methods also show this slight increase (Fig. 6b, d), despite being based on completely different assumptions.

In the region between roughly 1000 and 1500 dbar, the SAB99 method uses weighted means of ΔC*τ and ΔC* to estimate ΔC_T^dis (Fig. 4), and a steep gradient in C_ANT is detected here (Fig. 5c), with a clear discontinuity with pressure that is not apparent in any of the other approaches or in the physical, biogeochemical or tracer variables (not shown). Similar results were found by Waugh et al. (2006) in the Atlantic Ocean. The other C_ANT methods, with a smoother C_ANT gradient in this pressure range, point to a deeper penetration of C_ANT below the AAIW limit.

In thermocline waters (upper 1000 dbar), all methods except TTD show C_ANT values increasing eastwards, near the formation region of SAMW (McDonagh et al., 2005). Although the distributions are similar, the absolute values differ, as discussed below using Fig. 7. Figure 7 shows the mean±STD vertical profile of the C_ANT difference referred to SAB99. All of the regional plots show a sharp change around 1300 dbar due to the discontinuity in the SAB99 C_ANT; all the methods yield higher C_ANT values than SAB99, up to a maximum of 8 µmol kg−1 for the TrOCA method in the middle part of the section (Fig. 7c), where the salinity minimum is clearly noted. Surprisingly, the IPSL, TrOCA and TTD differences have similar values and distributions below 3500 dbar, with consistently increasing C_ANT values towards the bottom, especially in the western and central portions of the section (Fig. 7b, c), where younger AABW arrives at the section from the Weddell Sea. In the upper 1000 dbar, TrOCA and SAB99 are practically in agreement, while OCCAM presents lower values than SAB99, especially in the western part of the section (Fig. 7b), and the TTD differences increase continuously towards the surface, reaching up to 15 µmol kg−1. In this depth range, the IPSL values are consistently higher than those of any other method. Between 1500 and 3500 dbar, taken as a whole (Fig. 7a), TrOCA and TTD give higher results relative to SAB99 than OCCAM and IPSL do, but differences arise within regions.

C_ANT inventories

Studying the C_ANT specific inventories by water mass domains again shows clear discrepancies and similarities. We took the neutral density layer definitions of Robbins and Toole (1997) to define five layers (Fig. 5): i) surface water, shallower than about 200 dbar; ii) SAMW; iii) AAIW; iv) deep waters (Circumpolar Deep Water, CDW, and NADW on the western end); and v) bottom water, below roughly 3500 m, with an Antarctic origin. A similar approach was used by McDonagh et al. (2008) to constrain the velocity field along this section.

The initially calculated C_ANT values are randomly modified by ±5 µmol kg−1. A set of 100 perturbations is done for each of the five methods; finally, a mean and standard deviation for the total and per-layer C_ANT inventory are calculated, with the standard deviation for each layer weighted by the layer's contribution to the total section area. Inventories are shown in Fig. 8 and Table 1. The SAB99 method estimates the lowest total inventory of any method (Fig. 8, Table 1), even compared with OCCAM, which seems to underestimate C_ANT in the deep and bottom layers (Fig. 7).
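A sketch of the perturbation procedure just described, under the stated settings (±5 µmol kg−1 perturbations, 100 realizations, area-weighted layer statistics). The uniform-noise choice, the variable names, and the use of a simple mean as a stand-in for the inventory integral are our assumptions, since the text only states that values are "randomly modified by ±5 µmol kg−1":

```python
import numpy as np

rng = np.random.default_rng(0)

def inventory_stats(cant, layer_area_frac, n_pert=100, amp=5.0):
    """cant: dict layer -> 1-D array of C_ANT values (µmol/kg).
    layer_area_frac: dict layer -> fraction of the total section area.
    Returns per-layer (mean, area-weighted std) over the perturbations."""
    stats = {}
    for layer, vals in cant.items():
        # n_pert perturbed copies of the layer's C_ANT field.
        noise = rng.uniform(-amp, amp, size=(n_pert, vals.size))
        # Stand-in for the full inventory integral over the layer.
        inv = (vals + noise).mean(axis=1)
        stats[layer] = (inv.mean(), inv.std() * layer_area_frac[layer])
    total = sum(m for m, _ in stats.values())
    return stats, total
```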
Discrepancies and similarities arise when the inventories are studied by layers. Biological processes in the upper mixed layer (comprised within the surface layer defined here) occurring during the cruise prevent the use of the carbon-based methods (SAB99, TrOCA and IPSL), for example when AOU is negative; nor do these methods resolve the seasonal variability in the mixed layer. TTD and OCCAM do provide C_ANT estimates in this layer by circumventing the direct use of biogeochemical variables: TTD relies on CFC ages, which are more precise in upper, younger waters, while OCCAM accounts for the surface circulation and air-sea CO2 equilibration, and its upper waters are less affected by uncertainties in the model physics and chemistry. Interestingly, the mean specific inventories for the upper layer from TTD and OCCAM are similar, around 7 mol C m−2 (Table 1).

Within the SAMW layer, all methods except IPSL agree within ±2 mol C m−2. Within AAIW, IPSL is again high, TTD and TrOCA agree, and OCCAM and SAB99 are lower. In deep waters, TTD and TrOCA give similar results, significantly higher than any of the other methods, even IPSL. In the bottom layer, TrOCA, IPSL and TTD provide similar results, with a significant C_ANT accumulation in this layer, while OCCAM, SAB99 and IPSL without the oxygen undersaturation correction (IPSL-Zero) show no accumulation.

Discussion

Most comparative studies of C_ANT estimation in the ocean reach no clear conclusion about the best method, because all methods are subject to uncertainties (Coatanoan et al., 2001; Feely, 2001; Hall et al., 2004; LM05; Sabine and Feely, 2001; Wanninkhof et al., 1999; Waugh et al., 2006). In this work, with the help of transient tracers, we discuss the back-calculation, TrOCA, TTD and OGCM C_ANT methods, trying to assess their strengths and caveats, and finally which may provide the optimal range of estimates.

Disequilibrium values in the ΔC* method

The assumptions of the ΔC* method and its main sources of uncertainty are thoroughly discussed in Matsumoto and Gruber (2005). They conclude that the change in the air-sea CO2 disequilibrium over time is the single most important contribution to the bias in ΔC*-based C_ANT estimates. The method assumes a constant disequilibrium over time; however, the ocean is taking up C_ANT with an increasingly negative disequilibrium. Consequently, C_ANT will be overestimated, especially in upper and younger waters, causing a positive bias of 5 Pg C for the whole ocean (Matsumoto and Gruber, 2005).

The disequilibrium values obtained here from ΔC*τ values are equal, as the main source of uncertainty is the age estimate, while biases from using different expressions for TA^0 and C_T^280 cancel out (Fig. 4). When the disequilibrium values are calculated from ΔC* values in waters where no C_ANT is expected, the disagreement is clear (Fig. 4). Here, the TA^0 and C_T^280 estimates do matter (Fig. 3), causing large discrepancies in the final disequilibrium estimate. ΔC_T^dis values estimated by the SAB99 method become more negative with increasing density, while in the combined approach they remain slightly negative and practically constant (Fig. 4).
Which ΔC_T^dis values are more reasonable given our current knowledge of CO2 dynamics in the upper ocean? The temporal evolution of the disequilibrium can only be obtained from OGCMs (e.g., Matear et al., 2005; Matsumoto and Gruber, 2005) or from transient-tracer transit-time distributions (Hall and Primeau, 2004), while current values of the total (natural plus anthropogenic) disequilibrium can be approximated from measured (Takahashi et al., 2002) or empirical (McNeil et al., 2007) winter-early spring surface ocean-atmosphere gradients in pCO2. The models provide the change in ΔC_T^dis from the preindustrial era to the 1990s, showing small changes, less than −5 µmol kg−1, in the subtropical Indian Ocean, and moderate changes, −5 to −10 µmol kg−1, south of 60°S. Consequently, current pCO2 gradients can be compared with the ΔC_T^dis values obtained from the ΔC* approach.

Figure 9a shows the mean surface pCO2 gradient, sorted by surface density, in the winter Indian Ocean from the Takahashi et al. (2002) climatology and from the empirical approach of McNeil et al. (2007). Comparing Figs. 4 and 9b, discrepancies are evident, first in the shape of the curves and second in the range of values estimated. The discontinuity in the SAB99 ΔC_T^dis values appears to be an artefact of the approximations used in the method. Deep waters around σθ = 27.4 have ΔC_T^dis values around −8 µmol kg−1 according to SAB99, but around −3 µmol kg−1 according to Takahashi et al. (2002), or −5 to −10 µmol kg−1 according to McNeil et al. (2007). If we consider the latter work more reliable for waters south of 60°S, the SAB99 ΔC_T^dis values are reasonable in deep waters, and the C_ANT values as well.

Nevertheless, we have to question whether it is sensible to compare surface winter air-sea C_T disequilibrium values from the whole Indian Ocean with water masses formed in distinct regions: the Southwest Atlantic (AAIW), the Southeast Indian Ocean (SAMW), or the Weddell Sea (WSDW). The data of McNeil et al. (2007) show a large variability in waters denser than σθ = 26.5 (found south of 55°S), which suggests the difficulty of defining a ΔC_T^dis value for the intermediate and deep waters along the CD139 section.
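The comparison above rests on converting a surface ΔpCO2 into an equivalent C_T disequilibrium. Following the scaling given in the Fig. 9 caption (ΔC_T^dis = ΔpCO2/359 × 2020/R, with 359 µatm the mean atmospheric pCO2 and 2020 µmol kg−1 the mean surface ocean C_T for 1995), a worked version; the −15 µatm example gradient is hypothetical:

```python
def dpco2_to_dcdis(dpco2, revelle, pco2_atm=359.0, ct_surf=2020.0):
    """Convert an ocean-atmosphere pCO2 gradient (µatm) into an
    equivalent C_T disequilibrium (µmol/kg) via the Revelle factor R:
    dC_T/C_T = (1/R) * dpCO2/pCO2."""
    return dpco2 / pco2_atm * ct_surf / revelle

# Example: a -15 µatm winter gradient, with the two Revelle factors of Fig. 9b
for r in (9.0, 13.0):
    print(r, round(dpco2_to_dcdis(-15.0, r), 2))
# R = 9  -> about -9.4 µmol/kg;  R = 13 -> about -6.5 µmol/kg
```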
The disequilibrium values obtained from the SAB99 method therefore seem to be flawed: in upper and intermediate waters they lead to a C_ANT underestimate, while in deep and bottom waters they cancel any C_ANT accumulation. The other carbon-based methods, IPSL and TrOCA, which make no a priori disequilibrium assumption, predict 1.34 and 1.13 times higher C_ANT specific inventories in the surface, SAMW and AAIW waters, while the TTD and OCCAM estimates are 1.29 and 1.18 times higher.

Transient tracers such as CFC-12 and CCl4 provide useful information on oceanic ventilation and transport timescales, and therefore on the uptake and storage of C_ANT in the ocean. The increase of C_ANT in the atmosphere began earlier than that of these transient tracers. The presence of a significant CFC-12 or CCl4 concentration indicates that a water parcel was exposed to anthropogenic CO2 in the atmosphere; however, a region free of these tracers may still contain significant amounts of C_ANT, leading to C_ANT underestimation (e.g., Goyet and Brewer, 1993; Matsumoto and Gruber, 2005; Tanhua et al., 2004). Significant concentrations of CCl4 are detected in the atmosphere after 1940, compared with 1960 for CFC-12, so CCl4 can be used to trace C_ANT via the ΔC* method (Holfort et al., 1998; Wallace, 2001).

Another important assumption that can cause significant biases in ΔC*-based C_ANT estimates is that CFCs provide accurate ventilation ages (Matsumoto and Gruber, 2005). This only holds if several conditions are fulfilled: first, that the preformed tracers were saturated and, second, that transport is mainly advective. Mixing biases on age are largely compensated when waters with different but significant tracer concentrations mix within a linear stretch of the atmospheric time evolution (e.g., Haine and Hall, 2002). According to Matsumoto and Gruber (2005), single-tracer ages in the ΔC* method should only be applied to CFC ages of less than 30 years, causing limited biases in the C_ANT estimate because both tracers, CFCs and C_ANT, increased roughly linearly in the 1990s.

To assess which method provides a more robust C_ANT estimate, we can study the relationship between the partial pressures of CFC-12 (pCFC-12) and CCl4 (pCCl4) (calculated assuming 100% saturation and using the solubility equations of Warner and Weiss (1985) and Bullister and Wisegarver (1998), respectively) and the theoretical upper and lower limits of oceanic C_ANT. Assuming mainly advective transport with little mixing, the theoretical time evolution of oceanic C_ANT can be calculated for two types of surface water, of Antarctic (TA = 2280 µmol kg−1, θ = 4°C, Sal = 34, PO4 = 2 µmol kg−1, SiO2 = 20 µmol kg−1, Revelle = 13) and subtropical (TA = 2340 µmol kg−1, θ = 17°C, Sal = 35.5, PO4 = 0.2 µmol kg−1, SiO2 = 3 µmol kg−1, Revelle = 10) origin, using the atmospheric evolution of CO2 in the Southern Hemisphere, a preindustrial CO2 value of 280 ppmv, and the CO2 constants of Lueker et al. (2000). The physical and chemical characteristics of surface waters in the subtropical Indian Ocean and the Indian sector of the Southern Ocean were taken from the GLODAP atlas and Metzl et al. (2006); the Revelle factors are in agreement with Sabine et al. (2004). The theoretical curves are only valid under time-constant temperature, salinity and alkalinity, an assumption compromised by ocean warming (e.g., Levitus et al., 2001), likely alkalinity changes due to ocean acidification (e.g., Sarma et al., 2002), and salinity increases (e.g., McDonagh et al., 2005). Mixing should be relevant in deep and bottom waters, given the formation mechanisms of CDW and AABW. Consequently, the calculated curves should only be taken as indicative of the possible upper and lower limits of the C_ANT evolution in the upper waters of the subtropical Indian Ocean, since both will be overestimates in deep and bottom waters, where mixing overcomes advection.

The relationship between pCFC-12 and C_ANT in upper waters is shown in Fig. 10, along with the theoretical curves. IPSL stands out, with C_ANT values above the upper (Revelle = 10) theoretical limit. Despite being based on different approaches, the TrOCA and SAB99 C_ANT values are very similar and lie between the two theoretical limits. The TTD C_ANT values approach the upper limit towards the surface (where most waters have a subtropical origin) and the lower limit towards the AAIW layer. OCCAM is the opposite of IPSL, with values lower than any expected. However, although the CFC-12 penetration in OCCAM compares well with our data, the model temperature and salinity appear lower than observed, so pCFC-12 in OCCAM is underestimated and the OCCAM pCFC-12 vs. C_ANT relationship shown in Fig. 10 breaks down at higher temperatures (Huhn et al., 2001).
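A minimal sketch of how a measured CFC-12 concentration is turned into a partial pressure and a ventilation age, as used above. The atmospheric history points below are placeholders, and a real calculation would evaluate the Warner and Weiss (1985) solubility function at the sample's θ and S and use the full Southern Hemisphere atmospheric record:

```python
import numpy as np

# Hypothetical atmospheric CFC-12 history (year, dry-air mole fraction, ppt);
# placeholder values for illustration only.
atm_years = np.array([1960.0, 1970.0, 1980.0, 1990.0, 2000.0])
atm_ppt   = np.array([  30.0,  120.0,  300.0,  480.0,  540.0])

def pcfc12(conc_pmol_kg, F_mol_kg_atm):
    """pCFC-12 in ppt: pmol/kg divided by the solubility F (mol kg-1 atm-1)
    gives 1e-12 atm, i.e. parts per trillion, assuming 100% saturation."""
    return conc_pmol_kg / F_mol_kg_atm

def cfc12_age(p_ppt, sample_year=2002.0):
    """Ventilation age: time since the atmosphere last matched p_ppt."""
    if p_ppt <= atm_ppt[0]:
        return sample_year - atm_years[0]   # older than the record
    year_formed = np.interp(p_ppt, atm_ppt, atm_years)
    return sample_year - year_formed
```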
The C_ANT values in deep waters are generally lower than the theoretical value. This is not surprising: before 1960 the relationships between CFC-12 or CCl4 and C_ANT are strongly nonlinear, there is a strong mixing bias, and some water masses incorporate ice-shelf waters formed with a 45% undersaturation in CFC-12 or CCl4 (Huhn et al., 2001). Significant tracer concentrations are found in deep Indian waters, pointing to a likely exposure of these waters to atmospheric C_ANT levels when they formed. Higher levels of pCFC-12 and pCCl4 below 4000 dbar were found in the western part of the section (Fig. 5), where younger AABW arrives at the section (Orsi et al., 1999). The SAB99 and OCCAM mean concentrations in Fig. 11b are 0.4±2.8 and 0.4±0.6 µmol kg−1, respectively, while for the IPSL, TrOCA and TTD methods they are 1.5±3, 3±3, and 3±2 µmol kg−1. Neither Fig. 10 nor Fig. 11 shows the lower part of the AAIW layer, but here TrOCA and TTD yield similar C_ANT values, SAB99 is compromised by its discontinuity, and the IPSL estimates are 10 µmol kg−1 higher in the upper AAIW than in the lower AAIW.

Evaluation of the five C_ANT methods

Taking into account i) that most of the water masses in the subtropical Indian Ocean were formed in the Southern Ocean, a particularly challenging area for data- or model-based C_ANT estimates, and ii) the uncertainties inherent to the set of five C_ANT methods evaluated here, we attempt to summarise the consistency of each method using the knowledge about water mass formation and the relation between the C_ANT estimates and the transient tracers in the five water mass layers defined along the 32°S Indian Ocean section.

Upper surface waters, shallower than about 200 dbar, occupy a small fraction of the whole water column. Carbon-based methods are unable to properly correct for biological activity and therefore unable to estimate C_ANT here (Fig. 8, Table 1), while TTD is based on tracer distributions, which are independent of biological activity but affected by seasonality, and OCCAM assumes air-sea CO2 equilibration, which does not always hold. Despite the uncertainties, we consider TTD and OCCAM more reliable in the surface layer, and therefore the inventory here would be 7±0.2 mol C m−2.

In the SAMW layer, between roughly 200 and 700 dbar, the carbon-based, TTD and OCCAM methods show relatively good agreement in C_ANT values (to within ±6 µmol kg−1) and inventories (±2 mol C m−2). CFC-12 levels exceed 0.6 pmol kg−1 in this layer and the ratio between surface CO2 and CFC saturation is one (Fig. 6 in Matear et al., 2003), so the TTD estimates offer highly reliable support to the other methods, except IPSL, which appears too high. Thus the inventory in this layer would be 8.9±0.5 mol C m−2 (mean of SAB99, TTD, TrOCA and OCCAM).

In the AAIW layer, roughly 700 to 1500 dbar, the situation is more complicated. CFC levels below 1000 dbar drop sharply to values lower than 0.5 pmol kg−1, and the CO2/CFC saturation ratio varies between 0.8 and 0.9 in the AAIW formation area at the Brazil-Malvinas confluence; consequently, the uncertainty in the TTD C_ANT increases. OCCAM predicts a CFC-12 penetration in agreement with the observations and C_ANT values in agreement with the other methods; despite this, relatively low inventories are obtained because of the density field that follows from the misleading temperature and salinity fields. The C_ANT distributions obtained from TrOCA, TTD and OCCAM are quite similar (Figs. 6 and 7), while SAB99 has its discontinuity below 1200 dbar and the IPSL estimates are 10 µmol kg−1 high in the upper AAIW (Fig. 5).
No definitive support for any data-based method can be derived from the relation with the tracers between 1000 and 1500 dbar, where CFC-12 is too low and CCl4 is affected by decomposition. However, SAB99 can be disregarded because of its unreasonable discontinuity, and OCCAM because of its inaccurately simulated density field. Thus an inventory of 9.4±0.1 mol C m−2 can be assigned to the AAIW layer (mean of TTD and TrOCA).

In the deep and bottom layers, OCCAM fails to predict any CFC-12 or CCl4 signal, while the data show an increase of both with depth, especially CCl4 below 3500 dbar. The SAB99 C_ANT estimates distribute randomly around zero in these layers. As a result, low confidence is attributed to SAB99 and OCCAM here. The TTD, TrOCA and IPSL methods predict significant and consistent C_ANT values, from 0 to 10 µmol kg−1, except at the NADW core, where they agree on C_ANT ≈ 0 µmol kg−1. Relying on TTD, TrOCA and IPSL for both layers, the mean inventory would be 1.5±0.8 and 1.3±0.1 mol C m−2 in deep and bottom waters, respectively.

Summing up, we suggest a best estimate for the water column specific inventory in the subtropical Indian Ocean of 28±2 mol C m−2, which is significantly higher than the ΔC* value of 24±2 mol C m−2.

Conclusions

This work investigates the C_ANT penetration and inventory in the subtropical Indian Ocean along 32°S, calculated with data collected in 2002. Five different methods are compared and discussed: three carbon-based methods (ΔC*, IPSL and TrOCA), TTD, and a simulation from the OCCAM global model. Comparatively, the ΔC* method seems to yield too shallow a penetration of C_ANT, with no or very low C_ANT detected in deep or bottom waters, mainly due to the formulation and assumptions of ΔC_T^dis. Our results indicate that the SAB99 ΔC_T^dis values are inconsistent with the air-sea CO2 disequilibrium found in current Indian Ocean winter ΔC_T^dis values, although this comparison could itself be misleading, given that water masses are usually formed at particular times and in particular areas of the ocean.

Previous estimates of the C_ANT inventory in the subtropical Indian Ocean made with the ΔC* method appear to be underestimates. Considering the C_ANT estimates derived from the methods consistent with the tracer distributions and with the knowledge about water masses, our best estimate for the mean C_ANT specific inventory is 28±2 mol C m−2, which is 17% higher than the 24±2 mol C m−2 from ΔC*.

These conclusions, so far, apply only to the Indian Ocean subtropical gyre. Even so, they may have important implications not only for quantifying the uptake and storage of C_ANT in the Indian Ocean basin, but also for predicting the consequences of acidification for the local carbon cycle and marine biota. Although tedious, comparison exercises within other ocean basins are still necessary and revealing. Combining them with time-evolution approaches from repeat sections or time-series analyses will further help to constrain how, where and how much C_ANT is penetrating into the ocean, and also the data- and model-based C_ANT estimation methods.
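As a cross-check of the layer sums behind the best estimate quoted above, the arithmetic can be verified directly; the quadrature combination of the layer uncertainties is our assumption (the paper quotes ±2 mol C m−2 overall):

```python
import math

# Best per-layer specific inventories chosen in the evaluation (mol C m-2)
layers = {
    "surface (TTD, OCCAM)":               (7.0, 0.2),
    "SAMW (SAB99, TTD, TrOCA, OCCAM)":    (8.9, 0.5),
    "AAIW (TTD, TrOCA)":                  (9.4, 0.1),
    "deep (TTD, TrOCA, IPSL)":            (1.5, 0.8),
    "bottom (TTD, TrOCA, IPSL)":          (1.3, 0.1),
}
total = sum(m for m, _ in layers.values())
err_quad = math.sqrt(sum(s**2 for _, s in layers.values()))
print(f"total = {total:.1f} mol C m-2 (+/- {err_quad:.1f} in quadrature)")
# -> total = 28.1, consistent with the quoted 28±2 mol C m-2
```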
Appendix A

Further information on the CO2 data quality assessment

A1 Quality assessment of the pH measurements

Method details: pH was measured spectrophotometrically following Clayton and Byrne (1993), using a seawater m-cresol purple dye solution. Samples were analyzed in optical cells with a 10-cm path length, thermostated at 25±0.2°C, in a CECIL 3041 spectrophotometer. Following determination of the blank, absorbance measurements were made. pH values are reported on the total scale at 25°C (pH_T25, mol kg-sw−1).

Accuracy: practically every other day during the CD139 cruise (24 out of 45 days), seven to eight samples from a CRM bottle (batch 55; certified characteristics: salinity, 33.506; silicate, 2.7 µmol kg−1; nitrate, 0.46 µmol kg−1; nitrite, 0.0 µmol kg−1; phosphate, 0.41 µmol kg−1; total alkalinity, 2227.85±0.54 µmol kg−1; total inorganic carbon, 2012.06±0.34 µmol kg−1) were drawn carefully to avoid bubbles and analysed for pH using the spectrophotometric method. Those chemical characteristics, combined with the dissociation constants of Lueker et al. (2000) and an Excel-based CO2 software program developed by M. Álvarez and F. F. Pérez, yield a theoretical pH_T25 value of 7.917. The mean and standard deviation of the seven or eight samples for each day are shown in Fig. A1. The first six sets of CRM pH analyses were too high compared to the theoretical value. There is no clear explanation for this; perhaps the transfer procedure from the CRM bottle into the 10-cm cell was not optimized at the beginning. Taking into account 16 of the 24 sets (white circles in Fig. A1), the mean value of the CRM measurements was 7.9169±0.0018. This value includes the double-addition correction (see below) and the 0.0047 addition proposed by DelValls and Dickson (1998) and Millero (2007).

Reproducibility: several cells from the same Niskin bottle were collected throughout the cruise to check the reproducibility of our measurements. Table A1 shows the characteristics of these deep water samples and the mean, standard deviation (STD) and number of cells collected from each bottle. The mean of the STDs is 0.0009, which can be taken as the reproducibility of the pH measurements during the cruise.

Dye addition correction: the injection of the indicator solution into seawater slightly perturbs the sample pH (Clayton and Byrne, 1993). The magnitude of this correction depends on the acidity difference between the sample and the indicator solution; consequently, it should be quantified for each batch of dye solution. During the CD139 cruise, one batch of indicator solution made up in seawater (1 mM) was used; 100 µl of this solution were added to the sample and the absorbance ratio R1 = (578A−730A)/(434A−730A) was calculated. From a second addition of the dye solution, another ratio, R2, was calculated. This operation was done over a wide pH range using upper and deep CD139 samples. Figure A2 shows the relationship between ΔR (= R2 − R1) and R1. The correction equation is:

ΔR = (−0.0087±0.0007) × R1 + (0.0099±0.0009).   (A1)

Consequently, the final R value (R_ok) is calculated as

R_ok = R1 − ΔR,

where ΔR is calculated from Eq. (A1) as a function of R1.
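A sketch of the full reduction from absorbance ratio to pH_T25, combining the batch-specific perturbation correction in Eq. (A1) with the m-cresol purple equations of Clayton and Byrne (1993). The indicator constants below are the commonly cited values and should be checked against the original paper; the +0.0047 adjustment mentioned above is not applied here:

```python
import math

def delta_r(r1):
    """Eq. (A1): batch-specific dye perturbation for this cruise."""
    return -0.0087 * r1 + 0.0099

# m-cresol purple constants as commonly cited from Clayton and Byrne (1993);
# verify against the original before use (assumption).
E1, E2, E3 = 0.00691, 2.2220, 0.1331

def ph_t25(r1, sal, temp_c=25.0):
    """pH on the total scale from the first-addition absorbance ratio R1."""
    r_ok = r1 - delta_r(r1)                    # perturbation-corrected ratio
    t_k = temp_c + 273.15
    pk_ind = 1245.69 / t_k + 3.8275 + 0.00211 * (35.0 - sal)
    return pk_ind + math.log10((r_ok - E1) / (E2 - r_ok * E3))
```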
A2 Quality assessment of the TA measurements

TA was measured by automatic potentiometric titration with 0.1 M HCl to a final pH of 4.4, using a Metrohm 6.0233.100 combination glass electrode and a Pt-100 probe to check the temperature (Pérez and Fraga, 1987). The electrode was standardised against a pH 4.4 buffer made up in CO2-free seawater (Pérez et al., 2000). Concentrations are given in µmol kg-sw−1. The 0.1 N hydrochloric acid was prepared by mixing 0.5 mol (18.231 g) of commercial HCl supplied by Riedel-de Haën (Fixanal 38285) with Milli-Q water in a graduated 5-L beaker under controlled temperature conditions, referred to 20°C (Table A2). The variation caused by salinity after the titration is lower than 0.1 units, which is taken into account in the final TA calculation.

CRM analyses were performed to control the accuracy of our TA measurements (Fig. A3). Accordingly, the final pH of every batch of analyses was corrected to bring the mean TA of the CRM analyses as close as possible to the certified value. Usually, each sample was analyzed twice for alkalinity. Table A2 shows the average difference of the replicates analyzed during each batch of analysis; this difference was about 1.0 µmol kg−1.

Surface seawater was used as a "quasi-steady" seawater substandard: surface seawater taken from the non-toxic supply and stored in the dark in a large (25 l) container for 2 days before use. This substandard seawater was analyzed at the beginning and at the end of each batch of analyses to control for drift within each batch.

A3 Comparison between calculated and measured C_T

As explained in the Data set section, salinity-normalized C_T calculated from pH_T25 and TA (NC_T calc) using the Lueker et al. (2000) constants compares with normalized coulometric C_T (NC_T coul) through a linear relationship: NC_T calc = (1.006±0.007) × NC_T coul − 14±15 (r² = 0.998, n = 51; 0±4 µmol kg−1, mean±STD of the residuals). Figure A4 shows this relationship.
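The consistency check in Sect. A3 is a simple linear regression between the two normalized C_T estimates; a sketch, with array names as placeholders:

```python
import numpy as np

def ct_consistency(nct_calc, nct_coul):
    """Fit NCT_calc = a * NCT_coul + b and report residual statistics,
    as in Sect. A3 (slope 1.006, r^2 = 0.998, residuals 0±4 µmol/kg)."""
    a, b = np.polyfit(nct_coul, nct_calc, 1)
    resid = nct_calc - (a * nct_coul + b)
    r2 = 1.0 - resid.var() / nct_calc.var()
    return a, b, r2, resid.mean(), resid.std()
```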
A4 Crossover analysis for the CO2 variables

Another approach to evaluating the quality of the physical or chemical data from a particular cruise is to compare the tracer distributions with those of already calibrated cruises at crossing or overlapping positions, at density or pressure levels where no temporal changes are expected. The final aim is to integrate the new cruise into the merged calibrated data set. The Global Ocean Data Analysis Project (GLODAP, http://cdiac.esd.ornl.gov/oceans/glodap/Glodaphome.htm) (e.g., Key et al., 2004) was a major effort of the oceanographic community to produce calibrated and uniform databases for the different ocean basins. Here, we use the GLODAP calibrated cruises for the Indian Ocean that cross the CD139 cruise (Fig. A5).

The CD139 western section from Africa to the Madagascar Basin completely overlaps with the 1995 I5W section (Donohue and Toole, 2003) (Fig. A5). North Atlantic Deep Water (NADW) is confined by the Madagascar and Davie ridges to the Natal Valley and Mozambique Basin (Toole and Warren, 1993). This water mass is the oldest in the Indian Ocean, where no temporal trends in the tracers should be found if we assume that the hydrography and the biogeochemical processes have been in steady state over the period covered by the data (1995-2002). NADW is characterized by a salinity maximum around θ = 2°C and by nitrate and C_T minima. For TA, the difference at this crossover is below the threshold value used in the GLODAP comparison exercises (6 µmol kg−1; Lamb et al., 2002; Wanninkhof et al., 2003); in the case of C_T, the 2002 data seem to be 4 µmol kg−1 higher, but this value is still near the threshold value (4 µmol kg−1; Lamb et al., 2002; Wanninkhof et al., 2003). No corrections are applied based on this first crossover.

The CD139 trans-Indian section also intersects other GLODAP lines in the Indian Ocean (Fig. A5). The quality and consistency of all the tracers measured on these cruises have been assessed in GLODAP, and further information regarding the CO2 data can be found in SAB99. Here we present results for TA, C_T, CFC-11 and CFC-12; the approximation used to calculate the differences between data sets follows the convention agreed for the synthesis effort on post-WOCE cruises currently under way within the framework of the European CARBOOCEAN integrated project. The methodology is as follows: at least three stations from each of two cruises are selected, falling within a limit of 100 km; tracer data below 1500 m are selected and plotted against σ4; a mean profile for each cruise is obtained at standard densities using a piecewise cubic Hermite interpolation function, so that a mean and standard deviation profile is obtained for each cruise; after subtracting these two profiles, a mean and standard deviation difference is calculated for every pair of cruises at each crossover. Differences are considered significant when the mean value is larger than twice the standard deviation. The threshold limits for correction are 0.005 for the CFCs, 4 µmol kg−1 for C_T and 6 µmol kg−1 for TA.

Table A4 and Fig. A6 show the mean and standard deviation differences between each cruise and CD139. In the light of these results, the CD139 CFC and TA data are consistent with the other GLODAP cruises, but the CD139 C_T data seem to be higher by about 3 or 4 µmol kg−1. We therefore decreased the calculated CD139 C_T data by 4 µmol kg−1.

Fig. 1. Positions of the CTD stations occupied during the CD139 cruise crossing the Indian Ocean. Stations 59 and 113 are marked.

Fig. 2. ΔC* (µmol kg−1) vertical distribution along the CD139 section estimated according to SAB99 (a, b) and the combined method (c, d). White lines correspond to the σθ isopycnals used as references.

Fig. 4. Air-sea CO2 disequilibrium (µmol kg−1) by σθ density intervals. Blue points correspond to values taken directly from SAB99; light blue and orange points correspond to values estimated using the CD139 biogeochemical data, following the SAB99 (orange) and combined (light blue) methods. Interpolated points are highlighted with black open circles.

Fig. 8. Mean±standard deviation C_ANT specific inventories (mol C m−2) for each water mass layer (see text) and, on the right, for the whole section. The total specific inventory is calculated as the sum of the SAMW, AAIW, Deep and Bottom contributions plus 7 mol C m−2, i.e., the TTD and OCCAM mean specific inventory for the upper layer.
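A sketch of the crossover comparison described in Sect. A4 above (station matching within 100 km, PCHIP mean profiles on σ4, and the 2-sigma significance rule). The density grid values are placeholders, and the sketch assumes the inputs have already been screened to stations within 100 km and samples below 1500 m, with distinct density values:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def crossover_diff(sig4_a, tracer_a, sig4_b, tracer_b,
                   grid=np.arange(45.70, 46.05, 0.01)):
    """Mean and STD difference between two cruises at a crossover:
    mean PCHIP profiles against sigma-4 on a standard density grid,
    then differenced. Returns (mean, std, significant?)."""
    order_a, order_b = np.argsort(sig4_a), np.argsort(sig4_b)
    prof_a = PchipInterpolator(sig4_a[order_a], tracer_a[order_a])(grid)
    prof_b = PchipInterpolator(sig4_b[order_b], tracer_b[order_b])(grid)
    diff = prof_a - prof_b
    significant = abs(diff.mean()) > 2.0 * diff.std()
    return diff.mean(), diff.std(), significant
```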
Fig. 9. (a) Late winter-early spring pCO2 gradient (ocean-atmosphere, in µatm) for the Indian Ocean south of about 40°S, by σθ density, from the Takahashi et al. (2002) climatology: raw data (crosses) and mean±STD (green line); and empirical data from McNeil et al. (2007): raw data (blue points) and mean±STD (red line). (b) Equivalent C_T disequilibrium values calculated from ΔC_T^dis = ΔpCO2/359 × 2020/R, where 359 is the mean atmospheric pCO2 and 2020 the mean surface ocean C_T for 1995, and R is the Revelle factor, taken as 9 and 13; data from Takahashi et al. (2002) are the black and green lines, and data from McNeil et al. (2007) the pink and red lines.

Fig. 10. Partial pressure of CFC-12 (ppt) and C_ANT estimates (µmol kg−1) for upper waters with potential temperature higher than 5°C, pressure higher than 200 dbar and CFC-12 age less than 30 years. Also shown in black is the atmospheric evolution of CFC-12 and C_ANT using Revelle factors of 10 and 13 (see text for details); some time markers are shown.

Fig. 11. (a) Partial pressure of CFC-12 (ppt) and (b) partial pressure of CCl4 (ppt) versus C_ANT estimates (µmol kg−1) for deep waters with potential temperature lower than 5°C, pressure higher than 200 dbar and CFC-12 age higher than 40 years. In the case of CCl4, the temperature limit is 3°C. Also shown in black are the atmospheric evolution of CFC-12, CCl4 and C_ANT using Revelle factors of 10 and 13 (see text for details); some time markers are shown.
Fig. A1. Spectrophotometric pH_T25 measurements on CRM batch 55 during the cruise. Each set of analyses consists of 7 to 8 measurements from the same CRM bottle. The first six sets of CRM pH analyses (black dots) were discarded in the calculation of the final mean and standard deviation (STD), 7.9169±0.0018. Note that the pH values include the ΔR and 0.0047 corrections.

Fig. A2. Perturbation of the sample pH induced by the addition of indicator, expressed as ΔR (= R2 − R1) as a function of R1. R1 is from the first addition and R2 from the double addition. R is the ratio between absorbances ((578A−730A)/(434A−730A)).

Fig. A4. Relationship between measured and calculated (from pH and alkalinity) salinity-normalized total inorganic carbon (NC_T, in µmol kg−1).

Fig. A5. Indian Ocean map showing the intersections or crossovers between the CD139 and GLODAP cruises. Each crossover is identified with a number.

Fig. A6. Mean and standard deviation between the CD139 and GLODAP cruises at each crossover for (a) CFC-11; (b) CFC-12; (c) C_T and (d) TA. Each crossover is identified with a number as in Fig. A5. Results are also shown in Table A4.

[...] GLODAP cruises overlapping our cruise (see Appendix A4), and concluded that the calculated CD139 C_T data should be reduced by 4 µmol kg−1.

Fig. 3 (caption, in part): [...] presents the three Δ terms estimated with the SAB99 and Comb. methods. The NADW core is highlighted in black. All in µmol kg−1. For clarity, every 4th sample is represented.

Table 1. Mean±standard deviation C_ANT specific inventory (mol C m−2) by water masses in the subtropical Indian Ocean for the different methods evaluated here. The mean and standard deviation values were obtained by randomly modifying the initially calculated single C_ANT values by ±5 µmol kg−1; a set of 100 perturbations was done for the five methods. The standard deviation for each layer is weighted by the layer's contribution to the total section area. See text for the acronyms. N/D stands for not determined. Values in brackets correspond to the IPSL method assuming a 100% oxygen saturation in Eq. (7), α = 0. The total specific inventory is calculated as the sum of the SAMW, AAIW, Deep and Bottom contributions plus 7 mol C m−2, i.e., the TTD and OCCAM mean specific inventory for the upper layer.

Table A2 shows the pH (ΔpH) correction applied to each batch and the mean value of the CRM determinations after applying this correction.
Table A1. Location and physical characteristics of the samples analyzed for pH replicates. The mean, standard deviation (STD) pH_T25 and number of samples drawn from each bottle are shown. Note that the pH values include the ΔR and 0.0047 corrections.

Table A2. Alkalinity analysis supplementary information for each batch of analyses: N HCl is the normality, referred to 20°C, of the hydrochloric solution used; ΔpH is the pH correction applied to refer the TA determinations on the CRM to the corresponding nominal value (batch 55, with a certified TA of 2227.85±0.54 µmol kg−1). The mean value of the TA measurements on the CRM samples is also shown (fitted TA±standard deviation (number of analyses)), together with the average difference (Av. Dif., and number of duplicates) between the duplicate analyses.

Table A3. NADW characteristics (1.8 ≤ θ ≤ 2.3°C and salinity ≥ 34.81) from the 1995 I5W and 2002 CD139 cruises. Mean±standard deviation and number of samples (n) considered for the physical and chemical variables.

Table A4. Mean and standard deviation (STD) difference between each GLODAP cruise and the 2002 CD139 CFC, C_T and TA data for each crossover (Xover number in Fig. A5). Samples are taken below 1500 dbar. The minimum number of stations for each cruise at each crossover is three. Except for the I5 cruise in 1987, the others were done in 1995. Columns: Xover, cruise, CFC-11 (pmol kg−1), CFC-12 (pmol kg−1), C_T (µmol kg−1), TA (µmol kg−1).
Cell cytotoxicity and anti-glycation activity of taxifolin-rich extract from Japanese larch, Larix kaempferi

The larches (the genus Larix) are known as a natural source of taxifolin (dihydroquercetin), and extracts of their taxifolin-rich xylem are used in dietary supplements to maintain health. In the present study, to assess the biological activities of a methanol extract of the Japanese larch, Larix kaempferi (LK-ME), the effects of LK-ME on cell viability, inflammatory cytokine expression, and glycation were investigated. The effects of taxifolin, known to be the main compound of LK-ME, and of its related flavonoids quercetin and luteolin were also examined. The results show that taxifolin exhibits lower growth inhibition activity and weaker induction of inflammatory cytokines in a human monocyte-derived cell line, THP-1 cells, while the in vitro anti-glycation activities of taxifolin were comparable to those of quercetin and luteolin. The growth inhibition and cytokine induction activities, and the anti-glycation effects, of LK-ME are assumed to resemble those of taxifolin. High performance liquid chromatography (HPLC) analysis showed that taxifolin was the main peak of LK-ME at an absorbance of 280 nm, with a measured concentration of 3.12 mg/ml. The actual concentration of taxifolin in LK-ME is lower than the concentration estimated from the IC50 values calculated from the glycation assays, suggesting that other compounds contained in LK-ME contribute to the anti-glycation activity.

Introduction

Japanese larch, Larix kaempferi, is a deciduous, needle-leaved tree afforested on a large scale in Hokkaido and Nagano, Japan. Among other Larix species, Larix sibirica and Larix gmelinii are known to contain abundant taxifolin (dihydroquercetin), a flavonoid, in the xylem, and taxifolin-rich extracts of these larch xylems are used in dietary supplements [1,2,3]. Taxifolin is known as an anti-oxidative agent [4], and beneficial effects of taxifolin have been reported. Previous studies using animal models showed possible beneficial effects of taxifolin, including improvement of microcirculation [5], hepatoprotective effects [6], anti-viral activity [7], and prevention of diabetic nephropathy [8] as well as cardiomyopathy [9]. Further, in vitro studies have demonstrated that taxifolin exhibits anti-bacterial [10], anti-fungal [11], and anti-parasitic [12] effects, and that taxifolin inhibits acetylcholinesterase and carbonic anhydrase isoenzymes [13]. Taxifolin also inhibits oligomer formation of amyloid β proteins in mice and is thought to be effective in preventing Alzheimer-related diseases [14]. Like L. sibirica and L. gmelinii, the xylem of L. kaempferi is also known to contain much taxifolin [15]. At present, L. kaempferi extracts are not used in dietary supplements, and the effects of L. kaempferi extracts on the health of humans or animals are not well known.

Advanced glycation end products (AGEs) are produced by non-enzymatic reactions (the Maillard reaction) between sugars and proteins. Excess energy intake, especially over-consumption of carbohydrates, raises blood sugar levels and thereby induces glycation reactions. Elevated blood AGE levels are found in patients with diabetes mellitus [16] and are thought to be involved in the onset of diabetic complications, such as diabetic neuropathy [17], diabetic retinopathy, and diabetic nephropathy [18].
In addition, the accumulation of AGEs is known to progress with aging [16,19], and accumulation of AGEs in the frontal lobe has been reported in patients with Alzheimer dementia [20]. Overall, inhibiting the production and accumulation of AGEs is thought to be important for preventing age-related diseases.

In the present study, to evaluate the potential of L. kaempferi extract for use in a supplement to maintain the health of humans and animals, the effects of L. kaempferi methanol extract (LK-ME) on cell viability, induction of inflammatory cytokine mRNAs, and inhibition of glycation were investigated. These effects were also examined for taxifolin, a main compound of LK-ME, and compared with those of the related flavonoids quercetin and luteolin.

Quantification of the taxifolin concentration in LK-ME

A methanol extract of L. kaempferi saw dust was used as LK-ME in this study. Under this extraction condition, taxifolin is thought to be effectively extracted from the saw dust of L. kaempferi; to substantiate this, and to assess the quality of the extract, the concentration of taxifolin in LK-ME was quantified using high performance liquid chromatography (HPLC). As shown in Fig. 1A, taxifolin was detected as the main peak of LK-ME. The purity of the standard taxifolin was calculated as 95.4% (Fig. 1B). The concentration of taxifolin in LK-ME was calculated as 3.12 mg/ml by comparison of its peak area with that of the taxifolin standard solution (20 µg/ml), taking the purity of the standard into account.

The effects of LK-ME on THP-1 cells

To investigate the effects of LK-ME on immune cells, the effects of LK-ME on the viability of a monocyte-derived cell line, THP-1 cells [21], were examined, together with the effects of taxifolin, a main component of the xylem of L. kaempferi, and of the taxifolin-related compounds quercetin and luteolin. The chemical structures of taxifolin, quercetin, and luteolin are shown in Fig. 2A. The results show that LK-ME inhibits the growth of THP-1 cells in a dose-dependent manner (Fig. 2B). The inhibition of THP-1 cell growth by taxifolin was weaker than that by quercetin or luteolin (Fig. 2C-E). The growth inhibition activity of a 100-fold dilution of LK-ME was slightly higher than that of 300 µM taxifolin.

Next, the immune stimulation activity of LK-ME against THP-1 cells was investigated using real-time RT-PCR analysis. THP-1 cells were stimulated with LK-ME, taxifolin, quercetin, or luteolin, and the mRNA expression of the inflammatory cytokines interleukin-8 (IL-8) and tumor necrosis factor-α (TNF-α) was monitored. As shown in Fig. 3, the expression of IL-8 and TNF-α mRNAs was significantly increased after stimulation with LK-ME, as well as with taxifolin, quercetin, and luteolin. The induction activities differed among these compounds: IL-8 and TNF-α mRNAs were induced more effectively by luteolin and quercetin than by taxifolin.

(Fig. 1 legend, in part: samples were separated on a YMC-Pack Pro C18 column (YMC, Kyoto, Japan) and eluted by a linear gradient of phosphate buffer and acetonitrile. The eluted compounds, including taxifolin, were detected spectrophotometrically at a wavelength of 280 nm. The taxifolin peak is indicated with an arrowhead.)

Anti-glycation activity of LK-ME

To investigate the anti-glycation activity of LK-ME, glucose, fructose, and glyceraldehyde were reacted with albumin, collagen, and elastin, respectively. The production of glycated proteins was determined by measuring the fluorescence intensity.
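A sketch of how such fluorescence readings translate into a percent-inhibition value per test concentration; the normalization formula and all names are our assumptions, since the text reports fluorescence-based inhibition without giving the calculation explicitly:

```python
def percent_inhibition(f_sample, f_control, f_blank):
    """Percent inhibition of AGE formation from fluorescence readings.
    f_control: sugar + protein without inhibitor; f_blank: protein only;
    f_sample: sugar + protein + test compound. Assumed normalization."""
    return 100.0 * (f_control - f_sample) / (f_control - f_blank)
```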
The anti-glycation activities of taxifolin, quercetin, and luteolin were also examined. As shown in Fig. 4A-C, LK-ME inhibited glycation in a dose-dependent manner. Quercetin is known to be an anti-glycation agent [22], and, similar to quercetin, taxifolin and luteolin also inhibited glycation; the IC50 values are summarized in Table 1. In the albumin-glucose reaction, luteolin exhibited the strongest anti-glycation activity. Taxifolin inhibited glycation more efficiently than quercetin and luteolin in the collagen-fructose reaction, and more effectively than quercetin in the elastin-glyceraldehyde reaction.

Discussion

In the present study, the effects of LK-ME on cell growth, induction of inflammatory cytokines, and glycation were investigated. Overall, the biological activities of LK-ME investigated here were similar to those of its main compound, taxifolin. The growth inhibition activity and the IL-8 and TNF-α mRNA induction activities of taxifolin were weaker than those of quercetin and luteolin, while the anti-glycation activities of taxifolin were comparable to those of quercetin and luteolin. LK-ME is therefore assumed to share the attractive properties determined for taxifolin: low cytotoxicity, low inflammatory activity, and strong anti-glycation activity.

The results of the cell viability analysis showed that THP-1 cell viability was more strongly inhibited by treatment with quercetin and luteolin than with taxifolin (Fig. 2B-E). Correlated with the growth inhibition activities, IL-8 and TNF-α mRNAs were more strongly induced after stimulation with quercetin and luteolin than with taxifolin (Fig. 3). TNF-α and IL-8 are known to be inflammatory cytokines induced by various stressors [23,24,25]. On this basis, the stress induced by stimulation with the flavonoids is thought to be involved in the induction of TNF-α and IL-8, and the stress induced by taxifolin is assumed to be less pronounced than that induced by the other flavonoids examined in this study, quercetin and luteolin.

The HPLC analysis showed that taxifolin was the major 280 nm-absorbing compound in the LK-ME used in this study (Fig. 1). There are several minor peaks in the chromatogram of LK-ME. A previous report showed that a methanol extract of L. kaempferi contains dihydrokaempferol, naringenin, 4-hydroxybenzaldehyde, and p-coumaryl aldehyde as minor compounds [15]; the minor peaks found in the HPLC chart are therefore thought to include these compounds.

The results of the cell viability analysis indicate that the growth inhibition activity of a 1,000-fold dilution of LK-ME is equivalent to 30-300 µM taxifolin (Fig. 2B and C). This gives the taxifolin concentration in LK-ME, as estimated from the growth inhibition activity, as 9.1-91.3 mg/ml. The IL-8 and TNF-α mRNA induction activities of a 100-fold dilution of LK-ME are comparable to those of 300 µM taxifolin (Fig. 3), giving an estimated taxifolin concentration in LK-ME of <9.1 mg/ml. Further, based on the IC50 values indicated in Table 1, the anti-glycation activities of LK-ME against albumin-glucose, collagen-fructose, and elastin-glyceraldehyde were equivalent to 6.70, 15.51, and 9.12 mg/ml taxifolin, respectively. The actual concentration of taxifolin in LK-ME was calculated as 3.12 mg/ml by the HPLC analysis; the concentrations estimated from the experimental results are thus higher than the concentration determined by HPLC.
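The back-of-envelope conversion behind these estimates uses the molar mass of taxifolin (about 304.25 g/mol); the function name is ours:

```python
MW_TAXIFOLIN = 304.25  # g/mol

def um_to_mg_per_ml(conc_um, dilution_factor):
    """Convert a well concentration (µM) back to the undiluted extract
    concentration (mg/ml): µM -> g/L (= mg/ml) is conc * MW * 1e-6."""
    return conc_um * MW_TAXIFOLIN * 1e-6 * dilution_factor

# 300 µM at a 1,000-fold dilution -> 91.3 mg/ml, as quoted in the text
print(round(um_to_mg_per_ml(300, 1000), 1))   # 91.3
print(round(um_to_mg_per_ml(30, 1000), 1))    # 9.1
```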
The difference between the concentrations estimated from the experimental results and the actual concentration suggests that other compounds contained in LK-ME are involved in the biological activities investigated in this study.

Japanese larch, L. kaempferi, is thought to be a good source of taxifolin, and as shown in this study, taxifolin is readily extracted from the saw dust of its xylem. Although further investigations, including toxicity tests, are required before L. kaempferi extract can be used as a dietary supplement, the results shown in this study suggest that L. kaempferi extract is promising as a supplement to maintain the health of humans and also of animals.

Preparation of L. kaempferi extract (LK-ME)

The LK-ME used in this study was prepared by methanol extraction from saw dust of L. kaempferi, obtained from the Forestry Cooperative of Shimokawa town, Hokkaido, Japan. The saw dust (7.43 g) was extracted with 40 ml of methanol overnight at room temperature. The debris was removed by centrifugation, and the supernatant was filtered through a 0.45 µm filter. Then, 1 ml of the extract was dried, redissolved in 150 µl of dimethyl sulfoxide (DMSO, for assays using cultured cells) or ethanol (for the glycation assay), and used in this study.

Quantitation of the taxifolin concentration in LK-ME

The concentration of taxifolin in LK-ME was quantified using high performance liquid chromatography (HPLC). The HPLC system (Shimadzu Corporation, Kyoto, Japan) consisted of a Model LC-20AD high-pressure pump, a Model CTO-20AC column oven, a Model SIL-20AC total-volume injection-type auto-sampler, and a Model SPD-20A variable-wavelength UV-Vis detector. Samples were separated on a YMC-Pack Pro C18 column (internal diameter: 3.0 mm, length: 150 mm; YMC, Kyoto, Japan) at 40°C, with a mobile phase of 10 mM phosphoric acid (A) and acetonitrile (B) at a flow rate of 0.5 ml/min. Purified taxifolin (20 µg/ml) was used as the standard compound, and the concentration of taxifolin in LK-ME was calculated from the peak area of the absorbance at 280 nm compared with that of the standard. The HPLC analysis was performed by the Biodynamic Plant Institute, Sapporo, Hokkaido, Japan.

(Figure legend fragment: error bars indicate standard deviations calculated from three independent experiments, and asterisks (*) indicate that the difference is statistically significant (p < 0.01) and larger than two-fold compared with the control.)

Cell culture and monitoring of cell viability

A human monocyte-derived cell line, THP-1 cells (ATCC TIB-202) [21], was grown and maintained in RPMI 1640 medium supplemented with 10% fetal bovine serum, 100 U/ml penicillin, and 100 mg/ml streptomycin (Life Technologies, Carlsbad, CA, USA). The cells were grown at 37°C in 5% CO2 in a humidified incubator. Cell viability was monitored using a Cell Counting Kit-8 (Dojindo, Kumamoto, Japan).

Human serum albumin (Fraction V; Nacalai Tesque, Kyoto, Japan), elastin derived from bovine neck ligament (MP Biomedicals, Irvine, CA, USA), and collagen derived from bovine skin (Type I, acid-soluble; Nippi, Tokyo, Japan) were purchased as commercially available products. To solubilize the elastin, 100 mg of bovine elastin was heated in 1 ml of 0.1 N NaOH at 99°C and the supernatant was recovered. After repeating this step twice, the remaining pellet was autoclaved in 1 ml of 0.1 N NaOH, and the supernatant was collected.
This step was also repeated twice, and the collected supernatant containing the solubilized elastin was then neutralized with 1 N HCl and sterilized through a 0.22 µm filter. The protein concentration was measured using a Pierce BCA Protein Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions.

The production of AGEs was measured by monitoring the increase in fluorescence intensity. Human albumin (8 mg/ml) and glucose (0.2 M), bovine collagen (0.3 mg/ml) and fructose (0.2 M), or bovine elastin (0.5 mg/ml) and glyceraldehyde (0.05 M) were reacted in 0.05 M phosphate buffer (pH 7.4) with a series of concentrations of LK-ME, taxifolin, quercetin, or luteolin at 60°C. After the incubation, the fluorescence intensity (excitation: 365 nm; emission: 410-460 nm) was measured using a multimode microplate reader (GloMax Multi Detection System; Promega, Madison, WI, USA).

Statistical analysis

To determine statistically significant differences between data pairs, a two-tailed unpaired Student's t-test was used in this study.

Author contribution statement

Daisuke Muramatsu: conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper. Hirofumi Uchiyama: conceived and designed the experiments; performed the experiments. Hiroshi Kida: contributed reagents, materials, analysis tools or data; wrote the paper. Atsushi Iwai: conceived and designed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper.

Funding statement

This study was funded by Aureo Co., Ltd., Kimitsu, Japan, and Aureo-Science Co., Ltd., Sapporo, Japan. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Table 1. The half-maximal inhibitory concentration (IC50) of LK-ME against the production of AGEs. The IC50 values were calculated from the results of the glycation assay (Fig. 4). n.d.: no data.
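A minimal way to read an IC50 off a measured dose-response curve, such as those behind Table 1, is log-linear interpolation between the bracketing concentrations; the data values below are hypothetical:

```python
import numpy as np

def ic50(conc, inhibition):
    """Interpolate the concentration giving 50% inhibition.
    conc: ascending concentrations; inhibition: % inhibition at each,
    assumed monotonically increasing with concentration."""
    logc = np.log10(conc)
    return 10 ** np.interp(50.0, inhibition, logc)

# Hypothetical dose-response, for illustration only
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])        # mg/ml
inh  = np.array([10.0, 25.0, 45.0, 70.0, 90.0])    # % inhibition
print(round(ic50(conc, inh), 2))                   # ≈ 1.25 mg/ml
```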
Regulation of HTLV-1 Gag budding by Vps4A, Vps4B, and AIP1/Alix

The HTLV-1 Gag protein is a matrix protein that contains the PTAP and PPPY sequences as L-domain motifs and can be released from mammalian cells in the form of virus-like particles (VLPs). The cellular factors Tsg101 and Nedd4.1 interact with PTAP and PPPY, respectively, within the HTLV-1 Gag polyprotein. Tsg101 forms a complex with Vps28 and Vps37 (the ESCRT-I complex) and plays an important role in the class E Vps pathway, which mediates protein sorting and the invagination of vesicles into multivesicular bodies. Nedd4.1 is an E3 ubiquitin ligase that binds to the PPPY motif through its WW motif, but its function is still unknown. In the present study, to investigate the mechanism of HTLV-1 budding in detail, we analyzed HTLV-1 budding using dominant negative (DN) forms of the class E proteins. Here, we report that DN forms of Vps4A, Vps4B, and AIP1 inhibit HTLV-1 budding. These findings suggest that HTLV-1 budding utilizes the MVB pathway and that these class E proteins may be targets for the prevention of mother-to-infant vertical transmission of the virus.

Background

The Gag polyprotein of HTLV-1 is the only viral protein that is both necessary and sufficient to drive the release of virus particles through a budding process [1-7]. During or after particle release, the viral protease cleaves Gag to produce the mature matrix (MA), capsid (CA), and nucleocapsid (NC) proteins. Three functional domains critical for the assembly and budding processes have been identified in the Gag protein. The membrane-binding domain (M-domain) is required for myristoylation of the Gag N-terminal region and subsequent targeting of the protein to the plasma membrane. The interaction domain (I-domain) appears to be the major region involved in Gag multimerization. The late assembly domain (L-domain) plays a critical role in the pinching off of virus particles from the plasma membrane of infected cells. It has also been reported that inactivation of the viral protease has no effect on the production of HTLV-1 particles, similar to our previous observations in Mason-Pfizer monkey virus (M-PMV) [7,8].

Three L-domain consensus sequences, PPXY, PT/SAP, and YPXL, have been identified within the matrix proteins of many enveloped RNA viruses, including retro-, rhabdo-, filo-, and arenaviruses [1,2,4,5,9-21]. The majority of retroviruses possess PPXY and/or PT/SAP motifs as L-domains, one exception being equine infectious anemia virus (EIAV), which possesses a YPXL motif. Most of the host factors that interact with the L-domain are involved in the class E vacuolar protein-sorting pathway, suggesting that budding into the lumen of multivesicular bodies (MVBs) in late endosomes and viral budding at the plasma membrane are topologically identical and share a common mechanism. Three ESCRT complexes, ESCRT-I, -II, and -III, play critical roles in the MVB sorting pathway, acting in a sequential manner. In the final step of protein sorting, the AAA-type ATPases Vps4A/B interact with ESCRT-III to catalyze disassembly of the ESCRT machinery and recycle its components.

The PTAP motif was first identified in human immunodeficiency virus (HIV) p6Gag and has been reported to interact with Tsg101, a ubiquitin-conjugating E2 variant that participates in the vacuolar protein-sorting (Vps) machinery.
The interaction between p6Gag and Tsg101 is required for HIV-1 budding, and Tsg101 appears to facilitate this budding by linking the p6 late domain to the Vps pathway [22,23]. The PPXY motif has been shown to be the core sequence involved in binding to the WW domain, a sequence of 38 to 40 amino acids containing two widely spaced tryptophan residues, which are involved in protein-protein interaction. In fact, it has been shown that the viral PPXY sequences interact with the WW domains of the cellular Nedd4-like ubiquitin ligases, such as Nedd4 and BUL1 [4,[24][25][26]. The YPXL motif in EIAV p9 and a related sequence YPLASL in HIV-1 p6 have been shown to interact with AIP1/Alix, which has been reported to be linked to ESCRT-I and -III [23,[27][28][29].

In this study, to investigate the mechanism of HTLV-1 budding in detail, we analyzed HTLV-1 budding using DN forms of the class E proteins. Our results showed that the DN forms of Vps4A, Vps4B, and AIP1 markedly suppressed VLP production, suggesting that HTLV-1 budding utilizes the MVB pathway and that these class E proteins may be targets for prevention of mother-to-infant vertical transmission.

HTLV-1 Gag budding utilizes Vps4A and Vps4B

Vps4A and Vps4B are ATPases, each of which is the final effector in the MVB sorting pathway in cells. Recent studies using DN forms of Vps4A have shown that activity of this enzyme is required for efficient budding of HIV-1, murine leukemia virus, equine infectious anemia virus, Mason-Pfizer monkey virus, simian virus 5, vesicular stomatitis virus (VSV), human hepatitis B virus, Ebola virus, and Lassa virus [10,12,22,[31][32][33][34][35]. In contrast to Vps4A, the contribution of Vps4B to virus budding has not been demonstrated, although we previously showed that Lassa virus budding utilizes Vps4B [12]. To examine the involvement of Vps4A and Vps4B in the egress of HTLV-1 Gag-induced VLPs, we analyzed the effects of overexpression of DN mutants of Vps4A and Vps4B, termed Vps4AEQ and Vps4BEQ, respectively (Fig. 1A) [12]. Both DN mutants were expressed as proteins containing a Flag tag at their N-termini. As shown in Fig. 2A and 2B, HTLV-1 Gag-induced VLP production was significantly reduced by the overexpression of Vps4AEQ or Vps4BEQ. Relative levels of production of VLPs from cells expressing Vps4AEQ and Vps4BEQ were 25% and 33%, respectively (Fig. 2B). To further examine the effects of overexpression of wild-type Vps4A and Vps4B, we also cotransfected pVps4A and pVps4B with pK30-Gag into 293T cells. As shown in Fig. 2C and Fig. 2D, overexpression of Vps4A and Vps4B did not promote VLP production. These results indicate that endogenous Vps4A and Vps4B are sufficient for producing VLPs, but the enzymatic activities of Vps4A and Vps4B are clearly required for efficient budding of HTLV-1.

DN form of AIP1/Alix suppresses HTLV-1 VLP production

To examine the involvement of AIP1/Alix in HTLV-1 budding, we overexpressed mutant forms of AIP1/Alix with pK30-Gag (Fig. 1B). As shown in Fig. 3A and 3B, the AIP1/Alix mutant AIP1 (1-628) significantly inhibited the production of HTLV-1 Gag-induced VLPs. On the other hand, another mutant of AIP1/Alix, AIP1 (424-628), had no effect. Overexpression of WT AIP1/Alix suppressed the Gag-induced VLP production. Although we could not detect the interaction between HTLV-1 Gag and AIP1 (data not shown), AIP1 may regulate HTLV-1 budding indirectly. Similar results were obtained in a previous study examining the involvement of Tsg101 in HIV-1 budding [36].
Overexpression of the wild-type and C-terminal deletion mutant of Tsg101 inhibited HIV-1 production. The intracellular levels of Tsg101 appear to be strictly regulated for its physiological function. AIP1/Alix may be subject to similar regulation in cells.

The mechanism responsible for HTLV-1 budding has not been addressed in detail. Previous studies showed that HTLV-1 Gag protein plays a key role in viral budding, as in other retroviruses, and that Tsg101 and Nedd4.1 recognize PTAP and PPPY within the HTLV-1 Gag polyprotein, respectively. Although it is well known that Tsg101 and Nedd4.1 play important roles in HTLV-1 budding, further mechanisms have not been characterized. In this study, to investigate the mechanism of HTLV-1 budding in detail, we analyzed HTLV-1 budding using DN forms of the class E proteins Vps4A, Vps4B, and AIP1 (Fig. 1). The results indicated that the catalytic activities of Vps4A and Vps4B are required for budding of HTLV-1 VLPs, suggesting that HTLV-1 budding mimics the MVB pathway, similar to observations in other envelope viruses.

The DN form of AIP1 expressing only the N-terminal region from residues 1-628 also suppressed the budding of HTLV-1 VLPs. The Bro1 domain of AIP1, which is present in the N-terminal region, has been reported to interact with CHMP4 [23,33]. The proline-rich region (PRR) binds to Tsg101. The V domain can bind to HIV-1 p6 and EIAV p9, and overexpression of a mutant containing only the V domain suppresses HIV-1 and EIAV particle release [28,29]. Our results shown in Fig. 3 can be explained by binding of AIP1 (1-628) to CHMP4, thus disturbing the downstream parts of the MVB pathway. On the other hand, AIP1 (424-628) had no effect on HTLV-1 budding. AIP1 (424-628) therefore appears to be insufficient to interfere with the function of AIP1 in HTLV-1 budding, suggesting that HIV-1 and HTLV-1 utilize AIP1 in different ways [28,29]. Taken together, these results strongly suggest that HTLV-1 budding utilizes the MVB pathway and that these class E proteins may be useful as targets for prevention of mother-to-infant vertical transmission.

Conclusion

In the present study, we showed that the enzymatic activities of Vps4A and Vps4B are required for efficient budding of HTLV-1 and that endogenous Vps4A and Vps4B are sufficient for VLP production. In addition, it was shown that AIP1 (1-628) acts as a DN mutant for HTLV-1 budding.

Cells

Human 293T cells were maintained in Dulbecco's minimal essential medium (Sigma, St. Louis, MO) supplemented with 10% fetal bovine serum and penicillin-streptomycin at 37°C.

Figure 2. The involvement of Vps4A and Vps4B in HTLV-1 Gag budding. A. 293T cells were cotransfected with pK30-Gag and the expression plasmid for Vps4AEQ or Vps4BEQ, or the empty vector as a control. Extracellular VLPs were pelleted from the culture fluids. VLP-associated or cell-associated Gag was detected by western blotting (WB) using anti-HTLV-1 p19 monoclonal antibody. C. 293T cells were cotransfected with pK30-Gag and the expression vector for wild-type Vps4A or Vps4B, or the empty vector as a control. The proteins were detected as described in A. B and D. Intensities of the bands corresponding to cell- and VLP-associated Gag in A and C were quantified using the LAS-3000 imaging system (Fuji Film). The efficiency of Gag-induced VLP budding in cells cotransfected with pK30-Gag and control vector (VLP/Cellular) was set to 1.0. The data represent averages and standard deviations (SD) of 3 independent experiments.
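The normalization described in the legend above is simple enough to state in code. The following is a minimal sketch of that calculation, not the authors' analysis script; the band intensities are hypothetical stand-ins for the LAS-3000 densitometry readout.

```python
import statistics

def budding_efficiency(vlp_intensity, cell_intensity):
    # Ratio of VLP-associated to cell-associated Gag signal for one experiment.
    return vlp_intensity / cell_intensity

def relative_efficiency(condition_pairs, control_pairs):
    # Normalize each experiment's VLP/Cellular ratio so the control mean is 1.0,
    # then report mean and SD across the independent experiments.
    control_mean = statistics.mean(budding_efficiency(v, c) for v, c in control_pairs)
    ratios = [budding_efficiency(v, c) / control_mean for v, c in condition_pairs]
    return statistics.mean(ratios), statistics.stdev(ratios)

# Hypothetical (VLP, cellular) intensity pairs, one per independent experiment.
control = [(820.0, 1000.0), (790.0, 960.0), (845.0, 1010.0)]
vps4a_eq = [(210.0, 1020.0), (195.0, 980.0), (230.0, 1005.0)]

mean_rel, sd_rel = relative_efficiency(vps4a_eq, control)
print(f"Vps4AEQ relative VLP budding: {mean_rel:.2f} +/- {sd_rel:.2f}")
```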
VLP budding assay

Forty-eight hours after transfection, the cell supernatant was clarified from cell debris by centrifugation (13,000 × g, 10 min), and then VLPs were pelleted by ultracentrifugation through a 20% sucrose cushion (345,000 × g, 60 min at 4°C). Cells and VLPs were lysed with Lysis A buffer (1% Triton X-100, 25 mM Tris-HCl, pH 8.0, 50 mM NaCl, and 10% Na-deoxycholate). Cell lysates and VLPs were resolved by SDS-PAGE, and the proteins were then transferred onto nitrocellulose membranes. The mouse anti-HTLV-1 p19 monoclonal antibody TP-7 (Abcam, Cambridge, UK) was used to detect K30Gag. The mouse anti-Flag monoclonal antibody M2 (Sigma) was used for detection of Vps4A, Vps4B, Vps4AEQ, and Vps4BEQ. The mouse anti-HA monoclonal antibody 6E2 (Cell Signaling Technology, Beverly, MA) was used for detection of HA-AIP1 WT and the DN series. Horseradish peroxidase-conjugated goat anti-mouse IgG antibody A-2304 (Sigma) was used as a secondary antibody. Immunoreactive bands were visualized using ECL Plus (Amersham Pharmacia Biotech, Uppsala, Sweden), followed by the LAS-3000 system (Fuji Film, Tokyo, Japan). For quantification, the signal intensity on western blots was evaluated with Image Gauge version 4.1 (Fuji Film) using the LAS-3000 system.

Figure 3. Effects of AIP1/Alix DN mutants on HTLV-1 VLP production. A. 293T cells were cotransfected with pK30-Gag and the expression plasmid for AIP1 (1-628), AIP1 (424-628), or AIP1 WT, or the empty vector as a control. Extracellular VLPs were pelleted from the culture fluids. VLP- or cell-associated Gag was detected by WB using anti-p19 monoclonal antibody. B. Intensities of the bands corresponding to cell- and VLP-associated Gag in A were quantified using the LAS-3000 imaging system (Fuji Film). The efficiency of Gag-induced VLP budding in cells cotransfected with pK30-Gag and the control vector (VLP/Cellular) was set to 1.0. The data represent averages and standard deviations (SD) of 3 independent experiments.
On the Bounds of Function Approximations

Within machine learning, the subfield of Neural Architecture Search (NAS) has recently garnered research attention due to its ability to improve upon human-designed models. However, the computational requirements for finding an exact solution to this problem are often intractable, and the design of the search space still requires manual intervention. In this paper we attempt to establish a formalized framework from which we can better understand the computational bounds of NAS in relation to its search space. For this, we first reformulate the function approximation problem in terms of sequences of functions, and we call it the Function Approximation (FA) problem; then we show that it is computationally infeasible to devise a procedure that solves FA for all functions to zero error, regardless of the search space. We show also that such error will be minimal if a specific class of functions is present in the search space. Subsequently, we show that machine learning as a mathematical problem is a solution strategy for FA, albeit not an effective one, and further describe a stronger version of this approach: the Approximate Architectural Search Problem (a-ASP), which is the mathematical equivalent of NAS. We leverage the framework from this paper and results from the literature to describe the conditions under which a-ASP can potentially solve FA as well as an exhaustive search, but in polynomial time.

Introduction

The typical machine learning task can be abstracted out as the problem of finding the set of parameters of a computable function, such that it approximates an underlying probability distribution to seen and unseen examples [19]. Said function is often hand-designed, and the subject of the great majority of current machine learning research. It is well established that the choice of function heavily influences its approximation capability [5,55,59], and considerable work has gone into automating the process of finding such a function for a given task [9,10,18]. In the context of neural networks, this task is known as Neural Architecture Search (NAS), and it involves searching for the best-performing combination of neural network components and parameters from a set, also known as the search space. Although promising, little work has been done on the analysis of its viability with respect to its computation-theoretical bounds [14]. Since NAS strategies tend to be expensive in terms of their hardware requirements [23,40], research emphasis has been placed on optimizing search algorithms [14,32], even though the search space is still manually designed [14,26,27,60]. Without a better understanding of the mathematical confines governing NAS, it is unlikely that these strategies will efficiently solve new problems, or present reliably high performance, thus leading to complex systems that still rely on manually engineering architectures and search spaces.

Theoretically, learning has been formulated as a function approximation problem where the approximation is done through the optimization of the parameters of a given function [12,19,37,38,52], with strong results in the area of neural networks in particular [12,16,21,42]. On the other hand, NAS is often regarded as a search problem with an optimality criterion [10,14,40,50,59], within a given search space. The choice of such search space is critical, yet strongly heuristic [14].
Since we aim to obtain a better insight on how the process of finding an optimal architecture can be improved with relation to the search space, we hypothesize that NAS can be enunciated as a function approximation problem. The key observation that motivates our work is that all computable functions can be expressed in terms of combinations of members of certain sets, better known as models of computation. Examples of this are the µ-recursive functions, Turing Machines, and, of relevance to this paper, a particular set of neural network architectures [31]. Thus, in this study we reformulate the function approximation problem as the task of, for a given search space, finding the procedure that outputs the computable sequence of functions, along with their parameters, that best approximates any given input function. We refer to this reformulation as the Function Approximation (FA) problem, and regard it as a very general computational problem, akin to building a fully automated machine learning pipeline where the user provides a series of tasks, and the algorithm returns trained models for each input. This approach yields promising results in terms of the conditions under which the FA problem has optimal solutions, and about the ability of both machine learning and NAS to solve the FA problem.

Technical Contributions

The main contribution of this paper is a reformulation of the function approximation problem in terms of sequences of functions, and a framework within the context of the theory of computation to analyze it. Said framework is quite flexible, as it does not rely on a particular model of computation and can be applied to any Turing-equivalent model. We leverage its results, along with well-known results of computer science, to prove that it is not possible to devise a procedure that approximates all functions everywhere to zero error. However, we also show that, if the smallest class of functions along with the operators for the chosen model of computation are present in the search space, it is possible to attain an error that is globally minimal. Additionally, we tie said framework to the field of machine learning, and analyze in a formal manner three solution strategies for FA: the Machine Learning (ML) problem, the Architecture Search problem (ASP), and the less strict version of ASP, the Approximate Architecture Search problem (a-ASP). We analyze the feasibility of all three approaches in terms of the bounds described for FA, and their ability to solve it. In particular, we demonstrate that ML is an ineffective solution strategy for FA, and point out that ASP is the best approach in terms of generalizability, although it is intractable in terms of time complexity. Finally, by relating the results from this paper with the existing work in the literature, we describe the conditions under which a-ASP is able to solve the FA problem as well as ASP.

Outline

We begin by reviewing the existing literature in Section 2. In Section 3 we introduce FA, and analyze the general properties of this problem in terms of its search space. Then, in Section 4 we relate the framework to machine learning as a mathematical problem, and show that it is a weak solution strategy for FA, before defining a stronger approach (ASP) and its computationally tractable version (a-ASP). We conclude in Section 5 with a discussion of our work.
Related Work

The problem of approximating functions and its relation to neural networks can be found formulated explicitly in [38], and it is also mentioned often when defining machine learning as a task, for example in [2,4,5,19,52]. However, it is defined as a parameter optimization problem for a predetermined function. This perspective is also covered in our paper, yet it is much closer to the ML approach than to FA. For FA, as defined in this paper, it is central to find the sequence of functions which minimizes the approximation error.

Neural networks as function approximators are well understood, and there is a trove of literature available on the subject. An inexhaustive list of examples are the studies found in [12,16,21,22,25,35,36,38,42,44,50]. It is important to point out that the objective of this paper is not to prove that neural networks are function approximators, but rather to provide a theoretical framework from which to understand NAS in the contexts of machine learning, and computation in general. However, neural networks were shown to be Turing-equivalent in [31,45,46], and thus they are extremely relevant to this study.

NAS as a metaheuristic is also well explored in the literature, and its application to deep learning has been booming lately thanks to the widespread availability of powerful computers, and interest in end-to-end machine learning pipelines. There is, however, a long-standing body of research in this area, and the list of works presented here is by no means complete. Some papers that deal with NAS in an applied fashion are the works found in [1,9,10,29,43,48,49,51], while explorations of NAS and metaheuristics in general in a formal fashion can also be found in [3,10,44,58,59]. There is also interest in the problem of creating an end-to-end machine learning pipeline, also known as AutoML. Some examples are studies such as the ones in [15,20,23,57]. The FA problem is similar to AutoML, but it does not include the data preprocessing step commonly associated with such systems. Additionally, the formal analysis of NAS tends to be as a search, rather than a function approximation, problem.

The complexity theory of learning and neural networks has been explored as well. The reader is referred to the recent survey from [33], and [2,7,13,17,53]. Leveraging the group-like structure of models of computation is done in [39], and the Blum Axioms [6] are a well-known framework for the theory of computation in a model-agnostic setting. It was also shown in [8] that, under certain conditions, it is possible to compose some learning algorithms to obtain more complex procedures. Bounds on the generalization error were proven for convolutional neural networks in [28]. None of the papers mentioned, however, apply directly to FA and NAS in a setting agnostic to models of computation, and the key insights of our work, drawn from the analysis of FA and its solution strategies, are, to the best of our knowledge, not covered in the literature.

Finally, the Probably Approximately Correct (PAC) learning framework [52] is a powerful theory for the study of learning problems. It is a slightly different problem than FA, as the former has the search space abstracted out, while the latter concerns itself with finding a sequence that minimizes the error, by searching through combinations of explicitly defined members of the search space.
A Formulation of the Function Approximation Problem

In this section we define the FA problem as a mathematical task whose goal is, informally, to find a sequence of functions whose behavior is closest to an input function. We then perform a short analysis of the computational bounds of FA, and show that it is computationally infeasible to design a solution strategy that approximates all functions everywhere to zero error.

Preliminaries on Notation

Let R be the set of all total computable functions. Across this paper we will refer to the finite set of elementary functions E = {ψ_1, ..., ψ_m} as the smallest class of functions, along with their operators, of some Turing-equivalent model of computation. Given a set of functions S = {φ_j : j ∈ J}, let f = φ_{i_k} ∘ ... ∘ φ_{i_1} be a sequence of elements of S applied successively, such that i_1, ..., i_k ∈ I for some I ⊂ J. We will utilize the abbreviated notation f = (φ_i)_{i=1}^k to denote such a sequence; and we will use S^{*,n} to describe the set of all n-or-less-long possible sequences of functions drawn from said S, such that f ∈ S^{*,n} ⇔ f ∈ R. For consistency purposes, throughout this paper we will be using Zermelo-Fraenkel with the Axiom of Choice (ZFC) set theory. Finally, for simplicity of our analysis we will only consider continuous, real-valued functions, and beginning in Section 3.3, only computable functions.

The FA Problem

Prior to formally defining the FA problem, we must be able to quantify the behavioral similarity of two functions. This is done through the approximation error of a function:

Definition 1 (The approximation error). Let f and g be two functions. Given a nonempty subset σ ⊂ dom(g), the approximation error of a function f to a function g is a procedure which outputs 0 if f is equal to g with respect to some metric d : R × R → R_{≥0} across all of σ, and a positive number otherwise:

    ε_σ(f, g) = Σ_{x ∈ σ} d(f(x), g(x)),

where we assume that d(f(x), g(x)) takes a positive value for the case where f is undefined at some x ∈ σ.

Definition 2 (The FA Problem). For any input function F, given a function set (the search space) S, an integer n ∈ N_{>0}, and a nonempty set σ ⊂ dom(F), find the sequence f ∈ S^{*,n} such that ε_σ(f, F) is minimal among all members of S^{*,n} and σ.

The FA problem, as stated in Definition 2, makes no assumptions regarding the characterization of the search space, and follows closely the definition in terms of optimization of parameters from [37,38]. However, it makes a point of the fact that the approximation of a function should be given by a sequence of functions. If the input function were to be continuous and multivariate, we know from [24,34] that there exists at least one exact (i.e., zero approximation error) representation in terms of a sequence of single-variable, continuous functions. If such single-variable, continuous functions were to be present in S, one would expect that the FA problem could be solved to zero error for all continuous multivariate inputs, by simply comparing and returning the right representation. However, it is infeasible to devise a generalized algorithmic procedure that outputs such a representation:

Theorem 1. There is no computable procedure for FA that approximates all continuous, real-valued functions to zero error, across their entire domain.

Proof. Solution strategies for FA are parametrized by the sequence length n, the subset of the domain σ, and the search space S. Assume S is infinite. The input function F may be either computable or uncomputable. If the input F is uncomputable, by definition it can only be estimated to within its computable range, and hence its approximation error is nonzero.
If F is a computable function, we have guaranteed the existence of at least one function within S^{*,n} which has zero approximation error: F itself. Nonetheless, determining the existence of such a function is an undecidable problem. To show this, it suffices to note that it reduces to the problem of determining the equivalence of two halting Turing Machines by asking whether they accept the same language, which is undecidable. When n or σ are infinite, there is no guarantee that a procedure solving FA will terminate for all inputs. When n, σ, or S are finite, there will always be functions outside of the scope of the procedure that can only be approximated to a nonzero error. Therefore, there cannot be a procedure for FA that approximates all functions, let alone all computable functions, to zero error for their entire domain. ⊓⊔

It is a well-known result of computer science that neural networks [12,16,19,21,22] and PAC learning algorithms [52] are able to approximate a large class of functions to an arbitrary, non-zero error. However, Theorem 1 does not make any assumptions regarding the model of computation used, and thus it works as a more generalized statement of these results. For the rest of this paper we will limit ourselves to the case where n, σ, and S are finite, and the elements of S are computable functions.

A Brief Analysis of the Search Space

It has been shown that the solutions to FA can only be found in terms of finite sequences built from a finite search space, whose error with respect to the input function is nonzero. It is worth analyzing under which conditions these sequences will present the smallest possible error. For this, we note that any solution strategy for FA will have to first construct at least one sequence f ∈ S^{*,n}, and then compute its error against the input function F. It could be argued that this "bottom-up" approach is not the most efficient, and one could attempt to "factor" a function in a given model of computation that has explicit reduction formulas, such as the Lambda calculus. This, unfortunately, is not possible, as the problem of determining the reduction of a function in terms of its elementary functions is well known to be undecidable [11]. However, the idea of "factoring" a function can still be leveraged to show that, if the set of elementary functions E is present in the search space S, any sufficiently clever procedure will be able to attain the smallest possible theoretical error for S, for any given input function F:

Theorem 2. If E ⊂ S, then for any given input function F, the sequence with the smallest possible approximation error among sequences of length at most n is contained in S^{*,n}.

Proof. By definition, E can generate all possible computable functions, so for any F the sequence with the smallest approximation error, f_o, lies in E^{*,n}. If E ⊂ S, then E^{*,n} ⊂ S^{*,n}, and f_o is contained in S^{*,n}; conversely, if E ⊄ S, there may exist input functions whose sequence with the smallest approximation error, f_o, is not contained in S^{*,n}. ⊓⊔

In practice, constructing a space that contains E, and subsequently performing a search over it, can become a time-consuming task, given that the number of possible members of S^{*,n} grows exponentially with n. On the other hand, constructing a more "efficient" space that already contains the best possible sequence requires prior knowledge of the structure of a function relating S to F, the problem that we are trying to solve in the first place. That being said, Theorem 2 implies that there must be a way to quantify the ability of a search space to generalize to any given function, without the need of explicitly including E.
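The mechanics of Definitions 1 and 2 can be made concrete with a small amount of code. The following is a minimal sketch under simplifying assumptions: a toy, discrete search space of three illustrative functions (not drawn from the paper), an L1-style choice of metric d, and brute-force enumeration of S^{*,n}.

```python
from itertools import product

# Toy search space; these members are illustrative placeholders.
S = {
    "inc": lambda x: x + 1,
    "double": lambda x: 2 * x,
    "neg": lambda x: -x,
}

def compose(names):
    """Apply members of S successively: f = phi_ik o ... o phi_i1."""
    def f(x):
        for name in names:
            x = S[name](x)
        return x
    return f

def approx_error(f, F, sigma, d=lambda a, b: abs(a - b)):
    """Definition 1: zero iff f agrees with F across all of sigma."""
    return sum(d(f(x), F(x)) for x in sigma)

def solve_fa(F, sigma, n):
    """Definition 2 by exhaustive search; note |S*,n| grows as sum_k |S|^k."""
    best, best_err = None, float("inf")
    for k in range(1, n + 1):
        for names in product(S, repeat=k):
            err = approx_error(compose(names), F, sigma)
            if err < best_err:
                best, best_err = names, err
    return best, best_err

# Target F(x) = 2x + 2 is recovered exactly as double o inc, with zero error.
print(solve_fa(lambda x: 2 * x + 2, sigma=range(-3, 4), n=2))
```

With these mechanics in hand, we can return to the question of quantifying a search space's ability to generalize.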
To achieve this, we first look at the ability of every sequence to approximate a function, by defining the information capacity of a sequence:

Definition 3 (The Information Capacity). Let f = (φ_i)_{i=1}^n be a finite sequence, where every φ_i has associated a finite set of possible parameters π_i, and a restriction set ρ_i in its domain. Then the information capacity of a sequence f is given by the Cartesian product of the domain, parameters, and range of each φ_i:

    C(f) = ∏_{i=1}^n (ρ_i × π_i × img(φ_i)).

Note that the information capacity of a function is quite similar to its graph, but it makes an explicit relationship with its parameters. Specifically, in the case where π_i ⊂ Π for every π_i in some f, the information capacity of f is contained in the corresponding product taken over the common parameter set Π.

At a first glance, Definition 3 could be seen as a variant of the VC dimension [7,53], since both quantities attempt to measure the ability of a given function to generalize. However, the latter is designed to work on a fixed function, and our focus is on the problem of building such a function. A more in-depth discussion of this distinction, along with its application to the framework from this paper, is given in Section 4.1 and in Appendix B.

A search space is comprised of one or more functions, and algorithmically we are more interested in the quantifiable ability of the search space to approximate any input function. Therefore, we define the information potential of a search space as follows:

Definition 4 (The Information Potential). The information potential of a search space S is given by all the possible values its members can take for a given sequence length n:

    U(S, n) = ⋃_{f ∈ S^{*,n}} C(f).

The definition of the information potential allows us to make the important distinction between comparing two search spaces S_1, S_2 containing the same function f, but defined over different parameters π_1, π_2 ⊂ Π, and comparing S_1 and S_2 with another space, S_3, containing a different function g: the information potentials will be equivalent in the first case, U(S_1, n) = U(S_2, n), but not in the second: U(S_3, n) ≠ U(S_1, n).

For a given space S, as the sequence length n grows to infinity, and if the search space includes the set of elementary functions, E ⊂ S, its information potential encompasses all computable functions:

    lim_{n→∞} U(S, n) ⊇ ⋃_{g ∈ R} C(g).    (4)

In other words, the information potential of such an S approaches the information capacity of a universal approximator, which, depending on the model of computation chosen, might be a universal Turing machine, or the universal function from [41], to name a few. In the next section, we leverage the results shown so far to evaluate three different procedures to solve FA, and show that there exists a best possible solution strategy.

The FA Problem in the Context of Machine Learning

In this section we relate the results from analyzing FA to the field of machine learning. First, we show that the machine learning task can be seen as a solution strategy for FA. We then introduce the Architecture Search Problem (ASP) as a theoretical procedure, and note that it is the best possible solution strategy for FA. Finally, we note that ASP is unviable in an applied setting, and define a more relaxed version of this approach: the Approximate Architecture Search Problem (a-ASP), which is the analogue of the NAS task commonly seen in the literature.

Machine Learning as a Solver for FA

The Machine Learning (ML) problem, informally, is the task of approximating an input function F through repeated sampling and the parameter search of a predetermined function.
This definition is a simplified, abstracted-out version of the typical machine learning task. It is, however, not new, and a brief search in the literature ([4,5,19,37]) can attest to the existence of several equivalent formulations. We reproduce it here for notational purposes, and constrain it to computable functions:

Definition 5 (The ML Problem). For an unknown, continuous function F defined over some domain dom(F), given a finite subset σ ⊂ dom(F), a function f with parameters from some finite set Π, and a function m : R × R → R_{≥0}, find the parameters π_o ∈ Π such that m(f(x), F(x)) is minimal for all x ∈ σ.

As defined in Definition 2, any procedure solving FA is required to return the sequence that best approximates any given function. In the ML problem, however, such a sequence f is already given to us. Even so, we can still reformulate ML as a solution strategy for FA. For this, let the search space be a singleton of the form S_ML = {f}; set m to be the metric function d in the approximation error; and leave σ as it is. We then carry out a "search" over this space by simply picking f, and then optimizing the parameters of f with respect to the approximation error ε_σ(f, F). We then return the function along with the parameters π_o that minimize the error.

Given that the search is performed over a single element of the search space, this is not an effective procedure in terms of generalizability. To see this, note that the procedure acts as intended, and "finds" the function that minimizes the approximation error ε_σ(f, F) between f and any other F in the search space S_ML. However, being able to approximate an input function F in a single-element search space tells us nothing about the ability of ML to approximate other input functions, or even whether such f ∈ S_ML is the best function approximation for F in the first place. In fact, we know by Theorem 2 that for a given sequence length n, for every F there exists an optimal sequence f_o in E^{*,n}, which may not be present in S_ML.

Since we are constrained to a singleton search space, one could be tempted to build a search space with one single function that maximizes the information potential, such as the one described in Equation 4, say, by choosing f to be a universal Turing Machine. There is one problem with this approach: this would mean that we need to take in as an input the encoding of the input function F, along with the subset of the domain σ. If we were able to take the encoding of F as part of the input, we would already know the function, and this would not be a function approximation problem in the first place. Additionally, we would only be able to evaluate the set of computable functions which take in as an argument their own encoding, as it, by definition, needs to be present in σ.

In terms of the framework from this paper we can see that, no matter how we optimize the parameters of f to fit new input functions, the information potential U(S_ML, n) remains unchanged, and the error will remain bounded. This leads us to conclude that measuring a function's ability to learn through its number of parameters [19,47,53] is a good approach for a fixed f and single input F, but incomplete in terms of describing its ability to generalize to other problems. This is of critical importance because, in an applied setting, even though nobody would attempt to use the same architecture for all possible learning problems, the choice of f remains a crucial, and mostly heuristic, step in the machine learning pipeline.
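Framed this way, the singleton strategy is easy to exhibit. Below is a minimal sketch under the same toy assumptions as before; the fixed model f(x; a, b) = a·x + b and its parameter grid are illustrative, not from the paper. It recovers a linear target exactly, but its error on any non-linear target stays bounded away from zero no matter how the parameters are optimized.

```python
from itertools import product

def f(x, a, b):
    # The predetermined function: the entire search space is S_ML = {f}.
    return a * x + b

def ml_solve(F, sigma, Pi):
    """Definition 5: pick parameters pi_o in the finite set Pi minimizing the error on sigma."""
    best, best_err = None, float("inf")
    for a, b in Pi:
        err = sum(abs(f(x, a, b) - F(x)) for x in sigma)
        if err < best_err:
            best, best_err = (a, b), err
    return best, best_err

Pi = list(product(range(-3, 4), repeat=2))   # finite parameter set

print(ml_solve(lambda x: 2 * x + 1, sigma=range(5), Pi=Pi))  # recovers (2, 1), error 0
print(ml_solve(lambda x: x * x, sigma=range(5), Pi=Pi))      # bounded, nonzero error
```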
The statements regarding the information potential of the search space are in accordance with the results in [55], where it was shown that, in the terminology of this paper, two predetermined sequences f and f′, when averaging their approximation error across all possible input functions, will have equivalent performance. We have seen that ML is unable to generalize well to any other possible input function, and is unable to determine whether the given sequence f is the best for the given input. This leads us to conclude that, although ML is a computationally tractable solution strategy for FA, it is a weak approach in terms of generalizability.

The Architecture Search Problem (ASP)

We have shown that ML is a solution strategy for FA, although the nature of its search space makes it ineffective in a generalized setting. It is only natural to assume that a stronger formulation of a procedure to solve FA would involve a more complex search space. Similar to Definition 5, we are given the task of approximating an unknown function F through repeated sampling. Unlike ML, however, we are now able to select the sequence of functions (i.e., architecture) that best fits a given input function F:

Definition 6 (The Architecture Search Problem (ASP)). For an unknown, continuous function F defined over some domain dom(F), given a finite subset σ ⊂ dom(F), a sequence length n, a search space S_ASP, and a function m : R × R → R_{≥0}, find the sequence f = (φ_i)_{i=1}^k, φ_i ∈ S_ASP, k ≤ n, such that m(f(x), F(x)) is minimal for all x ∈ σ, and all f ∈ S_ASP^{*,n}.

Note that we have left the parameter optimization problem implicit in this formulation, since, as pointed out in Section 4.1, a single-function search space f would be ineffective for dealing with multiple input functions F, no matter how well the optimizer performed for a given subset of these inputs. At a first glance, ASP looks similar to the PAC learning framework [52]. However, FA is the task of finding the right sequence of computable functions for all possible functions, while PAC is a generalized, tractable formulation of learning problems, with the search space abstracted out. A more precise analysis of the relationship between FA and PAC is described in Appendix A.

As a solution strategy for FA, ASP is also subject to the results from Section 3. The key difference between ML and ASP is that ASP has access to a richer search space, which allows it to have a better approximation capability. In particular, ASP could be seen as a generalized version of the former, since for any n-sized sequence present in S_ML, one could construct a space with bigger information potential in ASP, but with the same constraints on sequence length. For example, we could use E as our search space, choose a sequence length n, and so U(S_ML, n) ⊂ U(E, n). Since ASP has no explicit constraints on time and space, this procedure is essentially performing an exhaustive search. Theorem 2 implies that, for fixed n and any input F, ASP will always return the best possible sequence within that space, as long as the search space contains the set of elementary functions, E ⊂ S. On the other hand, it is a cornerstone of the theory and practice of machine learning that learning algorithms must be tractable; that is, they must run in polynomial time. Given that the search space for ASP grows exponentially with the sequence length, this approach is an interesting theoretical tool, but not very practical.
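To make the intractability concrete, note that if every combination of at most n members of S counts as a candidate sequence, the size of the expanded search space is bounded by a geometric sum:

    |S^{*,n}| ≤ Σ_{k=1}^n |S|^k = (|S|^{n+1} − |S|) / (|S| − 1),

so even a modest space with |S| = 10 and n = 10 already yields on the order of 10^{10} candidate sequences, each of which must be evaluated on all of σ.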
We will still use ASP as a performance target for the evaluation of more applicable procedures. However, it is desirable to formulate a solution strategy for FA that can be used in an applied setting, but can also be analyzed within the framework of this paper. To achieve this, first we note that any other solution strategy for FA which terminates in polynomial time will have to be able to avoid verifying every possible function in the search space. In other words, such a procedure would require a function that is able to choose a nonempty subset of the search space. We denote such a function as B, such that for a search space S, B(S) ⊂ S^{*,n}. We can now define the Approximate Architecture Search Problem (a-ASP) as the formulation of NAS in terms of the FA framework:

Definition 7 (The Approximate ASP (a-ASP)). For an unknown, continuous function F defined over some domain dom(F), given a finite subset σ ⊂ dom(F), a sequence length n, a search space S_ASP, a function m : R × R → R_{≥0}, and a set builder function B(S_ASP) ⊂ S_ASP^{*,n}, find the sequence f = (φ_i)_{i=1}^k, φ_i ∈ S_ASP, k ≤ n, such that m(f(x), F(x)) is minimal for all x ∈ σ and all f ∈ B(S_ASP).

Just as the previous two procedures we defined, a-ASP is also a solution strategy for FA. The only difference between Definition 6 and Definition 7 is the inclusion of the set builder function to traverse the space in a more efficient manner. Due to the inclusion of this function, however, a-ASP is weaker than ASP, since it is not guaranteed to find the function f_o that globally minimizes ε_σ(f_o, F) for all given F. Additionally, the fact that this function must be included in the parameters for a-ASP implies that such a procedure requires some design choices. Given that everything else in the definition of a-ASP is equivalent to ASP, it can be stated that the set builder function is the only deciding factor when attempting to match the performance of ASP with a-ASP.

It has been shown [56] that certain set builder functions perform better than others in a generalized setting. This can also be seen from the perspective of the FA framework, where we have available at our disposal the sequences that make up a given function. In particular, if S = {φ_1, ..., φ_m} is a search space, and B is a function that selects elements from S^{*,n}, a-ASP not only has access to the performance of all the k sequences chosen so far, {ε_σ(f_i, F) : f_i ∈ B(S^{*,n})}_{i ∈ {1,...,k}}, but also the encoding (the configurations from [56]) of their composition. This means that, given enough samples, when testing against a subset of the input, σ′ ⊂ σ, such an algorithm would be able to learn the expected output φ(s) of the functions φ ∈ S, and their behavior if included in the current sequence f_{k+1} = (f_k, φ)(s), for s ∈ σ′. Including such information in a set builder function could allow the procedure to make better decisions at every step, and this approach has been used in applied settings with success [26,30]. It can be seen that these design choices are not necessarily problem-dependent, and, from the results of Theorem 2, they can be made in a theoretically motivated manner. Specifically, we note that the information potential of the search space remains unchanged between a-ASP and ASP, and so, by including E, a-ASP could have the ability to perform as well as ASP.

Conclusion

The FA problem is a reformulation of the problem of approximating any given function, but with finding a sequence of functions as a central aspect of the task.
In this paper, we analyzed its properties in terms of the search space, and its applications to machine learning and NAS. In particular, we showed that it is impossible to write a procedure that solves FA for any given function and domain with zero error, but described the conditions under which such error can be minimal. We leveraged the results from this paper to analyze three solution strategies for FA: ML, ASP, and a-ASP. Specifically, we showed that ML is a weak solution strategy for FA, as it is unable to generalize or determine whether the sequence used is the best fit for the input function. We also pointed out that ASP, although the best possible algorithm to solve FA, is intractable in an applied setting. We finished by formulating a solution strategy that merged the best of both ML and ASP, a-ASP, and pointed out, through existing work in the literature complemented with the results from this framework, that it has the ability to solve FA as well as ASP in terms of approximation error.

One area that was not discussed in this paper was whether it would be possible to select a priori a good subset σ of the input function's domain. This problem is important, since a good representative of the input will greatly influence a procedure's capability to solve FA. This is tied to the data selection process, and it was not dealt with in this paper. Further research on this topic is likely to bear great influence on machine learning as a whole.

Appendices

A PAC Is a Solver for FA

PAC learning, as defined by Valiant [52], is a slightly different problem than FA, as it concerns itself with whether a concept class C can be described with high probability by a member of a hypothesis class H. It also establishes bounds in terms of the amount of samples from members c ∈ C that are needed to learn C. On the other hand, FA and its solution strategies concern themselves with finding a solution that minimizes the error, by searching through sequences of explicitly defined members drawn from a search space. Regardless of these differences, PAC learning as a procedure can still be formulated as a solution strategy for FA. To do this, let H be our search space. Then note that the PAC error function e_pac(h, c) = Pr_{x∼P}[h(x) ≠ c(x)], c ∈ C, h ∈ H, is equivalent to computing ε_σ(h, c) for some subset σ ⊂ dom(c), and choosing the frequentist difference between the images of the functions as the metric d. Our objective would be to return the h ∈ H that minimizes the approximation error for a given subset σ ⊂ C. Note that we do not search through the expanded search space H^{*,n}.

Finding the right distribution for a specific class may be NP-hard [7], and so e_pac requires us to make certain assumptions about the distribution of the input values. Additionally, any optimizer for PAC is required to run in polynomial time. Due to all of this, PAC is a weaker approach to solve FA when compared to ASP, but stronger than ML, since this solution strategy is fixed to the design of the search space, and not to the choice of function. Nonetheless, it must be stressed that the bounds and paradigms provided by PAC and FA are not mutually exclusive, either: the most prominent example being that PAC learning provides conditions under which the choice of subset σ is optimal. With the polynomial constraint for PAC learning lifted, and letting the sample and search space sizes grow infinitely, PAC is effectively equivalent to ASP.
However, that defies the purpose of the PAC framework, as its success relies on being a tractable learning theory.

B The VC Dimension and the Information Potential

There is a natural correspondence between the VC dimension [7,53] of a hypothesis space and the information capacity of a sequence. To see this, note that the VC dimension is usually defined in terms of the set of concepts (i.e., the input function F) that can be shattered by a predetermined function f with img(f) = {0, 1}. It is frequently used to quantify the ability of a procedure to learn the input function F. In the FA framework we are more interested in whether the search space, also a set, of a given solution strategy is able to generalize well to multiple, unseen input functions. Therefore, for fixed F and f, the VC dimension and its variants provide a powerful insight on the ability of an algorithm to learn. When f is not fixed, it is still possible to utilize this quantity to measure the capacity of a search space S, by simply taking the union of all possible f ∈ S^{*,n} for a given n. However, when the input functions are not fixed either, we are unable to use the definition of VC dimension in this context, as the set of input concepts is unknown to us. We thus need a more flexible way to model generalizability, and that is where we leverage the information potential U(S, n) of a search space.
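As a closing illustration, the capacity and potential of a fully finite toy space can be enumerated directly. The sketch below assumes the reconstruction of Definitions 3 and 4 given above (capacity as the Cartesian product of each member's restricted domain, parameter set, and range); the two members and their sets are invented for the example.

```python
from itertools import product

# Each search-space member: (restriction set rho_i, parameter set pi_i, range img_i).
S = {
    "phi1": ({0, 1}, {"a", "b"}, {0, 1}),
    "phi2": ({0, 1, 2}, {"c"}, {1, 2}),
}

def capacity(seq):
    """C(f): Cartesian product of (rho_i x pi_i x img(phi_i)) over the sequence."""
    per_member = [set(product(rho, pi, img)) for rho, pi, img in (S[s] for s in seq)]
    return set(product(*per_member))

def potential(n):
    """U(S, n): union of the capacities of every sequence of length <= n."""
    u = set()
    for k in range(1, n + 1):
        for seq in product(S, repeat=k):
            u |= capacity(seq)
    return u

# The potential grows strictly with the sequence length n.
print(len(potential(1)), len(potential(2)))
```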
Discontinuation of medications classified as reuptake inhibitors affects treatment response of MDMA-assisted psychotherapy

MDMA-assisted psychotherapy is under investigation as a novel treatment for posttraumatic stress disorder (PTSD). The primary mechanism of action of MDMA involves the same reuptake transporters targeted by antidepressant medications commonly prescribed for PTSD. Data were pooled from four phase 2 trials of MDMA-assisted psychotherapy. To explore the effect of tapering antidepressant medications, participants who had been randomized to receive active doses of MDMA (75–125 mg) were divided into two groups (taper group, n = 16; non-taper group, n = 34). Between-group comparisons were made for PTSD and depression symptom severity at the baseline and the primary endpoint, and for peak vital signs across two MDMA sessions. Demographics, baseline PTSD, and depression severity were similar between the taper and non-taper groups. At the primary endpoint, the non-taper group (mean = 45.7, SD = 27.17) had significantly (p = 0.009) lower CAPS-IV total scores compared to the taper group (mean = 70.3, SD = 33.60). More participants in the non-taper group (63.6%) no longer met PTSD criteria at the primary endpoint than those in the taper group (25.0%). The non-taper group (mean = 12.7, SD = 10.17) had lower depression symptom severity scores (p = 0.010) compared to the taper group (mean = 22.6, SD = 16.69). There were significant differences between groups in peak systolic blood pressure (p = 0.043) and diastolic blood pressure (p = 0.032). Recent exposure to antidepressant drugs that target reuptake transporters may reduce treatment response to MDMA-assisted psychotherapy.

Introduction

PTSD is a relatively prevalent disorder, affecting 3 to 4% of the global population (Hoge et al. 2004; Koenen et al. 2017). People with PTSD can face reduced quality of life in multiple areas, from workplace productivity to interpersonal relationships, and are at increased risk for suicidal thoughts or behavior (Sareen et al. 2007; Shea et al. 2010). Currently available treatments include pharmacotherapies and psychotherapies. Two selective serotonin reuptake inhibitors (SSRI), sertraline and paroxetine, are the only FDA-approved medications for PTSD, but other adjunctive drugs are also commonly prescribed for sleep disturbances and anxiety associated with PTSD.

Six phase 2 randomized, double-blind, placebo-controlled clinical trials were conducted to investigate MDMA-assisted psychotherapy for PTSD treatment (Mithoefer et al. 2019; Mithoefer et al. 2018; Mithoefer et al. 2011; Oehen et al. 2013; Ot'alora et al. 2018). In these studies, participants worked with a male and female co-therapy team who followed a manualized format of MDMA-assisted psychotherapy. The manualized treatment includes a course of preparatory psychotherapy, two to three 8-hour-long MDMA sessions, and follow-up integrative psychotherapy. Symptom assessment was conducted by a blinded independent rater who was not present during therapy sessions. Encouraging findings have been reported from an analysis of data pooled across these six studies (Mithoefer et al. 2019). Compared to the placebo/control group that received the same psychotherapy, participants receiving active doses of MDMA (75–125 mg) had significant reductions in symptoms, as measured via the Clinician-Administered PTSD Scale for DSM-IV (CAPS-IV).
The between-group Cohen's d effect size was 0.8, indicating a large effect for active doses of MDMA as an adjunct to psychotherapy. There was also a trend for active-dose participants to experience greater reductions in symptoms of depression.

MDMA increases synaptic concentrations of serotonin, norepinephrine, and dopamine by reversing the flow of neurotransmitter through membrane-bound transporter proteins (SERT, NET, and DAT, respectively). Several medications commonly prescribed for PTSD and depression target one or more of these transporters, including selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), norepinephrine reuptake inhibitors (NRIs), and norepinephrine-dopamine reuptake inhibitors (NDRIs). When MDMA is co-administered with a reuptake inhibitor such as citalopram or fluoxetine, the subjective and psychological effects are markedly attenuated (Farre et al. 2007; Liechti et al. 2000). For this reason, in order to investigate the effects of MDMA-assisted psychotherapy on PTSD symptoms, participants in phase 2 studies tapered off psychiatric medications prior to commencing MDMA sessions.

Reuptake inhibitors all modulate monoaminergic signaling by blocking re-uptake of neurotransmitters back into terminals, and the subsequent changes in neuronal discharge and transmitter release. In addition, long-term administration of SSRIs desensitizes and downregulates 5-HT1A autoreceptors, leading to reduced negative feedback and ultimately more 5-HT released into the synapse (Richelson 2001). Since the full therapeutic effects of antidepressants are present only after weeks of daily dosing, involvement of several mechanisms has been posited, such as effects on downstream gene transcription, synaptogenesis, inflammation, and the hypothalamic-pituitary-adrenal axis (Malberg and Schechter 2005; Stahl 1998; Walker 2013). After chronic treatment with sertraline or paroxetine, SERT is downregulated to varying degrees in humans, depending on the brain region, with greater SERT radioligand occupancy occurring in brain regions associated with depressive symptoms, namely the subgenual cingulate, amygdala, and raphe nuclei (Baldinger et al. 2014). In SSRI-treated rats, SERT binding was decreased by 80-90% in the CA3 region of the hippocampus, and this reduction was not attributed to decreased SERT gene transcription, suggesting that chronic use of SSRIs decreases synaptic SERT protein levels (Benmansour et al. 1999). Other neurotransmitters, such as GABA, dopamine, glutamate, and noradrenaline, are affected indirectly by SSRIs and may play a role in the antidepressant effects (Olver et al. 1999).

Cessation of antidepressants is associated with a variety of psychological and physiological withdrawal symptoms, described as a discontinuation syndrome in the DSM-5. Tapering reuptake inhibitors is recommended for discontinuation, with longer periods (months) of tapering resulting in reduced frequency and severity of adverse effects compared to abrupt cessation or short tapers (2-4 weeks) (Horowitz and Taylor 2019). In order to understand whether or not having recently tapered off a medication targeting the same primary binding sites as MDMA would affect treatment response, we pooled data from four phase 2 studies that included both the CAPS-IV and the Beck Depression Inventory-II (BDI-II).
PTSD and depression symptom severity were compared, as well as vital sign values during MDMA sessions, between participants that tapered off reuptake inhibitors and those that did not because they had not been taking them at the time of initial study screening.

Setting

Four randomized, double-blind trials were conducted at different study sites in the USA (studies MP-8, MP-12), Canada (study MP-4), and Israel (study MP-9). These four trials all included the BDI-II, while the other two early phase 2 trials (MP-1, MP-2) did not, and therefore are not included in this analysis. Data were collected from December 2010 to March 2017. Trials were approved by the Western-Copernicus Institutional Review Board (Research Triangle or Cary, NC; MP-8, MP-12), IRB Services/Chesapeake (Aurora, ON; MP-4), and the Helsinki Committee of Beer Yaakov Hospital (Israel; MP-9).

Participants and study design

Participant recruitment, inclusion/exclusion criteria, and study designs were covered in detail in prior publications (Mithoefer et al. 2019; Mithoefer et al. 2018; Ot'alora et al. 2018). Briefly, studies enrolled male and female participants with chronic PTSD (symptoms lasting greater than 6 months) and Clinician-Administered PTSD Scale for DSM-IV (CAPS-IV) severity scores ≥50 (MP-8, MP-9, MP-12) or ≥60 (MP-4). Psychiatric medications were tapered and discontinued prior to commencing experimental sessions. The protocols specified for medications to be tapered gradually over a period of weeks to minimize withdrawal symptoms, and for them to be discontinued at least five half-lives of each drug prior to MDMA administration. Anxiolytics and sedative hypnotics were used as needed between experimental sessions. Participants taking gabapentin for pain management could continue to do so throughout the course of the study. Participants taking stimulants to treat attention deficit disorder were permitted to take them during the study, but had to discontinue use for five half-lives prior to each MDMA session through ten days after each session. All participants gave written informed consent.

Enrolled participants were randomized to receive either active doses of MDMA (75-125 mg) or a control dose (0-40 mg MDMA) during psychotherapy sessions with a male/female co-therapy team. Since the aim of this paper is to evaluate the effect of having recently tapered off reuptake inhibitors on the treatment response, only data from participants who received active MDMA doses in the blinded study segment are included in this analysis (see supplemental content for control group CAPS scores). Blinded doses were administered during two 8-h psychotherapy sessions spaced 3-5 weeks apart. Each initial dose was followed approximately 1.5-2.5 h later by an optional supplemental dose equal to half the initial dose. Each blinded experimental session was followed by three non-drug 90-min integrative sessions. The primary endpoint occurred 1-2 months (depending on the study) after the second blinded session. Blinded independent raters, who were not present during any psychotherapy sessions, administered the primary outcome measure (CAPS-IV). Participants self-reported depression symptoms on a secondary measure (BDI-II).

Assessments

The CAPS-IV is a semi-structured interview addressing PTSD symptom clusters recognized by the DSM-IV (re-experiencing, avoidance, and hyperarousal) (Blake et al. 1995; Nagy et al. 1993; Weathers et al. 2001).
The CAPS-IV contains frequency and intensity scores for each of the three symptom clusters that are summed to produce a total severity score, the primary outcome for these studies. The CAPS-IV has a dichotomous diagnostic score for meeting PTSD diagnostic criteria. The Beck Depression Inventory-II (BDI-II) is an established 21-item measure of self-reported depression symptoms (Beck et al. 1996). Responses are made on a four-point Likert scale and summed to produce an overall score.

To monitor safety, vital signs were measured before, during, and after the experimental sessions. Blood pressure and heart rate were measured in intervals of 15 to 30 min, and body temperature every 60 to 90 min, during MDMA sessions.

Statistical analysis

Data were pooled across four studies that all used the CAPS-IV and BDI-II. Only data from participants randomized to receive active doses of MDMA (75 mg, 100 mg, and 125 mg) were included in the analyses. All available data at each endpoint were used and missing data were not imputed. Participants were divided into two groups for exploratory analyses. The taper group consisted of participants who tapered off medications classified as reuptake inhibitors (see Table 1) at the time of screening or enrollment prior to commencing blinded sessions. Medications classified as reuptake inhibitors included selective serotonin reuptake inhibitors, serotonin-norepinephrine reuptake inhibitors, norepinephrine reuptake inhibitors, and norepinephrine-dopamine reuptake inhibitors. The non-taper group consisted of participants who did not taper medications in this drug class, but could have tapered medications from other drug classes (e.g., benzodiazepines).

The primary analysis of CAPS-IV total severity scores was a repeated measures ANOVA with time (baseline and primary endpoint) as the within-subject factor and group (taper vs. non-taper) as the between-subject factor. If significant main effects were detected, Bonferroni post hoc tests were used for between-group comparisons. BDI-II total scores were analyzed with the same method. Independent-samples t tests compared peak vital signs across the two experimental sessions. Pearson correlation analyses were used to determine the relationship between time of abstinence (antidepressant stop date to first MDMA session date), the change in CAPS-IV scores (primary endpoint minus baseline), and the average peak vital signs in the MDMA sessions. Group differences in baseline characteristics, demographics, and PTSD diagnostic criteria (CAPS-IV) were evaluated with Pearson's chi-squared test or independent-samples t test.

Table 1 displays the demographics and baseline characteristics of the taper and non-taper groups. Of the 50 participants randomized to active MDMA doses (75-125 mg), 16 met criteria for the taper group, and the other 34 for the non-taper group (Table 2). Most participants tapered off one drug (n = 12), but some participants tapered off two (n = 3) or three drugs (n = 1). Table 1 shows the number of participants that tapered off each reuptake inhibitor. The average (SD) number of days from when the medications were stopped to the first MDMA session was 25.1 (17.7), range 4 to 70 days. The taper period was required to be an appropriate length to avoid withdrawal effects, but the start date for the tapering period was not collected; therefore the number of days for the taper period is unknown.
An interval of at least five times the half-life of the particular drug and its active metabolites, plus 1 week for stabilization, was required before the first MDMA session. One participant in the non-taper group dropped out of the study prior to the primary endpoint.

Sample

For the taper group, nine participants (56.3%) were female, with a mean (SD) age of 40.7 (14.19); for the non-taper group, 15 (44.1%) were female, with a mean (SD) age of 39.8 (11.35). The majority in both groups were White/Caucasian (non-taper group: 82.4%; taper group: 93.8%). For these demographics, there were no significant differences between groups.

Outcome measures: CAPS-IV and BDI-II

The mean (SD) change from baseline to the primary endpoint was −41.1 (19.86) for the non-taper group (n = 33) and −22.6 (33.80) for the taper group (n = 16). There was a significant time × group interaction (F(1,47) = 5.86, p = 0.019) in the overall ANOVA for CAPS-IV scores. At the primary endpoint (Table 3), the non-taper group had significantly (p = 0.009) lower CAPS-IV total scores (mean = 45.7, SD = 27.17) than the taper group (mean = 70.25, SD = 33.60). More participants in the non-taper group than in the taper group no longer met PTSD criteria at the primary endpoint (63.6% vs. 25.0%; χ²(1) = 6.437, p = 0.011). There was no difference in CAPS-IV total scores at baseline between groups. There was no significant correlation (r = 0.13, p = 0.633) between the time of abstinence from the reuptake inhibitor and the change in CAPS-IV total scores at the primary endpoint. For BDI-II total scores, there was a significant time × group interaction (F(1,47) = 4.88, p = 0.032) in the overall ANOVA. The non-taper group (mean = 12.4, SD = 10.17) had lower depression symptom severity (p = 0.010) at the primary endpoint than the taper group (mean = 22.6, SD = 16.69). The mean (SD) change from baseline to the primary endpoint was −17.2 (11.48) for the non-taper group and −8.3 (16.70) for the taper group. Baseline BDI-II scores were equivalent between groups.

Vital signs

For vital sign values across the two blinded sessions, there were significant differences between taper groups for peak (maximum elevation) values during the session for systolic (p = 0.043) and diastolic (p = 0.032) blood pressure. The non-taper group had higher maximum blood pressure values (systolic mean = 152.5, SD = 17.60; diastolic mean = 93.1, SD = 11.74) than the taper group (systolic mean = 144.5, SD = 18.54; diastolic mean = 87.8, SD = 9.78). No significant between-group differences were detected for body temperature or heart rate, and no differences were found between groups at the pre-dose measurement or the session endpoint for any vital signs. The number of days abstinent from reuptake inhibitors prior to the first MDMA session positively correlated with average maximum body temperature (r = 0.381, p = 0.032) during the MDMA sessions. Systolic (r = 0.326, p = 0.069) and diastolic blood pressure (r = 0.307, p = 0.088) trended in the same direction, with longer periods of abstinence associated with higher maximum blood pressure readings.

Discussion

MDMA-assisted psychotherapy reduces PTSD symptom severity. Recent prior use and tapering of medications that target monoamine reuptake transporters resulted in blunted therapeutic and physiological responses to MDMA in these phase 2 trials.
Participants who tapered reuptake inhibitors at the time of study enrollment had significantly higher CAPS scores at the primary endpoint than participants who had not recently taken medications in these drug classes. More participants still met PTSD diagnostic criteria in the taper group (75%) than in the non-taper group (36.4%) at the primary endpoint. Moreover, the expected increases in systolic and diastolic blood pressure following MDMA administration were reduced in the taper group compared to the non-taper group.

There are a few possible explanations for these results. The binding sites (SERT, NET, DAT) for MDMA may have still been downregulated in individuals who tapered reuptake inhibitor medications at the time of study enrollment. In studies with knockout mouse strains, SERT and DAT were necessary for MDMA-stimulated efflux of serotonin and dopamine in the striatum and prefrontal cortex (Hagino et al. 2011). Transporter receptor occupancy studies in humans have found that SSRI treatment at minimum therapeutic doses resulted in a mean SERT occupancy of 76-85% (percent reduction in binding potential) (Meyer et al. 2004), and in rats treated with SSRIs, receptor densities are reduced to a similar extent (Wamsley et al. 1987). Because of these neuroadaptations, gradual tapering is recommended for discontinuation of drugs in this class to minimize withdrawal symptoms. The time required to recover normal function remains uncertain, but patients can experience withdrawal symptoms for weeks to months, and sometimes even years, after cessation of reuptake inhibitors (Davies et al. 2018; Horowitz and Taylor 2019). The severity of withdrawal symptoms appears to be related to the drug, dose, duration of taking the medication, taper duration, and step-down dosing patterns.

In addition to SERT, other serotonin receptors important for modulating the effects of MDMA could have been functioning differently after chronic use of these medications. For example, rats were dosed daily with fluoxetine for 14 days and subsequently challenged with a 5-HT1A agonist at various time points after discontinuation. Two days post-treatment, 5-HT1A-mediated release of ACTH and oxytocin was reduced by 68-74% compared to placebo controls, and 60 days post-discontinuation, the oxytocin response was still reduced by 26% (Raap et al. 1999). MDMA enhances release of both oxytocin and ACTH (Dumont et al. 2009; Grob et al. 1996). Increased oxytocin may partially mediate the prosocial effects of MDMA and the processing of negative emotional stimuli (Hysek et al. 2014; Kirkpatrick et al. 2015), and both oxytocin and ACTH could be involved in the therapeutic effects observed in MDMA-assisted psychotherapy trials. Alterations in the function of other serotonin receptors could also impact the subjective effects of MDMA. Prior studies have found less sensitivity of 5-HT2A and 5-HT4 receptors in humans after administration of SSRIs (Haahr et al. 2014; Meyer et al. 2001).

In the MDMA-assisted psychotherapy trials, participants were required to have completed tapering off psychiatric medications at least five drug half-lives prior to starting the blinded sessions. In this sample, there was a large range in the number of days of abstinence from the reuptake inhibitors, but there was no significant relationship between days of abstinence and PTSD symptom severity at the primary endpoint. However, the small sample size and the different types of medications tapered may have obscured a relationship between abstinence duration and the treatment effect.
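To make the "five half-lives plus one week" washout rule concrete, the sketch below computes the required interval and the fraction of drug remaining. The half-life values are approximate literature figures supplied for illustration only, not values from the study protocols.

```python
# Washout interval per the protocol rule: five elimination half-lives of the
# drug (and its active metabolites) plus one week for stabilization.
def washout_days(half_life_days: float, n_half_lives: int = 5,
                 stabilization_days: int = 7) -> float:
    return n_half_lives * half_life_days + stabilization_days

def fraction_remaining(half_life_days: float, elapsed_days: float) -> float:
    # First-order elimination: the remaining fraction halves every half-life.
    return 0.5 ** (elapsed_days / half_life_days)

# Approximate elimination half-lives in days (illustrative literature values,
# not protocol figures).
drugs = {"sertraline": 1.1,
         "venlafaxine (with metabolite)": 0.5,
         "fluoxetine (with norfluoxetine)": 10.0}

for name, t_half in drugs.items():
    days = washout_days(t_half)
    left = fraction_remaining(t_half, days - 7)  # remaining at 5 half-lives
    print(f"{name}: washout >= {days:.0f} d, ~{left:.1%} remaining")
```

Under this rule, an agent with a long-lived active metabolite such as fluoxetine would imply a washout of roughly two months, longer than the mean observed abstinence of about 25 days in this sample, which illustrates why abstinence intervals may have been short for some agents.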
A greater maximum body temperature during MDMA sessions was associated with longer periods of abstinence, suggesting a larger pharmacological effect of MDMA. The short taper duration and minimal period of abstinence (average 25 days) may not have been sufficient for neurotransmitter systems to reach homeostatic equilibrium. MDMA-induced elevations of vital signs depend on enhanced monoamine release, which occurs through binding of MDMA to transporter proteins. The reduced peak systolic and diastolic blood pressure in the taper group is consistent with the hypothesized lower concentrations of extracellular monoamines after MDMA administration in this group. However, this is not concordant with the finding of no significant differences between groups for peak heart rate or body temperature. It has been demonstrated that pre-treatment with the SSRI citalopram reduced MDMA-induced increases in systolic and diastolic blood pressure and heart rate, but not body temperature (Liechti and Vollenweider 2001). MDMA-stimulated elevations in body temperature are partially dependent on norepinephrine and possibly serotonin (Liechti 2014), although the exact contribution of each transmitter remains unclear. Taken together, this evidence suggests that the reduced rise in blood pressure for the taper group in our sample may have resulted from blunted efflux of serotonin after MDMA administration.

The other possible explanation for the reduced response to MDMA-assisted psychotherapy in the taper group is that participants were experiencing withdrawal symptoms and discontinuation syndrome after cessation of medications. A greater number of individuals in the taper group discontinued anxiolytics and psychostimulants prior to the first experimental session, which may also have elicited negative effects. This could have influenced the results in one of two ways. If participants were having bothersome psychological and somatic symptoms after stopping reuptake inhibitors or other medications, they may not have been able to fully engage in the therapeutic processing of traumatic memories during MDMA sessions. Alternatively, some of the withdrawal symptoms could have overlapped with symptoms of PTSD or depression, and therefore influenced the results of the CAPS-IV or the BDI-II. However, baseline depression and PTSD severity scores were equivalent between the taper and non-taper groups, suggesting that withdrawal symptom severity was not responsible for the differences in outcome between groups. In addition, withdrawal symptoms would not be likely to cause a differential in blood pressure elevations. The placebo group showed a similar response to psychotherapy alone across the taper/non-taper groups, suggesting that discontinuation of reuptake inhibitors did not interfere with psychotherapeutic processing.

In a study of MDMA-assisted psychotherapy for social anxiety in autistic adults (Danforth et al. 2018), one participant failed to exhibit expected changes in vital signs and reported no changes in subjective effects during the blinded sessions. The co-therapy team and the participant both guessed with high certainty that placebo had been administered, but analysis of a plasma sample taken during the experimental session confirmed that MDMA had been ingested. This person had tapered off an SSRI at the time of study enrollment.
Other factors besides medication tapering could be involved, but it is worth noting that a lack of response to MDMA was observed in a different population under investigation.

Limitations

There are limitations that should be noted in interpreting the findings presented here. The sample sizes were small, with an unequal number of participants in each group (taper group n = 16 vs. non-taper group n = 34), and data were pooled across four similar studies at different sites. In the taper group, the number and duration of medications tapered varied between participants, and sample sizes were too small to determine how these factors affected treatment responses. In addition, the length of each medication taper was not available; therefore, we could not include this information in the analyses. The taper group also discontinued other psychiatric medications, which could have impacted outcomes. Given the number of medications they were on at study enrollment, it is possible that the taper group represented a more severe burden of PTSD that was not reflected in the outcome measures. Data from ongoing phase 3 trials will provide a larger sample to further characterize the effects of discontinuation of specific medications.

Conclusions

Participants who discontinued antidepressant medications classified as reuptake inhibitors had reduced positive outcomes from MDMA-assisted psychotherapy compared to participants who had not recently taken these medications. These preliminary findings have implications for clinical practice if MDMA-assisted psychotherapy becomes an FDA-approved treatment after phase 3 trials are completed. Adjustments to taper procedures, specifically allowing a significantly longer period for tapering completely off reuptake inhibitors prior to initiating MDMA sessions, could potentially increase the effectiveness of MDMA when used as an adjunct to therapy.

Acknowledgments

The authors thank the staff at MAPS and MAPS PBC, the therapists and individuals who participated in these trials, and the study site personnel.
Effect of attractant stimuli, starvation period and food availability on digestive enzymes in the redclaw crayfish Cherax quadricarinatus (Parastacidae)

ABSTRACT: Chemical stimuli in crayfish have been extensively studied, especially in the context of social interactions, but also to a lesser extent in relation to food recognition and the physiological response of digestive enzymes. This is particularly important in commercial species in order to optimize the food supplied. The first objective of this study was to determine whether incorporation of squid meal (SM) in food (base feed, BF) acts as an additional attractant for Cherax quadricarinatus and, if so, the concentration required for optimal effectiveness. Incorporation of SM was evaluated through individual and group behavioral tests. The second objective was to analyze the effect of food availability on behavior and level of digestive enzyme activity after short-term (48 h) and long-term (16 d) starvation periods. To assess the effect of either starvation period, 3 different treatments were conducted: no feed (control), available BF, and BF present but not available. Individual and group behavior showed no differences among treatments with different percentages of SM inclusion in BF. The time spent in chambers with different percentages of SM was similar in all treatments. Levels of amylase activity and soluble protein, as a function of food availability after a short- or long-term starvation period, were not altered. Digestive enzyme activity was not affected after 2 d of starvation in response to the treatment. However, change was observed in enzymatic profiles after juveniles were deprived of food for 16 d. The main responses were given by lipase, protease and trypsin activity. Based on previous studies and the present results, we propose a hypothesis for a possible regulation of the digestive and intracellular lipase activities depending on food availability.
Crustaceans exhibit relatively slow and intermittent feeding activity, and this has an impact on food acquisition and processing. These behavioral characteristics affect the physical properties of feed pellets, such as water stability (hydrostability), and as a consequence, water quality (Saoud et al. 2012). Inasmuch as food is a significant expense in aquaculture production systems, the need to maximize food consumption and reduce wasted food is fundamental for economic success (Lee & Meyers 1996).

Considering the importance of chemical signals during the development of crustaceans, it might be assumed that the incorporation of attractants into food would allow individuals to find potential food in a shorter period of time, increasing the possibility of ingestion (Mendoza et al. 1997). It has been demonstrated that squid meal acts as a stimulant, increasing food consumption in Homarus gammarus (Mackie & Shelton 1972), Penaeus stylirostris and P. setiferus (Fenucci et al. 1980), P. monodon (Smith et al. 2005), and Litopenaeus vannamei (Nunes et al. 2006). Similarly, shrimp protein hydrolysates stimulate feed consumption in C. quadricarinatus (Arredondo-Figueroa et al. 2013). There are few studies regarding the use of chemoattractant substances incorporated into the diets of cultured freshwater decapod crustaceans (Arredondo-Figueroa et al. 2013) and their effect on feeding responses (Tierney & Atema 1988, Lee & Meyers 1996, Kreider & Watts 1998).

Under natural conditions, where crayfish may feed ad libitum on foods appearing in various forms and compositions, differences in digestive processes are likely to occur (Kurmaly et al. 1990).
Crustaceans alternate between periods of feeding and non-feeding during their development as a result of sequential molting (Vega-Villasante et al. 1999). Molting involves several stages with different feeding behaviors, including the cessation of external food intake from late premolt through early postmolt; therefore, energy needs must be met either from available external food sources or from lipid reserves. Digestive enzymes are used as a physiological response to fasting (Cuzon et al. 1980, Jones & Obst 2000, Muhlia-Almazán & García-Carreño 2002, Rivera-Pérez & García-Carreño 2011, Calvo et al. 2013). Artificially induced fasting and starvation may allow elucidation of the metabolic routes used in hierarchical order during molting, and may initiate alternative biochemical and physiological adaptation mechanisms (Barclay et al. 1983, Comoglio et al. 2008). The midgut gland of crustaceans is the main organ for synthesis and secretion of digestive enzymes (including proteinase, lipase and carbohydrase) and for absorption and storage of nutrients (lipids and glycogen), which can be mobilized during non-feeding periods (Icely & Nott 1992, Ong & Johnston 2006). The level of digestive enzymes in decapod crustaceans does not remain constant during the molt cycle (van Wormhoudt 1974), as a result of both internal and external factors such as starvation and the availability, quantity and quality of food. In C. quadricarinatus, Loya-Javellana et al. (1995) demonstrated that crayfish are potentially capable of regulating their digestive processes according to food availability.

In the present study, we focused on factors affecting feeding in C. quadricarinatus. Our main hypothesis was that chemical signals from food affect digestive enzyme activity, and that this response is modulated by food availability and starvation periods. Our first objective was to determine whether squid additives make food more attractive to crayfish and, if so, what concentration of additives elicits maximum food searching behavior. The second objective was to analyze the effect of food availability on digestive enzyme activity after short- and long-term starvation periods. This information may be useful for understanding food searching behavior and for determining the modulating effect of food presence on digestive physiology, in order to design new diets and maximize food handling for the species.

Effect of squid attractant on juvenile ability to detect food

For the behavioral experiment, a 30 × 40 × 20 cm glass aquarium without water flow was designed (Fig. 1A), based on Jaime-Ceballos et al. (2007). The aquarium was divided into 3 similarly sized, parallel chambers: the middle chamber was used for acclimation, and the right and left compartments were used as 'attractant chambers'. The aquarium was placed inside a white box to minimize disturbance to crayfish behavior. Food containers (4.5 × 4.5 × 6 cm, Fig. 1B) consisted of an acrylic box surrounded by nylon mesh (1 mm mesh pore). There was a net tube (1.5 × 4.5 cm, diam. × length) inside the container to prevent small particles of food from falling out when the acrylic structure was moved by the animals.
The ingredient tested as a food attractant was squid meal (SM, Illex argentinus), and its inclusion in BF was analyzed. The protein concentrate extraction of SM was performed by the Soxhlet method, with isopropyl alcohol as the solvent. The protein residue was then oven-dried at 80°C for 24 h.

The ability to detect food was evaluated under 2 experimental conditions: individually (April 2012) and in groups (April 2013). Individual behavior was observed for 20 juveniles per treatment in the glass aquarium, except for the reference positive control (N = 10) (weight: 1.35−3.25 g; N = 110 overall); group behavior was observed with 4 juveniles (weight: 1.21 to 3.75 g) per experiment, with 5 replicates for each treatment (N = 60). The group behavior experiment was performed only for Treatments (1), (3) and (5), based on the results of the individual behavior experiments.

Test specimens were acclimated to BF for 1 wk prior to the assays, and behavioral experiments were always performed between 09:00 and 13:00 h in the presence of artificial light, in order to avoid any effects of circadian rhythms (Sacristán et al. 2013). All crayfish were starved for 48 h prior to behavioral evaluation, and all were at intermolt, since it has been suggested that the level of responsiveness varies from stage to stage of the molt cycle (Harpaz et al. 1987). Only test specimens with complete sensory appendages (i.e. antennae and antennules) were selected.

At the beginning of each assay, juveniles were maintained in the acclimation chamber for 10 min, as in Nunes et al. (2006). After each trial, the water was discarded completely, and the aquarium was washed with tap water and refilled with new filtered water. Water quality parameters were measured in order to avoid water quality effects on responses by test specimens to the chemoattractant (Lee & Meyers 1996). These parameters, i.e. dissolved oxygen (6 ± 1 mg l⁻¹), pH (7.7 ± 0.5), hardness (80 ± 10 mg l⁻¹ as CaCO₃ equivalents), and temperature (27 ± 1°C), were within the ranges recommended for aquaculture (Jones 1997, Boyd & Tucker 1998).

Behavioral response to the presence of the attractant was recorded visually by 1 observer positioned in front of the glass aquarium. The location of SM (i.e. left or right chamber) was chosen randomly for each behavior session. After acclimation, the glass doors of the chamber were opened and the following variables were evaluated: (1) first choice (SM or no SM) of the juveniles, and (2) residence time in each chamber over 10 min (a period established in a preliminary bioassay). The food amount (BF, SM+BF or TetraColor) offered in each trial was 5% of the mean body weight of all crayfish. The percentage of positive choice was calculated as: positive choice (%) = (total number of positive choices / total number of comparisons) × 100, as in Nunes et al. (2006). The percentage of residence time was calculated as: residence time (%) = (total time of positive residence / total assay time) × 100. A minimal sketch of these calculations appears below.
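The two behavioral indices defined above are simple proportions; the sketch below implements them directly, with hypothetical example counts (not values from the study).

```python
# Behavioral indices from the attraction assays.
def positive_choice_pct(positive_choices: int, total_comparisons: int) -> float:
    # positive choice (%) = positive choices / total comparisons * 100
    return 100.0 * positive_choices / total_comparisons

def residence_time_pct(positive_residence_s: float, assay_time_s: float) -> float:
    # residence time (%) = time in attractant chamber / total assay time * 100
    return 100.0 * positive_residence_s / assay_time_s

# Hypothetical example: 12 of 20 juveniles first chose the SM chamber, and one
# animal spent 240 s of a 600 s (10 min) assay in that chamber.
print(positive_choice_pct(12, 20))        # 60.0
print(residence_time_pct(240.0, 600.0))   # 40.0
```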
Effect of food availability on digestive enzyme activity

To evaluate the effect of food availability on digestive enzymes, 2 experiments were performed according to the length of the starvation period (short or long). In both experiments, the treatments were: (1) no BF (control), (2) available BF (ABF), and (3) BF present but not available (NABF). For each treatment, an 18 × 35 × 19 cm plastic aquarium was used; food was unprotected in the ABF treatment but was protected by a food container in the NABF treatment. Either the food or the food container was placed in the middle of the aquarium. In the ABF and NABF treatments, the amount of food offered was 5% of the juvenile's weight.

Expt 2: long-term starvation period

A total of 72 intermolt-phase crayfish (weight: 1.75−5.17 g) were selected and starved for 16 d in individual plastic containers (500 cm³) filled with 350 ml of dechlorinated water under continuous aeration. These containers were placed in 53 × 40 × 12 cm aquaria with water maintained at 27 ± 1°C. The number of starvation days was established in preliminary studies. During this period, the plastic containers were cleaned and the water was renewed 3 times a week (no molting organisms were observed during the experiments). Thereafter, the same procedure as in Expt 1 was performed, but the analysis times were 0, 30 and 120 min; at each time, 8 crayfish were anesthetized in cold water and the midgut gland was dissected.

Enzymatic preparation and activity assays

At the end of the short- and long-term starvation experiments, the midgut glands were dissected, weighed (± 0.1 mg) and immediately frozen at −80°C. Each midgut gland was homogenized in Tris-HCl buffer (50 mM, pH 7.5, 1:4 w/v) in an ice-water bath with a Potter homogenizer. After centrifugation at 10 000 × g for 30 min at 4°C (Fernández Gimenez et al. 2009), the lipid layer fraction was removed and the supernatant was stored at −80°C until used as the enzyme extract for the enzymatic analyses. The absorbance in enzymatic assays was read on a JASCO CRT-400 spectrophotometer.

The amount of total soluble protein was evaluated with the Coomassie blue dye method according to Bradford (1976), using bovine serum albumin as the standard. Total proteinase activity was assayed using 1% azocasein as the substrate in 50 mM Tris-HCl, pH 7.5 (García-Carreño 1992). One proteinase unit was defined as the amount of enzyme required to increase absorbance by 0.01 optical density (OD) units min⁻¹ at 440 nm (López-López et al. 2005). Lipase activity of each enzyme extract was determined according to Versaw et al. (1989). The assay mixture consisted of 100 µl of 100 mM sodium taurocholate, 1900 µl of Tris-HCl buffer (50 mM, pH 7.5) and 20 µl of enzyme extract. After pre-incubation (25°C for 5 min), 20 µl of β-naphthyl caprylate substrate (Goldbio N-100), 200 mM in dimethyl sulfoxide (DMSO), was added. The mixture was incubated at 25°C for 30 min. Then 20 µl Fast Blue BB (100 mM in DMSO) was added. The reaction was stopped with 200 µl of trichloroacetic acid (TCA, 0.72 N) and clarified with 2.76 ml of ethyl acetate:ethanol (1:1 v/v). Absorbance was recorded at 540 nm. One lipase unit was defined as the amount of enzyme required to increase absorbance by 0.01 OD units min⁻¹ at 540 nm (López-López et al. 2005).

Amylase activity of each extract was determined according to Vega-Villasante et al. (1993).
The assay mixture consisted of 500 µl Tris-HCl (50 mM, pH 7.5) and 5 µl enzyme extract; 500 µl starch solution (1% in Tris-HCl, 50 mM, pH 7.5) was added to start the reaction. The mixture was incubated at room temperature for 10 min. Amylase activity was determined by measuring the production of reducing sugars resulting from starch hydrolysis as follows: immediately after incubation, 200 µl of sodium carbonate (2 N) and 1.5 ml of DNS reagent were added to the reaction mixture, and the mixture was boiled for 15 min in a water bath. The volume was adjusted to 10 ml with distilled water, and the colored solution was read at 550 nm. Reference tubes were prepared similarly, but crude extract was added after the DNS reagent. One amylase unit was defined as the amount of enzyme required to increase absorbance by 0.01 OD units min⁻¹ at 550 nm (López-López et al. 2005).

Trypsin activity was assayed according to Erlanger et al. (1961). The substrate solution was prepared using 100 mM benzoyl Arg-p-nitroanilide (BAPNA) dissolved in 1 ml of DMSO and brought to a volume of 100 ml with 50 mM Tris-HCl, pH 8.2, containing 10 mM CaCl₂. Activity was measured by mixing 80 µl of enzyme extract with 1.25 ml of substrate solution; the mixture was then incubated for 20 min at 37°C. Subsequently, 0.25 ml of acetic acid was added, and the hydrolysis of BAPNA was determined by measurement of free p-nitroaniline at 410 nm. Trypsin activity was measured at 0, 30, and 120 min for Expts 1 and 2.

Statistical analysis

The positive choice and residence time data derived from paired comparisons of feeding behaviors were tested using the chi-squared test of independence (Zar 1999) and 1-way ANOVA (Sokal & Rohlf 1995), respectively. Digestive enzyme data from the short- and long-term starvation experiments were analyzed using generalized linear mixed models (GLMMs) with the statistical program R and associated GLMM packages (Zuur et al. 2009), with treatment (control, ABF and NABF) and time as fixed factors. The significance level was set at α = 0.05.

Effect of chemoattractant on juvenile response

The results of individual and group crayfish behaviors are shown in Table 2. For individual crayfish response, no significant differences were found among treatments with different percentages of SM included in the BF. Residence times in the chambers with different percentages of SM were similar in all treatments (p = 0.22). Group behavior showed that the percentage of positive choice was the same for all treatments (p > 0.05). Additionally, the crayfish did not preferentially stay in the chamber with the attractant (p = 0.91).

Expt 1: short-term starvation period

The results for specific enzyme activity of amylase, lipase, protease and trypsin, and the soluble protein level, in the short-term starvation experiment are presented in Fig. 2. The digestive enzyme profiles and soluble protein from midgut gland extracts showed a similar pattern among treatments. Specifically, crayfish from the NABF treatment had significantly lower levels (p < 0.05) of amylase activity at 5 and 120 min (5.24 and 5.14 U mg protein⁻¹, respectively) than the control and ABF groups (Fig. 2A). No significant difference was found between ABF and the control group (p > 0.05). Lipase activity of crayfish was not significantly affected (p = 0.19) by the treatments over the 120 min period of the experiment (Fig. 2B).
Protease activity in the midgut gland of juveniles in the NABF treatment was significantly lower (1.02 U mg protein⁻¹) than in the control and ABF treatments at 5 min (p < 0.05) (Fig. 2C). Moreover, the crayfish in the ABF group differed significantly from the control (p < 0.05) only at 30 min. Trypsin activity showed significant differences (p < 0.05) among control, ABF and NABF at 120 min (Fig. 2D); furthermore, the soluble protein level of the crayfish was not significantly affected (p = 0.47) by the treatments over 120 min (Fig. 2E).

Expt 2: long-term starvation period

The effect of food availability on the digestive enzyme activity of crayfish after long-term starvation is shown in Fig. 3. There was no significant difference in amylase activity (p = 0.37) among treatments over the 120 min observation period (Fig. 3A). Lipase activity in ABF was significantly lower (p < 0.05) than in NABF at 30 and 120 min (61.52 and 48.31 U mg protein⁻¹, respectively) (Fig. 3B). There were significant differences in protease activity (p < 0.05) among NABF, control and ABF at the initial time (Fig. 3C). At 30 min, protease activity in NABF (1.15 U mg protein⁻¹) and ABF (1.18 U mg protein⁻¹) was significantly (p < 0.05) lower than in the control group. The ABF value decreased significantly (p < 0.05) to 0.70 U mg protein⁻¹ at 120 min. Trypsin activity in ABF showed significant differences (p < 0.05) compared to the control and NABF treatments at the end of the experiment (Fig. 3D), but the levels of soluble protein in crayfish starved for 16 d did not show significant differences (p = 0.13) among treatments over the time assayed (Fig. 3E). Similar results were found when total enzyme activity (U mg midgut gland⁻¹) was calculated (data not shown).

DISCUSSION

Although crayfish have polytrophic feeding habits (Saoud & Ghanawi 2013), this study showed that squid protein extract in the tested concentration range did not increase the attractiveness of feed to Cherax quadricarinatus. This result disagrees with other studies on Pleoticus muelleri, Homarus gammarus, Litopenaeus vannamei, Penaeus monodon, P. setiferus and P. stylirostris (Mackie & Shelton 1972, Fenucci et al. 1980, Díaz et al. 1999, Smith et al. 2005, Nunes et al. 2006), even though the SM and the method for incorporating it into the experimental feed were the same across all studies. However, those studies were performed on marine crustaceans (lobsters and shrimp), whereas this study is the first to test the effectiveness of SM as an attractant for a freshwater decapod. The experiment on food detection shows that crayfish wander around the aquarium regardless of where food is localized. Subsequently, when the animal is sufficiently close to or in contact with potential food, its chemoreceptors play a fundamental role in acceptance or rejection of the potential food (Heinen 1980). Thus, given the feeding habits of C. quadricarinatus and our results, we propose the hypothesis that C. quadricarinatus
mainly finds food due to the time it invests in wandering through its environment. Given that our food searching behavior experiments were carried out in stagnant water, and that odor plumes, which carry chemical signals, are strongly affected by flow dynamics (Weissburg 2011), crayfish food detection should also be studied using flowing water. Additionally, food searching behavior could be analyzed in a Y-maze apparatus to enhance the sensitivity of the experimental setup. Mackie (1973) determined that the squid-soluble extract is rich in proline, glycine, alanine and arginine. According to Heinen (1980), glycine and arginine act as attractants in decapod crustaceans. Tierney & Atema (1988) found that cellobiose and sucrose are important feeding stimulants for Orconectes rusticus; the amino acids glycine and glutamate elicited feeding movements in O. rusticus and O. virilis crayfish. Amino acids are abundant in the tissues of marine organisms, and they probably guide predators and scavengers to food. Amino acids may likewise signal food availability to crayfish (Tierney & Atema 1988). Further research is needed to determine whether complex ingredients such as krill meal, or simple components like amino acids, can act as signals for food detection in C. quadricarinatus.

Under natural conditions, where crayfish may feed ad libitum on food of various forms and compositions, differences in digestive processes are likely to occur (Kurmaly et al. 1990). In a study of foregut evacuation, return of appetite and gastric fluid in C. quadricarinatus, Loya-Javellana et al. (1995) demonstrated that ingested food was evacuated linearly with time in crayfish fed daily, and in a somewhat curvilinear trend in those fed every 2 d. This would indicate that crayfish are potentially capable of regulating their digestive processes according to food availability (Loya-Javellana et al. 1995).

Our results on starvation period and food availability demonstrated that crayfish starved for 48 h had higher digestive enzyme specific activity than crayfish starved for 16 d. Relative to the highest activity level of amylase, lipase, protease and trypsin recorded during the analysis period of both starvation treatments, we found 4.95, 3.70, 0.29 and 1.12 times, respectively, more activity in crayfish starved in the short-term experiment than in crayfish starved for the long term. However, in terms of total activity, the differences were smaller, with similar total protease and trypsin activity, and only 60% higher lipase and amylase activity. Therefore, we can conclude that digestive enzyme activity is not affected after 2 d of starvation in response to the treatment. However, different enzymatic profiles were observed in C. quadricarinatus juveniles deprived of food for 16 d. The main responses occurred in lipase, protease and trypsin activity, which were higher in the control and NABF groups; however, this may have been due to the protein provided by the food (in the case of specific activity), or to the additional weight that ingested food adds to the tissue (midgut gland) (in the case of total activity), in both cases lowering the calculated enzyme activity.

The levels of amylase activity and soluble protein as a function of food availability after short- or long-term starvation were not altered. Calvo et al. (2013), analyzing C. quadricarinatus
juveniles (1 g), observed that starvation had no effect on amylase activity, although there was a marked tendency for activity to decrease after 50 d of starvation and to increase after 40 d of re-feeding. Our results contrast with those of Clifford & Brick (1983), who found that fasting Macrobrachium rosenbergii use energy from the oxidation of carbohydrates.

Our research demonstrates that C. quadricarinatus juveniles respond differently to food availability after a long-term starvation period (16 d). These results agree with Calvo et al. (2013), who observed low levels of lipase activity after a 50 d starvation period, suggesting that lipase is not synthesized when food is not available. In the same species, Yudkovski et al. (2007) demonstrated that lipase transcripts decrease in the hepatopancreas during non-feeding stages.

Studying the effect of starvation on the expression of lipase transcripts in Litopenaeus vannamei, Rivera-Pérez & García-Carreño (2011) showed that 2 types of lipase exist: a digestive lipase and an intracellular (lysosomal) lipase. The digestive lipase is found only in the digestive gland and is negatively regulated during fasting by the absence of food, whereas the intracellular lipase is expressed in various tissues (digestive gland, uropods, pleopods, digestive tube, gills, hemocytes, muscle and gonad) and is positively regulated during starvation, suggesting that it is responsible for lipid mobilization from lipid depots (energy reserves).

Based on these previous studies and our present results, we propose a possible regulation pathway for digestive lipase activity, summarized in Fig. 4. We hypothesize that the detection of food promotes de novo synthesis of digestive lipase (ABF and NABF treatments). Subsequently, manipulation, ingestion, stomach food content and nutritive molecules in the digestive gland stimulate digestive lipase secretion into the digestive gland ducts and stomach, where it interacts with food and carries out degradation in both sections of the digestive tract (ABF). Therefore, the digestive lipase present in the stomach, intestine and digestive gland is acting on the food and cannot be fully quantified when lipase activity is measured solely in the digestive gland (Fig. 4A). This would agree with the fact that digestive lipase activity does not increase when food is available (ABF). Therefore, the presence of food inside the stomach, and the subsequent products of stomach digestion in the digestive gland and intestine, would stimulate further degradation of the food. However, detection of food inhibits the intracellular lipase synthesis pathway, and thus stored lipids are not used as an energy source.

When there is no food in the environment for a long period of time, intracellular lipase de novo synthesis is likely to be stimulated and, as a consequence, lipid stores are mobilized. The pathway of digestive lipase synthesis is inhibited and digestive lipase remains at basal levels (Fig. 4B).
In the present study, this assumption is supported by our observation of the low level of digestive lipase activity recorded in the control group after 16 d of fasting. However, when food is present but not available (only possible under experimental conditions), digestive lipase is synthesized and stored inside digestive gland cells and is not secreted. This could be because there are no handling stimuli, no ingestion, no food content in the stomach, and no products of stomach digestion in the digestive gland. However, because the synthesis of intracellular lipase is likely to be inhibited in the short term, there may or may not be a mechanism to counteract this experimental effect (Fig. 4C); i.e., when C. quadricarinatus detects the presence of food, intracellular lipase synthesis is not activated, restricting the mobilization of lipid reserves, while the synthesis of extracellular digestive lipase is stimulated regardless of the presence or absence of food in the stomach and digestive gland. This assumption is supported by our observation of increased digestive lipase activity in the NABF treatment of this study.

Protease and trypsin activities reflected a similar response to food availability after 16 d of fasting. This result supports the concept that trypsin (together with chymotrypsin) is one of the main proteases of the digestive gland in decapod crustaceans, and it is believed to be responsible for 40 to 60% of total protein digestion in penaeids (Galgani et al. 1984, Tsai et al. 1986).

Our results show that the presence of available food stimulates trypsin secretion at 120 min after long-term starvation, but not after short-term starvation. It is possible that during short fasts, digestive capacity does not decrease significantly because of previously ingested food. The differential response of trypsin secretion under different fasting periods is related to the findings of Muhlia-Almazán & García-Carreño (2002), who reported that trypsin activity in the hepatopancreas of L. vannamei was diminished in response to fasting. Cuzon et al. (1980) demonstrated that trypsin activity in shrimp decreases during starvation. In turn, C. quadricarinatus juveniles exposed to non-accessible food after 16 d of fasting showed an increase in trypsin activity relative to the ABF group, which would indicate that trypsin might be stored inside digestive gland cells until food enters the digestive system. This is also related to the results reported by Hernández-Cortés et al. (1999), who demonstrated the presence of trypsinogen in the digestive gland of the crayfish Pacifastacus leniusculus. Furthermore, Sainz et al. (2004), in their study of trypsin synthesis and storage as zymogen in fed and fasted individuals of L. vannamei, revealed that trypsinogen is not totally secreted from a single cell, but rather appears to be secreted partially as an effect of ingestion. It must be considered that trypsinogen can be spontaneously reactivated during the preparation of enzyme extracts and therefore be quantified as active enzyme, such that it might not be distinguishable from enzyme that is activated and secreted for food digestion by natural causes (Sánchez-Paz et al. 2003). Therefore, more studies in C. quadricarinatus
are needed to clarify this issue. More studies are also needed regarding possible changes in messenger expression, as well as immunohistochemical studies of digestive tract cells, as a reflection of physiological changes in digestive enzyme secretion in this species. We observed a differential response (in terms of reaction time) in lipase, protease and trypsin activity when food became available again after prolonged starvation: lipase responded rapidly (after only 30 min) to the presence of food, whereas protease and trypsin levels responded only after 120 min. Therefore, these enzymes could be used as a tool to analyze the nutritional status of C. quadricarinatus.

During periods of starvation, crustaceans must use their energy reserves to meet their needs, and so enzymatic activity must be finely regulated to degrade the necessary energy reserves while preserving cell integrity as much as possible (Sánchez-Paz et al. 2006). Hence, changes in food intake during development may have important consequences for life history (Brzek & Konarzewski 2001). Although the immediate ecological consequences of poor and sporadic nutrition are sometimes difficult to identify, the reproductive potential of any organism experiencing such conditions may be reduced (Sánchez-Paz et al. 2006).

Acknowledgements. This study was part of H.J.S.'s postgraduate scholarships (ANPCYT and CONICET) and PhD dissertation (University of Buenos Aires, Argentina). We are grateful to Dr. Raymond Bauer, University of Louisiana, Lafayette, LA, USA, and to the reviewers for their comments to improve the manuscript, to Lic. Amir Dyzenchauz for language revision, to Centro Nacional de Desarrollo Acuícola (CENADAC, Argentina) for providing the reproductive stock, and to Dr. Gerado Cueto for his help with the statistical analysis. L.S.L.G. is grateful to Agencia Nacional de Promoción Científica y Tecnológica (PICT 2007, project 01187, and 2012, project 01333) and CONICET (PIP 2009-2011, number 129, and PIP 2012-2014).

KEY WORDS: Chemical stimuli · Crustaceans · Digestive enzyme · Food searching behavior · Food attractants · Starvation
Radiative Transfer Modelling of the Accretion Flow onto a Star-Forming Core in W51

We present an analysis of the temperature, density, and velocity of the molecular gas in the star-forming core around W51 e2. A previous paper (Ho and Young 1996) describes the kinematic evidence which implies that the core around e2 is contracting onto a young massive star. The current paper presents a technique for modelling the three-dimensional structure of the core by simulating spectral line images of the source and comparing those images to observed data. The primary conclusions of this work are that the molecular gas in e2 is radially contracting at about 5 km/s and that the temperature and density of the gas decrease outward over 0.15 pc scales. The simple model of the collapse of the singular isothermal sphere for low-mass star formation (Shu 1977) is an inadequate description of this high-mass molecular core; better models have temperature decreasing outward as r^-0.6, density as r^-2, and velocity increasing as r^+0.1. The core appears to be spherical rather than disk-like at the scale of these observations, 0.3 pc. In this paper we show how a series of models of gradually increasing complexity can be used to investigate the sensitivity of the model to its parameters. Major sources of uncertainty for this method and this dataset are the interdependence of temperature and density and the assumed NH3 abundance.

Introduction

In recent years, radio interferometric telescopes have provided a wealth of data on the internal structure and dynamics of molecular clouds and star forming regions on scales of 1 pc and smaller. For example, observations of neutral gas have shown the presence of infall and spin-up motions, rotating disks, outflows, and expanding molecular shells at later stages in the formation of stars (Ho & Haschick 1986; Keto, Ho, & Haschick 1987, 1988; Torrelles et al. 1989; Sargent & Beckwith 1991; Carral & Welch 1992; Torrelles et al. 1993; Kawabe et al. 1993; and Ho & Young 1996, hereafter Paper I). The details of the collapse process are vital to understanding how stars can be formed: how angular momentum and magnetic flux are transported out, and how simultaneous accretion and outflow determine the mass of the resultant star.

Theoretical models have mostly focused on the formation of low-mass stars. For example, Shu (1977) analyzed the gravitational collapse of non-rotating, non-magnetic, isothermal gas spheres. He proposed an "inside-out" collapse in which the contracting portions of the cloud should develop an r^-1.5 density profile and an r^-0.5 radial infall velocity. More recently, Mouschovias and others considered the effects of magnetic fields; they found that under certain common conditions the density will vary as r^-1.5 to r^-2 over scales of 10^-3 pc to a few tenths of a parsec (e.g. Basu & Mouschovias 1994). Scoville & Kwan (1976) studied the temperature distribution in a centrally condensed cloud heated by a source of radiation such as an HII region. Assuming thermal equilibrium between the radiation and the dust, and between the dust and the gas, they calculate that the temperature of the gas and dust should decrease with radius as r^-0.3 or r^-0.4. They expect to find temperatures of 50-70 K at a distance of 0.07 pc from an object of luminosity 10^5 L⊙, given typical gas and dust conditions.
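As a rough numerical illustration of these theoretical scalings (a sketch only: the temperature normalization is the example value quoted from Scoville & Kwan, while the density and velocity normalizations are assumed for illustration and are not results of this paper):

```python
# Evaluate illustrative power-law profiles for a collapsing core.
# T normalization: ~60 K at 0.07 pc (Scoville & Kwan example above);
# density (r^-1.5) and infall (r^-0.5) slopes follow Shu (1977).
import numpy as np

r0 = 0.07  # pc, reference radius

def profile(x0, r, r_ref, slope):
    """Power law: x(r) = x0 * (r / r_ref)**slope."""
    return x0 * (r / r_ref) ** slope

radii = np.array([0.02, 0.07, 0.15])   # pc
T = profile(60.0, radii, r0, -0.35)    # K; text quotes slopes of -0.3 to -0.4
n = profile(1e6, radii, r0, -1.5)      # cm^-3; normalization assumed
v = profile(3.0, radii, r0, -0.5)      # km/s; normalization assumed

for r, t, dens, vel in zip(radii, T, n, v):
    print(f"r = {r:.2f} pc: T = {t:5.1f} K, n = {dens:,.0f} cm^-3, v = {vel:.1f} km/s")
```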
The present paper is part of a study to observationally measure some basic properties of high-mass star formation and to determine whether the results mentioned above, some of which are intended for low-mass stars, also describe high-mass star formation. Paper I presents observations of the NH3 (J,K) = (2,2) 24 GHz inversion transitions in the star forming region W51. Those observations detected an accretion flow of a few km s^-1, extending over about 0.3 pc in diameter, onto a young star. The star must be massive because it has created an ultracompact HII region, called W51 e2, which is embedded in the molecular core. The current paper takes a numerical modelling approach to finding the structure of the star forming core in W51. Paper I's observations of W51 e2 are compared with theoretical spectra that would be observed from a model with a specified temperature, density, and velocity structure. We begin with the simplest model with the fewest parameters, that of a uniform density isothermal sphere, and show how the addition of an infall velocity and of temperature and density gradients improves the fit to the data. This approach constrains the physical conditions of the molecular gas in the core; it also has the important advantage of revealing how tightly the present data and models can constrain those conditions.

Background

W51 is an active region of high mass star formation, as shown by its large luminosity (3×10^6 L⊙, Thronson & Harper 1979), IR objects (Genzel et al. 1982; Bally et al. 1987), H2O masers, and shocked H2 emission (Beckwith & Zuckerman 1982). Genzel et al. (1981) used the method of statistical parallax of H2O masers around the HII region W51 e2 (Scott 1978) to determine a distance of 7.0±1.5 kpc, and similar distances have been found for the other objects in the W51 complex. This paper concerns the ultracompact HII region W51 e2 and the molecular core surrounding it. Figure 1 shows the 1.3 cm continuum emission in W51. Scott (1978) noted that the flux from the HII region in e2 could be accounted for by the presence of one ZAMS star of spectral type B0-O9. HII regions e1 and e2 are also surrounded by condensations of warm NH3, detected in both emission and absorption by Ho, Genzel, & Das (1983). Though outflows are seen in other parts of the W51 complex, there is little evidence of outflow activity near the e2 core on the arcsecond scales of interest here (Mehringer 1994; Zhang & Ho 1997).

Data

The observations for Paper I and the data reduction techniques used are described in detail in that paper. Briefly, we used the NRAO Very Large Array (VLA) to observe the NH3 (J,K) = (1,1) and (2,2) inversion transitions at 23.69 GHz and 23.72 GHz. The bandwidth of 6.25 MHz covers the main quadrupole hyperfine component and all four satellite components with 64 velocity channels. The velocity resolution of these data is 1.24 km s^-1, and the bandpass was centered on a velocity of +60 km s^-1 with respect to the local standard of rest (LSR). The images studied in this paper were made with a spatial resolution (FWHM) of 2.6″, or 0.09 pc at a distance of 7.0 kpc. The rms noise level in the line-free channels is 6 mJy/beam = 1.9 K.

Prediction of geometry in the core e2

Figure 2 presents a position-velocity diagram made along a slice through the e2 molecular core; the models of this paper attempt to reproduce the features shown in this diagram.
(Paper I presents many additional figures showing the distribution and velocity structure of the molecular gas in and around W51 e2.) A good model of the molecular gas in the e2 core should reproduce the following features, which are visible in Figure 2 and in the figures of Paper I.

1. The five hyperfine components of the transition are visible in both emission and absorption because the absorption is consistently redshifted with respect to the emission. The absorption corresponds to cool (relative to the HII region) molecular gas in front of the HII region, whereas molecular gas to the side of or behind the HII region is seen in emission.

2. The central hyperfine component of the emission line in e2 shows a curvy "C" shape. Emission east and west of the HII region (upper and lower edges of the panel; see also Paper I) is at a velocity of about 55 km s^-1. Towards the center of the panel (the center of the HII region and molecular core) the emission line becomes more blueshifted. The line center and systemic velocity of the core appear to be at 54-55 km s^-1, about halfway between the emission and absorption, at 50 and 60 km s^-1 respectively.

3. The "C" shape appears in any position-velocity diagram made through the core e2, regardless of orientation. Thus, position-velocity diagrams through e2 show approximate circular symmetry on the sky.

4. The position-velocity diagrams that just miss the HII region (see Paper I) do not show absorption or a curved C-shaped emission line, but they do show that the lines are wider at the position of the HII region than, for example, east or west of it. In other words, this feature is an increase in line width at small spatial scales.

5. Optical depths in the (2,2) transition in e2 are high; the ratio of the central hyperfine emission component to the satellites is 2.5:1 to 3:1 in the core (Paper I), implying emission optical depths of 7-10 (Ho & Townes 1983). In absorption, all five hyperfine components have approximately the same strength, implying very high optical depth.

6. In e2 the peak of emission and the peak of absorption are not spatially coincident, as might be expected for a perfectly spherically symmetric core. Instead the emission and absorption peaks are offset by 3″, or 0.1 pc, suggesting that the HII region is off-center with respect to the molecular gas or that the properties of the gas are not spherically symmetric. This offset is confirmed by higher resolution observations (Zhang & Ho 1997).

Paper I proposes a simple explanation, summarized here, that accounts for these observed features. The molecular core e2, about 0.13 pc in radius, is a roughly spherical cloud of gas contracting onto a young massive star and HII region near the center of the sphere. Thus the front of the cloud is moving away from us and the back is moving toward us, as required by the redshifted absorption and blueshifted emission; the "C" shape is a projection effect. The assumption of a roughly spherical contraction explains the approximate circular symmetry in the plane of the sky (point 3 above). Paper I discusses this interpretation in more detail.

Procedure

We investigate the structure of the molecular core around e2 by radiative transfer modelling of the NH3 emission. Because the data show approximate circular symmetry in the plane of the sky (Section 2.2), we modelled only one two-dimensional slice, or position-velocity diagram, through the e2 core.
Procedure

We investigate the structure of the molecular core around e2 by radiative transfer modelling of the NH3 emission. Because the data show approximate circular symmetry in the plane of the sky (Section 2.2), we modelled only one two-dimensional slice, or position-velocity diagram, through the e2 core. Figure 2 is the position-velocity diagram selected for modelling; it is the (J,K) = (2,2) transition, and passes through the center of the HII region. Figure 2 shows many of the features described in Section 2.2. The data in Figure 2 were spatially subsampled by taking five pixels separated by the FWHM of the beam, and were trimmed in velocity by selecting the central 50 of the observed 64 velocity channels. The result of the subsampling is shown in the top left panel of Figure 3; it is made up of 250 independent data points.

The radiative transfer code used in our spectral line modelling is described briefly in Keto (1990). Based on Paper I's discussion of physical conditions in e2, we determined an initial guess of the structure of the core: its temperature, density, and velocity field. For simplicity, the physical parameters are described as power-law parametrizations in radius. Level populations of NH3 are determined using the assumption of local thermodynamic equilibrium (LTE), and the line brightness is computed by integrating the radiative transfer equation along the line of sight. The calculated line radiation is convolved to the resolution of the observed data and converted to the same physical units as the observed data. Thus the models in Figures 3 and 4 are sampled at the same spatial frequency as the data in the top left panel of Figure 3, and their intensities are directly comparable.

In addition, we have added for this project a least squares fitting procedure to optimize the modelled physical conditions. The multidimensional least-squares fit is done using a downhill simplex algorithm (Press et al. 1993). This algorithm is a derivative-free local minimization procedure, which reaches a local minimum but not necessarily a global minimum. The fitting routine imposes no constraints on "reasonable" or "acceptable" physical conditions or power-law slopes aside from the requirements that gas temperatures exceed 3 K and densities exceed 10 cm^-3. Energetic and dynamic consistency of the models is considered in Section 5.1.

The radiative transfer/model fitting code also performs a simple error analysis. It estimates the error in each parameter from the second derivative of χ² with respect to that parameter, using the values of χ² at the optimized value of the parameter and at two symmetrically offset values (Bevington 1969). This procedure for estimating the error assumes that the parameters are uncorrelated and that the model depends linearly on them. Neither of these conditions is true; however, comparison with a limited Monte Carlo analysis indicates that our derived errors are at least of the correct magnitude.
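As an illustration of this fitting strategy, the sketch below fits a toy one-dimensional spectrum with the derivative-free downhill simplex (Nelder-Mead) method and then estimates 1σ parameter errors from the curvature of χ² about the minimum, as in Bevington (1969). The Gaussian toy model and synthetic data are stand-ins of our own devising, not the radiative transfer code of Keto (1990).

```python
# Toy sketch of the optimization and error analysis described above; the
# Gaussian "model" is an illustrative stand-in for the radiative transfer code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
v = np.linspace(40.0, 70.0, 50)          # velocity axis (km/s)
noise = 1.9                              # per-channel rms (K)
data = 30.0 * np.exp(-0.5 * ((v - 55.0) / 2.0) ** 2) + rng.normal(0, noise, v.size)

def model(p):                            # p = (amplitude, center, width)
    return p[0] * np.exp(-0.5 * ((v - p[1]) / p[2]) ** 2)

def chi2(p):
    return np.sum(((data - model(p)) / noise) ** 2)

best = minimize(chi2, x0=[20.0, 54.0, 3.0], method="Nelder-Mead").x

def sigma(i, delta=0.1):
    """1-sigma error from the chi^2 curvature: sigma = sqrt(2 / d2chi2/dp^2)."""
    lo, hi = best.copy(), best.copy()
    lo[i] -= delta
    hi[i] += delta
    curv = (chi2(hi) - 2.0 * chi2(best) + chi2(lo)) / delta ** 2
    return np.sqrt(2.0 / curv)

for i, name in enumerate(("amplitude", "center", "width")):
    print(f"{name}: {best[i]:.2f} +/- {sigma(i):.2f}")
```

The finite-difference curvature corresponds to the Δχ² = 1 criterion for a single parameter, which is why the error estimate is only valid when the parameters are uncorrelated and the model is locally linear in them.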
Subsequent sections present the results of the data fitting for the molecular core e2, employing a series of models of gradually increasing complexity. For example, the first model is a spherical cloud of molecular gas of constant temperature and density with an HII region inside. In successive models, parameters allowing for infall velocity and for gradients in the temperature, density, and velocity are added. There is no evidence for rotation in e2 on the scales of interest here (Paper I), so the models do not include rotation. (Zhang & Ho [1997] found evidence of spin-up in e2, but only at radii less than 0.2″, much smaller than the 1.3″ resolution used for the current study.)

The technique of gradually increasing the complexity of the models proves extremely valuable because comparisons between the models reveal (1) which parameters are important and which are not, and (2) how well determined the physical conditions in the core are.

Model 1: quiescent cloud

The first model consists of an HII region surrounded by a spherical cloud of molecular gas with uniform density and temperature, and no infall velocity. A turbulent line width (FWHM) of 1.25 km s^-1 in the molecular gas is assumed, based on the observed line width in an optically thin envelope of gas surrounding e2 and e1 (Paper I). We also assume a fractional abundance NH3/H2 = 1.4×10^-6 in order to translate from the NH3 density, which is constrained by the data, to the H2 densities quoted in this paper. This NH3 abundance is based on modelling of a similar high-mass star formation region, G10.6-0.4 (Keto, Ho, & Haschick 1988). Abundances around 10^-6 are also estimated for the NH3 near G9.62+0.19 and G29.96-0.02 (Cesaroni et al. 1994). The turbulent line width and NH3 abundance remain fixed for all models.

Model 1 has six free parameters: the systemic velocity of the HII region, the continuum opacity of the HII region, the radius of the HII region (taken to be the inner radius of the molecular gas), the outer radius of the molecular cloud, and the temperature and density of the gas in the cloud. This model provides a null hypothesis for comparison to models with contracting molecular gas. Table 1 gives the initial guesses of the parameters describing the gas in this and subsequent models; Table 2 gives the optimized parameters and reduced χ² (per degree of freedom) for all models. The initial guesses are presented because the more complicated models sometimes give substantially different output from different sets of initial conditions; this issue is discussed more fully in Section 4.4. Table 3 gives maximum and minimum values for the gas temperature, density, and velocity.

Figure 3 presents the best fit position-velocity diagram for model 1 and subsequent models. The panel labelled "1" should be compared to the data in the top left panel of the same figure. The most obvious problem with this model is that, because infall (or outflow) is not allowed, emission and absorption are constrained to have the same line width and the same radial velocity. In this fit, the radius and continuum opacity of the HII region are consistent with zero: 0.003±0.06 pc and (0.02±3)×10^-19 cm^-1. The fitted optical depth of the HII region is only 3×10^-5, so low that absorption is not seen. The value of the reduced χ² for this model is 11.2, which is little better than the χ² of blank sky (12.2).

Model 2: contracting cloud with uniform temperature and density

The second model incorporates all the features of the first model, namely an HII region surrounded by a spherical cloud of gas with uniform temperature and density, and adds a radial infall velocity of the form v = v_0 (r/r_0)^α_v. Model 2 fits eight free parameters: v_0, α_v, and the six parameters of model 1. Again, Figure 3 and Table 2 present the results of this optimization. Clearly, the addition of infall velocity improves the model immensely. The value of reduced χ² drops by almost a factor of three, to 3.8. The fitted continuum opacity of model 2 is 1.4×10^-19 cm^-1, producing a continuum optical depth of 0.02 for the HII region.
(Subsequent models have very similar continuum optical depths. However, we caution that these continuum parameters are poorly constrained because the HII region is not resolved by these observations; see Gaume, Johnston, & Wilson 1993.) In contrast to model 1, the gas in front of the HII region is now redshifted and is seen in absorption. The pattern of redshifted absorption and blueshifted emission, so obvious in the data, is now reproduced by the model as well.

Model 3: Shu (1977)

The simple analytic solution of Shu (1977) for the properties of a collapsing molecular core can be directly tested using our observations of W51 e2. Since that analytic solution was developed specifically for the case of low-mass star formation, the relevance of the solution for the e2 core is not obvious. However, the Shu (1977) solution is included as model 3 because comparisons between model 3 and models 2, 4a, and 4b help disentangle the importance of the various parameters. Model 3 is similar to model 2 except for the addition of a radial gradient in density, fixed as n ∝ r^-1.5. In addition, the infall velocity is fixed at v ∝ r^-0.5, and there is no gradient in temperature. Thus, model 3 has seven free parameters: the eight of model 2, minus the slope describing the gradient in infall velocity.

The minimized value of reduced χ² for model 3 is greater than that for model 2 by 0.3. As for model 2, also an isothermal model, the emission main/satellite hyperfine intensity ratio is too low and the emission is not strong enough; both shortcomings would be rectified by the addition of some hotter and optically thinner molecular gas. Thus Shu's low-mass stellar collapse model is not an adequate description of the collapse of the high-mass e2 molecular core. An obvious reason is the increased importance of central heating in the high-mass case. Subsequent models return to fitting the density, velocity, and temperature slopes as free parameters.

Model 4: gradients in temperature and density

Model 4 incorporates the features of model 2 and adds radial gradients in temperature and density. Radial gradients are expected to improve the fit to the data for the following empirical reason. The data (Figure 2) show stronger emission in the main hyperfine component than in the satellites, whereas models 2 and 3 produce about the same intensity in all emission components. Density and temperature gradients allow the introduction of some hotter, optically thinner gas, which would increase the relative strength of the central hyperfine component. Model 4 has 10 free parameters: the same eight as model 2, plus the exponents in the temperature and density power laws.

Table 2 presents the results of two optimizations of model 4, and the corresponding position-velocity diagrams are both presented in Figure 3. The difference between model 4a and model 4b is simply the initial guess (Table 1); model 4a starts with a higher density and a lower temperature than model 4b. The reason for running these two cases is that we know the brightness of a single molecular line cannot be used to uniquely determine both the gas temperature and density. In the optically thin case, for example, the line brightness temperature is the product of the temperature and the optical depth.
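This degeneracy can be seen directly from the emission brightness temperature, T_B = T_ex (1 − e^(−τ)), which reduces to T_ex τ for τ ≪ 1. A two-line illustration, with invented values:

```python
# Illustration: a hotter, optically thinner model can mimic a cooler,
# thicker one; the (T_ex, tau) pairs below are invented for the example.
import numpy as np

t_brightness = lambda t_ex, tau: t_ex * (1.0 - np.exp(-tau))
print(t_brightness(20.0, 0.100))   # ~1.90 K
print(t_brightness(40.0, 0.049))   # ~1.91 K: nearly indistinguishable
```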
Thus, the two different models 4a and 4b allow us to gauge how well temperature and density can be constrained. Both models 4a and 4b are significant improvements over models 2 and 3; their values of reduced χ² are 3.2 and 3.0, respectively, compared to 3.8 for model 2 and 4.1 for model 3. The difference in χ² between 3.2 and 3.0 is not significant. The optimized infall velocities in models 4a and 4b are not very different from the infall velocities in model 2 (Table 3), which implies that the infall velocities are well constrained by this technique. Comparing models 4a and 4b to models 2 and 3, the central emission components are stronger and the main/satellite intensity ratios are higher. The emission also has a larger spatial extent in models 4a and 4b than in model 2. In the optimized models 4a and 4b, molecular gas densities drop by two orders of magnitude between the outer radius of the HII region and the outer radius of the cloud; the temperatures drop by about 10-15 K.

Thus, a good description of the core e2 requires radial gradients (decreasing outwards) in temperature and/or density. Model 3, which is isothermal but has a density gradient, suggests that a radial gradient in density alone is not sufficient for a model of e2; a gradient in temperature is also required. Of course, a temperature gradient should not be surprising since there is a heat source (the star) in the center of the core. Analysis of the low-mass star-forming core B335 (Zhou et al. 1993) also shows evidence for a temperature gradient in that core. As expected, the results of models 4a and 4b show that the temperature and density are not independent parameters; to some extent, a lower temperature can be compensated by a higher density. Fitting two NH3 transitions simultaneously, such as (1,1) and (2,2), would remove this ambiguity. Uncertainties are discussed further in Section 5.3.

Models 5a and 5b: offset HII region

The modest asymmetries in the observed NH3 emission suggested that a better fit to the data might be achieved by displacing the HII region a few arcseconds (up to 0.1 pc) from the center of the spherical molecular core (Section 2.2). Models 5a and 5b elaborate on models 4a and 4b by including the position of the HII region within the core as free parameters. The molecular gas temperature is calculated with respect to distance from the HII region, the heat source. Density and infall velocity are calculated with respect to distance from the center of the spherical core, as the young star's mass is much less than the mass of molecular gas in the core (Paper I; Zhang & Ho 1997). This rather simplistic model has the advantage that it introduces some asymmetry without adding many new free parameters.
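A hedged sketch of this two-center geometry follows; the offsets, reference values, and slopes are illustrative placeholders, not fitted results.

```python
# Two-center geometry of models 5a/5b: temperature is a power law in distance
# from the (possibly offset) HII region, while density and infall velocity are
# power laws in distance from the core center. All numbers are illustrative.
import numpy as np

core_center = np.array([0.0, 0.0, 0.0])     # pc
hii_position = np.array([0.05, 0.0, 0.05])  # pc; trial offset (assumed)

def conditions(point):
    """Return (T [K], n_H2 [cm^-3], v_infall [km/s]) at a 3D point in pc."""
    r_core = np.linalg.norm(point - core_center) + 1e-6
    r_hii = np.linalg.norm(point - hii_position) + 1e-6
    T = 30.0 * (r_hii / 0.1) ** -0.6        # heated by the central star
    n = 1.0e6 * (r_core / 0.1) ** -2.0      # centrally condensed core
    v = 5.0 * (r_core / 0.1) ** 0.0         # flat infall law
    return T, n, v

print(conditions(np.array([0.05, 0.0, 0.0])))
```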
Models 5a and 5b have twelve free parameters: the same ten from models 4a and 4b, with the addition of the HII region's offset in two directions (along the line of sight and along the direction of right ascension). As for models 4a and 4b, models 5a and 5b differ in their initial guess parameters. The results of these optimizations are given in Figure 4 and in Table 2. For model 5a, the initial guess offset of the HII region is 0 pc, and for model 5b the initial offsets are 0.05 pc in each of the two directions, chosen to agree with the asymmetries in the data.

Neither the optimized model 5a nor 5b achieves a significant improvement in reduced χ² over models 4a and 4b. Model 5b better matches the east-west asymmetry of the data. However, in both models 5a and 5b the optimized offsets are not significantly different from the initial guesses. It is possible that our downhill simplex procedure failed to optimize this particular model, despite its success with the others. More likely, this result suggests that the model of the HII region offset from the center of its parent accreting cloud is not correct in the case of e2. The asymmetry might be better described by a different model. For example, there could be an overall east-west density gradient in the molecular core around e2. Another possibility is that the molecular core might not be spherical; we explore this possibility in models 6a and 6b.

Models 6a and 6b: molecular disk

Because a non-spherical cloud model might help reproduce some of the east-west asymmetry in the observations of e2, models 6a and 6b describe the molecular gas in e2 as a disk rather than a sphere. The disk is simply an oblate spheroid whose unique axis is confined to lie somewhere between the line of sight and the right ascension axis (the direction of the position-velocity diagram). Since only one position-velocity diagram is modelled, only one inclination angle is required to specify a unique orientation of the disk. Models 6a and 6b have 12 free parameters: the same 10 from models 4a and 4b, with the addition of the axial ratio of the spheroid and the inclination angle. The position of the HII region is fixed at the center of the disk. No constraints are placed on the axial ratio or inclination of the disk, but the approximate observed circular symmetry (Section 2.2) implies that a highly inclined thin disk is an unreasonable solution.

Again, the two disk models 6a and 6b differ in their initial guess parameters. Model 6a had an initial aspect ratio of 1:1, and its optimized parameters are quite similar to those of models 4a and 5a. Model 6b started with an aspect ratio of 4:1 and an inclination of 45° (0° is face-on); its result is a flat disk (10:1) with an inclination of 0°. Figure 4 shows that model 6b does the best job of all models in reproducing the large spatial extent of the central emission component. Model 6b also has the highest central-to-satellite intensity ratio in emission, which probably results from the relatively high temperatures in this model (Table 3; Section 5.2). However, neither model 6a nor 6b has significantly lower χ² than the simpler models 4a and 4b or the offset HII region models 5a and 5b. Furthermore, as in the case of the offset HII region, the two disk models do not converge to the same result. These facts suggest that the aspect ratio and inclination of any putative disk structure are not well constrained by the current procedure.

On the philosophy that we should adopt the simplest model which best describes the data, we plot reduced χ² against the number of parameters in each model (Figure 5). This figure shows that as parameters are added, the fit of the models to the data improves until model 4 is reached. Models 5 and 6 increase the complexity of the model without significantly improving the fit to the data. Nevertheless, there are still a number of features of the data which are not reproduced by the models (Section 5.2). We infer that the e2 core cannot be described as a simple sphere, but the specific asymmetries described by models 5 and 6 are neither required nor ruled out by the data. We should therefore base our conclusions on model 4, which requires the presence of infall and a centrally condensed and centrally heated molecular core.

Quantitative results

Most of the optimized density exponents are close to n ∝ r^-2. This slope is steeper than most theoretical predictions for low-mass stars, which give n ∝ r^-1.5 within the region of contraction (e.g. Shu 1977).
However, a slope of -2 also agrees with the empirical results of Zhang & Ho (1997) for the W51 e2 core. They used higher resolution VLA observations to fit the column density of NH3 versus radius in e2 and find n ∝ r^(-2.0±0.1) within 5″ (0.2 pc) of the HII region. This agreement between the empirical results and the radiative transfer modelling gives additional confidence in the modelling technique. Of course, the model results are based on the assumptions of LTE and constant NH3 abundance. Uncertainties introduced by these assumptions are discussed further in Section 5.3.

There are two models with density slopes quite different from -2. In model 4b the density falls off quite steeply (r^-3.9), which might be due to a trade-off between its high temperatures (relative to the other models) and density. In model 5b the density increases with larger radii (r^+0.9), which is dynamically unstable and physically unrealistic. This unusual result probably comes from calculating the density with respect to the center of the sphere, whereas the HII region is in fact offset by 0.07 pc from that center in this model.

The radiative transfer models also have the temperature falling off a bit more steeply than expected. Models 4a, 5a, 5b, 6a, and 6b have T ∝ r^-0.6, whereas Scoville & Kwan (1976) predicted T ∝ r^-0.4. In contrast, Zhang & Ho (1997) find no evidence of temperature gradients in the central 5″ (0.2 pc) of the e2 molecular core. Formal errors for the temperature exponent (see Table 2) are consistently close to 0.05. However, those formal errors are most likely an underestimate of the true uncertainties (Section 5.3), so the temperature gradients in e2 may be consistent with theoretical models. Since the isothermal models (numbers 2 and 3) have significantly higher χ² than those that fit a temperature gradient, we conclude that an isothermal model is firmly ruled out by the radiative transfer fitting technique. It is not clear why this modelling should give a different slope than Zhang & Ho (1997) find, except perhaps for the fact that their result is based on line ratios whereas the current technique uses essentially the beam-diluted brightness temperature.

Those models which fit an infall velocity gradient have slopes between v ∝ r^0.2 and v ∝ r^0. Even models 2, 4a, and 6a, whose initial slope was -0.5, have optimized slopes greater than or equal to zero. Those slopes are not consistent with the "inside-out" scenario proposed by Shu (1977), in which the infall velocity must decrease with increasing radius. In a different star-forming core, however, an inside-out collapse has been inferred: in G10.6-0.4, the infall velocity decreases with radius at least as quickly as v ∝ r^-0.5 (Keto, Ho, & Haschick 1988). If an inside-out, accelerating collapse were present in W51 e2 as well, we would expect to observe higher velocities on smaller spatial scales, at least 10 km s^-1 at the 0.01 pc scales observed by Zhang & Ho (1997). However, such high velocities are not observed. This result could be related to the fact that the HII region is actually offset from the center of the molecular core.

Table 3 presents maximum and minimum values of the temperature, density, and infall velocity in each model, as well as the total gas mass. The extrema are calculated at the inner and outer edges of the shell of molecular gas, as appropriate for each model. From this table we see that the models all have infall velocities of 4-6 km s^-1, which are consistent with the value inferred in Paper I.
Such velocities are about a factor of 10 higher than the isothermal sound speed (0.5 km s^-1 for molecular hydrogen at 50 K). Basu & Mouschovias (1994, 1995) have theoretically analyzed the collapse of magnetized molecular cloud cores with ambipolar diffusion, and they predict infall velocities close to the sound speed, rather than an order of magnitude higher than the sound speed. The reason for this discrepancy is not clear, though Basu & Mouschovias (1995) state that less efficient coupling between neutrals and ions can give rise to higher infall velocities.

Molecular hydrogen densities calculated for the radiative transfer models range between n ~ 5×10^4 and 5×10^7 cm^-3 (Table 3). The critical density for exciting these NH3 transitions is about 10^4 cm^-3 (Ho & Townes 1983), so in this sense the fitted densities are consistent with expectations. There is a considerable range in the estimates of the density of the gas, especially near the outer edges of the core (Table 3), where values differ by three orders of magnitude. The uncertainties in the density of the gas are large because, in the optically thick case, small errors in fitting the strength of the hyperfine components translate into large errors in the density (see also Section 5.3).

All of the models have gas temperatures which are lower than those calculated from other techniques. The (1,1) to (2,2) line ratios in the gas surrounding e2 imply temperatures of 25-35 K at 0.2-0.3 pc from the HII region e2 (Paper I). Zhang & Ho (1997) also use high angular resolution line ratios to find temperatures of 40-50 K inside the core (inside 0.2 pc) and temperatures of 25-30 K outside the core, at ≥ 0.2 pc from the HII region. In contrast, the models have peak temperatures of only 20-40 K and values of a few to 10 K at 0.2 pc from the HII region. As discussed in Section 5.3, factors of two in the temperature are within the uncertainties caused by an inverse correlation between temperature and density. Furthermore, if the molecular gas is clumpy, a beam filling factor less than one would make the modelled temperatures, which are essentially derived from the brightness temperature of the gas, lower than the excitation temperatures. From the ratio of the observed continuum flux to the absorption line strength, the beam filling factor for the redshifted absorbing gas is 0.8 (cf. Keto, Ho, & Haschick 1987).

Consistency checks

The only physical constraints placed on the model molecular cores are that the temperature of the gas must be above 3 K and the density must be above n ~ 10 cm^-3. In other words, the models are fit without regard to energetic or dynamic self-consistency. Thus, some simple consistency checks are in order. If the collapse begins from a state in which the gas is stationary, cold, and essentially infinitely far from the central star, the total of the potential, kinetic, and thermal energy should be zero at every radius. Table 4 gives the total energy in the molecular gas in the form of gravitational potential energy, infall kinetic energy, and thermal energy. (The potential energy is the usual integral of -G M(r)/r dm, where M(r) is the total mass inside radius r and dm is the mass element at r; the kinetic energy of infall is the integral of v²/2 dm, and the thermal energy is 3kT/2 per molecule.) The thermal energy in the gas is always dominated by the turbulent energy corresponding to the assumed intrinsic linewidth of 1.25 km s^-1 (540 K). In turn this turbulent energy is always less than the kinetic energy of infall.
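The sketch below evaluates these three energy integrals for an illustrative power-law sphere; the laws and limits are placeholders in the spirit of Tables 2 and 3, not the fitted values themselves, and only the gas's self-gravity is included.

```python
# Energy budget of a power-law sphere: E_grav = -Int G M(<r)/r dm,
# E_kin = Int v^2/2 dm, E_th = Int (3/2) k T dm/m. Values are illustrative.
import numpy as np
from scipy.integrate import quad

G, k_B = 6.674e-8, 1.381e-16                # cgs
m_h2 = 3.35e-24                             # g per H2 molecule
pc, km = 3.086e18, 1.0e5

r_in, r_out = 0.01 * pc, 0.13 * pc
n = lambda r: 1.0e6 * (r / (0.1 * pc)) ** -2.0   # cm^-3
T = lambda r: 30.0 * (r / (0.1 * pc)) ** -0.6    # K
v = lambda r: 5.0 * km                           # flat infall law, cm/s

dm = lambda r: 4.0 * np.pi * r**2 * n(r) * m_h2  # shell mass per unit radius
M = lambda r: quad(dm, r_in, r)[0]               # enclosed gas mass

E_grav = -quad(lambda r: G * M(r) / r * dm(r), r_in, r_out)[0]
E_kin = quad(lambda r: 0.5 * v(r) ** 2 * dm(r), r_in, r_out)[0]
E_th = quad(lambda r: 1.5 * k_B * T(r) / m_h2 * dm(r), r_in, r_out)[0]
c_s = np.sqrt(k_B * 50.0 / m_h2) / km            # isothermal sound speed at 50 K
print(f"E_grav={E_grav:.1e}  E_kin={E_kin:.1e}  E_th={E_th:.1e} erg; c_s~{c_s:.1f} km/s")
```

For these placeholder values, E_kin + E_th comes out within a factor of 2-3 of |E_grav|, the same rough balance reported for most of the fitted models in the next paragraph.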
In most cases the sum of kinetic and thermal energy is within a factor of 2-3 of the potential energy, indicating approximate energy balance. The exceptions to this statement are models 2 and 3, which are rejected in any case because of their relatively high values of χ², and model 5b, which has the density increasing outwards. The total mass of gas in each model appears in Table 3 and varies from 100 to 10^4 M_sun. These masses are consistent with the 100 to 200 M_sun lower limit inferred in Paper I, based on the assumption that the gas is moving at the free fall velocity.

It is also possible to estimate a mass infall rate onto the molecular core using the density, velocity, and radius values in Table 2 or 3 (a back-of-the-envelope version of this estimate follows). The implied rates are around 5×10^-2 M_sun yr^-1, much higher than the infall rates expected for low-mass star formation. However, the infall rate onto the star itself might be lower than the rate we estimate at these 0.1 pc scales. Spin-up motions and stellar winds/outflows are both observed in e2 at radii < 0.01 pc (Zhang & Ho 1997; Gaume et al. 1993). Magnetic fields are undoubtedly also important. Good theoretical models of high-mass star formation which would place our inferred mass infall rates in an appropriate context are lacking.
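```python
# Mdot = 4 pi r^2 rho v, with round (not fitted) values for density and
# infall speed at 0.1 pc; this lands near the few x 10^-2 Msun/yr scale.
import numpy as np

pc, km, yr, M_sun = 3.086e18, 1.0e5, 3.156e7, 1.989e33
m_h2 = 3.35e-24                              # g per H2 molecule
r, n_h2, v = 0.1 * pc, 1.0e6, 5.0 * km       # cm, cm^-3, cm/s

mdot = 4.0 * np.pi * r**2 * (n_h2 * m_h2) * v
print(f"Mdot ~ {mdot * yr / M_sun:.1e} M_sun/yr")   # ~3e-2
```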
Observed features which are not reproduced

All of the models underestimate the strength of the main hyperfine component in emission, though they fit the strength of the satellite components fairly well. This discrepancy seems to indicate that all of the models are lacking some hot, optically thin gas. In fact, molecular gas temperatures of ~100 K have been measured in the core e2 using the (3,3) line of NH3. Our models, however, do not contain gas at such high temperatures.

The models also fail to reproduce some emission near the center of the cloud, seen in Figure 2 at 19h21m26.25s and 62 km s^-1, at a level of 36 mJy/beam or 12 K (6σ). The velocity of this gas is more redshifted than most of the gas seen in absorption. Since this gas at 62 km s^-1 is seen in emission, and our spatial resolution is much larger than the actual size of the HII region, it is not possible to know whether the gas is in front of or behind the HII region. This emission could come from gas behind the HII region; in that case, its redshifted velocity suggests expansion or outflow from the molecular core. Thus, it is possible that the e2 molecular core is experiencing simultaneous infall and outflow. This emission at 62 km s^-1 could also be explained by the presence of some hot, optically thin gas in front of the HII region. The brightness temperature of the HII region is about 80 K. (The HII region is not resolved by the current observations; see Paper I.) The optical depths of the molecular gas in e2 are very high, 5-10. Thus, molecular gas in front of the HII region would need an excitation temperature of only about 90 K in order to be seen in emission at 12 K against the HII region. As discussed above, the NH3 (3,3) measurements did indeed find evidence of temperatures around 100 K in the e2 molecular core. Although the models of this paper do not favor an inside-out collapse structure (see Section 4.7), the gas described here (warmer gas, presumably closer to the HII region, and moving at higher velocities) may provide some evidence in favor of an inside-out collapse. In any case, whether the gas at 62 km s^-1 is infalling or outflowing, it does not contradict the conclusion of Paper I that the bulk of the gas in e2 must be infalling.

Finally, none of the spherically symmetric models reproduces the asymmetries apparent in the data. Model 5b, with an offset HII region, is asymmetric, but the overall fit to the data (reduced χ²) is not improved (Section 4.5).

Uncertainties

The simple error analysis described in Section 3 gives estimates of the 1σ uncertainties in the model parameters, assuming the parameters are not correlated. Typical values for these uncertainties are presented in the last column of Table 2. The errors of fitting are typically quite small, and they do not reflect the true uncertainties because they ignore systematic errors. Major sources of error in this technique are the flux calibration of the data, the distance uncertainty, the assumed NH3 abundance, and the interdependence of temperature and density.

One source of systematic error is the uncertainty in the flux calibration of the VLA data and the primary beam correction. Changes in the flux calibration scale the brightness temperature by some multiplicative factor. This scaling factor should translate into an uncertainty in the temperature of the cloud, since the absolute strength of the lines should be determined largely by the temperature of the gas. Experience indicates that the uncertainty in the flux calibration of VLA data may be as large as 20% at K-band (23 GHz). In addition, the primary beam correction could be as large as 30% at the position of e2, though random pointing errors would tend to decrease the primary beam correction.

Another source of systematic error is the distance uncertainty. As mentioned earlier, the method of statistical parallax applied to the masers in W51 gives a distance of 7.0 ± 1.5 kpc (Genzel et al. 1982). This 21% uncertainty in the distance to the cloud produces a 21% uncertainty in the linear radius of the cloud and hence in the gas density n_0 (in order to produce the same total column density).

Because the densities quoted here are molecular hydrogen densities, scaled up from the data by an assumed NH3 abundance, the unknown NH3 abundance of course contributes to uncertainties in density. We have adopted NH3/H2 = 1.4×10^-6, but this value is probably uncertain by at least a factor of 10 (Ho & Townes 1983). If we had adopted an abundance value a factor of 10 smaller, the densities in Tables 1, 2, and 3 would increase by that factor. Moreover, the NH3 abundance could vary with radius in the core. An abundance gradient would mimic the effect of a density gradient, and the present modelling technique cannot distinguish between the two.

We have also assumed that the NH3 level populations are determined by LTE. If this assumption does not hold, the model gas densities and temperatures would be inaccurate; however, because of the complex source geometry, it is difficult to predict whether they would be underestimated or overestimated. At the high densities found in the e2 core, the LTE assumption is likely to cause smaller uncertainties than those introduced by the NH3 abundance.

In this modelling technique, it is difficult to make a unique determination of kinetic temperature and volume density because of an inverse correlation between these two quantities (see also Section 4.4). This correlation arises from the fact that a spectrum of a single NH3 inversion transition constrains the optical depth of the transition and the beam-diluted brightness temperature, not the volume density or the kinetic (or excitation) temperature (Ho & Townes 1983).
The comparison between models 4a and 4b and between models 6a and 6b (Table 2) shows that one can trade off a factor of two increase in temperature for a factor of 2 to 4 decrease in density and achieve the same χ². Because good fits are not obtained for variations much beyond this range, we conclude that this interdependence between temperature and density brackets the temperature to within a factor of two and the density to better than an order of magnitude. Simultaneous fitting of more than one transition would remove much of the ambiguity.

An inaccuracy of a few percent results from the coarse gridding of the data in Figure 2. That is, the value of χ² depends on exactly how the original data cube is sampled, because the final convolution of the model to approximate the resolution of the VLA uses only 49 discrete points to cover an observing beam. We find errors on the order of 6% from this sampling. In addition, uncertainties in the continuum level translate into uncertainties in the temperature of the gas and in the continuum opacity of the HII region via the strength of the absorption. The errors in continuum subtraction are probably on the order of 6 mJy/beam (2 K) or less, as that is the rms noise in the line-free regions of the data. In comparison, the strongest absorption in the core e2 is 120 mJy/beam, so continuum subtraction probably produces relatively small errors.

The fitting procedure employed in this work is a local minimization procedure, rather than a global minimization. However, in practice a large amount of global searching has already been done, because the output model is extremely sensitive to the initial conditions: if the initial guess is not relatively good, the program tends to run the model down to blank sky instead of to a meaningful fit. Thus the various sets of initial conditions presented in Table 1 are only a small subset of the ones that were attempted, most of which gave unacceptable results.

Conclusions

We present radiative transfer modelling of the NH3 (J,K) = (2,2) transition in the molecular core around the ultracompact HII region W51 e2. Paper I described the NH3 observations and presented a model in which the molecular core (radius ~0.1 pc) is undergoing roughly spherically symmetric contraction at about 5 km s^-1 onto a young massive star. This paper investigates the physical properties and three-dimensional structure of the core in more detail through numerical techniques, using a series of models of gradually increasing complexity in which the gas temperature, density, and infall velocity are parametrized as power laws. The parameters of these models were optimized so that the expected line radiation best matched the observed data. Comparison of the series of models yields insights into the importance of the various model parameters. For example, the core is contracting at a velocity of about 5 km s^-1. A good model of the core requires that the temperature and density of the gas both decrease with increasing distance from the center of the cloud over 0.1 pc scales. Major uncertainties arise from the assumed NH3 abundance and from the fact that the temperature and density cannot be determined independently in this project. The flux calibration of the data and the distance to W51 also introduce significant uncertainties.
An important feature of this work is that, regardless of the numerical uncertainties, comparing models of gradually increasing complexity yields insights into the sensitivity of the model to the parameters and indicates which parameters are most important. For example, models without infall and isothermal models are clearly inadequate descriptions of the molecular core.

(Figure caption fragment: the axis along the bottom shows velocity in pixels, with high velocities, i.e. redshifted gas, on the right-hand side of the plot; the other panels show the optimized output of the model number indicated in each top right corner. Improvements are realized until there are ten free parameters, models 4a and 4b; models 5a through 6b make no further improvement in χ².)
The US SimSmoke tobacco control policy model of smokeless tobacco and cigarette use

Background: Smokeless tobacco (SLT) prevalence had been declining in the US prior to 2002 but has since increased. Knowledge about the impact of tobacco control policies on SLT and cigarette use is limited. This study examines the interrelationship between policies, cigarette use, and SLT use by applying the SimSmoke tobacco control policy simulation model.

Methods: Using data from the large-scale Tobacco Use Supplement and information on policies implemented, US SimSmoke was updated and extended to incorporate SLT use. The model distinguishes between exclusive SLT use and dual use of SLT and cigarettes, and considers the effect of implementing individual and combined tobacco control policies on smoking and SLT use, and on deaths attributable to their use. After validating against Tobacco Use Supplement (TUS) survey data through 2015, the model was used to estimate the impact of policies implemented between 1993 and 2017.

Results: SimSmoke reflected trends in exclusive cigarette use from the TUS, but over-estimated the reductions, especially among 18-24 year olds, until 2002 and under-estimated the reductions from 2011 to 2015. By 2015, SimSmoke projections of exclusive SLT and dual use were close to TUS estimates, but the model under-estimated reductions in both from 1993 to 2002 and failed to estimate the growth in male exclusive SLT use, especially among 18-24 year olds, from 2011 to 2015. SimSmoke projects that policies implemented between 1993 and 2017 reduced exclusive cigarette use by about 35%, dual use by 32.5% and SLT use by 16.5%, yielding a reduction of 7.5 million tobacco-attributable deaths by 2067. The largest reductions were attributed to tax increases.

Conclusions: Our results indicate that cigarette-oriented policies may also be effective in reducing the use of other tobacco products. However, further information is needed on the effect of tobacco control policies on exclusive and dual SLT use and on the role of industry.

Background

Adult smoking prevalence in the US declined from 26% in 1993 to 14% in 2015 [1]. Much of that decrease can be attributed to the implementation of tobacco control policies, including smoke-free air laws, marketing restrictions, media campaigns, cessation treatment and tax increases [2,3]. While smoking prevalence has declined, the use of other tobacco products, such as little cigars or smokeless tobacco (SLT), and of e-cigarettes has increased [4-7]. Much of that is multi-product use, of which 60% includes cigarettes [7]. Although male SLT use had declined in the US from 4.2% in 1993 to 2.8% in 2002 [8,9], it increased to 3.0% by 2011 [6,10,11], with snuff sales increasing by 65% [12]. SLT use has been shown to be a direct cause of oral and esophageal cancer, and may also cause heart disease, gum disease and oral lesions [13]. With concerns about the health effects and increasing use of SLT, some states have directed policies at reducing SLT use, including increased SLT taxes, educational campaigns, and cessation treatment [14,15]. In addition, the 2009 Family Smoking Prevention and Tobacco Control Act (FSPTCA) authorized the Food and Drug Administration to regulate the marketing, promotion and sale of cigarettes and SLT. Policies directed at reducing SLT use may also affect cigarette use. For example, cigarette use may increase if youth and young adults initiate smoking instead of SLT use or if smokers are discouraged from using SLT to help quit cigarette use.
However, SLT-oriented policies could reduce cigarette use if the two tend to be used together (i.e. dual use) and the policies encourage cessation, or if SLT acts as a gateway to cigarette smoking. Similarly, policies directed at reducing cigarette use may discourage SLT use if the two are used together, or may encourage SLT use if SLT is used as a cigarette substitute. Policy evaluations have provided limited information on these effects [15]. Knowledge of the policy impacts can help to better design policies towards SLT use, and may have implications for other nicotine delivery products, such as e-cigarettes [16].

This paper employs simulation modeling to examine the inter-relationship of tobacco control policies and patterns of cigarette and SLT use. We adopt the well-established SimSmoke simulation model [2,3]. The model incorporates population and smoking dynamics and focuses on the major cigarette-oriented tobacco control policies, including taxes, smoke-free air laws, media campaigns, marketing restrictions, cessation treatment policies and youth access enforcement. SimSmoke has been used for advocacy and planning purposes to examine the impact of past and projected future policies individually and in combination [17]. The model has been developed and validated for over 25 nations and 8 states with a wide range of different policy changes [2,18-26]. The SimSmoke model is extended here to incorporate SLT use, distinguishing between exclusive SLT and dual (both cigarette and SLT) use. We consider the effect of tobacco control policies implemented between 1993 and 2017 on cigarette and SLT use and on the deaths attributed to that use.

Methods

The model begins with the 1993 population distinguished by age and gender and further classified as never tobacco users or as current and former users in each of three categories: exclusive cigarette, exclusive SLT, and dual use. As shown in Fig. 1, cigarette and SLT use change over time through modules for population, tobacco use, and tobacco-attributable deaths, and through separate modules for each policy.

Population

Population data were obtained by single age (0 through 85) from the Census Bureau for 1993-2013 [27-29] and for 2016-2067 from the Census Bureau's Population Projections Program [30]. Starting with the population in 1993, the population evolves through births, deaths and net immigration, with the population up to age 14 based on the obtained population data and older age groups subject to mortality rates from the CDC [31]. Mortality rates by age and gender were averaged by age group over the years 1999 through 2013 and then smoothed.

Tobacco use

Individuals evolve from never tobacco users to current tobacco users through smoking and SLT initiation. Tobacco users become former users through quit rates, but may return to their prior tobacco use state through relapse. A discrete time, first order Markov process was assumed for these transitions.
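A minimal sketch of one such first-order Markov update follows; this is not the actual SimSmoke implementation. The state set is collapsed to three states per tobacco-use group, and the annual rates are invented for illustration.

```python
# Toy first-order Markov update: never users initiate, current users quit,
# former users relapse. All transition rates below are invented.
import numpy as np

states = ("never", "current", "former")
P = np.array([                      # row = from-state, column = to-state
    [0.97, 0.03, 0.00],             # never: 3% initiate per year
    [0.00, 0.96, 0.04],             # current: 4% quit per year
    [0.00, 0.02, 0.98],             # former: 2% relapse per year
])
assert np.allclose(P.sum(axis=1), 1.0)

x = np.array([0.70, 0.25, 0.05])    # prevalence vector in the base year
for year in range(1993, 1998):
    print(year, dict(zip(states, np.round(x, 3))))
    x = x @ P                       # one-year Markov step
```

In the full model the transition rates vary by age, gender, and user group, and are shifted each year by the policy modules described below.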
Baseline estimates of exclusive smoking, exclusive SLT and dual use status by age and gender were obtained from the nationally-representative 1992/3 Tobacco Use Supplement (TUS) of the Current Population Survey [33]. Current smokers were defined as individuals who have smoked more than 100 cigarettes in their lifetime and currently smoke cigarettes either daily or on some days. A question was asked regarding whether the individual "regularly" used SLT. Those regular SLT users were further distinguished as dual users (with cigarette use) and exclusive SLT users. Former users were defined as those who met the respective definitions for use, but reported no current use. Former smokers were split into former exclusive smokers and former dual users using the age-specific ratio of exclusive smokers to dual users, and former exclusive SLT users were estimated using the ratio of former to current smokers. Former exclusive smokers and dual users were distinguished by years since quitting (< 1, 1, 2, ..., 15, > 15 years). Since former SLT users were not asked about years since quitting, the initial percentages were assumed to be the same as for former smokers.

Because evidence on initiation and early transitions to SLT use from the literature was mixed [34-38], and because the TUS did not provide such information, we employed a measure of net initiation, whereby initiation was measured for each of the three user groups as the difference between the base year prevalence at a given age and the base year prevalence at the previous age. Thereby, this measure incorporates initiation, cessation and switching between tobacco products, similar to previous SimSmoke models without the ability to switch products [2,3]. This method ensures stability and internal consistency of the model. We allowed for initiation through age 30 for males and age 27 for females, the respective ages when net initiation for all three user groups began to decline. Cessation occurs after the last age of net initiation.

Data on smoker quit rates were obtained from the TUS, measured as those who quit in the last year, but not the last 3 months [39]. Since sufficient data to estimate quit rates for exclusive SLT and dual users were not available from the TUS, we considered previous literature. Studies [40-42] generally found that quit rates were at least as high among SLT users as among cigarette users. With some exceptions [43], studies obtained similar quit rates for dual users and exclusive smokers [42,44,45]. Quit rates were therefore set the same for dual and exclusive SLT users as for all smokers. Age- and gender-specific relapse rates by years quit were based on the rates for smokers [46-49]. Finally, since studies indicated limited switching between SLT and cigarettes, except at younger ages [40-42], switching only occurred through net initiation.

Tobacco-attributable deaths

Relative risk estimates for current and former smokers by age and gender were based on the Cancer Prevention Study II [48,50,51], as in previous US SimSmoke models [2,3]. Relative risks for dual users may be less than for exclusive smokers due to reduced quantity smoked [43], but studies have found similar risks [52,53] except with large quantity reductions [54]. We assigned the same risks to exclusive cigarette and dual users, so that risks decline at the same rate with years since quitting [48,50,51]. We estimate an exclusive SLT relative mortality risk of 1.15 based on a large-scale US study [55]. To obtain smoking-attributable deaths, the number of exclusive smokers at each age is multiplied by the excess mortality risk (the exclusive smokers' death rate minus the never smokers' death rate) to obtain attributable deaths by age, which are then summed over ages. The same procedure was applied to former exclusive smokers, and the results were summed over current and former smokers. Separate estimates were derived in the same way for exclusive SLT and for dual users.
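The attributable-deaths arithmetic can be sketched as follows. The toy mortality curve, cohort sizes, and the smoker relative risk are invented placeholders; only the SLT relative risk of 1.15 is taken from the text above.

```python
# Excess deaths = users x never-user death rate x (RR - 1), summed over ages.
import numpy as np

ages = np.arange(35, 85)
never_rate = 0.001 * np.exp(0.08 * (ages - 35))   # toy annual death rates
users = 1.0e5 * np.ones(ages.size)                # toy users per single age

def attributable(rr):
    """Attributable deaths for a user group with relative risk rr."""
    return np.sum(users * never_rate * (rr - 1.0))

print(f"exclusive smokers (RR=2.2, placeholder): {attributable(2.2):,.0f}")
print(f"exclusive SLT (RR=1.15):                 {attributable(1.15):,.0f}")
```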
Policies

The model was initialized with 1993 policy levels, and incorporates US and state policy changes occurring between 1993 and 2017. Policy descriptions and effect sizes are shown in Table 1. Policies are generally modelled as having immediate effects on prevalence rates and ongoing effects through initiation and cessation rates. When more than one policy is in effect, the effects are applied multiplicatively as percent changes, subject to synergies (e.g., through publicity from media campaigns; see Table 1).

In the tax module [56], prevalence was modeled as having constant proportional effects with respect to price (i.e., constant price elasticities), as derived from demand studies. Based on previous reviews [56,57], the model assigns a prevalence elasticity for exclusive cigarette and dual use of −0. The national average retail prices and manufacturer tax for SLT products through 2014 were measured by the state retail prices and manufacturer taxes weighted by the SLT user population [59], using manufacturer sales and quantity shipped in pounds [60], tax data [61], estimated weights per unit [60], and estimated mark-ups. We adjusted the 2014 price upward for 2015-2017 by the state population-weighted tax increase. For SLT users, we used a weighted price, with weights of 80% on the cigarette price and 20% on the SLT price [59]. All prices were deflated by the consumer price index to adjust for price inflation.
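A sketch of the constant-elasticity price response and of the multiplicative combination of policy effects described above follows. The elasticity and the smoke-free effect below are assumed illustrative values (the model's own elasticity is truncated in the text above), not calibrated inputs.

```python
# Constant-elasticity price response and multiplicative policy combination.
def price_effect(old_price, new_price, elasticity=-0.3):
    """Relative prevalence change for a price change; elasticity is assumed."""
    return (new_price / old_price) ** elasticity - 1.0

def combined_effect(*relative_changes):
    """Apply percent changes multiplicatively, e.g. tax then smoke-free laws."""
    f = 1.0
    for c in relative_changes:
        f *= 1.0 + c
    return f - 1.0

tax = price_effect(3.00, 4.50)      # a 50% price rise (illustrative prices)
sfa = -0.04                         # assumed smoke-free air law effect
print(f"combined prevalence change: {combined_effect(tax, sfa):+.1%}")
```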
SimSmoke considers worksite, restaurant, pub and bar, and other public place laws, and the role of enforcement [62]. Studies of SLT use have found a negative relationship to smoke-free air laws [15]. Based on these findings, and since smoke-free air laws are not explicitly directed at SLT use, exclusive SLT and dual use effect sizes were set at 25% of those for cigarettes. Data on state level smoke-free air laws [63] were weighted by state smoker populations. The enforcement level was set at 80% for all years, as previously developed for US SimSmoke [2,3].

SimSmoke evaluates media campaigns in terms of overall tobacco control expenditures, much of which are for media campaigns [64]. They are categorized as high, medium, or low levels [65]. Studies have generally found SLT-oriented educational campaigns effective in reducing youth and adult use [15], but due to the reduced emphasis on SLT as compared to cigarette-oriented campaigns, exclusive SLT and dual use effect sizes were set at 50% of those for cigarettes. State per capita expenditures [66] were categorized by levels and weighted by the state smoker population; they were initially categorized as low level in 1993, increasing to medium level by 2004.

SimSmoke considers restrictions on both direct and indirect marketing [67,68]. While no studies have directly examined the relationship of marketing restrictions to SLT use, awareness of and exposure to SLT advertisements has been associated with increased use [15]. SLT and dual use were assigned the same policy effect sizes as for cigarettes. Restrictions on advertising for both SLT and cigarette use were set at a minimal level from 1993 to 2009, reflecting an earlier media advertising ban, with enforcement set at 90% [2]. In 2010, they were increased to 25% moderate and 75% minimal, reflecting the added 2009 FSPTCA restrictions on sponsorships, coupons, and advertising in publications.

The effectiveness of health warnings depends primarily on their size and whether they include graphics [69]. Limited effectiveness has been found for text-only warnings on SLT packages, but pictorial warnings were associated with less susceptibility to SLT use among youth and greater interest in cessation among adults [15]. We assume the same effect of SLT warnings on exclusive SLT and dual use as of cigarette warnings on cigarette use. Health warnings for cigarettes have been minimal since 1966. However, since 2010, SLT packaging is required to display large text warnings covering at least 30% of the two principal sides of the package, larger than cigarette warnings. SLT warnings were assigned a minimal level until 2009 and a moderate level since 2010.

Cessation treatment policy includes brief interventions, pharmacotherapy availability, financial coverage of treatments, and quitlines [70]. Reviews of randomized trials of pharmacological SLT interventions found mixed effects [13,71,72], but behavioral interventions have been found to promote quitting among SLT users [15]. However, SLT users currently use these resources at low rates [73]. Compared to exclusive smokers, cessation treatment policies were assigned 50% of the effect on SLT users, but 100% of the effect on dual users. The levels of cessation treatment use were based on previous versions of US SimSmoke [2,3,70]. Treatment coverage was initiated in stages, beginning at minimal in 1997 and increasing to moderate by 2007 [74]. A national (active) quitline was implemented at 25% capacity beginning in 2003, increasing in stages to 100% by 2007 [74]. Brief interventions were set at a level of 50% for all years. Most states currently have provisions for SLT advice and treatment, and consequently the policy levels were set the same as for cigarettes.

Youth access policies include enforcement and restrictions on vending machines and self-service. Strongly enforced and publicized youth access laws yield a larger reduction in youth smoking initiation for 10-15 year-olds than for 16-17 year-olds, further enhanced by vending machine and self-service bans [75]. Two studies of youth SLT use [76,77] found that youth access policies affected SLT use, although the effect was weak, and two studies [78,79] found lower compliance rates for SLT than for cigarette purchases. Youth access policy effect sizes for exclusive SLT use were assigned 50% of the effect sizes for cigarettes, while the effects on dual use were assigned the same effect sizes as for exclusive cigarette use. Enforcement levels for both SLT and cigarettes were set at none before 1997, at low level from 1998 to 2002 and at mid level since 2003 [6]. Levels for vending machine bans were set at 50% beginning in 1993 [80], increasing to 75% by 2000; levels for self-service bans were set at 50% beginning in 1995. Both vending machine and self-service bans were increased to 100% in 2010, reflecting requirements under the 2009 FSPTCA.

Validation

To validate the model, we compared predicted cigarette and SLT prevalence rates (which incorporate policy changes) to the comparable use rates estimated from the 2002, 2010/11 and 2014/15 TUS surveys. Because screening questions on SLT use in the TUS changed from "regular use" to days of use, current users from 2002 onward were defined as individuals currently using SLT at least 10 days in the last month [81]. For the years 2002, 2010/11 and 2014/15, we considered whether SimSmoke predictions were within the 95% confidence intervals (CI) from the TUS, assuming a binomial distribution for each use category. We also compared the relative change in prevalence rates from SimSmoke to those from the TUS by sub-periods (1993-2002, 2002-2011, and 2011-2015) and overall.
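The validation test reduces to checking whether each SimSmoke prediction falls inside the TUS 95% binomial confidence interval. A sketch with invented numbers:

```python
# Binomial 95% CI check for one prevalence estimate; all inputs are invented.
import numpy as np

def binomial_ci(p_hat, n, z=1.96):
    se = np.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

tus_prev, tus_n = 0.142, 50_000     # illustrative TUS estimate and sample size
model_prev = 0.140                  # illustrative SimSmoke prediction
lo, hi = binomial_ci(tus_prev, tus_n)
print(f"TUS 95% CI: ({lo:.3f}, {hi:.3f}); model inside: {lo <= model_prev <= hi}")
```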
The effect of past tobacco control policies

Upon validating the model, we estimated the effect of policies on tobacco prevalence and tobacco-attributable deaths. First, we programmed SimSmoke with all policies remaining at their 1993 levels to estimate the counterfactual without any policies implemented. We then subtracted estimates incorporating all implemented policies from those for the counterfactual in order to estimate the net reductions due to the policies implemented since 1993. The contribution of each individual policy was estimated by reprogramming SimSmoke to allow only for the change in that policy while holding other policies constant, which was compared to the counterfactual with no policies implemented. The relative reductions for each policy were measured relative to the summed effects of all policies, since the effects with multiple policies depend on assumed synergies and do not sum to one.
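A minimal sketch of this counterfactual differencing follows. The `simulate` function is a hypothetical stand-in for the SimSmoke engine, and the annual effects are invented placeholders; the differencing arithmetic, not the numbers, is the point.

```python
# Counterfactual attribution: run with policies frozen at 1993 levels, run
# with the observed policy history, then difference the two trajectories.
def simulate(policy_history):
    """Return prevalence by year under a given policy schedule (stub)."""
    prev = 0.256                        # male exclusive cigarette use, 1993
    out = {}
    for year, effect in sorted(policy_history.items()):
        prev *= (1.0 + effect)          # annual relative change (invented)
        out[year] = prev
    return out

baseline = simulate({y: -0.005 for y in range(1993, 2018)})       # no new policies
with_policies = simulate({y: -0.022 for y in range(1993, 2018)})  # policy history
reduction_2017 = 1.0 - with_policies[2017] / baseline[2017]
print(f"relative reduction attributable to policies: {reduction_2017:.1%}")
```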
Mass media campaigns and advertising bans showed 0.6 and 0.5% relative reductions respectively in 2017, increasing to 0.8 and 0.9% reductions by 2067. For exclusive cigarettes, taxes represented 71% of the total policy effects, followed by smoke-free air laws at 11% and cessation treatment at 10% by 2017. Similar but slightly smaller relative reductions were projected for dual use. However, much smaller effects were projected for exclusive SLT use, where the largest relative reductions by 2067 for males (females) were 13% (12%) for prices, followed by 1.6% (2.5%) for cessation treatment and 1.1% (1.2%) for health warnings. Some categories show increased exclusive SLT use in future years, due to the larger pool of potential initiates from those who would have smoked cigarettes.

Discussion

Our estimates of the decline in exclusive cigarette use between 1993 and 2015 from US SimSmoke generally validated well against trends found in the large-scale, nationally representative TUS. However, SimSmoke over-estimated reductions among male smokers for most ages, especially those 18-24, until 2002, while under-estimating reductions in later years. By 2015, SimSmoke female projections of adult exclusive and dual cigarette use were close to TUS estimates, while male reductions were under-estimated for dual use but over-estimated for exclusive SLT use. The deviations for dual use may reflect the relatively small number of such users. Consistent with previous literature [8,9], the model projected that overall SLT rates fell quite rapidly for both dual and exclusive SLT use through 2002, but decelerated in recent years. However, SimSmoke under-predicted the decline through 2002. While some policies were directed at SLT use between 1993 and 2002, most were directed at cigarette use, including tax increases, smoke-free air laws, and media campaigns. These policies may have also reduced SLT use, suggesting the importance of strong cigarette policies in reducing overall tobacco use. The model fails to predict well the increasing pattern of exclusive SLT and dual use found in recent TUS surveys and in recent studies [6,10,11,82,83]. The failure to predict these changes in trend may reflect the changing composition of the SLT industry. Studies [84,85] indicate that cigarette companies began promoting SLT products as a way for smokers to satisfy nicotine cravings in places where smoking is banned, and marketing expenditures, including those on price promotions [86] and flavored products [87,88], increased. The largest increases in SLT use were among young adults, possibly reflecting marketing targeted toward this age group. Policies may need to be directed at this age group in order to reduce SLT and dual use. SimSmoke projected that policies implemented between 1993 and 2017 reduced cigarette use by about 35% and SLT use by 16.5%. Consistent with earlier SimSmoke analyses [89,90], the largest percentage reductions in cigarette and SLT use and in attributable deaths were due to taxes. Smoke-free air laws were next most important for cigarettes, while cessation treatment was next most important for SLT. The importance of taxes and smoke-free air laws has also been found in previous US SimSmoke models of cigarette use [2,20-22,25,26]. SimSmoke also provided estimates of the health effects of SLT use. SimSmoke estimated 6212 deaths attributable to exclusive SLT use in 2017 (down from 7449 in 1993), but projected general increases in future years.
However, we treated SLT as a homogeneous category in terms of risks, potentially overestimating risks (e.g., for SLT users switching to snus) [91-95]. The number of SLT-attributable deaths paled in comparison to the total deaths attributable to dual and exclusive cigarette use, which were estimated as 7449 and 385,594 respectively in 2017. The model did not distinguish the relative risks of dual use from those of exclusive cigarette use, although dual use may reduce the number of cigarettes smoked over a lifetime and, thereby, reduce mortality risks. Like all models, SimSmoke estimates are only as strong as the assumptions and underlying data. In particular, the projections of cigarette use were based on initiation and cessation rates derived in 1993 and subject to policy changes over time. Cessation rates for exclusive SLT users were not available, and we were not able to distinguish cessation rates for dual as compared to exclusive cigarette use. In addition, the effect sizes of policies on SLT use that we used in SimSmoke are tentative, largely reflecting studies prior to 2007 [17]. Better information is needed on policy effectiveness, especially for recent years since the cigarette companies came to dominate the industry, and on the extent to which policies, such as media campaigns, are directed at SLT use. Better information is also needed about the timing of policy effects and the potential synergies or overlapping effects of policies as they relate to cigarette and SLT use. Another limitation is that SimSmoke considers only cigarette and SLT use, and does not include the use of other nicotine delivery products, such as cigars, water pipes and e-cigarettes, that may substitute for or complement the use of cigarettes and SLT. Growth in e-cigarette use between 2011 and 2015 [96,97] may explain the rapid reduction in cigarette use and the slowing growth of SLT use.

Conclusions

While the landscape for nicotine delivery products has dramatically changed in the last 10 years, some lessons can be gleaned from the modeling in this paper. With cigarettes still being the dominant form of nicotine delivery, cigarette-oriented policies may be an effective means, perhaps the most effective means, of reducing SLT use and possibly reducing the use of other nicotine delivery products, such as e-cigarettes. Policies directed at SLT use, especially those that affect youth and young adults, may also play a role, but it should be recognized that substitution of exclusive SLT use (which is relatively low risk) for cigarette use can reduce overall harms. In developing a coherent policy approach, it will be important to monitor the use of other products, such as cigars and e-cigarettes. In addition, it will be important to monitor the marketing and pricing policies of cigarette companies, which have strong incentives to protect the high profit margins of cigarettes.
Nanoparticle albumin-bound paclitaxel (nab-paclitaxel) as second-line chemotherapy in HER2-negative, taxane-pretreated metastatic breast cancer patients: prospective evaluation of activity, safety, and quality of life

Background: A prospective, multicenter trial was undertaken to assess the activity, safety, and quality of life of nanoparticle albumin-bound paclitaxel (nab-paclitaxel) as second-line chemotherapy in HER2-negative, taxane-pretreated metastatic breast cancer (MBC).

Patients and methods: Fifty-two women with HER2-negative MBC who were candidates for second-line chemotherapy for the metastatic disease were enrolled and treated at three centers in Northern Italy. All patients had previously received taxane-based chemotherapy in the adjuvant or first-line metastatic setting. Single-agent nab-paclitaxel was given at the dose of 260 mg/m2 as a 30-minute intravenous infusion on day 1 of each treatment cycle, which lasted 3 weeks, in the outpatient setting. No steroid or antihistamine premedication was provided. Treatment was stopped for documented disease progression, unacceptable toxicity, or patient refusal.

Results: All of the enrolled patients were evaluable for the study endpoints. The objective response rate was 48% (95% CI, 31.5%-61.3%) and included complete responses in 13.5%. Disease stabilization was obtained in 19 patients and lasted >6 months in 15 of them; the overall clinical benefit rate was 77%. The median time to response was 70 days (range 52-86 days). The median progression-free survival time was 8.9 months (95% CI, 8.0-11.6 months, range 5-21+ months). The median overall survival has not yet been reached. Toxicities were expected and manageable, with good patient compliance and preserved quality of life in patients given long-term treatment.

Conclusion: Our results showed that single-agent nab-paclitaxel 260 mg/m2 every 3 weeks is an effective and well tolerated regimen as second-line chemotherapy in HER2-negative, taxane-pretreated MBC patients, and that it produced interesting values of objective response rate and progression-free survival without the concern of significant toxicity. Specifically, the present study shows that such a regimen is a valid therapeutic option for that 'difficult to treat' patient population represented by women who at the time of disease relapse have already received the most active agents in the adjuvant and/or metastatic setting (ie, conventional taxanes).

Introduction

Metastatic breast cancer (MBC) has always been a challenging disease to treat because of its poor prognosis and 5-year survival rate of only 23%-26%. 1,2 Data from population-based studies and analysis of clinical trials show that the outcome for women with MBC is slowly but steadily improving, as the risk of death is decreasing by 1%-2% each year 3,4 and the median overall survival (OS) has increased from 18 to 28 months in recent years. 5-8 It is likely that the greatest improvement is related to the development and widespread availability of modern systemic therapies, including combinations with targeted biological agents in different breast cancer subtypes, that have been proved effective in increasing response rates, progression-free survival (PFS), and OS. 7,9-11 However, therapeutic goals in the metastatic setting remain palliative in nature and are aimed at controlling symptoms, improving and maintaining quality of life (QoL), and prolonging survival, all while carefully balancing treatment efficacy and toxicity. 12-14
Currently, taxanes are considered the most effective cytotoxic drugs for the treatment of MBC, both in monotherapy and in combined schedules, and have a proven survival benefit greater than those of other types of chemotherapy. 15,16 According to the most recent international guidelines, paclitaxel and docetaxel, the two most commonly used taxanes against breast cancer, are the agents of choice in patients progressing after anthracycline-containing chemotherapy. 17,18 Despite their clinical activity, the use of taxanes could be limited by significant toxicities observed in treated patients; most notably, effects such as hypersensitivity reactions and peripheral neuropathy remain major challenges. Premedication with corticosteroids and antihistamines before taxane administration is mandatory but causes additional side effects. 19-21

Nanoparticle albumin-bound paclitaxel (nab-paclitaxel) is a solvent-free colloidal suspension of paclitaxel and human serum albumin; this medication exploits the physiological transport of albumin from the bloodstream via the endothelium of the blood vessels. This system may also allow better delivery of the drug to the tumor microenvironment, and thus it is associated with more linear pharmacokinetics. 22 Nab-paclitaxel was developed to take advantage of the antitumor activity of conventional paclitaxel. After the completion of Phase I and pharmacokinetic studies to determine the maximum tolerated dose and optimal dosing, 23,24 a 300 mg/m2 regimen of nab-paclitaxel every three weeks (q3w) was tested in a Phase II trial on 63 MBC women, 59% of whom had prior exposure to anthracyclines. An objective response rate (ORR) of 48% was achieved (41% in the pretreated patients, 64% in those chemotherapy-naïve for the metastatic disease); median time to progression and OS were 26.6 weeks and 62.6 weeks, respectively. 25

The efficacy and safety of nab-paclitaxel in the first- and second-line treatment of MBC was demonstrated in a large randomized Phase III trial comparing q3w nab-paclitaxel 260 mg/m2 and q3w paclitaxel 175 mg/m2. The study showed the statistically significant superiority of nab-paclitaxel in terms of ORR (33% for nab-paclitaxel versus 19% for paclitaxel, P=0.001). In particular, the ORR was 42% for nab-paclitaxel and 27% for paclitaxel in the first-line setting (P=0.029); in the second-line-or-greater setting, the ORR was 27% for nab-paclitaxel and 13% for paclitaxel (P=0.006). In patients with visceral dominant lesions, the tumor response rate was significantly higher (P=0.002) with nab-paclitaxel (34%) than with paclitaxel (19%). In patients <65 years of age, the tumor response rate was significantly higher (P<0.001) with nab-paclitaxel (34%) than with paclitaxel (19%). PFS was significantly longer with nab-paclitaxel than with paclitaxel (23 weeks versus 16.9 weeks, P=0.006). A trend in favor of nab-paclitaxel for OS was also observed (65.0 versus 55.7 weeks, P=0.046). Patients randomized to the experimental arm had a lower incidence of grade 4 neutropenia (9% versus 22%, P=0.046) despite a 49% higher taxane dose. Grade 3 sensory neuropathy was more common in the nab-paclitaxel arm (10% versus 2%, P<0.01), with a median time of improvement to a lower grade of 22 days for the nab-paclitaxel group and 79 days for the paclitaxel group, respectively. 26
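For readers who want to sanity-check comparisons of this kind, a two-proportion z-test is one standard way to compare response rates between arms. The sketch below is purely illustrative: the arm sizes are hypothetical placeholders, not the trial's actual enrollment, and the trial itself may have used a different test.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple:
    """Two-sided z-test for the difference between two response rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical arm sizes of 200 each, with ORRs of 33% and 19%
print(two_proportion_z_test(66, 200, 38, 200))  # z ~ 3.2, P ~ 0.001
```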
The results from the pivotal Phase III study led to the regulatory approval of nab-paclitaxel for the treatment of MBC by the US Food and Drug Administration in 2005 as monotherapy with a recommended dose of 260 mg/m2 as a q3w regimen. 27 In Europe, it is licensed for use in adult patients whose disease has progressed despite first-line treatment for metastatic disease and in whom standard, anthracycline-containing therapy is not indicated. 28

The next logical step in the clinical development of nab-paclitaxel was the investigation of a weekly schedule. In a direct comparison between weekly (100 mg/m2 or 150 mg/m2) nab-paclitaxel, q3w nab-paclitaxel (300 mg/m2), and docetaxel (100 mg/m2), each dose and schedule of nab-paclitaxel was superior to docetaxel in terms of ORR and PFS as a first-line treatment for MBC, and nab-paclitaxel had a favorable toxicity profile. 29,30 Additional studies have further demonstrated that the administration of weekly nab-paclitaxel is both safe and effective, even in heavily pretreated, taxane-refractory patients 31 or in combined regimens with other cytotoxic or targeted agents. 32-35 To date, little information is available regarding the approved q3w schedule in the real-life clinical context, because most of the data have been provided by post hoc or retrospective analyses. 36-38 Finally, the impact of such a treatment option on patient QoL has not been specifically evaluated in this setting.

Presented here are the results of a single-arm, multicenter, prospective study undertaken to assess the activity, safety, and impact on QoL of q3w nab-paclitaxel as second-line chemotherapy in HER2-negative MBC patients previously treated with taxanes in the adjuvant or metastatic setting.

Patients and methods

Study design and endpoints

This prospective, multicenter trial was designed to evaluate the antitumor activity, safety, and QoL of q3w nab-paclitaxel in patients with MBC who were previously treated with taxanes. The study was conducted in compliance with the Helsinki Declaration. 39 The primary efficacy endpoint was the overall ORR, defined as the percentage of patients having either a complete response (CR) or partial response (PR). The exact binomial method was used to compute the 95% confidence interval (CI) for the ORR. A sample size of 52 MBC patients was targeted to ensure that the lower limit of the 95% CI exceeded 50% of the overall response rate. Secondary objectives included safety, QoL and treatment compliance evaluation, PFS, and OS.
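The exact binomial interval referred to above is conventionally the Clopper-Pearson interval; assuming that is the method intended here, it can be computed from the beta distribution as sketched below. The responder count in the example is illustrative.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple:
    """Exact (Clopper-Pearson) binomial confidence interval for a response rate."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# Illustration: 25 responders out of 52 patients (~48% ORR)
print(clopper_pearson(25, 52))  # roughly (0.34, 0.62)
```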
Patient selection

Each eligible patient had to fulfill all the following criteria: 1) be histologically or cytologically confirmed to have locally advanced or metastatic breast cancer; 2) have HER2-negative disease, defined as an immunohistochemistry score of 0-1+, or an immunohistochemistry score of 2+ and no gene amplification by fluorescence in situ hybridization; 3) have had no more than one prior chemotherapy for metastatic disease; 4) have an Eastern Cooperative Oncology Group (ECOG) performance status of ≤2; 5) be at least 18 years of age; 6) have adequate bone marrow (absolute neutrophil count ≥1,500 cells/µL, hemoglobin ≥9.5 g/dL, and platelet count ≥100,000 cells/µL), hepatic function (serum bilirubin ≤2.0 mg/dL; alanine transaminase, aspartate aminotransferase, and alkaline phosphatase ≤ double the upper normal limit), and renal function (serum creatinine ≤1.1 mg/dL); 7) have no active concomitant malignancies; 8) have a life expectancy ≥3 months; and 9) have at least one bidimensionally measurable target lesion documented by computed tomography scan or magnetic resonance imaging according to the Response Evaluation Criteria in Solid Tumors. 40 Patients may have had previous hormonal therapy as adjuvant treatment and/or treatment for metastatic disease if they had progressive disease and discontinued hormone therapy at study entry. Neoadjuvant and/or adjuvant chemotherapy was allowed. Patients had to have been treated with taxane-containing chemotherapy as adjuvant or first-line treatment for the metastatic disease. Previous radiation therapy was allowed if the measurable lesions were completely outside the radiation field and at least 4 weeks had elapsed prior to study entry. Bisphosphonate therapy for bone metastases was allowed; however, treatment must have been initiated prior to the first dose of the study medication.

Patients were excluded if they met any one of the following conditions: 1) had clinical signs of a central nervous system disorder, brain metastases, or leptomeningeal infiltration; 2) had a history of other cancers except for radically resected carcinoma of the uterine cervix or nonmelanoma skin cancer; 3) had poorly controlled medical disorders (diabetes, hypertension, infection); 4) had pre-existing peripheral neuropathy of grade ≥1 based on National Cancer Institute Common Toxicity Criteria (NCI-CTC) Version 2.0; 41 5) were pregnant or lactating.

Baseline staging consisted of a complete clinical examination; chest and abdomen computed tomography scans with contrast enhancement, positron emission tomography, or computed tomography and/or X-ray and abdominal ultrasound; bone isotope scan; electrocardiogram and echocardiography with left ventricular ejection fraction evaluation; complete blood count; and routine biochemistry. Written informed consent was obtained from each patient before enrollment in the study.

Treatment and procedures

All of the enrolled patients were treated in the outpatient setting. Single-agent nab-paclitaxel was given at the dose of 260 mg/m2 as a 30-minute intravenous infusion on day 1 of each 3-week cycle. A standard antiemetic regimen with 5-HT3 receptor antagonists was given; no premedication to prevent hypersensitivity reactions was provided. Treatment could be delayed for a maximum of 2 weeks in case of hematological toxicity, febrile neutropenia, sepsis, or any other grade 3-4 nonhematological toxicity.
Dose adjustments for nab-paclitaxel (with a dose reduction first by 25% and then by 50%) were planned to correspond with the type and grade of observed toxicity when appropriate. If an adverse event required dose interruption, the nab-paclitaxel dose was reinitiated at the start of a treatment cycle if the patient's absolute neutrophil count was ≥1,500 cells/µL, the patient's platelet count was ≥100,000 cells/µL, and any other toxicity had resolved to grade ≤1. Patients experiencing grade 3-4 neutropenia, with or without fever, or grade ≥2 symptomatic anemia could receive hematological support with granulocyte colony-stimulating factor or erythropoietin. Treatment was administered until documented disease progression, unacceptable toxicity, or patient refusal. 40

Responses were evaluated during every second chemotherapy cycle with repeated clinical and appropriate radiological assessments based on the extent of the disease defined at baseline. A patient was considered assessable for response if she received a minimum of two cycles of treatment. Overall response was defined as the best confirmed response detected in each patient from the date of enrollment until the end of the study. Response duration was computed from the initiation of treatment to the first evidence of disease progression for all responsive patients. Objective response rates (ORR) and clinical benefit rates (defined as the sum of the number of patients who achieved a CR, the number of patients who achieved a PR, and the number of patients whose disease remained stable for a minimum of 6 months) were tabulated together with 95% CIs, following the exact method. Subset analysis according to baseline characteristics was performed for ORR. PFS, defined as the time from the date of enrollment to the first documented progression, and OS, defined as the time between study enrollment and date of death, were estimated using the Kaplan-Meier method. 42 All treated patients were included in the intent-to-treat (ITT) analysis and were analyzed for safety.

Toxicity was monitored by clinical evaluation, complete blood cell count, and full serum chemistry before each cycle. Cardiac assessment was performed by clinical evaluation and by electrocardiography and echocardiography with left ventricular ejection fraction measurements at baseline and, thereafter, when clinically indicated. Toxicity was graded according to NCI-CTC, version 2.0. 41 Patients who received at least one cycle of therapy were considered evaluable for safety analysis. QoL was measured at baseline and then at the start of every cycle by using the self-administered European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Breast 23 (EORTC QLQ-BR23, Italian translation). 43 For a more accurate evaluation of treatment compliance, patients were also asked to concomitantly complete an institutionally validated questionnaire in which patients' subjective perceptions of the tolerability of the most recent therapy prior to the start of the study and of the therapy provided during the study were graded as 'very good', 'good', 'satisfactory', or 'insufficient'.

Patient population

From February 4, 2011 to May 11, 2013, 52 consecutive MBC patients were enrolled and treated in three centers in Northern Italy. The main patient characteristics at baseline are reported in Table 1. The median age was 53 years (range 33-71 years), and the ECOG performance status was 0-1 in 92% of cases.
The median time from initial diagnosis was 28 months (range 19-57 months); about 35% of patients had a disease-free interval of ≤2 years. Visceral involvement was present in 67% of cases, and more than 70% of patients had metastases at ≥2 sites. All patients received prior adjuvant therapy, anthracycline-based in 27% and taxane-based in 65% of cases. Moreover, all of the patients had received one prior regimen as first-line treatment for the metastatic disease; this treatment consisted of taxane-based chemotherapy.

Treatment activity

All patients were evaluable for the primary study endpoint (Table 2). The ORR was 48% (95% CI, 31.5%-61.3%) and included CRs in 13.5%. Disease stabilization was reached in 19 patients and lasted more than 6 months in 15 of them; the overall clinical benefit rate was 77%. The median time to response was 70 days (range 52-86 days). In Table 3, the rate of responding patients in the whole series is broken down by the main baseline characteristics of patients, tumor and pretreatment that could potentially affect the chance of response. The CIs suggest that none of the considered variables significantly affected the probability of response. However, with limitations due to the small sample size, it appears that elderly patients (those older than 65 years of age), patients whose ECOG performance status was poor (1-2), and patients whose breast cancer onset was more than 2 years earlier had low response rates. By contrast, young patients (those under 65 years of age) with triple negative disease, patients whose disease-free interval from breast cancer diagnosis was ≤2 years, and patients whose predominant metastases were hepatic were highly responsive to treatment. The curve based on Kaplan-Meier estimates of PFS in the ITT population is reported in Figure 1. The median PFS was 8.9 months (95% CI, 8.0-11.6 months, range 5-21+ months). As of the data cut-off, 11 women (21%) had died. Therefore, the median OS was not reached because the data were not mature (ie, 75% of the patients were censored for the endpoint). Overall, 36 women received further chemotherapy at the time of disease relapse, three had third-line hormone therapy, and two other patients were treated with palliative radiotherapy for symptomatic metastatic bone disease.

Toxicity and compliance

All of the enrolled patients were assessable for safety analysis (Table 4). A total of 378 chemotherapy cycles were given to the 52 patients. The median number of courses was six per patient (range 4-26 cycles). Treatment was well tolerated; 92% of patients received nab-paclitaxel at the protocol-specified dose throughout the study, and 40% of them had ≥9 cycles. The median relative dose intensity was 98%. Neutropenia was the most common hematological toxicity; grade 3-4 toxicity was observed in 11 patients (21%), corresponding to 5% of administered cycles. Granulocyte colony-stimulating factor support was given to five patients (9.6%) during eight cycles of treatment.
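Two of the quantities reported here are easy to illustrate. First, the Kaplan-Meier product-limit estimator used for the PFS curve in Figure 1: the sketch below handles right-censoring (the '+' in the range 5-21+ months) but, for simplicity, does not specially handle tied event times. The data are hypothetical.

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the progression-free survival function.
    times: follow-up in months; events: 1 = progression observed, 0 = censored."""
    at_risk = len(times)
    survival, curve = 1.0, []
    for t, e in sorted(zip(times, events)):
        if e:  # progression event at time t
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1  # events and censored observations both leave the risk set
    return curve

# Hypothetical PFS data (months)
print(kaplan_meier([5, 7, 8, 9, 11, 21], [1, 1, 0, 1, 1, 0]))
```

Second, the relative dose intensity: delivered dose per unit time divided by the planned dose intensity of 260 mg/m2 every 3 weeks. The inputs in the example are again hypothetical.

```python
def relative_dose_intensity(delivered_mg_m2: float, weeks_on_treatment: float,
                            planned_mg_m2_per_cycle: float = 260.0,
                            cycle_weeks: float = 3.0) -> float:
    """RDI = delivered dose intensity (mg/m2/week) over planned dose intensity."""
    delivered_intensity = delivered_mg_m2 / weeks_on_treatment
    planned_intensity = planned_mg_m2_per_cycle / cycle_weeks
    return delivered_intensity / planned_intensity

# Six full-dose cycles with a single one-week delay (19 weeks instead of 18)
print(round(relative_dose_intensity(6 * 260.0, 19.0), 2))  # ~0.95
```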
Neutropenia was usually brief and not cumulative, and no episodes of febrile neutropenia occurred; chemotherapy administration was delayed by 1 week in nine out of 52 women (17%) because of hematological toxicity or patient convenience (six and three patients, respectively). As expected, peripheral neuropathy was the most significant nonhematological toxicity: 22 women (42%) experienced grade 1-2 neuropathy during treatment. Three patients (5.8%; one at cycle 3, one at cycle 5, and one at cycle 8) required dose reduction by 25% because of grade 3 sensory neuropathy. The onset of such toxicity occurred after a median of six treatment cycles (range 3-14 cycles); the median time to improvement to a lower grade was 19 days (range 16-26 days). Nausea/vomiting was mild on standard antiemetic regimens. Self-limited mucositis was detected in ten patients, eight of whom had grade 1 severity and two of whom had grade 2 severity. Transient and reversible increases in serum transaminases were observed in four patients. Grade 2 fatigue occurred in three women. All patients experienced treatment-related alopecia, which was of grade 1 in 51% of cases and grade 2 in 42% of cases; in four patients, grade 3 hair loss occurred over 32 treatment courses. No hypersensitivity reactions were documented. All of the observed treatment-related adverse events were fully resolved at the time of the first follow-up visit, and no long-term toxicity was detected. Overall, the toxicity profile in women aged ≥65 years did not significantly differ from that of younger patients. The short infusion time and the absence of premedication allowed good patient compliance across the whole population.

Information on treatment tolerability was available for all of the treated patients (Table 5). 'Very good' tolerability was reported by 28%-33% of the patients who received nab-paclitaxel, a proportion that was higher than the 19% reported for the patients' last therapy. The percentage of patients reporting 'insufficient' tolerability did not exceed 6%. Overall, 60% of all of the patients reported an improvement in tolerability after switching to nab-paclitaxel from their last therapies, mostly from 'satisfactory' to 'good' or from 'good' to 'very good'.

QoL assessment

The QLQ-BR23 complementary questionnaire was found to be feasible and was easily completed by the majority of patients: 50/52 (96%) women returned the completed modules at the start of each chemotherapy cycle. Figure 2 provides an overall profile of the investigated parameters of QoL of all enrolled patients during the first nine cycles of treatment. No significant deterioration of QoL was observed for most of the evaluated aspects, such as systemic therapy side effects, breast and arm symptoms, and distress over hair loss; a decrease in the median score for body image, which was not statistically significant, was observed during cycles 5-6, while scores for future perspectives improved during treatment. Interestingly, such an improvement was maintained in women receiving prolonged treatment (eight courses and over).

Discussion

The treatment of MBC is evolving as researchers continue to aim at improving the QoL, the duration of remission, and, in the last couple of decades, the OS. Today, there is no standard of care for a disease as heterogeneous and complex as HER2-negative MBC, and many criteria need consideration when selecting not only the best drug but also the best regimen. 12-14
Weekly 80 mg/m2 paclitaxel and q3w 75-100 mg/m2 docetaxel are considered the gold standard in MBC on the basis of the results of randomized clinical trials. 44-48 Moreover, how these agents stand relative to each other in terms of efficacy remains difficult to judge. The issue of the sequential versus the combined chemotherapy approach in the metastatic setting remains unresolved. 49-52 Therefore, the choice of the optimal therapeutic strategy is made on an individual basis and with consideration of both objective clinical/biological parameters (age, HER2 status, disease-free interval, previous neo- and adjuvant treatments, metastatic sites, predominant symptoms) and the patient's attitudes and preferences. Indeed, the increasing use of anthracycline- and taxane-based chemotherapy in the early stage of breast cancer makes the management of relapsing disease more difficult, and new active therapeutic options need to be identified for such a 'difficult to treat' patient population. Despite the current lack of a standard of care for the metastatic disease, a considerable proportion of women receive multiple lines of treatment, including taxane rechallenge, that are prescribed on the basis of previous efficacy and tolerance; the results of these treatments justify this practice. 53-56 However, there are very few data available that detail the outcomes of this pragmatic approach for MBC. 30,57-59 Specifically, in second-line treatment, the challenge is how to deliver full doses of the chosen agents without causing unacceptable levels of toxicity.

The primary objective of this prospective, multicenter study was to assess the activity and tolerability of the approved single-agent q3w nab-paclitaxel regimen as a second-line treatment in women previously treated with taxane-based chemotherapy in the adjuvant or metastatic setting. To the best of our knowledge, this is the first prospective study specifically focused on this issue, since currently available data are derived from trials testing the nab-paclitaxel weekly schedule or from post hoc retrospective analyses. A clinical demonstration that nab-paclitaxel does not show absolute cross-resistance with first-generation taxanes was first provided by Blum et al in 2007. 31 In this Phase II trial, treatment with weekly nab-paclitaxel at 100 mg/m2 or 125 mg/m2 was associated with an ORR of 14%-16%, a median PFS of 3.0-3.5 months, and a median OS of 9.1-9.2 months in 181 women with MBC who were heavily pretreated with taxanes. Among the 75 women given 125 mg/m2 nab-paclitaxel, the disease control rate was 45% in those previously treated with conventional paclitaxel and 46% in those with prior exposure to docetaxel; in the whole population, median survival was similar for responding patients and those with disease stabilization ≥16 weeks. 31

The activity of q3w nab-paclitaxel observed in our study was higher than that previously reported in taxane-pretreated MBC patients, 31,36,37 but a cross-comparison of results is difficult because of the different characteristics of the enrolled patients. We reported an ORR of 48%, including 13% complete responses, in 52 evaluable patients; 13 out of 24 women (54%) previously given paclitaxel/bevacizumab or docetaxel/capecitabine as a first-line treatment for the metastatic disease demonstrated an objective response.
Overall, 77% of patients had a clinical benefit from their second-line treatment with nab-paclitaxel, since 15 cases of stable disease lasting more than 6 months were observed. A short time to response was also noted; 98% of responding patients achieved maximum response by cycle 3. These findings appear important in the practical management of MBC patients, since tumor response to chemotherapy can lead to restoration of organ function, symptom relief, and improvement in patient QoL. On the other hand, in this setting, obtaining prolonged stabilization of disease can provide the same clinical advantage as an objective response. Findings from the randomized Phase II and III studies and subsequent exploratory analyses suggest that patients who achieved a CR or a PR with nab-paclitaxel appeared to live longer than those who did not receive nab-paclitaxel. This trend was observed across various patient subgroups. However, whether tumor response could be indicative of a survival benefit with nab-paclitaxel is unknown, and the role of surrogate endpoints in predicting the OS benefit of chemotherapy remains unclear as well. 60-62

The results of our statistical analysis, which was performed in order to identify factors potentially predictive of treatment response and clinical outcome, suggest a higher chance of response for women who are usually in the 'poor prognosis' subset: women under 65 years of age; women affected by the triple negative subtype; women whose disease-free interval (DFI) from the time of diagnosis is short; and women whose predominant metastatic disease is in the liver. Similar data were previously reported in a post hoc analysis of two randomized trials of nab-paclitaxel that aimed to examine whether patients with DFI ≤2 years and visceral dominant metastases demonstrate outcomes similar to the ITT population in these studies. The results of the analysis showed that the treatment benefits observed with nab-paclitaxel, but not with paclitaxel or docetaxel, in these trials also apply to women with poor prognostic factors. 38 Recently reported data further support the effectiveness of the drug in MBC patients with features typically associated with more aggressive disease (including the triple-negative phenotype, a higher number of metastatic sites, the presence of visceral metastases and a short DFI), both in the first-line setting and in the context of progressive or resistant disease. 63-65

The secondary endpoints of our study were treatment safety and tolerability, including a prospective assessment of QoL. Overall, treatment-related toxicity was manageable in the outpatient setting, and in no case did treatment have to be stopped because of unacceptable side effects or patient refusal. Specifically, severe peripheral neuropathy was less frequent than expected and than previously reported in taxane-pretreated populations: 26,31 only three patients, all previously given docetaxel-based chemotherapy for the metastatic disease, experienced grade 3 sensory neuropathy, and these episodes occurred during the third, fifth, or eighth cycle of treatment. All of these episodes were easily managed with dose reduction and treatment delay until improvement to grade 2.
No significant differences in the overall safety profile were detected in elderly patients; this finding confirms results previously reported for weekly nab-paclitaxel in patients ≥65 years old. 26,29,66 As described previously, in our study, nab-paclitaxel given at 260 mg/m2 every 3 weeks resulted in good patient compliance, even for patients given long-term treatment. Treatment tolerability, as reported by the patients, was 'very good' or 'good' in more than 80% of the whole cohort. Interestingly, 31 patients (60%) reported better tolerability of therapy with nab-paclitaxel than with their last therapy, which consisted of docetaxel- or paclitaxel-based chemotherapy in 46% of them. No significant deterioration of QoL was detected for most of the evaluated aspects over the course of treatment. The decrease we observed in median scores for body image during cycles 5-6 of therapy, which was not statistically significant, is probably related to the onset of sensory neuropathy, which impacts daily activities. Interestingly, we observed that scores for the item of future perspectives improved over treatment. Because the time of responding to the QoL questionnaires coincided with the instrumental re-evaluation of the disease, this finding could reflect the better functioning of patients continuing therapy, who were informed by the physician that the treatment had a positive effect.

Despite more than 40 years of clinical research, treatment choices beyond the first line in MBC are still difficult to determine. Drug selection and combination are complicated because the majority of patients have been exposed to docetaxel and/or paclitaxel by the time of disease relapse. The introduction of nab-paclitaxel opened a novel scenario in the treatment of MBC. More options are now available for choosing the best drug for each patient for a particular set of benefits, allowing taxane-based therapy to be tailored in the decision-making process. The challenge of picking the adequate dose for the individual patient will depend on the therapeutic index of the different possible regimens. The issue with the use of nab-paclitaxel in clinical practice is linked to the probability of sensory neuropathy. As elegantly highlighted in a recent editorial, 67 further investigation is required to better manage this 'difficult-to-quantify' toxicity, since data in MBC are equivocal at the present time. For clinical practice, the time to reversibility of neuropathy appears to be an important variable to consider when choosing the dose and schedule of nab-paclitaxel for treating MBC patients. The data reported in this study confirm that sensory neuropathy occurs late in the course of treatment with the q3w schedule, also in taxane-pretreated patients, and that adequate management by dose reductions or treatment delays allows the maintenance of an adequate dose intensity of the drug.

Conclusion

In conclusion, our study demonstrated that q3w nab-paclitaxel produces good antitumor activity with manageable toxicity and no significant deterioration of QoL as second-line chemotherapy in MBC patients, confirming the previously reported efficacy data. Specifically, our study shows that such a regimen is a valid therapeutic option for that 'difficult to treat' patient population represented by women who at the time of disease relapse have already received the most active agents in the adjuvant and/or metastatic setting, such as taxanes.
To further optimize the role of nab-paclitaxel in the management of taxane-pretreated patients, future clinical research in this setting should include investigating specific patient and tumor characteristics that can be used as biomarkers to potentially predict the response to this therapy.
Introduction to the special issue: racialized bordering discourses on European Roma

ABSTRACT In the introduction to this special issue, we briefly introduce everyday bordering as the theoretical framing for the papers and explore its relationship to the process of racialization. We introduce our situated intersectional approach to the study of everyday bordering, illustrating the importance of capturing the differentially situated gazes of a range of social actors. We then go on to contextualize the importance of this framing and approach in a wider discussion of Roma in Europe before concluding with a summary of the particular contributions of each of the papers in this special issue to these debates.

expressed by violence, hate speech, exploitation, stigmatisation and the most blatant kind of discrimination". 1 The articles in this issue focus separately and comparatively on several European countries (specifically Hungary, Finland and the UK) and show the racialized constructions of Roma in Europe. The category and boundaries of the Roma (and related communities such as Romani Gypsies and Travellers) have always been contested (Acton 1997; Hancock 2002; Matras 2002), but in recent years we have seen a growing movement of self-determination encompassing them all, at least nominally, in the European Union (EU) and the United Nations (Feys 1997; Klímová-Alexander 2007) under the umbrella term of Roma. We therefore choose to use this label to include all the heterogeneous collectivities discussed in this issue. Special funds and policies aimed at "integrating" and improving the welfare of Roma people have been developed, but at the same time there has been no significant change in the social processes locating them as "Others". After the collapse of the Soviet Union and the enlargement of the EU, differentiation between "indigenous" and migrant Roma began to emerge within racialized discourses towards Roma. In recent populist debates on East European migration to the UK, for example, there has also been a collapse of the categories "Roma" and "Romanians", with a focus on the actions of the former being used to demonize the latter (see Wemyss and Cassidy 2017). Most of the scholars writing for this issue have been studying the social, economic and political contexts of Roma populations as part of a large European research project on EUBorderscapes and everyday bordering. 2 Within the project, the racialized constructions of Roma in media discourses, as well as intersectional narratives of everyday social and state borderings, which differentiate, rather than homogenize, different groupings of Roma people, have been the focus of particular strands of the research and analysis. The first part of this introductory paper focuses on the relationship between racism in general, and towards Roma people in particular, and intersectional situated constructions of everyday bordering. It then describes in broad brush the history of and policies towards Roma people in Europe before introducing the specific articles in this special issue.

Racism and everyday bordering

Racism, or, rather, the process of racialization, is a discourse and practice which constructs immutable boundaries between collectivities and is used to naturalize fixed hierarchical power relations between them (Anthias and Yuval-Davis 1992; Goldberg 2009; Rattansi 2007; Solomos and Back 1996).
Barth ([1969] 1998) and others following him have argued that it is the existence of ethnic (and racial) boundaries, rather than any specific "essence" around which these boundaries are constructed, that is crucial in processes of ethnicization and racialization. Any physical or social signifier, from the colour of the skin to the shape of the elbow to accent or mode of dress, can be used to construct the boundaries which differentiate between "us" and "them". As the different articles in this issue show, although some of the racialization of the Roma can be seen as linked to the white majority's perceptions of Roma as "dark skinned", 3 it is traditionally linked mainly to the anti-nomadism of sedentary populations (see e.g. Kabachnik 2010; McVeigh 1997). However, it is important to emphasize that the racialization of Roma continues also when they become sedentary (as a result of a variety of forced and voluntary social practices and policies) but continue to constitute, to a large extent, a distinct segment of the labour market. In this way, the Roma case echoes Stuart Hall's famous articulation of "class is the modality in which race is lived" (Hall [1978] 1996). However, to describe the contemporary racialization of Roma only as an intersection of "race" and class is an oversimplification. This racialization is closely linked to particular political projects of belonging (Yuval-Davis 2011) in which Roma are constructed and reconstructed as an "other" by continuous processes of everyday bordering. Different political projects of belonging determine where, and according to which criteria, the boundaries between the collective self and others are delineated, as well as the permeability and solidity of these boundaries. State borders are but one of the technologies used to construct and maintain these boundaries. It is for this reason that contemporary border studies largely refer to "borderings" rather than to borders, seeing them more as dynamic, shifting and contested social and political spatial processes rather than just territorial lines (Newman 2006; van Houtum and van Naerssen 2002). However, these borders and boundaries are not just top-down macro social and state policies but are present in the everyday discourses and practices (Yuval-Davis, Wemyss, and Cassidy 2017) of different social agents, from state functionaries to the media to all other differentially positioned members of society. All of them are engaged in everyday borderings, however, in somewhat different ways, and it is for this reason that we need to add the analytical and methodological perspective of situated intersectionality to our study of everyday bordering (Yuval-Davis 2014).

Situated intersectionality

Intersectionality (e.g. Anthias 2012; Brah and Phoenix 2004; Crenshaw 1989; Hill Collins 1990; Yuval-Davis 2006) has become a major theoretical and methodological perspective in analysing social relations. Indeed, it is argued that it should be adopted as the most valid approach to analysing social stratification, as it is the most comprehensive, complex and nuanced and does not reduce social hierarchical relations to one axis of power, be it class, race or gender. The analysis in this special issue follows the specific approach to intersectionality that Yuval-Davis (2014) has named "situated intersectionality".
Fundamental to this approach is that intersectionality analysis should be applied to all people and not just to marginalized and racialized women, with whom the rise of intersectionality theory is historically linked, so as to avoid the risk of exceptionalism and of reifying and essentializing social boundaries. Epistemologically, intersectionality can be described as a development of feminist standpoint theory, which claims, in somewhat different ways, that it is vital to account for the social positioning of the social agent. Situated gaze, situated knowledge and situated imagination construct differently the ways we see the world. However, intersectionality theory was interested even more in how the differential situatedness of different social agents relates to the ways they affect and are affected by different social, economic and political projects. In this way it can no doubt be considered one of the outcomes of the mobilization and proliferation of different identity group struggles for recognition (Taylor 1994). At the same time it can also be seen as a response to some of the problems of identity politics (however important they have been historically in terms of mobilization and the exposure of different kinds of oppression), when they conflated social categories and social groupings, individuals and collectives, and suppressed the visibility of intra-group power relations and plural voices for the sake of raising the visibility of the social grouping/social category as a whole.

Methodologically, different intersectionality approaches have tended to use what McCall (2005) calls inter- or intra-categorical approaches. By the inter-categorical approach McCall means focusing on the way the intersection of different social categories, such as race, gender and class, affects particular social behaviour or the distribution of resources. Intra-categorical studies, on the other hand, are less occupied with the relationships among various social categories but rather problematize the meanings and boundaries of the categories themselves, such as whether black women were included in the category "women", or what the shifting boundaries are of who is considered to be "black" in a particular place and time. Our approach to the study of everyday bordering has seen the two as complementary, combining the sensitivity and dynamism of the intra-categorical approach with the socio-economic perspective of the inter-categorical approach. Another related issue concerns the importance of differentiating between people's positionings along socio-economic grids of power; their experiential and identificatory perspectives of where they (and others) belong; and their normative value systems (Yuval-Davis 2011, 12-18). These different facets of intersectionality analysis are related to each other but are also irreducible to one another. There is no direct causal relationship between the situatedness of people's gaze and their cognitive, emotional and moral perspectives on life. Our team has been able to analyse discourses on everyday bordering from the differentially situated gazes of different social agents in specific locations in several European countries (e.g. politicians, officials, activists, journalists, local residents of different ethnicities, both male and female). As can be seen in the articles in this issue which are concerned with media and contesting discourses, we were able to compare intersectional discourses in relation to different temporal as well as locational points. 4
Roma in Europe

There are currently between ten and twelve million 5 Roma living in Europe. Estimates are variable, in part because of the contested nature of Roma identity (Nirenberg 2010). The term Roma was first adopted at the inaugural World Romani Congress in London in 1971. We are aware of the fluid and heterogeneous nature of such self-identification, and a number of the papers in the special issue (cf. Wemyss and Cassidy) explore the impacts of homogenizing discourses in more detail. We use the term Roma as the endonym from the Romani language, meaning man, rather than other terms in common usage. Originally from the Indian subcontinent, by the time they were first documented in Europe in the fourteenth century, many were already enslaved and/or excluded and marginalized. Throughout the sixteenth century, as the population spread, other kingdoms across Europe put Roma to death, expelled them or deported them to colonies in the New World. Whilst some Roma left Europe for North America from the mid-1800s until the outbreak of the Second World War, these flows were relatively modest. In spite of the genocide of Roma under the Nazi regime, Central and Eastern Europe (CEE) was still home to large numbers of Roma at the end of the Second World War, many of whom were subjected to forced assimilation policies within the newly established state socialist regimes. However, as Ruzicka (2012) has argued, it is important that we do not mask the very different experiences of Roma under state socialism. Under socialism, many Roma were resettled in urban centres in the present-day Czech Republic, and these populations were more greatly affected by the "crisis" of transition (Sokol 2001): deindustrialization leading to high unemployment, and the regeneration of inner-city areas, which often displaced them from social housing (Keresztely, Scott, and Virag 2017). Recent academic research and human rights monitors have repeatedly identified a significant decline in the socio-economic status of Eastern European Roma/Gypsies, marked by deepening poverty and increasing levels of residential segregation (Barany 2002; Ladányi and Szelényi 2006). As a result of multiple national projects of belonging across Europe, which seek to exclude Roma, we have seen the emergence of a frame that posits Roma as a people that exist everywhere but belong nowhere. The enactment of processes of non-belonging in everyday life results in daily practices of segregation in schooling, housing, and recreation. These processes of everyday bordering in relation to Roma strengthen the majority population's identity (Fidyk 2013). Roma are effectively banished from the imagined communities of European nations (Anderson 1982). The collapse of state socialism led to emerging Roma engagement with political processes in the fledgling democracies, as well as new media and cultural programming in Romani languages. For the Roma, the opening up of channels to the rest of the world presented opportunities for greater international links. However, as Gheorghe (1991) also points out, the removal of state control over the media and other spheres of everyday life in the countries of CEE also led to increases in anti-gypsy discourses and even conflict and attacks on Roma people (Puxon 2000). Many of CEE's estimated eight million Roma sought asylum in the West from the mid-1990s.
In spite of NGO reports demonstrating institutionalized racism towards the Roma in the Czech Republic and Slovakia, their claims were largely refused on the basis that CEE countries were deemed safe, having the required legislative frameworks to protect minority rights (Guy 2003). Many more Roma live in Europe than are afforded European citizenship, due to systemic processes of exclusion, which make it difficult for them to meet the requirements of "residency-based" citizenship criteria (Guillem 2011). This is not to support the assumption that Roma or Romani culture is inherently or necessarily nomadic, an assumption which has often been central to exclusionary processes (Orta 2010; Pusca 2010). The process of EU accession and enlargement has been one of the key reasons for the emergence of a focus on Roma within EU policy circles. The EU has suggested that it and its member states have a "special responsibility towards the Roma". Not only are there many more Roma living in the EU since its eastward expansion, but they have also been highly visible in the East-West migration which has dominated the continent both prior to and following 2004. The extent of the exclusion of the Roma within the Union led the Commission to adopt a Framework to address the complex issues facing Roma people living in all its member states. However, the EU's framing of its approach to addressing Roma exclusion has been highly problematic, first and foremost because it bolsters national projects of belonging which exclude Roma by suggesting they are a "European" people. In addition, the EU's usual process of "norm-spreading", which is used to place pressure on member states to conform to particular ideals and values, has been strongly resisted by members because of the differing attitudes towards, and existing norms relating to, Roma. Although attempts to create a movement focusing on the rights of Roma have been limited by the heterogeneity of the population (McGarry 2012), there are many initiatives being undertaken by Roma activists across Europe. Calls to recognize the Roma as a nation without a state, which have their roots in the 1920s and 1930s, have increased since 1991 and particularly since the late 1990s. Initiatives incorporating Roma into mainstream anti-discrimination policies have largely been perceived as inadequate. It is thanks to the sustained efforts of activists in the heart of the EU's bureaucratic institutions in Brussels and elsewhere that the 2011 European Framework for National Roma Integration Strategies was adopted. Whilst organizations such as the European Roma Rights Centre (ERRC) and the European Roma Policy Coalition (ERPC) have broadly welcomed some of the EU's initiatives under the Framework to counter exclusion in the spheres of education, health, housing and employment, a joint statement issued in 2011 expressed their disappointment at the EU's failure to address anti-Gypsyism in member states (ERRC/ERPC 2011). Anti-Gypsyism lies at the heart of Roma exclusion, and the EU's Framework can hardly be successful whilst it fails to tackle the associated everyday manifestations of this phenomenon, which include intimidation, harassment and violence against Europe's Roma people. The ERRC continues to advocate for the Framework with partners via the ERPC. In addition, the Centre has also worked on growing its grassroots base by training activists across the region.
Some of its programmes also focus on training for professionals, for example in the legal field, as well as briefings for politicians and policy-makers in Brussels and beyond relating to key themes such as child protection and gender inequalities. Whilst the EU's efforts in tackling Roma discrimination should be recognized, there is inevitably the risk that, in Europeanizing the problems of Roma, it also Europeanizes the solution. This can lead to a homogenizing process in which the realities of local and national contexts and relations disappear. As Vermeersch cautions, "even if problems seem similar, causes may vary a lot from place to place and each community might possess different resources and dynamics to deal with these problems" (2012, 15). Anti-Gypsyism is by no means the same in every country. Roma as a reified ethnic group play different political and social roles within the domestic and international politics of different states.

We sought contributions that would highlight the multilevel complexities and diversity of Roma experiences of bordering discourses in different and shifting European contexts, that situated dominant and competing discourses about Roma socially and politically, and that sought out Roma voices challenging their representation. Within the framework of everyday bordering discussed above, several themes run through all the papers: the recognition of the long histories of discrimination experienced by Roma communities across Europe; the changing policies of the EU and the tension between inter-European de-bordering and the selective and restrictive immigration policies introduced as each state reacts to free movement in different ways; the continuing racism experienced by Roma people in their interaction with these bordering technologies; the homogenizing "racialized othering" and construction of Roma as a "criminal category" co-existing with the differentiations made between "indigenous" and "migrant" Roma central to the dominant bordering discourses; and the heterogeneity, contestations and agency of Roma populations.

The first paper engages with political and economic issues that contribute to the production of discourses about Roma by focusing on the increased dependency of Romani organizations and media on non-government donors, leading to the marginalization of Roma-led advocacy. Plaut explores how the Romani journalism that now dominates aims at intervening to challenge negative representations of Romani populations and at convincing non-Romani populations that Roma can be included in the wider European identity, drowning out Romani activism and advocacy in Roma-targeted media. The second paper presents an analysis of how discursive and material processes of urban regeneration in Budapest have contributed to the exclusion of long-standing Roma residents. Keresztély, Scott and Virag expose the political intentions of the local government to marginalize Roma families through redrawing social and spatial borders between social and ethnic groups living in the neighbourhood. The third paper extends the analysis beyond the European territorial frame to contrast media discourses in Hungary and Canada about the motivations of and reactions to Hungarian Roma migration to Canada since the 1990s. Varju and Plaut locate the competing discourses in relation to the shifting contexts of increasingly violent far-right politics in Hungary, economic pressures, and Canadian migration and welfare policies.
The fourth paper explores how Roma from Eastern Europe who have migrated to Finland navigate a "limboscape" in which indirect bordering techniques limit their access to social rights and welfare provision. Tervonen and Enache demonstrate that whilst Roma are clear targets of bordering regimes, such regimes are also set up to deal with other groups of "unwanted migrants". The government's prioritizing of this "hostile environment" has led to inadequate welfare provision, whilst migrant Roma employ diverse economic activities and transnational family networks to challenge the effects of such policies. A similarly "hostile environment" is the context of the fifth paper, which focuses on the bordering experiences of Roma and non-Roma migrants in the UK. Wemyss and Cassidy track the reproduction and contestation of discourses about EU migration associated with the ending of transitional controls, showing that as the restrictions on work by A2 citizens in the UK ended, negative discourses about them conflated diverse Roma and non-Roma groups, extending the border further into the lives of both groups in different and complex ways. The final paper compares how press discourses on the heterogeneous Roma populations of Hungary, Finland and the UK have, since the 1990s, worked as bordering processes differentiating between those who belong to their national collectivities and those who do not. Yuval-Davis, Varju, Tervonen, Hakim and Fathi relate national-level discourses about Roma to the political positions of the press and the politics of governments in the context of EU expansion, securitization and neo-liberal economies. The extent to which the media give space to Roma voices is shown to be influenced by the historical and political contexts of each state. Despite the more recent inclusion of Roma voices, the authors' conclusion that the trajectories of the discourses are towards more racialization, criminalization and exclusion and less collective recognition of Roma populations in the three countries resonates with the findings of the other contributors.
Tobacco use behaviors and views on engaging in clinical trials for tobacco cessation among individuals who experience homelessness

Background

Clinical trials that include contingency management for smoking cessation have shown promising results for short-term quitting, but none have explored this approach for long-term abstinence in people experiencing homelessness. We designed a clinical trial of an extended contingency management intervention for smoking cessation for people experiencing homelessness. This study has two aims: (1) to explore tobacco use behaviors and views toward smoking cessation, and (2) to explore factors influencing the acceptability of engaging in such a trial in a sample of adult smokers experiencing homelessness.

Methods

We administered a questionnaire to obtain information on tobacco use behaviors and conducted in-depth, semi-structured interviews with 26 patients who had experienced homelessness and were patients at a safety net health clinic in San Francisco, California, where we planned to pilot the intervention. We obtained information on triggers for tobacco use, prior cessation experiences, attitudes toward cessation, attitudes toward engaging in a clinical trial for cessation, and factors that might influence participation in our proposed contingency management clinical trial. We analyzed transcripts using content analysis.

Results

Participants described the normative experiences of smoking, co-occurring substance use, and the use of tobacco to relieve stress as barriers to quitting. Despite these barriers, most participants had attempted to quit smoking and most were interested in engaging in a clinical trial as a method to quit smoking. Participants noted that desirable features of the trial include: receiving financial incentives to quit smoking, having a flexible visit schedule, having the study site be easily accessible, and having navigators with lived experiences of homelessness.

Conclusion

A patient-centric clinical trial design that includes incentives, flexible visits and navigators from the community may increase the feasibility of engaging individuals experiencing homelessness in clinical trials.

Introduction

Over 70% of people experiencing homelessness report current use of tobacco [1,37,38], compared to 14% in the general population [2]. Tobacco-attributable cancer and heart disease are the leading causes of morbidity and mortality among individuals experiencing homelessness who are aged 50 years and older [1,3]. People experiencing homelessness who smoke make quit attempts as frequently as the general population [1], but they face a unique set of barriers that hinder long-term abstinence [4]. Inadequate access to health care, smoking cessation services or smoke-free living environments is one set of structural barriers that hinder cessation attempts and increase relapse to smoking after a quit attempt among adults experiencing homelessness [4]. The high prevalence of co-occurring mental health and substance use disorders among people experiencing homelessness is associated with high nicotine dependence and low abstinence rates [5]. The COVID-19 pandemic has worsened existing challenges to obtaining healthcare access among adults experiencing homelessness [6]. Smoking cessation may be less of a priority, particularly when people use tobacco to cope with the stressors of homelessness and unmet subsistence needs [7].
There is an urgent need for interventions that increase the efficacy of quit attempts to initiate and sustain long-term abstinence among individuals experiencing homelessness. Smoking cessation interventions that include behavioral counseling and pharmacotherapy are feasible and acceptable for people experiencing homelessness and have been efficacious in achieving short-term abstinence in clinical trials [4]. However, the ten randomized controlled trials (RCTs) of smoking cessation interventions for adults experiencing homelessness have demonstrated abstinence rates between 9% and 17% at 6-month follow-up [8][9][10][11][12][13][14][15][16], rates that are substantially lower than the observed 30%-40% abstinence rates in the general population [17].

Contingency management, a behavior change strategy that reinforces positive health behaviors with incentives (e.g., cash), can effectively reduce tobacco and substance use behaviors in the general population [18][19][20][21]. Smokers who abstain periodically receive modest incentives that reinforce healthy behavior as it is sustained over time [19]. Short-term RCTs of tobacco cessation interventions with contingency management among populations experiencing homelessness have found higher abstinence rates in the intervention than in control groups. At 4-8 weeks follow-up, rates of 22%-48% have been reported in intervention conditions compared to 8%-9% in controls [8,15]. While these studies hold promise, they are limited by small sample sizes and the short duration of the intervention (8 weeks), which may not be sufficient for individuals with high levels of nicotine dependence [22,23]. Studies are needed to evaluate the feasibility, acceptability, and efficacy of extended-duration (i.e., 6 months or more) contingency management smoking cessation interventions among adults experiencing homelessness.

The majority of individuals experiencing homelessness in the U.S. seek health care in safety net health settings (Care, 2011b). These safety net health clinics could provide optimal settings to scale up cessation interventions that are integrated with healthcare delivery. Features of these clinical trials that may increase engagement and optimize success with quitting among smokers experiencing homelessness include staff with lived experiences of homelessness, flexibility in patient contact schedules, and the ability to provide nicotine replacement therapy in frequent allotments [24]. We developed a pilot protocol for an RCT of an extended-duration contingency management tobacco cessation intervention for people experiencing homelessness who are engaged in primary care in a safety net clinic in San Francisco. In preparation for the RCT, we interviewed patients who reported smoking and current or prior experiences of homelessness at the safety net clinic where we planned to conduct the trial, to obtain information on their tobacco use, prior experiences with tobacco cessation, their willingness to engage in a clinical trial for cessation, and their perspectives on the feasibility of the proposed contingency management intervention protocol. The primary research aims were to: (1) explore tobacco use behaviors and views toward smoking cessation, and (2) explore factors influencing the acceptability of engaging in such a trial in a sample of adult smokers experiencing homelessness. Such data could lead to better designed, more acceptable contingency management and extended interventions, and enhance the recruitment and retention of people experiencing homelessness who smoke.
Study design

Between April 2021 and July 2021, we recruited participants from a safety net clinic in San Francisco, CA that serves a predominantly homeless population. The University of California, San Francisco institutional review board and the San Francisco Department of Public Health approved all study protocols (IRB # 20-31627).

Setting and participants

Eligible participants included patients who: 1) were 18 years or older, 2) were engaged in primary care and had a primary care provider at the safety net clinic in San Francisco, CA, 3) reported current smoking, and 4) reported current or past experiences of homelessness. Potential participants were identified using smoking and housing status information from the Epic electronic health record, a commonly used electronic health record in the U.S. that documents patient visits and health care delivered [39]. We asked primary care providers for permission to contact their patients to verify eligibility and to enroll interested patients into the study. The Epic data report (i.e., a list of patients who met our eligibility criteria) included 1534 potentially eligible patients, of whom we received permission to contact 733 patients. Providers informed study staff of patients who had cognitive impairment or who did not meet eligibility criteria because they were no longer smoking. Of the 733 patients, we called the first 163 patients on the list and were able to recruit N = 26 participants into the study. The most common reason for excluding a participant was the lack of a functioning telephone. Since our goal was to recruit a convenience sample of patients at the clinic who met eligibility criteria, we did not try to achieve representation of all providers' patients. We stopped recruiting participants once we reached thematic saturation in the in-depth interviews. We reimbursed participants $25 for the study.

Tobacco and substance use

Participants reported whether they smoked every day or some days, as well as the number of days they smoked in the past 7 days and the number of cigarettes smoked on smoking days. With these data, we calculated average daily cigarette consumption. We asked participants to report the time it took to smoke their first cigarette upon waking (within 5 min, 6-30 min, 31-60 min, or after 60 min) and their intention to quit smoking ("never expect to quit", "may quit", "will quit in the next 6 months", or "will quit in the next month"). Participants reported whether they attempted to quit in the past year, and those who had done so described the cessation methods they had used during their last quit attempt. We asked participants to report past-30-day use of e-cigarettes, cigars or little cigars, roll-your-own tobacco, and blunts, and past-30-day use of alcohol, cannabis, cocaine or crack, amphetamines, and opioids.

Demographics and other covariates

Participants reported their age, gender (female, male, or transgender), race/ethnicity (American Indian/Alaska Native, Asian, Native Hawaiian/Pacific Islander, Black/African American, Hispanic/Latinx, White, other/more than one race), and education (less than high school, high school or GED, some college, or college or professional training). We asked participants how the pandemic impacted them (moved from unsheltered environments like the street or vehicles to non-congregate [i.e., single rooms] or congregate shelters [i.e., dormitory-style rooms]) and their tobacco use (i.e., change in tobacco use and motivation to quit) [25].
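The average daily consumption measure mentioned above combines the two 7-day recall items. The paper does not spell out the exact formula, so the following Python sketch is one plausible reconstruction, not the authors' published computation:

```python
def average_daily_cigarettes(days_smoked_past_7: int,
                             cigs_per_smoking_day: float) -> float:
    """Spread reported consumption over the full 7-day recall window:
    (smoking days x cigarettes per smoking day) / 7."""
    if not 0 <= days_smoked_past_7 <= 7:
        raise ValueError("days smoked in the past 7 days must be 0-7")
    return days_smoked_past_7 * cigs_per_smoking_day / 7

# Example: 10 cigarettes on 5 of the past 7 days -> ~7.1 cigarettes/day
print(round(average_daily_cigarettes(5, 10), 1))
```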
Qualitative measures

We used an open-ended interview guide to explore experiences with homelessness during the COVID-19 pandemic, triggers for tobacco use, attitudes towards tobacco cessation, previous quit attempts and use of cessation aids, and perspectives on engaging in clinical trials for tobacco cessation. Study staff described the proposed contingency management intervention protocol, including the purpose of the clinical trial, the frequency of study visits, the starting incentive amount ($13.00), the escalating incentive of $0.50 with each negative expired carbon monoxide sample, and the potential final amount at 6 months follow-up ($475). The staff described the potential for including patient navigators to support recruitment and retention, the requirement that participants engage in smoking cessation care at their clinic, and the plan for study staff to engage with participants' clinical team to facilitate cessation. We asked participants to provide their opinions on all aspects of the protocol and to describe modifications, if any, to improve the feasibility and acceptability of the protocol. Interviews lasted between 30 and 60 minutes and were conducted by study staff either in person or by telephone.

Quantitative data analysis

We described sample characteristics and tobacco use using proportions for categorical variables and median (interquartile range [IQR]) for continuous variables.

Qualitative data analysis

The audio-recorded in-depth, semi-structured interviews were transcribed verbatim by a contracted professional transcription service, and the transcribed texts were redacted of any personal identification data. We used ATLAS.ti 8 qualitative data analysis software to facilitate efficient coding, and analyzed transcripts using content analysis [26]. J.M., J.C., and D.A. coded the transcripts, and the PI (M.V.) reconciled codes. During the initial phase, we used deductive coding with a pre-defined set of codes developed through our prior work that were relevant to the current analysis [27][28][29]. We used inductive coding to assign new codes to emergent themes from the transcripts. After independently coding the first four transcripts, the research team met to develop the first iteration of the consolidated codebook. We used this codebook to code subsequent transcripts and met regularly during the coding process to refine the codebook by resolving disagreements in the assignment or description of codes. Cohen's kappa score for interrater reliability was used to assess agreement between the two coders for each transcript (kappa = 0.72). We further refined and reduced the number of overall codes by grouping them into broad categories, after which we identified themes and subthemes in an iterative process. Exemplar quotations were selected to reflect each theme.

Sample characteristics and tobacco use behaviors

Of the 26 participants, 12 (46.2%) were female, 9 (34.6%) were Black/African American, and 6 (23.1%) were Hispanic/Latino (Table 1). The median age was 48.5 years. Of the 26 participants, 1 was unsheltered, 7 stayed in a shelter-in-place hotel, 15 stayed in short-term single room occupancy hotels, 2 were doubled-up with family and friends, and 1 stayed in their vehicle. The majority of participants reported cannabis use in the past 30 days, and over half reported amphetamine use in the past 30 days. Of the 26 participants, 8 (30.8%) had moved from unsheltered environments to non-congregate or congregate shelters during the first year of the pandemic.
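The escalating incentive schedule described to participants can be sanity-checked arithmetically. The protocol description gives the starting amount ($13.00), the $0.50 escalation per consecutive negative carbon monoxide sample, and the $475 potential total; the number of incentivized visits is not stated, but 25 escalating payments reproduce the $475 total exactly, so the sketch below assumes 25 visits purely for illustration:

```python
def incentive_schedule(start: float = 13.00, step: float = 0.50,
                       visits: int = 25) -> list[float]:
    """Escalating contingency-management payments: each consecutive
    negative CO sample pays `step` more than the previous one.
    The visit count is our inference, not a stated protocol parameter."""
    return [start + step * i for i in range(visits)]

payments = incentive_schedule()
print(payments[0], payments[-1], sum(payments))  # 13.0 25.0 475.0
```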
Almost all participants reported daily smoking (88.5%), with over half reporting smoking within 30 min of waking (Table 2). Over half reported using an e-cigarette in the past 30 days. A third of the participants reported making a quit attempt in the past 30 days, and among those, quitting "cold turkey" was the most common method of quitting. Participants did not change their smoking behaviors during the COVID-19 pandemic.

Qualitative results

We identified five themes: 1) experiences related to homelessness and tobacco use, 2) attitudes toward tobacco cessation, 3) attitudes toward engaging in a clinical trial for cessation, 4) barriers to engaging in a clinical trial for cessation, and 5) factors that would increase the feasibility of participating in a clinical trial (Table 3).

Experiences related to homelessness and tobacco use

Tobacco use and homelessness. Participants described how tobacco use factored into their lives before and during periods of homelessness. Most participants initiated tobacco use between the ages of 11 and 20 years. By the time they had experienced their first episode of homelessness, they had been smoking for years. Early exposure to tobacco through family and friends was common and was one of the primary triggers for tobacco use. Most participants described increased displacement due to the COVID-19 pandemic, with frequent moves from unsheltered environments to non-congregate or congregate shelters. Participants reported smoking more while homeless than while housed, and using tobacco to cope with the stressors of homelessness. While a few participants preferred to smoke alone, most liked the social connection that smoking facilitated.

Experiences among sexual and gender minority participants. Four of the participants described their experiences of coming out in the transgender community during their youth or young adulthood. Participants experienced stigma and discrimination during the process of coming out into the transgender community, and described using tobacco as a way to cope with those stressors. Others described smoking as accepted and widely prevalent in the transgender community.

Tobacco and substance use. Participants reported that spending time with people who were current users of tobacco and other substances was a trigger for tobacco use. Substance use lowered their inhibitions, which triggered other high-risk behaviors. Alcohol and cannabis went "hand-in-hand" with tobacco use, and their ready availability, unlike other illicit substances where a "dealer" might be needed, also facilitated co-use. Tobacco mellowed out the effects of crack/cocaine or methamphetamine, and mitigated the "low" from the effects of these substances wearing off.

Motivation to quit. Participants' attitudes toward smoking cessation were shaped by their motivation to quit as well as the barriers they faced in trying to sustain abstinence. Almost all participants were motivated to quit because of their own health or that of their family members, and others were concerned with the impact of secondhand smoke on their children and families. One transgender participant was motivated to quit because smoking cessation was a requirement for gender-affirming surgery. Some participants were unmotivated to quit. These participants described high levels of nicotine addiction and the anxiety they experienced from nicotine withdrawal during prior quit attempts. Having friends in their social network who smoked and/or used other substances was a barrier to quitting.
There was a prevailing belief that quitting other substances was more important than quitting cigarettes. Participants described experiences of forced quit attempts while in prison; however, they resumed smoking after release despite long periods of abstinence.

Attitudes toward treatment for cessation. About half the participants had tried cessation medications at residential drug treatment facilities. Participants described scenarios of not being able to smoke in those facilities and needing either to use chewing tobacco or nicotine replacement to mitigate withdrawal symptoms. These participants had limited success with prior uses of nicotine replacement therapy, and preferred to use medications such as bupropion or varenicline. However, the use of psychiatric medications with varenicline was a concern for some participants because they had heard about its neuropsychiatric adverse effects.

Engaging in a smoking cessation trial as a means to quit smoking. Most participants responded positively to participating in a clinical trial as a means to get additional support for smoking cessation. A few participants indicated that they had had temporary positive experiences with smoking cessation programs or nicotine replacement therapy, but they relapsed to smoking. Others expressed negative attitudes towards smoking cessation trials because their previous attempts with nicotine replacement therapy were unsuccessful, they were not motivated to quit smoking, or they enjoyed smoking. Despite these hesitations, there was more of a consensus around wanting to quit tobacco use than around continuing it. Despite prior unsuccessful attempts, participants expressed an eagerness to try a smoking cessation clinical trial as a means to achieve abstinence.

Safety and needing to self-isolate. Participants described several barriers to engaging in clinical trials, including competing priorities arising from being homeless. A few participants raised safety concerns. One participant described his home environment as unsafe, and feared that if he left his home for too long, his belongings would be stolen. Another described how a trial that required frequent visits would be difficult because of the general stigma associated with identifying as transgender. A few participants reported that engaging in clinical trials would be challenging because they isolated themselves as a coping strategy for depression. A clinical trial with many visits and interactions with study staff could potentially exacerbate these challenges.

Location and frequency of study visits. Participants described barriers to accessing clinical trial sites, particularly if the trial site was not co-located in their medical homes or they lacked money for public transportation. Some reported that getting to the clinical trial location for frequent visits would be challenging due to work schedules.

Patient-centric factors to increase feasibility of engaging in a clinical trial

Clinical trial features. All participants responded favorably to having a clinical trial site located a short distance from their clinic and/or home. While two participants expressed that many study visits would be challenging, others felt that having frequent visits with study staff would be a motivator for smoking cessation.
Participants expressed enthusiasm for having their primary care providers involved in prescribing medications for cessation, and felt that engagement in a clinical trial would support and increase their motivation to engage in clinical care for smoking cessation. Participants described the bidirectional relationship between the clinical trial team and their clinical team as a positive feature that would increase behavioral control over smoking. All but two participants expressed interest and enthusiasm in serving in a navigator role, calling on personal attributes such as wanting to be helpful to the study team, being good with people, knowing a lot of people, and wanting to assist others with smoking cessation.

Financial incentives for cessation. All participants responded favorably to financial incentives for smoking cessation. Tobacco use was a financial burden, and quitting smoking while also receiving financial incentives was a motivator for smoking cessation. "You get paid to quit" was one participant's slogan for advertising the clinical trial. Participants differed in their opinions on whether incentives should be offered for behavior change, particularly for addictions, which came with associations of personal blame and/or failure. Almost all participants felt that external motivators such as financial incentives were important for behavior change. But one participant believed that external motivators would not help if there was no will to quit. Another participant felt that providing money that could be used to support other substance use behaviors was counterproductive. Despite these varying opinions, almost all participants were willing to engage in a clinical trial that offered financial incentives for smoking cessation and were supportive of frequent visits during the rapid escalation of incentives. All but one participant felt that a starting amount of $13 with escalating incentives up to $475 over a period of 6 months was adequate as a motivation for cessation.

Discussion

In this study of people with current and past experiences of homelessness who were currently smoking, a third of the sample reported having made a quit attempt in the past year, and about half expressed an intention to quit smoking in the next six months. Prior quit attempts were generally unassisted. However, most participants expressed an interest in participating in a clinical trial for cessation as a potential method for quitting, particularly if the trial incorporated features like a convenient visit site and schedule, monetary incentives, and patient navigators. Participants' beliefs around tobacco use were shaped by co-occurring substance use and their addiction to nicotine. Stressors were prevalent in their lives, and despite its acknowledged negative impact on health, smoking played an integral role in anxiety and stress relief. Most participants did not reduce their smoking behavior during the COVID-19 pandemic despite its potential negative health risks. Findings from our study were similar to those from a study of adults experiencing homelessness who were engaged in a clinical trial for smoking cessation [7]. Participants described the normative experiences of smoking in shelters, and relying on smoking to pass time, which detracted from efforts to find housing [7]. Smoking incurs a substantial financial toll, accounting for up to 30% of monthly income, which is the amount that people may have to pay for renting a subsidized apartment [30].
Thus, smoking cessation could increase financial and housing stability if the money saved from tobacco use could be directed to meeting basic needs such as housing [7,30]. Consistent with prior studies [4], most participants in our study had attempted to quit smoking but were unsuccessful and relapsed to smoking. In a recent study of individuals experiencing homelessness, the majority preferred to "quit cold turkey" rather than use supportive smoking cessation medications, findings that were consistent with participants in this study [31]. These findings highlight the need for interventions that increase engagement in cessation treatment to increase the efficacy of quit attempts. A clinical trial for cessation could be one such approach that would promote access to treatment, and participants in this study supported that approach. However, competing priorities of finding housing and previous experiences of trauma could pose barriers to smoking cessation and to engaging in a smoking cessation clinical trial. Several participants described avoiding group situations because they were triggering for other substance use behaviors. Others, including a few transgender participants, preferred self-isolation for personal safety. Women who described lifetime experiences of homelessness and trauma also shared a similar perspective of self-isolation as a pathway to healing and recovery [32]. Thus, the commonly used model of group-based cessation treatment may not be well suited to individuals experiencing homelessness who prefer to self-isolate [32]. Training study staff to use a trauma-informed approach to engage with participants and offering individual meetings at convenient locations may be critical to engaging individuals experiencing homelessness in a clinical trial [32].

Barriers to participating in clinical trials included time and resource constraints, such as lack of transportation. Other barriers include mistrust of the clinical trial system and lack of comfort with the clinical trial process [33,34]. While few of our participants expressed mistrust of clinical trials, most participants described barriers to transportation. We proposed to hold the clinical trial in a public outdoor space, to minimize exposure to COVID-19, while also being co-located with shelters and clinics. Other features that may increase retention in long clinical trials include providing multiple attempts to make a study visit, scheduling visits at a time convenient for participants, and frequent communication via text messaging, phone calls, or leaving messages through friends or healthcare providers [24]. Allowing for a one-week wait time between enrollment and the first study visit may also ensure that enrolled participants are committed to attending clinical trial visits and may reduce attrition [24]. Participants supported receiving financial incentives for a finite period of time (e.g., 6 months) to support smoking cessation and to relieve financial burden. Financial incentives may work by providing external motivation to engage in a health behavior. This in turn may facilitate good outcomes by increasing self-efficacy for smoking cessation and encouraging the use of smoking cessation medications [35,36]. Participants supported bi-directional communication between the clinical trial team and their healthcare team. The clinical trial team could facilitate smoking cessation by providing the healthcare team with information on their patients' progress during the clinical trial.
The relationship with the healthcare team could also facilitate retention in clinical trials by providing another point of contact for patients. A significant number of the participants endorsed a peer navigation program. Participants believed that having peers from their community help them navigate a smoking cessation program could increase their engagement in a clinical trial. Peers from the same community share life experiences, cultural beliefs and norms, and can provide culturally sensitive messaging on smoking cessation. Peers can also help with recruitment, coordination and retention activities [34].

Our study has limitations. We recruited participants from one clinic in San Francisco, and only those who had a functioning cell phone or landline. Therefore, the views of the participants enrolled may not be generalizable to other populations experiencing homelessness in San Francisco and elsewhere. The attitudes and norms of the participants in this study could be shaped by local tobacco control policies, which could be different in other localities and states. Response bias toward the expression of attitudes perceived as desirable may be present in a paid interview managed by researchers who have described the features of their proposed protocol to participants.

Conclusion

This study suggests best practices for conducting smoking cessation clinical trials for people experiencing homelessness. We found that most adults experiencing homelessness are interested in smoking cessation, and would be willing to engage in clinical trials for cessation if they included financial incentives, flexible scheduling for study visits, patient navigators with lived experiences of homelessness, and staff who were familiar with addressing participants' experiences of trauma and need for self-isolation. These findings will be used to modify our proposed clinical trial protocol: restructuring study visits to take place in the preferred afternoon hours, training staff in trauma-informed approaches, and increasing opportunities for participants to engage in recruitment efforts as navigators for the study.
The effect of limb-removing and placement-depth on the growth rate of mud crab juvenile, Scylla tranquebarica

Mud crabs, Scylla tranquebarica, cultured in brackishwater ponds need three to four months to reach marketable size. However, rapid movement and cannibalism appear to be responsible for the low survival rate of mud crabs. Therefore, a rearing system that can control movement and cannibalism in the crab grow-out system is needed. The purpose of the study was to evaluate the growth performance of mud crabs, with and without limb removal, grown in plastic boxes placed at different water depths. Two factors were tested. The first factor was limb removal, with two levels: (A1) no limbs removed, and (A2) all limbs removed except the swimming legs. The second factor was placement depth in the brackishwater pond, with three levels: (B1) 0 cm, (B2) 35 cm, and (B3) 70 cm below the surface of the pond water. Crabs with a mean weight of 88.99±5.895 g were tested in each treatment with three replications. The crabs were fed chopped trash fish at 5% of total body weight per day. The experiment lasted 42 days. Molting was observed daily, and crab growth was monitored every week by measuring weight. The final weight, weight gain, and specific growth rate were compared among the treatments tested. The water quality in the pond (temperature, salinity, dissolved oxygen, and pH) was also observed. The results showed that 100% of the limb-removed crabs (A2) molted, while only 44.44% of the crabs with intact limbs (A1) molted. The highest weight gain was obtained in A1B3 (61.61 g/ind.), intact crabs placed at a depth of 70 cm below the water surface, and showed a significant difference (P<0.05) from A1B1 (intact crabs placed at the water surface, with a weight gain of 9.6 g/ind.). However, limb removal and the interaction between limb removal and placement depth were not significantly different (P>0.05). The ranges of water quality parameters, namely salinity (17-25 ppt), dissolved oxygen (2.71-8.51 mg/L), water temperature (28.5-31.5°C), and pH (7.5-8.5), were still within the tolerance limits for crab juvenile growth. Growing crabs with intact limbs at a depth of 70 cm is the better alternative, as it eliminates the animal-ethics concerns of removing the crabs' limbs.

Introduction

The high economic value of the mud crab, Scylla spp., in the Asia-Pacific region has led to its heavy exploitation in the wild [1][2][3][4][5]. Meanwhile, mud crab grow-out in brackishwater ponds has been developed in some areas, such as the brackishwater ponds near the Cenranae river mouth, Bone Regency, South Sulawesi Province, and the brackishwater ponds of Kuala Lupak, Barito Kuala Regency, South Kalimantan Province, Indonesia. In both pond areas, wild seed of approximately 50-150 g/ind. are stocked in the brackishwater ponds. The crabs are reared without supplementary feeding, and the ponds are not fenced to prevent the crabs from escaping. Nevertheless, after two to three months of culture, the crabs attain 200-400 g/ind. and are selectively harvested, mainly the fattened crabs and those with mature gonads. Stocking density, crab size, pond size and feed availability in the ponds influence mud crab growth in brackishwater ponds. High cannibalism, molting failure, and escape from the pond are the main factors affecting crab survival rate.
Gunarto and Rusdi [6] reported that, at a stocking density of 1 ind./m² in a bamboo-fenced brackishwater pond, a crab survival rate of 80% was obtained after three months. Other factors, such as size uniformity, the male-to-female ratio, and water quality suitable for crab growth, also influence mud crab survival. An increase in salinity to 45 ppt after three months of pond culture reduced crab survival to 15-30% [7]. The suitable salinity for the growth of mud crab juveniles is 15-25 ppt [8]. Furthermore, Wijaya et al. [9] reported that crabs grown in plastic boxes placed at a depth of 40 cm below the water surface grew significantly faster than crabs grown in boxes placed at the water surface. High cannibalism and mobility reduce crab survival when crabs are cultured in brackishwater ponds. Limb removal is one way to minimize cannibalism and also to accelerate the molting process. However, limb removal in the mud crab Scylla serrata does not significantly affect the growth rate. Smith [10] stated that the loss of an appendage can slow growth, hinder foraging ability, and reduce defensive capability. Removing the chelipeds does not significantly influence crab growth rate, but it is effective in reducing cannibalism [11]. Meanwhile, limb removal has been widely applied by soft-shell mud crab companies in Indonesia to stimulate fast molting; it shortens the rearing period for soft-shell crab production and increases soft-shell crab output. However, limb removal conflicts with animal welfare ethics. Hence, another technique is needed to accelerate mud crab molting and growth without violating animal welfare ethics. The objective of the research was to evaluate the effect of limb removal versus no limb removal on the growth and survival rate of mud crab S. tranquebarica juveniles reared at different depths in a brackishwater pond.

Material and methods

The research was carried out at the Maranak experimental pond station, Research Institute for Coastal Aquaculture and Fisheries Extension, for 42 days from 1 May 2016 to 12 June 2016. Juvenile mud crabs with a mean weight of 89.0±5.9 g and carapace width of 79.92±2.63 mm were prepared for the growth test. Perforated plastic boxes, each 20×30×10 cm, were used for individual mud crab culture. These plastic boxes were placed in a raft made from one-inch polyvinyl chloride (PVC) pipe and split bamboo, which provided compartment spaces in which each plastic box with a mud crab inside was placed. The rafts containing the crabs in plastic boxes were then placed at various depths in a 500 m² pond. A two-factor design was used in this experiment to determine the effects of limb removal (factor A) and the water depth of individual crab culture (factor B) on the growth rate of mud crab, S. tranquebarica, juveniles. Factor A had two levels: (A1) no limbs (claws, walking legs and swimming legs) removed, and (A2) all claws and walking legs removed, with the swimming legs left intact. Factor B had three levels of water depth for crab culture: (B1) the surface of the pond water, (B2) a depth of 35 cm below the surface, and (B3) a depth of 70 cm below the surface. The claws and walking legs were removed by cutting part of each appendage, after which the crab automatically released it from the body.
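As a purely illustrative sketch of the 2×3 factorial layout just described (treatment codes follow the paper; nothing here is part of the authors' actual workflow), the experimental units can be enumerated as follows:

```python
from itertools import product

# Factor A: limb treatment; Factor B: placement depth (labels from the paper)
limb = {"A1": "limbs intact", "A2": "claws and walking legs removed"}
depth = {"B1": "water surface (0 cm)", "B2": "35 cm deep", "B3": "70 cm deep"}
REPLICATES = 3  # one crab per perforated plastic box

units = [(f"{a}{b}", rep)
         for (a, b) in product(limb, depth)
         for rep in range(1, REPLICATES + 1)]

print(len(units))  # 18 experimental units in total
print(units[:3])   # [('A1B1', 1), ('A1B1', 2), ('A1B1', 3)]
```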
Each plastic box contained one crab, representing one replicate unit in each treatment. The crabs were fed chopped fresh fish at a dose of 5% of total body weight per day, given in the plastic box in the morning at 08:00-09:00 and in the afternoon at 16:00-17:00. Pond water was exchanged at high tide by opening the intake and outlet pipes. The pond water level was maintained at a depth of 80-100 cm throughout the experiment. Crab growth was monitored by weighing the crabs in all treatments every week until the sixth week to obtain the final weight. A digital balance with an accuracy of 0.1 g was used to weigh the crabs. Weight gain during culture is expressed by the formula: Weight gain (g) = L1 - L0, where L1 = crab weight at the end of the study and L0 = crab weight at stocking. Molting was monitored daily in each treatment. When a molted crab was found, the old carapace was removed from the plastic box, and the carapace width and weight of the newly molted crab were measured. The specific growth rate (SGR) was calculated following the formula of Quinitio and Estepa [11]: Specific Growth Rate (%/day) = (ln L2 - ln L1) × 100 / (T2 - T1), where ln = natural logarithm, L2 = final weight (g), L1 = initial weight (g), and T2 - T1 = length of culture (42 days). Water quality parameters consisting of salinity, pH, water temperature and dissolved oxygen were measured using a YSI Professional Plus Multi DO meter. The final weight, weight gain and specific growth rate of the intact-limb and limb-removed crabs were compared, and significant differences were determined by one-way ANOVA using the IBM SPSS Statistics 24 package, followed by Tukey's post hoc (multiple comparison) tests (α=0.05), while the water quality data were analyzed descriptively to examine their relationship with the growth of the cultured crabs.

The number of crabs that molted

The fastest molts were obtained in the second week, from the limb-removed crabs grown at the water surface (A2B1) and at a depth of 35 cm below the water surface (A2B2). In the third week, one molt was found among the intact crabs placed at a depth of 35 cm (A1B2), whereas among the limb-removed crabs five individuals molted: one from the water surface (A2B1) and two each from the depths of 35 cm (A2B2) and 70 cm (A2B3). In the fourth week, there were no molts in the A1 (intact) treatment, whereas among the limb-removed crabs (A2) two crabs molted, one at the water surface (A2B1) and one at a depth of 70 cm (A2B3). In the fifth and sixth weeks, molting was found only in intact crabs, namely one individual at a depth of 35 cm (A1B2) and two individuals at a depth of 70 cm (A1B3). No molting was observed among the intact crabs grown at the water surface (A1B1) during the 42 days (Table 1). Over the 42 days of culture at different depths in the brackishwater pond, a total of four intact crabs molted (44.44%), two each at depths of 35 cm (A1B2) and 70 cm (A1B3), whereas nine limb-removed crabs molted (100%), three each at the water surface (A2B1), 35 cm (A2B2) and 70 cm below the water surface (A2B3). These results demonstrate that limb removal accelerated molting and increased molting intensity in the mud crab S. tranquebarica.
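The two growth metrics defined in the Methods above translate directly into code. A minimal sketch follows; the example weights are hypothetical and are not taken from the study's data:

```python
import math

def weight_gain(w_final: float, w_initial: float) -> float:
    """Weight gain (g) = L1 - L0, as defined in the Methods."""
    return w_final - w_initial

def specific_growth_rate(w_final: float, w_initial: float, days: float) -> float:
    """SGR (%/day) = (ln L2 - ln L1) x 100 / (T2 - T1),
    following Quinitio and Estepa [11]."""
    return (math.log(w_final) - math.log(w_initial)) * 100 / days

# Hypothetical crab: 89.0 g at stocking, 120.0 g after the 42-day trial
print(weight_gain(120.0, 89.0))                         # 31.0 g
print(round(specific_growth_rate(120.0, 89.0, 42), 3))  # ~0.712 %/day
```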
Fujaya et al. [12] reported that vitomolt supplementation in feed significantly accelerated molting and increased molting intensity in the mud crab S. olivacea. Djunaedi [13] reported that eye ablation stimulated molting in the mud crab S. serrata more strongly than limb removal, Ovaprim hormone application, or an untreated control. Molting in crustaceans is a consequence of tissue growth inside the body: when the old carapace can no longer cover and protect the growing tissue, it must be replaced with a new carapace. When a crab molts, its body is soft, and it automatically absorbs water into the body; this activity increases the crab's size. This research indicated that limb removal concentrates tissue growth in the crab's body as it works to rebuild the lost limbs; once body tissue has reached maximum growth, the crab is stimulated to molt. In this process, ecdysteroid hormones play an important role in managing molting and tissue growth. The speed of molting is also influenced by the size of the crab: the larger the crab, the longer the time required until the next molt, especially if the crab is kept in a confined space such as a plastic box. The limb-removed crabs began to molt in the second week, and by the fourth week all nine individuals had molted during the 42 days of rearing, whereas among the intact crabs only one individual had molted by the third week and only four by the fifth week. Thus, limb removal in this study accelerated molting, whether the crabs were cultured at the water surface or below it. According to Quinitio and Estepa [11], the molting interval is longer in crabs that undergo autotomy in the intermolt or premolt stage than in crabs whose limbs are deliberately removed (trimmed) or left intact. In this study, the limbs were deliberately removed (trimmed), which accelerated the molting process.

The relationship between the moon cycle (dark moon, full moon) and the number of molting crabs

The number of molting crabs in relation to the moon cycle is illustrated in Figure 1. Crabs molted under both dark-moon and bright-moon conditions, although the numbers of molts differed. During the bright moon (days 6-17 of the lunar calendar) the total number of molting crabs was higher than during the dark moon (days 18-5 of the lunar calendar). Molting began around day 6 of the lunar calendar and peaked on days 9-12, with six crabs molting in that period. Molting extended until days 16-17 of the lunar calendar, but the number decreased, with only one crab molting at a time. The relation between the mud crab molting cycle and moon rhythms has been reported by many researchers. Fujaya et al. [14] reported that the molting peak does not occur at the peak of ecdysteroid levels, but when the ecdysteroid hormone starts to decrease, specifically at the crescent moon or waxing gibbous moon.

Mud crab growth

Weekly growth data for crabs with intact and removed limbs grown at different depths of the pond water are shown in Figure 2.
The highest final weight, weight gain and specific growth rate (151.82±3.50 g; 61.605±2.326 g; 0.526%/day) were found in intact crabs grown at a depth of 70 cm below the water surface, showing significant differences (P<0.05) from the final weight, weight gain and specific growth rate of intact crabs grown at the water surface (0.113%/day), but not differing significantly (P>0.05) from intact crabs grown at a depth of 35 cm below the water surface (0.259%/day). They also did not differ significantly (P>0.05) from the limb-removed crabs grown at the water surface or at depths of 35 cm and 70 cm below the water surface. The SGR of the limb-removed crabs grown at a depth of 70 cm (0.298%/day) was significantly different (P<0.05) from the SGR of intact crabs grown at the water surface (0.113%/day), but not significantly different (P>0.05) from the SGR of intact crabs grown at depths of 35 cm and 70 cm, nor from the SGR of limb-removed crabs grown at the water surface (0 cm) and at 35 cm below the surface. The survival rate of the mud crabs averaged 100%, except for the intact crabs grown at a depth of 70 cm, where survival was only 66.6%; the deaths were caused by molting failure.

Mud crab specific growth rate (%/day)

The specific growth rates (SGR) of crabs with intact and removed limbs grown at different depths of the pond water are shown in Table 2. The highest weight gain and specific growth rate (0.526%/day) were found in intact crabs grown at a depth of 70 cm below the water surface, showing significant differences (P<0.05) from the weight gain and specific growth rate of crabs grown at the water surface (0.113%/day), but not differing significantly (P>0.05) from crabs grown at a depth of 35 cm below the surface (0.259%/day). The weight gain and specific growth rates of limb-removed crabs grown at the water surface and at depths of 35 cm and 70 cm were not significantly different (P>0.05) from those of intact crabs grown at the water surface or at depths of 35 cm and 70 cm (Table 2). Limb removal, and the interaction between limb removal and placement depth, had no significant effect (P>0.05) on the weight gain and specific growth rate of the cultured mud crabs. Survival averaged 100%, except for the intact crabs grown at a depth of 70 cm, where it was only 66.6%; the deaths were caused by molting failure. The highest specific growth rate was obtained for crabs grown at a depth of 70 cm below the water surface. This may be because the salinity (range 17-25 ppt) and temperature (range 28.4-31.1°C, mean 29.47±0.757°C) were suitable for mud crab growth. Mya and Shah [15] reported that gradually increasing salinity from 5 to 25 ppt resulted in the highest survival and growth rates of Scylla serrata juveniles with a carapace width of 2.06±0.29 cm and body weight of 1.67±0.75 g. Furthermore, Sandeep and Ramudu [16] reported consistently high survival rates and growth for the mud crab Scylla tranquebarica when the crabs were reared at a salinity of 15-25 ppt.
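The authors ran these comparisons as a one-way ANOVA with Tukey's post hoc test in IBM SPSS Statistics 24. A rough Python equivalent is sketched below; the per-replicate weight gains are invented for illustration (the paper reports only treatment means), so the output will not reproduce the published statistics:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical weight gains (g), 3 replicates per treatment combination;
# group means loosely echo the reported treatment means, values invented.
data = {
    "A1B1": [6.2, 9.8, 12.8],   "A1B2": [20.1, 26.5, 32.2],
    "A1B3": [59.3, 61.4, 64.1], "A2B1": [18.0, 21.0, 23.3],
    "A2B2": [22.9, 26.3, 29.6], "A2B3": [26.7, 29.5, 32.8],
}

f_stat, p_value = stats.f_oneway(*data.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD pairwise comparisons at alpha = 0.05
values = np.concatenate([np.asarray(v) for v in data.values()])
groups = np.repeat(list(data.keys()), 3)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```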
Another study found that the growth rates of male and female mud crabs, Scylla serrata, confined in plastic boxes at a salinity of 30-31 ppt over 30 days of culture were not significantly different [17]. Mud crabs are nocturnal animals; they are active in the dark [18]. In this study, the crabs grown at certain water depths (35 and 70 cm) experienced darker environmental conditions than those at the water surface. Hence, at depths of 35 cm or more, the crabs fed more actively and ultimately grew faster, differing significantly (P<0.05) from the crabs grown at the water surface (0 cm). However, Morales and Roberto [18] reported that rearing crabs at water depths of 10, 20 and 30 cm had no significant effect (P>0.05) on growth rate. Presumably the depth difference between those treatments was only 10 cm, so the environmental conditions were likely almost identical, and therefore no significant difference in crab growth rate was observed. According to Xiaowu et al. [19], lighting intensity does not significantly affect the speed of molting but has a significant effect on the weight increase of molting crabs, especially for crabs kept at a lighting intensity of 0, i.e. in dark conditions. Another factor that increases the survival rate of mud crabs is the use of shelters [20]. Sand shelter placed at the bottom of the rearing tank significantly affects the survival of mud crab juveniles, and the crabs are visibly bigger after molting. The best growth (tissue growth and molt frequency) and the best feed efficiency were obtained in crabs fed a diet of 40% crude protein [21]. In this study, the mud crabs were given trash fish as feed, containing 57% protein, 13.65% fat, 0.17% crude fiber and 7.69% moisture.

Water quality

Salinity

Mud crabs can live across a fairly wide range of salinity in estuarine areas and mangrove ecosystems, from low salinity in the rainy season to high salinity in the dry season. At a pond salinity of 40 ppt, mud crabs begin to die [7]. Sandeep and Ramudu [16] reported that decreasing salinity from 29.6 ppt to 10.4 ppt caused a significant decrease in survival rate from 87% to 45%. At the beginning of the research the salinity was 17-20 ppt, and it increased to 25 ppt by the end of the research (Figure 3). The salinity was therefore still within optimal conditions for the growth of the mud crab Scylla tranquebarica.

pH

One of the abiotic factors that influence crab growth and survival is pH. An optimal pH value supports the mud crab's osmoregulation process. The water pH affects the enzymes that work in the gills, for example ATPase, carbonic anhydrase and Na-K ATPase, and gill enzyme activity is related to respiration rate, osmoregulation, and excretion. Waters with muddy substrates tend to have an acidic pH, whereas those with sandy substrates tend to be alkaline. In this study, the water pH was in the range of 7-8. Shelley and Lovatelli [22] established water quality standards for mud crab culture, with an optimum DO of >5 ppm, temperature of 25-35°C, water pH of 7.0 to 9.0, TAN <3 ppm, alkalinity >80 ppm, and turbidity >30 mg/L.

Dissolved oxygen

Some water quality parameters are very important to support mud crab growth in brackishwater ponds.
DO plays a role in the oxidation and reduction of organic and inorganic materials, and it is needed by all living organisms for respiration and metabolic processes, which produce the energy for growth and reproduction. The main sources of oxygen in aquatic systems are diffusion from the air and photosynthesis by organisms living in the water. The rate of oxygen diffusion from the air is influenced by several factors, such as water turbidity, temperature, salinity, and water and air mass movements such as currents, waves, and tides. Waters with higher temperature and salinity have lower DO values, and conversely, DO is high when temperature and salinity are low. The DO concentration at 70 cm below the water surface (5.044±1.39 mg/L) was always lower than the DO concentrations at a depth of 35 cm (5.721±0.944 mg/L) and at the water surface (6.141±1.2 mg/L). Mud crab weight gain at the different depths of pond water ranged from 29.67±5.70 g to 61.605±2.326 g (depth 70 cm), from 22.5±7.21 g to 26.27±26.21 g (depth 35 cm), and from 9.61±3.97 g to 20.77±2.730 g (water surface). The effect of oxygen enrichment by phytoplankton at the water surface is very clear compared to the depths of 35 cm and 70 cm (Figure 4). The DO concentrations at all water levels were still adequate to support crab growth in the brackishwater pond. Sandeep and Ramudu [16] reported that during a 90-day grow-out of the mud crab S. tranquebarica at DO concentrations of 5.5-6.0 mg/L, the average daily growth rate (ADGR) was 1.25-2.68 g/day. Statistical analysis showed that the DO concentration at the water surface was not significantly different (P>0.05) from the DO concentrations at depths of 35 cm and 70 cm below the water surface.

Water temperature
Temperature is the most important factor in the growth and development of mud crabs in their habitat. Water temperature can affect the growth rate, activity, and appetite of mud crabs. Low temperatures cause the activity and appetite of mud crabs to decrease dramatically; growth is then hampered, even though the crabs remain alive. Temperature thus affects the metabolic activity and growth of mud crabs. A maximum weight-specific growth rate of 16%/day was obtained at 30 °C and a salinity of 10-20 ppt [23]. Temperature also influences the molting interval. Gong et al. [24] reported that a temperature of 32 °C induced increased expression of the ecdysone receptor (EcR) gene and reduced the molting interval in crabs at stage D-1, while at 39 °C the expression of the EcR gene decreased and the crabs eventually died without molting. In this study, the monitored pond water temperatures fluctuated. The highest water temperature was at the water surface, in the range of 29.5-31.5 °C (mean 30.37±0.59 °C); it was significantly different (P<0.05) from the water temperature at a depth of 70 cm, in the range of 28.4-31.1 °C (mean 29.47±0.76 °C), and not significantly different (P>0.05) from the water temperature at a depth of 35 cm, 28.2-31.4 °C (Figure 5).
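The depth-wise water-quality comparisons reported above (e.g., DO and temperature, P>0.05 or P<0.05) are the kind of tests that can be reproduced with a one-way ANOVA. The sketch below is illustrative only: the paper reports only means ± SD, so the measurement series and the sample size are simulated stand-ins, not the study's raw data.

    import numpy as np
    from scipy import stats

    # Simulated stand-ins built from the reported DO means +/- SD (mg/L);
    # the number of measurements per depth is an ASSUMPTION
    rng = np.random.default_rng(0)
    do_surface = rng.normal(6.141, 1.20, 12)
    do_35cm = rng.normal(5.721, 0.944, 12)
    do_70cm = rng.normal(5.044, 1.39, 12)

    f_stat, p_value = stats.f_oneway(do_surface, do_35cm, do_70cm)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # compare with the reported P > 0.05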
Based on these results, the best water temperature for the growth of mud crabs in a brackishwater pond appears to be in the range of 28.4-31.1 °C (mean 29.95±0.806 °C), because at a depth of 70 cm the mud crabs showed the highest biomass growth compared to the crabs cultured at the water surface and at 35 cm below the water surface.

Conclusion
The highest weight gain (61.605±2.326 g/ind.) was obtained in unremoved-limb crabs placed at a depth of 70 cm below the water surface. Limb removal, and the interaction between limb removal and placement depth, had no significant effect (P>0.05) on the growth of the mud crabs. This finding demonstrates that there is a technique to accelerate crab molting without removing the limbs: placed at a depth of 70 cm below the water surface, the crabs grow fast.
Mobile Network Planning Process Case Study: 3G Network
Third Generation cellular networks (3G) were developed with the aim of offering high data rates, up to 2 Mbps for stationary users and 384 Kbps for mobile users, which allows operators to offer multimedia connectivity and other data services to end customers. In this work we apply techniques to design a 3G radio network; in particular, we study planning and implementation in developing countries, with the State of Palestine as a case study. In order to carry out 3G radio network planning for a selected region, we must follow a roadmap consisting of a set of phases. First of all, we must delimit the region under study on the digitized map in order to obtain useful information such as the area distribution; using digital maps gives good clarity for area classification, land use, land terrain, heights, vectors, etc. We must also forecast the subscriber profile to perform the coverage and capacity dimensioning process and arrive at a nominal cell plan. This paper studies Nablus, one of the major Palestinian cities. The subscriber forecast profile is then applied in order to calculate the service traffic demand and the capacity and coverage requirements; this study was carried out in cooperation with Wataniya, one of the leading mobile telecommunication service providers in Palestine. For Nablus city we found that 28 sites are required to meet the given capacity requirements, while 46 sites are required for coverage. At this point we must decide how many sites will be implemented. In general, the number of sites is selected relative to the coverage requirement; so, to serve Nablus city with 3G services, we should implement 46 sites. At the final stage, we have to ensure that the proposed 3G network remains suitable beyond the first year of running 3G services over the deployed network; our design takes into consideration the growth in the number of subscribers and their demands, so the network administrators and the network planning department periodically assess the current network status and upgrade the network to meet future demands.

Introduction
Cellular technology evolution started in the 1950s, and the first commercial systems came in the late 1970s. Cellular networks can be classified into different generations, namely First Generation, Second Generation [GSM], Third Generation [3G], and Fourth Generation [4G]. In this paper we study the planning phases needed to upgrade the mobile operator network, building a third-generation network in parallel with the currently deployed second-generation network. Many earlier works have addressed 3G radio network planning, e.g. [book3G]; our study concerns 3G planning and implementation for Nablus city, one of the major Palestinian cities. The planning process has some main attributes and factors, such as subscriber profile forecasting and the calculation of the service traffic demand, in addition to the capacity and coverage requirements. This paper ends with a proposal for an applicable 3G network to be deployed in the selected cities.
Motivation for Carrying Out this Research
It is worth noting that the Palestinian cellular network operators still work on GSM/GPRS/EDGE (Global System for Mobile Communications / General Packet Radio Service / Enhanced Data Rates for GSM Evolution), which offers voice service and immature data services. Nowadays, Palestinian subscribers are demanding more data communication services, in line with international subscriber demand. As figure 2 shows, GSM already covers almost the entire world, and telecommunication operators and providers have followed accordingly. Third Generation is the next step for Palestine; the implementation of 3G in Palestine is still not approved by the Palestinian MTIT (Ministry of Telecommunication and Information Technology) [MTIT], typically due to political issues between Israel and Palestine. In any case, the demand for 3G services has become crucial; we have to be ready with a study and a road-map for the upgrade to a 3G network, so that we are prepared once the local mobile operators get the permission to implement and run the 3G network service. Figure 2 depicts the coverage of different mobile communication technologies (2G, 3G, and LTE (Long-Term Evolution) [LTE]) based on world map coverage for the year 2015 [MAPWorld] [MAPVerizon], where we focus on 2G and 3G technology coverage in the Middle East area. As a result, we selected this topic to align with the coming data evolution in Palestine. This study will be very useful for the current operators when upgrading to a 3G network.

Literature Review and Related Work
In this paper, we outline the "3G Radio Planning Design" in order to plan and design a UMTS/3G (Universal Mobile Telecommunications System) network in Palestine. It provides the design requirements and assumptions (e.g. service quality, link budgets, etc.) that should be used for planning a UMTS network; the radio network design process is outlined keeping in mind that the radio design will be an overlay on the existing 2.75G GSM network. Upon getting the approval and permission to implement 3G mobile technology, the Palestinian telecommunication mobile operators will have to design and deploy a UMTS network across all regions, capable of providing voice and packet data (including High Speed Packet Access, HSPA) services to meet the rapidly growing data demand. The design will be an overlay on the existing GSM networks in Palestine, preserving the operators' current infrastructure and maintaining the utilization of current 2.75G services.

Third Generation Evolution
Wireless voice services started with the first-generation circuit-switched analogue service, which was for voice only; this technology provided neither SMS (Short Message Service) nor other data services. The transition from the first generation to the second, digital generation (2G) was driven by several services provided by 2G technologies, such as data storing, copying, encryption, and compression, and by permitting data transmission without loss through error correction. Several other cellular and wireless data services are provided in 2G networks, such as internet access with speeds up to a theoretical 14.4 kbps. In addition, voice quality improved. Even though 2G is also circuit-switched, this generation still does not meet the required data rates and throughput compared with the demanded data volumes.
The second-generation (2G) technologies include GSM (Global System for Mobile Communication), which is based on both the time division multiple access (TDMA) and frequency division multiple access (FDMA) mechanisms: the spectrum is divided into small slices, each slice is further divided in time into multiple time slots, and users are allocated in turn to a specific spectrum slice and a specific time slot. Moving from 2G to 2.5G technologies brought GSM/GPRS [GSM] [GPRS]; GPRS (General Packet Radio Service) is a data-oriented technology extending the GSM voice services, theoretically providing up to 200 Kbps, which introduced another revolutionary change. An enhanced GPRS named EDGE was also introduced, providing a higher data rate. The Third Generation cellular networks (3G) were developed with the aim of offering high-speed data, up to 2 Mbps or more in the served areas, which allows operators to offer multimedia connectivity and other data services to end customers. A few technologies are able to fulfill the mentioned data rate, such as CDMA (Code Division Multiple Access), UMTS, and others. High Speed Packet Access (HSPA) has been an upgrade to Wideband Code Division Multiple Access (WCDMA) networks, used to increase packet data performance. Moreover, upgrading to the Fourth Generation (4G) system anticipates the data demand that is going to boom over the coming decades; the fourth generation, called LTE (Long Term Evolution), was developed to meet this rapidly growing data demand. The 4th generation still does not hold a major part of the market share, due to the lack of devices that support the LTE (Orthogonal Frequency-Division Multiple Access, OFDMA) [OFDMA] technique and the required network infrastructure. Nowadays, LTE still supports only data services and not voice, but support for voice is in development; once available, it will be called "advanced LTE". LTE can support more than 100 Mbps, depending on the network structure and the spectrum used.

Related Work
The first case study of a 3G network in Europe, including design and implementation, was undertaken on the Isle of Man as the Manx Telecom 3G project [wu2015optimization], where all design and planning decisions were based on consideration of the desired end-user experience, taking into account network quality, service coverage, and the performance of new data applications. Moreover, many other related works have studied 3G mobile network planning. Guo et al., 2003 studied the coverage and capacity calculations for 3G mobile network planning, where the planning process aims to allow the maximum number of users to send and receive with adequate signal strength in a cell. Furthermore, the work carried out in [amaldi2008radio] [tarapiah2015radio] has shown that the network planning process does not depend only on signal prediction; moreover, it is not appropriate to rely on the classical second-generation system in terms of formulas and parameters. Notwithstanding, mathematical programming models for supporting the decisions on where to install new base stations and how to select their configuration (antenna height and tilt, sector orientations, maximum emission power, pilot signal, etc.)
are discussed in [amaldi2008radio] [tarapiah2015common], which find a trade-off between minimizing costs and maximizing the covered area. In general, the models take into consideration signal-quality constraints and requirements in both the uplink and downlink directions, in addition to the power control mechanism and the pilot signal. More sophisticated work has been carried out to automate the cellular planning process of the 3G network, as stated in [skianis2013introducing] [tarapiah2015advanced]; further important factors can be taken into consideration during the planning process: besides the coverage plan and the capacity plan, Quality of Service (QoS), resource utilization, and economic aspects have been considered in [wu2015optimization]. In this work, we state and describe a full and complete methodology, design steps, and calculations for the mobile planning of a 3G network, on top of an existing 2G network, for a given Palestinian city as a case study.

Research Methodology
This paperwork was conducted in cooperation with Wataniya Mobile [WATANIYAH]. It focuses on and discusses three major aspects of radio network planning and design: coverage, capacity, and Quality of Service (QoS). The designed methodology requires advanced tools and procedures to accomplish the mentioned parts of the planning. Wataniya Mobile offered the radio network planning tools used at their headquarters (HQ) and supported the ideas and techniques of capacity design and calculation; in WCDMA [WCDMA] technology, QoS is directly linked to the coverage and capacity design. To perform a complete 3G radio network design, we configured the radio planning tools to meet the paper's target and scope; the work included filling in and configuring a 3G radio capacity sheet to calculate and automate the capacity planning. Moreover, to come closer to the actual design, which started as a nominal design, we performed an actual site survey and visited some of the designed sites, following the radio planning guidelines, i.e. to match the nominal and actual site locations. Finally, we formulated the design based on the required steps, which include the coverage, capacity, and QoS steps; this covers sub-actions such as the propagation model, link budget calculations, etc.

Design Procedure and Analysis
The process of designing the radio network is considered one of the most important and crucial issues in wireless design, since it depends on many variables related to the land terrain, population density, allocated spectrum, and the target itself. The design process for any wireless system has some common steps, like the checklist matrix; figure 3 states the simple flow that will be followed during the planning and design of the 3G network. The design process can be enumerated as:
1. Requirements Definition: The first stage of the design process is to define the required target of the design; this stage involves surveying for an optimal solution and identifying the tools and data required to start the designing and planning task, i.e. coverage percentage, forecasted number of subscribers, subscriber traffic profile, etc.
2. Radio Network Dimensioning: The next step involves starting data mining and calculating the required capacity, equipment, and tools to meet the forecasted demand.
3. Radio Propagation Model Tuning: This stage concerns the most important service Key Performance Indicator (KPI), the coverage footprint. Since the WCDMA coverage prediction depends on the loaded traffic, we have to allocate the traffic per sector from step 2 to the tool in order to obtain better accuracy in the coverage prediction. Tuning the propagation model so that it suits the land terrain is one of the most important steps for efficient coverage prediction.
4. Nominal Cell Planning: The planning tool is used to create a nominal cell plan, using engineering judgment and the tool's features. The tool supports the engineering decisions with many analysis plots related to coverage, spectrum, and interference.
5. Site Survey: The cell planner and site hunters identify suitable and applicable site locations that fit the radio coverage requirements. Site leasing/rental issues and construction obstacles are also taken into consideration at this stage.
6. Implementation: This step includes all the sub-steps required for nominal cell planning, where planning tools can be used to evaluate many related parameters such as cell parameters and handover cell candidates; the best location from the Site Survey step, the antenna, the RBS type, the feeders, etc. are selected in this phase.
7. Initial Tuning: This step consists of performing a drive test of the selected target area; several drive test tools can be applied at this stage, and the outcomes and findings of the target area measurements are then used to tune the network to achieve the intended KPIs based on the design requirements.

Requirements Definition
The design requirements of the radio network include coverage, capacity, and QoS; these requirements relate to the different area types used in the design assumptions: dense urban, urban, suburban, and rural. In addition to these main area types, roads have to be considered due to their importance for service continuity and traffic volume.

Area and Population
Most WCDMA networks are rolled out in phases; for each phase of the network, the number of square kilometres of each area classification (dense urban, urban, suburban, rural, and road), as well as the subscriber distribution per area, has to be defined and determined. We start with Nablus city, using the digitized map that is typically supported by any planning tool (TEMS [TEMS] Cell Planner, a Swedish tool, is used in this project). The digital map in use was created from high-resolution satellite imagery in 2015 and has 2-metre resolution, which can be described as accurate; it includes the Palestinian land terrain, elevations, land use (urban, suburban, etc.), and vectors (roads, main streets, ...). Figure 4 shows an example of Nablus city based on the available digital map used for land-use purposes. After digital map filtering, the area to be used for planning is 20.57 square kilometres, classified as stated in Table 1. According to statistics of the Palestinian Central Bureau of Statistics, in 2015 Nablus city had 187,839 people, so we expect the population in 2018 to be around 205,000, based on the population growth in the West Bank/Palestine. The design assumption targets serving 30% of the population as a market share, which means we end up with 60,000 subscribers at the end of 2018.
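As a quick check of the subscriber forecast, the sketch below projects the 2015 census figure forward. The ~3% annual growth rate is an assumption chosen only to reproduce the stated 205,000 estimate (the paper does not give the rate), and the paper rounds the resulting subscriber target down to 60,000.

    pop_2015 = 187_839        # Palestinian Central Bureau of Statistics, 2015
    annual_growth = 0.03      # ASSUMED ~3%/year; only the 2018 estimate is stated
    market_share = 0.30       # stated design assumption

    pop_2018 = pop_2015 * (1 + annual_growth) ** 3
    subscribers = market_share * pop_2018
    print(f"Projected 2018 population: {pop_2018:,.0f}")  # ~205,000
    print(f"Subscriber target: {subscribers:,.0f}")       # ~61,600; the paper rounds to 60,000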
Required Equipments
This section enumerates the required equipment for the implementation of the proposed 3G network, as described below:
1. Radio Base Station (RBS) Main Remote Solution: A Main Remote solution, optimized to deliver high radio performance for efficient cell planning in a wide range of indoor and outdoor applications. In the Main Remote Radio Base Station, each Remote Radio Unit (RRU) is located near an antenna, which reduces feeder losses and enables the system to use the same high-performance network features at lower output power, thereby lowering power consumption and both capital and operational expenditure. The Main Remote concept is designed to support all technologies in virtually any combination. The Main Remote solution is divided into a Main Unit (MU) and multiple Remote Radio Units (RRU) that are connected to the MU through optical fiber cables. Figure 5 shows the RBS MU, the Remote Radio Unit (RRU), and an RBS 3-sector site, respectively.
2. Remote Radio Unit: The Remote Radio Unit WCDMA (RRUW) and the Remote Radio Unit Standard (RRUS) are designed to be installed close to the antennas and can be either wall or pole mounted. The RRUW has WCDMA capability and is Multi Standard Radio (MSR) capable, meaning the RRUS can run GSM, WCDMA, and LTE on the same RRU hardware; the unit configuration can be changed by software reload. The RRUS hardware is prepared for running mixed-mode configurations. The RRUW and RRUS sustainable average output power is 60 Watt, for very large coverage and high capacity requirements. Dual-band configurations are also supported by connecting RRUWs or RRUSs for different frequency bands to the same MU. The RRUW and RRUS contain most of the radio processing hardware. The RRUs can be connected to the MU in the following ways:
• Star connection of the RRUs, where each RRU is connected to the MU.
• Cascade connection: the RRUW and RRUS can support cascade connections, where only one fiber cable is connected between the MU and one of the RRUs, while the other RRUs are connected to each other; this solution reduces the length of optical fiber cable needed and can be used in multiple applications when the RRUs are located far away from the MU.

Antenna Configuration
The recommended antenna configuration must allow different setups, to cope with the strategic installation and to keep the existing infrastructure (co-located GSM and UMTS). All antenna configurations assume that the number of available antenna ports equals the number of feeder lines that can be installed. During the design, we select antennas with specifications according to the project's needs and the selected area, as stated in Table 2.

Traffic Requirements
Considering both the area and population requirements, the traffic requirements will vary depending on the area type (dense urban, urban, suburban, rural, and road). The percentage of subscribers using each service is used to calculate the "Average Subscriber Traffic Profile", as stated in the following:
1. Speech Traffic Requirements: The subscriber Busy Hour (BH) traffic profile for speech must be calculated from the given requirements in terms of Busy Hour Call Attempts (BHCA).
2. CS64 Traffic Requirements: The subscriber Busy Hour (BH) traffic profile for CS64 is calculated from the given requirements in terms of Busy Hour Call Attempts (BHCA).
3. R99 PS Traffic Requirements: This requirement is already given in Kbyte/h, and we assume that the uplink traffic is 10% of the downlink.

Radio Network Dimensioning
During the design, we use the Ericsson Radio Network Proposal Tool (RNPT) to perform R99 dimensioning and to prepare the Bill of Quantity (BoQ), which details the hardware required to implement the radio network design; this information can be used for pricing purposes. We can summarize this process in the following steps:
Step 1: Calculate the capacity limitation on the basis of the maximum allowed traffic load in both directions, uplink and downlink.
Step 2: Based on the available number of sites, compute and determine the actual cell load in both directions, uplink and downlink.
Step 3: Calculate the interference margin at the uplink (B_IUL) and ensure it is greater than 0 dB.
Step 4: Calculate the required power of the Common Pilot Channel (CPICH) at the reference point (C_PICH,ref), taking into consideration that the power must be less than 10% of the nominal power at the reference point (P_nom,ref).
Step 5: Calculate the total power at the reference point (P_tot,ref) and check that it is less than 75% of the nominal power at the reference point (P_nom,ref).
Step 6: Calculate the required power of the maximum-transport Dedicated Channel (DCH) at the reference point (P_DCH,ref); the calculated power must be less than 30% of the nominal power at the reference point (P_nom,ref).
Table 3 states the network requirements for Nablus city, based on the outputs and findings of the R99 dimensioning process, where the maximum load should not exceed 70% in the uplink and 76% in the downlink direction.
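For the speech and CS64 profiles above, per-subscriber busy-hour traffic is conventionally derived from the BHCA and the mean holding time as A = BHCA × MHT/3600 Erlang. The sketch below illustrates this; the BHCA and holding-time figures are hypothetical placeholders, since the actual profile values are in tables not reproduced in this text.

    def busy_hour_traffic_erlang(bhca, mean_holding_time_s):
        # Per-subscriber busy-hour traffic: A = BHCA * MHT / 3600 (Erlang)
        return bhca * mean_holding_time_s / 3600.0

    # Hypothetical profile values, for illustration only
    speech = busy_hour_traffic_erlang(bhca=1.2, mean_holding_time_s=75)
    cs64 = busy_hour_traffic_erlang(bhca=0.2, mean_holding_time_s=120)
    print(f"Speech: {speech * 1000:.1f} mErl/sub, CS64: {cs64 * 1000:.1f} mErl/sub")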
Dimensioning Process Step 1 (Capacity)
Based on the RNPT Cell Load Calculator on the "Tools" sheet, the findings show the maximum number of users supported at 70% uplink load (Figure 6: Number of Uplink Capacity Sites). The output states 1320 subscribers per cell; thus, in order to serve 60,000 subscribers, we need 60,000/1320 ≈ 45 cells, or 45/3 = 15 sites. The number of subscribers supported in the downlink at 76% load can also be calculated using the RNPT Cell Load Calculator (Figure 7), where the findings show 1335 subscribers per cell; thus, in order to serve 60,000 subscribers, we need 60,000/1335 ≈ 45 cells, or 45/3 = 15 sites. Based on the capacity requirements, at least 15 sites, or 45 cells, are required to serve 60,000 subscribers. With 15 sites defined, the uplink load is 69% and the downlink load is 74%.

Dimensioning Process Step 3 (UL Interference, 15 sites)
In order to find the uplink interference margin, the cell range must first be calculated, based on formula (1). The cell range can then be used to determine the maximum uplink interference margin as well as the guaranteed load, as depicted in Figure 9 (Maximum Uplink Interference Margin, 1 site).

Site Number Iteration (Increased from 15 to 28)
Unfortunately, although the maximum interference margin is 0.43 dB > 0, the uplink load here is less than 69%; thus we have to increase the number of sites, since as the uplink load increases, the cell range decreases. So, by increasing the number of sites in the earlier dimensioning process phases, we end up with 28 sites and a cell range of 0.61 km; the load increases to 71.15% and the maximum interference margin becomes 5.4 dB > 0, as shown in Figure 10. The total downlink power calculation can be performed using the "DL total power calculations" for the calculated cell range and load, as stated in Figure 12. The maximum DCH power calculation can be performed using the "DL DCH power calculations" for the calculated cell range and load, as depicted in Figure 13 (Maximum Downlink DCH Power). As this stage represents the last step in the dimensioning, the maximum DL DCH power is less than 30% of the nominal power at the reference point (P_nom,ref). We therefore end up with 28 sites, with a cell range of around 0.61 km.
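The cell and site counts above reduce to a simple ceiling division, sketched below. Note that the paper rounds 60,000/1320 down to 45 cells (15 sites), whereas a strict ceiling would give 46 cells (16 sites) on the uplink; the downlink figure matches exactly.

    import math

    def cells_and_sites(subscribers, subs_per_cell, sectors_per_site=3):
        # Number of cells needed for the subscriber base, then 3-sector sites
        cells = math.ceil(subscribers / subs_per_cell)
        sites = math.ceil(cells / sectors_per_site)
        return cells, sites

    # Per-cell capacities quoted from the RNPT outputs above
    for label, per_cell in [("uplink @ 70% load", 1320), ("downlink @ 76% load", 1335)]:
        cells, sites = cells_and_sites(60_000, per_cell)
        print(f"{label}: {cells} cells -> {sites} sites")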
Radio Propagation Model Tuning
The 3rd Generation Partnership Project (3GPP [3GPP]) and the Third Generation Partnership Project 2 (3GPP2) industry alliances jointly developed channel models to be used for the evaluation of cellular systems with multiple antenna elements. The models are defined for three environments, namely urban microcells, urban macrocells, and suburban macrocells. The model is a mixed geometric-stochastic model that can simulate a cellular layout including interference. One of the most important steps is to tune the propagation model in order to obtain the best model parameters that fit the land terrain, so as to achieve an efficient coverage prediction. During the cell planning process, the radio cell planning tool is used to predict the radio coverage by means of propagation models for a particular site configuration; different propagation models are considered according to the different environments and site configurations. The Algorithm 9999 model [R9999], implemented by Ericsson without the knife-edge and spherical-earth loss contributions and based on the Okumura-Hata model [hata], is best suited for large-cell coverage (distances up to 100 km) and can extrapolate predictions up to the 2 GHz band. This model has been proven to be accurate and is used by computer simulation tools; it is the propagation model adopted by Wataniya Mobile. The model tuning (model calibration) is performed in order to obtain more reliable radio propagation predictions: measured and predicted signal strength samples are compared, and the mean error between them is minimized.

Coverage and Nominal Cell Planning
The service footprint and availability is the most important Key Performance Indicator (KPI) in any wireless technology; this KPI determines whether the end customer can access the network or not. In order to meet the required coverage area of the design, we have to classify and study the area's aspects, i.e. population, building types, land terrain, etc.; all of these issues require advanced tools to automatically iterate the calculations. In this work, we used advanced tools, provided by Wataniya Mobile for the purposes of this study, to support our network design goals. The tools can predict most of the required engineering analyses, i.e. technology coverage, equipment and infrastructure configurations, and the required and accepted signal-to-noise level. Thus, in order to cover Nablus city with 3G services, we end up with 46 sites; Figure 14 and Figure 15 show the coverage site distribution in the cell planning tool and in Google Earth, respectively.
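As a rough illustration of the kind of model being tuned above, the sketch below implements the classic Okumura-Hata median path loss for urban areas (formally valid for 150-1500 MHz). It is not Ericsson's proprietary Algorithm 9999, which extends a related formulation up to the 2 GHz band; the example frequency and antenna heights are assumptions for illustration.

    import math

    def hata_urban_path_loss_db(f_mhz, h_base_m, h_mobile_m, d_km):
        # Classic Okumura-Hata median path loss, urban, small/medium city
        a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
                - (1.56 * math.log10(f_mhz) - 0.8))  # mobile-antenna correction
        return (69.55 + 26.16 * math.log10(f_mhz)
                - 13.82 * math.log10(h_base_m) - a_hm
                + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

    # ASSUMED example: 900 MHz, 30 m base antenna, 1.5 m mobile,
    # and the 0.61 km cell range from the dimensioning above
    print(f"{hata_urban_path_loss_db(900, 30, 1.5, 0.61):.1f} dB")  # ~119 dB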
Site Search and Survey
After designing the radio nominal points, the ideal scenario is to install the sites at the exact locations output by the planning tool, but in reality the locations usually differ due to geographical constraints, streets, houses, and other environmental factors. A site survey therefore has to be performed, which translates the ideal solution into an actual solution; mostly, the nominal points are shifted to nearby points due to ground obstacles, i.e. leasing problems, nominal points located in the middle of a major street, inaccessible pieces of land, no electricity, or any other constraints that may arise on site. Continuous refining and re-planning tasks are accomplished during the site survey to maintain the major coverage objectives. The basic concept of the "Site Survey" is very simple: it indicates one or more points as possible candidates, as shown in Figure 19, which represents the selected site number 34 to visit. It is preferable to consider several alternative sites, which allows a better margin for negotiation in the area covered by this engagement. This is because the first indicated point may have many problems preventing installation at that location, such as the property owner not allowing the operator to use the property, transmission problems, unavailability of infrastructure, etc. In order to avoid these constraints, the "Site Survey" stage is conducted together with the RF, Transmission, Infrastructure, and Contract areas.

Required Equipments for Site Survey
There is no mandatory rule about what equipment to bring to the site, but here is a short checklist of the main equipment desired. As always, everything depends on what is needed, which may vary with the survey type, the region, etc. During our work in the site survey we used the following:
1. GPS [GPS]: In order to define the geographical location based on the longitude and latitude coordinates, in addition to the altitude, a Global Positioning System (GPS) device is used on site; Figure 20 shows the GPS device used at the selected site 34.
2. Camera: Some photos are usually required on site, especially a 360-degree panorama view; the captured photos may be used for further analysis. The photos are illustrative, and various other factors must be taken into account in this decision, but in general, avoiding a limited view and getting a macro view always helps to obtain the best result. Figure 21 shows the 360 degrees divided into photos taken every 30 degrees for site N-34, chosen as the nominal point.
3. Compass: In order to determine the orientation of the azimuths, a compass with north orientation is used, as depicted in Figure 22.

Conclusions
Advances in technology have led to massive data communications, and telecommunication operators need to cope with this evolution; new technologies are therefore being applied to satisfy customers' needs. 3G is one of the candidate technologies that supplies customers with high data rates and throughput. Several phases are required to carry out a 3G radio design in any region. First of all, the region must be determined and studied on the digitized map in order to obtain useful information such as the area distribution; this gives good clarity for area classification, land terrain, heights, vectors, and other parameters. Then the propagation model is determined and defined, along with the link budget and the capacity and coverage calculations; as a result of this process, the nominal cell plan is produced. The designed network architecture has to be capable of accommodating the increasing population and subscriber data demands, so that the network can be upgraded to maintain its services and sustainability.

(Figure and table captions: Figure 3, Radio Network Design Process; Figure 4, Nablus city in TEMS digital map; Figure 5, Ericsson RBS MU, RRU and 3-sector site; Figure 7, Number of Downlink Capacity Sites; Figure 14, Coverage Sites Distribution according to Cell Planning Tool; Figure 19, Site Number 34 at Nablus City; Figure 21, 30-degree-step photos for the N-34 site; Table 1, Nablus city areas classification.)
Fecal Microbiota Transplant for Hematologic and Oncologic Diseases: Principle and Practice

Simple Summary
The transfer of a normal intestinal microbial community from healthy donors by way of their fecal material into patients with various diseases is an emerging therapeutic approach, particularly to treat patients with recurrent or refractory C. difficile infections (CDI). This approach, called fecal microbiota transplant (FMT), is increasingly being applied to patients with hematologic and oncologic diseases to treat recurrent CDI, modulate treatment-related complications, and improve cancer treatment outcomes. In this review paper, we discussed the principles and methods of FMT. We examined the results obtained thus far from its use in hematologic and oncologic patients. We also proposed novel uses for this therapeutic approach and appraised the challenges associated with its use, especially in this group of patients.

Abstract
Understanding of the importance of the normal intestinal microbial community in regulating microbial homeostasis, host metabolism, adaptive immune responses, and gut barrier functions has opened up the possibility of manipulating the microbial composition to modulate the activity of various intestinal and systemic diseases using fecal microbiota transplant (FMT). It is therefore not surprising that use of FMT, especially for treating relapsed/refractory Clostridioides difficile infections (CDI), has increased over the last decade. Due to the complexity associated with these diseases and their treatment, patients with hematologic and oncologic diseases are particularly susceptible to complications related to altered intestinal microbial composition. Therefore, they are an ideal population for exploring FMT as a therapeutic approach. However, there are inherent factors presenting as obstacles for the use of FMT in these patients. In this review paper, we discussed the principles and biologic effects of FMT, examined the factors rendering patients with hematologic and oncologic conditions susceptible to increased risks of relapsed/refractory CDI, explored ongoing FMT studies, and proposed novel uses for FMT in these groups of patients. Finally, we also addressed the challenges of applying FMT to these groups of patients and proposed ways to overcome these challenges.

Introduction
The human intestinal tract is colonized by thousands of different microbial species. In the last two decades, various studies have established the importance of these microbial organisms in maintaining and facilitating human health and well-being. These commensal microbial communities play vital roles in regulating host metabolism, maintaining intestinal microbial homeostasis, and influencing the host's adaptive immunity [1]. Consequently, it is not surprising that alterations in the normal microbial composition result in disease states. Therefore, it follows that restoring the intestinal microbial composition may treat disease states and ameliorate symptoms. Many factors affect normal intestinal microbial composition [2]. The most common factor by far is medication, especially broad-spectrum antibiotics. In addition to removing the causative factors and waiting for the spontaneous normalization of the normal intestinal microbial community, probiotics and prebiotics may help with the recovery. However, the most rapid and effective method through which to restore the intestinal microbiome is through a fecal microbiota transplant (FMT) from donors with a normal intestinal microbial composition.
FMT involves the instillation of stool that has been collected from a healthy donor and processed according to institution-specific protocols into the intestinal tract of a patient with an altered intestinal microbiome. The term fecal microbiota refers to the complex array of microorganisms that live symbiotically within the intestinal tract of the host. The concept of FMT is not new. It was first used in China in the form of a "yellow soup" in the fourth century to treat diarrhea [3]. There have also been reports of the consumption of fresh, warm camel feces by the Bedouins as a remedy for bacterial dysentery [4]. The first documented successful use of FMT was in 1958, when it was used to treat four patients affected by pseudomembranous colitis [5]. However, it was not until 1983 that the next case of successful use of FMT, in a patient with Clostridioides difficile (C. difficile) infection (CDI), was reported [6]. FMT has since primarily been applied to patients with relapsed/refractory CDI. However, there has been increasing use of this therapeutic approach for other intestinal and systemic diseases, albeit on a research basis. Surveys in the United States and in Europe have indicated that the number of procedures being performed has climbed rapidly over the last few years [7,8]. Therefore, FMT is an emerging therapeutic approach with very broad potential applicability. Due to the complexity of their diseases and treatment, patients with hematologic and oncologic diseases may be particularly suitable candidates for FMT. In this paper, we will discuss the principles and biologic effects of FMT, examine the factors rendering patients with hematologic and oncologic conditions susceptible to increased risks of relapsed/refractory CDI, explore the ongoing FMT studies, and propose novel uses of FMT in these groups of patients. Finally, we will address the challenges of applying FMT to these groups of patients and propose ways to overcome these challenges.

The Steps of FMT
FMT can be divided into two steps (Figure 1): (1) bowel preparation and (2) fecal material delivery. Step 1 of FMT involves bowel preparation using antibiotics to create the spatial niche for the transplanted microbes to populate and proliferate. The importance of this has been clearly demonstrated in a mouse model of FMT, in which pre-transplant antibiotic treatment facilitated more efficient engraftment compared to no bowel preparation or bowel preparation using a laxative [9]. Unlike patients with CDI, who usually have very restricted intestinal microbial diversity, patients receiving FMT for non-CDI purposes may depend even more on antibiotic bowel preparation for successful transplantation. Based on these considerations, the European Consensus Conference on FMT recommends that patients with recurrent CDI should receive three days of either vancomycin or fidaxomicin before the FMT procedure [10], although we typically administer oral vancomycin for seven days prior to FMT in patients with active colitis, with the last dose being given 24 h before the procedure. The aim of the antibiotics is to decrease the abundance of the C. difficile load and to create space for the establishment of the transplanted donor microbes. Routine administration of oral antibiotics in the absence of active colitis is generally not recommended due to concerns of diminished efficacy, especially in patients with diarrhea-predominant irritable bowel disease, in which antibiotic pretreatment has been shown to significantly reduce bacterial engraftment [11].
Bowel preparation with two to three liters of oral polyethylene glycol with electrolyte purgative is carried out on the day prior to FMT. Typically, 200-300 g of donor stool suspended in 200 to 300 mL of sterile normal saline is administered within ten minutes of the preparation of the stool mixture. The patients resume their regular diet and medications two hours after the procedure. There is currently no consensus on the optimal protocol for FMT administration, and the protocol varies at each institution.

Figure 1. The two steps of fecal microbiota transplant. In Step 1, patients undergo bowel preparation with oral antibiotics followed by laxative. At least 24 h after the last dose of oral antibiotics, the patient receives the donor fecal material via capsule, naso-enteral tube, or upper or lower gastrointestinal endoscopy.

Up until 1989, fecal material was delivered by retention enemas. However, alternative methods were subsequently developed, including fecal infusion via duodenal tubes, rectal tubes, colonoscopy, and colonic transendoscopic enteral tubing [12,13]. Nowadays, enteral routes include the use of an endoscope, a naso-enteric tube, or capsules by ingestion. FMT for recurrent CDI is equally successful whether given via colonoscopy, nasogastric tube, or enemas administered at home [14]. A meta-analysis of four studies on the relative rate of CDI cure following oral FMT capsules compared to FMT delivered through colonoscopy did not find any differences in efficacy between the two methods. There were no reports of serious adverse effects attributable to oral FMT capsules other than those associated with treatment failure.
Oral FMT capsules are becoming more accessible and should be administered as per the protocol of the capsule manufacturer. One possible barrier to their use is that the number of capsules that has to be ingested for a full dose is frequently large and may lead to the gastrointestinal symptoms of nausea, vomiting, and bloating [15]. However, a larger meta-analysis involving 24 studies reported that FMT by lower gastrointestinal endoscopy was superior to all other delivery methods [16].

Biologic Consequences of FMT
Intestinal microbial communities regulate host metabolism, maintain intestinal microbial homeostasis, and modulate the host immune response (Figure 2) [17]. As a result, FMT re-equilibrates these functions in patients whose disease state is due to intestinal dysbiosis. Unlike CDI, in which the intestinal dysbiosis is clearly characterized by an overgrowth of toxigenic C. difficile [13], it remains unclear whether the intestinal dysbiosis observed in other pathologic conditions is merely an association rather than a causation. If the relationship between the disease state and the intestinal microbial composition is merely one of association, restoring the normal intestinal microbial profile will not result in the improvement of the disease state and the amelioration of symptoms. Short chain fatty acids such as butyrate and propionate interact with the G-protein coupled receptors GPR-43/41 on L cells to produce glucagon-like peptide 1 (GLP-1) and peptide YY (PYY), which contribute to reducing food intake and improving glucose metabolism [17]. Outside the context of hematologic and oncologic conditions, FMT has been used to regulate host metabolism in both animal models of obesity [18,19] and in obese humans [20,21]. FMT from lean donors resulted in variable improvements in insulin sensitivity in obese recipients and in patients with metabolic syndromes [22,23]. Improvement was associated with an increased abundance of butyrate-producing intestinal microbes. FMT has also been used to re-establish normal intestinal microbial homeostasis. Currently, the most common indication for FMT is relapsed/refractory CDI. FMT restores the diversity of the intestinal microbial composition to create an ecologic competition between organisms to overcome and treat C. difficile overgrowth. Success rates of nearly 90% have been reported in most studies of patients with recurrent/refractory CDI (Table 1) [24][25][26][27][28]. Restoration of the normal intestinal microbial composition may also successfully eradicate colonization by multidrug-resistant organisms such as extended-spectrum beta-lactamase-producing (ESBL) Escherichia coli (E. coli) [29], vancomycin-resistant Enterococcus (VRE) [30], and carbapenem-resistant Enterobacteriaceae (CRE) [30]. Since the intestinal microbial community modulates host immune responses, FMT has been applied to patients with inflammatory bowel disease.
Randomized studies and non-randomized studies with a control arm have found higher clinical remission rates at eight weeks in patients with ulcerative colitis who were treated with FMT compared to groups treated with placebo colonoscopic infusion [31]. To date, no randomized clinical trial of FMT in Crohn's disease has been published. However, a meta-analysis of 11 case series and uncontrolled observational cohort studies found that slightly more than 50% of the patients achieved clinical remission [32]. Administration of a second FMT within 4 months of the initial FMT treatment maintained the clinical benefits of the first treatment [33]. FMT has also been tried in other conditions, such as human irritable bowel syndrome [34,35] and autism spectrum disorder [36], in mice and humans for multiple sclerosis [37,38], and in mice for Parkinson's disease [39]. In all of these disease states, the target for FMT is the gut-brain axis, which may be related to the breakdown of gut barrier functions due to changes in intestinal metabolomics, such as the decrease in the production of short chain fatty acids caused by alterations in the normal intestinal microbial composition.

CDI in Patients with Hematologic and Oncologic Diseases
Patients with hematologic malignancies are particularly at risk for the development of CDI. CDI occurs in 7-14% of cases [40], and recurrent CDI (rCDI) occurs in 11-31% [41,42] of patients with hematologic malignancies such as acute leukemias, multiple myeloma, and Hodgkin's and non-Hodgkin's lymphoma. The incidence of CDI in patients with acute myeloid leukemia has been reported to be between 4.8 and 9%; in those who undergo autologous hematopoietic stem cell transplantation (HSCT), a rate between 4.9 and 7.5% is observed, and in allogeneic HSCT recipients an incidence between 14 and 30.4% is observed [43,44]. The cumulative risk of developing peri-transplant CDI in patients undergoing allogeneic HSCT who had CDI within 9 months of the transplant was reported to be nearly 40% [45]. Similarly, the incidence of CDI among those with solid tumors is also very high, reported to be between 10 and 20% [46]. CDI in these patients adds to the morbidities of an already debilitated physical state due to the underlying malignancies and may contribute to treatment-related mortality. CDI-related mortality in these patients is approximately 20% [47]. CDI is, therefore, a significant complication in patients receiving chemotherapy for malignant diseases.

Factors Predisposing Patients to CDI
The mechanisms responsible for CDI pathogenesis in these groups of patients are multifactorial. In general, CDI risks are increased if there are changes to the normal commensal microbiota community (intestinal dysbiosis) or innate intestinal immunity, or disruption of the integrity of the intestinal epithelial lining (Figure 3). By far the biggest culprit contributing to the risk of CDI in these patients is the liberal use of broad-spectrum antibiotics, which alter intestinal microbial diversity and density, providing the opportunity for the colonization and proliferation of C. difficile, which is resistant to these antibiotics.
Although the early initiation of broad-spectrum antibiotics reduces morbidity and mortality in patients who develop fever in the presence of chemoradiation-induced neutropenia [48], a retrospective study of 251 adult cancer patients found that despite patients having an absolute neutrophil count of more than 500/µL, and 75% of the patients testing positive for a respiratory virus, 32% were still prescribed broad-spectrum antibiotics [49]. One of the first deterrents to C. difficile colonization in the intestine is the acidity of the gastric secretion. Both C. difficile spores and vegetative forms are inhibited by low gastric pH. It is therefore not surprising that the use of proton pump inhibitors (PPIs) is associated with an increased risk of CDI. A meta-analysis of 23 observational studies involving more than 300,000 patients found that PPI use was associated with a 65% increase in the incidence of CDI [50]. PPIs are often prescribed to hematologic and oncologic patients with severe thrombocytopenia and mucositis following chemoradiation therapy to reduce the risk of gastrointestinal bleeding. Therefore, PPIs increase the susceptibility of these patients to CDI. The primary bile acids chenodeoxycholic acid (CDCA) and cholic acid (CA), which make up 95% of the primary bile acids within the intestine, foster the germination of C. difficile spores into vegetative cells within the ileum [51]. Medications that affect the transit time of these primary bile acids will favor the germination of C. difficile spores, promoting the colonization, proliferation, and induction of CDI. Opioids induce intestinal hypomotility, which increases the bile acid transit time, and opioids have also been found to induce intestinal dysbiosis [52]. The incidence of hospital-onset CDI among chronic opioid users is two times higher than that of the general hospital population [53]. Chronic opioid use to treat cancer-related pain therefore increases the risk of CDI in these patients by not only inducing intestinal dysbiosis but also creating a condition that promotes the proliferation of C. difficile.
Chronic opioid use to treat cancer-related pain therefore increases the risk of CDI in these patients, not only by inducing intestinal dysbiosis but also by creating a condition that promotes the proliferation of C. difficile. Patients with hematologic and oncologic diseases are also rendered more susceptible to CDI because their innate host immunity is suppressed [54], whether by the primary disease process or by the chemotherapeutic agents used to treat it. Chemotherapeutic agents affect host immunity through their direct cytotoxic effects on lymphocytes and by inducing neutropenia. In the setting of allogeneic HSCT, the use of immunosuppressive agents to prevent or treat graft-versus-host disease (GVHD) has also been found to increase host susceptibility to CDI [54]. Intestinal epithelial injury in the form of mucositis interacts bidirectionally with CDI: on the one hand, CDI induces mucosal damage; on the other, the presence of mucosal injury places the host at increased risk of CDI. Normal intestinal epithelium consists not only of enterocytes but also of supportive cells, including the goblet cells responsible for the production of mucin and the Paneth cells that produce antimicrobial peptides (AMPs) [55]. Both intestinal mucin and AMPs regulate the intestinal microbial community and density. Changes in the intestinal microbial community and density may not only alter intestinal microbial metabolites such as the short chain fatty acids (SCFAs) that play a major role in enterocyte health [56], but may also create a niche favoring the colonization and proliferation of C. difficile. Damage to the normal intestinal epithelium, by chemotherapy or GVHD, will affect the integrity and functions of the goblet cells and Paneth cells and alter the production of mucin and AMPs, respectively. Injury to the intestinal epithelium can also result in the release of damage-associated molecular patterns (DAMPs) that further affect the intestinal microbial composition and density [57]. Cancer patients receiving chemotherapy that induces mucositis and patients with GVHD are therefore at a higher risk of developing CDI. Use of FMT in Hematologic and Oncologic Patients outside Treatment of CDI The immune regulatory effects of the intestinal microbial community have been exploited for treating acute GVHD following allogeneic HSCT. In total, the efficacy of FMT has been reported in 72 patients with corticosteroid-refractory acute GVHD (Table 2) [58][59][60][61][62][63][64][65]. Responses were observed in more than 50% of these patients. More importantly, the procedures were all well tolerated, except for the development of lower gastrointestinal bleeding and hypoxemia in one patient and of bacteremia in two patients, although these events were deemed unrelated to the FMT in all three cases. However, fatal donor-derived ESBL septicemia was reported in two patients who received FMT: one patient with hepatitis C infection in a clinical study of FMT for refractory hepatic encephalopathy, and another patient with therapy-related myelodysplastic syndrome in a study on the use of pre-emptive FMT following allogeneic HSCT [66]. The risk of such complications should fall as donor screening for microbial composition becomes more stringent.
Table 2. Reported studies of FMT for corticosteroid-refractory acute GVHD (reference; data source; number of patients; outcome; adverse events):
Goeser et al. [64] — two-center retrospective study; 11 patients (9 by capsule and 2 by nasojejunal tube administration); attenuation of stool volume and frequency observed in all 11 patients; abdominal pain in 3 patients and vomiting in 1 patient.
Mao et al. [65] — case report; 1 patient (received two cycles of FMT administered by capsules); complete response (CR); no adverse events reported.
Preclinical observations determined that the intestinal microbiota affect the response to immune checkpoint inhibitors (ICIs) [67]. Various retrospective studies also found that broad-spectrum antibiotics alter the intestinal microbial community and adversely impact responses in cancer patients being treated with ICIs [68][69][70][71]. Based on these findings, two studies examined the use of FMT in cohorts of patients with immunotherapy-refractory malignant melanoma to determine whether FMT could reverse the refractoriness to anti-Programmed Cell Death (PD)-1 immunotherapy. Three of the ten patients in one study regained their response to immunotherapy following FMT [72], and 6 of 15 in another study showed clinical benefits [73]. Ongoing FMT Studies in Patients with Hematologic and Oncologic Diseases The initial successes observed with FMT in patients with hematologic and oncologic diseases have led to many clinical studies currently ongoing in various institutions worldwide. There are nearly 40 studies registered with Clinicaltrials.gov; Table 3 shows representative studies in the US and in Europe. These studies primarily evaluate the safety of FMT, the use of FMT to prevent and treat GVHD following allogeneic HSCT, the improvement of ICI response, and the treatment of complications arising from cancer therapy. Many of these studies are expected to report mature data on these outcomes within the next five years. Table 3. Clinical studies registered in Clinicaltrials.gov for hematologic and oncologic patients in the US and in Europe. Harnessing the Potentials of FMT for Future Studies in Hematologic and Oncologic Diseases The potential range of functions of a balanced intestinal microbial composition is wide, providing great opportunities to tap into these potentials. Thus far, FMT has primarily been employed to restore normal microbial homeostasis to treat CDI, and to exploit its immune regulatory effects to treat corticosteroid-refractory GVHD following allogeneic HSCT and to restore treatment responsiveness in melanoma patients who developed refractoriness to immunotherapy. Based on the assumption that the host immune system may have already developed a tolerance to the intestinal microbiota, it may be possible to extend the immune regulatory mechanisms of FMT to induce immune tolerance and to reduce the risk of developing intestinal GVHD using a combined allogeneic HSCT and FMT from the same donor. Various studies have implicated a breakdown in intestinal barrier function in the pathology of certain diseases. The breakdown of the intestinal barrier occurs frequently in patients with hematologic and oncologic diseases, due to the direct cytotoxic effects of chemotherapy on the enterocytes or to indirect effects of chemotherapy in modifying the intestinal microbiome and interfering with the formation of the paracellular tight junctions (TJs) [74]. This increases the risk of translocation of luminal bacterial products into the systemic circulation, inducing culture-negative fever, and of bacteria, eliciting bacteremia and septicemia.
Fortifying the gut barrier and restoring mucosal integrity using keratinocyte growth factors resulted in a reduction in the incidence of culture-negative fever and documented bacteremia/septicemia following high-dose chemotherapy and HSCT [75]. The capacity of a balanced intestinal microbial composition to maintain gut barrier function, through the production of SCFAs that fortify enterocyte health and paracellular TJ development, may therefore be tapped for similar purposes. Recent studies of sickle cell disease (SCD) in mice [76,77] and in humans [78,79] have highlighted the presence and the role of disrupted gut barrier functions in shaping the phenotypes of the disease. This has been associated with intestinal dysbiosis characterized by a lower abundance of Alistipes and Pseudobutyrivibrio [80]. Manipulation of the intestinal microbial community with the antibiotic rifaximin, which led to an increased abundance of Akkermansia [81], was associated with a reduced frequency of painful vaso-occlusive crises [82]. This creates an opportunity to use FMT from non-sickle cell donors, with or without ex vivo enrichment with Akkermansia or Alistipes, which may be explored in the future to change the disease course in SCD. Challenges Facing FMT Use in Hematologic and Oncologic Patients The risk of introducing new infections remains the biggest concern in applying FMT to patients with hematologic and oncologic diseases. This anxiety among treating physicians has been amplified following the report of fatal ESBL E. coli septicemia in a patient with myelodysplastic syndrome who received pre-emptive FMT [66]. The risks are obviously higher in these groups of patients, who are often neutropenic and immunosuppressed and whose intestinal barrier is already compromised. Therefore, any use of FMT in this group of patients, even for CDI treatment, should be carried out in tightly controlled, well-designed clinical studies. Another challenge facing these patients is the risk of bowel perforation and gastrointestinal bleeding due to instrumentation during FMT against a background of intestinal mucositis. The development of capsule-delivered FMT should reduce this risk. The biggest challenge affecting the successful use of FMT in patients with hematologic and oncologic diseases is the persistence of the factors predisposing these patients to the conditions that require FMT. Patients being treated for CDI will likely still require frequent broad-spectrum antibiotics throughout the course of their cancer treatments, and the continued use of systemic antibiotics has been found to predict FMT failure [83]. Even after completing their courses of chemotherapy, these patients remain in an immunosuppressed state that predisposes them to further risk of CDI. In patients treated for GVHD, the intestinal microbiome likely reverts to a dysbiotic state a few months after FMT, since the alloreactivity persists in the background and the use of immunosuppressive agents continues. Intermittent repeat FMT will therefore be needed to maintain the restored intestinal microbial composition. The availability of capsule-delivered FMT may provide the solution, although it is still associated with the adverse events of diarrhea and abdominal discomfort/pain/cramping [84]. One could envisage the initial restoration of the intestinal microbial composition using a full FMT followed by daily or weekly maintenance of the microbiome using FMT capsules.
Interestingly, a recent systematic review of procedures performed over the last two decades found that FMT-related adverse events were lowest when the colonic transendoscopic tubing method was used (6.33%) and highest with gastroscopy (31.92%). The incidence of FMT-related adverse events was unexpectedly high with capsules (28.97%) [84], arguing against the safety of the capsule method, although how the capsule method compares with the other methods specifically in patients with hematologic and oncologic diseases remains to be determined. The problems associated with re-infection have been investigated in various studies; the incidence of failure and re-infection has been estimated at around 14% [83]. Repeat FMT significantly reduces the failure rate in patients treated for CDI [16], and patients who experience recurrence can still be salvaged with bezlotoxumab [85]. Concluding Remarks FMT is an emerging therapeutic approach with an enormous number of potential applications. However, concerns remain among treating physicians about its use in patients with hematologic and oncologic diseases because of the risk of introducing infections. A further barrier to the successful use of FMT in these groups of patients is the persistence of the factors predisposing them to the conditions requiring FMT. Future work will focus on methods to overcome these obstacles. Until the indications are well established, FMT in patients with hematologic and oncologic diseases should only be performed in closely monitored clinical trials.
Mitigation of Nitrogen Vacancy Ionization from Material Integration for Quantum Sensing The nitrogen-vacancy (NV) color center in diamond has demonstrated great promise in a wide range of quantum sensing. Recently, there have been a series of proposals and experiments using NV centers to detect spin noise of quantum materials near the diamond surface. This is a rich, complex area of study, with novel nano-magnetism and electronic behavior that the NV center would be ideal for sensing. However, due to the electronic properties of the NV itself and its host material, getting high quality NV centers within nanometers of such systems is challenging. Band bending caused by space charges formed at the metal-semiconductor interface forces the NV center into its insensitive charge states. Here, we investigate optimizing this interface by depositing thin metal films and thin insulating layers on a series of NV ensembles at different depths, to characterize the impact of metal films as a function of ensemble depth. We find an improvement in coherence and dephasing times that we attribute to the ionization of other paramagnetic defects. An insulating layer of alumina between the metal and diamond provides improved photoluminescence and higher sensitivity in all modes of sensing as compared to direct contact with the metal, providing as much as a factor of 2 increase in sensitivity (a factor of 4 decrease in integration time) for NV $T_1$ relaxometry measurements. Introduction The rapidly-developing field of 2-D materials has the opportunity to provide advances in the fields of data storage, magnetometry, and quantum information processing. However, due to their low-dimensional nature, established bulk characterization techniques, such as nuclear magnetic resonance (NMR) and electron paramagnetic resonance (EPR) spectroscopy, lack the sensitivity to properly probe the electron dynamics responsible for phenomena such as magnetism and superconductivity. A series of measurements have been proposed [1,2] and realized [3,4] offering the nitrogen-vacancy (NV) center as a new probe of low dimensional electronic phases of quantum materials. Due to its long-lived electronic spin state and ease of read-out and control, the NV is an excellent sensor of magnetic and electric noise, with bandwidth ranging from DC to GHz provided through a variety of sensing modalities [5,6,7,3,8]. A challenge of working with NV centers is preserving their notable spin properties as the NVs form closer to the diamond surface. Without extensive oxidation treatments [7,9,10], the diamond surface can produce a strong upward band bending, depleting the NV of its electron and converting the NV to its magnetically-insensitive neutral and positive charge states. Additionally, the surface provides a source of noise that worsens the NV's dephasing, decoherence, and relaxation [11,12]. These problems can be exacerbated by the integration of metals, conductive materials, or materials with large work functions onto the diamond surface. The addition of a metal to the diamond surface forms a positive charge at the interface that is compensated by negative charge in traps and defects in the diamond, like the aforementioned NV center. Other sources of negative charge, such as substitutional nitrogen ($N_s$), may also be ionized in this process.
This creates an involved competition with regard to sensing, where some ionization of $N_s$ may be beneficial in improving coherence properties, but too much ionization may destabilize the NV$^-$. Here, we explore how dense ensembles of NVs are affected by the integration of such materials. We deposited a thin film of metal onto the diamond surface, characterized the spin properties of the NVs under the metal film, and compared them to an uncoated area. We repeated this for a range of NV ensembles at different depths. We find that, depending on the depth of the NVs, spin properties such as the $T_2^*$ and $T_2$ times improve by as much as a factor of 1.4 and 1.6, respectively, while relaxation times and photoluminescence rates are quenched due to proximity to the thin metal film. We tune this effect by adding a thin layer of alumina between the metal and diamond, and find that the photoluminescence intensity improves while some degree of improvement to the $T_2^*$ and $T_2$ times is preserved, with minimal impact on relaxation times, preserving utility for $T_1$ relaxometry. We conclude by estimating the shot-noise-limited sensitivity in different sensing modalities, and find that for all forms of sensing, the addition of the insulating layer improves the sensitivity as compared to direct contact with the thin film. Sample Preparation We start with a series of electronic-grade diamonds (Element 6) with natural $^{13}$C abundance (1.1%). The samples are implanted with $^{15}$N and annealed to form NVs. The implant parameters of the samples are described in Table 1. We implant these samples to achieve an $N_s$ concentration of 100 ppm according to SRIM. We follow the annealing procedure and oxidation treatment described in our previous work [7]. After the oxidation treatment, the samples have 2 nm of alumina (Al$_2$O$_3$) deposited by atomic layer deposition (ALD) at 200 °C. The alumina is etched from half of the diamond, and then a perpendicular half of the diamond has 50 nm of copper deposited using electron beam evaporation. This results in 4 regions: the bare diamond 'Ref' region, diamond with alumina, 'AlOx', diamond with copper, 'Cu', and diamond with alumina and copper, 'Cu + AlOx' (Fig. 1(a)). This is done so we can compare the effects of each environment in the same measurement series. We excite NV centers using a 532 nm laser with an optical power of 280 mW (before the objective), focused down to a roughly 40 µm spot [7]. The excitation provides a mechanism to spin-initialize into the $|0\rangle$ spin state and to read out the spin state from the spin-dependent fluorescence rate [13]. The samples rest on a sapphire substrate with a copper loop fabricated on it. The copper loop is connected to an amplified and gated microwave (mw) source to provide mw pulses for spin control. The photoluminescence (PL) is filtered using a 550 nm dichroic mirror, a 650 nm long-pass filter, and a 532 nm notch filter to suppress laser leakage and NV$^0$ PL. The PL is detected by an A-CUBE-S1500-3 APD with variable gain. We perform scanning PL measurements using the stepper motors of a Thorlabs NanoMax 300. Sample characterization For each sample, we measure the PL intensity (Fig. 1(b)), NV polarization time (Fig. 1(c)), $T_2$ (Fig. 1(d)), $T_2^*$ (Fig. 1(e)), and $T_1$ (Fig. 1(f)). Representative data for a single point in each region are shown in Fig. 1. The NV polarization time is measured by recording two consecutive PL time traces, one with a mw $\pi$ pulse and one without, and taking the difference between the time traces.
This can be a probe of processes like Fluorescence Resonant Energy Transfer (FRET) [14], as a reduced excited-state lifetime reduces the polarization efficiency. We measure $T_2$ and $T_2^*$ using Hahn echo (Fig. 1(d)) and Ramsey interferometry (Fig. 1(e)), respectively. Both measurements are done on the $|0\rangle \leftrightarrow |-1\rangle$ transition of the NV ground-state spin sublevels and are thus not immune to strain and electric field fluctuations [15,16]. The second $\pi/2$ pulse of both of these measurements toggles between a $\pi/2$ and a $3\pi/2$ pulse, and the difference between two consecutive measurements is taken to suppress noise. $T_2^*$ is the limiting timescale for the ODMR linewidth and an important value for DC magnetometry with NVs. $T_2$ is the lifetime of a coherent state and is the limiting timescale for nanoscale NMR spectroscopy with NVs or for sensing of low-frequency (100s of kHz to 10s of MHz) noise. The $T_1$ measurement is referenced by applying a $\pi$ pulse on every other measurement and subtracting two consecutive measurements. The relaxation time, $T_1$, is sensitive to noise near the NV spin resonance frequency, providing a sensing mechanism over a wide range of frequencies tuned with a magnetic field [8,17,18]. In order to account for sample and microwave driving inhomogeneities, spin measurements are performed at points spaced by 25-50 µm along a pair of horizontal lines 500 µm (750 µm for $T_1$) long, running from the 'Cu + AlOx' region into the 'AlOx' region and from the 'Cu' to the 'Ref' region (red lines in Fig. 1(b)). At each point we measure Rabi oscillations and the NV resonance frequency along with the spin properties of the NVs. The PL rate, NV polarization time, contrast-weighted shot-noise ($C\sqrt{I_{\rm pl}t_{\rm ro}}$), $T_2^*$, $T_2$, and $T_1$ are then averaged within each region. No notable spatial dependence was observed within any given region or near the transition from one region to another for all properties except $T_1$: the $T_1$ of shallower NV ensembles did not reach a stable reference level until the probed position was a few hundred µm away from the metal edge. These measurements and processing steps are performed for all four regions across all six samples, with the exception of the 3 keV sample, where only the 'Cu + AlOx' and 'AlOx' areas are measured, the signal-to-noise ratio of the 'Cu' area being too low to achieve usable signal in a reasonable time. For the NV ensemble samples with depths less than 10 nm, no notable differences were observed between the 'Ref' region and the 'AlOx' region. For the deepest two NV ensembles, a slight increase in PL was observed for 'AlOx' relative to 'Ref'. Due to the wide range of PL rates across the various NV ensembles, the gain of our APD needed to be adjusted from one diamond to another. Because of this, we do not quote explicit photon count rates, as the APD responsivity and noise floor are not the same from diamond to diamond. To this end, all measurements or calculations that require a PL rate are expressed as ratios between regions. This still provides the critical information of relative PL rates in different regions across a single diamond.
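Several of the measurements above share the same referencing scheme: acquire alternating traces with and without a microwave $\pi$ pulse and subtract them. The following is a minimal Python sketch of that processing applied to a synthetic $T_1$ trace; the single-exponential model, the noise levels, and all numerical values are illustrative assumptions, not the analysis code or data of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
tau = np.logspace(-5, -2, 20)            # wait times: 10 us to 10 ms
t1_true = 2e-3                           # assumed relaxation time (s)

# Synthetic referenced traces: alternate measurements with and without a
# microwave pi pulse; their difference cancels common-mode drift and decays
# with the spin-lattice relaxation time T1.
contrast = 0.03 * np.exp(-tau / t1_true)
trace_no_pi = 1.00 + 0.002 * rng.standard_normal(tau.size)
trace_pi = trace_no_pi - contrast + 0.002 * rng.standard_normal(tau.size)
referenced = trace_no_pi - trace_pi      # ~ amp * exp(-tau / T1)

def decay(t, amp, t1):
    return amp * np.exp(-t / t1)

(amp_fit, t1_fit), _ = curve_fit(decay, tau, referenced, p0=(0.03, 1e-3))
print(f"fitted T1 = {t1_fit * 1e3:.2f} ms")
```

The same subtract-and-fit pattern applies to the polarization-time traces, with the fit model swapped for the appropriate decay curve.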
NV characterization We perform the previously mentioned measurements across a series of diamond samples with variable depth (Table 1). In Fig. 2, we show the ratios of NV properties critical to DC and AC sensing between the different regions. We emphasize that, although for shallower NV ensembles the properties in both the 'Cu' and 'Cu + AlOx' regions are strictly worse than in the 'Ref' region, the relevant comparison for sensing of nanoscale systems is 'Cu + AlOx' to 'Cu'. However, comparisons between the Cu-coated regions and the 'Ref' region do provide interesting insight into the changes in the environments of the differing regions caused by the integration of the material. The PL rate, shown as a function of depth for the different regions in Fig. 2(a), shows a gradual increase in PL relative to the 'Ref' region for both the 'Cu' and 'Cu + AlOx' regions. Importantly, the 'Cu + AlOx' region has consistently higher PL than the 'Cu' region, although the level of improvement decreases for deeper NV ensembles. For the deeper NV ensembles, we find the PL relative to 'Ref' actually increases by as much as a factor of 4. This may be due to the metal and alumina increasing the collection efficiency by acting as a mirror and reflective coating, or to an increase in the spontaneous emission rate via plasmonic interaction [19]. We also observe the PL contrast between spin states in the 'Cu + AlOx' and 'Cu' regions to be much lower than that of the 'Ref' region, plateauing at 10 nm. This may be due to background PL from NV$^0$ for shallower NVs. Also, a slightly higher Rabi frequency was observed for the 'Ref' and 'AlOx' regions due to their closer proximity to the mw loop; this could result in slightly lower contrast due to lower excitation bandwidth. When comparing 'Cu + AlOx' to 'Cu', we find an average 25% increase in contrast with no clear depth dependence. We observe the polarization time improving by as much as a factor of two for the shallowest NV ensembles (Fig. 2(c)) when compared to the 'Ref' region. A fast polarization time for the NV is essential to reducing the overhead of measurements. We attribute this increase to a reduced excited-state lifetime due to non-radiative relaxation caused by the metal through processes like FRET or Surface Energy Transfer (SET) [14]. For deeper ensembles, we find the polarization rate improves; we attribute this to the thin metal film acting as a mirror and providing better laser excitation. The 'Cu + AlOx' region is less impacted by these processes due to the additional 2 nm standoff from the material. An important parameter that appears in all shot-noise-limited sensitivity estimates is $C\sqrt{I_{\rm pl}t_{\rm ro}}$: the fluorescence contrast between spin states times the shot-noise of a single measurement. When considering the comparison to the 'Ref' region, we notice a dramatic drop, to as low as one fifth of the reference value (Fig. 2(d)). This is a mixture of notably reduced PL caused by band bending ionizing NV$^-$, as well as non-radiative relaxation reducing photon generation from NV$^-$ [14]. However, the comparison between the 'Cu + AlOx' and 'Cu' regions sees a notable increase in this parameter for the shallowest NVs.
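To make the role of this parameter concrete, here is a short Python sketch comparing the figure of merit $C\sqrt{I_{\rm pl}t_{\rm ro}}$ between regions; the contrast, count-rate, and readout values are made-up placeholders, not measurements from this work.

```python
import math

def figure_of_merit(contrast, pl_rate_cps, readout_s):
    """Contrast-weighted shot-noise, C * sqrt(I_pl * t_ro): larger is better."""
    return contrast * math.sqrt(pl_rate_cps * readout_s)

# Hypothetical values for a shallow ensemble in each region.
ref = figure_of_merit(contrast=0.020, pl_rate_cps=5e6, readout_s=300e-9)
cu = figure_of_merit(contrast=0.008, pl_rate_cps=1e6, readout_s=300e-9)
cu_alox = figure_of_merit(contrast=0.010, pl_rate_cps=2e6, readout_s=300e-9)

print(f"Cu / Ref     = {cu / ref:.2f}")       # < 1: worse than bare diamond
print(f"Cu+AlOx / Cu = {cu_alox / cu:.2f}")   # > 1: insulating layer helps
```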
Dephasing and Decoherence We observe an interesting phenomenon when we look at $T_2^*$, the dephasing time (Fig. 2(e)), and $T_2$, the decoherence time (Fig. 2(f)): the coherence properties of the NVs improve under the metal. To explore why this happens, we consider the major causes of decoherence and dephasing at this depth and nitrogen density. At the depths of our ensembles, surface noise from dangling bonds or other surface imperfections (Fig. 3(a)) has been seen to play a major role in decoherence [11,12]. However, due to our very high nitrogen concentration (100 ppm), we posit that paramagnetic noise from the $N_s$ (Fig. 3(a)) is the dominant decoherence and dephasing source [15]. We provide a qualitative explanation for the trend in $T_2^*$ and $T_2$ through a competition between these two noise sources, with the $N_s$ being ionized by the band bending caused by the metal. The $N_s$ is known to have a donor level 1.7 eV below the conduction band [20], while the NV$^-$ ground state level has been found to be 2.6 eV below the conduction band [21]. We propose that for very shallow NV centers, both the $N_s$ and the NV are ionized by the band bending; the Fermi level drops below the defect levels (Fig. 3(b)). This regime shows a substantial decrease in PL, with spin properties dominated by surface noise, but also a reduced noise environment from the $N_s$. As the NVs get deeper, an ideal depth appears where there is sufficient $N_s$ to charge the NVs into the negatively charged state, but not so much that the NVs are dominated by the nitrogen noise. This regime is defined by the Fermi level lying above the NV level but below the $N_s$ level. As the NVs get sufficiently deep, the influence of the band bending becomes negligible, the $N_s$ keep their electrons, and the NVs become dominated by the nitrogen noise again, with the Fermi level approaching the bulk value determined by the nitrogen doping level. We show the explicit $T_2^*$ and $T_2$ values as a function of depth (Fig. 3(c,d)) with the relevant regions highlighted according to our qualitative description. An important note to this discussion is that this is an indirect effect: the metal or material is not directly reducing the $T_2$ or $T_2^*$; it is engineering the electronic environment in a way that impacts the NV's magnetic noise environment. Additionally, these changes are very much nitrogen-concentration dependent; lower nitrogen densities may simply not have enough electrons to compensate the surface charge induced by the metal. Any sort of relaxometry using $T_2$ or $T_2^*$ would need to account for these environmental changes, and the same holds for other modes of sensing, such as DC magnetometry, which is $T_2^*$-limited. Spin-Lattice Relaxation While $T_2$ and $T_2^*$ are critical parameters for sensing of DC magnetic fields and low frequency sources on the order of MHz, many proposals for using NVs to probe nanoscale electronic states use $T_1$ relaxometry [1,23], which has already been used as a probe of conductivity in metals [24,25]. We use the previously established techniques to demonstrate two features: we can recover the same information from $T_1$ relaxometry measurements with and without the alumina film, and, for sensing Johnson noise in conductors, the additional standoff provides no major deficits and even improves the sensitivity of $T_1$ relaxometry in the sample-dominated regime, where the induced relaxation rate from the metal is much greater than the intrinsic relaxation rate. In order to extract the conductivity from our $T_1$ data, shown in Fig. 4(a), we follow the process laid out in Ref. [24]. We must determine the relaxation rate from external sources, $\Gamma_{\rm ext}(d, \sigma)$, where $d$ is the standoff between the NV ensemble and the metal and $\sigma$ is the conductivity of the metal. We do this by measuring the intrinsic relaxation rate, $\Gamma_{NV,\rm int}$, of NVs unperturbed by the metal, and the relaxation rate of NVs affected by the metal, $\Gamma_{NV}(d, \sigma)$. Due to the nature of our experimental configuration, we can use the 'Ref' region to determine our $\Gamma_{NV,\rm int}$.
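The extraction just described reduces to a background subtraction followed by a one-parameter fit to the $1/d$ Johnson-noise model given below (Eqns. (1)-(2)). A minimal Python sketch follows; the depths and rates are synthetic placeholders, and the prefactor in the model function follows the reconstructed Eqn. (2) and should be checked against Ref. [24] before any quantitative use.

```python
import numpy as np
from scipy.constants import mu_0, k, pi
from scipy.optimize import curve_fit

gamma_e = 1.761e11          # electron gyromagnetic ratio (rad s^-1 T^-1)
T = 300.0                   # temperature (K)

def gamma_ext_model(d, sigma):
    # Johnson-noise-induced rate for NV depth d (m) above a thick film of
    # conductivity sigma (S/m); the 1/(16*pi) prefactor is an assumed
    # convention -- the essential feature is the 1/d scaling of Eqn. (2).
    return gamma_e**2 * mu_0**2 * k * T * sigma / (16 * pi * d)

# Hypothetical measured rates (s^-1): total rate under the metal and the
# intrinsic rate taken from the uncoated 'Ref' region.
depths = np.array([5, 10, 20, 40, 75]) * 1e-9
gamma_nv = np.array([900.0, 520.0, 310.0, 190.0, 130.0])
gamma_int = 80.0
g_ext = gamma_nv - gamma_int            # background subtraction, Eqn. (1)

(sigma_fit,), _ = curve_fit(gamma_ext_model, depths, g_ext, p0=(1e6,))
print(f"fitted conductivity: {sigma_fit:.2e} S/m")
```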
With this, we can use the following equation to determine the extrinsic relaxation rate:

$$\Gamma_{\rm ext}(d, \sigma) = \Gamma_{NV}(d, \sigma) - \Gamma_{NV,\rm int}. \qquad (1)$$

With $\Gamma_{\rm ext}$, we can fit the depth dependence to determine the conductivity, using

$$\Gamma_{\rm ext}(d, \sigma) = \frac{\gamma_e^2 \mu_0^2 k_B T \sigma}{16\pi d}, \qquad (2)$$

where $\gamma_e$ is the electron gyromagnetic ratio, $\mu_0$ is the vacuum magnetic permeability, $k_B$ is the Boltzmann constant, and $T$ is the temperature. $\sigma$ is left as a free parameter to vary in the fit. Fitting was attempted to account for the film thickness, but the thickness was consistently fit to an arbitrarily large value, indicating that our range of depths is much smaller than the film thickness. The results of the calculation of $\Gamma_{\rm ext}$ and the estimation of the conductivity are shown in Fig. 4(b). Note that, for the data from the 'Cu + AlOx' region, the x-axis is shifted by 2 nm to account for the additional 2 nm spacing provided by the alumina. The most important feature is that the determined conductivities agree with each other quite well, and both values agree with previous examinations of thin copper films [26]. It is worth noting that the $T_1$ of the 'Cu + AlOx' region did not differ substantially from that of the 'Cu' region (Fig. 4(a)). This is because, for the case of Johnson noise, the induced relaxation rate scales as $d^{-1}$ (see Eqn. (2)); an additional 2 nm is not a substantial change within the error of our measurement. Influence on Sensitivity We now discuss sensitivity and the impact that the AlOx layer has on the sensitivity of the NV ensembles for different sensing modalities. The sensitivity is the noise floor of a measurement given 1 second of integration time; by this definition, a lower sensitivity value corresponds to a better sensor. We compare the sensitivity of NV ensembles in the different regions at different depths. As mentioned earlier, the contrast-weighted shot-noise appears in all sensitivity estimates, with $\eta \propto 1/(C\sqrt{I_{\rm pl}t_{\rm ro}})$; however much the other parameters vary from region to region, it plays a dominant role when comparing the sensitivity of different regions. This can be seen by looking at the dependencies of the sensitivity in the different modes. We observe improvements in $T_2$ and $T_2^*$ by a factor of 1.5 when compared to the 'Ref' region. This results in improvements of the sensitivity by a factor of roughly 1.2 ($\sqrt{1.5}$) for $\eta_{\rm DC}$ and $\eta^{\rm mean}_{\rm AC}$, the sensitivities relevant for DC and AC magnetometry. For nanoscale NMR measurements, the sensitivity $\eta^{\rm var}_{\rm AC}$ will improve by a factor of 1.8 ($1.5^{3/2}$), due to the stronger dependence on $T_2$ for variance detection. While these are sizable improvements, the contrast-weighted shot-noise is reduced by a factor of 6 as compared to the reference. In this regard, the sensitivity compared to the reference is strictly worse, except for the deeper ensembles, which see a slight improvement. Our main focus is the comparison between the two copper-coated regions. We do note that the sensitivity ratios compared to the reference region reach 1 at around 10 nm deep and go below 1, meaning improved sensitivity, for deeper ensembles. This change can be appreciable, as much as 0.5, and the causes of this increase are the increased PL rate and $T_2$ or $T_2^*$. When the 'Cu + AlOx' region is compared to the 'Cu' region, the relevant comparison for sensing of quantum materials, it is clearly more sensitive.
Although the insulating layer does decrease the degree of improvement for $T_2$ and $T_2^*$, it improves the contrast-weighted shot-noise (inverse shown in Fig. 5(a)) by as much as a factor of 2, resulting in a reduction (improvement) in sensitivity of a factor of 2. There is also the extreme case of the 3 keV implanted sample, where measurements in the 'Cu' area were not feasible due to a very low signal-to-noise ratio. For all forms of quantum sensing, the 'Cu + AlOx' region improves the sensitivity for all measured NV ensemble depths when compared to the 'Cu' region. The impact is most prominent for the shallowest NV ensembles, but there is still a 20-40% improvement for the deeper NV ensembles. As mentioned, the changes in PL rate dominate, resulting in very similar looking data for all modes of sensing. For DC sensing, the relative change in $T_2^*$ was effectively flat (see Fig. 2(e)). The DC and AC mean sensitivities scale as the square root of $T_2^*$ and $T_2$, respectively, further flattening the small difference between the two regions and resulting in trends dominated by the change in $C\sqrt{I_{\rm pl}t_{\rm ro}}$ (Fig. 5(b,c)). A slight deviation from this trend is observed in the AC variance sensitivity, which scales like $T_2^{3/2}$. In Fig. 2(f), we saw a slight depth dependence in $T_2$, with the $T_2$ ratio between the regions under discussion being less than 1 for shallow ensembles. The stronger $T_2$ dependence in variance sensing amplifies this dependence, and a slightly weaker improvement in $\eta^{\rm var}_{\rm AC}$ as a function of depth is observed. The sensitivity of $T_1$ relaxometry needs to be treated separately, because the measurement revolves around observing changes in $T_1$. In the regime where $\Gamma_{\rm ext} \gg \Gamma_{\rm int}$, the sensitivity can be estimated following Refs. [8,17]; this is a sensitivity to changes in $T_1$ with respect to the intrinsic $T_1$. If $\Gamma_{\rm ext} = 1/T_1$, this means a shorter $T_1$ takes less time to sense. The sensitivity ratio between the 'Cu + AlOx' and 'Cu' regions is shown in Fig. 5(e) for different depths. As previously mentioned, the alumina provided minimal change to the $T_1$, so this sensitivity is dominated by the PL improvement provided by the insulating layer. For the shallowest samples, we see sensitivity improvements of as much as a factor of 2. Conclusions This work highlights the importance of how integrating a material on the diamond surface can impact NV performance. Though we chose thin copper films, hardly a low dimensional quantum material, to demonstrate these effects, we view this as a sort of worst-case scenario; not all processes observed here will be observed to the same degree in other materials. A low dimensional material like magic-angle graphene will not only provide a magnetic noise source as it approaches superconductivity [1], it will also provide an acceptor for processes like FRET [14] and can reduce the PL intensity. Surface charging caused by the integration of the material may cause band bending, making changes in spin properties difficult to isolate from changes in the magnetic environment. Our approach of incorporating an insulating layer of alumina has reduced the impact of the integration process and enabled the use of shallower NV ensembles for nanoscale quantum sensing. We have characterized the PL rate, NV polarization time, $T_2$, $T_2^*$, and $T_1$ for a series of NV ensembles at variable depths.
These measurements were performed in 4 different regions: with copper in direct contact with the diamond surface, 'Cu'; with copper insulated from the diamond surface by 2 nm of alumina, 'Cu + AlOx'; with just the alumina layer, 'AlOx'; and bare diamond, 'Ref'. We observed a general decrease in the PL rate for NV ensembles closer to the copper film, which could be tuned with the addition of an insulating layer. We also found a relative increase of the NV polarization time as NV ensembles approached the film. We saw a non-monotonic improvement in $T_2$ and $T_2^*$ over the characterized depths, which we attribute to band bending caused by the metal film ionizing paramagnetic noise sources near the NV center inside the diamond. We considered the impact of the alumina film in terms of sensing for different sensing modalities. For the deepest NV ensembles, the alumina and metal seemed to provide an overall improvement in sensitivity when compared to the reference region. In general, NVs at or deeper than 10 nm had sensitivities on par with the reference region, though the exact contributions to that sensitivity differed from those of the reference. For sensing with $T_1$ relaxometry, the 'Cu + AlOx' region provides consistently superior sensitivity to the 'Cu' region, with as much as a factor of 2 increase in sensitivity (a factor of 4 decrease in integration time) for $T_1$ relaxometry measurements. There are further techniques that could be developed to suppress the influence of integrated materials. Recent work on using applied electric fields to engineer the charge environment in a more deterministic manner has been demonstrated for single NVs [27]. Another approach would be to use another donor, such as phosphorus, to provide charge. Such co-doping has been shown to provide high NV conversion efficiency and better NV properties, and would provide more charge to passivate the induced charge with minimal cost to sensor quality [28,29,30]. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
A rare case of bilateral congenital upper eyelid eversion managed conservatively Nilesh Jain, Julie Jain Key words: Bilateral, eversion, hypertonic saline, upper eyelid A 12-h-old female infant was referred to our tertiary care hospital from another hospital for the management of eversion of both upper eyelids. The full-term neonate was born by cesarean section for failed induction of labor. On ocular examination, her upper eyelids were totally everted (left eye more than the right eye), with severe conjunctival chemosis and greenish discharge suggestive of secondary infection [Fig. 1]. After instillation of 0.5% proparacaine eye drops and using Desmarre's lid retractor, the anterior segment was examined and found to be normal, with negative fluorescein staining of the cornea. On laboratory testing, the neonate had a C-reactive protein (CRP) level of 5.1 mg/dL and an elevated total leukocyte count (17,000/µL), with raised total (10.2 mg/dL) and indirect (9.46 mg/dL) bilirubin levels. The neonate was admitted to the neonatal intensive care unit (NICU) for neonatal sepsis and hyperbilirubinemia, and was started on systemic antibiotics and phototherapy. For the ocular pathology, manual repositioning of the everted eyelids was tried without any success. After that, magnesium sulfate soaked dressings were given to reduce the chemosis, but there was no positive response, so we tried 5% hypertonic NaCl soaked dressings every 6 h. This resulted in a reduction in chemosis from day 2, and on day 6 the chemosis had completely resolved, with normal closure of the eyelids [Fig. 2]. Along with this, the neonate was also given 0.5% moxifloxacin eye drops and topical 1% carboxymethylcellulose (CMC) eye drops every 6 h. The presentation of the disease can be unilateral or bilateral. The exact etiopathogenesis is not known, but multiple theories have been proposed to describe its etiology. These include birth trauma, hypotonia of the orbicularis oculi muscle, anterior lamellar shortening and posterior lamellar widening of the eyelid, failure of the orbital septum and levator aponeurosis to fuse, lateral elongation of the eyelid, and an inelastic lateral canthal ligament [5]. The orbicularis spasm leads to venous stasis and conjunctival chemosis, thus protecting the cornea from infection and exposure [8].
For conservative management, various treatment modalities are available, including moist dressings, taping of the eyelid, pressure patching, hypertonic saline dressings, topical antibiotics, and lubricants [8,9]. The surgical management options include scarification of the exposed conjunctiva, temporary tarsorrhaphy, subconjunctival injection of hyaluronic acid, fornix sutures, and full-thickness upper lid skin grafts [5,9,10], but these are reserved for cases not responding to conservative treatment. Also, lid manipulation can lead to stimulation of autonomic effects such as respiratory arrest in neonates [3]. Our case was managed with hypertonic saline dressings and antibiotic and lubricating eye drops. Early treatment can prevent complications like conjunctival scarring, keratinization, and secondary infection. This case report advocates a strictly conservative approach to management. The aim of this photo essay is to create awareness among first-time viewers, such as health care professionals in ophthalmology and neonatology. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published, and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
3,4-Bis(4-nitrophenyl)-1,2,5-oxadiazole 2-oxide The title compound, C14H8N4O6, a new 1,2,5-oxadiazole N-oxide derivative, was formed by dimerization of 4-nitrobenzaldehyde oxime. The compound crystallizes with two independent molecules per asymmetric unit. The N-oxide O atom is disordered over two sites in each molecule; the site occupancy factors are 0.57/0.43 and 0.5/0.5. The mean planes through the two benzene rings are inclined to the planar 1,2,5-oxadiazole ring by 25.03 (11) and 41.64 (11)° in one molecule, and 22.58 (11) and 42.66 (11)° in the other molecule, the smaller angle being for the ring on the oxide side of the oxadiazole ring in each case. In the crystal structure, the individual molecules form centrosymmetric dimers linked via C—H⋯O hydrogen bonds. The dimers of one molecule are then linked to those of the other molecule via C—H⋯O hydrogen bonds, forming a three-dimensional network. Acta Cryst. (2008). E64, o511 [doi:10.1107/S1600536807066640] G. Alhouari, A. Kerbal, N. B. Larbi, T. B. Hadda, M. Daoudi and H. Stoeckli-Evans Comment In the course of our research aimed at the synthesis of new efficient antitubercular agents containing simple pharmacophore sites of the type X-C-C-Y, we turned our attention to the spiro-isoxazolines, which possess a rigid (O=C-C-O) pharmacophore. These compounds display interesting biological properties, such as herbicidal, plant-growth regulatory and antitumor activities (Howe & Shelton, 1990; Smietana et al., 1999). The preparation of the spiro-isoxazolines in which we are interested normally involves the reaction of a nitrile oxide [from (E)-4-nitrobenzaldehyde oxime, (I)] with an isothiochromanone in a solution of hydrogen peroxide (Kerbal et al., 1990). We have noted many times the formation of a by-product during this reaction. Finally, this compound was isolated and examined crystallographically. It was found to be a new 1,2,5-oxadiazole N-oxide derivative, (II). The molecular structure of compound (II) is shown in Fig. 1. The compound crystallizes with two independent molecules (1 and 2) per asymmetric unit. The 1,2,5-oxadiazole units are disordered, with two alternative positions for the N-oxide O atom [atom O1a/O1b in molecule 1, and atom O21a/O21b in molecule 2]. There are some short intramolecular C···O contacts in the two molecules involving the disordered atoms: O1b with the neighbouring C atoms C2 and C3, and atom O21b with atom C22. A search of the Cambridge Crystallographic Data Base (Version 1.8, last update May 2007; Allen, 2002) indicates that such short interactions are not unusual.
The 1,2,5-oxadiazole ring is planar [to within 0.008 and 0.009 Å in molecules 1 and 2, respectively], and the bond distances and angles are similar to those in the diphenyl analogue 3,4-diphenylfurazan N-oxide, (III) (Sillitoe & Harding, 1978). They do not indicate the presence of delocalized electron density, as found in the dichlorophenyl analogue 4,5-bis(2,6-dichlorophenyl)-1-oxide-2-oxa-1,3-diazole, (IV) (Easton et al., 1995), or in a D-mannose-derived furoxan (Baker et al., 2002); the C=N bonds are significantly shorter than the C-C or O-N bonds. The remainder of the bond distances in (II) are within normal limits (Allen et al., 1987). The best planes through the phenyl rings are inclined to the best plane through the 1,2,5-oxadiazole ring by 25.03 (11) and 41.64 (11)° in molecule 1, and 22.58 (11) and 42.66 (11)° in molecule 2. This is quite different to the situation in (III), where the same dihedral angles are 16.7 and 59.6°, or in (IV), where the same dihedral angles are 63.1 (3) and 65.6 (5)°. In the crystal structure of (II), the individual molecules are linked to their symmetry-related molecules via C-H···O hydrogen bonds to form centrosymmetric dimers. These dimers are in turn linked by other C-H···O hydrogen bonds to form a three-dimensional network. Details of the hydrogen bonding are given in Table 1 and Fig. 2. The formation of compound (II) is similar to that described by Baker et al. (2002), who studied in detail the synthesis and X-ray structure of 3,4-dipyranosyl-1,2,5-oxadiazole 2-oxide. Similarly, we found that the reaction of 4-nitrobenzaldehyde oxime with pure NaOCl in CHCl3, but never in CH2Cl2, gives an almost quantitative yield of (II) (95%) on simply stirring at room temperature for 16 h. Experimental The reaction of 4-nitrobenzaldehyde oxime with pure NaOCl, in a 2:1 molar ratio, in CHCl3 (but never in CH2Cl2) gives an almost quantitative yield of (II) (95%) on stirring at room temperature for 16 h. Yellow block-like crystals suitable for X-ray analysis were obtained by slow evaporation of an ethanol solution of (II). Refinement The N-oxide O atom is disordered over two sites in each molecule (1 and 2); the occupancies were finally fixed at O1a/O1b = 0.57/0.43 and O21a/O21b = 0.5/0.5. The hydrogen atoms could all be located from difference Fourier maps. They were included in calculated positions and treated as riding atoms, with C-H distances = 0.95 Å and Uiso(H) = 1.2Ueq(parent C atom). Fig. 1. Molecular structure of the two independent molecules (1 and 2) of compound (II), showing the crystallographic atom-numbering scheme and displacement ellipsoids drawn at the 50% probability level. The disordered N-oxide O atoms, O1B and O21B, bonded to atoms N2 and N22, respectively, are drawn with red and white checkered patterned ellipses. The hydrogen atoms have been omitted for clarity.
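The inter-plane angles quoted above come from least-squares mean planes fitted through each ring. As a minimal illustration of that calculation, the Python sketch below fits a plane to each set of atomic coordinates via singular value decomposition and takes the angle between the plane normals; the coordinates are made-up placeholders, not the refined positions of (II).

```python
import numpy as np

def mean_plane_normal(coords):
    """Unit normal of the least-squares plane through a set of 3D points."""
    centered = coords - coords.mean(axis=0)
    # The singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def interplane_angle_deg(coords_a, coords_b):
    n_a, n_b = mean_plane_normal(coords_a), mean_plane_normal(coords_b)
    cosang = abs(np.dot(n_a, n_b))        # angle between planes, 0-90 deg
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Placeholder coordinates (angstroms): a flat 5-ring and a 6-ring tilted 25 deg.
ring1 = np.array([[0, 0, 0], [1.3, 0, 0], [1.7, 1.2, 0],
                  [0.65, 2.0, 0], [-0.4, 1.2, 0]], dtype=float)
tilt = np.radians(25.0)
rot = np.array([[1, 0, 0],
                [0, np.cos(tilt), -np.sin(tilt)],
                [0, np.sin(tilt),  np.cos(tilt)]])
ring2 = np.array([[0, 0, 0], [1.4, 0, 0], [2.1, 1.2, 0], [1.4, 2.4, 0],
                  [0, 2.4, 0], [-0.7, 1.2, 0]], dtype=float) @ rot.T

print(f"dihedral between mean planes: {interplane_angle_deg(ring1, ring2):.2f} deg")
```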
Reproductive biology of king weakfish, Macrodon ancylodon (Perciformes, Sciaenidae), from the northeastern coast of Brazil The reproductive biology of Macrodon ancylodon, a commercially important fish widely distributed along South America's Atlantic coast, is described from 240 specimens collected in northeastern Brazil. Specimens ranged from 18.2 to 33.5 cm in length, exhibiting positive allometry. Total length at first sexual maturity was 21.13 cm. M. ancylodon has asynchronous gonadal maturation and appears to be a batch spawner within a reproductive period. Total weight and length of females do not affect the relative fecundity values in this species. Analysis of the gonadosomatic index, condition factor and frequency of maturational stages shows that M. ancylodon was capable of reproducing throughout the year, although spawning peaks were observed during November-December and August-September. The results are evidence that the study area is used by M. ancylodon for reproduction. The data will also be important for the development of stock management strategies. INTRODUCTION Macrodon ancylodon (Bloch & Schneider, 1801), popularly known as king weakfish, belongs to the family Sciaenidae. It is a valuable commercial fish species in Brazil (Haimovici et al. 1996, Isaac & Braga 1999), particularly in the northeastern state of Maranhão (Serpa 2004, Mourão et al. 2009), where it ranks third in commercial importance, with landings of 2,724.9 t, or 6.9% of the total fish caught in the state (Almeida et al. 2000, IBAMA 2008, Almeida et al. 2009). The distribution of M. ancylodon is restricted to the western part of the Atlantic Ocean, from Venezuela to Argentina (Cervigón 1993, Carvalho-Filho 1999). It is a demersal species found in estuarine and marine habitats of tropical and subtropical regions at depths of up to 60 m (Camargo & Isaac 2005, Castro et al. 2015). It is characterized by an elongated, yellowish/grayish body, up to 45 cm in length, moderately compressed, with arched dorsal and ventral fins (Santos et al. 2006). The head is pointed and compressed; the mouth is large, oblique and terminal (Alfaro et al. 2012). The jaw projects in front of the maxillary, and the teeth are rounded, with large canine-like teeth that are exposed even when the mouth is closed (Cervigón 1993). Data relating to the reproductive characteristics of this economically important species are essential for understanding its behavior, assessing fish stocks and developing suitable management strategies (Santos et al. 2003). In the specific case of M. ancylodon populations along the coast of Maranhão, such data are urgently required, as the fishing fleet is expanding while management policies are still being formulated. If the fishery is not managed coherently, with the introduction of a closed season and increased protection of breeding grounds, exploitation will soon exceed the carrying capacity of the fish community. Thus, this study investigated the reproductive biology of M. ancylodon in the municipality of Raposa (a major fishing community of the state of Maranhão) via macroscopic/microscopic descriptions of the maturational stages and determination of the total weight/total length relationship, sex ratio, size at first sexual maturity, fecundity, and spawning activity.
SAMPLING AREA Fish were collected in the Maranhense Gulf region, near the city of Raposa, encompassing the bays of São Marcos and São José. The region is part of the metropolitan area of São Luís city and is washed by the Atlantic Ocean. The municipality of Raposa is located northwest of São Luís (or Upaon Açú) Island, 30 km from the regional capital, at coordinates 2°25'22"S and 44°5'21"W (Fig. 1). SAMPLE COLLECTION Samples of M. ancylodon were obtained between December 2012 and November 2013 from fish markets in Raposa. Bimonthly, one 16.5 kg box of fish of each available market grade (medium or large) was purchased and processed for general biological data. An average of 40 specimens was collected in each survey period, and all samples, which were previously identified and weighed, were analyzed in the Fisheries Laboratory of Aquatic Ecology, State University of Maranhão (UEMA). Total length (TL) and standard length (SL) in centimeters (cm) were recorded, along with total weight (TW) and gutted weight (EW) in grams (g). LABORATORY PROCEDURES For each specimen, a ventral longitudinal section was performed to extract the gonads for macroscopic examination. The following characteristics were noted: color, volume in relation to the abdominal cavity, blood flow, visibility of oocytes, presence of sperm, and consistency. The sex and maturity stage of each individual were identified macroscopically and microscopically. A previously established maturity scale, adapted from Brown-Peterson et al. (2011), was used for the macroscopic classification of gonads, as follows: A) Immature (never spawned); B) Developing (ovaries beginning to develop, but not ready to spawn); C) Spawning capable (fish are developmentally and physiologically able to spawn in this cycle); D) Regressing (cessation of spawning); E) Regenerating (sexually mature, reproductively inactive). The gonads were weighed (WG) on a precision scale and the weight was expressed in grams (g). For microscopic analysis, the gonads were dissected into three segments (proximal, medial and distal), and the medial portion was fixed in a 10% formaldehyde solution for 24 h. After fixation, the gonads were dehydrated in an ascending alcohol series, cleared in xylene, embedded in paraffin wax for the preparation of ~5-µm-thick sections, and stained with hematoxylin-eosin (HE). Histologically, female maturational stages were identified via characterization of the various oocyte types, following the scale of Brown-Peterson et al. (2011). For the analysis of fecundity, a volumetric method described in Vazzoler (1996) was adapted. A small portion (approximately 5 g) of fresh ovarian tissue from each of 80 'spawning capable' females was used. In brief, the oocyte mass was diluted in a known volume (V) of 70% alcohol, the flask was stirred, and a 1-ml aliquot was removed with a Stempel pipette. The aliquot was placed on a Dollfus plate and the total number of oocytes was counted (n) under a stereomicroscope with an 8x objective and 10x ocular lens. The diameter of the oocytes was measured with a stereomicroscope, and their frequency distribution was used to estimate the type of spawning. The mean diameter of mature oocytes was estimated as the arithmetic mean of the size of all mature oocytes.
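A small Python sketch of the volumetric estimate just described is given below (the explicit formula, N = n·V/v, appears in the statistical analysis that follows); the counts and volumes are hypothetical examples, not data from this study.

```python
def absolute_fecundity(mean_aliquot_count, total_volume_ml, aliquot_volume_ml):
    """Scale the mean aliquot count up to the full dilution volume (N = n*V/v)."""
    return mean_aliquot_count * total_volume_ml / aliquot_volume_ml

counts = [1890, 1950, 1874]          # hypothetical replicate aliquot counts
n = sum(counts) / len(counts)
N = absolute_fecundity(n, total_volume_ml=300, aliquot_volume_ml=5)
print(f"estimated absolute fecundity: {N:,.0f} oocytes")
```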
STATISTICAL ANALYSIS
The relationship between total length and total weight was established by nonlinear regression. The curve, represented by the mathematical expression TW = a × TL^b, was fitted by the least-squares method at a confidence level of 95% (P < 0.05), where a and b are parameters of the equation, TW = total weight and TL = total length (Snedecor & Cochran 1980, Sokal & Rohlf 1987). Coefficient b was compared between males and females by Student's t-test (Zar 1996).

The sex ratio was obtained for the entire experimental period, for each two-month (bimonthly) period, and for all length classes (Vazzoler 1996). Possible differences in these values were checked with a χ² test at a significance level of P = 0.05.

To analyze the size at first maturation (L50), maturation stages were grouped as immature (stage A) or mature (stages B, C, D and E), according to Brown-Peterson et al. (2011). The percentage of mature individuals per length class was taken as the dependent variable (y) and total length as the independent variable (x). These values were then fitted to a logistic curve in STATISTICA 6.0, according to the formula: P = 1/(1 + exp[−r(L − L50)]).

The reproductive period and spawning season were determined by analyzing the bimonthly frequency of the developmental stages and the variation in the mean values of the gonadosomatic index (ΔRGS) and condition factor (ΔK); the periodicity of the reproductive process was determined from the bimonthly frequency of the developmental stages.

The gonadosomatic index (RGS) was calculated for mature stages B, C and D as gonad mass expressed as a percentage of total body weight, to indicate the annual variations in gonadal development.

The condition factor (K) was taken as a quantitative indicator of the health status (well-being) of the fish, reflecting recent feeding conditions (Le Cren 1951). This factor was determined from the relationship between individual weight and length, and may be expressed in isometric or allometric form. Two models were considered in estimating allometric K values: K = TW/TL^b (total condition factor) and K* = EW/TL^b (somatic condition factor), where TW = total weight, TL = total length and EW = gutted weight (so that the somatic factor excludes gonad mass). Differences in the distribution of the bimonthly values of ΔRGS and ΔK were tested using the nonparametric Kruskal-Wallis method (Kruskal & Wallis 1952). All tests were performed using STATISTICA 6.0 software (StatSoft Inc.).

Absolute fecundity was defined as the number of mature oocytes that can be released in the reproductive period and was estimated by N = (n·V)/v, where N is the total number of oocytes, n is the mean number of yolked oocytes in the subsamples (3 replicates), V is the total volume of the solution (300 ml) and v is the volume of the subsample (5 ml). Relative fecundity was established as the number of oocytes per gram of total body weight (TW) and per centimeter of total length (TL).

For the analysis of variance (ANOVA), normality and homogeneity were tested via Tukey and Fligner-Killeen tests, respectively. These statistical tests were performed in R (R Development Core Team 2011).
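As a worked illustration of the two curve fits just described, the sketch below uses SciPy in place of STATISTICA 6.0; the synthetic data, starting values and variable names are assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# --- Length-weight relationship: TW = a * TL**b (nonlinear least squares) ---
def lw(TL, a, b):
    return a * TL**b

TL = rng.uniform(18.2, 33.5, 240)                        # total length (cm)
TW = 0.009 * TL**3.05 * rng.normal(1.0, 0.05, TL.size)   # synthetic weights (g)
(a, b), _ = curve_fit(lw, TL, TW, p0=[0.01, 3.0])
print(f"TW = {a:.4f} * TL^{b:.2f}  (b < 3: negative allometry; b > 3: positive)")

# --- Size at first maturity: P = 1 / (1 + exp(-r * (L - L50))) ---
def logistic(L, r, L50):
    return 1.0 / (1.0 + np.exp(-r * (L - L50)))

length_class = np.arange(18, 34, 2.0)               # class midpoints (cm)
prop_mature = logistic(length_class, 0.9, 21.1)     # synthetic proportions mature
(r, L50), _ = curve_fit(logistic, length_class, prop_mature, p0=[0.5, 22.0])
print(f"L50 = {L50:.2f} cm")
```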
RESULTS
The total length of Macrodon ancylodon specimens ranged from 18.2 to 33.5 cm, and individual weights ranged from 40.7 to 370 g. The weight and length of females versus males were not significantly different at the 0.05 level (Table 1). Negative allometry was recorded for females and positive allometry for males (Fig. 2). The regression coefficient (b) did not differ significantly between males and females (t = 1.36; P > 0.05).

The sex ratio for the entire sampling period was 2.72 females per male. Females were dominant at all sampling times except April-May, when a greater number of males was recorded (Table 2). The χ² value of 48.56 indicated no significant difference for any bimonthly period except April-May.

Considering the sex ratio by length class, most females were between 26 and 28 cm in length. Significant differences between the sexes were observed for the length classes 24-26, 26-28, 28-30 and 30-32 cm, with the highest percentage in the 26-28 cm class (Table 3). The size at first sexual maturity in M. ancylodon for the study period was 20.30 cm for males, 22.14 cm for females and 21.13 cm for the sexes combined (Fig. 3).

The average values of ΔRGS and ΔK for M. ancylodon were not significantly different between bimonthly sampling periods (Kruskal-Wallis test, P > 0.05) (Fig. 4). Mature individuals of both sexes were observed throughout the year (Fig. 3); however, two peaks of reproductive activity were observed, in November-December and August-September. This pattern was confirmed by microscopic analysis, which registered oocytes at various stages of development (Fig. 5).

The volume, color, thickness and blood supply of the ovaries of the analyzed specimens varied with maturation stage, with tones ranging from light rose (resting period) to bright yellow (final maturity), owing to the color of the yolk-filled oocytes. Macroscopic evaluation of the ovaries allowed five maturational stages to be distinguished in M. ancylodon: immature, developing, spawning capable, regressing and regenerating (Fig. 5). The correspondence between the phases of gonad development in females was identified through characterization of the predominant oocyte types (Table 4).

Mean absolute fecundity, defined as the number of oocytes that could potentially be eliminated in the next spawning (yolked oocytes), ranged from 27,310 to 246,287 oocytes; this variation reflected differences in total weight and total length among individual fish. Mean absolute fecundity was 116,129 oocytes per spawning, while the average relative fecundity in M. ancylodon was estimated at 4,004 oocytes per cm of total length and 531 oocytes per gram of total weight. Diameter measurements of vitellogenic oocytes showed that there are several size classes in a gonad (Fig. 6). Histological analyses carried out to establish whether the different diameters corresponded to different stages of oocyte development found that oocytes ranged from 140 to 770 μm in diameter, confirming that spawning in M. ancylodon is asynchronous and sporadic.

DISCUSSION
Data from the weight-length relationship of M. ancylodon analyzed in this study showed negative allometry for females and positive allometry for males, a result entirely different from that reported by Camargo-Zorro (1999) for the Caeté River estuary. Positive allometric weight-length relationships were also reported for specimens collected on the Pará coast (Ikeda 2003) and from São Marcos Bay (Santos 2007). According to Ricker (1975), such variations may reflect differences among populations of the same species, or within the same population in different years, and are presumably associated with nutritional conditions. The t-tests calculated in this study indicated no significant difference between the sexes (i.e., they can be analyzed jointly); in studies of individuals from the Pará coast (Ikeda 2003), t-tests indicated significant differences between the sexes.

Santos (2007) noted that in M. ancylodon, as in most teleosts, males reach maturity at a lower total length than females. Ikeda (2003) reports that males predominated at shorter total lengths, unlike females, which were larger. Females may have a greater life expectancy than males or a higher growth rate. In our study, the sex ratio favored females by 2.72 to 1, which differed from the results of Santos (2007), who recorded a 3:1 female/male ratio. However, the sex ratio could change through spatial segregation during spawning activity, or because of sampling biases. Meanwhile, data from REVIZEE (Program to Evaluate the Sustainable Potential of Living Resources in the Exclusive Economic Zone) indicated a 2:1 female/male sex ratio (Ikeda et al. 2003), which differed from the findings of Ikeda (2003) on the Pará coast of a 1:1 ratio.

From the histological analysis performed in this study, M. ancylodon was found to be a batch spawner, with spawning occurring mainly during November-December and August-September (dry season). Yamaguti (1967), Ikeda (2003) and Santos (2007) stated that this species is characterized by fractional spawning but is able to reproduce throughout the year. Batch spawning in this species had previously been reported in Brazil (Vazzoler 1963, Isaac-Nahum & Vazzoler 1983) and in the Rio de la Plata region (Vizziano & Berois 1990, Militelli & Macchi 2004). Ikeda (2003) noted that the spawning peaks were closely linked to rainfall. Multiple spawning is typical of species of tropical, temperate and subtropical waters, and is thought to improve survival (Nikolsky 1963).

The size at which 50% of the M. ancylodon population reached sexual maturity in this study was 21.13 cm total length for both sexes. Similar results were obtained by Camargo-Zorro (1999) for specimens from the Caeté River (21.5 cm), by Santos (2007) in São Marcos Bay (21.05 cm), and by Santana (1998) for fish (18.6 cm) from the coastal region of Salinas (Pará) on the northern coast of Brazil. Our estimate of L50 was considerably lower than that for individuals of the same species from the Pará coastal area (25.08 cm) (Ikeda 2003). A comparison of these results with those of two populations from the south coast of Brazil (Vazzoler 1963, Yamaguti 1967) suggests that individuals from lower-latitude habitats reach sexual maturity at lower average lengths. On the other hand, Militelli et al. (2013) found L50 values for M. ancylodon in the Río de la Plata and Buenos Aires coastal zone, Argentina, close to those recorded in our study (males = 19.27 cm; females = 23.07 cm; sexes combined = 21.1 cm). Santos et al. (2006), studying the population genetic structure of M. ancylodon in Atlantic coastal waters of South America, explain that the populations of the North Brazil and Brazil currents, with warmer waters, form a clade (tropical clade) separated by 23 fixed mutations from the populations that inhabit regions of colder waters.

In this study, the absolute fecundity of M. ancylodon ranged from 27,310 to 246,287 oocytes. Vazzoler (1963) found fecundities of M. ancylodon in southeastern Brazil (Santos, São Paulo) ranging from 26,210 to 178,114, while Militelli et al. (2013) found batch fecundity values of 12,400 to 225,700 for M. ancylodon in the Río de la Plata and Buenos Aires coastal zone, Argentina. High fecundity is characteristic of fish species with free eggs and no parental care (Lowe-McConnell 1987). There can be interspecific and intraspecific variation in fecundity: within the family Sciaenidae, for example, many marine species behave as r-strategists, producing many offspring, whereas freshwater species tend to be k-strategists, producing few offspring (Juras & Yamaguti 1985, 1989; Militelli & Macchi 2004; Militelli et al. 2013). In tropical waters, where temperature is not limiting, spawning is mainly influenced by environmental factors related to food supply (Lowe-McConnell 1987).

From the results of this study, it is concluded that M. ancylodon is a batch spawner. Analysis of the gonadosomatic index, condition factor and frequency of maturational stages indicated that this species is capable of reproducing throughout the year. The frequency of mature and spawned individuals in all maturity stages showed that M. ancylodon reproduces in the estuary near the city of Raposa (São Marcos Bay and part of São José Bay). These results provide important data for the development of protection and management strategies for M. ancylodon stocks.

Figure 2. Relationship between total weight (TW) and total length (TL) for (a) females, (b) males and (c) combined sexes of Macrodon ancylodon collected in the municipality of Raposa, Maranhão State, Brazil, between November 2012 and December 2013.
Table 1. Parameters of the length-weight relationship of male and female Macrodon ancylodon collected in the municipality of Raposa, Maranhão State, Brazil.
Table 2. Bimonthly sex ratio of Macrodon ancylodon specimens collected in the municipality of Raposa, Maranhão State, Brazil, between November 2012 and December 2013.
Figure 6. Distribution of oocyte diameter frequencies in gonads of Macrodon ancylodon at the final stage of maturation.
Research on the system and mechanism of investigation and treatment of port accidents

Based on an analysis of the current state of port accident investigation and treatment, this paper puts forward suggestions such as improving the relevant laws and regulations, establishing an independent third-party investigation organization and a consultation mechanism, and formulating investigation procedures, in order to modernize the port accident investigation and management system and its capacity.

Introduction
In 2016, the CPC Central Committee and the State Council issued the "Opinions on promoting the reform and development of the field of work safety", which pointed out clearly that accident investigation and handling is an important element of achieving legal governance in the field of production safety, and which imposed stricter requirements on accident investigation. According to the opinions, a technical support system for accident investigation and analysis should be established, and every accident investigation report should contain a dedicated section on technical and management issues that analyses the underlying causes of the accident in detail. The opinions also propose a supervision system for the rectification of problems exposed by accidents: the local government responsible for the accident investigation and the relevant departments of the State Council should organize evaluations and release them to the public in a timely manner.

Port accident investigation practice
In current practice, investigations of serious accidents are organized by the State Council and led by the General Administration of Work Safety / emergency management department, which forms the accident investigation team. For major and minor accidents, the local government entrusts the investigation to the emergency management department. The local port administration department is excluded from the investigation team; it is left uninformed and without a voice in interpretation, and its role is entirely passive. Port accident investigation also lacks dedicated legislation; by contrast, regulations such as the Regulations on the Investigation and Handling of Maritime Traffic Accidents and the Regulations on the Investigation and Handling of Inland River Traffic Accidents provide strong legal support and guarantees for the investigation and treatment of maritime traffic accidents. In addition, port operation and safety management are highly specialized, and experts in emergency management often judge port operations against chemical-industry standards and management methods, which creates considerable difficulty for port enterprises and industry managers. According to survey statistics, the port administration department has organized an accident investigation only once, under a division-of-responsibilities document formulated by the municipal government, which assigned it to "participate in the investigation of road transportation accidents involving hazardous chemicals; organize, according to law, the investigation of safety accidents in the waterway transportation and port storage of hazardous chemicals within the scope of its jurisdiction; and report the investigation results in accordance with the provisions." That investigation report was approved by the Ningbo municipal government.
It is necessary to further improve the technical quality and professionalism of port accident investigation, to separate technical investigation from administrative accountability investigation, and to strengthen the voice of the transport (port) management department in accident investigation.

Legal perfection
Under China's current Regulations on the Reporting, Investigation and Handling of Production Safety Accidents, work safety accidents are investigated by a cooperative organization: an investigation team formed jointly by the relevant administrative and judicial organs after a production safety accident occurs. This arrangement strengthens the professionalism and diversity of the investigation team and provides effective resources for the smooth progress of the investigation; however, the involvement of administrative organs limits the impartiality and independence of the investigation. To establish an objective, fair, scientific and authoritative approach to accident investigation, new national laws and regulations on accident investigation should be formulated as soon as possible, with special, unified provisions on accident investigation; an accident investigation system should be established; the construction of an independent accident investigation mechanism should be strengthened; and the independence of accident reporting, investigation, handling, hearings and reports should be made legal and procedural. Following the example of the Special Equipment Safety Law, it should be clearly stipulated that major accidents are investigated by an accident investigation team organized by the safety supervision and management department for the equipment together with the relevant departments. Corresponding provisions should be added to the relevant laws and regulations of the transport (port) industry, such as the Port Law or the port operation and management regulations, or the Safety Commission Office of the State Council should issue opinions clarifying that "the transport department shall organize, according to law, the investigation of waterway transportation and port storage and transportation safety accidents within its jurisdiction, and report the investigation results according to the regulations", so as to strengthen the voice of the transport (port) management department in the investigation and handling of port production accidents.

Composition of the third-party investigation organization
An independent accident investigation organization should be established as a permanent administrative subject of accident investigation, enjoying the rights of administrative reconsideration and administrative litigation according to law. Under current conditions, an independent pilot accident investigation organization could be set up under the State Council or the Ministry of Transport, aimed mainly at the investigation of typical port disaster accidents. The accident investigation organization would be independent of the supervision department, and its investigation process would not be subject to intervention or control by the safety supervision department, so as to guarantee the fairness and comprehensiveness of production accident investigations to the greatest extent. Once the national administrative system reform is in place, the accident investigation institutions of other industries can be unified and an accident investigation institution for the port industry established.
Investigation procedure
With reference to relevant rules, regulations and documents such as the Provisions on the Summary Procedure for Investigation and Handling of Water Traffic Accidents, the workflow of port accident investigation and treatment can be divided into a summary procedure and a general procedure according to the level and type of accident.

3.3.1. Summary procedure. In order to investigate and handle port production accidents lawfully, conveniently and efficiently, port production accidents with simple, clear facts and a clear causal relationship (hereinafter "minor accidents") may be investigated and handled under the summary procedure upon the written application of all parties to the accident, as confirmed by the transport (port) management department, except in the following circumstances (see the code sketch at the end of this article):
(1) accidents involving passenger transport, dangerous goods transportation, international ships or foreign ships;
(2) accidents in which either party disagrees with the applicability of the summary procedure before the case is closed;
(3) accidents in which the port or its operators are found to have committed serious violations, such as operating with false certificates or without certificates;
(4) accidents that the transport (port) management department considers to hold lessons worth learning widely;
(5) accidents that the transport (port) administration considers inappropriate to investigate and handle under the summary procedure.
The transport (port) management department shall carry out the accident investigation and evidence collection, and at least one investigator shall hold a port administrative law enforcement certificate. Where the summary procedure ceases to apply, port investigators shall terminate it, investigate and handle the case under the prescribed procedure, and issue a notification of termination of the summary procedure. The transport (port) management department shall impose administrative punishment on any party to the accident found during the investigation to have violated the port administrative order.

3.3.2. General procedure. Port production accidents to which the summary procedure does not apply shall be investigated and handled under the general procedure, which comprises accident reporting, acceptance and filing, accident investigation, case settlement, archiving, post-evaluation, and so on. The general workflow of port production accident investigation and handling is shown in Figure 1.

Conclusion
Transport (port) management departments at all levels should re-examine the investigation of major port production safety accidents, establish an independent investigation agency for port production safety accidents, promote the separation of administrative investigation from technical investigation, establish an accident investigation consultation mechanism, and study and build a major accident investigation and handling system suited to a strong transportation nation, so as to further improve China's port production safety management capacity, ensure the safety of port production, and reduce the occurrence of accidents.
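To make the applicability rules of the summary procedure (Section 3.3.1) concrete, here is a schematic, entirely hypothetical encoding of them as a decision function; the field names are illustrative assumptions and are not drawn from any official regulation text.

```python
from dataclasses import dataclass

@dataclass
class PortAccident:
    parties_applied_in_writing: bool          # all parties applied for summary handling
    involves_passengers_dg_or_foreign: bool   # passenger/dangerous-goods/international or foreign ships
    party_objects_before_closure: bool        # a party disputes applicability before closure
    serious_violations_found: bool            # e.g., false or missing certificates
    broad_lesson_value: bool                  # department deems it widely instructive
    department_deems_unsuitable: bool         # department rules out the summary procedure

def summary_procedure_applies(a: PortAccident) -> bool:
    """The summary procedure needs a written application from all parties
    and none of the five exclusion circumstances; otherwise the general
    procedure applies."""
    excluded = (a.involves_passengers_dg_or_foreign
                or a.party_objects_before_closure
                or a.serious_violations_found
                or a.broad_lesson_value
                or a.department_deems_unsuitable)
    return a.parties_applied_in_writing and not excluded
```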
An Interprofessional Study of the Effects of Topical Pilocarpine on Oral and Visual Function

INTRODUCTION In light of the expanding use of pilocarpine for numerous systemic disorders over the past decades, it is important to understand its effects on visual and oral function. OBJECTIVE To study the adverse effects of topical pilocarpine on visual and oral function in healthy volunteers via interprofessional collaboration. METHODS Thirty-five subjects, 21 years and older, were enrolled in the study. The study was designed to have each subject undergo tests of oral and visual function before and 20 minutes after a topical dose (2% ophthalmic solution), so the subjects served as their own controls. RESULTS The sample included 24 females and 11 males with a mean age of 22 years. The pupil diameter was significantly reduced after treatment with pilocarpine; the effect was larger in dim light than in bright light. Distance and near visual acuity were significantly reduced by pilocarpine treatment, as were distance visual acuity under low-contrast illumination and automated perimetry. Remarkably, salivary volume was significantly increased. CONCLUSION In young normal subjects, pilocarpine adversely affects visual acuity, contrast sensitivity, the visual field and thus overall visual function, but it positively increases salivary volume. Received: 05/15/2012 Accepted: 07/24/2012 Published: 09/24/2012 © 2012 Hua et al. This open access article is distributed under a Creative Commons Attribution License, which allows unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction
Pilocarpine is a natural alkaloid derived from the leaves of a plant, Pilocarpus jaborandi, indigenous to South America (Vivino et al., 1999). Pilocarpine hydrochloride is a direct-acting cholinergic agonist that activates muscarinic receptors [M1-M5] nonspecifically, resulting in a broad spectrum of pharmacological effects. It stimulates exocrine secretions from sweat, salivary, and lacrimal glands (Brown & Taylor, 2006). It is indicated for the treatment of inadequate salivary flow secondary to radiation therapy for cancers of the head and neck. It is also commonly used in patients with Sjogren's syndrome, a chronic autoimmune disease that gradually damages the moisture-producing glands, causing significant dryness of the mouth and eyes (Bruce, 2003).

In eye care, pilocarpine has been one of the earliest drugs used in the treatment of glaucoma (Grierson et al., 1978). Although it is no longer the first-line drug prescribed for the treatment of glaucoma, pilocarpine is still widely used to treat acute angle closure and to prepare the iris for laser peripheral iridotomy. Moreover, it is still a glaucoma drug of choice for many patients in third world countries because of its affordability (Wu et al., 2011).

Muscarinic receptors are found on the secretory exocrine glands, on smooth and cardiac muscle, and throughout the central nervous system, including the eyes (Eglen, 2006). Pilocarpine acts unselectively on multiple subtypes of muscarinic receptors and can therefore cause various parasympathomimetic effects. Stimulation of peripheral muscarinic receptors produces salivation, lacrimation, sweating, rhinorrhea, bronchospasm, urinary frequency, diarrhea, bradycardia, and miosis.
It is generally known that miosis induced by topical pilocarpine can adversely affect the visual field, and that ciliary spasm can reduce accommodation and visual acuity (McCluskey et al., 1986; Gilmartin, Amer, & Ignleby, 1995). Although there have been a number of studies over the past few decades on the effects of topical pilocarpine on high-contrast visual acuity and the visual field with respect to glaucoma treatment, little has been done to investigate its potential effect on low-contrast visual acuity, which is relevant to many everyday environments. Furthermore, the effects of topical pilocarpine on oral function are not well documented. In light of the expanding use of pilocarpine for numerous systemic disorders over the past decades, it is important to understand its adverse effects on visual and oral function. We therefore designed an interprofessional study of the effects of topical pilocarpine on visual function, including low-contrast sensitivity, and on oral function in healthy volunteers.

Implications for Interprofessional Practice
• Topical pilocarpine can induce headache as a side effect in young patients.
• Topical pilocarpine can significantly reduce visual acuity, especially in natural (low-contrast) environments.
• Oral pilocarpine causes miosis and can also reduce visual acuity.
• Clinicians who prescribe pilocarpine or other miotic drugs need to pay close attention to the ocular side effects of this class of drug, because they can significantly affect daily tasks that require clear vision.
• Over the years there has been an expanding use of miotics in medicine, so miosis-induced visual impairment affects an increasing number of patients, especially the elderly.

Materials and methods
Thirty-five subjects, between 21 and 38 years of age, were enrolled in the study. All subjects were required to fill out informed consent forms, medical intake forms, and questionnaires related to ocular and dental health. The study was designed to have each subject undergo tests of oral and visual function before and 20 minutes after a topical dose (2 percent ophthalmic solution), so the subjects served as their own controls. Pilocarpine hydrochloride ophthalmic solution 2 percent (Bausch + Lomb, FL, USA) was used for the study. The study was reviewed and approved by the Pacific University Institutional Review Board before initiation. The study location was the Pacific University Eye and Dental Clinics.

Oral hydration was investigated using the Saliva-Check BUFFER kit (GC Corporation, Tokyo, Japan). Prior to the dental visit, subjects were instructed not to smoke, consume food or drink, brush their teeth, or use mouthwash for at least one hour before the scheduled appointment time. The lower lip was blotted dry with a small piece of gauze and the examiner observed the oral mucosa under good illumination. The level of hydration was assessed by measuring how long it took for saliva to form inside the lower lip. Salivary volume was collected and measured by having subjects chew on a piece of wax to stimulate salivary flow and spit intermittently into a cup over a period of five minutes. The thickness of the saliva was examined visually to judge salivary consistency. The pH of the saliva was tested with a pH test strip, and the buffering capacity of the saliva was assessed via a simple chemical test supplied with the kit.
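The stimulated-flow measurement above reduces to a volume-per-time calculation; a trivial sketch follows, with invented volumes rather than study data.

```python
def stimulated_flow_ml_per_min(volume_ml: float, minutes: float = 5.0) -> float:
    """Saliva collected while chewing wax over a fixed interval."""
    return volume_ml / minutes

pre, post = 5.5, 7.0   # hypothetical collected volumes (ml) before/after drops
print(f"pre: {stimulated_flow_ml_per_min(pre):.2f} ml/min, "
      f"post: {stimulated_flow_ml_per_min(post):.2f} ml/min")
```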
Visual function was assessed by visual acuity (VA), contrast sensitivity and visual field; pupil size was also measured under bright and dim light. Distance VA was measured at 20 feet using LogMAR optotypes via the Pro Video System (Innova System Inc, IL, USA). Near VA was taken at 16 inches using a pocket-sized near vision card with Sloan letters (Good-Lite, IL, USA). Contrast sensitivity was measured using a 5 percent contrast chart in LogMAR sizes at 10 feet (Good-Lite, IL, USA). Visual field was tested via N-30-5 FDT screening on the Humphrey Matrix (Carl Zeiss Meditec Inc, CA, USA). Pupil sizes were measured in bright and dim illumination using the pupil gauge on the pocket-sized near vision card. In addition, pictures of the pupils were taken using a Handycam HDR-CX550V with infrared feature (Sony Corporation, Tokyo, Japan).

Statistical analysis
Descriptive statistics provided the basic findings. OD and OS measures were averaged to provide a single value per subject. Before- and after-treatment values were compared with a paired t-test.
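A minimal sketch of this analysis pipeline (eyes averaged per subject, then a paired t-test on pre- versus post-instillation values) might look as follows; the data are synthetic placeholders, not study measurements.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n = 35  # subjects

# Hypothetical pupil diameters (mm, dim light), right (OD) and left (OS) eyes
pre_od, pre_os = rng.normal(5.0, 0.6, n), rng.normal(5.0, 0.6, n)
post_od = pre_od - rng.normal(2.0, 0.4, n)   # miosis after instillation
post_os = pre_os - rng.normal(2.0, 0.4, n)

pre = (pre_od + pre_os) / 2    # OD/OS averaged: one value per subject
post = (post_od + post_os) / 2

t, p = ttest_rel(pre, post)    # each subject serves as their own control
print(f"paired t = {t:.2f}, p = {p:.3g}")
```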
Results
Sample and survey
The sample included 24 females and 11 males, with ages ranging from 22 to 38 and a mean of 22 years. Ten subjects reported no dry eye complaints, nine rarely had dry eye, 15 reported sometimes, and one always. Seventeen had no night-time vision complaints, nine rarely had complaints, nine sometimes had problems, three usually, and one always. Four people complained of dry mouth. Table 1 summarizes the numeric variables collected.

Visual function
Table 2 summarizes the change scores for the different visual tests. The pupil diameter was significantly reduced after treatment with pilocarpine, and the effect was larger in dim light than in bright light. Representative photographs of pupils before and after pilocarpine treatment are shown in Figures 1a and 1b. Both distance and near visual acuity (VA) were significantly reduced by pilocarpine treatment, with larger effects on distance VA. Distance visual acuity under low-contrast illumination was significantly reduced with pilocarpine, and automated perimetry was also significantly affected. Representative printouts of visual field results are presented in Figure 2.

Salivary function
Salivary consistency, buffering capacity, and the pH of the saliva were not significantly affected by topical pilocarpine, but the salivary volume was significantly increased (see Table 2). The variables in Table 2 were also examined for association with dry eye, night vision complaints, and dry mouth (non-parametric median tests); there were no significant relationships.

Table 1. Numeric variables collected.
Table 2. Change scores (post-application − pre-application) for numeric variables (left and right eyes averaged).
Figure 1a. Photograph of the pupil diameter of a representative subject in dim light before treatment.

Discussion
Pilocarpine, a direct-acting cholinergic agonist, has been proven effective in the treatment of radiation-induced xerostomia (Greenspan & Daniels, 1995). It was also found to increase salivary flow in patients with Sjogren's syndrome (Vivino et al., 1999; Fox et al., 1991). Therefore, pilocarpine (Salagen) tablets are commonly prescribed both for the treatment of dry mouth resulting from radiation therapy for cancers of the head and neck, and for dry mouth and dry eyes secondary to Sjogren's syndrome. Moreover, a number of studies over the years have shown that pilocarpine is also effective in relieving inadequate salivary flow caused by opioid and psychoactive medications with antimuscarinic and anticholinergic properties, leading to the increasing use of pilocarpine (Sebastiano, 1998; Gotrick et al., 2004; Masters, 2005).

While oral pilocarpine has been prescribed more frequently over the last decade, the use of topical pilocarpine has declined, replaced by newer and more effective glaucoma medications. Nevertheless, topical pilocarpine is still widely used in third world countries for the treatment of glaucoma because of its affordability. Furthermore, it is still utilized to relieve intraocular pressure spikes in acute angle closure and to prepare the pupil for laser peripheral iridotomy. Altogether, topical pilocarpine remains an important drug in glaucoma management, and it is therefore important to investigate its effects on visual function. This study aimed to examine the effects of topical pilocarpine on visual function and its potential effects on oral function in healthy volunteers. In addition, the study may provide useful data for future studies of the effects of oral pilocarpine.

Topical pilocarpine has been known for decades to cause miosis via stimulation of muscarinic receptors on the constrictor muscle of the iris. Miosis is a physiological response regulating the amount of light reaching the retina for optimal vision. Pharmacological miosis, however, is unnatural because the pupil is fixed and unresponsive to light, and excessive miosis can induce diffraction, which interferes with vision. The data from this study confirmed that a single instillation of pilocarpine significantly reduced the pupil diameters of normal subjects.

Contrast sensitivity (CS) testing assesses visual function beyond visual acuity, because a VA test measures only at one high-contrast level; daily activities, however, take place in many different low-contrast environments. Some patients can have normal visual acuity but have difficulty with daily tasks because of reduced CS at lower spatial frequencies. It is therefore important to test VA with low-contrast optotypes to better represent the natural environment. Distance visual acuity under low-contrast illumination was the measure most markedly decreased by topical pilocarpine, a finding similar to that of a previous study by Edgar et al. (1999). On the contrary, another study, conducted by Sloane et al. on the effect of senile miosis (n=11, M age=73) on contrast sensitivity compared with young adults (n=13, M age=24), found that older adults' miotic pupils actually improved contrast sensitivity (Sloane, Owsley, & Alvarez, 1988). The difference found in our study may be accounted for by the fact that the pupil sizes induced by pilocarpine were significantly smaller than in senile miosis, so that diffraction could be an important factor affecting vision in the young cohort.

Automated perimetry is an additional method of assessing a person's visual function, permitting a thorough assessment of both the central and the peripheral visual field. Topical pilocarpine significantly reduced the field of vision, affecting the peripheral field more than the central field. The effects of pilocarpine on automated perimetry were consistent with previous studies (McCluskey et al., 1986; Webster et al., 1993).

Cumulatively, the significant effects of topical pilocarpine on multiple visual tests and on overall visual function can be attributed to its miotic action. Excessive miosis can decrease vision in two ways: decreased retinal illumination and the presence of diffraction (Campbell & Green, 1965). Optimal visual resolution is achieved when the pupils are neither too large nor too small: large pupils are more prone to optical aberrations, whereas small pupils are subject to diffraction (Campbell & Gubisch, 1966). The pupil size for best axial resolution in humans was found to be about 4.30 ± 1.90 mm in a recent study (Donnelly & Roorda, 2003). According to Weber's law, the differential light threshold remains unchanged when pupil size is altered; however, the law holds only for pupils of approximately 3.0 to 7.0 mm under mesopic illumination (Edgar et al., 1999; Herse, 1992). In our study, a substantial number of subjects had pupil diameters below 3.0 mm, at which point significant reductions in retinal illumination and significant diffraction could occur and break Weber's law; this is a possible explanation for the marked worsening of visual function.

In addition to the effects of pilocarpine on visual function, other side effects were noted by the majority of subjects, including severe stinging upon instillation and supraorbital headache. Since our subjects were relatively young, ciliary spasm and over-accommodation caused by cholinergic stimulation can account for the headache. On the other hand, a number of subjects appreciated relief of dry eyes with pilocarpine; however, this benefit was outweighed by the discomfort of headache and blurry vision, and the subjects reported (in the survey questionnaire) that they would not choose to use it to treat dry eye. Moreover, miotic agents have occasionally been found to cause retinal detachment and macular hole (Walker, 2007).

Salivary function
Saliva lubricates, cleans, and protects the oral tissues from infectious microorganisms. It also facilitates chewing, digesting, tasting, and swallowing food (Atkinson, 2005). Xerostomia is a symptom of dryness of the mouth associated with salivary hypofunction due to various etiologies, including autoimmune disease (Sjogren's), systemic disease (diabetes mellitus), the anticholinergic effects of many drugs, and aging (Narhi, 1999). Chronic xerostomia can significantly affect quality of life because of the increased risk of dental cavities, oral ulcers, and mucosal infection (Perno Goldie, 2007). The drug of choice for stimulating salivary flow in the treatment of xerostomia is either pilocarpine or cevimeline (Fox & Michelson, 2000; Porter, Scully & Hegarty, 2004). Pilocarpine (Salagen®) is available both as tablets (5 mg) and as 1 and 2 percent solutions (Bruce, 2003). Although oral pilocarpine has been known for decades to increase salivary volume effectively, the effect of topical pilocarpine (2 percent) on salivary volume had not been measured. While studying the effects of topical pilocarpine on visual function, we also wanted to know whether it might affect oral function. Remarkably, the data indicated a significant effect of topical pilocarpine on salivary volume. This suggests that residual pilocarpine reached the salivary glands from the eyes via the nasolacrimal ducts. Further study is needed to examine whether the increase in salivary volume produced by topical pilocarpine is clinically beneficial.

Chronic use of pilocarpine (12 weeks or more) has been shown to cause diaphoresis, increased urinary frequency, and facial flushing (Nieuw Amerongen & Veerman, 2003). Serious pilocarpine toxicity is rare, but has been reported in a case of idiosyncratic reaction to the drug: the patient's heart rate slowed to 38 beats per minute and the blood pressure decreased to 102/42 mm Hg; intravenous atropine (0.5 mg) over two minutes was used successfully as an antidote (Hendrickson, Marocco & Greenberg, 2003). It is therefore important to educate patients on the potential symptoms of pilocarpine toxicity.

This study provides further support to previous studies of the effects of topical pilocarpine on visual function, including contrast sensitivity. Interestingly, topical pilocarpine can significantly stimulate salivary volume and may relieve dry mouth symptoms in patients who take the drop topically for glaucoma. One difference of this study is that it focused on visual function in general, whereas most previous studies concentrated on the effects of topical pilocarpine on the visual field with respect to glaucoma treatment. The study also looked at the possible effects of topical pilocarpine on oral function, which were barely known. Its limitations include a young study cohort and a single dose-response point; its strengths are that the subjects served as their own controls before and after treatment, and that the study was interprofessional, including both visual and oral functional tests. The findings of this study serve as a good starting point for future studies on the effects of miotics on the oral and visual functions of patients.

Generally, the effect of pilocarpine as a miotic can be extrapolated to other drugs that constrict the pupils and affect visual function, such as opioids and antipsychotics. Unfortunately, there is no alternative cholinergic agonist like pilocarpine that is not a miotic, because cholinergic receptors are abundant in the iris sphincter muscle; knowing its potential visual and oral effects therefore allows closer monitoring of the side effects of miotics and appropriate adjustment of dosage.

Conclusions
This is the first study to investigate the effects of a miotic drug via interprofessional collaboration between optometry, pharmacy, and dental health science. The authors have gained a better appreciation of the other health professions and established a good foundation for future collaboration. In young normal subjects, pilocarpine adversely affects visual acuity, contrast sensitivity, the visual field, and thus overall visual function, but it positively increases salivary volume. Future study of the side effects of oral pilocarpine is necessary to better understand the full impact of oral miotics on visual and oral function.
Time‐dependent warming amplification over the Tibetan Plateau during the past few decades

It is reported that surface warming over the Tibetan Plateau (TP) has been faster than the global average, but exactly how much faster remains controversial. This study investigates the time dependency of warming amplification over the TP using the CRU_TS4.01 grid dataset for 1961–2016, with attention to its consistency with the global average. We find that the magnitude of warming on the TP and its consistency with the global average have been variable. Compared to the global average, TP warming during 1983–2016 was faster than in the period 1961–1983, and showed a higher consistency with the global average in 1983–2016. TP warming shows a seasonal amplification of 1.1–1.4 times the global average during 1983–2016, while warming amplification during 1961–1983 is relatively less evident. Generally, the magnitude of warming on the TP is smaller than in the northern high latitudes, but larger than in the Southern Hemisphere and the Tropics. Based on current scientific understanding, snow/ice-albedo feedback may have played an important role in warming amplification on the TP since the 1980s.

INTRODUCTION
Climate warming has occurred widely across the globe in the last decades, and regional differences in the rate of surface air warming have also been reported (IPCC, 2013). Typically, the Arctic shows an evident warming amplification in the last decades (Screen and Simmonds, 2010; Serreze and Barry, 2011), while a relatively smaller rate of surface air warming occurred in the low latitudes (Wang et al., 2018). Moreover, the increase of surface air temperature in high-elevation regions is larger than in their low-elevation counterparts (Wang et al., 2014).

The Tibetan Plateau (TP), with an average elevation of more than 4,000 m above sea level (asl), is the highest and largest plateau in the world, and it is also named the "Third Pole". In recent decades, surface air temperature on the TP has increased significantly (Liu and Chen, 2000; Kang et al., 2010; Lu and Liu, 2010; Duan and Zhang, 2014; Kuang and Jiao, 2016; Liu et al., 2017; Xu et al., 2017). It has been reported that the TP experienced striking warming in the last decades as a result of increasing greenhouse gas (GHG) emissions. Similar to other mountain regions in the world, warming on the TP has an elevation-dependent effect (Liu et al., 2009; Qin et al., 2009; You et al., 2010; Rangwala and Miller, 2012; Pepin et al., 2015). Based on records of meteorological stations, Liu and Chen suggested that warming occurred earlier on the TP than in the Northern Hemisphere and the global average, and they also argued that the rate of warming on the TP exceeded those of the Northern Hemisphere and the same latitudinal zone during 1955–1996 (Liu and Chen, 2000). Based on analyses of observed records, Kuang and Jiao (2016) suggested that the rate of warming on the TP has been greater since the 1980s than in the earlier period (i.e., 1960s–1980s). Moreover, the TP experienced accelerated warming during 1998–2008, when global warming had a hiatus (Duan and Xiao, 2015). These studies indicate that climate warming on the TP varies through time and diverges from the global average. However, it is unclear whether there is a time-dependent warming amplification on the TP.

In addition to the temporal characteristics of warming on the TP, seasonal differences have also been reported (Liu and Chen, 2000; Wang et al., 2018). The seasonal difference of warming on the TP has induced a weakening of temperature seasonality (Duan et al., 2017; 2019a; 2019b). These previous studies provide important knowledge for understanding seasonal warming over the TP, but it is not fully clear how the rate of seasonal warming depends on time. In this study, we investigate warming amplification over the TP compared to the other land grid cells of the globe in each season, considering both the time-dependent warming on the TP and its consistency with the global average.
In addition to the temporal characteristics of warming on the TP, seasonal difference has also been reported (Liu and Chen, 2000;Wang et al., 2018). Seasonal difference of warming on the TP has induced a weakening of temperature seasonality (Duan et al., 2017;2019a;2019b). These previous studies provide important knowledge for understanding seasonal warming over the TP, but it is not fully clear that how did the rate of seasonal warming depend on time. In this study, we investigate warming amplification over the TP compared to the other land grid cells of the globe in each season based on considerations of both the time-dependent warming on the TP and its consistency with the global average. | DATA AND METHODS Gridded data of CRU_TS4.01 land surface air temperature at a spatial resolution of 0.5 by 0.5 covering the time period of 1900-2016 (Harris and Jones, 2016) were used in this study. This dataset covers all land areas, excluding the Antarctica. It was constructed using the Climate Anomaly Method based on station data (Harris et al., 2014). Station anomalies were interpolated into 0.5 latitude/longitude grid cells, and combined with an existing climatology to obtain absolute monthly values. To be included in the gridding operations, each station series include enough data for normal to be calculated. Further details can be found in Harris et al. (2014). Considering that this dataset uses records of meteorological stations over the TP which started since the 1960s, all analyses in this study were performed within the period of 1961-2016. The scope of the TP in this study was defined as the area with an elevation more than 2,000 m asl ( Figure 1). The TP scope includes 1,028 grid cells, accounting for 10.5% of the global land grid cells of CRU_TS4.01 dataset. The representativeness of CRU_TS 4.01 dataset over the TP was examined using observations of annual mean temperature from 79 meteorological stations located above 2,000 m asl on the TP . The comparison of warming rate between station records and corresponding grids shows a significant but not strong enough relationship (Figure 2a). This is mainly due to the relatively large difference of warming rate among the F I G U R E 1 The scope of the TP (red area) with an elevation more than 2,000 m a.s. l. defined in this study. The black dots show the locations of the 79 meteorological stations located within the scope of the TP 79 stations resulted from the different elevation of station locations (elevation-dependent warming), while such difference is less obvious among the corresponding grids. Specifically, the CRU grid data did not capture successfully the anomalous local warming recorded in a few meteorological stations. For example, the lowest warming rate from the 79 stations is −0.20 C·decade −1 (station: 37.4 N, 101.6 E), while the warming rate from the corresponding CRU grid is 0.10 C·decade −1 (Figure 2a). The highest warming rate from the 79 stations is 0.67 C·decade −1 (station: 36.8 N, 93.68 E), while the warming rate from the corresponding CRU grid is 0.32 C·decade −1 . However, the regional average of annual mean temperature derived from the station and grid records shows a good agreement both for the full analysis period and the two sub-periods ( Figure 2b). This indicates that the anomalous local warming shown in a few meteorological stations does not influence the regional-averaged result largely, and the regionally averaged CRU gird data can represent the regional TP warming reasonably well. 
In this study, we treat the TP as a whole rather than focusing on individual grid cells, and we therefore consider the CRU grid dataset suitable for these analyses. Moreover, we notice that a few extremes of the station records are smaller than the CRU data before 1990, but greater after 1990 (Figure 2b). This induces a slightly greater magnitude of warming derived from the station data than from the CRU data (0.07 and 0.12 °C for the early and late periods, respectively). This might be related to the limited number of meteorological stations located on the TP and their uneven spatial distribution. To examine whether this difference is significant, we performed a significance test of variance changes between the two series for the full period and for the two sub-periods (i.e., the early and late periods). The results show that the difference between station records and CRU data is not significant for the full period (p = .60), the early period (p = .06) or the late period (p = .57). These results indicate that the CRU data can represent the observations reasonably well at the regional scale. The minor differences between them, and a more accurate assessment of the representativeness of the CRU grid dataset for whole-TP air temperature, remain to be studied in the future with much richer meteorological station data.

In the analyses, we first considered time-dependent changes in the rate of warming on the TP as well as its consistency with the global average. Then, we investigated the difference in the magnitudes of seasonal warming between the TP and the other land grid cells of the globe, and calculated the seasonal warming amplification on the TP. The difference in the magnitude of warming between the whole TP and each land grid cell (i.e., the magnitude of warming on the whole TP minus the magnitude of warming in each land grid cell) was calculated for the different time periods. The magnitude of warming in an individual grid cell was calculated as the rate of warming multiplied by the number of years in the analysis period. The regional temperature series of the whole TP was obtained by averaging all temperature series from the individual grid cells within the TP scope (Figure 1); the magnitude of warming of the whole TP was then calculated as the rate of warming of this regional series multiplied by the number of years in the analysis period (see the sketch after this section).

| RESULTS

3.1 | Warming of annual mean temperature over the TP and its consistency with the global average

The trend of annual mean temperature shows that the TP has experienced a significant temperature increase since 1961, but the rate of warming is greater since 1983 (0.33 °C/decade) than in the period 1961–1983 (0.08 °C/decade) (Figure 3a). An accelerated warming over the TP since the 1980s was also found in previous studies using both observations (Kuang and Jiao, 2016) and tree-ring density reconstruction (Duan and Zhang, 2014). The 11-year running average indicates a pattern similar to the warming rates (Figure 3b). Moreover, the consistency of the warming of annual mean temperature over the TP with the global average is higher since 1983 than during 1961–1983 (Figure 3c). Based on this, the analyses below are performed for three periods (1961–2016, 1961–1983 and 1983–2016).

3.2 | Difference of the magnitude of warming between the TP and the other land grid cells of the globe

The magnitude of warming over the TP differs from that of other regions of the world both across seasons and across time intervals (Figures 4–6).
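The magnitude-of-warming calculation described in the Methods above (trend multiplied by the number of years, compared cell by cell against the whole-TP value) can be sketched as follows; the synthetic arrays stand in for the CRU grid and are illustrative only.

```python
import numpy as np

def warming_magnitude(years, temps):
    """Magnitude of warming over a period: linear trend (C/yr) times the period length."""
    return np.polyfit(years, temps, 1)[0] * len(years)

def tp_minus_grid(years, tp_series, land_grid):
    """Per-cell difference maps as in Figures 4-6: whole-TP magnitude minus cell magnitude.

    Positive values mark land cells that warmed less than the TP as a whole.
    """
    tp_mag = warming_magnitude(years, tp_series)
    cell_mags = np.array([warming_magnitude(years, cell) for cell in land_grid])
    return tp_mag - cell_mags

rng = np.random.default_rng(1)
years = np.arange(1983, 2017)
tp_series = 0.033 * (years - years[0]) + rng.normal(0.0, 0.2, years.size)
cell_rates = rng.uniform(0.0, 0.05, size=(500, 1))  # per-cell trends in C/yr
land_grid = cell_rates * (years - years[0]) + rng.normal(0.0, 0.2, (500, years.size))

diff = tp_minus_grid(years, tp_series, land_grid)
print(f"TP warmed more than {100 * (diff > 0).mean():.0f}% of the land cells")
```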
In the whole period 1961–2016, the magnitude of warming over the TP is smaller than that of about two thirds of the global land in spring and summer (Figure 4). In autumn and winter, the magnitudes of warming over the TP are greater than those of 36.4% and 58.1% of the global land area during 1961–2016, respectively. The spatial patterns are similar between 1961–1983 and 1961–2016 for spring and summer (Figures 4 and 5). The general feature is that the magnitudes of spring and summer warming over the TP (in 1961–2016 and 1961–1983) are greater than in parts of South America, Australia and the Tropics, but smaller than in parts of the northern high latitudes and most of Eurasia. The magnitudes of autumn and winter warming over the TP during 1961–1983 are greater than those of about 60% of the global land (Figure 5). In autumn, the positive difference (the magnitude of warming over the TP minus the magnitude of warming in other land cells) mainly occurs in South America, Europe and Greenland, and the negative difference occurs in central Eurasia and northwest America. In winter, the positive difference occurs in most of the Southern Hemisphere and Greenland, but the negative difference occurs in Eurasia and most of North America. In the period 1983–2016, the positive difference in warming magnitudes between the TP and the other land grid cells covers a larger percentage of land than in both the earlier period (i.e., 1961–1983) and the whole period 1961–2016 (except for autumn in 1961–1983). In spring and summer of the period 1983–2016, the magnitudes of warming on the TP are larger than those of approximately half of the global land, and their spatial patterns are basically similar (i.e., the positive difference mainly occurs in most of the Southern Hemisphere and part of North America). The magnitude of autumn warming on the TP during 1983–2016 is greater than that of about 40% of the global land area. A consistent feature across periods is that the magnitude of warming over the TP is smaller than in the northern high latitudes in any season. Grid cells with a negative value are those in which the magnitude of warming of the whole TP is less than in that grid cell for the analysis period; grid cells on the TP itself with a negative value in spring and summer in Figures 4–6 likewise denote cells with a greater magnitude of warming than the TP as a whole.

Generally, the magnitude of warming on the TP is smaller than in the northern mid-high latitudes (NMH) in spring and summer in all three analysis periods (Table 1). In autumn, the magnitude of warming is greater on the TP than in the NMH during 1961–1983, but smaller than in the NMH during 1983–2016 and 1961–2016. In winter, the TP shows a warming amplification compared to the NMH during 1983–2016, but is slightly smaller than the NMH in the periods 1961–1983 and 1961–2016. Compared to the global average, the TP shows a warming amplification in winter in all three analysis periods, and in autumn in the periods 1961–1983 and 1983–2016. The TP shows a warming amplification in all four seasons of about 1.1–1.4 times the global average during 1983–2016.

| DISCUSSION

Several driving mechanisms have been proposed for the warming on the TP in recent decades (Chen et al., 2003; Rangwala et al., 2009; Lau et al., 2010). Besides increased GHG concentrations (Chen et al., 2003), cloud amount changes, snow-albedo and surface-based feedbacks (Lau et al., 2010), changes in surface water vapor (Rangwala et al., 2009) and atmospheric aerosols have also been suggested as important factors contributing to the recent surface air warming on the TP.
In this study, we found that the rate of surface air warming on the TP is greater during 1983–2016 than in the period 1961–1983, and that the TP shows a much more evident warming amplification relative to the global average in the period 1983–2016 than in the period 1961–1983. This is concurrent with the accelerated retreat of glacier fronts on the TP since the 1980s. The accelerated retreat of glacier fronts has induced a reduction in glacier areas over the TP. A reduction in glacier area can reduce the surface albedo, with a subsequent increase in absorbed solar radiation, leading to an increase in surface air temperature. Such a snow/ice-albedo feedback mechanism has been successfully simulated in projections of future surface air temperature changes on the TP and the surrounding high-elevation regions (Lau et al., 2010). Moreover, this feedback mechanism operates primarily during spring and summer (Giorgi et al., 1997; Lau et al., 2010). Our results show that the largest positive difference in TP warming amplification between the period 1983–2016 and the other periods (i.e., 1961–1983 and 1961–2016) also occurs in spring and summer (Figure 7). Therefore, we speculate that the rapid reduction in glacier area on the TP since the 1980s possibly contributed to the warming amplification of the TP. Further model simulations are needed to validate this speculation.

| CONCLUSIONS

In this study, we analyzed the magnitude of seasonal warming on the TP during 1961–2016 using the gridded dataset CRU_TS4.01. Comparisons between the CRU_TS4.01 data and meteorological station records show good coherence between their regional series, indicating a good representativeness of the gridded dataset on the TP. The variation of annual mean temperature on the TP has a higher consistency with the global average since 1983 than in the earlier period (i.e., 1961–1983), and surface air warming on the TP is also greater in the period 1983–2016 than in the period 1961–1983. Compared to the other land grid cells of the globe, the TP warming shows a seasonal amplification (1.1–1.4 times the global average) during 1983–2016, while such a warming amplification during 1961–1983 is relatively less evident. Generally, the magnitude of warming on the TP is smaller than in the northern high latitudes, but greater than in the Southern Hemisphere and the Tropics. Based on current scientific understanding, our results emphasize the possible contribution of the snow/ice-albedo feedback to warming amplification over the TP since the 1980s.

| DATA AVAILABILITY

CRU_TS4.01 data are available at …1709081022.v4.01/tmn/. Meteorological station data are available on application at the China Meteorological Data Sharing Service System (http://cdc.cma.gov.cn/home.do).

[Figures 4–6 captions (for the periods 1983–2016, 1961–2016 and 1961–1983) also report the percentage of land grid cells with a smaller magnitude of warming than the whole TP in each season in the three analysis periods.]
Evaluation of the in vitro antibacterial effect of essential oil and some herbal plant extracts used against mastitis pathogens

Abstract

Background: Mastitis in dairy cattle is a highly prevalent infectious disease caused by various pathogens, mainly Staphylococcus aureus and Escherichia coli, causing considerable economic loss worldwide.

Objectives: The aim of this study was to evaluate the in vitro activity of herbal plants used against S. aureus and E. coli, the causative agents of mastitis.

Methods: We evaluated the in vitro antibacterial activity of squaw mint (Mentha pulegium L., Lamiaceae family), catnip (Nepeta cataria L., Lamiaceae) and lemon balm (Melissa officinalis L., Lamiaceae) for mastitis treatment, with solutions prepared in fixed oils, against S. aureus and E. coli, the main agents of mastitis. Isolation and antibiotic susceptibility analyses of milk samples taken from 100 subclinical mastitis dairy cows were performed. The antibacterial properties of the solutions were analysed by a disk diffusion method.

Results: In the bacterial isolation, S. aureus was positive in 97.7% and E. coli in 53.5% of the cows with mastitis. In the antibacterial susceptibility test, the lemon balm extract and essential oil showed a maximum zone of inhibition against S. aureus at 30 µl (23 mm), followed by 20 µl (19 mm; E. coli, 19 mm) and 10 µl (5–7 mm) of the same extract against the Gram-positive bacteria. The ethanol extracts showed similar activity against the Gram-negative bacteria at 30, 20 and 10 µl (18–20 mm). When the zone areas for the susceptible solutions (lemon balm and essential oil) and the control group were compared, little difference was found between S. aureus and E. coli.

Conclusions: This study hence indicates that in vitro cultured plantlets of lemon balm and peppermint oil can be used as an alternative for mastitis treatment and as a cheap source of antimicrobial precursors.

INTRODUCTION

Mastitis is a complex disease caused by various bacterial pathogens, mainly Staphylococcus aureus and Escherichia coli. It is also reported to be the most common reproductive disease in dairy cattle and a heavy economic burden on dairy farms worldwide (Alonso et al., 2020; Harjanti & Sambodho, 2020). It is believed that the bacterial infection in mastitis cases is related to the disruption of alveolar cell integrity, sloughing of cells, induced apoptosis and an increase of poorly differentiated cells. Since the ability of ruminant mammary glands to produce milk is determined by the number and activity levels of milk-secreting cells, the amount of milk produced and the protein, lactose and fat concentrations in milk can be affected by the level of inflammation in the mammary gland (Harjanti & Sambodho, 2020). Drug residues in milk cause allergic reactions in the consumer, interference with the intestinal flora and resistant bacterial populations, and accordingly can undermine the effect of antibiotic treatment (Amber et al., 2018). The World Health Organization (WHO) has stated that E. coli and S. aureus are priority pathogens for overcoming antimicrobial resistance and for the research and development of new antibiotics (Arbab et al., 2021a; Arbab et al., 2022; Klaas & Zadoks, 2018; Tepeli, 2020).
The main treatment of mastitis is commonly administered by intramammary infusion of an ointment or by intramuscular or intravenous injection of antibiotics, such as beta-lactams (Tepeli, 2020). However, treatment is anticipated to become problematic in the near future owing to the rapid increase in antibiotic-resistant pathogens (Milk, 2014; Oliver & Murinda, 2012). There is an increasing number of published studies in the field of antimicrobial therapy using natural products (Boldbaatar et al., 2014; Rios & Recio, 2005), including studies on the antimicrobial effect of plant products on pathogens isolated from bovine mastitis (BM) (Dorman & Deans, 2000; Gopinath et al., 2011). However, the majority of these studies have focused on plants with a natural distribution specific to certain geographical areas (Taemchuay et al., 2009). Despite the encouraging results of these studies, more studies including indigenous or acclimatised plants are required to cover distinct geographical areas in order to achieve wide availability and a low manufacturing cost for these products. Essential oils and plant extracts are rich in a wide variety of metabolite compounds (Taemchuay et al., 2009). Peppermint oil, which is recommended for mastitis therapy, has been proven effective against a wide variety of microorganisms (Grzesiak et al., 2018), but only a few studies are available that describe the antimicrobial effects of essential oils (Vlase et al., 2014). Onion bulbs contain a good number of phytochemical constituents, most of which are hydrocarbons and their derivatives. Several studies have shown that plant extracts and essential oils have antimicrobial effects (Zajmi et al., 2015). However, a large number of plant species have not been studied for their potential medicinal value (Duda et al., 2015). Previous studies have evaluated the antimicrobial effect of several medicinal plants on different collection strains of pathogens (Duda et al., 2015). The aim of this study was to evaluate the in vitro activity of solutions of essential oils of squaw mint (Mentha pulegium L., Lamiaceae family), catnip (Nepeta cataria L., Lamiaceae) and lemon balm (Melissa officinalis L., Lamiaceae) against S. aureus and E. coli, the causative agents of mastitis.

Animals' clinical data

Udder secretions of lactating breeds were the test material in this study, and clinical data were recorded from cattle with clinical mastitis (CM). In the study, 100 cows evaluated as CMT +: 1, ++: 2, +++: 3, and without clinical endometritis or laminitis, were accepted as the experimental group.

Extraction of essential oil

Herb parts of squaw mint (Mentha pulegium L., Lamiaceae family), catnip (Nepeta cataria L., Lamiaceae) and lemon balm (Melissa officinalis L., Lamiaceae), the parts with the highest essential oil content, were used and then powdered. Ten grams of this powder was soaked in 100 ml of solvent (ethanol) or essential oil for 24 h. The contents were then filtered through Whatman filter paper no. 1 and the filtrate was evaporated to dryness. This dried extract was further powdered and then dissolved in distilled water to make a working solution with a concentration of 10 mg/ml. Solvent controls (ethanol) were prepared in a similar manner.

Bacterial isolation from milk samples

Bacterial strain isolation from milk samples was carried out following aseptic procedures as described by the National Mastitis Council (NMCRC, 2004). A loopful of milk sample was streaked on blood agar (Humphries et al., 2021).
The isolates were confirmed by biochemical tests and sub-cultured on differential and selective media. The biochemical tests were oxidase activity, acid production (lactose, sucrose and glucose fermentation), indole production, Voges-Proskauer and hydrogen sulphide production. Fifty 6-mm discs of Whatman filter paper were obtained by punching, placed in a bottle and sterilised in a hot-air oven at 170 °C for 30 min.

Determination of antibiotic resistance profile

Each 6-mm filter paper disc was impregnated with 20 µl of essential oil diluted in 1 ml of distilled water (v/v), compared with reference antibiotics and with solvent or double-distilled water as a negative control, and aseptically placed with sterile forceps on Mueller-Hinton agar plates. The plates were incubated at 37 °C for 24 h (Perez, 1990). Visible growth from the minimum inhibitory concentration disc diffusion assay was subcultured using a 10 µl inoculating loop onto 5% sheep blood agar plates and incubated at 37 °C for 24 h (Table 1).

Statistical analysis

Bacterial isolation data

The overall percentages of positive bacterial isolates from dairy mastitis cattle were recorded. A total of 100 cattle were examined and 73 were positive for different organisms. The percentage prevalence of organisms is presented in Table 2. Out of 100 samples, S. aureus was isolated and identified as positive in 43 cattle (97.7%) and E. coli in 30 (53.5%), respectively. All organisms were identified by their morphological and cultural characteristics and staining reactions, and further confirmed by their biochemical reactions.

Bacterial zone diameters

While some of the materials were found to have different levels of antibacterial activity against the tested microorganisms, some were found to be ineffective (Table 3 and Figure 1).

Sensitivity of solutions

Among the solutions, five were found to be resistant for S. aureus, while six were found to be sensitive (moderately or very sensitive). For E. coli, 10 solutions were found to be resistant, while six were found to be sensitive (moderately or very sensitive), as shown in Table 4. When the zone diameters were compared between the two antibiotic groups and the solution used as the control group in the study, little statistically significant difference was found, as shown in Table 5 and Figure 2.

DISCUSSION

Bovine mastitis is a serious disease causing considerable economic loss worldwide (Halasa et al., 2007). Based on previous studies and the recommendations of the National Mastitis Council, DCT is considered … S. aureus, which is known to be highly resistant to antibiotic treatment in mastitis, lives in the host's cells and becomes chronic by forming micro-abscesses or granulomas in the mammary gland tissues (Azadi et al., 2011). In the bacterial isolation results, S. aureus was found to be 97.7% and E. coli 53.5% positive. The effects of these two bacteria on the formation of mastitis were pronounced in the cows studied, and the in vitro solution trials were therefore conducted against these two bacteria. These findings are in agreement with the previous study by Arbab et al. (2021c). The values found were similar to other research results (Arbab et al., 2021d).
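As an illustration of how zone diameters translate into the sensitivity classes reported in Tables 3 and 4, a minimal sketch follows; the breakpoints are hypothetical placeholders, since the study does not state its classification criteria.

```python
# Illustrative breakpoints only: the study's actual zone-diameter criteria
# for "resistant" / "moderately sensitive" / "very sensitive" are not
# stated in the text, so these values are placeholders.
def categorise_zone(diameter_mm, resistant_below=10.0, very_sensitive_from=18.0):
    if diameter_mm < resistant_below:
        return "resistant"
    if diameter_mm >= very_sensitive_from:
        return "very sensitive"
    return "moderately sensitive"

# Example readings (mm) for the lemon balm extract at 30 microlitres
for organism, zone in {"S. aureus": 23, "E. coli": 19}.items():
    print(f"{organism}: {zone} mm -> {categorise_zone(zone)}")
```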
Although all extracts showed antimicrobial activity against nearly all of the microorganisms tested, lemon balm and its essential oil were found to exhibit broad-spectrum activity against selected bacterial pathogens isolated from clinical mastitis in dairy cows. The study found that the extract of lemon balm and its essential oil showed maximum inhibition against S. aureus (21–23 mm) and E. coli (19–20 mm). E. coli, which is already known to be multiresistant to drugs, was susceptible only to the extract from lemon balm essential oil. Essential oils contain many compounds that act synergistically and induce strong anti-algal effects; the active substances, including various polyphenols, are capable of dissolving the algal cell wall and penetrating into the cell, where they affect the cell metabolism (Bouari et al., 2011). Another study, conducted to check the antibacterial activity of ethanol extracts of medicinal plants, reported maximum inhibition against S. aureus (Arbab et al., 2020; Jothi et al., 2014). The present study is also supported by Paiano et al. (2020). Researchers have found that peppermint oil is recommended for mastitis therapy (Grzesiak et al., 2018). It was determined that cinnamon, clove, oregano and thyme essential oils showed inhibition zone diameters of up to 36 mm for S. aureus causing endometritis and 20 mm for E. coli (Paiano et al., 2020). Peppermint oil showed positive activity, with a maximum inhibition zone at 30 µl of 21 mm against S. aureus and 20 mm against E. coli, respectively; at lower volumes there was no indication of inhibition against E. coli. In another study, the antioxidant and antimicrobial activities of the phenolic components of olive oil extracts obtained from 11 Algerian varieties were investigated against various bacteria. Peppermint oil is used as a natural medicine for multiple therapeutic purposes in animals. It has been observed that peppermint essential oil inhibits the growth of Gram-negative microorganisms, especially …
Ovarian sclerosing stromal tumour: Report of a new entity with immunohistochemical study

Sclerosing stromal tumour (SST) is a rare benign sex cord stromal tumour occurring in women in their second and third decades. Patients usually present with menstrual irregularity and pelvic pain. Microscopically, this tumour is characterized by epithelioid and spindle cells arranged in pseudolobules separated by areas with a variable amount of fibrous deposition. The presence of 'staghorn-like' proliferating vasculature is the hallmark feature of this tumour. The main differential diagnoses are thecoma and fibroma; immunohistochemistry can be used to differentiate these tumours. This relatively new entity should be kept in mind while reporting an ovarian tumour in a young female. We describe SST in an 18-year-old female in this case report.

Introduction:

Sclerosing stromal tumour (SST) is an extremely rare benign ovarian sex cord stromal neoplasm with distinctive clinical and pathological features and unknown aetiology. This entity was first identified in 1973 by Chalvardjian and Scully. It accounts for approximately 8% of all primary ovarian neoplasms, generally occurring in young women and girls in the second and third decades of life. 1,2 The tumour is composed of an admixture of epithelioid and spindle cells of ovarian stromal origin. 3 It was included as a subtype of ovarian sex cord stromal tumour in the WHO 2003 classification of female genital tumours. 4 SST is usually, but not invariably, hormonally inactive. Occasional cases have been reported with estrogenic and androgenic activity, and unusual presentations may occur with virilization and precocious puberty. It may be associated with pregnancy, and rarely with Meigs' syndrome and endometrial carcinoma. The majority of patients complain of menstrual irregularities, pelvic pain and ovarian mass effect, but some may be asymptomatic. [2][3][4] The tumour is usually unilateral and well circumscribed. This microscopically heterogeneous tumour has the striking features of distinct cellular and hypocellular areas and a peculiar vascular architecture. Cellular areas comprise fibroblast-like and luteinized theca-like cells arranged in pseudolobules; the intervening hypocellular areas consist of oedematous and collagenous stroma. 1,[5][6] The tumours are immunohistochemically positive for vimentin, SMA, desmin, CD99, estrogen and progesterone receptors, and sex cord markers such as inhibin and calretinin. Surgery is the main therapeutic modality. 5,6 Exceptionally, one patient with low-grade malignancy was reported in 1990, 4 and another case with recurrence showed features of capsular disruption, necrosis and significant mitotic activity. 3 Here we describe a case of SST in an 18-year-old female.

Clinical summary

A regularly menstruating, unmarried 18-year-old female had a history of lower abdominal pain of 1 year's duration. She had complained of menorrhagia for the same duration. On clinical examination, a mass was palpable in the lower abdomen which had gradually increased in size in the last 6 months before the operation. Ultrasonography revealed a large complex mass, mostly cystic, measuring about 15.3x10.8 cm, arising from the right adnexa. Magnetic resonance imaging showed a huge inhomogeneous cystic mass with a solid component, regular and smooth in outline, not separately defined from the right ovary, measuring about 15x13x12 cm, in the pelvis, extending up to the central abdomen and compressing the bowel loops.
The lesion contained some internal septation and mixed-intensity components from the wall, which showed enhancement after contrast. Mild ascites and right-sided moderate hydroureteronephrosis were also present. The radiological opinion was in favor of mucinous cystadenocarcinoma. Routine laboratory parameters, including tumour markers, were within normal limits. The patient then underwent a right-sided salpingo-oophorectomy, and the resected specimen was sent to the Department of Pathology, BSMMU, for frozen section, which was negative for malignancy. The post-operative period was uneventful. The tumour did not recur during a follow-up period of 1 year.

Macroscopic findings

The specimen consisted of a resected ovarian cyst with attached fallopian tube. The cyst measured about 17x16x10 cm. The outer surface was grey-white, smooth and shiny. Clear fluid came out on incision. The cyst cavity was unilocular. The cut surface of the cyst wall was grey-white, edematous and rubbery in consistency, containing small cystic spaces, with a maximum thickness of 2.5 cm.

Microscopic findings

Hematoxylin and eosin stained sections showed a benign tumour composed of cellular and hypocellular areas arranged in pseudolobules. These pseudolobules contained lutein cells and spindle cells (figure-1A). Occasional signet ring-like cells were present (figure-1B). Hypocellular areas showed edematous stroma with foci of myxoid change. Numerous thin, branching, hemangiopericytoma-like blood vessels were also present within the cellular areas. Mitoses were infrequent (<1/10 HPF). Tumour cells were negative for PAS stain (figure-1C). Immunohistochemistry was done. SMA and desmin were focally positive in blood vessels and fibroblast-like spindle cells (figure-2A and 2B). The cytoplasm of the tumour cells was diffusely positive for vimentin (figure-2C). Inhibin was focally positive with weak intensity in around 5-10% of the tumour cells' cytoplasm, especially in the plump vacuolated cells (figure-2D). Calretinin was negative in tumour cells (figure-2E). CD34 was positive in blood vessels, delineating the peculiar ectatic and proliferating vascular pattern, but negative in tumour cells (figure-2F).

Discussion:

Sclerosing stromal tumour is a relatively rare subtype of ovarian sex cord stromal tumour, proposed to originate from the perifollicular myoid stromal cells residing normally in the theca externa and ovarian cortical stroma. 4,7 Previous immunohistochemical studies supported smooth muscle differentiation of the specialized gonadal stromal tissue. 2 This benign ovarian tumour predominantly involves the young age group, ranging from about 14 to 51 years. 5,8 Grossly, the tumour may transform the involved ovary into a solid, or predominantly solid, mass with cystic degeneration, or a unilocular cyst containing clear fluid. SST may contain polygonal lutein cells with eosinophilic cytoplasm showing clinical evidence of steroid hormonal activity. Despite the presence of such cells, the tumour is considered hormonally inactive in nature, as these active-appearing cells do not always secrete clinically significant amounts of steroid hormones. 9 Features of masculinization or anovulation may be present in those cases occasionally associated with oestrogen and androgen secretion. 8 CT and MRI are the imaging modalities of choice for better visualization of ovarian tumours larger than 5-10 mm in size. 10
Our presenting case was clinically and radiologically suspected to be malignant because of an enlarging mass that attained a huge size within a short period, and a heterogeneous hypointense and hyperintense pelvic lesion containing internal septations on MRI. SST has characteristic histologic features that separate the entity from other types of sex cord stromal tumour. It does not require any immunohistochemical or ancillary tests for diagnosis except in cases with overlapping microscopic features. 5 The benign neoplasm is marked by the presence of cellular and paucicellular areas, with a pseudolobular appearance of the former. Cellular areas are composed of an admixture of fibroblast-like spindle cells with elongated vesicular nuclei, and round to oval or epithelioid cells. The latter neoplastic cells, often plump to polygonal in appearance, with eosinophilic, sometimes vacuolated cytoplasm (due to the presence of lipid) and round nuclei, are termed 'luteinized theca-like' cells. Occasional foci of signet ring-like cells may be revealed, which may show prominent luteinization, especially during pregnancy. 3 Thin-walled, ectatic, branching 'hemangiopericytoma-like' vascular channels are seen scattered throughout the cellular areas as well as the intervening fibrotic stroma. 2,5 SST and thecoma are supposed to be closely related entities on the basis of antigenic determinants and morphology; IHC has little role in differentiating these two entities. 6,7 SST may be considered a neoplasm in transition arising from typical or luteinized thecoma, which may evolve into ovarian myxoma or end-stage SST. 9 SST with prominent signet ring-like cells can mimic signet ring stromal tumour. However, those signet ring cells show a negative reaction for lipid, whereas signet ring-like cells in SST are lipid rich. 11 SST may also develop from a pre-existing fibroma. 6 Thus, differentiation of SST from other stromal tumours may pose a diagnostic challenge. Inhibin, calretinin and α-glutathione S-transferase (GST) are the biomarkers used for the diagnosis of sex cord stromal tumours, related to the steroidogenic activity of the cells. A highly vascular sclerosing stromal tumour may mimic a vascular tumour, such as hemangiopericytoma, which is excluded by the presence of inhibin and calretinin positivity. Inhibin is the more specific (97%), whereas calretinin is the more sensitive (97%), marker for sex cord stromal tumours. Stronger expression of inhibin and calretinin favors thecoma over fibroma. Vacuolated cells and scattered single cells of SST show marked intracytoplasmic positivity for GST. On the other hand, thecomas show diffuse staining and fibromas show no staining for GST. The intensity of GST expression is a reflection of inhibin as well as calretinin positivity, which correlates with the degree of luteinization. 1-2,5-6,12 Massive ovarian edema is another differential diagnosis of SST. This possibility can be excluded by the presence of preserved ovarian tissue within edematous areas and the absence of cellular heterogeneity. 1 In the present case, no preserved ovarian tissue was seen. In ovarian stromal tumours, vimentin, smooth muscle actin (SMA) and desmin may show cytoplasmic positivity. SMA and desmin often show positive staining in blood vessel walls, as well as in focal perivascular and stromal fibroblast-like cells in SST, with marked intensity. SMA and desmin are weakly and focally positive in thecoma. In fibroma, SMA staining is delicate and wispy in nature, with moderate intensity, and desmin reactivity is negative.
Staining for CD34 highlights the complex branching vascular architecture in SST, along with ovarian non-neoplastic stromal cells. In fibroma and thecoma, tumour stromal cells may be positive for CD34. Sometimes the presence of signet ring-like cells creates confusion with Krukenberg tumour; these malignant signet ring cells are positive for EMA, pancytokeratin and PAS, negative for inhibin, and show atypical mitoses and nuclear features. 2,5,7,12 Other, less reliable markers used for SST are CD99, CD56, WT1, S100, and estrogen and progesterone receptors. FOXL2 and TFE3 reactivity has recently been reported in a subset of tumours. 3,9,11 Reticulin stain outlines tumour cell nests and aggregates in granulosa cell tumour, along with reticulin fibre deposition. In thecoma and fibroma, it reveals a pericellular reticulin staining pattern. Collagen fibres are found abundantly deposited in fibroma, as highlighted by Masson's trichrome stain. 5 In our case, the reticulin stain was positive around blood vessels and scattered around individual cells in perivascular areas. The present case also showed positivity for Masson's trichrome stain in collagenized areas; this special stain revealed deposition of fine collagen fibres in the cellular areas, which became thick collagen bundles at the periphery of the cellular areas. After consideration of the clinical features, histomorphology, IHC findings and special staining patterns, the case was diagnosed as sclerosing stromal tumour. Ancillary investigation is not routinely applied for the diagnosis of SST; a small subset of tumours has revealed the presence of trisomy 12 and FHL2-GLI2 fusion genes in tumour cells on FISH studies. 3

Conclusion:

Preoperative suspicion of sclerosing stromal tumour is difficult because of the rarity of the tumour, which can simulate malignancy clinically and radiologically. Frozen section can play an important role in excluding malignancy and avoiding further unnecessary surgical intervention. In cases of ovarian tumours in young females, SST should be borne in mind. It can be diagnosed mainly on the basis of its peculiar heterogeneous cellular, vascular and sclerotic pattern. SST can be differentiated from fibroma, thecoma and granulosa cell tumour with special stains and immunohistochemistry in cases of diagnostic dilemma.
In-solution hybrid capture of bisulfite-converted DNA for targeted bisulfite sequencing of 174 ADME genes

DNA methylation is one of the most important epigenetic alterations involved in the control of gene expression. Bisulfite sequencing of genomic DNA is currently the only method to study DNA methylation patterns at single-nucleotide resolution. Hence, next-generation sequencing of bisulfite-converted DNA is the method of choice to investigate DNA methylation profiles at the genome-wide scale. Nevertheless, whole-genome sequencing for the analysis of human methylomes is expensive, and a method for targeted gene analysis provides a good alternative in many cases where the primary interest is restricted to a set of genes. Here, we report the successful use of a custom Agilent SureSelect Target Enrichment system for the hybrid capture of bisulfite-converted DNA. We prepared bisulfite-converted next-generation sequencing libraries enriched for the coding and regulatory regions of 174 ADME genes (i.e. genes involved in the metabolism and distribution of drugs). Sequencing of these libraries on Illumina's HiSeq2000 revealed that the method allows a reliable quantification of the methylation levels of CpG sites in the selected genes, and validation of the method using pyrosequencing and the Illumina 450K methylation BeadChips revealed good concordance.

INTRODUCTION

DNA methylation is an important mechanism contributing to the control of gene expression. It is well known that changes in DNA methylation play a role in many human diseases as well as in normal development (1). A number of methods have been developed to assess DNA methylation (2). Currently, bisulfite sequencing is considered the 'gold standard' in DNA methylation analysis, as this method allows the investigation of DNA methylation patterns at single-nucleotide resolution. Moreover, progress in DNA sequencing technologies has allowed the re-sequencing of whole human genomes within a reasonable time and cost (3). The combination of bisulfite conversion of DNA with next-generation sequencing (NGS) allows for powerful whole-epigenome analysis (4). Coupling target enrichment techniques with bisulfite conversion of DNA allows researchers to focus on genomic regions within cellular or disease-related pathways of interest. It can also dramatically decrease the sequencing cost and time required per sample while maintaining the sequencing depth required for reliable quantification of DNA methylation levels. Currently, many methods for target enrichment of DNA have been reported [reviewed in (5)]. The common feature of all of these methods is the capture of the targeted genomic DNA (gDNA) fragments by complementary in vitro synthesized oligonucleotide sequences (either baits, primers or probes). Given that bisulfite treatment dramatically decreases the sequence complexity of DNA (as most C residues are converted to Ts), it converts otherwise unrelated sequences into significantly similar ones. Furthermore, bisulfite treatment extensively degrades DNA, which complicates the coupling of enrichment procedures with bisulfite treatment. Despite these complications, there are some successful examples of combining target enrichment methods with the bisulfite treatment of DNA (6)(7)(8)(9)(10)(11)(12)(13).
Each of these methods has its own limitations, such as a high requirement for the amount of input DNA, a complicated in-house protocol for preparation of the capture library, a requirement for special equipment, or a restricted number of CpG sites captured by primers or probes in a particular region of interest. We developed a novel protocol combining DNA bisulfite treatment with the standard in-solution hybrid capture procedure provided by the Agilent SureSelect Target Enrichment System. Using a custom SureSelect library that was modified to capture bisulfite-converted DNA, we were able to enrich bisulfite-converted DNA samples for 3.9 Mb of non-contiguous target genomic intervals. Further sequencing of these target-enriched NGS libraries on Illumina's HiSeq2000 allowed the quantification of the methylation states of >40 000 targeted CpG sites at a median depth ranging from 37× to 61× in the human gDNA samples assessed. Herein, we describe this protocol in detail and present the results of a pilot study involving the capture of specific genomic regions that encode enzymes involved in drug metabolism and excretion in four adult hepatic gDNA samples. Moreover, these pilot results provide insight into novel aspects of the gene regulation of drug metabolism and transport enzymes that may potentially explain interindividual differences in drug response.

MATERIALS AND METHODS

The design of a bisulfite-specific Agilent SureSelect library

A total of 174 genes encoding enzymes for absorption, distribution, metabolism and excretion of drugs (ADME genes) were selected as the genes of interest. Among these genes, 32 encode the core ADME enzymes and 116 encode enzymes in the extended ADME list as determined by www.pharmaadme.org. In addition, 26 genes encoding transcription factors known to regulate the expression of the aforementioned enzymes were included (see Supplementary File 3 for the complete list of genes of interest). The genomic coordinates for each gene were obtained from the UCSC Genome Browser (genome.ucsc.edu), where the genomic region of interest included the gene plus 20 000 bp of both the 5′ and 3′ flanking sequences (Supplementary File 3). In total, our region of interest covered 16.26 Mb of genomic sequence and contained 191 534 CpG sites. These genomic coordinates were uploaded to the Agilent eArray web server (earray.chem.agilent.com/earray), and the SureSelect Target Enrichment library was generated according to the manufacturer's instructions with the following settings: Design Strategy = Centred, Bait Length = 120, Bait Tiling Frequency = 1×, Genome Build = Hg19, Avoid Standard Repeat Masked Regions (RepeatMasker) = ON. As the standard repeat-masked regions were avoided during the library generation procedure, the resulting design of the SureSelect library reduced the genomic sequence length of interest to 6.38 Mb, containing 82 184 CpG sites and yielding 53 152 RNA baits (120 nt each). The next stage involved accommodating the generated custom SureSelect library to capture bisulfite-converted gDNA fragments. To this end, we developed a Python3 script that converts the generated SureSelect bait library to capture bisulfite-converted DNA, whereby the threshold number of C-T mismatches was set to 8 (Supplementary File 2); the sketch below illustrates the conversion logic.
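The following is a minimal sketch of that conversion logic, not the actual script from Supplementary File 2. It implements the rules laid out in the Results (the Figure 1 workflow): CpG-free baits are dropped, CpG-poor baits yield one fully converted variant, and CpG-rich baits additionally yield a variant with the CpG cytosines protected.

```python
import re

CPG_THRESHOLD = 8  # tolerated C-T mismatches for the 120-nt baits

def convert_bait(seq):
    """In silico bisulfite conversion of one bait (top strand, 5'->3').

    Returns no bait for CpG-free sequences, one fully unmethylated variant
    for CpG-poor baits, and an additional fully CpG-methylated variant
    (CpG cytosines protected from conversion) for CpG-rich baits.
    """
    seq = seq.upper()
    n_cpg = seq.count("CG")
    if n_cpg == 0:
        return []                          # cannot capture CpG-containing fragments
    unmethylated = seq.replace("C", "T")   # every C reads as T after conversion
    if n_cpg < CPG_THRESHOLD:
        return [unmethylated]
    methylated = re.sub(r"C(?!G)", "T", seq)  # only non-CpG Cs are converted
    return [unmethylated, methylated]

print(convert_bait("ACGTCCGTACCGTTACG"))   # 4 CpGs -> one converted bait
```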
The resulting output file (containing the sequences of the bisulfite-converted baits) was uploaded back to the eArray web server as a custom-designed SureSelect library, and the manufactured bisulfite-specific SureSelect library was used in the protocol for the preparation of target-enriched Illumina NGS libraries from bisulfite-converted human gDNA.

Preparation of target-enriched NGS libraries

gDNA from anonymous human liver tissue was isolated using the QIAgen DNA Mini kit according to the manufacturer's protocol (Qiagen Cat. #51306). gDNA concentrations were measured with the Quant-iT PicoGreen dsDNA assay kit (Invitrogen Cat. #P7589) using a SpectraMax Gemini XPS/EM microplate reader (MolecularDevices), and gDNA purity was assessed using a Nanodrop 1000 (ThermoScientific). Three micrograms of high-quality gDNA (A260/280 = 1.8-2.0) were diluted with 120 µl of TE buffer, transferred to Covaris microTUBEs and subjected to shearing on the Covaris S2 sonicator (Covaris Inc.). Sheared gDNA was then purified with Agencourt AMPure XP beads (BeckmanCoulter Genomics Cat. #A63881) according to the manufacturer's instructions. The DNA was eluted from the beads with nuclease-free water, and 1 µl from each sample was assessed on the Agilent 2100 Bioanalyzer (DNA 1000 assay). Following successful shearing, the gDNA was subjected to end blunting, dA-tailing and ligation with methylated adapters using the TruSeq DNA Sample Prep kit v2 (Illumina Cat. #FC-121-2001). The four gDNA samples were ligated to TruSeq adapters containing different index sequences. Adapter-ligated DNA was purified with Agencourt AMPure XP beads and then bisulfite converted using the EZ DNA Methylation kit (ZymoResearch, Cat. #D5001) before pre-capture polymerase chain reaction (PCR). Details including components and conditions for the pre-capture PCR, as well as additional information on the comparison of four commercially available bisulfite conversion kits, can be found in Supplementary File 1. Amplified and purified samples were assessed for quality and quantity on the Agilent 2100 Bioanalyzer (DNA 1000 assay). The PCR-amplified DNA was concentrated to ∼147 ng/µl using a vacuum concentrator and used for hybridization with the custom SureSelect Target Enrichment library, strictly following the original Agilent instruction manual ['SureSelect Target Enrichment System for Illumina Paired-End Sequencing Library' (G3360-90020), pages 35-47]. Captured DNA fractions were cleaned up and used for the post-capture PCR (for post-capture PCR details, see Supplementary File 1). Purified post-capture PCR products, which successfully passed the quality check on the Agilent 2100 Bioanalyzer High Sensitivity DNA assay, were precisely quantified with the Agilent QPCR NGS Library Quantification Kit for Illumina Genome Analyzer (Agilent Technologies Cat. #G4880A). The four NGS libraries were then pooled together and sequenced on a single lane of an Illumina HiSeq2000 v3 flowcell, using paired-end sequencing of 100 bp, with 0.5% PhiX spiked into the reaction. For more experimental details, see Supplementary File 1, 'The complete protocol for library preparation'.

Infinium HumanMethylation450 BeadChip assay

From each sample, 500 ng of gDNA was bisulfite modified using the EZ DNA Methylation kit (Zymo Research, Cat. No. D5004) according to the manufacturer's recommendations for the Illumina Infinium assay. The conversion reaction was incubated for 16 cycles of 95 °C for 30 s and 50 °C for 60 min, followed by a final holding step at 4 °C.
After purification, 4 µl of bisulfite-converted DNA from each sample was used for hybridization on Infinium HumanMethylation450 (450K) BeadChips, according to the Illumina Infinium HD Methylation protocol. The signal intensities were extracted using the GenomeStudio software. The methylation level of each CpG site was calculated as a beta value according to the fluorescence intensity ratio of the two alleles. The free software R and the Bioconductor package 'minfi' were used to pre-process the data and for quality control. The original IDAT files from the HiScanSQ scanner were used as input to the minfi package. 'Raw' pre-processing was used to convert the intensities from the red and the green channels into methylated and unmethylated signals. Beta values were computed using Illumina's formula [beta = M/(M+U+100)]. To combine the data from the Infinium type I and type II probes, peak-based correction was implemented (14). The beta values of all CpG sites with detection P-values (calculated by the GenomeStudio software) >0.01 were discarded.

Pyrosequencing

Specific genomic regions (with read depth ≥100×) were randomly selected for validation using pyrosequencing of bisulfite-treated DNA. Primer sets (forward, reverse and sequencing primers) for 3 amplicons were designed using PyroMark Assay Design 2.0.1.15 software (Qiagen). The CpG regions selected for validation were amplified using 20 ng of bisulfite-converted genomic DNA from each of the four samples investigated and 0.2 µM of forward and reverse primers, one of which was biotinylated. PCR reactions were performed using the PyroMark PCR Kit (Qiagen) optimized for bisulfite-treated DNA. Reaction conditions and PCR cycling were conducted as recommended by the kit instructions, adjusting only for optimized primer annealing temperatures, which were between 53 and 56 °C. A total of 10 µl of PCR product and 0.3 µM of the respective sequencing primer were used for analysis. Quantitative DNA methylation analysis was carried out on the PyroMark Q24 instrument using the recommended PyroMark equipment and solutions (Q24 vacuum workstation, Q24 plates, binding buffer, denaturing solution, wash and annealing buffer) (Qiagen) and streptavidin sepharose high performance beads (34 µm, GE Healthcare). Results were analysed using the PyroMark Q24 Software in the CpG analysis mode, and only methylation values with a high quality assessment were considered.

Bioinformatics

The 3′ ends of NGS reads tend to have poor quality and may thus lead to mis-mapping and incorrect methylation calls. Moreover, contamination of reads with adapter sequences also complicates mapping and methylation calling. To avoid these complications, we performed thorough quality control and trimming of the sequence reads using the Trim Galore! wrapper script (version 0.1.4, www.bioinformatics.babraham.ac.uk/projects/trim_galore/) with the following settings: --quality 20 --phred64 --fastqc --adapter AGATCGGAAGAGC --stringency 1 --length 0. Finally, sequence pairs were discarded if they were not longer than 40 bp after trimming. The quality of the paired-end sequences was controlled before and after the trimming process using FastQC (version 0.10.1, www.bioinformatics.babraham.ac.uk/projects/fastqc/). Bisulfite-treated reads were aligned to the reference human genome (June 2010, GRCh37/hg19) using Bismark (version 0.7.3, www.bioinformatics.babraham.ac.uk/projects/bismark/) with the following settings: --fastq --phred64-quals --non_directional (15).
Bismark served as a wrapper script for the short-read aligner Bowtie 1 (16). To exclude duplicate reads generated during the PCR amplification, alignments that mapped to the same position in the genome were removed using the deduplicate_bismark_alignment_output.pl script, which is included in the Bismark distribution. DNA methylation calls were then extracted from the deduplicated Bismark output SAM files using the methylation_extractor script (also included in the Bismark distribution). As our capture library only targeted the top strand of the bisulfite-converted genome, only reads that aligned to the original top strand were considered for calling cytosine methylation. All subsequent steps of the NGS data analysis were done using custom Python3 scripts, which are available on request. First, CpG sites with read depth <10× were discarded. Among the remaining CpG sites, we selected those CpGs that were analysed in all four gDNA samples simultaneously. These common CpG sites were further divided into 'on-target' and 'out-of-target' CpGs, where 'on-target' CpG sites were defined as those overlapping with the coordinates of baits in our custom Agilent SureSelect Target Enrichment library. DNA methylation values, with their 95% confidence intervals, were calculated for each CpG site from the experimental binomial data according to the Wilson method (17). CpG sites manifesting variable methylation among the four samples were found using pairwise Fisher's exact tests (α = 0.01). Visualization of the DNA methylation data corresponding to the genes of interest was done using the Matplotlib library (matplotlib.sourceforge.net). Correlations between the NGS data and the 450K data were assessed using GraphPad Prism v5.01 (www.graphpad.com). Coordinates of known SNPs and CpG islands (CGIs) were downloaded from the UCSC Table Browser (snp135Common and cpgIslandsExt primary tables, respectively). CGI shores were defined as regions within 2 Kb of, but not inside, CGIs. The CpG density of the sequence surrounding a certain genomic position was defined as the number of CpG sites within a 200-bp window centred on the given position. The nucleotide coverage of a genomic interval was calculated as the sum of all nucleotides mapped inside the given interval in all four samples.

RESULTS

The development of an algorithm to design a bisulfite-specific Agilent SureSelect library

The typical way to accommodate a custom Agilent SureSelect library for the hybrid capture of bisulfite-converted gDNA would be to simply subject the sequence of each bait to in silico bisulfite conversion. However, the methylation state of each particular cytosine residue in a given DNA sample can differ from the state assumed during in silico bisulfite conversion of the baits, leading to potential mismatches between a bait and the corresponding DNA fragment. A high number of mismatches between a bisulfite-converted bait and the corresponding bisulfite-converted gDNA fragment will impair the efficiency of hybrid capture. To avoid this inconsistency, we developed an algorithm converting SureSelect baits into their bisulfite-specific counterparts by taking into account the number of possible mismatches for each bait. Previous studies have determined that, at least for 60 nt baits, as many as six mismatches do not significantly impair the efficiency of hybrid capture (18). Based on this observation, we selected 8 as the threshold of tolerated mismatches between our 120 nt baits and the corresponding DNA fragments.
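The arithmetic behind this mismatch budget can be summarized in a small sketch; the function and the toy bait below are illustrative only, not part of the published script.

```python
def worst_case_mismatches(n_cpg, two_variants=False):
    """Upper bound on C-T mismatches between a converted bait and any fragment.

    A single fully-unmethylated bait can mismatch at every CpG of a fully
    methylated fragment (bound: n_cpg). With both an all-unmethylated and an
    all-CpG-methylated variant, the closer variant mismatches at most
    min(k, n_cpg - k) <= n_cpg // 2 positions, whatever k CpGs are methylated.
    """
    return n_cpg // 2 if two_variants else n_cpg

# A 120-nt bait covering 10 CpGs:
print(worst_case_mismatches(10))         # up to 10 mismatches with a single variant
print(worst_case_mismatches(10, True))   # at most 5 with the two-variant design
```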
Under this threshold, baits in the original SureSelect library that cover fewer than eight CpGs (CpG-poor baits) are expected to have fewer than eight mismatches with the corresponding bisulfite-converted gDNA fragments under any possible pattern of their methylation. These CpG-poor baits resulted in only one bisulfite-converted bait (assuming that all cytosine residues are unmethylated, and thus all Cs are converted to Ts) (Figure 1). In contrast, those baits in the original library that cover eight or more CpGs yielded two bisulfite-converted baits: one converted from the original bait assuming that all cytosines are unmethylated, and another obtained assuming that all cytosines in the CpG context are methylated and thereby protected from conversion (see Figure 1). Thus, under any possible pattern of CpG methylation in the gDNA, no more than half of the CpG sites within a given bait would contribute to mismatches with the bisulfite-converted gDNA. At the same time, the original SureSelect baits that did not cover any CpGs were not expected to capture CpG-containing gDNA fragments; hence, these baits were excluded from the final bisulfite-converted library (see Figure 1). The workflow depicted in Figure 1 was implemented in a Python3 script allowing rapid and easy conversion of the original input Agilent SureSelect library (generated by the Agilent eArray software) to the corresponding bisulfite-specific SureSelect library. This script can be found in Supplementary File 2. Using the approach explained in Figure 1, we generated our bisulfite-specific SureSelect library, which covered 3.9 Mb of target genomic sequence in 174 ADME genes (containing 82 184 CpG sites). Among these CpG sites, 15 432 were located in 262 CGIs and 10 126 in CGI shores (i.e. regions within 2 Kb of, but not inside, CGIs, manifesting intermediate CpG density).

Efficiency of target enrichment of bisulfite-converted DNA

The main quality metrics characterizing the efficiency of the target enrichment and the performance of the NGS of the four DNA samples are shown in Supplementary Table S1 (see Supplementary File 1). The observed number of NGS reads mapped with read depth ≥10× allowed us to reveal the methylation states of >500 000 CpGs for each of the four gDNA samples analysed. Among them, 303 404 CpGs were detected in all four samples, suggesting that the gDNA fragments containing these CpGs are reproducibly captured by our bisulfite-specific SureSelect library. Owing to the inherent effect of bisulfite treatment, we experienced decreased specificity of target enrichment, with 41 922 of the reproducibly captured CpGs found in the target 3.9 Mb region. Thus, we were able to analyse 51.1% of the 82 184 CpG sites located in the target region at sufficient depth. The distribution of the read depth for all CpGs in the target region that were analysed in the four samples is shown in Supplementary Figure S2 (see Supplementary File 1). In agreement with these data, the median read depth for CpGs in the target region ranges between 36× and 77× across the four samples (see Supplementary Table S1). Both in vitro bisulfite conversion of gDNA and in silico C-to-T conversion of SureSelect baits lead to a strong decrease in GC content (e.g. the median GC content of our SureSelect baits decreases from 49 to 23% on in silico bisulfite conversion).
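The GC shift just quoted is straightforward to compute from the bait sequences; a small sketch with an illustrative sequence:

```python
def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def converted_gc(seq):
    """GC content after full in silico bisulfite conversion (all Cs read as Ts)."""
    return gc_content(seq.upper().replace("C", "T"))

bait = "ACGTCCGGTACCGGTTACGGATCCGGATTACG"  # an illustrative 32-nt fragment
print(f"native GC {gc_content(bait):.0%} -> converted GC {converted_gc(bait):.0%}")
# Baits whose converted GC content drops to <=20% proved almost non-functional
# and could be pruned from the capture design with little loss (see below).
```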
We found that the GC content of the bisulfite-specific baits can serve as a good predictor of both the nucleotide coverage (Supplementary Figure S3A) and the percentage of CpG sites analysed with sufficient read depth (Supplementary Figure S3B) in the corresponding genomic intervals. In particular, extremely AT-rich baits (with GC content ≤20%) are almost non-functional and can be removed from the layout of the capture library without any significant loss of performance. For example, 26.9% of our bisulfite-specific SureSelect library was composed of such AT-rich baits (covering 20.8% of the targeted CpGs), but in total they cover as little as 1.4% of all analysed CpG sites. Moreover, the GC content of the baits (both before and after in silico bisulfite conversion) correlates with the number of CpG sites covered by the given bait, i.e. with the CpG density. Accordingly, for CpG-rich baits, a higher percentage of CpG sites could be analysed with sufficient read depth compared with CpG-poor baits (Supplementary Figure S3C).

The variability of DNA methylation

Methylation levels (as well as their 95% confidence intervals) were calculated for the CpGs in the target region that manifested a read depth ≥10× (see Materials and Methods section). Owing to the relatively high read depth observed in the target region, for 90% of the CpG sites the true methylation levels are expected to differ by no more than 15% from the measured methylation levels. All CpG sites in the target region were checked for possible variability in methylation levels between the four gDNA samples, and 1787 CpGs (4.3% of the 41 922 CpGs analysed on target) were found to be differentially methylated. Theoretically, SNPs overlapping with cytosines in a CpG context can influence methylation calling, thus providing a basis for false-positive methylation variability. However, we found that only 85 of the 1787 CpGs overlap with known common SNPs. Hence, the remaining 1702 CpGs were judged to be differentially methylated among the four gDNA samples analysed. The distribution of these differentially methylated CpGs among the ADME genes of interest is shown in Supplementary File 4 (the corresponding legend can be found in Supplementary File 1). A few examples of the distribution of DNA methylation values along target genomic intervals are shown in Figure 2. The percentage of CpG sites with variable methylation correlates with the CpG density of the surrounding DNA sequence: the highest percentage of variably methylated CpGs is found in genomic regions with intermediate CpG density (Supplementary Figure S4A). Consistent with this, the percentage of variably methylated CpGs is highest in CGI shores and lowest in CGIs (Supplementary Figure S4B). When comparing median methylation levels, CGIs were generally hypomethylated, CGI shores were highly variable in methylation level, and genomic regions outside of both CGIs and CGI shores were generally hypermethylated (Supplementary Figure S5), consistent with the current knowledge on CpG density and related methylation states.
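The two per-CpG statistics used in this section, Wilson score intervals on binomial methylation counts and pairwise Fisher's exact tests, can be sketched as follows. This is an illustration assuming SciPy, not the authors' actual scripts (which are stated to be available on request).

```python
from math import sqrt
from scipy.stats import fisher_exact, norm

def wilson_interval(methylated_reads, depth, alpha=0.05):
    """Wilson score interval for a binomial methylation level at one CpG."""
    z = norm.ppf(1 - alpha / 2)
    p = methylated_reads / depth
    denom = 1 + z**2 / depth
    centre = (p + z**2 / (2 * depth)) / denom
    half = z * sqrt(p * (1 - p) / depth + z**2 / (4 * depth**2)) / denom
    return centre - half, centre + half

def differentially_methylated(meth_a, unmeth_a, meth_b, unmeth_b, alpha=0.01):
    """Pairwise Fisher's exact test on methylated/unmethylated read counts."""
    _, p_value = fisher_exact([[meth_a, unmeth_a], [meth_b, unmeth_b]])
    return p_value < alpha

print(wilson_interval(30, 40))                   # ~75% methylated at 40x depth
print(differentially_methylated(30, 10, 5, 35))  # True: clearly different samples
```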
Pyrosequencing, which is generally considered to be a very precise method for the quantification of DNA methylation, was used to validate the results retrieved from three genomic regions, randomly selected among those analysed with read depth ≥100×. In total, the methylation levels of 12 CpG sites in four samples were compared between the NGS study and the pyrosequencing experiments. The correlation between the methylation levels obtained by the two different methods (Spearman r = 0.88) is plotted in Figure 3A. A visualization of these methylation levels plotted against the genomic positions of the 12 validated CpG sites is shown in Supplementary Figure S6. The NGS data were also compared with DNA methylation values obtained using the Illumina 450K BeadChip assay for three of the DNA samples analysed (namely, samples 2, 3 and 4). This assay is expected to interrogate the methylation levels of 486 429 CpG sites throughout the whole genome; however, its design is not biased towards ADME genes. This is why only 4933 CpG sites in our 16-Mb region of interest (and, among them, 3650 CpGs in the 3.9-Mb target region) are covered by the design of the BeadChip assay. Among the 348 688 CpG sites detected by the 450K assay with P-values <0.01 in all three samples compared, 1880 CpGs overlapped with those analysed in the target region of our NGS experiment. A plot showing the correlation (Spearman r = 0.93) between the methylation values obtained from the NGS study and the 450K BeadChips is shown in Figure 3B.

DISCUSSION

The aim of this study was to develop a method for the analysis of DNA methylation patterns in 174 ADME genes (including 20 Kb of their 5′- and 3′-flanking sequences) using bisulfite NGS on the Illumina HiSeq2000 platform. As we did not find existing methods for bisulfite target enrichment to be fully suitable for this purpose, we developed a novel protocol for bisulfite target enrichment, which relies on the hybrid capture of bisulfite-converted gDNA fragments by 120-nt RNA baits included in a custom Agilent SureSelect library. A brief comparison of published protocols for targeted bisulfite NGS (BS-Seq) is presented in Supplementary File 5. Essentially, there are two alternative strategies for the integration of a bisulfite treatment step into a hybridization-based target enrichment protocol. The first is target enrichment of native gDNA followed by bisulfite conversion, and the second is to perform the target enrichment on bisulfite-converted gDNA. The advantage of the first strategy is that the specificity of target enrichment remains the same as in the original target enrichment protocol. However, to maintain the DNA methylation state in this scenario, all the required PCR amplification steps have to be omitted, thereby limiting the amount of intact DNA available post capture. The method where bisulfite treatment is used after hybridization-based DNA capture is best illustrated by the study of Lee et al. (11), reporting the successful enrichment of an 8-Mb target region using a custom oligonucleotide library. The authors managed to increase the number of intact DNA molecules post capture and bisulfite treatment by using high amounts of starting gDNA (as much as 20-30 μg) and up to six hybrid capture reactions in parallel for each gDNA sample. Using the Agilent SureSelect Human All Exon Kit, where native DNA is also captured and then bisulfite treated, Wang et al.
(13) demonstrated that, with optimization of the experimental conditions, 2 μg of input gDNA can be successfully used to enrich 38 Mb of genomic sequence. Moreover, Agilent recently announced their new SureSelect Human Methyl-Seq system, claiming to enrich 84 Mb of genomic sequence from 3 μg of DNA with the use of a predesigned SureSelectXT library. A variation on these hybridization-based target enrichment approaches is provided by methods that use target amplification by capture and ligation. Recently, two independent studies using ligation-based approaches, also performing enrichment of native gDNA followed by bisulfite treatment, showed successful DNA capture with low input gDNA requirements (200-250 ng) (9,10). Therefore, ligation-based protocols can be considered as another alternative to the hybridization-based methods, especially if the amount of starting material is limited. As previously mentioned, the second possible strategy for coupling target enrichment with bisulfite conversion involves bisulfite treatment of DNA before the hybrid capture. As this strategy uses bisulfite-treated DNA and hence does not require omitting PCR amplification steps before capture, the problem of limited intact DNA post capture and bisulfite treatment can potentially be avoided. However, the specificity of the hybrid capture itself is expected to be impaired owing to the decreased complexity of bisulfite-converted DNA sequences, which can result in a high percentage of NGS reads outside of the target region. Moreover, the sequence of bisulfite-converted DNA can be only partially predicted from the sequence of the corresponding native DNA. This complicates the library design for DNA capture, as cytosines in the CpG context may be either cytosines or thymines after amplification, depending on the methylation state. Despite these complications, the validity of target enrichment on bisulfite-converted DNA was first demonstrated in two independent studies using molecular inversion probes or padlock probes (7,8). Later, the commercial microdroplet PCR method was successfully applied to bisulfite-converted DNA, yielding the methylation states of >77 000 CpG sites localized in the promoters of 2100 genes (12). Additionally, it was demonstrated that 60-nt probes can also be successfully used for array-based hybrid capture of 258 Kb of bisulfite-converted DNA (6). In agreement with the aforementioned general considerations, the specificity of the hybrid capture was shown to be impaired, with not more than 12% of mapped bisulfite reads being in the target genomic intervals (6). Nevertheless, the validation of the NGS data with traditional Sanger bisulfite sequencing allowed the authors to conclude that the capture of bisulfite-converted DNA was not biased towards particular methylation states of the original gDNA fragments (6). Thus, hybrid capture of bisulfite-converted DNA can be used for target enrichment; however, the existing protocols offer only low genomic coverage of the target-enrichment libraries. To this end, we developed a protocol for the Agilent SureSelect Target Enrichment System involving the bisulfite treatment step before the hybrid capture (see Materials and Methods). We used this modified SureSelect protocol to examine four different gDNA samples, from which four barcoded Illumina libraries for paired-end sequencing were prepared. These libraries were pooled together and sequenced on a single lane of a v3 flowcell on the Illumina HiSeq2000 platform.
As expected, the percentage of reads mapped on target (4.0-7.2%) is significantly lower than is usually observed in non-bisulfite target-enrichment experiments (i.e. 70-80%) (see Supplementary Table S1). The comparison of our protocol with the similar protocol developed by Hodges et al. (6) reveals some important improvements, including the enrichment of up to 6 Mb of genomic sequences of interest (versus 258 Kb), a significantly lower required DNA input, and the use of in-solution hybrid capture, which does not require any special equipment, as opposed to solid-phase oligonucleotide arrays. The number of CpG sites analysed in the target region (n = 41 922) constitutes 51.1% of the total number of CpG sites (n = 82 184) that are located in non-repetitive sequences in our 16.26-Mb genomic region of interest and are covered by the designed SureSelect baits. This means that certain bisulfite-specific SureSelect baits work less efficiently than others. We found that those baits that became extremely AT-rich (GC content ≤20%) upon in silico bisulfite conversion were unlikely to ensure sufficient read depth at the corresponding CpG sites. Moreover, the GC content of the baits (and hence the percentage of CpGs analysed with sufficient read depth) positively correlates with the CpG density of the targeted genomic regions. Our custom SureSelect library contains a substantial proportion of AT-rich baits, as the selection of genomic regions of interest was based solely on the coordinates of ADME genes and was not skewed towards a specific CpG density. If only genomic intervals with high and/or intermediate CpG density were used as templates for the design of the bisulfite-specific SureSelect baits, somewhat better quality metrics of target enrichment could be expected. Despite these complications, we were able to assess the methylation levels of 41 922 CpG sites in target regions with sufficient fidelity. The validation of the DNA methylation data obtained from the NGS study with both pyrosequencing and the Illumina 450K BeadChip assay shows strong correlations (see Figure 3). Moreover, NGS-derived DNA methylation values do not seem to manifest a systematic shift towards either hyper- or hypomethylated states of the analysed DNA fragments, suggesting that hybrid capture of bisulfite-converted DNA is not biased towards specific methylation patterns at the targeted CpG sites. In addition, among the targeted CpG sites, 1702 were shown to be differentially methylated among the four human liver samples. The percentage of variably methylated CpG sites (out of the number of CpG sites analysed in this study) was shown to be higher in regions with intermediate CpG density, namely, in CGI shores, which is in line with previous observations (19). Hence, CGI shores deserve increased attention when studying interindividual differences in DNA methylation in human livers. Interestingly, the percentage of variably methylated CpG sites also varies significantly among ADME genes (see Supplementary File 4). Some ADME genes are characterized by a relatively high percentage of variably methylated CpGs (e.g. CYP2E1, GSTP1, SLC7A5) compared with others. One can suggest that such genes are more likely to be regulated by DNA methylation than those showing a low percentage of variably methylated CpGs. These considerations should, however, be regarded as preliminary because of the limited number of liver gDNA samples analysed in this study.
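The cross-platform validation summarized above amounts to a rank correlation between paired methylation calls. The snippet below is a minimal sketch of such a comparison; the paired values are hypothetical placeholders (the real comparisons used 12 CpGs across four samples for pyrosequencing and 1880 CpGs for the 450K BeadChip), and only scipy.stats.spearmanr is assumed.

from scipy.stats import spearmanr

# Hypothetical methylation levels (%) of the same CpG sites as measured
# by NGS and by a second platform (pyrosequencing or 450K BeadChip)
ngs_meth = [5, 12, 33, 47, 60, 72, 85, 91]
alt_meth = [7, 10, 30, 50, 55, 75, 88, 90]

rho, pval = spearmanr(ngs_meth, alt_meth)
print(f"Spearman r = {rho:.2f} (P = {pval:.3g})")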
Despite the recent progress in the development of novel methods for targeted bisulfite sequencing, protocols with higher efficiency are needed; these will widen the opportunities to analyse DNA methylation patterns in any genomic region of choice and thus contribute to further discoveries in the field of epigenomics.
2017-04-14T04:12:28.771Z
2013-01-15T00:00:00.000
{ "year": 2013, "sha1": "ec802dbea126e52e430db4b5abdd1f342dd96893", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/41/6/e72/25339790/gks1467.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ec802dbea126e52e430db4b5abdd1f342dd96893", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
237379085
pes2o/s2orc
v3-fos-license
A special oropharyngeal oxygenation device to facilitate apneic oxygenation in comparison to high flow oxygenation devices

Oxygen application and apneic oxygenation may reduce the risk of hypoxemia due to apnea during awake fiberoptic intubation or failed endotracheal intubation. High flow devices are recommended, but their effect compared to moderately deep oropharyngeal oxygen application is unknown. In this experimental manikin trial, we compared oxygen application via nasal prongs at 10 L/min (control group), oxygen application via an oropharyngeal oxygenation device (at 10 L/min), oxygen application via high flow nasal oxygen at 20 L/min and 90% oxygen (20 L/90% group), oxygen application via high flow nasal oxygen at 60 L/min and 45% oxygen (60 L/45% group), and oxygen application via a sealed face mask with a special adapter allowing fiberoptic entry into the airway. We preoxygenated the lung of a manikin and measured the decrease in oxygen level during the following 20 minutes for each mode of oxygen application. Oxygen levels fell from 97 ± 1% at baseline to 75 ± 1% in the control group and to 86 ± 1% in the oropharyngeal oxygenation device group. In the high flow nasal oxygen groups, the oxygen level dropped to 72 ± 1% in the 20 L/90% group and to 44 ± 1% in the 60 L/45% group. The oxygen level remained at 98 ± 0% in the face mask group. In conclusion, in this manikin simulation study of apneic oxygenation, oxygen insufflation using a sealed face mask kept oxygen levels in the test lung at 98% over 20 minutes, the oropharyngeal oxygenation device led to oxygen levels of 86%, whereas all other methods resulted in a decrease of oxygen levels to below 75%.

INTRODUCTION

Awake fiberoptic intubation is the so-called "gold standard" for airway management in case of an expected difficult airway. 1,2 The awake procedure, however, is perceived as highly uncomfortable by patients. 3,4 As a consequence, the technique is usually performed under conscious sedation. 5 However, sedative medication dose-dependently decreases ventilation efforts and can cause severe hypoxemia and other severe side effects. [6][7][8] Thus, preoxygenation and oxygen application are prerequisites to maintain oxygenation at high and safe levels throughout the procedure. Failure to provide sufficient oxygenation may have deleterious consequences for the patient, as standard airway management has a high probability of failing due to the (known) difficult airway. Several strategies to maintain sufficient oxygenation during awake fiberoptic intubation have been reported. In a publication by Mir et al., 9 the authors described "transnasal humidified rapid-insufflation ventilatory exchange", the application of high-flow oxygen to maintain oxygenation. Further, current anaesthesia guidelines (e.g., for difficult airway management in obstetric anaesthesia) postulate applying high flows of oxygen via a nasal cannula. 10 We developed a special oropharyngeal oxygenation device, which allows deep laryngeal insufflation of oxygen, thus facilitating apneic oxygenation in difficult airway scenarios, while still offering the option of fiberoptically intubating the patient. 11 We connected a special commercial oropharyngeal tube, split at the front to form a path for the bronchoscope, with an oxygen line ending at the top of the oropharyngeal tube (Figure 1).
In this experimental manikin study, using a test lung and full preoxygenation, we evaluated the oxygen decrease in this test lung while oxygenating either via the oropharyngeal oxygenation device or via several other standard or high-flow oxygenation devices. We hypothesized that we would find no difference in oxygen level decline between the groups.

Experimental setup

We attached a male intubation manikin (Laerdal, Stavanger, Norway) to a rigid test lung with a capacity of 2.5 L, equalling the functional residual capacity (FRC) of a typical male adult. 12,13 A side-stream port connected to the bottom of the test lung continuously drew a gas sample at 200 mL/min. This sample was analysed with the paramagnetic oximeter of a Primus anaesthesia machine (Draeger, Luebeck, Germany) with an accuracy of ±(2.5 Vol% + 2.5% relative). The sampled gas volume nearly equals the amount of oxygen consumed during apnoea by an adult with the corresponding functional residual capacity. 14 For the oropharyngeal oxygenation device (OOD) group, a special oropharyngeal oxygenation device (with insufflation of oxygen into the deep laryngeal space) was used. This device additionally provides the option of fiberoptically intubating the patient. It was self-created using a special commercial split oropharyngeal tube together with an oxygen line ending at the top of the oropharyngeal tube (Figure 1A and B); the device has been described in detail. 11 For high-flow nasal oxygen (HFNO), Airvo2 HFNO machines (Fisher & Paykel Healthcare Ltd., Auckland, New Zealand) with OptiFlow cannulas (Fisher & Paykel Healthcare Ltd.) were used. For continuous positive airway pressure (CPAP), a Dräger Primus (Primus, Dräger, Lübeck, Germany) was used. It was set to manual mode with 100% oxygen at a flow of 10 L/min, with the adjustable pressure limiting valve set to 5 cmH2O, thus generating a corresponding positive end-expiratory pressure. The face mask was fitted tightly to the manikin's face using rubber straps, and a special adapter (Mainzer Adapter, Karl Storz, Tuttlingen, Germany) was used to allow entry into the airway with a bronchoscope.

Experimental procedures

In this setting, we always made five consecutive experiments with each device, in randomized order. After preoxygenating the test lung to an oxygen level of 97 ± 1%, the experiments were performed for each of the five groups, measuring the decrease in oxygen levels at the bottom of the test lung over a period of 20 minutes; values at each full minute were recorded. Typical apneic oxygenation strategies were applied as follows: application of oxygen at a flow of 10 L/min via a standard nasal cannula (control group); application of oxygen at a flow of 10 L/min via the OOD (OOD group); application of HFNO via a special nasal cannula at a flow of 20 L/min and 90% oxygen (HFNO 20 L/90% group); application of HFNO via a special nasal cannula at a flow of 60 L/min and 45% oxygen (HFNO 60 L/45% group); and application of oxygen using CPAP with a positive end-expiratory pressure of 5 cmH2O and 100% oxygen via a sealed face mask (CPAP group).
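As a worked example of this recording scheme, the sketch below organizes per-minute oxygen readings from five repeated runs of a single device and reduces them to the mean ± SD values reported in the study; the decay slope and noise level are invented placeholders, not measured data.

import numpy as np

# Five repeated runs x 21 time points (minutes 0-20) for one device;
# a linear decline with small noise stands in for real measurements
rng = np.random.default_rng(0)
runs = 97 - 1.1 * np.arange(21) + rng.normal(0, 1, size=(5, 21))

mean_o2 = runs.mean(axis=0)       # mean oxygen level at each minute
sd_o2 = runs.std(axis=0, ddof=1)  # sample SD across the five runs
print(f"minute 20: {mean_o2[-1]:.1f} +/- {sd_o2[-1]:.1f} %")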
Statistical analysis

Data are reported as mean ± standard deviation (SD). After testing for normal distribution (Shapiro-Wilk test) and equal variances (Brown-Forsythe test), a one-way analysis of variance for repeated measurements was performed to evaluate the differences between the groups. This was followed by Tukey's post hoc test for pairwise multiple comparisons. Analyses were done using Sigmaplot software (V 14.0; Systat, San Jose, CA, USA). P-values < 0.05 were considered significant.

RESULTS

One-way analysis of variance found a significant difference between all five groups (P < 0.001). After 20 minutes, oxygen levels fell from 97 ± 1% at the beginning to 75 ± 1% in the control group (P < 0.003 vs. the HFNO 20 L/90% group; P < 0.001 vs. all other groups) and to 86 ± 1% in the OOD group (P < 0.001 vs. all other groups). When using the HFNO device, oxygen levels dropped to 72 ± 1% in the 20 L/90% group (P < 0.001 vs. all other groups) and to 44 ± 1% in the 60 L/45% group (P < 0.001 vs. all other groups) after 20 minutes. The O2 level remained at 98 ± 0% in the CPAP group over the entire 20-minute observation period (P < 0.001 vs. all other groups; Figure 2).

DISCUSSION

In this simulation study, using the OOD for laryngeal oxygen insufflation was more effective for oxygenation than HFNO at any of the settings used, but less effective than using CPAP via a sealed face mask. Since deoxygenation regularly occurs during intubation attempts, [15][16][17] careful preoxygenation has been proven to reduce deoxygenation events during intubation attempts. [18][19][20][21][22][23] Applying oxygen at high concentration may be an additional safety tool to maintain apneic oxygenation if patients' breathing is impaired by sedating agents. Interestingly, in the HFNO 20 L/90% group, oxygen levels decreased faster than in the control group with 10 L/min oxygen application via nasal prongs. We would have expected an oxygen percentage of at least 90% in the test lung after 20 minutes if the high-flow application had prevented air, and thus nitrogen, from entering the airway, which was obviously not the case. This phenomenon has been described before and may be a result of mixing of the insufflated oxygen with nitrogen entering through the manikin's mouth, thereby generating a turbulent flow like a mixing chamber. 24 In contrast, the higher flow of 20 L/min resulted in even more gas mixing than 10 L/min. We speculate this can only be explained by more turbulent gas flows and Bernoulli effects that suctioned ambient air (and thus nitrogen) into the airway, as has been described in detail for emergency ventilation devices. 25 At the very high flow in the HFNO 60 L/45% group, this effect was not visible, since we ended at an oxygen percentage of exactly 44%, which argues against any nitrogen entering the airway beyond the gas applied by the high-flow oxygenation device. Potentially, such a very high gas flow at a high oxygen concentration could have resulted in oxygen concentrations in the test lung after 20 minutes sufficient to maintain apneic oxygenation. Unfortunately, owing to technical limitations of the Airvo2 HFNO system used, a higher flow with a higher fraction of inspiratory oxygen could not be achieved: 45% O2 is the maximum achievable oxygen concentration at 60 L/min, and 90% is the upper limit at 20 L/min. If devices become available that deliver highly concentrated oxygen at very high flows, this should be the subject of further studies.
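The group comparison reported above can be sketched in a few lines of Python. The snippet uses a plain one-way ANOVA on the 20-minute endpoint values followed by Tukey's HSD, which is a simplification of the repeated-measures design actually used; the per-run numbers are fabricated to match the reported group means of roughly 75, 86, 72, 44 and 98%.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {  # fabricated endpoint O2 levels (%), five runs per group
    "control": [75.4, 74.6, 75.1, 74.9, 75.0],
    "OOD":     [86.2, 85.8, 86.1, 85.9, 86.0],
    "HFNO20":  [72.3, 71.7, 72.0, 71.9, 72.1],
    "HFNO60":  [44.1, 43.9, 44.2, 43.8, 44.0],
    "CPAP":    [98.0, 98.0, 98.0, 98.0, 98.0],
}

F, p = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.1f}, P = {p:.2g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 5)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # all pairwise comparisons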
Not surprisingly, the completely closed system obtained by applying a tightly sealed CPAP mask was most effective in preventing nitrogen from entering the airway. However, the special adapter used to place the bronchoscope into the airway would not allow a tracheal tube to pass, owing to its small diameter. A potential solution might be to place a special long wire via the working channel into the trachea. This wire can then be used to insert intubation stylets and, over these, a ventilation tube after removing the face mask. However, this requires some effort, could result in displacement at any time and does not guarantee correct placement of the tube. Last, in times of coronavirus disease 2019 (COVID-19), fiberoptic intubation is a high-risk maneuver with regard to infection due to highly infectious aerosols from the airway. 26 Using HFNO devices, which are known to create large amounts of highly infectious aerosols in the patient's surroundings, may pose an additional risk of infection for the physician who attempts intubation and requires high levels of personal protective equipment. 27 Thus, the availability of a non-commercial OOD that can be constructed simply may be an additional tool to maintain a patient's oxygenation during awake fiberoptic intubation. Owing to significantly lower gas flows, one might postulate that fewer infectious aerosols are produced during the use of such devices. 28,29 The setting of any simulation study is always a limitation; only experiments in humans would have been more realistic with regard to the airway than our manikin model. However, deliberately withholding ventilation in humans to provoke deoxygenation cannot be justified owing to the potential harm to patients. Further, the principle of apneic oxygenation had been demonstrated long before. 30,31 Overall, we assume our model was realistic enough to evaluate our hypothesis. In conclusion, in our simulation study, apneic oxygenation by application of CPAP via a sealed face mask kept oxygen levels at 98% over 20 minutes and application of 10 L/min O2 via the OOD kept them at 86% after 20 minutes, whereas all other methods resulted in a significant decrease of oxygen levels to below 75% over time.
2021-09-02T14:02:49.071Z
2021-08-12T00:00:00.000
{ "year": 2021, "sha1": "d7ba13eaf0ab75e2e10cc6777e12142f4918144d", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc8447949?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "dc94cc4cb8c22efed60eb44602e75341a751c47c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
64289505
pes2o/s2orc
v3-fos-license
Occurrence of female sexual hormones in the Iguazu river basin, Curitiba, Paraná State, Brazil

Female sexual hormones have attracted the attention of the scientific community due to the effects they cause by interfering with the endocrine system. Many contemporary studies have sought to monitor some of the main female sexual hormones in surface waters in Brazil. The current article evaluates the presence of 17β-estradiol, 17α-ethinylestradiol, estrone and progesterone in the surface waters of Curitiba and the surrounding metropolitan area in the state of Paraná, Brazil, by high-performance liquid chromatography (HPLC), performed at 7 different sites. The study revealed concentrations of female sexual hormones ranging between 0.07 and 13.45 μg L-1, higher than the values found in other regions of Brazil and in other countries. The high concentrations are attributed to the region's poor sanitation and the large amounts of sewage discharged. Sewage discharge was also confirmed by the results of the limnological parameters.

Introduction

The growing consumption of contraceptives by the female population worldwide has given rise to a new environmental concern: the contamination of the environment by female sexual hormones (FSHs). Some of these hormones, such as the estrogens oestradiol, oestriol and oestrone, and the progestogen progesterone, are produced naturally by the human and animal organism in small quantities by various endocrine glands under the command of protein hormones released into the bloodstream (CAI, 2011). Others, such as ethinylestradiol, are produced synthetically (SIAH et al., 2003). Synthetically produced hormones make up most oral and injected contraceptives, where they are found in concentrations ranging between 30 and 300 μg per pill (GOLDEFIEN, 1995). Although the main function of contraceptives is to inhibit ovulation, they are also used to combat menopause symptoms and physiological disorders and in prostate and breast cancer treatments (BOSCO et al., 2004). Both natural and synthetic FSHs are rapidly absorbed by the organism and are metabolized in the liver (CAI, 2011). Once metabolized, they are eliminated daily in the urine and, to a lesser extent, in the feces (BELFROID et al., 1999). Different organisms excrete different amounts of sexual hormones, depending on age, state of health, diet or pregnancy. The amount of hormones excreted by a pregnant woman may be up to a thousand times higher than that of a woman who is not pregnant (2 to 20 μg estrone day-1, 3 to 65 μg estriol day-1 and 0.3 to 5 μg estradiol day-1), depending on the stage of pregnancy (ELF et al., 2002). Although their presence in the environment had been suspected for over twenty years, FSHs in surface waters were first reported by Purdom et al. (1994) and Fent and Gies (1996) in England, where fish were being contaminated by estrogens originating from a sewage treatment plant (STP). Even at low concentrations, between μg L-1 and ng L-1, FSHs may interfere with the endocrine glands of animals and human beings, affecting the normal functioning of the endocrine system and influencing development, growth and reproduction (WEN et al., 2006). Several studies have shown the interference of FSHs in fish, fowl, reptiles and mammals (HAHLBECK et al., 2004; PARROT; BLANK, 2005; TRAINOR et al., 2006). Hahlbeck et al.
(2004) reported gonadal sex reversal in the wild fish species Gasterosteus aculeatus following treatment with 17α-ethinylestradiol (an estrogen). Parrot and Blank (2005) reported the feminization of fish of the species Pimephales promelas (minnow) by synthetic estrogens from birth-control pills. Trainor et al. (2006) observed increasing aggressiveness in birds due to an increase in estrogen. According to Takeshi et al. (2003), FSHs cause disorders in the maturation of female sexual gonads in salmon, whereas Fry et al. (2006) observed reduced anxiety in rats after administering specific doses of progesterone. Coupled with other causes (smoking, stress, etc.) in humans, the presence of female hormones in drinking water may be related to male infertility conditions, such as varicocele (enlargement of the veins in the scrotum), cryptorchidism (undescended testicles), hydrocele (accumulation of liquid in the scrotum), other deformities of the penis and testicles, low sperm count and cancer (BECK et al., 2005; PONEZI et al., 2007). Research in the USA, Spain, Germany, Japan, Israel, the Netherlands and other countries has detected considerable concentrations of these compounds in surface waters, especially those close to STPs (BELFROID et al., 1999; KUCH; BALLSCHMITER, 2001; BAREL-COHEN et al., 2006). In addition to the harmful effects of the presence of FSHs in surface waters, the demand for contraceptives continues to grow all over Brazil, as they are one of the most popular alternatives when it comes to family planning (PONEZI et al., 2007). Thus, it is highly important to detect these compounds in water systems and domestic sewage, since their removal or destruction in an STP is inefficient: they are resistant to most sewage treatment processes and are constantly found in surface waters (CARBALLA et al., 2004; YAMAMOTO et al., 2006; WATABE et al., 2004) and even in drinking water (KUCH; BALLSCHMITER, 2001; LOPEZ DE ALDA; BARCELO, 2000). Knowing where FSHs end up in the environment is a fundamental issue when it comes to comprehending the potential for human pollution. Furthermore, these compounds may be related to local contamination levels caused by domestic sewage, be it treated effluent or discharge without any treatment (CARBALLA et al., 2004; WATABE et al., 2004). Therefore, regions with poor sanitation may be considered sources of FSHs and tend to have limnological water quality parameters indicative of sewage discharges containing these compounds. In this context, the current paper determines the concentration levels of the following FSHs in the Iguazu River Basin in Curitiba, south Brazil: estrone (E1), 17β-estradiol (E2), 17α-ethinylestradiol (EE2) and progesterone (Pg), and relates them to limnological parameters that indicate contamination by domestic sewage discharges.
Area under study

The area under study is located in the Iguazu Basin in Curitiba and the surrounding metropolitan district in the state of Paraná, Brazil, as shown in Figure 1. The source of the Iguazu Basin is near the Serra do Mar (Atlantic Rainforest); the Iguazu, the main river, is approximately 90 km long and stretches to the Curitiba Metropolitan District. The drainage area of this basin is around 3,000 km2 and the population is about three million people in 14 districts and towns. About a quarter of the population of the state of Paraná lives in this basin, with low sewage treatment rates. Considering the size of the basin, the results of the current study are mainly related to the upper part of the Iguazu River Basin, which includes the southern region of Curitiba and the Metropolitan District.

Material and methods

In the current study, the upper part of the Iguazu Basin, comprising the sub-basins of the rivers Atuba, Palmital, Itaqui, Pequeno and Piraquara, was monitored. This densely populated region has considerable water supply potential for humans. The sites of the sampling stations were defined according to the characteristics of the surrounding area so that all pollution levels in the basin could be covered. Stations IG-01 and IG-02 are located on the river Iguazu downstream of the confluence with the rivers Palmital and Atuba. These two rivers run through the urban areas of Colombo and Curitiba and receive considerable domestic sewage discharges. The sampling stations on the river Atuba were located around the sewage treatment plant: station AT-01 lay upstream, AT-02 immediately upstream and AT-03 downstream of the STP. Water samples at station IT-01 represent the Itaqui river water quality prior to the confluence with the Canal Paralelo; however, this river receives domestic sewage discharges upstream from a small STP. The water samples at station CP-01 represent the Canal Paralelo after the confluence with the river Itaqui; this channel of the river Iguazu passes through an environmental protection area. To make matters worse, water quality is also affected by unauthorized settlements.

Analysis

Sampling, consisting of five liters of water collected with a Van Dorn-type bottle, was carried out between February and October 2009. Samples were kept at 4 °C, transported to the laboratory and refrigerated. The samples were preserved for nitrogen analyses by adding 1 mL of concentrated sulfuric acid, as established by APHA (1998). Amber glass bottles were used to store the samples, except for preservation with sulfuric acid, when plastic bottles were used. The water samples were collected at a maximum depth of 50 cm. All the reagents used in the limnological analysis were of analytical grade. HPLC solvents (J.T. Baker®), with over 98% purity, were used to quantify the FSHs. The Shimadzu HPLC was equipped with a peristaltic pump model LC 20AT, a model DGU-20A degasser and an ultraviolet detector with an SPD M20A diode-array detector. The injection volume was 20 μL and the chromatography column used to separate the FSHs was a 4.6 mm × 15 cm Shimadzu ODS C8.
Due to the influence of the anthropogenic activities of the region under analysis on estrogen levels, some limnological parameters were studied in order to obtain a picture of the water quality during sample collection and to see whether any correlations existed. Dissolved oxygen concentration, pH, electrical conductivity (μS cm-1), turbidity (NTU) and temperature were determined at each site with portable equipment. Water samples were collected in a Van Dorn-type bottle. Phosphorus concentrations (total and dissolved orthophosphate) were determined by applying the spectrophotometric method of the reaction with molybdate/ascorbic acid (APHA, 1998). All forms of nitrogen were analyzed by spectrophotometry in filtered samples (0.45 μm). Ammonia nitrogen concentration was determined using the phenol-nitroprusside method. Nitrite, nitrate (following reduction to nitrite by the Cd column) and total nitrogen (persulfate digestion) were determined with sulfanilamide/N-naphthyl reagents (APHA, 1998). The methodology for determining FSHs was based on Lopez de Alda and Barceló (2000). First, one liter of the sample was filtered through a membrane with 0.45 μm porosity and its pH corrected to 3.0 with phosphoric acid. The sample was passed through a Strata C18 solid phase extraction cartridge (1.0 g/6 mL) at a flow of 8-10 mL min-1, after conditioning with 5 mL acetonitrile, 5 mL methanol and 5 mL pH 3.0 water. After passing the sample through the cartridge, the FSHs were eluted with 10 mL of acetonitrile, reduced under a nitrogen atmosphere and re-dissolved in 1 mL of methanol. The mobile phase used in the analysis was acetonitrile and water at a ratio of 50:50 for the estrogens and 90:10 for progesterone, at a flow of 1.4 mL min-1. The wavelength used to detect the estrogens estrone (E1), 17β-estradiol (E2) and 17α-ethinylestradiol (EE2) was 280 nm, while progesterone (Pg) was detected at 241 nm. The retention times of the compounds analyzed were 7.56 min for E2, 8.88 min for EE2, 10.31 min for E1 and 4.60 min for Pg.

Results and discussion

Since the main source of FSHs in the environment is related to the discharge of domestic sewage, a strong link between the limnological features of the environment under study and the concentration of these compounds could be surmised. Results for the limnological parameters are shown in Table 1. In the Iguazu basin, the water evaluation revealed the great human influence on the region. High concentrations of ammonia nitrogen were detected (between 0.05 and 33.1 mg L-1) (Table 1), with the highest concentrations occurring in the river Atuba downstream from the outlet of the Atuba Sul STP (AT-03), averaging 24.82 ± 8.12 mg L-1 of ammonia nitrogen (Table 1). Total phosphorus varied from 0.08 to 5.47 mg L-1, with the highest concentrations once again in the river Atuba (Table 1), indicating contamination by domestic sewage in the region. In addition to sewage discharge, there were indications of the influence of surface runoff, which, following rainfall, transports to water bodies considerable amounts of organic material originating from soil, plants and sewage produced in homes without sanitation and remobilized from soil pore spaces. Indications of such contamination are the DOC values, which were as high as 91.0 mg L-1 in the river Atuba (AT-02), and organic nitrogen, which reached concentrations as high as 160.83 mg L-1 (Table 1) in the same region.
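Given the retention times listed above, assigning chromatogram peaks to the four hormones is a nearest-match lookup. The sketch below illustrates one way to do this; the function name, peak data and the ±0.2 min tolerance window are illustrative assumptions, while the reference retention times are those reported in the text.

def assign_peaks(peaks, tolerance=0.2):
    """Assign HPLC peaks to hormones by nearest retention time.

    `peaks` is a list of (retention_time_min, peak_area) tuples; peaks
    farther than `tolerance` from every reference time stay unassigned.
    """
    reference = {"Pg": 4.60, "E2": 7.56, "EE2": 8.88, "E1": 10.31}
    assignments = {}
    for rt, area in peaks:
        name, ref_rt = min(reference.items(), key=lambda kv: abs(kv[1] - rt))
        if abs(ref_rt - rt) <= tolerance:
            assignments[name] = (rt, area)
    return assignments

print(assign_peaks([(4.58, 1520.0), (7.61, 980.0), (12.40, 55.0)]))
# {'Pg': (4.58, 1520.0), 'E2': (7.61, 980.0)} -- the 12.40 min peak is unassigned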
In the case of dissolved oxygen, with the exception of the stations on the Itaqui (IT-01) and the Canal Paralelo (CP-01), which reached values as high as 5.70 mg L-1, the remaining stations had low DO values, with a minimum of 1.25 mg L-1 and a maximum of 4.94 mg L-1 (Table 1). These data demonstrate the low water quality in the region. Chloride concentrations at all sampling stations ranged between 11.18 and 27.08 mg L-1 (Table 1) and confirmed contamination by domestic sewage, with influence on the region's water quality. Considering sewage discharge as the main source of FSHs in the region under study, as confirmed by the water quality results mentioned above, the presence of these compounds is to be expected. Sampling was carried out over a period covering all seasons of the year. However, it should be underscored that hydrological variability and the discharge variability of sewage effluents were not considered in the analysis of the results. Table 2 shows the concentrations of FSHs found in the five samplings conducted. Among the hormones under analysis, 17β-estradiol (E2) was the most concentrated and frequent, varying between <0.10 and 13.45 μg L-1 (Table 2). This is one of the main estrogens produced by the human body, with a fundamental role in the menstrual cycle. Furthermore, it is an estrogen commonly used in contraceptives. Since it may be of natural or synthetic origin, its detection in surface waters is a strong sign of contamination by domestic sewage discharges. Considering the excretion data provided by Johnson et al. (2000), approximately half a ton of E2 is discharged into the sewage all over Brazil every day, which explains its concentration in certain bodies of water as a result of human activities. In the case of 17α-ethinylestradiol (EE2), the second most frequently found hormone, concentrations ranged between <0.12 and 5.90 μg L-1 (Table 2). These data are related to the fact that the compound is a synthetic estrogen used in contraceptives, with only 15% absorption by the human organism, the remaining percentage being eliminated with the urine (JOHNSON; WILLIAMS, 2004). In addition to human beings, cattle and swine also excrete natural FSHs. Although land use and occupation in the region under analysis showed no signs of large quantities of livestock, even if such animals are present in the area under study, the presence of EE2 confirms contamination by human sewage, since it is a synthetic hormone. On the other hand, estrone (E1) had the lowest concentrations among the estrogens, varying between <0.10 and 1.80 μg L-1. E1 originates only from the human body, or rather, it derives from an exclusively natural human source, and it is twelve times less active than E2. Progesterone (Pg) was the least concentrated of the compounds, varying between <0.06 and 0.45 μg L-1 (Table 2). Since it is a pregnancy-related hormone released throughout the ovarian cycle, it is during pregnancy that higher concentrations of this hormone are released (GOLDEFIEN, 1995). The highest values were found at station AT-03, downstream of the Atuba Sul STP, proving that the sewage plant was not efficient in removing FSHs. It was only at this station that a positive correlation was found between DOC and estradiol (R = 0.9309) and between DOC and ethinylestradiol (R = 0.9562). Therefore, the concentration of FSHs at this location may be estimated from the DOC concentration. It may also be an indication that when there is high DOC degradation at the STP, the hormones are also degraded.
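The claim that FSH concentrations at AT-03 can be estimated from DOC amounts to a simple correlation and linear fit. The sketch below shows the computation on invented paired values (the study reports only the correlation coefficients, e.g. R = 0.9309 for DOC vs. estradiol); numpy's corrcoef and polyfit are the only tools assumed.

import numpy as np

doc = np.array([22.0, 40.5, 58.0, 75.5, 91.0])  # DOC, mg L-1 (invented)
e2 = np.array([2.1, 4.9, 7.6, 10.2, 13.4])      # estradiol, ug L-1 (invented)

r = np.corrcoef(doc, e2)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(doc, e2, 1)  # least-squares regression line
print(f"R = {r:.3f}; E2 ~= {slope:.3f} * DOC + {intercept:.2f}")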
In general, the FSH concentrations in surface waters found in the current study were higher than those found in other studies, both in Brazil and in other countries, as shown in Table 3. According to the region's sanitation data, the percentage of sewage collection in Curitiba is 80.2%, while in neighboring towns, such as Pinhais, Piraquara and São José dos Pinhais (Figure 1), the percentages are 38.4, 45 and 40.5%, respectively. One should also take into account that not all sewage collected in the region was treated and that the consumption of antibiotics increased in Brazil from 2007 to 2009. Further, according to Araujo and Costa (2009), the consumption of contraceptives in the country had a 23% yearly increase. Among the studies analyzed, the highest levels of FSHs were obtained by Montagner and Jardim (2011) in Campinas, São Paulo State, Brazil, with 6806 ng L-1 of E2 (Table 3). In the current assay, even higher concentrations were found, reaching as high as 13 450 ng L-1 of E2 (Table 2), precisely at a location directly affected by the outlet of the aerobic sewage plant (AT-03). The other stations also showed hormone concentrations, although at levels lower than those at AT-03. These data may be linked to the contamination level of the stations under study, especially on the outskirts of Curitiba. In accordance with the FSH results and the limnological parameters obtained, it was observed that the location of the sampling stations has a strong influence on the spatial variability of FSHs. On the river Iguazu, at station IG-01, it is likely that part of the pollution is due to its proximity to the confluence with the Palmital river, which is used for the discharge of a large amount of untreated domestic sewage from the town of Colombo. The main characteristic of station IG-02, also on the river Iguazu, is that it is connected to all the sampled stations. Located downstream from the area under study, it comprises a section of the Iguazu Basin including Curitiba, Pinhais, Piraquara and São Jose dos Pinhais (Figure 1). Stations AT-02 and AT-03 on the river Atuba are directly affected by the Atuba Sul STP, and AT-03 is downstream from the plant's outlet. These features of the sampling stations are part of a scenario of human influence that corroborates the presence of high concentrations of FSHs. In the case of the river Itaqui and the Canal Paralelo, stations IT-01 and CP-01 have the lowest contamination levels, but higher concentrations of ammonia nitrogen, organic nitrogen, total phosphorus, chloride and DOC were found in the river Itaqui than in the Canal Paralelo. In this case, the results show that the Canal Paralelo, also known as the Canal Extravasor, is affected by contamination that stems from the river Itaqui, as the sampling station is located after the confluence with the river Itaqui.

Conclusion

FSH contamination levels were evaluated. The river with the highest contamination was the Atuba, followed by the Iguazu, Itaqui and Canal Paralelo, with concentrations up to 1.98 times higher than at other places in Brazil (MONTAGNER; JARDIM, 2011) and in other countries (Japan, Israel and Spain), owing to low sanitation rates, the inefficiency of sewage treatment in removing FSHs and the yearly increase in Brazilian contraceptive consumption. E2 was the FSH present in most samples and at the highest concentrations, owing to human excretion and high contraceptive usage. The current assay contributes towards a better understanding of FSHs in Brazilian surface waters and their main sources.
Table 1. Results of limnological parameters observed in the rivers Iguazu (IG), Atuba (AT) and Itaqui (IT) and in the Canal Paralelo (CP). ND: not detectable; NA: not analyzed.

Table 3. Concentrations of FSHs found in other studies.
2018-12-27T00:44:29.944Z
2014-02-26T00:00:00.000
{ "year": 2014, "sha1": "d182ed09e9d2d21f9b6e8d46a3c238b8c385a994", "oa_license": "CCBY", "oa_url": "https://www.periodicos.uem.br/ojs/index.php/ActaSciTechnol/article/download/18477/pdf_8", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d182ed09e9d2d21f9b6e8d46a3c238b8c385a994", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
3090065
pes2o/s2orc
v3-fos-license
Chromatographic Characterization and GC-MS Evaluation of the Bioactive Constituents with Antimicrobial Potential from the Pigmented Ink of Loligo duvauceli

Chromatographic characterization and GC-MS evaluation of the black pigmented ink of Loligo duvauceli in the present study have yielded an array of bioactive compounds with potent antimicrobial properties. With antimicrobial resistance raising an alarm globally, elucidating antimicrobial agents from natural sources is the need of the hour. In this view, this study aimed to characterize the black pigmented ink of the Indian squid L. duvauceli. The squid ink was subjected to crude solvent extraction and was fractionated by silica gel column chromatography. TLC and HPTLC profiles were recorded. The antimicrobial bioassay of the squid ink fractions was done by the agar well diffusion method, and the antimicrobial fraction was then characterized using GC-MS analysis. The results showed that the n-hexane extract upon column fractionation yielded a total of 8 fractions with a mobile phase of Hex/EtOAc in different gradients. TLC and HPTLC profiles showed a single spot with a retention factor of 0.76. Fraction 1 showed significant antibacterial activity against Escherichia coli, Klebsiella pneumoniae, Staphylococcus aureus, and Lactobacillus acidophilus and promising antifungal activity against Candida albicans. GC-MS analysis of the antimicrobial fraction showed that bis(2-ethylhexyl) phthalate (BEHP) possesses the highest percentage of area normalisation (91%), alongside a few other minor constituents. The study concludes that the antimicrobial efficacy of the squid ink might be due to the synergistic effects of the phthalate derivative and the other minor volatile compounds analysed in the squid ink.

Introduction

Characterization of the bioactive constituents from black pigmented ink has resulted in a handful of chemical elucidations. The ink from molluscs has generated great interest in its bioactive molecules, with promising antibacterial, antitumour, antileukemic, and antiviral activities [1]. The ink is ejected from the ink gland of the squid Loligo duvauceli through the ink duct to escape from its predators [2]. High performance liquid chromatographic (HPLC) analysis of Loligo sp. ink has quantified its chemical components as L-DOPA and dopamine [3]. The black pigment was found to be melanin, and the process of melanogenesis has been explained in the ink gland of Sepia sp. [4]. The ink is a complex mixture of organelles, premelanosomes, melanosomes, granules, proteic material (enzymes), glucosamine, and phospholipids in suspension. At the moment of extraction the mixture is still active, which makes the ink suitable for research studies. The ink gland has also been shown to contain a variety of melanogenic enzymes such as tyrosinase, dopachrome tautomerase, and peroxidase [5]. The ink also plays various primary roles in the world of alternative medicine and has a wide range of therapeutic applications [6]. Despite these reports, no considerable interest has been shown towards purification procedures for the ink of L. duvauceli. Potential chemical cues in squid ink have been identified and quantified using reverse-phase high-performance liquid chromatography (RP-HPLC) [7]. Meanwhile, the active antimicrobial biomolecules have not yet been characterized.
Thus, this study aims to explore the active bioconstituents of the ink by silica gel column chromatography and gas chromatography-mass spectrometry (GC-MS) analysis of its antimicrobial constituents.

Preparation of Crude Extracts. The collection of ink and the crude solvent extraction of the constituents from L. duvauceli ink were done by the method followed earlier [8]. The crude extracts were subjected to sterility checking after exposure to UV light for 2 hrs: 5 mg of each extract was mixed in sterile nutrient broth, incubated for 2 hrs and then plated onto nutrient agar to check the sterility of the extracts. The extracts were stored at 4 °C in brown glass bottles. The antimicrobial activity of the crude extracts was assessed by the conventional agar well diffusion method [9]. In our earlier reports, the n-hexane extract scored high antimicrobial activity against the clinical bacterial and fungal isolates [10]. Thus, the n-hexane extract was chosen for further fractionation by silica gel column chromatography.

Chromatographic Fractionation of the Hexane Extract. Separation of the active biomolecules from the crude n-hexane extract was done by silica gel column chromatography. Briefly, 10 g of the crude n-hexane extract was fractionated on a silica gel column. The crude extract was adsorbed onto silica gel (100-200 mesh, SISCO) and chromatographed employing a step gradient solvent system from low to high polarity. The starting solvent system was 100% n-hexane; the polarity was subsequently increased by varying the solvent concentration with ethyl acetate (EtOAc), followed by water/EtOAc mixtures (up to 50:50) and finally 100% water. In order to select the best mobile phase for eluting the fractions, 5 μL of each eluted fraction was spotted on TLC and run with combinations of solvent systems. In this way, the solvent system that showed the most favorable separation of compounds was chosen. The fractions that showed the elution of similar compounds were pooled and concentrated under vacuum below 40 °C using a Heidolph VE-11 rotary evaporator for 30 min. High performance thin layer chromatography (HPTLC) analysis was also performed on the active fraction. Combined fractions were kept under an air current to facilitate drying. The concentrated fraction was obtained and subjected to sterility checking as mentioned earlier. The active fraction was stored at 4 °C in sterile brown glass bottles until used for the bioactivity studies.

The bioassay was performed by the agar well diffusion method. Briefly, a Mueller Hinton agar plate was divided into two halves and 50 μL of inoculum of each test organism was spread as a lawn culture to achieve confluent growth on each half. The agar plates were allowed to dry and wells (cups) of 8 mm were made with a sterile agar borer on the inoculated agar plates. Ten mg of the pooled fraction was mixed with DMSO and made ready for the study. A 50 μL volume of the active fraction was dispensed directly into the wells of the inoculated specific media agar plates for each test and control organism. Erythromycin (30 μg) and Amphotericin B (100 U) were used as the positive controls for the bacteria and the yeast, respectively. DMSO served as the negative control. The plates were allowed to stand for 10 minutes for diffusion of the extract to take place and were incubated at 37 °C for 24 h.
After incubation, the plates were observed for zones of inhibition around the wells, and the zone of inhibition was measured using an antibiotic sensitivity measuring scale (Himedia, Mumbai).

Determination of MIC and MBC Values for the Active Fraction. The MIC value for the active antimicrobial fraction was determined by the microbroth dilution method [11]. Serial dilutions of the active fraction were made in a 96-well microtitre plate with DMSO; the dilution series was 5, 2.5, 1.25, 0.625, 0.312, and 0.156 mg/mL. To each dilution, 100 μL of the culture broth of the test organism was added in the respective wells and the plate was incubated at 37 °C for 24 hrs. After incubation, spectrophotometric analysis was performed and the OD values were recorded. The MBC value was confirmed by the microbial spot checkerboard method [12], where 3 μL of each dilution was spotted onto Mueller Hinton agar plates and incubated at 37 °C for 24 hrs.

(Table 1 legend: -: no activity; 1-8: isolated column fractions.)

For GC-MS analysis, the oven temperature programme was maintained at 45 °C for 2 min and 300 °C for 10 min, with an overall holding time of 36.5 min. The mass spectral conditions applied were as follows: electron impact at 40 eV, ion source temperature at 200 °C, and interface temperature at 240 °C. Individual components were identified by matching against the Wiley 139 and NIST 05 library databases. The percentage composition was determined by area normalization.

Results

The n-hexane extract, which scored high antimicrobial activity, yielded a total of 8 fractions upon column fractionation over silica gel. Elution with Hex/EtOAc in the ratio of 4:1 yielded the active fraction, which was subjected to TLC, HPTLC, and GC-MS analysis. The TLC profile showed a single spot with a retention factor of 0.76 (Figure 1). The same plate was subjected to HPTLC analysis at a scanning wavelength of 254 nm. The chromatogram, obtained as calibration spectrum data with CAMAG software and scanned with SCANNER II [951012], showed a single peak with a noise level of 0.072 mV and an area normalization of 83.91%, indicating maximum extraction with Hex/EtOAc. The antimicrobial bioassay revealed that the active fraction possesses high antibacterial activity against the test organisms (Table 1). The zone sizes were measured as 18 mm for E. coli and K. pneumoniae, 16 mm for S. aureus, 23 mm for C. albicans, and 18 mm for L. acidophilus (Figure 2). The MBC value was determined as an average of 2.5 mg/mL for E. coli, K. pneumoniae, and C. albicans and 5 mg/mL for S. aureus and L. acidophilus. The microbial spot checkerboard method (Figure 3) yielded a complete absence of growth at the spots inoculated with the determined MBC values. The previous dilution, which showed a visible decrease in the number of colonies, was determined as the MIC and was deduced as 1.25 mg/mL for E. coli, K. pneumoniae, and C. albicans and 2.5 mg/mL for S. aureus and L. acidophilus.
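As a worked example of reading a MIC from the microdilution series above, the sketch below calls the MIC as the lowest concentration whose OD stays at or below a no-growth cutoff; the cutoff and the OD readings are illustrative assumptions (the study judged growth visually), while the concentration series matches the one reported.

def call_mic(od_by_conc, od_cutoff=0.1):
    """Return the MIC (mg/mL) from a broth-microdilution series.

    `od_by_conc` maps concentration (mg/mL) to the 24-h OD reading;
    the MIC is the lowest concentration showing no measurable growth.
    """
    inhibited = [c for c, od in od_by_conc.items() if od <= od_cutoff]
    return min(inhibited) if inhibited else None

series = {5.0: 0.02, 2.5: 0.04, 1.25: 0.06,
          0.625: 0.55, 0.312: 0.80, 0.156: 0.92}
print(call_mic(series))  # 1.25 mg/mL, matching the value reported for E. coli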
GC-MS analysis of the bioactive fraction revealed a chromatogram showing nine peaks, with bis(2-ethylhexyl) phthalate [BEHP] possessing the highest percentage of area normalisation (91%) (Figure 4). The mass spectrum was found to be superimposable (>93) with that of the authentic compound from the GC-MS library. Based on the GC-MS analysis, the active fraction was structurally elucidated as bis(2-ethylhexyl) phthalate. The chromatogram also showed the presence of other minor compounds such as octadecane (0.29%), naphthalene (0.13%), tetradecane (0.41%), pentadecane (0.58%), hexadecane (1.02%), heptadecane (0.53%), and cholesterol (5.07%) (Table 2). The analysis thus reveals the presence of a phthalate derivative and other minor volatile compounds as potent antimicrobial agents extracted from the squid ink.
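The area normalization used to derive the percentages above is each peak's area divided by the summed area of all integrated peaks. A minimal sketch follows; the raw areas are hypothetical, scaled so that the BEHP share reproduces the reported 91.43%.

def area_normalization(peak_areas):
    """Percentage composition: each peak area over the total, times 100."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

areas = {"BEHP": 9143.0, "cholesterol": 507.0, "hexadecane": 102.0,
         "pentadecane": 58.0, "others": 190.0}
for name, pct in area_normalization(areas).items():
    print(f"{name:12s} {pct:5.2f} %")  # BEHP comes out at 91.43 %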
which indicates its potent antifungal activity. A good antibacterial activity is achieved against the Gram negative and Gram positive bacilli, and a moderate antibacterial activity is observed against S. aureus. The MBC value was deduced as 2.5 to 5 mg/mL against the tested organisms. The microbial spot checkerboard also yielded the same result, with absence of growth at the spot inoculated with the determined MBC value. GC-MS analysis of the bioactive fraction shows the presence of bioactive compounds, which were further confirmed with the library data. Using mass spectrometry, the molecular mass of a compound and its elemental composition can be readily determined; moreover, the method requires a very small amount of the test sample and gives molecular weights accurately. GC-MS analysis showed bis(2-ethylhexyl) phthalate as the major constituent, with a large area normalization of 91.43%. The BEHP identified through this analysis was further confirmed by its molecular mass spectrum, which correlates with the mass spectrum of BEHP reported earlier [22] for a phthalate isolated from a marine bacterial strain. The other minor constituents with low area normalization were also identified by molecular mass from the library database. Bis(2-ethylhexyl) phthalate is found at low levels in the environment, as it is subject to biodegradation [23]. This derivative has also been reported to be present in fish and lipid tissues and as a pollutant in the marine environment [24]. The bioactivity of phthalate derivatives has already been reported in many plants, algae, and marine microorganisms, and also from many marine species [25]. A few reports are available on the antibacterial potential of phthalate derivatives from plants and flowers [26]. Bis(2-ethylhexyl) phthalate extracted from Streptomyces bangladheshiensis has been reported to be a potent antibacterial agent against Gram positive bacteria [27]. Di(2-ethylhexyl) phthalate from Alchornea sp. has been shown to possess anti-inflammatory activity [28]. The other volatile minor compounds identified by GC-MS analysis have also been found to be potent antibacterial agents. Extracts of Spirulina sp. have shown antibacterial activity attributable to octadecane and tetradecane [29]. The pentadecane and heptadecane compounds extracted from sea urchin have also been reported to possess potent activity against Gram positive and Gram negative bacteria [30]. BEHP has been reported to possess potent antifungal activity against major pathogenic fungi such as Candida, Cryptococcus, and Aspergillus sp. [31]. The other minor constituents analyzed by GC-MS have also been reported to possess antifungal activity. A naphthalene derivative has been reported to act against C. albicans and Aspergillus sp. [32]. The antifungal activity of a cholesterol hydrazone derivative has also been studied against C. albicans at a concentration of 1.5 µg/mL [33]. The antifungal activity of tetradecane and octadecane has been reported against C. albicans [34]. GC-MS analysis of a natural cure concoction, Epa-Ijebu, showed the presence of natural alkanes such as hexadecane, heptadecane, and octadecane with potent antifungal activity [35]. In correlation with these reports, the study results reveal that the ink has potent antimicrobial constituents, which account for its antibacterial and antifungal properties.
The promising antimicrobial activity of these bioactive constituents needs further multipronged research to enable its use as a novel therapeutic agent in the near future for treating ailments caused by drug-resistant microbial pathogens. This study has demonstrated the presence of antimicrobial bioconstituents in the squid ink through column fractionation studies, and GC-MS analysis has aided the evaluation of the major and minor compounds present in it through their mass spectrum data. The study thus points to the synergistic effects of an array of compounds in the squid ink underlying its potent antimicrobial property. A novel therapeutic compound from a new marine source such as squid ink would be of much use in eradicating microbial pathogens, and it would aid in controlling the emergence of drug-resistant strains.
Effect of an Augmented Reality Ultrasound Trainer App on the Motor Skills Needed for a Kidney Ultrasound: Prospective Trial

Background: Medical education is evolving from "learning by doing" to simulation-based hands-on tutorials. Objective: The aim of this prospective 2-armed study was to evaluate a newly developed augmented reality ultrasound app and its effect on educational training and diagnostic accuracy. Methods: We recruited 66 medical students and, using imaging and measuring a kidney as quality indicators, tested them on the time they needed for these tasks. Both groups used textbooks as preparation; in addition, the study group had access to a virtual ultrasound simulation app for mobile devices. Results: There was no significant difference between the study arms regarding age (P=.97), sex (P=.14), and previous ultrasound experience (P=.66). The time needed to complete the kidney measurements also did not differ significantly (P=.26). However, the results of the longitudinal kidney measurements differed significantly between the study and control groups, with larger, more realistic values in the study group (right kidney: study group median 105.3 mm, range 86.1-127.1 mm; control group median 92 mm, range 50.4-112.2 mm; P<.001; left kidney: study group median 100.3 mm, range 81.7-118.6 mm; control group median 85.3 mm, range 48.3-113.4 mm; P<.001). Furthermore, whereas all students of the study group obtained valid measurements, students of the control group did not obtain valid measurements of 1 or both kidneys in 7 cases. Conclusions: The newly developed augmented reality ultrasound simulator mobile app provides a useful add-on for ultrasound education and training. Our results indicate that medical students' use of the mobile app for training purposes improved the quality of kidney measurements. (JMIR Serious Games 2019;7(2):e12713) doi: 10.2196/12713

Background
Sonography is a well-established diagnostic tool and is sometimes used for small interventions. It is noninvasive, cost effective, has no side effects, and is clinically valuable in nearly all medical disciplines. Technical developments in recent years mean that the examiner requires more skill and knowledge in using ultrasound [1]. As a result, the demand for educational lectures and courses has increased [2-4]. Traditionally, medical journals, hands-on tutorials, and theoretical lectures have been used for keeping doctors up-to-date. One of the difficulties of sonography compared with other imaging technologies is the complex motor hand-eye coordination required. Students are commonly trained in coordination on healthy volunteers, with limitations in time and availability; moreover, malignancies or abnormalities are most likely not present in healthy volunteers. Therefore, various models for simulation have been developed [5]. The expense of such simulators, unfortunately, limits their availability for practice. Due to the technical advances in mobile phones and the common acceptance of augmented reality (AR), driven mainly by the popularity of the video game Pokémon Go [6], new training possibilities via smartphone have opened up [7]. AR is commonly defined as extended information overlaid on a real-world image, whereas virtual reality (VR) is completely separated from the real-world image. With AR it is now possible to simulate a patient on a smartphone and imitate a sonographic examination.
Objective
The aim of this cohort study was to determine whether there was a difference in the hand-eye coordination and motor skills needed for ultrasound examination between 2 groups of medical students with and without exposure to a VR ultrasound training app, measured by the time and measurements of a kidney ultrasound.

Participants and Procedure
Using the Consolidated Standards of Reporting Trials (CONSORT) and Standards for Reporting of Diagnostic Accuracy Studies (STARD) statements as guidelines, we designed this cohort study, called Ultraschall App Study (UPPS), to evaluate a newly developed ultrasound AR simulator mobile app and its educational and diagnostic effect on 2 cohorts of medical students. The curriculum is an annual schedule resulting in same-year students attending a summer and a winter semester. We recruited 66 medical students and split them into 2 groups. We determined the starting group (the control group) by flipping a coin. We recruited the control group in the summer term between April and June 2016, and the study group between August 2016 and November 2016 (no student courses are offered in June and July). Participation in the study was offered during a mandatory weekly course in obstetrics and gynecology sonography, but participation was voluntary. The lecturer was the same over the recruitment period. No student declined. Initially a questionnaire was handed out and participants self-estimated their ultrasound experience (self-estimation was scored on a scale from 0 to 10, with 0 indicating no sonographic experience and 10 indicating a very experienced student). A tutor explained the aim of the following 60-minute study time, and participants were provided with theoretical knowledge for self-study (Sono-Grundkurs [8]). The participants were told to aim to visualize and document the reference (tutor) kidneys with an ultrasound at the end of this lecture. Study group students additionally had access to the iOS-based AR ultrasound simulator app installed on 3 handheld devices. The study app was designed to use the mobile device's gyroscope to simulate the motion of an ultrasound transducer and was provided in the native language (German). The text files were also translated into English, Hungarian, Romanian, Italian, and Polish by native-speaking colleagues. With the app, training ultrasound motor skills requires neither a proper ultrasound machine nor a patient. It is also independent of time and location, as the only device needed is a smartphone or a tablet. Figure 1 shows the virtual patient as displayed on the tracker pattern and the ultrasound mode once the mobile device is close to the virtual skin, showing a kidney scan simulation (Multimedia Appendix 1). One patient was simulated for this proof-of-concept study. After 60 minutes of self-study in a group, the participants had a brief tutorial on the use of the ultrasound machine (GE Voluson Expert 8, General Electric, GE Medical Systems, Solingen, Germany) set to kidney scan. Then the participants were asked one by one to scan and measure both kidneys of the tutor as accurately as possible and to document their scan with the normal images. The starting time was the beginning of the examination, and the finishing time was the time stamp on the last picture. We used this time frame to compare the 2 groups and as an internal quality control for the self-estimation. After students finished the documentation, they were given a written multiple choice test (range 0-6 points) to evaluate their theoretical knowledge.
Finally, the study group was asked to assess the AR mobile app on a scale from 0 to 10 regarding its usefulness, their recommendation regarding its use, and any problems they encountered (responses: yes, no, not yet). Prior to study recruitment, we consulted the University of Ulm ethics committee, which exempted the study from ethical approval.

Statistical Analysis
For the statistical analysis, we used IBM SPSS Statistics for Windows, version 21.0 (IBM Corporation). The descriptive statistics used frequency tables with absolute and relative frequencies for nominal data, and median and range for ordinal-scaled and metric data. Because the distributions of the metric variables (kidney measurements, examination time, age, semester, ultrasound experience, multiple choice test results, and app rating) differed significantly from the normal distribution (Shapiro-Wilk test), we used exclusively nonparametric statistical analysis. We compared the groups for nominal-scaled (categorical) data or rates with chi-square tests (Fisher exact test; variable: successful visualization of the kidney). We applied the Mann-Whitney U test to test differences between the 2 independent groups for ordinal-scaled or metric data (kidney measurements, examination time, age, semester, ultrasound experience, and multiple choice test results). We used boxplots for intergroup visualization of ordinal-scaled and metric data. In these plots, the horizontal line is the median and the box represents 50% of the data (interquartile range). The whiskers of the box-and-whisker plots had a maximum length of 1.5 times the interquartile range; if all data are within these borders, the minimum and maximum values determine the length of the whiskers. All values outside the whiskers are marked as dots. We calculated correlations for ordinal-scaled and metric data according to Spearman rho (ρ). All P values are 2-tailed, and P<.05 was considered significant.

Participant Characteristics
A total of 66 medical students participated in our study; 33 students were assigned to the control group and 33 to the study (app) group. There was no significant difference in the parameters age, dominant hand, and sex between the 2 groups (Table 1). Because we recruited participants in the summer and winter terms, members of the study group were on average in their ninth semester (range eighth to 12th semester) and members of the control group were in their eighth semester (range seventh to 10th semester). Prior ultrasound knowledge was similar (study group: median score 2, range 0-4.5; control group: median score 2, range 0-4; P=.66). In the study group, more students self-reported AR experience than in the control group (7/33, 21% vs 1/33, 3%; P=.05; Table 1).

Group Result Comparisons
The study group visualized the kidneys in all cases on both sides, whereas the control group failed to document the kidney in 7 cases (1 right and 6 left). This resulted in a significant difference for the left kidney (Fisher exact test, P=.02; Figure 2). The measuring time period (in seconds) was not significantly different (study group median 351 s, range 155-563 s; control group median 302 s, range 103-527 s; P=.26; Figure 3). There was an inverse correlation between the time needed for kidney documentation and self-reported ultrasound experience (ρ=-.28, P=.04).
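A minimal sketch of the nonparametric comparisons described in the Statistical Analysis section above, using SciPy; the per-student time and experience arrays are hypothetical placeholders, while the Fisher table uses the visualization counts reported by the study:

```python
# Nonparametric group comparisons with scipy.stats.
from scipy.stats import mannwhitneyu, fisher_exact, spearmanr

# Hypothetical per-student examination times (seconds).
study_times = [351, 320, 410, 298, 505]
control_times = [302, 280, 350, 270, 480]

# Mann-Whitney U test for a metric variable (examination time).
u_stat, p_time = mannwhitneyu(study_times, control_times,
                              alternative="two-sided")

# Fisher exact test for a categorical variable: left kidney
# visualized vs. not visualized, per group (counts from the paper).
visualized = [[33, 0],   # study group
              [27, 6]]   # control group
odds_ratio, p_visual = fisher_exact(visualized)

# Spearman correlation of documentation time with self-reported
# ultrasound experience (hypothetical experience scores).
experience = [2.0, 3.5, 1.0, 4.0, 0.5]
rho, p_corr = spearmanr(study_times, experience)

print(p_time, p_visual, rho, p_corr)
```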
The results of the multiple choice questionnaire were not significantly different between the 2 groups (P=.13).

Principal Findings
The transition from reality to virtuality has been described as a reality-virtuality continuum by Milgram et al [9]. The amount of additional virtual information varies depending on the needs and the intended use. In AR, additional information is displayed or initiated with barcodes, image recognition software, or trackers to enhance reality. This is in contrast to VR, where everything is computer generated. Computer games and smartphones have helped bring VR and AR into daily life. Google Glass, a brand of smart glasses, received mixed responses when first announced [10,11]. Pokémon Go, introduced in the summer of 2016, could be considered the AR breakthrough; since then, the usability of AR and VR has continuously improved [12]. The "Pokémon Go effect" may have played a role in our study, as the questionnaires showed an increase in self-reported AR knowledge in the study population (from 3% in April to 21% in September) [6]. Over the last 20 years, several generations of medical apps have been produced. Whereas the first generation were expensive and had potential clinical uses [13], improvements in chip and smartphone technology enabled new possibilities for education and learning, taking the advances made in the entertainment industry and implementing them in medical education [14]. AR can enhance the learning curve for ultrasound education by combining theoretical knowledge and motor skills. To date, to our knowledge, no other smartphone app that can simulate an ultrasound examination has been developed. To objectively evaluate the effectiveness of the mobile app, we targeted medical students with next to no ultrasound experience but with knowledge of anatomy. The weekly obstetrics and gynecology introductory course proved ideal: this introductory course is mandatory for fourth-year medical students before they are exposed to clinical work. Students tend to communicate with each other about their clinical courses and the examinations at the end. To ensure that the 2 groups in the study would be independent, with minimal exchange of information about the study, we spread the recruitment time over 2 university teaching periods (summer and winter semesters), with 1 semester per group (control and study). Starting with the control group also coincidentally ensured that the following students were not biased by earlier participants. Students also could not find the app in the online store as a training opportunity outside the study, which would have introduced further bias. The tutor was approached by 2 control students once word about the app had spread among the students. We chose the kidney due to its superficial position, homogeneous size (the normal adult kidney being 100-120 mm [8]), and importance in various disciplines. Little time is needed to learn to do a kidney ultrasound. Our study group's measurements were closer to the values of the reference kidneys within an hour of practice, and the groups also differed significantly in the general visualization of the kidney. In a prospective randomized trial, Celebi et al showed similar teaching effectiveness for student tutors and ultrasound experts [3], so our aim was to provide a first evaluation of the effectiveness of a mobile app without a tutor's supervision.
The results add to the observations of Celebi et al and others by showing positive effects after 60 minutes of autonomous practicing [15-17]. As opposed to Celebi et al [3] and Ritter et al [16], our study focused on practicing motor skills by using a smartphone or tablet. Furthermore, our study included a practical test by visualization of the kidney as well as a multiple choice questionnaire. Despite the published possibilities of combining practical evaluation methods for teaching interventions, such a practical test is not commonly used for evaluation in a clinical course [17] but, from our point of view, is an essential step for a successful clinical lecture. The significant differences between the control and intervention groups in visualization and measurements show the need for such a hands-on approach. A tutorial including hands-on practice prepares students better for clinical routine, even without further tutor supervision.

Limitations
Despite these positive results, we identified the following points that need to be addressed. Recruitment bias can only be minimized, as random allocation for each participant was not possible in our setting. The students were assigned on a weekly schedule to a group, and interaction between the groups was known to occur; to reduce this bias, the study protocol grouped the participants per semester. Unfortunately, during the study period the Pokémon Go game became available, and VR bias might have had an effect on the results [6]. Students with more VR experience may have better motor skills with their mobile devices due to additional training. As there was no difference in scan time or app rating before and after the release of Pokémon Go, such a bias seems unlikely. But, with the expected increased use of AR mobile apps, this effect might influence future studies, as a VR-naive comparable control group would be impossible to recruit. On the other hand, the kidney measurements differed between the 2 subgroups, and this study was not designed to differentiate between preexisting motor skills and app-trained skills. This needs to be evaluated with either a larger number of students or a baseline question regarding the participants' gaming habits. The 2-armed study protocol could be further criticized. There is, to our knowledge, no evidence-based statement for medical education trials, so we wrote the study protocol with the CONSORT and STARD statements as guidelines. The mobile app was designed to enhance the learning experience with a textbook by enabling students to practice their motor skills and experience the theoretical facts on screen. This is in line with the results of the Extended Focused Assessment with Sonography for Trauma (eFAST) study [18], which showed no benefit for mobile e-learning compared with traditional learning; eFAST focused on the difference in theoretical learning and not on motor skills as in our trial. The cost of developing such an app can be criticized. As the trial version of this app is available at no cost, we disagree with Nilsson et al [18] and do see cost effectiveness for motor training, especially because the time of tutors, costs for ultrasound machines, and secondary costs (eg, room, missed outpatient clinic) are minimized with this app, and no other cost-effective motor skills training method is available. Also, after app training, time is saved by improved imaging and, ultimately, diagnosis in the clinical setting. Besides those savings, the app development was the biggest cost factor.
With only 1 organ and only 1 individual (eg, no variation in subcutaneous fat tissue) in the app, the costs currently outweigh the benefits per user, but there is the potential to simulate more difficult clinical cases such as obese patients, cardiac scans, or fetal organ screening in future studies. We could have applied the Objective Structured Assessment of Ultrasound Skills criteria as Tolsgaard et al [19] applied them for a structured examination of the lung. Here the ultrasound federations could help future studies by providing variables. These proposed benchmarks, based on current teaching models, provide an expert's feedback on imaging quality. A mobile app could support the expert by guiding the user to the "ideal" image, ultimately providing rapid feedback and improving image quality beyond the current expectations regarding time and practice. This approach adapts to individual differences in the learning curve [4,20] by being independent of expertise, time, location, and place. This freedom could be appealing to a wide range of students, and our results also show no sex difference in the acceptance of the app. With this home-based learning, the app could be used to prepare participants prior to an ultrasound course in order to maximize the learning effect.

Conclusion
We found that students can be trained in the motor skills needed for ultrasound examination using an AR app. Within a short training period, participants documented the kidney significantly better. The main advantage of the app is the freedom to train without a patient and a real ultrasound machine. With the implementation of immediate feedback on imaging quality and various scenarios and patients, such apps could be a valuable enhancement of lectures, courses, and textbook-based learning. This should result in more effective learning and improved clinical skills. Further benefits include the freedom to train in terms of time, model or patient, and place, at a reasonable cost.

Conflicts of Interest
FE has been the honorable medical adviser for the company programming the app, with reimbursement of travel costs. The other authors declare no conflicts of interest.
Preconditioning fractional spectral collocation

Fractional spectral collocation (FSC) methods based on fractional Lagrange interpolation have recently been proposed to solve fractional differential equations. Numerical experiments show that the linear systems in FSC become extremely ill-conditioned as the number of collocation points increases. By introducing suitable fractional Birkhoff interpolation problems, we present fractional integration preconditioning matrices for the ill-conditioned linear systems in FSC. The condition numbers of the resulting linear systems are independent of the number of collocation points. Numerical examples are given.

1. Introduction. Fractional spectral collocation (FSC) methods [7,8,2] based on fractional Lagrange interpolation have recently been proposed to solve fractional differential equations. By a spectral theory developed in [6] for fractional Sturm-Liouville eigenproblems, the corresponding fractional differentiation matrices can be obtained with ease. However, numerical experiments show that the involved linear systems become extremely ill-conditioned as the number of collocation points increases. Typically, the condition number behaves like $O(N^{2\nu})$, where $N$ is the number of collocation points and $\nu$ is the order of the leading fractional term. Efficient preconditioners are highly desirable when solving the linear systems by an iterative method. Recently, Wang, Samson, and Zhao [5] proposed a well-conditioned collocation method to solve linear differential equations with various types of boundary conditions. By introducing a suitable Birkhoff interpolation problem, they constructed a pseudospectral integration preconditioning matrix, which is the exact inverse of the pseudospectral discretization matrix of the $n$th-order derivative operator together with $n$ boundary conditions. Essentially, the linear system in the well-conditioned collocation method [5] is the one obtained by right preconditioning the original linear system; see [1]. By introducing suitable fractional Birkhoff interpolation problems and employing the same techniques as in [5], Jiao, Wang, and Huang [3] proposed fractional integration preconditioning matrices for linear systems in fractional collocation methods based on Lagrange interpolation. In the Riemann-Liouville case, it is necessary to modify the fractional derivative operator in order to absorb singular fractional factors (see [3, §3]). In this paper, we extend the Birkhoff interpolation preconditioning techniques of [5,3] to the fractional spectral collocation methods [7,8,2] based on fractional Lagrange interpolation. Unlike in [3], there are no singular fractional factors in the Riemann-Liouville case. Numerical experiments show that the condition number of the resulting linear system is independent of the number of collocation points. The rest of the paper is organized as follows. In §2, we review several topics required in the following sections. In §3, we introduce fractional Birkhoff interpolation problems and the corresponding fractional integration matrices. In §4, we present the preconditioning fractional spectral collocation method; numerical examples are also reported. We present brief concluding remarks in §5.

2. Preliminaries.
2.1. Fractional derivatives.
The definitions of fractional derivatives of order $\nu \in (n-1, n)$, $n \in \mathbb{N}$, on the interval $[-1,1]$ are as follows [4]:

Left-sided Riemann-Liouville fractional derivative:
$${}_{-1}^{RL}D_x^{\nu} u(x) = \frac{1}{\Gamma(n-\nu)} \frac{d^n}{dx^n} \int_{-1}^{x} \frac{u(s)}{(x-s)^{\nu-n+1}}\, ds.$$

Right-sided Riemann-Liouville fractional derivative:
$${}_{x}^{RL}D_1^{\nu} u(x) = \frac{(-1)^n}{\Gamma(n-\nu)} \frac{d^n}{dx^n} \int_{x}^{1} \frac{u(s)}{(s-x)^{\nu-n+1}}\, ds.$$

Left-sided Caputo fractional derivative:
$${}_{-1}^{C}D_x^{\nu} u(x) = \frac{1}{\Gamma(n-\nu)} \int_{-1}^{x} \frac{u^{(n)}(s)}{(x-s)^{\nu-n+1}}\, ds.$$

Right-sided Caputo fractional derivative:
$${}_{x}^{C}D_1^{\nu} u(x) = \frac{(-1)^n}{\Gamma(n-\nu)} \int_{x}^{1} \frac{u^{(n)}(s)}{(s-x)^{\nu-n+1}}\, ds.$$

In this paper, we mainly deal with left-sided Riemann-Liouville fractional problems with homogeneous boundary/initial conditions. By a simple change of variables, the extension to other fractional problems is easy.

2.2. Fractional Lagrange interpolation. Throughout the paper, let $\{x_j\}_{j=1}^{N}$ be a set of distinct points in $[-1,1]$. Given $\mu \in (0,1)$, a $\mu$-fractional Lagrange interpolation basis is associated with the points $\{x_j\}_{j=1}^{N}$. The fractional derivatives of the basis functions at the collocation points, $j = 1, \dots, N$, can be represented exactly in terms of the standard Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$; the coefficients $\alpha_{nj}$ of this representation can be obtained by solving a linear system. Let $P_n(x)$ denote the Legendre polynomial of order $n$ (see [6]).

3. Riemann-Liouville fractional Birkhoff interpolation. Let $\mathcal{P}_n$ be the set of all algebraic polynomials of degree at most $n$. In the following, we consider two special cases.

4. Preconditioning fractional spectral collocation (PFSC). In this section, we use two examples to introduce the preconditioning scheme.

4.1. An initial value problem. Consider a fractional initial value problem of the form (4.1). The fractional spectral collocation scheme leads to a linear system (4.2), in which $a = (a(y_1), a(y_2), \dots, a(y_N))^T$. The unknown vector $u$ is an approximation of the vector of the exact solution $u(x)$ at the points $\{x_j\}_{j=1}^{N}$. Consider the matrix $B_{y \to x}^{(-\nu)}$ as a right preconditioner for the linear system (4.2). By (3.2), we obtain the right preconditioned linear system (4.3), which reduces to a system (4.4) whose coefficient matrix is a perturbation of the identity. After solving (4.4), we obtain $u$ by $u = B_{y \to x}^{(-\nu)} v$.

Example 1. We consider the fractional differential equation (4.1), where the function $f(x)$ is chosen such that the exact solution of (4.1) is known in closed form. Let $\{x_j\}_{j=1}^{N}$ be the Gauss-Jacobi points as in Remark 2.1 and $\{y_j\}_{j=1}^{N}$ be the Gauss-Legendre points as in Remark 3.2. We compare condition numbers, numbers of iterations (using BiCGSTAB in Matlab with TOL $= 10^{-9}$), and maximum point-wise errors of FSC and PFSC (see Figure 1). Observe from Figure 1 (left) that the condition number of FSC behaves like $O(N^{1.6})$, while that of the PFSC scheme remains constant even for $N$ up to 1024. As a result, the PFSC scheme requires only about 7 iterations to converge (see Figure 1 (middle)), while the usual FSC scheme requires many more iterations, with a degradation of accuracy, as depicted in Figure 1 (right).

4.2. A boundary value problem. Consider a fractional boundary value problem of the form (4.5). The fractional spectral collocation method leads to a linear system (4.6), in which $a = (a(y_1), a(y_2), \dots, a(y_{N-1}))^T$. The unknown vector $u$ is an approximation of the vector of the exact solution $u(x)$ at the points $\{x_j\}_{j=1}^{N-1}$. Consider the matrix $B_{y \to x}^{(-\nu)}$ as a right preconditioner for the linear system (4.6). By (3.4), we obtain the right preconditioned linear system (4.7), which reduces to a system (4.8). After solving (4.8), we obtain $u$ by $u = B_{y \to x}^{(-\nu)} v$.

Example 2. We consider the fractional differential equation (4.5) with $\nu = 1.9$, $a(x) = 2 + \sin(4\pi x)$, $b(x) = 2 + \cos x$.
The function $f(x)$ is chosen such that the exact solution of (4.5) is known in closed form. Let $\{x_j\}_{j=0}^{N}$ be the Chebyshev points of the second kind (also known as Gauss-Chebyshev-Lobatto points), defined as $x_j = \cos(j\pi/N)$, $j = 0, 1, \dots, N$, and let $\{y_j\}_{j=1}^{N-1}$ be the Gauss-Jacobi points as in Remark 3.4. We compare condition numbers, numbers of iterations (using BiCGSTAB in Matlab with TOL $= 10^{-11}$), and maximum point-wise errors of FSC and PFSC (see Figure 2). Observe from Figure 2 (left) that the condition number of FSC behaves like $O(N^{3.8})$, while that of the PFSC scheme remains constant even for $N$ up to 1024. As a result, the PFSC scheme requires only about 13 iterations to converge (see Figure 2 (middle)), while the FSC scheme fails to converge (when $N \geq 16$) within $N$ iterations, as depicted in Figure 2 (right).

5. Concluding remarks. We numerically show that the Birkhoff interpolation preconditioning techniques of [5,3] remain effective for fractional spectral collocation schemes [7,8,2] based on fractional Lagrange interpolation. The preconditioned coefficient matrix is a perturbation of the identity matrix. The condition number is independent of the number of collocation points, and the preconditioned linear system can be solved by an iterative solver within a few iterations. The application of the preconditioning FSC scheme to multi-term fractional differential equations is straightforward.
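As an illustration of the right-preconditioned solve described in §4, the following is a minimal Python/SciPy sketch (the paper's experiments use Matlab). It assumes the FSC discretization matrix D, the fractional Birkhoff integration matrix B, and the right-hand side f have already been assembled; their construction, which follows §3, is not reproduced here:

```python
# Right-preconditioned FSC solve: solve (D @ B) v = f with BiCGSTAB,
# then recover u = B v. D is the FSC discretization matrix and B the
# fractional Birkhoff integration matrix, both assumed precomputed.
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

def solve_pfsc(D: np.ndarray, B: np.ndarray, f: np.ndarray,
               tol: float = 1e-9) -> np.ndarray:
    n = f.shape[0]
    # Apply D @ B matrix-free, without forming the product explicitly.
    op = LinearOperator((n, n), matvec=lambda v: D @ (B @ v))
    v, info = bicgstab(op, f, rtol=tol)  # 'tol=' on older SciPy versions
    if info != 0:
        raise RuntimeError(f"BiCGSTAB did not converge (info={info})")
    return B @ v  # u = B v, the solution values at the collocation points
```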
Native Language Cognate Effects on Second Language Lexical Choice

We present a computational analysis of cognate effects on the spontaneous linguistic productions of advanced non-native speakers. Introducing a large corpus of highly competent non-native English speakers, and using a set of carefully selected lexical items, we show that the lexical choices of non-natives are affected by cognates in their native language. This effect is so powerful that we are able to reconstruct the phylogenetic language tree of the Indo-European language family solely from the frequencies of specific lexical items in the English of authors with various native languages. We quantitatively analyze non-native lexical choice, highlighting cognate facilitation as one of the important phenomena shaping the language of non-native speakers.

Introduction
Acquisition of the vocabulary and semantic knowledge of a second language, including appropriate word choice and awareness of subtle word meaning contours, is recognized as a notoriously hard task, even for advanced non-native speakers. When non-native authors produce utterances in a foreign language (L2), these utterances are marked by traces of their native language (L1). Such traces are known as transfer effects, and they can be phonological (a foreign accent), morphological, lexical, or syntactic. Specifically, psycholinguistic research has shown that the choice of lexical items is influenced by the author's L1, and that non-native speakers tend to choose words that happen to have cognates in their native language. Cognates are words in two languages that share both a similar meaning and a similar phonetic (and, sometimes, also orthographic) form, due to a common ancestor in some protolanguage. The definition is sometimes also extended to words that have similar forms and meanings due to borrowing. Most studies on cognate facilitation have been conducted with few human subjects, focusing on few words, and with an experimental setup in which participants were asked to produce lexical choices in an artificial setting. We demonstrate that cognates affect lexical choice in L2 spontaneous production on a much larger scale. Using a new and unique large corpus of non-native English that we introduce as part of this work, we identify a focus set of over 1000 words, and show that they are distributed very differently across the "Englishes" of authors with various L1s. Importantly, we go to great lengths to guarantee that these words do not reflect specific properties of the various native languages, the cultures associated with them, or the topics that may be relevant for particular geographic regions. Rather, these are "ordinary" words, with very little culture-specific weight, that happen to have synonyms in English that may reflect cognates in some L1s, but not all of them. Consequently, they are used differently by authors with different linguistic backgrounds, to the extent that the authors' L1s can be identified through their use of the words in the focus set. The signal of L1 is so powerful that we are able to reconstruct a linguistic typology tree from the distribution of these words in the Englishes witnessed in the corpus. We propose a methodology for creating a focus set of highly frequent, unbiased words that we expect to be distributed differently across different Englishes simply because they happen to have synonyms with different etymologies, even though they carry very limited cultural weight.
Then, we show that simple lexical semantic features (based on the focus set of words) suffice for clustering together English texts authored by speakers of "closer" languages; we generate a phylogenetic tree of 31 languages solely by looking at lexical semantic properties of the English spoken by non-native speakers from 31 countries. The contribution of this work is twofold. First, we introduce the L2-Reddit corpus: a large corpus of highly advanced, fluent, diverse, non-native English, with sentence-level annotations of the native language of each author. Second, we lay out sound empirical foundations for the theoretical hypothesis of a cognate effect in the L2 of non-native English speakers, highlighting the cognate facilitation phenomenon as one of the important factors shaping the language of non-native speakers. After discussing related work in Section 2, we describe the L2-Reddit corpus in Section 3. Section 4 details the methodology we use and our results. We analyze these results in Section 5, and conclude with suggestions for future research.

Related Work
The language of bilinguals is different. The mutual presence of two linguistic systems in the mind of the bilingual speaker involves a significant cognitive load (Shlesinger, 2003; Hvelplund, 2014; Prior, 2014; Kroll et al., 2014); this burden is likely to have a bearing on the linguistic productions of the bilingual speaker. Moreover, the presence of more than one linguistic system gives rise to transfer: traces of one linguistic system may be observed in the other language (Jarvis and Pavlenko, 2008). Several works addressed the translation choices of bilingual speakers, either within a rich linguistic context (e.g., given a source sentence), or decontextualized. For example, de Groot (1992) demonstrated that cognate translations are produced more rapidly and accurately than translations that do not exhibit phonetic or orthographic similarity with a source word. This observation was further articulated by Prior et al. (2007), who showed that translation choices of L2 speakers were positively correlated with cross-linguistic form overlap of a stimulus word with its target language translations. Prior et al. (2011) emphasized that "bilinguals are sensitive to the degree of form overlap between the translation equivalents in the two languages, and show a preference toward producing a cognate translation". As an example, they showed that the preferred translation of the Spanish incidente to English was incident, and not the alternative translation event, despite the much higher frequency of the latter. More recent work is consistent with previous research and advances it by highlighting phonologically mediated cross-lingual influences on visual word processing of same- and different-script bilinguals (Degani and Tokowicz, 2010; Degani et al., 2017). Cognate facilitation was also studied using eye tracking (Libben and Titone, 2009; Cop et al., 2017), demonstrating that the reading of bilinguals is influenced by the orthographic similarity of words with their translation equivalents in another language. Crucially, much of this research has been conducted in a laboratory experimental setup; this implies a small number of participants, a small number of target words, and focus on a very limited set of languages.
While our research questions are similar, we present a computational analysis of the effects of cognates on L2 productions on a completely different scale: 31 languages, over 1000 words, and thousands of speakers whose spontaneous language production is recorded in a very large corpus. Corpus-based investigation of non-native language has been a prolific field of recent research. Numerous studies address syntactic transfer effects on L2. Such influences from L1 facilitate various computational tasks, including automatic detection of highly competent non-native writers (Tomokiyo and Jones, 2001; Bergsma et al., 2012), identification of the mother tongue of English learners (Koppel et al., 2005; Tsvetkov et al., 2013; Malmasi et al., 2017), and typology-driven error prediction in learners' speech (Berzak et al., 2015). English texts produced by native speakers of a variety of languages have been used to reconstruct phylogenetic trees, with varying degrees of success (Nagata and Whittaker, 2013; Berzak et al., 2014). Syntactic preferences of professional translators were exploited to reconstruct the Indo-European language tree (Rabinovich et al., 2017). Our study is also corpus-based, but it stands out as it focuses not on the distribution of function words or (shallow) syntactic structures, but rather on the unique use of cognates in L2. From the lexical perspective, L2 writers have been shown to produce more overgeneralizations and to use more frequent words and words with a lower degree of ambiguity (Hinkel, 2002; Crossley and McNamara, 2011). Several studies addressed cross-linguistic influences on semantic acquisition in L2, investigating the distribution of collocations (Siyanova-Chanturia, 2015; Kochmar and Shutova, 2017) and formulaic language (Paquot and Granger, 2012) in learner corpora. We, in contrast, address highly fluent, advanced non-natives in this work. Nastase and Strapparava (2017) presented the first attempt to leverage etymological information for the task of native language identification of English learners. They sowed the seeds for the exploitation of etymological clues in the study of non-native language, but their results were very inconclusive. In contrast to the learner corpora that dominate studies in this field (Granger, 2003; Geertzen et al., 2013), our corpus contains spontaneous productions of advanced, highly proficient non-native speakers, spanning over 80K topical threads, by 45K distinct users from 50 countries (with 46 native languages). To the best of our knowledge, this is the first attempt to computationally study the effect of L1 cognates on L2 lexical choice in the productions of competent non-native English speakers, certainly at such a large scale.

The L2-Reddit corpus
One contribution of this work is the collection, organization, and annotation of a large corpus of highly fluent non-native English. We describe this new and unique corpus in this section.

Corpus mining
Reddit is an online community-driven platform consisting of numerous forums for news aggregation, content rating, and discussions. As of 2017, it has over 200 million unique users, ranking as the fourth most visited website in the US. Content entries are organized by areas of interest called subreddits, ranging from main forums that receive much attention to smaller ones that foster discussion on niche areas. Subreddit topics include news, science, movies, books, music, fitness, and many others.
Collection of author metadata
We collected a large dataset of posts (both initial submissions and subsequent comments) using an API especially designed to provide search capabilities over Reddit content. We focused on several subreddits (r/Europe, r/AskEurope, r/EuropeanCulture, r/EuropeanFederalists, r/Eurosceptics) whose content is generated by users who specified their country as a flair (metadata attribute). Although categorized as 'European', these subreddits are used by people from all over the world, expressing views on politics, legislation, economics, culture, etc. In the absence of a restrictive policy, multiple flair alternatives often exist for the same country, e.g., 'CROA' and 'Croatia' for Croatia. Additionally, distinct flairs are sometimes used for regions, cities, or states of big European countries, e.g., 'Bavaria' for Germany. We (manually) grouped flairs representing the same country into a single cluster, reducing 489 distinct flairs to 50 countries, from Albania to Vietnam. The posts in the Europe-related subreddits constitute our seed corpus, comprising 9M sentences (160M tokens) by over 45K distinct users.

Dataset expansion
A typical user activity in Reddit is not limited to a single thread, but rather spreads across multiple, not necessarily related, areas of interest. Once the authors' country is determined based on their European submissions, their entire Reddit footprint can be associated with their profile and, therefore, with their country of origin. We extended our seed corpus by mining all submissions of users whose country flair is known, querying all Reddit data spanning the years 2005-2017. The final dataset thus contains over 250M sentences (3.8B tokens) of native and non-native English speakers, where each sentence is annotated with its author's country of origin. The data covers posts by over 45K authors and spans over 80K subreddits.

Focus on "large" languages
For the sake of robustness, we limited the scope of this work to (countries whose L1s are) the Indo-European (IE) languages, and only to those countries whose users had at least 500K sentences in the corpus. Additionally, we excluded multilingual countries, such as Belgium and Switzerland. Consequently, the final set of Reddit authors considered in this work originate from 31 countries, which represent the three main IE language families: Germanic (Austria, Denmark, Germany, Iceland, Netherlands, Norway, Sweden); Romance (France, Italy, Mexico, Portugal, Romania, Spain); and Balto-Slavic (Bosnia, Bulgaria, Croatia, the Czech Republic, Latvia, Lithuania, Poland, Russia, Serbia, Slovakia, Slovenia, Ukraine). In addition, we have data authored by native English speakers from Australia, Canada, Ireland, New Zealand, the UK, and the US.

Correlation of country annotation with L1
We view the country information as an accurate, albeit not perfect, proxy for the native language of the author. We acknowledge that the L1 information is noisy and may occasionally be inaccurate. We therefore evaluated the correlation of the country flair with L1 by means of supervised classification: our assumption is that if we can accurately distinguish among users from various countries using features that reflect language, rather than culture or content, then such a correlation indeed exists. We assume that the native language of speakers "shines through" mainly in their syntactic choices.
Consequently, we opted for (shallow) syntactic structures, realized by function words (FW) and n-grams of part-of-speech (POS) tags, rather than geographical and topical markers, which are best reflected by content words. Aiming to disentangle the effect of the native language, we randomly shuffled texts produced by all authors from each country, thereby "blurring out" any topical (i.e., subreddit-specific) or authorial trace. Consequently, we assume that the separability of texts by country can be attributed to the only distinguishing linguistic variable left: the dimension of the native language of a speaker. We classified 200 chunks of randomly sampled 100 sentences from each country into (i) native vs. non-native English speakers, (ii) the three IE language families, and (iii) 45 individual L1s, where the six English-speaking countries are unified under the native-English umbrella. Using over 400 function words and the top-300 most frequent POS-trigrams, we obtained 10-fold cross-validation accuracies of 90.8%, 82.5%, and 60.8% for the three scenarios, respectively. We conclude, therefore, that the country flair can be viewed as a plausible proxy for the native language of Reddit authors.

Initial preprocessing
Several preprocessing steps were applied to the dataset. We (i) removed text by users who changed their country flair within their period of activity; (ii) excluded non-English sentences; and (iii) eliminated sentences containing single non-alphabetic tokens. The final corpus comprises over 230M sentences and 3.5B tokens.

Evaluation of author proficiency
Unlike most corpora of non-native speakers, which focus on learners (e.g., ICLE (Granger, 2003), EFCAMDAT (Geertzen et al., 2013), or the TOEFL dataset), our corpus is unique in that it is composed by fluent, advanced non-native speakers of English. We verified that, on average, Reddit users possess excellent, near-native command of English by comparing three distinct populations: (i) Reddit native English authors, defined as those tagged for one of the English-speaking countries: Australia, Canada, Ireland, New Zealand, and the UK (we excluded texts produced by US authors due to the high ratio of the US immigrant population); (ii) Reddit non-native English authors; and (iii) a population of English learners, using the TOEFL dataset, where the proficiency of authors is classified as low, intermediate, or high. We compared these populations across various indices, assessing their proficiency with several commonly accepted lexical and syntactic complexity measures (Lu and Ai, 2015; Kyle and Crossley, 2015). Lexical richness was evaluated through the type-to-token ratio (TTR), the average age of acquisition (in years) of lexical items (Kuperman et al., 2012), and the mean word rank, where the rank was retrieved from a list of the entire Reddit dataset vocabulary, sorted by word frequency in the corpus. Syntactic complexity was assessed using the mean length of T-units (TU; the minimal terminable unit of language that can be considered a grammatical sentence) and the ratio of complex T-units (those containing a dependent clause) to all T-units in a sentence. Table 1 reports the results. Across almost all indices, the level of Reddit non-natives is much higher than that of even the advanced TOEFL learners, and almost on par with Reddit natives.

L1 cognate effects on L2 lexical choice
4.1 Hypotheses
Cognates are words in two languages that share both a similar meaning and a similar form.
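A sketch of the supervised check described above (classification of 100-sentence chunks from function-word counts, with 10-fold cross-validation); the classifier choice and the omission of the POS-trigram features are simplifying assumptions, as the text does not specify the learning algorithm:

```python
# 10-fold cross-validated country classification of 100-sentence
# chunks from function-word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def country_cv_accuracy(chunks, labels, function_words):
    """chunks: list of 100-sentence strings; labels: one country per
    chunk; function_words: the fixed FW vocabulary (~400 items)."""
    fw_counts = CountVectorizer(vocabulary=function_words)
    clf = make_pipeline(fw_counts, LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, chunks, labels, cv=10)
    return scores.mean()
```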
Our main hypothesis is that non-native speakers, when required to pick an English word that has a set of synonyms, are more likely to select a lexical item that has a cognate in their L1. We therefore expect the effect of L1 cognates to be reflected in the frequency of their English counterparts in the spontaneous productions of L2 speakers. Moreover, we expect similar effects, perhaps to a lesser extent, in the contextual usage of certain words, reflecting collocations and subtle contours of word meanings that are transferred from L1. The different contexts that certain words are embedded in (in the Englishes of speakers with different L1 backgrounds) can be captured by means of distributional semantics. Furthermore, we hypothesize that the effect of L1 is powerful to an extent that facilitates clustering of the Englishes produced by non-natives with "similar" L1s; specifically, L1s that belong to the same language family. "Similar" L1s may reflect both typological and areal closeness: for example, we expect the English spoken by Romanians to be similar both to the English of Italians (as both are Romance languages) and to the English of Bulgarians (as both are Balkan languages). Ultimately, we aim to reconstruct the IE language phylogeny, reflecting the historical and areal evolution of the subsets of Germanic, Romance, and Balto-Slavic languages over thousands of years, from non-native English only. While lexical transfer from L1 is a known phenomenon in learner language, we hypothesize that its signal is present also in the language of highly competent non-native speakers. Mastering the nuances of lexical choice, including subtle contours of word meaning and the correct context in which words tend to occur, are key factors in advanced language competence. The L2-Reddit corpus provides a perfect environment for testing this hypothesis.

Selection of a focus set of words
Our goal is to investigate non-native speakers' choice of lexical items in English. We address this task by defining a set of English words that have at least one synonym; ideally, we would like the various synonyms to have different etymologies and, in particular, different cognates in different language families. English happens to be a particularly good choice for this task, since in spite of its Germanic origins, much of its vocabulary evolved from Romance, as a great number of words were borrowed from Old French during the Norman occupation of Britain in the 11th century. To trace the etymological history of English words we used Etymological Wordnet (EW), a database that contains information about the ancestors of over 100K English words, about 25K of them in contemporary English (de Melo, 2014). For each word recorded in EW, the full path to its root can be reconstructed. Intuitively, an English word with Latin roots may exhibit higher (phonetic and orthographic) proximity to its counterparts in the Romance languages. Conversely, an English word with a Proto-Germanic ancestor may better resemble its equivalents in Germanic languages. We selected from EW all the nouns, verbs, and adjectives. For each such word w, we identified the synset of w in WordNet, choosing only the first (i.e., most prominent) sense of w (and, in particular, the one corresponding to the most frequent part-of-speech (POS) category of w in the L2-Reddit dataset). Then, we retained only those words that had synonyms, and only those whose synonyms had at least two different etymological paths, i.e., synonyms rooted in different ancestors. For example, we retained the synset {heaven, paradise}, since the former is derived from Proto-Germanic *himin-, while the latter is derived from Greek παράδεισος (via Latin and Old French). Furthermore, to capture the bias of non-native speakers toward their L1 cognate, it makes sense to focus on sets of easily interchangeable synonyms, e.g., {divide, split}; an unbalanced synset, whose members are not freely interchangeable, is less suitable for this purpose.
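The first-sense synonym extraction described above can be sketched with NLTK's WordNet interface; the example word and the printed output are illustrative only:

```python
# Synonyms of a word's first (most prominent) WordNet sense.
# Requires: import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def first_sense_synonyms(word, pos=wn.NOUN):
    synsets = wn.synsets(word, pos=pos)
    if not synsets:
        return set()
    # Lemma names of the first sense only, excluding the query word.
    lemmas = {l.name().replace("_", " ") for l in synsets[0].lemmas()}
    return lemmas - {word}

# Output depends on the WordNet version; for 'heaven' it includes 'paradise'.
print(first_sense_synonyms("heaven"))
```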
For example, we retained the synset {heaven, paradise}, since the former is derived from Proto-Germanic *himin-, while the latter is derived from Greek παράδεισος (via Latin and Old French). Furthermore, to capture the bias of non-native speakers toward their L1 cognate, it makes sense to focus on a set of easily interchangeable synonyms, e.g., {divide, split}. In contrast, consider an unbal- Eliminating cultural bias Although our Reddit corpus spans over 80K topical threads and 45K users, posts produced by authors from neighboring countries may carry over markers with similar geographical or cultural flavor. For example, we may expect to encounter soviet more frequently in posts by Russians and Ukrainians, wine in texts of French or Italian authors, and refugees in posts by German users. While they may be typical to a certain population group, such terms are totally unrelated to the phenomenon we address here, and we therefore wish to eliminate them from the focus set of words. A common way to identify elements that are statistically over-represented in a particular population, compared to another, is log-odds ratio informative Dirichlet prior (Monroe et al., 2008). We employed this approach to discover words that were overused by authors of a certain country, where posts from each country (a category under test) were compared to all the others (the background). We used the strict log-odds score of −5 as a threshold for filtering out terms associated with a certain country. 7 Among the terms eliminated by this procedure were genocide for Armenia, hockey for Canada and independence for the UK. The final focus set of words thus consists of neutral, ubiquitous sets of synonyms, varying in their etymological roots. It comprises 540 synonym sets and 1143 distinct words. 7 The threshold was set by preliminary experiments, without any further tuning. Model We hypothesize (Section 4.1) that L1 effects on lexical choice are so powerful, even with advanced non-native speakers, that it is possible to reconstruct the IE language phylogeny, reflecting historical and areal evolution over thousands of years, from nonnative English only. We now describe a simple yet effective framework for clustering the Englishes of authors with different L1s, integrating both word frequencies and semantic word representations of the words in our focus set (Section 4.2). Data cleanup and abstraction Aiming to learn word representations for the lexical items in our focus set, we want the contextual information to be as free as possible from strong geographical and cultural cues. We therefore process the corpus further. First, we identified named entities (NEs) and systematically replaced them by their type. We used the implementation available in the spacy Python package, 8 which supports a wide range of entities (e.g., names of people, nationalities, countries, products, events, book titles, etc.), at state-of-the-art accuracy. Like other web-based user generated content, the Reddit corpus does not adhere to strict casing rules, which has detrimental effects on the accuracy of NE identification. To improve the tagging accuracy, we applied a preprocessing step of 'truecasing', where each token w was assigned the case (lower, upper, or upper-initial) that maximized the likelihood of the consecutive tri-gram w pre , w, w post in the Corpus of Contemporary American English (COCA). 9 For example, the trigram 'the us people' was converted to 'the US people', but 'let us know' remained unchanged. 
When a tri-gram was not found in the COCA n-gram corpus, we employed a fallback to unigram probability estimation. Additionally, we replaced all non-English words with the token 'UNK', and all web links, subreddit (e.g., r/compling) and user (u/userid) pointers with the 'URL' token. (The cleaned, abstracted subset of the corpus is also available at http://cl.haifa.ac.il/projects/L2; the cleanup code is available at https://github.com/ellarabi/reddit-l2.)

Distance estimation and clustering

Bamman et al. (2014) introduced a model for incorporating contextual information (such as geography) in learning vector representations. They proposed a joint model for learning word representations in situated language, a model that "includes information about a subject (i.e., the speaker), allowing to learn the contours of a word's meaning that are shaped by the context in which it is uttered". Using a large corpus of tweets, their joint model learned word representations that were sensitive to geographical factors, demonstrating that the usage of wicked in the United States (meaning bad or evil) differs from that in New England, where it is used as an adverbial intensifier (my boy's wicked smart). We leveraged this model to uncover linguistic variation grounded in the different L1 backgrounds of non-native Reddit speakers. We used equal-sized random samples of 500K sentences from each country to train a model of vector representations. The model comprises a representation of every vocabulary item in each of the 31 Englishes; e.g., 31 vectors are generated for the word fatigue, presumably reflecting the subtle divergences of word semantics rooted in the various L1 backgrounds of the authors.

In order to cluster together Englishes of speakers with "similar" L1s, we need a measure of distance between two English texts. This measure is based on two constituents: word frequencies and word embeddings. Given two English texts originating from different countries, we computed for each word w in our focus set (i) the difference in the frequency of w in the two texts; and (ii) the distance between the vector representations of w in these texts, estimated via the cosine similarity of the two corresponding word vectors. We employed the popular weighted product model to integrate the two components. The word vector component was assigned a higher weight as the frequency of w in the collection increases; this is motivated by the intuition that learning the semantic relationships of a word benefits from vast usage examples. We therefore weigh the embedding constituent proportionally to the word's frequency in the dataset, and assign the complementary weight to the difference of frequencies. Formally, given two English texts E_Li and E_Lj, with L_i and L_j native languages, and given a word w in the focus set, let f_i and f_j denote the frequencies of w in E_Li and E_Lj, respectively. Let p_w be the probability of w in the entire collection. We further denote the vector space representation of w in E_Li by v_i, and the representation of w in E_Lj by v_j. Then, the distance between E_Li and E_Lj with respect to the word w is:

D_ij(w) = |f_i - f_j|^(1-p_w) * (1 - cos(v_i, v_j))^(p_w)    (1)

The final distance between E_Li and E_Lj is given by averaging D_ij(w) over all words in the focus set FS:

D_ij = (1/|FS|) * Σ_{w ∈ FS} D_ij(w)

Finally, we constructed a symmetric distance matrix M (31 × 31) by setting M[i, j] = D_ij. We used Ward's hierarchical clustering (scipy.cluster.hierarchy.linkage, https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html) with the Euclidean distance metric to derive a tree from the distance matrix M.
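A minimal sketch of Equation 1 and the clustering step, assuming the per-country frequency and embedding dictionaries have already been built (all names and the flat-cluster count are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def word_distance(f_i, f_j, v_i, v_j, p_w):
    """Weighted product model (Equation 1): the embedding term carries
    weight p_w (the word's corpus probability), the frequency term 1 - p_w."""
    cos_dist = 1.0 - np.dot(v_i, v_j) / (np.linalg.norm(v_i) * np.linalg.norm(v_j))
    return abs(f_i - f_j) ** (1.0 - p_w) * cos_dist ** p_w

def english_distance(i, j, freqs, vecs, p, focus_set):
    """D_ij: average of D_ij(w) over the focus set."""
    return np.mean([word_distance(freqs[i][w], freqs[j][w],
                                  vecs[i][w], vecs[j][w], p[w])
                    for w in focus_set])

def build_tree(countries, freqs, vecs, p, focus_set, n_flat=4):
    # Symmetric distance matrix M with zero diagonal, then Ward linkage.
    M = np.array([[english_distance(i, j, freqs, vecs, p, focus_set)
                   for j in countries] for i in countries])
    Z = linkage(squareform(M, checks=False), method="ward")
    return Z, fcluster(Z, t=n_flat, criterion="maxclust")
```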
We considered several other weighting alternatives, including the assignment of constant weights to the two factors in Equation 1; they all resulted in inferior outcomes. We also corroborated the relative contribution of the two components by using each of them alone. While considering only frequencies resulted in a slightly inferior outcome (see Section 4.5), using word representations alone produced a completely arbitrary result.

Results

The resulting tree is depicted in Figure 1. The reconstructed language typology reveals several interesting observations. First, and as expected, all native English speakers are grouped together into a single, distant sub-tree, implying that the similarities exhibited by the lexical choices of native speakers go beyond geographical and cultural differences. The Englishes of non-native speakers are clustered into three main language families: Germanic, Romance, and Balto-Slavic. Notably, Spanish-speaking Mexico is clustered with its Romance counterparts. The firm Balto-Slavic cluster reveals historical relations between languages by generating coherent sub-branches: the Czech Republic and Slovakia, Latvia and Lithuania, as well as the relative proximity of Serbia and Croatia. In fact, former Yugoslavia is clustered together, except for Bosnia, which is somewhat detached. Similar close ties can be seen between Austria and Germany, and between Portugal and Spain. Another interesting phenomenon is captured by the English texts of authors from Romania: their language is assigned to the Balto-Slavic family, implying that deep-rooted areal and cultural Balkan influences left their traces in the Romanian language, which, in turn, is reflected in the English productions of native Romanian authors. Unfortunately, we cannot explain the location of Iceland. A geographical view mirroring the language phylogeny is presented in Figure 3. Flat clusters were obtained from the hierarchy using the scipy fcluster method with default parameters.

Figure 1: Language typology reconstructed from non-native Englishes using features reflecting lexical choice. Countries that belong to the same phylogenetic family (according to the gold tree) share an identical color; e.g., Iceland is colored purple, like other Germanic languages, even though it is assigned to the Romance cluster.

This outcome, obtained using only lexical semantic properties (word frequencies and word embeddings) of English authored by various non-native speakers, is a strong indication of the power of L1 influence on L2 speakers, even highly fluent ones. These results are strongly dependent on the choice of focus words: we carefully selected words that, on the one hand, lack any cultural or geographical bias toward one group of non-natives, but on the other hand have synonyms with different etymologies. As an additional validation step, we generated a language tree using exactly the same methodology but a different set of focus words. We randomly sampled 1143 words from the corpus, controlling for country-specific bias but not for the existence of synonyms with different etymologies. Although some of the intra-family ties were captured (in particular, all native speakers were clustered together), the resulting tree (Figure 2) is far inferior. We also conducted an additional experiment, including multilingual Belgium and Switzerland in the set of countries.
While the L1 of speakers cannot be determined for these two countries, presumably Belgium is dominated by Dutch and French, and Switzerland by German and French. Indeed, both countries were assigned to the Germanic language family in our clustering experiments.

Evaluation

To better assess the quality of the reconstructed trees, we now provide a quantitative evaluation of the language typologies obtained in the various experiments. We adopt the evaluation approach of Rabinovich et al. (2017), who introduced a distance metric between two trees, defined as the sum of the square differences between all leaf-pair distances in the two trees. More specifically, given a tree of N leaves, l_i, i ∈ [1..N], the distance between two leaves l_i, l_j in a tree τ, denoted D_τ(l_i, l_j), is defined as the length of the shortest path between l_i and l_j. The distance Dist(τ, g) between a generated tree τ and the gold tree g is then calculated by summing the square differences between all leaf-pair distances in the two trees:

Dist(τ, g) = Σ_{i<j} (D_τ(l_i, l_j) - D_g(l_i, l_j))²

We used the Indo-European tree in Glottolog as our gold standard, pruning it to contain the set of 31 languages considered in this work. For the sake of comparison, we also present the distance obtained for a completely random tree, generated by sampling a random distance matrix from the uniform (0, 1) distribution. The reported random tree evaluation score is averaged over 100 experiments. Table 3 presents the results. All distances are normalized to a zero-one scale, where the bounds, zero and one, represent the identical and the most distant tree with respect to the gold standard, respectively. As expected, the random tree is the worst one, followed closely by the tree reconstructed from a random sample of over 1000 words from the corpus (Figure 2). The best result is obtained by considering both word frequencies and representations, and it is only slightly superior to the tree reconstructed using word frequencies alone. The latter result corroborates the aforementioned observation (Section 4.3.2) and further posits word frequencies as the major factor affecting the shape of the obtained phylogeny.

Figure 3: A geographical view mirroring the language phylogeny, shown in world (on the left) and Europe (on the right) views. Countries assigned to the same flat cluster by the clustering procedure (Section 4.4) share an identical color; e.g., the wrongly assigned Iceland shares the red color with the Romance-language speaking countries. Countries not included in this work are uncolored.

Table 3: Normalized distance between a reconstructed tree and the gold tree; lower distances indicate a better result.

Features used | Distance
Random tree | 1.000
Randomly sampled words (Figure 2) | 0.857
Focus set, frequencies only | 0.497
Focus set, frequencies + embeddings (Figure 1) | 0.469
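The tree-distance metric is straightforward to implement; the sketch below assumes trees are given as undirected graphs with shared leaf labels and uses networkx for shortest paths (an implementation choice for this sketch, not necessarily the original one):

```python
import itertools
import networkx as nx

def leaf_pair_distances(tree, leaves):
    """D_tau(l_i, l_j): shortest-path length between every pair of leaves."""
    return {pair: nx.shortest_path_length(tree, *pair)
            for pair in itertools.combinations(sorted(leaves), 2)}

def tree_distance(generated, gold, leaves):
    """Dist(tau, g): sum of squared leaf-pair distance differences;
    normalize against a random-tree baseline externally, as in Table 3."""
    d_t = leaf_pair_distances(generated, leaves)
    d_g = leaf_pair_distances(gold, leaves)
    return sum((d_t[p] - d_g[p]) ** 2 for p in d_t)
```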
Analysis

The results described in Section 4.4 empirically support the intuition that cognates are one of the factors that shape lexical choice in the productions of non-native authors. In this section we perform a closer analysis of the data, aiming to capture the subtle yet systematic distortions that help distinguish between English texts of speakers with different L1s.

Quantitative analysis

Given a synonym set s ∈ FS, consisting of words w_1, w_2, ..., w_n, and two English texts with two different L1s, E_Li and E_Lj, we computed the counts of the synset words in these texts, and further normalized the counts by their total sum, yielding probabilities. We denote the probability distribution of a synset s = {w_1, w_2, ..., w_n} in E_Li by P_i^s = (p_i(w_1), p_i(w_2), ..., p_i(w_n)). The different usage patterns of a synonym set s across two Englishes can then be estimated using the Jensen-Shannon divergence (JSD) between the two probability distributions:

JSD(P_i^s, P_j^s) = ½ D_KL(P_i^s ‖ M) + ½ D_KL(P_j^s ‖ M), where M = ½ (P_i^s + P_j^s)    (2)

and D_KL denotes the Kullback-Leibler divergence. We expect "close" L1s to have lower divergence, whereas L1s from different language families will exhibit higher divergences.

Table 4 presents the top twenty synonym sets for the arbitrarily chosen Germany-Spain country pair, ranked by divergence (Equation 2). The overuse of hinder by German authors may be attributed to its German cognate behindern, whereas Spanish users' preference for impede is probably attributable to its Spanish equivalent impedir. A Spanish cognate for plantation, plantación, possibly explains the clear preference of Spanish native speakers for this alternative, compared to the more popular choice of German authors, grove, which has Germanic etymological origins. The {weariness, tiredness, fatigue} synset reveals the preference of Spanish native speakers for fatigue, whose Spanish equivalent fatiga resembles it to a great extent; weariness, however, is slightly more frequent in the texts of German speakers, potentially reflecting its Proto-Germanic ancestor *wōrīgaz. An interesting phenomenon is revealed by the synset {conceivable, imaginable}: while both words have Latin origins, imaginable is more ubiquitous in the English language, rendering it more frequent in texts of German native speakers, compared to the more balanced choice of Spanish authors. Usage patterns in {overdo, exaggerate} and {inspect, audit, scrutinize} can be attributed to the same phenomenon, where the German equivalent of inspect (inspizieren) resembles its English counterpart despite a different etymological root.

Table 5 presents example sentences written by Reddit authors with French and Italian L1s, further illustrating discrepancies in lexical choice (presumably) stemming from cognate facilitation effects. The French rapide is a translation equivalent of the English synset {rapid, quick, fast}, but its English cognate rapid is more constrained to contexts of movement or growth, rendering the collocation rapid check somewhat marked. The French noun approbation is more frequent in contemporary French than its (practically unused) English equivalent approbation; this makes its use in English sound unnatural. In our Reddit corpus, approbation appears 48 times in L1-French texts, compared to 5, 4, and 4 times in equal-sized texts by authors from the UK, Ireland and Canada, respectively. One of its frequent English synonym alternatives, {approval, acceptance}, would fit this context better. Finally, while the Italian expression sera precedente is common, its English equivalent precedent evening is very infrequent, yet it is used in the English productions of Italian speakers.

Conclusion

We presented an investigation of L1 cognate effects on the productions of advanced non-native Reddit authors. The results are accompanied by a large dataset of native and non-native English speakers, annotated for author country (and, presumably, also L1) at the sentence level. Several open questions remain for future research.
From a theoretical perspective, we would like to extend this work by studying whether the tendency to choose an English cognate is more powerful in L1s with both phonetic and orthographic similarity to English (Roman script) than in L1s with phonetic similarity only (e.g., Cyrillic script). We also plan to more carefully investigate the productions of speakers from multilingual countries, like Belgium and Switzerland. Another extension of this work may broaden the analysis to include additional language families.

Table 5: Example sentences by Reddit authors with French and Italian L1s.

L1 | Sentence
French | I have to go to the Dr. to do a rapid check on my heart stability.
French | Maybe put every name through a manual approbation pipeline so it ensures quality.
French | Polls have shown public approbation for this law is somewhere between 58% and 65%, and it has been a strong promise during the presidential campaign.
Italian | The event was even more shocking because the precedent evening he wasn't sick at all.

There are also various potential practical applications of this work. First, we plan to exploit the potential benefits of our findings for the task of native language identification of (highly advanced) non-native authors, in various domains. Second, our results will be instrumental for the personalization of language learning applications, based on the L1 background of the learner. For example, error correction systems can be enhanced with the native language of the author to offer root cause analysis of subtle discrepancies in the usage of lexical items, considering both their frequencies and context. Given the L1 of the target audience, lexical simplification systems can also benefit from cognate cues, e.g., by providing an informed choice of potentially challenging candidates for substitution with a simplified alternative. We leave such applications for future research.
2018-05-24T10:24:47.000Z
2018-05-24T00:00:00.000
{ "year": 2018, "sha1": "dc894c67f7d54641dca22c54e2ec79ba521a8c45", "oa_license": "CCBY", "oa_url": "http://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00024", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "71ea14c80e2efee31b18b578f2b3ad89de328070", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
21665598
pes2o/s2orc
v3-fos-license
Reduced serotonin levels after a lifestyle intervention in obese children: association with glucose and anthropometric measurements

Amelia Marti del Moral. Department of Food Science and Physiology, University of Navarra, C/ Irunlarrea, Pamplona, Navarra, Spain. e-mail: amarti@unav.es

Ojeda-Rodríguez A, Morell-Azanza L, Azcona-Sanjulián MC, Martínez JA, Ramírez MJ, Marti A; and GENOI members*. Reduced serotonin levels after a lifestyle intervention in obese children: association with glucose and anthropometric measurements. Nutr Hosp 2018;35:279-285

INTRODUCTION

Serotonin (5-hydroxytryptamine or 5-HT) is a biogenic amine, synthesized from the essential amino acid L-tryptophan, involved in the regulation of energy balance through its actions in both the central nervous system (CNS) and peripheral tissues (1). Central serotonin (5%) suppresses appetite and decreases food intake (2), and hence it indirectly influences body fat, which is an important determinant of insulin resistance and glucose levels (3). However, most serotonin (95%) is released into the bloodstream, mainly by intestinal cells, but also by pancreatic β-cells, adipocytes and osteoclasts (4,5). Recent work indicates that peripheral serotonin serves as a secreted hormone that regulates metabolic function in multiple tissues (6-8). Specifically, peripheral serotonin modifies glucose metabolism, participating in both glucose homeostasis and hepatic gluconeogenesis in cellular and animal studies (9-13). These effects of peripheral serotonin on metabolism have driven a renewed interest in the study of serotonin, even as a therapeutic molecule for obesity and diabetes treatment (6-8).

Childhood obesity is a major public health problem worldwide, with an alarming trend in Europe and in Spain (14). A number of comorbidities associated with obesity in the pediatric population increase cardiovascular risk (15). Lifestyle interventions, however, are able to reduce cardiovascular risk factors in obese children (16-20), but there is little information on the role of peripheral serotonin in weight loss interventions. Therefore, our aim was to evaluate plasma serotonin levels after a lifestyle intervention in obese children and their possible association with changes in glucose and adiposity measurements.

METHODS

The NUGENOI study (nutrigenomics and childhood obesity) was conducted by members of the GENOI group (Navarra Study Group of Childhood Obesity) in 2009. NUGENOI, an uncontrolled clinical trial (NCT01329367), is a ten-week intervention study involving 54 obese children and adolescents from Navarra. The weight loss program is based on moderate caloric restriction together with nutritional education and familial involvement. Children and their parents received personal training in nutritional and physical education throughout the ten-week intervention period (17-20).

The study followed the ethical standards recognized in the Declaration of Helsinki (Brazil, October 2013), the Rules of Good Clinical Practice (EEC 111/3976/88, July 1990) and the current legislation regulating clinical research in humans (Royal Decree 561/1993). The project was approved and supervised by the Ethics Committee on Human Research of the University of Navarra (42/2005).
SUBJECTS

In the study, 71 overweight or obese children and adolescents (7-15 years) were recruited at the Pediatric Endocrinology Unit of the Clínica Universidad de Navarra and the Pediatric Department of the Complejo Hospitalario de Navarra, according to the criteria of Cole et al. (21). All of them were Spanish, or foreigners schooled in Spain for at least one year. Participants with a major psychiatric illness, significant neurological disease, bulimia nervosa, familial hyperlipidemia, or any sort of major cardiovascular or respiratory complication were excluded.

Fifty-four subjects agreed to participate in the study and signed the informed consent, but only 44 subjects (22 boys and 22 girls) completed the dietary intervention (drop-out rate 18.5%), carried out during two different periods (April-June and September-December, 2010) in order to ease follow-up. They were distributed according to the response, based on the change in the Standard Deviation Score for Body Mass Index (BMI-SDS; median equal to 0.5). Thus, the subjects who lost > 0.5 BMI-SDS were considered high responders (HR; n = 22) and those who lost ≤ 0.5 BMI-SDS low responders (LR; n = 22).

DIETARY INTERVENTION

The child, accompanied by his/her parent or tutor, underwent ten weekly follow-up dietetic consultations for diet monitoring, weight control and nutritional education (17-20). The adherence to the ten weekly individual sessions was 93% in the total population.

The dietary intervention was carried out by a registered dietitian with the support of pediatricians. On the first visit, participants were prescribed an energy restriction dietary program in a range from 10% to 40%, depending on the degree of obesity presented (22). First, the energy expenditure of the participants was calculated according to the Schofield equation (23), adapted to age and sex. Nevertheless, diets with an energy intake of less than 1,300 kcal/day or greater than 2,200 kcal/day were not prescribed.

The distribution of the energy intake along the day was 20% at breakfast, 5-10% at the morning snack, 30-35% at lunch, 10-15% at the afternoon snack, and 20-25% at dinner. Daily macronutrient intake was distributed in the following nutrient-caloric percentages (24): 55% carbohydrates, 30% fat and 15% protein. A semi-quantitative Food Frequency Questionnaire (FFQ) previously validated in Spain (25), containing 132 food items, was filled out to evaluate the dietary patterns of the participants. The adherence to the fixed full-day dietary plan (five meals) was evaluated weekly during the personal interviews with the registered dietitian.

ANTHROPOMETRIC AND CLINICAL MEASUREMENTS

Anthropometric measurements (body weight, height, BMI, BMI-SDS, fat mass, waist circumference, hip circumference and waist-to-hip ratio) were performed by trained personnel with previously calibrated equipment. Triplicate measures were performed and the average was taken as the final value. Subjects were measured in a large room, barefoot, in their underwear, wearing an examination gown.
Body weight was determined using a digital scale (TBF-410, Tanita®, Tokyo, Japan). Height was measured using a stadiometer (Seca® 220, Vogel & Halke, Germany). BMI was calculated from the weight and height measurements. BMI in childhood is a determinant of BMI in adulthood, and it allows monitoring of overweight or obesity from childhood to adulthood (26). In addition, in the pediatric population BMI should be referred to the sex and age of each participant, by calculating the BMI standard deviation score (BMI-SDS). For each subject, BMI-SDS derives from the difference between his/her own BMI values and age- and sex-specific cut-points taken from Spanish reference growth charts (27). Waist circumference (WC) and hip circumference (HC) were measured using a non-stretchable measuring tape (type SECA® 200). WC was measured as the smallest horizontal girth between the costal margins and the iliac crests at minimal respiration. HC was taken as the greatest circumference at the level of the greater trochanter (the widest portion of the hip) on both sides. Pubertal developmental stage was determined according to Tanner staging (28). In addition, the children's body composition was measured using bioelectrical impedance analysis equipment (TBF-410, Tanita®, Tokyo, Japan).

Measurements were taken before and after the intervention at the same time of day, except for weight and height, which were measured weekly in order to keep a rigorous control of weight loss.

BIOCHEMICAL MEASUREMENTS

Blood draws were performed before and after the study. Blood extraction was performed by specialized nurses through a BD Vacutainer® system (Becton Dickinson, GB), after overnight fasting. Venous blood samples were collected in ethylenediaminetetraacetic acid (EDTA) tubes and separated into plasma and serum aliquots by centrifugation (3,500 rpm, 4 °C, 15 min). After centrifugation, plasma (10 ml) and serum (5 ml) were stored in three tubes each and frozen at -80 °C. Triglycerides, total cholesterol, cholesterol linked to high-density lipoproteins (HDL-cholesterol), glucose and insulin were determined by enzymatic colorimetric assays using the Hitachi 911 analyzer (Roche Diagnostics, Basel, Switzerland). The fraction of cholesterol linked to low-density lipoproteins (LDL-cholesterol) was calculated with the Friedewald formula (29). Insulin resistance and insulin sensitivity were calculated according to the homeostasis model assessment of insulin resistance (HOMA-IR = [insulin levels x glucose levels]/405) and the quantitative insulin sensitivity check index (QUICKI = 1/[logarithm of insulin levels + logarithm of glucose levels]), respectively.

Determination of serotonin and its metabolite 5-HIAA from plasma was performed by adding 50 µl of 0.4 N perchloric acid containing 0.1% metabisulfite and 1 nM EDTA per 50 µl of sample. The sample was then centrifuged (13,000 rpm, 2 min) in order to discard the pellet; 75 µl of the supernatant were removed, and perchloric acid (75 µl) was added. After a second centrifugation (13,000 rpm, 2 min), the resulting supernatant was used for the determination of serotonin and 5-HIAA.
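For reference, the two insulin-derived indices defined above translate directly into code; the unit conventions (insulin in µU/ml and glucose in mg/dl, to which the constant 405 corresponds, and base-10 logarithms for QUICKI) are the usual ones for these formulas and are stated here as assumptions:

```python
import math

def homa_ir(insulin_uU_ml, glucose_mg_dl):
    """HOMA-IR = (insulin x glucose) / 405."""
    return insulin_uU_ml * glucose_mg_dl / 405.0

def quicki(insulin_uU_ml, glucose_mg_dl):
    """QUICKI = 1 / (log10(insulin) + log10(glucose))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# e.g., fasting insulin 15 uU/ml with glucose 90 mg/dl:
print(homa_ir(15, 90))   # 3.33
print(quicki(15, 90))    # ~0.319
```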
For serotonin analysis, high performance liquid chromatography (HPLC) was used. The mobile phase consisted of 16% methanol and 80% aqueous solution, containing 0.05 M potassium phosphate monobasic (KH2PO4), 0.16 nM octanesulfonic acid (SOS) and 0.1 mM EDTA, and was injected at a flow rate of 1 ml/min and at a pH value of 3. Serotonin and 5-HIAA were detected with a Waters® 717 plus Autosampler injector (Waters, USA), which injected 40 µl of sample onto a reverse-phase column Spherisorb® ODS-2 (5 µm, 15 x 0.46 cm, Waters) connected to an amperometric detector DECADE® (Antec Leyden, Zoeterwoude, the Netherlands) with a range of 20 amps. In order to quantify both compounds, a specific HPLC program was used (Empower 2.1.5.4, Waters®, USA), which compared the area generated by the peak with the standard-reference area (serotonin: 1,000 pg; 5-HIAA: 500 pg).

STATISTICAL ANALYSES

Stata 12.0 for Windows (version 12.0, Texas, USA) was used for the statistical analyses. Plasma serotonin levels were log-transformed to follow a normal distribution. Paired t-tests were used to compare pre- and post-intervention variables in participants. Multiple linear regression analyses were fitted to estimate associations between plasma serotonin levels and blood glucose or anthropometric measures after adjustment for potential confounders (age and sex, or pre-intervention variables). Values are shown as arithmetic mean (standard deviation/95% confidence interval). The level of statistical significance was set at p < 0.05.

RESULTS

Forty-four obese children (50% males, aged 7-15 years) completed the ten-week lifestyle intervention. Anthropometric and biochemical variables of the obese children (HR and LR groups) are given in Table I; HR and LR subjects had similar pre-intervention measurements.

Moreover, a significant association between pre-intervention serotonin and glucose levels was found in HR and LR subjects (Fig. 2). Curiously, 22% of the variability in baseline glucose levels is explained by its linear dependence on plasma serotonin. Thus, an increase of 0.05 mg/dl in glucose levels was explained by an increase of one unit of serotonin (nmol/l), after adjusting for potential confounders. The change in plasma serotonin significantly (p < 0.05) predicted 29% of the variance in changes in glucose levels in the HR group (Fig. 3) in a multiple linear regression after adjustment for baseline glucose levels and age. Post-intervention serotonin and glucose levels were also significantly associated in HR and LR subjects (R2 = 0.140, B = 0.055, p = 0.042; data not shown).

Finally, multiple regression models were fitted in order to assess the association between pre- and post-intervention serotonin levels and anthropometric measures in the total population (HR and LR groups). Notably, significant associations were found between serotonin levels and body weight (pre-intervention, R2 = 0.452, B = -0.109, p = 0.044; post-intervention, R2 = 0.384, B = -0.174, p = 0.013), BMI (post-intervention, R2 = 0.290, B = -0.048, p = 0.013) and BMI-SDS (pre-intervention, R2 = 0.194, B = -0.026, p = 0.006) (Table II).

DISCUSSION

In the present study a ten-week lifestyle intervention was conducted in obese children and adolescents, based on a moderate caloric restriction diet so as not to compromise the growth and development of the population. Our intervention was able to reduce adiposity indices and some biochemical markers, thus lowering cardiometabolic risk. It is worth mentioning that, from a pediatrician's perspective, it is favorable to have no increase in body weight when treating obese children (30). Furthermore, Reinehr et al.
indicated that an improvement in body composition and cardiometabolic risk can be seen with a 0.2 decrease in BMI-SDS, while greater benefits occur when losing at least 0.5 BMI-SDS (31). Indeed, in our study a 0.49 BMI-SDS reduction was obtained, together with an improvement in lipid and glucose profiles. We and other researchers have conducted successful programs for childhood obesity including lifestyle modification, a moderately calorie-reduced diet, nutritional education and family involvement (22,32).

The novelty of this study is that plasma serotonin levels were notably decreased after a ten-week lifestyle intervention, and that they were associated with anthropometric and glucose measurements. Little research focused on the relationship between serotonin and weight status in human subjects can be found in the literature. In normal-weight, anorexia nervosa and obese adult subjects, plasma tryptophan levels are diminished after dietary restriction, suggesting that serotonin synthesis could also be reduced (33-38). In a similar way, our results indicate that serotonin levels were reduced in obese children following a moderately calorie-restricted diet.

Concerning the potential involvement of serotonin levels in glucose metabolism, we observed that plasma serotonin levels were significantly associated with blood glucose in our obese children population. Peripheral serotonin appears to exert its action on different tissues, metabolic pathways and endocrine organs involved in glucose homeostasis, which may explain the contradictory effects reported in the literature (39,40). In healthy subjects, a positive correlation between platelet serotonin and glucose levels was observed before and after an oral glucose load test (41). Furthermore, it has been observed that plasma 5-HIAA was associated with fasting plasma glucose in patients with metabolic syndrome (42,43) and diabetes (44).

As mentioned before, few studies in humans show a relationship between peripheral serotonin and glucose metabolism. In animals, however, it has been reported that glucose stimulates the release of serotonin into the blood by pancreatic β-cells (9,11) and enterochromaffin cells (12). Moreover, in animal hepatocytes, serotonin promotes hepatic gluconeogenesis and decreases glucose uptake, which would result in higher circulating glucose levels (10,13). These preliminary results from animal studies could help to understand the association between circulating serotonin and glucose levels.

The strengths of this study include: first, measurements in young subjects not confounded by chronic obesity-related disorders; second, the overweight/obese subjects achieved weight loss in a short-term dietary intervention; and third, a standardized intervention with similar lifestyle recommendations given to a relatively homogeneous group. On the other hand, the weaknesses of the study are: first, the absence of a normal-weight children group; second, the reduced sample size of our population, and the fact that no sample was available for tryptophan measurement; and third, the different pubertal stages of the participants, with intense growth and endocrine changes which may influence our results. To minimize this effect, each statistical model was adjusted for age and sex.
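To illustrate the structure of the adjusted models used throughout (e.g., the Figure 2 regression of glucose on serotonin with baseline BMI-SDS and age as covariates), the following sketch fits such a model on purely synthetic data; every number in it is invented, and only the model specification mirrors the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)                     # synthetic data only
n = 44                                              # study sample size
df = pd.DataFrame({"serotonin": rng.normal(600, 150, n),  # invented scale
                   "bmi_sds": rng.normal(3.0, 0.5, n),
                   "age": rng.uniform(7, 15, n)})
df["glucose"] = 85 + 0.05 * df["serotonin"] + rng.normal(0, 5, n)  # mg/dl

# Glucose on serotonin, adjusted for baseline BMI-SDS and age; rsquared
# and params["serotonin"] play the roles of the reported R^2 and B values.
model = smf.ols("glucose ~ serotonin + bmi_sds + age", data=df).fit()
print(round(model.rsquared, 3), round(model.params["serotonin"], 3))
```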
In summary, we have shown for the first time that plasma serotonin levels were decreased after a lifestyle intervention in obese children, and that they were associated with anthropometric indices (body weight, BMI, BMI-SDS) and blood glucose levels. Nevertheless, further studies are needed to confirm these findings in a larger population, and also to characterize the underlying mechanisms that link serotonin with body weight regulation and energy homeostasis.

Figure 1. Plasma serotonin levels pre- and post-lifestyle intervention in obese children from the high (A) and low (B) responder groups. Each dot represents the plasma serotonin level of a subject. The solid horizontal lines indicate the mean values of plasma serotonin pre- and post-intervention, respectively. *p < 0.05; **p < 0.01.

Figure 2. Association between pre-intervention serotonin and glucose levels in an obese children population (n = 44). *Multiple linear regression analysis after adjustment for baseline BMI-SDS and age.

Figure 3. Association between changes in serotonin and glucose levels after a ten-week lifestyle intervention in the HR group (n = 22). *Multiple linear regression adjusted for baseline glucose and age.

Table I. Anthropometric and biochemical variables pre- and post-lifestyle intervention in obese children according to the response. Values are expressed as mean (SD). Tanner I: infant; Tanner II: puberty; Tanner III: adult; BMI: body mass index; BMI-SDS: standard deviation score for BMI; HOMA-IR: homeostasis model assessment for insulin resistance; QUICKI: quantitative insulin sensitivity check index; 5-HIAA: 5-hydroxyindoleacetic acid; p*: p values for the comparison between pre- and post-intervention variables in obese children distributed by the response; p†: p values for the comparison of pre-intervention variables between high and low responders.

Table II. Association between pre- and post-intervention serotonin levels and anthropometric measurements in an obese children population (n = 44). Multiple linear regression adjusted for baseline sex and age. BMI: body mass index; BMI-SDS: standard deviation score for BMI.
2018-05-21T20:56:47.476Z
2018-03-01T00:00:00.000
{ "year": 2018, "sha1": "84986d1fbfae5061d4e73a83051f3b68d0f88fd0", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.20960/nh.1439", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "84986d1fbfae5061d4e73a83051f3b68d0f88fd0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
128300781
pes2o/s2orc
v3-fos-license
Auditable Blockchain Randomization Tool †

Abstract: Randomization is an integral part of well-designed statistical trials, and is also a required procedure in legal systems. The implementation of honest, unbiased, understandable, secure, traceable, auditable and collusion-resistant randomization procedures is a matter of great legal, social and political importance. Given the juridical and social importance of randomization, it is important to develop procedures in full compliance with the following desiderata: (a) Statistical soundness and computational efficiency; (b) Procedural, cryptographical and computational security; (c) Complete auditability and traceability; (d) Any attempt by participating parties or coalitions to spuriously influence the procedure should be either unsuccessful or be detected; (e) Open-source programming; (f) Multiple hardware platform and operating system implementation; (g) User friendliness and transparency; (h) Flexibility and adaptability for the needs and requirements of multiple application areas (like, for example, clinical trials, selection of jury or judges in legal proceedings, and draft lotteries). This paper presents a simple and easy to implement randomization protocol that assures, in a formal mathematical setting, full compliance with the aforementioned desiderata for randomization procedures.

Introduction: Bad and Good Practices in Randomization

Randomization is a technique used in the design of statistical experiments: in a clinical trial, for example, patients are randomly assigned to distinct groups receiving different treatments, with the goal of studying and contrasting their effects. Randomization is nowadays considered a gold standard in statistical practice; its motivation is to prevent systematic biases (like an unfair or tendentious assignment process) that could distort (unintentionally or purposely) the conclusions of the study. For further comments on randomization see [1-3]; for Bayesian perspectives see [4,5]. In the legal context, randomization (also known as sortition or allotment) is routinely used for the selection of jurors or judges assigned to a given judicial case; see [6]. For these applications, our initial quotation, from the Roman emperor Julius Caesar, suggests the highest standards of technical quality and auditability; see [7].

Rerandomization is the practice of rejecting and discarding (for whatever reason) a given randomized outcome, which is subsequently replaced by a new randomization. Repeated rerandomization can be used to completely circumvent the haphazard, unpredictable or aimless nature of randomization, allowing a premeditated selection of a final outcome of choice. There are advanced statistical techniques capable of blending the best characteristics of random and intentional sampling; see for example [8-12]. Nevertheless, rerandomization is often naively used, or abused, with the excuse of (subjectively) "avoiding outcomes that do not look random enough"; see for example [13,14]. In the legal context, spurious manipulations of the randomization process are often linked to fraud, corruption and similar maladies; see [6] and references therein. In order to comply with the best practices for randomization processes, the authors of [6] recommend the use of computer software having a long list of characteristics, for example, being efficient and fully auditable, well-defined and understandable, sound and flexible, secure and transparent.
Such requirements are expressed by the following (revised) desiderata for randomization procedures. Given the juridical and social importance of the themata under scrutiny, we believe that it is important to develop randomization procedures in full compliance with the following desiderata: (a) Statistical soundness and computational efficiency, see [15-18]; (b) Procedural, cryptographical and computational security, see [19-22]; (c) Complete auditability and traceability, see [23-25]; (d) Any attempt by participating parties or coalitions to spuriously influence the procedure should be either unsuccessful or be detected, see [26-28]; (e) Open-source programming; (f) Multiple hardware platform and operating system implementation; (g) User friendliness and transparency, see [29,30]; (h) Flexibility and adaptability for the needs and requirements of multiple application areas (like, for example, clinical trials, selection of jury or judges in legal proceedings, and draft lotteries), see [6].

Such requirements conflate several complementary characteristics that may seem, at first glance, incompatible. For example, strong security is often (but wrongly) associated with excessive secrecy, a doctrine known as "security by obscurity"; computer routines may be efficient but hard to audit; and mathematically well-defined algorithms may be perceived as hard to understand. The bibliographical references given in the desiderata stated above already hint at technologies that can be used to achieve a fully compliant randomization procedure, most preeminently the blockchain. This is the key technology supporting modern public ledgers, cryptocurrencies, and a host of related applications. A technical challenge for the application under scrutiny is the generation of pseudo-random number sequences that reconcile complementary properties related to computational efficiency, statistical soundness, and cryptographic security. In this respect, the excellent statistical and computational characteristics of linear recurrence pseudo-random number generators (or their modern descendants and relatives), like [16], can be reconciled with the needs concerning unpredictability and cryptographic security by appropriate starts and restarts of the linear recurrence generator. A sequence start for a linear recurrence generator is defined by a seed specified by a vector of (typically 1 to 64) integers, while a restart is defined by a jump-ahead or skip-ahead specified by a single integer (kept small relative to the generator's full period); see [22]. Unpredictable and cryptographically secure seeds and jump-aheads can be provided by high-entropy bit streams extracted from blockchain transactions, an idea that has already been explored in the works of [31-34]. The next section develops a possible implementation of a fully compliant core randomization protocol based on blockchain technology, and also makes a simple prototype available for study and further research. Moreover, in order to make it simple and easy to use, we developed the prototype on top of a readily available crypto-currency platform. We use Bitcoin for this example, but other alternatives like Ethereum or other cryptocurrencies whose miners work under the same incentive model can be used with minor adaptations.
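To make the seeding mechanics concrete, the sketch below expands a high-entropy byte string into a seed vector and a bounded jump-ahead; the SHA-256 expansion, the number of seed words and the jump bound are illustrative choices, and numpy's MT19937 merely stands in for a linear-recurrence generator of the kind cited above:

```python
import hashlib
import numpy as np

def seed_and_jump(entropy: bytes, n_words: int = 16, jump_bound: int = 2**20):
    """Derive a seed vector of n_words 32-bit integers and a small
    jump-ahead from a high-entropy byte string."""
    stream = b"".join(hashlib.sha256(entropy + bytes([i])).digest()
                      for i in range(n_words // 8 + 1))
    seed = np.frombuffer(stream[:4 * n_words], dtype=np.uint32)
    jump = int.from_bytes(hashlib.sha256(entropy + b"jump").digest()[:8],
                          "big") % jump_bound
    return seed, jump

entropy = bytes.fromhex("ab" * 112)     # stand-in for txid || block header
seed, jump = seed_and_jump(entropy)
rng = np.random.Generator(np.random.MT19937(seed))
rng.random(jump)   # crude skip-ahead by discarding draws; production code
                   # would use the generator's polynomial jump-ahead instead
```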
Results: Core Randomization Protocol in Blockchain

We intend to establish a protocol able to deliver pseudo-random numbers on demand, from an auditable and immutable ledger. The procedure starts as follows: the user (the party that wants to receive a random number) sends a Bitcoin transaction with a register of its purpose embedded in it. (One way to embed a message in a transaction is using the OP_RETURN script, which allows storing up to 40 bytes in a transaction.) The recipient of this transaction may be a proxy representing a competent authority, a pertinent regulatory agency, an agreed custodian, etc. When this transaction is first attached to the blockchain, we concatenate the transaction ID (a 32-byte hexadecimal number) and the block header (an 80-byte hexadecimal number). In case someone tries to generate more than one transaction for the same purpose, we simply take the one that was attached first. The resulting 112-byte hexadecimal number is the input for some known Verifiable Delay Function (VDF), which should be calibrated according to the purpose of the random number. For instance, a less critical purpose may use a VDF that delays the result by just a few seconds, or even skip the VDF step completely. A critical purpose, with significant interests involved, should have a more complex VDF, with a delay of minutes or even hours. The final result, after the VDF, will be the source for our seeds and jump-aheads.

With the aid of this protocol, one is able to obtain a different pseudo-random number for each user that demands it. Note that the user does not have any incentive to try to modify its transaction ID, because he does not have any control over the block header. We assume that the user and the miner are not the same person, so a miner will only be interested in trying to control his block header if he is paid to do so. Since the last stage of our protocol involves the calculation of a VDF, it will take the miner a certain amount of time to decide whether the block he has found is of interest to the user. Thus, he might even lose his block if some other miner broadcasts a block of his own before he finishes calculating the VDF. In the following subsection, the miner's payoff and the necessary delay T for the Verifiable Delay Function will be explicitly calculated.
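The entropy-derivation step of the protocol can be sketched as follows; note that the iterated hash below only emulates the delay of a VDF and is not efficiently verifiable, so a real deployment would substitute a genuine VDF construction:

```python
import hashlib
import time

def derive_entropy(txid_hex: str, header_hex: str, delay_s: float) -> bytes:
    """Concatenate the 32-byte txid with the 80-byte block header (112
    bytes total) and feed the result through a delay function."""
    assert len(txid_hex) == 64 and len(header_hex) == 160   # hex digits
    digest = hashlib.sha256(bytes.fromhex(txid_hex + header_hex)).digest()
    deadline = time.time() + delay_s                        # calibrated T
    while time.time() < deadline:       # placeholder delay, NOT a real VDF
        digest = hashlib.sha256(digest).digest()
    return digest
```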
Preventing Collusion for Spurious Manipulation

Suppose a malicious user tries to bribe a miner who controls a fraction p of the network's computational power. A prize P = nB, where B is the Bitcoin block reward, will be paid to the miner if he successfully mines what we call a "desirable block": a block that will deliver a random number in a set A chosen by the malicious user. Let also λ be the average rate of incoming blocks and q the probability of a randomly generated number being an element of A, i.e., the measure of the set of desirable results for the malicious user. Finally, let T be the expected amount of time needed for the VDF calculations. The moment a miner finds a block that can be accepted by the network, he faces the decision of broadcasting it before checking the VDF, or calculating the VDF before broadcasting. If he decides to check the VDF before broadcasting, he might start another attempt to find a block right away. First, we calculate the expected absolute payoffs for the first and second options, called E_1 and E_2, respectively.

E_1 will be larger than B, since the miner might issue a desirable block by chance:

E_1 = B + qP.    (1)

On the other hand, if the miner chooses to calculate the VDF first, he will receive the block reward and the prize P only if he eventually broadcasts a desirable block before the rest of the network:

E_2 = (B + P) q P{no other node finds a block before t = T} + (B + P) Σ_{i≥2} P{successfully mining a desirable block after i attempts} P{attacker finding and analyzing i blocks before another node mines one}.    (2)

The probabilities inside the summation can be calculated as the product of the probability of finding a desirable block after i attempts (a geometric distribution with probability of success q) and the probability of finding and checking i blocks before the rest of the network mines one. Considering

P{attacker finding and analyzing i blocks before another node mines one} = p^(i−1) e^(−(1−p)λT),

it follows that

E_2 = (B + P) Σ_{i≥1} (1−q)^(i−1) q p^(i−1) e^(−(1−p)λT) = (B + P) q e^(−(1−p)λT) / (1 − p(1−q)).

Finally, in order to make accepting the bribe not lucrative, we must have E_1 > E_2, i.e.:

λT > (1/(1−p)) log[((1+n)/(1+nq)) · (q/(1−p+pq))].

Since for every n > 0 we have (1+n)/(1+nq) < 1/q, if we choose λT* = (1/(1−p)) log[(1/q) · (q/(1−p+pq))], we guarantee that the attack will not be lucrative for any bribe P = nB. Also, since it can be assumed that p < 1/2, a value λT* = 2 log(2/(1+q)) < 2 log(2) will be high enough to prevent an attack for any bribe and any acceptable value of p.

Conclusions and Final Remarks

We formalized a simple and effective protocol to generate pseudo-random numbers on demand, in a fully auditable way. We have demonstrated that none of the involved parties has sufficient financial incentive to try to affect the random number outcome: the party that issues the transaction lacks this power, since it does not have any control over the block header; and the miners do not have sufficient financial incentives to collude with an attacker, provided a suitable Verifiable Delay Function is applied. The essentially decentralized, yet completely traceable and auditable nature of the protocol presented in this article makes the resulting randomization process eminently reliable without recourse to blind trust in any central authority. The authors believe that the adoption of such a protocol by the Brazilian Supreme Court (STF), as recommended in [6], would significantly increase public confidence in the judicial system and be a contributing factor for political and social stability. A simple prototype of the randomization tool described in this article is available in the supplementary materials; it is not intended to be used in a full-fledged application, but only to provide a working example of the key procedures.
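As a small numerical illustration of the bound derived in the collusion analysis above (under the stated assumption p < 1/2; Bitcoin's average block interval of roughly 600 seconds is likewise an assumption), the snippet below converts λT* into a concrete VDF delay:

```python
import math

def min_vdf_delay_s(q, p=0.5, block_interval_s=600.0):
    """lambda*T* = (1/(1-p)) * log((1/q) * q/(1-p+p*q)); with p = 1/2
    this reduces to 2*log(2/(1+q))."""
    lam_T = (1.0 / (1.0 - p)) * math.log(1.0 / (1.0 - p + p * q))
    return lam_T * block_interval_s

# For q = 0.1 and p = 1/2: lambda*T* = 2*log(2/1.1) ~ 1.196,
# i.e., roughly 718 s of required VDF delay on Bitcoin.
print(min_vdf_delay_s(q=0.1))
```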
2019-04-20T21:27:42.000Z
2019-04-21T00:00:00.000
{ "year": 2019, "sha1": "930ef01f7bbb941b85b6d70e893c594ff872759a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2504-3900/33/1/17/pdf?version=1589257057", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5b0f6ab03cdd2c194d6e2a486de23ea43e88b988", "s2fieldsofstudy": [ "Computer Science", "Law" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
18696271
pes2o/s2orc
v3-fos-license
Chemical Sensors – from Molecules, Complex Mixtures to Cells – Supramolecular Imprinting Strategies

Abstract: Methods of modern chemistry are a powerful tool in generating functional materials suitable as chemically sensitive layers to be combined with a variety of transducer principles. Molecular pits in polymers are formed by molecular imprinting; with suitable double-imprinting, e.g., PAHs can be detected down to the sub-µg/l level. The resulting selectivity patterns depend both on the polymerization temperature and on the template/monomer composition. Organic contaminants in water can either be assessed directly in the liquid phase or separated from the matrix by a porous Teflon membrane. In the latter case the detection limits can be reduced to the ppm level owing to the much lower noise level in the gaseous phase. Even complex processes such as engine oil degradation can be followed by suitably imprinted polymers. Pits on the nm to µm scale are reached by surface-templating polymers with microorganisms. The resulting layers show reversible, antibody-like interactions and are thus optimal sensor layers. The successful on-line detection of tobacco mosaic viruses (TMV) can be achieved with these surface-imprinted layers.

Introduction

One of the time-consuming key aspects in the development of chemical sensors [1] is the design of suitable sensitive layer materials. These should provide ideally adapted cavities for analyte incorporation, be chemically, mechanically and thermally stable, and be applicable to the respective device surfaces by methods compatible with industrial standard coating procedures. Mass-sensitive sensing is a popular method, as it has the great advantage that mass is the most fundamental physical property of any analyte; thus the resulting sensors can in principle be universally applied. Devices used are, e.g., quartz crystal microbalances (QCM) or surface acoustic wave resonators (SAW); they have been combined with molecular hollows engulfing the analyte molecule, such as cyclodextrins, paracyclophanes and calixarenes [2]. Although highly selective, the resulting sensor materials have the inherent drawback that they often require a somewhat elaborate method of synthesis. This can be overcome by using (molecular) imprinting methods [3], where the target analyte is used as a template during layer polymerization and leaves behind adapted cavities in the highly cross-linked material. In this paper we present some recent results obtained by different imprinting techniques that are improved, e.g., by using more than one template, by separating the analytes from the matrix, by following complex processes, or by applying biogenic templates.

Results and Discussion

Even trace amounts of organic solvent contamination in water pose a huge threat to the proper functionality of wastewater treatment plants. Therefore, they have to be detected and removed before reaching the biological clarifier. Although detection directly in the aqueous phase is in principle possible, it is worth considering an apparatus that allows the application of gas phase sensing.

Figure 1: Liquid and gas cell for determining organic trace contaminants in wastewater.

This is favorable due to the substantially lower noise level occurring in the oscillators driving the respective mass-sensitive device, which results from the decreased electronic loss of the sensor caused by viscous interactions between the sensor and its environment [5].
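Although not spelled out in the text, the standard Sauerbrey relation links such frequency shifts to the mass loading on a QCM; the sketch below, using textbook quartz constants, fixes the rough scale for the 10 MHz devices used here:

```python
def sauerbrey_mass_ng_per_cm2(delta_f_hz, f0_hz=10e6):
    """Sauerbrey relation for an AT-cut quartz resonator:
    delta_f = -2 f0^2 / (A sqrt(rho_q mu_q)) * delta_m."""
    rho_q = 2.648        # quartz density, g/cm^3
    mu_q = 2.947e11      # AT-cut shear modulus, g/(cm*s^2)
    c_f = 2.0 * f0_hz**2 / (rho_q * mu_q) ** 0.5   # Hz per (g/cm^2)
    return delta_f_hz / c_f * 1e9                  # -> ng/cm^2

# A 1 Hz noise floor on a 10 MHz crystal corresponds to ~4.4 ng/cm^2
# of resolvable areal mass loading:
print(sauerbrey_mass_ng_per_cm2(1.0))
```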
Figure 1 shows both experimental setups, for the liquid as well as the gaseous phase, used for these measurements. Whereas the first consists of a flow cell in which the sensitive area is directly exposed to the samples, in the second case the sensor chamber is separated from the water stream by means of a thin Teflon membrane with 200 nm pores. Preliminary experiments showed that this membrane is impermeable to water molecules, whereas organic solvents pass through readily. The results of this procedure are given in Figure 2, where typical sensor responses for either type of measurement are shown. In both cases toluene was used for templating. Comparing the two datasets with each other, one can clearly see that especially the response time is highly affected by the measuring environment. The sensor immersed in the liquid reacts much faster than the one with the Teflon membrane separating the measurement chamber and the flow chamber from each other. The resulting sensor effects are of the same order of magnitude; therefore the thin polymer material obviously is fully permeable to the toluene molecules. The system with the gas sensing chamber reacts sluggishly; however, it shows a much lower noise level (around 1 Hz compared with about 10 Hz for the fluid cell), thus leading to lower detection limits, namely less than one ppm, whereas in the liquid the limit is about two orders of magnitude higher. This is a result of the higher electronic damping that QCMs undergo in contact with a fluid, because part of the oscillation energy is dissipated into the adjacent liquid layers due to viscous coupling with the resonating device. Thus, by suitable alterations of the sensor housing, the resulting system can be tuned either to fast response times or to lower detection limits. At first sight it may seem illogical that the gas measurements proceed more slowly, because sensors exposed to gas streams usually have substantially shorter response times than in liquid samples. In this case, however, the sensor response is determined by the coupled kinetics of analyte evaporation from the water through the porous membrane and of the sensor-analyte interaction.

Of course, chemical sensors are not restricted to determining distinct analytes; more complex phenomena can also be evaluated. Although one could apply modern separation techniques in this case, sensors still offer the advantage of providing direct on-line measurements. However, a single sensor will usually fail here, as it is only capable of yielding a sum signal of all analyte changes. Therefore, an array of devices [6] with up to six channels or more, showing different selectivity patterns, is applied. In special cases, however, even processes in complex mixtures can be observed by means of one single sensor element, e.g.
when monitoring engine oil degradation [7], where the underlying oxidation processes are similar for different brands and types of oil. Therefore, it should be possible to design a sensor system that is capable of addressing all these oxidative processes and thus yielding a total cumulative signal. Organic polymers have been imprinted with the entire oil matrix of a fresh and a used material; the resulting sensors showed powerful selectivities. In this paper we want to concentrate on inorganic polymers, such as sol-gel materials. They have shown excellent properties concerning ruggedness, chemical tolerance and thermostability. When Ti-alkoxides are used as precursors, thick layers with very low electronic loss are generated. This allows for the construction of a sensor system in which the QCM is operated as the frequency-determining element of the oscillator circuit. An example of a degradation sensor mounted on a commercial oil level probe can be found in Figure 5. The system is equipped with an electronic mixer device so that the chemical effects are directly read out, i.e., they are already corrected for viscosity effects. Additionally, Figure 5 contains a sensor response curve at the change from fresh oil to waste oil. The resulting mass effect of 1.2 kHz can be measured with high statistical significance, especially considering the noise level in the range of 10 Hz. Thus, by means of suitably imprinted polymers, the complex processes occurring during the oxidative degradation of engine oils can be translated into a single frequency shift that contains the entirety of the chemical information. This gives a very straightforward measurand for the remaining lubricant usability, based on chemical (rather than physical) information.

Usually, in molecular imprinting a single template is used to achieve ideal selectivity. However, there are analytical tasks where it has turned out to be advantageous to apply more than one template. Polycyclic aromatic hydrocarbons, for example,
are members of a hazardous group of compounds that combine high toxicity and carcinogenic potential, and thorough efforts are made for their analysis [8]. Earlier measurements with single-imprinted chemical sensors showed good selectivities and excellent sensitivities [9]. The sensitivity patterns obtained for different templates led to the conclusion that the best re-inclusion is observed for molecules slightly larger than the template. To further investigate this phenomenon, mixtures of templates were tested [10]. Figure 6 shows a resulting set of data, where both the ratio between the two template molecules and the polymerization temperature are varied. As with the single molecular templates, the temperature again has a strong influence on the geometrical fit. At 20°C anthracene is obviously incorporated into the cavities generated by naphthalene, whereas the ones caused by pyrene are already too large to allow optimized interactions. This is the same observation as for single-imprinted layers, where we could show that at low temperatures the cavities produced are larger than the templates. Further evidence for this phenomenon is the fact that the enrichment of the analyte depends monotonically on the amount of naphthalene in the template mixture. At higher temperatures, where the geometrical arrangement between the forming polymer and the template is much better, the larger pyrene molecule determines the re-inclusion properties. The smaller template molecule here acts as a porogen and opens otherwise inaccessible sites in the material. This is underpinned by the fact that the signals achieved at 70°C are much higher than for the other layers, suggesting optimized uptake of the analyte.

Imprinting strategies are not restricted to analytes of molecular scale; much larger particles, such as enzymes or entire microorganisms, can also be used. The synthetic procedures, however, have to be slightly modified, as imprinting into bulk materials is not possible in this case. First, for microorganisms several µm in diameter the resulting layers would be much too thick to be useful for mass-sensitive detection. Second, as the analytes in this case are very bulky, they would diffuse through a three-dimensional polymer only very slowly (not to speak of the limitations imposed on template removal during synthesis), thus greatly increasing the sensor response time and making it useless for sensing applications. For this purpose, surface imprinting procedures have proven to be a suitable tool for sensor layer development. This principle has been successfully applied to yeast, which served as the microorganism used for the fundamental developments on the way to these sensors [11]. Figure 7 shows an AFM image of one of these layers, prepared by a surface stamping procedure, as well as the respective sensor response obtained with a 10 MHz QCM towards yeast. The microorganisms on the stamp adopt a hexagonal packing, and the pits left behind represent the typical size distribution of yeast cultures. These pits are optimally adapted to re-include cells from suspensions, which can also be seen in the sensor responses. The QCM frequency shift is perfectly reversible, and the 1200 Hz obtained is the expected value for a monolayer of yeast on the surface. Thus, the imprinted materials achieve antibody-like interaction properties towards the microorganisms used. The main difference to natural antibodies, however, is the ability to remove the cells from the material surface. This exactly meets the requirements of
chemical sensors, where both high selectivity and reversibility are desired, a combination that is often difficult to achieve.

Viruses are a group of microorganisms for which it would be highly interesting to develop selective sensor systems, especially because no convenient on-line method for their detection exists so far. This is a result of their size (below 1 µm), which makes them literally invisible to methods operating in the visible range of the electromagnetic spectrum (i.e., microscopy and scattering methods). The sensor layers are produced by a stamping method similar to that used for the yeast imprints; the procedure is shown in Figure 8. Thus, the stamp and the polymer coating are prepared separately, and the final material is again obtained by mechanically pressing the two substrates together. The polymerization takes place in the sub-micrometre region around the templates, which ensures ideal adaptation of the generated cavities by forming an optimized interaction network between the reacting oligomeric chains and the functional groups on the virus surface. An AFM image of such a functionalized polymer is given in Figure 9. The resulting pits can once again be clearly seen; however, the self-organization of the yeast cells was more pronounced. Nonetheless, the virus-templated materials show large frequency shifts on mass-sensitive transducers, as can be seen in Figure 10, where the sensor characteristic for such a TMV-imprinted material is given.

Figure 10: Sensor characteristic for a TMV-imprinted material

The characteristic shows several advantageous features: the dynamic range is comparably high (three orders of magnitude), and in the lower concentration range of the suspension the sensor system shows very favourable sensitivity, as the differential signal there is much more pronounced than at higher concentrations. Another main advantage of the system should be emphasized once more: these sensors represent a fast, on-line detection method for viruses and thus overcome the limitations of optical methods in the visible range. Compared with existing sensors for biological analytes [12-14], these imprinted layers represent artificial antibodies and thus combine biological selectivity with the stability of a man-made material.
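As a rough guide to how such QCM frequency shifts relate to surface mass loading, the Sauerbrey equation can be applied. The sketch below is a minimal illustration, not part of the original work: the electrode area is an assumed value, and for large, soft particles such as yeast cells the rigid-film assumption behind Sauerbrey holds only approximately.

```python
import math

def sauerbrey_mass_loading(delta_f_hz, f0_hz=10e6, area_cm2=0.2):
    """Convert a QCM frequency shift (Hz) into areal mass loading (ng/cm^2)
    and total mass on the electrode (ng) via the Sauerbrey equation.

    Assumes a thin, rigid, uniformly distributed film; f0 matches the
    10 MHz devices used here, while the electrode area is an
    illustrative assumption.
    """
    rho_q = 2.648    # density of quartz, g/cm^3
    mu_q = 2.947e11  # shear modulus of AT-cut quartz, g/(cm*s^2)
    # Sensitivity constant: ~0.226 Hz per ng/cm^2 for a 10 MHz crystal
    c_f = 2.0 * f0_hz ** 2 / math.sqrt(rho_q * mu_q)  # Hz per (g/cm^2)
    areal_ng_cm2 = abs(delta_f_hz) / c_f * 1e9
    return areal_ng_cm2, areal_ng_cm2 * area_cm2

# Example: the ~1200 Hz shift reported for a yeast monolayer
areal, total = sauerbrey_mass_loading(-1200.0)
print(f"{areal:.0f} ng/cm^2 areal loading, {total:.0f} ng on the electrode")
```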
Conclusion

Molecular imprinting into highly cross-linked materials opens up the way to a very straightforward strategy of sensor layer design for a variety of different analytes. These include small molecules as well as micrometre-sized biological species, and pure substances as well as complex mixtures. Compared with other synthetic strategies for the generation of molecular hollows, imprinting is a very fast and flexible way to obtain such layers. As long as compatibility is guaranteed, any cross-linked polymer can be used; the same flexibility can be observed for the potential templates. The resulting materials show selective re-inclusion of the template particle; sometimes almost antibody-like selectivities can be observed. This opens the way for rationally designed nano- and microfunctionality in both inorganic and organic materials that can be utilized for measuring purposes.

Experimental

General
10 MHz QCMs were prepared by screen-printing the desired electrode structure onto an AT-cut quartz blank of 10 mm or 15.5 mm diameter and subsequently burning off the organic residues. Mass-sensitive measurements were carried out with a home-made oscillator circuit connected to commercially available frequency counters (HP 53131A and Keithley 775A); the resulting sensor data are read out into a computer by self-written software.

Layer synthesis
Imprinted polystyrene layers for the detection of organic solvent traces in water were synthesized by mixing 210 µl of divinylbenzene (DVB), 90 µl of styrene, 3 mg of azo-isobutyronitrile (AIBN) and 4.4 mg of biphenyl (acting as an additional porogen) with 100 µl of the imprinting template, i.e., benzene or toluene. The resulting solution was spin-coated onto the QCM and polymerized overnight at 40°C in a saturated solvent atmosphere. Afterwards the sensors were washed with o-xylene and dried at 150°C to remove the template as well as the biphenyl.

The six sensitive materials used as array coatings in composting were prepared as follows:
PVA(H2O): For preparation, 10 mg of polyvinyl alcohol with a molar mass of 15,000 g mol⁻¹ is dissolved in 25 ml of deionized water. 60 mg of acrylic acid is added to this mixture together with a small amount of AIBN and pre-polymerized for 2 hours at 70°C.
PS(Lim): A mixture of 30 µl of styrene, 70 µl of divinylbenzene (DVB), 1 mg of AIBN and 300 µl of limonene is pre-polymerized for 15 min at 45°C. For spin-coating, the resulting solution was diluted 1+4 with limonene.
PS(EtAc): The polymer was synthesized by mixing 40 µl of styrene and 60 µl of divinylbenzene together with 1 mg of AIBN. After pre-polymerizing for 10 min at 70°C, 200 µl of ethyl acetate were added.
PU(PrOH) and PU(BuOH): For preparing the polyurethane, 1 g of DPDI was mixed with 1.97 g of bisphenol A (BPA), 0.22 g of phloroglucinol and 2 ml of tetrahydrofurane (THF). After dissolution, 970 µl of the respective alcohol (1-propanol or 1-butanol) were added to 30 µl of the pre-reacted solution; for spin-coating, both mixtures were diluted 1+30 with the template solvent.
PU(EtAc): Solutions of 2.5 mmol of DPDI, BPA, phloroglucinol and triethanolamine (TEA) in 2 ml of ethyl acetate were prepared, respectively. These were mixed in the following ratio: 500 µl ethyl acetate, 148 µl BPA, 90 µl phloroglucinol, 10 µl TEA and 231 µl DPDI. For preparing the layers, the solution was diluted 1+9 with ethyl acetate.
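The "1+X" notation used in these recipes denotes one volume of pre-polymer solution plus X volumes of diluent. A minimal sketch of how the resulting dilution factors can be tabulated (the helper function and its name are our illustration; the recipe values are transcribed from the text above):

```python
def dilution_factor(parts_solution: int, parts_diluent: int) -> float:
    """'1+X' shorthand: 1 volume of solution plus X volumes of diluent."""
    return (parts_solution + parts_diluent) / parts_solution

# Recipes from the text: (parts solution, parts diluent)
recipes = {
    "PS(Lim)":  (1, 4),   # diluted 1+4 with limonene
    "PU(PrOH)": (1, 30),  # diluted 1+30 with the template solvent
    "PU(BuOH)": (1, 30),
    "PU(EtAc)": (1, 9),   # diluted 1+9 with ethyl acetate
}

for name, (a, b) in recipes.items():
    print(f"{name}: {dilution_factor(a, b):.0f}-fold dilution")
```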
For the synthesis of PAH-imprinted polyurethanes, 4,4´-diisocyanato-diphenylmethane (DPDI), polyethylene glycol (molar mass 200; PEG200) and the template (mixtures) were dissolved in water-free pyridine, respectively. The resulting solutions contained 100 mg/ml of the monomer and were mixed in the following ratio: 545 µl DPDI, 455 µl PEG200 and 50 µl template, yielding a total of about 5% imprint molecule in the matrix (50 µl at 100 mg/ml gives 5 mg of template per 100 mg of monomers, i.e., 5/105 ≈ 4.8%). After 15 minutes of pre-polymerization, the mixture was spin-coated onto the device and the template removed by heating at 90°C for 45 minutes.

Figure 2: Sensor responses against toluene obtained in liquid and in gas phase.
Figure 3: Composter and sensor array used.
Figure 4: Trends for three key analytes obtained by an artificial neural network.
Figure 5: Commercial oil level probe with mounted sensor and frequency response of the system on the change from fresh oil to waste oil. The quartz resonator serves as the frequency-determining device in an oscillator circuit; consequently, active measurement with a frequency counter can be used instead of (expensive) network analyzer equipment.
Figure 6: Enrichment factors of polyurethane layers imprinted with different template mixtures at 20°C and 70°C.
Figure 7: AFM image of a yeast-imprinted polyurethane and sensor effect of that material towards a suspension of the template microorganism in buffer.
2014-10-01T00:00:00.000Z
2003-09-11T00:00:00.000
{ "year": 2003, "sha1": "8951f2e07a91fb3191b4c6b56bc0c1b467dab749", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/3/9/381/pdf?version=1403299843", "oa_status": "GOLD", "pdf_src": "CiteSeerX", "pdf_hash": "8951f2e07a91fb3191b4c6b56bc0c1b467dab749", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
11296406
pes2o/s2orc
v3-fos-license
Tissue-specific role of RHBDF2 in cutaneous wound healing and hyperproliferative skin disease

Objective
Gain-of-function (GOF) mutations in RHBDF2 cause tylosis. Patients present with hyperproliferative skin, and keratinocytes from tylosis patients' skin show an enhanced wound-healing phenotype. The curly bare mouse model of tylosis, carrying a GOF mutation in the Rhbdf2 gene (Rhbdf2 cub), presents with epidermal hyperplasia and shows an accelerated cutaneous wound-healing phenotype through enhanced secretion of the epidermal growth factor receptor family ligand amphiregulin. Despite these advances in our understanding of tylosis, key questions remain. For instance, it is not known whether the disease is skin-specific, whether the immune system or the surrounding microenvironment plays a role, and whether mouse genetic background influences the hyperproliferative-skin and wound-healing phenotypes observed in Rhbdf2 cub mice.

Results
We performed bone marrow transfers and reciprocal skin transplants and found that bone marrow transfer from C57BL/6 (B6)-Rhbdf2 cub/cub donor mice to B6 wildtype recipient mice failed to transfer the hyperproliferative-skin and wound-healing phenotypes to B6 mice. Furthermore, skin grafts from B6 mice to the dorsal skin of B6-Rhbdf2 cub/cub mice maintained the phenotype of the donor mice. To test the influence of mouse genetic background, we backcrossed Rhbdf2 cub onto the MRL/MpJ strain and found that the hyperproliferative-skin and wound-healing phenotypes caused by the Rhbdf2 cub mutation persisted on the MRL/MpJ strain.

Introduction
Tylosis, a genetic disease characterized by hyperproliferation of skin in the palms and soles, loss of hair, and oral leukoplakia [1], is caused by gain-of-function (GOF) mutations (p.I186T, p.P189L, and p.D188N) in the human rhomboid family protein RHBDF2 [1,2]. We recently showed that a spontaneous deletion of exons 2 through 6 in the Rhbdf2 gene in C57BL/6J mice that underlies the curly bare mutation (Rhbdf2 cub) yields a mutant protein lacking the cytosolic N-terminal domain (ΔN-RHBDF2) [3,4]. We also showed that this mutant protein specifically enhances secretion of the epidermal growth factor receptor (EGFR) family ligand amphiregulin (AREG) in various tissues, including skin and intestine [3]. Notably, Rhbdf2 cub mice exhibit complete hair loss and rapid ear-wound healing (assessed via ear-punch hole closure). Additionally, we developed a mouse model of human tylosis by using CRISPR/Cas9-mediated gene editing to generate mice carrying the human tylosis disease mutation p.P189L (p.P159L in mice). Consistent with the Rhbdf2 cub/cub phenotype, Rhbdf2 P159L/P159L mice exhibited severe epidermal hyperplasia and hyperkeratosis, and showed accelerated wound healing [5]. To test whether high AREG levels mediate the hyperproliferative-skin and wound-healing phenotypes, we crossed Rhbdf2 P159L/P159L mice with Areg-null mice (B6.Cg-Areg Mcub Rhbdf2 cub /J, hereafter referred to as Areg −/− mice), and found that Rhbdf2 P159L/P159L Areg −/− mice exhibited neither the hyperactive EGFR nor the hair-loss phenotype, indicating that increased AREG levels alone mediate the hyperproliferative-skin and wound-healing phenotypes [5].
Collectively, these studies suggest that AREG is a functional driver of tylosis; however, the role of the immune system and the effect of the genetic background on the tylosis phenotype remain unknown. Here, we tested the hypothesis that the hyperproliferative-skin and rapid wound-healing phenotypes observed in tylosis are tissue-specific and persist independently of the immune system. Using genetic approaches, bone marrow transplants, and reciprocal skin grafts, we show that a tissue-specific function of RHBDF2, rather than the surrounding microenvironment or the immune system, underlies this skin disease.

Methods

Animals
All animal procedures were performed in accordance with the guidelines of the Animal Care and Use Committee of The Jackson Laboratory and conformed to regulations in the Guide for the Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources, National Research Council, National Academy of Sciences, 8th edition, 2011). Euthanasia was performed in a manner consistent with the 2013 recommendations of the American Veterinary Medical Association (AVMA) Guidelines on Euthanasia. Mice were bred and maintained under modified barrier conditions at The Jackson Laboratory. To generate MRL/MpJ-Rhbdf2 cub/cub congenic mice, C57BL/6J-Rhbdf2 cub/cub mice were backcrossed onto the MRL/MpJ strain background for more than 20 generations. The following primer pairs were used to genotype Rhbdf2 cub/cub mice: Rhbdf2 cub forward: TGT GGA ATA CCC CCA AAG AAG C; Rhbdf2 cub reverse: ATA ACC CAT AGC AGA GGA GGC G; Rhbdf2 wildtype forward: TGC CCA CAC CGT ATC TGT TCT G; Rhbdf2 wildtype reverse: GTT TTG GAG ACT CAG TGC CCT G. B6.Cg-Areg Mcub Rhbdf2 cub /J mice are referred to as Rhbdf2 cub/cub Areg −/− mice.

Bone-marrow chimeras and ADVIA cell counts
To generate bone-marrow chimeras, two groups of recipient male B6 mice (15 mice/group) were first irradiated with a single lethal dose of 1000 cGy delivered by a Shepard Mark I irradiator containing 137Cs (J. L. Shepard and Assoc., San Fernando, CA). The first group then received bone marrow collected from femurs of male B6 mice, and the second group received bone marrow from femurs of male B6-Rhbdf2 cub/cub mice. For all mice, engraftment was via intravenous injection of 3 × 10⁶ cells in 200 μL of sterile RPMI 1640 medium. Post-engraftment, a complete blood count (Siemens ADVIA 120 Hematology System) was run on recipient mice to test for any differences in the rates of bone marrow engraftment. Twelve weeks post-engraftment, wound-healing assays were performed by punching 2-mm through-and-through holes in both the right and left ears of recipient mice using a surgical ear punch device (Napox KN-292B; Natsume Seisakusho Co.) [3]. Wound closure was assessed by measuring the percentage of ear-hole closure 4 weeks after wounding in recipient mice.

Reciprocal skin grafting
Mice were anesthetized with tribromoethanol (400 mg/kg IP), and an analgesic (buprenorphine 0.05 mg/kg SC) was administered. The fur was removed from the dorsolateral thorax with clippers, and the surgical site was disinfected with 10% povidone-iodine alternating with 70% ethanol. An oblong 8 × 10 mm full-thickness section of skin was excised bilaterally with curved scissors. The skin sections were placed in a petri dish containing cold sterile saline.
Once grafts were excised from paired mice for reciprocal transplantation, the skin grafts were fitted into the recipient sites. Scissors were used to trim the graft as necessary, and the graft was rotated such that fur growth on the graft would be in the direction opposite to the recipient's fur. A small amount of tissue adhesive was used to secure the grafts in position. A section of sterile non-adhesive gauze pad was placed over the grafts, and a self-adhesive bandage (VetRap) was placed around the thorax. Mice were examined daily, and the bandage was removed 5 days after surgery.

Isolation of primary fibroblasts and keratinocytes
Performed as described previously [5].

Bone marrow transplants fail to confer the wound-healing phenotype to slow-healer B6 mice
To test whether the regenerative phenotype could be transferred from B6.Rhbdf2 cub/cub to B6 slow-healer mice, we performed bone-marrow transfer experiments. Two-mm through-and-through holes were punched into the ears of recipient mice and analyzed. B6.Rhbdf2 cub/cub mice showed rapid wound healing, with up to 95% ear-hole closure 4 weeks post-wounding. Both B6 bone marrow recipient groups (mice receiving bone marrow from femurs of B6 mice and mice receiving bone marrow from femurs of B6.Rhbdf2 cub/cub mice) showed no difference in ear-punch hole diameter (data not shown). To test whether the recipient mice showed any differences in the rates of recovery from lethal irradiation, we performed a complete blood count analysis and observed similar rates of bone marrow recovery from lethal irradiation (data not shown). Thus, bone-marrow engraftment from B6.Rhbdf2 cub/cub donor mice into slow-healer B6 recipient mice failed to transfer the cutaneous rapid wound-healing phenotype, indicating that the immune system does not regulate the wound-healing phenotype observed in Rhbdf2 cub/cub mice.

Reciprocal skin grafts indicate tissue-specific function of Rhbdf2 cub
To test whether tissue-specific or non-tissue-specific effects of Rhbdf2 cub cause the hyperproliferative skin phenotype, reciprocal skin grafts were performed by placing full-thickness skin grafts from littermate control (B6 wildtype) mice onto the dorsal skin of B6-Rhbdf2 cub/cub mice, and skin grafts from B6-Rhbdf2 cub/cub mice onto B6 wildtype mice. In addition, skin grafts from B6-Rhbdf2 cub/cub Areg −/− mice, which present a full but wavy hair coat [3], were transplanted onto B6-Rhbdf2 cub/cub mice. Because all mice were congenic on the C57BL/6J background, all were histocompatible. After 12 weeks, the skin grafts maintained the phenotype of the donor animal (Fig. 1a), suggesting that the phenotype was tissue-specific and persisted independently of the surrounding microenvironment. Histological examination of hematoxylin and eosin (H&E)-stained slides revealed follicular dystrophy in Rhbdf2 cub/cub mice (Fig. 1b); however, Rhbdf2 cub/cub mice receiving skin grafts from either Rhbdf2 +/+ (Fig. 1c) or Rhbdf2 cub/cub Areg −/− mice (Fig. 1d) retained the skin phenotype of the donor animal, with no evidence of follicular dystrophy and normal hair growth. Additionally, skin grafts from Rhbdf2 cub/cub mice exhibited follicular dystrophy following transplantation onto Rhbdf2 +/+ mice (Fig. 1e). Lastly, skin grafts from Rhbdf2 cub/cub mice engrafted onto Rhbdf2 cub/cub Areg −/− mice (Fig. 1f) resulted in maintenance of the donor skin hair-loss phenotype (Fig. 1g). Together, these results indicate that a tissue-specific effect of Rhbdf2 cub underlies the tylosis phenotype.
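The ear-punch assays used in these experiments report healing as the percentage of ear-hole closure relative to the initial 2-mm punch. A minimal sketch of that computation (the diameter-based formula and the example values are our illustration, not taken from the paper; an area-based variant comparing squared diameters is also common):

```python
def percent_closure(initial_diameter_mm: float, final_diameter_mm: float) -> float:
    """Percentage of ear-hole closure based on hole diameter.

    Assumes closure is quantified on the diameter; an area-based
    variant would compare squared diameters instead.
    """
    return 100.0 * (initial_diameter_mm - final_diameter_mm) / initial_diameter_mm

# Example: a 2-mm punch that has narrowed to 0.1 mm by 4 weeks post-wounding
print(f"{percent_closure(2.0, 0.1):.0f}% closure")  # -> 95% closure
```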
Rhbdf2 gain-of-function accelerates cutaneous wound healing in MRL/MpJ 'healer' mice
To test the influence of genetic background on the Rhbdf2 cub wound-healing phenotype, we examined whether the Rhbdf2 cub mutation can accelerate wound healing in MRL/MpJ 'healer' mice, which have the capacity to regenerate ear hole-punch wounds without scarring [6]. To create a true congenic strain by moving the Rhbdf2 cub/cub mutation onto the MRL/MpJ background, B6-Rhbdf2 cub mice were backcrossed onto the MRL/MpJ background for more than 20 generations (Fig. 2a). We punched 2-mm through-and-through holes in the ears of MRL/MpJ and MRL-Rhbdf2 cub/cub mice, and observed that the Rhbdf2 cub mutation significantly accelerated wound healing in MRL/MpJ mice (Fig. 2b, c). Cross-sections of ears from MRL/MpJ and MRL/MpJ-Rhbdf2 cub/cub mice taken at 14 days post-wounding revealed an extensive degree of proliferation in the ears of MRL-Rhbdf2 cub/cub mice (Fig. 2d). Additionally, MRL/MpJ-Rhbdf2 cub/cub mouse embryonic fibroblasts (MEFs) (Fig. 2e) and mouse embryonic keratinocytes (MEKs) (Fig. 2f) produced significantly higher levels of AREG compared with MRL/MpJ wildtype MEFs and MEKs after stimulation with phorbol-12-myristate-13-acetate (PMA).

Discussion
Tylosis, a form of palmoplantar keratoderma, is a hyperproliferative skin disease associated with increased risk of developing esophageal cancer [7,8]. Currently there is no cure for tylosis or tylosis-associated carcinomas. Despite recent advances in the understanding of the genetic and biological factors underlying tylosis [9], including discoveries made by our group using the Rhbdf2 cub strain, a mouse model of tylosis that shows an accelerated wound-healing phenotype, key questions remain. In this study, we investigated whether the role of the Rhbdf2 cub mutation in tylosis is tissue-specific or non-tissue-specific, and whether the immune system or the surrounding microenvironment plays a role in tylosis. To address these questions, we performed bone-marrow transfer and reciprocal skin graft experiments. The bone marrow data indicate that the immune system does not regulate the regenerative phenotype observed in Rhbdf2 cub/cub mice, and the reciprocal skin graft data suggest that the tylosis phenotype is tissue-specific and persists independently of the surrounding microenvironment.

Fig. 1 Reciprocal skin grafts. a Representative image of a recipient mouse with a skin graft, showing a recipient B6-Rhbdf2 cub/cub mouse with a skin graft from a B6-Rhbdf2 +/+ mouse showing retention of hair growth at 12 weeks post-skin graft. b H&E-stained skin section of a female B6-Rhbdf2 cub/cub mouse, showing follicular dystrophy (indicated by arrowhead). Scale bar: 50 μm. c H&E-stained skin section of a female B6-Rhbdf2 cub/cub mouse with a B6-Rhbdf2 +/+ donor skin graft displaying normal hair growth with no evidence of follicular dystrophy (arrows). Scale bar: 50 μm. d H&E-stained skin section of a female B6-Rhbdf2 cub/cub mouse with a B6-Rhbdf2 cub/cub Areg −/− donor skin graft displaying normal hair growth with no evidence of follicular dystrophy (arrows). Scale bar: 50 μm.
e H&E-stained skin section of a female B6-Rhbdf2 +/+ mouse with a B6-Rhbdf2 cub/cub donor skin graft displaying follicular dystrophy (arrowhead). Scale bar: 50 μm. f H&E-stained skin section of a female B6-Rhbdf2 cub/cub Areg −/− mouse with no skin graft, displaying normal hair growth (arrow). Scale bar: 50 μm. g H&E-stained skin section of a female B6-Rhbdf2 cub/cub Areg −/− mouse with a B6-Rhbdf2 cub/cub donor skin graft, displaying follicular dystrophy (arrowhead). Scale bar: 50 μm.

In addition to yielding the new information on tylosis summarized above, this study provides data on the effect of genetic background on the tylosis phenotype. We backcrossed the Rhbdf2 cub mutation onto the MRL/MpJ 'healer' strain background and examined the wound-healing phenotype in congenic MRL/MpJ-Rhbdf2 cub/cub mice. The accelerated wound-healing, epidermal hyperplasia, and hair-loss phenotypes were retained in this congenic strain, suggesting that the tylosis phenotype persists independently of mouse strain background. Our study also sheds light on the cell types responsible for tylosis; currently, the primary cell types responsible for tylosis are not known. Results of the current study, together with previous findings in our laboratory, will be helpful in laying the foundation for our future studies. Our previous studies using Rhbdf2 cub mice provided information on the possible role of the pro-inflammatory cytokine tumor necrosis factor alpha (TNFA) [3]. RHBDF2 was shown by our group and others to be essential for stimulated secretion of TNFA; Rhbdf2 knockout mice fail to secrete TNFA in response to the bacterial endotoxin lipopolysaccharide (LPS) [3,10-12]. Thus, it is possible that GOF mutations in RHBDF2, such as Rhbdf2 cub/cub and Rhbdf2 P159L/P159L, influence TNFA secretion, contributing to tylosis. However, our recent gene-deletion studies, in which we deleted AREG and observed restoration of the normal skin phenotype in both Rhbdf2 cub/cub [3] and Rhbdf2 P159L/P159L mice [5], strongly argue against a role for TNFA in tylosis. Several lines of evidence point to keratinocytes as the primary cell type responsible for tylosis. First, keratinocytes are the major cell type producing AREG in skin [13,14], and RHBDF2 is predominantly expressed in the skin [3]. Second, keratinocytes from tylosis patients show a wound-healing phenotype: accelerated proliferation and migration through constitutive activation of EGFR signaling [1]. Together, these previous observations and the results from our present study provide valuable background information for future studies aimed at testing keratinocyte-specific effects of RHBDF2 in tylosis in skin tissue. In addition, based on our previous findings showing that increased AREG levels mediate the Rhbdf2 cub phenotype [3], and the findings of the present study, we propose that tylosis therapies should be targeted toward inhibition of AREG specifically in the skin.

Limitations
The primary cell types responsible for tylosis are not known. We plan to carry out future studies to identify the responsible cell types, and as a key component of this we are currently developing conditional Rhbdf2 cub mice.
2017-11-09T18:11:39.284Z
2017-11-07T00:00:00.000
{ "year": 2017, "sha1": "307155a9a474cd750754e8cd575f44ae2fe831e1", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-017-2899-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6344bbf3655c31c6dc488d9b743b0e10e5e432d8", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
250146412
pes2o/s2orc
v3-fos-license
Cysteine conjugate beta-lyase 2 (CCBL2) expression as a prognostic marker of survival in breast cancer patients

Objective
Cysteine conjugate beta-lyase 2 (CCBL2), also known as kynurenine aminotransferase 3 (KAT3) or glutamine transaminase L (GTL), plays an essential role in transamination and in cytochrome P450-mediated metabolism. Its correlation with some other cancers has been explored, but not yet with breast cancer (BC).

Methods
The mRNA and protein expression of CCBL2 in BC cell lines and patient samples was detected by RT-qPCR and immunohistochemistry (IHC). BC patients' clinical information and RNA-Seq expression data were acquired via The Cancer Genome Atlas (TCGA) database. Patients were categorized into high/low CCBL2 expression groups based on the optimal cutoff value (8.973) determined by the receiver operating characteristic (ROC) curve. We investigated the relationship between CCBL2 and clinicopathological characteristics using Chi-square tests, estimated diagnostic capacity using ROC curves, and drew survival curves using the Kaplan-Meier estimate. We compared survival differences using Cox regression and validated the results externally using the Gene Expression Omnibus (GEO) database. We evaluated enriched signaling pathways using gene set enrichment analysis (GSEA), explored the relationship between CCBL2 and relevant genes using the Tumor Immune Estimation Resource (TIMER) database, and used the Human Protein Atlas (HPA) for pan-cancer analysis and IHC.

Results
CCBL2 was overexpressed in normal human cell lines and tissues. CCBL2 expression was lower in BC tissues (n = 1104) than in normal tissues (n = 114), as validated by the GEO database. Several clinicopathologic features were related to CCBL2, especially estrogen receptor (ER), progesterone receptor (PR) and clinical stage. The low expression group exhibited poor survival. Analysis of CCBL2's area under the curve (AUC) showed a limited diagnostic capacity. Multivariate Cox regression analysis indicated that CCBL2 independently predicted BC survival. GSEA showed enriched pathways, including early estrogen response and MYC. CCBL2 positively correlated with estrogen, progesterone and androgen receptors. CCBL2 was downregulated in most cancers and was associated with their survival, including renal and ovarian cancers.

Conclusions
Low CCBL2 expression is a promising independent prognostic marker of poor survival in BC.

Introduction
Breast cancer (BC) is the most commonly diagnosed cancer in women, accounting for 11.7% of all types of cancers worldwide and showing the highest morbidity rate in women [1-3]. This commonly diagnosed malignant tumor is also the second leading cause of cancer deaths worldwide, just after lung cancer [4]. Approximately 2.1 million people were diagnosed with BC in 2018 [5]. Because BC is a heterogeneous disease, various biomarker-based diagnostic and prognostic approaches have emerged in recent years. ER, PR and human epidermal growth factor receptor-2 (HER2) have served as both diagnostic and prognostic biomarkers of BC [6]. Nowadays, with advances in sequencing technology, DNA methylation, miRNAs, autoantibodies, lipidomics and proteomics, as well as the identification of multiparameter gene signatures, have facilitated the early diagnosis and prognosis of breast carcinoma [7-10]. These latest studies have sparked our interest in mining genes as biomarkers associated with BC. CCBL2 has been found in mouse, rat, and human, and its mRNA is widely expressed in several organs such as the liver, kidney, heart, and neuroendocrine tissues. However, the highest expression of CCBL2 is found in the kidney [11,12].
CCBL2 can effectively catalyze the transamination of glutamine, methionine, histidine, phenylalanine, cysteine, asparagine, and kynurenine (KYN) to kynurenic acid (KYNA), and it also participates in the cytochrome P450 pathway of drug metabolism [11,13]. All of these functions are involved in important processes of human amino acid metabolism. According to the HUGO Gene Nomenclature Committee (HGNC), CCBL2 is identical to the kynurenine aminotransferase 3 (KAT3) and glutamine transaminase L (GTL) genes [12,14]. In mammalian cells, the essential amino acid tryptophan is degraded mainly through the kynurenine pathway. Kynurenine aminotransferases (KATs) catalyze the synthesis of KYNA, which is a metabolite of tryptophan and an endogenous antagonist of N-methyl-D-aspartate and alpha 7-nicotinic acetylcholine receptors [15-17]. KYNA is a recognized neuroprotective and anticonvulsant agent involved in synaptic transmission and in the pathophysiology of various neurological disorders [11]. Abnormal expression levels of CCBL2 are involved in the pathophysiological processes of kidney injury, hospital-acquired venous thromboembolism (VTE), depression and neurological disorders [12,15,18-22]. Recently, GTK (glutamine transaminase K, which is identical to KAT1 and CCBL1) has been reported to play an important role in pancreatic tumorigenesis through the glutamine pathway, and cysteine conjugate beta-lyase (CCBL) has been found to be closely associated with the development of kidney cancer. Glutamine, one of the catalytic substrates of CCBL2, plays biosynthetic roles in cells, as it is used in the biosynthesis of amino acids, proteins, lipids, and nucleotides, which are essential to cell division, especially in cancer cells; this heightened dependence, known as glutamine addiction, has been implicated in the progression of pancreatic cancer. Studies have also shown that variants in the CCBL2 gene were significantly associated with the risk of chronic kidney disease due to a defect in reductive metabolism that leads to the formation of a cysteine conjugate, which is then converted to an active metabolite [20,22,23]. These findings piqued our curiosity about the role of the CCBL family member CCBL2 in breast tissues, as well as the unexplored association between CCBL2 and BC, which is now the most frequently diagnosed cancer worldwide. Moreover, we assessed whether CCBL2 could serve as a prognostic marker of survival in patients with BC. Therefore, to initially ascertain whether CCBL2 expression levels affect BC prognosis, we studied the correlation of CCBL2 expression in BC tissues with clinicopathological characteristics, as well as with the survival status of patients with BC, through analysis of The Cancer Genome Atlas-Breast Invasive Carcinoma (TCGA-BRCA) level 3 data. Additionally, the results were validated using Gene Expression Omnibus (GEO) datasets. At the mRNA and protein levels, the expression of CCBL2 was verified with real-time quantitative polymerase chain reaction (RT-qPCR) and with immunohistochemistry (IHC) staining from the Human Protein Atlas (HPA) dataset, respectively. Furthermore, a pan-cancer analysis of CCBL2 was performed to explore the correlation between CCBL2 and various cancers.

Breast cell lines
During this study, two human BC cell lines, MCF-7 and MDA-MB-231, and the human normal breast epithelial cell line MCF-10A were used. MCF-7 cells, regarded as luminal type, were cultured in DMEM (Gibco, USA).
MDA-MB-231 cells, regarded as basal type, were grown in RPMI 1640 (Gibco, USA). Both media were supplemented with 10% fetal calf serum (Gibco, USA) and 1% penicillin-streptomycin solution (Beyotime, China). MCF-10A cells were cultured in DMEM-F12 (Gibco, USA) with 5% equine serum, 1% penicillin-streptomycin solution, 20 ng/ml epidermal growth factor, 0.5 µg/ml hydrocortisone, 0.1 µg/ml cholera toxin and 10 µg/ml insulin. The cell lines mentioned above were all obtained from the American Type Culture Collection (ATCC, USA) and cultured in a humid atmosphere of 5% CO2 at 37°C.

Real-time quantitative polymerase chain reaction (RT-qPCR)
Total RNA was isolated using the TRIzol reagent (Invitrogen, USA), and reverse transcription was performed using the HiScript Ⅲ RT SuperMix for qPCR with gDNA wiper (Vazyme Biotech) to synthesize cDNA following the manufacturer's instructions. The RT-qPCR was performed with ChamQ Universal SYBR qPCR Master Mix (Vazyme Biotech) and run on the Mastercycler Ep Realplex (Eppendorf, Hamburg, Germany). The relative gene expression fold change was normalized using beta-tubulin 2A (TUBB2A) as an internal control and compared with MCF-10A. The primer sequences used in this study were as follows: CCBL2, F: 5ʹ-ATC CTT GTG ACA GTA GGA GCA-3ʹ, R: 5ʹ-GGG CTC ATA GCA GTC ATA GAA AG-3ʹ; TUBB2A, F: 5ʹ-TTG GGA GGT CAT CAG CGA TGA G-3ʹ, R: 5ʹ-AGG CTC CAG ATC CAC CAG GAT G-3ʹ. The independent experiments were performed at least three times.

Pan-cancer analysis of CCBL2
The expression differences between normal and tumor tissues were analyzed using the TIMER database (data based on TCGA). Analysis of the survival probability of CCBL2 in pan-cancer was carried out systematically in the HPA database (https://www.proteinatlas.org/). Relevant immunohistochemistry (IHC) staining images were also obtained from the HPA.

Statistical analysis
Using the ggplot2 package in R, the differential expression of discrete variables was visualized as boxplots, with Wilcoxon and Kruskal-Wallis tests. Based on the optimal cutoff value (8.973) determined by the ROC curve, patients were categorized into high and low CCBL2 expression groups. The relationship between CCBL2 expression and clinicopathological characteristics was analyzed using the Chi-square test and Fisher's exact test in R (version 3.5.2). ROC curves were plotted to estimate diagnostic ability using the ROC package. Using the survival package of R, Kaplan-Meier curves were used to compare differences in overall survival (OS) and relapse-free survival (RFS) between the high and low groups. Kaplan-Meier Plotter (https://kmplot.com/analysis/) was used to further explore the relationship between the prognosis of patients with endocrine therapy and CCBL2 expression. The log-rank test was used to calculate p values. The clinicopathological characteristics were selected by univariate and multivariate Cox regression analyses. The independent RT-qPCR experiments were performed three times, and measured data are expressed as mean ± standard deviation. The RT-qPCR results were plotted with GraphPad Prism 8. A p value < 0.05 was taken as the significance threshold.

CCBL2 expression in BC cell lines and tissues
CCBL2 was overexpressed in the human normal breast epithelial cell line MCF-10A while downregulated in BC cell lines (Fig 1A) and tissues (Fig 2A-2C). In particular, MCF-7 cells showed higher CCBL2 expression than MDA-MB-231 cells.
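The fold-change normalization described in the RT-qPCR section is commonly computed with the 2^(-ΔΔCt) method. A minimal sketch under that assumption (the Ct values below are invented for illustration; TUBB2A is the internal control and MCF-10A the reference line, as in the text):

```python
def fold_change_ddct(ct_target_sample, ct_ref_gene_sample,
                     ct_target_control, ct_ref_gene_control):
    """Relative expression by the 2^(-ddCt) method.

    dCt = Ct(target) - Ct(reference gene), computed per sample;
    ddCt = dCt(sample) - dCt(control); fold change = 2 ** (-ddCt).
    """
    d_ct_sample = ct_target_sample - ct_ref_gene_sample
    d_ct_control = ct_target_control - ct_ref_gene_control
    return 2.0 ** (-(d_ct_sample - d_ct_control))

# Hypothetical Ct values: CCBL2 vs TUBB2A in MCF-7, relative to MCF-10A
print(fold_change_ddct(ct_target_sample=27.0, ct_ref_gene_sample=20.0,
                       ct_target_control=25.0, ct_ref_gene_control=20.0))
# -> 0.25, i.e. four-fold lower CCBL2 expression than in MCF-10A
```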
Out of a total of 24 BC samples, 22 exhibited moderate IHC staining for CCBL2, and the other two showed weak IHC staining. In contrast, normal tissues exhibited strong IHC staining for CCBL2 (Fig 2A-2C). This result indicated that CCBL2 expression varied at the protein level. In addition, it was validated in microarrays GSE42568 (p = 1.3e-05) and GSE71053 (p = 0.0490) that CCBL2 expression was lower in BC tumor tissue than in normal breast tissue (Fig 1B and 1C).

Patient characteristics
Based on the TCGA-BRCA level 3 data, Table 1 shows the clinical characteristics of tumor samples, such as molecular subtype, histological type, menopause status, radiation therapy, margin status, neoadjuvant treatment, targeted molecular therapy, ER, PR, HER-2, TNM stage, clinical stage, vital status, lymph node status and sample type.

Diagnostic capacity of CCBL2 expression
The ROC curve was plotted to assess the diagnostic capacity of CCBL2, and the area under the curve (AUC) showed a value of 0.659, implying a limited diagnostic capacity. In the subgroup analysis of different stages, CCBL2 showed a relatively valuable diagnostic capacity in patients with stage IV disease (Fig 4).

Correlation between CCBL2 expression and survival of patients with BC
The correlation between CCBL2 expression and survival of patients with BC was determined using Kaplan-Meier curves. The log-rank tests indicated that low CCBL2 expression was associated with a low overall survival (OS) rate (p<0.0001) (Fig 5) as well as a low relapse-free survival (RFS) rate (p = 0.0036) (Fig 6). Subgroup analysis revealed that low CCBL2 expression was correlated with low OS in patients with ER-positive BC (p = 0.0005), PR-positive BC (p = 0.0001), HER-2-negative BC (p = 0.0011), infiltrating ductal carcinoma (p = 0.0023), infiltrating lobular carcinoma (p<0.0001), and luminal A BC (p = 0.0250) (Fig 5). The analysis also revealed that low CCBL2 expression was associated with low RFS in patients with ER-positive BC (p = 0.0310), PR-positive BC (p = 0.0340), luminal A BC (p = 0.0400), and infiltrating ductal carcinoma (p = 0.0004) (Fig 6). Additionally, Kaplan-Meier analysis was conducted based on whether patients with ER-positive BC had received endocrine therapy. The log-rank tests indicated that high CCBL2 expression was associated with a high RFS rate in patients receiving endocrine therapy (with or without chemotherapy) (p = 0.0039) and in a subgroup of patients receiving endocrine therapy alone (without chemotherapy) (p = 0.0035). Furthermore, high CCBL2 expression was associated with better OS (p = 0.0020) and RFS (p = 7.2e−05) rates in patients with ER-positive BC without endocrine therapy (Fig 7).

Independent prognostic value of low CCBL2 expression in BC
Univariate and multivariate analyses were performed to demonstrate the prognostic value of clinicopathological characteristics, which were subsequently used in evaluating the impact of CCBL2 on the survival of patients with BC. Age, clinical stage, HER-2, margin status and CCBL2 expression were linked with poor OS according to the results of the univariate analysis (Table 3). Likewise, ER, PR, margin status, clinical stage, and CCBL2 expression were linked with an unfavorable RFS (Table 4). Subsequently, multivariate analysis was performed, the results of which are shown in the forest plot (Fig 8). Low CCBL2 expression served as an independent prognostic biomarker for low OS (p = 0.0011; HR: 2.18, 95% CI: 1.37-3.47) and low RFS (p = 0.0382; HR: 1.59, 95% CI: 1.03-2.47) (Tables 3 and 4).
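A minimal sketch of this kind of survival comparison in Python using the lifelines library (the authors used the R survival package; the toy data below are invented, and in practice the durations, event indicators, and the 8.973 cutoff grouping would come from the TCGA cohort):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy cohort: survival time (months), death event (1=yes), CCBL2 expression
df = pd.DataFrame({
    "time":  [12, 34, 56, 20, 80, 15, 44, 90, 25, 60],
    "event": [1,  0,  1,  1,  0,  1,  0,  0,  1,  0],
    "ccbl2": [7.1, 9.5, 8.2, 7.8, 10.1, 8.0, 9.9, 10.4, 7.5, 9.2],
})
df["group"] = (df["ccbl2"] >= 8.973).map({True: "high", False: "low"})

# Kaplan-Meier curves per expression group
kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["time"], event_observed=sub["event"], label=name)
    print(kmf.survival_function_)

# Log-rank test between the high and low expression groups
hi, lo = df[df["group"] == "high"], df[df["group"] == "low"]
result = logrank_test(hi["time"], lo["time"],
                      event_observed_A=hi["event"], event_observed_B=lo["event"])
print(f"log-rank p = {result.p_value:.4f}")

# Cox proportional-hazards model with expression as a covariate
cph = CoxPHFitter()
cph.fit(df[["time", "event", "ccbl2"]], duration_col="time", event_col="event")
cph.print_summary()
```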
Gene set enrichment analysis (GSEA) of CCBL2
GSEA was performed between the low and high CCBL2 expression groups, with significantly different enrichment in h.all.v6.2.symbols.gmt of the MSigDB database (FDR < 0.25, NOM p < 0.05) (Table 5). On the basis of the normalized enrichment score (NES), the most significantly enriched pathways included estrogen response early and estrogen response late, indicating that the estrogen response was downregulated when CCBL2 expression was low. In addition, the correlated pathway, androgen response, also declined (Table 5). The oppositely regulated and enriched pathways included the G2M checkpoint, MYC, mTORC1 signaling, and glycolysis (Fig 9).

Pan-cancer analysis of CCBL2 expression
The differential expression of CCBL2 between normal and tumor tissues was analyzed using the TIMER database. According to the results, the expression of CCBL2 was significantly higher in normal tissues than in tumor tissues not only in BC but also in renal cancer (p<0.0001), ovarian cancer (p<0.0001), and uterine corpus endometrial carcinoma (p = 1.31e-05). In cholangiocarcinoma (p = 1.90e-06) and liver hepatocellular carcinoma (p = 1.38e-04), the expression of CCBL2 was lower in normal tissues than in tumor tissues. Considering the results of both the TIMER and HPA databases, a monogenic pan-cancer analysis of CCBL2 expression (data from HPA) was performed, and the results indicated that, apart from BC, low expression of CCBL2 was associated with poor prognosis of renal, ovarian and head and neck cancers (p<0.0010). There was no significant relation between CCBL2 and the prognosis of patients with uterine corpus endometrial carcinoma (p = 0.1600) or liver cancer (cholangiocarcinoma and liver hepatocellular carcinoma) (p = 0.2600) (Fig 11).

Low CCBL2 expression in patient-derived tissue samples of breast, renal, ovarian and head and neck cancers
The results of IHC staining were downloaded from the HPA, and IHC staining was employed to verify the protein expression of CCBL2 in breast, renal, ovarian and head and neck cancers. Some samples showed strong staining, 16 showed moderate staining, and 5 showed weak staining. A total of 8 tissue samples of head and neck cancer were analyzed, out of which 5 showed moderate staining and 3 showed weak staining. The first three types of cancer tissues showed evidently weaker staining than their respective normal tissues (para-tumor tissues); however, no change in staining intensity was observed in tissue samples of head and neck cancer. In particular, in BC, infiltrating lobular carcinoma presented stronger staining than infiltrating ductal carcinoma (Fig 2).

Discussion
Based on the data acquired from the TCGA database, CCBL2 showed lower expression in tumor tissues than in normal tissues, and this result was validated in GEO datasets. In addition, RT-qPCR and IHC staining demonstrated the enhanced expression of CCBL2 in the human normal breast epithelial cell line MCF-10A and its diminished expression in BC cell lines as well as BC tissues. Low CCBL2 expression was correlated with unfavorable survival, and we conclude that CCBL2 is a prognostic biomarker in BC. To the best of our knowledge, this is the first study to elucidate the correlation between CCBL2 expression and BC survival based on TCGA data analysis. CCBL2, a gene located on chromosome 1p22.2 [11], encodes an aminotransferase that transaminates kynurenine to form kynurenic acid, which is a metabolite of tryptophan.
According to previous studies, CCBL2 facilitates the clearance of nephrotoxic substances [26]. The expression of CCBL2 was also decreased in patients with hyperoxaluria [27]. Moreover, CCBL2 expression was positively correlated with the occurrence of hospital-acquired VTE [19]. As an important paralog, CCBL1 (identical to KAT1 and GTK) has been shown to correlate with pancreatic, prostate, and bladder cancers [28]. Furthermore, CCBL2 (identical to KAT3) plays an important role in several neurological diseases such as Huntington's disease, Alzheimer's disease and depression [15,18,29]. However, limited information is available regarding the expression of CCBL2 in tumors, especially BC. In our study, we demonstrated that CCBL2 expression was lower in tumor tissues than in normal tissues based on both the TCGA database and the microarray datasets GSE42568 and GSE71053. Low expression of CCBL2 was correlated with several clinicopathologic characteristics, including histological type, ER, PR, HER2, molecular subtype, T classification, M classification, vital status, and stage. Several research groups have reported that the abovementioned clinicopathologic features could guide the diagnosis, treatment and prognosis of BC, which prompted us to further explore the correlation between CCBL2 and BC [9,30]. As confirmed above, CCBL2 had a significantly strong relationship with ER (p = 0.0005) and PR (p = 0.0005) status, accounting for the lower expression of CCBL2 in tumor cells of basal-like/HER-2-enriched BC and the higher expression in luminal A (ER/PR-positive) BC cells. Therefore, lower OS and RFS were linked with lower CCBL2 expression because of the low survival rate and poor prognosis of basal-like/HER-2-enriched BC. The prognosis of infiltrating ductal carcinoma was found to be worse than that of infiltrating lobular carcinoma [31], and our analysis showed that the expression of CCBL2 was lower in infiltrating ductal carcinoma than in infiltrating lobular carcinoma, which is consistent with the IHC staining results and with the link between lower CCBL2 expression and worse BC survival. A recent study showed that CCBL1 (identical to GTK) was involved in glutamine utilization through the GLS1 and glutaminase II pathways to generate glutamate [23], while the role of CCBL2 (identical to GTL) remained unclear, although glutamine is one of its metabolic substrates [15]. For this reason, when CCBL2 expression is low, it can be estimated that glutamine is relatively abundant. Glutamine plays an important role in the biosynthesis of amino acids, proteins, lipids, and nucleotides, which are essential to cell division, especially in cancer cells; this heightened dependence is also known as glutamine addiction. Therefore, proliferating BC cells consume glutamine at a very high rate [32,33]. Thus, among all T stages, T4 had the lowest CCBL2 levels and relatively the highest glutamine levels, with the fastest tumor cell growth and angiogenesis. In the case of the M stage, low CCBL2 levels tended to be related to distant metastasis [23,34]. Many cancer cells, especially those driven by the Myc gene (involving BC, as confirmed by the GSEA results), are metabolically reprogrammed to consume more glutamine. When the expression of CCBL2 was low, the Myc pathway in BC cells was upregulated. Altered glutamine metabolism in Myc-driven cancers, including BC, results in glutamine addiction, which leads to a worse survival rate [35].
Our study shows that low expression of CCBL2 is associated with low OS in BC, especially in ER-positive tumors, PR-positive tumors, HER-2-negative tumors, luminal A tumors, and invasive ductal and lobular carcinomas. Estrogen plays an important role in BC progression. Through GSEA, we found several relevant pathways, including estrogen response, enriched with respect to CCBL2 expression. The estrogen response pathway was downregulated when CCBL2 exhibited low expression, indicating that CCBL2 is positively correlated with this pathway. Oshi et al. found that the ESR1-associated early estrogen response was upregulated in ER-positive BC, indicating a better OS [36]. Therefore, low expression of CCBL2 correlated with worse OS in ER-positive BC, which is consistent with the results of our survival analysis. However, the estrogen response (early) pathway involves 200 relevant genes [36], and the underlying molecular mechanism remains unclear. Additionally, CCBL2 could favorably predict the response to endocrine therapy in patients with ER-positive BC. Conventionally, RFS is used to evaluate the therapeutic effect of adjuvant therapy in carcinomas. The results of the Kaplan-Meier analysis indicated that patients with higher CCBL2 expression who received endocrine therapy showed better RFS rates than the lower CCBL2 groups. More specifically, patients receiving endocrine therapy alone (without chemotherapy) with higher CCBL2 expression presented a significantly better RFS rate, whereas patients receiving both endocrine therapy and chemotherapy showed no increase in OS rate and an insignificant increase in RFS rate. Therefore, CCBL2 possesses significant prognostic value for ER-positive BC patients with or after endocrine therapy, particularly in the subgroup receiving only endocrine therapy, but little prognostic value in the subgroup receiving both endocrine therapy and chemotherapy. In patients with ER-positive BC without endocrine therapy, high CCBL2 expression indicated favorable OS and RFS. In other words, CCBL2 also exhibited valuable prognostic capacity in BC patients without or before endocrine therapy. Nowadays, the administration of endocrine therapy is mainly based on ER status (ER-positive tumors). However, patient compliance is poor due to the requirement for long-term medication and associated side effects, with 20-50% of patients failing to finish the treatment cycle [37,38]. Additionally, drug resistance has become a growing problem. Hence, novel biomarkers, such as CCBL2, are needed to evaluate the benefit of endocrine therapy, which would allow for a better prognosis. This will not only improve patient compliance but also assist in the selection of patients who are likely to benefit from neoadjuvant endocrine therapy. Similarly, the mTORC1 signaling pathway, which was downregulated when CCBL2 expression was high, is also correlated with ER-positive BC. The mechanistic target of rapamycin (mTOR) is involved in protein translation, metabolism, cell growth, and proliferation [39]. As an enzyme complex, mTORC1 mainly binds S6 kinases (S6Ks) to mediate its function. Studies have demonstrated that S6K1 (one of the S6 kinases) and some other relevant kinases contribute to the activation of ERα through its phosphorylation [40,41]. It was further shown that S6K1 and ERα form a positive feed-forward loop: the phosphorylation of ERα by S6K1 facilitates the process, promoting the transcription of RPS6KB1, which in turn regulates the proliferation of BC cells [42,43].
To conclude, lower expression of CCBL2 in ER-positive or luminal A BC is related to higher transcription of RPS6KB1 and subsequent BC cell proliferation. For the reasons mentioned above, it can be concluded that lower CCBL2 expression is associated with worse OS in ER-positive or luminal A BC. Moreover, the Myc pathway is negatively correlated with CCBL2 according to the GSEA. In triple-negative breast cancer (TNBC), MYC was found to regulate polyamine metabolism, and a plasma polyamine signature was related to the development and progression of TNBC [44]. Therefore, for TNBC, lower expression of CCBL2 was related to worse OS and RFS, consistent with the Kaplan-Meier survival analysis. However, to date, not much is known about the correlation between CCBL2 and Myc, thereby warranting further experimental validation. Other GSEA results indicated that the G2M checkpoint and glycolysis pathways were negatively correlated with CCBL2 expression, both of which are closely correlated with the proliferation of most types of malignant tumors [45]. This is clearly consistent with the results of our analysis. In addition to ER and PR, the androgen receptor (AR) is also a hormone receptor expressed on the surface of mammary cells and is regarded as a novel biomarker under active discussion. Our study showed that the expression of CCBL2 was positively correlated with that of the AR gene and that the androgen response pathway was upregulated when CCBL2 expression was high. According to a prospective clinical study by Anand et al., higher AR expression was correlated with earlier stage (p < 0.03), lower axillary burden (p < 0.04), and higher ER (p = 0.002) and PR (p = 0.001) expression [46]. More specifically, Lin et al. showed that AR expression was positively correlated with a better prognosis in patients with HER2-positive breast carcinomas [47]. In TNBC, several studies have confirmed that the luminal androgen receptor (LAR) subtype, defined as the AR-positive subtype, was associated with the highest OS compared with other subtypes [48-50]. We can conclude that CCBL2 could be a potential prognostic biomarker based on the confirmed relationship between higher AR and better BC prognosis, as well as the findings of the studies mentioned above. Furthermore, we found additional support for our conclusions with the assistance of the website http://guotosky.vip:13838/GPSA/. On this website, GSEA was performed on 3048 gene-knockout RNA-seq datasets with gene sets from four sources, including TCGA, Genotype-Tissue Expression (GTEx), and the Cancer Cell Line Encyclopedia (CCLE). The result (S1 Table) showed negative fold-change values for ESR1 and AR upon CCBL2 knockdown, suggesting that when the CCBL2 gene is knocked down, ESR1 and AR expression also declines. Multivariate analysis confirmed that low CCBL2 expression might serve as an independent prognostic marker correlated with unfavorable OS and RFS in BC. Then, by analyzing the AUC value, we found that CCBL2 possessed moderate diagnostic efficacy between tumor and normal tissues, especially in stage IV. Therefore, CCBL2 can be regarded as a novel biomarker in the field of diagnosis and prognosis in BC. The pan-cancer analysis revealed differential expression of CCBL2 between normal human tissues and various types of cancers. In addition to BC, some other cancers exhibited significantly different CCBL2 expression levels.
Combined with the HPA results, renal, ovarian and head and neck cancers had significantly different CCBL2-related survival rates. According to the findings of RT-qPCR and IHC, CCBL2 may be a favorable prognostic biomarker in most cancer types. Further experimental verification is underway. Further in silico and in vitro research is needed to explore the correlation between CCBL2 and other cancers. In this study, we initially discussed the value of CCBL2 expression as an independent prognostic marker for BC. Due to the limited sample size, further exploration based on larger datasets is still needed to verify our conclusions. Additionally, the results of this study also motivate subsequent work, including further cell-function tests through gene overexpression or knockdown.
2022-07-01T06:17:39.995Z
2022-06-30T00:00:00.000
{ "year": 2022, "sha1": "eae4855851f5d110ba5a1e945546869cd7a15aad", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f45a7def3c4fbc405ef651013a8f38ec44a961a4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251417106
pes2o/s2orc
v3-fos-license
Modeling and Forecasting Daily Temperature Time Series in Memphis, Tennessee

Temperature is an essential weather component because of its tremendous impact on humans and the environment. As a result, one of the widely researched parts of global climate change study is temperature forecasting. This work analyzes trends and forecasts temperature change to see the transient variations over time using daily temperature data from January 1, 2016 to November 3, 2019, collected from a weather station located at the Memphis International Airport. The Mann-Kendall (M-K) test is used as a non-parametric technique to detect patterns in the time series. The result from the test revealed that the temperature time series increased by about 0.0030 °F per day, implying that the location is becoming hotter. The other method of analysis is the autoregressive integrated moving average (ARIMA) model, which fits the temperature time series using its three standard processes of identification, diagnosis, and forecasting. Considering the selection criteria, the seasonal autoregressive integrated moving average (SARIMA) (3, 0, 0)(0, 1, 0)365 model was found appropriate for the studied daily temperature data. Finally, the selected model was utilized to forecast the next 50 days; after November 3, 2019, the temperature forecast showed an increasing trend. This observed trend provides an understanding of daily temperature change in the studied area for that specific period.

Introduction
Temperature variation resulting from climate change has become a global concern as it is correlated with global warming. The fifth IPCC assessment report revealed that the mean temperature increased by 0.85°C from 1880 to 2012 [1]. Global warming significantly impacts natural ecology, agricultural production, and human health [2]. Rising temperatures have already intensified drought, flooding, sea level rise, and weather extremes [1]. Furthermore, temperature variations will delay the onset of the monsoon and cause water loss from the soil, reducing crop productivity and lowering water levels in surface water and groundwater [3]. Because of ocean-atmosphere circulation, land cover and use, and other linked characteristics, surface air temperature fluctuates more at the regional scale than at the global average scale [4]. However, because temperature is affected by many climate elements, it is an ever-difficult endeavor to predict changes in temperature over a projected duration [5]. Therefore, it is necessary to conduct quantitative analyses of temperature fluctuation to take appropriate steps to mitigate adverse effects. In temperature forecasting research, time-series analysis is considered an essential direction [6-8]. Since the Mann-Kendall (M-K) test incorporates better treatment of outliers, it is frequently employed in weather and climate time series data to discover trends [9,10]. Various trend analysis studies have been carried out at various spatiotemporal scales, underscoring the importance of the M-K test. The M-K test was used in a study in Ethiopia's Woleka sub-basin to detect time series trends of precipitation and temperature [11]. In another study, the Mann-Kendall test, Sen's slope estimator, and linear regression were used to examine yearly and seasonal temperature patterns, along with temperature extremes [12].
One study used the M-K test to identify changes in environmental and meteorological features at a Kolkata station from 2002 to 2011, and the M-K test's performance was reliable at the verified significance level [13]. In addition, a study examined the trend in rainfall time series at fifteen sites in the Swat River basin from 1961 to 2011 using both the non-parametric M-K and SR statistical tests, which provided a daily forecast of the parameters with precision [14]. Given this, it is reasonable to conclude that the Mann-Kendall test is widely utilized to assess how parameters change over time. In time series analysis, projecting values in a later phase is based on previous observations of the variable under examination. Numerous studies in hydrology and meteorology have used the ARIMA method to achieve more accurate forecasts, and this method has essentially superseded older statistical techniques [15,16]. Another study used a seasonal ARIMA model for agricultural irrigation and reported a significant level of model fit for strategic planning [17]. Likewise, one study used a SARIMA model to assess the temperature trend in Assam [18]. Moreover, an investigation used the ARIMA model to forecast monthly mean temperature and discovered a falling trend [19]. A substantial number of studies have successfully comprehended climate parameters and have provided a better understanding of the hydrological system using ARIMA and SARIMA models [4,[20][21][22]].

This study is designed to investigate the time series of daily average temperature data to discover trends using the non-parametric M-K test together with the ARIMA modeling technique. The SARIMA model is fitted to daily temperature data (Jan 1, 2016 - Nov 3, 2019). The chosen model is used to forecast temperature for the next 50 days, from Nov 4, 2019 to Jan 23, 2020, using the Box-Jenkins technique.

Data Collection and Study Area

The data for the temperature time series came from a weather station located at the Memphis International Airport, as seen in Figure 1. These data represented the local weather of Memphis, Tennessee, and offered the amount of data required to fit the SARIMA model. The data gathered from the station included temperature readings at daily intervals from Jan 1, 2016 to Nov 3, 2019.

Trend Analysis

The Mann-Kendall test does not require that the data be normally distributed, and it additionally accounts for the effect of outliers. As a result, trend analysis frequently employs the non-parametric M-K test and the Sen slope estimator [9,12]. Using a two-tailed test with a 5% significance level [13], the null and alternative hypotheses are H_0: there is no discernible trend in the time series, and H_1: there is a rising or falling trend [12]. The Mann-Kendall test statistic S and its variance are given by equations (1) and (2):

S = \sum_{k=1}^{n-1} \sum_{i=k+1}^{n} \operatorname{sgn}(x_i - x_k) \quad (1)

\operatorname{Var}(S) = \frac{n(n-1)(2n+5) - \sum_{i=1}^{m} e_i (e_i - 1)(2e_i + 5)}{18} \quad (2)

where x_i and x_k are consecutive data in the series, n is the sample size, e_i is the number of ties at the i-th value, and m is the number of tied groups. Z_C, the standard test statistic, was calculated as

Z_C = \begin{cases} (S-1)/\sqrt{\operatorname{Var}(S)}, & S > 0 \\ 0, & S = 0 \\ (S+1)/\sqrt{\operatorname{Var}(S)}, & S < 0 \end{cases} \quad (3)

The sign of Z_C indicates the trend's direction: a negative Z_C value indicates a downward trend, whereas a positive Z_C value indicates an upward trend [13,23]. The magnitude of the slope (change per day) was determined using Sen's estimator [9,12].
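The M-K statistics above are straightforward to implement directly. Below is a minimal Python sketch of the S statistic, the normalized test statistic Z_C with the ties correction, and Sen's slope estimator; the series `temps` is a synthetic placeholder standing in for the daily station data, not the actual record used in this study.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: returns S, Z_C, and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all ordered pairs, equation (1)
    s = 0.0
    for k in range(n - 1):
        s += np.sign(x[k + 1:] - x[k]).sum()
    # Variance of S with the correction for tied groups, equation (2)
    _, counts = np.unique(x, return_counts=True)
    ties = (counts * (counts - 1) * (2 * counts + 5)).sum()
    var_s = (n * (n - 1) * (2 * n + 5) - ties) / 18.0
    # Normalized test statistic Z_C, equation (3)
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def sens_slope(x):
    """Sen's estimator: median of all pairwise slopes (change per time step)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[i] - x[k]) / (i - k) for k in range(n - 1) for i in range(k + 1, n)]
    return np.median(slopes)

# Synthetic daily series standing in for the station data
rng = np.random.default_rng(0)
temps = 60 + 0.003 * np.arange(1400) + rng.normal(0, 8, 1400)
s, z, p = mann_kendall(temps)
print(f"S = {s:.0f}, Z_C = {z:.2f}, p = {p:.4f}")
print(f"Sen's slope = {sens_slope(temps):.4f} deg F per day")
```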
ARIMA Model

ARIMA is the acronym for the autoregressive integrated moving average model, widely known as the Box-Jenkins (p, d, q) model. The order of the autoregressive (AR) part is p, the degree of differencing is d, and the order of the moving average (MA) part is q [24]. It is almost as if the independent variables in the regression model are the past values of the time series. In expanded form, the general equation can be expressed as [22]:

Y_t = c + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + e_t + \theta_1 e_{t-1} + \theta_2 e_{t-2} + \cdots + \theta_q e_{t-q} \quad (4)

where \phi_1, \phi_2, \ldots, \phi_p and \theta_1, \theta_2, \ldots, \theta_q are the regression coefficients, Y_t is the time series data (temperature), c is the intercept, p indicates the AR part's order, q indicates the MA part's order, d indicates the differencing, and e_t is the random error term. If seasonality is considered, the ARIMA model becomes a seasonal autoregressive integrated moving average (SARIMA) model, represented by ARIMA (p, d, q)(P, D, Q)_S [19]. S stands for the number of seasons per year, P for the seasonal AR order, D for the seasonal difference, and Q for the seasonal moving average order.

The first stage in fitting the ARIMA model is to ensure that the time series is stationary. The augmented Dickey-Fuller (ADF) unit root test is used to determine the stationarity of a time series data set [19]. The test's null and alternative hypotheses are H_0: the series has a unit root, and H_1: the series has no unit root, respectively [19]. The ADF test statistic must be smaller than the critical value to reject the null hypothesis. If the time series is not stationary, a transformation should be applied using the differencing procedure [13]. Once a stationary time series is obtained, the autocorrelation function (ACF) and partial autocorrelation function (PACF) are used to determine the appropriate orders of AR (p) and MA (q) [18]. The model coefficients are estimated using the least squares approach after the appropriate values of p, d, and q have been determined. The residuals are then examined using a set of criteria, checking that they are not autocorrelated and are normally distributed [24]. Within a 95 percent confidence interval, the residuals' ACF should not differ from zero. Furthermore, the histogram of the residuals should have a bell shape, indicating that they are normally distributed.

Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) are used to select models [13]. The model with the least AIC and BIC values is selected as the best-fit model [21]:

AIC = 2k - 2 \ln(L) \quad (6)

BIC = k \ln(n) - 2 \ln(L) \quad (7)

where k is the number of model parameters, L is the likelihood function's maximum value, and n is the number of observations. Finally, the model's predictive capability is evaluated using the root mean square error (RMSE), the mean absolute error (MAE), and the mean absolute percentage error (MAPE), as shown in equations (8)-(10); minimum values of the RMSE, MAE, and MAPE indicate model adequacy:

RMSE = \sqrt{ \frac{1}{n} \sum_{t=1}^{n} \left( Y_t(\mathrm{obs}) - Y_t(\mathrm{pred}) \right)^2 } \quad (8)

MAE = \frac{1}{n} \sum_{t=1}^{n} \left| Y_t(\mathrm{obs}) - Y_t(\mathrm{pred}) \right| \quad (9)

MAPE = \frac{100}{n} \sum_{t=1}^{n} \left| \frac{Y_t(\mathrm{obs}) - Y_t(\mathrm{pred})}{Y_t(\mathrm{obs})} \right| \quad (10)

where Y_t(obs) is the value observed at time t, Y_t(pred) is the predicted value, and n is the number of observations.
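As a concrete illustration of this identification-diagnosis-forecasting workflow, the sketch below uses statsmodels to run the ADF test, fit a SARIMA (3, 0, 0)(0, 1, 0)_365 model, and forecast 50 days ahead. This is a minimal sketch of the procedure, not the authors' code: `temps` again stands in for the station data, and a yearly seasonal period of 365 makes the underlying state-space model large and slow to fit in practice.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder daily series; in practice, load the station record instead
rng = np.random.default_rng(1)
idx = pd.date_range("2016-01-01", "2019-11-03", freq="D")
temps = pd.Series(60 + 20 * np.sin(2 * np.pi * np.arange(len(idx)) / 365.25)
                  + rng.normal(0, 5, len(idx)), index=idx)

# Step 1: stationarity check with the augmented Dickey-Fuller test
adf_stat, p_value, *_ = adfuller(temps)
print(f"ADF statistic = {adf_stat:.2f}, p-value = {p_value:.4f}")

# Step 2: fit the selected SARIMA(3,0,0)(0,1,0)_365 model
# (the yearly seasonal period makes this memory- and time-intensive)
model = SARIMAX(temps, order=(3, 0, 0), seasonal_order=(0, 1, 0, 365))
result = model.fit(disp=False)
print(f"AIC = {result.aic:.1f}, BIC = {result.bic:.1f}")

# Step 3: forecast the next 50 days with confidence intervals
forecast = result.get_forecast(steps=50)
print(forecast.predicted_mean.head())
print(forecast.conf_int().head())
```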
Table 1 shows the descriptive statistics for the temperature time series. The skewness is negative, indicating that the left tail is longer than the right tail. The first and third quartiles are 52.60 °F and 79.27 °F, respectively, according to the box plot in Figure 2. Table 2 shows the M-K test statistics for the time series data. Since the p-value (0.006) is less than the 0.05 threshold, the M-K test revealed a pattern in the temperature time series. Kendall's positive value implies an upward trend; hence, the temperature time series is demonstrated to have a positive upward trend. According to Sen's estimator, the slope of the trend is 0.003 °F per day.

Figure 3 illustrates a time series depiction of daily surface temperature. The data appear to be stationary in the graph. A consistent pattern in the data, on the other hand, suggests seasonality. This is studied further by decomposing the series using the additive method, as seen in Figure 4. Figure 4 shows that the data contain a seasonal component with a wavelike structure.

ARIMA Model

As a consequence, the SARIMA model is investigated instead of the ARIMA model. Moreover, a trend in the time series data is also depicted in the figure. Furthermore, the presence of outliers in the data is indicated by the random component. The unit root in the daily temperature time series data is checked using the ADF test. The ADF statistic and accompanying p-value are shown in Table 3. The null hypothesis of a unit root is rejected because the p-value is less than 0.05. As an outcome, the daily temperature time series data are stationary, and there is no need for differencing.

The ACF and PACF of the daily temperature time series are shown in Figures 5 and 6. The ACF plot in Figure 5 looks like a sine wave, indicating that the data have much seasonality. As a result, seasonal differencing should be considered to eliminate the seasonality. The PACF may be used to detect the order, since the ACF shows an exponential series decaying to zero, suggesting an autoregressive model exclusively. As shown in Figure 6, the PACF is significant at lags 1, 2, and 3; after lag 3, the PACF shows an irregular pattern, moving above and below the confidence limit. Furthermore, there is no discernible seasonal rise between lags 365 and 730. As a result, the non-seasonal AR term's order is possibly 3, whereas the seasonal AR term's order could be zero. Considering the low AIC and BIC values, the resulting model for this daily temperature data is SARIMA (3, 0, 0)(0, 1, 0)_365, with three non-seasonal autoregressive parameters and one seasonal difference.

The estimated parameters for the selected model are shown in Table 4. The table shows that all coefficients are significant, because the t-statistics are greater than 1.96 in all cases. Table 5 reveals that the RMSE, MAE, and MAPE for the selected model are 6.86, 4.28, and 8.23%, respectively. These numbers can be considered when determining whether or not the model is a good fit. The ACF of the residuals has no substantial autocorrelation, as seen in Figure 7. Furthermore, the histogram of the residuals is more or less normally distributed. As a result, the residuals are white noise, indicating that the chosen model can be used for forecasting.

Conclusions

The Mann-Kendall (M-K) test and the Box-Jenkins method, here the SARIMA model, were used in this work to determine daily average temperature variability and forecasting. For the Memphis International Airport station, the Mann-Kendall (M-K) trend analysis showed a growing upward trend of 0.0030 °F per day. In addition, the identification and diagnosis steps for the SARIMA model reveal that the model fits well.
The residuals analysis also shows that the model satisfies all assumptions. Moreover, the accuracy measures validate the model's predictive capacity. The 50 days of data following November 3, 2019 were projected using the SARIMA (3, 0, 0)(0, 1, 0)_365 model. The analysis in this study will give policymakers insight into the rate of temperature change during that period and the scope and extent of possible temperature change.
Efficacy of the Disappearance of Lateral Spread Response before and after Microvascular Decompression for Predicting the Long-Term Results of Hemifacial Spasm Over Two Years

Objective: The purpose of this large prospective study is to assess the association between the disappearance of the lateral spread response (LSR) before and after microvascular decompression (MVD) and long-term clinical results over two years of hemifacial spasm (HFS) treatment. Methods: Continuous intra-operative monitoring during MVD was performed in 244 consecutive patients with HFS. Patients with persistent LSR after decompression (n=22, 9.0%), without LSR from the start of the surgery (n=4, 1.6%), with re-operation (n=15, 6.1%), or with follow-up loss (n=4, 1.6%) were excluded. For the statistical analysis, patients were categorized into two groups according to the disappearance of their LSR before or after MVD. Results: Intra-operatively, the LSR was checked during facial electromyogram monitoring in 199 (81.5%) of the 244 patients. The mean follow-up duration was 40.9±6.9 months (range: 25-51 months) in all the patients. Among them, the LSR disappeared after the decompression (Group A) in 128 (64.3%) patients; in the remaining 71 (35.7%) patients, the LSR disappeared before the decompression (Group B). In the post-operative follow-up visits over more than one year, there were significant differences between the clinical outcomes of the two groups (p<0.05). Conclusion: The long-term clinical outcomes were correlated with whether the intra-operative LSR disappeared before or after MVD. Thus, this factor may be considered a prognostic factor of HFS after MVD.

INTRODUCTION

The LSR is recorded by facial electromyogram (EMG) during MVD procedures. Some authors reported that the disappearance or decreased amplitude of the LSR after MVD was associated with post-operative spasm relief or a favorable clinical outcome 6,19). On the other hand, other authors suggested that the disappearance or persistence of an abnormal response in the intra-operative monitoring after decompression cannot predict the improvement of facial spasm, and its effectiveness may be questionable 3,7,9). There have been many reports on the association between LSR findings after decompression and the clinical outcome of hemifacial spasm. Few reports have described the association between the disappearance of LSR before and after MVD and the post-operative result of spasm. Kim et al. 8) reported that, when comparing two groups based upon whether the LSR disappeared before or after decompression, facial EMG monitoring of the LSR is helpful in predicting outcomes. This factor must be considered a prognostic factor of HFS after MVD, together with the disappearance or persistence of the LSR after decompression. In that paper, however, the mean follow-up duration was only 17.9 months (range: 12-27 months). Reports on the correlation between the disappearance or persistence of the LSR after MVD and clinical outcomes usually had a follow-up duration of less than about two years 5,7,17). In the authors' institute, their senior colleague (S.H.L.) has also performed continuous intra-operative LSR monitoring during MVD and followed the clinical outcomes of hemifacial spasm. Furthermore, long-term clinical follow-up data over two years were collected on the intra-operative monitoring results. It was hypothesized that the disappearance of the LSR before and after MVD also predicts the long-term clinical outcomes over two years. This study was conducted to clarify the long-term effectiveness of intra-operative electromyography during MVD for HFS. In this prospective study, we investigated the association between the disappearance of LSR before and after MVD for the prediction of the short- and long-term clinical outcomes of HFS.
Patient populations

Between June 2006 and December 2008, 244 patients with HFS who underwent MVD were prospectively pooled, together with their intra-operative facial EMG recordings. Patients with typical symptoms of HFS, disappearance of an abnormal LSR before or after MVD, and a minimum follow-up period of at least two years were included. Forty-five of the 244 patients were excluded because of persistent LSR after decompression (n=22, 9.0%), the non-existence of an LSR from the start of their surgery (n=4, 1.6%), re-operation (n=15, 6.1%), or follow-up loss (n=4, 1.6%). For the patients in whom no LSR was observed, much effort was made to make the LSR appear, such as the use of a low-dose muscle relaxant or repositioning of the insertion site of the EMG needle. Despite these efforts, the LSR could not be induced in four of the 244 patients (1.6%), although they showed typical HFS. Thus, 199 patients were included in this study. Fifty-six of them were men and 143 were women, aged 25-85 years, with a mean of 58.5±10.3 years. The symptom duration varied from 3 to 480 months, with a mean of 67.3±65.7 months. The mean follow-up duration was 40.9±6.9 months (range: 25-51 months) (Table 1).

Intra-operative monitoring with EMG

From the time of the administration of general anesthesia to the dural closure, continuous facial EMG was monitored using needle EMG recordings from the orbicularis oculi, orbicularis oris, and mentalis muscles with the Viking IV (Nicolet). The brainstem auditory evoked potential was also determined in all the patients. Depolarizing muscle relaxants were not used (except before the intubation). Bipolar stainless needle electrodes were placed subdermally in the mentalis muscles. A lateral spread response appeared in the other facial muscles during the subdermal stimulation of the temporal branch of the facial nerve. We used stimuli of a 0.1-0.2 msec pulse wave with an intensity of 5-30 mA. There were nine check points at which the disappearance of the LSR was assessed: 1) after the administration of the anesthesia, 2) before the dural opening, 3) immediately after the dural opening, 4) at the time of the CSF drainage, 5) before the decompression, 6) during the dissection, 7) after the decompression, 8) before the dural closure, and 9) after the dural closure. The patients were classified into two groups according to the timing of their LSR disappearance: Group A, in which the LSR disappeared after the decompression (check points 7-9), and Group B, in which the LSR disappeared before the decompression (check points 1-6). Of the 199 patients in whom the LSR disappeared during the surgery, 128 (64.3%) were placed in Group A and 71 (35.7%) in Group B.

Surgical procedures

One surgeon performed all the microvascular decompression procedures in the authors' institute. All patients were operated on under general anesthesia using a lateral suboccipital retrosigmoid approach with auditory brainstem evoked potential monitoring, as is well described in the literature 10,11,16). After the dura mater was opened and the CSF was drained, appropriate brain relaxation was achieved. Gentle elevation of the cerebellum exposed the compressed root exit zone of the facial nerve. Teflon felt implants were used for the decompression. Water-tight dural closure was performed, with several pieces of muscle interposed between the interrupted sutures.

Statistical analysis

A statistical analysis was performed with commercial software (SPSS V15.0, SPSS Inc., Chicago, IL, USA). The data are presented as means±standard deviations. A chi-square test was used to assess the statistical significance of the independent variables of the two groups, and an independent t-test was used to compare the degrees of the clinical outcomes of the two groups.
Spasm-free outcomes and post-operative complications

All 199 patients were followed for a mean of 40.9 months (range: 25-51 months). The clinical data evaluations were performed one week, three months, one year, two years, and three years after the MVD surgery. To assess the effects of the MVD surgery, a more than 90% improvement in the spasm was defined as complete relief; >50%, as partial relief; and a <50% decrease in symptoms or unchanged symptoms, as no relief. In this study, 144 (72.4%) of the 199 patients showed complete relief at the post-operative two-year examination.

Disappearance of LSR before and after MVD and clinical outcome

For the analysis of the efficacy of intra-operative facial EMG monitoring, the 199 subject patients were divided into two groups depending on the disappearance of their LSR before or after their decompression. Group A included 128 patients (64.3%) in whom the LSR disappeared after the decompression, and Group B included 71 patients (35.7%) in whom the LSR disappeared before the decompression. In the one-week outcomes after the surgery, HFS was completely relieved in 75 (58.6%) patients in Group A and 39 (54.9%) patients in Group B. At three months post-operatively, complete relief occurred in 87 (68.0%) patients in Group A and 44 (62.0%) patients in Group B. In the one-year follow-up examination, 101 (78.9%) patients in Group A and 34 (47.9%) patients in Group B were completely cured. In the post-operative two-year examination, 107 (83.6%) patients showed complete HFS relief in Group A, and 37 (52.1%) patients showed complete spasm relief in Group B. In the post-operative three-year examination, complete relief occurred in 73 (74.5%) patients in Group A and 25 (45.5%) patients in Group B. The complete relief rate of Group A was higher than that of Group B after the one-, two-, and three-year follow-ups (Fig. 1). There were statistically significant differences between the one-year (p=0.002), two-year (p=0.0001), and three-year (p=0.003) follow-up results of the two groups. There were no statistically significant differences in the results after one week (p=0.642) and three months (p=0.261) (Table 2).

Table 2. Clinical results of the two groups regarding the disappearance of the LSR after and before the microvascular decompression after one week, three months, and one, two, and three years
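For readers who want to reproduce this kind of group comparison, the sketch below applies a chi-square test to the one-year complete-relief counts reported above (101 of 128 in Group A versus 34 of 71 in Group B). It is an illustrative reconstruction with scipy, not the authors' SPSS analysis, so the exact p-value may differ from the reported one.

```python
import numpy as np
from scipy.stats import chi2_contingency

# One-year outcomes: rows = Group A / Group B,
# columns = complete relief / not completely relieved
table = np.array([
    [101, 128 - 101],  # Group A: LSR disappeared after decompression
    [34, 71 - 34],     # Group B: LSR disappeared before decompression
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A p-value below 0.05 supports a significant difference in
# complete-relief rates between the two groups at one year.
```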
DISCUSSION

The LSR is usually immediately reduced with decompression of the facial nerve root. There is still much controversy, however, over whether or not intra-operative LSR disappearance means adequate decompression of the facial nerve. Some authors 3,7,15) reported that an intra-operative change in the LSR did not always indicate a favorable prognosis. Many authors have tried to investigate the correlation between the disappearance or persistence of the LSR after decompression and the clinical outcomes, with mean follow-up durations of about two years 5,7,17). To the authors' knowledge, two papers report that the disappearance or decreased amplitude of the LSR indicates post-operative spasm resolution with a follow-up duration of over two years 6,9). On the other hand, similar to the authors' hypotheses, some studies have tried to relate spasm-free results to the disappearance of the LSR before or after the decompression 2,14,15,19). It was reported that the LSR disappeared before the decompression of the compressing vessel in two of eight patients in one study 2). The outflow of cerebrospinal fluid shifts the neurovascular relation, which is temporarily equivalent to decompression 2). Mooij et al. 15) regarded the disappearance of the abnormal muscle response (AMR) after the drainage of cerebrospinal fluid as indirectly confirming decompression. Their results showed five (6.8%) patients in whom the AMR disappeared before the decompression. These five patients with indirect confirmation had a lower rate of cured spasm than the 25 (33.8%) patients in whom decompression was followed by disappearance of the AMR. Yamashita et al. 19) described the AMR disappearance in 53 of 60 patients after their microsurgery, and in nine patients before the transposition of the offending arteries.
Among these nine patients, three (33.3%) showed persistent facial spasm in their immediate results. In the long-term results, however, the nine patients were completely cured. These three reports announced the LSR disappearance before the decompression, but they did not directly compare the LSR disappearance before and after the decompression with the spasm-free outcome, nor did they perform a statistical analysis. Kim et al. 8) reported that the 75.6% complete cure rate of Group B (in which the LSR disappeared before the decompression) was much lower than the 92.9% rate of Group A (in which the LSR disappeared after the decompression). Moreover, the spasm-free outcomes in the three-month and one-year results had statistically significant differences between the two groups (p<0.05). In the discussion section of that paper, it was mentioned that further long-term follow-up evaluation might provide more information regarding the association between intra-operative LSR monitoring and post-operative results. Fortunately, we had the opportunity to analyze the results of the two groups with long-term follow-up periods of more than two years. As mentioned, our study also revealed that the complete relief rate of Group A was higher than that of Group B, not only after a one-year follow-up but also after the two- and three-year follow-ups. The post-surgical two- and three-year results also significantly differed between the two groups. In this study, the patients in whom the LSR disappeared before the decompression showed poorer results than those in whom the LSR disappeared after the decompression during the long-term follow-up periods of over two years. Unlike previous results 8), the three-month spasm-free results did not differ significantly. It may take time before the motor nucleus hyper-excitability of the facial nerve decreases and its re-myelination process is completed. Several authors have proposed that in some patients, once the vascular compression is resolved, the motor nucleus hyperactivity starts to decline slowly and normalizes over a few months to a few years 4-6). Considering the lower spasm relief rate in Group B (in which the LSR disappeared before the decompression) than in Group A (in which the LSR disappeared after the decompression), it cannot be intra-operatively confirmed whether the LSR disappearance in the patients in Group B was actually due to the decompression of the conflicting vessel, because the LSR disappeared before the decompression. Thus, more careful and adequate decompression between the offending vessel and the facial nerve root exit zone may be needed to improve the clinical outcomes of MVD. Although the LSR disappearance before MVD was associated with poorer outcomes, this association is not yet supported by a mechanistic explanation and cannot be proven logically. Moreover, in these patients, despite the knowledge of the need for more careful and adequate decompression, it is not known how much or where the Teflon felt must be added. Furthermore, a study to determine the scientific cause of this phenomenon may be needed.

CONCLUSION

In this study, it was found that patients in whom the LSR disappeared after the decompression had better prognoses than those in whom the LSR disappeared before the decompression over long-term follow-up periods of more than two years. Thus, intra-operative facial EMG monitoring is helpful in predicting HFS prognosis and verifying the decompression of the facial nerve. In patients in whom the LSR disappears before the decompression, the facial nerve should be decompressed more carefully and adequately.
De novo prediction of cell-type complexity in single-cell RNA-seq and tumor microenvironments

This study describes a computational method for determining the statistical support for varying levels of heterogeneity provided by single-cell RNA-sequencing data, with applications to tumor samples. The ability to identify known cell types and discover novel cell groups is key to analyzing such data. Although classical unsupervised clustering and more recent dimensional reduction methods have been successfully adapted to single-cell RNA-seq data (Grün et al, 2015; Macosko et al, 2015; Bacher & Kendziorski, 2016; Li et al, 2017), a common drawback is the need to specify the degree of complexity in clustering, either by fixing the total number of subgroups anticipated or by choosing a resolution parameter controlling the extent of dimensional reduction. Because the degree of cell-type diversity expected from data is often unknown in real applications, a clustering approach capable of inferring the number of cell types present in a sample solely based on statistical evidence would provide a significant advantage, freeing the cell-type classification and discovery process from potential resolution bias. The question of how to determine the number of clusters in unsupervised clustering analysis has a long history in the statistical literature (Milligan & Cooper, 1985; Tibshirani et al, 2001). Nevertheless, only a few currently available single-cell RNA-seq analysis pipelines provide such capability (Kiselev et al, 2019): SC3 uses principal component analysis (PCA) and compares eigenvalue distributions with those of random matrices to pick the most likely number of principal components (Kiselev et al, 2017); SINCERA (Guo et al, 2015) and RaceID (Grün et al, 2015) use statistics comparing inter-cluster versus intra-cluster separations; SNN-Cliq (Xu & Su, 2015) provides an estimate within a graph-based clustering approach. These existing choices thus either rely on indirect quality measures of multiple clustering solutions or on significance tests associated with dimensional reduction. In the Bayesian formulation of general unsupervised clustering, in contrast, the number of clusters is just one of many hyperparameters, whose statistical support can rigorously be examined via Bayesian model comparison (Held & Ott, 2018): possible choices for the number of clusters can be compared quantitatively via marginal likelihood (or evidence, the probability of seeing the data given a specific number of subgroups). From an application point of view, a shift to Bayesian statistics therefore enables a comprehensive and powerful clustering approach, where clustering depth, assignment of individual cells into clusters, and characteristics of each cluster all emerge as collective analysis outcomes. To our knowledge, Bayesian model comparison is yet to be applied to single-cell RNA-seq analyses. Here, we developed and tested such a method for inferring and assessing the degree of heterogeneity in single-cell samples using Bayesian statistics and identifying the range of the most appropriate number of clusters. For the actual subgroup identification, we chose nonnegative matrix factorization (NMF) (Lee & Seung, 1999), an unsupervised machine-learning method of dimensional reduction, where a high-dimensional data matrix with nonnegative elements is factorized into a product of two matrices sharing a common low dimension, the rank (Lee & Seung, 2000). Single-cell RNA count data are inherently nonnegative and typically sparse, making them ideal for NMF analysis.
Earlier studies of bulk data and recent single-cell applications (Brunet et al, 2004; Carmona-Saez et al, 2006; Kim & Park, 2007; Puram et al, 2017; Zhu et al, 2017; Filbin et al, 2018; Ho et al, 2018) were all based on the maximum likelihood (ML) formulation of the NMF algorithm (Gaujoux & Seoighe, 2010). The need to resort to quality measures of factorization (Brunet et al, 2004; Gaujoux & Seoighe, 2010) to choose the optimal rank value compromises the predictive power of ML-NMF, as with other clustering methods involving adjustable parameters controlling the degree of cell-type diversity. In contrast, we use NMF as one of several possible dimensional reduction engines facilitating Bayesian model comparison and focus instead on the resulting capability to evaluate different choices of rank values. We adapted the variational Bayesian formulation of NMF (Cemgil, 2009) for barcoded single-cell RNA-seq data. Cell-type heterogeneities in carcinoma samples pose a unique analytic challenge, with the complex interplay of immune, stromal, and malignant epithelial cells playing key roles in the development and homeostasis of the tumor ecosystem (Li et al, 2016). Despite the predominance of carcinoma among cancer types, studies of single-cell transcriptomic heterogeneities in solid tumors are still in early stages (Jaskowiak et al, 2018). As a major application of our approach, we present analyses of available single-cell tumor samples, characterizing the range and depth of tumor microenvironment heterogeneities encountered in different cancer types.

Results

Optimal cell-type separation is determined by data

We implemented ML and Bayesian NMF (bNMF) algorithms for single-cell RNA count data (see the Materials and Methods section). Briefly, bNMF combines the NMF-based Poisson likelihood of RNA count data with gamma-distributed prior distributions for the two factor matrices (basis W and coefficient H) (Cemgil, 2009) (Fig 1A). The mean counts are given by the matrix product WH, with inference optimizing both the factor matrices and the hyperparameters of the priors simultaneously. The most likely rank is determined by comparing evidence (the marginal likelihood of data conditional on hyperparameters Θ and rank r) for a range of rank values (Fig 1B):

r_{\mathrm{opt}} = \arg\max_r \Pr(X \mid \Theta^*, r), \quad (1)

where X is the RNA count data. We used the log evidence per matrix element, regarded as a function of rank, as the primary measure of statistical significance. Its difference between two rank values can then be related to the Bayes factor (Kass & Raftery, 1995; Held & Ott, 2018): we used a conservative Bayes factor threshold of 3 for statistically significant model differences in determining the optimal rank (Fig 1B and the Materials and Methods section).

Figure 1. bNMF for single-cell RNA-seq clustering. (A) An RNA count matrix derived from droplet-based single-cell RNA-seq data is modeled as a Poisson realization of the mean given by a product of basis W and coefficient H matrices sharing a common dimension, the rank. Factorization infers these matrices for varying rank values using gamma priors. (B) We find the optimal rank maximizing the log evidence, or marginal likelihood of hyperparameters given the data. Heterogeneity class is determined by the shape of the evidence profile: in type I, the difference in evidence between the maximum at rank r_opt and the value at r_max is larger than the threshold L; in type II, this difference is within L. The threshold is given by L = (ln T)/m, where T is the lower bound of the Bayes factor for statistical significance.
The factorization solutions for ranks from 2 to r_opt are then used to construct the subgroup tree, which connects subgroups under successively increasing ranks. This tree provides a global view of the structure of cell-type heterogeneity at varying resolution. (C) Factor matrices W and H corresponding to the optimal rank are used to identify metagenes (genes distinguishing a given subgroup from the rest), to characterize subgroups as known or novel cell types, and to assign individual cells into subgroups.

After factorization, the two factor matrices yield metagene lists and the subgroup membership of all cells (Fig 1C). We first characterized the performance of bNMF using simulated data (Fig 2A-D). With data sets generated from m = 100 features ("genes") and r = 10 subgroups of 20 cells (n = 200), we factorized the count data with varying rank r using ML-NMF and computed two quality measures: dispersion and cophenetic correlation (see the Materials and Methods section). Dispersion increased with increasing rank, saturating at r ≈ 10 (Fig 2A). Cophenetic correlation (Brunet et al, 2004) showed a similar behavior with a maximum at r = 10 (Fig 2B) and a narrow overall range of values close to 1. We used bNMF to compute the log evidence (Fig 2C), which increased linearly to reach a sharp maximum at rank 10. For higher rank values, the log evidence decreased moderately. This trend remained unchanged for larger matrices up to sizes more typical of real data (m = 2,000 and n = 2,000; Fig 2C). In ML-NMF, the likelihood is equal to the negative generalized Kullback-Leibler (KL) divergence, a distance measure distinct from Euclidean distance (see the Materials and Methods section). In bNMF, the generalized KL divergence is weighted by the prior distribution rather than minimized. As expected from this distinction, the Euclidean distance and generalized KL divergence both showed sharp cusps at rank 10 (Fig S1A and B), whereas for higher ranks, their magnitudes decreased weakly and remained similar for ML-NMF and bNMF, respectively. Thus, for these simulated data sets with 10 subgroups, ML-NMF predicted the correct rank well via the two quality measures, and bNMF yielded a clear and unambiguous choice of the optimal rank. We also used a simulated data set of rank 5 to characterize how relative outlier cells in expression counts would be classified by bNMF factorization: the relative outliers identified by the minimum covariance determinant method (Hubert & Debruyne, 2010) were predominantly located within the t-distributed Stochastic Neighbor Embedding (tSNE) (van der Maaten & Hinton, 2008) plot near the termini of branches separately forming individual subgroups (Fig S2), suggesting that bNMF would be resistant to overclustering of moderate outliers. As a representative choice from existing methods relying on the specification of parameter(s) controlling clustering depth, we applied Seurat (Macosko et al, 2015) to the same simulated data with a range of resolution parameter values. With increasing resolution, the number of subgroups obtained showed consecutive jumps to reach 10 (Fig 2D). We further tested the convergence of bNMF inference using a different simulation scheme, where factor matrices W and H were generated from γ priors with known hyperparameters (Fig S3). With increasing sample size, the evidence profile converged to a shape as in Fig 2C, and the predicted optimal rank and hyperparameters became more sharply peaked around the correct values.
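To make this simulation setup concrete, here is a minimal Python sketch of the generative model used above: factor matrices drawn from gamma priors, a Poisson count matrix X with known rank, and an ML-NMF fit with the generalized KL objective via scikit-learn's multiplicative-update solver (the Lee & Seung algorithm). The hyperparameter values are illustrative assumptions, not the ones used in the paper.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
m, n, true_rank = 100, 200, 10

# Generate factor matrices from gamma priors and Poisson counts X ~ Poisson(WH)
W = rng.gamma(shape=1.0, scale=1.0, size=(m, true_rank))
H = rng.gamma(shape=1.0, scale=1.0, size=(true_rank, n))
X = rng.poisson(W @ H)

# ML-NMF with the generalized KL divergence objective for a range of ranks;
# solver='mu' uses multiplicative updates (Lee & Seung)
for r in [5, 8, 10, 12, 15]:
    model = NMF(n_components=r, solver="mu", beta_loss="kullback-leibler",
                init="random", max_iter=500, random_state=0)
    model.fit(X)
    print(f"rank {r:2d}: generalized KL divergence = {model.reconstruction_err_:.1f}")
# The divergence should drop sharply up to the true rank and only weakly
# beyond it, mirroring the cusp behavior described in the text (Fig S1).
```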
We next compared these algorithms using the fresh PBMC single-cell data set (Zheng et al, 2017; Fig S4A and Table S1). To test the dependence of the number of subgroups on sample sizes, we used two different subsamples (n = 34,289 and n = 6,857) derived from the full data. We first characterized evidence profiles with the smaller data set under ML-NMF, bNMF, and PCA (Fig 2E-H). Both dispersion and cophenetic correlation from ML-NMF were maximal near r = 2; dispersion increased moderately for large r, whereas cophenetic correlation remained low for r > 10 (Fig 2E and F). The log evidence from bNMF exhibited a sharp increase with increasing rank for 2 ≤ r ≤ 6 and decreased slightly for larger ranks. The rank with maximum evidence was r = 9. Seurat led to a monotonic increase in the number of subgroups with increasing resolution, from r = 5 to r = 21 (Fig 2H). In contrast, both the Euclidean distance and the KL divergence decreased monotonically with increasing ranks under ML-NMF and bNMF (Fig S1D and E). The bNMF evidence profile was robust against varying sample sizes, reaching a maximum at r ≈ 6 and remaining similar or decreasing slightly for larger ranks (Fig 2G). We further compared bNMF rank profiles with the numbers of clusters predicted by existing algorithms for six small single-cell data sets (Yan et al, 2013; Biase et al, 2014; Deng et al, 2014; Pollen et al, 2014; Kolodziejczyk et al, 2015; Goolam et al, 2016) with well-known cell-type complexity (e.g., embryonic stem cells in early development): the "gold standard" data sets used in published works assessing SC3 (Kiselev et al, 2017) and SIMLR (Wang et al, 2017). In many cases (Fig 2I, K, M, and N), the number of cell types expected from experimental design coincided with the lowest rank regions where the bNMF-derived evidence profile became relatively flat. At the same time, apparent overestimations of the number of clusters by other methods often fell within such flat regions (Fig 2I, J, L, M, and N), providing a possible explanation for the lack of consensus among different methodologies: many data sets exhibit evidence profiles that are monotonically increasing up to a certain rank, beyond which statistical support remains similar. In summary, although all three algorithms performed reasonably well for simulated data sets with simple compositions, NMF provided a means to assess subtype complexity without the need to set adjustable parameters (Fig 2A-D). bNMF enabled a statistically well-controlled comparison via the evidence profile, which unambiguously predicted the number of subgroups supported by the PBMC data (Fig 2E-H). The derivation of evidence profiles for benchmark single-cell data sets demonstrated that bNMF reveals a much more comprehensive picture of how statistical support varies with the number of clusters than existing computational methods estimating a single clustering depth (Fig 2I-N).

bNMF infers depth of heterogeneity in PBMC/pancreatic cells

We next characterized the bNMF cell-type separation outcome of the PBMC data (n = 34,289) using the metagenes from the basis matrix W (Fig 1C) under rank 9 (Fig 2G). Most of the top metagenes clearly distinguished each subgroup from the rest, whereas a small proportion of them featured in more than one subgroup (Fig 3A).
We used correlations between the mean expression counts of subgroups and those of purified blood cell types (Zheng et al, 2017), along with metagenes and markers (Foell et al, 2007; Walzer et al, 2007; Kallies, 2008; Quann et al, 2011; Lu et al, 2017), to annotate the major components of the nine clusters (Fig 3A and B). The bNMF inference results from rank 2 to 9 provide cell-type separation outcomes with increasing resolution up to the optimal rank, beyond which statistical support from the data no longer improves. Using the cluster membership of all cells under these ranks, we constructed a hierarchical tree relating these subgroups (Fig 3B). The two subgroups at rank 2 separated cells into two branches, one containing B cells, NK cells, and monocytes and the other containing T cells. Intermediate levels of subgrouping within the tree revealed sub-branches linking B cells and monocytes, and naive/helper/regulatory versus effector/memory T cells. This global tree view under varying rank values facilitates biological interpretation of subgroups within the framework of NMF-enabled dimensional reduction. We applied t-SNE to the coefficient matrix H elements and visualized the seven subgroups (Fig 3C). The proximity of subgroups within the map closely reflected their hierarchical relationships (Fig 3C).

Figure 2. (A, B) ML-NMF narrows down the rank into an optimal range based on two quality measures, dispersion and cophenetic coefficient. (C) bNMF finds the correct rank 10 maximizing evidence. (D) Seurat (Macosko et al, 2015) requires specification of a resolution parameter; the correct number of subgroups is reached as the upper bound with respect to resolution. (E, F) ML-NMF applied to PBMC single-cell data (Zheng et al, 2017). (G) bNMF applied to PBMC data sets of different sizes led to the optimal rank maximizing evidence, r_opt ≈ 9. (H) PCA applied to PBMC yielded a wide range of subgroup numbers depending on resolution. (I-N) bNMF rank profiles and the numbers of clusters predicted by other computational algorithms applied to six gold standard data sets (Yan et al, 2013; Biase et al, 2014; Deng et al, 2014; Pollen et al, 2014; Kolodziejczyk et al, 2015; Goolam et al, 2016). The SC3 (Kiselev et al, 2017), SINCERA (Guo et al, 2015), and SNN-Cliq (Xu & Su, 2015) predictions are from Kiselev et al, 2017. The black dotted and red dashed lines are the number of major cell types expected from experimental design and the optimal rank from the bNMF protocol, respectively. In (I), the total number of cells was small (n = 49), so that a large subset of factorization results in W matrices had uniform columns for r ≥ 4, implying r_opt = 3.

Figure 3. (A-C) Results for the PBMC data set (n = 34,289). (A) Metagenes for subgroups derived from the factor matrix W under the optimal rank 9 (Fig 2G). The heat map shows the relative magnitudes of matrix elements W_ik for each gene i and subgroup k, rescaled such that in each row the minimum and maximum correspond to 0 and 1. Up to 10 metagenes in addition to preselected markers per subgroup are shown. (B) Subgroup tree showing hierarchical relationships between subgroups under varying ranks from the lowest (2) to the optimal (9). Branching of a subgroup under a given rank into two under a successively larger rank was inferred by applying the majority rule (see the Materials and Methods section). (C) Visualization of subgroups with tSNE. Subgroup ID and composition of cells are indicated. (D, E) Comparison of cell-type compositions predicted by bNMF and the bulk data deconvolution method CIBERSORT (Newman et al, 2015).
Outcomes for the full fresh PBMC data and an example mixture of seven purified cell types are shown in (D) and (E), respectively. (F) Subgrouping of human pancreas cell data (Baron et al, 2016). Colors indicate major cell types. Insulin-producing β-cells are in yellow (see Figs S5 and S6 and Table S2).

bNMF classifies known cell types with high accuracy

We next tested the robustness of bNMF clustering applied to real data using mixtures of count data derived from purified PBMCs (Zheng et al, 2017). We generated multiple realizations of PBMC data sets of known composition by sampling fixed numbers of up to seven cell types, namely CD8+ CTLs, B cells, monocytes, CD4+ Th cells, regulatory T cells (Tregs), NK cells, and hematopoietic stem cells (HSCs), of equal proportions, and performed bNMF inference for each realization. The distribution of optimal ranks gradually shifted to higher ranks as mixtures became more complex (Fig 4A-F). It was notable that the degree of shift with the successive addition of new cell types reflected the novelty of the added cell type: the addition of Tregs and NK cells to mixtures already containing Th cells and CTLs led to only moderate shifts of optimal ranks to higher values, whereas the addition of HSCs led to a more substantial jump (Fig 4E and F). Typical shapes of evidence profiles showed two distinct qualitative trends: for mixtures with low complexity, there was a sharp and pronounced rank value with maximum evidence (Fig 4G), and statistical support decreased for larger rank values (type I). For complex mixtures, on the other hand, the evidence profile became relatively flat, with support for a broader range of rank values above a threshold (Fig 4H; type II). We quantified the reliability of subgroup assignment by the following procedure: we first determined the cell-type identities of subgroups obtained under rank 4 inferred for four-sample mixtures (Fig 4C) using metagenes. We then assigned cells into the four subgroups using H matrix elements and calculated a classification score as the proportion of correctly classified cells. We obtained a mean score of 0.82 ± 0.08 (SD; Fig 4I). To further test the identification of rare cell types, we used mixtures containing four cell types of which two had cell counts of ~10% of the rest, obtaining a score of 0.73 ± 0.08. Together, these tests indicated that bNMF enabled robust determination of optimal subgrouping depths and reliable assignment of individual cells into subgroups. We further compared the cell-type identification of bNMF with that of a deconvolution procedure, where reference panels of expression patterns are used to infer cell-type compositions from bulk data (Avila Cobos et al, 2018). We used CIBERSORT (Newman et al, 2015) to estimate the proportion of cell types from RNA counts averaged over fresh PBMC cells and found a reasonable agreement, with noticeable differences, when compared with the single-cell results (Fig 3D). We further characterized differences in cell-type proportion estimates from the single-cell and deconvolution methods with a mixture of seven purified blood cells: the bNMF prediction (Fig 3E), where the major discrepancy arose in discriminating Treg from Th cells (also see Fig S7), was substantially closer to the true proportions (Fig 3E), demonstrating the advantage of explicit single-cell data analysis compared with bulk deconvolution.
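The assignment rule used here, placing each cell in the subgroup with the largest coefficient in its column of H, is simple to express in code. Below is a minimal sketch of that step and of the classification score as the fraction of correctly assigned cells; `H` and `true_labels` are hypothetical inputs, and the cluster-to-cell-type matching is done by a simple majority vote rather than by metagene inspection as in the paper.

```python
import numpy as np

def assign_cells(H):
    """Assign cell j to the cluster k maximizing H[k, j]."""
    return H.argmax(axis=0)

def classification_score(pred, true_labels):
    """Fraction of correctly classified cells after matching each
    predicted cluster to its majority true label."""
    pred = np.asarray(pred)
    true_labels = np.asarray(true_labels)
    correct = 0
    for k in np.unique(pred):
        members = true_labels[pred == k]
        # map cluster k to the most common true label among its members
        _, counts = np.unique(members, return_counts=True)
        correct += counts.max()
    return correct / len(true_labels)

# Toy example: 3 clusters, 9 cells with known identities
rng = np.random.default_rng(0)
H = rng.gamma(1.0, 1.0, size=(3, 9))
true_labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
pred = assign_cells(H)
print(f"score = {classification_score(pred, true_labels):.2f}")
```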
Because our algorithm takes a cell-count matrix as input, it can be combined with improved quality control or preprocessing steps alleviating challenges in single-cell capture and counting protocols. Such challenges include the overabundance of zero counts, thought to originate from incomplete sampling of low-number RNA molecules in individual cells (Lin et al, 2017; Li & Li, 2018). To demonstrate such combined usage, we processed the cell-count matrix of one of the PBMC seven-cell-type mixtures in Fig 4F with scImpute (Li & Li, 2018). Imputation did not change the evidence profile (Fig S7A), where the optimal rank was 6, with rank 7 slightly lower but close in evidence value. The bNMF factorization results of the original and imputed count matrices (Fig S7B and C) showed that CD4+ Th and Treg cells were clustered together in both cases, explaining the optimal rank of 6. Imputation enhanced the quality of cell-type resolution separating the Th/Treg and CTL subgroups, resulting in a closer agreement of overall cell counts in each cluster with the true cell counts (Fig S7D).

Solid tumor cell cultures have limited heterogeneity

We next applied our algorithm to melanoma cell culture single-cell data (Gerber et al, 2017), which contain transcriptomes of tumor cells derived from three patients: two replicates of wild-type (WT), BRAF-mutant/NRAS-WT, and BRAF-WT/NRAS-mutant samples. The evidence profile of this in vitro data set (Fig 5A) showed a pronounced maximum near r ≈ 7, decreasing sharply for higher rank values. This behavior was analogous to that of low-complexity mixtures of immune cells (Fig 4G; type I). The tSNE visualization of the seven subgroups closely reflected the patient of origin and mutation status (Fig 5B and C): the subgroups of cells from the WT patient (subgroups 1-4) formed one major branch (Fig 5D), which included subgroups expressing oxidative phosphorylation and other melanoma-specific marker genes (Gerber et al, 2017) (subgroup 1), a highly proliferative subgroup expressing cell cycle and DNA repair genes (subgroup 2), and a stromal subgroup (subgroup 3; Fig 5E). The BRAF-mutant cells (subgroups 5-6) showed CD34, BRAF, and apoptosis-related genes as metagenes/markers, whereas the NRAS-mutant cells had NRAS as a marker. Overall, this outcome was consistent with the expected low depth of heterogeneity in cultured tumor samples.

Tumor microenvironments in vivo show two distinct classes of heterogeneity

We characterized the degree of cell-type heterogeneity in tumor microenvironments in vivo with six additional solid tumor data sets (Table S1 and Fig 6). Lavin et al (2017) studied the landscape of innate immune cells infiltrating lung adenocarcinoma. We obtained a rank profile with a relatively narrow range of optimal ranks (Fig 6A). The subgroups derived consisted of B cells, mast cells, NK cells, dendritic cells, monocytes, and tumor-/normal-cell-associated macrophages (Fig S8). We also analyzed two glioma samples (oligodendroglioma [Tirosh et al, 2016b] and astrocytoma [Venteicher et al, 2017]), which both exhibited rank profiles (Fig 6B and C) similar to the lung cancer immune cell results: together, these samples were characterized by an intermediate level of heterogeneity, with an optimal rank of r ≈ 20 and decreasing statistical support for higher ranks (type I; Fig 6A-C).
In contrast, the evidence profiles for three additional data sets (melanoma [Tirosh et al, 2016a], Fig 6D; immune cells in breast cancer [Azizi et al, 2018], Fig 6E; and head and neck squamous cell carcinoma [HNSCC; Puram et al, 2017], Fig 6F) showed a different behavior, where evidence increased monotonically to reach a maximal level and remained similar for higher ranks (type II). We classified evidence profiles into these two classes unambiguously by comparing the maximum evidence and the evidence at the maximum rank using a Bayes factor threshold (Fig 1B): although clear maxima existed in the type I data sets (Fig 6A-C), global maxima were located at the highest rank considered in type II (Fig 6D-F). In type II data, the lowest rank with an evidence value within the threshold around the maximal level provides the most parsimonious description.

Figure 4. Mixtures of CD8+ cytotoxic T cells (CTL), B cells (B), monocytes, CD4+ T cells (Th), regulatory T cells (Treg), NK cells (NK), and CD34+ HSCs, of varying compositions as indicated. (G, H) Examples of rank versus evidence profiles for mixtures of three (G) and seven (H) blood cell types. (I) Subgroup assignment scores (fraction of correctly assigned cells) of bNMF-based inferences applied to the mixtures of four purified blood cell types shown in (C). Two sets of mixtures with different compositions were sampled, one with uniform cell counts ("uniform") and the other where three cell types were ~10% in count compared with the rest ("common + rare"). Mean scores are 0.82 (0.08, SD) and 0.73 (0.08) for the uniform and common + rare cases, respectively.

To ensure that our classification did not depend on the quality of statistics afforded by each data set, we repeated each inference after down-sampling, where sample sizes were reduced by a factor of 2-4. All three cases in type I retained their shapes, with the locations of maxima shifted to lower ranks (Fig 6A-C, red dashed lines), suggesting that the pronounced maxima in the evidence profiles observed for the full data sets were statistically significant. In contrast, upon down-sampling, all three type II samples retained their shapes of asymptotic monotonicity, with similar locations of the optimal rank (Fig 6D-F). We additionally examined ML-NMF quality measures of two representative tumor samples, one each from the type I and II classes (oligodendroglioma and breast cancer immune cells; Fig S9). The rank dependence of the dispersion and cophenetic coefficients was qualitatively similar to that of PBMC (Fig 2E and F), with maxima at rank ≈ 2, minima below rank ≈ 20, and monotonic increases at large rank values (Fig S9). We further characterized the composition of the HNSCC sample, which contains primary and lymph node metastatic tumors from 18 patients (Puram et al, 2017). The subgroup tree (Fig S10A) showed a division at r = 2 into epithelial (subgroups 1-8) and immune/stromal branches (subgroups 9-15). Major cell-type assignments from bNMF were highly concordant with the annotations by Puram et al (2017) (Fig S10B-D). Given the fundamental roles somatic mutations play in the cell-type heterogeneity of tumors, we reasoned that the type II-like behavior of high-complexity cancer microenvironments would be associated with relatively large degrees of somatic mutations.
We explored such a connection between transcriptomic and DNA-level complexities using single-cell data sets from multiple myeloma (MM) patients (Ledergor et al, 2018): we characterized three sets of malignant plasma cell samples derived from patients at different stages of disease progression: an asymptomatic, monoclonal gammopathy of undetermined significance (MGUS), a more advanced, smoldering multiple myeloma (SMM), and full MM stages. These disease stages exhibit progressively larger degrees of somatic copy number aberrations (Ledergor et al, 2018). The MGUS sample showed a clear type I behavior with the optimal rank of 8 and a strong monotonic decrease in evidence for higher ranks ( Fig 6G). The SMM sample showed a broader peak at rank 9 ( Fig 6H). The MM sample result, in contrast, was strongly indicative of a borderline behavior where type I would transition into type II (Fig 6I). This progression of evidence profiles supports the view that cancer disease progression and increases in somatic mutation load would typically cause a gradual replacement of type I by type II behaviors. Discussion Our approach for single-cell RNA-seq analysis confers a unique capability of assessing the degree of cell-type heterogeneity via unsupervised clustering with the number of subgroups rigorously determined from data. We showed with simulated data sets and existing PBMC/pancreatic single-cell data that the appropriate depth of subgrouping is generally dictated by data at hand and is largely independent of sample sizes. Our method allows us to not only infer this degree of complexity but also identity cellular subtypes with high accuracy and consistency (Figs 2, 3, and 4). In particular, the high degree of heterogeneity we found among pancreatic β-cells (Fig 3F and G) is consistent with existing experimental evidences (Wang & Kaestner, 2018). The prominence of peaks signifying the optimal rank-the range of heterogeneity most appropriate for the data set at hand-in samples of relatively low complexity (e.g., Figs 4G, 5A, and 6A-C), where statistical support clearly decreases for larger ranks, illustrates a key difference between ML approaches and bNMF: in ML methods, larger ranks using more parameters would generally result in better fit unless penalized. In contrast, explicit priors used in bNMF (γ distribution in our case) prevent overfitting. Our characterization of solid tumor microenvironments highlights the diversity in the degree of heterogeneity and the importance of assessing it adequately in transcriptomic studies. The highly pronounced and low value of optimal rank observed for in vitro tumor cell culture (Fig 5A) is in contrast with in vivo tumor microenvironments, which showed intermediate (type I, Fig 6A-C) to high (type II, Fig 6D-F) levels of heterogeneity. The latter two classes of heterogeneity each showed a relatively clear optimal rank and a lower bound for subgroup number with evidence equally supporting all higher depths, respectively. Although two type II samples (melanoma and HNSCC) contained primary and metastatic tumors from multiple patients (Table S1), which presumably contribute to heterogeneity, the multiplicity of patient/tumor of origin comprising each data set did not determine heterogeneity class by itself: the breast cancer immune cell data derived from a single patient belonged to type II (Fig 6E), whereas two type I cases (gliomas, Fig 6B and C) contained 6 and 10 patients, respectively. 
The tumor types and their heterogeneity classes in Fig 6B-F instead are broadly consistent with their known relative somatic mutation loads (glioma < breast cancer < HNSCC < melanoma; Alexandrov et al, 2013). A type II behavior in tumor samples thus suggests extensive cell-type heterogeneities spanning a substantial range of resolution, possibly down to levels reaching individual cells. Such a complex gene expression signature spanning multiple levels could arise from extensive diversification of tumor cells through somatic mutation, as suggested by the progression of MM samples in Fig 6G-I. In contrast, a single or narrow range of optimal ranks would signify a well-defined, finite set of subgroups, with cells in each subgroup relatively homogeneous in their expression profiles. Although we adopted the "pooled" analysis approach for samples containing multiple tumors, one may instead seek to extract shared molecular-level profiles independent of patient or tissue of origin, which would require incorporation of a batch effect-removal strategy (Dal Molin & Di Camillo, 2018; Haghverdi et al, 2018). Such a multi-sample extension may take the form of a statistical procedure deriving a consensus subgrouping depth among multiple values optimal for each constituent sample. ML-NMF We implemented ML (Lee & Seung, 2000) and variational bNMF inference with γ priors (Cemgil, 2009) for the factorization of count data. A statistical inference-based formulation of NMF regards each element of the count matrix X (m rows for genes and n columns for cells) as a realization of a sum of r Poisson random variables, $X_{ij} = \sum_{k=1}^{r} S_{ikj}$, where $S_{ikj} \sim \mathrm{Poisson}(\lambda = W_{ik}H_{kj})$ is a "latent source" variable. The matrices W and H are the basis and coefficient factor matrices, of dimensions m × r and r × n, respectively. The intermediate dimension r (rank) typically satisfies r ≪ m and r ≪ n. Using the known property that the distribution of a sum of Poisson random variables is Poisson with mean equal to the sum of the individual means, one has $X_{ij} \sim \mathrm{Poisson}(\lambda = (WH)_{ij})$. One can then write for the likelihood of the data $\ln \Pr(X \mid W, H) = \sum_{ij} \ln\big[ e^{-(WH)_{ij}} (WH)_{ij}^{X_{ij}} / X_{ij}! \big] \approx \sum_{ij} \big[ X_{ij} \ln (WH)_{ij} - (WH)_{ij} - X_{ij} \ln X_{ij} + X_{ij} \big]$, where Stirling's approximation was used in the second line. The likelihood then takes the form of $\ln \Pr(X \mid W, H) = \sum_{ij} \big[ X_{ij} \ln \{ (WH)_{ij} / X_{ij} \} - (WH)_{ij} + X_{ij} \big]$ (5). The right-hand side of Equation (5) is the negative of the generalized KL divergence (Lee & Seung, 2000), which is minimized under the ML condition. An expectation-maximization treatment applied to Equation (5) (Cemgil, 2009) leads to the iterative update rules for W and H first derived by Lee & Seung (1999). We used ML inference with randomized initial conditions, where multiple iterations were seeded by identically distributed initial matrix elements. Convergence was tested with fractional changes to the log likelihood below a cutoff (10^-5). The quality measures we considered were the dispersion and the cophenetic correlation. The dispersion was defined with respect to the consistency matrix. The consistency matrix C is an n × n matrix with elements $C_{jl} = E(\delta_{jl})$, where $\delta_{jl}$ is the Kronecker δ, equal to 1 if cell j and cell l belong to the same cluster and zero otherwise, and the mean is taken over factorization results with different initial conditions. A given cell j is assigned to the cluster $k^{*} = \arg\max_k H_{kj}$ within a factorization outcome. The dispersion, a measure between 0 and 1 for the consistency of cluster assignment over multiple inferences, was defined as $D = (4/n^2) \sum_{j,l} (C_{jl} - 1/2)^2 = 1/n + (8/n^2) \sum_{j<l} (C_{jl} - 1/2)^2$, i.e., the mean squared deviation of the consistency matrix from the null value 1/2.
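As an illustration of the ML-NMF updates just described, the following is a minimal NumPy sketch of the Lee & Seung multiplicative rules that minimize the generalized KL divergence of Equation (5). It is not the ccfindR implementation (which is an R package); the function name, initialization, iteration budget, and tolerance are illustrative assumptions.

import numpy as np

def ml_nmf(X, r, n_iter=1000, tol=1e-5, seed=0):
    # Poisson (generalized-KL) NMF via Lee-Seung multiplicative updates.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    ll_old = None
    for _ in range(n_iter):
        H *= (W.T @ (X / (W @ H))) / W.sum(axis=0)[:, None]
        W *= ((X / (W @ H)) @ H.T) / H.sum(axis=1)[None, :]
        WH = W @ H
        ll = np.sum(X * np.log(WH) - WH)  # log-likelihood up to X-only terms
        if ll_old is not None and abs(ll - ll_old) < tol * abs(ll_old):
            break                          # fractional-change convergence test
        ll_old = ll
    return W, H

The multiplicative form keeps W and H nonnegative by construction, which is why these rules are typically preferred over plain gradient steps for this objective.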
The factor of 4 rescales the value such that max(D) = 1, and in the second expression we separated the diagonal term, for which $C_{jj} = 1$; the second summation is over the upper triangular part of C. The cophenetic correlation was defined as $P = \mathrm{corr}(1 - C_{jl}, h_{jl})$, i.e., the correlation between the dissimilarity derived from the consistency matrix and the height $h_{jl}$ within the dendrogram from hierarchical clustering at which cell j and cell l merge (Sokal & Rohlf, 1962; Brunet et al, 2004). The cophenetic correlation P measures the degree to which the dissimilarity between two cells, $1 - C_{jl}$, is preserved in hierarchical clustering. We used the "hclust" function in R with the "average" method for the computation of P. bNMF We used Bayesian inference, evaluating the marginal likelihood or evidence, $\Pr(X \mid \Theta, r) = \int dW \, dH \sum_{S} \Pr(X \mid S) \Pr(S \mid W, H) \Pr(W, H \mid \Theta, r)$ (8), where Θ is the set of hyperparameters for the prior distributions of the factor matrices W and H. Both the hyperparameters and the rank r can be chosen by maximizing the evidence [Equations (1) and (2)]. In practice, hyperparameters are updated during iteration for a given rank and the inference is repeated for multiple rank values. The resulting (log) evidence values can then be compared to find $r_{\mathrm{opt}}$. We assumed all matrix elements were identically distributed by γ priors with shape (a) and rate (b) parameters, $W_{ik} \sim \gamma(W_{ik}; a_w, b_w)$ and $H_{kj} \sim \gamma(H_{kj}; a_h, b_h)$ (9), such that $\Theta = \{a_w, b_w, a_h, b_h\}$. We used update equations for the posterior means of the latent and factor elements resulting from a variational approximation to Equation (8) (Cemgil, 2009). We typically held the hyperparameters fixed for the initial 10 iterations and updated them every step thereafter. The overall procedure of bNMF inference is summarized as follows: 1. Choose a maximum rank $r_{\max}$ and consider all ranks $r = 2, \ldots, r_{\max}$. For each r: a. Factorize the count matrix X using a random initial guess for $W^{(p)}$ and $H^{(p)}$ sampled from Equation (9) (see Algorithm 1 in Cemgil (2009)). Store the corresponding log evidence $U_p(r)$. b. Repeat (a) for a given number of different initial conditions and find $p^{*} = \arg\max_p U_p(r)$. Store $W^{(p^{*})}$ and $H^{(p^{*})}$ for the rank r. 2. Construct the evidence versus rank profile via $\{U_{p^{*}}(r)\}$, $r = 2, \ldots, r_{\max}$. Find the optimal rank $r_{\mathrm{opt}}$ for which $U_{p^{*}}(r)$ is maximum (Fig 1B; see below). 3. Construct the subgroup tree connecting rank r = 2 and $r_{\max}$ (see below). 4. Use (W, H) under rank $r_{\mathrm{opt}}$ to derive metagene lists and assign cells to subgroups (Fig 1C). The computational requirements of bNMF inference scaled linearly with increasing matrix dimensions (Fig S11). Because factorizations for each rank and initial condition are independent, the computation is easily distributed over multiple cores with linear speed-up. Determination of optimal rank We determined the heterogeneity class and optimal rank based on the evidence defined by Equation (8). We assumed that the support from the data for rank r′ is statistically more significant than that for rank r if the Bayes factor satisfies $\mathrm{BF} = \Pr(X \mid \Theta^{*}, r') / \Pr(X \mid \Theta^{*}, r) > T^{n}$, where T is a threshold (Held & Ott, 2018). The exponent n takes into account the fact that the data X contain n samples. We used T = 3 in this work. In terms of the log evidence per matrix element, $\epsilon(r) = [\ln \Pr(X \mid \Theta^{*}, r)]/(nm)$, we then have $\epsilon(r') - \epsilon(r) > L = (\ln T)/m$ (11). The left-hand side of Equation (11) becomes the slope of the log evidence if r′ = r + 1. We used the following procedure to classify the heterogeneity type and determine the optimal rank:
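The dispersion and cophenetic correlation defined above can be computed from repeated cluster assignments as sketched below. SciPy's average-linkage routines stand in for R's hclust with the "average" method, and the function names are illustrative.

import numpy as np
from scipy.cluster.hierarchy import average, cophenet
from scipy.spatial.distance import squareform

def consistency_matrix(assignments):
    # assignments: list of length-n cluster label arrays, one per NMF run
    runs = np.asarray(assignments)
    return (runs[:, :, None] == runs[:, None, :]).mean(axis=0)

def dispersion(C):
    # D = (4/n^2) * sum_jl (C_jl - 1/2)^2, rescaled so that max(D) = 1
    n = C.shape[0]
    return 4.0 * np.sum((C - 0.5) ** 2) / n ** 2

def cophenetic_corr(C):
    # correlate 1 - C_jl with cophenetic heights of an average-linkage tree
    d = squareform(1.0 - C, checks=False)   # condensed dissimilarities
    coph, _ = cophenet(average(d), d)
    return coph

Given label vectors from several factorization runs, C = consistency_matrix(labels), D = dispersion(C), and P = cophenetic_corr(C) reproduce the two quality measures.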
1. Replace the evidence profile data $\epsilon(r)$, $r = r_{\min}, \ldots, r_{\max}$, by its cubic smoothing spline to reduce artefacts from statistical noise. We used the "smooth.spline" function in R with degrees of freedom $\mathrm{d.f.} = \min(10, r_{\max} - r_{\min} + 1)$; we used a larger d.f. if the fit was inadequate. Find $r^{*} = \arg\max_r \epsilon(r)$. 2. If $|\epsilon(r_{\max}) - \epsilon(r^{*})| > L$, the class is type I and $r_{\mathrm{opt}} = r^{*}$. 3. Otherwise, the class is type II. Compute the slope $s(r) = \epsilon(r+1) - \epsilon(r)$ if $r = r_{\min}$; $s(r) = \epsilon(r) - \epsilon(r-1)$ if $r = r_{\max}$; and $s(r) = [\epsilon(r+1) - \epsilon(r-1)]/2$ otherwise (12); $r_{\mathrm{opt}}$ is then the lowest rank for which $s(r) < L$. If no such rank exists, $r_{\mathrm{opt}} = r_{\max}$. Software availability An R package implementing the algorithm is available as a Bioconductor package, https://bioconductor.org/packages/ccfindR. Simulated data We generated simulated data to characterize the rank determination of bNMF algorithms in two different ways. First, for given numbers of genes m, rank r, and total number of cells $n = r n_c$ ($n_c = 20$ and r = 10 in Fig 2, such that $n = r n_c = 200$), we set the coefficient matrix H such that $H_{kj} = 1$ for $j = (k-1)n_c + 1, \ldots, k n_c$, $k = 1, \ldots, r$, and zero otherwise. The basis matrix W was set by dividing the m rows into r groups and assigning the elements of each group of rows by sampling from multinomial distributions of given total counts with uniform probabilities. The count matrix X = WH was used after randomly shuffling rows and columns. ML-NMF and bNMF inferences used 50 different initial conditions for each rank. The PCA-based analysis (Fig 2D) used Seurat (Macosko et al, 2015) with 10 principal components (Fig S1C). We varied the resolution parameter, an input to the "FindCluster" function, with default values for the other parameters. The bNMF inference was repeated for different matrix sizes as indicated in Fig 2C. We used a realization of simulated data generated under rank 5 to determine the distribution of relative outlier cells (Fig S2). The bNMF factorization results were visualized using tSNE (Fig S2B) and relative outliers were identified using the function "cov.mcd" in the R package "MASS" with default parameters. We tested the convergence of bNMF by generating a second set of simulated data using basis W and coefficient H matrices whose elements were sampled from their γ prior distributions with a given set of hyperparameters. We chose these hyperparameter values in Fig S3 as $a_w = a_h = 0.1$ and $b_w = b_h = 1$. The number of features ("genes") was fixed at 100, and we considered three values for the total number of cells (n = 10, 100, and 1,000). We computed the product of the sampled matrices W and H, whose elements were used as the mean values for the Poisson counts. Multiple realizations (100) of these count matrices for the single set of mean values given by WH were generated for each sample size, and bNMF inference was performed separately (10 different initial conditions per rank) to determine the log evidence versus rank profiles, the optimal rank statistics, and the distribution of final hyperparameter values (Fig S3). Gene selection We applied quality control filtering to the count matrix and gene/cell annotation data to select features with high variance for subgrouping (Fig S4). We used processed RNA count matrices of publicly available single-cell data sets (Table S1). We computed the variance-to-mean ratio (VMR) for all genes and selected genes with VMR above a cutoff.
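Steps 1-3 above can be sketched as follows. SciPy's UnivariateSpline is a stand-in for R's smooth.spline (its smoothing factor s is an arbitrary assumption, not equivalent to the d.f. control above), and np.gradient reproduces the one-sided/centered slopes of Equation (12) for uniformly spaced ranks.

import numpy as np
from scipy.interpolate import UnivariateSpline

def classify_rank_profile(ranks, eps, m, T=3.0):
    # L = ln(T)/m: per-gene log-evidence threshold from the Bayes factor rule
    ranks = np.asarray(ranks, float)
    eps = np.asarray(eps, float)
    L = np.log(T) / m
    # cubic smoothing spline; smoothing factor is an illustrative choice
    spl = UnivariateSpline(ranks, eps, k=3, s=0.01 * len(ranks) * np.var(eps))
    e = spl(ranks)
    r_star = ranks[np.argmax(e)]
    if abs(e[-1] - e.max()) > L:
        return "I", int(r_star)      # clear interior maximum: type I
    s_r = np.gradient(e, ranks)      # slopes as in Equation (12)
    below = np.where(s_r < L)[0]
    r_opt = ranks[below[0]] if below.size else ranks[-1]
    return "II", int(r_opt)          # evidence saturates: type II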
We also used a cutoff for the number of cells expressing each gene, such that only genes with nonzero counts in a minimum number of cells were included. For a subset of samples, we further expanded the pool of genes so that genes with relatively lower variance but potentially nontrivial count distributions would also be included: for each gene filtered out by the criteria above, we constructed its count distribution histogram, which is typically peaked at zero count and monotonically decreases with increasing count. For a varying fraction of genes, this histogram contained a mode (a local maximum at a nonzero count). We moved filtered genes back into the selection when such a mode existed in their count distributions (Fig S4). Data sets with unique molecular identifier counts were used without normalization. For data sets reporting transcripts per million or fragments per kilobase per million, we took log-transformed levels of these quantities as pseudo-Poisson counts. We used fresh PBMC and purified blood cell data (Zheng et al, 2017) from https://support.10xgenomics.com/single-cell-gene-expression/datasets. We generated two samples of different sizes by down-sampling the original PBMC data set (n = 34,289 and n = 6,857; 11,212 genes). We applied ML-NMF and bNMF (Fig 2E-G) to the smaller data set, finding the solution by maximum likelihood (ML-NMF) and by maximum evidence (bNMF). To annotate each cluster (Fig 3B and C), we first computed correlations between the mean RNA counts of bNMF subgroups and purified blood cell groups. We then used the "solve_LSAP" function of the R package "clue" (Hornik, 2005) to find the most likely assignment of bNMF subgroups to purified cell types. The annotation shown in Fig 3B is a consensus of this assignment and the metagene/marker lists (Fig 3A). With Seurat, we used the smaller PBMC data set (n = 6,857) and applied the quality control procedure of cell filtering with the proportion of mitochondrial genes less than 0.08 and a minimum unique molecular identifier count of 100. Variable genes were selected with the range of mean expression levels between 0.02 and 3 and log VMR above 0.5, which yielded 1,773 genes and 6,847 cells. We used seven principal components based on the elbow plot (Fig S1F) and varied the resolution parameter to obtain Fig 2H. We assessed the reliability of cell-type identification by bNMF using random mixtures of purified blood cell data containing from two to seven cell types (Fig 4A-F). One hundred random realizations of up to seven cell types (CD8+ CTLs, CD19+ B cells, CD14+ monocytes, CD4+ Th, Treg, CD56+ NK cells, and CD34+ HSCs, each containing 100 cells) were generated by sampling columns from the purified cell count matrices, and the count matrices of each realization were constructed by combining these columns. Rank determination and metagene identification in bNMF were performed for each realization after selecting genes with a minimum VMR of 1 and a minimum number of 10 cells expressing the gene. Factorizations were performed for 50 different realizations of mixtures, each with 10 initial conditions. Rank values with maximum evidence from each realization were extracted to obtain the distributions shown in Fig 4A-F. Annotation scores in Fig 4I were calculated for four-cell-type mixtures, first for the case of the equal composition of Fig 4C, and then for "common + rare" mixtures containing 180, 20, 20, and 180 cells of CTLs, B cells, monocytes, and Th cells, respectively.
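A minimal sketch of the VMR-based gene selection described above (without the count-distribution-mode rescue step); the default cutoff values are the ones quoted for the mixture experiments and are otherwise sample-dependent.

import numpy as np

def select_genes(X, vmr_cutoff=1.0, min_cells=10):
    # X: genes x cells count matrix
    mean = X.mean(axis=1)
    var = X.var(axis=1)
    vmr = np.divide(var, mean, out=np.zeros_like(var), where=mean > 0)
    expressed = (X > 0).sum(axis=1)   # number of cells with nonzero counts
    return np.where((vmr > vmr_cutoff) & (expressed >= min_cells))[0]

For example, select_genes(X, vmr_cutoff=2, min_cells=100) corresponds to the thresholds quoted below for the pancreatic tissue sample.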
Comparison of cell-type composition predictions from single-cell analysis and bulk data deconvolution was performed by summing the RNA counts of fresh PBMC (Fig 3D) and of a realization of seven-blood-cell mixtures (CTLs, B cells, monocytes, CD4+ Th, Treg, NK cells, and HSCs of count n = 100, 80, 120, 100, 80, 80, 80, respectively; Fig 3E) over all genes under consideration. We used these bulk counts as input to CIBERSORT at https://cibersort.stanford.edu/ with default parameters. Metagene identification To characterize the subgroups derived from bNMF inference under the optimal rank r (9 for PBMC), we took the basis matrix elements $W_{ik}$ and analyzed them column by column. For each subgroup indexed by k = 1, …, r, we rescaled the vector $W_{ik}$ by dividing each row (the basis component of gene i in each subgroup k) by its geometric mean over k, such that different genes would have basis components comparable in magnitude. For k running from 1 to r, we then reordered the rows of W such that the k-th column would have monotonically decreasing magnitudes from top to bottom. We subsequently looked at the top m rows of the sorted matrix and selected genes whose rows within the submatrix given by i = 1, …, m had their maximum elements at position k. The genes corresponding to these rows were defined as the metagenes of subgroup k. This definition avoids picking genes that feature strongly in one subgroup but even more so in other subgroups, instead focusing on those that help identify the given subgroup uniquely (Carmona-Saez et al, 2006). These steps were repeated for all k. Note that the maximum number of metagenes per subgroup is m, and we often found the actual numbers to be smaller. Marker genes, preselected for PBMC in addition to the genes with high variance, were considered together with the m genes in the above procedure, such that the actual maximum size of the metagene-plus-marker set was m plus the total number of markers. As can be seen in Fig 3A, however, each marker gene appears only once, in the subgroup in which the marker contribution is strongest. Subgroup tree construction We inferred hierarchical relationships between subgroups obtained under different ranks by comparing the cellular subgroup memberships of neighboring ranks. Specifically, we used the series of coefficient matrices with elements $H^{(r)}_{kj}$ for ranks $r = 2, \ldots, r_{\mathrm{opt}}$, where $r_{\mathrm{opt}}$ is the optimal rank, to derive the subgroup index of cell j under rank r, given by $c_{j,r} = \arg\max_k H^{(r)}_{kj}$. For each subgroup k under rank r + 1, we then tabulated the subgroup indices $c_{j,r}$ of all cells j belonging to subgroup k and defined the subgroup of origin by $I_{k,r+1} = \arg\max_{k'} \sum_{j \in k} \delta(k', c_{j,r})$, where δ(x, y) = 1 if x = y and zero otherwise, and the summation is over all cells belonging to subgroup k under rank r + 1. The subgroup of origin $I_{k,r+1}$ is the subgroup under rank r with the highest count of cells in subgroup k under rank r + 1. In rare cases where there were ties in the ranking for the subgroup-of-origin count, we broke the tie randomly such that $I_{k,r+1}$ would be uniquely defined for all k. We then grew the tree at a given r by connecting subgroup k under rank r + 1 to subgroup $I_{k,r+1}$ under rank r. In most cases, this step resulted in the bifurcation of a subgroup under rank r, but triple-branching also occurred occasionally. We repeated this procedure sequentially for $r = 2, \ldots, r_{\mathrm{opt}} - 1$ to complete the tree.
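The subgroup tree construction just described reduces to a majority vote over cell assignments at neighboring ranks. A minimal sketch, assuming coefficient matrices are available for each rank (ties are broken deterministically by lowest index here, rather than randomly as in the text):

import numpy as np

def subgroup_tree(H_list):
    # H_list: coefficient matrices H^(r) for r = 2, ..., r_opt, shapes (r, n)
    edges = []
    for H_lo, H_hi in zip(H_list[:-1], H_list[1:]):
        c_lo = H_lo.argmax(axis=0)        # c_{j,r}: assignments at rank r
        c_hi = H_hi.argmax(axis=0)        # assignments at rank r + 1
        r_lo, r_hi = H_lo.shape[0], H_hi.shape[0]
        for k in range(r_hi):
            members = c_lo[c_hi == k]     # rank-r labels of cells in subgroup k
            counts = np.bincount(members, minlength=r_lo)
            edges.append(((r_lo, int(counts.argmax())), (r_hi, k)))
    return edges                          # ((r, parent), (r + 1, child)) pairs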
Pancreatic tissue sample We downloaded the human pancreatic tissue single-cell count matrix (patient 1; Baron et al, 2016) via accession number GSE84133. We used all 1,937 cells in the count matrix and selected 2,454 genes using a minimum VMR of 2 and a minimum number of 100 cells expressing each gene. The rank scan for r up to 40 used 20 initial conditions for each rank. Cancer samples We used processed RNA count matrices of cancer samples via accession numbers GSE81383, GSE97168, GSE70630, GSE89567, GSE72056, GSE114724, GSE117156, and GSE103322, for melanoma cell culture, lung cancer immune cells, oligodendroglioma, astrocytoma, melanoma, breast cancer immune cells, MM, and HNSCC, respectively (Table S1). We used all cells and selected genes using thresholds for VMR and the number of cells expressing each gene, as indicated in Fig S4, to obtain count matrices of the dimensions shown in Table S1. For MM samples, immunoglobulin genes were excluded (Ledergor et al, 2018) in addition to VMR-based filtering. We chose patient ID BC09 (tumor 01) for the breast cancer immune cell sample (Azizi et al, 2018). For MM samples, we used patient IDs MGUS01, SMM01, and MM01 (Ledergor et al, 2018). J Woo: data curation, software, formal analysis, visualization, methodology, and writing-original draft, review, and editing. B Winterhoff: resources, data curation, investigation, and writing-review and editing. T Starr: resources, data curation, investigation, and writing-review and editing. C Aliferis: conceptualization, resources, data curation, investigation, and writing-review and editing.
2019-07-04T13:05:55.918Z
2019-07-02T00:00:00.000
{ "year": 2019, "sha1": "0da2e17bde358e5ee8bcef6796ce1922aab1350d", "oa_license": "CCBY", "oa_url": "https://www.life-science-alliance.org/content/lsa/2/4/e201900443.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "7a47dcb9a750c4327f413c7296d80d829b96469f", "s2fieldsofstudy": [ "Medicine", "Computer Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
16066238
pes2o/s2orc
v3-fos-license
RADIOGRAPHIC STUDY ON THE ACROMION INDEX AND ITS RELATIONSHIP WITH ROTATOR CUFF TEARS Objective: The purpose of this study was to evaluate the relationship between the lateral projection of the acromion and rotator cuff tears (RCTs) in the Brazilian population. Methods: The lateral projection of the acromion was measured using anteroposterior radiographs of the shoulders, carried out with the glenoid cavity in absolute profile and the humeral head in the neutral position or with internal rotation. The acromion index (AI) was defined as the ratio between the distance from the plane of the glenoid cavity to the lateral edge of the acromion and the distance from the plane of the glenoid cavity to the lateral edge of the humeral head. This index was measured in 83 patients (mean age of 54 years) with RCTs and compared with a group of 28 individuals (mean age of 48 years) without RCTs. The presence or absence of RCTs was determined by means of magnetic resonance imaging. Results: The mean AI was 0.7194 for the patients with RCTs and 0.6677 for the individuals without RCTs, in the Brazilian population. This difference was statistically significant, with P < 0.001. Conclusion: A relationship can be established between AI and rotator cuff tears in the Brazilian population. INTRODUCTION The etiology of rotator cuff tears (RCTs) is still controversial (1). However, they have been correlated with the shape of the acromion (2). Bigliani et al (3) described three types of acromion and correlated type III (hooked) with a greater prevalence of RCTs. Wang and Shapiro (4), along with Ikemoto et al (5), reported a greater prevalence of this type of acromion among older patients. Zuckerman et al (6) conducted a morphometric study on the shoulders of cadavers and found greater anterior projection and less inclination of the acromion in cadavers with RCTs, in comparison with those without such lesions. Another parameter for evaluating the shape of the acromion is its lateral angulation, which was studied by Banas et al (7) using magnetic resonance images. They found smaller angles in patients with RCTs. Analyzing the lateral appearance of the acromion shape, Nyffeler et al (2) and Torrens et al (8) found a direct relationship between the lateral projection of the acromion and the presence of RCTs. From this relationship, Nyffeler et al (2) proposed an explanatory model in which the vector of the resultant muscle force of the deltoid would be influenced by the lateral projection of the acromion. Contraction of the deltoid muscle during active abduction would pull the humeral head upwards and would also press it against the glenoid cavity. The orientation of the resultant force vector depends on the orientation of the muscle fibers of the deltoid at their origin in the acromion.
The more lateral their origin in the acromion, the greater the ascending component of the resultant force will be; and the smaller the lateral projection of the acromion, the greater the compressive component of the force against the glenoid cavity will be (Figure 1). It might be imagined that a greater ascending force component (Fa) would favor subacromial impingement and, consequently, degenerative changes to the supraspinatus tendon, while a greater compressive force (Fc) would favor degenerative changes to the shoulder joint (2). However, there is no consensus in the literature regarding this relationship, given that neither Van Nüffel and Nijs (9) nor Itoi* found it in their studies, even though their work was carried out using similar methodology. Our study had the aim of evaluating the shape of the acromion, and specifically its lateral projection, using methodology similar to that of Van Nüffel and Nijs (9) and Itoi*, with radiographic measurements using an index that was then correlated with RCT occurrences. SAMPLE AND METHODS Radiographs were used from patients who had undergone operations performed by the Shoulder and Elbow Surgery Group of the Department of Orthopedics and Traumatology, School of Medical Sciences, Santa Casa de Misericórdia de São Paulo, Pavilhão "Fernandinho Simonsen", between July 1995 and December 2007. The shoulder radiographs were standardized and only those that had been produced with correction for anteversion of the glenoid cavity were used. The arm was radiographed in a resting position alongside the body, with the proximal region of the humerus in a neutral position or with internal rotation. According to the study published by Nyffeler et al (2), there is no difference in measuring the acromion index with the shoulder in a neutral position or with internal rotation. Two measurements were made on these radiographic images, taking as reference points the plane of the glenoid cavity, the lateral extremity of the humeral head and the lateral extremity of the acromion. The distance between the lateral extremity of the acromion and the plane of the glenoid cavity was called GA. The distance between the lateral extremity of the humerus and the plane of the glenoid cavity was called GU. The ratio between the values of GA and GU forms an index known as the acromion index (AI) (Figure 2). There was no concern regarding the distance between the ampoule of the X-ray apparatus and the radiographic film, since the AI is a ratio and changes to these parameters would not interfere with the result. To check that variation in the inclination of the X-ray ampoule would not alter the AI measurements, we created a control group of 10 patients. This group underwent radiography centered on the glenoid cavity in anteroposterior view, at angles of 0°, 30° of caudal inclination and 30° of cranial inclination. The AI from these radiographic views was measured and subjected to statistical analysis (Friedman test); a sketch of this computation is given below. This analysis on the three views showed similar values, with a p-value of 0.999, and it was concluded that the inclination of the X-ray ampoule did not have any influence on the result (Table 1). The inclusion criterion was that the patients selected should present a completely torn rotator cuff, proven by magnetic resonance images and through observation during surgery. For the control group, patients treated for shoulder diseases who did not show RCTs on magnetic resonance images were selected.
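A small sketch of the AI computation and of the ampoule-inclination check described above. The numerical readings below are hypothetical placeholders, not the study's data, and scipy.stats.friedmanchisquare stands in for the Friedman test used.

import numpy as np
from scipy.stats import friedmanchisquare

def acromion_index(ga, gu):
    # AI = GA / GU; unit-free, so film distance and magnification cancel out
    return ga / gu

# hypothetical AI readings for 10 control shoulders at three ampoule angles
rng = np.random.default_rng(1)
ai_neutral = rng.normal(0.68, 0.03, 10)
ai_caudal = ai_neutral + rng.normal(0.0, 0.002, 10)
ai_cranial = ai_neutral + rng.normal(0.0, 0.002, 10)
stat, p = friedmanchisquare(ai_neutral, ai_caudal, ai_cranial)
print(f"Friedman statistic = {stat:.3f}, p = {p:.3f}")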
We defined the exclusion and non-inclusion criteria as cases with antecedents of fractures of the scapular girdle, arthritis, degenerative arthrosis, osteonecrosis and sequelae from infection. The patients selected were divided into two groups: group I, with a completely torn rotator cuff (83 cases); and group II, with an undamaged rotator cuff (28 cases) (Table 2). The radiographs were digitized using a scanner (HP Deskjet F4180®). These images were then analyzed using the Image J 1.41 software (Wayne Rasband, Research Services Branch, National Institute of Mental Health, Bethesda, Maryland, USA), which is available for download from the website http://rsbweb.nih.gov. This enables precise measurement of distances in figures, starting from a calibration parameter. A ruler marked out in millimeters was used as the calibration parameter (Figure 2). The data obtained were subjected to statistical analysis by means of Student's t test, which was controlled using Levene's test for equality of variance, with a significance level of 5% (a sketch of this two-stage procedure is given below). The chi-square test was also used, with the aim of investigating a possible difference in the sex distribution between the study groups. RESULTS In group I, the mean age was 54 years, with a range from 32 to 77 years, while in group II, the mean age was 48 years, ranging from 35 to 63 years. In group I, females predominated, accounting for 64% (53 women). This was also found in group II, in which 57% of the patients were female (16 women). However, this difference in sex distribution between groups I and II was not statistically significant (p = 0.527) (Table 2). With regard to the side affected, there was a predominance of the right side in both groups, accounting for 78% of the shoulders in group I and 64% of the shoulders in group II (Table 2). We found a mean AI of 0.7194 among the individuals who presented RCTs (group I), and a mean AI of 0.6677 among the individuals who presented an undamaged rotator cuff (group II). The statistical analysis showed a correlation with p = 0.001, i.e. there was a statistically significant relationship between RCTs and greater lateral projection of the acromion. DISCUSSION In 1972, Neer (10) published an important study on impingement syndrome and identified forces impacting on the lower portion of the acromion, the coracoacromial ligament and the inferior surface of the acromioclavicular ligament as the agents responsible for the narrowing of the subacromial space, which led to tendon lesions. However, it is now known that the pathogenesis of RCTs is probably multifactorial (1). One of the possible causes of such lesions is greater lateral projection of the acromion, as proposed by Nyffeler et al (2). However, there is no consensus in the literature in relation to this association, in the way seen for the anterior projection of the acromion (3,6,7,10). Our results support the theory of Nyffeler et al (2), since individuals with RCTs presented a greater AI, i.e. greater lateral projection of the acromion. This shape of the acromion causes the origin of the deltoid to be more lateral, thereby producing a resultant force with a more ascendant orientation (Fa), which probably favors subacromial impingement. Among the patients with an undamaged rotator cuff, this index was lower, i.e. there was less lateral projection of the acromion, with a resultant force oriented more towards compression (Fc) against the glenoid cavity.
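A sketch of the two-stage comparison described above, in which Levene's test for equality of variances decides between Student's t test (equal variances) and Welch's t test; the function name, group arrays and alpha level are assumptions for illustration.

import numpy as np
from scipy.stats import levene, ttest_ind

def compare_groups(ai_rct, ai_control, alpha=0.05):
    # Levene's test controls the choice of Student's vs Welch's t test
    _, p_levene = levene(ai_rct, ai_control)
    equal_var = p_levene > alpha
    t, p = ttest_ind(ai_rct, ai_control, equal_var=equal_var)
    return t, p, equal_var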
Although not part of the objective of our study, we did not find any signs of arthrosis in the shoulder joints of these patients, as this theory would suppose. Our findings also in some way corroborate the results of Torrens et al (8), given that we also found a relationship between greater lateral projection of the acromion and occurrences of RCTs, even though the calculation used to determine the lateral projection of the acromion was made differently, i.e. using another index. On the other hand, our results and those of Nyffeler et al (2) are not the same as those of Van Nüffel and Nijs (9) and Itoi*, even though we used the same way of measuring the lateral projection of the acromion and our groups were similar regarding sex and age. The only difference was that both the group with RCTs and the control group in the study by Itoi* consisted mostly of men, which could constitute a form of bias, given that there have been reports that RCTs are more prevalent among women, although this too is not a matter of consensus in the literature (11). The individuals with RCTs and with undamaged rotator cuffs who were selected both in previous studies and in ours had mean ages between 45 and 65 years. Thus, they were in phase III of the impingement syndrome, as described by Neer (12), i.e. the stage at which complete tearing of the rotator cuff occurs. Another factor that might have influenced the results was that the radiographic images used for measuring the AI were not standardized. However, we took care to select radiographs with correction for anteversion of the glenoid cavity, and in which the humerus was in a neutral position or in internal rotation, as already demonstrated by Nyffeler et al (2), since in this way neither position would have any influence on the AI. Torrens et al (8) reported previously that the distance from the ampoule to the film at the time of producing the radiograph also did not alter the AI, because the AI is determined as the ratio between two measurements made on the same radiographic image. In our study, we demonstrated that the angulation of the X-ray ampoule also did not influence the determination of the AI, thereby avoiding such bias in carrying out our study. One factor that should be taken into consideration is that Itoi* made measurements on a Japanese population, Nyffeler et al (2) on a Swiss population and Van Nüffel and Nijs (9) on a Belgian population, while our study was conducted on a Brazilian population. Since each ethnic group has its own characteristics, it could be that the differences between the results come from a factor relating to the biotype of each ethnic group. However, to investigate this question better, further studies should be conducted, taking into consideration other morphological parameters such as the type of acromion and the lateral angle and anterior inclination of the acromion, in an attempt to find the precise etiological factor that causes RCTs. CONCLUSION We conclude that RCTs may be associated with a greater AI, i.e. greater lateral projection of the acromion.
2016-05-12T22:15:10.714Z
2010-03-01T00:00:00.000
{ "year": 2010, "sha1": "a29baa8dcb77796e02f4eaf50e4d1b324acf3c02", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/s2255-4971(15)30285-8", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "26849370b4505d399a54256a01c8099ef1093cd0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
53139018
pes2o/s2orc
v3-fos-license
Conservation and preservation of medicinal plants-leads from Ayurveda and Vrikshayurveda For instance, "Sacred groves" were dedicated to a deity or a village God, protected, and worshipped, like Devarakaadu near Shimoga, India. The Sacred Groves are important repositories of floral and faunal diversity that have been conserved by local communities in a sustainable manner. They are present in Himachal Pradesh, Maharashtra, Kerala, Karnataka, and other places and highlight community-managed conservation efforts.1 Historically, the protection of nature and wildlife was an ardent article of faith, reflected in the daily lives of people, enshrined in myths, folklore, religion, arts, and culture. Such traditional cultural attitudes, though based on religious faith, have made a significant contribution to the protection and propagation of various species of trees and plants in India; for example, the use of bael in summer associated with the Ramanavami celebration, Durva for Lord Ganesha, the Parijatha plant for Lord Krishna, Bilwa for Lord Ishwara, and so on. For the people of India, environmental conservation is not a new concept. Sustainability was ingrained in the thought processes of early Indians, as is evident from the teachings of the Vedas. Perhaps no other culture can provide such a profound variety of cultural practices and so ecologically sound a relationship with nature as the Indian. For example, a hymn in the Atharva Veda (12.1.35) says "Whatever I dig out from you, O Earth! May that have quick regeneration again, may we not damage thy vital habitat and heart."2 Implicit here are the following principles: in the process of harvest, no damage should be done to the earth; humans are forewarned not against the use of nature for survival, but against its overuse and abuse. Introduction Today, when people throughout the world are disturbed by the degradation of the environment and the disastrous consequences of this, traditional ethics of nature conservation could be looked upon as a source of inspiration and guidance for the future. The practice of allocating tree species to individuals based on lunar asterism, as in nakshatra vana and navagraha vrukshas, was also prevalent. Nakshatravana, Rashivana and Navagraha Vrukshas are other effective ideas for protecting trees and the environment. There has been a practice of allocating tree species to individuals based on the lunar asterism under which they are born (birth star trees, or Nakshatravanam); under this, all individuals are expected to take care of their birth star trees.3 Mythology has also been useful in encouraging the cultivation of certain plants that needed extra care. Socio-culturally valued species find a place in home gardens and courtyards. For example, Tulsi (Ocimum sanctum), a highly valued medicinal plant, is grown in every household in the centre of the courtyard and ritually watered even today. Vrikshayurveda mentions that one who grows Tulsi at home will reside in Vaikunta (Heaven) for 1000 years. It is also said that one who plants neem and mango trees on roadsides would attain liberation. Probably, these are counted as motivational factors for plant preservation based on mythology.4 Relevance of Vrikshayurveda Recognizing the significance of plant bio-resources of varied values in ancient India, emphasis was laid on the conservation of flora. Ancient texts contain many descriptions of the uses and management of forests and highlight sustainability as an implicit theme.
The treatise called Vrikshayurveda deals in depth with plants: their importance, the diseases they suffer from, their treatment, their protection from external factors, increasing their yield, and conservation techniques such as the protection of plants from mist, pests, etc. Chemical fertilizers show dramatic short-term benefits, but in the longer run they adversely impact the soil, water and perhaps the nutritional quality of the plants.7 Hence there is great scope to integrate traditional practices for better productivity of quality planting materials. The second chapter, "Bijoptivithi", illustrates the process of seed germination and explains the grading and preservation of seeds. The method described for seed preservation is to mix the seeds with ashes, and it was also suggested that the seeds should be exposed to medicated smoke, which can serve as an antimicrobial agent. Fertilizers are prescribed for undeveloped and underdeveloped trees and plants. "Drumaraksa" is the chapter that gives several pieces of advice on saving plants and trees from the weather and other conditions such as winds and storms. It also describes the medicinal plants applied to a broken branch to protect the whole tree from dying. The use of powders of Solanum indicum, Sesamum indicum, Embelia ribes and Brassica juncea, together with milk, ghee and cow dung, has been mentioned in almost all the texts for protection during storage.8 In addition to pre-treatments applicable to all seeds in general, treatments specific to particular plants have also been described. Various seed priming processes have been carefully designed in Vrikshayurveda to allow early germination and to obtain good-quality seedlings by following the classical techniques. A study conducted to compare the effects of Vrikshayurveda and modern cultivation techniques on the germination of Bakuchi has revalidated the germination behaviour of dormant seeds of Psoralia corylifolia.9 The chapter "Citrikarana" depicts some astounding techniques, such as making a plant bloom throughout the year irrespective of the seasons, bringing forth premature maturity in plants and fruits, and changing the shape and form of trees. For the nourishment of plants, the use of a biofertilizer called "Kunapajala" has been mentioned. Kunapajala as organic manure Kunapajala is a natural organic product derived from animal and plant products, containing a significant quantity of one or more of the primary nutrients, such as nitrogen, phosphorus and potassium, which are necessary for plant growth. The literal meaning of the Sanskrit word Kunapa is "smelling like the dead, or stinking", and the name is apt for this liquid manure, which is prepared using excreta, bones, body, flesh and marrow of animals, fish, decayed plant products, etc. Kunapajala has some plant growth regulatory actions through which it enhances the overall growth of plants. Being a liquid biofertilizer, it is a more convenient form of manure and can be beneficial for the growth of medicinal plants, with probably minimal toxic effects on the human body when compared with chemical fertilizers. Usually the raw organic matter decomposes into humus, which is further digested by soil microbes, producing high levels of organic acids such as humic, carbonic and fulvic acids and creating a high cation (+) exchange capacity. This capacity is responsible for the mobilization of calcium, potassium and other plant nutrients. In order to obtain good results, aerobic composting is said to be beneficial.
The nitrogen that is so essential for plant growth is supplemented by blood, cottonseed, fish meal and emulsion, etc., whereas compost from bird manures, bone meal, etc. is a rich source of phosphorus and potassium, which help in regulating root, bud, flower and fruit formation, cell division, sugar formation in the sap, chlorophyll production and photosynthesis, and in increasing crop resistance to diseases. The other important micronutrients are magnesium, calcium, zinc, manganese, copper, iron and selenium, which are also supplemented by the organic compost Kunapajala.10 Researchers suggest that application of the principles of Vrikshayurveda, such as Kunapajala, does produce phenomenal and interesting results. Since few research works have been carried out, this discipline of science needs to be developed through concerted research efforts to ascertain its utility. Advantages of organic farming Though chemical fertilizers increase the yield, they pose certain serious health threats to human beings, especially infants and pregnant and nursing mothers.11 Another concern for health is the contamination of medicinal plants with toxic heavy metals such as mercury, lead, cadmium, etc., through fertilizers and through harmful industrial wastes contaminating the water sources. In contrast, organic manures are considered safe and to yield good produce by improving water penetration, water-holding capacity, soil structure, microbial biomass, nutrient availability, and drought and heat stress resistance. They also help in improving the soil pH, which has an impact on plant growth and soil microbial activity.10 Studies using Kunapajala for growing Senna12 have shown that the total sennoside content per plant was higher. Similarly, for Langali (Gloriosa superba Linn.),13 the active principle colchicine (methanol extract) was found in a higher amount. When Kunapajala was used for brinjal (Solanum melongena), it produced a larger number of branches, a higher yield, fruits with fewer seeds and lower susceptibility to diseases when compared with plants grown with artificial fertilizer. Similar results have been found for mango, coconut, chilly, paddy, vegetables, etc.10 Thus Kunapajala, by virtue of its behaviour as a growth regulator, has been effective in increasing the leaf area and giving higher yields of flowers and fruits as well as phytoconstituents. Some major centres carrying out Vrikshayurveda-related work are CIKS, Chennai, the Asian Agri-History Foundation (AAHF), Secunderabad, and the National Institute of Vrikshayurveda, Jhansi. Prof. Nene and his group at the AAHF are promoting Vrikshayurveda in a big way. CIKS, Chennai is involved in promoting organic farming and works with farmers belonging to various villages in Tamil Nadu. They are also involved in the testing and validation of indigenous agricultural knowledge through rapid assessment of traditional agricultural practices.8 As a result of their experiments, as well as those of the Indian Council of Agricultural Research using modern research procedures, it has been shown that this traditional knowledge is valid beyond doubt.
The procedures are easy and economical too, which is an added advantage. Many of the raw materials listed in the Vrikshayurveda texts, such as flesh and bone of animals, husk, oil cakes, and dung and urine of cattle, are waste products, and the reutilization and recycling of these products will also result in their effective waste management. With the help of the ancient texts and model methods of agriculture, we can not only scientifically validate the sayings of the texts but also establish novel modified methods for agricultural systems. The proper interpretation and availability of Vrikshayurveda can also play an important role in the field of intercropping, promote the use of organic fertilizers, and play a crucial role in building an eco-friendly environment. An attempt has been made here to compile the traditional methods of conservation and preservation of medicinal plants. It is hoped that this ancient wisdom, coupled with modern technology, will benefit mankind.
2019-04-03T13:07:51.847Z
2018-10-10T00:00:00.000
{ "year": 2018, "sha1": "0fe076350491d9f7245bb1fa4da18bd9e2a072e7", "oa_license": "CCBYNC", "oa_url": "https://medcraveonline.com/IJCAM/IJCAM-11-00412.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5c49cb0cd4049a06826f5798bb6c88ea75e8a8e1", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
218933081
pes2o/s2orc
v3-fos-license
Architecture and Design of a Spiking Neuron Processor Core Towards the Design of a Large-scale Event-Driven 3D-NoC-based Neuromorphic Processor. Neuromorphic computing tries to model in hardware the biological brain, which is adept at operating in a rapid, real-time, parallel, low-power, adaptive and fault-tolerant manner within a volume of 2 liters. Leveraging the event-driven nature of Spiking Neural Networks (SNNs), neuromorphic systems have been able to demonstrate low power consumption by power gating sections of the network not driven by an event at any point in time. However, further exploration in this field towards the building of edge-application-friendly agents and efficient, scalable neuromorphic systems with large numbers of synapses necessitates the building of small-sized, low-power spiking neuron processor cores with efficient neuro-coding schemes and fault tolerance. This paper presents a spiking neuron processor core suitable for event-driven Three-Dimensional Network-on-Chip (3D-NoC) SNN-based neuromorphic systems. The spiking neuron processor core houses an array of leaky integrate-and-fire (LIF) neurons and utilizes a crossbar memory in modelling the synapses, all within a chip area of 0.12 mm², and achieves an accuracy of 95.15% on MNIST data set inference. (* Corresponding author e-mail: d8211104@u-aizu.ac.jp) Introduction Neuromorphic computing, which is aimed at modeling the biological brain in hardware, has gone through decades of research [1], and the ability of the biological brain to carry out rapid parallel computations in real time, in a fault-tolerant and power-efficient manner, is the inspiration behind it [2]. The third generation of Artificial Neural Networks (ANNs), the Spiking Neural Network (SNN), has proven to be more effective than its predecessors in this aim, mimicking more closely the behavior of a biological neuron. The computations of spiking neurons, like those of biological neurons, are event-triggered; spiking neurons communicate via spikes, which can be sparse, and this makes them process information only when spikes are received. Neuromorphic architectures take advantage of the sparsity of spikes in SNNs to reduce power consumption by power gating parts of the network that are not receiving spikes at any point in time. However, efficient neuromorphic hardware targeted towards edge applications and scalable neuromorphic architectures with large numbers of synapses require building small-sized neural processors with low power consumption, an efficient neuro-coding scheme, and fault tolerance. To enable scalability while maintaining minimal power consumption and footprint, we presented in our previous work [3] a Three-Dimensional Network-on-Chip (3D-NoC) SNN-based architecture, a different approach from the conventional 2D-NoC, which is limited in scalability and consumes more power, with increased latency and footprint, when scaling is attempted. The 3D-NoC-based SNN architecture utilizes the merits of Networks-on-Chip and 3D Integrated Circuits [4] to enhance the parallelism and scalability of a neuromorphic processor in the third dimension, minimizing power consumption and communication latency as a result of the short length and low power consumption of the Through-Silicon Vias (TSVs) [5] employed in inter-layer communication [6][7]. The 3D-NoC SNN-based architecture has the spiking neuron processor cores as its processing elements. These processing elements are connected in a 2D mesh topology to form tiles, which are then stacked to form the 3D structure.
Communication among the processing elements is made possible by 3D routers [8] (one for each spiking neuron processor core). In this work, we present the architecture and design of a spiking neuron processor core, described in Fig 1, suitable for the 3D-NoC-based SNN architecture. The spiking neuron processor core is designed using the leaky integrate-and-fire (LIF) spiking neuron model, which accumulates incoming spikes as membrane potential stored in a buffer while experiencing leak, and fires an output spike when the membrane potential crosses a threshold. We have chosen the LIF spiking neuron model because of its simplicity, while maintaining some degree of biological plausibility, making it easier to implement. In designing the spiking neuron processor core, we utilized an SRAM for the N×N crossbar-based synapses (N is the number of neurons), which has a synapse at each intersection of the horizontal and vertical wires that represent the axons and dendrites of the neurons. An SRAM is also used for the neuron and synapse memory. A control unit implemented as a finite state machine is used to control the operations of the spiking neuron processor. Methodology The spiking neuron processor core design is described using Verilog-HDL. Cadence tools were used for the synthesis and simulation. The hardware complexity is evaluated for power and area. For performance evaluation, the neuro-core is used to classify the MNIST data set [9] of 60,000 training and 10,000 inference images on an SNN with a 784×48×10 architecture, trained off-chip with backpropagation as an ANN and then converted to an SNN [10]. The MNIST images are converted to spikes using a Poisson distribution before being sent to the network for classification. Finally, the result is compared with some existing work and presented in Figure 2. Result The spiking neuron processor consumes an estimated power (leakage and dynamic) of 493.5018 mW, covers a chip area of 0.12 mm² and achieves an accuracy of 95.15% on MNIST data set inference. The result was compared with some existing works reviewed in [11]. The comparison shows that the spiking neuron processor core has a good trade-off between area and accuracy. Conclusion and Future Work This work presented the architecture and design of a spiking neuron processor core for 3D-NoC SNNs and evaluated its hardware complexity and performance. Future work towards realizing the 3D-NoC SNN architecture will require integrating the spiking neuron processor core into it and exploring applications that will leverage the architecture.
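For illustration, two small sketches of the behaviors described above: a discrete-time LIF update with a crossbar-style weight matrix, and the Poisson rate coding used to convert MNIST pixels to spikes. The multiplicative leak factor, threshold/reset values, n_steps and max_rate are assumptions for illustration, not the parameters of the hardware design.

import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, v_thresh=1.0, v_reset=0.0):
    # weights: (n_inputs, n_neurons) crossbar; spikes_in: binary input vector
    v = leak * v + spikes_in @ weights   # leak, then integrate weighted spikes
    fired = v >= v_thresh                # membrane potential crosses threshold
    v = np.where(fired, v_reset, v)      # reset neurons that fired
    return v, fired.astype(np.uint8)

def poisson_encode(image, n_steps=100, max_rate=0.5, seed=0):
    # image: pixel intensities in [0, 1]; returns (n_steps, n_pixels) spikes,
    # each pixel firing per step with probability proportional to intensity
    rng = np.random.default_rng(seed)
    p = np.clip(np.ravel(image), 0.0, 1.0) * max_rate
    return (rng.random((n_steps, p.size)) < p).astype(np.uint8)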
2020-05-21T00:05:21.883Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "eced04ab4bda7a44a5930b81a4f166aff221b3bd", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2020/05/shsconf_etltc2020_04003.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "93a9b8b3c00f601714164e499a15fbdfd351f676", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
214298074
pes2o/s2orc
v3-fos-license
Analysis on Operation Modes of Regional Integrated Energy System based on Interests Exchange Relationship Regional energy is an energy system solution designed to solve such issues as regional heating, cooling and power supply and to satisfy intra-regional energy demands. With the advancement of power marketization reform, newly added market participants will gradually take part in such links as the investment, construction and operation of integrated energy systems. Considering the exchange relationships of energy, capital and service flows between the various stakeholders of an integrated energy system, this paper classifies the operation modes of regional integrated energy systems according to the combination relationships between the power generation, distribution and sales businesses, and also supplies case analyses according to actual demands. Introduction As early as 1908, the International District Energy Association was established to help its members become industrial leaders in the supply of reliable, economic, efficient, environmentally friendly and correct regional energy solutions and to promote the improvement of energy efficiency and environmental quality by means of such advanced technologies as regional cooling and heating supply and combined heat and power production [1][2][3]. In the middle of the 20th century, developed countries recognized the economic benefits originating from regional energy; for example, many European countries started to vigorously develop regional heat supply, followed by the energetic development of regional cooling supply and the promotion of combined heat and power production in the 1970s. At the end of the 20th century, while recognizing the environmental problems resulting from energy consumption, various countries in the world also saw better environmental benefits following the scientific and rational supply and use of energy, and some developed countries then started to use energy in an integrated way, thus creating opportunities for the further development of regional energy at the beginning of the 21st century [4][5][6]. Following the direction of efficiently and fully using local energies, a regional energy system, as a beneficial supplement to urban energy supply, is mainly designed to achieve the integrated and efficient use of various local energies and to promote the rational absorption and configuration of large-scale extra-regional power by such intelligent power grid technologies as source-grid-load interaction, multi-energy complementation and power distribution automation. Following the direction of leading energy-saving and interactive energy consumption behaviors, a user-side energy system mainly aims to lead users to cultivate energy consumption habits featuring electrification and interaction, to promote energy saving and to improve users' quality of life through such technologies and means as intelligent buildings, intelligent homes, the internet of vehicles and demand-side management [7][8]. Analyses of and research on the operation modes of regional integrated energy systems are correlated with those on such issues as enterprise operation, business operation and development strategy, and have a large influence on the realization of project construction targets and economic benefits.
2. Classification of operation modes of regional integrated energy systems A regional integrated energy system mainly includes three major kinds of operation businesses, namely power generation, distribution and sales, and, according to the combination relationships between these three kinds of businesses and actual operational possibilities, a regional integrated energy system may adopt the following operation modes, which can be organized by steps and phases; this is what needs to be clearly expounded. Regional integrated energy system operator purely engaging in power sales [Figure 1. Mode I: regional integrated energy system operator purely engaging in power sales. The figure shows the energy generation system (distributed solar generation, distributed wind generation, small oil generation, garbage generation, gas turbine, energy storage, etc.) and the energy distribution system.] Establish an intra-regional operation entity for regional integrated energy for the sale of cooling, heat and power within the region. The operation entity may determine, together with the local administration of power supply, a certain wholesale power price through negotiation, or directly sign a contract of direct power purchase for large consumers with a power plant based on the relevant policies on direct power purchase for large consumers or on power transmission and distribution price verification. For intra-regional users, it may supply flexible price "packages" for their free selection. In essence, this operation mode will not change the assets relationship or construction planning of the existing power grid, and neither will the operator of regional integrated energy construct or operate any grid distribution asset; thus it can serve externally as a user agent or pure power seller. The operator just mentioned can mainly play two roles: first, operating integrated energy, namely implementing self-operation of the energy resources under its control and, to be specific, coordinating such distributed resources as PV power, garbage power, gas-fired boilers, peak-shaving boilers, refrigeration stations and heat pumps, and such controllable resources as energy storage systems, heat storage systems, ice storage air conditioners and electromobility; second, supplying load in an interactive way, namely introducing contractual energy management and demand response services, implementing time-of-use power prices and supplying energy services and information value-added services based on big data, thus providing users with better services and reducing their overall energy consumption cost. On the basis of the flow graph shown in Figure 1, the operation analysis is as follows. Energy flow: mainly including distributed PV power generation devices, small-scale draught fans, small-scale oil-fired generators and energy storage equipment, the energy supply system is the main starting point for the production of the energy flow of the regional integrated energy system; when the production of the energy flow of the regional integrated energy system is insufficient, the system also relies on external oil and gas grids for supplementation, and power and heat energy may then be configured through the regional power and heat grids and supplied to the energy consumption system to satisfy such energy consumption demands as lighting, warming, and power and gas consumption.
In the future, the energy consumption system will consist mainly of entities that both produce and sell energy and will implement active demand-side response. Energy can then flow in both directions between the energy configuration system and the energy consumption system, and between the energy configuration system and the external power grid.

Service flow: two-way service flows exist between the external energy market and, respectively, the external power grid, the regional energy supply system, and the regional energy seller, which supplies ancillary services. Two-way service flow also exists between the energy consumption system and the energy configuration system inside the regional integrated energy system.

Capital flow: the energy consumption system pays the energy seller for the cooling, heat, power, and gas it purchases, while the energy seller pays the consumption system for the demand-side services it provides. The energy seller in turn pays energy charges to the energy supply system within the regional integrated energy system and allocation charges to the energy configuration system, and settles transactions with the extra-regional energy market. Two-way capital flows also exist between the energy market (the regional network's energy trading centre) and the external power grid, the external oil and gas pipeline networks, and the regional energy supply system.

Mode II: regional integrated energy operator integrating power generation and sales

[Figure 2. Mode II: regional integrated energy operator integrating power generation and sales. The generation system comprises distributed solar generation, distributed wind generation, small oil-fired generation, waste-to-energy generation, gas turbines, energy storage, etc.]

Building on Mode I, an energy supplier integrating generation and sales can be organized by bringing the intra-regional energy supply system into the supplier's operating scope and constructing a regional energy centre. As shown in Figure 2, the energy, capital, and service flows of this mode all resemble those of Mode I. The difference lies in how the supply plan of the energy supply system is set: rather than being issued directly by the energy supplier, it is decided autonomously by the regional integrated energy operator in light of factors such as intra-regional load demand and peak and off-peak power prices, and is then submitted to the energy configuration system for security verification. This mode changes neither the asset ownership nor the construction planning of the existing energy configuration system, nor the grid-connection points of the energy supply devices.

Mode III: regional integrated energy operator integrating power distribution and sales

[Figure 3. Mode III: operation mode of a regional integrated energy operator integrating power distribution and sales. The figure shows the energy generation system (distributed solar and wind generation, small oil-fired generation, waste-to-energy generation, gas turbines, energy storage, etc.), the energy distribution system (electricity grid, heat grid, etc.), the energy retail system (cooling, heat, electricity, gas, etc.), and the energy consumption system (lighting, heating, electricity and gas consumption, etc.), linked by energy, financial, and service flows to the bulk power system and the bulk oil and gas system.]

Building on Mode I, this mode allows the energy supplier to operate the existing and newly added energy configuration systems within the region.
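Before turning to the details of Mode III, the capital flows itemized above for Mode I can be made concrete as a toy settlement ledger. The sketch below balances a single settlement period; all party names and amounts are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)

    def pay(self, payer, payee, amount, reason):
        self.balances[payer] = self.balances.get(payer, 0.0) - amount
        self.balances[payee] = self.balances.get(payee, 0.0) + amount
        print(f"{payer:12s} -> {payee:12s} {amount:10.2f}  ({reason})")

ledger = Ledger()
# Consumers pay the energy seller for purchased cooling/heat/power/gas.
ledger.pay("consumers", "seller", 1200.0, "energy purchases")
# The seller pays consumers for demand-side response services.
ledger.pay("seller", "consumers", 80.0, "demand-side services")
# The seller pays the intra-regional supply and configuration systems.
ledger.pay("seller", "supply_sys", 700.0, "energy charges")
ledger.pay("seller", "config_sys", 150.0, "allocation charges")
# Shortfalls are settled with the external energy market.
ledger.pay("seller", "ext_market", 100.0, "imported energy")

print("net positions:", {k: round(v, 2) for k, v in ledger.balances.items()})
```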
As shown in Figure 3, this mode resembles Mode I in its energy, capital, and service flows; its distinguishing feature is the formation of a physical "wholesale area", with a clear metering gateway and settlement relationship between the energy supply system and the energy configuration system.

Mode IV: regional integrated energy operator integrating power generation, distribution, and sales

As shown in Figure 4, in this mode the regional integrated energy operator simultaneously operates the energy supply, configuration, and sales systems and exercises integrated dispatch and control over intra-regional energy production, energy allocation, and user-side resources.

[Figure 4. Mode IV: operation mode of a regional integrated energy operator integrating power generation, distribution, and sales.]

Case analysis: operator integrating power distribution and sales

An operation mode integrating power distribution and sales strengthens the operator's relationship with users through the distribution network and thereby supports the power sales business; it also raises asset-use efficiency by integrating distribution and consumption, making the assets more economical. Many foreign distribution networks were privately invested and constructed, for instance in European countries such as France and Germany; in Germany in particular, most distribution-network assets are privately owned as a result of the privatization wave of the late 1990s. Later, with the opening of the power sales market, many power companies integrating distribution and sales emerged that own distribution-network assets, and compared with other power sales companies these firms can earn from both the distribution business and the sales business. Considering foreign practice and domestic pilot programmes, the mode integrating distribution and sales is likely to become one of the common operation modes for incremental power distribution in the future.

In the scenario analyzed here, the operator carries out its business in an industrial park whose internal supply system includes only distributed generation, small oil-fired generators, gas-fired units, and other generating equipment; the distribution-and-sales company handles only power and heat, so the gas the park needs must still be obtained from the external gas network. As shown in Figure 5, the operator has two main ways of profiting from its core distribution and sales businesses. First, it supplies basic energy to users in the park and collects energy charges, reflected in this case as power and heat revenues. Second, a distribution-and-sales company that owns distribution resources can more easily take the lead in the market and become the minimum-guarantee power seller (the supplier of last resort), thereby acquiring a large base of users; with those users and its distribution resources it can then offer sales-related value-added services such as energy-efficiency management and demand-side response according to users' needs.
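The two profit channels just described can be made concrete with a small revenue model. The sketch below is illustrative only: the tariff levels, volumes, and service fees are assumed numbers, not data from the paper.

```python
def operator_profit(power_mwh, heat_gj,
                    power_margin=60.0,   # sales price minus purchase cost, per MWh (assumed)
                    heat_margin=8.0,     # margin per GJ of heat sold (assumed)
                    dist_fee=25.0,       # distribution fee per MWh wheeled (assumed)
                    vas_revenue=0.0):    # value-added services: efficiency mgmt, demand response
    """Profit of a distribution-and-sales operator = energy margins
    + distribution fees + value-added service revenue."""
    energy_margin = power_mwh * power_margin + heat_gj * heat_margin
    distribution = power_mwh * dist_fee
    return energy_margin + distribution + vas_revenue

# One illustrative month for an industrial-park operator.
print(operator_profit(power_mwh=5000, heat_gj=12000, vas_revenue=40000))
```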
The direction of the capital flows shows that such operators typically purchase power from the park's internal energy supply system and, when internal supply is insufficient, trade through the external energy market. At the same time, operators in this mode must commit more and bear greater risk. First, they must invest substantial capital to construct or upgrade the distribution network, and they need professionals and advanced management techniques for its daily operation and maintenance. The development of renewable energy, for example, inevitably reshapes distribution-network planning: most distributed renewable generation must connect at the distribution level, forcing expansion and transformation of the network and hence further investment by the distribution-and-sales company. Second, they bear policy risk: the current methods for verifying transmission and distribution prices may change, making the income of companies integrating distribution and sales more uncertain.

Case analysis: operator integrating power generation, distribution, and sales

For historical reasons, some enterprises with conventional power supply also possess distribution systems that secure their own production. Once the market opens, such enterprises are more willing to participate in market business and can readily become integrated energy operators combining the distribution and sales of conventionally generated power. Taking enterprises with captive power plants as an example: under the impetus of reform, some large users owning captive plants or CHP units intend to establish distribution-and-sales companies to supply power to other enterprises, and they enjoy the support of local government. Large enterprises and industrial parks with captive power plants are therefore the most likely candidates to become operators integrating generation, distribution, and sales when they pursue integrated energy business in the future; this is the case analyzed in this section, and the corresponding operation mode is shown in Figure 6.

[Figure 6. Operation mode of a regional integrated energy operator integrating power generation, distribution, and sales, centred on the energy generation system.]

From Figure 6 and the case above, we can conclude that operators integrating generation, distribution, and sales can supply users from their own energy systems without paying additional power-purchase charges, acquiring extra energy from the external market only when internal supply falls short. Their capital and service flows on the consumption side are the same as those of operators integrating distribution and sales: both earn by providing basic energy supply services and energy value-added services to users.

Judging from the actual development of distribution networks in China, some large energy enterprises with captive power plants have already obtained power sales licences; in the future they can participate further in power-market transactions and compete with the incumbent intra-regional grid enterprises.
In addition to supplying power to their own affiliated enterprises, such operators can provide similar power-consumption services to intra-regional industry, commerce, and residents. They typically possess their own energy supply resources, relatively mature and complete generation and distribution systems, and a stable base of energy users, giving them great advantages in building regional integrated energy systems within their operating scope.
On the Nature of the Tsallis-Fourier Transform

By recourse to tempered ultradistributions, we show here that the effect of a q-Fourier transform (qFT) is to map equivalence classes of functions into other classes in a one-to-one fashion. This suggests that Tsallis' q-statistics may revolve around equivalence classes of distributions and not around individual ones, as orthodox statistics does. We solve the qFT's non-invertibility issue, but uncover a problem that remains open.

Introduction

Non-extensive statistical mechanics (NEXT) [1,2,3], a well-known generalization of the Boltzmann-Gibbs (BG) one, is used in many scientific and technological endeavours. NEXT's central concept is a nonadditive (though extensive [4]) entropic information measure characterized by a real index q (with q = 1 recovering the standard BG entropy). Applications include cold atoms in dissipative optical lattices [5], dusty plasmas [6], trapped ions [7], spin glasses [8], turbulence in the heliosphere [9], self-organized criticality [10], high-energy experiments at LHC/CMS/CERN [11] and RHIC/PHENIX/Brookhaven [12], low-dimensional dissipative maps [13], finance [14], galaxies [15], studies of the Fokker-Planck equation [16], EEGs [17], complex signals [18], Vlasov-Poisson equations [19], etc.

q-Fourier transforms, developed by Umarov, Tsallis, and Steinberg [20], constitute a central piece of the Tsallis q-machinery. However, Hilhorst [21], in a lucid study, investigated the feasibility of obtaining an invertible q-Fourier transformation (qFT) by restricting the domain of the transform to a suitable subspace of probability distributions, and he was able to show that no such invertible transformation exists. Moreover, by explicit construction he exhibited families of functions, all having the same qFT (the q-Gaussians themselves belonging to such families), for which the non-invertibility of the qFT becomes evident. In the present communication we intend to reconcile the Umarov-Tsallis-Steinberg developments [20] with Hilhorst's findings. We show below that the qFT does indeed map, in a one-to-one fashion, classes of functions into other classes, not isolated functional instances. Thus both parties to the controversy are right, and the issue can be resolved by appealing to a higher order of mathematical perspective.

The Complex q-Fourier Transform and its Inverse

Let Ω be the space of functions of the real variable x that are parameterized by a real parameter q and built from a function g(x) that is bounded, continuous, and positive-definite. We will make extensive use in this work of the notion of generalized functions (or distributions): objects extending the notion of function that are especially useful for making discontinuous functions behave more like smooth functions and, in the extreme, for describing physical phenomena such as point charges. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense; in particular, any locally integrable function has a distributional derivative. Distributions are widely used to formulate generalized solutions of partial differential equations: where a classical solution may not exist or is very difficult to establish, a distributional solution is often much easier to find.
Distributions are also important in physics and engineering, where many problems naturally lead to differential equations whose solutions or initial conditions are distributions, such as the Dirac delta function (historically called a "function" even though it is not a proper function mathematically). In more detail, distributions are a class of linear functionals that map a set of test functions (conventional, well-behaved functions) onto the set of real numbers. In the simplest case, the set of test functions considered is D(R), the set of functions φ : R → R having two properties: 1) φ is infinitely differentiable; 2) φ has compact support (it is identically zero outside some bounded interval). A distribution d is then a linear mapping D(R) → R.

Here we focus attention on the space of test functions defined in Eq. (3.3) of the Appendix. Its dual U is a space of so-called tempered ultradistributions [22,23,24,25], which generalize the set of tempered distributions, whose test functions are members of the Schwartz space S, a function space whose members possess rapidly decreasing derivatives. S exhibits a notable property: the Fourier transform is an automorphism on S, which allows one, by duality, to define the Fourier transform for elements of the dual space of S; this dual is the space of tempered distributions. In physics it is not uncommon to face functions that grow exponentially in space or time, and in such circumstances Schwartz's space of tempered distributions is too restrictive. Ultradistributions satisfy that need [26], being continuous linear functionals defined on the space of entire functions rapidly decreasing on straight lines parallel to the real axis [26].

Now, following [22], we use the Heaviside step function H to define the complex q-Fourier transform F, Eq. (2.7). As has been proved in [22], F is one-to-one from Ω to U. Restricting attention to the real axis yields the real transform F_T and its inverse. It has been proved by Hilhorst [21] that F_T is NOT one-to-one from Ω to U. Let Λ_{f_q} be the set of functions sharing the same qFT as a given f_q ∈ Ω, and let Λ be the union of these sets. On Λ we define the equivalence relation that identifies functions with the same qFT and, subsequently, the Umarov-Tsallis-Steinberg q-Fourier transform F_UTS : Λ → U, Eq. (2.17), acting on the corresponding equivalence classes. We see that F_UTS is an application from equivalence classes into equivalence classes and, as a consequence, one-to-one from Λ into U. To illustrate our theory we consider the example given by Hilhorst [21]; in Ref. [22] we evaluated the q-Fourier transform of this function, Eq. (2.21). Taking q' = q in (2.21) yields the corresponding expression for F_UTS and its restriction to the real axis.

Conclusions

We have shown that the q-generalization advanced by Umarov et al. in [20] is properly regarded as a transformation between equivalence classes, and is thus one-to-one, a finding that reconciles the assertions of Ref. [20] with the lucid observations of Ref. [21].

Appendix: Tempered Ultradistributions and Distributions of Exponential Type

For the benefit of attentive readers we give here a brief summary of the main properties of distributions of exponential type and tempered ultradistributions. For x ∈ R^n, x^p denotes x_1^{p_1} x_2^{p_2} ··· x_n^{p_n}; we write |p| = p_1 + p_2 + ··· + p_n and let D^p denote the differential operator ∂^{p_1+p_2+···+p_n}/∂x_1^{p_1} ··· ∂x_n^{p_n}. For any natural k a corresponding norm is defined, and H is the space of test functions φ for which e^{p|x|} |D^q φ(x)| is bounded for any p and q. The space of continuous linear functionals defined on H is the space Λ_∞ of distributions of exponential type (Ref. [24]).
Every such distribution can be represented as a distributional derivative of order k of e^{k|x|} f(x), where k is an integer with k ≧ 0 and f(x) is a bounded continuous function. In addition we have H ⊂ S ⊂ S′ ⊂ Λ_∞, where S is the Schwartz space of rapidly decreasing test functions (Ref. [25]).

The Fourier transform φ̂(z) of a function φ ∈ H is, according to Ref. [24], entire analytic and rapidly decreasing on straight lines parallel to the real axis. We shall call H_1 the set of all such functions, and A_ω the space of functions analytic outside a horizontal band and bounded there by a power of |z|. Let Π be the set of all z-dependent pseudo-polynomials, z ∈ C^n. By a pseudo-polynomial we understand a function of z of the form Σ_s z_j^s G(z_1, ..., z_{j−1}, z_{j+1}, ..., z_n), with G(z_1, ..., z_{j−1}, z_{j+1}, ..., z_n) ∈ A_ω. Then U is the quotient space U = A_ω/Π.

Owing to these properties it is possible to represent any ultradistribution as a contour integral, F(φ) = ∮_Γ F(z) φ(z) dz, where the path Γ_j runs parallel to the real axis from −∞ to ∞ for Im(z_j) > ζ, ζ > p, and back from ∞ to −∞ for Im(z_j) < −ζ, −ζ < −p (Γ surrounds all the singularities of F(z)). Eq. (3.6) is our fundamental representation for a tempered ultradistribution. Sometimes use will be made of the "Dirac formula" for ultradistributions, which expresses F(z) as a Cauchy-type integral of a "density" f(t) such that ∮_Γ F(z) φ(z) dz = ∫ f(t) φ(t) dt. While F(z) is analytic on Γ, the density f(t) is in general singular, so that the right-hand side of (3.8) should be interpreted in the sense of distribution theory. Another important property of the analytic representation is the fact that on Γ, F(z) is bounded by a power of z [24],

|F(z)| ≤ C|z|^p,  (3.9)

where C and p depend on F. The representation (3.6) implies that the addition of a pseudo-polynomial P(z) to F(z) does not alter the ultradistribution, since ∮_Γ P(z) φ(z) dz = 0.
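To make the objects of Section 2 concrete, the sketch below implements the q-exponential and evaluates the qFT numerically in its standard integral form, F_q[f](ξ) = ∫ f(x) e_q(i ξ x [f(x)]^{q−1}) dx for f ≥ 0. This is the form given in the general qFT literature [20], not a reconstruction of this paper's Eq. (2.7); the quadrature grid and parameters are assumptions.

```python
import numpy as np

def q_exp(z, q):
    """Tsallis q-exponential e_q(z) = [1 + (1-q) z]^{1/(1-q)}; -> exp(z) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(z)
    return (1.0 + (1.0 - q) * z) ** (1.0 / (1.0 - q))

def qft(f, xs, xi, q):
    """Numerical Umarov-Tsallis-Steinberg qFT of a nonnegative function f
    sampled on the grid xs, evaluated at frequency xi (trapezoidal rule):
        F_q[f](xi) = integral of f(x) * e_q( i*xi*x * f(x)**(q-1) ) dx
    """
    fx = f(xs)
    integrand = fx * q_exp(1j * xi * xs * np.where(fx > 0, fx, 1.0) ** (q - 1.0), q)
    integrand = np.where(fx > 0, integrand, 0.0)  # no contribution where f vanishes
    return np.trapz(integrand, xs)

q = 1.5
xs = np.linspace(-10, 10, 4001)
# Unnormalized q-Gaussian: e_q(-x^2), heavy-tailed for q > 1.
gauss_q = lambda x: np.maximum(1.0 + (1.0 - q) * (-x**2), 0.0) ** (1.0 / (1.0 - q))
for xi in (0.0, 0.5, 1.0):
    print(xi, qft(gauss_q, xs, xi, q))
```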
Cell cycle–dependent localization of macroH2A in chromatin of the inactive X chromosome One of several features acquired by chromatin of the inactive X chromosome (Xi) is enrichment for the core histone H2A variant macroH2A within a distinct nuclear structure referred to as a macrochromatin body (MCB). In addition to localizing to the MCB, macroH2A accumulates at a perinuclear structure centered at the centrosome. To better understand the association of macroH2A1 with the centrosome and the formation of an MCB, we investigated the distribution of macroH2A1 throughout the somatic cell cycle. Unlike Xi-specific RNA, which associates with the Xi throughout interphase, the appearance of an MCB is predominantly a feature of S phase. Although the MCB dissipates during late S phase and G2 before reforming in late G1, macroH2A1 remains associated during mitosis with specific regions of the Xi, including at the X inactivation center. This association yields a distinct macroH2A banding pattern that overlaps with the site of histone H3 lysine-4 methylation centered at the DXZ4 locus in Xq24. The centrosomal pool of macroH2A1 accumulates in the presence of an inhibitor of the 20S proteasome. Therefore, targeting of macroH2A1 to the centrosome is likely part of a degradation pathway, a mechanism common to a variety of other chromatin proteins. Introduction Male and female eutherian mammals achieve equivalent levels of X-linked gene expression by silencing all but one X chromosome in cells of the developing embryo (Avner and Heard, 2001;Willard, 2000). With the exception of imprinted X inactivation in extraembryonic tissue (Huynh and Lee, 2001), the choice of which X chromosome to inactivate in the soma is random and maintained throughout subsequent cell divisions. The inactive X chromosome (Xi)* shares features common to other types of heterochromatin, including hypoacetylation of histone tails (Jeppesen and Turner, 1993;Belyaev et al., 1996;Boggs et al., 1996;Gilbert and Sharp, 1999), hypermethylation of CpG islands (Mohandas et al., 1981;Pfeifer et al., 1990), late replication in S phase (Gilbert et al., 1962;Morishma et al., 1962), and a characteristic pattern of histone H3 lysine methylation (Boggs et al., 2002;Peters et al., 2002). In addition, several unique features characterize heterochromatin of the Xi. These include the association of a large untranslated RNA, the Xi-specific transcript (XIST) (Brown et al., 1991;Brockdorff et al., 1992), and a nonrandom distribution of variants of the core histone H2A (Costanzi and Pehrson, 1998;Chadwick and Willard, 2001a,b). The macroH2A family of H2A variants was first identified through association with nucleosomes (Pehrson and Fried, 1992). The amino-terminal third of the protein is almost identical to histone H2A, with a unique nonhistone carboxyterminal tail. Two separate genes encode macroH2A1 and macroH2A2, both of which are enriched in Xi chromatin Pehrson, 1998, 2001;Chadwick and Willard, 2001a). The enrichment of macroH2A at the Xi forms a characteristic structure in the female nucleus, referred to as a macrochromatin body (MCB). In cultured differentiating mouse embryonic stem cells (ES), an MCB appears after counting and choice of which X chromosome to inactivate has occurred (Mermoud et al., 1999;Rasmussen et al., 2000). Although macroH2A has transcriptional repression activity (Perche et al., 2000), it is not essential for the maintenance of X inactivation. 
The formation of an MCB is dependent upon localization of XIST RNA, as disruption of XIST results in the loss of the MCB without reactivating the Xi (Csankovszki et al., 1999;Beletskii et al., 2001). Com-bined, the available data indicate that macroH2A may represent one of several highly redundant mechanisms of gene silencing employed by the Xi (Mohandas et al., 1981;Singer-Sam et al., 1992;Brown and Willard, 1994;Gartler and Goldman, 1994;Csankovszki et al., 1999). Prior to the onset of X inactivation in ES cells, a cytoplasmic concentration of macroH2A1 is evident, coincident with the centrosome (Rasmussen et al., 2000). When ES cells are stimulated to differentiate, centrosomal macroH2A disappears. More recently, a centrosome-associated pool of macroH2A1 has been observed in somatic cells as well (Mermoud et al., 2001), raising questions about the relationship between nuclear and centrosomal macroH2A1. In the present study, in order to address the spatial and temporal relationship of macroH2A1 with the centrosome and the MCB, we have investigated the distribution of macroH2A1 during the maintenance phase of X inactivation throughout the somatic cell cycle. Results MacroH2A1 and macroH2A2 associate with centrosomes in male and female somatic cells A centrosomal association of macroH2A1 was observed in human somatic cells (Fig. 1 a), as previously observed in mouse (Mermoud et al., 2001), indicating that the centrosomal macroH2A1 pool is not restricted to undifferentiated ES cells (Mermoud et al., 1999;Rasmussen et al., 2000). In addition, the association with centrosomes in both XY and XX somatic cells indicates that the association is independent of X inactivation. Using independent antisera specific to either the macroH2A1 or the macroH2A2 protein, we detected both forms of macroH2A at the centrosome (Fig. 1 b). Both forms of macroH2A could also be detected in sucrose gradient fractions enriched for centrosomes (Fig. 2 a). In addition to a signal of anticipated size for macroH2A1, a second band of smaller size was detected in centrosome preparations (Fig. 2 a, lane 1). The same band cofractionates with full-length macroH2A1 in sucrose gradient fractions of nucleosome preparations (Fig. 2 b). Additional bands of comparable size have been detected with independent anti-macroH2A1 antisera (Costanzi et al., 2000;Mermoud et al., 2001). Although a smaller macroH2A2 band is not detected in centrosome preparations ( Fig. 2 a, lane 2), a considerably weaker smaller band can be detected in nucleosome fractions with antisera specific to macroH2A2 (Fig. 2 c, lane 2). Smaller bands can also be observed in nucleosome preparations from cell lines expressing epitope-tagged forms of macroH2A1 and macroH2A2 (Fig. 2 d). Although the smaller bands observed for the epitope-tagged forms of macroH2A1 and macroH2A2 do not directly correlate to the size of the smaller bands detected with the macroH2A1 and macroH2A2 primary antisera, this may reflect either a direct effect of the presence of the epitope tag on processing or the relative stability of the endogenous proteins. Whether the smaller bands represent the potential breakdown of macroH2A during various purification protocols, or the direct result of an intracellular biological process is unknown. 
Further, though less likely in our view because of its reproducibility with three independent antisera, the possibility remains that the smaller bands may be unrelated to macroH2A and simply represent a shared epitope between macroH2A and another protein(s) that coexists with macroH2A in both nucleosomal and centrosomal fractions. Both macroH2A1 and macroH2A2 concentrate in distinct bands on the human and mouse Xi Previous observations have indicated that macroH2A1 is uniformly associated with the mouse Xi at metaphase (Costanzi and Pehrson, 1998;Mermoud et al., 1999). We have investigated the relationship of macroH2A with the human and mouse Xi and consistently observed a distinct banding pattern on the Xi chromosome (Fig. 3). As previously observed, macroH2A associates with the autosomes and active X chromosome (Xa) in human metaphase spreads as well, but at a significantly lower level than that for the Xi (Fig. 3 c and Fig. 4 a). The same banding pattern is observed using antisera specific to either macroH2A1 or macroH2A2, indicating that the bands contain both isoforms ( Fig. 3 a). Up to four macroH2A bands were observed on the Xi in a variety of human 46,XX cell lines ( Fig. 3; unpublished data). To determine the precise location of each band, we stained metaphase chromosomes with macroH2A in combination with FISH using a number of ordered and previously mapped X chromosome probes. This approach placed the macroH2A bands at Xp22, Xp11, Xq13, and Xq22-24, with the most intense and consistent band at Xq22-24 (Fig. 3). Notably, the band at Xq13 was indistinguishable from a cosmid probe containing the XIST locus at the X inactivation center (Fig. 3 b). All four bands were reproduced in a 46,XX cell line overexpressing either an amino-or carboxy-terminal epitope-tagged form of macroH2A1 (Fig. 4, b and c), confirming the identity of the bands. To extend this observation, we investigated the distribution of macroH2A1 on the mouse Xi at metaphase. As seen in humans, macroH2A1 formed a characteristic banding pattern on the mouse Xi (Fig. 3 d). Intriguingly, the location of the bands within the distal portion of the mouse Xi correlates with sequences that are syntenic with human Xp22, Xp11, Xq13, and Xq22-24 (DeBry and Seldin, 1996), suggesting a conserved role for macroH2A in these regions. The macroH2A band at Xq22-24 overlaps with the site of histone H3 lysine-4 methylation In addition to a reproducible banding pattern of macroH2A on the Xi, a distinct banding pattern of histone H3 lysine-4 methylation (DimH3K4) has been observed (Boggs et al., 2002). Human female metaphase chromosomes stained for macroH2A and DimH3K4 show a clear overlap of the distal boundary of the Xq22-24 macroH2A band with DimH3K4 ( Fig. 4, a, a Ј , and d). The pattern of macroH2A and DimH3K4 at Xq22-24 is indistinguishable from that seen between macroH2A and FISH with a probe of the macrosatellite sequence DXZ4 (Fig. 3 b, middle). FISH analysis confirms that the DimH3K4 band is centered at DXZ4 on Xq24 (Fig. 4 e). Centrosomal association of macroH2A1 alters during the cell cycle To address a possible temporal relationship between the chromosomal and centrosomal pools of macroH2A, we ex-amined the localization of macroH2A1 throughout the cell cycle. Female cells were synchronized chemically at either the G 1 -S boundary and released into S phase toward mitosis, or blocked in mitosis and released into G 1 toward S phase. 
The appearance of an MCB was most obvious during S phase, with the loss of the MCB as cells approached mitosis (Fig. 5 a; Table I). In contrast, the proportion of cells demonstrating centrosomal localization of macroH2A1 increased as cells approached mitosis (Fig. 5 a; Table I). MCB formation after release from mitosis required ~12 h (Fig. 5 a; Table I), whereas macroH2A1 was observed at the centrosome shortly after release. The same relationship of increased centrosome association and decreased MCB frequency as cells approach mitosis was observed in two other 46,XX cell lines (unpublished data), indicating that this is a common feature of human somatic cells. Although changes in the relative level of centrosomal macroH2A1 were observed as cells passed through S phase toward mitosis, the nucleosomal concentration of macroH2A1 did not appear to change significantly (unpublished data).

Formation of an MCB follows, but does not mirror, XIST RNA accumulation

Disruption of XIST RNA results in the loss of MCB formation (Csankovszki et al., 1999; Beletskii et al., 2001), indicating dependence of macroH2A1 on XIST for Xi localization. To examine their temporal relationship in normal cells, we monitored MCB formation in relation to XIST RNA during the somatic cell cycle. XIST RNA paints the Xi during interphase, but unlike mouse Xist (Duthie et al., 1999), human XIST RNA does not remain associated with the Xi during mitosis. An XIST RNA domain was observed in cells throughout S phase and G2 and only dissociated from the Xi as cells entered mitosis (Fig. 5 b; Table II), consistent with earlier findings. XIST RNA was expressed and formed an XIST RNA domain shortly after release from mitosis (Fig. 5 b; Table II). In contrast, MCBs dissociated from the Xi chromatin as cells approached mitosis, significantly earlier than XIST RNA, and did not rapidly reform with the XIST RNA territory shortly after mitosis (Fig. 5 b; Table II). With the exception of a very small number of cells, an MCB was present only in cells with an XIST RNA domain. This indicates that although the association of XIST RNA with the Xi is a prerequisite for an MCB, the formation of an MCB is strongly influenced by the cell cycle.

MCB formation is influenced by the cell cycle and is most prominent during S phase

To relate MCB formation directly with DNA replication during S phase, cells were synchronized and pulsed with BrdU for 1 h after different release times to detect exit from and entry into S phase. Unlabeled cells after release from mitosis had not yet entered S phase (Fig. 6 a), whereas unlabeled cells after release from G1-S had exited S phase and entered G2. Chromatin of the Xi is late replicating in S phase (Gilbert et al., 1962; Morishma et al., 1962), and therefore the MCB is only labeled in late S phase cells (Fig. 6 a). This, in combination with time of release, allows accurate determination of early, middle, and late S phase. Cells released from mitosis into G1 took ~15 h to enter S phase as detected by BrdU incorporation. Only 23% of cells in G1 had an MCB (Fig. 6 b). In contrast, MCBs were observed in 72% of cells in early S phase.

[Table I legend. Cells were synchronized at the G1-S boundary or in mitosis and released for 0-12 h before detection of macroH2A1 and γ-tubulin by indirect immunofluorescence. Numbers of cells are given as a percentage with standard deviations (n = 100). Early S phase, cells at the G1-S boundary; mid to late S phase, G1-S + 4 h; late S phase, G1-S + 8 h; G2, G1-S + 12 h; mitosis, release from nocodazole for 1 h; early G1, mitosis + 4 h; mid G1, mitosis + 8 h; late G1 to early S, mitosis + 12 h.]

[Table II legend. Cells were synchronized at the G1-S boundary or in mitosis and released for 0-12 h before detection of macroH2A1 and XIST RNA by indirect immunofluorescence and RNA FISH. Numbers of cells are given as a percentage with standard deviations (n = 100). Early S phase, cells at the G1-S boundary; mid to late S phase, G1-S + 4 h; late S phase, G1-S + 8 h; G2, G1-S + 12 h; mitosis, release from nocodazole for 1 h; early G1, mitosis + 4 h; mid G1, mitosis + 8 h; late G1 to early S, mitosis + 12 h.]

MCB frequency peaked at 93% in cells 5 h after release from G1-S, before dropping sharply late in S phase and in G2. As shown in Fig. 6 b, these data clearly indicate that the appearance of an MCB is most prominent during S phase, in agreement with the timing of MCB formation after mitosis and the loss of MCBs as cells pass through S phase (Fig. 5; Tables I and II).

Inhibition of the 20S proteasome results in the accumulation of macroH2A1 at the centrosome

The proteasome is a large multisubunit proteolytic complex that is the major site of protein degradation (Bochtler et al., 1999). Components of the proteasome have been identified in purified centrosome fractions (Wigley et al., 1999) that are capable of degrading ubiquitinated substrates (Fabunmi et al., 2000). To evaluate the potential association of macroH2A1 with the proteasome, cells were synchronized at the G1-S boundary and in mitosis before release in the presence of lactacystin, an irreversible proteasome inhibitor (Fenteany et al., 1995). Centrosomal accumulation of macroH2A1 increased significantly after incubation for 12 h in lactacystin and colocalized with an enlarged ubiquitin domain (Fig. 7), whereas an accumulation of macroH2A2 was not detected (unpublished data). The dramatic accumulation of macroH2A1 at the centrosome after inhibition of the proteasome is consistent with an inability to degrade the protein, suggesting that macroH2A1 may be targeted to the centrosome for degradation. The appearance of an enlarged macroH2A1 domain at the centrosome occurs 8-12 h after release from G1-S or mitosis. More cells acquired an enlarged centrosomal macroH2A1 domain when released from G1-S (91% of cells) than from mitosis (71%), perhaps indicating that more macroH2A1 is targeted for degradation as cells pass through S phase and G2 than in G1. This may reflect the remodeling of chromatin in preparation for mitosis and the need to remove macroH2A1 released in this process.

Centrosome association and accumulation is a feature of some, but not all, chromatin proteins

To determine how specific the association of macroH2A1 with the centrosome is, we examined a number of other chromatin proteins for centrosome association (Fig. 8 a). Of 29 chromatin proteins tested, 8 demonstrated a clear overlap with γ-tubulin at mitosis (Fig. 8 a) and interphase (unpublished data). To address the possibility that, like macroH2A1, the centrosomal association of these chromatin proteins represents a degradation pathway, cells were treated with lactacystin and monitored for the accumulation of each chromatin protein at the centrosome (Fig. 8 b). Indeed, several of the chromatin proteins demonstrated a dramatic increase in size at the centrosome and colocalization with ubiquitin (Fig. 8 b). In contrast, DNA-methyltransferase 3a (DNMT3a) remained absent from the centrosome, but consistently accumulated at two nuclear foci (Fig. 8 b), suggesting that this protein is directed to an alternative proteasome center. Taken together, these data imply that targeting of chromatin proteins to the centrosome is a common mechanism of protein degradation.

MacroH2A1 associates with the centrosome in a manner characteristic of a degradation pathway

The association of macroH2A at the centrosome is not restricted to undifferentiated mouse ES cells (Rasmussen et al., 2000), but is a common feature of male and female mouse (Mermoud et al., 2001) and human somatic cells (Fig. 1 a). Centrosomal macroH2A is composed of both macroH2A1 and macroH2A2 (Fig. 1 b and Fig. 2 a), indicating that the two proteins are spatially indistinguishable at the centrosome. Recently, a link has been made between protein degradation pathways and the centrosome. Treatment of cells with lactacystin, a potent inhibitor of the 20S proteasome (Fenteany et al., 1995), results in the dramatic formation of perinuclear protein aggregates (Wojcik et al., 1996). In the absence of lactacystin, a variety of mutant, misfolded, or overexpressed proteins also accumulate in perinuclear aggregates (referred to as aggresomes) that are centered at the centrosome (Johnston et al., 1998; Garcia-Mata et al., 1999). Once aggresomes form, they appear to be highly resistant to proteolysis (Kopito and Sitia, 2000) and are thought to be a major contributor to the pathology of disease (Kopito, 2000). Most compelling is the detection and purification of active components of the proteasome at the centrosome (Wigley et al., 1999; Fabunmi et al., 2000). The accumulation of macroH2A1 at the centrosome in the presence of lactacystin (Fig. 7) suggests that macroH2A1 is targeted to the centrosome-proteasome as part of a degradation pathway. In contrast, despite the association of macroH2A2 with the centrosome (Fig. 1 b and Fig. 2 a), an accumulation was not detected in the presence of lactacystin (unpublished data). One explanation could be that macroH2A2 is targeted primarily to a nuclear proteasome center for degradation, with centrosomal proteasome-mediated degradation being secondary. Alternatively, lactacystin may prevent the export of macroH2A2 but not macroH2A1 from the nucleus. Ultimately, this may reflect a difference in the biology of the two macroH2A proteins, or a sensitivity issue regarding the antisera. Notably, poly-ADP ribose polymerase, an activator of the 20S proteasome that degrades histones damaged by oxidation in the nucleus (Ullrich et al., 1999), is also located at the centrosome (Kanai et al., 2000) and may activate the centrosomal proteasome in a similar fashion. Although a common feature of proteins targeted for degradation is the addition of polyubiquitin chains (Pickart, 2001), exhaustive immunoprecipitation experiments using antisera raised to ubiquitin failed to immunoprecipitate polyubiquitinated forms of macroH2A (unpublished data). Close examination of the extensive overlap between the macroH2A1 and ubiquitin signals in the aggresome reveals a hole in the macroH2A1 signal at the center of the aggresome (Fig. 7). This is also true of the macroH2A1 signal before lactacystin treatment (Fig. 7), placing macroH2A1 in the pericentriolar material, as confirmed by the absence of macroH2A1 signal at the centrioles marked by γ-tubulin (Figs. 1 and 5).
Although ubiquitination is a common signal for targeting proteins to the proteasome (Bochtler et al., 1999), other proteins are targeted to the proteasome for degradation in the absence of detectable ubiquitination (Garcia-Mata et al., 1999). Therefore macroH2A1 may be targeted to the proteasome in a ubiquitin-independent fashion, or else macroH2A1 is targeted through a physical association with other ubiquitinated proteins. Alternatively, the mono-ubiquitinated form of macroH2A, which is readily detectable (unpublished data), may be sufficient for targeting for degradation, as previously demonstrated for histone H3 (Haas et al., 1990). In addition to macroH2A1, a number of other chromatin proteins associate with the centrosome (Hsu and White, 1998; Barthelmes et al., 2000; Xue et al., 2000) (Fig. 8 a). Like macroH2A1, treatment of cells with lactacystin results in the association of some, but not all, chromatin proteins with aggresomes (Fig. 8 b). This indicates that targeting of chromatin proteins to the centrosomal proteasome is a fairly common mechanism. The fact that DNMT3a, which is not associated with the centrosome (Fig. 8 a), is consistently targeted to two nuclear proteolysis centers (Fig. 8 b) and not to the centrosomal proteasome indicates that different chromatin proteins are targeted to different proteolysis centers. In addition, export from the nucleus is not a requisite for chromatin protein degradation, and the site of degradation for each protein is specific. Why a selection of nuclear proteins is targeted to the centrosomal proteasome, instead of the proteolysis centers in the nucleus, is unclear. Given its association with gene silencing (Perche et al., 2000), it is possible that nonnucleosomal macroH2A1 retains the ability to interact with partners in the nucleus and thus needs to be rapidly exported to prevent it from having detrimental effects on the cell by sequestering chromatin complexes. Alternatively, factors involved in the remodeling of chromatin, resulting in the potential release of macroH2A1, may themselves be targeted to the centrosomal proteasome, taking macroH2A1 with them.

Centrosomal concentrations of macroH2A1 alter in a cell cycle-dependent fashion. As cells proceed through S phase and G2 toward mitosis, the concentration of macroH2A1 at the centrosome increases to a level detectable by immunofluorescence (Fig. 5 a; Table I). The same trend is observed as cells proceed from mitosis through G1 toward S phase (Fig. 5 a; Table I). The accumulation of macroH2A1 at the centrosome in the presence of lactacystin is most prominent in cells as they pass through S phase toward mitosis. This is consistent with the need to target more macroH2A1 for degradation, as it is during this period that the MCB disappears, suggesting that excess quantities of macroH2A1 must be removed.

The association of macroH2A1 with the Xi chromatin is most prominent at S phase

A prerequisite for the formation of an MCB is the association of XIST RNA with the Xi (Csankovszki et al., 1999; Beletskii et al., 2001). Whereas XIST RNA coats the Xi through early G1 to late G2, the stable presence of XIST does not immediately direct macroH2A1 to the Xi to form an MCB (Fig. 5 b; Table II). Instead, the formation of an MCB is most common in early and middle S phase (Fig. 6 b). MacroH2A1 is unlikely to be marking chromatin for late replication, as not all sites of late replication overlap with macroH2A1 staining (Fig. 6 a), and the banding of macroH2A at metaphase (Figs. 3 and 4) does not correspond to regions of the Xi known to replicate latest in S phase (Willard, 1977). The cell cycle-influenced appearance of an MCB suggests that macroH2A1 (and perhaps macroH2A2) may be substituting into the H2A position in Xi nucleosomes at and around S phase. Although the nucleosome inner core of histones H3 and H4 is stable at interphase, H2B is more dynamic and can readily be substituted (Kimura and Cook, 2001). H2A and H2B are deposited onto chromatin as a heterodimer (Ridgway and Almouzni, 2000). Therefore it is conceivable that H2A variants, like H2B, can also dynamically exchange into the H2A position, conferring alternative states on local chromatin. Preparations of nucleosomes from cells blocked at the beginning of S phase or in mitosis have comparable levels of macroH2A1 (unpublished data), despite the significant decrease in macroH2A at the Xi as the MCB disappears (Fig. 5). Put into a genomic perspective, fluctuations in the local concentration of macroH2A1 at the Xi, visualized as an MCB in females, may be masked by the total concentration of nucleosomal macroH2A1 in a cell. This may indicate that macroH2A1 at the MCB represents only a small fraction of the total macroH2A1, with the remainder functioning in autosomal chromatin (Fig. 3 c and Fig. 4 a). Indistinguishable concentrations of macroH2A in male and female nucleosome fractions and total cell extracts support this (unpublished data). More speculatively, it is conceivable that macroH2A1 at the MCB is not all nucleosomal, but functions with other components of the dosage compensation complex at the Xi outside of the nucleosome context. Resolving this issue will require a detailed analysis of nucleosome levels of macroH2A at the Xi by chromatin immunoprecipitation at different stages of the cell cycle. One possible model for the functional significance of the MCB during S phase is that higher local concentrations of macroH2A may be one of many redundant mechanisms that promote loading of the dosage compensation complex onto the daughter X and mark it as the Xi as it is synthesized.

MacroH2A is enriched at specific bands on the metaphase Xi overlapping a site of histone H3 methylation

Although the MCB is not evident before the onset of mitosis (Fig. 5; Tables I and II), macroH2A1 does remain associated with both the human and mouse Xi during mitosis as distinct bands (Figs. 3 and 4). Intriguingly, these bands appear to mimic the banding seen with Xist RNA on the mouse Xi during mitosis (Duthie et al., 1999), suggesting that macroH2A may function to anchor Xist RNA in cis with the Xi. However, in humans, XIST RNA does not remain associated with the Xi during mitosis. Bands enriched for macroH2A may function as reentry sites for XIST RNA and the dosage compensation complex, assisting in the rapid spread along the Xi in a manner analogous to reentry sites of the Drosophila dosage compensation complex (Meller et al., 2000). The band of macroH2A at the site of the XIST locus (Fig. 3 b) is perhaps analogous to reentry of the Drosophila dosage compensation complex at the site of the roX1 and roX2 loci (Kelley et al., 1999). Why macroH2A remains associated specifically with these regions of the chromosome is intriguing. With the exception of the proximity of macroH2A to the satellite repeat DXZ4 at Xq22-24 (Giacalone et al., 1992), there are no obvious shared features, such as gene density or frequency of repeated elements, in the chromatin of the other identified regions.
Most intriguing is the clear overlap of the macroH2A band at Xq22-24 with a band of histone H3 lysine-4 methylation (Fig. 4, a, a′, and d). The band of DimH3K4 is centered at DXZ4 (Fig. 3 e) and marks the distal edge of the macroH2A band. This demonstrates the association of a histone modification thought primarily to mark euchromatin and regions of transcriptional activation (Kouzarides, 2002, and references therein) with a macrosatellite repeat (Giacalone et al., 1992). Potentially, DXZ4 may act as a boundary element, delimiting the spread of macroH2A, reinforced by the H3 lysine-4 methylation. Identification of the genomic sequences and chromatin modifications at the boundary of each of the macroH2A bands will provide invaluable insight into the functional significance and influence of the histone code used by the Xi. The MCB observed in interphase and the banding of macroH2A seen at metaphase (Figs. 3 and 4) might provide similar or separate functions. Although disruption of XIST results in the loss of detectable MCB formation (Csankovszki et al., 1999; Beletskii et al., 2001), this may not affect macroH2A at the chromosome bands. Targeting of macroH2A1 and macroH2A2 in human and mouse ES cells, along with carefully directed chromatin immunoprecipitation experiments, will further our understanding of the functional significance of macroH2A in X inactivation.

Cell culture and chemical treatment

Cell lines used include T-3352, a 46,XX human primary fibroblast strain (provided by Stuart Schwartz, Case Western Reserve University); hTERT-RPE1, a 46,XX telomerase-immortalized cell line derived from the human retinal pigment epithelial cell line RPE-340 (catalog no. C4000-1; CLONTECH Laboratories, Inc.); hTERT-BJ1, a 46,XY telomerase-immortalized cell line derived from a human primary foreskin fibroblast cell line (catalog no. C4001-1; CLONTECH Laboratories, Inc.); hTERT-HME1, a 46,XX telomerase-immortalized cell line derived from a human mammary epithelial cell line (catalog no. C4002-1; CLONTECH Laboratories, Inc.); and HEK-293, a female fetal kidney tumor cell line. B144 is a female mouse primary fibroblast cell line (provided by Laura Carrel, Case Western Reserve University). Cells were maintained as described previously (Chadwick and Willard, 2001a). Inhibition of the 20S proteasome was achieved by washing cell lines twice with PBS and applying complete media containing 10 µM lactacystin (Calbiochem) (Fenteany et al., 1995) for the time periods indicated, at 37°C in a 5% CO2 atmosphere.

Immunofluorescence and FISH

Immunofluorescence and FISH were performed essentially as previously described (Chadwick and Willard, 2001b). Slides were denatured at 85°C before FISH, as opposed to 72°C, to overcome extensive sample fixation. A digoxygenin-labeled DXZ4 probe was obtained from Oncor Inc. Human X chromosome TRITC-labeled probes were generated using a nick translation kit (Vysis Inc.). A mouse X chromosome-specific probe, DXwas70, was used to detect the mouse X chromosome. Human cosmid clones ICRFc100H0130 (XIC) and ICRFc100G11100 (CIC8) were obtained from the Imperial Cancer Research Fund Reference Library Database. Immuno-
Texture in the Superconducting Order Parameter of CeCoIn5 Revealed by Nuclear Magnetic Resonance

We present a 115In NMR study of the quasi-two-dimensional heavy-fermion superconductor CeCoIn5, believed to host a Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state. In the vicinity of the upper critical field, with a magnetic field applied parallel to the ab-plane, the NMR spectrum exhibits a dramatic change below T*(H) that coincides well with the position of reported anomalies in specific heat and ultrasound velocity. We argue that our results provide the first microscopic evidence for the occurrence of a spatially modulated superconducting order parameter expected in a FFLO state. The NMR spectrum also implies an anomalous electronic structure of the vortex cores.

A myriad of fascinating properties have been proposed for unconventional superconductors in the presence of a strong magnetic field. Among the possible exotic superconducting (SC) phases, a spatially nonuniform SC state originating from the paramagnetism of the conduction electrons became a subject of intense theoretical investigation after the pioneering work of Fulde and Ferrell and of Larkin and Ovchinnikov (FFLO) in the mid-1960s [1]. In spin-singlet superconductors, the destruction of superconductivity by a magnetic field can occur in two distinct ways: Cooper pairs may break up either because the spin of the conduction electron couples to the magnetic field (Pauli paramagnetism) or because the field also affects the electronic orbital angular momentum (vortices). A novel SC phase was predicted by FFLO for the case in which Pauli pair-breaking dominates over the orbital effect [2-9]. In the FFLO state, pair-breaking due to the Pauli effect is reduced by the formation of a new pairing state (k↑, −k+q↓), with |q| ~ 2µ_B H/ħv_F (v_F the Fermi velocity), between exchange-split parts of the Fermi surface, instead of the (k↑, −k↓) pairing of ordinary superconductors. In other words, spin-up and spin-down electrons can stay bound only if the Cooper pairs have a drift velocity along the magnetic field. As a result, a new SC state with a spatially oscillating order parameter and spin polarization, with a wavelength of order 2π/|q|, comparable to the coherence length ξ, should appear in the vicinity of the upper critical field H_c2.

The actual observation of the FFLO phase has been addressed only more recently, especially in the last several years. Although several type-II superconductors (including heavy-fermion and organic compounds) have been proposed as likely candidates for the observation of the FFLO state, subsequent research has called the interpretation of the data into question [10]; no solid evidence universally accepted as proof of the FFLO state has turned up. In this context, the case of CeCoIn5 has aroused great interest, because several measurements have led to a renewed discussion of a possible high-field FFLO state [11-14]. CeCoIn5 is a new type of heavy-fermion superconductor with a quasi-2D electronic structure [15] and is identified as an unconventional superconductor with, most likely, d-wave gap symmetry [16-20]. Very recent heat-capacity measurements revealed that a second-order phase transition takes place at T*(H) within the SC state, in the vicinity of the upper critical field H_c2 for H parallel to the ab-plane, at low temperatures [11,12].
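To give a sense of the length scale 2π/|q| introduced above, the sketch below evaluates the FFLO pairing wave vector |q| ~ 2µ_B H/ħv_F and the corresponding modulation length. The field is the experimental value quoted later in the text, but the Fermi velocity is an assumed order-of-magnitude figure, not a value quoted in the paper.

```python
import math

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.0546e-34  # reduced Planck constant, J*s

def fflo_wavelength(H_tesla, v_fermi):
    """Modulation length Lambda = 2*pi/|q| with |q| ~ 2*mu_B*H/(hbar*v_F)."""
    q = 2.0 * MU_B * H_tesla / (HBAR * v_fermi)
    return 2.0 * math.pi / q

# H = 11.3 T as in the NMR experiment; v_F ~ 5e3 m/s is an assumed
# heavy-fermion-scale Fermi velocity, for illustration only.
lam = fflo_wavelength(11.3, 5e3)
print(f"Lambda ~ {lam * 1e9:.1f} nm")  # of the order of the coherence length
```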
The transition line branches from the H_c2 line and falls with decreasing T, indicating the presence of a novel SC phase (hereafter we refer to the phase below T*(H) as the high-field SC phase). The inset of Fig. 1 illustrates the H-T phase diagram of CeCoIn5 in the vicinity of H_c2 at low temperatures. A subsequent ultrasound investigation revealed a collapse of the tilt modulus of the flux-line lattice [13], and thermal conductivity measurements showed a pronounced anisotropy [14] in the high-field SC phase below T*(H). Both measurements were presented in support of the FFLO interpretation. Thus, as new results accumulate, there is growing experimental evidence that the FFLO state may indeed be realized in the high-field SC phase of CeCoIn5.

CeCoIn5 appears to meet in an ideal way the strict requirements placed on the existence of the FFLO state. First, an extremely high H_c2 (~12 T at T = 0) favours the occurrence of the FFLO state, because the Pauli effect may then overcome the orbital effect. Pauli-limited superconductivity is in fact supported by the fact that the phase transition from superconductor to normal metal at the upper critical field is of first order below ~1.3 K [17,21]. Second, the material is in the extremely clean regime. Third, d-wave pairing symmetry greatly extends the stability of the FFLO state with respect to a conventional superconductor [9]. While these experimental and theoretical results make the FFLO scenario a very appealing one for CeCoIn5, there has so far been no direct experimental evidence verifying the spatially nonuniform SC state expected in a FFLO state.

[Figure 1. Inset: H-T phase diagram of CeCoIn5 in the vicinity of H_c2 at low temperatures; the data points are obtained from Ref. [14]. The transition at H_c2 is of first order in this T range. The region shown in green depicts the high-field SC phase discussed in the text. Horizontal arrows indicate the magnetic fields at which the NMR spectrum was measured. Main panel: 115In NMR spectra outside the high-field SC phase, slightly above H_c2 (blue line), slightly above T* (black), and well inside the high-field SC phase (red). The resonance feature at higher frequency is marked by hatching. The (H, T) points at which each NMR spectrum was measured are shown by crosses in the inset.]

A central matter related to this issue is the quasiparticle structure in the high-field SC phase; a powerful probe of the quasiparticle excitations in this phase is therefore strongly required to shed light on the subject. NMR is particularly suitable for this purpose because it monitors the low-energy quasiparticle excitations sensitively. Here we present the NMR spectrum in the vicinity of H_c2 to extract microscopic information on the quasiparticle structure for the first time. The spectrum we observe in the high-field SC phase is quite unique, and we will argue that our results provide the first microscopic evidence for the occurrence of the spatially inhomogeneous SC state expected in a FFLO state.

115In (I = 9/2) NMR measurements were performed on high-quality single crystals of CeCoIn5 using a phase-coherent pulsed NMR spectrometer. Experiments were always carried out in a magnetic field H parallel to the [100] direction under field-cooled conditions. The tetragonal crystal structure of CeCoIn5 consists of alternating layers of CeIn3 and CoIn2 and thus has two inequivalent In sites per unit cell [15]. We report NMR results for the In(1) site with axial symmetry in the CeIn3 layer, which lies at the center of the square lattice of Ce atoms.
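The Knight shift discussed below is extracted from the measured resonance frequency through f_res = γ(1 + K)H. Here is a minimal sketch of that conversion, using the 115In gyromagnetic ratio quoted in the text but ignoring the electric-quadrupole corrections the authors additionally apply; the example numbers are illustrative.

```python
GAMMA_115IN = 9.3295  # MHz/T, 115In gyromagnetic ratio

def knight_shift(f_res_mhz: float, H_tesla: float) -> float:
    """Knight shift K in percent, from f_res = gamma * (1 + K) * H."""
    return (f_res_mhz / (GAMMA_115IN * H_tesla) - 1.0) * 100.0

# At H = 11.3 T, a resonance shifted upward by 2.03 % of the bare frequency:
f0 = GAMMA_115IN * 11.3
print(knight_shift(f0 * 1.0203, 11.3))  # -> 2.03
```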
The Knight shift ^{115}K was obtained from the central ^{115}In line (±1/2 ↔ ∓1/2 transition) using a gyromagnetic ratio of ^{115}γ = 9.3295 MHz/T and by taking into account the electric quadrupole interaction.

Figure 1 depicts the NMR spectra outside and well inside the high field SC phase. At T slightly above T*(H = 11.3 T) ≃ 300 mK, the NMR spectrum is almost symmetric, as shown by the black solid line. Generally, the spatial distribution of the magnetic field arising from the flux line lattice structure gives rise to an asymmetric NMR spectrum [22]; the symmetric line shape observed here indicates that the influence of the field distribution is negligible in the present high field region. A most remarkable feature in the NMR spectrum well inside the high field SC phase, shown by the red solid line, is the appearance of a new resonance peak with small but finite intensity at higher frequency, as seen clearly at ^{115}K ≃ 2.03%. The intensity of the higher resonance line is about 3-5 percent of the total intensity and is nearly T-independent below 180 mK. This higher resonance line is an important clue to elucidate the nature of the high field SC phase. We stress that the occurrence of magnetic ordering is a highly unlikely source for the higher resonance in view of the large difference in the intensity of the two lines. Should antiferromagnetic order set in, the alternating hyperfine fields would produce two inequivalent ^{115}In(1) sites, which would give rise to two resonance lines with equal intensities.

In Fig. 1, the NMR spectrum at H slightly above H_c2 is also shown by the blue line. A noteworthy feature in the spectrum inside the high field SC phase (red solid line) is that the position of the higher resonance line within the high field SC phase coincides well with that of the resonance line above H_c2 (blue), while the position of the lower resonance line lies close to that of the SC state above T*(H) (black). Therefore, it is natural to deduce that the higher resonance line originates from a normal quasiparticle regime, which is newly formed below T*(H), while the lower resonance line corresponds to the SC regime, which appears to have a quasiparticle structure similar to that above T*(H). These results lead us to conclude that the appearance of the new resonance line at a higher frequency is a manifestation of a novel normal quasiparticle structure in the high field SC phase.

Figure 2 displays the temperature evolution of the spectra at H = 11.3 T. The higher resonance line grows rapidly with T just below T*(H). A double peak structure with equal intensities shows up at T = 240 mK, followed by a shoulder structure at T = 260 mK, indicating that the intensity of the higher resonance line dominates. The two lines merge into a single line above T*(H) ∼ 300 mK. The T-dependences of ^{115}K evaluated from the peak positions are plotted in Fig. 3 [23]. ^{115}K at H = 11.3 T exhibits quite unusual T-dependence. As the temperature is lowered below T*(H), ^{115}K of the higher resonance line increases rapidly and coincides with ^{115}K above H_c2 below 180 mK. On the other hand, below T*(H), ^{115}K of the lower resonance line changes only slightly.

So far, we have established that the high field SC phase is characterized by the formation of normal regions. This brings us to the next question: whether the NMR spectrum below T*(H) is an indication of a FFLO phase.
It has been predicted that in a FFLO phase the SC order parameter exhibits one-dimensional spatial modulations along the magnetic field, forming planar nodes that are periodically aligned perpendicular to the flux lines. Therefore, the formation of the normal regions is consistent with a phase expected in a FFLO state. We will show that the NMR spectra just below T*(H) in Fig. 2 can be accounted for by considering such planar structures.

The field-induced layered structure expected in a FFLO phase resembles the SC states of stacks of superconductor-normal-superconductor (S-N-S) Josephson tunnel junctions. In the NMR experiments, the rf magnetic field H_rf is applied perpendicular to the dc magnetic field (H ∥ a, H_rf ∥ b). The shielding supercurrents flow across the planar nodes. Because of the second order transition at T*(H), the modulation length of the order parameter parallel to H, i.e., the thickness of the SC layers, Λ (= 2π/|q|), diverges as Λ ∝ (T* − T)^{−α} with α > 0 upon approaching T*(H). Therefore Λ will exceed the in-plane penetration length λ in the vicinity of T*. In such a situation, the rf field penetrates into the normal sheets much deeper than into the SC sheets, which results in a strong enhancement of the NMR intensity from the normal sheets. At low temperature, where Λ becomes comparable to ξ (≪ λ), penetration of the rf field into the normal sheets is the same as that into the SC sheets.

We estimate the above effect semi-quantitatively. Assuming a simple sinusoidal modulation of the gap function along the applied field (x-axis), ∆(x) = ∆_0 sin(qx), the spatial modulation of the rf field H_rf(x), and from it the NMR intensity, can be computed in terms of K(∆/T), the Yoshida function for the Knight shift in the SC state [24] (a schematic form of this construction is sketched below). We attempt to fit the experimental data with this formula, using Λ/λ and T/∆_0 as fitting parameters. Figure 4 depicts the calculated spectrum just below T*(H), where we have used T/∆_0 = 0.29 and Λ/λ = 4.7 at T = 240 mK, and T/∆_0 = 0.3 and Λ/λ = 7.5 at T = 260 mK. The spectra are also convolved with a Lorentzian shape to account for inhomogeneous broadening. This simple simulation, in which a planar nodal structure is assumed, reproduces the observed spectra well, and suggests that the wavelength of the spatial oscillation of the SC order parameter decreases substantially with decreasing temperature. Thus, the evolution of the NMR spectrum with temperature is compatible with what is expected in a FFLO phase.

We finally discuss the nature of the quasiparticle structure inferred from the NMR spectrum. The intensity of the higher resonance line indicates that only a few percent of the total volume is occupied by the newly formed normal quasiparticle region well below T*(H). Furthermore, the presence of two well-separated NMR lines implies that the quasiparticle excitation around the planar nodes is spatially localized. We therefore speculate that the spatial dependence of the order parameter along the magnetic field may be Bloch-wall-like or rectangular, rather than sinusoidal, far below T*(H). These results call for further theoretical investigations on the real space structure of the SC order parameter.

We note that a peculiar electronic structure of vortex cores in CeCoIn_5 is also inferred from the present results. A double peak structure in the NMR spectra directly indicates that the Knight shift within the vortex core deviates from that in the normal quasiparticle sheets.
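The explicit formulas for H_rf(x) and the NMR intensity did not survive in the text above; the following is a minimal schematic reconstruction, assuming an rf weighting w(x) of exponential form, meant only to show how the two fitting parameters Λ/λ and T/∆_0 enter — it is not the paper's Eq.:

\[
\Delta(x) = \Delta_0 \sin(qx), \qquad
K(x) \propto Y\!\left(\frac{|\Delta(x)|}{T}\right),
\]
where $Y$ is the Yoshida function ($Y \to 1$ in the normal sheets, $Y \to 0$ deep inside the SC sheets). With an assumed rf weight $w(x) = \exp\!\left[-(\Lambda/\lambda)\,|\Delta(x)|/\Delta_0\right]$, the line shape is the weighted density of shift values,
\[
I(K) \propto \sum_{x_i:\,K(x_i) = K} \frac{w(x_i)}{\left| dK/dx \right|_{x_i}},
\]
convolved with a Lorentzian. For $\Lambda/\lambda \gg 1$ the weight concentrates near the planar nodes, enhancing the "normal" peak at $K \simeq K(H > H_{c2})$, as observed.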
This implies that the vortex core is to be distinguished from the normal state above H_c2, a feature in sharp contrast to conventional superconductors, where the Knight shift within the core coincides with the Knight shift in the normal state. What is the reason behind this unusual structure of the vortex core? Since in CeCoIn_5 H_c2 is limited by the Pauli paramagnetic effect, the area occupied by vortex cores can be much smaller than what is estimated from H/H_c2. Hence, even just below H_c2, the vortex cores are associated with a large spatial oscillation of the SC order parameter. We recall that a strong reduction of the quasiparticle density of states within vortex cores has been reported in high-T_c cuprates [22,25], and discussed in terms of the strong enhancement of the antiferromagnetic correlation within cores [26]. A similar situation may be present in CeCoIn_5. Interestingly, Nernst effect measurements in the latter [27] indicate that the difference of entropy between the vortex core and the superconducting environment is unusually small, and are therefore compatible with a reduced density of states in the vortex core. Moreover, strongly enhanced antiferromagnetic correlations in CeCoIn_5 are inferred from the T-dependence of the Knight shift in the normal state above H_c2, which increases with decreasing T, as is evident from Fig. 3. This behavior is notably different from that expected in the Fermi liquid model, which predicts a T-independent Knight shift. This non-Fermi-liquid behavior has been discussed in the light of incipient antiferromagnetism with a quantum critical point in the vicinity of the upper critical field [28]. These results call for further investigations of the vortex core structure in the presence of strong antiferromagnetic correlations.

To conclude, the ^{115}In NMR spectrum in CeCoIn_5 exhibits a dramatic change in the vicinity of H_c2. Below T*(H) a new resonance line appears at higher frequency, which can be attributed to the normal quasiparticle sheets formed in the SC regime. On the basis of the NMR spectrum, we were able to establish clear evidence of a spatially inhomogeneous SC state at high field and low temperatures, precisely as expected in a FFLO state. The NMR spectrum also indicates that the vortex core structure of CeCoIn_5 is markedly different from that of ordinary superconductors.
Metal-insulator transition in Hubbard-like models with random hopping

An instability of a diffusive Fermi liquid, indicative of a metal-insulator transition (expected to be of first order), arising solely from the competition between quenched disorder and short-ranged interparticle interactions is identified in Hubbard-like models for spinless fermions, subject to (complex) random hopping at half-filling on bipartite lattices. The instability, found within a Finkel'stein non-linear sigma model treatment in d = (2 + ǫ) > 2 dimensions, originates from an underlying particle-hole like (so-called "chiral") symmetry, shared by both disorder and interactions. In the clean, interacting Fermi liquid this symmetry is responsible for the (completely different) "nesting" instability.

PACS numbers: 71.30.+h, 71.10.Fd, 72.15.Rn

Understanding the combined effects of quenched disorder and interparticle interactions in electronic systems remains one of the central problems in solid state physics. Models of noninteracting electrons subject to static, random impurity potentials provide the simplest description of disordered metals; many analytical and numerical studies have shown that such models exhibit a continuous metal-insulator transition (MIT) in three dimensions (3D), 1 accessed by varying either the disorder strength, or the Fermi energy relative to the mobility edge. By contrast, all electronic states are typically exponentially localized in one and two dimensions by arbitrarily weak disorder. (Notable exceptions occur in the presence of spin-orbit scattering, and in the systems discussed, e.g., in Refs. 2,3.) The above description ignores the important effects of interparticle interactions. Unfortunately, however, theories capturing the competition of both disorder and interactions are typically quite complex, and difficult to analyze reliably. 4,5 These issues have come again to the forefront of scientific debate in view of discussions concerning a metal-insulator transition in 2D, and related experimental results on 2D semiconductor inversion layers. 6

In the present work, we identify a novel "Anderson-Mott" instability of a diffusive Fermi liquid in d = (2 + ǫ) > 2 spatial dimensions (ǫ ≪ 1), which arises solely from the competition between disorder and short-range interactions. This instability is indicative of a metal-insulator transition (expected to be of first order) from the diffusive Fermi liquid to an insulating state dominated by both strong disorder and interactions. (See Fig. 1, below.) Since the system that we study has no localized phase with disorder in the absence of interactions, a localized phase can only appear due to the presence of the interactions. We expect our result to be relevant for sufficiently strong disorder in d = 3 dimensions.

Specifically, we analyze a class of "Hubbard-like" models 7 of spinless fermions, at half-filling on bipartite lattices, with random (short-range) hopping between the two sublattices. [The Hamiltonian is given below in Eq. (1).] In every realization of the (static) disorder, such a model possesses a special particle-hole like symmetry, which we will refer to as sublattice symmetry (SLS). [SLS is termed "chiral symmetry" in the classification scheme of Ref. 8 (see also Refs. 2,3,9,10,11,12,13).] In the absence of disorder, it is SLS which is responsible for the "nesting" condition of the Fermi surface. Fermi surface nesting is, in a sense, the defining property of (clean) Hubbard-like models for interacting lattice fermions in d ≥ 2.
It is the nesting condition which makes the ballistic Fermi liquid phase at half filling in such models unstable to Mott insulating order in the presence of generic, arbitrarily weak interparticle interactions. 14,15,16 Here we consider, in addition, complex random (nearest-neighbor) hopping, which breaks time reversal invariance (TRI) in every realization of disorder. 17 For our system of spinless fermions, this is consistent with the application of a random magnetic field to the otherwise clean model. Our principal motivation for studying such a model is that we expect both disorder and interparticle interactions to play important roles in the description of the low-energy physics. Because random hopping preserves the special SLS, our disordered model retains the nesting instability of the associated clean system. This instability can therefore compete with the unusual localization physics of the disordered, but noninteracting model (see below). The further assumption of broken TRI guarantees that we do not have to confront an additional superconducting instability. 5,14,15,16 We expect the instability that we identify in this work in the simultaneous presence of both disorder and interactions to occur in three dimensions for sufficiently strong disorder, and we stress that it is clearly distinct from the pure Mott nesting instability, although the latter also appears in our model phase diagram (see Fig. 1, below). We note that the effects of hopping disorder upon the Néel ground state of the (slightly more complex) spin-1/2 Hubbard model at half filling were studied numerically in Ref. 18, although these studies were limited to d = 2.

A second motivating factor is that, interestingly, the presence of SLS radically changes the localization physics of the disordered, noninteracting random hopping model [Eq. (1), below, with V = U = 0]. SLS enables the random hopping model to evade the phenomenon of Anderson localization. Specifically, the noninteracting system exhibits a critical, delocalized phase at the band center (half filling) in one, two, and three dimensions for finite disorder strength, with a strongly divergent low-energy density of states in d = 1, 2. 2,3,9,10,11,12 In particular, there is no MIT and no Anderson insulating phase in d = 3 (in the absence of interactions). Random hopping models have been of significant theoretical interest in the recent past, both because of the unusual delocalization physics described above, but also because these models have proven amenable to a variety of powerful analytical techniques in d ≤ 2, with many exact and/or nonperturbative features now understood. 2,10,12 This situation should be contrasted with our understanding of the conventional noninteracting ("Wigner-Dyson") MIT, which is based largely on perturbative results in d > 2. 1 Our work here addresses the effects of interparticle interactions in random hopping models for d ≥ 2.

Our starting point is the extended Hubbard-like Hamiltonian, Eq. (1), for spinless fermions hopping on a bipartite lattice at half filling (a schematic form is sketched below). Any bipartite lattice may be divided into two interpenetrating sublattices, which we distinguish with the labels A and B. In Eq. (1), c†_{Ai} and c_{Bj} denote fermion creation and annihilation operators on the A and B sublattices, respectively. Here, i and j respectively index the A and B sublattice sites, and the sums on ⟨ij⟩ run over all nearest neighbor A-B lattice bonds, while the sums on ⟨ii′⟩ and ⟨jj′⟩ run over all next-nearest neighbor (same sublattice) pairs of sites.
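Eq. (1) itself is not reproduced above; the LaTeX below is a plausible reconstruction assembled from the surrounding prose (hopping t + δt_{i,j} between sublattices, density-density couplings V and U), with normalizations that are assumptions rather than the paper's:

\[
H = \sum_{\langle ij \rangle} \left[ (t + \delta t_{i,j})\, c^{\dagger}_{A i} c_{B j} + \text{h.c.} \right]
  + V \sum_{\langle ij \rangle} \delta n_{A i}\, \delta n_{B j}
  + U \Big[ \sum_{\langle\langle i i' \rangle\rangle} \delta n_{A i}\, \delta n_{A i'}
  + \sum_{\langle\langle j j' \rangle\rangle} \delta n_{B j}\, \delta n_{B j'} \Big],
\]
with $\delta n_{A i} = c^{\dagger}_{A i} c_{A i} - \tfrac{1}{2}$ and $\delta n_{B j} = c^{\dagger}_{B j} c_{B j} - \tfrac{1}{2}$ the deviations of the local sublattice densities from their half-filling value.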
The homogeneous hopping amplitude t in Eq. (1) is taken to be purely real; disorder appears in the perturbation δt_{i,j}. We take the amplitude δt_{i,j} to be a Gaussian complex random variable with zero mean, statistically independent on different lattice links. The δn terms in Eq. (1) denote the deviations of the local sublattice fermion densities from their value at half filling; the interaction strengths V and U appearing in this equation couple to nearest neighbor and next-nearest neighbor density-density interactions, respectively. The Hamiltonian H in Eq. (1) is invariant under the (antiunitary) sublattice symmetry (SLS) transformation (all complex scalar terms in H are complex conjugated). 19 In the clean limit, Eq. (1) with δt_{i,j} = 0 for all lattice bonds ⟨ij⟩, SLS is responsible for the nesting condition of the noninteracting Fermi surface. As a result of nesting, the Fermi liquid phase of the clean model is unstable to charge density wave (CDW) order for any 2V > U ≥ 0. 16,20

The effects of the interparticle interactions U and V upon the delocalized phase of the disordered model given by Eq. (1) may be investigated by using Finkel'stein's generalized non-linear sigma model (FNLσM) approach 4,5 to formulate the low-energy effective continuum field theory. The latter can be studied using a controlled ǫ expansion in d = 2 + ǫ dimensions (0 ≤ ǫ ≪ 1). We use the Schwinger-Keldysh 21 method to perform the ensemble average over realizations of the hopping disorder. The FNLσM is derived following the standard methodology. 4,5,21 In the present case, the resulting FNLσM is described by a generating functional 20 whose action splits into a disorder part S_D and an interaction part S_I [Eqs. (3)-(5); a schematic form is sketched below]. The field variable Q̂(r) → Q^{ab}_{tt′}(r) (6) in Eqs. (3)-(5) is a complex, "infinite-dimensional" square matrix living in d spatial dimensions, where indices t and t′ belong to a continuous time or (via Fourier transform) frequency space, and where indices a and b belong to a 2-dimensional "Keldysh" species space, with a, b ∈ {1, 2}. 21,22 In Eq. (4), Tr denotes a matrix trace over time (or frequency) and Keldysh indices. Q̂(r) satisfies in addition a unitary constraint, Eq. (7). The matrix Q̂ and its adjoint Q̂† may be interpreted 20 as continuum versions of the same-sublattice fermion bilinears, Eq. (8).

The action given by Eq. (4) describes the low-energy diffusive physics of the noninteracting random hopping model; a replica version of the noninteracting sigma model with action S_D was originally studied by Gade and Wegner. 3 This noninteracting sector of the FNLσM contains three coupling constants: λ, λ_A, and h. The parameter 1/λ is proportional to the dimensionless dc conductance g of the system (i.e., is related to the disorder strength), while λ_A denotes a second measure of disorder, unique to this sublattice symmetry class, which strongly influences the single-particle density of states. 3,10,11 The parameter λ_A may be simply interpreted as characterizing the strength of long-wavelength, quenched random orientational fluctuations in the bond strength dimerization of the random hopping model. 20 Finally, h is a dynamic scale factor, which determines the dynamical critical exponent z in Eq. (10e), below, through the condition d ln h/dl ≡ 0. 5,13,20

The interparticle interactions appear in S_I, defined by Eq. (5). Given Eq. (8), we may interpret Q^{aa}_{tt}(r) and Q†^{aa}_{tt}(r) as continuum local density operators on the A and B sublattices, respectively. Then the interaction couplings Γ_V and Γ_U in Eq. (5) describe generic short-ranged intersublattice and same-sublattice density-density interactions, respectively [compare to Eq. (1), above].
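For orientation, a schematic Gade-Wegner-type action consistent with the description above is sketched here; the coefficients, index contractions, and frequency term are assumptions, not the paper's Eqs. (3)-(5):

\[
Z = \int \mathcal{D}\hat{Q}\; e^{i S[\hat{Q}]}, \qquad S = S_D + S_I,
\]
\[
S_D \sim \int d^d r \left\{ \frac{1}{\lambda}\, \mathrm{Tr}\!\left( \nabla \hat{Q}^{\dagger} \cdot \nabla \hat{Q} \right)
 + \frac{\lambda_A}{\lambda^2}\left[ \mathrm{Tr}\!\left( \hat{Q}^{\dagger} \nabla \hat{Q} \right)\right]^2
 + i h^2\, \mathrm{Tr}\!\left[ \hat{\omega} \left( \hat{Q} + \hat{Q}^{\dagger} \right) \right] \right\},
\]
with $\hat{Q}^{\dagger}\hat{Q} = 1$ the unitary constraint of Eq. (7), and $S_I$ bilinear in the density-like components $Q^{aa}_{tt}$ and $Q^{\dagger aa}_{tt}$ with couplings $\Gamma_V$ and $\Gamma_U$.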
Finally, ξ_a = ±1 in Eq. (5), for Keldysh species label a = 1, 2.

Using a Wilsonian frequency-momentum shell background field methodology, 4 we have performed a one-loop renormalization group calculation on the model defined by Eqs. (3)-(5). The calculation is straightforward, though rather lengthy; the details will be published elsewhere. 20 Below we simply state our results. In order to do so, it is convenient to introduce the effective interaction couplings γ_s and γ_c [Eq. (9); see the sketch below]. The interaction strength γ_s couples to the square of the (smooth) local charge density in the continuum theory, while γ_c couples to the square of the sublattice staggered charge density. In accordance with the discussion in the paragraph below Eq. (2), we expect γ_c < 0 to promote charge density wave (CDW) formation, while γ_c > 0 should suppress it.

We find one-loop RG flow equations for the couplings λ, λ_A, γ_s, γ_c, and h in d = (2 + ǫ) dimensions [Eqs. (10a)-(10e)]. Here, l is the logarithm of the spatial length scale. These flow equations are given to the lowest non-trivial order in the couplings λ, λ_A, and γ_c, but contain contributions from γ_s to all orders; Finkel'stein's NLσM formulation provides 5 a perturbative expansion which is controlled by the (small) dimensionless resistance λ, but does not require the interaction strength γ_s to be small.

Before turning to an analysis of our results, Eqs. (10a)-(10e), we provide interpretations for various key terms appearing in them. First, the term in square brackets on the second line of Eq. (10a) is the usual correction to the dimensionless dc resistance λ, arising from the short-ranged interparticle interactions, 4,23,24 and corresponds 25 to coherent backscattering of carriers off of disorder-induced Friedel oscillations in the background electronic charge density. 26 The last term in Eq. (10d) drives the CDW instability, which is a remnant of the clean Hubbard-like model (recall that in our conventions γ_c < 0 signals this instability).

Now we analyze our results. In d = 2 dimensions, integrating Eq. (10) for generic initial conditions shows that the critical, delocalized phase of the half-filled, noninteracting random hopping model 3 is unstable to the effects of short-ranged interparticle interactions. We find that either γ_c → −∞, signaling CDW formation, or that λ, λ_A → ∞ and γ_c → +∞, indicating a flow toward both strong disorder and strong interactions. Regardless, we expect the 2D interacting, disordered Hubbard model to be an insulator at zero temperature. This should be compared to an analogous result 13 previously obtained for a TRI, interacting random hopping model on the honeycomb lattice. This physics is consistent with numerical studies 18 of the half-filled spin-1/2 Hubbard model in d = 2, which have shown that TRI random hopping disorder preserves the compressibility gap of the clean Mott insulator, and that the disordered and interacting system shows no signs of metallic behavior.

The situation in d = (2 + ǫ) > 2 dimensions is more interesting. Upon increasing ǫ from zero, a narrow, irregularly shaped sliver corresponding to a stable metallic, diffusive Fermi liquid state opens up in the four-dimensional (λ, λ_A, γ_s, γ_c) coupling constant space.
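The definitions behind Eq. (9) are not reproduced above; the decomposition below is a hedged sketch of the standard construction, with overall factors omitted. Writing the smooth and staggered densities as combinations of the sublattice densities,

\[
\delta n_{\pm} = \delta n_A \pm \delta n_B, \qquad
\delta n_A \delta n_B = \tfrac{1}{4}\left( \delta n_{+}^{2} - \delta n_{-}^{2} \right), \qquad
\delta n_A^{2} + \delta n_B^{2} = \tfrac{1}{2}\left( \delta n_{+}^{2} + \delta n_{-}^{2} \right),
\]
the interaction sector takes the form $S_I \sim \int d^d r\, dt \left[ \gamma_s\, \delta n_{+}^{2} + \gamma_c\, \delta n_{-}^{2} \right]$, so that $\gamma_s$ and $\gamma_c$ are linear combinations of $\Gamma_U$ and $\Gamma_V$ (coefficients not specified here). The intersublattice coupling $\Gamma_V$ enters $\gamma_c$ with a negative sign, which makes plausible the stated convention that $\gamma_c < 0$ (staggered density favored) promotes the CDW.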
The sliver encloses the line λ = λ_A = γ_c = 0, with −∞ < γ_s < 1, the entirety of which is perturbatively accessible because the FNLσM does not require the interaction strength γ_s to be small. A highly schematic 3D "projected" phase diagram is depicted in Fig. 1. In this figure, the interaction constants reside in the horizontal plane, while the vertical direction schematically represents (both) disorder strengths; the shaded sheath is a cartoon for the boundary of the stable metallic region, which resides between it and the ballistic (λ = λ_A = 0) plane. The "height" of the stable metallic region in the "disorder" directions (λ, λ_A) is controlled by ǫ, although the precise shape and size of the phase boundary varies with γ_s, and is difficult to characterize analytically. Over the range of perturbatively small values of γ_c, the stable Fermi liquid phase resides in the region γ_c ≳ 0, and terminates near γ_c = 0.

The flow equations (10a)-(10d) possess no perturbatively accessible, nontrivial RG fixed points for d > 2, and thus no continuous metal-insulator transition can be identified. However, the two instabilities described above for the 2D case persist for d > 2, and become clearly distinct roads out of the metallic state. The conventional CDW instability always occurs for initial γ_c < 0 and sufficiently weak disorder, i.e., when λ, λ_A ≪ ǫ, and is represented by the flow γ_c → −∞. This flow is accompanied by a decay in both disorder strengths λ, λ_A.

The primary result of this paper is the identification of a second route out of the diffusive Fermi liquid phase in d = (2 + ǫ) > 2 dimensions, independent of the Mott CDW instability, arising solely from the competition of disorder and interaction effects. As in the 2D case, this second route is characterized by a flow off to both strong disorder (λ, λ_A → ∞) and strong interactions (γ_c → +∞), as indicated by the thick arrows emerging from the γ_c > 0 portion of the phase boundary shown in Fig. 1; we call it an Anderson-Mott instability. Even though there is no perturbatively accessible fixed point, this new instability is nonetheless perturbatively controlled in d = (2 + ǫ) over a wide range of initial conditions when ǫ ≪ 1; in particular, it is perturbatively accessible over the entire range 0 ≤ γ_s < 1. 27 Numerically integrating Eqs. (10a)-(10d) for small ǫ ≪ 1, we find that the Anderson-Mott instability can apparently always be reached by increasing only the dimensionless resistance λ. We expect the boundary separating the flow toward the stable metallic regime from that toward the regime of the Anderson-Mott instability to represent a disorder-driven, first order metal-insulator transition (MIT). We emphasize that a MIT does not exist in the noninteracting random hopping model, which possesses only a delocalized phase at half-filling for finite disorder in d ≥ 1, 2,3,10,11 while the clean spinless Hubbard model possesses only the Mott CDW instability. RG flow equations in related systems of spin-1/2 fermions were recently obtained independently in Ref. 28.

This work was supported in part by the NSF under Grant No. DMR-00-75064 and by the UCSB Graduate Division (M.S.F.).
Nitrogen transformation processes catalyzed by manure microbiomes in earthen pit and concrete storages on commercial dairy farms Storing manure is an essential aspect of nutrient management on dairy farms. It presents the opportunity to use manure efficiently as a fertilizer in crop and pasture production. Typically, the manure storages are constructed as earthen, concrete, or steel-based structures. However, storing manure can potentially emit aerial pollutants to the atmosphere, including nitrogen and greenhouse gases, through microbial and physicochemical processes. We have characterized the composition of the microbiome in two manure storage structures, a clay-lined earthen pit and an aboveground concrete storage tank, on commercial dairy farms, to discern the nitrogen transformation processes, and thereby, inform the development of mitigation practices to preserve the value of manure. First, we analyzed the 16S rRNA-V4 amplicons generated from manure samples collected from several locations and depths (0.3, 1.2, and 2.1–2.75 m below the surface) of the storages, identifying a set of Amplicon Sequence Variant (ASVs) and quantifying their abundances. Then, we inferred the respective metabolic capabilities. These results showed that the manure microbiome composition was more complex and exhibited more location-to-location variation in the earthen pit than in the concrete tank. Further, the inlet and a location with hard surface crust in the earthen pit had unique consortia. The microbiomes in both storages had the potential to generate ammonia but lacked the organisms for oxidizing it to gaseous compounds. However, the microbial conversion of nitrate to gaseous N2, NO, and N2O via denitrification and to stable ammonia via dissimilatory nitrite reduction seemed possible; minor quantities of nitrate was present in manure, potentially originating from oxidative processes occurring on the barn floor. The nitrate-transformation linked ASVs were more prevalent at the near-surface locations and all depths of the inlet. Anammox bacteria and archaeal or bacterial autotrophic nitrifiers were not detected in either storage. Hydrogenotrophic Methanocorpusculum species were the primary methanogens or methane producers, exhibiting higher abundance in the earthen pit. These findings suggested that microbial activities were not the main drivers for nitrogen loss from manure storage, and commonly reported losses are associated with the physicochemical processes. Finally, the microbiomes of stored manure had the potential to emit greenhouse gases such as NO, N2O, and methane. Supplementary information The online version contains supplementary material available at 10.1186/s40793-023-00483-z. Introduction The shift from family owned small dairy farms to large dairy operations in the US over the past decades has been accompanied by the generation of high volumes of manure [1,2], and the associated accumulation and concentration of nitrogen, phosphorus, potassium, salts, and minerals in specific geographical zones [3]. The high nutrient content of manure makes it a valuable source of organic fertilizer for crops and pasture production. Thus, an effective manure management involving storage prior to application on land is an important factor driving the sustainability of dairy operations. Storing manure allows the (i) use of manure at the right time, (ii) decrease manure handling costs, and (iii) minimize the potential to pollute the environment. 
During storage, the organic nitrogen of manure is converted via physicochemical and microbial processes into plant available inorganic species, such as ammonia (NH 3 ), nitrite (NO 2 − ), and nitrate (NO 3 − ) [4][5][6][7]. However, these transformations also cause the production of gaseous forms of nitrogen such as dinitrogen (N 2 ), nitric oxide (NO), and nitrous oxide (N 2 O), which along with ammonia, are amenable to loss to the atmosphere unless they are rapidly converted into soluble compounds [8,9]. Nitrogen loss from manure storage could amount to 30 percent of the total nitrogen contents depending on the storage condition [10], substantially reducing the fertilizer value of the material. Additionally, the anaerobic microbial decomposition of organic matter in manure generates methane (CH 4 ), which along with NO and N 2 O are potent greenhouse gases (GHGs), making manure storage an agricultural greenhouse gas source [11]. The manure management systems contribute 9.7 percent of the methane emission in the US [12]. Thus, an understanding of the microbe-mediated nitrogen and carbon transformation in these units is necessary to develop strategies for preserving the nitrogen fertilizer value of manure and mitigating greenhouse gas emissions from these sources. While there have been studies on these processes, the attention primarily has been on the chemical reactions mediating the losses [13,14]. The few studies that analyzed the microbiomes of stored manure [15][16][17][18][19][20][21] did not focus on the role of microbes in nitrogen transformation processes but the emergence of antibiotic resistant species [16][17][18] and methane production [19][20][21]. To fill this gap, we assessed the potentials of microbial nitrogen biotransformation in a clay-lined earthen pit (EP) and an above ground concrete manure (CS) storage employing a culture-independent approach. The characteristics of the methanogens which carry out the terminal step of the biomethanation of organic materials were investigated as well. The study also tested the hypothesis that the nitrogen transformation and methanogenesis activities are influenced by the storage types. Storage description Two on-farm manure storages, a clay-lined earthen pit (EP) and a partial aboveground concrete tank (CS), were studied. The farms are located in Franklin County, VA (Fig. 1A). The EP is an earthen pit with a clay lining (Figs. 1B-C), while the CS was an aboveground tank made of concrete (Figs. 1D-E). The EP and CS received manure from 85 and 75 cows, respectively. At each farm, the cows are raised in a barn and fed a total mixed ration diet, and the manure is scraped from the barn floors to the storage twice daily. The EP is oval, with top surface dimensions of 60 and 27 m on the long and wide sides ( Fig. 1B-C). The manure inlet and outlet (pump-out) locations are on opposite sides of the longer dimension of the storage. The depth of the storage pit increases gradually from 3.66 m (near the inlet) to 3.96 m (near the outlet). The design storage capacity is enough to hold manure for about four months. For EP the manure was fed from the bottom at a location near the periphery of the pit (EP1, Fig. 1C) and for CS the addition also occurred at a periphery location but on the surface (CS3, Fig. 1F). The CS structure had a diameter of 18.3 m and was 4.6 m deep (Figs. 1D and 1E-F). Manure sample collection and processing The samples were collected from the EP in August 2018 and from the CS in September 2018. 
At the time of the experiment, these storages were about 90% full from four months of filling, starting from an empty stage to the manure depths of 2.9 and 3.05 m for the EP and CS, respectively. In each case, the sampling locations were selected based on their distances from the inlet and outlet, and the occurrence of a typical physical structure, crust, on the surface. Samples were collected from the following five locations of EP (Fig. 1C) and three locations of CS (Fig. 1F): EP1, inlet with 15 cm dry crust; EP2, closest distance to EP1 with no crust; EP3, close to lining with no crust; EP4, 30 cm dry crust; EP5, closest to outlet with no crust; CS3, inlet; CS1, middle of the storage; CS2, farthest from inlet. The crusting profile on the surface of CS was similar at all sampling locations and ranging from 15 to 22 cm. A self-propelled commercial telescopic boom lift with an 80 ft reach (Genie S-85, Genie United States, Redmond, WA) was used to reach a sampling location above the manure pit. Then, a custom-built sampler (Additional file 1: Fig. S1) was used to collect samples from the following three depths as measured from the surface: 0.3 m, near-surface; 1.2 m, middle; 2.1-2.7 m, bottom. The sampler was made of ¾ and 1 ½ inch diameter PVC pipes fitted with manually operated butterfly valve (Additional file 1: Fig. S1). Immediately after retrieval, each sample was placed in a plastic beaker and gently mixed with a spoon; prior to their use, the beaker and spoon were washed with 2% phosphoric acid to remove contaminating nucleic acids. Then, the sample was distributed into three sterile, DNase-free 15 ml polypropylene tubes (catalog number: 62406-200, VWR International, Radnor, PA) and snap-frozen in a dry ice and ethanol bath; samples in these tubes were considered replicates. Another aliquot (~ 0.5L) of sample were placed in separate bottles to analyze for manure chemical characteristics. A total of 45 and 27 manure samples were collected from EP and CS, respectively. These were transported to the laboratory on dry ice and stored at -20 °C. Manure characteristics The manure samples were analyzed for the content of total and volatile solids (TS and VS, respectively), and pH according to the standard method for wastewater analysis (APHA, 2012) as follows. The pH was measured using the IDS pH combined electrode (SenTix ® 940-3, Wissenschaftlich-Technische Werkstätten GmbH, Weilheim, Germany) while the total chemical oxygen demand (COD) was analyzed using a HACH method 8000 (HACH, Colo., USA). The content of important nutrients for plant present in the manure, including total nitrogen (TN), total ammonium nitrogen (TAN), nitrate nitrogen (NO 3 − -N), total phosphorus, potassium, calcium, magnesium, sulfur, iron, manganese, zinc, copper, boron, molybdenum, aluminum, and sodium, were analyzed at the Agronomic Services lab, North Carolina Department of Agriculture & Consumer services (Raleigh, NC). DNA extraction and 16S rRNA amplicon sequencing From a manure sample, DNA was extracted using Qiagen Fast DNA Stool Mini Kit (cat. no. 51604, Qiagen, Germantown, MD) following the manufacturer's instructions with modification (Additional file 1: Method S1). The DNA preparations that passed quality assessment were used for paired-end sequencing targeting the 16S rRNA hypervariable region 4 (V4), using 515F and 806R primers [22,23] at the Environmental Sample Preparation and Sequencing Facility of the Argonne National Laboratory or ANL (Lemont, IL). 
Bioinformatic analysis

The QIIME 2-2019.4 and PICRUSt2 v.2.1.4 pipelines were used on the high-performance computing cluster of the Virginia Tech Advanced Research Computing (ARC) resources. The analysis relies on sequences of a short section of the 16S rRNA gene and not whole genomes or isolate characteristics. Thus, the detected Amplicon Sequence Variants (ASVs) represent organisms that are highly similar, but not identical, to the known archaea and bacteria that we list in the report.

Taxonomic and abundance analysis of the 16S rRNA sequences

Raw sequence data obtained from the ANL were analyzed by the QIIME 2-2019.4 package [24] for preprocessing and removal of contaminants. The ASVs were generated via the DADA2 pipeline [25] and then clustered at 99% sequence similarity using vsearch [26]. A pre-trained Naïve Bayes classifier was used to annotate the sequences using the SILVA 132 database [27]. Sequences annotated as chloroplast and mitochondria were classified as contaminants [28][29][30] and removed from the dataset. Statistical and phylogenetic analyses were done using Bioconductor packages [31] in R [32] as follows. The species richness index was calculated using the Chao1 estimator of the microbiomeSeq package [33] with samples rarefied to 4529 sequences per sample. Significant differences between the species richness of two groups were determined by pairwise ANOVA (P_ANOVA < 0.05). Microbial community comparison between samples was performed via non-metric multidimensional scaling (nMDS) ordination of the Bray-Curtis dissimilarity distances [34]. The sample parameters that contributed the most to sample clustering were identified via a non-parametric permutational analysis of variance (PERMANOVA) and analysis of similarities (ANOSIM) using the adonis function in vegan [35] (permutations: 999, P < 0.05) [36]. The microbiome composition of stored dairy manure was assessed using the phyloseq and microbiome packages [37,38]. Prior to the analysis, the ASV counts were normalized to relative abundances. The microbial species that were more enriched in one sample group (at a location or depth) versus another were identified using differential abundance analysis in DESeq2 [39] with P_wald < 0.001, fitType = "parametric", and sfType = "poscounts". The significance of the difference between the abundances of Euryarchaeota members across sampling parameters, specifically Methanocorpusculaceae, was assessed using the non-parametric Kruskal-Wallis [40] and pairwise Wilcoxon comparison tests with continuity correction [41] (P_Kruskal-Wallis = 0.05 and P_Wilcoxon = 0.05).

Linking the ASVs to the nitrogen transformation pathways

The workflow shown in Fig. 2 was used to assign nitrogen (N) transformation capabilities to the detected ASVs based on their lowest valid taxonomic annotations. This analysis employed two approaches (Fig. 2), one of which was based on a literature review (Additional file 2: Table S1) while the other utilized PICRUSt2 v.2.1.4, which linked the appropriate ASVs to the nitrogen-transformation genes by leveraging available genomic libraries [42]. For PICRUSt2, the "metagenome_contrib" option in "metagenome_pipeline.py" was used to list ASVs linked to nitrogen transformation capabilities broadly (missing hydroxylamine oxidase (EC 1.7.3.6), hydrazine synthase (EC 1.7.2.7), and hydrazine oxidase (EC 1.7.2.8)), while the "per_sequence_contrib" option in "pathway_pipeline.py" was used to focus on denitrification [42].
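As a minimal sketch of the R workflow described above (ordination, PERMANOVA/ANOSIM, and DESeq2 differential abundance); the object ps and the metadata columns Storage, Location, and Depth are hypothetical placeholders for a phyloseq object built from the ASV table, taxonomy, and sample metadata:

library(phyloseq)   # microbiome data container and ordination helpers
library(vegan)      # adonis2 (PERMANOVA) and anosim
library(DESeq2)     # differential abundance via Wald tests

# Normalize ASV counts to relative abundances
ps_rel <- transform_sample_counts(ps, function(x) x / sum(x))

# Bray-Curtis dissimilarities and nMDS ordination
bray <- phyloseq::distance(ps_rel, method = "bray")
ord  <- ordinate(ps_rel, method = "NMDS", distance = bray)
plot_ordination(ps_rel, ord, color = "Storage", shape = "Depth")

# PERMANOVA and ANOSIM on the same distance matrix (999 permutations)
meta <- as(sample_data(ps_rel), "data.frame")
adonis2(bray ~ Storage + Location + Depth, data = meta, permutations = 999)
anosim(bray, grouping = meta$Storage, permutations = 999)

# Differential abundance between storages, using the DESeq2 settings quoted above
dds <- phyloseq_to_deseq2(ps, ~ Storage)
dds <- DESeq(dds, fitType = "parametric", sfType = "poscounts")
res <- results(dds)
enriched <- subset(res, pvalue < 0.001)   # the P_wald threshold used in the text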
The information generated using these two approaches was combined, and a heatmap for the relative abundances of ASVs linked to the N-transformation capabilities was generated using pheatmap ver 1.0.12 [43] (Additional file 1: Figs. S2 and S3). Then the predicted capabilities of the ASVs showing significant relative abundances were used to build a scheme of the potential N-biotransformation pathways by the microbiome in manure storage. This analysis revealed 13 reactions for the EP and CS manure storage structures (Figs. 7A and B).

Manure characteristics

The analysis targeted five locations in EP and three locations in CS. The reason for this difference is that for EP the thickness of the surface crust varied from location to location, whereas it was uniform for CS. The pH, COD, and the nutrient profile of the manure samples are presented in Additional file 2: Table S2. The highest levels of TS, VS, COD, TKN, ORG-N, and NO3−-N were found in the near-surface samples collected at the inlets of both storages (EP1 and CS3; Additional file 2: Table S2). The same observation was also made for the TAN at EP1 but not CS3. In fact, all other samples in CS contained at least four times more TAN than the near-surface samples collected at the inlet (CS3-near-surface; Fig. 1F and Additional file 2: Table S2). Another unusual observation was that in the EP, the next highest levels of TS, VS, total Kjeldahl nitrogen (TKN), ORG-N, and TAN were found at EP4, which was located halfway between the inlet and the outlet (Fig. 1C); this location, however, did not have a high level of NO3−-N (Additional file 2: Table S2). Except for pH and NO3−-N content, the nutrient-rich features of EP1-near-surface and EP4-near-surface were also observed at the EP1-bottom and EP4-bottom locations. Some of the samples taken from the middle depth, especially those from the EP2, EP3, and EP5 locations, showed the lowest organic matter concentrations (Additional file 2: Table S2).

16S rRNA-V4 amplicon sequences of stored dairy manure samples

Sequencing of the 16S rRNA-V4 region of the DNA preparations generated 872,408 sequences with 3,719 ASVs. Clustering of the ASVs at the 99% similarity threshold produced 872,194 reads with 2,885 ASVs.

Species richness in stored dairy manure

Microbial diversities of the microbiomes of the manure stored in EP and CS, as measured in terms of species richness index, were identical (P_ANOVA > 0.05), although the individual compositions differed (Fig. 3A). Similar results were observed when comparing the microbiomes at various depths in each storage (Fig. 3B). However, this was not the case when comparing microbiomes between the locations within a storage. A significant heterogeneity was observed for microbiome composition between sampling locations in EP (EP1-5, Fig. 3C) (P_ANOVA < 0.05). Samples collected from the EP inlet (EP1) had the most diverse microbial population, followed by those collected from a near-outlet location (EP5) (Fig. 3C). The lowest microbiome diversity was observed in the manure samples taken from the proximity of the lining (EP3) (Fig. 3C). However, such was not the case with the CS, as the microbiome in this system appeared more uniform over all locations (Fig. 3C).

Comparing the manure microbiomes of two storage systems

In terms of composition, the manure microbiomes of EP and CS displayed a clear separation (Fig. 4). Such separations were also observed between storage depths, with near-surface samples showing the most obvious segregation while the rest were clustered together (Fig. 4). Within the same storage system, EP exhibited higher location-to-location variation in comparison to CS (Fig. 4); the latter showed a tight commonality across all sampling locations.
It seems that for EP, the sampling locations near the inlet (EP1) and that with a crust (EP4) were the main drivers of these variations (Fig. 4); as mentioned above, EP1-near-surface and EP4-near-surface samples had substantially higher values for the COD, TS, VS, TKN, ORG-N, TAN and NO3−-N than the other sites. A quantitative assessment of the sample parameters that influenced the composition of the stored manure microbiomes was conducted using permutational analyses (PERMANOVA and ANOSIM) based on Bray-Curtis distance matrices with 999 permutations and an α-level of 0.05 [36]. The results, presented as the respective P-values in Table 1, revealed that the storage type influenced the microbiome composition in stored dairy manure (P_PERMANOVA and P_ANOSIM < 0.05). Furthermore, within each storage system, both sampling location and depth contributed to the microbial population structure (Table 1), partially contradicting the results from the nMDS analysis, which did not identify the sampling location as a driver for sample separation in CS.

The ASVs that were more abundant in EP compared to CS were identified via DESeq2 analysis [39], where those having P_WALD less than 0.001 were classified as enriched. In total, there were 110 enriched ASVs in EP, and 81 in CS (Fig. 5 and Additional file 2: Table S3). Thirteen ASVs representing six Proteobacteria species (Ruminobacter, Rhodospirillales, Rhodobacteraceae, Syntrophus, Smithella, and Desulfovibrio) were more abundant in the EP microbiome, whereas only one Desulfovibrio ASV was enriched in CS (Fig. 5). A similar observation was made for methanogenic members of the Euryarchaeota phylum, as 5 ASVs annotated as Methanophilaceae, Methanomassiliicoccaceae, Methanocorpusculum, and Methanoculleus genera were found in high abundance in EP compared to CS (Fig. 5). A Methanosarcina ASV, however, was more enriched in CS, followed by other archaeal members from Nanoarchaeum (4 ASVs). While in the nMDS clustering the near-surface samples were separated (Fig. 4), the differential analysis using depth as a comparison parameter did not yield a similar observation for this set. In EP, only 2 ASVs, representing Syntrophomonas and Ruminococcaceae, were differentially abundant (P_WALD < 0.001) between the near-surface location and the middle. In contrast, the middle vs bottom comparison identified five differentially abundant ASVs annotated as Marinilabiliaceae, Hydrogenispora, Herbinix, and Cloacimonadales (Additional file 2: Table S4). A similar comparison for CS returned 9 and 5 ASVs, respectively (Additional file 2: Table S4). Within these, Mollicutes RF39, Cloacibacillus, Armatimonadetes, and Ruminococcaceae UCG-014 ASVs were found to be more enriched in the middle depth of CS whereas Ruminofilibacter, Fibrobacter, Treponema, Phycisphaerae mle1-8, Ruminiclostridium, Hydrogenospora, and Marinilabiliaceae ASVs were more abundant in the near-surface location. Between the middle and bottom depths of CS, no ASV was found to be significantly abundant, which was concordant with the nMDS analysis results that did not display a sample separation for these sets.

Microbial community variation by locations in stored dairy manure

Differential abundance analysis of microbial communities across sampling locations within each storage displayed contrasting results. For example, only slight variation was observed over locations in the CS, where heterogeneity was shown only by the enrichment of two ASVs annotated as Peptococcaceae and Methylophilaceae in CS1 (center) vs CS3 (inlet); both were more abundant in CS1. In contrast, the EP microbiome displayed more location-to-location variations in composition, as represented by 63 ASVs (Fig. 6 and Additional file 2: Table S5).

[Fig. 6 caption: Differentially abundant species of prokaryotic microorganisms at various sites in the earthen pit storage. In a DESeq2 analysis, sixty-three ASVs with a Wald statistical test value less than 0.001 were defined as significantly differentially abundant species. Prior to plotting on a heatmap, the data from these ASVs were normalized using a variance stabilizing transformation algorithm in DESeq2. The lowest taxonomic annotations of the ASVs are shown on the Y-axis.]

The sampling location closest to the inlet (EP1) exhibited the most discrete microbial community (Fig. 6), followed by EP4 and EP5, and of these only EP4 had a crusted surface. Of the ASVs detected in the heavily populated EP1 location, 28 had high-level similarities to the Succinivibrionaceae, Acinetobacter, Rikenellaceae, Odoribacter, Halomonas, Paludibacteraceae, Phascolarctobacterium, Flavonibacter, Desulfovibrio, and Planococcaceae. Most of these enriched ASVs were found to be located 0.15 m below the surface (Fig. 6).

Characterization of nitrogen-transforming microorganisms in manure storage

The screening strategy shown in Fig. 2 linked 740 and 430 ASVs (Additional file 2: Tables S6 and S7) to specific nitrogen transformation pathways operating in EP and CS, respectively (Fig. 7). At the next step, we defined their sites of occurrence in the storages and respective relative abundances (Additional file 1: Figs. S2 and S3). With these assignments in hand, the organisms represented by the ASVs with high abundances as well as presence in more than two samples were linked to specific nitrogen transformation processes, as shown in Figs. 7A and B. The possibility of the occurrence of each nitrogen transformation reaction or pathway at a particular site was also judged based on the respective chemical conditions, such as the availability of oxygen, which blocks or facilitates certain metabolic processes (Fig. 7).

There were clear possibilities for the microbial production of ammonia in both storages. Many of the organisms represented by the identified ASVs had the enzymatic potentials for degrading protein and nucleic acids, the major nitrogen-containing constituents of cells, and urea, and thereby producing free ammonia from manure under aerobic and anaerobic conditions; some of these organisms are shown for Reactions 2 and 3 in Fig. 7 and many are listed in Additional file 2: Tables S6 and S7. For example, Proteiniclasticum, Luteimonas, and Proteiniphilum are known to degrade and live on proteins using an inventory of proteases, peptidases and amino acid deaminases [44][45][46] (Additional file 2: Tables S7-8). Similarly, Pseudomonas, Hydrogenophaga, Flavobacterium, and those from the Rhodobacteraceae family could obtain ammonia nitrogen from urea [47][48][49][50] (Additional file 2: Tables S6 and S7). As the pH for both storages ranged from 6.92 to 7.85 (Additional file 2: Table S2) and the pKa of ammonia is 9.2, not more than 4% of this compound will occur in the deprotonated or NH3 form, which could be released to the atmosphere (Additional file 2: Table S2). The ASV data were not analyzed for organisms with nitrogen fixation potentials, as manure is rich in fixed nitrogen, making nitrogen fixation unlikely to occur in the storages.
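The 4% figure quoted above follows directly from the Henderson-Hasselbalch relation; the short calculation below is a worked check, not part of the original analysis:

\[
f_{\mathrm{NH_3}} = \frac{[\mathrm{NH_3}]}{[\mathrm{NH_3}] + [\mathrm{NH_4^+}]}
 = \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}},
\]
so at the highest observed pH of 7.85 with $\mathrm{p}K_a = 9.2$, $f_{\mathrm{NH_3}} = 1/(1 + 10^{1.35}) \approx 0.043$, i.e., about 4%, while at pH 6.92 only about 0.5% of the ammoniacal nitrogen is in the volatile NH3 form.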
We examined the possibilities of microbial conversion of ammonia to non-gaseous and gaseous products. We found that although oxygen could be present at the inlet or in the area immediately underneath the surface, the ASVs detected in both EP and CS did not show a significant representation of the archaea and bacteria that could perform aerobic and autotrophic nitrification. This process occurs either in two steps, nitritation (ammonia → nitrite) and nitratation (nitrite → nitrate), involving two different organisms, or via a one-step process with one organism, called comammox (ammonia → nitrate) [51][52][53][54][55][56][57][58]. Nitritation is also catalyzed by aerobic ammonia oxidizing archaea and bacteria (AOA and AOB) [51][52][53][54][55][56],[58]. None of the CS samples carried AOA or AOB ASVs. One of three EP2-near-surface samples harbored an AOA ASV, assigned to Candidatus Nitrosoarchaeum limnia (ammonia → nitrite) [59] (Fig. 7A), with a relative abundance of 0.03%. For AOB, only one ASV was found in EP. It was annotated as Nitrosomonas and associated with two out of 45 samples: one out of three EP3-near-surface samples and one out of three EP5-middle samples, with relative abundances of 0.09 and 0.04%, respectively. Consequently, these findings were either artifacts or indicative of an insignificant presence of AOA and AOB in EP. There was no indication of Nitrospira species that perform comammox in EP and CS [51,[55][56][57],60]. Under limited oxygen concentration, a nitritation function is provided by certain methanotrophs, as these bacteria oxidize ammonia to nitrite due to shared structural and functional similarities between ammonia monooxygenase (AMO) and methane monooxygenase (MMO) [61,62]. Indeed, ASVs representing the methanotrophic species of the Methylocaldum, Methylomonas, and Methylobacter genera [63] were found in both storages. In EP these ASVs were detected exclusively in the near-surface samples at the EP2, EP3, and EP5 locations, and in CS the respective locations were the near-surface at CS1 and CS3, and the bottom of CS3. Heterotrophic nitrification (HD), which combines heterotrophic energy production with ammonia oxidation to nitrite and nitrate (ammonia → nitrite → nitrate), could be coupled to aerobic denitrification (ADN: nitrate → nitrite → NO → N2O → N2) [51-56, 58, 64]. In EP, several ASVs representing the organisms that could catalyze this combined HD-ADN process were found primarily associated with the near-surface samples at multiple locations (shown for Reaction 7 in Fig. 7A) [65][66][67][68]. In CS, the distribution of such ASVs was mixed, with about half being associated with the near-surface locations (Reaction 7, Fig. 7B). Thus, in both EP and CS some of the ammonia could be lost, especially from the near-surface locations, through the HD-ADN process.
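For reference, the transformations named above can be summarized as overall reactions; the stoichiometries below are textbook forms added here for orientation, not taken from Fig. 7:

\begin{align*}
\text{Nitritation (AOA/AOB):} \quad & \mathrm{NH_3 + 1.5\,O_2 \rightarrow NO_2^- + H^+ + H_2O} \\
\text{Nitratation (NOB):} \quad & \mathrm{NO_2^- + 0.5\,O_2 \rightarrow NO_3^-} \\
\text{Comammox:} \quad & \mathrm{NH_3 + 2\,O_2 \rightarrow NO_3^- + H^+ + H_2O} \\
\text{Denitrification:} \quad & \mathrm{NO_3^- \rightarrow NO_2^- \rightarrow NO \rightarrow N_2O \rightarrow N_2} \\
\text{DNRA:} \quad & \mathrm{NO_2^- + 6\,[H] + 2\,H^+ \rightarrow NH_4^+ + 2\,H_2O} \\
\text{Anammox:} \quad & \mathrm{NH_4^+ + NO_2^- \rightarrow N_2 + 2\,H_2O}
\end{align*}

Denitrification and DNRA compete for the same nitrite pool, which is why a high-flux DNRA pathway can retain nitrogen as ammonium instead of losing it as gas.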
The anaerobic methane oxidizing archaea, which are close relatives of methanogens [77,78], were not found in the samples analyzed. The results of Kruskal-Wallis and Wilcoxon rank test revealed that the difference between the relative abundances of Euryarchaeota in the two storages was significant (Additional file 2: Table S9). However, this was not the case when the comparison was between the sampling sites and depths within the same storage (Additional file 2: Table S9). In EP, the location close to the lining of the storage (EP3) was found with the highest methanogen relative abundance. In contrast, in CS it was the center of the storage (CS1) that had this characteristic (Fig. 9). In a comparison across storage depths in EP, the inlet location (EP1) exhibited maximum variations. For this location, the highest methanogen prevalence was found at the bottom, and from there, it was progressively lower towards the middle and near-surface locations (Fig. 9). For other locations in EP and CS, little variation in methanogen prevalence was observed among the depths. Discussion We have characterized the microbiomes of dairy manure stored in an EP and a CS in two commercial dairy farms for their potential to transform nitrogen into soluble and volatile inorganic species. First, we identified the archaeal and bacterial ASVs that occurred in the stored manure by analyzing the determined sequences of the V4 regions of the 16S rRNA genes. Then we assigned the potentials for catalyzing various nitrogen transformation reactions to these prokaryotes. With that information, we developed models for the pathways that allow the stabilization and loss of nitrogen in EP and CS. We have also determined the diversity of the detected methane-forming archaea or methanogens and developed concepts for their relative impacts on methane production in these two manure storage systems. We elaborate below on these findings and their importance in analyzing the performances of manure storage systems in small commercial dairy farms. The nature of microbial nitrogen metabolism in the two storage types investigated appeared to be determined by the complex and anaerobic nature of the manure. The presence of organic nitrogen helped enrich an abundance of prokaryotic organisms with the capability of generating ammonia from the complex nitrogenous compounds and urea. Oxygen can penetrate maximum up to a depth of 7-10 cm beneath the surface of stored manure [79], creating a strictly anoxic environment in most areas of this system. Consequently, the autotrophic nitrifiers that require oxygen were almost absent in the stored manure, whereas anaerobic heterotrophic nitrifiers were present abundantly (Fig. 7). Such selections are also known to be favored by a high C/N ratio present in manure [80]. While autotrophic nitrifiers are extremely sensitive towards acidic pH [24,81], this factor was not responsible for their absence as the pH of manure in both storages was in the neutral to slightly alkaline range (6.92 -7.85) (Additional file 2: Table S2). For the above-mentioned environmental status, it was also unlikely that ammonia was lost from the stored manure via a combined action of Fig. 1 autotrophic or heterotrophic nitrifiers and aerobic denitrifiers (Reactions 4-7, Fig. 7). In contrast with the situation described above, anaerobic respiration driven denitrification (Reactions 8 and 10-12, Fig. 
In contrast with the situation described above, anaerobic respiration-driven denitrification (Reactions 8 and 10-12, Fig. 7) provided a route through which the microbiomes of both EP and CS could have emitted NO, N2O and N2 via the transformation of nitrate. The input manure contained nitrate at appreciable concentrations, up to 6-12 times that of the stored manure (Unpublished data, Jactone Arogo Ogejo, 2022). This was likely a product of aerobic microbial processes, such as Reaction 7 of Fig. 7A, that occurred on the cattle barn floor before the manure was scraped off to the storage. This high concentration is the likely reason for the observed high diversity and abundance of nitrate reducers at the inlet area of both storages (EP1 and CS3, Fig. 7). A diversion from the denitrification process, catalyzed by the bacteria that perform dissimilatory nitrite reduction to ammonium (DNRA) (Reaction 9, Fig. 7), presented a possible way of retaining some of the NO3−-N in the stored manure. The ASV data yielded a curious observation: an apparent absence of the anammox process in both systems. As nitrite is the limiting substrate for this reaction [82-84], the possible reasons for the absence of anammox are high-flux operation of respiratory denitrification (Reactions 10-12, Fig. 7) and DNRA (Reaction 9, Fig. 7), or an absent or poorly functioning anaerobic nitrate reduction process (Reaction 8, Fig. 7). It is also possible that the chosen 16S rRNA primer set was not able to capture the anammox community [85, 86]. In future studies, this problem could be mitigated by using the hydrazine oxidoreductase gene (hzo) as an additional marker, which has been shown to capture the presence and diversity of anammox bacteria more effectively [83]. In the context of the above-mentioned general possibilities, EP carried more nitrogen-transformation-associated ASVs (Additional file 2: Table S6). It also exhibited substantial site-to-site variations, which were limited in CS. At a given sampling location of either EP or CS, the composition of the manure microbiome at 0.15 m below the surface was distinct from those in the middle and bottom, and the latter two were similar (Figs. 4 and 7). This separation was likely due to oxygen exposure at the near-surface location and uniform anaerobic conditions further down. Except for this variation, the CS established a nearly common microbiome composition at all locations, whereas EP showed variations by location. We hypothesize that this distinction arose from differences in design that led to distinct chemical and structural characteristics of the storages, such as the solid and N content and surface crusting. The CS was made up of a cylindrical concrete tank with a concrete floor, with no contact with the adjoining environment except that the top was open to the air. In contrast, the EP was oval shaped with a clay lining, which could allow permeation of aqueous solutions with soluble organic and inorganic components from the adjoining soil into the storage. This storage could also receive soil particles, including the associated microbes. In CS, manure was added to the surface at a peripheral location (CS3, Fig. 1F), and in EP, the addition occurred at the bottom of a similar location (EP1, Fig. 1C). Thus, it is also possible that manure moved from the point of entry to the exit area via two distinct flow paths in these two storages. The movement was likely uniform in all directions in CS, whereas in EP there was an indication of a nonuniform movement of manure or locally distinct microbiome activities; a toy sketch of this route-level reasoning follows.
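The route-level reasoning above (which nitrogen transformations are open given the detected organisms) can be made concrete with a small reachability check over the reactions of Fig. 7. The taxon-to-reaction assignments below are purely illustrative placeholders, not the study's annotation table.

```python
# Toy model: which nitrogen species are reachable from nitrate, given the
# reactions that detected organisms could catalyze. Reaction labels loosely
# follow Fig. 7; taxon names and assignments are hypothetical placeholders.
reactions = {
    "R8_nitrate_reduction": ("NO3-", "NO2-"),
    "R10_12_denitrification": ("NO2-", "N2"),   # NO and N2O steps collapsed
    "R9_DNRA": ("NO2-", "NH4+"),                # retains N as ammonium
}
catalysts = {                                    # hypothetical detected ASVs
    "nitrate_reducer_ASV": ["R8_nitrate_reduction"],
    "denitrifier_ASV": ["R10_12_denitrification"],
    "DNRA_ASV": ["R9_DNRA"],
}

edges = {}
for rxn_list in catalysts.values():
    for rxn in rxn_list:
        src, dst = reactions[rxn]
        edges.setdefault(src, set()).add(dst)

def reachable(start: str) -> set:
    """All species reachable from `start` via the catalyzed reactions."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(reachable("NO3-"))   # {'NO2-', 'N2', 'NH4+'}: gaseous loss and DNRA retention
```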
In fact, this case of locally distinct activity was exemplified in the microbiome composition and chemical conditions at the EP4 location, as discussed below. In EP, the site near the inlet (EP1) provided the highest species richness (Fig. 3) and a distinct microbiome comprised of 563 ASVs. This status was likely due to the freshest input material, which floated to the surface and provided the EP1 near-surface location with the highest levels of TS, VS, TKN, ORG-N, NO3−-N and TAN compared to other locations, and perhaps a minor amount of oxygen that was introduced to this site by the addition system. While these observations at EP1 were reasonable, the situation at EP4, which was located midway between the inlet (EP1) and the outlet (EP5), showcased an unusual aspect of EP. Compared to EP1, the EP4 location harbored microbiomes of distinct compositions (Fig. 6). These unusual characteristics were consistent with the prevailing physicochemical conditions at the location. The TS, VS, TKN, ORG-N, and TAN levels at EP4 were higher than those at EP2, EP3 and EP5 and similar to the values seen at EP1 (Additional file 2: Table S2). The surface of EP4 also carried a crust, which likely developed from the drying of the foam generated by gas bubbles arising from the bottom and carrying undigested plant fibers of the manure. This was indicative of a more active gas-producing anaerobic degradation activity at this site. Some of the microbiome characteristics of EP4 were seen at EP5 (Fig. 6). Since manure storages have been reported to have the potential to lose up to 30% of their total nitrogen [10], and microbial metabolism did not seem to be a major driver for such a major loss, we hypothesize that a combination of physicochemical processes accounts for the majority of the loss. As mentioned above, at the prevailing pH of 6.92 to 7.85 of the manure, EP and CS will maintain less than 4% of the ammonia as volatile NH3 (a quick numerical check of this figure appears below). However, as the vapor is blown away by wind, the system would generate more NH3 to maintain the equilibrium, causing a substantial loss of ammonia from the storage. This hypothesis is consistent with the observations that the loss of total nitrogen from an EP could rise fourfold if the wind speed increases from 0 to 5 m per hour and that the presence of up to 30 cm of crust reduces ammonia emission or nitrogen loss by twofold [87]. All of the Euryarchaeota ASVs detected in EP and CS corresponded to methanogens (Fig. 8). These communities were dominated by Methanocorpusculum species (Figs. 8 and 9), an observation that has been previously reported for manure storages [4, 5, 20]. Since these methanogens are hydrogenotrophs [88], the methane emission from a manure storage would be tightly linked to hydrogen production by fermentative bacteria [89]. Significant abundances of Methanomethylophilaceae and Methanomassiliicoccaceae ASVs were observed in EP, while CS had a higher prevalence of Methanosarcina (Fig. 8). This is a major contrast in terms of methanogenesis from methyl-group-containing substrates, as Methanomethylophilaceae and Methanomassiliicoccaceae are obligately dependent on hydrogen for the reduction of methyl groups to methane and do not use other methanogenic substrates, whereas Methanosarcinaceae make methane from methylotrophic substrates with and without hydrogen and can use several other substrates for methane production [90, 91]. It is possible that, with its higher abundance and diversity of methanogens (Fig. 9 and Additional file 2: Table S7), EP was a higher methane emitter than CS.
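Returning to the ammonia point above: the "less than 4% as volatile NH3" figure can be checked from the NH3/NH4+ acid-base equilibrium. The sketch below assumes the textbook ammonium pKa of about 9.25 (roughly 25 °C) and ignores temperature and ionic-strength corrections.

```python
# Fraction of total ammoniacal nitrogen present as free NH3 at a given pH,
# from the Henderson-Hasselbalch relation; pKa ~ 9.25 assumes ~25 C and
# neglects ionic-strength and temperature effects.
def nh3_fraction(ph: float, pka: float = 9.25) -> float:
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

for ph in (6.92, 7.85):
    print(f"pH {ph}: {100 * nh3_fraction(ph):.2f}% free NH3")
# pH 6.92: ~0.47%; pH 7.85: ~3.83% -- consistent with the <4% statement
```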
On the methane side, however, it should be noted that 16S rRNA copy numbers and the abundance of a particular type of methanogen are not always true indicators of a higher methane production activity of a methanogenic system [19].

Conclusions

The study assessed the composition of the nitrogen-transforming and methanogenic prokaryotic communities in two types of dairy manure storage (EP and CS), and in the process it tested the hypothesis that this feature is influenced by the storage type. It was found that, in general, EP and CS provided similar metabolic outcomes, and EP was distinguished by its site-to-site variations. In both cases, while the microbes detected therein will generate ammonia from proteins, nucleic acids and other complex organic compounds and urea, they will not oxidize this product to soluble or gaseous nitrogenous compounds. There was a possibility that the nitrate generated through chemical or microbial oxidation occurring in the manure on the barn floor would be converted to NO, N2O and N2, which are gases, through a denitrification process. A likely route for converting nitrate and preserving it as ammonia was also detected. Thus, the microbial processes were not the likely drivers for the reported loss of nitrogen from the storages, and a shift in the equilibrium towards the volatilization of ammonia, due to removal of this compound by wind, was the likely cause. The crust that forms on manure could counter this effect. The earthen pit storage (EP) established a more complex ecosystem with greater location-to-location compositional heterogeneity than CS, and this distinction was likely due to a nonuniform movement of manure and interactions with the adjoining soil areas in the EP, which CS did not offer. The production of methane in both storages was likely driven primarily by species that could utilize the hydrogen generated from the fermentative degradation of the complex carbon compounds of manure. In EP, even the methane production from methyl-group-containing compounds was performed by methanogens that are dependent on hydrogen. The microbiomes of both storages had the potential to generate greenhouse gases such as methane, NO, and N2O. With a higher abundance of methanogens, EP could be a higher producer of methane, and here a location near the lining had more potential for this activity. A rapid removal of manure from the barn floor, thereby lowering the production of nitrate, could reduce NO and N2O emissions from these storages, and methane production could be reduced with a better isolation of the earthen pit storage from the adjoining soil. Our results clearly revealed the complex nature of commercial manure storage systems in terms of their microbiomes. As mentioned in the introduction, there is a lack of detailed studies on the relationships between microbiome metabolism and the retention of nitrogen fertilizer and greenhouse gas emission in the manure storage systems of small dairy farms. This is a serious concern, as the designs of such storages are not fully similar to those studied in research laboratories, and therefore the results from the latter may not predict the outcomes for the former well. A need for a better understanding of the nitrogen transformation processes occurring in the manure on the barn floor was also identified. These gaps prevent the development of meaningful whole-farm nutrient accounting models.
Thus, the current study, which relies on 16S rRNA amplicons, provides motivation for more detailed investigations with more incisive approaches such as metagenomics, including the generation of metagenome-assembled genomes (MAGs), metatranscriptomics, metaproteomics, metabolomics, and metabolic modeling, leading to predictive models for the storage outcomes and better designs for the manure storages.

Additional file 1. Custom-built manure sample collection system, site-specific ASV heatmaps, and supplementary DNA extraction method.
2023-04-11T13:57:11.176Z
2023-04-11T00:00:00.000
{ "year": 2023, "sha1": "81aee0c681b4607188b652be36b2c8eb718969bc", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "81aee0c681b4607188b652be36b2c8eb718969bc", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
148217613
pes2o/s2orc
v3-fos-license
Communitarian education and mathematics learning: A way to value diversity

In our society there is high diversity, so we need educational methodologies that promote equal opportunities for personal success within difference. It is necessary to explore the role of non-formal educational practices in multicultural contexts and to implement a model of communitarian education that allows the practices of other cultures to become valued by and visible to the broader society. Nowadays there is no doubt about the importance of developing number sense in early mathematics learning; however, the entrance to school arithmetic is, in most cases, through the teaching of the four rules using the traditional algorithms. Here we show how to use open calculations based on numbers (ABN) as an inclusive methodological alternative, based on the meaningful learning of the decimal system, the operations and their properties. We think the method fits very well with people from other ethnic groups, such as the Romanian community.

Introduction and contextualization

Education, especially at its compulsory level, is understood as a prerequisite for inclusion, but in some cases it can be a pathway to exclusion. In this paper we present the beginning of a research project that intends to take culture into account in the teaching-learning process, focusing on a specific topic of mathematics: school arithmetic. The research is situated in a primary school and its surrounding area. The community is formed by an agglomeration of 3000 people distributed among 520 houses, situated on the outskirts of Córdoba city, in Spain. Unemployment is one of the biggest problems in the community. Recent statistics show that 60% of the people in the community are unemployed, causing high rates of poverty, marginalization and a soaring risk of school truancy. More than half of the population of the community is illiterate. In 2011-2012 the school began its transformation to become a Learning Community, which resulted in a significant reduction of truancy. In this year there are 120 students enrolled in the school, most of them Romanian children. In a general way, a learning community is a group of people who learn together, but we mean more than this: it is a specific model that involves agreeing on a common vision, basic values and objectives of school development; it increases the commitment of pupils, teachers, parents and other stakeholders and supports school quality and development [3]. As it became a Learning Community, the school opened its doors to the local community. This transformation has favored important changes in the families, who have been voluntarily participating in the communitarian educational activities. At the same time, new methodological strategies like dialogic learning and communitarian mediation have been adopted.

Framework

In our classrooms there is high diversity that requires solutions to help us assist our students; we need methodologies that promote equal opportunities for personal success within difference. Interculturality represents a manner of being, a way of living in and making commitments with a city [10].
Mathematical knowledge can be divided into two categories: informal and formal. Informal mathematics is the knowledge and skills that children have acquired in their social environment, outside the school context. On the other hand, formal mathematics refers to knowledge and skills acquired at school. Mathematical knowledge is the result of both formal and informal experiences [5], so we need to build bridges between the learning experiences of children outside and inside school. There are several inquiries about calculation in Romanian communities [2, 9 and 10] that agree that Romanian people have developed great mental arithmetic skills although, in most cases, they are not able to read. Children work with their parents in the market and are able to do a lot of operations, but they do not achieve success in school mathematics. From this point of view, a change is necessary in the field of school mathematics; the failure of mathematics alphabetization in the early years of learning can produce indelible marks on the person. So, we emphasize how relevant it is to start using integrated methodologies which do not label a student as "able" or "not able" at mathematics. Nowadays there is no doubt about the importance of developing number sense in early mathematics learning [1 and 5]; however, the entrance to school arithmetic is, in most cases, through the teaching of the four rules using the traditional algorithms, in the same way as for ages. Maier asserts that students need to know these procedures not because of their mathematical import but because they help students be successful in school [6]. In this study we present a methodology that uses open calculations based on numbers (ABN) as an inclusive alternative, based on the meaningful learning of the decimal system and on the comprehensive mastery of the operations and their properties. These algorithms were created by Jaime Martinez Montero [7]. They are called open calculations based on numbers:

• Open, because students can solve operations in different ways, according to their skills.

• Based on numbers and not on digits, which lets students be aware of the size of the numbers.

We have found that this method fits very well with people from other ethnic groups, such as the Romanian community (a worked sketch of an ABN calculation appears at the end of this section).

Objective

The main general objective of the research is to explore the role of non-formal educational practices in multicultural contexts and to implement a model of communitarian education that allows the practices of other cultures to become valued by and visible to the broader society.

Methodology

The inquiry is a model of participatory action research based on critical communicative research methodology, studying the processes of social and cultural transformation that will take place at various stages [4]. The choice of a critical perspective is because it fosters transformation, in addition to understanding and interpreting reality. The communicative perspective, meanwhile, is based on interactions and communication to promote such a transformation; reflection and self-reflection from the voices and interpretations of the people to whom the research is directed are also included. Thus, we seek not only the development of scientific knowledge but also the transformation of a reality marked by social exclusion and inequality.
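As the worked sketch referred to above: one possible "open" route for an addition moves friendly whole-number chunks between the two addends rather than operating on digit columns. ABN deliberately admits many such routes; this is only one of them, coded for illustration.

```python
# One possible ABN-style route for an addition: transfer whole-number chunks
# (hundreds, then tens, then units) from one addend to the other, so the
# student always works with complete numbers rather than digit columns.
def abn_addition_trace(a: int, b: int):
    steps = [(a, b)]
    for place in (100, 10, 1):
        move = (b // place) * place     # e.g. move 200, then 80, then 5
        if move:
            a, b = a + move, b - move
            steps.append((a, b))
    return steps

for a, b in abn_addition_trace(347, 285):
    print(f"{a} + {b}")                 # 347+285 -> 547+85 -> 627+5 -> 632+0
```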
Results

We are working in the following directions:

• There is a space for cultural issues: initiatives arising in the Learning Communities project are generating dynamics of participation that influence the redefinition of the school as a privileged space for cultural diversity, giving voice, for example, to the cultural ability in mental calculation of the Romanian people.

• The multiplicity of interactions generated in the school space is changing the beliefs of teachers, families and volunteers involved in the school transformation.

An added value of the experience is that the context around the school could reflect the transformation process of the school itself, but there is still a lot of work to do in this regard.
2019-05-09T13:07:01.438Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "b1fea4378610ba247a08f482b28f20e7213e7d7e", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2016/04/shsconf_erpa2016_01132.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2b7654a0f64a3571477236254b316f834229b2a5", "s2fieldsofstudy": [ "Education", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Political Science" ] }
73813591
pes2o/s2orc
v3-fos-license
Evaluating the Role of First Polar Body Morphology on Rates of Fertilization and Embryo Development in ICSI Cycles.

BACKGROUND: Recent studies have demonstrated that the morphology of the first polar body (1st PB) is related to oocyte viability, which can be used as a prognostic tool to predict oocyte performance and pregnancy outcomes in an intracytoplasmic sperm injection (ICSI) program. According to some studies, there is a correlation between oocyte performance and 1st PB morphology, while others have not reported any correlation. The objective of this study is to evaluate the role of 1st PB morphology on rates of fertilization and embryo development in ICSI cases.

MATERIALS AND METHODS: In this prospective study the morphological characteristics of 470 metaphase II (MII) oocytes were assessed in 80 ICSI cycles. The women were ages 21-42 years (mean 32.6 ± 0.2). Their oocytes were retrieved after a hyperstimulation protocol. After denudation, all oocytes were evaluated for 1st PB morphology. The oocytes were divided into two groups: A (normal 1st PB) and B (abnormal 1st PB).

RESULTS: Twenty-seven percent of oocytes had a fragmented 1st PB, while the remainder were associated with other morphological abnormalities. A total of 46.1% and 26.9% of oocytes showed double and multiple defects, respectively. RF was the most common abnormality observed in group B. No significant differences in women's ages between groups A and B were noted (p=0.3). A total of 179 and 107 oocytes (61.5% vs. 59.8%) were fertilized in groups A and B, respectively (p=0.7). The rates of good embryo formation for the A and B groups were 66.5% and 55.6% (p=0.07), and cleavage rates were 77.7% and 68.5%, respectively (p=0.09).

CONCLUSION: The data demonstrated that 1st PB morphology does not appear to be a prognostic factor for rates of fertilization and embryo development in ICSI cycles.

Introduction

One of the most important factors that determine success in assisted reproductive technology (ART) is the oocyte. It is clear that the quality of oocytes can affect fertilization and embryo development (1). With the introduction of intracytoplasmic sperm injection (ICSI), many couples with male factor infertility have had the opportunity to overcome their infertility. One of the capabilities of ICSI is the evaluation of oocyte morphology and maturity after denudation of cumulus cells for microinjection. The majority of all metaphase II (MII) oocytes (60%-70%) have at least one morphological abnormality (2). Many studies have reported the effect of morphological characteristics of oocytes on fertilization rate and embryo development. The outcome of ART is dependent on both patient parameters and embryo variables (3). Evaluation of first polar body (1st PB) morphology is useful for distinguishing the post-ovulatory age of the oocyte (4). The correlation between oocyte morphology and ICSI outcome is still a matter of controversy (5-9). Ebner et al. (10) have reported that the 1st PB shape can affect fertilization rate and embryo quality in ICSI cycles. Recent studies have also demonstrated the relationship of 1st PB morphology to mature oocyte viability, which may be used as a prognostic factor to predict oocyte performance and pregnancy achievement after ICSI treatment (11, 12). Some studies have shown a correlation between oocyte performance and 1st PB morphology during ICSI treatment cycles (9, 10, 13-15). However, others did not show any correlation between the 1st PB and ICSI outcomes (16-19).
Additionally, the correlation between blastocyst formation, implantation rate and 1st PB morphology was reported by Ebner and colleagues in 2002 (14). Germinal vesicle breakdown (GVBD) and the simultaneous extrusion of the 1st PB show completion of the first meiotic division in human oocytes. As a result, the 1st PB is a marker which indicates that the oocyte is ready to undergo the fertilization process. This event is synchronized with nuclear and cytoplasmic maturation (20). The main goal of this prospective study is to evaluate the correlation of 1st PB morphological characteristics with rates of fertilization and embryo development in ICSI cycles.

Patient selection

In this prospective study, we evaluated the morphological characteristics of 470 MII oocytes from 80 ICSI cycles. Maternal age was between 21-42 years. All patients underwent ICSI treatment at Yazd Research and Clinical Center for Infertility between April 2010 and August 2010. This study was approved by our Center's Ethics Committee. Patients signed informed consents.

Controlled ovarian hyperstimulation

In most patients, controlled ovarian hyperstimulation was undertaken with GnRH agonist downregulation, followed by rec FSH. An antagonist protocol was also used. Next, 10,000 IU of human chorionic gonadotrophin (hCG, i.m., DRG Co., Germany) was administered. The ovarian response was monitored by transvaginal ultrasound and serum estradiol concentration. Oocyte retrieval was done approximately 36 hours after hCG injection under transvaginal ultrasound guidance.

Semen analysis and sperm preparation

Semen analysis was done according to the WHO laboratory manual (21). Sperm specimens were obtained by ejaculation or by testicular biopsy in azoospermic patients. We used a Makler chamber and light microscopy at ×200 magnification to determine sperm counts and motility. Progressive and non-progressive spermatozoa were reported as percentages. Sperm morphology was evaluated using Giemsa staining. All sperm preparations were performed using the swim-up or density gradient techniques (22). For swim-up, 1 ml of semen was mixed with 3 ml of Ham's F10 medium (Seromed Co., Germany) supplemented with 10% human serum albumin (HSA). After gentle mixing, the sample was centrifuged twice (2000 rpm for 10 minutes, followed by 5 minutes). Following the removal of the supernatant, 0.2-1.0 ml of the culture medium was added, dependent upon the size of the pellet and the quality of the original sample. The suspension was then incubated at 37°C in 5% CO2 until use.

ICSI procedure

After oocyte aspiration, the oocytes were incubated for about 4 hours; denudation from cumulus cells was then performed with 80 IU hyaluronidase/ml (Sigma Chemical Co., USA) along with the mechanical aid of appropriate Pasteur pipettes. Each of the MII oocytes was washed in culture media, and their morphological characteristics were evaluated before microinjection. For sperm injection, motile spermatozoa were aspirated with a Pasteur pipette and then transferred to a 10% PVP droplet. The best morphologically well-shaped spermatozoa were selected for the microinjection procedure. Each spermatozoon was immobilized by touching its tail near the mid-piece with an injecting pipette, and then aspirated from the tail. The injected oocytes were washed twice, then individually placed in fresh droplets of G1 covered with mineral oil.

Oocyte evaluation

The morphological characteristics of the MII oocytes were evaluated by inverted microscope just prior to microinjection.
The characteristics employed for the assessment of oocyte morphology were: a. normal oocytes, with clear cytoplasm and homogeneous fine granularity; b. granular oocytes, dark, with granularity either homogeneous in the whole cytoplasm or concentrated in the central portion of the oocyte; c. cytoplasmic inclusions, comprising vacuoles presumed to be of endocytotic origin; d. anomalies of the zona pellucida (ZP); e. fragmented polar body; f. non-spherically shaped oocyte; g. wide perivitelline space (wPVS); h. refractile bodies (RF); i. bull's eye; j. debris in the PVS; and k. smooth endoplasmic reticulum cluster (SERc) (8).

First polar body evaluation

After denudation, we evaluated all oocytes for 1st PB morphology. The oocytes, according to their polar bodies, were divided into two groups: A (normal intact 1st PB) and B (abnormal fragmented 1st PB) (Fig. 1). Other abnormalities, such as RF, wPVS, central and general granulation, bull's eye, vacuoles, SERc, debris in the PVS, as well as oocyte shape and color, were noted.

Fertilization evaluation

The injected oocytes were incubated, and fertilization was evaluated 18-19 hours after injection by visualizing the oocytes under a microscope and determining the presence of 2PN.

Embryo evaluation and transfer

About 48 hours post-injection, we evaluated the embryos according to the procedure of Hill et al. (23). Briefly, grading was as follows: grade A, equal-sized blastomeres without fragmentation; grade B, slightly unequal blastomeres, up to 10% cytoplasmic fragments; grade C, unequal-sized blastomeres, up to 50% fragments and large granules; and grade D, unequal blastomeres with significant fragmentation and large black granules. We considered grades A and B to be good quality embryos, whereas grades C and D were poor quality embryos.

Inclusion and exclusion criteria

All retrieved oocytes were included in the study. No oocytes were cryopreserved or discarded. Egg donation, natural cycles and oocytes degenerated after microinjection due to mechanical error were excluded from the study.

Statistical analysis

Data are presented as mean ± SE. For statistical analysis, chi-square and Fisher's exact tests were chosen. Data are presented as odds ratio (OR), 95% confidence interval (95% CI) and p value. The ORs refer to fertilization rate and good quality or early cleaved embryos. Independent sample t-tests were used wherever appropriate. P<0.05 was considered significant. Statistical analysis was done with the Statistical Program for Social Science (SPSS 16.0, Chicago, IL) software.

Results

A total of 286 oocytes were normally fertilized, of which 179 were from normal oocytes and 107 from oocytes with fragmented 1st PBs. Additionally, 287 embryos were formed, of which 179 were good embryos (119 from normal oocytes and 60 from oocytes with fragmented 1st PB). Of the total of 287 embryos, 213 had early cleavage, resulting from 139 normal oocytes and 74 with fragmented 1st PB (Table 1). The data showed that 27% of the oocytes had a fragmented 1st PB with no other morphological abnormalities, while the remainder were associated with other abnormalities. Among the oocytes, 46.1% had double defects and 26.9% had multiple defects. In group B, RF (19%) and granulation (9%) were the observed double defects; wPVS with RF (10%) was the most common combination observed for multiple defects. Overall, the frequency of other abnormalities combined with fragmentation of the 1st PB, for both double- and multiple-defect oocytes, was less than 10%.
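As a check on the headline group A versus group B fertilization comparison, the sketch below applies the Fisher's exact test named under Statistical analysis to a 2×2 table inferred from the reported figures (179/291 ≈ 61.5%, 107/179 ≈ 59.8%, and 291 + 179 = 470 oocytes); the table is a reconstruction from those numbers, not the authors' raw data.

```python
# Fisher's exact test on the inferred fertilization 2x2 table (group A vs B).
from scipy.stats import fisher_exact

table = [[179, 291 - 179],   # group A: fertilized, not fertilized
         [107, 179 - 107]]   # group B: fertilized, not fertilized
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2f}")  # non-significant, as reported (p=0.7)
```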
The least frequent anomaly combined with 1st PB fragmentation was darkness of the oocyte cytoplasm (0.6%). For oocytes with multiple defects, the least common combination was bull's eye with debris in the PVS (0.6%).

Polar body morphology and ICSI outcome

There was no significant relationship between groups A and B in terms of oocyte fertilization rate; 1st PB morphology did not predict fertilization rate. The rate of good embryo formation for group A was 66.5%, whereas it was 55.6% for group B. The cleavage rate for group A was 77.7% and 68.5% for group B. The embryo development rates were not statistically significantly different between the two groups (Table 2). Women in group A had a mean age of 32.3 ± 0.3 years, whereas those in group B were 32.8 ± 0.4 years, which was not statistically significant. However, we found a significant difference between maternal age in groups A and B (p=0.016) with regard to the formation of good embryos (Table 3).

Discussion

Because of different nuclear maturity, the oocytes retrieved after ovarian hyperstimulation show different grades of 1st PB morphology. The presence of the 1st PB and its observation by an embryologist before ICSI is very important because extrusion of the 1st PB reflects MII oocyte maturity. One of the attractive issues in ART is finding criteria to predict which oocytes will fertilize and which oocyte characteristics may affect embryo development during the ART procedure. Our results demonstrate no correlation between 1st PB morphology and fertilization rate, embryo quality or even cleavage rate in women undergoing ICSI treatments. There was no significant difference between maternal age in groups A and B. Also, patients' ages were not related to fertilization and cleavage rates, similar to the findings of Ciotti et al. (16), but were related to embryo quality. In the literature, the prognostic value of 1st PB morphology for fertilization rate, embryo quality and cleavage rate is controversial. Our findings are similar to previous studies reported by reproductive scientists (11, 14-17, 19, 24, 25), but conflict with reports from others (9, 10, 13). One reason for this contradiction may be related to methodological variation in oocyte evaluation. Xia (9) and Mikkelsen and colleagues (24) have reported that 1st PB morphology combined with perivitelline space and cytoplasmic inclusions can be used as prognostic factors for fertilization rate and cleaved embryo quality (9, 24); however, we only evaluated 1st PB morphology. Another reason for this discrepancy may be related to the use of a different 1st PB grading system. Some studies divide 1st PB morphology simply into two groups, normal and fragmented, as we did. However, others may grade the 1st PB according to criteria such as surface, size and maturity (9, 10, 13). Considering the positive relationship between 1st PB morphology and time elapsed in culture, PB morphology may alter after a few hours, and it can change according to the timing of the observation. Ciotti et al. have noted that 1st PB fragmentation is related to the time elapsed between retrieval, denudation and ICSI performance (16). They checked a subgroup of oocytes twice (at the moment of denudation and at injection, respectively) for 1st PB fragmentation and observed different degrees of fragmentation at the first and second assessments (11.1% and 22.8%, respectively). However, as they increased the time elapsed before denudation to >3.5 hours, the fragmentation rate of the oocytes was 26.7%.
Another discrepancy may be related to the timing of the 1st PB evaluation relative to performing the ICSI. In different studies, distinct ICSI procedures were applied, but the timing of cumulus cell denudation, and ultimately the time at which the 1st PB was evaluated, may not be the same. It is well known that, because of controlled ovarian hyperstimulation, not all retrieved oocytes have the same quality, and different ovarian hyperstimulation protocols may be applied in studies. Therefore, oocyte quality may be influenced by the aforementioned protocols. Moreover, Verlinsky and his colleagues in 2003 showed that PB morphology is not related to the genotype analyzed for aneuploidy in patients who underwent preimplantation genetic diagnosis (PGD). They noticed no correlation between polar body shape and the genetic constitution of the oocyte (19). In addition, they detected changes in 1st PB morphology grading, in terms of fragmentation, in over one-third of the oocytes studied. Hence, 1st PB morphology assessment may not serve as a reliable marker for the assessment of oocyte quality and competence.

Conclusion

The data demonstrated that 1st PB morphology does not appear to be a prognostic factor for oocyte competence in the process of fertilization and early embryo development in ICSI cycles.
2016-05-04T20:20:58.661Z
2011-07-01T00:00:00.000
{ "year": 2011, "sha1": "1fe640603f0b398a7e23125a27f4d2f5936caf82", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "9fcb3ab35590ceb1d9d608007af267f84b2d3790", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119151850
pes2o/s2orc
v3-fos-license
p-adic L-functions on metaplectic groups

With respect to the analytic-algebraic dichotomy, the theory of Siegel modular forms of half-integral weight is lopsided: the analytic theory is strong, whereas the algebraic lags behind. In this paper, we capitalise on this to establish the fundamental object needed for the analytic side of the Iwasawa main conjecture, namely the p-adic L-function obtained by interpolating the complex L-function at special values. This is achieved through the Rankin-Selberg method and the explicit Fourier expansion of non-holomorphic Siegel Eisenstein series. The construction of the p-stabilisation in this setting is also of independent interest.

Introduction

Traditionally, p-adic L-functions have dual constructions, analytic and algebraic, and it is the substance of the Iwasawa main conjecture that these two are equivalent. This conjecture can be formulated for various settings; for example, over GL 1 , the conjecture asserts that the analytic construction (Kubota-Leopoldt's p-adic interpolation of the Dirichlet L-function) is equivalent to Iwasawa's algebraic p-adic L-function. The Iwasawa main conjecture for classical modular forms of integral weight is formulated over GL 2 , and this has been a recent, active research area with its connections to the Birch and Swinnerton-Dyer conjecture; see [8, 9]. Provided one has both the analytic and algebraic machinery, the Iwasawa main conjecture can be formulated for higher-dimensional modular forms and groups, for example [17]. The algebraic theory of half-integral weight modular forms, both classical and metaplectic, has long been inchoate due to the difficulties present in developing the 'Galois side'. Recent work by Weissmann in [18] has made progress in this regard by developing L-groups for metaplectic covers, the length and methods of which further underline the difficulties present here. The analytic theory is substantial, however, and in this paper we give the analytic construction of the p-adic L-function for Siegel modular forms of half-integral weight and any degree n. In [5], we gave a similar construction when n = 1; in that case, the p-adic L-function was already known to exist by the Shimura correspondence, but this is not so for general n > 1. The proof found here is adapted from the method of Panchishkin found in [7, Chapters 2 and 3], which proves the existence of the analytic p-adic L-function for Siegel modular forms of integral weight and even degree n. This method makes critical use of the Rankin-Selberg method and reduces the question of p-adic boundedness of the L-function down to that of the Fourier coefficients of the Eisenstein series that are involved in the Rankin-Selberg integral expression. For full generality, it is assumed that p does not divide the level of the modular form f, and a crucial step is to produce another form f 0 whose level p does divide. Significant modifications to the method of [7] were required to make this work in the metaplectic case; this is Section 4. Outside of this, the success of Panchishkin's method is facilitated by the work of Shimura in developing the Rankin-Selberg integral expression in this setting [15], and the arithmeticity of Eisenstein series [16]. Interestingly, the p-adic boundedness of the Eisenstein series coefficients is almost immediate in this case, making the final step of the proof simpler than that found in [7]. After preliminary Sections 2 and 3, we establish the p-stabilisation in Section 4.
Fairly elementary manipulations on the level of the Rankin-Selberg integral follow in Section 5. Sections 6 and 7 are devoted to transformation formulae of theta series and Fourier expansions of Eisenstein series -these are relatively well known. Finally the statement and proof of the main theorem, and the subsequent existence of the p-adic L-function, are given in Section 8. Siegel modular forms This section runs through the very basics of the modular forms that we study and their Fourier expansions are detailed. For any ring R and any matrix a ∈ M n (R), note the use of the following notation: a > 0 (a 0) to mean that a is positive definite (respectively, positive semi-definite), |a| := det(a), a := | det(a)|, andã := (a T ) −1 . For any collection a 1 , . . . , a r of matrices with entries in R, let diag[a 1 , . . . , a r ] be the matrix whose jth diagonal block is a j and is zero off the diagonal. Let A Q and I Q denote the adele ring and idele group, respectively, of Q. The Archimedean place is denoted by ∞ and the non-Archimedean places by f . If G is an algebraic group, let G A denote its adelisation. Let G ∞ := G(R), G p := G(Q p ), and denote by G f the subgroup of elements of G A whose Archimedean place is the identity of G ∞ . View G as a subgroup of G A by embedding diagonally at every place, but view G ∞ and G p as subgroups by embedding place-wise. Recall the adelic norm where x ∈ A Q , | · | denotes the usual absolute value on R, and | · | p denotes the p-adic absolute value, normalised in the sense that |p| p = p −1 . Let T denote the unit circle and define three T-valued characters on C, Q p , and A Q , respectively, by where {x} denotes the fractional part of x ∈ Q p ; if x ∈ A Q and z ∈ C, then write e ∞ (x) = e(x ∞ ) and e ∞ (z) = e(z). For any fractional ideal r of Q, let r p denote the completion (with respect to the p-adic norm) of the localisation of r at the prime p, which is an ideal of Z p . Understand 0 N (r) ∈ Q to be the unique positive generator of r. Write any α ∈ GL 2n (Q) as Define an algebraic group G, subgroup P G, and the Siegel upper half-space H n by A half-integral weight is an element k ∈ Q such that k − 1 2 ∈ Z; an integral weight is an element ∈ Z. The factor of automorphy of half-integral weight involves taking a square root; to guarantee consistency of the choice of root, one uses the double metaplectic cover Mp n of Sp n . The localisations M p := Mp n (Q p ) and the adelisation M A of Mp n (Q) can be described as groups of unitary transformations, respectively, on L 2 (Q n p ) and L 2 (A Q n ) with the exact sequences There are natural projections pr A : M A → G A and pr p : M p → G p , either of which will usually be denoted pr as the context is clear. On the flip side, there are natural lifts r : G → M A and r P : P A → M A through which we view G and P A as subgroups of M A . For any two fractional ideals x, y of Q such that xy ⊆ Z, congruence subgroups are defined by the following respective subgroups of G p , G A , and G: Typically these will take the form Γ[b −1 , bc] for certain fractional ideals b and integral ideals c. One of the key differences in the theory of half-integral weight modular forms is in the congruence subgroups one considers. The factor of automorphy involved can only be defined for a certain subgroup M M A , and any congruence subgroups Γ must therefore be contained in M. 
This subgroup, M, is defined via the theta series. Typically we shall take b and c such that the conditions below hold. The spaces defined above interact with each other as follows. The action of Sp n (R) on H n and the traditional factor of automorphy are given by γ · z := (a γ z + b γ )(c γ z + d γ )^(-1) and j(γ, z) := det(c γ z + d γ ), where γ ∈ Sp n (R) and z ∈ H n . If α ∈ G A , then we extend the above by α · z = α ∞ · z and j(α, z) = j(α ∞ , z). For any σ ∈ M, we can define a holomorphic function h σ = h(σ, ·) : H n → C satisfying three key properties; the proofs for these can be found in [11, pp. 294-295]. If k is a half-integral weight, then put [k] := k − 1/2 ∈ Z; if ℓ is an integral weight, then put [ℓ] := ℓ. The factors of automorphy of half-integral weights k and integral weights ℓ are given by J^k(σ, z) := h σ (z) j(pr(σ), z)^[k] and J^ℓ(α, z) := j(α, z)^ℓ, where σ ∈ M, α ∈ G A , and z ∈ H n . Given a function f : H n → C and an element ξ ∈ G A or M, the slash operator of an integral or half-integral weight κ ∈ (1/2)Z is defined by (f || κ ξ)(z) := J^κ(ξ, z)^(-1) f(ξ · z).

Definition 1. Let κ ∈ (1/2)Z be an integral or half-integral weight, and let Γ ⊆ G be a congruence subgroup, with the assumption that Γ ⊆ M if κ ∉ Z. Denote by C ∞ κ (Γ) the complex vector space of C ∞ functions f : H n → C such that f || κ α = f for any α ∈ Γ. Let M κ (Γ) ⊆ C ∞ κ (Γ) denote the subspace of holomorphic functions (with the additional cusp holomorphy condition if n = 1). Elements of M κ (Γ) are called modular forms of weight κ and level Γ; if κ ∉ Z they are also known as metaplectic modular forms.

Elements of C ∞ κ (Γ) and M κ (Γ) have Fourier expansions summing over positive semi-definite symmetric matrices, the precise forms of which are given later in this section. The subspace S κ (Γ) ⊆ M κ (Γ) is characterised by all forms f such that the Fourier expansion of f || κ σ sums over positive definite symmetric matrices, for any σ ∈ G A if κ ∈ Z, or for any σ ∈ M if κ ∉ Z. Put X κ := ∪ Γ X κ (Γ), where X ∈ {M, S} and the union is taken over all congruence subgroups of G (that are contained in M if κ ∉ Z). Take a fractional ideal b and an integral ideal c and put Γ = Γ[b^(-1), bc]; when κ ∉ Z, always make the crucial assumptions that b^(-1) ⊆ 2Z (2.4) and bc ⊆ 2Z (2.5); then we have Γ ⊆ M in this case. By a Hecke character of Q, we mean a continuous homomorphism ψ : I Q /Q × → T. Denote the restrictions to R × , Q × p , and Q × f by ψ ∞ , ψ p , and ψ f , respectively. We have that ψ ∞ (x) = sgn(x ∞ )^t |x ∞ |^(iν) for some t ∈ Z and ν ∈ R, and we say that ψ is normalised if ν = 0. For any integral ideal a, let ψ a = ∏_{p|a} ψ p . Now take a normalised Hecke character ψ of Q satisfying the conditions (2.6) and (2.7); modular forms of character ψ, forming the space M κ (Γ, ψ), are then defined in the usual way. To give the precise Fourier expansions of these forms, define the spaces of symmetric matrices S, S(r) (those with entries in a fractional ideal r of Q), and S + (the positive semi-definite rational elements). Take a congruence subgroup Γ, a modular form f ∈ M κ (Γ, ψ), and matrices q ∈ GL n (A Q ), s ∈ S A . The Fourier expansion of f A is given in terms of coefficients c f (τ, q) = c(τ, q; f) ∈ C satisfying the properties (2.8)-(2.10). The proof of this expansion and of these properties can be found in [14, Proposition 1.1]. The coefficients c f (τ, 1) are the traditional Fourier coefficients of f in the following sense: by property (2.8), the modular form f ∈ M κ (Γ, ψ) has a Fourier expansion with coefficients c f (τ, 1); if F ∈ C ∞ κ (Γ, ψ), then it has a Fourier expansion of the form F(z) = Σ_τ c F (τ, y) e ∞ (tr(τx)), where z = x + iy and the coefficients c F (τ, y) are smooth functions of y having values in C.
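Before the final definitions of this section, for orientation: the familiar n = 1 instance of the half-integral factor of automorphy h σ used above is Shimura's classical theta multiplier. The display below records that standard special case only as an illustration; the exact normalisation in the present paper may differ.

```latex
% Classical n = 1 illustration: for sigma = (a b; c d) in Gamma_0(4),
\[
  \theta(\sigma z) = h_\sigma(z)\,\theta(z), \qquad
  h_\sigma(z) = \varepsilon_d^{-1}\Bigl(\frac{c}{d}\Bigr)(cz+d)^{1/2},
\]
\[
  \theta(z) = \sum_{m \in \mathbb{Z}} e(m^2 z), \qquad
  \varepsilon_d =
  \begin{cases}
    1, & d \equiv 1 \pmod 4,\\
    i, & d \equiv 3 \pmod 4,
  \end{cases}
\]
% with (c/d) the extended quadratic residue symbol.
```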
We finish this section with some final key definitions. Consider b fixed in the definition of Γ = Γ[b^(-1), bc], so that this group depends only on c, and let ψ be a normalised Hecke character satisfying (2.6) and (2.7). For any two f, g ∈ C ∞ κ (Γ, ψ), the Petersson inner product ⟨f, g⟩ is defined by the usual integral over Γ\H n ; this integral is convergent whenever one of f, g belongs to S κ (Γ, ψ).

Complex L-function

The standard complex L-function associated to eigenforms is defined in this section, and the known Rankin-Selberg integral expression is stated. As in the previous section, take ideals b and c satisfying (2.4) and (2.5), and set Γ := Γ[b^(-1), bc]. For any Hecke character χ of Q, let χ*(p) = χ*(pZ) denote the associated ideal character. Although the integral expression can be stated for any half-integral weight k, we take k ≥ n + 1 to ease up on notation; we shall be making this assumption later on anyway. For a prime p, the association of the Satake p-parameters, an n-tuple (λ p,1 , . . . , λ p,n ) ∈ C^n, to a non-zero Hecke eigenform f ∈ S k (Γ, ψ) is well known (see, for example, [14, p. 46]). Now fix a Hecke character χ of conductor f. The standard L-function of f, twisted by χ, is then defined by an Euler product built from the Satake p-parameters (a schematic shape is recalled at the end of this section). The Rankin-Selberg integral expression, (4.1) in [15, p. 342], is given there in generality; we restate it now for our purposes. Fix τ ∈ N(b)S + such that c f (τ, 1) ≠ 0, and let ρ τ be the quadratic character associated to the quadratic extension of Q determined by 2τ. The key ingredients of the integral are three modular forms: the eigenform f, a theta series θ χ , and a normalised non-holomorphic Eisenstein series E(z, s). To define the theta series, take any 0 < τ ∈ S and define an integral ideal t by the relation h^T (2τ)^(-1) h ∈ 4t^(-1) for all h ∈ Z^n. Take μ ∈ {0, 1} and a Hecke character χ such that χ ∞ (x)^n = sgn(x ∞ )^(nμ). The theta series is then the sum (3.1), where we understand (χ ∞ χ*)(0) = 1 if f = Z and as zero otherwise. This has weight n/2 + μ, level Γ[2, 2tf²] determined by [15, Proposition 2.1], character ρ τ χ^(-1), and coefficients in Q(χ). The Eisenstein series of weight κ ∈ (1/2)Z is now defined in a little more generality. Let Γ' = Γ[x^(-1), xy] be a congruence subgroup, contained in M if κ ∉ Z, and let ϕ be a Hecke character satisfying (2.6) with y in place of c, and also such that ϕ ∞ (x) = sgn(x ∞ )^[κ] (note that this is a more stringent condition than the usual (2.7)). The Eisenstein series E(z, s; κ, ϕ, Γ') is defined by summing (Δ(z)^s) || κ α, weighted by ϕ, over α ∈ (P ∩ Γ')\Γ', where recall Δ(z) = det(Im(z)), and we have z ∈ H n , s ∈ C. This sum is convergent for Re(s) > (n+1)/2 and can be continued meromorphically to all of s ∈ C by a functional equation with respect to s → (n+1)/2 − s. This series belongs to C ∞ κ (Γ', ϕ^(-1)) and is normalised by a product of Dirichlet L-functions as follows. Let a be any integral ideal and define the L-function product Λ^(n,κ)_a (s, ϕ) accordingly. The normalised Eisenstein series is given by E*(z, s; κ, ϕ, Γ') := Λ^(n,κ)_y (s, ϕ̄) E(z, s; κ, ϕ, Γ'). Set η := ψχρ τ . In this setting, the integral expression of [15, (4.1)] gives the identity (3.2), expressing L ψ (s, f, χ) in terms of the Petersson inner product of f against the product of the theta series and the normalised Eisenstein series.

p-stabilisation

Fixing a prime p, the initial key ingredient in our construction of the p-adic L-function is the replacement of an eigenform f with its so-called p-stabilisation f 0 . The form f 0 is also an eigenform away from p, whose eigenvalues there coincide with those of f; however, it has the key property that p divides the level of f 0 and that f 0 is an eigenform for the operator U p , the Atkin-Lehner operator that shifts Fourier coefficients. Thus, the L-functions of f and f 0 are easily relatable, and so for full generality we can begin with an eigenform f, assume that p does not divide the level c, and then pass to f 0 . In [5], we constructed f 0 explicitly in the case n = 1, which was possible through explicit formulae for the action of the Hecke operators involved on the Fourier coefficients.
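For reference, the schematic Euler-product shape referred to above, ignoring the bad Euler factors and the exact character normalisation, is the following degree-2n product over the Satake p-parameters; this display is an orientation aid, not the paper's precise definition.

```latex
% Schematic only: degree-2n Euler product from the Satake p-parameters.
\[
  L(s, f, \chi) \;=\; \prod_{p \nmid \mathfrak{c}} \prod_{i=1}^{n}
  \Bigl[\bigl(1 - \lambda_{p,i}\,\chi^{*}(p)\,p^{-s}\bigr)
        \bigl(1 - \lambda_{p,i}^{-1}\,\chi^{*}(p)\,p^{-s}\bigr)\Bigr]^{-1}.
\]
```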
For general n (in contrast with the explicit n = 1 construction just mentioned), we modify the method of [7], which involves abstract Hecke rings, the Satake isomorphism, and certain Hecke polynomials; at the end of this section, however, we show how all this abstract Hecke yoga reduces to the explicit form found in [5] when n = 1. Let k be a half-integral weight, let (b^(-1), bc) ⊆ 2Z × 2Z be ideals, and let ψ be a Hecke character satisfying (2.6) and (2.7). If (Δ, Ξ) is a Hecke pair, in the sense of [1, pp. 77-78], then the abstract Hecke ring R(Δ, Ξ) denotes the ring of formal finite sums Σ_ξ c ξ ΔξΔ, where c ξ ∈ C and ξ ∈ Ξ. Each double coset has a finite decomposition into single right cosets, and the law of multiplication is given in [1, pp. 78-79]. Consider the Hecke ring R(V, W) defined in [14, p. 39] and let R denote the factor ring of R(V, W) defined in [14, p. 41], or analogously to (4.1) below; this is the adelic Hecke ring which acts on forms in M k (Γ, ψ), and it is factored in order to give the Satake isomorphism. We need a slightly different Hecke ring, and we define this more explicitly. Let D 0 := D ∩ P A and Γ 0 := Γ ∩ P, and define the pair (V 0 , W 0 ) analogously. Now define the Hecke ring S := R(V 0 , W 0 ), which differs from R(V, W) of [14] in allowing denominators of p into the matrices r defining Y 0 (contrast with the definition of Z 0 ), and is therefore analogous to the Hecke ring of [7]. This ring also has a well-defined action on f ∈ M k (Γ, ψ) and f A . The action of a double coset, for example, is given by first decomposing it into single cosets with representatives α ∈ G ∩ D diag[q̃, q]D, and then summing the actions of α on f by the slash operator involving an extended factor of automorphy J k (α, z); see [14, Sections 2, 3, and 4] for the details here. The local rings R p and S 0p are the spaces generated by the double cosets A q and A r , respectively, where now q ∈ X p and r ∈ GL n (Q p ∩ Q). Assume p ∤ c. Let W n be the Weyl group generated by the transformations of [14, pp. 41-42]; the Satake isomorphism is obtained through the composition of two maps (see [14, Lemma 2.4] for the precise definition and characterisation of J(α)). By [14, Lemma 4.3], the map Φ p is injective. We now describe the map ω 0p . Note that any coset O p d with d ∈ GL n (Q p ) contains an upper triangular matrix with diagonal entries p^(a d1), . . . , p^(a dn) for a di ∈ Z; using these exponents, define ω 0p on single cosets. Through the decomposition O p xO p = ⊔_d O p d and C-linearity, we extend this to obtain ω 0p . By multiplying out elements of diag[r̃, r](D 0 ) p , for r ∈ GL n (Q p ∩ Q), we see that (D 0 ) p diag[r̃, r](D 0 ) p also has a single coset decomposition of the form (4.2). Thus, we can analogously define Φ p : S 0p → R(O p , GL n (Q p )) and, by composition, ω p . The map Φ p , and therefore ω p , is no longer necessarily injective. There is a local embedding ε 0p : R p → S 0p , and we have ω p = ω 0p ∘ ε 0p . There exists U p ∈ S 0p , called the Frobenius element, defined by the double coset of diag[I n , pI n ]. If n = 1, it is well known that U p corresponds to the pth Hecke operator when p | c; for general n > 1, this is no longer true. Note that ω p (U p ) = p^(n(n+1)/2) x 1 · · · x n . Let C := {A ∈ S 0p | U p A = AU p } denote the centraliser of U p in S 0p . The map Φ p is injective when restricted to C, by the following argument.

Proposition 4.1. Any A ∈ C is a linear combination of double cosets A r with r ∈ M n (Z p ) ∩ GL n (Q p ).

Proof. This is essentially the second statement of [1, Proposition 2.1.1] with δ = 0 (in the notation of Andrianov); to prove it, one uses an auxiliary element U − p of S 0p constructed from Γ 0 .

Proof. By the previous proposition, if A r ∈ C, then r ∈ M n (Z p ) ∩ GL n (Q p ). We therefore have a decomposition of ω p (A r ) into monomials x 1 ^(δ 1) · · · x n ^(δ n). The Hecke polynomial R(z) = R n (z) has an immediate decomposition of the form (4.3), and the sum appearing there is identified by Lemma 4.3. Proof.
Denote the sum on the left-hand side by Y; this belongs to S 0p . It is easy to check that R n (z) = (p^(n(n+1)/2) z)^(2^n) R n ((p^(n(n+1)) z)^(-1)), so, immediately from (4.3), we have the functional equation (4.4). For any 0 ≤ m ≤ 2^n, define the elements V m,p as in (4.6).

Proposition 4.4. The Hecke polynomial R(z) can be factorised as in (4.5).

Proof. By definition V 0 = 1 and, by Lemma 4.3, V_(2^n − 1) U p = −T_(2^n). For the rest, 1 ≤ m ≤ 2^n − 2, we have the analogous identities. Expanding the right-hand side of (4.5) therefore gives the factorisation (4.3), which concludes the proof.

Definition 2. Let f ∈ M k (Γ, ψ) be a non-zero Hecke eigenform with Satake p-parameters (λ p,1 , . . . , λ p,n ), assuming p ∤ c. Using the quantities set in (4.6), the p-stabilisation f 0 of f is defined by the resulting combination of the operators V m,p applied to f.

The action of α ∈ C on f is considered the scalar one, that is, f|α = αf. The second property is then given by a short calculation. For q ≠ p, the qth Hecke operator commutes with V m,p ; therefore, f 0 and f share the same eigenvalues away from p, and we then have the following corollary (Corollary 4.6), relating L ψ (s, f, χ) and L ψ (s, f 0 , χ). In [5] we showed, if n = 1, that the p-stabilisation of f takes the form (4.7), where, for any Dirichlet character ϕ of conductor F, f ϕ denotes the twist of f by ϕ. This twist lands in M k (Γ', ψ) for a suitable congruence subgroup Γ', and this matches the first part of Proposition 4.5. By definition, we have V 1,p = U p − T 1 = U p − T p in this case, so the abstract definition of f 0 in Definition 2, when we set n = 1, becomes an explicit expression in which λ p denotes the eigenvalue of f under T p . By [5, Lemma 3.1(c)], this is precisely the form of (4.7) above.

Non-vanishing of f 0 . It is not clear from the above method that f 0 ≠ 0 if f ≠ 0. That f 0 may vanish is entirely possible, as is remarked in [7, p. 50]. Suppose that Λ : R → C is a homomorphism defining the eigenvalues of f, that is, for all 1 ≤ m ≤ 2^n, we have f|T m = Λ(T m )f. By the definition in (4.6) and of V m,p , we get the formula (4.8). Assume that f ≠ 0, so that we can take τ ∈ S + such that c f (τ, 1) ≠ 0. The formula (4.8) may then be used as a method of checking, computationally, whether one has c f0 (τ, 1) ≠ 0 as well. Given the formula in (4.8) above, it seems unlikely that c f0 (τ, 1) should vanish for all τ outside of a few special cases. As an example, consider the n = 1 case and assume that c f (τ, 1) ≠ 0 for some 0 < τ ∈ Z such that p² ∤ τ. By (4.7), the coefficient c f0 (τ, 1) = 0 only if pλ p,1 satisfies a particular relation. This becomes a less trivial situation if c f (τ, 1) ≠ 0 only for p² | τ. As things become significantly more complex for general n, we acknowledge that this does not constitute a particularly strong argument, but it is hopefully enough to convince the reader that there should exist eigenforms f ≠ 0 for which f 0 ≠ 0 as well. In [2, Section 9], Böcherer and Schmidt give an alternative construction for the p-stabilisation of a Siegel modular form of integral weight, which does guarantee that f 0 ≠ 0. Although this is perhaps stronger than our construction, one still needs to assume that such a non-zero f 0 exists, and this is incorporated into Böcherer-Schmidt's definition of p-regular [2, p. 1431]. Their construction takes two Andrianov-type identities of Dirichlet series for f and f 0 and uses them to compare their Satake parameters directly. It has a fairly simple generalisation to the present setting by using the identity of [14, Corollary 5.2]. Indeed, this identity becomes almost exactly the same as that of [2, Proposition 9.1] by putting [|x|Z] = Y^(ord p (|x|)) and [v] = Y in the notation found in [14], as well as in the definition of D(τ, p; f) in [14, Theorem 5.1]. (A numeric sketch of the n = 1 mechanism follows.)
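To make the n = 1 discussion concrete, here is a numeric sketch of the classical integral-weight analogue of p-stabilisation acting on q-expansion coefficients. All data are hypothetical, chosen so the quadratic Hecke polynomial has rational roots; the computation only illustrates the U p -eigenform bookkeeping that Definition 2 generalises, not the half-integral construction itself.

```python
from fractions import Fraction

# Hypothetical prime p, weight kk and T_p-eigenvalue lam, chosen so that
# X^2 - lam*X + p^(kk-1) = (X - alpha)(X - beta) has rational roots.
p, kk, lam = 5, 2, Fraction(6)
alpha, beta = Fraction(5), Fraction(1)
assert alpha + beta == lam and alpha * beta == p ** (kk - 1)

# Coefficients a(m) of a mock eigenform: arbitrary away from p, extended by
# the classical Hecke recursion a(p*q) = lam*a(q) - p^(kk-1)*a(q/p).
N = p ** 4
a = {}
for m in range(1, N + 1):
    if m % p:
        a[m] = Fraction(m % 7 + 1)
    else:
        q = m // p
        a[m] = lam * a[q] - p ** (kk - 1) * (a[q // p] if q % p == 0 else Fraction(0))

# p-stabilisation on q-expansions: b(m) = a(m) - beta*a(m/p), i.e. f0 = f - beta*(f|V_p).
b = {m: a[m] - (beta * a[m // p] if m % p == 0 else Fraction(0)) for m in a}

# f0 is a U_p-eigenform with eigenvalue alpha: b(p*m) = alpha*b(m) for every m.
assert all(b[p * m] == alpha * b[m] for m in range(1, N // p + 1))
print("U_p f0 = alpha * f0 verified on", N // p, "coefficients")
```

In the text's setting, the operator V 1,p = U p − T p plays the role of the bracketed combination here, and (4.7) involves twists by Dirichlet characters rather than this simple shift; the sketch mirrors only the eigenvalue bookkeeping.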
Returning to the comparison of Satake parameters: all that remains is to manipulate the lattice sum, the far right-hand component of [14, Corollary 5.2], and express it as a sum of the U(π i ) Hecke operators (defined as the double cosets Γ 0 diag[π̃ i , π i ]Γ 0 with π i = diag[pI i , I n−i ]). This was done for Hermitian modular forms in [3, Section 7], but the computation remains the same in our case.

Tracing the Rankin-Selberg integral

Given the relationship, established in Corollary 4.6, between L(s, f, χ) and L(s, f 0 , χ), the focus can be shifted to the latter. The level, y, of the Rankin-Selberg integral (3.2) will depend on χ, a dependence we naturally seek to avoid. This is achieved in this section by making crucial use of the behaviour of f 0 under U p . Fix 0 < τ ∈ S + such that c f0 (τ, 1) ≠ 0. Recall that t is an integral ideal such that h^T (2τ)^(-1) h ∈ 4t^(-1), and define τ̃ := N(t)(2τ)^(-1) ∈ M n (Z) (5.1). This section involves many levels and liftings of modular forms through these levels, so we first define and clarify these schematically. Fix b and note by (2.8) that b^(-1) | t, so we can think of f 0 as a form of level Γ[b^(-2), b² tc 0 ], and put y χ accordingly. The ideal y χ can be taken as the level of the integral in the Rankin-Selberg expression of L ψ (s, f 0 , χ) only under a certain condition on the conductor of χ; to avoid this condition, we generally choose higher levels. The levels involved are Γ α := Γ[b^(-2), b² y α ], where the integral ideals y α are indexed by α ∈ {r, ℓ, 0}, with ℓ ∈ Z attached to χ as below. They are defined below, arranged in order of divisibility. Later on, when we invoke the Kummer congruences, we shall take a set of Dirichlet characters of varying moduli p^ℓ, and we shall be considering a sum of Rankin-Selberg integral expressions of varying levels y ℓ . Then we shall take a single r, so that all the characters in the set are defined modulo p^r, and therefore we can simply lift all the Rankin-Selberg integrals of varying levels to be of the same level y r ; finally, we trace the Rankin-Selberg integral back down to y 0 , which process is given in the rest of this section. This is so that we can treat all characters uniformly. In specific cases, that is, when we consider a single primitive Dirichlet character satisfying the aforementioned conductor condition, one need not lift up to level r in the first place; such a case is given as an example at the end of this section but will not be of much use later. Assuming that χ is a Dirichlet character of modulus p^ℓ with ℓ ≥ 1, the Rankin-Selberg expression from [15, (4.1)] of L ψ (s, f 0 , χ̄) is given at level y r , in which the lifting operator V r appears. The definition of the trace map on modular forms is well known; with b fixed, the map Tr^(c 2)_(c 1), for any c 2 ⊆ c 1 , takes modular forms in M k (Γ 2 , ψ) down to forms in M k (Γ 1 , ψ), where Γ i = Γ[b^(-2), b² c i ], and is defined by decomposing Γ 1 = ⊔ γ Γ 2 γ and summing over all the slash operator actions by these coset representatives. If g ∈ M_(n/2+μ)(Γ[b^(-2), b² y r ], χρ τ ), then put F g (z, s) := g(z) E(z, (2s−n)/4; k − n/2 − μ, η̄, Γ r ), and we have the expression of Tr^(y r)_(y 0)(F g ) as a sum over u ∈ S(Z/p^(2r)Z). Define, for any M ∈ Z, the matrix ι M , which belongs to P ι and is therefore in M. Associate to ι M the operator W(M), acting on any modular form h of weight κ ∈ (1/2)Z by h|W(M) := h || κ ι M .

Proposition 5.1. Let χ be of modulus p^ℓ, and let g and F g be as above. If r ≥ ℓ is an integer, then the trace relation below holds.
That the matrices corresponding to the operators match is given by the simple matrix multiplication for u ∈ S(b −2 /p 2r b −2 ) and in which we used Y r = Y 0 p r . For the claim to hold, however, we need to check that the half-integral weight factors of automorphy match up as well, for which the requisite identity is Making use of Y r = Y 0 p r and combining all of the above, observe that both sides (5.3) coincide with |Y 0 i(z − u)| 1 2 . Thus. the claim, and therefore the proposition, holds. A transformation formula of the theta series Transformation formulae for theta series of the form θ χ |W (Y χ ) when χ is a primitive Dirichlet character are generally well-known entities. The precise formula of this section is encompassed by the generality of both Theorem A3.3 and Proposition A3.17 of [16]; what follows is a concrete derivation and calculation of the integrals found in the aforementioned results. Theorem A3.3 of [16] gives the existence of a C-linear automorphism λ → σ λ of M on the space of 'Schwartz functions on M n (Q f )', and it gives formulae of this action by P A and the inversion ι = ( 0 −In In 0 ). This is relevant since a more general class of theta series is defined using Schwartz functions λ by gives the series θ(z, λ) = θ (μ) χ (z; τ ) of (3.1). Assume that χ is a Hecke character of conductor p χ and let ι χ = ι Yχ . Since ι χ ∈ C θ , [16,Proposition A3.17] says that and so we calculate ι −1 and so ι −1 χ λ = ι ( σ λ). Let d = n 2 2 if n is even, d = 0 if n is odd, and let d p y be the Haar measure on M n (Q p ) such that the measure of bM n (Z p ) is |b| n 2 /2 p for any b ∈ Q. Theorem A3.3 (5), and equation (A3.3) of [16], and the definition of λ in (6.1) above gives making the change of variables y → Y χ y in the last line. By the definition ofτ in (5.1) and The integral in the above equation is non-zero if and only if the integrand is a constant function in y -that is, if and only if x ∈ |N (bc) Hence, by the calculation in (6.4), the transformation formula (6.2) on theta series with Schwartz functions translates, when χ is a primitive Dirichlet character, to Fourier expansions of Eisenstein series The holomorphic projection map Pr : C ∞ κ (Γ) → M κ (Γ, ψ) and its explicit action on Fourier coefficients is well known when 2n < κ ∈ Z -see [7,Theorem 4.2,p. 71]. This has a simple extension to the half-integral weight case with the formulae remaining unchanged, and we did this in [6, Theorem 3.1]. Given Proposition 5.1 and the transformation formula (6.5), it will be germane to give the explicit Fourier development of , for certain values m ∈ 1 2 Z defined below. To ease up on notation, let δ := n (mod 2) ∈ {0, 1}. The projection map is only applicable for certain values s at which the Eisenstein series satisfies growth conditions; restriction to the set of special values, Ω n,k , at which the standard L-function satisfies algebraicity results guarantees this and this set is given by Proposition 7.1. For any ς ∈ S + , define Assume that k > 2n, χ is a Dirichlet character, and m ∈ Ω n,k . For any β ∈ Z, there exists a polynomial P (σ, σ ; β) ∈ Q[ς ij , ς ij | 1 i j n], defined on σ, σ ∈ S + ; a finite subset c of primes; polynomials f σ,q ∈ Z[t], defined for each σ ∈ S + and q ∈ c, whose coefficients are independent of χ; and a factor where m + = m − n − 1 2 and m − = 0, such that if m ∈ Ω n,k \{n + 1 2 } (and m = n + 3 2 if n > 1 and (ψ * χ) 2 When k ∈ Z and n is even the above kind of result is well-known, see, for example, [7,Theorem 4.6,p. 77]. 
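For comparison, the degree-one, integral-weight prototype of the projection is the classical formula of Sturm, sketched below under the usual moderate-growth hypothesis; the half-integral extension mentioned above leaves this shape unchanged.

```latex
% Prototype (n = 1, integral weight \kappa > 2): if F is a C^\infty
% modular form of weight \kappa with moderate growth and expansion
%   F(x + iy) = \sum_m a_F(m, y)\, e^{2\pi i m x},
% then \operatorname{Pr}(F) is holomorphic of weight \kappa and, for m > 0,
\[
  c_{\operatorname{Pr}(F)}(m)
    \;=\; \frac{(4\pi m)^{\kappa-1}}{\Gamma(\kappa-1)}
          \int_0^\infty a_F(m, y)\, e^{-4\pi m y}\, y^{\kappa-2}\, dy .
\]
```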
Since the definition of the projection map remains unchanged, we can obtain the above in a similar manner, by using results on the Fourier development of integral and half-integral weight Eisenstein series as follows. p-adic measures and the main theorem Alhough complex L-functions are defined on variables s ∈ C, they can equally be viewed as Mellin transforms of the continuous characters R >0 → C × ; y → y s . In this latter vantage point, p-adic L-functions can naturally be constructed as Mellin transforms of continuous characters on Z × p with respect to a p-adic measure. Fix a prime p c, let C p := Q p denote the completion of the algebraic closure of Q p , and fix an embedding ι p : Q → C p . The p-adic norm naturally extends to C p and its ring of integers is given by The domain of the p-adic L-function will be The discussion in [7, pp. 23-25] concerning the decomposition of X p tells us that any C panalytic function F on X p is uniquely determined by its values F (χ 0 χ) for a fixed χ 0 ∈ X p and χ ranging over non-trivial elements of X tors Since Z × p = lim ← − (Z/p i Z) × is a profinite group, taken with respect to the natural projections π ij : (Z/p i Z) × → (Z/p j Z) × for each i j, to any distribution there associates a system of functions ν i : (Z/p i Z) × → A satisfying This association works by noting that each φ ∈ LC(Z × p , C p ) factors through some (Z/p i Z) × and by The compatibility criterion of [7, p. 17] tells us when we can run the above process backwards. So distributions are generally quite easy to define; p-adic measures arise from p-adic distributions that are p-adically bounded. Hence, defining a distribution interpolating L-values is relatively trivial and showing that these expressions are bounded is the crux of the matter. To do this, we will invoke the abstract Kummer congruences, which criterion is well known in generality and is due to Katz in [4, p. 258]; we give a specialisation of it. The proof of this can be found in [7, pp. 19-20]; it covers C p -valued measures as well by multiplication of some non-zero constant. An easy example of these criteria is the Fourier
2019-02-24T13:28:52.000Z
2019-01-14T00:00:00.000
{ "year": 2019, "sha1": "240ab8300775b755b6511b488541b4f818f1f257", "oa_license": "CCBY", "oa_url": "https://londmathsoc.onlinelibrary.wiley.com/doi/pdfdirect/10.1112/jlms.12318", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "2d023d5f3c75be1c50b95ec0062975b09b96872b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
35153021
pes2o/s2orc
v3-fos-license
Mechanism of Relaxant Action of Ethyl 6-amino-5-cyano-2-methyl-4-( pyridin-4-yl )-4 H-pyran-3-carboxylate Mainly Through Calcium Channel Blockade in Isolated Rat Trachea 1 Facultad de Farmacia, Universidad Autónoma del Estado de Morelos, Cuernavaca, Morelos, Mexico, 2 Departamento de Química, División de Ciencias Naturales y Exactas, Universidad de Guanajuato, Guanajuato, Mexico, 3 Unidad de Biomedicina, Facultad de Estudios Superiores-Iztacala, Universidad Nacional Autónoma de México, Tlalnepantla, Mexico, 4 Departamento de Hiperreactividad Bronquial, Instituto Nacional de Enfermedades Respiratorias Ismael Cosío Villegas, Mexico City, Mexico. INTRODUCTION The increase in intracellular calcium concentration ([Ca 2+ ] i ) involves voltage-gated, receptor-operated, store operated, and nonspecific Ca 2+ -influx, as well as sarcoplasmic reticulum release through channels activated by the phospholipase C (PLC), inositol trisphosphate (IP 3 ) and CD38/ciclyc ADP ribose (cADPR) phathways (Prakash, 2013;Perez-Zoghbi et al., 2009;Sanderson et al., 2008).Then, mechanisms such as the sarcoplasmic reticulum Ca 2+ -ATPase (SERCA), the bidirectional Na/Ca 2+ exchanger (NCX), and mitochondrial buffering help limit [Ca 2+ ] i restore levels after removing the agonist (Mahn et al., . 2010;Perez-Zoghbi et al., 2009).Beyond [Ca 2+ ] i , the Ca 2+calmodulin-myosin light chain (MLC) kinase-MLC cascade regulates contractility mediated by actin-miosin interactions (Jude et al., 2008;Berridge 2008).Thus, the regulation of [Ca2+]i in smooth muscle of the airways is a target of interest for research and develop of potential antiasthmatic drugs.Despite the currently available array of antiasmathic therapies, the search for novel chemical entities with new mode of actions, represents an important field of investigation for the development of safely and effectively drugs for the treatment of asthma.In this context, previous research work allowed us to determine the relaxant effect of Ethyl-6-amino-5-cyano-2-methyl-4-(pyridin-4-yl)-4H-pyran-3carboxylate (1, Fig. 1) on rat tracheal smooth muscle, and results indicate that it was 1.5 fold more active than theophylline, used as positive control (Alemán-Pantitlán et al., 2016).Furthermore, current work was designed in order to determine the underlying functional mode of action of 1 on tracheal rat ring and, by using Docking studies, to explain its interactions with L-type calcium channel. Animals Healthy male Wistar rats (250-300 g) were used and maintained under standard laboratory conditions, with free access to food and water.All animal procedures were conducted in accordance with our Federal Regulations for Animal Experimentation and Care (SAGARPA, NOM-062-ZOO-1999, Mexico), and approved by the Institutional Animal Care and Use Committee based on US National Institute of Health publication (No. 85-23, revised 1985).All experiments were carried out using six animals per group. 
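Throughout the experiments below, potency is summarised by EC50 values obtained from sigmoidal fits to cumulative concentration-response data; comparisons such as the 1.5-fold difference against theophylline rest on these fits. The following is a minimal sketch of such a fit with the three-parameter Hill model; the data points are simulated for illustration, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, emax, ec50, n):
    """Sigmoidal concentration-response (Hill) model, % relaxation."""
    return emax * conc**n / (ec50**n + conc**n)

# Simulated cumulative concentration-response data (micromolar;
# illustrative values only, not measurements from this study).
conc = np.array([1.06, 3.5, 11.7, 35.0, 96.3, 350.0])
resp = np.array([5.0, 14.0, 30.0, 52.0, 71.0, 96.0])

# Fit maximal effect, half-maximal concentration and Hill slope.
(emax, ec50, n), _ = curve_fit(hill, conc, resp, p0=[100.0, 50.0, 1.0])
print(f"Emax = {emax:.1f} %, EC50 = {ec50:.1f} uM, Hill slope = {n:.2f}")
```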
Rat tracheal relaxation assay Trachea was removed from rats, cleaned out of adhering connective tissue, and cut into 3-5mm length rings.Then, tissue segments were mounted by stainless steel hooks under an optimal tension of 2g, in 10 mL organ baths containing warmed (37 ºC) and oxygenated (O 2 :CO 2 , 95:5) Krebs solution (composition,mM: NaCl,118;KCl,4.7;CaCl 2 , 2.5; MgSO 4 , 1.2; KH 2 PO 4 , 1.2; NaHCO 3 , 25.0; EDTA,0.026;glucose,11.1,pH 7.4).Changes in tension were recorded by Grass-FT03 force transducers (Astromed, West Warwick, RI, US) connected to a MP100 analyzer (BIOPAC Instruments, Santa Barbara, CA, US), as described (Sánchez-Recillas et al., 2014a).After equilibration, rings were contracted by carbachol (1 M) and washed every 30 min for 2h.After pre-contraction with carbachol, the test samples (compound 1 and positive control) were added to the bath in a volume of 100 μL; then cumulative concentration-response curves were obtained for each ring.The relaxant effect of test samples were determined by comparing the muscular tone of the contraction before and after the application of the test materials. In order to establish the underlying mode of action of 1, the following ex vivo experiments were carried out: a) For the interaction with the cholinergic receptors, concentration response curves (CRC) were obtained with carbachol (0.006-540 µM) after tissues were incubated with 1 (EC 50 = 96.30µM) during 15 min.Carbachol-contractile effect was determined comparing the contraction induced by carbachol in absence and presence of 1. b) For the interaction with phosphodiesterases (PDE's), tissues were incubated with compound 1 (96.30µM) during 15 min, then theophylline (inhibitor of PDE's) was cumulatively added to the bath (1.67-550 µM), and concentration response curves (CRC) were obtained. The relaxant effect induced by theophylline was compared in absence and presence of 1. c) For interaction with β-adrenergic receptor and cAMP increase, tissues were pre-incubated during 15 min with isoproterenol (10 µM; β2 adrenergic agonist) and propranolol (10 µM; β-adrenergic antagonist) and maximal relaxing effect of 1 was compared in absence and presence of isoproterenol and propranolol.d) To establish a possible interaction of 1 with L-type calcium channel blockade, the tracheal rings were precontracted with high KCl (80 mM).Once a plateau was attained, CRC of 1-induced relaxation were obtained by adding cumulative concentrations of compound to the bath.e) To determine whether the inhibition of extracellular Ca 2+ influx was involved in 1-induced relaxation, the experiments were carried out in Ca 2+ -free Krebs solution.Tracheal rings were washed with Ca 2+ -free Krebs solution containing KCl (80 mM) (15 min) and the cumulative CRC for CaCl 2 were obtained in the absence of 1 (control group) or after 15 min incubation with 1 (96.3 µM).Finally, the contractile effect induced by CaCl 2 was compared in absence and presence of 1. f) In order to explore the role of K + channels on -induced relaxation, tracheal rings were preincubated with the K + channel blocker glibenclamide (10 μM) and 2-AP (100 μM) for 15 min before carbachol (1 M) was added, and then 1 was added cumulatively. In silico docking studies The model of the L-type calcium channels was performed by Lipkind and Fozzard (2003), and was kindly given by Prof. 
Mancilla-Percino (Mancilla-Percino et al., 2010).Nifedipine models of ligands and 1 were built using Marvin Sketch (6.0.0 Marvin, 2013, ChemAxon, http://www.chemaxon.com).The study of molecular coupling (docking) was performed by using Vina Autodock (Trott and Olson, 2010).The channel was centered at (0,0,0) and was used a mesh size of 22.5 x 22.5 x 22.5 Å with a space in the mesh of 1 Å and exhaustiveness of 50.The systems were prepared using Pymol (Schrodinger, 2010) and Autodock/Vina for Pymol (Seeliger and Groot, 2010).In an effort to improve the statistics of the result obtained, a thousand independent molecular dockings were made using Autodock Vina.Images were made using VMD (Humphrey et al., 1996) and molecular interactions with LigPlus (Laskowski and Swindells, 2011). Statistics Data were expressed as mean ± S.E.M. and statistical significance was evaluated by using one-way ANOVA followed by Tukey's test.P values less than 0.05 were considered to denote statistical significance. RESULT AND DISCUSSION Previous results indicate that 1 was one of the most active relaxant compounds of the entire series evaluated (Alemán-Pantitlán et al., 2016), being two-times most active than theophylline (positive control).Thus, we decided to determine the functional mode of action of compound 1 on tracheal rat rings and, by using Docking studies, to explain its interactions with L-type calcium channel in in silico model.Hence, 1-pretreatment significantly shifted to the right (p<0.001) the carbachol-induced contraction, and did not allow reaching carbachol-induced maximum contraction (Fig. 2), suggesting that 1 is acting as a possible functional non-competitive antagonist.In addition, compound 1 (1.06-350 µM) produces significant (100%) relaxant effect on the contraction induced by KCl (80 mM) (Fig. 3) and the CaCl2-induced contraction was significantly reduced by compound 1 (p<0.001)(Fig. 
4).Thus, Since 1-induced a non-competitive antagonism effect, offers the idea that bioactive 1 is not directly interacted with muscarinic receptor (Racké et al., 2006).Meanwhile, the relaxant effect could be produced by blocking a common step which is necessary to produce cholinergic contraction, such as the augment of [Ca 2+ ] i .It is well known that, in smooth muscle cells, two classes of Ca 2+ channels exist: voltage-dependent Ca 2+ channels (high KCl induced contraction is due to membrane depolarization, leading to increased Ca 2+ influx through voltage-dependent channels), and receptor operated Ca 2+ channels (contraction induced by carbachol in Ca 2+ release, through sarcoplasmic reticulum Ca 2+ channel activated by IP 3 ) (Montaño and Bazan-Perkins, 2005;Siddiqui et al., 2013;Racké et al., 2006).Therefore, our results suggest that 1 induced its relaxant effect by the interference with the Ca 2+ influx into the smooth muscle cells, since compound 1 was capable to relax the contraction induced by KCl and abolished the CaCl 2 -induced contraction.Furthermore, we believe that 1 acts as calciumchannel blocker, which result in a decrease in intracellular calcium concentration, and therefore reflected in the relaxation of tracheal smooth muscle (Flores-Soto et al, 2013;Sanchez-Recillas et al., 2014b;Medeiros et al., 2011).On the other hand, in the presence of isoproterenol (β-adrenergic agonist) the relaxant curve was significantly displaced to the left (p<0.001),which indicates a possible synergic effect on β-adrenergic receptor and/or a potential accumulation of cAMP by guanilate cyclase activation.Likewise, preincubation with propranolol (β-adrenergic antagonist) (Fig. 5), also modified the relaxant curve induced by 1, corroborating later asseveration (Dowell et al., 2014).On the other hand, our finding shows that 2-AP (10 µM) provokes a shifted to the right the relaxant curve of 1 (p<0.001),which suggest a potential potassium channel-opening mode of action.Finally, pre-incubation of glibenclamide (10 µM) did not produce any change in the concentration-response relaxant curve induced by 1 (Fig. 6), which allowed us to discard the ATP sensitive potassium channels (KATP) opening in the relaxant effect (Perez-Zoghbi et al., 2009). In addition, compound 1 did not modify the relaxant curve induced by theophylline (Fig. 7), a non-specific inhibitor of phophodiesterases which are responsible for converting cAMP into AMP, suggesting that 1 did not induce an augment of intracellular cAMP as relaxant mechanism of action. Once the relaxant effect of 1 was demonstrated and related with the calcium channel blockade, we decided to investigate the in silico putative interactions of active compound with L-type calcium channel (LTCC).For this, nifedipine (a well know L-type calcium channel blocker) and compound 1 were docked on human LTCC model.In this context, docking results for nifedipine gave four possible sites with different affinity energies ranging from -6.36+/-0.16 to -5.55+/-0.07kcal/mol.Binding sites and energies are shown in Fig. 8.Each binding site was analyzed individually and identifying their corresponding interactions between nifedipine and LTCC model.Fig. 
9 shows the structures that were found in each binding site and their interactions. Some of the binding sites calculated in the current work were reported previously; however, they were not classified as was done in this study, as follows: binding site C (Hernández et al., 2013), binding site B (Sánchez-Recillas et al., 2014b), binding sites B and D (Pandey et al., 2012), and binding site D (Lipkind and Fozzard, 2003). Even though the energetic differences were small, the results showed that nifedipine may bind at several places within the calcium channel cavities, broadening the search for compounds more specific than nifedipine. The site with the lowest affinity energy is characterized by close contacts with residues of four distinct chains: IIIP (F49, E50, P53), IVP (C46, A51, Q53, W52), IVS5 (M26) and IVS6 (I4, F7, I8, F11). Only two residues of the IVP chain, E50 and Q53, form hydrogen bonds with nifedipine. As noted, Figure 9B shows that no correlation exists between the number of hydrogen bonds formed and the affinity energy. Specifically, the B5 site interacts with IIIP (F49, E50, P53), IVP (I4, Q50, A51, W52, Q53, E54, C46), IVS5 (M26) and IVS6 (F7). Taking these results into account, and using the same methodology as for nifedipine, it was found that compound 1 docked predominantly (99.9%) at a site near nifedipine binding sites A and B (Fig. 9). The average affinity energy for 1 was -6.49 +/- 0.04 kcal/mol. The binding site and the amino acids interacting with 1 are shown in Fig. 10. Fig. 10B shows that compound 1 interacts with the calcium channel model through the following chains and residues: chain IP (G51, W52, T53, D54), IVP (R45, E50, A51, Q53, D54), and IS6 (W4, F7). Even though only one residue interacts with 1 through a hydrogen bond (IVP Q53), the remaining residues interact through van der Waals forces, stabilizing the ligand in the calcium channel. The docking results obtained in this work showed that 1 may bind the calcium channel with a slightly greater affinity than nifedipine. In conclusion, the ex vivo and in silico approaches suggest that compound 1 induces its relaxant effect mainly through calcium channel blockade.

Fig. 2: Inhibitory effect of compound 1 on the concentration-response curve of the contraction induced by carbachol. All results are expressed as the mean ± S.E.M. of six rats.

Fig. 3: Relaxant effect of compound 1 on the contraction induced by KCl (80 mM) in rat tracheal rings. Results are presented as mean ± S.E.M. of six rats.

Fig. 4: Inhibitory effect of compound 1 on the cumulative contraction curve dependent on extracellular Ca2+ influx induced by 80 mM KCl in Ca2+-free solution. Results are presented as mean ± S.E.M. of six rats.

Fig. 7: Effect of compound 1 on the concentration-response curve of the relaxation induced by theophylline. All results are expressed as the mean ± S.E.M. of six rats.

Fig. 8: Binding sites and affinity energies found in 1000 independent docking studies. In the graph, letters on the left side correspond to the binding site and the numbers on the right to the number of conformations found by AutoDock Vina.

Fig. 9: Binding sites found by AutoDock Vina for nifedipine and the calcium channel. A) Binding sites found in the current study. B) Nifedipine/calcium-channel interactions for each energy group, generated by LigPlus (Laskowski and Swindells, 2011). Residues highlighted in orange were previously found to be disease-related by mutagenesis (Pandey et al., 2012). The letter corresponds to the binding site and the number to the conformation in the plot in Fig. 8. The standard deviation for each conformation is given in parentheses.
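As a rough, hypothetical sketch of how the 1000 independent dockings and the affinity statistics reported above can be scripted: the code below drives the AutoDock Vina command-line program with the search box described in the methods and aggregates the affinities that Vina writes into its output files. The receptor and ligand file names are placeholders rather than files from this study, and grouping poses into the binding sites A-E would additionally require clustering the pose coordinates, which is omitted here.

```python
import re
import statistics
import subprocess

# Placeholder input files (assumptions, not this study's actual data).
RECEPTOR = "ltcc_model.pdbqt"
LIGAND = "compound1.pdbqt"
N_RUNS = 1000

def run_vina(seed: int) -> list[float]:
    """Run one independent AutoDock Vina docking and return the
    affinity (kcal/mol) of every reported pose, best first."""
    out = f"pose_{seed}.pdbqt"
    subprocess.run(
        ["vina",
         "--receptor", RECEPTOR, "--ligand", LIGAND,
         # Channel centred at the origin, 22.5 A search box (as in the text).
         "--center_x", "0", "--center_y", "0", "--center_z", "0",
         "--size_x", "22.5", "--size_y", "22.5", "--size_z", "22.5",
         "--exhaustiveness", "50",
         "--seed", str(seed), "--out", out],
        check=True, capture_output=True)
    # Vina annotates each pose in the output PDBQT with a line of the
    # form "REMARK VINA RESULT: affinity rmsd_lb rmsd_ub".
    affinities = []
    with open(out) as fh:
        for line in fh:
            m = re.match(r"REMARK VINA RESULT:\s+(-?\d+\.\d+)", line)
            if m:
                affinities.append(float(m.group(1)))
    return affinities

# Keep the best pose of each run and summarise over all runs.
best_scores = [run_vina(seed)[0] for seed in range(N_RUNS)]
print(f"best affinity : {min(best_scores):.2f} kcal/mol")
print(f"mean +/- SD   : {statistics.mean(best_scores):.2f} "
      f"+/- {statistics.stdev(best_scores):.2f} kcal/mol")
```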
2017-10-24T09:44:44.942Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "5dafd1e389d8d8d8b9171685e05d54c20d7e280d", "oa_license": "CCBY", "oa_url": "https://japsonline.com/admin/php/uploads/2010_pdf.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5dafd1e389d8d8d8b9171685e05d54c20d7e280d", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
216046618
pes2o/s2orc
v3-fos-license
Sulfonium Acids Loaded onto an Unusual Thiotemplate Assembly Line Construct the Cyclopropanol Warhead of a Burkholderia Virulence Factor Abstract Pathogenic bacteria of the Burkholderia pseudomallei group cause severe infectious diseases such as glanders and melioidosis. Malleicyprols were identified as important bacterial virulence factors, yet the biosynthetic origin of their cyclopropanol warhead has remained enigmatic. By a combination of mutational analysis and metabolomics we found that sulfonium acids, dimethylsulfoniumpropionate (DMSP) and gonyol, known as osmolytes and as crucial components in the global organosulfur cycle, are key intermediates en route to the cyclopropanol unit. Functional genetics and in vitro analyses uncover a specialized pathway to DMSP involving a rare prokaryotic SET‐domain methyltransferase for a cryptic methylation, and show that DMSP is loaded onto the NRPS‐PKS hybrid assembly line by an adenylation domain dedicated to zwitterionic starter units. Then, the megasynthase transforms DMSP into gonyol, as demonstrated by heterologous pathway reconstitution in E. coli. Supplementary Figures Supplementary Figure 1 Supplementary Tables Supplementary Table 1 Comparison of A-domain binding pockets. A) Crystal structure of the GrsA PheA-domain from gramicidin S biosynthesis (PDB code 1amu; [1] ). The substrate is shown as sticks in magenta, the binding pocket residues in green. Residues K517 and D235 are highly conserved and bind the carboxylate and amino group of the substrate, respectively. B)-C) Homology models were prepared using the Swiss-Model server [2] with 1amu as a template. Sequence identities to the template are 25.3% (BurA-A) and 29.1% (ATRR-A [3] ). Models of the substrates have been energy minimized in Chem3D (Perkin Elmer) and manually docked into the active sites in an orientation similar to that of Phe in GrsA-A. Hence, the positioning and conformation of the substrate is only an approximation. In both models, the loop bearing D235 in GrsA-A is replaced with a shorter loop not containing an acidic residue. Instead, an acidic residue (D606 in BurA-A and E304 in ATRR-A) is found in a position where it could electrostatically interact with the sulfonium or ammonium residue of the substrate, respectively. D) Binding pocket residues of GrsA-A, BurA-A and ATRR-A. It is not clear, whether the loop carrying the residues marked in blue has been modelled reliably. Preparation of Gene Knockout Mutants of B. thailandensis E264 An overnight preculture (0.5 mL) of B. thailandensis Pbur in LB medium (2 mL) supplemented with tetracycline (45 μg mL -1 ) was inoculated into LB medium (50 mL) supplemented with tetracycline (45 μg mL -1 ) and cultured in a 300 mL buffled Erlenmeyer flask at 30 °C with orbital shaking until an OD600 from 0.5 to 0.8 was reached. Cultured cells were centrifuged at 2,500 × g and 20 °C for 10 min. The obtained cells were washed with a 300 mM sucrose solution (3 ×) and resuspended in 300 μL sucrose solution. Subsequently, 100 μL of the washed B. thailandensis Pbur cells were subjected to electroporation (200 kV) with knockout plasmids (2-5 µg, see above). The transformed cells were precultured in LB broth (1 mL) for 4-6 h at 30 °C with shaking and then plated on either LB or nutrient agar plates with tetracycline (45 µg mL -1 ) and kanamycin (150 µg mL -1 ). After 3 days, a few positive colonies were observed and confirmed by PCR (see below) using purified genomic DNA from the respective mutants. 
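The template sequence identities quoted above (25.3% for BurA-A and 29.1% for ATRR-A) are ordinary percent-identity figures from pairwise alignments. A minimal sketch of that computation over an already-aligned, gapped sequence pair is given below; the two strings are toy examples, not actual BurA or GrsA sequence data, and conventions for the denominator (here, only columns aligned residue-to-residue) vary between tools.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over the aligned columns of two equal-length,
    gapped sequences ('-' marks a gap). Columns in which either
    sequence has a gap are excluded from the denominator."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must come from one alignment")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Toy aligned fragment, not real sequence data:
print(percent_identity("MK-LSTAVGD", "MKQLS-AVGE"))  # -> 87.5
```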
Heterologous Production and Purification of His6-BurB The gene fragment burB was amplified by PCR with the primer pair burB-fw-NheI/burB-rv-HindIII using the DeepVent polymerase and the resulting amplicon was purified by the illustra GFX PCR DNA and Gel Band Purification Kit followed by subcloning into pJET1.2, generating pJET-burB. This plasmid was restricted with NheI/HindIII and the obtained gene fragment burB was ligated into NheI/HindIII-restricted pET28a (+), to yield pET28a-burB. Subsequently the plasmid was introduced into E. coli Rosetta2 (DE3) for heterologous protein expression. E. coli Rosetta2 (DE3) pET28a-burB was cultured in LB medium (2 mL) with chloramphenicol (25 μL mL -1 ) and kanamycin (50 μL mL -1 ) overnight. These cultured cells were inoculated into LB medium (50 mL) with added chloramphenicol (25 μL mL -1 ) and kanamycin (50 μL mL -1 ) in a 300 mL baffled Erlenmeyer flask and grown at 30 °C with orbital shaking until an OD600 = 1.5 was reached. The bacterial culture was cooled in an ice bath for 20 min and IPTG (200 μM, final concentration) was added followed by further cultivation at 20 °C with orbital shaking for 18 h. The overnight cultured cells were harvested by centrifugation at 20 °C and 8,000 × g for 5 min and kept at -25 °C until usage. 15 mL lysis buffer [50 mM Tris HCl (pH 8.0), 200 mM NaCl, 2 mg mL -1 lysozyme] were added to the cells and the mixture was incubated at 37 °C for 1 h. After DNase A (5 μL) was added, the cells were lysed by usage of a sonicator (BANDELIN SONOPULS HD2200) and centrifuged at 10,000 × g and 4 °C for 30 min. The resulting supernatant was filtered through a Chromafil ® PET-45/15 MS (Macherey-Nagel) filter and subjected to a Ni-IDA agarose (Biontex) column (2 mL Methylation of L-Methionine Through BurB Purified His6-BurB (10 μM) was added to phosphate buffer (50 mM, pH 8.0) containing 20 mM NaCl, 1 mM L-methionine and 1 mM S-adenosylmethionine. As a control, purified BurB was heat-inactivated at 80 °C for 25 min and used in the same way. The resulting mixtures were incubated at 30 °C for 90 min. Subsequently, 50 μL of the reaction was diluted with 50 μL of a 0.5 M NaHCO3 solution and 10 μL of a 1-fluoro-2,4-dinitrobenzene (DNFB) solution (1% w/v in acetonitrile) were added. After incubation at 60 °C for 60 min the reaction mixture was quenched with 12.5 μL HCl (2 M) and diluted with 122.5 μL methanol. As a reference, reaction buffer containing 1 mM S-methylmethionine was treated in the same way. The resulting solutions were filtered through a PTFE syringe filter and subjected to HR-LCMS analysis. Gonyol synthesis Gonyol was prepared by following a known procedure [10] from ethyl-bromoacetate and 3-(methylthio)propionaldehyde through reformatsky reaction followed by ester hydrolysis and subsequent methylation with iodomethane. 13 C3 DMSP was synthesised according to Chambers et. al. [11] as hydrochloride from 13 C3 acrylic acid and dimethylsulfide by bubbling gaseous HCl into a solution of both educts in dichloromethane for 20 minutes and subsequent concentration in vacuo. Stable isotope labelling of Malleicyprol with 13 C3-DMSP B. thailandensis PburΔburI was grown in a 300 mL baffled Erlenmeyer flask filled with 100 mL MM9 medium supplemented with 45 mg L -1 tetracycline, 150 mg L -1 kanamycin and 137 mg L -1 13 C3 DMSP at 30 °C with shaking at 150 rpm for 24 h. Subsequently, the culture was extracted with ethyl acetate (2 ×), concentrated in vacuo and redissolved in methanol for LC-HRMS analysis. 
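In the stable-isotope feeding experiment above, incorporation of 13C3-DMSP is read out by LC-HRMS as a fixed monoisotopic mass shift. The sketch below computes the expected shift from standard atomic masses; the molecular formula C5H10O2S for DMSP is standard, while the choice of the [M+H]+ adduct is an assumption made for illustration.

```python
# Monoisotopic atomic masses (Da).
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221, "S": 31.97207069}
C13_SHIFT = 13.0033548378 - 12.0   # mass added per 12C -> 13C exchange
PROTON = 1.00727646                # for [M+H]+ adducts (an assumption)

def monoisotopic(formula: dict[str, int]) -> float:
    """Monoisotopic mass of a neutral molecule from its formula."""
    return sum(MASS[el] * n for el, n in formula.items())

dmsp = {"C": 5, "H": 10, "O": 2, "S": 1}   # dimethylsulfoniopropionate
m = monoisotopic(dmsp)
print(f"DMSP [M+H]+      : {m + PROTON:.4f}")            # ~135.0474
print(f"13C3-DMSP [M+H]+ : {m + 3*C13_SHIFT + PROTON:.4f}")  # ~138.0575
# Any downstream metabolite that retains all three labelled carbons
# shifts by the same ~3.0101 Da; fewer retained labels give
# correspondingly smaller shifts.
```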
Metabolite extraction and metabolomics analysis B. thailandensis E264, B. thailandensis Pbur, B. thailandensis PburΔburA [8] , B. thailandensis PburΔburB, B. thailandensis PburΔburI, B. thailandensis PburΔburD and B. thailandensis PburΔburE were each grown in a 300 mL baffled Erlenmeyer flask filled with 100 mL MM9 medium supplemented with the appropriate antibiotic and 2% (w/v) XAD16N at 30 °C with shaking at 150 rpm for 24 h. Subsequently, the XAD16N resin was separated from the culture broth by filtration through Miracloth (Merck Millipore). The XAD16N resin was washed with water and eluted with methanol (40 mL) followed by elution with ethyl acetate (20 mL). Both eluted fractions were combined and concentrated under reduced pressure to yield supernatant extracts. For cell extracts, 50 mL of filtered culture broth was pelleted by centrifugation (6000 g, 10 min). Subsequently the pellet was resuspended in 20 mL methanol, sonicated and incubated for 1 h. Cell debris was removed by centrifugation (6000 g, 10 min) and filtration through a 0.2 μm PTFE ROTILABO syringe filter. The resulting methanol solution was concentrated in vacuo to yield cell extracts. Cell and supernatant extracts were dissolved in methanol to yield a concentration of 1.75 mg mL -1 of crude extract mass for supernatant and 1.64 mg mL -1 for cell extracts. All extracts were subjected to LC-HRMS analysis in two technical replicates for subsequent metabolomics analysis using the software Compound Discoverer (2.1, SP1) from Thermo Fisher Scientific. Additionally MM9 medium substituted with tetracycline (45 μg mL -1 ) and kanamycin (150 μg mL -1 ) was treated in the same way as mentioned above for supernatant extracts. This medium control was used to subtract medium components when supernatant extracts were analysed. Both extract types were analysed using a Metabolomics Workflow from Compound Discoverer with a Pattern Scoring node to identify sulphur-containing metabolites. All genotypes were compared to each other using differential analysis. Conversion of DMSP to Gonyol by Expression of burA in E. coli A 300 mL baffled Erlenmeyer flask containing 75 mL MM9 [8] liquid medium supplemented with 25 μg mL -1 chloramphenicol and 50 μg mL -1 kanamycin was inoculated with 1% of an overnight culture of E. coli Rosetta2 (DE3) expressing His8-BurA. E. coli Rosetta2 (DE3) containing an empty vector (pHis8-3-svp) was used as negative control. The cultures were grown at 37 °C with shaking at 150 rpm until an OD600 of 1.3 was reached. After cooling to 16 °C, the cultures were supplemented with 5.1 mg DMSP, heterologous protein production was induced with 0.5 mM IPTG and the resulting cultures were incubated at 16 °C with shaking at 150 rpm for 18 h. Subsequently, 1.3 g XAD16N resin was added per culture followed by incubation with shaking at 100 rpm for 30 min. The XAD16N resin was harvested from the culture broth by filtration through Miracloth (Merck Millipore), washed with water and eluted with methanol (40 mL) followed by elution with ethyl acetate (20 mL). Both eluted fractions were combined, concentrated under reduced pressure, redissolved in methanol to a
2020-04-22T13:05:08.026Z
2020-04-21T00:00:00.000
{ "year": 2020, "sha1": "7e392376c523910413bc08dc7b518b2f2c039fe1", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/anie.202003958", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d63b4b0d20958712174d4005aeadf235bb2d701a", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
220864733
pes2o/s2orc
v3-fos-license
Teacher professional development and change: Evidence of compliance, redefinition, and reflection in the use of Sport Education Using change theory (Fullan, 1992; Guskey, 2002) and socialization theory (Lacey, 1977; Zeichner & Gore, 1990), our research team focused on the professional development of middle and junior high teachers relative to learning the Sport Education model (Siedentop, Hastie, and van der Mars, 2004). Data were collected both quantitatively and qualitatively via psychometric instruments and interview processes respectively. In aggregate, teacher level of adherence and commitment to Sport Education varied across school levels. This paper comments on the change process relative to both empirical data and theoretical connections across four major findings to date. 21 An organizing principle in the economics of professional team sports is the uncertainty of outcome hypothesis. In its general form, originally put forward by Rottenberg (1956), the hypothesis maintains that spectators are more likely to be attracted to sporting encounters involving evenly matched opposition. In essence, while we all love our special teams and want them to win, generally, sports fans are risk lovers and love the thrill of close contests. We are beginning to think that such a hypothesis may apply to the study of teachers and continuing professional development. That is, as we look at the dynamics of the continuing professional development (CPD) through grants targeted to improve physical education teachers and reform the programs within which they work, we wonder if here, too, we aren't entering a game that involves high risk and great uncertainty. While grant agencies or researchers may not be attracted to entering such contests, we believe the research literature (Armour and Yelling, 2007;Day,1999) is clear that there are few guarantees in CPD and the teacher socialization process (Lacey, 1977;Zeichner and Gore, 1990). While we know the basic tenets of successful school reform, teacher change, and the aspirational model of CPD (Day, 1999;Fullan,1992;Leat, et. al, 2006) are foundational, we believe the complexities of the school culture and the idiosyncratic nature of teacher socialization process or teacher agency promote great uncertainty. This was born out throughout our team's experience in a federally funded programmatic grant over the last 8 eight months working with the CPD of eight middle school and junior high school PE teachers. We have learned that school and program culture as well as the identities of each teacher play an important role here. To the extent that it provides a focus and clear purpose for the school, a positive and supportive school culture becomes the cohesion that bonds a program and its faculty members together as it goes about its change mission. However, a negative school culture and sub-cultures within, such as a PE staff mired in negative routines, can be counterproductive and an obstacle to educational reform. Without a culture composed of committed teachers that value their subject, reflection, independent thinking, knowledge creation, accountability, and true professionals that value career-long learning, improved programs through CPD are near impossible. As Armour and her associates (Armour, Makopulous & Chambers, 2008;Armour &Yelling, 2007) have found, the quality of PE and pupil learning rests on the quality of PE teachers' responsible attitude and accompanying behavior toward career long professional learning. 
They argue that unless teachers are committed to and engage in life-long learning, their knowledge and skill base becomes obsolescent and teachers are deskilled. As Armour 22 and her colleagues state, "The problem, of course, is that such teachers don't become obsolete. Instead, they continue to practise for many years and, as a result, their pupils lose out and the integrity of the physical education profession is undermined." This finding is not surprising and appears time and time again in the literature and in our study. Purpose and method Our research team joined the challenge to enhance PE teachers' CPD and attempted to improve school PE through a federal grant program in the U.S. whereby school districts can apply for funding targeted to improve school physical education. The program entitled, the Carol M. White Physical Education Progress grant, has funded over 900 school districts over the last eight years in the U.S. More the 500 million dollars have been distributed with the aim of instructional and curriculum enhancement of physical education that will ultimately improve the skill, knowledge, and dispositions of children relative to sustained involvement in physical activity. We focused on introducing and implementing three curricular models to the teachers: Sport Education, Health-Related Fitness Education, and the Personal-Social Responsibility model. Initially, we joined the teachers in learning about Sport Education (Siedentop, Hastie, and van der Mars, 2004). Our project, conducted at a middle school and a junior high school, just concluded its first phase of three years of funding. Four physical education teachers at each school were engaged in specialized workshops, on-site mentoring, periodic reflection or focus group discussions, and individual conversations that focused on learning Siedentop's Sport Education model. Our hope was that teachers would improve their instructional and curricular knowledge, skills and dispositions relative to sport education and apply the model within their curriculum. Importantly, we hoped that pupils would profit from such innovation by engaging in a new model that motivates the children to be more active and more sport literate. To assess our progress, our team developed a year-long research design that would examine the teachers' perceptions and behaviors related to the reform/change process. This research involved psychometric measurement of teacher attitudes toward change and their self-efficacy as well as the examination of teacher perceptions of the change process through 4 individual and 3 group interviews with teachers from both school settings. Equally, interviews were conducted with school administrators and our team observed the eight teachers over an eight month period to assess how they implemented the Sport Education model. Data have been and continue to be inductively analyzed and in the time remaining I will share but a sampling of our findings that center around four themes and the lessons we have learned: Theme 1 -Who asked us? Skepticism and concern abounds Theme 2 -We already do sport education. Theme 3 -Communities of practice, reflection and commitment: Real and unreal Theme 4 -Kids like Sport Education Theme one -Who asked us? Skepticism and concern abounds An important tenet of teacher change forwarded by Guskey (1995) is that teachers have ownership in the change process. 
While our team interacted frequently with school administrators and teachers upon our first submission of a grant that had been rejected two years ago, this wasn't the case in the submission of a subsequent grant application. While we interacted with school administrators, we learned that teachers had not been informed of the second application and the award Hence, our first task was to clear up miscommunication and apparent lack of communication. It needed to be addressed and clarity of project goals agreed upon after learning that we were awarded the grant -this, of course, wasn't easy. As one teacher stated: "While I'm excited about it, but I'm a little nervous, of course, obviously for change. We are so use to the way things are, but I think all of us are ready for new things….I think the way, the way it was presented to us wasn't in the best…it could have been handled better. Nobody informed us -at least me about it and I am the department chair. So I feel I should have been told [about the grant application and award]." So despite this rough patch, the teachers and our university team were able to get on the right pathway by engaging in multiple discussions sessions about the intent of the grant, curricular models, teacher and university faculty roles, and by acquiring and distributing reading materials to orient the teachers about the models and particularly the first model, Sport Education. While the teachers were generally enthused about the project, there was some caution and even cynicism about this reform effort as just another fad. We call this "drive by" or "drive through" CPD. As one skeptical teacher stated: " I'm always skeptical with new. At my age when I see new programs coming down the tube and especially not so much in PE, but program development that we have in different realms in education….everybody says we are going to get on the bandwagon. Well , it lasts about a year and a half and then it fizzles out." 24 Lessons learned: 1) make sure everyone has ownership in a reform project and are clear about goals, objectives, roles and plans to improve their program and teaching and 2) make sure you have a real partnership -one based on mutual respect and enthusiasm for your project -anything less will result is awkwardness and diminished goals that will ensure your efforts "fizzle out". In contrast to Rottenberg's premise of risk and uncertainty, one must try to insure as much certainty as possible as where people are going and how to get there. 2. 2. Theme two: We already do Sport Ed! While many of the teachers claimed that they already "did" sport education, it certainly wasn't the model designed by Siedentop and his colleagues. To clarify what the model represents, reprints from journal articles about the model were collated and distributed to the teachers at both schools and the text, the Sport Education: A Comprehensive Guide (Siedentop, Hastie, and van der Mars, 2004) was purchased for each school. This includes an extensive DVD complete with instructional materials. This was followed by a two-day workshop on Sport Education conducted by Hans van der Mars, one of the developers of the model. Subsequently, teachers at each school conducted a nine week season of volleyball using the Sport Ed. Model. Our team assisted in the development of materials and instructional approaches to varying degrees at each school and one graduate student was assigned full time to rotate between schools to assist and observe the implementation of the model. 
As preparation and initial implementation unfolded in a nine week season of Volleyball, the teachers began to understand that their Sport Education was very different from the Sport Education. We were very pleased that the teachers at both schools implemented the Sport Education model in some hybrid form. The teachers at the middle school went full speed in developing materials and implementing the model in its pure form while the junior high teachers opted to implement certain elements most suited to their students and importantly, their own comfort levels. While this model is clearly different from what they implemented in the past, these teachers still weren't initially convinced. As one teacher stated: "Well, it's not actually that much different from what we did in Volleyball before. We had fitness and scorekeepers, and the participants -only fair play points [are different]." Nonetheless, the teachers at both schools did see the greater utility of the model for shaping student leadership and decision making opportunities -a shift from a teacher centered to student centered model of instruction as kids took on various roles during a season. Various teachers commented: "Definitely the kids take over leadership roles" "Most of the coaches did a nice job. Some got frustrated during the pre-season because people were not listening to them." "I think the kids, look like and act like, and say that they feel more ownership in it." "[Duty teams] I would keep all of them [various roles]….I think they are important and would definitely keep them." Lesson learned: Teachers will define their own pathway to reform. Providing CPD opportunities via workshops, reading materials, group discussions, etc. are all helpful. Ultimately, teachers will shape CPD in the forms they wish to pursue, not those that others expect them to follow. We began seeing a bimodal tendency between enthusiasm on part of the middle school teachers and concern and strategic compliance (a kind of 'we will do this because it's expected of us' by school administrators and our university partners). It reveals the dialectical nature of the socialization process and the agency of the teachers. That is, while one group of teachers appeared to reflect and redefine their situation, the others attempted to accommodate the innovation slightly while holding tightly onto their existing beliefs and behaviors for the most part. This reminds of the research by Doolittle, Dodds, and Placek (1993) with pre-service teachers who held onto established beliefs throughout their teacher education. Finally, it also suggests as Griffin and Patton (2008) found in their study that change involves risk -some teachers are more willing than others to "risk" a change in routine. In terms of Rottenberg's thesis, we see both sides of the coin. We found that some teachers are more comfortable with the uncertainty that goes along with learning new approaches to teaching while others prefer more certainty by holding onto routine versus innovation. Unreal One of the things that impressed us most as researchers was the middle school teachers' ability to coalesce around the Sport Education model. They worked closely as a team and created an environment that facilitated student learning and enjoyment through the model. When concerned about one teacher being too controlling in the model, the other teachers encouraged the teacher to adjust her teaching strategies to fit the model; that is, allow pupils more ownership and control in implementing a lesson. 
As opposed to past history of working independently, this group really pulled together in discussing plans, developing student manuals and materials, building on each others strengths, and sharing the work load, and overcoming difficulties. As one teacher stated: " I think it's great being able to 26 work with them [other teachers] and I believe that everybody has really jumped in wholeheartedly at doing it and likes what's going on. I feel like we all kind of blend together and we all, you know….we can get upset with one another and we can, you know, not like this, but we will work it out and go on." In contrast, while the junior high teachers claimed they were on the "same page", there was dependence on the university team to provide lessons and direction for the season. In fact, our team did develop the season plan and the materials used by each team within the season. Initially, all four teachers were present to oversee the lessons, but this changed over time as only two of the teachers appeared in the gym. While acknowledging the value of the model, the teachers appeared less enthusiastic and reflective about the merits of the model during the season. And we began to observe differentiated and lessened work load and wondered about the degree of collective commitment and caring to the reform efforta concern even the chair of the department expressed from time to time. Once again, holding on to the past ruled the day for the most part. Lesson learned: The most productive way to bring about reform is when a "true" community of practice (O'Sullivan, 2007) and reflection related to an aspirational model of innovation and CPD that is enthusiastically embraced. While the middle school teachers were collaborative, reflective, enthusiastic, and more certain about the model and are likely advocates of Sport Education, in contrast and regardless of some supportive rhetoric from junior high teachers, they appeared to be mostly "playing the PEP game". That is, they displayed strategic compliance to a new curricular model that didn't engender a real commitment to reform or situational redefinition (Lacey, 1977). This was evidenced in subsequent activities where they opted not to implement sport education in activities clearly quite suitable to the use of the model. This may be tied to their late career stage and the lack of connectedness between the teachers as well as lack of accountability and responsibility linked to their daily performance. Needless to say we are very concerned what the future holds in years two and three of this project for this school. 4. Theme Four: Kids like sport education, The final theme I will address is that both middle school and junior high school students appeared to genuinely enjoy the Sport Education model. Naming their teams, assuming duty roles, and engaging in more student vs. teacher centered activity made their experience more enjoyable. It was clear that some students really got into their leadership roles as captain, coach, referee, linesperson, and fitness trainer and came to their team with well-defined plans. Teachers at both schools recognized the value of the model for 27 their students and observed students' comfort and accompanying effort to learn volleyball in a different mode. One teacher stated: "I think they really like it …they were just kind of like the teachers, hesitant at first, just because they didn't understand all the responsibility that was being put on them. 
But now, I mean they sit together at lunch in their teams, I mean they talk in the hallways about this stuff… so I think they are definitely on board." Lesson learned: While teachers may recognize the enthusiasm by pupils for a certain curriculum or instructional model, that doesn't mean they will implement the model to meet student needs. In the case of the junior high teachers, teacher behavior was shaped to meet their own needs and routines -what was most comfortable to them. Caring for themselves had priority over caring for the students -doing what was best for the kids. In contrast, the middle school teachers' enthusiasm and instructional delivery carried over to the students. Again, in contrast to Rottenberg's uncertainty thesis, we saw investment in the certainty of the potential of the model by the middle school teachers, and prioritization of the certainty of their long held routines in teaching by junior high teachers. While implementing the model to an extent, the junior high teachers' actions seemed to connote more uncertainty and less value of the model, particularly after the volleyball season. Summary In closing, it appears in its own unique way, the uncertainty of outcomes hypothesis applies to work in continuing professional development. While more certainty may be assured by following some of the basic tenets of teacher change theory, the complexity of school and program cultures along with the idiosyncratic nature of teacher identities and behavior, makes reform rather problematic versus automatic. We have learned that: 1. Individual and group ownership in the change process impacts the degree of success in changing the culture of a program. Successful change calls for active initiation and participation of all teachers in partnership with one another and with support groups in communities of practice in order to elevate the possibility of sustained commitment, behavior change, and a vision for the future. As Huberman (1995, p. 207) found, "teachers were most uniformly enthusiastic when they were in the throes of a major innovation of which they approved." Armour and Yelling (2007) suggests that for CPD to be successful, teachers must lead the charge both by establishing communities of practice and demanding experiences of learning. While we couldn't agree more, such an ethic has to be part of a teacher's inner soul and personal value system. Without it, we will not observe a "change" of any kind. 2. While there needs to be pressure to bring about change, it may result in compliance and personal and interpersonal conflict rather than collaboration and a genuine change in teacher beliefs and behavior. Hence, the degree of commitment and accompanying behavior and beliefs varies across individuals and groups. In one way or another, our project or perhaps CPD in general can generate paradoxes of sorts: reform stimulates enthusiasm and internal conflict; brings people together or moves them further apart; expands learning opportunities for teachers or erodes others; and in some cases intensifies professional unions and has the potential to bring about professional conflict. 3. To the extent that it provides a focus and clear purpose for the school, again culture becomes the cohesion that bonds the school or a program together as it goes about its mission. HOWEVER, culture can be counterproductive and an obstacle to educational success. 
One culture was clearly about improvement facilitated by a steadfast effort to learn and reflect on how to make things better, while the other was one of compliance and few signs of reflective practice, and mostly discomfort with breaking from previous routines of practice. 4. Finally, the process has a powerful potential to impact how teachers learn and behave. While it clearly involves risk for both teachers and CPD providers and researchers, it is risk worth taking. As partners, we have learned what to do and what not to do, when to push ahead and when to back off, and what to provide and what not to offer. It has taught us a little more about the world of teaching and how teachers wish to engage in their workboth good and bad. It has reinforced the many shades of teaching and teachers -the pedagogical, emotional, intellectual, political and moral dimensions that affect teachers work.
2020-02-27T09:10:59.255Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "f894fcfd0dba1a4b0712cc74f6e87500be1300cc", "oa_license": "CCBY", "oa_url": "https://journals.openedition.org/ejrieps/pdf/4615", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "86f52731be56a93fa42bfaf4af302527a7d9dfda", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Sociology" ] }
233791412
pes2o/s2orc
v3-fos-license
Lesson of Drama in Language Education: Why do We Have to Learn English Through Drama Performance? The primary goal of this research is intended to report how the implementation of Drama Performance as a project based in learning English foreign language and whether the lessons, the learning activity can motivate students interest also encouraging students to use their English language skills. Further, the discussion also takes into the issues of the problem that may exist during drama performance such as some students believe they cannot perform in English and they may choose to withdraw from the activity. The researcher is employing qualitative methodology to investigate the research questions. Observation and interview use as the instrument of this present study. The participants are the students who take Drama in Language Education course at the fifth semester of English Education Department at School of Teacher Training and Education (STKIP) Muhammadiyah Bogor. The students have performed drama, and have experience in preparing the session. The students are under the researcher supervision. The participants from other Department to be interviewed became the source of the data who watched drama performance. The students are under the researcher supervision. Activities are designed in accordance with the course objectives to share information about the students, the learning process and the phenomenon. The implementation of drama performance as a project based learning has myriad benefits for students and drama activities are a unique way of enhancing students’ motivation and participation. Language skills can be exposed to students not only through conventional technique but also through drama performance. Project based learning provides students with the opportunity to explore the contents of Drama in Language Education. To get into the outcome, the core activities focus on the elements of drama, creative writing (composing a play script), characterization, make-up character, providing students with the information about drama in language education, and also hope to give students the opportunity to perform a drama in English. The lecturer’s role in the teaching learning process is crucial. It can influence the students’ response towards project based learning. Keywords—drama performance, project based learning, learning skills INTRODUCTION Drama activities have been used to foster the acquisition of foreign language skills time after time. Drama integrates English Foreign Language skills in a natural way. There are also studies to support the advantages of drama on students' language skills [1]. Some useful activities are suggested for encouraging students to practice their English as follows. Firstly, students can be encouraged to practice speaking outside the classroom through rehearsal. Secondly, lecturers can help the students by providing feedback on the activities they are working on or assisting when they have language problems. Finally, students should be engaged in good speaking activities which can motivate them to play an active part in the speaking class [2]. Project-based learning provides learners with opportunities to focus on language functions through various tasks in the project. Drama performance is as one of techniques may be used to motivate and create bond between the students and their peer, and also between the students and their lecturer. In fact, the prior research on drama in learning English have been abundant. 
However, the focus on make-up characters and drama performance as project-based learning has not been widely explored, especially from the students' point of view at the School of Teacher Training and Education (STKIP). Some of the previous research emphasizes experiential learning through interactive drama [3] and the effects of the performing arts [4]. In another study, drama was used to teach English to linguistically diverse middle school students [5]. In other countries, such as Brazil, researchers point out the positive effects of drama on students' oral skills in the target language [6]. Drama in language education has naturally taken its place in English language learning, and many studies have established that learning English as a foreign language through drama performance has valuable effects. The researcher therefore needs to explore further the implementation of drama performance as a project-based approach to using English at the School of Teacher Training and Education. The specific research questions are as follows: (1) How is drama performance implemented as project-based learning in the EFL class? (2) How can project-based learning and drama be integrated to promote students' learning skills? The primary goal of this research is to report how drama performance is implemented as project-based learning in the EFL class. Next, it aims to see how project-based learning focusing on drama can be integrated to promote students' learning skills. The discussion also takes up the participants' perspectives on drama performance and the problems that may arise during it.

II. LITERATURE REVIEW

A. Drama in Language Education

This compulsory course provides students with the opportunity to explore the contents of Drama in Language Education. To reach this objective, the core activities focus on the elements of drama, creative writing (composing a play script), characterization, and make-up characters, providing students with information about drama in language education and giving them the opportunity to perform a drama in EFL. Based on the topics covered over the whole semester, the students are expected to perform a drama as their project. The teacher training curricula in schools of teacher training and education have hardly been adapted in a way that would ensure teachers in elementary, junior, and senior high schools have the skills to use educational drama while teaching. In practice, drama performance in language education depends on the initiative of the teachers. For teachers to be able to use drama when teaching EFL within the school curriculum, they will need to be appropriately trained [7], [8], [9]. Drama activities offer a framework for successful language learning because they provide a stimulating and positive learning environment and can help reduce classroom anxiety, which in turn increases student motivation and participation. English is a foreign language used by people who speak different first languages to communicate with each other. Various activities have been used to develop students' speaking skills, and there are studies confirming the effectiveness of project-based learning and drama instruction for learners' communication [10], [11]. The results suggest that drama activities can be used to enhance learners' language skills.
Moreover, drama activities can motivate learners to use the target language. In other studies, researchers have reported results of using drama for teaching and learning language and communication [12], [13], [14].

B. Project Based Learning

Project-based learning has been explored in various contexts and at different phases of schooling, from primary to higher education. What makes it distinctive is the construction of an end product, a 'concrete artifact' representing students' new understandings, knowledge, and attitudes regarding the issue under investigation, often presented using videos, photographs, sketches, reports, models, and other collected artifacts [15]. Project-based learning (PBL) is an active, student-centered form of instruction characterized by students' autonomy, constructive investigations, goal-setting, collaboration, communication, and reflection within real-world practices [16]. Social media sites have become invaluable tools in education: the fifth-semester students of the English Education Department promoted their drama project on social media such as Instagram, YouTube, and Facebook, creating a drama trailer on these platforms. Modern digital technology is a major enabler for students to engage comfortably with the process of designing and developing their project, as they can document the whole process and easily share their creations in a digital format [17].

III. METHOD

Participants from other departments who watched the drama performance were interviewed as a source of data. Activities were designed in accordance with the course objectives to gather information about the students, the learning process, and the phenomenon. The researcher used a non-probability sampling technique, chosen because the data collection did not give the same opportunity to every individual in the population. Within non-probability sampling, a purposive technique was used, based on the researcher's assessment of prospective participants' or respondents' knowledge relevant to answering the research questions. The judgment that a participant had such knowledge was made subjectively, based on the researcher's observation. In general, the samples considered capable of answering the research questions were people who were experienced or had knowledge related to the focus of the study.

C. Data Collection

1. Observation

Observation, particularly participant observation, has been used in a variety of disciplines as a tool for collecting data about people, processes, and cultures in qualitative research [18]. The observation in this study was done with the following objectives: to observe the students' learning process in learning English through Drama in Language Education; to observe the rehearsal process for the drama performance; to investigate the problems that appeared during the implementation of project-based learning; and to observe the students' drama performance. The observation lasted three months and was done in the classrooms and at the Aula of STKIP Muhammadiyah Bogor, following the Drama in Language Education schedule, the rehearsal schedule, and the drama performance.

2. Interview

The next step was interviewing the participants about drama performance as project-based learning. An interview is the process of obtaining information for research purposes through questions and answers, conducted face to face between interviewer and respondent using an interview guide [19].
The type of interview used in this study was the in-depth interview. This method was considered appropriate for the research, as it enhances a deeper understanding of the study objectives.

Data Analysis

Data were collected in various ways (observation, interviews, document digests, and audio recordings) and usually processed before they were ready to be used, while still consisting of words arranged into compiled text. The process was done systematically to obtain valid results. In analysing the data, the researcher notes that, among qualitative analysis methods, thematic content analysis is perhaps the most common and effective method for this study. It can also be one of the most trustworthy, increasing the traceability and verification of an analysis when done correctly [20]. The thematic analysis of the transcripts followed six main steps: 1) read the transcript, 2) annotate the transcript, 3) conceptualize the data, 4) segment the data, 5) analyse the segments, and 6) write the results. Each step has its own function, and the steps are connected to one another. The analysed data then became the elements used to present the findings and discussion of this research.

IV. FINDING AND DISCUSSION

This research was purposively done to report on the implementation of drama performance as project-based learning of English as a foreign language (EFL), and on whether drama performance encourages students to use their English language skills. Additionally, the investigation was conducted to identify the problems that may arise during the learning process, the rehearsal process, and the drama performance. The aim was to understand how drama-in-language-education performance is implemented as project-based learning in EFL, especially regarding English language skills, from the step of preparing the script, through the learning process, the rehearsals, and the core classroom activities, up to the drama performance, including the problems that appeared during the teaching and learning process and the lecturer's solutions for overcoming them.

A. The Observation

The data gained from all of the observations were expected to be representative of the findings of this research. The researcher clearly saw that the students did very well. They enthusiastically learnt to write the play script, stimulated by slides and examples the lecturer provided in a digital presentation, as well as videos. Some play scripts were shown to the students, who then discussed, analysed, and elaborated the scripts with their classmates. Some common technical problems appeared in the middle of the learning process; the main ones were finding students who were interested in directing and motivating their friends in the drama project, who had the time to commit to it, and who were willing to work on an English-language drama production. Another problem was scheduling rehearsals so that all students in a particular scene were available. Every week, the director attempted to find a time that would work for all students involved, but this was nearly impossible and resulted in much frustration. Many conflicts arose between students. To allow as many interested students as possible to be part of the production, scheduling rehearsals around student availability seemed to be the only solution. Since the students were using English, they often could not remember their dialogues.
The time constraint of only two months made it difficult for students to memorize their scripts and lines. From the first month of rehearsal until the last few rehearsals before the performance, several students still had not memorized their dialogues; this was not only worrisome but also made rehearsals difficult, and it caused conflict. Some students cried during rehearsal. Some students believed they could not perform in English and chose to stop and withdraw from the activity. Beyond the project work itself, another problem was the lack of budget. Even though this was a small performance (only two groups), a few properties and wardrobe items had to be purchased to enhance the performance. An operator and a technician were needed to work the lighting, but finding a volunteer with enough time to attend some rehearsals and become familiar with the technical aspects of the stage was not easy. Luckily, the theatre communities on campus were able to support and contribute to the purchase of some properties, and the student community at the English Education Department was another group from which support could be obtained. Here, too, reaching out across departments on campus may be the key to finding students interested in participating in the drama performance project. Finally, performing a drama in English produces additional challenges that cannot always be foreseen when planning, preparing, and performing such a project. Some students have strong learning skills and minimal problems with acting; others, however, need a lot of help with acting, improvisation, pronunciation, and intonation. An awareness of their shortcomings may in some cases decrease their self-confidence, raise speaking anxiety, and lower their motivation to perform. Thus, an appropriate balance must be found between correcting and helping students improve their language skills, on the one hand, and not demotivating them or decreasing their self-confidence, on the other. Observing the students' project on drama-in-language-education performance for three months made it clear that, through intense work and teamwork, the students were able to improve their language skills, especially speaking, reading, pronunciation, and vocabulary knowledge. The students learned many new words that they would not have encountered in their regular classroom instruction and were able to expand their grasp of EFL. In fact, producing a drama project with students helps to strengthen student relationships; it improves the atmosphere between the lecturer and the students and fosters togetherness. Finding an appropriate way to increase students' interest in learning EFL and to attract students is thus an important task for the majority of foreign language departments today. Performing plays in English can be one such way to reach the learning outcomes and sustain students' motivation in learning EFL.

B. The Interview

The data were gained from the interviews with the students of the English Education Department as participants. They were interviewed by the researcher with several questions to verify: students' feelings (numbers 2, 3, 4, 7), the core activities of project-based learning (numbers 1, 6), and language skills (number 5). The interview data were recorded as audio files and digital data.
When the participants were further questioned about the Drama in Language Education project-based learning, almost all of them mentioned that they felt happy doing the drama project. The participants' answers are given in order of the number of the citations from the interviews (unedited). For the purposes of this research, the researcher uses the terms project-based learning and language skills in EFL to refer to drama performance. Drama involves such activities as role-playing, mime, simulation, and improvisation [21]. Creative drama has been shown to have an impact on literacy and language development. By using their bodies and voices to dramatize the characters' words and actions, children gain a sense of how interactions among the characters shaped the events described in the story: "In this way they can touch, see, and experience the meaning of the words in the text" [22], [23]. Some views of the participants were associated with the core activities, including the drama performance project task, designed to make all students actively involved in the lesson. Drama activities offer a framework for successful language learning because they provide a stimulating and positive learning environment and can help reduce classroom anxiety, which can increase student motivation and participation [24].

"Project drama is an annual project in the English Education Department. In my opinion, this project is the most anticipated project because every process in this project is never easy, really out of the box." (p.8)

Problems arose for the class when some of the students did not want to participate, and the students reported difficulties they encountered while rehearsing for the drama performance. The big problem was scheduling rehearsals so that all members of a particular drama scene were available. Every week, the director (a student) attempted to find a time that would work for all group members involved, but this was nearly impossible and resulted in much frustration. In contrast, some of the students mentioned that they did not encounter problems; they enjoyed the core activities and the learning process. It has been argued that the freedom and challenge that students experience in solving the problems that arise in designing and building their projects result in high levels of student engagement [25]. The comments may indicate how project-based learning is a student-centered form of instruction based on three constructivist principles: learning is context-specific, learners are actively involved in the learning process, and they achieve their goals through social interactions and the sharing of knowledge and understanding.

(p.7), "Yes honestly I feel nervous when performing because this is my first time performing with costumes like this in front of many people and I am confident that I certainly can and can show my best" (p.8)

The researcher finds that, in order to perform a drama project, the students must not only understand the material of drama in language education but also find a way to express their feelings and communicate them creatively and effectively to the audience. Drama performance is an alternative project through which students can imagine, explore, create, and share in front of others. Apart from fostering language skills, drama performance as a project-based approach has further positive effects on a variety of social competences and personal skills.
One of the study's findings is that, despite initial resistance from the majority of the English language learners about taking this mandatory class, the drama pedagogy used in this classroom drew on students' personal and cultural experiences in the creation of identity texts and thereby provided room for a situated practice as well as multimodal representations of meaning. This process of creating performance-based identity texts, the author argues, cognitively engaged students and provided room for identity investment and therefore, despite initial challenges, helped many students with their linguistic and social performances [26].

(p.6), "Yes of course, I learn to write when revising drama script, learn to correct pronunciation when doing dialogue although there is someone who said this to me it is enough to doing dialogue with Indonesia accent to be easily understood, but I think it is a process for me to learn more in the pronunciation aspect. So I decide to use an English accent, I also learn to listen while listening to other characters dialogue in drama" (p.7), "Of course yeah, especially for the pronunciation. As the narrator I've to read some of the paragraph of the text. It is increasing my pronunciation" (p.8), "Yes, from this drama I learned how to pronounce English, then listen to English dialogue." (p.9)

Based on the interviews, the students have shown that they perceive many positive effects from engaging with drama in their language classrooms. Notably, some students seem to enjoy repeated oral practice with a text and having an opportunity to spend more time than usual practicing and focusing on intonation patterns. Rehearsal is useful for students in various ways: when learners rehearse, they engage with processes that include the establishment of characters, personalities, motives, and personas, thus creating a genuine purpose for communication [10]. The atmosphere of the drama performance encouraged the students to speak with their friends while playing their characters. Furthermore, it made them think about how they had to respond to other characters in a given situation. When students performed in an English drama, they had a purpose for speaking. To work on the drama performance, students also took on roles as researchers: they gathered information about the theme, studying related material by themselves from various resources such as texts, books, the Internet, YouTube, movies, and so on. These data were analysed and adapted into their drama project, which encouraged them to become autonomous learners. This is in line with research showing that the project-based learning component increases students' research skills, as students are required to take some responsibility for their own learning through the gathering, processing, and reporting of information from target-language resources [27] (p.4).

The researcher reports that EFL drama performance fosters and maintains students' motivation by providing an atmosphere full of fun and entertainment. Interestingly, the results of the study show that the audience was one important factor motivating the participants to perform with enjoyment. One of the students mentioned that she and her friends felt a little nervous when acting on stage in front of an audience; however, this positively encouraged them to do their best and put in their best effort to play their characters.
"I was very surprised, so many gave positive responses, until others did not think that I was said to be able to do extraordinary acting, according to them." (p.2), "The response from the audience is that good and suitable to play a role as a mother who is angry with her child and I didn't expect that, thank the response was good." (p.3)

It is obvious that the audience is an important factor motivating learners to perform in English. From the researcher's observation, the students seemed a little nervous when they worked on the make-up character performance; however, they looked more relaxed and confident on the day of the actual performance on stage. Students tried their best and showed great effort in their actual performance.

(p.7), "I'm not really feeling something special from the audience, because I'm a narrator, but when the make-up character performance, as Valak, all the audiences were scare of me, I don't know why, I just Hahaha" (p.8)

In terms of evaluation and feedback, which are important processes at this stage, the students' drama performance was evaluated based on the elements of drama, verbal language, non-verbal language, learning lines, and staging. The students were provided with feedback after finishing the performance and expressed their feelings at the end of it. The audience gave comments directly; some of them asked the student performers several questions.

"I was very surprised when the audience laughed at me, because my role as an informant had to be funny and strange. At that time I realized that I had managed to portray a strange and funny informant." (p.5), "The audience response when I was acting was, when Mrs. ____ commented that I was very happy and confident. She said that 'waaaaah so cool like auto of the box did not expect to be like a guy so A with F just lost with maleness L' Haha"

Based on the observations and interviews, the researcher points out that the students participated in various tasks of the project-based learning, such as the make-up character performance, script writing, set design, costumes, text analysis, casting, poster publicity, sound and effects integration, producing a trailer, promoting the drama on social media, and rehearsal. Students independently chose to work on different tasks according to their knowledge, abilities, and preferences. Moreover, they were allowed to take on different characters and roles. In this study, the lecturer was a facilitator who assisted the students and approved the content; the lecturer also stepped in whenever the students could not reach a common decision or needed help discussing important issues.

7. How do you feel now after performing your drama?

"Actually in the show there are some things that we really want to get angry, the stage setting is not in line with our group's expectations. Lots of frustration with the team that helped group one. Maybe there was no further communication that caused it. After the show also in the whatssapp group some people got on their emotions. But yes, everything has been done okay." (p.6), "The audience can clearly hear every dialogue that is carry up, but I think it was not optimal because the character's voice sometimes small (not too loud)" (p.7), "I just wish please not raining and all the actors of my group, their voice are loud, that's my wish."
(p.8), "I want during the drama to run smoothly from the beginning to the end and there is nothing I don't want" (p.9), "my feeling now must be happy because in my opinion the existence of a program like this is very fun even though the process is not very easy because they have to prepare this and that. not only mental retention but all property as well everything must be prepared carefully and it's not easy." (p.1), "More confident, it turns out that thing that I might not be able to do with real can I do." (p.2), "Certainly relieved and have a lot of experiences to be had." (p.3), "I feel happy and everything becomes a distinctive memory for me." (p.4)

The end product of this study was a drama performance on stage. The performance lasted approximately one hour, and the students performed in front of an audience with all the elements of a drama production. A dramatic performance benefits students in various ways: linguistic reinforcement, pronunciation practice, greater familiarity with the text, self-esteem development, discussion skills, and a focus on meaning [28].

"The drama was a success even though there were a few minor obstacles but I am very grateful to have a very extraordinary experience." (p.5)

The participants' views support the claim that the implementation of drama performance has positive effects on the development of individual and group work skills. Since drama performance as project-based learning enables students to develop their teamwork skills and build personal relationships, students' individual participation and group cooperation in drama enhance their interaction and build positive social relationships. Engaging the whole class in a drama performance was also suggested by the participants of this research; some recommended that all students, even the passive ones, be engaged in the drama project.

"What I feel after doing the drama is certainly a relief because I have already completed this assignment. In addition, our longing for our habits that always gather, joke, eat, discuss, like having new friends and family. We also know each other with their respective characters, our ignorance becomes discovered. The point is togetherness that will not be forgotten. Gratitude is grouped with them with our own efforts, our own efforts, without the help of others. yes we are proud, we are happy." (p.6)

The participants shared their views on drama performance. They stated that the learning activities can improve the language skills and academic performance of students who participate in drama, as they are more engaged in lessons than their non-participating counterparts. The views regarding the implementation of drama performance as project-based learning point to a positive environment for students from different backgrounds, providing not only lecturer and peer support on campus but also parental involvement in their lives. The next step was interviewing audience members from different backgrounds: students from the Department of Educational Administration, students' parents, and high school students. The interviews were conducted to verify what had been witnessed during the observation [19]. Interviewing is important in this research for obtaining verified data through the communicative exchange between interviewer and respondents.
Communicative competence has been defined as "that aspect of our competence that enables us to convey and interpret messages and to negotiate meanings interpersonally within specific contexts" [29]. The interview data were recorded as digital data. Interestingly, the results of the research show that the audience was one important factor that motivated the participants of the study to perform with enjoyment. Many family members came to see their children perform; some high school students watched their teacher perform; and some students mentioned that they felt proud to be part of the performance. It is obvious that the audience is an important factor motivating students to perform in English. From the researcher's observation, many students seemed a little nervous during rehearsals; however, they looked more relaxed and confident on the big day of their actual performance on stage. Students tried their best and showed great effort in their actual performance. When students know that larger audiences are waiting for their work, they become more dedicated to it. Drama project-based learning connects students of different backgrounds, language skill levels, ages, and departments. The drama performance was able to create a community of learners who helped each other in the process of language learning. It also helps to connect the School of Teacher Training and Education (STKIP) Muhammadiyah Bogor to the community in which it is based, bringing together individuals interested in theatre or in the language in which a drama is performed. Because of the positive responses, the implementation of drama performance has been suggested as an effective way to promote the English Education Department; in fact, performing a drama in the target language can help to increase enrolments and make the department more visible on campus. The research shows the benefits of integrating project-based learning and drama: the students had opportunities to improve their knowledge and practice their language skills by implementing a drama project based on their talents and individual differences.

V. CONCLUSION

"Tell me and I forget, teach me and I may remember, involve me and I learn" (an ancient Chinese proverb). The proverb reflects the process of the drama performance project at the School of Teacher Training and Education (STKIP) Muhammadiyah Bogor. The success of any program is strongly connected to the fidelity of its implementation [14]. The implementation of drama performance as project-based learning has myriad benefits for students, and drama activities are a unique way of enhancing students' motivation and participation. Language skills can be exposed to students not only through conventional techniques but also through drama performance. Project-based learning provides students with the opportunity to explore the contents of Drama in Language Education. To reach this outcome, the core activities focus on the elements of drama, creative writing (composing a play script), characterization, and make-up characters, providing students with information about drama in language education and giving them the opportunity to perform a drama in English. The lecturer's role in the teaching and learning process is crucial: it can influence the students' response towards project-based learning.
The researcher believes that the students were actively and productively involved in the drama project, from planning and preparing to staging the performance; the students felt free to demonstrate their creativity in developing their ideas, and their drama performance incorporated a range of media and forms of expression (written work, social media, and performance). The majority of lecturers like their students to be motivated: when students are motivated, they engage actively in classroom activities, so students' participation is something most lecturers desire. Based on what the participants directly experienced, learning English through drama performance is motivating, raises their critical thinking, explores their creativity, and improves their English language skills. Another important thing students can gain from drama performance is togetherness. This research therefore recommends implementing drama performance as project-based learning to promote students' learning skills, as it provides students with great opportunities to speak English and express themselves. Drama should become a greater part of language instruction; it not only encourages students and improves language skills but also fosters their social, emotional, and intellectual development. Further studies might explore how collaborations between students and lecturers from different subjects, or even other departments, can ease the workload for all involved and make an English-language drama production a successful and enjoyable experience for everyone. Despite these findings, the research reported here has limitations: it relied on qualitative analyses, and future research could include quantitative analyses of other aspects of drama in language education to expand and confirm the results of this study. It is hoped that the implementation of drama performance as project-based learning in English-skills contexts will be recognized and further explored.
Radiographic evaluation of dental age of adults using Kvaal's method.

INTRODUCTION: It is well known that dental development can be related to an individual's age, but after the age of 21 years, when the wisdom teeth complete their development, an optimal age estimation procedure is needed. With advancing age, the size of the pulp is reduced by secondary dentin deposition, and measurement of this reduction can be used as a parameter to assess the age of individuals, both living and dead.

AIMS AND OBJECTIVES: The purpose of the present study was to evaluate the feasibility of this approach for age estimation in adults, using Kvaal's method in the set sample.

MATERIALS AND METHODS: The material consisted of digital long-cone intraoral periapical radiographs from 50 subjects of either sex in the age group of 15-60 years, selected after evaluation against the set inclusion and exclusion criteria. The pulp width and length on radiographs of 6 selected teeth, namely the maxillary central incisor, lateral incisor, and second premolar and the mandibular lateral incisor, canine, and first premolar of either the right or left side, were measured using the RVG Trophy software [Trophy® Windows is a software program supplied by Trophy Radiologie (Trophy Windows Version 5.03, Copyright 1993-2002, Trophy RVG patented by Trophy, Chicago)]. To compensate for differences in magnification and angulation, various ratios were calculated; the mean of all ratios (M) was taken as the first predictor, while the difference between the mean of the 2 width ratios and the mean of the 2 length ratios (W − L) was taken as the second predictor. Regression formulae for all 6 teeth, the 3 maxillary teeth, the 3 mandibular teeth, and each individual tooth were derived, and the age was assessed. The assessed age was then correlated with the actual age of the patient using the Student's t test.

RESULTS: The coefficient of determination (R²) was the strongest (0.198) for the mandibular first premolar, indicating that age can be estimated best with this particular tooth. No significant difference was observed between the estimated age and the actual age (P > 0.05), except for the mandibular lateral incisor and the maxillary lateral incisor, where a significant difference was observed.

CONCLUSION: The results of the present study suggest the feasibility of Kvaal's method for age estimation in the set sample.

Introduction

Forensic odontology is one of the most unexplored and intriguing branches of the forensic sciences. Age estimation constitutes an important factor in the identification of an individual in forensic odontology, and the search for optimal age estimation procedures has continued over the years to the present day. [1] A radiographic approach, if used, offers a relatively nondestructive method and eliminates the need for extraction of teeth. [2] The dental pulp is a delicate soft tissue enclosed within the confines of calcified structures, namely dentin and enamel, and is well protected from the external tooth environment. The regressive changes in the pulp have also been related to age. It is well known that both the developmental and the regressive changes of the tooth can be related to chronological age. [4] The size of the pulp decreases with age due to the deposition of secondary dentin, and this is a continuous process that occurs throughout life.
[5] Hence, the dental pulp can be used as a parameter to assess the age of an individual even in later periods of life, when other methods cannot be employed. Kvaal's method [2] is one such method; it was initially applied to intraoral periapical radiographs and, very recently, to digital orthopantomographs (OPGs) for estimating the age of an individual. [6]

Materials and Methods

The present study comprised 50 subjects of either sex in the age group of 15-60 years, from whom informed consent was obtained after explaining the aims of the study and the procedure in a language understandable to them. For each subject, a thorough medical history was elicited to rule out any systemic disorders, and a proof of date of birth, preferably a copy of the birth certificate, was obtained and submitted to another observer who was not associated with the procedure. Subjects who failed to produce authenticated proof of their date of birth were excluded from the study. Digital intraoral periapical radiographs were acquired using a Trophy RVG machine with exposure factors of 65 kVp and 8 mA for 0.2 s for the 6 teeth of either the right or left side, i.e., the maxillary central incisor, maxillary lateral incisor, maxillary second premolar, mandibular lateral incisor, mandibular canine, and mandibular first premolar. Subjects in whom the required teeth were missing, impacted, carious, filled, prosthetically restored, or malposed, or who had periapical or pulpal pathologies or morphological abnormalities including attrition, abrasion, or erosion, were not taken into consideration. For each of these teeth, length and width measurements of the tooth and pulp were made using the RVG Trophy software. Age was assessed for all subjects by regression using 2 predictors: the mean of all ratios (M) was taken as the first predictor, while the difference between the mean of the 2 width ratios and the mean of the 2 length ratios (W − L) was taken as the second predictor. Prior to running the regression, correlation was carried out to find the relationship between age and the variables. Regression formulae for all 6 teeth, the 3 maxillary teeth only, the 3 mandibular teeth only, and each individual tooth were derived, and the age was assessed for each individual. The entire statistical analysis was performed using the SPSS (Version 13) software. The assessed age was then compared with the actual age of the patient using the Student's t test.

Results

The study comprised 21 males and 29 females. The mean age of the subjects was 25.78 years for males and 22.73 years for females. The correlation between age and the ratios of measurement from each tooth is depicted in Table 1. There was a significant correlation between age and "M" for the upper second premolar and the lower first premolar. A significant correlation was also seen between age and the second predictor "W − L" using the upper central incisor. The regression equations derived for assessing the age are depicted in Table 2. When the 6 selected teeth were taken individually, the coefficient of determination R² was the strongest for the lower first premolar, indicating that age can be estimated best with this particular tooth when "M" and "W − L" are considered as predictors of age. Only "M" was found to be a significant predictor (P < 0.05) in this case. When the 3 upper and 3 lower teeth were taken together, R² was higher for the upper teeth than for the lower teeth.
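To make the predictor construction described above concrete, the sketch below builds M and W − L from ratio measurements, fits the regression, and runs the paired t test. The numbers are invented placeholders, not data from this study, and the study itself used SPSS rather than Python; this is only a minimal illustration of the workflow.

```python
import numpy as np
from scipy import stats

# Hypothetical pulp/tooth ratio measurements for one tooth type, one row
# per subject: two width ratios and two length ratios, as in Kvaal's method.
width_ratios = np.array([[0.28, 0.24], [0.22, 0.19], [0.31, 0.27],
                         [0.18, 0.16], [0.25, 0.22]])
length_ratios = np.array([[0.71, 0.78], [0.65, 0.70], [0.74, 0.80],
                          [0.60, 0.66], [0.69, 0.75]])
actual_age = np.array([24.0, 37.0, 21.0, 52.0, 30.0])

# Predictors: M = mean of all ratios, W - L = mean width - mean length ratio.
all_ratios = np.hstack([width_ratios, length_ratios])
M = all_ratios.mean(axis=1)
W_minus_L = width_ratios.mean(axis=1) - length_ratios.mean(axis=1)

# Fit age = b0 + b1*M + b2*(W - L) by ordinary least squares.
X = np.column_stack([np.ones_like(M), M, W_minus_L])
coef, *_ = np.linalg.lstsq(X, actual_age, rcond=None)
estimated_age = X @ coef

# Paired t test between estimated and actual age, as in the study. Note
# that in-sample the mean difference is zero by construction (the model
# includes an intercept); in practice the test is informative on subjects
# not used to derive the regression equation.
t_stat, p_value = stats.ttest_rel(estimated_age, actual_age)
print("coefficients:", coef, "  t:", t_stat, "  p:", p_value)
```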
In both the upper teeth and the lower teeth, "M" and "W − L" were found to be insignificant. When all 6 teeth were taken together and the age was estimated with "M" and "W − L" as the predictors, only "M" was found to be a significant predictor, and the coefficient of determination was low. The age of the subjects was then estimated by substituting the values of "M" and "W − L" into the regression equation for each individual tooth, the 3 upper teeth together, the 3 lower teeth together, and all 6 teeth combined, and this estimated age was compared with the actual age using the Student's t test [Table 3]. From the comparison of actual and assessed age, no significant difference was observed between the estimated age and the actual age for all (P > 0.05) except the mandibular lateral incisor and the maxillary lateral incisor. The bar diagram comparing the mean actual age and the mean estimated age is shown in Figure 1.

Discussion

Based on the study of age estimation in adults from measurements of pulp size on intraoral periapical radiographs by Kvaal et al., we assessed the age of the subjects using digital long-cone intraoral periapical radiographs of the 6 selected teeth. All the required measurements were made using the inbuilt Trophy digital software of the RVG unit. A digital intraoral periapical radiograph, in contrast to other radiographs such as an OPG, provides good image detail and definition without any superimpositions. It was also suggested by Willems et al. that it might be worthwhile to produce a calibrated digital image of the radiograph in order to be able to perform digital linear measurements, which might produce the most accurate results. [1] The results of the study by Kvaal showed that the coefficient of determination was highest when the ratios of all 6 teeth were taken and lowest when the mandibular canines alone were taken. [2] In the present study, by contrast, the coefficient of determination was highest when the lower first premolar was used and lowest when the lower 3 teeth were used together. When the age of the subjects was estimated by substituting the values of "M" and "W − L" into the derived regression equations and compared with the actual age, there was no significant difference between the mean actual age and the mean estimated age for the lower first premolar, lower canine, upper second premolar, upper central incisor, the 3 upper teeth taken together, the 3 lower teeth taken together, and all 6 teeth taken together (P > 0.05), which is consistent with Kvaal's study. [2] But a significant difference was observed between the actual and estimated age when the upper lateral incisor and the lower lateral incisor were used. The difference from the observations in Kvaal's study can be attributed to the use of a different measurement technique: the length measurements in Kvaal's study were obtained on conventional radiographs using vernier calipers, and the width measurements using a stereomicroscope with a measuring eyepiece, to the nearest 0.1 mm, whereas in our study digital radiographs were acquired and the measurements were obtained using a standardization procedure. A similar study was carried out on digital OPGs of a Caucasian population by Bosmans et al. in 2005.
[6] In their study, they found no significant difference between the actual age and the age calculated from the regression equation for all 6 teeth taken together and for the 3 mandibular teeth taken together, which is quite consistent with the present study. They found a significant difference between the actual and calculated age for the 6 teeth taken individually and for the 3 upper teeth taken together; in our study, however, even the 3 upper teeth taken together and all 6 teeth taken together gave no significant difference between the actual and the estimated age. [6] From their study, they concluded that all 6 teeth taken together were the strongest predictors for age estimation, whereas according to the results of the present study, the lower first premolar was the strongest predictor. Another reason for the difference between the results of this study and other similar studies is variation in the sample, which in the reference study was drawn from a Norwegian population. Many other studies based on similar parameters have also been carried out, one of which explored whether measurements of the size of the pulp cavity performed on digital OPGs can be used for individual age estimation. In a study carried out by Paewinsky et al., measurements were made digitally for 6 types of teeth on OPGs of individuals aged between 14 and 81 years. The width ratios of the pulp cavity showed a significant correlation with chronological age, and the coefficient of determination (r²) was highest for the upper lateral incisors (r² = 0.913) when an exponential or logistic regression model was constructed; with a linear regression model at the same distance, the coefficient of determination reached 0.839. [7] On similar grounds, Cameriere et al., in 2007, carried out a study examining the application of the pulp/tooth area ratio, obtained from digital periapical images of upper and lower canines, as an indicator of age. Separate linear regression equations were obtained for age estimation using upper and lower canines. The regression explained 86% of the variation between chronological and estimated age, with a residual standard error of about 5.4 years, and it was concluded that canines can serve as appropriate variables to predict the age of an individual. [8] RVG was also applied in a study conducted by Velmurugan et al. in 2008 to determine morphological measurements of the pulp chamber and to establish the relationship of the CEJ to the roof of the pulp chamber of the maxillary first molars in an Indian population. These measurements revealed that the morphological measurements of the maxillary first molars in the Indian population were similar to those reported by previous studies; the roof of the pulp chamber was found at the CEJ in 96% of the specimens. [9] It is thus clear that various studies have made use of digital systems, either RVG for intraoral periapical radiographs or digital OPGs, to assess the relationship of various tooth parameters with the age of an individual. Accuracy and precision are important in assessing age. Accuracy refers to the closeness of a computed value to its true value. Any difference found can be attributed to many variables, including the precision of the method, the age distribution of the sample, the sample size, and the statistical approach used.
[10] The present study made use of digital intraoral periapical radiographs for the estimation of age applying Kvaal's technique, and although there are variations among the results of similar studies, the feasibility of the technique is certain.

Conclusion

To conclude, this study was an attempt to apply Kvaal's method to digital intraoral periapical radiographs to assess the age of individuals in the set sample, and the results suggest that Kvaal's method can be used for age estimation. Furthermore, among all the chosen teeth, the results may be best when the lower first premolar is used. The study also opens the way for future studies on larger samples, with adequate representation of different age groups and sex distributions.
Berry Phases in the Reconstructed KdV Equation

We consider the KdV equation on a circle and its Euler-Poincaré reconstruction, which is reminiscent of the equation of motion for fluid particles. For periodic waves, the stroboscopic reconstructed motion is governed by an iterated map whose Poincaré rotation number yields the drift velocity. We show that this number has a geometric origin: it is the sum of a dynamical phase, a Berry phase, and an 'anomalous phase'. The last two quantities are universal: they are solely due to the underlying Virasoro group structure. The Berry phase, in particular, was previously described in [arXiv:1703.06142] for two-dimensional conformal field theories, and follows from adiabatic deformations produced by the propagating wave. We illustrate these general results with cnoidal waves, for which all phases can be evaluated in closed form thanks to a uniformizing map that we derive. Along the way, we encounter 'orbital bifurcations' occurring when a wave becomes non-uniformizable: there exists a resonance wedge, in the cnoidal parameter space, where particle motion is locked to the wave, while no such locking occurs outside of the wedge.

Introduction and summary of results

It is quite generally true that the state vector of a quantum system undergoing cyclic changes of reference frames picks up Berry phases [2,3]. Typical examples of this behaviour include Thomas precession [4], a spin in a slowly rotating magnetic field [2,5,6], and its non-compact analogue [7] which appears in the quantum Hall effect [8]. In [1], such Berry phases were shown to arise in two-dimensional conformal field theories (CFTs) coupled to an environment that produces adiabatic conformal transformations. These phases can be computed exactly despite the infinite-dimensional parameter space, and coincide with 'geometric actions' of the Virasoro group [9]. From now on, we refer to them as Virasoro Berry phases. They are reminiscent of the response of a quantum Hall fluid to metric deformations, where the parameter space is infinite-dimensional as well [10]. The goal of this paper is to exhibit classical systems where Virasoro Berry phases are realized dynamically, i.e. without any implicit coupling to the 'environment'. This notably includes the Korteweg-de Vries (KdV) equation [12], or rather its Euler-Poincaré reconstruction [13-16], but it applies more generally to any Lie-Poisson equation based on the Virasoro group [17,18], such as the Hunter-Saxton and Camassa-Holm equations [19]. Indeed, reconstructed Lie-Poisson equations yield geodesics on Lie groups, and powerful geometric tools can then be used to predict universal properties of the reconstructed dynamics, such as Berry phases appearing when the system's motion in momentum space is periodic. For example, the Lie-Poisson system of SO(3) yields the standard Euler equations for the angular momentum of a rigid body. When the angular momentum performs one period of its motion, the final orientation of the body in space differs from its initial one by a rotation whose angle is known as a Montgomery phase [20,21]; it is the sum of a dynamical phase and a geometric phase due to adiabatic rotations. The purpose of this paper is thus to describe the Virasoro analogue of Montgomery phases. For the record, this is not the first time that geometric phases have been found in the KdV equation: such phases were indeed found in [22] and reproduced, among other things, the standard phase shift occurring after the collision of two solitons.
However, [22] crucially used the effective, finite-dimensional phase space description of KdV solitons, and the corresponding geometric phases are Hannay angles in a finite-dimensional parameter space. This is radically different from what we do here, since we, by contrast, explicitly use the infinite-dimensional nature of the Virasoro group and never rely on soliton dynamics per se. In this sense, there is, to our knowledge, no overlap between [22] and this work, other than the general context. We now explain how Virasoro Berry phases can be observed through the motion of suitable (comoving) 'fluid particles', and how these phases can be computed. We then expose the plan of our work.

Summary of results. This work relies on a fair amount of symplectic geometry and Virasoro group theory, none of which is reviewed in a self-contained manner; we refer e.g. to [23] for an introduction to the former, and to [24,25] for the latter. Nevertheless, it is straightforward to describe the main aspects of our work with minimal technicalities. Namely, let p(x, t) be a (spatially 2π-periodic) wave profile that solves the KdV equation

∂p/∂t + 3p ∂p/∂x − (c/12) ∂³p/∂x³ = 0,   (1)

where c ≠ 0 is a constant parameter (the Virasoro central charge). Suppose, then, that a particle on the line has a position x(t) that satisfies

dx/dt = p(x(t), t),   (2)

with initial position x(0) = x₀, say. This particle could be, for example, a small fluid element in a shallow water channel supporting the wave p. Our goal is to find, analytically, general properties of the resulting solution x(t), such as the drift velocity

v_Drift ≡ lim_{t→∞} x(t)/t.   (3)

To ensure that the latter is a well-defined quantity, we add one extra condition: we require the wave p to be periodic in time, i.e. p(x, t + T) = p(x, t) for some T > 0. Then, there exists a (time-independent) diffeomorphism x → F(x) of ℝ such that, after N periods,

x(NT) = F^N(x₀),   (4)

where F^N denotes the N-fold composition of F. We can thus think of the 'stroboscopic' motion of particles at integer multiples of the period T as a discrete-time dynamical system governed by the map F. From that perspective, the drift velocity (3) reads

v_Drift = Δφ/T,   (5)

where Δφ is the Poincaré rotation number of F [24, sec. 4.4.3]. It is easily read off by integrating eq. (2) numerically over many periods. As we now explain, there is in fact a way to predict the value of Δφ, analytically, using group theory and symplectic geometry. This value involves, in particular, a Virasoro Berry phase.

To see where symplectic geometry plays a role, one has to think of (1) as a Lie-Poisson equation for the Virasoro group. The phase space of any such system is the cotangent bundle of a Lie group, where the cotangent part consists of 'momenta', while the group manifold is a space of 'positions' or 'configurations'. In the KdV case, for instance, p(x, t) is a Virasoro momentum (which justifies our notation). By construction, the motion of momenta determines that of configurations through Euler-Poincaré reconstruction [13-15]. In the KdV case, this reconstruction turns out to precisely take the form of eq. (2), as explained in greater detail in section 3.2. Importantly, periodic motion of momenta does not, in general, imply periodicity of configurations. Instead, when the system performs a loop in momentum space, its configuration typically traces an open path, and the difference between the initial and final positions can be interpreted as an (an)holonomy. The latter involves a Berry phase associated with adiabatic changes of reference frames, exactly as in the aforementioned example of Montgomery phases [20,21].
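To illustrate the remark about numerical integration, the sketch below integrates eq. (2) for a simple time-periodic profile and reads off the rotation number from the stroboscopic map. The single-harmonic profile used here is a convenient stand-in, not an actual KdV solution, and the numerical values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative time-periodic profile p(x, t): a single travelling harmonic,
# used as a stand-in for an actual KdV wave (it does not solve KdV itself).
v_wave = 1.3                     # wave speed
T = 2 * np.pi / v_wave           # time period of p(x, t)

def p(x, t):
    return 0.4 + 0.3 * np.cos(x - v_wave * t)

def stroboscopic_positions(x0, n_periods=200):
    """Integrate the reconstruction equation dx/dt = p(x, t) and sample
    the solution at integer multiples of the period T."""
    t_eval = T * np.arange(n_periods + 1)
    sol = solve_ivp(lambda t, x: p(x, t), (0.0, n_periods * T), [x0],
                    t_eval=t_eval, rtol=1e-10, atol=1e-12)
    return sol.y[0]

x_strobe = stroboscopic_positions(x0=0.0)

# Poincare rotation number of the stroboscopic map F: the average advance
# per period; the drift velocity is then dphi / T, as in eq. (5).
dphi = (x_strobe[-1] - x_strobe[0]) / (len(x_strobe) - 1)
print("rotation number:", dphi, "  drift velocity:", dphi / T)
```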
For Lie-Poisson systems based on the Virasoro group, such as KdV, the holonomy in the space of configurations is precisely the angle Δφ of (5), except it can now be written as a sum (63) whose schematic form is

Δφ = Dynamical phase + Berry phase + Anomalous phase,   (6)

where the last two terms are universal. In that expression, the first term, proportional to the period T, is a dynamical phase, while the second term is a Virasoro Berry phase associated with adiabatic diffeomorphisms [1]. The anomalous term is a contribution due to the Virasoro central extension and may be seen as the integral of a Berry connection along the inverse of the reconstructed path. Both the Berry phase and the anomalous term are universal: they solely follow from Virasoro group theory and take the same form regardless of dynamics (though the path p(x, t) that determines their value does, of course, depend on dynamics). Furthermore, the dynamical phase and Berry phase are known functionals of p(x, t); the anomalous term, on the other hand, is an implicit integral (62). All these functionals turn out to simplify greatly for travelling waves p(x, t) = p(x − vt), which eventually yields eq. (89) for Δφ. As a result, for cnoidal waves, the three terms of (6) can be evaluated analytically at any point in parameter space; they are displayed in eqs. (101)-(103). Their sum coincides with the value (5) that can be computed by other means, thanks to a suitable uniformizing map that we derive, and leads to the compact formula (99) for the drift velocity.

It should be noted that our derivation of eq. (6) for KdV rests on one key technical assumption: the profile p(x, t) must be uniformizable, or amenable, in the sense that there exists a conformal transformation (i.e. a diffeomorphism of the circle) mapping it on some uniform, x-independent, profile k. This is generally not guaranteed, as there exist a great many Virasoro coadjoint orbits without a uniform representative [26,27]. For any profile that does not satisfy the assumption of amenability, a notion of drift does exist in the sense of eqs. (3) and (5), but the corresponding Δφ is an integer multiple of 2π and cannot be written in the form (6). Following [28], we will show that such a regime occurs for cnoidal waves with sufficient pointedness: there exists a resonance wedge in the cnoidal parameter space where (6) does not apply, and, in that wedge, particle motion is 'locked' to the travelling wave: v_Drift = v_Wave. The transition along the wedge boundary is reminiscent of the sniper bifurcation of the Adler equation [29]. Outside of the wedge, cnoidal waves are amenable and eq. (6) applies, leading to a drift velocity v_Drift ≠ v_Wave.

An important motivation for this work stems from fluid dynamics, where the KdV equation notoriously describes shallow water waves [30]. Indeed, in a comoving frame, the leading equation of motion for fluid particles in a two-dimensional channel supporting KdV waves is nearly identical to the reconstruction equation (2): the only difference is the presence of a (large) constant term on the right-hand side (see eq. (68) below and the surrounding discussion). The symplectic formula (6) for Δφ, along with the drift velocity (3), thus suggests that Virasoro Berry phases contribute to the Stokes drift velocity of particles in shallow water [31], similarly to the crest slowdown phenomenon observed in wave breaking [32]. However, the seemingly innocuous change of reference frames that distinguishes eq. (2) from the actual equation of motion for fluid particles turns out to be crucial: it implies that Stokes drift in standard shallow water dynamics differs from the drift velocity introduced here, whose (subleading) effect is entirely washed out by the overwhelming, dominant contribution of the overall velocity √(gh) of the comoving frame. More on that in section 3.3. Prospects for actual observations of Virasoro Berry phases are relegated to the conclusion of this paper.
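Before turning to the plan of the paper, we record the textbook facts about the Adler equation invoked above, for comparison with the resonance wedge; this is standard material in our own notation, not a result specific to this work.

```latex
% Adler equation: the textbook model of phase locking.
\[
  \dot{\varphi} \;=\; \omega + \varepsilon \sin\varphi .
\]
% Fixed points exist iff |\omega| \le |\varepsilon|: inside this wedge of
% the (\omega, \varepsilon) plane the rotation number vanishes (locking),
% and the two fixed points merge at the boundary in a saddle-node-on-
% invariant-circle ('sniper') bifurcation. Outside the wedge, the phase
% drifts with mean frequency
\[
  \langle \dot{\varphi} \rangle
  \;=\; \operatorname{sgn}(\omega)\,\sqrt{\omega^{2} - \varepsilon^{2}},
  \qquad |\omega| > |\varepsilon| .
\]
```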
(2) from the actual equation of motion for fluid particles turns out to be crucial: it implies that Stokes drift in standard shallow water dynamics differs from the drift velocity introduced here, whose (subleading) effect is entirely washed out by the overwhelming, dominant contribution of the overall velocity √(gh) of the comoving frame. More on that in section 3.3. Prospects for actual observations of Virasoro Berry phases are relegated to the conclusion of this paper. Plan of the paper. This work is not self-contained: the necessary prerequisites include symplectic geometry [23] and Virasoro group theory [24,25], and the parts concerning cnoidal waves heavily rely on [28]. We will not review any of that content here, but we do adopt a logical flow that corresponds to the way one would naturally teach the subject. Accordingly, the structure is as follows. First, section 2 contains general prerequisites in symplectic geometry. In it, we introduce Euler-Poincaré reconstruction and derive an abstract formula for the rotation ∆φ associated with any periodic solution of a Lie-Poisson system based on a centrally extended group, provided the solution has a U(1) stabilizer in a suitable sense. This leads to eq. (48), which is critical to the rest of the paper (and is new to our knowledge, as it contains an 'anomalous phase' that appears to have been overlooked so far). In section 3 we apply this formula to any Lie-Poisson wave equation based on the Virasoro group, resulting in eq. (63) for ∆φ. We also establish the link between reconstruction and the equation of motion (2), hence between geometric phases and the drift velocity (3), and comment on the important difference between the latter notion and that of Stokes drift [31]. Section 4 is devoted to the application of these arguments to travelling waves, and to a comparison between the geometric prediction (6) and the value of ∆φ computed analytically. To that end, we actually find a general formula for 'uniformizing maps' of travelling waves satisfying KdV, and deduce an exact expression for the solution of the equation of motion (2), from which the drift velocity (3) follows. As we shall see, the drift velocity is indeed perfectly predicted by the symplectic formula (6), but the values of cnoidal parameters strongly affect the drift velocity - in particular, waves located in a certain 'resonance wedge' produce particle motion that is locked to the wave, confirming the existence of 'orbital bifurcations' anticipated in [28]. Finally, we conclude in section 5 with a discussion of potential follow-ups of our work. For completeness, the appendix collects further details of group theory and symplectic geometry needed in section 2. Reconstruction and Berry phases This section is a mathematical prelude. We start by briefly reviewing general aspects of Lie groups and their relation to Lie-Poisson equations [18], then turn to the key method of Euler-Poincaré reconstruction [13][14][15], which will be instrumental for the entire paper. Following that, we derive general formulas for the reconstructed rotation angle ∆φ in Lie-Poisson systems with a U(1) stabilizer - first for generic groups, then for centrally extended ones. The former case includes Montgomery phases as an application [20,21], while the latter is crucial for the Virasoro group and the KdV equation. Lie groups and Lie-Poisson equations Lie-Poisson equations are Hamiltonian systems whose dynamics is almost entirely fixed by a parent Lie group.
For instance, the group SO(3) of spatial rotations leads to the motion of free-falling rigid bodies, while the Virasoro group is associated with a host of non-linear wave equations that includes the KdV, inviscid Burgers, Hunter-Saxton and Camassa-Holm equations [17]. Here, as a preparation for KdV and its cousins, we recall the derivation of Lie-Poisson equations in a general group-theoretic setting. We refer to the appendix for the minimal necessary background on Lie groups and symplectic geometry; see also [18] for a pedagogical introduction. Let G be a Lie group with algebra g, whose dual space is g*. The adjoint representation of G on g is defined, for all g ∈ G, by Ad_g(ξ) ≡ ∂_t|_0 (g e^{tξ} g^{-1}), where e^{tξ} is the exponential of tξ ∈ g. For matrix groups, the right-hand side boils down to gξg^{-1}. The dual of the adjoint is the coadjoint representation of G, given for all g ∈ G, p ∈ g*, ξ ∈ g by

⟨Ad*_g(p), ξ⟩ ≡ ⟨p, Ad_{g^{-1}}(ξ)⟩.    (7)

In what follows, the coadjoint representation will play a key role, so we reduce clutter by writing it as g · p, instead of the heavier notation Ad*_g(p). The Lie-algebraic analogue of the coadjoint representation will be denoted as ad* and is defined by the derivative of Ad*, that is, ad*_ξ ≡ ∂_t|_0 Ad*_{e^{tξ}}. Using (7), this is equivalent to

⟨ad*_ξ(p), ζ⟩ = −⟨p, [ξ, ζ]⟩,    (8)

where p ∈ g* and ξ, ζ ∈ g, with [·,·] the Lie bracket. Lie-Poisson equations. As a starting point towards the Lie-Poisson construction, note that g* can be seen as a phase space, since it can be endowed with a Poisson structure. Indeed, given any real function F on g*, its differential dF_p at a point p is a linear map from T_p g* ≅ g* to ℝ. Thus dF_p can be seen as an element of the Lie algebra g ≅ (g*)*, and one defines the Kirillov-Kostant bracket on g* as

{F, G}(p) ≡ ⟨p, [dF_p, dG_p]⟩.    (9)

Once we think of g* as a phase space, it is immediate to write down evolution equations: any Hamiltonian H on g* determines the time-dependence of a function F according to

Ḟ = {F, H},    (10)

where Ḟ ≡ dF/dt. The left-hand side can be written as Ḟ(p) = dF_p(ṗ), where ṗ is the vector field given by the Hamiltonian flow. Thus, removing the differential dF_p from both sides of (10), we read off the equation of motion

ṗ = ad*_{dH_p}(p).    (11)

Given H, this yields a unique curve p(t) in phase space for any initial condition p(0). To derive Lie-Poisson equations, one restricts attention to quadratic Hamiltonians. This requires an extra bit of terminology: by definition, an inertia operator is an invertible linear map

I : g → g*    (12)

which is self-adjoint in the sense that ⟨I(ξ), ζ⟩ = ⟨I(ζ), ξ⟩, and positive definite in the sense that ⟨I(ξ), ξ⟩ > 0 for any non-zero ξ ∈ g. Any such map defines a (positive-definite) quadratic Hamiltonian

H(p) ≡ (1/2) ⟨p, I^{-1}(p)⟩,    (13)

and the associated evolution equation (11) reads

ṗ = ad*_{I^{-1}(p)}(p).    (14)

This is the Lie-Poisson equation of the group G, given the inertia operator I. One can show that it is equivalent to a geodesic equation on G for the right-invariant metric induced by I [18, sec. 4.3]. The point of 'reconstruction' will precisely be to recover a geodesic g(t) ≡ g_t in G from a solution p(t) of (14). Remarks on coadjoint orbits. The name 'inertia operator' stresses that Lie-Poisson equations generalize the Euler equations of motion for free-falling rigid bodies. The latter have a configuration space G = SO(3) and an inertia tensor specified by their distribution of mass. The Lie algebra so(3) and its dual respectively consist of angular velocities and angular momenta, the two being related through the inertia tensor I. The time evolution of angular momentum, as seen from a (non-inertial) reference frame attached to the body, is given by eq. (14).
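As a finite-dimensional warm-up (a sketch, not part of the paper's Virasoro setup), the rigid body realizes eq. (14) on so(3)* ≅ ℝ³: with the sign conventions of eq. (8), ad*_ξ(p) = ξ × p, so the Lie-Poisson equation reads ṗ = ω × p with ω = I^{-1}(p). (The familiar body-frame Euler equations, ṗ = p × ω, correspond to the opposite, left-invariant convention; the conserved quantities are identical.)

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rigid-body Lie-Poisson dynamics on so(3)* ~ R^3: with the conventions of
# eqs. (8) and (14), dp/dt = ad*_{I^{-1}(p)}(p) = omega x p, omega = I^{-1}(p).
I_diag = np.array([1.0, 2.0, 3.0])                 # anisotropic inertia tensor
rhs = lambda t, p: np.cross(p/I_diag, p)           # omega x p

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.2, 0.1], rtol=1e-10, atol=1e-12)

# Motion stays on a coadjoint orbit (here a sphere |p| = const), and the
# quadratic Hamiltonian (13) is conserved:
print(np.ptp(np.linalg.norm(sol.y, axis=0)))                      # ~ 0
print(np.ptp(0.5*np.sum(sol.y**2 / I_diag[:, None], axis=0)))     # ~ 0
```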
By contrast, however, in any inertial frame, the angular momentum vector is constant. This example illustrates a key general aspect of eq. (14). Namely, any solution p(t) of (14) is such that p(t_1) and p(t_2) are related by a change of reference frames, for all t_1, t_2, in the sense that p(t) = f_t · p(0) for some path f_t in the group manifold. Thus, once an initial condition is fixed, the motion of p(t) takes place on a single coadjoint orbit of the group G,

O_{p(0)} ≡ {g · p(0) | g ∈ G}.    (15)

In particular, there always exists a frame where the motion of p(t) is trivial, namely f_t^{-1} · p(t) = p(0). For the Euler top, this is achieved in any inertial frame. The orbit (15) is a submanifold of g*, so it is typically specified by a certain number of continuous parameters whose value remains constant in time. In that sense, the statement p(t) = f_t · p(0) is a conservation law. This will allow us to fix a particular orbit representative, say k ∈ g*, and write time evolution as p(t) = g_t · k for some path g_t in G. Suitable choices of k will then greatly simplify the reconstruction equation for g_t. Euler-Poincaré reconstruction As just reviewed, the dual g* of an algebra g is a space of momenta endowed with a Poisson structure (9); the Lie-Poisson equation (14) describes a Hamiltonian system in that space. We now extend this picture by thinking of the group G as the configuration manifold of the system, with a phase space given by the cotangent bundle T*G ≅ G × g*. From that perspective, Lie-Poisson dynamics is a 'reduction' of more complete, parent dynamics in T*G. In the opposite direction, Euler-Poincaré reconstruction will lift the motion p(t) in g* to a curve (g_t, p(t)) in T*G, with g_t ∈ G determined by p(t) (see fig. 1). In the remainder of this section, we derive a general property of the reconstructed path g_t when p(t) is periodic, with p(T) = p(0) for some period T. As we shall see, despite the periodicity of p(t), the curve g_t is generally not closed: g_T ≠ g_0. This inequality will turn out to reflect a holonomy in the principal G-bundle T*G, and involves the sum of a dynamical phase and a Berry phase. Accordingly, the next few pages are crucial for the rest of the paper. We warn the reader that the discussion relies heavily on Lie group theory and symplectic geometry; some technical details are relegated to the appendix. For a pedagogical introduction, see e.g. [23]; see also [13,14] for a detailed account of Euler-Poincaré reconstruction in general, including a discussion of geometric phases. Defining Euler-Poincaré reconstruction. We show in the appendix that the cotangent bundle T*G, i.e. the phase space of the reconstructed system, is a trivial bundle: it is equivalent to the product G × g*. In particular, the symplectic form ω = −dA of G × g* is obtained by pulling back the standard Liouville symplectic form of T*G, with

A_{(g,p)} = ⟨p, dg g^{-1}⟩,    (16)

where dg g^{-1} ≡ d(R_{g^{-1}})_g is the right Maurer-Cartan form (and R_{g^{-1}} denotes right multiplication by g^{-1}). [Footnote 6: This actually holds for any Hamiltonian in eq. (11), since the latter makes ṗ(t) tangent to the orbit of p(t) regardless of H. Quadratic Hamiltonians are special in that the reconstruction of (14) is a geodesic in G with respect to an invariant metric [18, sec. 4.3], which makes the dynamics more tractable.] [Footnote 7: See eq. (124) in the appendix.] [Figure 1: A schematic picture of Euler-Poincaré reconstruction: a path p(t) in g* is lifted to a pair of paths (g_t, p(t)) in G × g* ≅ T*G.] The one-form (16) is the group-theoretic version of what is commonly
written in mechanics as p dq, with dq the Maurer-Cartan form of the group ℝ. It is also the Berry connection that will eventually give rise to Berry phases in reconstructed dynamics (which is why we call it 'A'), so it is an essential object for all that follows. Now consider the Lie-Poisson system (14) from the point of view of the full phase space T*G ≅ G × g*. The key to Euler-Poincaré reconstruction is the fact, emphasized in section 2.1, that any curve p(t) which solves (14) lies entirely on a single coadjoint orbit of the group G. Thus, the path in momentum space can be written as

p(t) = g_t · k    (17)

for some fixed coadjoint vector k and some path g_t in G. Note that k need not coincide with p(0) = g_0 · k, as g_0 may well differ from the identity - indeed, this will exactly occur below. It is thus tempting to consider paths of the form (g_t, g_t · k) in G × g*, and declare that any such path is a reconstruction of g_t · k. However, this naïve definition suffers from a 'gauge redundancy': for any curve h_t in G such that h_t · k = k, one has p(t) = g_t · k = g_t h_t · k even though the paths g_t and g_t h_t differ. To fix this, one additionally requires g_t to be a geodesic in G with respect to a right-invariant metric determined by the inertia operator I, which turns out to produce the condition [14, sec. 13.5]

ġ_t g_t^{-1} = I^{-1}(p(t)).    (18)

This is a generalization of the relation L = I(ω) between angular momentum L ≡ p and angular velocity ω ≡ ġ g^{-1}. Given an initial condition g_0 such that g_0 · k = p(0), the resulting unique solution (g_t, p(t)) = (g_t, g_t · k) in G × g* is called an Euler-Poincaré reconstruction of p(t). In sections 2.3 and 2.4, we show how this definition leads to geometric phases when p(t) is periodic. Remarks. Eq. (18) is consistent with the Lie-Poisson equation (14): writing p(t) = g_t · k and omitting the dependence on time, the reconstruction condition (18) can be recast as

g^{-1} ġ = (Ad_{g^{-1}} ∘ I^{-1} ∘ Ad*_g)(k),    (19)

where we temporarily reinstate the notation Ad* for the coadjoint representation, and both sides are now Lie algebra elements. Acting with them on k through the coadjoint representation (8) of g, we find

ad*_{g^{-1}ġ}(k) = (Ad*_{g^{-1}} ∘ ad*_{I^{-1}(p)} ∘ Ad*_g)(k),    (20)

which is indeed the Lie-Poisson equation (14). One may also ask the opposite question: does (14) imply the reconstruction formula (18)? The answer is nearly yes: eq. (20) does not imply (19), but it does imply that g^{-1}ġ and (Ad_{g^{-1}} ∘ I^{-1} ∘ Ad*_g)(k) only differ by an element of the Lie algebra of the stabilizer of k. Eq. (18) sets that element to zero for any time t; choosing a different element would amount to a different gauge choice. The rewriting (19) exhibits a general feature of Lie-Poisson equations: non-trivial dynamics only occurs when the inertia operator breaks G symmetry, i.e. when in general

Ad_{g^{-1}} ∘ I^{-1} ∘ Ad*_g ≠ I^{-1}.    (21)

Indeed, suppose instead that G symmetry were preserved, i.e. that the inequality (21) were replaced by an equality for all g ∈ G. Then the right-hand side of (19) would be a constant I^{-1}(k) and the solution of (19) would read

g_t = g_0 e^{t I^{-1}(k)}.    (22)

In the case of the Euler top, this occurs when the tensor of inertia is proportional to the identity matrix, i.e. when the rigid body is isotropic. Eq. (22) then states that the top rotates around its axis without any precession.
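To make the reconstruction condition (18) concrete in the rigid-body analogue (again a sketch, under the same sign conventions as in the code above), one can integrate ġ g^{-1} = hat(I^{-1}(p)) alongside (14) and check that g_t^{-1} · p(t) stays equal to the orbit representative k:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Euler-Poincare reconstruction for SO(3): solve dp/dt = omega x p together
# with dg/dt = hat(omega) g, where hat(.) maps w in R^3 to the skew matrix
# acting as hat(w) u = w x u; then k = g_t^{-1} p(t) should be constant.
I_diag = np.array([1.0, 2.0, 3.0])
hat = lambda w: np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def rhs(t, y):
    p, g = y[:3], y[3:].reshape(3, 3)
    omega = p/I_diag                       # xi = I^{-1}(p), as in (18)
    return np.concatenate([np.cross(omega, p), (hat(omega) @ g).ravel()])

p0 = np.array([1.0, 0.2, 0.1])
sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([p0, np.eye(3).ravel()]),
                rtol=1e-10, atol=1e-12)

# g is orthogonal (up to integration error), so g^{-1} ~ g^T:
pT, gT = sol.y[:3, -1], sol.y[3:, -1].reshape(3, 3)
print(np.linalg.norm(gT.T @ pT - p0))      # ~ 0: p(t) = g_t . k with k = p(0)
```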
Dynamical phase and Berry phase Geometric phases appear when one performs a loop - a closed path - in a suitable parameter space [2,5,11]. In the case at hand, parameter space is momentum space (or rather a coadjoint orbit therein), so the question we wish to ask is: given a path p(t) such that p(T) = p(0), is the reconstructed path (g_t, p(t)) closed? If not, is there a way to measure the difference between the initial configuration, g_0, and the final one, g_T? As we now show, in anisotropic setups, the path g_t is typically not closed even when p(t) is (see fig. 2), and the degree to which it fails to close is the combination of a dynamical phase, proportional to the period T, and a Berry phase. In order to prove this, following [20], we will integrate the Liouville one-form (16) along a closed path in G × g* given by Euler-Poincaré reconstruction. Then we will argue that this integral can be interpreted in two ways: first, as a Berry phase; secondly, as the sum of a dynamical phase and an observable rotation angle after one period. The centrally extended version of that argument is postponed to section 2.4. Integrating the Liouville one-form. Let p(t) = g_t · k be a closed path, with period T, in the orbit O_k (notation as in (15)). At this stage we do not yet assume that p(t) solves the Lie-Poisson equation (14), nor that g_t satisfies the reconstruction condition (18). Instead, we introduce a loop in G × g* given by

(ḡ_t, ḡ_t · k),  t ∈ [0, T′],    (23)

where ḡ_t is the concatenation of g_t with a curve h_t lying in the stabilizer of k, chosen so that ḡ_t closes (see fig. 2):

ḡ_t ≡ g_t for t ∈ [0, T],    ḡ_t ≡ g_T ∘ h_t for t ∈ [T, T′].    (24)

[Figure 2: The fate of fig. 1 when the path p(t) = g_t · k, in momentum space, is closed. Its reconstruction (g_t, g_t · k) generally contains a curve g_t that does not close, corresponding to a non-trivial group element g_0^{-1} g_T. To compensate this effect, we introduce the closed path ḡ_t defined in (24).] Here h_t · k = k for any t in the interval [T, T′]; the starting point of h_t is h_T = I, and its endpoint is h_{T′} = g_T^{-1} g_0, which indeed ensures that ḡ_{T′} = ḡ_0. The fact that h_t fixes k also ensures that the momentum part of (23) is constant on the interval [T, T′], where it equals p(0) = p(T) = g_T · k. For simplicity, we assume from now on that the stabilizer of k is a U(1) group; this will be sufficient for any Lie-Poisson equation based on the Virasoro group, including the KdV equation. Let us now integrate the Liouville one-form (16) along the closed curve (23):

∮_ḡ A = ∫_0^T dt ⟨g_t · k, ġ_t g_t^{-1}⟩ + ∫_T^{T′} dt ⟨g_T · k, g_T ḣ_t h_t^{-1} g_T^{-1}⟩.    (25)

Here we split the integral in two pieces coming from the two parts of the path (23), and L_g (R_g) denotes left (right) multiplication by g. In the first term, we relate the right Maurer-Cartan form to the left one: ⟨g · k, ġ g^{-1}⟩ = ⟨k, g^{-1} ġ⟩. In the second term, we simplify the integrand into ⟨k, ḣ h^{-1}⟩ and use the fact that h stabilizes k to rewrite this as ⟨k, h^{-1} ḣ⟩. The result is

∮_ḡ A = ∫_0^T dt ⟨k, g^{-1} ġ⟩ + ∫_T^{T′} dt ⟨k, h^{-1} ḣ⟩.    (26)

This expression is a (known [13,14,20]) key result for the rest of this paper. We stress that, in order to derive it, we did not assume that the path p(t) = g_t · k solves the Lie-Poisson equation (14); all we needed was that p(t) be closed, with period T. Still following [20], we shall now provide two interpretations of the integral (26). On the one hand, it will turn out to be the flux of a symplectic form through a surface enclosed by the path p(t), allowing us to think of it as a Berry phase associated with adiabatic changes of reference frames g_t. On the other hand, when p(t) solves the Lie-Poisson equation (14) and provided g_t satisfies the reconstruction condition (18), eq. (26) will be the sum of a dynamical phase and a rotation angle ∆φ in the U(1) stabilizer of k, eventually allowing us to express ∆φ as the sum of a geometric phase and a dynamical phase. Here is the detailed argument: Eq. (26) is a Berry phase. To see this, we relate (26) to the symplectic structure of coadjoint orbits of G.
Indeed, the integral of A along the path (23) can be written as the line integral of a one-form in the group manifold alone, without reference to g*:

∮_ḡ A = ∮_ḡ ⟨k, g^{-1} dg⟩ = ∫_{Σ_ḡ} d⟨k, g^{-1} dg⟩.    (27)

In the last equality we used Stokes' theorem, with Σ_ḡ an oriented two-dimensional surface in G whose boundary is the closed path ḡ. The integrand on the far right-hand side of this expression is a two-form on G, and it can be shown (see e.g. [25, sec. 5.3.2]) that it coincides with the pullback of the Kirillov-Kostant symplectic form on O_k by the projection Π : G → O_k : g ↦ g · k. In formulas, this means that

d⟨k, g^{-1} dg⟩ = Π* Ω,    (28)

where the symplectic form Ω, defined on the coadjoint orbit of k, is such that the Poisson bracket (9) reads {F, G}(p) = Ω_p(dF_p, dG_p) for any p ∈ O_k. Plugging (28) back into (27), we find

∮_ḡ A = ∫_{Σ_{g·k}} Ω,    (29)

where Σ_{g·k} is any surface in O_k whose boundary is the curve g_t · k = p(t). Thus, the integral (26) is the flux (29) of the Kirillov-Kostant symplectic form, and this flux, in turn, can be interpreted as a Berry phase. Indeed, it is often true that quantizing a coadjoint orbit O_k produces unitary representations of G in which a coherent state, acted upon by transformations tracing a closed path ḡ_t, has a Berry connection whose curvature is the symplectic form Ω [1,33]. The phase (29) is a classical analogue of that statement. Note that eq. (29) is expressed solely in terms of the path p(t) = g_t · k in momentum space - there is no longer any reference to the path g_t by itself. This fact will play an essential role for the evaluation of the geometric phase (26) in section 3: it implies that its value is independent of the choice of g_t, as long as p(t) = g_t · k. In particular, this will allow us to evaluate (26) with relatively simple choices of paths, as opposed to the generally complicated paths produced by the reconstruction condition (18). Eq. (26) = dynamical phase + rotation. So far, since introducing the path (23), we did not need to assume that p(t) solves the Lie-Poisson equation (14) or that g_t satisfies the reconstruction condition (18). We now enforce both of these assumptions and work out their consequences for the integral (26). To begin, using eqs. (17)-(18), the integrand of the first term in (26) can be recast as

⟨k, g^{-1} ġ⟩ = ⟨g · k, ġ g^{-1}⟩ = ⟨g · k, I^{-1}(g · k)⟩ = ⟨p, I^{-1}(p)⟩ = 2H(p),

where H(p) is the Hamiltonian (13). Since energy E is conserved, H(p) is constant and the first term of (26) becomes a dynamical phase:

∫_0^T dt ⟨k, g^{-1} ġ⟩ = 2ET.    (30)

On the other hand, the second part of (26) is a boundary term because the one-form ⟨p, h^{-1} dh⟩ is exact. In fact, it is essentially the difference between g_0 and g_T: since we assume that the stabilizer is U(1), we may label its elements by an angle φ and write h^{-1} ḣ = −φ̇ ξ_0, where ξ_0 ∈ g generates the stabilizer. The normalization of ξ_0 is fixed so that e^{2πξ_0} = I be the identity in G, but e^{tξ_0} ≠ I for any t ∈ (0, 2π). Then the second integral in (26) is

∫_T^{T′} dt ⟨k, h^{-1} ḣ⟩ = −⟨k, ξ_0⟩ ∆φ,    (31)

where ∆φ is the angle of the rotation g_0^{-1} g_T. As a result, we can write the integral (26) as the sum of the dynamical phase (30) and the angle (31). Equivalently, upon rearranging the terms, one has

⟨k, ξ_0⟩ ∆φ = 2ET − ∮_ḡ A.    (32)

Note that the value of ∆φ depends on the normalization of ξ_0, but the product ξ_0 ∆φ does not, so this ambiguity is merely a matter of 'units'. In particular, if φ is normalized so that one turn corresponds to an angle 2π (as stated above eq. (31)), then the normalization of ξ_0 becomes fixed uniquely. Formula (32) makes it manifest that the complete rotation ∆φ is the sum of two very different contributions.
The first, proportional to the energy E and the period T, is a dynamical phase. The second is the integral (26); it is a geometric phase that coincides with the symplectic flux (29). We now apply this statement to centrally extended groups. As we shall see, the extension will affect the Berry phase formula (26) and contribute an extra term to the right-hand side of (32). Both modifications will have observable consequences in the KdV equation (and more generally in any Lie-Poisson equation for the Virasoro group). Reconstruction for centrally extended groups We are interested in the reconstructed dynamics of Lie-Poisson equations for the Virasoro group. The latter is a central extension of the group of diffeomorphisms of the circle, so we now describe the extended analogue of sections 2.2 and 2.3. We start with some general preliminaries on centrally extended groups and their Lie-Poisson equations, then briefly analyse their reconstruction, and finally write general formulas for the geometric phases of reconstructed dynamics. Central extensions and Lie-Poisson equations. Let Ĝ = G × ℝ be a central extension of a Lie group G. Its elements are pairs (f, α) with a group operation

(f, α)(g, β) = (fg, α + β + C(f, g)),    (33)

where C(f, g) ∈ ℝ is a cocycle. The corresponding Lie algebra is ĝ = g ⊕ ℝ, and its dual space ĝ* = g* ⊕ ℝ consists of pairs (p, c), where p ∈ g* and c ∈ ℝ, the latter being a central charge. The pairing between ĝ and its dual reads ⟨(p, c), (ξ, α)⟩ = ⟨p, ξ⟩ + cα. As before, the coadjoint representation (7) will play a key role; it turns out to read

(f, α) · (p, c) = (Ad*_f(p) + c S[f], c),    (34)

where the Ad* on the right is the coadjoint representation of G (without central extension) and S[f] is the Souriau cocycle associated with C, defined so that (34) is a genuine group action; in particular,

S[fg] = Ad*_f(S[g]) + S[f].    (35)

In the Virasoro group, S[f] will be the Schwarzian derivative that plays an important role in CFT (see section 3.1). We also need the coadjoint representation of ĝ, obtained by differentiating (34) (or equivalently given by eq. (8)). One thus finds

ad*_{(ξ,α)}(p, c) = (ad*_ξ(p) + c s[ξ], 0),    (36)

where s[ξ] ≡ ∂_t|_0 S[e^{tξ}] is the differential of the Souriau cocycle. In order to write the Lie-Poisson equation (14), we introduce a centrally extended inertia operator

Î(ξ, α) ≡ (I(ξ), Jα),    (37)

where I is an inertia operator (12) for g, while J > 0 is just a number. Using the coadjoint representation (36), the corresponding Lie-Poisson equation (14) reads

ṗ = ad*_{I^{-1}(p)}(p) + c s[I^{-1}(p)],    ċ = 0.    (38)

In particular, the central charge c is a fixed parameter - this really just follows from its being left invariant by the coadjoint representation (34). Reconstruction conditions. Suppose we are given a solution (p(t), c) of eq. (38). Euler-Poincaré reconstruction consists in finding a path (g_t, α_t) in Ĝ such that

(g_t, α_t) · (k, c) = (p(t), c)    (39)

for some fixed coadjoint vector k, where the dot denotes the coadjoint representation (34) of Ĝ. In addition, the path must be such that the reconstruction condition (18) holds. For a centrally extended group, this means that

(ġ g^{-1}, α̇ + (d_1C)_g(ġ)) = Î^{-1}(p, c) = (I^{-1}(p), c/J),    (40)

where (d_1C)_g denotes the differential of C with respect to its first argument, (d_1C)_g ≡ d(C(·, g^{-1}))_g, a one-form on G that we introduce to lighten the notation. On the one hand, (40) yields the expected reconstruction equation p = I(ġ g^{-1}), exactly as in the unextended case (18). On the other hand, it gives an ordinary differential equation for α_t, which is readily solved thanks to the constancy of the central charge c [16]:

α_t = α_0 + ct/J − ∫_0^t ds (d_1C)_{g_s}(ġ_s).    (41)

This result will turn out to be crucial for the evaluation of ∆φ in the reconstructed KdV equation; note that the last term of (41) is an integral of d_1C along the path g_t. Geometric phases. We now assume that p(t) is a periodic solution of eq. (38), with period T and central charge c, lying in a coadjoint orbit O_{(k,c)} with U(1) stabilizer. Our goal is to rewrite eqs. (26) and (32) in the centrally extended case.
To do this, first note that the Berry connection (16) now gets an extra central contribution:

Â = ⟨p, dg g^{-1}⟩ + c (dα + (d_1C)_g).    (42)

Similarly to (24), we introduce a closed curve (ḡ_t, ᾱ_t) in Ĝ by concatenating the path (g, α), which satisfies (39)-(40), with a path (h, β) in the stabilizer of k which ensures that (ḡ, ᾱ) closes. In particular, β_T = 0, and integrating (42) along the resulting loop defines the Berry phase

∮_{(ḡ,ᾱ)} Â.    (43)

This Berry phase is a straightforward generalization of (26), involving just one extra contribution due to the central extension. As before, we stress that the value of that integral depends neither on the specific path g_t, nor on the parametrization of time. It only depends on the image of the path p(t) ∈ g*, as in (29). However, in order to interpret (43) as the sum of a dynamical phase and an observable rotation angle, we need to enforce the reconstruction conditions (40) and write

∮_{(ḡ,ᾱ)} Â = ∫_0^T dt [⟨p, ġ g^{-1}⟩ + c (α̇ + (d_1C)_g(ġ))] + ∫_T^{T′} dt [⟨k, h^{-1} ḣ⟩ + c (β̇ + (d_1C)_ḡ(ḡ̇))],    (44)

where we have split the path (ḡ, ᾱ) into a piece (g, α) on the interval [0, T] and a piece (h, β) in the stabilizer. The first piece yields a dynamical phase analogous to (30), namely 2ET with

E = (1/2) ⟨p, I^{-1}(p)⟩ + c²/(2J).    (45)

The second piece, on the other hand, now contains an extra contribution with respect to the unextended expression (31). Indeed, it reads

∫_T^{T′} dt ⟨k, h^{-1} ḣ⟩ + c (α_0 − α_T − C(g_T, g_T^{-1} g_0)),    (46)

owing to the conditions β_T = 0 and β_{T′} = α_0 − α_T − C(g_T, g_T^{-1} g_0), which ensure that (ḡ, ᾱ) closes. Using now the solution (41) of the reconstruction conditions to evaluate α_T, we can rewrite (44) as

∮_{(ḡ,ᾱ)} Â = 2ET − ⟨k, ξ_0⟩ ∆φ − c² T/J + c ∫_0^T dt (d_1C)_{g_t}(ġ_t) − c C(g_T, g_T^{-1} g_0).    (47)

Finally, as in (31), we interpret the stabilizer term as a rotation angle: recall that we assumed the stabilizer of k to be a U(1) group, generated by ξ_0. The result is

⟨k, ξ_0⟩ ∆φ = (2E − c²/J) T − ∮_{(ḡ,ᾱ)} Â + c ∫_0^T dt (d_1C)_{g_t}(ġ_t) − c C(g_T, g_T^{-1} g_0).    (48)

This is the centrally extended version of eq. (32), and it takes the anticipated form (6): (i) The first term is the expected dynamical phase, with a subtraction of c²/(2J) ensuring that ∆φ does not depend on J (since E is given by (45)). This is as it should be, since J does not affect the Lie-Poisson equation (38). (ii) The second term is the Berry phase (43), given by a loop integral of the Liouville one-form. It is universal in the sense that it does not depend on the inertia operator. (iii) The third and fourth terms are proportional to the central charge, are directly due to the extension C, and are also universal. In particular, the third term is a line integral of c(d_1C)_g = −⟨(0, c), (g, 0) d(g, 0)^{-1}⟩. This is the Berry connection on the coadjoint orbit of (0, c), evaluated at the point g^{-1}. Thus, the anomalous phase in (48) is akin to an 'inverse Berry phase', except that the path (g_t^{-1}, 0) · (0, c) is not closed in general. We now apply the result (48) to Lie-Poisson equations based on the Virasoro group, taking KdV as our main example. In general, all three terms of (48) will be non-zero. Geometric phases and drift in reconstructed KdV This section is devoted to the first key statement of our work. Namely, we focus on wave profiles that are amenable (i.e. that can be mapped on a constant thanks to suitable diffeomorphisms) and satisfy a Lie-Poisson equation for the Virasoro group, such as KdV. We then show that, for periodic waves p(x, t), the master equations (43) and (48) apply and correspond to a generally non-trivial one-period rotation of the reconstructed dynamics. The angle ∆φ of that rotation is the sum of a dynamical phase, a Berry phase and an anomalous phase, all of which can be written explicitly as functionals of either the reconstructed path, or of its projection on the coadjoint orbit of p. The Berry and anomalous phases are universal (they follow solely from the Virasoro group structure), and the Berry phase in particular takes the form described in [1].
Before displaying these results, we briefly review some elementary properties of the Virasoro group and its relation to the KdV equation. For more background material, we refer e.g. to [28] and its appendix A; our notation and conventions will follow those of that paper. Much more detailed, pedagogical accounts of the Virasoro group and its coadjoint orbits [26] can be found e.g. in [18,24,25,27,34]. Finally, note that the application of the results of this section to travelling waves is postponed to section 4. Virasoro group and KdV equation Here we briefly recall the relation between the Virasoro group and the KdV equation, through the Lie-Poisson equations (14)-(38). For many more details on this relation and its generalizations, see [18]. Up to a different choice of inertia operator, the same construction leads to the Hunter-Saxton and Camassa-Holm equations [17]. The Virasoro group, which we denote as \widehat{Diff} S¹, is the central extension of the group Diff S¹ of diffeomorphisms of the circle. Accordingly, let x ∈ ℝ be a 2π-periodic coordinate. An element of the Virasoro group is a pair (f, α), where α is a real number while the function f ∈ Diff S¹ is an (orientation-preserving) diffeomorphism, such that f(x + 2π) = f(x) + 2π. For example, a rotation by θ reads f(x) = x + θ, which we denote as R_θ(x) from now on (rotations will soon play a prominent role). The group law is, by definition, of the form (33):

(f, α)(g, β) = (f ∘ g, α + β + C(f, g)),    (49)

where ∘ denotes composition and C is the Bott cocycle [35]. For future reference, note that C vanishes on rotations: if either f, or g, or f ∘ g is a rotation, then C(f, g) = 0. We let Vect S¹ denote the Lie algebra of Diff S¹; the Virasoro algebra \widehat{Vect} S¹ is its central extension. Its elements are pairs (ξ, α), where ξ = ξ(x)∂_x ∈ Vect S¹ is a vector field on the circle and α ∈ ℝ as before. Its dual space, (\widehat{Vect} S¹)*, consists of pairs (p, c), where p = p(x)dx² is a quadratic density and c ∈ ℝ is a central charge. The pairing between \widehat{Vect} S¹ and its dual is

⟨(p, c), (ξ, α)⟩ = (1/2π) ∫_0^{2π} dx p(x) ξ(x) + cα.    (50)

In two-dimensional CFT, p is interpreted as a (chiral component of the) stress tensor and (50) is the Noether charge of the conformal generator ξ. In the KdV context and its cousins, p(x) is a wave profile, governed by a nonlinear evolution equation of the form (38). From the perspective of symplectic geometry, p(x) is thus a 'momentum vector', which justifies our notation. We now derive KdV from Virasoro group theory. To begin, we need the coadjoint representation (34), which we write as (f, α) · (p, c) = (f · p, c) thanks to the fact that the central charge is invariant. Using the Bott cocycle (49) and the definition (35), one can then show [24,25] that the term f · p is given by

(f · p)(x) = [(f^{-1})′(x)]² p(f^{-1}(x)) + (c/12) S[f^{-1}](x),  where  S[g] ≡ g‴/g′ − (3/2)(g″/g′)².    (51)

This is the standard transformation law of the stress tensor under conformal transformations in any two-dimensional CFT [36, sec. 5.4]. In particular, the combination of derivatives of f^{-1} multiplying c/12 is the Schwarzian derivative of f^{-1}: the Virasoro version of the Souriau cocycle (35).
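The cocycle property of the Schwarzian derivative that underlies (35) and (51) can be verified symbolically. The following sketch (with arbitrary test maps, chosen here only for convenience) checks S[f ∘ g] = (S[f] ∘ g) g′² + S[g], as well as the vanishing of S on rotations:

```python
import sympy as sp

# Symbolic check of the Schwarzian cocycle property underlying (35) and (51):
#   S[f o g] = (S[f] o g) * g'^2 + S[g].
x = sp.symbols('x')

def schwarzian(expr):
    # S[f] = f'''/f' - (3/2) (f''/f')^2
    f1, f2, f3 = sp.diff(expr, x), sp.diff(expr, x, 2), sp.diff(expr, x, 3)
    return f3/f1 - sp.Rational(3, 2)*(f2/f1)**2

f = lambda u: u**3 + u              # arbitrary test maps
g = sp.exp(x)
lhs = schwarzian(f(g))
rhs = schwarzian(f(x)).subs(x, g)*sp.diff(g, x)**2 + schwarzian(g)
print(sp.simplify(lhs - rhs))       # 0

# S vanishes on rotations f(x) = x + theta, consistent with the Bott
# cocycle (49) vanishing on rotations:
theta = sp.symbols('theta')
print(schwarzian(x + theta))        # 0
```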
As a result, the coadjoint representation (36) of the Virasoro algebra reads

ad*_{(ξ,α)}(p, c) = (−ξp′ − 2ξ′p + (c/12)ξ‴, 0).    (52)

The vanishing second entry confirms that the central charge is constant in time, for any choice of the inertia operator. By contrast, p(x) transforms non-trivially under the Virasoro group, so it will generally have a non-trivial time evolution. Specifically, we choose the inertia operator to be the simplest possible map of the form (37):

I(ξ(x)∂_x, α) ≡ (ξ(x)dx², Jα),    (53)

where J is an arbitrary (and ultimately irrelevant) positive constant. This choice ensures that I is invertible and self-adjoint (recall the definition around (12)), so it is indeed an inertia operator. It is also anisotropic in the sense of eq. (21), since the adjoint and coadjoint representations of Virasoro are inequivalent. More complicated inertia operators yield different wave equations, such as Hunter-Saxton and Camassa-Holm [17,18]. For definiteness, we do not consider such more general cases, but our approach also applies to them up to straightforward modifications of all expressions involving I. The corresponding Hamiltonian (13) reads

H(p, c) = (1/4π) ∫_0^{2π} dx p(x)² + c²/(2J).    (54)

As p(x)dx² transforms according to eq. (51), this expression is manifestly not Virasoro-invariant. This implies that the resulting Lie-Poisson equation (14)-(38) is non-trivial; using (52), one finds indeed

ṗ = −3p p′ + (c/12) p‴,    (55)

where, as in (38), the central charge c is a constant parameter. This is the Korteweg-de Vries equation (1) for the field p(x, t), derived here as a Lie-Poisson equation of Virasoro.
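As a quick numerical sanity check of (55) (a minimal pseudo-spectral sketch with a smooth toy initial profile, not the integrator used for any figure), one can verify that the KdV flow preserves the zero mode of p and the quadratic functional entering the Hamiltonian (54):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pseudo-spectral KdV (55): p_t = -3 p p_x + (c/12) p_xxx on [0, 2*pi).
Nx, c = 64, 1.0
x = np.linspace(0, 2*np.pi, Nx, endpoint=False)
kx = 2*np.pi*np.fft.fftfreq(Nx, d=2*np.pi/Nx)      # integer wavenumbers

def rhs(t, p):
    ph = np.fft.fft(p)
    px = np.real(np.fft.ifft(1j*kx*ph))
    pxxx = np.real(np.fft.ifft((1j*kx)**3*ph))
    return -3.0*p*px + (c/12.0)*pxxx

p0 = 0.5 + 0.2*np.cos(x)
sol = solve_ivp(rhs, (0.0, 0.5), p0, rtol=1e-8, atol=1e-10)

# The zero mode and the mean of p^2 (hence (54)) are conserved:
for q in (sol.y[:, 0], sol.y[:, -1]):
    print(q.mean(), (q**2).mean())
```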
Reconstruction and phases for periodic waves We now describe the reconstructed dynamics in the (cotangent bundle of the) Virasoro group when p(x, t) is a periodic solution of (55), say with period T, so that p(x, t + T) = p(x, t). This is precisely the setup considered in section 2, so eqs. (43) and (48) will apply. At this stage, we adopt an abstract viewpoint without reference to particle motion, and without assuming that p(x, t) is a travelling wave - these problems will be addressed in sections 3.3 and 4, respectively. We refer again to [24, chap. 4-6] and [25, chap. 6-7] for the necessary background on the Virasoro group, especially its coadjoint orbits [26,27], which will now start playing an important role. Amenable profiles. As in section 2, the motion of p(x, t) determines a path (g_t, α_t) - actually a geodesic - in the Virasoro group, and our task is to find the difference between g_T and g_0 when p has period T. Before doing that, however, we need to state one key simplifying assumption: from now on, we require the wave profile p(x, t) to be amenable, that is, conformally equivalent to a uniform (i.e. x-independent) field configuration k. In other words, we assume that there exists a constant k and a diffeomorphism g_0 ∈ Diff S¹ such that, at time t = 0,

p(x, 0) = (g_0 · k)(x),    (56)

where the dot denotes the coadjoint action (51). One can show that the constant k, provided it exists, is uniquely fixed by the wave p. The map g_0 can then be seen as a 'boost' (analogously to Lorentz boosts) that sends the uniform profile k on p(x, 0). The ensuing path g_t consists of diffeomorphisms that may be seen, from the fluid dynamics perspective, as changes of coordinates mapping the 'Lagrangian' reference frame, where fluid particles are uniformly distributed on the circle, to the 'Eulerian' one, where the wave profile is non-uniform. Since p(x, t) solves the KdV equation, it is confined to a coadjoint orbit (15) throughout time evolution, so the condition (56) guarantees that p(x, t) is conformally equivalent to k at any time. Note that the assumption of amenability is restrictive: there are a great many wave profiles that are not conformally equivalent to uniform configurations; a prominent example is provided by cnoidal waves with sufficient pointedness [28], which we shall return to in section 4.3. Despite this, we do wish to stick to the assumption that p ∈ O_k for some constant k, since it implies that the stabilizer of the orbit is (conjugate to) a group U(1), as assumed in section 2. Indeed, the set of Diff S¹ elements leaving fixed the uniform profile k, in the sense that h · k = k, is exactly the group U(1) of rigid rotations h(x) = x + A, with (normalized) generator ξ_0 = ∂_x. As a result, the time periodicity of p(x, t) guarantees that the reconstructed path g_t is such that the diffeomorphism g_0^{-1} ∘ g_T is a rotation. Our task is to express the angle of that rotation in terms of observable wave data. Geometric phases in KdV. Let us now apply Euler-Poincaré reconstruction to a periodic solution (p(x, t), c) of KdV, assumed to be amenable. Given this wave, the reconstruction condition (40) reads

(ġ_t ∘ g_t^{-1}, α̇_t + (d_1C)_{g_t}(ġ_t)) = (p(·, t), c/J),    (57)

where we used the multiplication (49) of the Virasoro group. On the far right-hand side, the first entry yields the reconstruction condition that one would find, without central extension, in the group Diff S¹:

ġ_t(x) = p(g_t(x), t).    (58)

In principle, the initial condition g_0 is free, but we choose it to satisfy eq. (56). As for the path α_t ∈ ℝ, we can solve eq. (57) similarly to (41) and find [16] the solution (59), which will eventually contribute to ∆φ through the 'anomalous phase' of eq. (48). As stressed earlier, choosing g_0 to satisfy (56) for some constant k, along with the periodicity of p, ensures that g_0^{-1} ∘ g_T is a rotation (since g_0 · k = g_T · k, and k is only stabilized by rotations). We now use eq. (48) to compute the angle of that rotation as the sum of a dynamical phase, a Berry phase and an anomalous term: (i) The dynamical phase (45) involves the energy, given by the Hamiltonian (54):

E = (1/4π) ∫_0^{2π} dx p(x, 0)² + c²/(2J).    (60)

Since energy is conserved, one may evaluate it at any time t; here we chose t = 0. As stressed below (48), the right-hand side of this expression is independent of J. This is as it should be, since J does not affect dynamics and does not appear in the reconstruction condition (58). (ii) The Berry phase (43) is, in fact, standard: as was shown in [9], line integrals of the symplectic potential can be computed in closed form; they are 'geometric actions' for the Virasoro group, and were later interpreted as Berry phases associated with adiabatic conformal transformations [1]. Importantly, these phases only depend on the image of the path p(t) in momentum space - not on the reconstructed path g_t in the group manifold. As a result, we are free to express the Berry phase in terms of any path f_t in Diff S¹ such that the curve traced by f_t · k coincides with p(t). We choose such a path f. Then, adapting the notation of [1] to the case at hand, eq. (43) becomes the Virasoro geometric action written in eq. (61), where it is understood that the integrals over t and x run from 0 to T and 2π, respectively. (iii) Finally, since g_T^{-1} ∘ g_0 is a rotation and since the cocycle (49) vanishes on rotations, the last term of (48) does not contribute. The only non-zero contribution to the anomalous phase comes from the integral of the derivative of C, namely

c ∫_0^T dt (d_1C)_{g_t}(ġ_t),    (62)

which can be written explicitly using the properties ġ ∘ g^{-1} + (g′ ∘ g^{-1}) ∂_t(g^{-1}) = 0 and (g′ ∘ g^{-1}) · (g^{-1})′ = 1, along with an integration by parts. Combining eqs. (60), (61) and (62), we can finally write the angle ∆φ, given by (48), in the schematic form

k ∆φ = Dynamical phase + Berry phase + Anomalous phase,    (63)

with the three terms the explicit functionals (60)-(62), where we used ξ_0 = ∂_x to simplify ⟨k, ξ_0⟩ = k. This takes the anticipated form (6) and contains two geometric phases: one is a Virasoro Berry phase [1], and the other is an 'anomalous phase' which we deliberately wrote in a way that exhibits its similarity with the Berry term. We stress, however, that the path g in the anomalous phase is the reconstructed curve that satisfies (58), whereas the Berry phase involves any path f_t such that the curve f_t · k coincides with p(t).
This distinction will allow us to evaluate the Berry phase easily for travelling waves, while the anomalous one will require a bit more work. Note that the formula (63) for ∆φ is almost universal: aside from the model-dependent dynamical phase, it applies to any Lie-Poisson system based on the Virasoro group, such as the Hunter-Saxton and Camassa-Holm equations [19]. Note also that the overall factor k on the left-hand side of (63) implies that, at k = 0, the right-hand side of (63) vanishes. However, since c ≠ 0 in general, one may well have ∆φ ≠ 0 even for k = 0; cnoidal waves (section 4.3) will provide an explicit example of this. Drift velocity as a Poincaré rotation number We now return to the reconstruction equation (58) to explain how the angle (63) can be observed by monitoring the motion of 'fluid particles' as defined by eq. (2). Again, we impose no restrictions on p(x, t) other than amenability and periodicity in space and time (so p(x, t) could, for instance, be a system of colliding periodic solitons with rational phase shift). At the end of this section, we will comment further on the (in)applicability of our approach to actual fluid dynamics, owing to a subtlety in reference frames that we already alluded to in the introduction. The application of our arguments to travelling waves is postponed to section 4. Particle drift as reconstruction. Consider a 'fluid particle' on the real line whose position x(t) satisfies the equation of motion (2) in terms of the (given) wave profile p. We claim that this equation is equivalent to the reconstruction condition (58). Indeed, let X(t, x_0) be the unique solution of (2) with initial condition X(0, x_0) = x_0. We can think of this solution as a time-dependent diffeomorphism g_t, with an arbitrary initial configuration g_0, acting on a suitable starting point: X(t, x_0) ≡ g_t(g_0^{-1}(x_0)). Then, in terms of g_t, eq. (2) becomes ġ_t(g_0^{-1}(x_0)) = p(g_t(g_0^{-1}(x_0)), t). Since this holds for all x_0, we may remove the argument g_0^{-1}(x_0) and deduce that g_t satisfies the reconstruction condition (58), as announced. Conversely, the condition (58) may thus be seen as an equation of motion for (comoving) fluid particles. This is true for any g_0, but from now on we always let g_0 be a uniformizing map that satisfies eq. (56). Relating reconstruction to particle motion suggests a way to observe the angle ∆φ computed in (63). Indeed, suppose one asks the following question: given a particle with initial position x_0 and equation of motion (2), what is the particle's position after one period? We can certainly write x(t) = g_t(g_0^{-1}(x_0)) in terms of the reconstructed curve g_t, since this is the unique solution of (2) with initial condition x_0. After one period, one has

x(T) = g_T(g_0^{-1}(x_0)) = (g_0 ∘ (g_0^{-1} ∘ g_T) ∘ g_0^{-1})(x_0).    (64)

Now recall the crucial fact, due to the periodicity and amenability of p, that g_0^{-1} ∘ g_T is a rotation by ∆φ (given by eq. (63) when p solves KdV). As a result, after N periods,

x(NT) = g_0(g_0^{-1}(x_0) + N ∆φ),    (65)

and we may identify the map F in eq. (4) with the composition g_0 ∘ R_{∆φ} ∘ g_0^{-1}, where R_{∆φ}(x) ≡ x + ∆φ. The stroboscopic particle motion is thus a discrete-time dynamical system governed by iterations of the diffeomorphism F = g_0 ∘ R_{∆φ} ∘ g_0^{-1}. The latter is conjugate to rotation by ∆φ, which allows us to exploit a key result on circle dynamics: the Poincaré rotation number of F [24, sec. 4.4.3], as defined in the second equation of (5), coincides with ∆φ. In terms of the particle's position, we can write

x(t) ∼ v_Drift t  as  t → ∞,    (66)

where the drift velocity is defined as in eq. (5).
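The conjugacy argument around (65) is easy to test numerically. Here is a sketch with a hypothetical 'uniformizing' map g_0 (any lift of a circle diffeomorphism, g_0(x + 2π) = g_0(x) + 2π, will do): the rotation number of F = g_0 ∘ R_{∆φ} ∘ g_0^{-1} indeed equals ∆φ.

```python
import numpy as np

# The stroboscopic map F = g0 o R_dphi o g0^{-1} of eqs. (4)-(65) is conjugate
# to a rigid rotation, so its Poincare rotation number equals dphi.
g0 = lambda x: x + 0.8*np.sin(x)        # toy uniformizing map, g0' > 0

def g0_inv(y, iters=100):
    # invert the monotone map g0 by bisection (one period brackets the root)
    lo, hi = y - 2*np.pi, y + 2*np.pi
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if g0(mid) < y else (lo, mid)
    return 0.5*(lo + hi)

dphi = 1.2345
F = lambda x: g0(g0_inv(x) + dphi)

x, N = 0.0, 2000
for _ in range(N):
    x = F(x)
print(x/N, "vs", dphi)                  # the two numbers agree
```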
From that perspective, ∆φ is the average rotation angle of a particle during one period - which answers the question raised above. Note that ∆φ is independent of the particle's initial position x(0), as is the drift velocity. We have thus shown that 'particle motion' in the sense of eq. (2) provides a system whose late-time behaviour is directly sensitive to the angle (63) through the drift velocity (5)-(66). In particular, this velocity contains a contribution due to a Virasoro Berry phase [1], somewhat analogously to the crest slowdown found in [32] for breaking waves whose envelope is described by the nonlinear Schrödinger equation. Note that this prediction is independent of the uniformizing map g_0 that satisfies (56); in fact, that map is generally unknown (even when the value of k is known for a given p(x, t)). In the next section we will study the drift velocity in travelling waves satisfying KdV, and in that case we will actually manage to find g_0 analytically. Comparison with fluid dynamics. At this point, it is worth comparing our approach, and in particular the drift velocity defined in (5)-(66), to the Stokes drift of fluid particles in shallow water dynamics [31]. Indeed, within the KdV approximation of fluid mechanics in a shallow layer (see e.g. [30]), eq. (1) describes the slow time evolution of a right-moving wave p(x, t). Here, x is emphatically not a fixed laboratory coordinate, but rather a (dimensionless) 'lightcone', or comoving, coordinate

x = X − Ct,    (67)

where X is a static laboratory coordinate, t is the (dimensionless) slow time variable, and C ≫ 1 is a dimensionless version of the standard velocity √(gh) of gravity waves of average depth h in a gravitational field g. In fact, C ∝ L²/h², where L is the (dimensionful) wavelength. The KdV approximation then holds in the 'non-relativistic' limit h/L → 0, where the velocity C of (67) goes to infinity. In that limit, the leading velocity of fluid particles is purely horizontal and given by an equation of motion that closely resembles, yet is crucially different from, eq. (2) above. Indeed, in terms of the static laboratory coordinate X, particle motion reads

Ẋ(t) = p(X(t) − Ct, t),    (68)

which only differs from (2) by the spatial argument of p. Equivalently, in terms of the comoving coordinate x, one has ẋ = p(x, t) − C, which obviously differs from eq. (2) by a dominant term. One can then ask the same question as the one we raised above: given a periodic wave train, what is the drift velocity of X(t)? This is the velocity that would presumably be seen in a laboratory, and it is tempting to hope that it is related to the one we introduced in (66). However, the latter was defined from the equation of motion (2) in the comoving ('lightcone') frame, and it is quite clear that the drift velocity in the laboratory frame, due to eq. (68), will take a very different form because of the extra dominant term C ∝ L²/h². For instance, at leading order in h/L, the particle satisfying (68) sees a fast average of the wave profile, and its position at time t (assuming t is of order one) is simply

X(t) ≈ X(0) + t (1/2π) ∫_0^{2π} dx p(x, t).    (69)

The drift velocity then coincides with the average of the wave profile (this average is constant along KdV time evolution), which is very different indeed from the prediction (63). Thus, while the Berry phases and drift studied here do have some similarities with fluid dynamics, they do not, ultimately, describe the same phenomenology.
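The averaging argument behind (69) is quickly illustrated numerically. In the following sketch, p is a frozen toy snapshot rather than a KdV solution, which suffices to show the mechanism:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Laboratory-frame motion (68) at large C: the particle samples a rapidly
# oscillating profile and drifts at the spatial mean of p, as in eq. (69).
p = lambda x: 0.5 + 0.3*np.cos(x)                 # mean value 0.5
for C in (10.0, 100.0):
    sol = solve_ivp(lambda t, X: p(X - C*t), (0.0, 20.0), [0.0],
                    rtol=1e-10, atol=1e-12, max_step=0.1/C)
    print(C, sol.y[0, -1]/20.0)                   # -> 0.5 as C grows
```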
Particle drift and phases of travelling waves Travelling waves form a prominent class of solutions of the KdV equation (and of wave equations in general): they take the form p(x, t) = p(x − vt) for some velocity v, so their shape is constant throughout time evolution. When the profile p(x) is 2π-periodic in space, such travelling waves are automatically time-periodic with period T = 2π/|v|. In this section, we study the reconstruction equations (2)-(58) for travelling waves that solve KdV. For amenable profiles, we show that these equations are integrable - they can be solved exactly in terms of known wave data - and we build an explicit uniformizing map from which an exact expression for the drift velocity (5) follows. We then use this to derive a simplified formula for the rotation angle (63). Finally, we apply this to cnoidal waves and obtain a detailed picture of drift velocity throughout the cnoidal parameter space. In particular, we exhibit 'orbital bifurcations' that occur at the boundaries of a resonance wedge anticipated in [28]: in the wedge, particle motion is locked to the wave and v_Drift = v, while no such locking occurs outside of the wedge and v_Drift ∼ v/3 at large v. Thus, the drift velocity emerges in this picture as a diagnostic of wave amenability, i.e. of the nature of Virasoro coadjoint orbits. Exact reconstruction for travelling waves Here we study the reconstruction equations (2)-(58), without any reference to geometric phases for now. Our goal is to show that, for amenable travelling waves, these equations can be solved exactly in terms of readily accessible wave data. An explicit expression for the drift velocity will follow. The comparison to geometric phases and formula (63) is postponed to section 4.2. Combining the uniformization condition (56) with the fact that p(x, t) is a travelling wave yields a tremendous simplification of the reconstruction equation (58). Indeed, since p(x) = (g_0 · k)(x), we can write

p(x, t) = p(x − vt) = ((R_{vt} ∘ g_0) · k)(x),    (72)

where R_θ(x) ≡ x + θ is a rotation by θ (same notation as below (65)). On the other hand, we know, by definition of reconstruction, that p(x, t) = (g_t · k)(x). Combining this with (72), it follows that

g_t · k = (R_{vt} ∘ g_0) · k,    (73)

i.e. the diffeomorphism g_t^{-1} ∘ R_{vt} ∘ g_0 stabilizes k. Since k is uniform, its stabilizer consists of rotations only, so there must exist a function θ(t) ∈ ℝ such that

g_t = R_{vt} ∘ g_0 ∘ R_{θ(t)}.    (74)

We have thus 'factorized' the dependence of g_t(x) on t and x. Indeed, rewriting (74) as

g_t(x) = g_0(x + θ(t)) + vt,    (75)

the 'advection' form of eq. (58) becomes θ̇(t) g_0′(x + θ(t)) = p(g_0(x + θ(t))) − v, which holds for all x, t. Since the argument g_0(x + θ(t)) may take any value as x ranges over ℝ, we may just as well rename it into x and rewrite the equation for θ(t) as

θ̇(t) = (p(x) − v) (g_0^{-1})′(x).    (76)

This is a crucial result, as we now explain. Uniformization and drift velocity. Eq. (76) is an essential consequence of the result (74). Indeed, since the left- and right-hand sides of (76) depend separately on t and x, we find several striking implications just by differentiating the equation. First, differentiating (76) with respect to x, we conclude that there exists a constant V ≠ 0 such that

(g_0^{-1})′(x) = V / (p(x) − v).    (77)

We will soon see that V contributes to the drift velocity of fluid particles - hence the notation. Since g_0 ∈ Diff S¹, eq. (77) readily implies that, if the profile p is amenable, then p(x) − v has no roots (so its sign is constant); in fact, we will show (right before section 4.2) that the implication also goes the other way around. Furthermore, the condition g_0^{-1}(x + 2π) = g_0^{-1}(x) + 2π implies that the constant V is given by

V = 2π [∫_0^{2π} dx / (p(x) − v)]^{-1}.    (78)

Together with eq.
(77), this determines the uniformizing map g_0^{-1} exactly and uniquely, up to an arbitrary rotation by φ:

g_0^{-1}(x) = φ + V ∫_0^x dy / (p(y) − v).    (79)

Thus, we have found g_0^{-1}, hence g_0, in terms of the wave profile. We will use this below (section 4.3) to obtain an explicit expression for cnoidal uniformizing maps. Secondly, differentiating eq. (76) with respect to t, we find θ̈ = 0. In fact, owing to eq. (77), θ̇ = V, so θ(t) = θ_0 + Vt. The integration constant vanishes by virtue of eq. (74) and the initial condition g_{t=0} = g_0, so θ(t) = Vt. From this we can deduce the exact reconstruction g_t, hence the solution x(t) of (2), hence the drift velocity (3). Here we go: from (75) we read off

g_t(x) = g_0(x + Vt) + vt,    (80)

which is an exact geodesic in the Virasoro group (with respect to the right-invariant metric induced by the inertia operator (53)). The ensuing particle motion reads

x(t) = g_t(g_0^{-1}(x_0)) = g_0(g_0^{-1}(x_0) + Vt) + vt,    (81)

whence the drift velocity (3) is

v_Drift = v + V.    (83)

All coefficients here are determined by the wave profile p(x − vt). Thus, we have now proven that any amenable travelling wave solution of the KdV equation satisfies the identity −Av − B = kV² of eq. (86). This can be used, for instance, to find k once A, B, V are known, or to find V once A, B, k are known. In practice, eq. (86) yields a consistency check that can be used once A, B, k, V have been found by independent means. That will be our point of view below for cnoidal waves. Incidentally, eq. (86) allows us to prove the following point, raised below eq. (77): if p(x − vt) solves KdV, then p(x) is amenable if and only if p(x) − v has no roots. Indeed, we have already shown that amenability implies the absence of roots. Conversely, if p(x) − v has no roots, then one can define a map g_0 ∈ Diff S¹ by eq. (79), and the resulting coadjoint action on any constant k is given by eq. (85). Upon choosing k to satisfy eq. (86), one finds g_0 · k = p, proving that p(x) is amenable. Geometric phases of travelling waves Having shown that particle motion is integrable for amenable travelling waves, we now return to eq. (63) for the rotation angle ∆φ and use the properties of travelling waves to simplify it. We treat separately the dynamical and Berry phases on the one hand, and the anomalous phase on the other hand, then verify that the resulting prediction of v_Drift is consistent with eq. (83). Dynamical and Berry phases. For a travelling wave, the dynamical phase in (63) is readily evaluated as the integral of p(x)². As for the Berry phase, it is greatly simplified by the fact that the path f_t(x) need not be the reconstructed one, g_t(x). Owing to the fact that p(x, t) = p(x − vt) is a travelling wave, we can thus choose f_t(x) = g_0(x) + vt, where g_0 satisfies (56). Upon plugging this into the Berry phase (61), one finds eq. (87), where ± = sign(v) and we used the coadjoint representation (51) to recognize the integrand as (g_0 · k)(x) = p(x). Thus, up to a sign and a term 2πk, the Berry phase is the zero-mode (the average) of the profile p. Anomalous phase. The anomalous phase (62) explicitly depends on the reconstructed path g_t, so, in contrast to the dynamical and Berry phases, one really needs to solve eq. (58) in order to simplify it. Fortunately, we have already done that: we showed in section 4.1 that the equation of motion (2) can be integrated exactly for amenable travelling waves. Accordingly, we use the solution (80) to rewrite the anomalous phase (62) as eq. (88), where we also used eq. (77) to express (g_0^{-1})′ in terms of p. Combining this with the Berry phase (87) and the dynamical phase, we obtain an expression of ∆φ that only involves the (time-independent) profile p(x), without any other wave data.
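Eqs. (78) and (83) are easy to verify numerically. The sketch below uses a toy travelling profile (not an exact KdV solution; only the absence of roots of p(x) − v matters here) and compares the predicted drift velocity v + V with direct integration of eq. (2):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Check of eqs. (78) and (83) with a toy travelling profile.
v = 2.0
p = lambda x: 0.5 + 0.3*np.cos(x)                  # p(x) - v < 0 everywhere

integral, _ = quad(lambda x: 1.0/(p(x) - v), 0.0, 2*np.pi)
V = 2*np.pi/integral                               # eq. (78)
print("v_drift (predicted) =", v + V)              # eq. (83)

t_end = 200.0
sol = solve_ivp(lambda t, x: p(x - v*t), (0.0, t_end), [0.0],
                rtol=1e-10, atol=1e-12)
print("v_drift (measured)  =", sol.y[0, -1]/t_end) # eq. (3)
```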
In practice, it is simpler to express the formula as a drift velocity (5) instead of ∆φ, so as to absorb the awkward signs of eq. (87). The result reads

v_Drift = v_Dynamical + v_Berry + v_Anomalous,    (89)

where we have grouped the various phases as in (6)-(63). Note that, from this perspective, the anomalous phase looks like a correction to the dynamical phase (both contribute terms that are not proportional to the velocity v, as opposed to the Berry phase). However, this is really specific to travelling waves: for other kinds of profiles p(x, t), the simplification (88) would not hold. Consistency check. At this point, one should compare the geometric phase prediction (89) to the previously derived exact result (83). Indeed, it is not obvious that these expressions coincide. The fact that they do follows from a series of formulas derived earlier: first, upon writing (89) as v_Drift = v + V, one finds a condition relating the integral of p(x)² to v, V and k. Proving this equality then proves that (89) and (83) coincide. To this end, one can use eq. (71), (c/24)(p′)² = p³/2 − vp²/2 + Ap + B, to express p′(x)² in terms of p, and eq. (70), (c/12)p″ = (3/2)p² − vp + A, to write the remaining p² term as a linear combination of constants, p and p″. The latter does not contribute to the integral (by periodicity), and, after various cancellations, one ends up having to prove a residual identity. Owing to eq. (78), this is equivalent to the formula −Av − B = kV², which we encountered in eq. (86). We conclude that eq. (89) does coincide, as expected, with eq. (83). We have now completed a full conceptual circle: we first argued on symplectic grounds that particle motion, in the sense of eq. (2), has a drift velocity determined by the sum of phases (63). We then showed, independently, that the drift velocity is given by (83) for amenable travelling waves, and we just proved that this formula is consistent with the geometric phase prediction. In practice, it is much easier to compute the drift velocity using eq. (83), with V given by (78), than in terms of the phases (89). For travelling waves, the main virtue of (89) is that it neatly isolates the various geometric contributions to the drift velocity. For more complicated wave profiles, however, a simple formula such as (83) is generally not available, so one has to use the general geometric expression (63) to compute the drift velocity. We will consider such more general cases elsewhere. For now, we apply our results to cnoidal waves. Drift in cnoidal waves and orbital bifurcations Here, we apply the formulas of the previous pages to cnoidal waves - the periodic solitons of KdV. The key results in that context are explicit formulas for the cnoidal uniformizing map and drift velocity, both given in terms of an elliptic integral of the third kind. Importantly, not all cnoidal waves are amenable [28]: the profiles that have no uniform orbit representative span a 'resonance wedge' in the cnoidal parameter space, with v_Drift = v in the wedge. By contrast, outside of the wedge, v_Drift ≠ v, and v_Drift ∼ v/3 at large v, corresponding to an average one-period rotation of ∆φ ∼ ±2π/3 (see eq. (100)).

[Figure 3: The root wedge (94) and resonance wedge (95) in the (m, V) plane. They partly overlap when m > 0.5. For completeness, we also display the line V = (2 − m)/3, along which k = 0. As we shall see below, the contributions of dynamical, Berry and anomalous phases all diverge on that line, but these divergences cancel out so that the sum (89) is finite even when k = 0. A more detailed picture can be found in fig. 7 of [28].]
As we explain, all these results are consequences of the (symplectic) geometry of coadjoint orbits of the Virasoro group. Cnoidal waves. A cnoidal wave (with 2π-periodicity in space) is a travelling wave solution of the KdV equation (1). It is specified by two parameters: a 'pointedness' m ∈ [0, 1) and a (rescaled) velocity V ∈ ℝ. In these terms, the wave profile is given by eq. (92), where K(m) is the complete elliptic integral of the first kind and sn is the Jacobi elliptic sine, and the wave's velocity is a function v(m, V) of the two parameters, written in eq. (93). Several qualitative aspects of the equation of motion (2) can be read off from simple properties of the profile (92). For example, ẋ(t) has a constant sign if and only if p(x) has no roots; such roots occur in the wedge (94). Much more importantly, the results of section 4.1 imply that the key object is not quite p(x), but rather p(x) − v: as shown below (77), a profile such that p(x) − v has roots is non-amenable. Using (92) and the velocity (93), one readily sees that such roots only occur in the following resonance wedge:

−(m + 1)/3 ≤ V ≤ (2m − 1)/3.    (95)

This is consistent with the classification of Virasoro orbits of cnoidal waves described in [28] (and closely related to the band structure of the Lamé equation [37]). Indeed, any profile with labels (m, V) outside of the wedge (95) is amenable, with a uniform orbit representative k given by eq. (96) - whose expression involves ℘^{-1}, the inverse Weierstrass function, and ζ, the Weierstrass zeta function, both specified by half-periods K(m) and iK(1 − m) - that becomes complex (hence nonsensical) once (m, V) enter into the resonance wedge. On the boundaries of the wedge, where V = [(1 ± 3)m − 2]/6, the constant (96) takes the 'exceptional' value k = −c/24. See fig. 4 for a plot of k in the (m, V) plane. For many more details about this, see [28]; for an introduction to elliptic functions, see e.g. [37,38]. [Footnote 20: Eq. (96) is roughly the square of the crystal momentum for a state with energy E ∝ cst − V in a Lamé lattice. From that viewpoint, the resonance wedge is the Lamé band gap. See [28] for details.] Particle motion and drift. The tools of section 4.1 readily apply to amenable cnoidal waves. Thus, the velocity V defined by (78) takes the form (97), where Π(x|m) is the complete elliptic integral of the third kind. As a result, the uniformizing map (79) can be written in the closed form (98), where Π(x, y|m) is the incomplete elliptic integral of the third kind and am(x|m) is the Jacobi amplitude. This is an explicit 'cnoidal boost': by construction, g_0 maps a uniform profile k ('at rest') on a cnoidal ('boosted') one. An example of such a boost is plotted in fig. 5.

[Figure 5: A plot of the cnoidal boost g_0(x), the inverse of (98), for φ = 0, m = 0.9, V = 1/3. On the right, we also represent the effect of such a map on uniformly distributed points on a circle with angular coordinate x ∼ x + 2π. The region with the highest density of points is x = π, while the lowest density occurs at x = 0. These points respectively correspond to the maximum and minimum of p(x)/c, when p(x) is a cnoidal profile (92).]

The corresponding particle motion is given by eq. (81); see fig. 6 for a few examples. As is manifest there, the large-scale behaviour of x(t) is approximately linear in t, with a drift velocity given by eq. (83):

v_Drift = v + V,    (99)

with V the elliptic expression (97). The asymptotics of this expression at large |V| follow from the behaviour of Π(x|m) in that limit, and yield

v_Drift ∼ v/3  at large |V|.    (100)

This is the asymptotic formula announced above. Writing v_Drift = ∆φ/T, it indicates an average rotation angle of ∆φ ∼ ±2π/3 during each period. We stress that eq. (99) only holds in the region of amenable cnoidal waves, i.e. outside of the resonance wedge (95).
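For readers wishing to evaluate expressions like (97)-(98) numerically: SciPy has no direct routine for the complete elliptic integral of the third kind, but (assuming SciPy ≥ 1.8) it can be assembled from the Carlson symmetric forms. The helper below is a generic sketch of that assembly; the specific arguments entering (97) depend on conventions not reproduced here.

```python
from scipy.special import elliprf, elliprj, ellipk

# Complete elliptic integral of the third kind from Carlson forms:
#   Pi(n | m) = R_F(0, 1-m, 1) + (n/3) * R_J(0, 1-m, 1, 1-n).
def ellippi(n, m):
    return elliprf(0.0, 1.0 - m, 1.0) + (n/3.0)*elliprj(0.0, 1.0 - m, 1.0, 1.0 - n)

# Sanity check: Pi(0 | m) = K(m).
m = 0.9
print(ellippi(0.0, m), ellipk(m))       # the two values coincide
```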
In order to obtain a complete picture of v_Drift as a function of (m, V), throughout the entire cnoidal parameter space, we therefore need to study the equation of motion (2) in the resonance wedge. This will be done below, and the resulting final shape of v_Drift(m, V) is displayed in fig. 8. For now, we study (99) from the point of view of geometric phases.

Geometric phases of cnoidal waves. As explained in section 4.2, eq. (83) for the drift velocity can be written in a 'decomposed' form (89) that exhibits the separate contributions of dynamical, Berry and anomalous phases. We now list the values of these three terms for cnoidal waves in terms of parameters (m, V), it being understood that all formulas only hold outside of the resonance wedge (95); in the resulting expressions, K(m), E(m) and Π(x|m) are respectively complete elliptic integrals of the first, second and third kind, and k is the function of (m, V) written in eq. (96). It is not particularly illuminating to plot these velocities as functions of (m, V). Their overwhelmingly dominant feature is a divergence on the line V = (2 − m)/3, where k = 0. This divergence cancels out when the three velocities are added together, since the total drift velocity (99) is finite even when k = 0 (as it should be); see fig. 8 below. Note also that the Berry velocity (102) is the only one that vanishes on the line V = 0.

As shown in section 4.2, the coincidence between formulas (83) and (89) for the drift velocity hinges on the identity (86) satisfied by amenable travelling waves. We now verify this identity for cnoidal waves: using the profile (92) and the definition (71) of the constants A, B, one finds explicit expressions for these constants, from which the value of −B − Av follows. Upon plugging it into eq. (86) and using the value (96) of k, one encounters a formula that turns out to be a known, albeit somewhat obscure, identity arising in the Lamé band structure: it coincides with eq. (7.20) of [39] upon identifying the crystal momentum as q(E) ≡ (π/K(m))√(−6k/c) in terms of the uniform representative (96), along with e = V, e_1 = (2 − m)/3, e_2 = (2m − 1)/3 and e_3 = −(m + 1)/3.^21 Cnoidal waves thus satisfy eq. (86), as they should.

Figure 7: A few solutions of (2) when p(x, t) is a cnoidal wave (92) at c = 1, m = 0.9, V = 0.1 (left panel) and V = −1/3 (right panel). These parameters lie in the resonance wedge (95), so eq. (81) does not apply and the plotted values x(t) were obtained by numerical integration of (2). Note the monotonic behaviour of x(t)/t, radically different from the oscillating one in fig. 6. The rotation number ∆φ = ±2π is manifest.

Particle motion in the resonance wedge. The cnoidal waves whose parameters (m, V) belong to the resonance wedge (95) are not amenable [28], so the solution of the equation of motion (2) is no longer given by eq. (81). It is, nonetheless, easy to compute the particle drift velocity as defined in (3). Indeed, for a travelling wave p(x, t) = p(x − vt), the equation of motion (2) can be recast as the system (107), Ẋ = p(X) − v, in terms of X ≡ x − vt. Thus, if p(X) − v has roots (which precisely occurs in the resonance wedge), then at least one of them, say X*, is a stable fixed point of the system (107). It follows that X(t) → X* at late times, which is to say that x(t) ∼ X* + vt at large t > 0. This is manifest in fig. 7, where we plot a few solutions of (2) for a wave in the resonance wedge. As a result, the drift velocity (3) is v_Drift = v (eq. (108)). This finally justifies the name 'resonance wedge': for parameters (m, V) that satisfy (95), particle motion is 'locked' to the wave; it 'resonates'.
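The locking mechanism described in the last paragraph is easy to reproduce numerically. The sketch below is a stand-in experiment, not the paper's computation: it integrates the equation of motion ẋ = p(x − vt) for a toy 2π-periodic profile such that p(X) − v has roots, and checks that the late-time drift velocity approaches the wave velocity v, as in eq. (108):

```python
# Numerical check of resonant locking, x(t) ~ X* + v t, for a travelling wave.
import numpy as np
from scipy.integrate import solve_ivp

v = 0.3                                   # wave velocity (assumed)
p = lambda x: 0.25 + 0.5 * np.cos(x)      # toy profile; p - v changes sign

# Equation of motion (2): dx/dt = p(x - v t).
sol = solve_ivp(lambda t, x: p(x - v * t), (0.0, 500.0), [0.0],
                max_step=0.05, rtol=1e-8)
t, x = sol.t, sol.y[0]

# Estimate the drift velocity from the second half of the trajectory;
# in the resonant case it converges to v (here 0.3).
half = len(t) // 2
print("late-time drift velocity:", (x[-1] - x[half]) / (t[-1] - t[half]))
```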
This is akin to the SNIPER (saddle-node infinite-period) bifurcation of the Adler equation [29]. The resulting, complete function v_Drift(m, V), on the entire cnoidal parameter space, is shown in fig. 8. We stress again that the result (108) could not have been deduced from a rotation angle such as (63). If anything, upon declaring that (108) is to be written in the form v_Drift = ∆φ/T, one would find a (piecewise) constant value ∆φ = 2π sign(v) in the resonance wedge. This could not have emerged from the phase formula (63) since k, given by (96), is complex in the resonance wedge. Despite this, it is in fact possible to predict that ∆φ/(2π) must be an integer in the resonance wedge by using the symplectic arguments of section 2, the exact same line of thought that led to eq. (63). Indeed, we argued earlier that the equation of motion (2) is an Euler-Poincaré reconstruction equation for the Virasoro group, and this holds whether or not p(x, t) is amenable. Furthermore, when p(x, t) is periodic in time, it still remains true that the reconstructed path g_t ∈ Diff(S¹) satisfies ((g_0⁻¹ ∘ g_T) · k)(x) = k(x) for a suitable orbit representative k(x). In the case of non-amenable orbits, k(x) cannot be a constant, but the fact remains that g_0⁻¹ ∘ g_T belongs to its stabilizer. And now, the key argument: for non-amenable Virasoro orbits, the stabilizer is not (conjugate to) a group of rotations [18,24,25,27,34]. Instead, the stabilizer is a non-compact group, isomorphic to R (possibly up to further finite factors), and consists of circle diffeomorphisms whose rotation number vanishes modulo 2π. It thus follows that ∆φ, the rotation number of g_0⁻¹ ∘ g_T, equals 2πn for some integer n, which confirms the observed result (109). In that sense, the drift velocity (108) is a diagnostic of the fact that cnoidal profiles in the resonance wedge are not amenable [28].

Conclusion and outlook

The purpose of this paper has been to initiate a geometric study of nonlinear wave equations, such as KdV, Hunter-Saxton or Camassa-Holm, from a point of view that strongly relies on group theory and symplectic geometry. This follows a long tradition in mathematics and physics [13,14,16,20,40] (see also the very recent [41]), but it seems to have been somewhat overlooked in the mainstream physics literature despite the relevance of geometric objects, such as Berry phases, in a plethora of systems ranging from condensed matter to nonlinear dynamics (see e.g. the classic [42] or the more contemporary [43], and references therein). Our goal has been to start filling that gap. Specifically, we considered a spatially periodic KdV equation (1) and the equation of motion (2), seen as a model for a 'fluid particle' dragged along by the wave profile p(x, t). Using the fact that (2) effectively coincides with the equation for Euler-Poincaré reconstruction used in symplectic geometry, we predicted the value of the particle's drift velocity in terms of a Poincaré rotation number ∆φ. The latter turned out to be a sum of phases (6) that we wrote explicitly in eq. (63) and that crucially involves a Virasoro Berry phase in the sense of [1]. We then turned to travelling waves, for which the assumption of amenability yielded a striking simplification of the equation of motion (2). In fact, we showed that particle motion becomes integrable in that case, leading to the simple formula (83) for the drift velocity. An equivalent way to write that velocity in terms of geometric phases was displayed in eq.
(89), and we showed that the two expressions, while not manifestly identical, do in fact coincide thanks to the KdV equation. Finally, we applied our tools to cnoidal waves and exhibited 'orbital bifurcations' occurring at the boundaries of the resonance wedge (95), inside of which particle motion is locked to the wave. These bifurcations are a direct observation of the sharp change in nature of Virasoro orbits of cnoidal waves, as investigated in [28]. As stressed in section 3.3, the equation of motion (2) whose drift we computed does not quite coincide with the actual equation of motion for a fluid particle in shallow water. In particular, while conceptually similar, it is not clear whether the drift velocity studied here has anything to do with the standard notion of Stokes drift [31]. A natural question that follows from our work, therefore, is whether any actual experiment could exhibit the rotation we studied, and in particular Virasoro Berry phases. For example, it is almost trivially true that a quantum-mechanical particle on a circle, subjected to the periodic Hamiltonian would have a wavefunction that rotates by an average angle ∆φ during each period of the profile p(x, t). This example, however, is somewhat artificial, as one is merely restating our construction in quantum language. It would be much more satisfactory to find a classical mechanical system that naturally reproduces the equation of motion (2), for example a plasma in one dimension subjected to a suitable external magnetic field. We hope to address this issue elsewhere. Setting aside the issue of experimental signatures, one can think of numerous follow-ups of our work. For example, remaining within the confines of the KdV equation, it is natural to apply formula (63) to non-travelling waves, such as profiles containing multiple colliding cnoidal waves with a rational phase shift [44]. The phase shift being rational ensures that the profile as a whole is periodic in time. Indeed, in terms of the phase shift ∆θ (which has a priori nothing to do with ∆φ!), the profile satisfies for some quasi-period τ. If the phase shift is rational in the sense that ∆θ = 2πp/q, with p, q coprime integers, then the time period of the profile is T = qτ, and our results of section 3 apply. It would be illuminating to display the resulting Berry phase spectrum (as a function of the wave profile parameters), especially as one might hope to even extend the picture, formally, to non-periodic profiles (for which ∆θ is irrational). Other potential extensions of our paper include the quantum version of ∆φ, in the sense of the quantum KdV equation [45], and the effects of stochasticity. Regarding the latter, see e.g. [46] and references therein, or [47] for a language very close to ours. The symplectic approach followed in this paper should apply to a host of other nonlinear wave equations, and it would be interesting to investigate the analogue of our angle ∆φ (and its observable effects) in such setups. As examples, we already mentioned the Hunter-Saxton and Camassa-Holm equations [19], for which the Berry phase and anomalous phase of eq. (63) would remain unchanged. Perhaps more interestingly, one could also extend the picture to Lie-Poisson wave equations not based on the Virasoro group, such as Hirota-Satsuma dynamics [48]. The nonlinear Schrödinger equation is also, in effect, a Lie-Poisson equation, albeit one based on the loop group of SO(3) through its reformulation as a Landau-Lifschitz model [49].
We hope to contribute to some of these research avenues in the future.

Acknowledgments. ... a preliminary version of our results was presented. The work of B.O. was mostly carried out at ETH Zürich, where it was supported by the Swiss National Science Foundation and the NCCR SwissMAP; his current funding is the ANR grant TopO number ANR-17-CE30-0013-01. G.K. is a Research Associate with the Fonds de la Recherche Scientifique-FNRS (Belgium).

Appendix: Groups and geometry

In this appendix, we review some of the elementary material on Lie groups and symplectic geometry needed in section 2. For a much broader and more pedagogical introduction, we refer e.g. to [13,14,23].

Lie groups. Let G be a Lie group with Lie algebra g = T_I G; we denote elements of the former as f, g, etc. and those of the latter as ξ, ζ, etc., while I is the identity in G. The group acts on its algebra through the adjoint representation (113),^22 which for matrix groups boils down to Ad_g(X) = g X g⁻¹. The corresponding infinitesimal action, i.e. the adjoint representation of the Lie algebra, coincides with the Lie bracket (114), which for matrix groups is just a commutator. In the context of Lie-Poisson evolution equations (14), one thinks of the group manifold G as the space of classical configurations of a dynamical system, while the Lie algebra g consists of all infinitesimal motions (deformations) of G, such as angular velocities. Equivalently, G is the group of all possible changes of reference frames that a dynamical system is allowed to go through, while the adjoint representation (113) says how the angular velocity varies under changes of frames. For instance, the matrix group G = SO(3) of spatial rotations consists of all possible orientations of a rigid body (with respect to a reference frame whose origin lies at the body's centre of mass).

The Lie algebra g is a vector space whose dual g* consists of elements that we write as p, q, etc., each of which is a linear form on g, p : g → R : ξ ↦ ⟨p, ξ⟩. We refer to such maps as coadjoint vectors, and their transformation law under the action of the group G is the coadjoint representation defined in eq. (7). This definition is dual to (113) and ensures that the pairing (115) is invariant under 'changes of reference frames' in the sense that ⟨Ad*_g(p), Ad_g(ξ)⟩ = ⟨p, ξ⟩. Analogously to (114), the coadjoint representation of the Lie algebra is

ad*_ξ(p) ≡ (d/dt)|_{t=0} Ad*_{e^{tξ}}(p) = −p ∘ ad_ξ.

[Footnote 22: Here e^{tξ} is the exponential of tξ, such that e^0 = I.]

As mentioned in eq. (8), this is to say that ⟨ad*_ξ p, ζ⟩ = −⟨p, [ξ, ζ]⟩ for all Lie algebra elements ξ, ζ and any coadjoint vector p. For any algebra admitting a non-degenerate invariant bilinear form, the adjoint and coadjoint representations are equivalent and their distinction is inconsequential. However, for the algebra of vector fields that are needed for KdV, adjoint and coadjoint representations differ, which is why the definitions (116)-(117) are important for our purposes.

The phase space T*G. What now follows is a technical preliminary to section 2.2. Given a Lie group G, we show that its cotangent bundle T*G = ∪_{g∈G} T*_g G is a trivial bundle. This will allow us to think of the product G × g* as a symplectic manifold.

Lemma: T*G is diffeomorphic to a direct product G × g*.

Proof: The key point is that group translations can be used to map any tangent space T_g G on the Lie algebra g = T_I G. Accordingly, any element of T*_g G, i.e. a map u : T_g G → R, can be turned into a map from g to R, i.e. an element of g*.
Concretely, let us write right translations in G as R_g : G → G : f ↦ fg. Then define a map

Φ : T*G → G × g* : (g, u) ↦ (g, (R_g)* u|_I),   (118)

where (R_g)* u|_I is a coadjoint vector such that ⟨(R_g)* u|_I, ξ⟩ = ⟨u, d(R_g)_I ξ⟩ for any ξ ∈ g. The map (118) is a smooth bijection whose inverse is also smooth, so it is a diffeomorphism. This concludes the proof.

Having established that T*G ≅ G × g* is a trivial bundle, we now look at its symplectic form. First recall how the Liouville symplectic form is built on T*G: one has a projection π : T*G → G : (g, u) ↦ g whose differential at (g, u) projects any tangent vector of T*G on its part tangent to G alone:

dπ_(g,u) : T_(g,u)T*G → T_g G : (V, 𝒱) ↦ V.   (120)

The Liouville one-form on T*G then reads Ã_(g,u) ≡ u ∘ dπ_(g,u), i.e. Ã_(g,u)(V, 𝒱) = ⟨u, V⟩, and the symplectic form of T*G is its exterior derivative: ω̃ = −dÃ. We put tildes on these objects because we are eventually interested in the counterpart of (121) in G × g*, which will be tilde-free. To find this counterpart, we pull back the Liouville one-form thanks to the inverse diffeomorphism (119), finding

((Φ⁻¹)* Ã)_(g,p)(V, X) = Ã_(g,(R_g⁻¹)* p)(d(Φ⁻¹)_(g,p)(V, X)).   (122)

Thus, if we introduce the right Maurer-Cartan form d(R_{g⁻¹})_g ≡ dg g⁻¹, and if we denote the (pulled-back) Liouville one-form on G × g* as A ≡ (Φ⁻¹)* Ã, then eq. (123) yields

A_(g,p) = (⟨p, dg g⁻¹⟩, 0).

This is the result announced in eq. (16).
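As a concrete check of the definitions reviewed in this appendix, the sketch below realises them for the matrix group SO(3), taking the pairing ⟨p, ξ⟩ to be tr(pᵀξ). Because this is a non-degenerate invariant form, Ad* is realised by the same conjugation formula as Ad, in line with the remark above that adjoint and coadjoint representations are then equivalent; the random seed and test vectors are incidental choices:

```python
# Verify <Ad*_g p, Ad_g xi> = <p, xi> and <ad*_xi p, zeta> = -<p, [xi, zeta]>
# for G = SO(3), with the trace pairing on antisymmetric matrices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def hat(w):
    """Map a 3-vector to an so(3) element (antisymmetric 3x3 matrix)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

xi, zeta, p = (hat(rng.normal(size=3)) for _ in range(3))
g = expm(hat(rng.normal(size=3)))              # group element g = e^omega

Ad = lambda g, X: g @ X @ np.linalg.inv(g)     # Ad_g(X) = g X g^{-1}
ad = lambda X, Y: X @ Y - Y @ X                # ad_xi(zeta) = [xi, zeta]
pair = lambda p, X: np.trace(p.T @ X)          # invariant bilinear pairing

# Invariance of the pairing under 'changes of reference frames':
print(np.isclose(pair(Ad(g, p), Ad(g, xi)), pair(p, xi)))          # True

# Infinitesimal version: with this pairing, ad*_xi(p) = [xi, p]:
print(np.isclose(pair(ad(xi, p), zeta), -pair(p, ad(xi, zeta))))   # True
```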
Knowledge and Attitudes of General Practitioners and Sexual Health Care Professionals Regarding Human Papillomavirus Vaccination for Young Men Who Have Sex with Men

Men who have sex with men (MSM) may be at higher risk for human papillomavirus (HPV)-associated cancers. Healthcare professionals' recommendations can affect HPV vaccination uptake. Since 2016, MSM up to 45 years have been offered HPV vaccination at genitourinary medicine (GUM) clinics in a pilot programme, and primary care was recommended as a setting for opportunistic vaccination. Vaccination prior to potential exposure to the virus (i.e., sexual debut) is likely to be most efficacious, therefore a focus on young MSM (YMSM) is important. This study aimed to explore and compare the knowledge and attitudes of UK General Practitioners (GPs) and sexual healthcare professionals (SHCPs) regarding HPV vaccination for YMSM (age 16-24). A cross-sectional study using an online questionnaire examined 38 GPs and 49 SHCPs, including 59 (67.82%) females with a mean age of 40.71 years. Twenty-two participants (20 SHCPs, p < 0.001) had vaccinated a YMSM patient against HPV. GPs' lack of time (25/38, 65.79%) and SHCP staff availability (27/49, 55.10%) were the main reported factors preventing YMSM HPV vaccination. GPs were less likely than SHCPs to believe there was sufficient evidence for vaccinating YMSM (OR = 0.02, 95% CI = 0.01, 0.47); less likely to have skills to identify YMSM who may benefit from vaccination (OR = 0.03, 95% CI = 0.01, 0.15); and less confident recommending YMSM vaccination (OR = 0.01, 95% CI = 0.00, 0.01). GPs appear to have different knowledge, attitudes, and skills regarding YMSM HPV vaccination when compared to SHCPs.

Introduction

Human papillomavirus (HPV) vaccination of young men who have sex with men (YMSM) (age 16-24) potentially has important implications for cancer prevention worldwide. HPV is one of the most common sexually transmitted infections [1]. Over 70% of MSM are carriers of HPV [2,3]. HPV infection is associated with anogenital and oropharyngeal cancers [4]. Anal cancer incidence has increased rapidly in recent years [5], with approximately 95% of anal cancers caused by HPV [6]. MSM carry a disproportionate burden of anal cancer (15:1 compared with heterosexual men) [7]. Relative to human immunodeficiency virus (HIV)-negative men or women, HIV-negative MSM have a 4-fold higher risk of developing anal cancer, and HIV-positive MSM have up to an 80-fold higher risk [8]. Approximately 72% of oropharyngeal cancer cases in the United States from 2008-2012 were attributable to HPV, with an annual incidence rate of 7.6 per 100,000 population [9]. Prevention of HPV-related disease is a key public health issue. The United Kingdom's (UK) current strategy is to offer publicly funded vaccination only to girls aged 12-14 (prior to the legal age of consent at 16 years), which is intended to protect males through herd immunity. This decision was made on the basis of cost effectiveness [10], although more recent studies have called this into question [11]. This benefit does not extend to MSM. Public Health England estimates that 3.2% of the UK population are lesbian, gay, or bisexual [12], which suggests that almost one million UK men may not be protected from HPV-associated anogenital warts and cancers.
In November 2015, the Joint Committee on Vaccination and Immunisation (JCVI) recommended the HPV vaccination programme be extended to MSM aged up to 45 years via genitourinary medicine (GUM) clinics, HIV clinics, or opportunistically through general practice [13]. Vaccination is likely to be most efficacious before exposure to HPV [14], however the majority of men do not identify as gay or bisexual before they engage in sexual contact with other men [15], and many men do not disclose their sexual identity and/or behaviour to their physician. In an attempt to address this issue, UK healthcare professionals (HCPs) have recently been issued guidance from NHS England recommending that they enquire about a patient's sexual orientation at "every face to face contact with the patient, where no record of this data already exists" [16]. Such policies have the potential to exacerbate stigmatisation of LGBTQ patients accessing healthcare services if they feel they will be asked to disclose their sexual orientation every time they access a service, whether it is relevant to their presenting complaint or not [17]. Best practice guidance for discussing sexual behaviour has been produced by UK charities, such as Stonewall [18]. It is crucial to engage widely with HCPs expected to vaccinate YMSM against HPV. Patients and parents of younger children place a strong emphasis on the recommendations (or otherwise) of an HCP in decision-making regarding vaccinations [19][20][21]. GPs will arguably have more opportunity to vaccinate men before sexual debut compared to GUM clinics, given that men are more likely to attend a GUM clinic after their first sexual encounter [15]. It is also important to identify appropriate strategies to support any new HPV vaccination programmes in the future and highlight any barriers and facilitators to the programmes' effective implementation. In a survey of 131 sexual healthcare professionals (SHCPs), 95% of clinicians supported a targeted HPV vaccination programme in MSM within GUM services but expressed concern that this strategy alone was too late and too limited for most MSM [22]. This study was specific to clinicians with expertise in sexual health, and did not include other HCPs who may be involved in vaccination, such as GPs. It was also conducted prior to the recent JCVI recommendation. The aim of this study is to understand and compare the knowledge, perceptions, and attitudes of UK GPs and SHCPs regarding HPV vaccination for YMSM.

Materials and Methods

An exploratory cross-sectional survey of GPs and SHCPs was conducted as part of a mixed-methods study. SHCPs included GUM consultants, doctors-in-training, and nurses working in sexual health clinics. Between September 2016 and January 2017, convenience sampling was used to recruit participants through an email invitation distributed by the Royal College of General Practitioners (RCGP) and the British Association for Sexual Health and HIV (BASHH). The email invitation included a link to the online survey and a participant information sheet which explained that participation implied consent. At the end of the anonymous survey, GPs were invited to provide contact details if they wished to take part in a follow-up interview (findings not yet published). Similar to Nadarzynski et al. [23], participants were asked to distribute the e-survey to co-workers to increase the number of responses using snowballing techniques. The questionnaire aimed to capture knowledge and attitudes towards HPV vaccination for YMSM, as well as any barriers or facilitators.
YMSM were chosen as the focus for this study as greater understanding of factors affecting HPV vaccination in this age group could improve the efficacy of HPV vaccination programmes for MSM by targeting younger men before they engage in sexual activity. Questionnaire content was informed by a study steering group comprising two lesbian, gay, bisexual, transgender and queer (LGBTQ) group stakeholders and three MSM sexual health researchers from England and Northern Ireland, and piloted with HCPs prior to wider distribution. The questionnaire was adapted from a HCP HPV attitude scale developed by Nadarzynski et al. [23], and a HCP pre-exposure prophylaxis (PrEP) knowledge and attitude scale [24]. Question items focused on the barriers and facilitators to vaccinating YMSM provided pre-specified options based on existing literature, with an option for free text responses. Individual questionnaire items used either binary ("yes" or "no") or ordinal ("high", "medium", "low") response measures for knowledge questions. Basic demographic information, including participant age, gender, clinical role, and years of experience, was gathered. Descriptive statistics summarised demographic, attitude, and knowledge data. Fisher's exact testing and unpaired t-tests were utilised for comparison of demographics. Due to null responses for some categories, sexual orientation was converted to a binary variable ("heterosexual or straight" vs. "gay, lesbian, or bisexual") for the analysis. Ordinal knowledge variables were converted to binary responses ("high/medium" vs. "low/none"). Attitudinal responses were converted into positive ("yes") or negative ("unsure" or "no") binary variables. Simple and multiple logistic regression techniques were utilised to compare the responses of GPs and SHCPs to the knowledge and attitude questions. Adjusted analysis controlled for the effects of participant age, gender, sexual orientation, and years of experience. Hosmer-Lemeshow goodness-of-fit testing was performed to assess the accuracy of multiple logistic regression models. All analysis was conducted using Stata version 14.

Results

In total, 87 participants completed the survey. Demographic data was incomplete for three SHCPs, but overall individual question response rates were high (range: 94.25-100%). Thirty-eight GPs and 49 SHCPs (35 GUM specialists, 8 specialist nurses, 3 hospital sexual health specialists, and 3 other) completed the questionnaire. Participants included 59 females (67.82%), with a mean age of 40.71 years, and a median 14 years of experience (IQR (interquartile range) 8, 24). There were no significant differences between GPs and SHCPs. Further demographics are shown in Table 1. SHCPs were more likely than GPs to have vaccinated a YMSM patient against HPV (20/49 (47.83%) vs. 2/38 (5.6%), p < 0.001), more likely to be aware of the recent JCVI recommendations (adjusted OR = 0.03, 95% CI = 0.01, 0.11), and more likely to report they knew enough to have an informed discussion with MSM about HPV vaccination (adjusted OR = 0.04, 95% CI = 0.01, 0.14). Thirty GPs (78.95%) stated they have a "low level of knowledge" or "no knowledge" of HPV vaccination for YMSM, compared to 6 (12.24%) SHCPs (adjusted OR = 0.02, 95% CI = 0.00, 0.10), but there were no significant differences in knowledge ratings regarding overall HPV knowledge or HPV in females (see Table 2). GPs' attitudes towards HPV vaccination in YMSM differed from SHCPs' (see Table 3).
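To make the adjusted analysis concrete, the sketch below reproduces the shape of the computation described in the Methods, a multiple logistic regression controlling for age, gender, sexual orientation, and years of experience, on synthetic data. The variable names are hypothetical stand-ins rather than the study's dataset fields, and the original analysis was run in Stata rather than Python:

```python
# Illustrative adjusted odds-ratio computation with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 87
df = pd.DataFrame({
    "confident":  rng.integers(0, 2, n),   # binary attitude outcome
    "is_gp":      rng.integers(0, 2, n),   # GP (1) vs. SHCP (0)
    "age":        rng.normal(40.7, 9.0, n),
    "female":     rng.integers(0, 2, n),
    "lgb":        rng.integers(0, 2, n),   # binary sexual orientation
    "experience": rng.integers(1, 35, n),
})

fit = smf.logit("confident ~ is_gp + age + female + lgb + experience",
                data=df).fit(disp=False)

# Exponentiated coefficients and confidence limits give adjusted ORs:
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["adjusted OR", "2.5%", "97.5%"]
print(or_table.loc["is_gp"])
```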
GPs were less likely to agree that HPV vaccination should be widely available for both genders (adjusted OR = 0.30, 95% CI = 0.09, 0.98) or MSM (adjusted OR = 0.30, 95% CI = 0.09, 0.98) based on current evidence. Even if a gender-neutral programme existed in the UK, GPs were less likely to recommend HPV vaccination to MSM (adjusted OR = 0.33, 95% CI = 0.13, 0.88) and they were less likely to believe the majority of YMSM would be willing to receive the vaccine (adjusted OR = 0.13, 95% CI = 0.04, 0.41). Paradoxically, there was no significant difference in the numbers of GPs who would recommend the HPV vaccination to their own son (33/38, 86.84%) compared to SHCPs (49/49, 100%, p = 0.36). When asked about whether they ask patients about sexual orientation "if it is relevant to the consultation", there was no difference between the responses of GPs (31/38, 81.58%) and SHCPs (40/46, 86.96%, p = 0.57). GPs were much less likely to believe that a young person would disclose their sexual orientation to them (adjusted OR = 0.17, 95% CI = 0.06, 0.50), less confident that they had the skills to identify YMSM who may benefit from HPV vaccination (adjusted OR = 0.03, 95% CI = 0.01, 0.15), and they reported lower levels of confidence in recommending HPV vaccination for YMSM (adjusted OR = 0.04, 95% CI = 0.01, 0.18). GPs and SHCPs reported different factors that would most affect their ability to deliver HPV vaccination for young MSM (see Table 4). GPs highlighted "no time" as a key limiting factor (25/38, 65.79%), while SHCPs felt "staff availability" (27/49, 55.10%) was the most important limitation. The majority of GPs (28/38, 73.68%) felt that additional training was needed to support HPV vaccination for MSM in primary care, while SHCPs felt computer prompts would be most useful (18/35, 51.43%) (see Table 5).

Discussion

This is the first UK-based study to examine the knowledge, perceptions, and attitudes of GPs and SHCPs since the JCVI updated its recommendations to include offering HPV vaccination opportunistically in GUM and primary care for MSM under 45 years. The survey findings suggest that compared to SHCPs, GPs were less aware of the evidence for HPV vaccination for MSM, and reported less confidence in recommending HPV vaccination to YMSM. GPs felt that lack of time and training were the main barriers to HPV vaccination for YMSM, whereas SHCPs had greater concerns about vaccine availability. A similar survey targeting SHCPs was conducted prior to the JCVI recommendation of a targeted vaccination programme for MSM [23]. SHCP attitudes around perceived value, health behaviours, and capabilities are consistent across the two studies, and there are no clear changes following the JCVI recommendation. This is probably not surprising given their clinical interest in preventing the spread of HPV and exposure to MSM with sexual health problems in clinical practice. Interestingly, 74% of respondents in that study "agreed" or "strongly agreed" that "HPV vaccination should be offered to MSM in alternative settings such as GP practices or pharmacies" [23]. Disparities in knowledge and attitudes towards HPV vaccination for YMSM between SHCPs and GPs, as suggested in this study's findings, may lead to differences in treatment and HPV prevention depending on where YMSM seek sexual health advice.
Our findings indicate GPs may have a low level of knowledge regarding HPV vaccination among young MSM; implementing a targeted HPV vaccination programme for YMSM that involves GPs and reaches men prior to exposure to HPV, so as to maximise its cancer prevention potential, would need investment in clinician education, training, and support. Studies in the United States of America explored reasons behind the low uptake of HPV vaccination for adolescent boys, where access varies on a state-by-state basis. In a national survey, Gilkey et al. found that paediatricians and family physicians delivered their recommendations for HPV vaccination in children inconsistently, sometimes not in a timely manner or with strong endorsement [19,25]. Alexander et al. also found variation in physicians' recommendations of the HPV vaccine to young males, citing the "newness" and sexual nature of the vaccine as barriers [26]. The study authors suggest American family physicians do not feel they have the time or knowledge to counsel YMSM about the vaccine, and do not believe they see YMSM frequently. These findings are consistent with our results, providing further evidence of the need for extra support and training for GPs to help them identify YMSM and raise their awareness about the potential health benefits of HPV vaccination in this high-risk group. This study utilised an adapted version of a validated survey instrument that has been delivered to SHCPs previously. There was minimal missing questionnaire data. Obtaining and comparing GP and SHCP knowledge, perceptions, and attitudes towards HPV vaccination for MSM (including young MSM) has proved insightful, given the JCVI recommendations that both settings could be used to deliver the vaccine. The lower levels of confidence and knowledge among GPs may help to explain the low uptake of HPV vaccination for MSM in the current pilot programme to date [27]. There are a number of limitations that should be considered in the interpretation of this study's findings. The cross-sectional design, convenience sampling approach, and exploratory nature of the study, using pre-determined survey statements, limit the ability to draw sound inferences about the reasons behind participant responses. The sample size is small, and while a response rate cannot be accurately calculated, it is presumably quite poor considering the RCGP has over 50,000 members and BASHH has over 1000 members (some of whom are not based in the UK). There were no incentives offered for participation, a practice which is known to raise participation rates in similar studies. Interviews with GPs who participated in this survey will provide more in-depth insight into their views and opinions regarding HPV vaccination for YMSM.

Conclusions

GPs can potentially play a crucial role in the prevention of HPV-related diseases in YMSM. In order to implement the JCVI recommendation regarding HPV vaccination for MSM most effectively, YMSM should be identified early and offered the HPV vaccine with clear information. However, barriers to such implementation in primary care appear to still remain. If the findings of this exploratory work were confirmed in future research, interventions could be developed to raise awareness and educate GPs about the benefits of HPV vaccination for MSM, and to improve the skills of GPs in sensitively eliciting a patient's sexual orientation to benefit the consultation and the patient-doctor relationship.
There are also other potential settings for delivering HPV vaccination to YMSM to improve access, such as pharmacies and schools, which have not yet been explored.
Individual and Community-Contextual Level Factors Associated With Wellbeing Among Older Adults in Rural Zambia

Objective: This article aims to identify individual and community-contextual level factors associated with the wellbeing of older adults (50 years and older) in rural Zambia. Methods: Data from the nationally representative 2015 Living Conditions Monitoring Survey (LCMS) was used. Employing multilevel mixed-effects models, the individual and community-contextual factors associated with wellbeing were determined. Results: Overall, 31.7% of rural older adults perceived their wellbeing as good. Both individual and community-contextual level factors are associated with the wellbeing of older adults in rural communities. At the individual level, wellbeing was associated with higher education attainment. Community-contextual factors significantly associated with wellbeing included improved housing, access to piped tap water within the premises, and owning charcoal or having income to purchase firewood. Conclusion: The findings foreground the imperative to analyse both individual and community-contextual level factors of wellbeing to generate and present evidence for investments in education across the life course and for the development of infrastructure towards increasing the wellbeing of rural older adults. Additionally, the results provide a basis for planning by devising policies and programmes for older people to thrive and for no one to be left behind regardless of the setting.

INTRODUCTION

As the global population ages, efforts to ensure older people's wellbeing and quality of life are becoming more prominent [1]. While rural areas worldwide face diverse and unique challenges providing social services due to resource constraints, geographical location, and diversity in cultural and social settings, the situation is more pronounced in developing countries [2,3]. Developing countries will generally experience faster growth in absolute numbers of older people than developed countries [4]. For instance, the Sub-Saharan Africa (SSA) region is, population-wise, the youngest region [5], resulting in low prioritisation and implementation of ageing issues in national policies [6]. The region will experience the fastest growth rate in the absolute number of older people compared to any other region due to past fertility patterns and the current young age structure [7]. It is estimated to triple from 46 million in 2015 to 161 million by 2050 [8,9]. Among SSA countries, Zambia has a young population with about 79% (15,570,950) under 35 years. The proportion of the population aged 50 years and over has steadily increased, averaging 8% (1,673,149) in 2022 and projected to grow to about 10% in 2035 [10,11].
The 2022 Zambia Census of Population and Housing estimates that 6 out of 10 people live in rural areas [11], with most older people residing in rural areas where 79% of the general population is poor [12]. The rapid growth of the ageing population and the growing number of older people living in rural communities raise concerns about their socio-economic wellbeing, health and social care, the type of support available and access to daily living needs such as food, housing, energy and water to support their wellbeing [4,13]. Limited infrastructure, economic constraints, changing social dynamics and cultural norms, coupled with persistent policy gaps, pose challenges for ageing well in rural communities [14,15]. Rurality as such, and the ageing processes associated with such settings, make them contested spaces at the dynamic nexus of older people's active and passive interactions with existing and potential community-contextual characteristics [14], impacting efforts towards the attainment of Sustainable Development Goals (SDGs), particularly those relating to health and social wellbeing.

The World Health Organisation (WHO) defines wellbeing as "a state of complete physical, mental, and social wellbeing, and not merely the absence of disease or infirmity" [16]. The WHO policy-oriented definition embodies aspects related to individual factors (e.g., health, education), and also community-contextual factors (e.g., access to services, general living conditions) [16,17], including the development and maintenance of positive interactions with local communities and contexts [18]. In its call for action to improve the wellbeing of older persons, the United Nations Decade of Healthy Ageing (2021-2030) positions communities as particularly important as they foster the abilities of older people by creating age-friendly environments that are good places to "grow, live, work, play, and age" [16]. We use community-contextual level factors to describe the tangible aspects of rural settings within which ageing and wellbeing are influenced.

There is growing interest in older people living in rural and remote areas as these locales face unique challenges and opportunities that affect their general wellbeing [2,19]. Despite the often-perceived serenity of rural communities with strong social bonds and networks [20] as a distinctive feature, these areas generally have an older demographic profile with limited supportive services, often described as age-unfriendly resource-vulnerable settings [21]. Ageing in SSA rural communities particularly presents unprecedented socio-economic, cultural, structural, and public health challenges because of weak or non-existent policy frameworks on ageing [22]. Rural areas in Zambia face disproportionately increased demands and associated costs in delivering health and social care services because of accessibility issues due to inadequate infrastructure and service limitations [23]. Rural communities tend to be geographically isolated due to a lack of investment in public transport and poor infrastructure to host and deliver essential services, in addition to low educational attainment among older adults [24] and high rural poverty [25]. The interplay of these factors in rural settings creates a challenging environment for older people's wellbeing. For ageing well in rural areas, Bosch-Farré et al.
identify eight elements, namely: health, information, practical assistance, financial conditions, physical and mental activity, the company of friends and family, transport and safety [26]. Community-contextual characteristics for this article include environmental factors, accessibility of health and social services and the quality of available infrastructure [15]. The wellbeing of older Zambians also involves community support and care, anchored in the intergenerational extended family [27]. However, the family system is in flux [28,29]: a dynamic compounded by the impact of HIV and AIDS, with a significant number of orphans left under the care of older people with no steady income to support themselves and their dependents [30,31].

A large body of literature has emerged on rural ageing, health systems, and economic and social implications in Europe and North America [32,33], but much less has emerged about the factors of rural ageing and wellbeing in the least developed countries [34,35]. The literature, therefore, broadly points to the inadequacy of community-context related factors in the analysis of wellbeing within rural settings. The identified gap beckons scholars to move beyond a monolithic analysis of individual factors disaggregated by the blanket clustering of settings (broad rural or urban categorisations) towards an analysis of context-specific factors associated with the settings within which older persons live and through which they experience ageing. This dynamic interplay between older adults and relevant community-context characteristics requires further analysis to identify factors associated with older adults' wellbeing. Such analysis considers complexity, here viewed through the lens of a critical realist approach that seeks to understand and explain complex relationships that underlie the social world and society's perceived knowledge of it. Understanding the individual and community-contextual factors associated with the wellbeing of older people within the dynamic interplay with rural contexts provides an opportunity to promote older people's wellbeing, thereby helping attain the goals of the 2030 Decade of Healthy Ageing [16], the Madrid Plan of Action [36], and the AU Policy Framework and Plan of Action on Ageing (2022), as well as contributing to the rural ageing agenda as proposed by the Age-friendly cities/communities Framework [37].

This paper presents the individual socio-economic conditions of rural older people and the rural community-contextual factors that influence the wellbeing of older people (50 years and older) in rural Zambia.

Data Source and Population

The data analysed in this study are from the 2015 LCMS, a nationally representative cross-sectional population-based household survey. The 2015 LCMS is the seventh wave in the series. Previous studies were conducted in 1996, 1998, 2002/2003, 2004, 2006, and 2010. The main aim of the LCMS is to monitor and highlight the living conditions of people. The LCMS collects information on the general living conditions, household income and expenditure, food security and coping strategies, economic activities, education attainment and health status of household members, housing conditions, as well as access to community-based facilities and services such as health facilities, banks and transport [24].
The 2015 LCMS covered 12,251 households in 664 randomly selected enumeration areas (EAs) across the ten provinces of Zambia. In the case of rural EAs, households were listed and stratified according to the scale of their agricultural activity areas (farming blocks as a way of demarcation typical for rural settings) [20]. Therefore, four explicit strata were created at the second sampling stage in each rural EA: the Small-Scale Agricultural Stratum (SSAS), the Medium-Scale Agricultural Stratum (MSAS), the Large-Scale Agricultural Stratum (LSAS) and the Non-Agricultural Stratum (NAS). Seven, 5, and 3 households were selected from the SSAS, MSAS and NAS, respectively. In each rural EA, a minimum of 15 households were selected, excluding large-scale agricultural households.

Measures

The outcome variable (wellbeing) was computed as a composite variable from four variables assessing access to amenities (facilities) in rural communities and self-assessed poverty; self-assessed poverty was measured with three response categories: non-poor, moderately poor and poor. In assessing access to facilities, respondents were asked if they have a facility within the community, if they have used it in the last 12 months, and how far this resource is from the village. This analysis used these measures of self-assessed poverty and access to facilities because they provided a good indication of life satisfaction and the general living conditions of older people in rural Zambia. Figure 1 shows the summary classification of variables used in this study.

To assess wellbeing, a discrete binary variable was coded as (1) if the respondent residing in the rural area described his/her household as non-poor, has the facility within the community, has used the facility in the last 12 months, and the facility is within a 5 km radius of the village (community); otherwise, (0) was used.

The explanatory variables were categorised into two (2) broad categories: individual and community-contextual variables. Individual-level variables included the socioeconomic and demographic characteristics of older people, such as sex and age. The age of the respondents was categorised into intervals of 50-64, 65-74, and 75+. The open-ended 75+ category was used because there were few older people aged over 90 years. The level of education was categorised as 1 = primary education, 2 = secondary education, 3 = postsecondary education, and older people's marital status was coded into four categories: 1 = single, 2 = married/living with a partner, 3 = divorced/separated and 4 = widowed. General health was assessed by whether an older person was ill or injured in the 7 days before the survey, and by the number of meals per day.

Community-contextual variables included variables that described older people's housing conditions and the type of material used for the walls, roofs, and floors. Housing variables were identified to describe the general living conditions or settings of older people. Four categorical variables were used: one variable described the type of dwelling (housing), and three variables described the materials used for walls, roofs, and floors. Similarly, access to water, type of toilet facility (sanitation) and the type of energy for cooking and lighting were used to describe further community-level elements that support older people's wellbeing at the household level. Whether the house was connected to electricity was also included in the analysis. All these variables were categorical.
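For concreteness, the coding rule for the composite binary wellbeing outcome described above can be expressed as in the sketch below; the column names are hypothetical stand-ins for the corresponding LCMS variables:

```python
# Construct the composite wellbeing indicator from its four components.
import pandas as pd

df = pd.DataFrame({
    "self_assessed_poverty": ["non-poor", "poor", "non-poor", "moderately poor"],
    "facility_in_community": [1, 1, 1, 0],
    "used_facility_12m":     [1, 0, 1, 0],
    "facility_within_5km":   [1, 1, 1, 0],
})

# Wellbeing = 1 only if all four conditions hold, 0 otherwise.
df["wellbeing"] = (
    (df["self_assessed_poverty"] == "non-poor")
    & (df["facility_in_community"] == 1)
    & (df["used_facility_12m"] == 1)
    & (df["facility_within_5km"] == 1)
).astype(int)

print(df["wellbeing"].tolist())   # [1, 0, 1, 0]
```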
Statistical Analysis

The study analysis was performed in two steps. The first step involved descriptive and bivariate analysis describing older people's wellbeing by selected explanatory characteristics (individual, household, and community characteristics). The second step involved multilevel regression modelling to measure the effect on the wellbeing of older people, first of individual characteristics: age, education attainment, morbidity (sickness); and second of community-contextual characteristics: type of dwelling, materials used for roof, walls and floor, source of water, and type of energy for cooking and lighting. Adjusted odds ratios (AOR) and 95% confidence intervals were used to report results. Multilevel regression was necessary because of the hierarchical nature of the data, which may violate the important assumption of independence of the residuals [38] if ordinary logistic regression were used, and may obscure factors of wellbeing that result from the hierarchical structure of older adults living in rural communities. Figure 2 shows the hierarchical data structure, in which older people (N) (the lower-level units) are nested in districts (K) (the higher-level units).

Figure 2 shows the data has a natural nested structure, where older people are nested in districts. The district was used as a unit of analysis because services are designed to cover the administrative level of the district. As such, all EA-level data were pooled into the districts to which they belong.

A two-level multilevel analysis was used to examine the influence of individual and community-contextual factors on the wellbeing of older people. Older people (individual participants) constitute level 1. Older people are nested in districts, which constitute level 2. In this analysis, districts are a level rather than a predictor/variable. On the other hand, variables such as education (no education, primary, and secondary level), marital status, type of housing, and water source are factors, since their categories are both non-random and theoretically meaningful.

Multilevel regression analysis results were obtained using four (4) models. The null model (empty) was fitted without explanatory variables to predict the random variability of the intercept and show the total variance in the wellbeing of older rural people. Model 1 examined the effects of individual-level characteristics of older adults on wellbeing. Model 2 examined the effects of community contextual-level characteristics, and Model 3 examined the combined effects of individual and community contextual-level characteristics, with significance assessed at the 95% confidence level. The intraclass correlation (ICC) for each model was calculated to explain the proportion of variation attributable to the higher level and to compare models. The Proportional Change in Variance (PCV) was also calculated for each model relative to the empty model, to show the power of the factors in the models in explaining the outcome variable.

Only significant variables from the bivariate and correlation analysis using Pearson's chi-square test (p < 0.05) were added to the models. All analyses were conducted using Stata software version 14.0.
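For reference, in a two-level random-intercept logistic model the ICC is conventionally computed with the latent residual variance π²/3 implied by the logit link, and the PCV compares each model's higher-level variance to the null model's. A minimal sketch, using the null-model district variance of 0.66 reported in the Results (the second variance value is an invented example):

```python
# ICC and PCV for a two-level random-intercept logistic model.
import math

def icc(var_higher):
    """Share of latent variance at the higher (district) level."""
    return var_higher / (var_higher + math.pi**2 / 3)

def pcv(var_null, var_model):
    """Proportional change in variance relative to the null model."""
    return (var_null - var_model) / var_null

var_null = 0.66                      # district variance in the null model
print(round(icc(var_null), 2))       # 0.17, the 17% reported below

# A model whose district variance dropped to, say, 0.55 would give:
print(round(pcv(var_null, 0.55), 2)) # ~0.17, i.e. a PCV of about 17%
```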
Characteristics

Data for a total of 14,531 older people were captured for this analysis. In this case, 70 rural districts out of the total of 116 districts were included. The mean number of older people per rural district (n = 70) was 208, ranging from 34 to 663. Good wellbeing was experienced among 31.7% (95% CI: 30.739, 32.661) of older people (Table 2). Access to community facilities in rural areas was very low. Table 1 shows that only 15% and 12% of older people had used a facility and had a facility within a 5 km radius of the community (district), respectively. The average age of older people in the study was 62 (SD = 9.5), with the majority (63%) between 50-64 years. About 71% of older people were married or living with a partner, and 20% and 8% were widowed and divorced or separated, respectively. More than half of rural older people (58%) had a primary level of education, and 1 in 50 had a higher level of education. Among the total number of older people, the prevalence of morbidity in the last 7 days before the survey was 59%. There were significant relationships between wellbeing and gender (p < 0.01), level of education (p < 0.001), marital status (p < 0.001) and morbidity prevalence in the last 7 days before the survey (p < 0.05) (Table 2).

About half of older adults (49%) lived in traditional housing, with more than half of housing units (55%) using grass or leaves as roofing material (thatch), and about 4 in every 10 older adults (39%) living in housing units constructed with mud bricks (Table 2). Concerning energy for cooking and lighting, only 2% of older adults in rural areas reported that their houses were connected to electricity, about nine in ten older adults (88%) collected firewood for cooking, and more than two-thirds (72%) used a hand-held torch for lighting. Regarding the type of toilet facilities, 53% were using a pit latrine (toilet) without a slab. About one-third (35%) of older adults accessed water from boreholes, 28% from unprotected wells and 18% from local water sources (e.g., rivers, lakes, streams, dams, rainwater) (Table 3). There were significant differences in wellbeing in relation to: 1) the type of housing, 2) the type of materials used for the roofs, walls and floors, 3) the main source of water, and 4) the energy source for cooking and lighting (Table 2).

Table 3 shows the multilevel mixed-effect results of individual and contextual factors associated with the wellbeing of older adults in rural areas. In the null model (Model 0) for the wellbeing of older adults, the district-level variance was statistically significant, with a variance of 0.66 (p < 0.001). The ICC coefficient shows that 17% of the variance in the wellbeing of older adults was attributable to differences between districts; the inter-district differences were thus confirmed. The PCV in Model 1 shows that only 1% of the variation in the wellbeing of older adults was explained by individual-level factors. In Model 2, a PCV of 16% implies that 16% of the variation in the wellbeing of older adults in rural areas was explained by community-level characteristics.
In Model 3, the results of the multilevel analysis on the wellbeing of older adults were statistically significant in relation to the individual-level variables (level of education and prevalence of morbidity). Concerning the contextual-level factors, the type of dwelling (house), materials used for roofs, walls, and floors, the main local water source, the type of energy used for cooking and lighting, and the type of sanitation service (toilet) had a statistically significant influence on older adults' wellbeing in rural settings.

Education Attainment and Morbidity

The results show that older adults in rural areas with higher education attainment were more likely to experience good wellbeing compared to older adults with no education (AOR = 2.075, 95% CI: 0.58, 2.73) (Figure 3). The prevalence of morbidity (illness in the last 7 days) among rural older adults reduced the odds of wellbeing by about 12% compared to older people who were not sick 7 days before the survey (AOR = 0.875, 95% CI: 0.80, 0.96).

Housing

Housing conditions were an important element of wellbeing. Results showed that an improvement in the type of housing increased the odds of wellbeing by 28% for older people who lived in improved traditional houses (AOR = 1.281, 95% CI: 1.12, 1.46), and the odds more than doubled for those who lived in modern detached houses compared to older people who lived in traditional huts (AOR = 2.264, 95% CI: 1.89, 2.71).

Water

Older adults with access to a borehole had 18% higher odds of wellbeing than older adults who accessed water directly from a river.

Energy

For older adults who purchased firewood as a source of energy for cooking, the odds of wellbeing were more than three times higher compared to older adults who collected firewood for this purpose (AOR = 3.349, 95% CI: 2.54, 4.43), and older people who had their own charcoal had 38% higher odds of wellbeing compared to older adults who collected firewood (AOR = 1.376, 95% CI: 1.14, 1.67). Relatedly, among older adults whose source of energy for lighting was an open fire or other sources, the likelihood of wellbeing decreased by 56% (AOR = 0.438, 95% CI: 0.31, 0.62) and 68% (AOR = 0.317, 95% CI: 0.22, 0.45), respectively.

The random effects in the final model show that the variance of the random intercept remained statistically significant across the models, suggesting divergence across the rural areas even after accounting for individual-level and contextual-level factors. This further suggests that other unmeasured or unobserved rural community characteristics may influence the wellbeing of older people. Although there are other unobserved rural community characteristics, the PCV of 14% indicates that the individual and contextual factors included in the model account for a substantial portion of the variability in the wellbeing of older adults in rural communities. Therefore, unpacking the multilevel structure of the data is important to understand the context-specific nuances that influence the wellbeing of older adults in rural communities.
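As a quick check on the percentage interpretations given above: the percent change in odds implied by an adjusted odds ratio is (AOR − 1) × 100, so an AOR below 1 reduces the odds by (1 − AOR) × 100 percent. A one-line verification over the AORs quoted in this section:

```python
# Percent change in odds implied by each adjusted odds ratio.
for aor in (2.075, 0.875, 1.281, 2.264, 3.349, 1.376, 0.438, 0.317):
    print(f"AOR = {aor}: {(aor - 1) * 100:+.1f}% change in odds")
# e.g. 0.875 -> -12.5%, 0.438 -> -56.2%, 0.317 -> -68.3%
```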
DISCUSSION

We aimed to identify individual and community-contextual level factors associated with the wellbeing of older adults aged 50 years and older in rural settings. We established that both individual and community-contextual factors dynamically interact to influence rural settings that foster or hinder the wellbeing of older people. These findings align with other studies on rural ageing that suggest that rural settings are contested spaces for ageing and are created through active and passive interactions between diverse older adults, community members, rural organisations and the policy/programmatic architecture [14].

The study highlights that educational attainment, at the nexus of access to adequate housing, appropriate housing construction materials, and access to water and energy for cooking, creates contested spaces. By identifying these spaces through the generation of evidence, possible opportunities are opened for policy and practical interventions that could be beneficial for ageing individuals and their communities. At the individual level, higher education attainment among older adults was associated with better wellbeing. A population and development review argues that education attainment over the life course is a paramount driver for many social, economic and health outcomes [39]. Another study on the impact of education attainment on older people's wellbeing found that each additional year of education attainment improved the wellbeing of older persons [40], also in terms of their social, economic and health outcomes [41]. Others have shown a qualitative increase in older people's cognitive health, self-confidence, and life satisfaction with educational attainment [42]. The level of education has been argued to directly enhance the quality of social engagement and social interaction, which results in more opportunities for the formation of stronger social networks, including connections with peers [43], contributing to wellbeing. Level of education correlates strongly with better job prospects, personal empowerment, and income. Better income in older age can reduce stress and contribute to general wellbeing. Importantly, education attainment can enhance health literacy and improve the ability to access, understand and use information to make informed health decisions for wellbeing. Although the results of this analysis have shown that the prevalence of illness or injury among older adults negates the gains in wellbeing, studies have indicated that older adults with higher levels of education have better health outcomes than their less-educated peers [41]. Education can facilitate and shape the wellbeing of older adults and is a key driver for attaining a demographic dividend and the SDGs. Education is a key mechanism to prepare for old age, especially when complemented by community-contextual factors such as social support, access to healthcare and a better socioeconomic status [27].
At the community-contextual level, the contested spaces for the wellbeing of older adults were associated with the available community resources, such as the type of housing and access to water, sanitation, and energy for cooking. The interactions with these resources (or their lack) directly or indirectly shape the setting within which older adults age. Access to housing provides a sense of safety and increases the desire to age in a specific place [44]. The results show that older people with access to improved housing experienced better wellbeing than those who live in traditional huts. This is consistent with other studies, which argue that rural communities face distinctive infrastructure challenges and that decent housing improves the general wellbeing of older people [15, 31]. The WHO further emphasises that housing protects people from hazards and promotes good health and wellbeing [36, 37]. However, another study in Zambia argues that the housing challenge dates to Zambia's pre-independence times, with 80% of the national housing stock located in informal and unplanned settlements and made of poor materials unable to withstand an array of climatic and weather conditions [45]. The results also indicate that housing conditions in rural communities significantly impact the wellbeing of individual older people and their communities. Studies on health and housing have demonstrated that housing can affect various aspects of health, mental wellbeing, and overall quality of life [46].

According to the results, older adults living in improved houses with access to piped water and energy for cooking (such as sufficient income to buy charcoal) experienced better wellbeing than counterparts in poor housing with related conditions. The interaction of housing conditions and access to water and energy for cooking and lighting in rural settings directly influences older people's wellbeing. Generally, most rural areas in Zambia face challenges concerning access to energy for cooking and lighting [24]. The results show that older people who purchased firewood or had their own charcoal for cooking had better wellbeing than older people in rural areas who collected firewood for cooking, given the distances and weight involved. Access to water and sanitation services (toilets) remains a key wellbeing factor. Findings showed that older people in rural areas with access to either a borehole, a public tap or a tap on their property have better wellbeing compared to older individuals who have to collect water directly from the source (e.g., river, lake, dam, rainwater). A possible explanation, as a study by Koff confirmed, is that older people must walk long distances to the water source and that many of them cannot carry heavy loads due to frailty [15, 31, 47, 48].
In terms of a critical realist approach [49], it could be posited that the factors that support the wellbeing of older adults are obscured within the contextual causal relationship, as evidenced by the interaction of individual and rural community-contextual characteristics [50]. Thus, there is a need to move beyond a simplistic focus on older people's observable individualistic characteristics towards a more complex understanding that integrates "real" world community-contextual effects, as evidenced in this study. The monolithic clustering and characterisation of older people based on a binary/blanket categorisation of communities as rural and/or urban is likely to obscure a true reflection of the wellbeing of older adults. Consequently, the analysis of older adults' wellbeing should consider the specific characteristics of individuals at the interface of the particular rural context. The findings suggest that the community-contextual factors of wellbeing are diverse and dynamic. As such, the emergence of any external influence could threaten elements that support contested spaces beneficial for the wellbeing of older adults. For example, the COVID-19 pandemic of 2020 negatively affected the elements that create a favourable setting for the wellbeing of older people, through loss of income, inadequate food, challenges in accessing healthcare, and exacerbated isolation due to restricted movements [51, 52]. The 2021 Socio-economic Impact Assessment Survey of COVID-19 on Households in Zambia (SEIA) highlights how COVID-19 altered mechanisms for the wellbeing of older adults. Thus, any efforts that do not consider the variability of community-contextual characteristics in understanding what influences the wellbeing of older adults may not generate optimal outcomes.

The stark reality is that only about a third of rural older adults in this study experienced wellbeing. It is therefore imperative to approach the identified factors that facilitate wellbeing with a clear and critical realist lens, to structure attainable interventions that may otherwise be obscured by the reductionism and clustering of the challenges in rural communities. This implies that the factors that support the wellbeing of older people in rural communities should be examined with a three-tier approach: what is prevailing in rural communities, the underpinning factors influencing those prevailing conditions, and how they interface with the prevailing wellbeing of older adults.
Conclusion

This study adds compelling evidence to studies of rural ageing in SSA on the influence of individual factors (education attainment) and community-contextual factors (access to improved housing and piped water, and having one's own energy source for cooking, such as charcoal, or income to buy firewood) on the wellbeing of older people in rural communities. These results underscore the need to address educational disparities and improve access to basic community resources to promote the wellbeing of older populations in rural communities. Furthermore, this analysis has policy-making and pragmatic implications. To this end, the 2022 African Union Strategic Policy Framework and Plan of Action on Ageing (AUPFPAA) calls for strategic investment across the life course (in this case, education) to enhance capacities and wellbeing in older age, which can benefit both older and younger people [53]. This might, in turn, foster the attainment of a demographic dividend as the population ages. These results also inform a call for direct investment in rural infrastructure such as housing, water access, and energy (cooking, lighting). Amalgamated efforts are needed to negotiate and address the contested spaces for rural ageing by valuing the participation and needs of current cohorts of older citizens and, to that end, also investing in future generations through education. This emphasises the call for a life course approach to wellbeing in later life through education, as well as the need to ensure that older people's physical environments are good or friendly places in which to age.

The limitations of the dataset are acknowledged on two levels: the 2015 data may be dated, and potential changes in the context might have occurred since; and the data were collected to measure the general wellbeing of the population, a focus that may have missed salient aspects unique to older adults. Nevertheless, the results point to a non-monolithic analysis of what shapes the wellbeing of older adults by interfacing individual and community-contextual level factors. The multilevel analysis has demonstrated the need to decrypt factors of wellbeing that are often obscured when individual or community-contextual level factors are analysed separately, because such a monolithic approach risks not recognising the diverse, dynamic and complex interface of individual and community-contextual factors in the wellbeing of older adults. Further research is required to explore additional determinants of wellbeing, specifically human and social capital, and the development and impact of specific community-context interventions to support the wellbeing of older people in rural settings.

FIGURE 1 | Factors influencing the wellbeing of rural older adults in Zambia (2015 Living Conditions Monitoring Survey, Zambia).
FIGURE 3 | Distribution of odds ratios for older people's wellbeing and level of education (2015 Living Conditions Monitoring Survey, Zambia).
TABLE 2 | Bivariate analysis of the wellbeing of rural older adults with individual and community-contextual characteristics in Zambia (2015 Living Conditions Monitoring Survey, Zambia).
TABLE 3 | Fixed and random effects for the association of wellbeing of rural older people with individual and community-contextual factors in Zambia (2015 Living Conditions Monitoring Survey, Zambia).
Efficacy and safety of a four-drug, quarter-dose treatment for hypertension: the QUARTET USA randomized trial

New approaches are needed to lower blood pressure (BP) given persistently low control rates. QUARTET USA sought to evaluate the effect of a four-drug, quarter-dose BP-lowering combination in patients with hypertension. QUARTET USA was a randomized (1:1), double-blinded trial conducted in federally qualified health centers among adults with hypertension. Participants received either a quadpill of candesartan 2 mg, amlodipine 1.25 mg, indapamide 0.625 mg, and bisoprolol 2.5 mg, or candesartan 8 mg, for 12 weeks. If BP was >130/80 mm Hg at 6 weeks in either arm, participants received open-label add-on amlodipine 5 mg. The primary outcome was mean change in systolic blood pressure (SBP) at 12 weeks, controlling for baseline BP. Secondary outcomes included mean change in diastolic blood pressure (DBP); safety outcomes included serious adverse events, relevant adverse drug effects, and electrolyte abnormalities. Among 62 participants randomized between August 2019 and May 2022 (n = 32 intervention, n = 30 control), mean (SD) age was 52 (11.5) years, 45% were female, 73% identified as Hispanic, and 18% identified as Black. Baseline mean (SD) SBP was 138.1 (11.2) mm Hg, and baseline mean (SD) DBP was 84.3 (10.5) mm Hg. In a modified intention-to-treat analysis, there was no significant difference in SBP (−4.8 mm Hg, 95% CI: −10.8 to 1.3, p = 0.123) and a −4.9 mm Hg (95% CI: −8.6 to −1.3, p = 0.009) greater mean DBP change in the intervention arm compared with the control arm at 12 weeks. Adverse events did not differ significantly between arms. The quadpill had a similar SBP-lowering and greater DBP-lowering effect compared with candesartan 8 mg. Trial registration number: NCT03640312.

INTRODUCTION

This document outlines the proposed analyses for the QUARTET USA phase II clinical trial, which aims to compare clinical and safety outcomes at 12 weeks for adult participants with elevated blood pressure (BP) at baseline who receive ultra-low-dose quadruple-combination therapy (LDQT) with those of participants given standard-dose monotherapy. We therefore plan a two-arm, double-blind randomized controlled trial with equal allocation (1:1) in adults with uncontrolled blood pressure who are eligible for monotherapy. The purpose of this document is to provide detail regarding the statistical analysis plan (SAP) for this study.

Study Aims

The overarching study aims are as follows:

Aim 1: To investigate whether initiating treatment with ultra-low-dose quadruple-combination therapy ("LDQT"), comprising candesartan 2 mg, amlodipine 1.25 mg, indapamide 0.625 mg, and bisoprolol 2.5 mg, will lower office blood pressure at 12 weeks more effectively, and with no increase in side effects, compared to initiating standard-dose monotherapy (candesartan 8 mg) in adults with elevated blood pressure who are eligible for monotherapy based on the 2017 AHA/ACC guideline. We hypothesize that initiating treatment with LDQT will lower office blood pressure at 12 weeks more effectively, and with no increase in side effects, compared to initiating standard-dose monotherapy in these adults.
Aim 2: To investigate whether initiating treatment with LDQT will lower mean 24-hour ambulatory blood pressure at 12 weeks more effectively, and with no increase in side effects, compared to initiating standard-dose monotherapy in adults with elevated blood pressure who are eligible for monotherapy. We hypothesize that initiating treatment with LDQT will lower mean 24-hour ambulatory blood pressure at 12 weeks more effectively, and with no increase in side effects, compared to initiating standard-dose monotherapy in these adults.

Exploratory Aim 1: Assess heterogeneity of treatment effect by hypothesized moderators (age, sex, race/ethnicity, and health literacy level). We hypothesize that the treatment effect of LDQT will be greater in participants with limited health literacy than in those with adequate health literacy. We also hypothesize that the treatment effect of LDQT will differ by age, sex, and race/ethnicity subgroups.

Exploratory Aim 2: To evaluate acceptability, preferences, and lessons for implementation of LDQT among patients and clinicians using mixed methods. We hypothesize that patients and clinicians will prefer LDQT over standard-dose monotherapy for initial blood pressure-lowering therapy; we further hypothesize that LDQT will be simpler and easier for both patients and clinicians than standard-dose monotherapy, including for patients with low health literacy.

This SAP focuses on the details of analyses for Aims 1, 2, and part of Exploratory Aim 1 (pertaining to age, sex, and race/ethnicity as potential moderators); we reserve details of the exploration of health literacy and the implementation analyses (Exploratory Aim 2) for a separate document. Study time points include a baseline assessment (completed over approximately two days), a six-week follow-up, and a 12-week follow-up (also completed over approximately two days).

STUDY OUTCOMES

In the sections below, we include the relevant specific field names for variables within the study database as of the time of SAP creation.

Exploratory Outcomes

The following outcomes are relevant to the overall aims; however, they carry less weight, and we consider them exploratory in nature. This SAP will not focus in detail on analyses of these outcomes, but we anticipate the general analytic approach to apply. In the event of small numbers that would make modeling infeasible, these data may be reported as descriptive statistics.
3) Mean nighttime (2200 to 0600) SBP and DBP [asleep_abpsbp, asleep_abpdbp].
4) Proportion of dippers [sys_dip, dia_dip], defined as nighttime BP falling more than 10% from the daytime values, OR a night/day blood pressure ratio less than 0.9 and greater than 0.8 with a normal diurnal blood pressure pattern.
5) Mean daytime SBP and DBP load. Load is defined as the percentage of abnormally elevated readings; elevated daytime SBP/DBP readings are 130/80 mm Hg or above.
6) Mean nighttime SBP and DBP load. Abnormally elevated nighttime SBP/DBP readings are 120/70 mm Hg or above.
7) Percentage of participants with morning surge, calculated as the difference between the mean SBP during the morning hours and the nighttime trough SBP. Trough SBP is defined as the mean of three SBP measurements: the lowest nighttime SBP and the measurements immediately preceding and following it.
8) Coefficient of variation of SBP and DBP assessed through 24-hour ambulatory blood pressure monitoring, defined as the ratio of the 24-hour standard deviation of BP to the mean 24-hour value.
9) Day-night variability (SDdn), which uses the standard deviation (SD) of daytime measurements and, separately, of nighttime measurements to calculate a weighted mean of these SDs.
10) Average real variability (ARV), calculated as the average absolute difference between consecutive readings over the 24-hour ABPM period (a short sketch of items 8–10 follows the DATA STORAGE section below).

We will assess additional exploratory outcomes as needed to inform individual participant data (IPD) meta-analysis with the international QUARTET studies. The details of these IPD analyses are reserved for a separate analytic plan.

DEMOGRAPHICS AND BASELINE ASSESSMENTS

The following are specific demographic/baseline assessments of interest for analyses. Primary analyses will adjust for these covariates as we anticipate they will influence outcome. We plan to report both model-adjusted and simple unadjusted intervention effect estimates. In the case of adjusted models, we will include the following variables as fixed effects, regardless of significance:

3) Race/ethnicity [ethnic]. We plan to categorize participants into White (ethnic = 1), Hispanic (ethnic = 3, 4, 5, 6), African American (ethnic = 2), or other categories. In the event of low cell counts in any one category, we may consider collapsing categories, foregoing adjustment for race (if collapsing cannot be justified scientifically), or not adjusting for race altogether. We may also consider another potential covariate that is heavily related to race and ethnicity, or conduct sensitivity analyses under different parameterizations/assumptions. Note that some additional exploratory analyses may examine these additional demographic variables as covariates and/or effect modifiers as well. We will label any exploratory analyses involving additional potential covariates as post hoc in any dissemination materials.

DATA STORAGE

Data will be collected and managed using Research Electronic Data Capture (REDCap) housed at Northwestern University's Clinical and Translational Sciences Institute (CTSA), NUCATS [1]. REDCap is a secure, web-based application designed for research studies that provides an intuitive interface for validated data entry, audit trails for tracking data manipulation and export procedures, automated export procedures for seamless data downloads to common statistical packages, and procedures for importing data from external sources. Refer to the study Data Management Plan (DMP) for details.
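As a concrete illustration of exploratory measures 8–10 above, the base-R sketch below computes the 24-hour coefficient of variation, SDdn, and ARV from a hypothetical data frame abpm with one row per reading (columns sbp in mm Hg and daytime as a logical flag for 0600–2200 readings). Weighting SDdn by reading counts is one common convention and an assumption here; it is not prescribed by this SAP.

# Base-R sketch: ABPM variability measures (exploratory items 8-10).
# `abpm` is a hypothetical data frame: one row per reading, with
# columns `sbp` (mm Hg) and `daytime` (TRUE for 0600-2200 readings).

cv24 <- sd(abpm$sbp) / mean(abpm$sbp)          # 8) coefficient of variation

day   <- abpm$sbp[abpm$daytime]
night <- abpm$sbp[!abpm$daytime]
sdn   <- (sd(day) * length(day) + sd(night) * length(night)) /
         (length(day) + length(night))          # 9) SDdn, weighted by reading counts

arv <- mean(abs(diff(abpm$sbp)))                # 10) average real variability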
RANDOMIZATION METHODS

We plan for equal allocation (1:1) across study arms; the study statistician (Co-PI: Ciolino) generated a randomization list using random block assignments (i.e., randomly varying block sizes). The details of the block sizes and number of blocks will remain confidential until study completion. The study statistician uploaded a "Development" randomization list and a separate "Production" randomization list into REDCap. Each participant will be assigned a randomly generated kit number that corresponds to either an active comparator (candesartan) or investigational product (LDQT) drug kit.

The randomization lists are housed on Northwestern University's "FSMResFiles" with restricted access such that only unblinded individuals can access them. Neither the study coordinator/study nurse/individual assigning the study kit numbers nor the participants will have the ability to determine which kit numbers correspond to each arm, as the lists were generated via a random uniform distribution, with the seed number, block sizes, and subsequently sorted randomization lists restricted to this set of folders on Northwestern University's servers. Randomization does not involve any stratification factors; however, with the addition of a second study site, the randomization is de facto stratified by site (i.e., each site has its own randomization sequence).
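For illustration, the base-R sketch below generates a 1:1 list with randomly varying block sizes of the kind described above. The seed and block sizes actually used in QUARTET USA are confidential; the values here are invented for the example.

# Illustrative only: blocked 1:1 randomization with randomly varying
# block sizes. The seed and block sizes are NOT the confidential study values.
set.seed(12345)                                            # example seed
block_sizes <- sample(c(2, 4, 6), size = 40, replace = TRUE)
arm <- unlist(lapply(block_sizes, function(b)
  sample(rep(c("LDQT", "candesartan"), each = b / 2))))    # permute within each block
rand_list <- data.frame(kit_number = seq_along(arm), arm = arm)
head(rand_list)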
Planned Primary, Secondary, and Safety Analyses

The primary study analysis time point for all relevant outcomes is 12 weeks post randomization. The original analysis plan called for an analysis of covariance (ANCOVA), controlling for the baseline value of each relevant outcome in addition to the following baseline covariates: sex, age at baseline, race/ethnicity, health literacy level (indicator of limited literacy as defined by the Newest Vital Sign instrument), and an indicator of monotherapy at baseline (vs. untreated). We deem these variables clinically relevant, important covariates; thus, all analyses will plan for adjustment for these covariates of interest (regardless of statistical significance in the current dataset) in evaluating efficacy of intervention in the present study. However, to better align with the analytic strategies of the QUARTET Australia study, and to make the most efficient use of all follow-up data (both six-week and 12-week data), primary analyses will involve a linear mixed model with fixed study arm and baseline outcome value effects and a random participant effect to account for within-participant correlation. We plan to conduct both unadjusted (for the potential covariates mentioned above) and adjusted analyses. The updated details of these analyses are as follows.

The basic analytic model will be as follows for each outcome (Y) for participant i (i = 1…N) at visit j (j = 1, 2; corresponding to Week 6 and Week 12):

Y_ij = β0 + β1·Arm_i + β2·Y_i0 + β3·Visit_j + β4·(Arm_i × Visit_j) + b_i + ε_ij,

where Y_i0 is the participant's baseline outcome value, ε_ij ~ N(0, σ²) is the random error term, and b_i ~ N(0, σ_b²) is the participant-level random effect. If the visit-by-study-arm term (β4) is insignificant at the 5% level, we will remove that term from the model and subsequently examine an overall main effect for visit (β3). If insignificant, we will also remove that effect from the model and evaluate the intervention effect via the primary hypothesis test of interest, H0: β1 = 0 vs. H1: β1 ≠ 0, for each outcome. However, if the visit-by-study-arm interaction term or the visit term alone is significant, we will evaluate the 12-week contrast via model-estimated least-squares means as our primary analysis. All between-arm differences at both the six- and 12-week time points will be reported as model-based estimates, corresponding 95% confidence limits, and the p-value of the corresponding hypothesis test. (A minimal R sketch of this model appears at the end of this subsection.)

Secondary analyses (i.e., those for the secondary and exploratory outcomes of interest) will utilize data from all time points via (generalized) linear mixed modeling (GLMM) methods with the following specifications: identity, logit, or log link for continuous, binary, or count outcomes, respectively; fixed arm, visit, visit-by-arm interaction, and the aforementioned covariates; and a random participant effect. For secondary analyses, the model-adjusted Wald type III tests for fixed effects will first evaluate the significance of the visit-by-arm interaction at the 5% level of significance. If insignificant at the 5% level, this interaction term will be removed and the model Wald type III test for the fixed arm effect will evaluate the overall intervention effect in this longitudinal model at the 5% level.

The table below summarizes the general modeling strategy for each outcome. In each case, we plan to conduct both adjusted and unadjusted analyses. Adjusted analyses will include the aforementioned covariates.
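As flagged above, a minimal R sketch of the primary linear mixed model follows, assuming hypothetical column names (sbp, sbp_base, arm, visit, id); the adjusted version would simply add the covariates listed above as fixed effects.

# Minimal sketch of the primary LMM (hypothetical column names).
library(lmerTest)   # lmer() with Wald/Satterthwaite type III tests

fit_full <- lmer(sbp ~ arm * visit + sbp_base + (1 | id), data = dat)
anova(fit_full)     # inspect the visit-by-arm interaction (beta4) first

# If the interaction, and then the visit main effect, are insignificant
# at the 5% level, drop them and test H0: beta1 = 0 in the reduced model:
fit_reduced <- lmer(sbp ~ arm + sbp_base + (1 | id), data = dat)
summary(fit_reduced)                        # arm effect estimate
confint(fit_reduced, method = "Wald")       # 95% confidence limits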
LMMs; we will add an arm-by-<potential moderator> effect to the model specified above for each potential moderator of interest. Moderation effects will be explored via the model-adjusted type III Wald test for fixed effects at the relaxed 10% level of significance. If significant, we will examine the intervention effect within each subgroup via a series of LMMs: male/female, age category, racial/ethnic category.

Agreement between Outcomes

We will use sample Pearson correlation coefficients (and 95% confidence limits), Bland-Altman plots, or both to examine agreement among continuous outcome measures (primarily focused on office SBP and 24-hour SBP measures); a base-R sketch of these analyses appears at the end of this section. Though these analyses are not the primary focus of this trial and its results, we will use agreement analyses to make inference regarding the quality and variability of SBP measurement methods.

Analyses Contingent on Add-on Therapy Requirements

If there is evidence of a difference across arms in the proportion of participants requiring amlodipine add-on therapy at six weeks, then we will also explore dividing the sample into four strata: (1) those that required amlodipine add-on treatment and received active control, (2) those that required add-on and received LDQT, (3) those that did not require add-on and received active control, and (4) those that did not require add-on and received LDQT. Depending on cell counts, we will attempt a series of exploratory analyses for key outcomes (SBP, DBP) to evaluate an effect. Additional analyses of this nature will be indicated as exploratory.

ANALYTIC DATASET

Analyses will include the (modified) intention-to-treat (mITT) dataset, whereby all participants with baseline data and data at any follow-up time point will be included in analyses according to the arm to which they were randomized, regardless of adherence to the study protocol. We will conduct a sensitivity analysis on the per-protocol dataset (defined as 80% treatment regimen adherence), since precise estimates of any intervention effect on outcomes are important in a phase II study.

Power and sample size considerations allowed for some missing data (20%); however, in the event of large amounts of missing data (i.e., more than 10%), multiple imputation analyses will be explored. We will examine rates of missing data for all variables and determine whether the rates vary by participant characteristics, etc. These summarizations will inform potential biases resulting from missing data. The mixed effects models planned for longitudinal analysis are generally robust to unbalanced data across study time points. Additional sensitivity analyses will explore multiple imputation methods and a global sensitivity analysis to evaluate overall trial robustness [2]. These analyses will again serve as sensitivity analyses to the previously outlined analyses.
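As flagged in the Agreement between Outcomes subsection above, the following base-R sketch shows one way the correlation and Bland-Altman analyses could be run; the vectors office_sbp and abpm_sbp are hypothetical paired measurements, not study fields.

# Base-R sketch of the agreement analyses (hypothetical paired vectors).
cor.test(office_sbp, abpm_sbp)            # Pearson r with 95% CI

d    <- office_sbp - abpm_sbp             # per-participant differences
bias <- mean(d)
loa  <- bias + c(-1.96, 1.96) * sd(d)     # 95% limits of agreement
plot((office_sbp + abpm_sbp) / 2, d,
     xlab = "Mean of the two methods (mm Hg)",
     ylab = "Office minus ABPM (mm Hg)")  # Bland-Altman plot
abline(h = c(bias, loa), lty = c(1, 2, 2))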
POWER AND SAMPLE SIZE CONSIDERATIONS

The initial sample size calculations called for a total of 365 participants to be randomized (1:1 allocation). We anticipated an analytic sample size of 292 based on 365 participants at randomization and a 20% dropout rate by the 12-week follow-up time point. We originally based sample size and power calculations conservatively on an independent two-sample t-test. Based on the results of interim analyses (refer to Section 10 for details), we updated our recruitment target to 87 participants (1:1 allocation). The anticipated analytic sample size of 77 is based on 87 participants at randomization and a conservatively estimated 12% dropout rate by the 12-week follow-up time point, given the 8% dropout rate observed through September 2021.

The initial, conservative plan for primary outcome analyses involving an independent two-sample t-test provided an estimated 80% power to detect a 5 mm Hg difference in SBP between the intervention and comparator arms, assuming a two-sided 5% level of significance and a 15 mm Hg standard deviation in outcome (this calculation can be reproduced in base R, as sketched at the end of this section). This estimate is based on a 2017 Cochrane systematic review update evaluating the effects of fixed-dose combination therapy, a systematic review on quarter-dose combination therapy, and a pilot trial of quarter-dose combination therapy [3]. We assumed baseline SBP has a moderate correlation with follow-up SBP (r ≈ 0.50–0.6); under this assumption, sample size calculations based on ANCOVA have the potential to allow for over 90% power under the same assumptions for the remaining parameters.

At the request of the DSMB, we conducted an interim conditional power analysis, taking into consideration information from both the QUARTET USA trial data as of August 2021 and the QUARTET (Australia) results. These interim analyses, incorporating information to date, suggested that a sample of 87 randomized participants, allowing a 12% dropout rate (at least 77 analyzable), would provide over 90% conditional power.

Previously, the protocol required the 24-hour ambulatory blood pressure assessments, and we thus conducted initial power calculations for this endpoint based on several a priori assumptions, as follows: since the expected mean 24-hour ambulatory blood pressure may be more precise than office blood pressure (with a standard deviation of 12 mm Hg vs. 15 mm Hg for office blood pressure), we estimated over 95% power with the planned sample size to detect a 5 mm Hg difference across arms in this important secondary outcome, under the same assumptions as outlined for our primary outcome (office blood pressure). However, subsequent protocol modifications made the 24-hour ambulatory blood pressure assessment optional, and this outcome has been modified to become an exploratory outcome.

TECHNICAL DETAILS

The SAP is subject to version control, and we anticipate that modifications to analytic plans will be documented herein. As in any study, the analytic plan may change due to assumption violations, logistical issues, unexpected empirical distributions of study outcomes, or a combination thereof. In these cases, the SAP will be updated accordingly. All analyses will be performed via SAS version 9.4 or higher (The SAS Institute; Cary, NC) or R version 3.6.0 or higher (The R Foundation for Statistical Computing). Table and figure formatting and style may be dictated by the mode of dissemination or the specific target journal(s) for results dissemination.
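As noted above, the original two-sample t-test calculation can be checked with base R's power.t.test; the call below uses only the assumptions stated in this section.

# Reproducing the original two-sample t-test power calculation (base R):
power.t.test(delta = 5, sd = 15, sig.level = 0.05, power = 0.80)
# => n of roughly 142 per group (~284 analyzable); inflating for the planned
#    20% dropout gives a randomization target in the region of the original
#    365. With ANCOVA and baseline correlation r, the required n scales
#    roughly by (1 - r^2), consistent with the >90% power noted above.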
Summary of updates, Version 2.0:

1) Removed reference to heterogeneity of treatment effects based on health literacy, as these analyses will be outlined in a separate document.
2) Moved ABPM to exploratory analyses.
3) Updated the target sample size based on interim analyses at the request of the DSMB.
4) Updated the overall modeling strategy to involve mixed-modeling techniques (previously planned as ANCOVA at Week 12 only) so that both Week 6 and Week 12 data are used in analysis models. All longitudinal models will first explore a time-by-arm interaction term at the 5% level of significance in evaluating treatment effect.
5) Specified minor details on the treatment of covariates in analyses.
6) Added plans to explore potential strata combining study arm and add-on therapy.

TIMELINE FOR ANALYSES

As this is a phase II clinical trial, the original analysis plan did not include any formal interim statistical analyses involving hypothesis testing or any pre-specified stopping criteria for efficacy or futility on primary or secondary outcomes. Interim reports to the study team and the data and safety monitoring board (DSMB) will consist of process measures such as protocol departures, missing values, missing forms, treatment regimen adherence, etc., and simple descriptive statistics on primary and safety outcomes of interest. In addition, weekly meetings with the study team will utilize central statistical monitoring techniques as a method of quality control and quality assurance for trial data on an ongoing basis. We foresee the DSMB requiring specific data listings or summarizations, but these will be specified at the time of the relevant DSMB meeting(s). At the request of the DSMB, we conducted an interim conditional power analysis, taking into consideration information from both the QUARTET USA trial data as of August 2021 and the QUARTET (Australia) results. These interim analyses, incorporating information to date, resulted in an updated recruitment target (and thus overall sample size goal). They also resulted in the ultimate updates to the analytic strategy from an ANCOVA to a mixed-modeling approach that makes use of both the six- and 12-week follow-up data for all participants. These modifications to the analytic plan resulted in an updated SAP (version 2.0).

To preserve the integrity of the study, no formal final statistical analyses will occur until the REDCap database has been locked and all queries/discrepancies resolved; the date of database lock will be documented.

Primary outcome: mean change (from baseline) in automated office systolic blood pressure (SBP) at 12 weeks [sbpavg]; analyses will compare this change across arms for primary outcome analyses, adjusting for baseline. (Database field names are indicated by [brackets].)

Secondary outcome, health-related quality of life: mean change.

Safety outcomes: 1) Proportion of participants with any SAE according to the Good Clinical Practice (GCP) definition [sae_present, saedeath, saelifethrt, saehosp, saedisp, saecong, saeimpevnt]. 2) Proportion of participants with any potentially relevant side effect ([sae_term]; refer to the adverse event case report form and the list of relevant side effects from the informed consent form). 3) Rate of relevant side effects at the participant level (i.e., count per participant [sae_term]).
Spatio-Temporal Variation in Growth Performance and Condition of the Winged Pearl Oyster Pteria penguin

Environmental conditions can strongly influence the growth performance of pearl oysters and affect pearl farm production schedules. Growth and condition index (CI) of two age cohorts of Pteria penguin were measured for 13 months to investigate differences in growth performance between four culture sites within the northern (Vava'u) and southern (Tongatapu) island groups of the Kingdom of Tonga. Environmental conditions were also measured at culture sites and used to explore potential effects on oyster growth and condition. Between island groups, growth performance of P. penguin was superior at northern sites and was most strongly related to higher water temperatures at these sites. Within the southern island group, growth performance varied significantly between sites and may be driven by differences in wave energy. Monthly growth rates (G_M) of P. penguin also showed significant temporal variation related to age and environmental conditions. This study demonstrated significant variation in the growth performance of P. penguin at latitudinal and local scales and suggests that in oligotrophic marine environments with minimal terrestrial inputs, such as Tonga, water temperature and wave exposure may be the primary environmental conditions influencing the growth performance of P. penguin. This study therefore recommends that optimal culture sites for P. penguin in Tonga are characterized primarily by warmer water temperatures (25–30 °C) and low wave exposure (<15 joules m² day⁻¹). Culture of P. penguin at sites with more suitable environmental conditions enables pearl production to begin up to 34.2% (6.5 months) earlier than at less suitable sites, and this may greatly influence mabé pearl farm profitability and feasibility.

INTRODUCTION

The winged pearl oyster, Pteria penguin (Röding 1798), occurs in a diverse range of marine environments, from the east coast of Africa, throughout Asia and northern Australia, and the western Pacific (Wada and Tëmkin, 2008). Pteria penguin is widely cultured for the production of high-quality mabé pearls (half-pearls) at both commercial and subsistence scales, supporting a range of pearl-based livelihoods (Johnston et al., 2019). The Kingdom of Tonga is the leading producer of mabé pearls in the south Pacific (Johnston et al., 2019) and has experienced rapid industry development sustained by recently improved hatchery, husbandry, and pearl-culture methods (Wassnig and Southgate, 2012; Southgate et al., 2016; Gordon et al., 2018, 2019, 2020). Subsistence-level mabé pearl farming has considerable potential for livelihood support (Yamamoto and Tanaka, 1997; Anon, 2018; Johnston et al., 2019), with small-scale farms capable of generating annual profits of more than twice the average income in Tonga (Johnston et al., 2020). Currently, mabé pearl farms are distributed among the three main island groups of Tonga (Johnston et al., 2019), which span three degrees of latitude (∼300 km) and vary in environmental conditions (Smallhorn-West et al., 2020). To date, no studies have investigated large-scale spatial variation in growth performance of P. penguin, yet such information is vital to inform pearl farm site selection to maximize pearl farm productivity, profitability, and associated livelihood benefits (Pouvreau and Prasil, 2001; Gaertner-Mazouni and Le Gueguen, 2016).
Mabé pearl culture, like round pearl culture, utilizes oysters of a specific minimum size for pearl production (Gervis and Sims, 1992; Taylor and Strack, 2008). The initial phase of mabé pearl production therefore involves a non-productive culture period of around 1–3 years in which wild-collected, or hatchery-produced, P. penguin are cultured to pearl-production size (100–140 mm dorso-ventral height, DVH) (Kripa et al., 2008; Milione and Southgate, 2012; Gordon et al., 2019, 2020). Appropriately sized oysters are then implanted for pearl production by attaching hemispherical nuclei to the inner nacreous surfaces of pearl oyster shells (Haws et al., 2006; Kishore et al., 2015). Successive layers of nacre (mother of pearl) are subsequently deposited over the nuclei (Taylor and Strack, 2008) to produce mabé pearls with commercial nacre thickness within 9–12 months (Kishore et al., 2015; Gordon et al., 2018, 2019). Maximizing the growth performance of P. penguin up to pearl-production size will therefore reduce the non-productive culture period and overall production time, thereby increasing pearl farm profitability and feasibility (Johnston and Hine, 2015; Johnston et al., 2020).

Pearl oyster growth rates (G_M) are influenced by environmental factors including water temperature, food availability, turbidity, salinity, pH, current speed, and wave energy (Gervis and Sims, 1992; Lucas, 2008; Adzigbli et al., 2019). Of these, the variables considered to have the greatest influence on growth performance are water temperature (del Rio-Portilla et al., 1992; Mills, 2000; Yamamoto, 2000; Le Moullac et al., 2016) and food availability (Yukihira et al., 1998, 1999, 2006) because of their acute effects on metabolic rate and scope for growth (Numaguchi, 1994; Yukihira et al., 2000). Variations in water temperature or food availability have been related to growth performance of Pinctada margaritifera (Linnaeus 1758) (Pouvreau et al., 2000; Pouvreau and Prasil, 2001), Pinctada maxima (Jameson 1901) (Lee et al., 2008; Kvingedal et al., 2010), Pinctada fucata (Gould 1850) (Tomaru et al., 2002), and P. penguin (Milione and Southgate, 2011, 2012). Current speed and wave energy also strongly influence pearl oyster growth via their effects on food renewal, waste removal, and the physical stability of oysters (Lucas, 2008; Kishore et al., 2014). Extensive study of P. margaritifera in French Polynesia showed that growth performance of this species was highest at culture sites with water temperatures between 21 and 30 °C, high rates of food renewal, and high levels of particulate organic matter (Pouvreau et al., 2000; Pouvreau and Prasil, 2001). Similar national comparisons do not exist for P. penguin, although Milione and Southgate (2012) indicated that survival and G_M of P. penguin were higher at turbid inshore sites than at offshore reef sites. Recruitment patterns of P. penguin also suggest that this species has a higher tolerance to turbidity and that it differs from P. margaritifera and P. maxima in its response to environmental factors (Yukihira et al., 2006; Kishore et al., 2018). This study examined spatio-temporal variation in the growth performance and condition of two age cohorts of P. penguin between distant sites in the northern and southern island groups of the Kingdom of Tonga. We also describe variation in marine environmental variables between sites and examine their relationship with the growth performance and condition of P. penguin.
Our ultimate aim was to identify ranges of key environmental variables where oyster growth performance was optimized, so that we could assess the implications for mabé pearl farm site selection and production schedules.

Study Sites

This study was conducted from September 2018 to November 2019 at two sites in each of the northern island group (Vava'u, 18°39' S, 174°00' W) and the southern island group (Tongatapu, 21°07' S, 175°11' W) of the Kingdom of Tonga, separated by just under three degrees of latitude (∼250 km) (Figure 1A). All sites had a depth of 10–20 m but were characterized by differing environmental conditions and exposure to terrestrial inputs and wave energy (Smallhorn-West et al., 2020). Within Vava'u, the Vaipua site was located closer to terrestrial inputs from the Taoa estuary, while the Utulei site was located within Neiafu harbor. In Tongatapu, the Sopu site was exposed to higher wave energy than the Pangaimotu site and was located further from inputs from Fanga'uta lagoon (Kaly et al., 2000; Smallhorn-West et al., 2020).

Oysters

Pteria penguin used in this study were hatchery-cultured at the Ministry of Fisheries (MoF) Aquaculture Center in Sopu using standard MoF hatchery, grow-out, and stock-maintenance procedures (Gordon et al., 2020). Two age cohorts of oysters were selected: (1) "young" oysters, 0.7 years old with a mean (±SE) DVH and wet mass (WM) of 34.2 ± 0.3 mm and 5.7 ± 0.2 g, respectively; and (2) "old" oysters, 2.7 years old with a mean DVH and WM of 89.9 ± 0.5 mm and 70.8 ± 1.6 g, respectively, at the start of the study. "Young" oysters represented the standard size of oysters received by pearl farmers from the MoF nursery, and "old" oysters represented those around 12 months short of pearl-production size (Gordon et al., 2017).

Experimental Design

Oysters were cleaned, measured and individually numbered, before being attached to ropes with fishing line to form "chaplets" (Figure 1B; Southgate, 2008; Gordon et al., 2020). "Young" oysters were distributed between 24 chaplets, each comprising 12 pairs of oysters (n = 576), and "old" oysters between 36 chaplets of seven pairs of oysters (n = 504). Pairs of "young" and "old" oysters were spaced on chaplets at a distance of 150 and 250 mm, respectively. Resulting chaplets were held for 1 month within trays suspended at a depth of 5 m from a submerged longline at Sopu, to allow recovery from drilling (September–October 2018; Gordon et al., 2020). After 1 month, chaplets were removed from the trays and oysters were measured. Six chaplets of "young" oysters and nine chaplets of "old" oysters were then transported to each culture site (site n = 270). Oyster chaplets were secured inside 2-m-long culture cylinders constructed of 40 mm pore-size galvanized wire mesh (Figure 1B) and deployed to pearl longlines. Culture cylinders were suspended at a depth of 5 m and spaced at a distance of 500 mm on pearl longlines at the Vava'u and Tongatapu sites (Figure 1).

Data Collection

Culture cylinders were cleaned, and oyster survival and shell dimensions measured, every three to five weeks for 13 months, with the exception of April to May 2019 (specifically, days 215–265), when weather conditions prevented data collection once at the Vava'u sites and twice at the Tongatapu sites. At each site, 20 pairs of "young" and 20 pairs of "old" oysters (site n = 80, total n = 320) were randomly selected and were repeatedly measured for DVH, shell thickness (ST) and WM, and photographed following Gordon et al. (2017, 2020).
Shell dimensions were measured to ±0.1 mm using Vernier calipers, and WM was determined to ±0.1 g using an electronic balance. At each sampling event, three additional pairs of "young" and "old" oysters were haphazardly selected and harvested for assessment of condition index (CI) (site n = 12, total n = 624) using the "dry tissue mass : dry shell mass" ratio method described by Walne and Mann (1975) and Lucas and Beninger (1985). Oysters sampled for CI were dissected and their tissues dried at 60 °C to a constant mass in a drying oven (Freites et al., 2017). At each site, water temperature (°C), salinity (ppt), pH, turbidity (nephelometric turbidity units, NTU), and chlorophyll content (µg L⁻¹) were measured using submerged multiparameter sondes (YSI 6920-2, Xylem, Australia), and current speed (m s⁻¹) was measured using drag-tilt current meters (Marotte HS, Marine Geophysics Lab, Australia) deployed between culture cylinders (Figure 1B). Sondes and current meters were downloaded, cleaned, and checked for functioning at each sampling event and recalibrated as required. Daily rainfall data (mm) were obtained from the Tonga Meteorological Service for Tongatapu and Vava'u. Mean wave energy (joules m² day⁻¹) was calculated for each site from spatial layers provided by Smallhorn-West et al. (2020).

Data Analysis

To compare the shell dimensions of P. penguin between culture sites and ages over time, generalized additive models (GAMs) were fit to mean DVH, ST, and WM values. Total growth (G_T) of P. penguin over the 13-month culture period was also calculated for DVH, ST, and WM as: G_T = G_n − G_1, where G_n = shell dimension at the final sampling, and G_1 = shell dimension at deployment to culture sites. Effects of site and oyster age cohort on G_T of DVH, ST, and WM were examined using generalized linear models (GLMs) based on a Gaussian distribution. To compare site production schedules, the ages of "young" P. penguin at pearl-production sizes of 100 mm DVH (T_100) (Kripa et al., 2008; Milione and Southgate, 2012) and 140 mm DVH (T_140) (Gordon et al., 2019) were predicted from GAMs fit to mean DVH. Spatio-temporal variation in environmental variables was examined using GAMs fit to mean values for each sampling period. Site-related environmental variation was also examined using principal component analysis (PCA) on summarized and scaled environmental data. Environmental data recorded 6 h post-deployment and 3 h pre-retrieval of sondes and current meters were discarded to minimize error. Outliers caused by probe malfunction or interference by fouling were removed or transformed in accordance with recommendations by sonde manufacturers and technicians (Xylem, Australia). Spatio-temporal variation in G_M of DVH, WM, ST, and CI between cohorts was examined using GAMs fit to raw observations. Relationships between environmental variables and G_M of DVH, WM, ST, and CI were also described using GAMs with separate splines for sites and age cohorts. Environmental variables that did not contribute substantially (by AIC) to model fit were removed from GAMs and were not described in partial plots. G_M of P. penguin shell dimensions for each sampling period was calculated as: G_M = [(G_T − G_{T−1}) ÷ D] × 30, where G_T = shell measurement at the current sampling, G_{T−1} = shell measurement at the previous sampling, and D = number of days between sampling events, for DVH, ST, and WM.
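To make the growth-rate and GAM steps concrete, the short R sketch below computes G_M for one oyster's repeated measurements and fits a site-specific smooth of DVH over time with mgcv (Wood, 2017), as used in the study; the object and column names (oysters, dvh, day, site) are illustrative assumptions, not the study's actual dataset.

# G_M for one oyster's repeated DVH measurements (vectors dvh and day):
gm_dvh <- diff(dvh) / diff(day) * 30          # growth per 30 days (mm month^-1)

# GAM of DVH over time with a separate smooth per site (site is a factor):
library(mgcv)
fit <- gam(dvh ~ site + s(day, by = site), data = oysters)
MuMIn::AICc(fit)                              # small-sample AICc for model comparison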
Condition index was calculated as per Walne and Mann (1975):

CI = (dry total tissue weight ÷ dry shell weight) × 1000

The relative quality of all models was assessed using Akaike information criterion values corrected for small sample sizes (AICc). Visual assessments of diagnostic plots were used to validate models, and data were transformed and/or outliers removed as required to improve diagnostics and compliance with model assumptions. All analyses were completed in R (R Core Team, 2019) using base R and the "lme4" (Bates et al., 2015), "mgcv" (Wood, 2017), and "MuMIn" (Barton, 2019) packages.

Total Growth and Survival

In general, mean shell dimensions of P. penguin showed minimal variation between sites until after approximately day 212 (April 2019), when values tended to trend significantly higher at the northern sites (Vaipua and Utulei) than at the southern sites (Pangaimotu, then Sopu) (Figure 2). G_T in WM of both age cohorts was significantly higher at northern sites than southern sites, while G_T in DVH was also higher at northern sites for "young" oysters (Table 1). All G_T models confirmed an interaction between site and age cohort, with G_T tending to be highest at Utulei for "young" oysters but highest at Vaipua for "old" oysters (Tables 1, 2). After 13 months of culture at the experimental sites, "young" oysters at Utulei had mean shell dimensions larger than, or comparable to, "old" oysters cultured at Sopu, and had a mean DVH 27.2% larger than "young" oysters at Sopu (Figure 2 and Table 1). "Young" oysters at Utulei reached minimum pearl-production size (T_100 and T_140) faster than at all other sites and were projected to reach T_140 up to 34.2% (6.5 months) earlier than oysters cultured at Sopu, the poorest culture site (Table 3). Oyster survival in the month following chaplet construction was 92.4 and 97.4% for "young" and "old" oysters, respectively, while survival at all sites over the following 13-month culture period was >97.0% for both age cohorts.

Environmental Variation

Culture sites showed strong inter-island variation in mean water temperature, rainfall and salinity but minimal intra-island variation in environmental variables (Figures 3A–G). Principal component analysis indicated clear separation of samples from the northern (Vava'u) and southern (Tongatapu) island groups along PC1 (accounting for 34.3% of variation), but not within islands (Figure 3H). Northern sites were characterized by higher water temperature (25–30 °C), chlorophyll content (0.5–2.5 µg L⁻¹), rainfall (5–30 mm day⁻¹) and current (0.25–0.75 m s⁻¹), and lower salinity (33–35 ppt), turbidity (0–2 NTU), and pH (8.0–8.3), compared to southern sites (Figure 3H). Mean water temperature was significantly higher (by around 2 °C) at northern sites (25–30 °C) than at southern sites (23–28 °C), but did not differ within islands (Figure 3A). Water temperature also showed strong seasonal trends, with maximum and minimum water temperatures occurring in February to , and August to October (days 317–391), respectively (Figure 3A). Mean daily rainfall at northern sites was significantly higher than at southern sites from November to February (days 46–138) but did not differ significantly for the remainder of the year (Figure 3B). Salinity was significantly lower at northern sites than southern sites for the majority of the study period and was lower at Utulei than at Vaipua from January to April (days 107–212) (Figure 3C).
Turbidity, chlorophyll, pH, and current speed showed substantial temporal variation over the study period but little systematic variation between sites (Figure 3). Sopu experienced a peak in turbidity, chlorophyll, pH, and current speed during January to February, coinciding with the occurrence of three severe tropical depressions (Figures 3D–G; Tonga Meteorological Service, 2019). Turbidity, pH, and current speed also generally showed lower temporal variation at northern sites than at southern sites (Figures 3D,E,G). Turbidity, chlorophyll, and current speed were generally low at all sites, with mean values below 2.0 NTU, 3.0 µg L⁻¹ and 0.1 m s⁻¹, respectively, over the study period (Figures 3E–G). Mean wave energy (±SE) was 17–90 times higher at Sopu than at all other culture sites. (In the figures, splines represent GAM model predictions ±95% confidence intervals; where one spline falls within the error envelope of another, there is no significant difference in mean measurements between those splines for that time period; missing data were due to instrument malfunction.)

Monthly Growth Rate and Condition Index

Monthly growth rate (G_M) and CI of P. penguin varied significantly by culture site, age cohort and over time (Figure 4). G_M of DVH and ST tended to decrease with time and, for most months, was significantly higher at northern than southern sites for "young" oysters, and lower at Sopu than at all other sites for "old" oysters (Figures 4A–D). G_M of WM increased with time to a peak around July to October (days 278–381) before decreasing toward November (day 412 onward), and was significantly higher at northern than southern sites from April (day 212) onward (Figures 4E,F). Condition index generally increased with time but with minimal variation between sites and ages, with the exception of a higher CI of young oysters cultured at Utulei (Figures 4G,H). Acute declines were observed in CI and G_M of DVH and ST in February (days 138–166) and June–July (days 258–288), and in CI and G_M of WM in October to November 2019 (days 381–417) (Figure 4). In general, G_M of shell dimensions and CI had positive relationships with water temperature, rainfall and chlorophyll content.

DISCUSSION

This study demonstrated significant variation in the growth performance of P. penguin at latitudinal and local scales. Between island groups in Tonga, growth performance of P. penguin was highest at the northern sites (Vava'u) and was most strongly related to the higher water temperatures at these sites. Within the southern island group, growth performance was significantly higher at the Pangaimotu site than at Sopu but was not clearly related to differences in water quality measured in this study and may be driven by differences in wave energy. Monthly growth rate of P. penguin shell dimensions also showed significant temporal variation in the form of general age-related trends interspersed by acute declines. In the sections that follow we discuss the implications of these results for mabé pearl farm site selection and production schedules.

Inter-Island Spatial Variation

Variation in P. penguin growth between island groups was most strongly related to water temperature, with both being significantly higher at northern sites. This positive relationship is symptomatic of the profound effect water temperature has on the metabolic rate and related physiological processes of pearl oysters (Yukihira et al., 2000; Lucas, 2008) and is in keeping with the results of previous aquarium- and field-based studies (del Rio-Portilla et al., 1992; Mills, 2000; Yamamoto, 2000).
Aquarium-based studies by Li et al. (2009, 2011) indicated that P. penguin has a relatively high tolerance of high water temperatures, experiences peak absorption efficiency and clearance and filtration rates at 28–29 °C, and shows only slight declines in these metrics at 32 °C. Elevated water temperatures at northern culture sites in Tonga (∼25–30 °C) may therefore enable P. penguin to live at close to peak metabolic rate for the majority of the year, without exceeding the species' upper thermal limits. Conversely, cooler temperatures in the southern island group (23–28 °C) likely result in suppressed metabolic rates for the majority of the year and account for the relatively poor growth performance of P. penguin at southern sites. (Table and figure notes: significance groupings of final measurements can be inferred from the splines in Figure 2; superscript letters denote significant groupings of G_T as per Tukey post hoc comparisons; ages not obtained from measurements were predicted from GAMs and are presented with the standard error of predictions.)

Northern sites were also characterized by lower salinity, turbidity, and pH, and higher chlorophyll content, rainfall, and current than southern sites; however, these variables were likely to be only weak drivers of P. penguin growth. While all of these variables have been shown to influence the growth performance of pearl oysters (Gervis and Sims, 1992; Lucas, 2008; Adzigbli et al., 2019), at low levels and/or in systems with low variability their effects may be substantially weaker. For example, while a positive relationship between turbidity and growth performance of P. penguin has been reported in northern Australia (Milione and Southgate, 2011, 2012), this trend was not detected in the present study. This is likely due to the low mean turbidity at all sites in the present study, which was comparable to offshore sites of the Great Barrier Reef (0.3–1.0 NTU) and substantially lower than at near-shore sites (regularly >100 NTU) shown to yield higher growth performance of P. penguin (Orpin et al., 2004; Milione and Southgate, 2012). Similarly, although salinity showed substantial inter-island variation in this study, the salinity range recorded (32–35 ppt) was very close to the species' optimal range (Li et al., 2011) and was therefore unlikely to account for the observed growth trends. Taken together, these results indicate that in oligotrophic marine environments with minimal terrestrial inputs, such as in Tonga, water temperature may be the primary water quality factor influencing the growth performance of P. penguin. Large-scale geographic trends in water temperature may therefore be a useful predictor of the potential suitability and productivity of P. penguin culture sites in other island groups of Tonga.

Intra-Island Spatial Variation

The growth performance of P. penguin also varied significantly between sites within island groups, but was not clearly related to the environmental variation measured in this study. In the southern island group, growth performance of P. penguin was significantly lower at Sopu than at the Pangaimotu site, while in the northern island group site-related variation in growth was weaker and showed an interaction with age cohort. Although intra-island trends in P.
penguin growth were not clearly related to the water quality parameters measured in this study, they may be driven by site wave energy. The site supporting the worst oyster performance, Sopu, experiences 17-90 times higher wave energy than the Vaipua, Utulei, or Pangaimotu sites (Smallhorn-West et al., 2020) and showed moderate movement of oysters in cylinders during periods of strong wave energy. Increased physical agitation of pearl oysters can increase byssal secretion (Taylor et al., 1997; Kishore et al., 2014) and reduce pearl quality (Kishore and Southgate, 2016), and may account for the poorer growth performance of P. penguin cultured at Sopu than at Pangaimotu. Sopu was also the only site to record acute peaks in pH, turbidity, chlorophyll, and current speed coinciding with a series of severe tropical depressions that tracked through Tonga in February 2019 (Tonga Meteorological Service, 2019). This suggests that oysters cultured at the Sopu site may also be more exposed to the effects of seasonal disturbance events than oysters at the other culture sites used in this study. Results of this study suggest that while water temperature is the most important large-scale consideration for P. penguin culture-site selection in Tonga, wave exposure and vulnerability to disturbance events could be important local considerations. In addition to its effects on growth performance, sites with high wave exposure may also experience faster deterioration of infrastructure and equipment, and a lower ease of operation (Southgate, 2008). These factors can increase operating risks and maintenance and labor costs and thereby reduce mabé pearl farm profitability and feasibility (Johnston and Hine, 2015; Johnston et al., 2020).

Temporal Variation

Temporal trends in growth rate and CI of P. penguin in this study were strongly related to time and environmental variation. Monthly growth rate of DVH and ST of P. penguin tended to decrease with time, while CI and G M of WM tended to increase with time. These trends are typical of age-related changes in growth over the lifespan of pearl oysters, which are characterized by initial exponential growth, followed by a shallower increase to near maximum size (Gervis and Sims, 1992; Southgate and Lucas, 2003). Monthly growth trends also reflected a tendency for P. penguin to shift from a low WM:DVH ratio (<1:1) below around 90 mm DVH to a high WM:DVH ratio (>1:1) above this size (Gordon et al., 2017). Increases in G M and CI of P. penguin also mirrored increases in water temperature and rainfall from October to January (days 15-107), but declined sharply in February (days 138-166) following a series of severe tropical depressions (Tonga Meteorological Service, 2019). This acute decline in G M and CI of P. penguin was likely related to stress and a probable spawning event prior to, or triggered by, the disturbance (Southgate, 2008; Milione and Southgate, 2012). Acute declines in G M and CI of P. penguin in June-July (days 258-288) and October to November (days 381-417) were not associated with disturbances but may indicate the occurrence of additional spawning events. It is also notable that G M of P. penguin was similar at all sites until the February disturbance (days 147-151), after which oysters at northern sites showed better recovery of G M than oysters at southern sites, with this difference persisting for the remainder of the study. Site-related variation in recovery of P.
penguin was also reported by Milione and Southgate (2012) and suggests environmental conditions may also determine whether disturbance events have additional chronic impacts on P. penguin growth.

Implications for Culture Site Selection and Production Schedules

All culture sites in this study yielded >97% survival of P. penguin and produced G M higher than, or comparable to, those reported by previous studies at similar latitudes in north Queensland and China (Beer, 1999; Fu et al., 2001, 2007; Liang et al., 2001; Gu et al., 2009, 2013; Milione and Southgate, 2011, 2012; summarized in Gordon et al., 2020). These results indicate that the ranges of all environmental variables examined in the study are suitable for the culture of P. penguin. While all sites were suitable for P. penguin culture, higher water temperatures and lower wave energy in the northern island group (Vava'u) resulted in the best growth performance at these sites, which are therefore recommended as the preferred location for culture of P. penguin in Tonga. This study therefore recommends that optimal culture sites for P. penguin in Tonga are primarily characterized by warm water temperatures (25-30°C) and low wave exposure (<15 joules m−2 day−1). Pteria penguin cultured at sites with more suitable environmental conditions reached pearl production size up to 34.2% (6.5 months) earlier than oysters cultured at less suitable sites. This difference in G M would enable mabé pearl production to begin substantially sooner at more suitable culture sites and could have profound effects on production schedules, farm profitability, and feasibility (Saidi et al., 2017; Johnston et al., 2020). Results of this study therefore highlight the impact of both large-scale and local environmental conditions on mabé pearl farm productivity and feasibility. Future research should now assess the effects of environmental conditions not only on growth performance, but also on mabé pearl production and quality in Tonga to enable full systematic industry recommendations to be made. This study provides vital information to inform future mabé pearl farm site selection, marine spatial planning, and economic analyses to ensure continued sustainable expansion of the mabé pearl sector in Tonga and culture of P. penguin in the Pacific.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS

SG: conceptualization, methodology, formal analysis, investigation, writing-original draft, visualization, and project administration. MW: methodology, resources, and project administration. PS-W: methodology and writing-review and editing. SM and TH: resources. DS: methodology, formal analysis, supervision, and writing-review and editing. PS: conceptualization, methodology, funding acquisition, supervision, and writing-review and editing. All authors contributed to the article and approved the submitted version.

FUNDING

This study was jointly funded by the University of the Sunshine Coast Graduate Research School (grant number: 1.035.06782) and the Australian Centre for International Agricultural Research (ACIAR) and was conducted as part of ACIAR Project FIS/2016/126 "Half pearl industry development in Tonga and Vietnam" led by PCS at the University of the Sunshine Coast.
Peculiar Stationary EUV Wave Fronts in the Eruption on 2011 May 11

We present and interpret observations of extreme ultraviolet (EUV) waves associated with a filament eruption on 2011 May 11. The filament eruption also produces a small B-class two-ribbon flare and a coronal mass ejection (CME). The event is observed by the Solar Dynamics Observatory (SDO) with high spatio-temporal resolution data recorded by the Atmospheric Imaging Assembly (AIA). As the filament erupts, we observe two types of EUV waves (slow and fast) propagating outwards. The faster EUV wave has a propagation velocity of ~500 km/s and the slower EUV wave has an initial velocity of ~120 km/s. We report for the first time that not only does the slower EUV wave stop at a magnetic separatrix to form bright stationary fronts, but the faster EUV wave also transits a magnetic separatrix, leaving another stationary EUV front behind.

Introduction

As the two largest eruptive phenomena in the solar atmosphere, both solar flares and coronal mass ejections (CMEs) are frequently associated with erupting filaments, which later become the core of the CMEs. These three phenomena are the three important ingredients in the standard CSHKP model (Carmichael 1964; Sturrock 1966; Hirayama 1974; Kopp & Pneuman 1976) for flares/CMEs. In this unified model, the erupting filament plays a very crucial role (Chen 2011). The erupting filament also drives wave phenomena in the solar atmosphere, which are manifested in radio, Hα, extreme ultraviolet (EUV), and other wavelengths. In contrast to Hα Moreton waves, which occur sparsely, the frequently accompanying waves that can be directly imaged are EIT waves. EIT waves were discovered in EUV difference images with the EUV Imaging Telescope (EIT, Delaboudinière et al. 1995) on board the Solar and Heliospheric Observatory (SOHO, Domingo et al. 1995) by Moses et al. (1997) and Thompson et al. (1998). This is the reason why they were named EIT waves more than 17 years ago. Later, several other names were invented for this phenomenon, such as EUV waves and large-scale coronal propagating fronts (Nitta et al. 2013). EIT waves were initially proposed to be the coronal counterparts of Hα Moreton waves, i.e., fast-mode magnetohydrodynamic (MHD) waves in the solar corona (Thompson et al. 1998; Wang 2000; Wu et al. 2001). However, these waves present some features which cannot be easily explained by the fast-mode wave model. For example, according to Klassen et al. (2000), the typical velocity of EIT waves is in the range of 170-350 km s−1, which is about 3 or more times slower than Moreton waves. In some cases, the EIT wave speed can be as small as ∼10 km s−1 (Zhukov et al. 2009). In order to resolve the velocity discrepancy, several non-wave models have also been proposed (Gallagher & Long 2011; Chen & Fang 2012; Patsourakos & Vourlidas 2012a; Liu & Ofman 2014). For more reviews on EUV waves, see Warmuth (2007), Wills-Davey & Attrill (2009), Zhukov & Veselovsky (2007), Zhukov (2011), Warmuth & Mann (2011), and Patsourakos & Vourlidas (2012b). Very recently, Warmuth (2015) presented an excellent review on globally propagating coronal waves. The review focuses on the various observational findings, the physical nature, and the different models of EUV waves proposed in the past years. It seems now that there should be two types of EUV waves with different velocities, and the EIT wave initially discovered by Moses et al. (1997) and Thompson et al. (1998) may correspond to the slower type of EUV waves (Chen et al. 2002).
Following Chen & Fang (2012), we use "EIT waves" specifically for the slower type of EUV waves. Biesecker et al. (2002) performed a statistical study of EIT waves observed by the SOHO/EIT (195 Å) telescope and found that some of the EIT waves have sharp bright features, which they called "S-waves". These S-waves may be the signature of Moreton waves. They also concluded that EIT waves having S-shape signatures are always associated with both flares and CMEs. On the basis of SOHO/EIT observations, Zhukov & Auchère (2004) suggested a bimodal characteristic for EIT waves, i.e., including a wave-mode and an eruptive-mode component. The wave-mode component is a wavelike phenomenon represented by pure MHD waves. The eruptive-mode component is defined as propagating bright fronts and dimmings resulting from the successive stretching of field lines during the eruption of CMEs, as modeled by Chen et al. (2002). Downs et al. (2012) presented comprehensive observations of the 2010 June 13 EUV wave observed by SDO/AIA in different channels and conducted 3D MHD simulations of the CME eruption and the associated EUV waves. They suggested that the outer component of EUV waves behaves as a fast-mode wave and found that this component later decouples from the associated CME. Their study distinguishes between wave and non-wave mechanisms of EUV waves. An even more serious issue that led Delannée & Aulanier (1999) and Delannée (2000) to doubt the fast-mode wave model for EIT waves is that they found stationary wave fronts in several events. These stationary fronts are found to be located at magnetic separatrices. Considering that this feature can hardly be accounted for by the fast-mode wave model, they related the stationary EUV front to the opening of the closed magnetic field lines during the CME. The stationary EUV front was also explained in the framework of the magnetic field-line stretching model proposed by Chen et al. (2002). With numerical simulations, Chen et al. (2005) and Chen et al. (2006) illustrated how a propagating EIT wave stops at the magnetic separatrix. However, as mentioned by Delannée & Aulanier (1999), although it is unlikely, there is a possibility that the stationary EUV front is an artifact, because it might be due to successive wave fronts reaching the same location after ∼15 min, which is the cadence of the EIT observations. With the high cadence of the SDO/AIA observations, up to 12 s, this issue can be settled conclusively. To better understand the EUV wave and its stationary fronts, in this paper we present our study of the filament eruption event originating between the active regions NOAA 11207 and 11205 on 2011 May 11. The paper is organized as follows: Section 2 describes the instruments and the observational data. The observational view of the filament eruption and associated phenomena is investigated in Section 3, whereas the EUV waves and their stationary fronts are analyzed in Section 4. The discussion of our results is presented in Section 5. Finally, the conclusion is drawn in Section 6.

Instrumentation and Data

The Atmospheric Imaging Assembly (AIA, Lemen et al. 2012) on board the SDO satellite (Pesnell et al. 2012) observes the full Sun with different filters in EUV and UV spectral lines with a cadence up to 12 s and a pixel size of 0.6″. For the current study, we use the AIA 171 Å, 193 Å, and 304 Å data. The high cadence and high spatial resolution of the AIA images allow us to see more details of the filament eruption and the associated EUV waves.
To have a better view of the EUV waves, we utilize base difference images, subtracting a pre-eruption image from each image. All the images are corrected for solar differential rotation. For the magnetograms, we use data observed by the Helioseismic and Magnetic Imager (HMI, Scherrer et al. 2012) aboard SDO. HMI measures the photospheric magnetic field of the Sun with a cadence of 45 s.

Filament Eruption and the Associated Phenomena

On 2011 May 11 the filament under study is located between the active regions NOAA 11207 and NOAA 11205 at N20W60 on the solar disk. It has a length of ∼150 Mm. To its north, there is another short filament, which shares the same magnetic neutral line. During the eruption, only the longer filament erupts. This filament starts to rise at ∼02:10 UT on 2011 May 11. The eruption of the filament is followed by a weak flare. According to Geostationary Operational Environmental Satellite (GOES) observations of the soft X-ray enhancement, the flare is classified as B9.0. As shown in Figure 1, the flare shows two quasi-parallel ribbons. As the filament moves up, the two ribbons start to separate from each other, as expected from the standard CSHKP model. The ribbons are located on opposite sides of the magnetic neutral line. The filament eruption is associated with a CME. According to the LASCO CME catalog, the CME appears in the LASCO field of view around 02:48 UT. The CME is a partial halo event with an angular width of 225°. The speed and the acceleration of the CME are 740 km s−1 and 3.3 m s−2, respectively. In order to see the kinematics of the filament eruption, we create a time-slice diagram using the SDO/AIA 304 Å data. The location of the slice is shown in the top-left panel of Figure 2. In many reported cases, filament eruptions exhibit two distinct phases, i.e., slow and fast rise phases (Chifor et al. 2006; Schrijver et al. 2008; Koleva et al. 2012; Joshi et al. 2013). Interestingly, in our case, the velocity of the erupting filament changes continually, and we cannot divide the evolution into two phases. Such a type of eruption was proposed in the case of the kink instability (Török & Kliem 2005; Cheng et al. 2012). In this case, the eruption can occur without the need for a slow rise phase with a nearly constant velocity. Therefore, our eruption event may be initiated by the kink instability in the first instance. However, we did not observe a clear twist number meeting the Kruskal-Shafranov condition for the kink instability (Srivastava et al. 2010).

Two Types of EUV Waves and Stationary EUV Wave Fronts

The filament eruption on 2011 May 11 is associated with EUV waves. The first appearance of the EUV waves is around 02:00 UT. The wave is seen to propagate mostly in the south-west direction. We display the AIA 171 and 193 Å base difference images to show the evolution of the EUV waves in the two rows of Figure 3, respectively. To make the base difference images, a pre-event image at 02:00 UT is subtracted from each observed image. Propagating EUV waves are clearly seen in Figure 3, including a fast-moving EUV wave marked by yellow arrows and another slowly-moving EUV wave indicated by red arrows. The slowly-moving EUV wave is followed by coronal dimmings in both wavelengths. Since the coronal dimmings can be observed in different wavelengths, they are mainly due to the depletion of plasma density. To see the kinematics of the EUV waves clearly, we create a time-slice image in AIA 193 Å.
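A time-slice (time-distance) diagram of this kind can be assembled by sampling each frame along a fixed slit and stacking the samples as columns. The following minimal sketch is ours, not the authors' pipeline: it assumes the co-rotated AIA frames are already loaded (for example with sunpy.map) into a NumPy array, and it uses a straight illustrative slit rather than the great-circle slit used in the paper.

# Minimal sketch (not the authors' pipeline) of building a time-slice diagram:
# stack base-difference images, sample intensity along a fixed slit, and
# collect one column per time step. Frames are assumed pre-loaded (e.g., via
# sunpy.map) as a (n_time, ny, nx) array of co-rotated AIA 193 A images.
import numpy as np

def time_slice(frames: np.ndarray, slit_y: np.ndarray, slit_x: np.ndarray):
    """Return a (n_slit, n_time) array of base-difference intensity along the slit."""
    base = frames[0].astype(float)               # pre-event reference image
    diffs = frames.astype(float) - base          # base-difference images
    # Nearest-pixel sampling along the slit for each time step.
    return diffs[:, slit_y.round().astype(int), slit_x.round().astype(int)].T

# Illustrative straight slit; the paper's slit is a great circle starting from
# the flare site, which would be computed in world (solar) coordinates.
ny, nx, nt = 512, 512, 60
frames = np.random.default_rng(1).normal(size=(nt, ny, nx))  # placeholder data
s = np.linspace(0, 1, 300)
slit_y, slit_x = 100 + 300 * s, 120 + 250 * s
ts = time_slice(frames, slit_y, slit_x)
print(ts.shape)  # (n_slit_points, n_time): plot with distance vs. time axes

In a diagram built this way, the slope of a moving front gives its plane-of-sky speed, and a stationary front appears as a horizontal bright band.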
As shown in the left panel of Figure 4, the slice is a great circle starting from the flare region. The right panel of Figure 4 displays the time evolution of the 193 Å intensity distribution along the slice. Inspecting the time-slice diagram, we find that there are two types of waves: one is a fast-moving wave and the other is a slowly-moving wave. We identify the fast-moving wave as a fast-mode MHD wave and the slowly-moving wave as an EIT wave, as marked by the arrows in the right panel of Figure 4. The speed of the fast-mode wave is ∼500 km s−1, which is several times greater than the coronal sound speed. The observed slower wave is a typical EIT wave. The initial speed of the EIT wave is ∼120 km s−1, which is even smaller than the coronal sound speed. Note that at the AIA 193 Å formation temperature, the coronal sound speed is 186 km s−1. It is also seen that as time progresses, the foremost front of the fast-mode wave keeps a constant speed, whereas the speed of the EIT wave decreases, and around 02:33 UT it stops. Since the EIT wave bifurcates into two fronts, they form two stationary fronts, F 2 and F 3, at distances of 160″ and 200″, respectively. Since SDO/AIA has a 12 s cadence, we can follow the propagation of any EUV waves. It is seen that the stationary fronts at the distances of 160″ and 200″ in Figure 4(b) indeed result from the gradual deceleration of the slowly-moving EIT wave. More interestingly, we notice another two stationary fronts in Figure 4(b), which are not related to the EIT wave. The first one, F 1, is located at a distance of 110″, and the second one, F 4, is at a distance of 280″ in Figure 4(b). The first one, which is very close to the flare site, is the border of a core dimming region. In order to understand the formation of these stationary EUV fronts, we plot the extrapolated coronal magnetic field in Figure 5, where the extrapolation is based on the potential field source surface (PFSS) model. After checking the extrapolated magnetic field, we find that Front F 2 (marked by the red line) is nearly cospatial with a magnetic separatrix or quasi-separatrix layers (QSLs), where magnetic field lines diverge rapidly. Note that a magnetic separatrix is a special case of a magnetic QSL, where the neighboring magnetic fields belong to different magnetic systems. Front F 4 (marked by the yellow line) is not cospatial with, but very close to, another QSL. Besides, Front F 1 is located inside the magnetic system of the source region, and Front F 3 is shifted slightly from the QSL that is nearly cospatial with Front F 2.

Discussion

When EIT waves were discovered, they were initially considered as fast-mode MHD waves (Thompson et al. 1998; Wang 2000; Wu et al. 2001), i.e., the long-awaited coronal counterparts of chromospheric Moreton waves. Moreton waves were discovered by Moreton (1960) and Moreton & Ramsey (1960) as a dark front followed by a bright front in the Hα red wing, or a bright front followed by a dark front in the Hα blue wing. They have a typical velocity of the order of 1000 km s−1 (Smith & Harvey 1971). Despite some apparent evidence that seems to support the fast-mode wave nature of EIT waves (e.g., Olmedo et al. 2012; Gopalswamy et al. 2009; Ballai et al. 2005), a serious problem with the fast-mode wave model is that the EIT wave speed is typically ∼3 times slower than Moreton waves (Klassen et al. 2000), and in some cases the EIT wave speed is only ∼80 km s−1 (Klassen et al. 2000) or even ∼10 km s−1 (Zhukov et al. 2009).
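The coronal sound speed quoted above can be checked with the standard expression c_s = sqrt(γ k_B T / (μ m_p)). In the short calculation below, the adiabatic index γ = 5/3, the mean molecular weight μ ≈ 0.6, and the 193 Å formation temperature T ≈ 1.5 MK are our assumed values, not numbers taken from the paper:

# Back-of-envelope check (our assumed parameter values): sound speed
# c_s = sqrt(gamma * k_B * T / (mu * m_p)) for the corona at the AIA 193 A
# formation temperature.
from math import sqrt
from scipy.constants import k as k_B, m_p  # Boltzmann constant, proton mass

gamma = 5.0 / 3.0   # adiabatic index
mu = 0.6            # assumed mean molecular weight of coronal plasma
T = 1.5e6           # assumed 193 A formation temperature [K]

c_s = sqrt(gamma * k_B * T / (mu * m_p))
print(f"c_s = {c_s / 1e3:.0f} km/s")  # ~185 km/s, consistent with the quoted 186 km/s

With these values the EIT wave (∼120 km s−1) is indeed sub-sonic, while the fast-moving wave (∼500 km s−1) is clearly super-sonic.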
Nitta et al. (2013) claimed that large-scale coronal propagating fronts have a mean wave speed of 644 km s−1, which is comparable to that of Moreton waves. However, they always selected the fastest front in each of their time-slice diagrams. Therefore, in our view, most events in their paper are the coronal counterparts of Moreton waves, rather than the original EIT waves found by Thompson et al. (1998). In our study, whenever we say that the EIT wave is generally three times slower than the fast-mode wave in the corona, we mean the slower one in the two-wave paradigm. The lack of correspondence between the speeds of Moreton and EIT waves was also suggested by Warmuth et al. (2001, 2004a). In order to explain the velocity difference, they proposed that the fast-mode wave decelerates from typical Moreton wave speeds to typical EIT wave speeds. Whereas this idea may be able to explain the deceleration of the real fast-mode wave, whose speed is higher near the source active region than in the quiet region, we definitely need another model to explain those EIT waves whose speeds are below the sound speed. In order to explain the low speeds of many typical EIT waves, Chen et al. (2002) and Chen et al. (2005) proposed that two types of EUV waves are formed in association with a filament eruption. The fast-moving wave is a piston-driven shock wave, which corresponds to the coronal counterpart of the chromospheric Moreton wave, and the slowly-moving wave is an apparent motion, which is formed due to the successive stretching of the closed magnetic field lines overlying the erupting flux rope. The co-existence of two types of EUV waves was initially verified by Harra & Sterling (2003), and later conclusively confirmed by SDO/AIA observations (Chen & Wu 2011; Asai et al. 2012; Kumar et al. 2013; White et al. 2013). It can also be identified in many events statistically analyzed by Nitta et al. (2013). As predicted by the magnetic field-line stretching model (Chen et al. 2002), the fast-moving EUV wave is about 3 times faster than the slowly-moving EUV wave. For example, the ratio is 2.5 (Harra & Sterling 2003), 2.9 (Chen & Wu 2011), 3.4 (Kumar et al. 2013), and 1.8 (White et al. 2013). In this paper, we also found two EUV waves, with the velocity ratio being 4.2. According to the magnetic field-line stretching model, this relatively large ratio implies that the closed field lines overlying the filament are relatively more stretched in the radial direction. Besides, it is seen that expanding dimmings immediately follow the slower EIT wave, as illustrated by Figure 3. Such a feature is again consistent with the magnetic field-line stretching model, which interprets the EIT waves and the expanding dimmings as both being due to the field-line stretching. Another feature of EIT waves that led to doubt on the fast-mode wave model is the stationary fronts. Delannée & Aulanier (1999) first reported that an EIT brightening remains at the same location for tens of minutes. They called such brightenings "stationary brightenings". Later on, such stationary brightenings were confirmed in several observational studies (Delannée 2000; Delannée et al. 2007; Attrill et al. 2007; Chandra et al. 2009). Such a stationary front, located at a magnetic separatrix or, in more general cases, a QSL, was reproduced in numerical simulations and can be well explained by the magnetic field-line stretching model (Chen et al. 2005, 2006).
Even so, there has still been doubt about the validity of the stationary fronts due to the low cadence of the EIT telescope. With the high-cadence observations of the 2011 May 11 event by SDO/AIA, we confirm that the slowly-propagating EIT wave finally stops at a magnetic QSL. One peculiar feature in this event is that the EIT wave bifurcates into two stationary fronts, F 2 and F 3, in the time-slice diagram (Figure 4), and only the first front, F 2, is cospatial with a QSL, with the other one being slightly shifted away. These detailed structures could not be detected with the telescopes before SDO was launched. One possible explanation of the bifurcation is that the outer front is the traditional EIT wave front, whereas the inner front is simply an expanding coronal loop, as proposed by Cheng et al. (2012). Another possibility, which we favor, is that the two fronts are due to the projection of different layers of one EIT wave front, since the EIT wave front has a domelike structure in three dimensions (Veronig et al. 2010). In addition to the bifurcation of the slower EUV wave into fronts F 2 and F 3, even inside front F 3 a multitude of strands is identifiable. One might wonder whether the fine structures inside front F 3 can be explained by slow-mode shocks, as proposed by Wang et al. (2009, 2015). With the current observations, we cannot tell. As for the soliton model (Wills-Davey et al. 2007), we are still not sure whether a slow-mode soliton wave can propagate across magnetic field lines and stop at a magnetic separatrix. More strikingly, we find two more stationary fronts, F 1 and F 4, where F 1 is close to the flare site, and F 4 is formed when the fast-mode wave interacts with another magnetic QSL. It seems from Figure 4 that the stationary front F 1 emanates at ∼02:19 UT, which is slightly earlier than the onset of the solar flare around 02:20 UT. Therefore, it would be more related to the initiation of the filament eruption. It is noticed that this stationary front is located at the boundary of the core dimmings. Since the core dimmings are generally believed to be due to the evacuation of plasma associated with the erupting flux rope (Sterling & Hudson 1997; Jiang et al. 2003), this stationary front might be formed at the interface between the flux rope (near the footpoint) and the envelope magnetic field (which is more potential-like). In this sense, the current shell model proposed by Delannée et al. (2008) may provide a sound explanation for front F 1. As for the other stationary front, F 4, it seems that it is formed when the fast-mode MHD wave passes through the magnetic QSL at a distance of 280″ away from the flare site. This feature has never been reported before, and generally it is thought that a fast-mode wave may pass through a magnetic QSL freely, leaving no significant traces behind, since a magnetic QSL is a topological characteristic and the magnetic field strength may change smoothly across the QSL. However, from a theoretical point of view, when a wave propagates in a non-uniform medium, wave reflection is produced where the intrinsic wave speed of the medium changes rapidly. In particular, when the wave speed in a layer is much lower than that of the surrounding regions, a wave passing through is decomposed into a transmitted component and a trapped component that bounces back and forth inside this layer, just like in a Fabry-Pérot interferometer.
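The Fabry-Pérot analogy can be made concrete with a toy one-dimensional model; the following sketch is our construction, not the authors' planned simulation. A pulse obeying u_tt = v(x)^2 u_xx meets a layer where the characteristic speed drops sharply; part of the pulse is transmitted, while part remains bouncing inside the low-speed layer.

# Toy 1D illustration (our construction): a pulse in u_tt = v(x)^2 u_xx
# hitting a low-speed "valley". Part of the wave is transmitted, part stays
# trapped in the layer, bouncing between the two speed jumps.
import numpy as np

nx, L = 2000, 1.0
x = np.linspace(0, L, nx)
v = np.where((x > 0.5) & (x < 0.6), 0.2, 1.0)   # low-speed layer ("valley")
dx = x[1] - x[0]
dt = 0.4 * dx / v.max()                          # CFL-stable time step

u = np.exp(-((x - 0.2) / 0.02) ** 2)             # Gaussian pulse at t = 0
u_prev = np.exp(-((x - 0.2 + dt) / 0.02) ** 2)   # shifted copy: right-moving

layer = (x > 0.5) & (x < 0.6)
for step in range(6000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u_next = 2 * u - u_prev + (v * dt) ** 2 * lap  # leapfrog update
    u_prev, u = u, u_next
    u[0] = u[-1] = 0.0                             # fixed outer boundaries

frac = np.sum(u[layer] ** 2) / np.sum(u ** 2)
print(f"fraction of wave 'energy' inside the valley: {frac:.2f}")

Running such a model shows a nonzero fraction of the pulse lingering inside the layer long after the transmitted front has left, which is qualitatively the behavior conjectured for front F 4.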
Inspired by the observational result presented in this paper, we call such a layer a "magnetic valley" (2016, in preparation), and we are planning to study how such a magnetic valley responds to an incident wave with numerical simulations. A similar phenomenon has been numerically investigated before: Murawski et al. (2001) and Yuan et al. (2015) performed one-dimensional simulations of the propagation of fast magnetoacoustic pulses in a randomly structured plasma and found that the magnetoacoustic pulses were trapped by the randomly structured plasma. Such a "magnetic valley" exists when the QSL is a magnetic separatrix and the magnetic fields on the two sides of the separatrix belong to two different magnetic systems. The magnetic field around the separatrix might be strongly divergent. In this case, after a fast-mode MHD wave enters this magnetic valley, only a part of the wave can be refracted from the low-Alfvén-speed region out to the high-Alfvén-speed region, with the remaining part of the wave being trapped in the magnetic valley, bouncing back and forth between the two interfaces. We illustrate this physical process in Figure 6. This observational feature of EUV waves definitely merits further numerical simulations. Unfortunately, we cannot identify the bouncing waves at the stationary front. The possible reason is that in two or three dimensions, the magnetic valley has different widths at different heights, contrary to the one-dimensional case. Therefore, trapped waves with different periods are mixed together in the projected plane, making each individual wave unidentifiable. It is also noted that, as seen in Figure 5, the stationary front F 4 is not exactly cospatial with the QSL. Such a shift might be due to the limitations of the PFSS model, or such a stationary front is formed by a mechanism different from our conjecture mentioned above. Kwon et al. (2013) also reported stationary EUV fronts after the passage of a fast-mode shock wave. However, the two fronts in their paper are actually separating in opposite directions with a small velocity. Since the two EUV fronts are located at the two footpoints of a helmet streamer, the brightenings are probably produced by the magnetic reconnection of the current sheet above the helmet streamer triggered by the passing shock wave (B. Vršnak, private communication); they are like flare ribbons and different from ours. Alternatively, the formation of the stationary front F 4 might be interpreted as the stoppage of expansion of structures inside the CME, as suggested by Cheng et al. (2012). Cheng et al. (2012) presented a study of the formation and separation of two EUV waves from the expansion of a CME. They also reported that the CME and the faster EUV wave propagate with different kinematics after they decouple.

Conclusions

In this study, we presented observations of two propagating EUV waves, i.e., a fast-mode MHD wave and a slowly-moving EIT wave, associated with a filament eruption and a CME, as found in many other CME events via SDO/AIA. In association with the two propagating waves, we observed four stationary fronts, i.e., F 1, F 2, F 3, and F 4, as indicated in Figure 4. The stationary wave fronts F 2 and F 3 are the results of the gradual deceleration of the slowly-moving EIT wave, which finally stops near the location of a QSL. The formation of Front F 2 can be explained by the magnetic field-line stretching model proposed by Chen et al. (2002, 2005).
Front F 3 is bifurcated from front F 2, so it is shifted slightly away from the QSL. This might be due to projection effects, i.e., Front F 3 is from a higher layer of the same domelike EIT wave front as Front F 2. Front F 1 is proposed to be related to the initiation of the filament eruption and is located at the edge of the core dimmings. This may correspond to the edge of the erupting flux rope at the footpoint. It might be explained by the current shell model proposed by Delannée et al. (2008). Stationary front F 4 is observed for the first time. We tentatively explain it as being formed when the fast-mode MHD wave interacts with a magnetic QSL. During the interaction, a fraction of the wave passes through, with the rest being trapped locally. Other possibilities are not excluded, though.

We are thankful to the referee for his/her detailed comments and suggestions, which improved the manuscript significantly. The authors thank the open data policy of the SDO team. RC and AF are supported by the ISRO/RESPOND project no. ISRO/RES/2/379/12-13, and PFC is supported by the Chinese foundations (NSFC grant nos. 11533005 and 11025314).
Nationally Determined Contributions: Material climate commitments and discursive positioning in the NDCs

In the lead-up to the 2015 Conference of Parties meeting in Paris, 186 countries, representing over 95% of global emissions, submitted Nationally Determined Contributions (NDCs). The NDCs outline national goals for greenhouse gas emission reductions and identify financial needs for unfolding mitigation and adaptation efforts. In this study, we review various analyses of the NDCs that cover the aggregate impact and strength of emissions reduction commitments and discuss recent literature on the adequacy and sectoral focus of the NDCs. We then argue that the NDCs are more than just goal-setting reports; they are important discursive documents that are contested, negotiated, and ongoing. To supplement the existing literature, we examine the discursive narratives embedded in the NDCs from the 19 founding nations of the Climate Vulnerable Forum and the top 10 greenhouse gas emitters. Our literature review of quantitative and sectoral aspects of the NDCs highlights the inadequacy of the NDC commitments in the context of limiting warming to 2°C, discusses the uncertainties in the promised mitigation strategies, and identifies the reliance of many countries on policies such as those on forests or renewable energy. Our own analysis of the discourses in the NDCs adds critical depth by highlighting the stark contrasts in NDC discourses between North and South, as well as between historical emitters and emerging economies. These contrasts reflect deeper debates regarding justice and equity between nations within the UNFCCC negotiations.

| INTRODUCTION

One of the key mechanisms for implementing the international response to climate change under the UNFCCC Paris Agreement of December 2015 is voluntary commitments to emission reductions and other actions by countries, called Nationally Determined Contributions (NDCs) (United Nations Framework Convention on Climate Change, 2016). NDCs, which are submitted to the United Nations Framework Convention on Climate Change (UNFCCC), outline steps that a country is undertaking to reduce emissions at the national level, with the option to also discuss other actions such as adaptation. All countries are expected to deliver NDCs. Prior to the Paris Agreement coming into effect, these commitments were termed Intended Nationally Determined Contributions (INDCs). Previous agreements, such as the Kyoto Protocol, included legally binding emission reduction pledges identified as Quantified Emissions Limitation and Reduction Objectives or Nationally Appropriate Mitigation Actions (Gupta, 2014). One hundred eighty-six countries submitted INDCs in advance of the Paris conference, with the option to revise them following ratification of the Paris Agreement, which occurred in October 2016. The UNFCCC agreed at COP20 in Lima that INDCs could include commitments to emission reductions and removals, including base year, time frame, and accounting methods. The UNFCCC also offered that countries could discuss how their INDCs are fair and ambitious, how they contribute to preventing dangerous climate change, and issues related to adaptation. Additionally, least developed countries and small island developing states could choose to focus on low carbon development rather than emission reductions. Countries are expected to revise and resubmit NDCs before 2024 as a central part of the UNFCCC process.
The emerging literature on the NDCs focuses on their material significance in terms of overall emission trajectories and their commitments in particular sectors such as energy or land use. While the NDCs can be assessed as expressions of political commitment to real emission reductions, they are also narrative documents that can be read as persuasive discourses revealing deeper tensions, ideas, and values about international climate policy, national identities, and aspirations. For example, NDCs may engage with questions of responsibility and fairness or emphasize vulnerabilities and adaptation; they may focus on land use and biodiversity or provide extensive technical detail on energy policy and technology. This study summarizes the aggregate impact and strength of emission reduction commitments made in the NDCs based on recent reports and discusses the literature on the adequacy and sectoral focus of the NDCs. Because most of these reports and literatures tend to take the NDCs at face value, reporting quantitative elements or counting mentions of sectors and strategies, in this study we seek to add critical depth by analyzing the discursive narratives that can be observed in an emblematic set of NDCs from the 10 top greenhouse gas emitters and the founding countries of the Climate Vulnerable Forum (CVF). Discourse analysis allows us to analyze the NDCs for statements that reveal the intentions and aspirations of different actors beyond their specific commitments to mitigation and adaptation, reflecting the broader politics of international climate responses. Our analysis thus contributes to qualitative research in climate governance that illustrates the techniques and rationalities that underpin international efforts to respond to climate change. First, we review the literature on the aggregate quantitative impacts of NDCs on emissions and temperature. Second, we review the sectoral analyses of NDCs. Third, we undertake an original analysis of the NDCs to uncover some of the key discursive narratives and tensions in climate policy that they reflect.

| AGGREGATE ASSESSMENTS OF THE NDCs: GLOBAL TEMPERATURE AND EMISSION GAPS

Organizations that regularly report on the nature and material significance of the NDCs include the UNFCCC, UNEP (United Nations Environment Program), and the World Bank (indc.worldbank.org). The UNFCCC maintains a registry of the NDCs with the submissions from 171 parties and any updates (unfccc.int), along with an analysis of their content (United Nations Framework Convention on Climate Change, 2016). UNEP has published nine Emissions Gap Reports, which from 2015 have included analyses of the INDCs and NDCs. The 2018 report (United Nations Environment Program, 2018) concludes that the NDCs would result in 53-56 GtCO2e of global emissions in 2030, whereas to keep warming below 2°C, emissions should be less than 40 GtCO2e by 2030 (Figure 1). For a 66% chance of keeping warming below 1.5°C in 2100, the report concludes, emissions in 2030 should not exceed 24 GtCO2e, much less than the NDC-projected emissions of 53-56 GtCO2e. NDCs include both unconditional commitments and commitments that are conditional on the action of other countries and/or financial or other types of assistance. Implementing the unconditional NDCs with no further action would mean a warming of more than 3.2°C by 2100 compared to pre-industrial levels (United Nations Environment Program, 2018, p. 10).
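The size of the resulting "emissions gap" follows directly from these figures; the subtraction below is ours, while the input numbers are the report's:

# Simple arithmetic (ours) on the UNEP (2018) figures quoted above, in GtCO2e.
ndc_2030 = (53, 56)   # projected 2030 emissions under the NDCs
limit_2c = 40         # 2030 level consistent with staying below 2 C
limit_15c = 24        # 2030 level for a 66% chance of staying below 1.5 C

gap_2c = (ndc_2030[0] - limit_2c, ndc_2030[1] - limit_2c)
gap_15c = (ndc_2030[0] - limit_15c, ndc_2030[1] - limit_15c)
print(f"2 C emissions gap in 2030:   {gap_2c[0]}-{gap_2c[1]} GtCO2e")
print(f"1.5 C emissions gap in 2030: {gap_15c[0]}-{gap_15c[1]} GtCO2e")

That is, the NDCs overshoot the 2°C-consistent level by 13-16 GtCO2e and the 1.5°C-consistent level by 29-32 GtCO2e in 2030.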
The overarching conclusions of the report are that "Current commitments expressed in the NDCs are inadequate to bridge the emissions gap in 2030. Technically, it is still possible to bridge the gap to ensure global warming stays well below 2°C and 1.5°C, but if NDC ambitions are not increased before 2030, exceeding the 1.5°C goal can no longer be avoided. Now more than ever, unprecedented and urgent action is required by all nations" (p. 4). Contributors to the UNEP reports include scientists who have published on the implications of the NDCs and maintain websites to track and evaluate commitments. For example, climateactiontracker.org is maintained by a consortium of nonprofits, consultancies, and research institutes that rates and analyzes the climate implications of NDCs and other climate commitments in relation to fair-share efforts. Climate Action Tracker estimates that only seven countries have made commitments compatible with a 2°C limit (Bhutan, Costa Rica, Ethiopia, the Gambia, India, Morocco, and the Philippines) and evaluates the commitments of the Russian Federation, Saudi Arabia, Turkey, the USA, and Ukraine as "critically insufficient" and on a path that would lead to a 4°C warmer world (Climate Action Tracker, 2018; Höhne, Fekete, den Elzen, Hof, & Kuramochi, 2018). Notably, the consortium decided not to include emissions or commitments relating to land use in their quantitative estimates because of data and accounting uncertainty as well as the importance of decarbonizing the energy system. Other such sites include Climate Analytics (climateanalytics.org), the World Resources Institute (cait.wri.org/indc), and Carbon Brief. Pauw et al. (2017) have also created a useful database of the NDCs (https://klimalog.die-gdi.de/ndc) with categories reflecting the focus and detail of each NDC. Assessments of the NDCs range from the enthusiastic and positive to the concerned and skeptical. Positive assessments point to their universality, clarity, fairness, and reflection of input from smaller and vulnerable countries (Mead, 2015; Mercer, 2015; Morgan & Northrop, 2017; Schellnhuber, Rahmstorf, & Winkelmann, 2016) and to their role in advancing national climate policies and non-state action (Höhne, Kuramochi, Warnecke, & Röser, 2017). More optimistic assessments express hope that revised NDCs will ramp up ambition or note that many NDCs are conservative estimates, with some countries already exceeding their commitments as a result of faster decarbonization or the role of non-state actors. The main concern is that despite the widespread adoption of the NDCs, the current commitments are inadequate to meet temperature targets. Rogelj et al. (2016) model the temperature outcomes of the NDCs and show that although they result in cooler temperatures than business-as-usual scenarios, they still result in warming of 2.6 to 3.1°C by 2100, missing the 2°C target unless the NDCs are made more ambitious. They observe that the omission of international aviation and marine emissions, the uncertainty in land uses, the optimism about economic growth, and the lack of discussion of non-CO2 gases increase the chance of even higher temperatures. Similar conclusions are reached by Hare, Ancygier, de Marez, and Parra (2017), who project NDC outcomes of 2.8°C warming by 2100 and emphasize the need for NDC revision and increased ambition, especially in light of the rapidly declining cost of renewables. Knopf et al.
(2017) show that the INDCs absorb most of the remaining carbon budget that limits warming to 2°C and exceed that needed for 1.5°C. They argue that negative emissions, including land use changes such as reforestation or bioenergy to take up carbon, or carbon capture and storage (CCS), are indispensable to a 1.5°C goal and need to be included in country commitments and investments. Other analyses of NDCs show the importance of early action and increased ambition to have any chance of meeting temperature targets (Boyd, Turner, & Ward, 2015; van Soest et al., 2017) and that reliance on reforestation, forest protection, land use, or bioenergy in NDCs is problematic and uncertain (Climate and Development Knowledge Network, 2015). Although most current NDCs do not mention CCS, the reliance on such "negative emissions" is a source of skepticism for Geden (2016), who argues that such "magical thinking" resulted in the Paris Agreement managing "to adopt a 3°C agreement with a 1.5°C label" (p. 783), and for others who argue that negative emissions will not deliver (Anderson & Peters, 2016; Geden, 2016; Larkin, Kuriakose, Sharmina, & Anderson, 2017; Vaughan & Gough, 2016). The fairness of the NDC process and the voluntary nature of the commitments is another area of concern (Bretschger, 2017; Holz, Kartha, & Athanasiou, 2018; Iyer et al., 2016). For example, Oxfam and Holz et al. suggest that the NDCs do not reflect fair shares of the global carbon budget that would reflect historical responsibility and capacity (Holz et al., 2018; Oxfam, 2015). They argue that although reduction commitments from China and Indonesia might be considered fair, those of the United States, the European Union, and Japan should be much larger. Pan, den Elzen, Höhne, Teng, and Wang (2017) compare six equity approaches to evaluate the NDCs (combinations of responsibility, capability, and equality) and find that while India's NDC has ambition consistent with equity, the United States and the European Union lack equitable ambition. In order to look at the overall credibility of NDC commitments, Averchenkova and Bassi (2016) assess the NDCs based on indicators such as the presence of rules, organization, public concern, and past performance, and find that within the G20, the European Union has the highest credibility and Canada and India the least. Winkler et al. (2018) provide an interesting evaluation of how the NDCs represent issues of equity and adaptation, finding that countries use a range of expressions to underline their "small share" and low per capita emissions but often do not substantiate these claims with specific data or verification by independent sources. Several authors have attempted to assess the costs of meeting the NDCs. For example, Hof et al. (2017) use the IMAGE model to compare the costs of the NDCs with those of temperature targets. They find that, depending on assumptions about development paths and conditionality, the overall costs of meeting the conditional NDCs by 2030 range from $40 billion to $135 billion, and up to six times higher to meet the 1.5°C goal. For key economies, these costs are less than 0.4% of GDP, with emissions trading reducing costs by 50%. Liu, Wang, and Zheng (2017) find that an immediate carbon price would result in less costly emission reductions for key countries compared to the NDC planned actions to 2030, which, although more equitable, place burdens on future action.
Rose, Richels, Blanford, and Rutherford (2017) explore scenarios for ambition and participation and find that the costs of reductions could reach as high as 8.5% of GNP by 2100 if there is no CCS and NDC commitments remain on the current path after 2030. Fragkos et al. (2018) conclude that the overall costs to GDP of meeting the NDCs are not prohibitive but are high for fossil fuel producing countries.

| ASSESSING THE NDCs BY SECTOR

Another substantial area of research on the NDCs, largely presented in gray literature, examines how NDCs treat key mitigation sectors, including forestry, agriculture, and renewable energy. These reports analyze the material commitments to reductions in a specific sector, often for a subset of countries. For example, Forsell et al. (2016) found that land use, land-use change, and forestry (LULUCF) are expected to contribute up to 20% of total emission reductions, with Brazil and Indonesia making the most substantial pledges in this sector. These commitments, however, are tempered by the large uncertainties surrounding how mitigation via LULUCF is estimated, modeled, and monitored, and the lack of common rules of LULUCF accounting undermines the mitigation commitments in this sector. In an analysis of non-Annex 1 NDCs, Hargita and Ruter (2016) find that countries have exploited the lack of accounting rules by designing their LULUCF mitigation strategies in diverse ways, jeopardizing the transparency, attainability, and comparability of land use mitigation actions. Agriculture is a key sector in the NDCs for both adaptation and mitigation actions. One hundred forty-eight countries included agriculture (crops and livestock) among their mitigation actions. Agriculture is also plagued by challenges in emission reduction estimation, monitoring, and enforcement (Strohmaier et al., 2016). Agriculture was the foremost sector identified for adaptation in the 131 countries that discussed adaptation in their NDCs, with 94 of these countries providing details on agricultural goals that will be implemented. The agricultural mitigation and adaptation goals tend to focus on agricultural technologies over the incentives and services that support uptake (Richards et al., 2015). Kuramochi et al. (2017, p. 15) are pessimistic about mitigation in agricultural production, finding that mitigation opportunities were "scattered and potential is limited" and that transitioning diets towards less carbon-intensive options had a greater potential for reducing emissions. Stephan, Schurig, and Leidreiter (2016) analyze the potential of renewable energy to meet mitigation goals, finding that 108 countries plan on increasing renewable energy. China and India plan the highest rates of renewable expansion, and eight countries (including Costa Rica, Cabo Verde, Cook Islands, Fiji, Papua New Guinea, Samoa, Tuvalu, and Vanuatu) are planning to completely decarbonize their energy matrix. Seven countries include "clean coal" as a mitigation strategy and nine countries plan to increase nuclear power generation.

| NDCs AS DISCURSIVE TEXTS

The literature and reports reviewed above view NDC commitments in terms of their promised material contributions. While this literature sometimes expresses skepticism about the likely fulfillment of the expressed goals, for the most part it does not explore the ways in which the commitments are presented or discussed in terms of the way language is used, political positions are defined, or certain issues are avoided.
The NDCs varied widely in their content, length, and style and can be read as discursive documents that reveal deeper tensions, ideas, and values about international climate policy, national identities, and aspirations. Environmental discourse analysis has been used to understand the ideas underpinning many aspects of climate and environmental governance (Bäckstrand & Lövbrand, 2016; Fløttum & Gjerstad, 2017; Hajer & Versteeg, 2005; Hulme, 2008; Liverman, 2009; Okereke, Bulkeley, & Schroeder, 2009). Discourse has varying definitions, but in this analysis we understand discourse as the language that shapes views of the world, revealing how different actors impose frames on discussions of addressing climate change, explaining how some ideas and people come to dominate the discussion and delimit what is acceptable as policy, and creating opportunities for more democratic knowledge and practices (Feindt & Oels, 2005; Hajer & Versteeg, 2005). In particular, we draw on critical discourse studies to emphasize the relationship between language and power (Wodak & Meyer, 2001), highlighting how the discourses in NDCs "enact, confirm, legitimate, reproduce or challenge relations of power and dominance" in relation to climate change governance (Calliari, 2018, p. 728). There is little published literature to date that uses discourse analysis to examine the NDCs. Tobin, Schmidt, Tosun, and Burns (2018) use discourse network analysis to cluster countries and assess shared interests within existing interest groups. Their analysis uses a content analysis of statements in the NDCs, identifying key approaches to mitigation, and finds that non-European Union developed states and OPEC have the most internal similarity in their NDC statements about emission reduction targets, land use, and adaptation. However, they do not analyze the style or framing of what is said within the NDCs as discourses. To fill this gap and show the role of NDCs as discursive texts where countries assert political positions and values, below we provide an initial analysis of discourses in a subset of the NDCs. We conducted a review of the NDCs in order to analyze the discourses employed by the founding nations of the CVF (n = 19) as well as the top national greenhouse gas emitters (n = 10). We included CVF nations because these countries constitute an important negotiating bloc within the UNFCCC process, positioned as a highly visible Global South partnership that is "highly vulnerable to a warming planet." We chose to also include the top 10 greenhouse gas emitters because these nations provide an important counterpoint to the CVF. These top emitters are key players in the UNFCCC negotiations and include both emerging economies such as India and China as well as large historical emitters such as the United States and the European Union. In order to structure our review, we chose a priori analytical categories that we identified based on an initial reading of the NDC texts and relevant literature on the discourses of international environmental governance. We conducted discursive coding of the selected NDCs using NVivo qualitative analysis software (QSR International), examining and comparing the compiled passages for each discursive category across NDCs. We coded the NDCs three times for consistency. We illustrate the discourses with quotations from illustrative NDC texts. For more details on our methodology, see Appendix 1.
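As an illustration of what a first, purely mechanical pass over such a priori categories could look like (the authors used NVivo; the sketch below, with made-up category keywords and a hypothetical folder of NDC text files, only mimics keyword matching and is no substitute for qualitative coding):

# Illustrative first-pass keyword matcher (not the authors' NVivo workflow):
# count how often hypothetical discourse-category keywords appear in each NDC.
import re
from pathlib import Path

CATEGORIES = {  # hypothetical keyword lists, one per a priori category
    "responsibility": ["historical responsibility", "per capita", "emitter"],
    "vulnerability": ["vulnerable", "sea level", "disaster"],
    "conditionality": ["conditional", "international support", "finance"],
}

def code_document(text: str) -> dict:
    text = text.lower()
    return {cat: sum(len(re.findall(re.escape(kw), text)) for kw in kws)
            for cat, kws in CATEGORIES.items()}

for path in sorted(Path("ndcs").glob("*.txt")):  # hypothetical folder layout
    counts = code_document(path.read_text(encoding="utf-8"))
    print(path.stem, counts)

Such counts can flag passages for closer reading, but the interpretive work of assigning discursive meaning remains a manual, iterative step, as in the three-pass coding described above.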
| The blame game: Responsibility for emissions

The assignment of responsibility for causing climate change is a highly political aspect of the NDCs, and the discourse of responsibility is highly differentiated between the CVF and high emitters. Almost all CVF countries address the issue of responsibility by highlighting their insignificant contribution to global greenhouse gas emissions (using quantitative data) and calling on large historical emitters to both reduce emissions and financially support adaptation in CVF countries. St. Lucia exemplifies this discourse, stating, "(Our) greenhouse gas emissions are miniscule in global terms, with the country having contributed approximately 0.0015% of global emissions in 2010 at a per capita rate of 3.88 tCO2-eq" (pp. 1-2). In contrast, the high emitters the European Union, Russia, and Indonesia do not mention the issue of responsibility. The United States does so cursorily by claiming that their target is "fair and ambitious" while not directly addressing historical emissions. Canada highlights that they have the cleanest energy supply in the G7 and G20 and account for only 1.6% of global emissions, without mentioning historical responsibility. Only Japan acknowledges its role as a major historical emitter. Among the top emitters, the emerging economies of China, Brazil, and India all directly blame historical emissions for climate change. India states: "India, even though not a part of the problem, has been an active and constructive participant in the search for solutions. Even now, when the per capita emissions of many developed countries vary between 7 and 15 metric tonnes, the per capita emissions in India were only about 1.56 metric tonnes in 2010. By enhancing their efforts in keeping with historical responsibility, the developed and resource rich countries could reduce the burden being borne by developing countries." Brazil echoes this discourse in advocating for national reduction contributions to be pegged directly to historical emissions. Brazil suggests their NDC is "far more ambitious than its marginal relative responsibility to the global average temperature increase."

| (Over)relying on renewables and land use for mitigation

Many of the NDCs focus on renewables and/or land use to meet their emission reduction commitments. While solar and wind are the most frequently mentioned technologies, biomass, nuclear, and hydropower are also discussed. Renewables are seen as a mitigation and low carbon development option for the CVF. Vanuatu, for example, sets a goal of 100% renewable energy contingent on financial and technical support, with specific megawatt generation goals for different renewable technologies. The high-emitting emerging economies of India, Indonesia, and China highlight their ambitious quantitative goals for renewable energy use. India asserts that it has the "largest renewable expansion program in the world" while Brazil emphasizes its large, successful biofuel program. Many NDCs discuss halting deforestation or bolstering reforestation to reach their mitigation goals. Among the high emitters, Russia places forests at the center of its mitigation strategy: "Limiting anthropogenic greenhouse gases in Russia to 70-75% of 1990 levels by the year 2030 might be a long-term indicator, subject to the maximum possible account of absorbing capacity of forests [...] forest management, is one of the most important elements of the Russian policy to reduce GHG emissions."
Indonesia, Brazil, and India suggest that REDD+ programs are a key means of reaching land use mitigation goals. Among the CVF countries, nine plan on using REDD+ to reach land use mitigation goals. Bhutan states that the "vast forest sink of Bhutan will form the cornerstone of their commitment to remain carbon neutral," and Costa Rica proposes to become a carbon neutral economy by "compensating its emissions through the removal or offsetting by the forest sector," including using market mechanisms. However, despite this heavy reliance on land use, countries do not discuss the uncertainties, present robust monitoring schemes, or acknowledge the social and environmental limitations of REDD+ raised by many scholars (Di Gregorio et al., 2013; Larson et al., 2013; Schroeder & McDermott, 2014). | We are exceptional: Arguing for extreme vulnerability Vulnerability is referred to extensively throughout the NDCs, but without any clear definition. Across NDCs, countries position themselves as being uniquely vulnerable to climate change based on their biophysical traits and socioeconomic challenges. Island states such as Indonesia, Tuvalu, Vanuatu, St. Lucia, the Philippines, Madagascar, the Maldives, Kiribati, and Barbados frame their vulnerability as an inherent trait of low-lying islands. Vanuatu states that it is "one of the countries most vulnerable to climate change among the other Pacific island nations," positioning itself as unusually vulnerable even among island states. Mountainous countries such as Nepal and Bhutan also position themselves as vulnerable because they are landlocked and mountainous, with Nepal claiming that "its development agenda is constrained by the fact that it is one of the most vulnerable countries to the adverse effects of climate change." Beyond these biophysical factors shaping vulnerability, countries also refer to social determinants of vulnerability. Indonesia states, "the poorest and most marginalized populations tend to live in high-risk areas that are prone to flooding, landslides, sea level rise, and water shortages during drought." Socioeconomic drivers of vulnerability were mentioned uniformly across CVF countries and several high emitters, although Russia, the United States, the European Union, Canada, and Japan do not explicitly mention vulnerability despite considerable vulnerability within their countries. The discourse of exceptional vulnerability is pervasive across the NDCs of CVF countries as well as emerging economies. Countries draw on various indices and narratives to exemplify their unique vulnerability. As a prime example, Bangladesh claims exceptional vulnerability based on its disaster risk: "Bangladesh, one of the world's most disaster-prone climate vulnerable countries, has faced dozens of major disasters over its short history as a nation…the Climate Change Vulnerability Index (CCVI-2011) reveals that Bangladesh is the most vulnerable country to climate change." Tuvalu's NDC states, "Tuvalu is the world's second lowest lying country and sea level rise poses a fundamental risk to its very existence," joining Kiribati in highlighting the existential threat to their nations. These passionate claims to vulnerability support the efforts of CVF and emerging economy countries to secure financial and technical support from donor countries.
| Only if: Conditionality of commitments The CVF NDCs almost uniformly emphasize that their mitigation commitments are conditional on developed countries taking significant action and on international support through financial resources, technology transfers, and capacity building (see Table 1). Kiribati clearly defines its two requisite conditions for achieving its conditional emission reduction target: "All commitments are premised on: (a) a fair and ambitious agreement being reached, reflecting Common but Differentiated Responsibilities and Respective Capabilities and (b) timely access to international climate change financing, capacity building, and technology." Four of the emerging economies (Mexico, India, Brazil, and Indonesia) include both a conditional and an unconditional reduction target (see Table 1). Mexico explains this distinction: "The unconditional set of measures are those that Mexico will implement with its own resources, while the conditional actions are those that Mexico could develop if a new multilateral climate regime is adopted and if additional resources and transfer of technology are available through international cooperation. This is unprecedented, since it is the first time Mexico assumes an unconditional international commitment to carry out certain mitigation actions." The use of both conditional and unconditional commitments by emerging economies represents their acknowledgement of their role as significant emitters now and into the future as well as their desire to hold historical emitters responsible for past contributions to climate change. For example, India ties its NDC to the "availability and level of international financing and technology transfer" and expects that "developed countries would recognize that without means of implementation and adequate resources, the global vision is but a vacant dream." | Show us the money: Climate finance Many NDCs note that mitigation pledges and adaptation needs will require considerable financial resources to realize. Many CVF countries explicitly enumerate their mitigation and adaptation finance needs, and some even itemize priority projects with associated price tags (see Table 1). Many CVF countries use the NDCs to demonstrate their readiness to receive international financial support, either through existing adaptation funds or through their experience with large multinational climate funds. Afghanistan explicitly identifies whom it views as crucial sources of climate finance: "Afghanistan requires the UNFCCC, the Global Environmental Facility (GEF), the Green Climate Fund (GCF), and other international institutional arrangements to provide the extra finance and other support needed to successfully implement LEDS across all sectors of its economy without compromising socioeconomic development goals." Among the top emitters, Mexico, Brazil, and China all emphasize the importance of South-South cooperation. China is the only country among the top emitters that explicitly mentions creating a South-South fund to support other developing countries. Significantly, other donor countries such as the United States, the European Union, Japan, and Canada do not mention financial commitments for climate aid in their NDCs. | Adaptation is not optional While the NDCs are principally envisioned as plans for how nations will reduce national greenhouse gas emissions, nearly all the analyzed NDCs also mention adaptation as the essential partner of mitigation.
Across both the top emitters and CVF countries, adaptation is labeled as "essential" and "inevitable." China places adaptation and mitigation on "equal footing" and includes adaptation plans that target specific actions. Kiribati and many other low-lying Pacific states position adaptation as a means of survival: "For Kiribati, where climate change threatens the very existence of the nation and population, adaptation is not an option-but rather a matter of survival." In the CVF countries, adaptation eclipses mitigation in the NDC discourses because the potential for significant emissions reductions is minimal and the NDCs provide an opportunity to showcase projects, make plans, and request support. [Table 1 excerpt: one CVF country notes that $218 million USD is needed for mitigation before 2030 and requires international support, and that it will seek adaptation funding through economic and fiscal incentives, regional agencies, bilateral processes, and concessional funding from the private sector, civil society, and the general public. Tanzania pledges a 10-20% reduction below BAU by 2030 and states that $150 million USD is needed to build adaptive capacity, $500 million USD for adaptation and resilience by 2020 (increasing to $1 billion by 2030), and $60 million for mitigation by 2030; these actions "strongly" depend on international finance and technology support.] Despite the prominence of adaptation within the NDCs, no country defines adaptation, although some countries offer detailed lists of adaptation actions. Many CVF countries that are also least developed countries refer to their National Adaptation Plans for more details. The diverse and imprecise treatment of adaptation is likely because adaptation was a vague option offered by the UNFCCC in its instructions for the NDCs, as compared to the more explicit instructions on mitigation. | Carbon markets: At what scale and in what form? Market mechanisms, such as carbon offset markets, were explicitly addressed by a subset of the nations reviewed. Brazil, China, Costa Rica, Ethiopia, Kenya, Rwanda, Nepal, and St. Lucia mention that an emission trading scheme will be (or should be) part of their mitigation approach. However, only China and Costa Rica extensively outline how and at what scale they envision their carbon market operating. The scale and type of carbon offset market varied among nations. For example, Rwanda stated its intention of selling carbon credits in an international marketplace, while St. Lucia proposes a national cap-and-trade market. Other countries, such as Brazil, mention that they simply want the option of employing market mechanisms to achieve their mitigation goals. | Silences in the NDCs While we focus on the prevalent discourses within the NDCs, we also note some themes in climate governance that are absent. For example, very few NDCs discuss science or use scientific evidence to make their case. This absence is particularly notable because climate science has been central to the UNFCCC negotiation process, where climate modeling efforts established the warming degree thresholds and associated emission targets that underpin the NDCs. However, 10 countries in the sample used future climate impact estimates established through scientific studies to target their adaptation efforts, and nine countries in the sample identified research institutes, further climate risk assessments, and other science-based efforts as important components of adaptation and mitigation efforts. For example, the Philippines highlighted the importance of "science-based climate/disaster risk reduction" in development plans.
Among the top emitters, only China and India explicitly mention supporting climate research institutes as integral parts of their NDCs. Another enormous gap is the lack of any discussion of embodied emissions, that is, the emissions created through the production, processing, or transport of goods. Even China, where a large share of emissions is embodied in exported goods, does not raise this issue. Gender is mostly raised in the NDCs as a determinant of social vulnerability or as an overall development goal, with only Costa Rica and Mexico mentioning gender in relation to mitigation goals. Gender is not mentioned by China, the European Union, Canada, the United States, Japan, or Russia. Only a handful of countries briefly mention indigenous peoples (Brazil, Indonesia, Mexico, Nepal, Philippines) or human rights. | DISCUSSION AND CONCLUSIONS The NDCs submitted to the UNFCCC are important socio-political documents that offer valuable insights into the politics, needs, and priorities of each nation. First, they express the magnitude and strategies of promised emission reductions. Second, they reveal some of the underlying values and political positions of countries regarding responses to climate change. The broader literature on the quantitative and sectoral aspects of the NDCs offers critical perspectives by highlighting the inadequacy of the current NDCs in the context of the goal of limiting warming to 2°C, discussing the uncertainties in some of the promised mitigation strategies, and identifying the reliance of many countries on certain policies such as those on forests or renewable energy. Our own analysis of the discourses in the NDCs provides added critical depth by showing the ways in which the issues of climate change mitigation and adaptation are framed, elevating some narratives and obscuring others. These discourses reflect the major tensions in the debate over climate change and our collective future. While the 160 NDC submissions were prepared with a common template, and often with the advice of consultants, there remain substantial differences between them. The discourses embedded within the NDCs present the national circumstances of each nation and how nations construct their own vulnerability, readiness for climate finance, good will, and culpability (or lack thereof) for climate change. We find stark contrasts in NDC discourses between North and South as well as between historical emitters and emerging economies, reflecting deeper debates regarding justice and equity between nations within the UNFCCC negotiations. For example, the CVF countries devote substantial space to the discussion of their vulnerabilities, with scant information on mitigation unless it is linked to external assistance. Several top emitters do not discuss responsibility or make commitments to provide assistance and climate finance. The discourses embedded in the NDCs outline the contours of an ongoing debate about who will pay for global mitigation and adaptation efforts, the readiness of nations to receive aid, and the relative responsibility of international aid, the private sector, and the public sector in supporting NDC actions. While some actors, such as the European Union, present brief statements, others include long discussions of responsibility or vulnerability.
Because of the varying lengths of the NDCs, it is difficult to weigh the significance of different ideas or sections of text, but in general we observe that while all countries were asked to include explicit mitigation goals, CVF nations prioritized discourse around their vulnerability, adaptation needs, and financial readiness. Among the emerging economies in the top emitters' sample, there is a substantial discourse regarding their willingness to contribute to reducing emissions despite minimal historical responsibility. Large historical emitters such as the European Union and the United States submitted relatively brief NDCs that present quantitative goals without much sectoral specificity or discussion of equity or climate finance. Despite their importance during negotiations, the NDCs include little on robust monitoring or accountability structures, perhaps because of the highly political and contested nature of monitoring and evaluation in the UNFCCC negotiations. Nations agreed on the NDC process because these documents are non-binding and flexible, but this, in turn, fosters limited action, measurement, and accountability. The flexibility of these socio-political documents is reflected in the vague discursive positioning of many nations' NDCs on issues such as providing climate finance or monitoring mitigation efforts in the land use sector. While some nations are more specific about how they will reach their emission reduction goals, most are not. Nations largely signal that increased renewables and land use change will enable emission reduction goals to be attained, but do not acknowledge the challenges and uncertainty of forest protection. Some NDCs seek to use market mechanisms, including REDD+ and carbon trading, to achieve their reductions, whereas others do not discuss trading or markets at all. Few discuss the role of the private sector or address gender and human rights, despite their inclusion in the Paris Agreement. Although many NDCs identify possible policies, they do not address the challenges and timing of implementation. To conclude, our discourse analysis of the NDCs coupled with the review of existing literature highlights many critical fault lines and silences in the NDC process that are central to understanding the current landscape of global climate change (in)action. The different national perspectives and concerns we find in the review of the NDCs' material commitments and discursive positioning contributed to the deferral of many challenging NDC decision points in the Katowice Rulebook approved at COP24 in 2018. For example, rule setting for the NDCs was deferred without a firm deadline, and a decision on Article 6 regarding market mechanisms was postponed to COP25. Despite these deferrals, it was agreed that in the second NDC iteration, countries will need to: (a) include reference/base year or period information for their mitigation targets, (b) clarify whether the emissions reduction targets are single- or multi-year, (c) include the mitigation co-benefits of proposed adaptation actions, and (d) explain why the NDC is fair and ambitious. Additionally, NDC accounting will be mandatory in the next iteration via a biennial transparency report to avoid double counting. These newly agreed upon provisions will add rigor to the NDC process but do not address many of the problems identified in the literature and in our analysis.
As other scholars have noted, the Paris Agreement with its national commitments to climate action was an important landmark in global efforts to address climate change. We agree that despite some opposition to emission cuts by certain countries, including the planned withdrawal of the United States and Brazil from the Paris Agreement, the NDCs remain an important guidepost in an ongoing process of global cooperation on climate change. However, our analysis illustrates that NDCs should be read as important statements, not only on material action, but on the discursive positioning of countries in global climate policy debates. | APPENDIX 1: DISCOURSE ANALYSIS METHODOLOGY We conducted a review of the NDCs in order to analyze the discourses employed by the founding nations of the CVF (n = 19) as well as the top national greenhouse gas emitters (n = 10). To begin, we downloaded the sample of 29 NDCs in October 2016 from the UNFCCC NDC database. All NDCs were then uploaded into the NVivo 11 qualitative data analysis software (QSR International) for analysis. To initiate our discourse analysis, we identified a priori analytical categories based on our initial review of the current literature on NDCs. In particular, we drew on many of the categories identified by the scholars behind the NDC Explorer (Pauw et al., 2018). We aggregated and condensed the analytical categories identified by the NDC Explorer into the following: vulnerability, agriculture, water, renewable energy, health, ecosystems, market mechanisms, land use, forestry, monitoring and review, conditional mitigation targets, unconditional mitigation targets, costs of adaptation, adaptation finance, adaptation actions, mitigation finance, technology transfer, capacity building, section on fairness, responsibility, gender, human rights, and REDD. After a preliminary analysis of the 29 NDC documents, we refined the analytical codes based on the preliminary results. We added secondary analytical categories, such as use of science, non-state actors, embodied emissions, and carbon neutrality, based on our initial findings. We also aggregated particular themes, creating umbrella categories for adaptation, funding mechanisms, and vulnerability. We then recoded the NDC documents to account for these refined analytical categories. Following this secondary analysis, we dropped the sectoral analytical categories that did not yield relevant discursive findings, such as water and health; these categories were dropped because they were covered in a highly varied and dispersed manner within the NDCs. We then recoded the NDCs a third and final time, focusing on analytical codes that were largely absent in the NDCs to ensure that these codes were not missed in earlier analyses. These analytical categories include gender, use of science, human rights, and embodied emissions. All coding was done in the NVivo 11 qualitative data analysis software. The PDFs of the case study NDCs were coded by both authors. Passages relating to an analytical category were highlighted and saved in a common folder with all passages related to that analytical category from all the analyzed NDCs. We then analyzed each analytical category by country, identifying converging and diverging discourses within each category. This information was compiled in a Microsoft Excel database, and this database was used to identify the most prominent discourses.
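The final compilation step, turning coded passages into a country-by-category overview analogous to the Excel database described above, can likewise be sketched in Python. The toy coded dictionary and the use of pandas are assumptions for illustration; the study itself compiled and compared passages manually.

import pandas as pd

# Toy stand-in for the manually coded passages (country -> category -> passages).
coded = {
    "Kiribati": {"vulnerability": ["...sea level rise poses a fundamental risk..."],
                 "conditionality": ["All commitments are premised on..."]},
    "India": {"responsibility": ["...keeping with historical responsibility..."]},
    "Costa Rica": {"market mechanisms": ["...including using market mechanisms..."]},
}

rows = [{"country": c, "category": cat, "n_passages": len(ps)}
        for c, cats in coded.items() for cat, ps in cats.items()]
matrix = (pd.DataFrame(rows)
          .pivot_table(index="country", columns="category",
                       values="n_passages", fill_value=0))
print(matrix)  # zeros expose the kinds of "silences" discussed in the analysis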
INVESTIGATION OF CROSS-CULTURAL DISTINCTIONS' EFFECT ON CONSUMERS' BEHAVIOR IN THE HIGHER EDUCATION MARKET The article notes that investigations of the effect of cross-cultural distinctions on consumer behaviour are a challenging and popular topic in the international scientific and business community and are interdisciplinary in character. The objective of this investigation is to develop a methodological approach to assessing the impact of cross-cultural features on consumers' behaviour in the market for higher educational services. A culture model disclosing the list of cultural values and the characteristics of the material and institutional environment, adapted to the market for university educational services, was developed during this investigation. A method is suggested for creating a contingency matrix of the elements forming the culture model of a certain country's consumers and the behavioural features of consumers (students) in the market for higher educational services, grouped against the 7P complex (Product, Price, Place, Promotion, People, Process, Physical evidence). The developed methodical approach was tested on Chinese and Russian students. The investigation outcomes can be used to develop measures to improve the international competitiveness of universities. Introduction Economic development today should be based on improving the conditions for the reproduction of human capital. This should occur simultaneously with the modernization of education as the basic condition for the qualitative development of human resources needed for proportional economic development. The current state of universities necessitates solving a number of problems to improve their effectiveness and competitiveness. Globalization erodes the boundaries between countries without sacrificing their cultural features, making cross-cultural investigations of consumer behaviour in different goods and services markets increasingly topical. Geographical borders are crossed with less risk than cultural borders, because national cultural values are more rigid and resistant to change than the technologies and other factors affecting consumer behaviour. Culture, in the current context, influences all stages of a consumer's decision to purchase a good or a service, including problem recognition, information search, evaluation of options, and others. The consumer's choice thus depends on the characteristics of their cultural environment and on the processes of cross-cultural interaction. The same holds for the higher educational services market. Increasing competition among countries creates the need to search for new sources of improvement in university competitiveness. Consideration of the cultural characteristics of educational service consumers' behaviour may become one of the factors improving the international competitiveness of universities. The objective of this investigation is to develop a methodological approach and tools for investigating the influence of cultural characteristics on consumers' behaviour in the higher educational services market.
The papers of the above-mentioned scientists address the specific interrelations between culture and personality, the methods developed for studying the character of these interrelations, and investigations of cultural influence on consumer behaviour. At present the empirical approach dominates cross-cultural research: most theoretical conclusions are based on field surveys. The main problem of cross-cultural research is the lack of universal methodological approaches to investigating the effect of cross-cultural features on consumer behaviour in different countries. Current measurement methods are products of a particular culture and reflect that culture's specificity, which makes them difficult to adapt to the context of other cultures. All of this demonstrates the timeliness of the investigation topic and how insufficiently it has been studied. Methodology The methodical tools developed for investigating the influence of cross-cultural distinctions on customer behaviour in the service market start with determining the logic of the investigation, forming its main hypotheses, and developing and validating the survey questionnaire. The final stage is the evaluation of the suggested tools on Chinese and Russian students. Challenge problem: The problem to be investigated is to find the gaps between customers' behaviour in different cultures (Russian and Chinese) in the higher educational services market. The problem to be solved is to adapt the university marketing mix (Product, Price, Place, Promotion, People, Process, Physical evidence) to the behavioural features of consumers from a certain culture when admitting and training students of different nationalities, which is a pacing factor for the international competitiveness of the university. Determination of the investigation purposes and objectives: The purpose of the investigation is to study the influence of the culture model on consumers' behaviour in the higher educational services markets of Russia and China. According to this purpose, the following objectives are set in this investigation project: a) To develop a culture model considering the features of the higher educational services market; b) To determine quantitatively the effect of the culture model elements on the parameters of the consumer behaviour model in the higher educational services market. Formation of the investigation's main hypotheses. Hypothesis 1: the culture model is formed under the influence of the cultural value system and the elements of the institutional and material environment. Hypothesis 2: the behavioural features of consumers in the higher educational services market depend upon the culture model of the certain country.
4. The investigation methods: desk and field investigation using quantitative and qualitative data collection methods. The main directions of the investigation are the following: • culture values investigation (Solomon, 2012): terminal values (an active, exciting life; life wisdom; health; job; the beauty of nature and art; love; material well-being; friends; public recognition; intellectual development; a productive life; physical and intellectual improvement; entertainment; freedom; family; the happiness of others; art; self-confidence); instrumental values (punctuality; politeness; high standards in life; sense of humor; discipline; self-consistency; an uncompromising attitude to oneself and to others; education; sense of responsibility; rationality; self-control; courage to persist in one's opinion; solid will; tolerance; honesty; understanding of others' opinions; diligence; delicacy); • institutional environment investigation: the level of influence of the global (regional) geopolitical situation on educational institution preferences; the level of diplomatic relations between the countries; the level of political stability in the country; the level of social infrastructure development, influencing the population's quality of life; the level of government educational regulation; the level of influence of the country's population's beliefs on the supply of educational services; the level of influence of religious restrictions on the consumption of educational services; • material environment investigation: the level of technological and scientific development in the country; the availability of education institutions; the level of use of modern technologies and equipment at educational institutions; the geographical situation of the education institutions; the level of the country's economic development; • investigation of customers' behavioural features in the market of higher educational services against the 7P complex (Product, Price, Place, Promotion, People, Process, Physical evidence). The questionnaire items on the marketing mix components were divided into 7 main blocks, and the items for each component were divided into sub-items to capture the respondents' attitudes more precisely. Block 1. (Product): Educational program variety: -According to the proposed education degree (pre-higher education, bachelor, master, post-graduate); -According to the training areas and profile; -Joint dual-degree programs; -Programs for professional retraining. Block 3. (Place): -Realization of education programs by franchising; -Existence of special agreements with the admitting higher education institutions, which admit students and render them additional services such as training programs for accessing a higher education institution, English language courses, and training for the qualification examinations; -The university location. Block 4. (Promotion): promotion of the university's educational programs and information about the rendered services, their quality, and faculty qualifications, including: -Conducting open Information Weekends; -The availability of the university site; -Conducting career days; -Issuing newspaper and magazine publications and brochures. Block 5. (People): Academic staff, including: -Average age of the academic staff; -The level of the academic staff's qualifications; -The university technical personnel. Block 7. (Physical evidence): The level of development of the higher education institution's social infrastructure, including: -Dormitory availability; -Hotel availability; -Availability of canteens in every academic building; -Canteen availability in every dormitory; -Outlets for selling food products (beverages, pies, etc.) in every academic building;
-Outlets for selling food products (beverages, pies, etc.) in every dormitory; -Availability of the higher education institution's health and recreation resort; -First-aid post availability; -Copy shop availability; -Outlets for selling stationery goods. The level of development of the material and technical facilities providing for the academic process, including: -Availability of stadiums (volleyball, football, basketball, tennis courts); -Sports hall availability; -Number of classrooms (lecture halls), specialized classrooms, laboratories; -Size of classrooms (lecture halls), specialized classrooms, laboratories; -Library availability; -Free Wi-Fi availability in the academic buildings. 5. The sources of secondary information. To develop the tools for the field investigation, a content analysis of secondary information on the studied issue should be carried out. Printed and electronic business and specialized publications, professional books, internet resources, and analytical review articles in the press are recommended as sources of secondary information. 6. Raw information collection. The results of the questionnaire survey of Russian and Chinese students are the sources of raw information. The purpose of the questionnaire survey is to obtain quantitative estimates to determine how consumers' behaviour in the higher educational services market depends on the culture model. 7. Determination of the sampled population. The sampled population for the questionnaire survey included 520 persons. When forming the sample, age, education, and nationality were considered. 8. Data analysis. The questionnaire survey results were processed with statistical methods, creating a contingency matrix of the cultural elements and the consumer behaviour parameters. Results The educational services market in the Asia-Pacific region is developing rapidly. Globalization and economic integration are changing the current concept of the education system. Students can choose the level of education and the place and ways of obtaining it. The rapidly developing education markets of the PRC, Singapore, and Malaysia have already eroded the competitive advantage of the traditional markets of the USA, Great Britain, and Australia. Today education notably permeates the economic life of society, and educational activity is becoming the most important component of a country's economic development. Some definite trends have developed in the educational services market of the Asia-Pacific region: -An absolute increase in the number of students; -Global internationalization and openness of education (the export of educational services has become one of the most promising directions of foreign economic relations in the Asia-Pacific countries over recent decades); -The increasingly mass nature of higher education (in the XXI century higher education is becoming a key and fundamental component of the sustainable development of the human community); -The rapid development of East Asian countries in the field of education; -The growth of informational transformation (the creation of global information networks has practically effaced the boundaries between states in the flow of educational information, confronting education with the accomplished fact that not only educational institutions but also global information resources have become sources for obtaining new knowledge and educational information); -Continuous education; -The increasing role of the English language in the education system; -Higher education diversification and internationalization
(diversification relates to the establishment of new educational institutions, the introduction of new education directions and new disciplines, and the arrangement of interdisciplinary programs; internationalization is aimed at the rapprochement of national systems, defining and developing in them common universal concepts and components, those foundations underlying the variety of national cultures, contributing to their mutual enrichment and stimulating them to achieve high standards). Educational services have a number of unique properties, relating to consumer involvement in the production process, perishability, and the intangibility of the obtained information and its quality, which create complex problems in promoting and introducing them to the service market. These difficulties are faced not only by the service producers but also by the consumers, who must choose the proper educational institution and education program to satisfy their requirements for obtaining knowledge and an appropriate education. For this reason, educational institutions should research the educational market and the cross-cultural features of these services' consumers in order to stay competitive. Culture model building. The culture matrix of J. Mowen (1995), adjusted to the higher educational services market, was used in developing the culture model. The elements making up the culture model were represented as the cultural environment (terminal and instrumental values) and the material and institutional environment. The culture elements were evaluated by the respondents on the Likert scale, where 1 means strongly disagree, 2 means disagree, 3 means neither agree nor disagree, 4 means agree, and 5 means strongly agree. A questionnaire survey of Chinese and Russian students was conducted on the basis of the developed methodical instruments. Its results allowed determining the significance of the terminal and instrumental values for the respondents (Figs. 1, 2). Terminal values are values which cannot be explained by other, more common or more important values; such values usually include love, happiness, wisdom, and others. Personal traits supporting a person in life are usually considered instrumental values; they include politeness, responsiveness, diligence, and others (Solomon, 2012). According to the assessments of the Russian and Chinese students, satisfaction with the elements of the material environment is characterized by greater gaps than satisfaction with the cultural values (Figure 3). It should be noted that the Russian students' satisfaction with the elements of the material environment is significantly lower than that of the Chinese students, which influences the formation of the culture model, which, in its turn, shapes consumers' behaviour in the higher educational services market.
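To make the "gap" comparisons in Figures 1-4 concrete, the following Python sketch computes average Likert ratings per value for each student group and the absolute difference between the groups. The ratings are made-up toy numbers, not the study's survey data.

import statistics

# value -> {group: list of Likert responses on the 1-5 scale} (toy data)
ratings = {
    "health":        {"russian": [5, 5, 4, 5], "chinese": [5, 4, 4, 5]},
    "entertainment": {"russian": [4, 5, 4, 4], "chinese": [2, 3, 2, 3]},
    "family":        {"russian": [5, 4, 5, 5], "chinese": [5, 5, 5, 4]},
}

for value, groups in ratings.items():
    means = {g: statistics.mean(r) for g, r in groups.items()}
    gap = abs(means["russian"] - means["chinese"])
    print(f"{value:14s} RU={means['russian']:.2f} "
          f"CN={means['chinese']:.2f} gap={gap:.2f}")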
The respondents' estimates of the institutional environment show that the Chinese students are largely satisfied with the institutional environment elements. The greatest gaps in the respondents' answers are found on such indicators as the population's standard and quality of life, the level of political stability in the country, and the level of diplomatic relations between the countries. The students' estimates coincide on the indicator of the global (regional) geopolitical situation's influence on the choice of educational institution, which is especially important when choosing an educational institution abroad. The hypothesis that the culture model is formed under the influence of the cultural value system and the institutional and material environment was fully confirmed. To confirm the second hypothesis, the authors, based on the questionnaire survey, applied a statistical analysis method, developing the contingency matrix of the elements forming the culture model and the students' behavioural characteristics in the higher educational services market, grouped against the marketing mix 7P (Product, Price, Place, Promotion, People, Process, Physical evidence) (Tables 1, 2). The analysis of the contingency matrix of Russian students' behavioural characteristics and the culture model elements in the higher educational services market demonstrates a high level of influence of the culture elements on consumers' behaviour in this market (the values of compliance of the marketing mix element indicators with the culture model elements vary in the range of 3.8-4.8 on the five-point scale). However, the extent of the culture model elements' influence on consumer behaviour differs with regard to the choice of the product (educational service), its cost, its promotion, etc. Summarizing the investigation results, it was found that the culture model's influence on the choice of the educational level (pre-higher education, bachelor, master, post-graduate programs), major and profile, joint double degree programs, and professional retraining programs was demonstrated to an average extent (the values of compliance of the marketing mix element indicators with the culture model elements vary in the range of 4.1-4.3 on the five-point scale). The same pattern was found when analysing the culture model's influence on the Russian students' attitude to the choice of education costs, the availability of educational franchising programs, special agreements with the admitting institution, the university location, and the availability of the university's social infrastructure, including dormitories, hotels, canteens, food sales outlets, university first-aid posts, copy centres, etc. It should be noted that the culture model elements have a special effect on the students' preferences in choosing the communication options (open Information Weekends, the university site, career days, newspaper and magazine publications, advertising materials, etc.), which are affected by cultural values to a greater extent (4.8 points out of 5).
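The structure of such a contingency (compliance) matrix can be illustrated as follows: mean five-point scores for how strongly each culture model element relates to each 7P block, averaged over respondents. The element and block labels follow the paper, but the scores are randomly generated toy data, not the authors' survey results.

import numpy as np
import pandas as pd

elements = ["terminal values", "instrumental values",
            "institutional environment", "material environment"]
blocks = ["Product", "Price", "Place", "Promotion",
          "People", "Process", "Physical evidence"]

rng = np.random.default_rng(0)
# respondents x elements x blocks, scores on the 1-5 scale (toy data)
scores = rng.integers(3, 6, size=(520, len(elements), len(blocks)))

matrix = pd.DataFrame(scores.mean(axis=0), index=elements, columns=blocks)
print(matrix.round(2))  # values toward 4.8 indicate a strong influence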
The culture model's influence on the Chinese students' behaviour in the higher educational services market is slightly different (Table 2). A high level of development of the institutional and material environment, their interrelations, and their influence on consumers' behaviour in the higher educational services market can be traced in the choice of the educational program and its cost, and in the students' requirements for the organization of the academic process and for the condition of the university's social infrastructure. The institutional and material environments, in their turn, influence the formation of cultural values for the representatives of a certain nation. The culture model is more influential on the Chinese students' preferences regarding cost and university location and on their response to certain promotional tools. Conclusion The outcomes of the investigation are the following: 1. The methodological approach and the tools for analysing the effect of cross-cultural distinctions on students' attitudes to the offered higher education services in terms of the 7P (Product, Price, Place, Promotion, People, Process, Physical evidence) were developed. 2. The following factors affecting the formation and building of the culture models were determined: -The list of cultural values (terminal and instrumental); -Institutional environment elements (the level of influence of the global (regional) geopolitical situation on educational institution preference; the level of diplomatic relations between the countries; the level of political stability in the country; the level of social infrastructure development, providing for the population's quality of life; the level of government education regulation; the level of influence of the country's population's beliefs on the supply of educational services; the level of influence of religious restrictions on the consumption of educational services); -Material environment elements (the level of technological and scientific development in the country; the availability of education institutions; the level of use of modern technologies and equipment at the education institutions; the geographical situation of the education institutions; the level of the country's economic development). The culture models for the analysed countries, adapted to the educational services markets of certain Asia-Pacific countries, were built on the basis of the determined factors. The gaps between the Russian and Chinese culture models were defined during the cross-cultural analysis. The analysis of the factors influencing culture model formation confirmed the hypothesis that the cultural value system and the elements of the institutional and material environment are the most significant factors.
3. The methodological approach and the tools for estimating the influence of the culture elements on students' attitudes to the choice of the educational services offered by universities in terms of the 7P (Product, Price, Place, Promotion, People, Process, Physical evidence) were suggested. The cross-cultural gaps between the Russian and Chinese students' behaviour in the higher educational services market, which should be considered by universities to improve their international competitiveness, were determined on this basis. The strong influence of cultural values on students' responses to the different promotion mix elements used by universities was identified. Unlike those of the Russian students, the Chinese students' cultural features are to a great extent formed under the influence of the institutional and material environment elements, which, in their turn, influence changes in their cultural value system. Future investigations of these issues can aim to confirm the hypotheses suggested in the paper through extended empirical investigations covering a greater number of both Asian and European countries. A complex approach is needed, including both quantitative and qualitative investigation methods (focus groups, in-depth interviews with representatives of the different cultures of the target group, and others). Figure 1. The distribution of the respondents' average estimates of the terminal values on the Likert scale (author's development). Russian students indicated health, an active life, family, and self-confidence as the most important terminal values; Chinese students indicated family, health, friends, a productive life, and self-confidence. The analysis outcomes showed that the most notable gaps in the significance of the terminal values concerned entertainment and public recognition. It should be noted that the age peculiarities of the investigated group (students) affected the value system to a greater extent than the cultural features did. The instrumental value characteristics are shown in Figure 2. Figure 2. The distribution of the respondents' average estimates of the instrumental values on the Likert scale (author's development). Figure 3. The distribution of the respondents' average estimates of satisfaction with the material environment characteristics on the Likert scale (author's development). Figure 4. The distribution of the respondents' average estimates of satisfaction with the institutional environment characteristics on the Likert scale (author's development). Table 1. Matrix of Russian students' behavioural characteristics (against Product, Price, Place, Promotion, People, Process, Physical evidence) compliance with the culture model elements in the higher educational services market (authors' development). Table 2. Matrix of Chinese students' behavioural characteristics (against Product, Price, Place, Promotion, People, Process, Physical evidence) compliance with the culture model elements in the higher educational services market (authors' development).
Genetic heterogeneity and clonal evolution during metastasis in breast cancer patient-derived tumor xenograft models Introduction Patient-derived tumor xenograft (PDX) models, in which tumor cells from a human patient are implanted into an immunocompromised mouse, can resemble the original patient tumor in many ways (reviewed in [1,2]). PDX models' similarities to patient tumors make them uniquely well-suited for studying phenomena like metastasis and intra-tumor clonal heterogeneity. Metastasis is also an evolutionary process in which one or more clones from a primary tumor seed a new tumor at a distant site [27]. How metastasis affects genetic heterogeneity depends on whether a distant tumor was seeded by one or several clones. When a clone from a heterogeneous primary tumor seeds a distant metastasis, heterogeneity can decrease due to a so-called 'population bottleneck'. Metastases that are seeded by multiple clones can instead result in unchanged or increased heterogeneity, depending on the clonal composition of the metastasis sample. Heterogeneity can also increase in an initially clonal metastasis given enough time for new mutations to accumulate. Most studies of patients have shown reduced heterogeneity or monoclonality in metastases, for example in breast, renal, and ovarian cancer [28][29][30]. Nevertheless, metastases may also be seeded by more than one clone [30][31][32], and increased heterogeneity has been observed in rare cases, for example in metastases from small intestine neuroendocrine tumors [33]. Here, the genetic heterogeneity of two PDX models of triple negative breast cancer (TNBC) and their metastases was evaluated. Two commonly used methods to generate PDX metastases in mice were employed [48,49]: 1) PDX fragments implanted in the mammary fat pad were allowed to develop metastases spontaneously; and 2) suspensions of PDX cells were injected in the tail vein to seed experimental metastases. We found that the level of heterogeneity in PDX tumors depends on whether metastases were generated by the spontaneous or experimental metastasis method and is sensitive to the amount of mouse stromal cells in the tumor. After controlling for these factors, we observed a loss of heterogeneity in PDX metastases compared to their orthotopic 'primary' tumors, consistent with a population bottleneck. Methods More detailed methods are available in the Supplementary Methods and an overview of the study design is provided in Fig. 1. Two established and characterized TNBC primary tumor models were selected: B1 (1004-HBRX) and B2 (1921-HBRX) [41]. Model B1 was obtained from a Grade III primary TNBC with lymph node metastases and B2 was obtained from a Grade III primary TNBC with no evidence of metastases. We used two common approaches to obtain either 'spontaneous' or 'experimental' metastases (Fig. 1A, Fig. S1, Supplementary Methods). Fig. 1. (A) Two common approaches were used to obtain either spontaneous (top) or experimental (bottom) metastases. For spontaneous metastasis (top), a fragment of the patient-derived tumor xenograft was implanted in the mammary fat pad of a mouse to generate the 'primary orthotopic' tumor. This primary tumor was removed and divided into five fragments for sequencing. Spontaneous metastases are any metastases that subsequently arose in the mouse. Experimental metastases (bottom) are any metastases that arose in a mouse after the tail vein injection of a patient-derived tumor xenograft cell solution.
(B) Metastases were obtained from two PDX models of breast cancer (B1 and B2) using the spontaneous and experimental approaches with 10 mice per condition. The primary tumors from the mice in the spontaneous metastasis experiments and all metastases were collected. Large tumors were dissected into several pieces and sequenced. To collect spontaneous metastases, a PDX tumor was dissected into fragments that were implanted orthotopically into the mammary fat pad of 10 untreated NOG mice (NOD.Cg-Prkdcscid il2rgtm1Sug/JicTac, Taconic). The resulting primary tumors were resected and the mice were monitored for development of spontaneous metastases (Fig. 2A, Fig. S2A). To generate experimental metastases, a mixture of dissociated cells from five (B1) or seven (B2) PDX tumors was injected into the tail veins of NOG mice that were monitored for development of experimental metastases. In total, 10 NOG mice were monitored for metastasis per approach and model (Fig. 1B), which resulted in eight mice with spontaneous metastases and 14 mice with experimental metastases (three mice did not survive the implantation, Fig. 2B, Fig. S2B). Fig. 2 (caption excerpt): An empty box indicates that no metastasis was found in that mouse's organ. For example, in mouse B1 1, no metastases were found in its lungs or lymph nodes, but five samples from its primary tumor and one spontaneously arising metastasis from its liver were sequenced. Dashes indicate that a mouse did not survive the implantation procedure (e.g., B1 18, B1 19, B1 20). (C) Primary orthotopic tumors were collected from 20 mice (10 B1 and 10 B2), and each primary tumor was dissected into five pieces for sequencing, as depicted in a representative image from mouse B2 5. (D) Large metastases were also divided into smaller pieces for sequencing. The lungs often contained multiple metastases that were dissected into multiple pieces for sequencing, as shown here for mouse B2 19. (E) The mouse content for all sequenced PDX tumor samples for the orthotopic primary tumors (mammary gland) and the metastases for models B1 (blue) and B2 (red) was estimated from the number of sequencing reads that uniquely mapped to the mouse and human genomes. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.) The resected orthotopic primary tumors were dissected into five fragments and large metastases into several fragments for whole exome sequencing (Fig. 2C, D, and Supplementary Methods). We have matched primary tumors and metastases from the same mice for the spontaneous approach experiments. However, we do not have matched primary tumor and metastasis samples for the experimental approach because the experimental metastases originated from the cells injected in the tail vein. Generation of orthotopic primary tumors and metastases from TNBC PDX models We used two PDX models in this study: B1 and B2. For each model, one tumor was resected and divided into 10 fragments of approximately 5 mm³, which were implanted in the mammary fat pad of 10 immunocompromised NOG mice (one fragment per mouse). Each of these fragments generated a primary tumor, which was further resected and collected for sequencing. Mice were monitored after resection of the primary tumor for the development of spontaneous metastases.
Only model B1 generated spontaneous metastases (B1: 80%, 8 of 10; B2: 0%, 0 of 10), which were found most commonly in the liver but also in the lymph nodes (Fig. 2A). Experimental metastases were generated by injecting approximately 1 million cells of a cell suspension into the tail veins of 10 NOG mice per model. The cell suspensions were created by combining several resected tumors (B1: 5 tumors, B2: 7 tumors), dissociating the cells, and filtering for human cells. Most mice injected with cells from the B1 or B2 model developed experimental metastases (B1: 57%, 4 of 7; B2: 100%, 10 of 10), which were primarily found in the lungs (Fig. 2B). Assessment of tumor purity One common confounding factor for studying heterogeneity is tumor purity, because differences in tumor purity can cause biased heterogeneity estimates. Tumor purity decreases when the tumor samples also contain non-tumor cells, typically from the patient's surrounding normal tissue [50]. Bulk samples from PDX models can contain cells that originate from the mouse host, which can affect the calculation of heterogeneity. We assessed the purity of our samples bioinformatically by estimating their mouse content from whole exome sequencing data, and validated our approach on a subset of nine samples by comparing our bioinformatic estimates to those obtained from qPCR experiments (Fig. S3). While the bioinformatic approach found less mouse content than we measured experimentally, the estimates were well correlated between the two methods (Spearman's rank correlation ρ: 0.88, p = 0.003, n = 9), and we thus used the bioinformatic approach to estimate the mouse content in all sequenced samples. The mouse content in our samples differed depending on the tissue from which the sample was obtained (Fig. 2E). We found high levels of mouse content in the liver and lung metastases (median 39% liver and 27% lung; n = 10 and 30, respectively) and lower levels in the primary orthotopic tumors and lymph node metastases (median 3% mammary gland and 16% lymph nodes; n = 100 and 6, respectively). The difference in mouse content between the primary orthotopic tumors and most metastases is important because apparent biological differences between primary tumors and metastases could simply be artifacts of the systematic differences in mouse content.
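The bioinformatic purity estimate described above reduces to a simple ratio of uniquely mapped reads. A minimal sketch in Python, assuming read counts have already been obtained by aligning each sample against the human and mouse reference genomes (the counts below are toy numbers):

def mouse_content(unique_mouse_reads: int, unique_human_reads: int) -> float:
    """Fraction of uniquely mapped reads that map to the mouse genome."""
    total = unique_mouse_reads + unique_human_reads
    return unique_mouse_reads / total if total else 0.0

# sample -> (uniquely mouse-mapped reads, uniquely human-mapped reads)
samples = {
    "liver_metastasis": (3_900_000, 6_100_000),   # high stromal content
    "primary_orthotopic": (300_000, 9_700_000),   # low stromal content
}
for name, (m, h) in samples.items():
    print(f"{name}: {mouse_content(m, h):.1%} mouse")

In practice, such estimates should be validated against an orthogonal assay, as done here with qPCR on nine samples.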
Estimation of genetic heterogeneity Care must be taken when estimating the heterogeneity within a tumor because such an estimate can reflect experimental artifacts rather than biological variation [51]. Better heterogeneity estimates can be obtained by using standard variant callers to call somatic mutations and their allele frequencies from matched tumor-normal samples sequenced to high read depth [51]. PDX samples present two main challenges to the standard approach that we addressed in our analyses: matched normal samples are often not available, and PDX samples contain cells from the host (as shown in Fig. 2E). Fortunately, conducting PDX experiments afforded us a unique opportunity to address these challenges with PDX-derived tumor sequence data in two ways. First, we increased our ability to detect genomic positions with heterogeneity by collecting multiple biological replicates for each PDX model; we obtained 10 primary orthotopic tumors per model and sequenced five samples from each tumor (50 sequenced samples from primary tumors per model). We used these samples to identify heterogeneous genomic positions with at least 100-fold coverage in which an alternative allele is found in at least five samples. Second, we controlled for mouse content in our PDX samples by sequencing four host mouse control samples that we used to select human-specific and eliminate mouse-biased genomic regions. In doing so, we identified 4641 (model B1) and 587 (model B2) human-specific heterogeneous genomic positions distributed across the genome (Fig. 3A and B, Fig. S4). We used these positions to estimate the genetic heterogeneity of each sample as the average minor allele frequency of the heterogeneous genomic positions (Fig. S5, Supplementary Methods). Importantly, our method for estimating genetic heterogeneity is not correlated with mouse content, such that we have minimized the bias that can arise from increased mouse content (Fig. 3C, D, Spearman's rank correlation; B1: ρ = 0.013, p = 0.93; B2: ρ = 0.25, p = 0.08; n = 50 primary samples per model). Consistency of genetic heterogeneity in primary orthotopic tumors We were interested in determining the extent to which heterogeneity is consistent between replicated primary tumors of the same PDX model, and regionally within each primary orthotopic tumor. We first compared whether primary tumors from the same model have similar levels of genetic heterogeneity. Heterogeneity was relatively consistent between the 10 different B1 primary tumors, but much more variable between the 10 B2 primary tumors (Fig. 4A, top panel). We found only two B1 primary tumors with statistically different heterogeneity from the other B1 primary tumors (resected from mice B1 3 and B1 4), but most B2 primary tumors were different from each other (Mann-Whitney test for all primary tumor pairs). Next, we looked at the regional differences in genetic heterogeneity within each primary orthotopic tumor by measuring the heterogeneity of five spatially distinct subsamples. We observed a broad range of genetic heterogeneity within most B2 primary tumors, consistent with both regional variation and the high variability between B2 primary tumors. In contrast, we found few regional differences within B1 primary tumors, with a notable exception in the tumor resected from mouse B1 4. Strikingly, there appear to be two subclones within primary tumor B1 4, and the heterogeneity of those subclones is consistent with the two levels of heterogeneity found across all B1 primary tumors. Specifically, two regional samples within the B1 4 primary tumor had higher levels of heterogeneity similar to those found in the primary tumor from mouse B1 3, while the other three regional samples had lower levels similar to those found in the primary tumors of the other B1 mice (Fig. 4A, top left panel). This pattern is consistent with the transfer of regional genetic heterogeneity from the expanded precursor B1 tumor to the mammary fat pads of mice B1 1 - B1 10. Indeed, spatial heterogeneity in the originating tissue has been previously observed to have such an effect in one PDX model of colon cancer [52].
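The heterogeneity score introduced above, the average minor allele frequency over positions passing the coverage and recurrence filters, can be expressed compactly. This sketch uses toy count arrays and omits the additional restriction to human-specific, mouse-unbiased positions applied in the actual analysis:

import numpy as np

# depth[s, p]: total reads for sample s at position p; alt[s, p]: alt-allele reads
depth = np.array([[120, 150, 90], [200, 110, 400], [130, 105, 250],
                  [180, 140, 300], [160, 120, 220], [140, 130, 210]])
alt = np.array([[30, 0, 10], [60, 5, 100], [20, 4, 60],
                [50, 3, 90], [40, 6, 70], [35, 2, 55]])

covered = depth >= 100                       # at least 100-fold coverage
has_alt = (alt > 0) & covered
keep = has_alt.sum(axis=0) >= 5              # alt allele seen in >= 5 samples

vaf = np.where(covered, alt / depth, np.nan)  # allele frequency, NaN if shallow
minor = np.minimum(vaf, 1 - vaf)              # minor allele frequency
heterogeneity = np.nanmean(minor[:, keep], axis=1)  # one score per sample
print(heterogeneity.round(4))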
Consistency of genetic heterogeneity in primary orthotopic tumors

We were interested in determining the extent to which heterogeneity is consistent between replicated primary tumors of the same PDX model, and regionally within each primary orthotopic tumor. We first compared whether primary tumors from the same model have similar levels of genetic heterogeneity. Heterogeneity was relatively consistent between the 10 different B1 primary tumors, but much more variable between the 10 B2 primary tumors (Fig. 4A, top panel). We found only two B1 primary tumors with statistically different heterogeneity from the other B1 primary tumors (resected from mice B1-3 and B1-4), but most B2 primary tumors were different from each other (Mann-Whitney test for all primary tumor pairs). Next, we looked at the regional differences in genetic heterogeneity within each primary orthotopic tumor by measuring the heterogeneity of five spatially distinct subsamples. We observed a broad range of genetic heterogeneity within most B2 primary tumors, consistent with both regional variation and the high variability between B2 primary tumors. In contrast, we found few regional differences within B1 primary tumors, with a notable exception in the tumor resected from mouse B1-4. Strikingly, there appear to be two subclones within primary tumor B1-4, and the heterogeneity of those subclones is consistent with the two levels of heterogeneity found across all B1 primary tumors. Specifically, two regional samples within the B1-4 primary tumor had higher levels of heterogeneity similar to those found in the primary tumor from mouse B1-3, while the other three regional samples had lower levels similar to those found in the primary tumors of the other B1 mice (Fig. 4A, top left panel). This pattern is consistent with the transfer of regional genetic heterogeneity from the expanded precursor B1 tumor to the mammary fat pads of mice B1-1 to B1-10. Indeed, spatial heterogeneity in the originating tissue has been previously observed to have such an effect in one PDX model of colon cancer [52].

Changes in genetic heterogeneity during metastasis

To understand the effect of metastasis on genetic heterogeneity, we next compared the primary tumor and metastasis samples. Because we found inherent differences in heterogeneity between primary orthotopic tumors (Fig. 4A, top panels), we focused on comparing the heterogeneity between pairs of primary tumors and the spontaneous metastases they formed (Fig. 4B; B1: 6 mice with a primary tumor and spontaneous metastases, B2: 0 mice). We found a significant decrease in heterogeneity in the B1 spontaneous metastasis samples (linear mixed effects analysis, χ²(1) = 6.2, p = 0.013, n = 38; primary: 0.0386 ± 0.0003, metastasis: 0.0379 ± 0.0004, heterogeneity ± standard error, Supplementary Methods), which is consistent with previous observations in patients [28-30]. Thus, orthotopic implantation of PDX tissue to generate spontaneous metastases can provide a biologically consistent model of changing heterogeneity during metastasis. As described above, model B2 only forms metastases by tail vein injection, not spontaneous metastases (Fig. 2A, B). We wondered whether experimental metastases could also be used to model heterogeneity. We compared the experimental metastasis samples to the primary tumors using a Mann-Whitney test for unpaired data (Fig. 4A, Supplementary Methods), and found that they either showed no difference (B1, Mann-Whitney, p = 0.95) or that they unexpectedly increased (B2, Mann-Whitney, p = 0.010). Thus, in the models described herein, experimental metastases obtained by tail vein injection do not provide a biologically consistent model of reduced heterogeneity during metastasis.

Discussion

We present here the first study of how genetic heterogeneity changes during metastasis in two patient-derived tumor xenograft models of triple-negative breast cancer. To this end, we generated metastases using two approaches. In the first, we obtained spontaneously arising metastases and matching primary tumor samples by orthotopically implanting patient-derived tumor samples into the mammary fat pads of 10 mice. In the second approach, we obtained experimentally generated metastases that arose after injecting cells from patient-derived tumor samples into the tail veins of 10 mice. We isolated the DNA from these samples, sequenced their exomes, and computed the genetic heterogeneity of each sample. From our data, we concluded that spontaneously arising metastases are a more realistic method to study changes in heterogeneity during metastasis than experimental metastases obtained from tail vein injections. The presence of mouse host stroma in human xenograft tumors is required for tumor growth and may mimic some aspects of the human tumor microenvironment, despite its reduced immune context (discussed in [46,47,53]). Additionally, different tumor microenvironments can influence the clonal evolution of the tumor by exerting different selective pressures (for example [54,55]). However, our study focuses on the genetic heterogeneity of the patient-derived tumor itself. In this context, stromal cells can bias genetic heterogeneity estimates because the sequenced samples are a mixture of human tumor cells and mouse stromal cells. Regions of the mouse genome that are homologous with the human genome can falsely inflate heterogeneity measures from bulk sequenced samples of mixed human tumor and mouse cells. The potential scope of the problem is large, as 80% of mouse genes have an orthologue in the human genome [56], and between 0.9% and 97.5% of each of our PDX tumor samples consisted of mouse-specific DNA. A variety of experimental and computational approaches to handle mouse contamination have been developed [50,57], but they can still leave a small fraction of false positives from the host mouse [50].
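As an illustration of one published disambiguation strategy (not necessarily the pipeline used in this study), reads can be aligned separately to the human and mouse references and assigned to whichever genome yields the better alignment score. The sketch below assumes BAM files produced by an aligner that emits the AS (alignment score) tag, such as bwa-mem; the file names are hypothetical:

```python
import pysam

def best_scores(bam_path):
    """Best alignment score (AS tag) observed per read name in a BAM file."""
    scores = {}
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(until_eof=True):
            if read.is_unmapped or not read.has_tag("AS"):
                continue
            score = read.get_tag("AS")
            if score > scores.get(read.query_name, float("-inf")):
                scores[read.query_name] = score
    return scores

human = best_scores("sample.human.bam")  # hypothetical file paths
mouse = best_scores("sample.mouse.bam")
mouse_reads = {n for n, s in mouse.items() if s > human.get(n, float("-inf"))}
print(f"{len(mouse_reads)} of {len(human.keys() | mouse.keys())} reads assigned to mouse")
```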
Additionally, several methods to measure intra-tumor heterogeneity or reconstruct the subclonal phylogeny have been developed that can account for tumor cellularity from the surrounding normal tissue [13,58-62], but they do not explicitly address the issues faced with data from PDX models, and some measures may be insufficient to describe evolutionary dynamics [20]. We thus developed a bioinformatics approach to mitigate the bias in heterogeneity that can arise from contamination with the mouse by estimating each sample's heterogeneity using only unbiased genomic positions (Fig. 3A). There are two consequences of our approach to removing mouse bias. First, the remaining unbiased genomic positions are not suitable for identifying driver mutations: many functionally important regions (like cancer drivers) have high homology with the mouse genome, and so are removed from our analysis. Second, our heterogeneity scores can only be used to compare samples within a given model (e.g., all of the B1 model tumors) because different unbiased and heterogeneous sites are identified for each model. Thus, we cannot compare heterogeneity across models unless we explicitly set out to do so. Heterogeneity is a reflection of clonal evolution within a tumor, and it can differ within a tumor based on how the tumor evolved and grew [63]. Because of this, we wondered to what extent clonal evolution affected the heterogeneity of replicates from the same model. We used the heterogeneity score for each sample to evaluate to what extent heterogeneity is consistent between replicate tumors of the same model. Model B1 had relatively consistent heterogeneity both within and between primary tumors, though we did find that our samples formed two distinct groups with higher and lower heterogeneity. An intriguing explanation consistent with finding two distinct groups is that regional genetic heterogeneity from the expanded precursor B1 tumor was transferred to the mammary fat pads of mice B1-1 to B1-10. Indeed, regional differences in genetic variation are unsurprising [64]; for example, a previous observation showed that tumor cells originating from nearby regions within a colon cancer tumor were more similar to each other than those found further away [65]. On the other hand, model B2 showed less consistency both within and between primary tumors. Model B2's higher variability in heterogeneity could either be inherent to the patient's tumor itself, or arise because its heterogeneity score is derived from fewer genomic positions than that of model B1. Regardless, the variability between tumors for both models suggests that we should strive to evaluate changes in heterogeneity during metastasis relative to the primary tumor from the same mouse. In addition, we looked at the effect of metastasis on heterogeneity in our PDX models. Metastasis is an evolutionary process [27], and a successful metastasis requires that cells undergo several steps [66-69], each of which can generate a population bottleneck and reduce heterogeneity. Indeed, current evidence from patients and PDX models suggests that metastases are formed by a single clone or small cluster of cells from the primary tumor [22,28,70-72]. While we therefore expected genetic heterogeneity to decrease during metastasis, we did not know whether this would be the case in PDX models. The two approaches we used to generate metastases (spontaneous and experimental) present different selective pressures on the primary tumor cells.
Spontaneous metastases would need to escape from the primary tumor, and then both survive the trip to the distant organ and successfully colonize it [66-69]. Because experimental metastases are generated by tail vein injection of tumor cells, they would only need to survive and colonize after injection. However, these cells must also survive the experimental steps necessary for the injection, namely generating a cell solution by tumor cell dissociation and mouse cell depletion. Thus, it is possible that the same model may have a different propensity to metastasize spontaneously and experimentally. Indeed, we observed such differences in our two models: model B1 was relatively poor at forming experimental metastases, while model B2 was incapable of forming spontaneous metastases, consistent with the lack of metastases in the patient from which it was derived. Despite differences in selective pressures, a previous study showed no gene expression differences between metastases obtained by orthotopic implantation and tail vein injection of a breast cancer cell line (although they did show differences in morphology) [73]. The locations of the metastases can also depend on the model. Spontaneous metastases in previous studies were found in the mouse lung in one study [39], or the lymphatics, lung, and peritoneum in another [37], while the B1 model spontaneously metastasized to the liver and lymph nodes. Finally, we looked at how heterogeneity changed during spontaneous and experimental metastasis and found differences between the two methods. Heterogeneity decreased as expected during a population bottleneck in spontaneous metastases, but either showed no change or an unexpected increase in experimental metastases. The increased heterogeneity could arise from a combination of several factors. First, the sample itself could contain many small experimental metastases, as shown in Fig. 1D, where the lung was split into several pieces, each of which contains multiple experimental metastases. Second, there could be more heterogeneity in the cell solution injected into the mouse tail veins because it is composed of cells from multiple tumors. Third, individual cells injected into the tail vein could have a higher propensity to aggregate into a multi-clonal metastasis than cells from spontaneous metastases. Indeed, this explanation is consistent with recent observations in PDX breast cancer models of metastasis [32]. Measuring the original heterogeneity from the PDX tumors that were implanted in the mammary fat pad or injected into the tail vein could help to distinguish between these options; however, we unfortunately do not have these samples. Regardless, we found that the method for generating metastases is important and that spontaneous metastases show a reduction in heterogeneity consistent with a population bottleneck. Our study suggests that the method used to generate metastases is important when studying heterogeneity using PDX models. For the two PDX models we studied here, the expected changes in heterogeneity during metastasis were obtained by implanting PDX tissue in the mammary fat pads and waiting for spontaneous metastases. However, the spontaneous approach has drawbacks: not every PDX model is capable of generating metastases in this way (only one of our two PDX models worked), and it generates fewer metastases (between 0 and 2 per mouse).
When we generated experimental metastases by injecting patient-derived tumor cells into the tail veins of mice, both PDX models generated metastases, but showed unrealistically high levels of heterogeneity. We therefore recommend using spontaneously generated metastases for studies involving genetic heterogeneity and clonal evolution.

Conflict of interest

All authors were employed by Novartis Institutes for BioMedical Research while conducting research for this paper.
Medium-term results following arthroscopic reduction in walking-age children with developmental hip dysplasia after failed closed reduction

Background: Arthroscopic reduction has become increasingly popular as an alternative to open reduction for the treatment of developmental dysplasia of the hip (DDH). However, patient outcomes beyond one and a half years after surgery remain unclear. The purpose of this study is to report the medium-term outcomes of walking-age patients who received arthroscopic reduction after an unsuccessful closed reduction. This research was conducted as part of a retrospectively registered study.

Methods: We performed arthroscopic reduction in eight children with DDH after failed closed reduction between January 2010 and January 2012 and followed all cases for a minimum of 5 years. Arthroscopic reduction was performed using a two-portal approach without traction. Capsular release and resection of the transverse acetabular ligament were also performed if needed. Patient demographics, clinical variables, anatomical assessment measures, and post-operative complications were extracted from medical records.

Results: We treated five male and three female patients with an average age at operation of 15.6 months (range, 12 to 22 months). All obstacles to reduction were corrected arthroscopically. Concentric reduction of the hip joint was observed in post-operative X-rays in all cases. The average safe zone increased from 17.5° (8° to 30°) to 42.1° (36° to 50°) after the operation. The average acetabular (AC) index was reduced from 40.3° (33° to 65°) to 21.9° (19° to 26°) at the end of follow-up. No complications occurred, and no patients developed necrosis of the femoral head, recurrent dislocation, or residual hip dysplasia.

Conclusions: Arthroscopic reduction is a suitable surgical procedure for the treatment of DDH among walking-age children with failed closed reduction and severe dislocation. This method is quick and safe, and it can be performed without post-operative complications over the medium term.

Introduction

Developmental dysplasia of the hip (DDH) is a relatively common hip deformity among infants. Early detection and treatment of DDH are critical to avoid the risk of disability [1]. The application of a Pavlik harness or a spica cast during the first week of life can be effective in most cases. Among infants and young children with DDH, the success rate of early Pavlik harness treatment can be as high as 90% [2]. Unfortunately, many patients, especially those in developing countries, miss this early treatment window. Closed reduction at a later stage is associated with a higher failure rate and can lead to hip instability. Open reduction, sometimes combined with acetabuloplasty and femoral osteotomy, remains the standard treatment after failed closed reduction [3-6]. However, serious complications [7,8], including avascular necrosis of the femoral head, may occur following open reduction and negatively affect patient outcomes. Previous studies [9-11] have reported that the rate of necrosis can be as high as 69% if a medial approach is used and up to 30% if an anterior approach is used. In the search for a less invasive alternative to open reduction, arthroscopic reduction has been performed in several studies [12-15] to treat children with DDH. For example, McCarthy and MacEwen [13] reported the outcomes of three patients with hip dysplasia who received arthroscopic reduction 9 months after the procedure.
One patient developed residual dysplasia that required surgery. Eberhardt et al. [14] performed arthroscopic reduction on five very young infants and reported outcomes at a mean follow-up of 13.2 months. A later study by Eberhardt et al. [15] reported the experiences of nine walking-age children who received arthroscopic reduction and acetabuloplasty to treat dislocated hips, with a mean follow-up of 15.4 months. However, patient outcomes after a longer time period remain unclear. To fill this research gap, our study assessed the medium-term outcomes of walking-age patients who underwent arthroscopic reduction after an unsuccessful closed reduction.

Study participants

This was a prospective single-centre observational study. The study participants included eight children with DDH after failed closed reduction scheduled to undergo arthroscopic reduction between January 2010 and January 2012 at the Third Affiliated Hospital of Southern Medical University. Surgery indications included patients who underwent a failed closed reduction aged 12 to 24 months, with a magnetic resonance imaging (MRI) scan indicating the presence of intra-acetabular soft tissue or an inverted labrum. The study excluded children over 24 months of age and cases with hip infection (synovial fluid puncture), a comorbid condition (e.g., disease of the immune system), or a history of hip surgery (such as acetabuloplasty).

Medical procedure

All patients were recruited to participate in the study and received an arthroscopic reduction for the treatment of dislocated hips. All procedures were performed by the same surgeon, who has extensive experience in adult hip arthroscopic surgery. The procedure was performed under general anesthesia and in a supine position. Arthrography was conducted before the operation to assess the position of the femoral head in relation to other anatomical structures. Two portals without traction were used in all cases. A small pad was placed under the affected hemipelvis. Anatomical landmarks including the femoral artery, the femoral head, the anterior superior iliac spine, and the pubic symphysis were marked prior to incision. With the affected hip in a 90° flexed and 40°-60° abducted position, three Kirschner wires were positioned in parallel, spaced 0.5 cm apart, and directed inward and downward to the pubic symphysis. The wires were placed above, at the same level as, and below the femoral head. Fluoroscopy was used to guide the initial portal placement. To guide the trocar puncture and reduce X-ray radiation exposure in children, three Kirschner wires were used so that intraoperative fluoroscopy had to be performed only once, when the anterolateral portal was established. One of the three Kirschner wires can be used to position and direct the anterolateral portal's puncture; with this direction marked, the trocar used for the arthroscopic puncture can accurately enter the hip joint. After the anterolateral portal was marked, a spinal needle was inserted into the hip joint following the previously marked direction. After passing through the tough joint capsule, 20 mL of saline was injected. If the needle was successfully placed in the hip joint, the saline fluid was ejected from the needle after removing the syringe. Fluoroscopy was conducted to determine the depth of the spinal needle in the joint cavity. A mark was made on the arthroscope cannula to prevent articular cartilage damage caused by an excessively deep puncture into the joint cavity.
A vertical incision of 1 cm was made in the skin with hemostatic forceps, which were used for subcutaneous blunt dissection. An arthroscope puncture trocar was introduced into the hip joint to the depth as marked, and a characteristic "pop" could be felt when penetrating the joint cavity. The arthroscopic sheath was inserted into the joint capsule along the sheath core. In addition, an anterior portal was created where the perpendicular line of the anterior superior iliac spine and the horizontal line of the pubic symphysis met. Arthroscopy was performed using a 4.0-mm, 30° arthroscope. After introduction into the joint cavity, the arthroscope was turned laterally and then in the medial direction to examine the acetabular rim, ligamentum teres, and femoral head. An exploration was conducted to identify obstacles to reduction, including a hypertrophic ligamentum teres (Fig. 1a), fibrofatty or pulvinar tissues (Fig. 1b), and a hypertrophic acetabular labrum (Fig. 1c). The acetabular pulvinar tissue was removed using a shaver (Fig. 1d), and the hypertrophic acetabular labrum and the ligamentum teres were resected with an electrocautery probe (Fig. 1e). After these steps, the horseshoe-shaped articular surface and the acetabular fossa became visible. If a capsular constriction was present, a capsular release was performed with an electrocautery probe. Resection of the transverse acetabular ligament was performed if needed. The surgery lasted 50 ± 10 min in all cases. Proper positioning of the femoral head and the acetabulum was confirmed by X-ray following the arthroscopic reduction procedure (Fig. 1f). A spica cast was applied to retain the hip in a moderately flexed and abducted position for 12 weeks, followed by the application of a Pavlik harness to maintain the reduction fixation for 3 to 6 months. During the first year of follow-up, doctor appointments and X-ray examinations were arranged every month. During the second year and thereafter, doctor appointments and X-ray examinations were conducted once per year. The patient's gender, age at operation, affected side, previous treatments, and pre-operative Tönnis grade of dislocation were obtained. Anatomical measurements including the safe zone and the acetabular (AC) index were collected both before and after the operation. The AC index refers to the angle formed by Hilgenreiner's line and a line that extends along the acetabular roof. A normal AC index is less than 30°. A safe zone was used to assess the stability of the hip joint after arthroscopic reduction; it is defined as the range between the maximum hip abduction angle and the maximum hip adduction angle without dislocation. Safe zone determination was carried out with the hip flexed at 90° after hip joint reduction, followed by recording the full hip abduction angle and the thigh adduction angle at which the hip joint dislocated. A larger safe zone indicates better stability of the hip joint and vice versa. The ideal safe zone ranges from 30° to 65° [16]. X-rays were obtained at different time points during the follow-up in all cases to monitor the reduction of the hip joint. The post-operative complications [17] included in the analysis were residual hip dysplasia, subluxation or repeated dislocation of the hip, and avascular necrosis of the femoral head.

Statistical analysis

Data were collected in a Microsoft Excel workbook. Descriptive statistics such as the mean, standard deviation, count, and percent are reported. Paired two-sample Student's t tests were performed using SPSS 22.0 software (IBM, Armonk, NY, USA) to test the statistical significance of the changes in anatomical assessments before and after the operation.
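As a minimal sketch of this paired pre-/post-operative comparison (the study used SPSS; the angle values below are hypothetical, not the patients' measurements), the same test can be run in Python:

```python
# Paired t test on hypothetical pre- and post-operative safe-zone angles (degrees)
# for eight patients; scipy's ttest_rel mirrors a paired-samples t test in SPSS.
from scipy.stats import ttest_rel

safe_zone_pre = [8, 12, 15, 18, 20, 22, 25, 30]
safe_zone_post = [36, 38, 40, 42, 43, 45, 48, 50]

t_stat, p_value = ttest_rel(safe_zone_pre, safe_zone_post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4g}")
```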
Results

The study included five male and three female patients, with an average age at operation of 15.6 months (12 to 22 months). All patients were affected unilaterally, with five affected hips on the right side and three on the left side. Before arthroscopic reduction, seven patients had been treated unsuccessfully with open adductor tenotomy and a spica cast, and one had failed closed reduction through the application of a Pavlik harness; in all cases, serial radiographic studies were performed for at least 3 months to monitor whether the reduction was concentric or eccentric. According to the Tönnis grade of dislocation, two grade III hips and six grade IV hips were included. The main obstacles to reduction included pulvinar tissue, a hypertrophic ligamentum teres, a hypertrophic transverse acetabular ligament, and capsular constriction, which were observed in all eight cases.

Fig. 1: Arthroscopic images of the hip joint. (a) A hypertrophic ligamentum teres. (b) The acetabular fossa and pulvinar tissues. (c) A hypertrophic acetabular labrum. (d) Resection of the acetabular fossa and pulvinar tissue using a shaver. (e) Resection of a hypertrophic acetabular labrum with an electrocautery probe. (f) Proper positioning of the femoral head and the acetabulum following arthroscopic reduction.

An inverted labrum, which represents changes to the labrum cartilage complex but is not an obstacle to reduction, was observed in two hips. A pressure lesion in the cartilaginous acetabular roof was observed in all hips, and a neolimbus formation was observed in two hips. Patients' demographics and pre-operative characteristics are shown in Table 1. Arthroscopic reduction, including the resection of the pulvinar tissue and ligamentum teres, transverse ligament incision, and capsule release, was performed unilaterally in all patients. All reduction obstacles could be arthroscopically eliminated. Concentric reduction of the hip joint was observed on post-operative X-rays in all cases. The average pre-operative safe zone was 17.5° (8° to 30°), while the post-operative safe zone was 42.1° (36° to 50°). Therefore, the safe zone increased by 24.6° on average (95% CI, −30.7, −18.5; p < 0.001) after the surgery. The patients were followed up for a period of 60 months. The average pre-operative AC index was 40.3° (33° to 65°). The average AC index at the final follow-up was 21.9° (19° to 26°), which represents an average decrease of 18.4° (95% CI, 10.4, 26.4; p < 0.001). No complications, such as wound hematoma, infection, and neurological or vascular injuries, occurred after surgery. During the follow-up period, none of the patients developed necrosis of the femoral head, and periodic X-ray examinations showed continuous growth of the ossific nucleus of the femoral head. None of the patients developed a recurrent dislocation or residual hip dysplasia. X-ray images of one case before the operation (Fig. 2a) and during follow-up (Fig. 2b-h) are shown. The surgery results and complications during follow-up for individual patients are presented in Table 2.

Discussion

The treatment for dislocated hips depends on the age of the patient, the degree of dislocation, the anatomical configuration of the proximal femur, and the existing acetabular dysplasia.
If the dislocated hip cannot be treated through closed reduction during the first year of life, more extensive treatment is necessary [18]. Although open reduction remains the standard treatment after failed closed reduction, arthroscopic reduction has been performed in several studies. Despite the increasing number of reports of arthroscopically assisted reduction [13,19], medium-term outcomes have yet to be reported. Our research findings suggest the medium-term effectiveness of arthroscopic reduction for walking-age children with DDH who failed closed reduction. Further follow-up is warranted to assess the long-term results, as the patients had not reached skeletal maturity. Seven of our eight patients previously underwent open adductor tenotomy and spica cast treatment, which failed to restore the femoral head-acetabulum concentricity. Evidence [20] indicates that the failure rate of this treatment can be up to 50%. The high failure rate of closed reduction can be largely attributed to excessive intra-acetabular contents such as hypertrophic fibrofatty (pulvinar) tissue, a thickened ligamentum teres, and an inverted labrum. A small amount of such content may gradually disappear after closed reduction and allow the formation of femoral head-acetabulum concentricity, letting the hip joint return to its normal morphology. In contrast, a large amount of intra-acetabular content will obstruct the repositioning of the femoral head into the acetabulum, which is a major cause of closed reduction failure [21]. One case in our sample failed Pavlik harness treatment. If radiographic assessments show that the hip is not responding to treatment within 3 weeks of application of the harness, the treatment should be discontinued [18]. Various surgical approaches [22-24] have been developed for hip arthroscopy. All arthroscopic reduction procedures in this study were conducted using a two-portal approach with an anterolateral portal and an anterior portal under Kirschner wire-assisted positioning of the hip joint. A similar two-portal approach has been adopted by previous studies [14,15] to examine the anatomical structure within the hip joint and to perform arthroscopic reduction. To ensure proper positioning and portal placement and to avoid unnecessary soft tissue damage, we used a mobile C-arm machine and multiple Kirschner wires to establish the hip surgical approaches in all cases. We were able to examine all key anatomical structures and remove all obstacles to reduction through the established portals. The main obstacles to reduction observed in this study included pulvinar tissue, a hypertrophic ligamentum teres, hypertrophic transverse acetabular ligaments, and capsule constriction. These findings are consistent with previous reports [14,15]. We also observed pressure lesions in the cartilaginous acetabular roof in all cases and an inverted labrum and neolimbus in two cases. These rates are comparable to those previously reported by Eberhardt et al. [15], who studied older pediatric patients with less severe dislocations. Bulut et al. [19] reported a combination of arthroscopic reduction and open psoas tenotomy, in which psoas tenotomy was performed if the post-arthroscopic reduction was not concentric, and all patients reported good outcomes. Unlike Bulut et al., who performed an arthroscopically assisted procedure, we performed a purely arthroscopic reduction and used arthrography to determine whether the reduction was concentric.
All obstacles to reduction were examined and eliminated arthroscopically. In all cases in this study, the hips could be repositioned and stably retained in a Pavlik harness and a spica cast without the use of psoas tenotomy. Although the medium-term results are encouraging and demonstrate the feasibility of arthroscopic reduction in treating walking-age children with DDH, some limitations of this study should be noted. First, our sample size was relatively small. Second, our study did not have a control group to compare the complication rates associated with other treatment options. Third, this case series reported a single surgeon's experience, and all cases included in this study were treated at one community hospital. Finally, the age of the included patients ranged from 1 to 2 years. Despite the limitations inherent to this case series, we present the medium-term results of arthroscopic reduction among walking-age children with severe hip dislocation. The results of this study are promising because no arthroscopy-associated complications occurred within an average follow-up period of 5 years. This study, together with other published studies [13-15], demonstrates that arthroscopic reduction is suitable for treating DDH among walking-age children, even those with severe dislocation. Further research that directly compares the results and complications of open reduction to arthroscopic reduction, with longer follow-up periods and without psoas tenotomy, will be necessary to confirm the efficacy of arthroscopic reduction.

Conclusions

Arthroscopic reduction is a suitable surgical procedure for the treatment of DDH among walking-age children with failed closed reduction and severe dislocation. It is quick and safe, and it can be performed without post-operative complications over the medium term.
Biofabricated Fatty Acids-Capped Silver Nanoparticles as Potential Antibacterial, Antifungal, Antibiofilm and Anticancer Agents

The current study demonstrates the synthesis of fatty acids (FAs) capped silver nanoparticles (AgNPs) using aqueous poly-herbal drug Liv52 extract (PLE) as a reducing, dispersing and stabilizing agent. The NPs were characterized by various techniques and used to investigate their potent antibacterial, antibiofilm, antifungal and anticancer activities. GC-MS analysis of PLE shows a total of 37 peaks for a variety of bio-active compounds. Amongst them, n-hexadecanoic acid (21.95%), linoleic acid (20.45%), oleic acid (18.01%) and stearic acid (13.99%) were found predominantly and most likely acted as reducing, stabilizing and encapsulating FAs in LIV-AgNPs formation. FTIR analysis of LIV-AgNPs shows some other functional bio-actives, like proteins, sugars and alkenes, in the soft PLE corona. The zone of inhibition was 10.0 ± 2.2–18.5 ± 1.0 mm, 10.5 ± 2.5–22.5 ± 1.5 mm and 13.7 ± 1.0–16.5 ± 1.2 mm against P. aeruginosa, S. aureus and C. albicans, respectively. LIV-AgNPs inhibit biofilm formation in a dose-dependent manner, i.e., 54.4 ± 3.1%–10.12 ± 2.3% (S. aureus), 72.7 ± 2.2%–23.3 ± 5.2% (P. aeruginosa) and 85.4 ± 3.3%–25.6 ± 2.2% (C. albicans), and SEM analysis of treated planktonic cells and their biofilm biomass validated the fitness of LIV-AgNPs as future nanoantibiotics. In addition, the as-prepared FAs-rich PLE-capped AgNPs also exhibited significant (p < 0.05 *) antiproliferative activity against cultured HCT-116 cells. Overall, this is the very first demonstration of the employment of FAs-rich PLE for the synthesis of highly dispersible, stable and uniformly sized AgNPs and of their antibacterial, antifungal, antibiofilm and anticancer efficacy.

Introduction

Interest in the synthesis of metal-based nanomaterials is growing in several fields in view of their unique physico-chemical and biomedical properties, with specific advocacy for their fitness in clinical settings as a fascinating treatment modality worldwide [1]. There is wide scope to achieve the desired properties in synthesized nanoparticles (NPs), including shape, size and stability, by manipulating reaction conditions such as pH, temperature, the concentration of metal precursors, and the concentration and nature of bio-reducing agents [2-8]. Besides, the surface capping or encapsulation material of NPs deserves special importance due to being directly or indirectly concerned with

Synthesis and UV-Vis Analysis of LIV-AgNPs

Briefly, an apparent color change in the reaction mixture containing the aqueous solutions of PLE and AgNO3 in a 1:3 ratio (v/v), from pale yellow to light brown, indicated the PLE bio-actives mediated bio-reduction of Ag+ to LIV-AgNPs after 20 min at 25 ± 5 °C. The color of the reaction mixture turned intense brown after 24 h. The appearance of a sharp UV-Vis band at λmax 428 nm was observed, which is likely due to the surface plasmon resonance (SPR) of nascent LIV-AgNPs in colloidal solution (Figure 1a). The UV-Vis absorption peak position (400-500 nm) and the formation of the characteristic brown color of LIV-AgNPs were found concordant with reports published on plant-mediated green synthesis of AgNPs [6]. Besides, UV-Vis absorption (λmax 428 nm) analysis of colloidal LIV-AgNPs over six months revealed that the NPs were highly stable, as the experiments showed no significant change in the SPR peak (Figure 1b).
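Locating λmax in such a scan amounts to a simple argmax over the recorded spectrum; a minimal sketch (with a synthetic absorbance trace standing in for the measured 300-800 nm spectrum) is:

```python
import numpy as np

wavelengths = np.arange(300, 801)                        # nm
absorbance = np.exp(-((wavelengths - 428) / 60.0) ** 2)  # synthetic SPR band

lambda_max = wavelengths[np.argmax(absorbance)]
print(f"SPR peak at {lambda_max} nm")                    # -> 428 nm
```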
Assessment of Bio-Actives in Pristine PLE and LIV-AgNPs by GC-MS and FTIR

Before the synthesis of LIV-AgNPs, the pristine PLE was subjected to GC-MS analysis [24] in order to identify the plausible bio-active compounds that may have acted as (i) a reducing agent for free metal cations (Ag+ → Ag0), (ii) a stabilizing agent while growth of nascent NPs was in progress during the nucleation phase, and (iii) capping agents for the fully grown or stabilized NPs, as described in our previous study [24] and illustrated in the schematic mechanism of LIV-AgNPs formation (Figure 2). The GC-MS spectrum of pristine PLE (Figure 3) reflected a total of 37 peaks (P) for a variety of bio-actives, as described in our previous study [24]. Based on their peak area, the four major bio-actives in PLE were found to be long/short-chained hydrocarbon fatty acids containing terminal -OH and -COOH groups, viz. n-hexadecanoic acid (P15, 21.95%), linoleic acid (P19, 20.45%), oleic acid (P20, 18.01%) and stearic acid (P21, 13.99%) [24]. Besides, two polyphenolic bio-actives were also detected, namely cardanol monoene (P27, 11.92%) and piperine (P31, 1.83%) [24], which likely play an auxiliary role in the bio-reduction and capping of NPs (Table S1) [24]. Next, the FTIR-based assessment of the as-prepared LIV-AgNPs also demonstrated the presence of PLE bio-actives that can be argued to be responsible for the bio-reduction of metal cations into nascent NPs, and the stabilization and capping of AgNPs. The FTIR spectrum in Figure 4a-c demonstrates a variety of molecular signatures of PLE bio-actives adsorbed on AgNPs, which appeared as sharp, broad, strong and weak signals pertaining to their band behavior, such as stretching, bending and vibrations. In Figure 4a, a dense area of the FTIR spectrum ranging between 3500 cm−1 and 3700 cm−1 indicated the presence of a majority of the PLE bio-actives associated with the AgNPs surface, and hence we analyzed this area at high resolution. The observations for this section suggest medium and sharp stretching bands assigned to the free -OH groups of alcohols [25]. Strong and broad stretching around 3236 cm−1 confirmed the presence of intermolecularly bonded -OH and -NH groups of carbohydrates/lipids and primary amines, respectively, as depicted in Figure 4b, signifying the reduction of Ag+ to Ag0 and the capping of AgNPs [26]. The weak vibrations between 2926 and 2850 cm−1, and at 2135 cm−1, were assigned to stretching of C-H and C≡C groups of lipids and alkynes, respectively (Figure 4b). The peak at 1737 cm−1 is likely due to the presence of the carbonyl (C=O) group of FAs, whereas the peak at 1645 cm−1 represents carboxylic groups (C=O) of FAs and the amine group (N-H) of proteins (Figure 4c) [24,26]. Indeed, the appearance of C-O and C=O signals strongly advocates the involvement of FAs and proteins in bio-reduction, with a PLE bio-actives corona likely physisorbed on the surface of LIV-AgNPs. Besides, the peak at 1456 cm−1 can be ascribed to CH2 deformation or to C-O-H bending, 1373 cm−1 represents O-H groups of phenolic compounds, the signal at 1153 cm−1 was taken as C-O-C stretching, which signified the presence of carbohydrates, and the peak around 1026 cm−1 was assigned to O-H stretching of polyphenols (Figure 4c) [26]. Overall, our GC-MS and FTIR results strongly suggest an active role of the PLE-attributed FAs and polyphenolics in the synthesis of LIV-AgNPs. Along the same lines, Rao and Trivedi [27] have also demonstrated the formation of FAs-encapsulated AgNPs using stearic, palmitic and lauric acids as bio-reducing and stabilizing agents.
Recently, the study of Gnanakani et al. [26] identified FAs, namely hexadecanoic and octadecanoic acids, in microalgal Nannochloropsis extract as potential bio-reducing and stabilizing agents in the synthesis of AgNPs. Beyond the abundance of FAs, the auxiliary phenolics, proteins, carbohydrates and enzyme bio-moieties in the benign milieu of PLE can be argued to play key roles in plant extract mediated bio-fabrication of nanomaterials [25].

Electron Microscopic Properties of LIV-AgNPs

The SEM micrographs in Figure 5a demonstrated a significant level of agglomeration in LIV-AgNPs when allowed to dry to a solid powder. Besides, the elemental composition of LIV-AgNPs obtained by using EDS showed prominent peaks for carbon (30.6%), oxygen (44.85%) and silicon (9.43%), along with the characteristic peak of Ag (11.39%) at approximately 3 keV (Figure 5b). In contrast to the powdered LIV-AgNPs (Figure 5a), the TEM analysis of colloidal LIV-AgNPs solutions showed a high degree of dispersity in the aqueous environment, which was likely attributable to repulsion forces between O-H groups projecting from the soft PLE corona of the AgNPs (Figure 5c). At the same time, ImageJ software-based size determination on the TEM micrographs revealed that the size of LIV-AgNPs ranged between 1 and 10 nm, with an average diameter of 5.37 ± 1.09 nm (Figure 5d).

SEM-Based Analysis of LIV-AgNPs Interaction and Cellular Damage

To validate the antibacterial and antifungal activities of LIV-AgNPs, the treated and untreated cells of the test strains were compared under SEM visualization. The results in Figure 8b,c exhibited significantly ruptured cell walls with deep pit and cavity formation in MDR-PA cells treated with 100 µg/mL of LIV-AgNPs, which were likely due to internalization and surface contact killing or on-site augmented cation-mediated toxicity, as described elsewhere [25]. Under identical conditions, Gram-positive MRSA cells were observed with significant structural damage along with tremendous bulging and deep cuts in the cell membrane (Figure 8e,f), which indicated increased cytoplasmic granularity, likely due to prompted interaction and internalization of LIV-AgNPs as compared to untreated cells (Figure 8d) [5]. Similarly, in the case of fungi, the LIV-AgNPs-exposed C. albicans cells showed significant changes in native morphology, such as deep pits in cells, compared to the untreated control (Figure 8h,i), as reported elsewhere [32]. Besides, Anuj et al. [33] have demonstrated a steady release of Ag+ from AgNPs, and the accumulated cations can thus destabilize the cell membrane to combat efflux-mediated drug resistance in Gram-negative bacteria. A recent study by Al-Kadmy [34] has also suggested that a coating of AgNPs had enhanced penetrative ability through the cell wall and effectively killed E. coli, S. aureus and vancomycin-resistant Enterococci cells on banknote currency under the tested conditions, as compared to AgNO3.

Antibiofilm Studies of LIV-AgNPs

Both bacterial cells (Gram-negative MDR-PA and Gram-positive MRSA) and the fungus C. albicans are well known for their biofilm-producing ability and the spread of chronic nosocomial infections in hospitals and associated settings [35,36]. Several metallic nanoantibiotics have been found to have great potential either to cease or to eradicate biofilm adherence [37].
However, the propensity of nanoantibiotics to readily diffuse through the biofilm biomass in order to reach microbial cells seems to be compromised by enzymatic, non-enzymatic and pH-mediated degradation [38]. Interestingly, the evidence suggests that FAs, either free or physisorbed onto the surface of NPs, can (i) suppress the regulation of quorum-sensing (QS) genes, (ii) quench diffusible QS signal factors such as acyl-homoserine lactones and autoinducer-2 (AI-2), and (iii) dysregulate associated non-QS targets like efflux pumps, oxidative stress and ergosterol synthesis [39-41]. Given the combined antimicrobial potential of FAs and AgNPs, we tested LIV-AgNPs for their antibiofilm activities. In fact, our GC-MS results prompted us to consider the LIV-AgNPs as encapsulated by the PLE bio-active FAs, viz. n-hexadecanoic acid (P15, 21.95%), linoleic acid (P19, 20.45%), oleic acid (P20, 18.01%) and stearic acid (P21, 13.99%) (Figure 3, Table S1) [24], and hence responsible for the significant antibiofilm activities against MDR-PA, MRSA and C. albicans. The data in Figure 9 revealed inhibition of biofilm formation by MDR-PA cells of 23.31 ± 5.2%, 31.17 ± 3.2%, 40.16 ± 5.5%, 53.37 ± 4.2% and 72.75 ± 2.2% at 31.25, 62.50, 125, 250 and 500 µg/mL of LIV-AgNPs, respectively, versus the untreated control (100%). Under identical conditions, MRSA cells limited the accumulated biofilm mass by 10.17 ± 2.3%, 15.06 ± 2.5%, 27.00 ± 2.9%, 49.70 ± 3.9% and 54.40 ± 3.1%, respectively. Besides the bacterial cells, the biofilm formed by C. albicans was also found to decline significantly (p < 0.05 *) by 25.60 ± 2.2%, 35.60 ± 1.3%, 41.65 ± 1.7%, 59.9 ± 3.2% and 85.44 ± 3.3%, respectively. In parallel, comparative SEM-based analyses of the untreated controls (Figure 10a,c,e) and the LIV-AgNPs (100 µg/mL) treated MDR-PA (Figure 10b), MRSA (Figure 10d) and C. albicans (Figure 10f) cells revealed significant disruption of their biofilm architectures. Overall, the obtained trends in biofilm formation suggest that FAs hold great potential to inhibit or disrupt biofilm formation by several microbial pathogens, including S. aureus [42], P. aeruginosa [43] and C. albicans [39,44]. Beyond the proven antibacterial and antibiofilm track record of AgNPs [45-47], a variety of FAs have earlier been reported as potential antimicrobial agents. For instance, the study of Santhakumari et al. [48] demonstrated that hexadecanoic acid (100 µg/mL) could interrupt QS by loosening the biofilm architecture (>60%) of Vibrio spp. like Vibrio harveyi, V. parahaemolyticus, V. vulnificus and V. alginolyticus without affecting their planktonic growth. Besides, 12.8 µg/mL of hexadecanoic acid alone could inhibit biofilm formation in P. aeruginosa and E. coli by 64% and 81%, respectively [43]. In the same context, Soni et al. [49] also demonstrated that palmitic acid (hexadecanoic acid), stearic acid, oleic acid and linoleic acid present in an extract of ground beef inhibit the auto-inducer signaling activity of the reporter strain (Vibrio harveyi) and reduce E. coli biofilm formation.

Antiproliferative Properties of LIV-AgNPs on Human Colon Cancer Cells (HCT-116)

Cell Viability Assay by MTT and Microscopic Analysis of HCT-116 Cells

In addition to the antimicrobial activities, the PLE-capped AgNPs were also assessed for their anticancer potential.
For this, human colon cancer cells were co-cultured with colloidal LIV-AgNPs (10-100 µg/mL) for 24 h, and the nano-toxicity of LIV-AgNPs against HCT-116 cells was measured by employing the colorimetric MTT assay. Precisely, compared to untreated control cells (100 ± 2.5%), there was an apparent declining trend in cell viability to 86.10 ± 5.9%, 81.5 ± 8.2% and 46.75 ± 7.9% at 10, 50 and 100 µg/mL of LIV-AgNPs, respectively (Figure 11). At about 100 µg/mL, we observed a ca. 50% inhibition of cell proliferation after 24 h. In parallel, HCT-116 cells exposed to LIV-AgNPs (10, 50 and 100 µg/mL) were also investigated for NP-induced morphological changes. The representative micrographs of HCT-116 cells clearly demonstrate that treatment with LIV-AgNPs caused significant morphological changes (Figure 12b-d) as compared to untreated cells (Figure 12a). Our results were strongly supported by the findings of Kuppusamy et al. [50], who determined the IC50 value of their Commelina nudiflora-capped AgNPs as 100 µg/mL against cultured HCT-116 cells after 24 h. Besides, as compared to AgNPs functionalized with a single extract, such as Chlorophytum borivilianum extract, which showed an IC50 value of 254 µg/mL [51], the as-prepared poly-herbal-encapsulated LIV-AgNPs can act as a much more effective anticancer nanomedicine against human colon cancer cells. In this context, linolenic acid polymers impregnated with AgNPs have also been reported to show an 82.3% inhibition rate against the rat pheochromocytoma PC12 tumor cell line [52]. Similarly, fatty acids-rich Argemone mexicana extract-encapsulated AgNPs (100 µg/mL) were found to inhibit 80% of human cervical cancer cell line (SiHa) proliferation [53]. AgNPs have also been reported to disrupt the respiratory chain and cell division while releasing Ag+, thereby augmenting bacterial killing. It has been reported that a coating of AgNPs can result in improved functionality and corrosion resistance of magnesium structures in biomedical settings [54]. With widespread application and inevitable environmental exposure, AgNPs can accumulate in various organs. More serious concerns are raised about the biological safety and potential toxicity of AgNPs in the central nervous system (CNS), especially in the hippocampus. Further, Chang et al. [54] investigated the biological effects and the role of the PI3K/AKT/mTOR signaling pathway in AgNPs-mediated cytotoxicity using the mouse hippocampal neuronal cell line (HT22 cells). They found that AgNPs reduced cell viability and induced membrane leakage in a dose-dependent manner, and that AgNPs also promoted the excessive production of reactive oxygen species (ROS) and caused oxidative stress in HT22 cells [54].

Preparations of Aqueous Extract of Liv52 Drug

To prepare the fatty acids-rich poly-herbal Liv52 drug extract, Liv52 tablets (Himalaya Global Holdings Ltd., Bangalore, India) were crushed to a fine powder, and 5 g was then dissolved in 100 mL of ultra-pure water. After 1 h, the PLE solution was centrifuged at 12,000 rpm for 10 min, and the collected supernatant was additionally filtered through Whatman paper No. 1 [24]. The aqueous PLE thus obtained was stored at 4 °C for the green synthesis of LIV-AgNPs.

GC-MS Based Assessment of Bio-Actives in Poly-Herbal Liv52 Drug Extract (PLE)

Considering the fact that Liv52 is a poly-herbal composition of C. spinosa, C. intybus, S. nigrum, T. arjuna and A.
millefolium extracts [23], gas chromatography-mass spectrometry (GC-MS) based analysis of the methanolic extract of PLE was performed to ascertain the bio-active compounds plausibly involved in the reduction, capping and stabilization of LIV-AgNPs, following the method described elsewhere [24,31].

Nanofabrication of Poly-Herbal Liv52 Drug Extract Capped AgNPs (LIV-AgNPs)

For the synthesis of LIV-AgNPs, PLE (25 mL) was mixed into 75 mL of 0.1 mM AgNO3 solution. The reaction mixture was then kept in the dark at room temperature (30 ± 5 °C). The color of the reaction mixture changed from pale yellow to brown after 20 min and became even darker brown within 24 h, which indicated the reduction of Ag+ to Ag0 NPs [8].

UV-Vis Spectroscopy and FTIR Analysis

Formation of LIV-AgNPs was monitored by using UV-Vis spectroscopy in the range of 300-800 nm, as described recently elsewhere [55]. Fourier-transform infrared spectroscopy (FTIR) was performed to ascertain the presence of PLE bio-actives that likely played either a key or an auxiliary role in the reduction of Ag+ to Ag0, the stabilization of nano-silver and the capping of nascent LIV-AgNPs during synthesis [8].

Electron Microscopic and EDS Analysis of LIV-AgNPs

The shape, size and elemental composition of the synthesized LIV-AgNPs were determined by scanning electron microscopy (SEM), transmission electron microscopy (TEM) and energy dispersive spectroscopy (EDS), following the methods described in our previous study [56].

XRD Analysis of LIV-AgNPs

The crystallinity and size of the bio-synthesized LIV-AgNPs were analyzed by XRD following a protocol described recently [57].

Microbial and Human Carcinoma Cell Cultures

In this study, multi-drug resistant Pseudomonas aeruginosa (laboratory strain), methicillin-resistant Staphylococcus aureus (ATCC 33591) and Candida albicans (ATCC 14053) were used to investigate the antibacterial, anticandidal and antibiofilm activities of the synthesized LIV-AgNPs. For the anticancer efficacy assessment, the human colon cancer (ATCC No. CCL-247) cell line was used. Both the microbial and the human carcinoma cell cultures were maintained as described in earlier studies [9,58]. The antibacterial and antifungal activity testing of the synthesized LIV-AgNPs was carried out using the two-fold micro-broth dilution method in the range of 62.5 to 2000 µg/mL against the Gram-negative MDR-PA, Gram-positive MRSA and C. albicans strains, following the method described by Ansari et al. [59]. The MIC value is defined as the lowest concentration of LIV-AgNPs at which no visible growth of bacteria or Candida was observed. After the MIC determination of LIV-AgNPs, aliquots of 100 µL from wells in which no visible growth was seen were further spread on MHA and SDA plates for 24 h at 37 °C and 28 °C, respectively, to calculate the MBC and MFC values. The lowest concentration of LIV-AgNPs that kills 100% of the population of the tested bacteria or Candida is considered the MBC/MFC value [59]. Further, the agar well diffusion assay was performed to determine the zone of inhibition (in millimeters) of LIV-AgNPs against Gram-negative MDR-PA, Gram-positive MRSA and C. albicans, following the method described by Jalal et al. [8].

Ultrastructural Alterations Caused by LIV-AgNPs in Bacterial and Candidal Cells

The morphological changes caused by LIV-AgNPs in the bacterial and yeast strain cells were examined by SEM analysis following the protocol described in previous reports [60]. Briefly, ~10^6 CFU/mL of MDR-PA, MRSA, and C.
albicans cells treated with 100 µg/mL of LIV-AgNPs were incubated for 16 h at the recommended temperature. Thereafter, washing of the treated and untreated samples was performed by centrifugation, and the pellets were then fixed with glutaraldehyde (4% v/v) followed by osmium tetroxide (1%). After fixation, dehydration, drying and gold coating were performed, and finally the effects of LIV-AgNPs on the test strains of bacteria and Candida were observed under SEM at an accelerating voltage of 20 kV [61].

Inhibition of the Biofilm-Forming Abilities of MDR-PA, MRSA and C. albicans

The inhibition of biofilm formation after treatment with LIV-AgNPs was quantitated by employing the microtiter crystal violet assay [61]. Briefly, 20 µL of freshly cultured MDR-PA, MRSA and C. albicans were admixed with 180 µL of varying concentrations (31.25, 62.50, 125, 250 and 500 µg/mL) of the as-prepared LIV-AgNPs, and the plates were then kept in an incubator for 24 h. The cells without LIV-AgNPs were considered the control group. After incubation, the contents of the microtiter wells were decanted, gently washed with PBS and left to dry. The adhered biofilm biomass was then stained with crystal violet solution (0.1% w/v) for 30 min. The excess dye was decanted, the wells were washed again with PBS and dried completely, and the stained biofilm was then solubilized with 95% ethyl alcohol and quantitated by optical density at 595 nm [62].

Visualization of Biofilm Architecture by SEM

Besides, the effect of LIV-AgNPs on MDR-PA, MRSA and C. albicans biofilm architecture was investigated by SEM [62]. In brief, 100 µL of fresh cultures of the tested bacterial and yeast strains, with and without LIV-AgNPs, were inoculated on glass coverslips in a 12-well plate overnight. After incubation, the glass coverslips were taken off and washed with PBS to remove unadhered cells. After washing, the coverslips were fixed with glutaraldehyde (2.5% v/v) for 24 h at 4 °C. After fixation, the coverslips were washed again and then subjected to dehydration, drying and gold coating. After that, the effects of LIV-AgNPs on the biofilms of the tested bacteria and yeast were observed using SEM [61].

MTT Assay

The human colorectal carcinoma cell line was used to investigate the anticancer potential of the synthesized LIV-AgNPs at different concentrations (10, 50 and 100 µg/mL) in 96-well cell culture plates by measuring the optical density at 570 nm; cell viability (%) was estimated using the standard formula, i.e., (mean OD570 of treated cells / mean OD570 of untreated control cells) × 100 [62].

Statistical Analysis

Statistical analysis of the data was done by one-way analysis of variance (ANOVA), Holm-Sidak method, with multiple comparisons versus the control group (SigmaPlot 11.0, San Jose, CA, USA). The results indicate mean ± S.D. values determined from three independent experiments done in triplicate. The level of statistical significance chosen was * p < 0.05 unless otherwise stated.

Conclusions

This study demonstrates a simple one-pot procedure for the synthesis of LIV-AgNPs stabilized by the fatty acids-rich aqueous extract of the poly-herbal drug Liv52. The GC-MS results provided substantial evidence that PLE contributed FAs bearing terminal -OH and -COOH functional groups, namely n-hexadecanoic acid (21.95%), linoleic acid (20.45%), oleic acid (18.01%) and stearic acid (13.99%), which were speculated to reduce Ag+ into Ag0, followed by stabilization through soft corona formation around the nascent NP surface during the synthesis reaction.
Besides, the LIV-AgNPs were found to be potential nano-therapeutic agents for controlling bacterial growth and biofilm formation of the Gram-negative MDR-PA, Gram-positive MRSA and C. albicans strains in vitro. Significant interaction of the PLE-capped AgNPs with both the Gram-negative and Gram-positive bacterial strains and the fungal strain was observed. The propensity of LIV-AgNPs for interaction and internalization in planktonic cells as well as biofilm biomass appeared clearly in the SEM analysis of the treated experimental sets of MDR-PA, MRSA and C. albicans, owing to the differences in their cell wall composition. However, whether the antibacterial and antibiofilm potential of LIV-AgNPs is due to swift surface contact through the stubborn biofilm matrix formed around the colonized cells requires further investigation to understand their mode of action for nanoantibiotics development. In addition, the dose-dependent cytotoxicity trend of LIV-AgNPs against cultured human colon cancer cells indicated that the FAs-rich PLE-capped nanomaterials could act as potential anticancer nanodrugs. However, the anticancer data on LIV-AgNPs reported here are only preliminary and will subsequently be investigated in depth, exploring their cytotoxicity on normal cells as well as the antiproliferative activity of the Liv52 extract alone as a control.

Data Availability Statement: The data presented in this study are available in this manuscript.
Cool-dry season depression in gas exchange of canopy leaves and water flux of tropical trees at the northern limit of Asian tropics

Trees on the northern boundary of the Asian tropics experience hot-humid and cool-dry seasons, but little is known about the seasonal dynamics of their canopy physiology. We used a canopy crane to reach the canopy of nine tropical tree species and measured canopy leaf gas exchange, water status, and trunk sap flux during the hot-humid and cool-dry seasons in Xishuangbanna, China. We found that most tree species exhibited significant reductions in maximum photosynthetic rate (Amax), stomatal conductance (gsmax), predawn and midday leaf water potentials, and maximum sap flux density in the cool-dry season. Compared to the hot-humid season, Amax declined by 19-60%, and maximum water flux declined by −14% (an increase) to 42%. The cool-dry season decline in Amax of four species can be partly explained by an increased stomatal limitation (decreased gsmax and intercellular CO2 concentrations). Therefore, a predicted increase in drought in this region may decrease the carbon sequestration and productivity of these forests. We did not find a tradeoff between performance (Amax in the hot-humid season) and persistence through the cool-dry season; species with higher Amax in the hot-humid season did not show higher percent seasonal declines in the cool-dry season. Amax was significantly and positively associated with the trunk sap flux in both seasons, but the association was weaker in the cool-dry season. Thus, our results suggest that some tradeoffs and trait associations are environment dependent. Our results are important for understanding the carbon and water fluxes of seasonal tropical forests and their responses to environmental changes.

Introduction

The dynamics of tree carbon and water fluxes are driven by environmental factors such as temperature, solar radiation, and relative humidity (Fauset et al. 2019; Dusenge and Way 2017; Way et al. 2015). Seasonal dynamics in these environmental factors can lead to changes in canopy leaf physiological performance, which are species specific (Aragao et al. 2014; Chen and Cao 2015; Siddiq et al. 2017). Tree leaf photosynthesis (A) and water fluxes are sensitive to changes in environmental conditions and reach their maximum values under optimum conditions (Tucci et al. 2010; Yang et al. 2012; Zhang et al. 2014a; Gitelson et al. 2014). In the tropics, the optimum conditions are observed during moderate atmospheric temperature and humidity, which create a suitable driving force (vapor pressure deficit) for water fluxes (Siddiq et al. 2017) and a suitable temperature for photosynthesis (Cao et al. 2006; Kumagai et al. 2006). In tropical areas with seasonality in temperature and/or rainfall (e.g., the marginal tropics), the reduction of temperature and/or rainfall during the cool and/or dry season can result in reduced carbon and water fluxes (Vongcharoen et al. 2018; Frenne et al. 2019; Santanoo et al. 2019). Forests in the marginal tropics, e.g., those at the northern edge of the Asian tropics, are characterized by seasonality in temperature and rainfall, which results in a hot-humid season and a cool-dry season. This will probably result in seasonal changes in canopy leaf physiological performance, which have not been well studied until now. These forests are strong carbon sinks and contribute significantly to the global carbon cycle (Zhang et al. 2006, 2016; Tan et al. 2012; Cristiano et al.
2014), but the physiological mechanisms explaining their high carbon-sink function and seasonal dynamics are not well understood. The marginal tropical rainforests in Xishuangbanna, China, which are on the northern boundary of Asian tropics, are typical Asian tropical rainforests in terms of species composition and phenology, and an important component of the Indo-Burma diversity hotspot (Myers et al. 2000;Cao et al. 2006;Hua 2013). They are also strong carbon sinks (Zhang et al. 2006) contributing significantly to the global carbon cycle. The tropical forests of this region are under the threat of degradation due to global warming, increasing drought, decreasing fog persistence, and the introduction of exotic species for commercial uses (Singh et al. 2019;Zhang et al. 2014a;Qiu 2010;Li et al. 2006). All these changes may significantly alter the water and carbon cycles of the region. For instance, the carbon fixation of the forests was significantly reduced in this region due to the drought event in 2010 ). An understanding of water and carbon fluxes of trees from this region under different environmental conditions will help to predict their response to projected climate change including an increase in climate variability and to develop effective management strategies. Although there are some studies reporting the seasonal changes in photosynthesis of crops and small trees (Zhang et al. 2014a) and ecosystem-level carbon fluxes of the marginal Asian tropical forests (Zhang et al. 2006), more mechanistic studies are needed to understand their canopy physiology in responding to ambient seasonal environmental changes. For instance, temperate plants are found to follow a general tradeoff between maximum photosynthesis in the favorable season, and persistence through the unfavorable season; species with higher maximum photosynthetic performance (A max ) in the favorable season show higher percent seasonal declines in A max during the cold or dry season . However, it is unknown whether trees from the marginal tropics with less seasonality compared to the temperate regions follow the same tradeoff. Understanding tree physiology and its seasonal dynamics of marginal tropical forests will also help to predict the response of temperate forests that are adjacent to them to future warming, and the response of tropical forests to a predicted increase in climate variability (e.g., seasonal drought or dry spells). Further, a more physiological understanding of these forests can improve the performance of the global land surface models, which are used to understand and predict the global water and carbon fluxes in a changing climate. Marginal tropical and subtropical forests are under-represented in these models (Pan et al. 2020;Gentine et al. 2019;Li et al. 2018). It has been observed that photosynthetic carbon gain and water flux are coupled (Cowan & Farquhar 1977;Santiago et al. 2004, Brodribb andFeild 2000;Fauset et al. 2019;Siddiq et al. 2019) because both processes are regulated by the stomata. A large water flux enabled by a high transport capacity will result in a high leaf water potential (less negative) during active transpiration at a given evaporative demand, which can potentially facilitate photosynthetic gas exchange (Landsberg et al. 2017). However, environmental conditions of the habitat can shift the coupling between water transport and leaf gas exchange (Sack et al. 2005), and therefore, this coupling can also be potentially changed due to seasonal changes in environmental conditions. 
Evaporative cooling strategies adjust according to seasonal changes in temperature. In the cool season, the need for cooling through canopy transpiration is lower, while in the hot-humid season, the canopy needs a significant amount of evaporative cooling to avoid heat damage. In addition, water flux and stomatal conductance may not be the major limiting factors on photosynthesis in the cool season, as tropical trees can be sensitive to chilling-induced photodamage (Levitt 1980; Dungan et al. 2003; Huang et al. 2010; Zhang et al. 2014b; Yang et al. 2017). Therefore, water flux and photosynthesis are not necessarily coupled in the unfavorable season. Other factors, such as leaf phenology and leaf age, that influence leaf photosynthesis (Kitajima et al. 1997, 2002) can also alter the coupling between water flux and carbon gain in the cool-dry season, as these forests have species with a range of leaf life spans, including both evergreen and deciduous species. In general, how this coupling responds to environmental changes, and how it shifts in different seasons, are not well understood.

Here, we accessed the canopy of tropical trees in Xishuangbanna with a canopy crane and measured canopy leaf carbon assimilation and water fluxes in the hot-humid and cool-dry seasons. The main objectives of the present study were (1) to quantify the seasonal changes in canopy photosynthesis and water flux of trees at the northern limit of Asian tropics; (2) to test whether the potential cool-dry season declines in Amax of some species of this region are due to increased stomatal limitation, and whether the seasonal changes in environmental conditions shift the coordination between maximum water flux and maximum photosynthetic performance; and (3) to test whether there is a tradeoff between maximum photosynthetic performance (Amax in the hot-humid season) and persistence through the cool-dry season (lower percent decline in Amax) across species. We hypothesized that species with high rates of carbon fixation during the hot-humid season have higher seasonal declines in the cool-dry season, according to the performance vs. endurance tradeoff. It was also hypothesized that most tree species would show significant declines in photosynthesis and water use, mainly caused by an increased stomatal limitation due to decreased water availability. We also hypothesized that the coordination between photosynthesis and water flux would be weaker during the cool-dry season due to the increased limitation of factors other than water transport (e.g., chilling-induced photoinhibition) on photosynthesis.

Study site and species

The experimental set-up for this study was established in Xishuangbanna Tropical Botanical Garden (XTBG; 21°54′N, 101°46′E, 580 m a.s.l.), southern Yunnan Province, Southwest China. This region has a typical tropical monsoon climate and, hence, a pronounced hot-humid season with plenty of rain from May to October, and a dry season from November to April. The dry season can further be divided into a cool-dry season from December to February and a hot-dry season from March to April. The mean annual precipitation is 1560 mm, approximately 80% of which falls during the wet season. The mean annual temperature of the study site is 21.7 °C (Cao et al. 2006). In this study, we selected 26 individual trees from 9 species in 40-year-old plantation stands (Table 1).
Among the nine studied species, six, i.e., Hopea hainanensis, Shorea assamica, Vatica mangachapoi, Mesua ferrea, Dalbergia odorifera, and Pterocarpus indicus, are naturally distributed in southern China, while the other three species, Anisoptera laevis, Dipterocarpus alatus, and Swietenia mahagoni, are exotic trees. The former two species are naturally distributed in northern Thailand and adjoining tropical areas, while Swietenia mahagoni is naturally found in tropical Caribbean islands of the United States. Among the nine species, six are evergreen while the rest are deciduous (Table 1). The canopy physiological measurements were carried out in September 2012 for the hot-humid season, while the cool-dry season measurements were done during the first week of January 2013. All the deciduous species start shedding their leaves at the end of February or the beginning of March and start flushing new leaves in mid-April.

Seasonal differences of climatic variables

There were distinct differences in the atmospheric temperature between the hot-humid and cool-dry seasons of the studied year. The mean daily temperature during the cool-dry season was 18 °C, while it was 25 °C in the hot-humid season. The mean atmospheric vapor pressure deficit (VPD) during the cool-dry season was 0.3 kPa, while in the hot-humid season, it was 0.73 kPa. The average solar radiation in the cool-dry season was 600 µmol m⁻² s⁻¹, while in the hot-humid season, it was 640 µmol m⁻² s⁻¹. During normal sunny days, the duration of hourly mean daylight with photosynthetic photon flux density (PPFD) > 600 µmol m⁻² s⁻¹ at the top of the tree canopies was from 9:00 to 19:00 in the hot-humid season, while during the cool-dry season, the duration was 10:00–18:00. Thus, there was a two-hour difference in light availability to canopy leaves between the hot-humid and cool-dry seasons. The rainfall during the hot-humid months was > 200 mm per month, while in the cool-dry season, it was < 100 mm per month (Fig. 1a–d).

Canopy gas exchange and leaf water potentials

To access the canopies, with a height range of 25–35 m, we used a canopy crane mounted on a truck. Trees close to the edges of the stands were not used, to minimize potential edge effects. The maximum (light-saturated) leaf photosynthesis (Amax; µmol m⁻² s⁻¹) and stomatal conductance (gsmax; mol m⁻² s⁻¹) were measured using a portable photosynthesis measurement system (LI-6400; LI-COR, Nebraska, USA) under ambient conditions on sunny days for both the hot-humid and cool-dry seasons. The maximum gas exchange was measured between 09:00 and 11:00. The chamber temperature during the measurement time of the hot-humid season was approximately 23 °C, and the leaf-to-air vapor pressure deficit (VPDleaf) was approximately 1.0 kPa. During the cool-dry season, the chamber temperature was 17 °C and the VPDleaf was 0.7 kPa. The PPFD within the chamber was set at 1000 µmol m⁻² s⁻¹, as the maximum gas exchange rates were achieved at this level, and to avoid photoinhibition. For each tree, six to eight newly fully developed mature leaves from different sun-exposed canopy-top terminal branches of two to four individuals per species were selected to measure canopy gas exchange at the top of the canopy. For each tree, six to eight stable values of photosynthetic rate and stomatal conductance were logged and stored in the LI-6400 instrument, and the average value for each species was calculated.
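The ambient VPD values quoted above, and the VPDleaf values recorded during the gas exchange measurements, can be derived from paired temperature and humidity records. The minimal sketch below uses the Tetens approximation for saturation vapor pressure; the paper does not state which formulation was used, so both the formula choice and the humidity inputs are illustrative assumptions.

    # Vapor pressure deficit from air temperature and relative humidity.
    # The Tetens saturation-vapor-pressure formula is assumed here; the
    # paper does not state which formulation it used.
    import math

    def vpd_kpa(t_air_c, rh_percent):
        # Tetens approximation for saturation vapor pressure (kPa)
        e_sat = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
        return e_sat * (1.0 - rh_percent / 100.0)

    print(f"{vpd_kpa(25.0, 75.0):.2f} kPa")  # hot-humid-like conditions
    print(f"{vpd_kpa(18.0, 85.0):.2f} kPa")  # cool-dry-like conditions

With plausible humidities for the two seasons, this reproduces the order of magnitude of the reported values (roughly 0.7–0.8 kPa versus about 0.3 kPa).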
Intrinsic water use efficiency was calculated by dividing the photosynthetic rate by the stomatal conductance (Farquhar et al. 1982). The intercellular CO2 concentrations (Ci; µmol mol⁻¹) were also obtained from the LI-6400 while measuring the gas exchange. The leaf water potentials were measured on-site from five to six leaves per tree using a pressure chamber (PMS, Albany, OR, USA). Predawn leaf samples were collected and measured in the field between 06:00 and 07:00, whereas midday samples were collected between 12:30 and 14:30 on sunny days.

Sap flow and meteorological data

We used the daily maximum sap flow data (water flux; peak sap flux density during the day; g m⁻² s⁻¹) of the hot-humid and cool-dry seasons from the sap flow measurements for the same trees that were used to measure canopy photosynthetic gas exchange. Sap flow was measured using Granier-type heat dissipation sap flow sensors (Granier 1987) from 2012 to 2013, and the daily maximum sap flow data of the same days as the canopy gas exchange measurements were used for this study. The technique involves heating one sensor with an electrical source, while the other sensor is left unheated as a reference. The temperature difference between these two sensors was used to calculate the sap flux density. The details are given in Siddiq et al. (2019). The original Granier equation was recalibrated to calculate the sap flux density, as it can substantially underestimate the sap flux density of tropical trees (Siddiq et al. 2017). The hourly mean meteorological data, i.e., temperature, solar radiation, relative humidity, and rainfall, were collected from the Xishuangbanna Tropical Rain Forest station, situated about 900 m away from the study site.

Data analysis

The effects of species and season on canopy gas exchange (Amax and gsmax) were analyzed by a two-way ANOVA using SPSS (IBM version 19). The differences in maximum canopy photosynthesis (Amax), stomatal conductance (gsmax), water use efficiency, intercellular CO2 concentration, and predawn and midday leaf water potentials between the two seasons, for the individual species, were analyzed using a t-test. Duncan's method was used for the comparison of mean Amax and gsmax between the hot-humid and cool-dry seasons across the studied deciduous and evergreen species. A linear regression was fitted to the relationship between the percentage declines in Amax and gsmax from the hot-humid to the cool-dry season to test whether the potential decline in Amax was associated with decreased gsmax. The relationship between the Amax of the hot-humid season and the absolute or percent decline in Amax during the cool-dry season was also fitted with a linear regression to test the potential tradeoff between maximum performance and persistence through the cool-dry season. The association of maximum photosynthetic rate or stomatal conductance with the maximum sap flux density was analyzed with a linear regression to test the coupling between water flux and photosynthesis for both seasons (a minimal numerical sketch of these tests is given below). The graphics and regression analyses were produced using SigmaPlot software (version 12.5; Systat Software Inc., USA).
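As noted above, the seasonal comparisons reduce to a paired test per species plus a cross-species regression of percent declines. The sketch below is illustrative only: the values are invented placeholders rather than the study's measurements, and SciPy is assumed in place of SPSS/SigmaPlot.

    # Illustrative sketch of the seasonal comparisons described above.
    # All values are hypothetical placeholders, not the study's data.
    import numpy as np
    from scipy import stats

    # Hypothetical leaf-level Amax (umol m-2 s-1) for one species,
    # measured on the same trees in the two seasons.
    amax_hot = np.array([12.1, 10.8, 11.5, 13.0, 12.4, 11.9])
    amax_cool = np.array([8.2, 7.9, 8.8, 9.1, 8.5, 8.0])
    t, p = stats.ttest_rel(amax_hot, amax_cool)  # paired t-test per species

    # Percent seasonal declines across species (hypothetical), used to test
    # whether the decline in Amax tracks the decline in gsmax.
    decl_amax = np.array([19, 25, 32, 41, 48, 55, 60, 22, 30])
    decl_gsmax = np.array([10, 20, 28, 45, 40, 52, 58, 15, 33])
    res = stats.linregress(decl_gsmax, decl_amax)  # slope, intercept, r, p
    print(f"paired t = {t:.2f} (P = {p:.3f}); R^2 = {res.rvalue**2:.2f}")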
Variations across species

The studied species showed high variations in canopy photosynthetic gas exchange (Amax and gsmax), water use efficiency, and midday leaf water potentials in both the hot-humid and cool-dry seasons. Significant effects of species, season, and the species-season interaction were observed among the studied species (Table 2). Further, evergreen species had significantly higher Amax during both the hot-humid and cool-dry seasons (Table 3). The gsmax of the hot-humid season was significantly higher in evergreen than in deciduous trees, while no significant difference was detected between the two groups of trees in the cool-dry season (Table 3). The highest Amax across species was found in D. alatus, i.e., 18.71 and 11.59 µmol m⁻² s⁻¹ in the hot-humid and cool-dry seasons, respectively. The lowest Amax in the hot-humid season was observed in S. assamica (6.85 µmol m⁻² s⁻¹). In the cool-dry season, the lowest Amax was found in M. ferrea, at 3.24 µmol m⁻² s⁻¹ (Fig. 2). The highest stomatal conductance was found in D. alatus, at 0.37 and 0.20 mol m⁻² s⁻¹ during the hot-humid and cool-dry seasons, respectively. The lowest gsmax during the hot-humid season was found in S. assamica (0.039 mol m⁻² s⁻¹). In the cool-dry season, the lowest gsmax was found in M. ferrea, at 0.031 mol m⁻² s⁻¹. The species also showed high variation in their water use efficiency in both seasons, ranging from 49.42 to 204.65 µmol mol⁻¹ during the hot-humid season, and from 55.45 to 190.04 µmol mol⁻¹ in the cool-dry season. The intercellular CO2 concentration also varied highly across species. In the hot-humid season, it ranged from 100 to 350 µmol mol⁻¹, while in the cool-dry season, it ranged from 174 to 28 µmol mol⁻¹ (Fig. 2c, d).

Seasonal declines in canopy photosynthetic gas exchange and water flux

The seasonal dynamics in maximum photosynthetic rate (Amax) and stomatal conductance (gsmax) differed among individual species (Table 2). Significant seasonal declines in Amax were found in six out of the nine species, while the other three species did not change significantly (Fig. 2a). Two species showed significant increases in gsmax in the cool-dry season, four species showed significant declines, and the remaining three species showed no change (Fig. 2b). The percentage of photosynthesis reduction in the cool-dry season compared to the hot-humid season ranged from 19% in S. mahagoni to 60% in M. ferrea. Three species showed a significant decline in water use efficiency in the cool-dry season compared to the hot-humid season, while two species showed significant increases and another three species showed no change (Fig. 2c). Four species showed significant declines in the intercellular CO2 concentration in the cool-dry season compared to the hot-humid season, while three species showed significant increases and two species showed no change (Fig. 2d). For daily maximum sap flux density, five out of the nine species showed significant declines in the cool-dry season compared to the hot-humid season, while the other four species (D. odorifera, H. hainanensis, S. assamica and S. mahagoni) did not show significant differences between the two seasons (Fig. 3). A significant decline in predawn leaf water potential in the cool-dry season (compared to the hot-humid season) was found in six out of the nine species (not in A. laevis, D. odorifera, and M. ferrea). A significant cool-season decline in midday leaf water potential was found in seven species, but not in M. ferrea and P. indicus (Fig. 4a, b).
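The sap flux densities behind these seasonal comparisons come from the heat-dissipation signals described in the methods. For reference, the sketch below applies the original Granier (1987) conversion from the probe temperature difference to flux density; the study actually used a recalibrated form (Siddiq et al. 2017) whose coefficients are not reproduced here, so this is illustrative only and the readings are hypothetical.

    # Sketch of the heat-dissipation (Granier) conversion from the measured
    # temperature difference to sap flux density. The coefficients are the
    # original Granier (1987) values, not the study's recalibration.
    import numpy as np

    def sap_flux_density(dT, dT_max):
        """Sap flux density (g m-2 s-1) from the heated/reference probe
        temperature difference dT and its zero-flow maximum dT_max."""
        K = (dT_max - dT) / dT          # dimensionless flow index
        return 119.0 * K ** 1.231       # original Granier calibration

    dT = np.array([8.2, 7.1, 6.5])      # hypothetical daytime readings (K)
    print(sap_flux_density(dT, dT_max=9.0))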
Relationship between water flux and photosynthetic gas exchange

A significant and positive relationship (R² = 0.50; P < 0.01) was found between the percent decline in Amax and the percent decline in gs from the hot-humid to the cool-dry season (Fig. 5). A significant and positive relationship was also found between maximum sap flux density and photosynthetic rate in both the hot-humid and cool-dry seasons, although the relationship was weaker during the cool-dry season than in the hot-humid season (Fig. 6a). There was also a significant and positive relationship between maximum sap flux density and stomatal conductance in the hot-humid season (Fig. 6b). The relationship between sap flux density and stomatal conductance during the cool-dry season was not significant (P > 0.1; Fig. 6b). No relationship between the Amax of the hot-humid season and the percent decline in Amax from the hot-humid to the cool-dry season was found across species (relationship not shown).

Discussion

Our study quantified the seasonal dynamics in canopy leaf photosynthetic gas exchange and trunk water flux of nine tree species at the northern limit of the Asian tropics. The studied tree species showed high variations in canopy photosynthetic performance and trunk water flux, as well as in their seasonal changes (two-way ANOVA; Table 2). This pattern suggests diversified responses of trees in the marginal tropics to the seasonally unfavorable conditions and divergent strategies for achieving high annual carbon assimilation. Our results did not support the hypothesis that species with high photosynthetic rates in the hot-humid season would have higher percent declines in the cool-dry season, i.e., a photosynthetic performance vs. persistence tradeoff, as found in temperate plants. The absence of a tradeoff between photosynthetic performance under favorable conditions and persistence through the unfavorable season (low seasonal declines) in marginal tropical trees could be because the "stress" level in the cool-dry season of this region is not strong enough to make this tradeoff detectable. Also, the leaf age effects (Field 1983, 1987; Kitajima et al. 1997, 2002) and potentially different strategies in responding to seasonal stress between evergreen and deciduous species may confound the potential tradeoff.

Six out of the nine species studied showed significant declines in the maximum photosynthesis rate in the cool-dry season. However, despite significant declines, the cool-dry season canopy photosynthetic rate of the studied species ranged from 3.24 to 11.59 µmol m⁻² s⁻¹, indicating a significant amount of net carbon gain during the cool-dry season. This finding provides a physiological explanation of the ecosystem-level carbon sequestration during the cool-dry season in this region and its great contribution to the global carbon cycle (Zhang et al. 2006). Furthermore, soil and tree nocturnal respiration is lower due to lower temperatures during the cool-dry season (Barbour et al. 2005; Anderegg et al. 2015; Siddiq and Cao 2018), which can also contribute towards more positive ecosystem carbon accumulation in the cool-dry season.

[Table 3 caption: Duncan's test result for the comparison of mean Amax and gsmax between the hot-humid and cool-dry seasons across the studied deciduous and evergreen species.]

[Fig. 2 caption: Maximum photosynthetic rate (Amax, a), stomatal conductance (gsmax, b), water use efficiency (c), and intercellular CO2 concentration (d) of nine tropical tree species in the hot-humid and cool-dry seasons; ***P < 0.0001, **P < 0.001, *P < 0.01, ns indicates non-significance. Bars indicate species means ± SEs and asterisks indicate significant seasonal differences in the individual species. Species codes are listed in Table 1.]

[Figs. 3-4 caption fragment: Species codes as in Table 1; ***P < 0.0001, **P < 0.001, *P < 0.01. Bars indicate species means ± SEs and asterisks indicate significant seasonal differences in the individual species.]

The water loss of the trees, indicated by leaf stomatal conductance and tree sap flux density, also declined during the cool-dry season in five species. This could be beneficial to these trees for water conservation in the cool-dry season. Water conservation in the cool-dry season is important for trees, as most tree species experienced some degree of drought stress, indicated by predawn water potentials being lower than -0.5 MPa, and as low as -0.8 MPa. The leaf water potentials generally reflected the rainfall pattern; a significantly lower rainfall in the cool-dry season results in drier soils and, therefore, lower leaf water potentials. Interestingly, the VPD in the cool-dry season was lower compared to the hot-humid season despite lower rainfall (Fig. 1). Lower VPD and, thus, lower transpirational demand can be the reason for the lower water flux in the cool-dry season (Fig. 3; Siddiq and Cao, 2016). Since the VPD was lower in the cool-dry than in the hot-humid season (due to lower temperatures), the lower midday leaf water potentials in the cool-dry season compared to the hot-humid season cannot be explained by an increased transpirational demand. Rather, they should be related to decreased soil water content.

Our results suggest that the cool-dry season decline in Amax in some tree species can be explained by an increased stomatal limitation. The percent decline in Amax in the cool-dry season is significantly associated with the percent decline in gs, suggesting that the decline in Amax can be at least partly explained by increased stomatal limitation. This is at least true for four species (A. laevis, D. alatus, M. ferrea, and V. mangachapoi). For these four species showing significant seasonal declines in Amax, their gsmax and Ci also declined significantly in the cool-dry season compared to the hot-humid season (Fig. 2). For them, decreased gs is limiting CO2 uptake, resulting in lower Ci and Amax in the cool-dry season. For the other two species showing significant declines in Amax (P. indicus, H. hainanensis), their gsmax and Ci showed increases or no change in the cool season. Therefore, their decreases in Amax cannot be explained by increased stomatal limitation, but can probably result from low-temperature-induced photoinhibition, as found in crops and tree seedlings in the region (Huang et al. 2010; Zhang et al. 2014b), or reduced photosynthetic carboxylation capacity under lower temperatures (Kumarathunge et al. 2019). In addition, leaf age may also be a possible factor explaining seasonal declines in Amax (Field 1983, 1987; Kitajima et al. 1997, 2002).
[Fig. 5 caption: The relationship between percent seasonal decline in Amax and gsmax (x-axis: percentage decline in gsmax in the cool-dry season) from the hot-humid to the cool-dry season across the nine tropical tree species studied. The line is a linear regression fitted to the data; **P < 0.001.]

[Fig. 6 caption: Maximum photosynthetic rate (a; Amax) and stomatal conductance (b; gsmax) in relation to maximum sap flux density in the hot-humid season (open dots) and cool-dry season (closed dots) across the studied species. Solid lines are linear regressions fitted to the hot-humid season data and the dashed line is a linear regression fitted to the cool-dry season data; **P < 0.001, *P < 0.01, ns indicates that the relationship between gs and sap flux density during the cool-dry season was not significant.]

A recent study (Bielczynski et al. 2017) emphasizes that both increased leaf and plant age can cause declines in photosynthetic performance. For evergreen trees, because they continuously flush leaves throughout the year in this region and we selected newly fully developed leaves for measurements, the leaf age effect on Amax should be minor. For the one deciduous species that showed seasonal declines in Amax (P. indicus) but not in gsmax, the age effect could at least partly explain the decline in Amax, because the leaves were six weeks away from shedding during the cool-dry season measurements. However, no declines in Amax were found in the other two deciduous species. Notably, two species with more southern and warmer native distribution limits (A. laevis and D. alatus; see materials and methods) showed the highest Amax among all the studied species in both the hot-humid and cool-dry seasons. This suggests their high physiological plasticity in responding to changes in temperature and contradicts our general understanding that species with warmer native habitats have lower resistance to low temperatures (Armando et al. 2016; Korner 2016).

Our study found a seasonal shift in the coupling between water flux and photosynthesis, with the coupling weakened during the cool-dry season. The coupling of canopy photosynthesis and trunk water flux during both the hot-humid and cool-dry seasons indicates the canopy-level synchronization of these two processes, supporting Drake et al. (2018). However, although the photosynthetic rate and the trunk water flux remained significantly associated during the cool-dry season, the coefficient of the relationship was lower compared to the hot-humid season. The same pattern was found for the relationship between stomatal conductance and trunk water flux: it was significant in the hot-humid season but not significant during the cool-dry season. The weaker coupling between canopy photosynthesis and trunk water flux during the cool-dry season could be because factors other than water supply limit canopy photosynthesis more strongly in the cool-dry season. For instance, chilling can induce declines in leaf photosynthetic electron transport (Huang et al. 2010; Zhang et al. 2014c) and carboxylation activity (Kumarathunge et al. 2019). The leaf age effect (Field 1983, 1987; Kitajima et al. 1997, 2002) may also change the coupling.

Conclusion

In conclusion, the studied trees showed a high variation in the seasonal dynamics of canopy leaf gas exchange at the northern limit of Asian tropics.
Three species showed no seasonal declines in Amax, while the rest still maintained positive carbon assimilation during the cool-dry season, suggesting that the forests are productive throughout the year. These results also provide a physiological explanation for the carbon-sink function of the forests in the cool-dry season (Zhang et al. 2006). The seasonal declines in gas exchange are associated with increased stomatal limitation in some but not all of the tree species, suggesting that further warming and increased VPD may have different impacts on limiting the photosynthesis of different species. Variations in the response to seasonal changes in temperature and soil water content also suggest a potential shift in the species composition of the forests under climate change. Further, some of the tree species showed water stress, with predawn water potentials as negative as -0.8 MPa in the cool-dry season, indicating that an increase in drought in this region (Jia and Pan 2016; Zhang et al. 2019) could further exacerbate the water stress and decrease the carbon sequestration potential of tropical forests in this region. In addition, we did not find the hypothesized tradeoff between maximum photosynthetic performance under favorable conditions and persistence through the unfavorable season, as found in temperate plants. We also found a seasonal shift in the coupling between water flux and photosynthesis. Therefore, our study confirms that many trait correlations and tradeoffs are environment- or climate-dependent (Sack et al. 2005).
2022-12-26T15:07:32.981Z
2021-10-27T00:00:00.000
{ "year": 2021, "sha1": "a8d284124f8a904a97778830dd4f3a6aa5e35488", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-223937/latest.pdf", "oa_status": "GREEN", "pdf_src": "SpringerNature", "pdf_hash": "a8d284124f8a904a97778830dd4f3a6aa5e35488", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
55059123
pes2o/s2orc
v3-fos-license
TRAP ATTRIBUTES INFLUENCING CAPTURE OF Diabrotica speciosa (COLEOPTERA: CHRYSOMELIDAE) ON COMMON BEAN FIELDS

Refinements in trap characteristics may improve the ability to monitor and mass-trap beetles. Field assays were conducted in common bean fields to assess responses of Diabrotica speciosa (Germar) to some trap characteristics. Golden yellow plastic cup (750 mL) traps caught more D. speciosa females and males than did clear traps. Carrot slices in Petri dishes baited with Lagenaria vulgaris L. powder (cucurbitacin source, 0.28%) caught more beetles than did dishes with carrot alone. Dispensers for the floral volatile attractant 1,4-dimethoxybenzene were also compared. Rubber septa dispensers attracted more beetles than did controls (dental wicks saturated with acetone). Captures on dental wick, starch matrix and feminine pad dispensers were intermediate and did not differ from those on rubber septa and unbaited controls. Perforated bottle traps (2000 mL), when baited with the floral attractant, caught more beetles than did window bottle traps (both traps contained L. vulgaris powder) in most assessments done from two to ten days after trap placement in the field. Traps with the insecticide carbaryl captured more beetles than did traps without it, 2-4 and 8-10 days after trap placement in the field, but not in the remaining periods (0-2, 4-6 and 6-8 days). Traps baited with 1,4-dimethoxybenzene captured more beetles than did the unbaited ones in all assessments (every other day from two to ten days after trap placement in the field). Finally, similar numbers of beetles were captured using plastic bottle traps (2000 mL) of three types, perforated, window (both with cucurbitacin) and sticky (without cucurbitacin), when all were baited with the floral attractant.

Since refinements in trap characteristics may improve the ability to monitor and mass-trap beetles (Hesler & Sutter, 1993), field experiments were conducted to assess responses of D. speciosa to different trap characteristics. We were particularly interested in whether traps would catch both sexes, as this attribute would be especially advantageous in crops where females oviposit. The tests reported in this study evaluated methods of luring insects to traps (color and volatile attractant), design of volatile attractant dispensers and trap entry ports, and means of retaining trapped insects (cucurbitacin, insecticide and adhesive).

MATERIAL AND METHODS

Field experiments were carried out in Londrina (Latitude 23°19'S, Longitude 51°12'W), in the state of Paraná, Brazil. Common bean, Phaseolus vulgaris L., cv. IAPAR 59, fields (sown on September 9, 2000; March 25, 2001; and May 25, 2001) were used as testing sites due to the natural occurrence of high populations of D. speciosa. Traps were placed 20 cm above the plant canopy and, when colored, were coated with yellow gold Suvinil paint 2450-0103 (BASF S.A., São Bernardo do Campo, São Paulo, Brazil). Plastic cup and bottle sticky traps were externally coated with the insect adhesive Tangle Trap (Tangle Foot Co., Grand Rapids, MI, USA). Cucurbitacins were from L. vulgaris powder (0.28% cucurbitacin B).
The powder was sprayed with carbaryl insecticide [Sevin 480 SC (2.25 per 1000 mL)] and then dried in the shade before placement in the traps. Carbaryl insecticide was chosen due to its widespread use and efficacy (Roel & Zatarin, 1989; Metcalf & Metcalf, 1992) in cucurbitacin baits and traps for diabroticites. Traps were baited with the attractant 1,4-dimethoxybenzene (200 mg/2 mL acetone) and, unless otherwise stated, the dispenser used was a rubber septum. Traps were placed in the field at 3 PM, and insects were removed after 48 hours, or at 48-hour intervals during longer tests. Cucurbitacin traps were left in the field for a maximum of 12 days to avoid powder detachment from the plastic (Ventura et al., 1996). Distance between traps was 5 m within a block, and 10 m between blocks. Beetles were identified according to species and sex in the laboratory.

Responses to yellow color and cucurbitacins

Yellow and clear sticky cups (750 mL) (n = 10) were placed upside down on wooden stakes on October 3, 2000 to assess color responses. To assess responses to cucurbitacin B, Petri dishes (8.5 cm diameter) containing carrot slices were used. Although carrot is not a reported host plant of D. speciosa, it has been used successfully as a sole foodstuff for beetle rearing in the laboratory (Ventura et al., 2001). Dishes were baited or not with L. vulgaris powder and were placed on the ground among crop rows on October 23, 2000. Ten replicates were used.

Effect of type of volatile dispenser

Yellow cup traps were baited with the floral attractant, equipped with four types of dispensers: rubber septum (Aldrich, Milwaukee, USA), dental wick (Companhia Manufatura de Tecidos de Algodão, Cataguazes, MG, Brazil), a starch matrix and a feminine hygiene pad, Intimusgel (Kimberly, Eldorado do Sul, RS, Brazil). Four replicates were used. The starch matrix was prepared according to Weissling et al. (1989), except for the corn flour source (we used Yoki corn flour; Alimentos Yoki, Cambará, PR), and placed in a voile bag. Control traps received dental wicks saturated with 2 mL of acetone (n = 5). Traps were placed upside down on iron stakes. Dispensers were hung by a string from a hole in the bottom of the cup and placed 1 cm below the traps. Traps were placed in the field on April 8, 2001 and captures were assessed every other day until April 20, 2001.

Design of entry ports

The trap design successfully used for monitoring D. speciosa and Cerotoma arcuata tingomariana Bechyné on common bean fields (Ventura et al., 1996) was used in a larger size (0.06 versus 2.0 L), because absolute captures increase with trap size (Youngman et al., 1996). In addition, these bottles, originally used as soft drink containers, can be easily obtained at recycling posts. Perforated bottle traps, having about 150 holes (5-mm diameter) per bottle made with a hot iron stick (Figure 1a), were compared with window bottle traps having four strips (3.5 × 25 cm) symmetrically cut from the bottle surface (Figure 1b). Ten replicates were used. Both trap models were yellow and contained a plastic strip (3.5 × 25 cm) treated with L. vulgaris powder and insecticide.
The bottom of the bottles was removed, and acrylic vials (12.0 × 12.0 × 3.8 cm) containing water and detergent were used below the traps to collect dead insects. A rubber septum with the floral attractant was placed in the trap. Traps were hung on iron stakes (upside-down "L"). The experiment was set up two times, on June 6 and 18, 2001. The tests were left until June 16 and 28, 2001, respectively.

Effect of insecticide

Perforated bottle traps baited with the floral attractant were used to test the effect of adding insecticide to the L. vulgaris powder. Ten replicates were used. Traps were exposed in the field on June 18, 2001. Samples were collected every other day until June 28, 2001.

Effect of volatile attractant

Rubber septa with the floral attractant in acetone, or with acetone only, were placed in perforated bottle traps to test the effect of the volatile attractant. Ten replicates were used. Traps were placed in the field on June 20, 2001. Samples were collected every other day until June 30, 2001.

Comparison of perforated, window and sticky bottle traps

This test evaluated the relative efficacy of perforated, window, and sticky bottle traps. Seven replicates were used. Sticky traps were made with the same bottle type and color used for the other traps, but did not have the strip with L. vulgaris powder. All traps were baited with the floral attractant. Traps were placed in the field on July 6, 2001 and removed 24 hours later due to the high field beetle population (sticky traps could lose efficacy due to excessive adhesion).

Experimental design and statistical analysis

All experiments were conducted in a randomized complete block design. Data were transformed by log (x + 1) to normalize the data and reduce heterogeneity of variances. The analysis of variance (ANOVA) was performed, followed by Duncan's multiple range test (SAS Institute, 1989), when F values were significant (P < 0.05) and more than two means were compared. Otherwise, paired t-tests were used to analyze the data (a minimal numerical sketch of this workflow is given below).

RESULTS AND DISCUSSION

Yellow traps attracted more D. speciosa female and male beetles than did clear traps [4.3 ± 0.4 versus 0.0 ± 0.0 and 2.1 ± 0.5 versus 0.1 ± 0.1, respectively (t = 6.79, P = 0.000, df = 9 and t = 4.47, P = 0.002, df = 9, respectively)]. No females and few males were captured by clear traps. Saturn yellow was the most attractive color to D. virgifera virgifera and D. barberi, and the color preference ranking was similar for males and females (Hesler & Sutter, 1993). D. virgifera virgifera was also more attracted to light yellow than to red, blue, dark blue and ultraviolet colors in the laboratory (Ball, 1982). On cucurbits, yellow traps attracted more Acalymma vittatum (F.) and D. virgifera virgifera beetles than did white traps, but capture numbers of Acalymma trivittatum (Mannerheim) and D. undecimpunctata howardi were similar on both trap colors (Hoffmann et al., 1996). Future investigations may determine responses to hue, brightness and saturation of yellow and other colors with similar wavelengths.

Carrot slices baited with L. vulgaris powder attracted more males and females than did unbaited slices (4.3 ± 0.5 versus 0.0 ± 0.0 and 0.5 ± 0.5 versus 0.0 ± 0.0, respectively) (t = 7.20, P = 0.000, df = 9 and t = 3.00, P = 0.015, df = 9, respectively). The number of captured males was 8.6 times higher than that of females (t = 4.34, P = 0.02, df = 9).
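The analysis workflow described under "Experimental design and statistical analysis" can be reproduced with standard statistics libraries. The sketch below is illustrative only: the capture counts are hypothetical, SciPy stands in for SAS, and because SciPy has no built-in Duncan multiple range test, a plain one-way ANOVA represents that step.

    # Illustrative sketch of the paper's analysis workflow: log(x + 1)
    # transform, then paired t-test (two means) or one-way ANOVA.
    # Capture counts below are hypothetical, not the study's data.
    import numpy as np
    from scipy import stats

    yellow = np.log1p([4, 5, 3, 6, 4, 5, 4, 3, 5, 4])   # log(x + 1)
    clear = np.log1p([0, 0, 1, 0, 0, 0, 0, 1, 0, 0])
    t, p = stats.ttest_rel(yellow, clear)                # paired by block

    # With more than two dispenser means, an ANOVA precedes the
    # multiple-range comparison (Duncan's test is not in SciPy).
    septum = np.log1p([9, 7, 8, 10])
    wick = np.log1p([5, 4, 6, 5])
    starch = np.log1p([4, 5, 3, 4])
    F, p_anova = stats.f_oneway(septum, wick, starch)
    print(f"t = {t:.2f} (P = {p:.3f}); F = {F:.2f} (P = {p_anova:.3f})")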
The higher male population of D. speciosa associated with carrot slices treated with L. vulgaris supports observations of massive male predominance aggregated on plants with high cucurbitacin content (personal observation; Martinez, S. and Avila, C., personal communication to M.U. Ventura). However, field-collected males and females showed similar feeding responses to cucurbitacins in laboratory assays (Cintra, I. and M.U. Ventura, unpublished). Tallamy & Halaweish (1993) suggested that the predominance of D. barberi and D. virgifera virgifera males over females captured in cucurbitacin traps in previous studies (Shaw et al., 1984; Fielding & Ruesink, 1985) is possibly due to male free-ranging mobility rather than sensitivity to cucurbitacins, for their data showed similar responses of males and females with no prior exposure to cucurbitacins (a physiological state that probably more correctly reflects field beetle populations). In addition, unmated D. undecimpunctata howardi males may prefer to eat cucurbitacins continually to maintain their stores until mating occurs (Tallamy & Halaweish, 1993). The sex ratio of captured insects was close to one in the remaining tests (Tables 1, 2, 3, 4 and 5).

The number of insects captured when the volatile attractant was dispensed on rubber septa differed from unbaited controls (Table 1). Capture numbers on dental wick, starch matrix and feminine hygiene pad dispensers were generally intermediate between those of rubber septa and unbaited traps, and differed from the control only in the 2-4 day period (Table 1). Rubber septa were used in subsequent experiments. Apparently, the starch matrix did not retain the floral attractant satisfactorily. The volatility of the semiochemical and the pH of the matrix have been implicated as possible factors affecting retention for several compounds (Weissling et al., 1989).

Higher numbers of beetles were captured by perforated bottle traps than by window bottle traps (1 to 2.4 times more beetles in the first and second experiments) (Table 2). Plastic strips containing L. vulgaris powder lasted longer in perforated bottle traps than in the window ones. Fungal growth was observed in window traps due to rainwater. Window traps apparently allowed greater and faster access of beetles to the feeding stimulant (L. vulgaris powder), although they escaped more easily. The visible surface of the window trap is smaller and probably reduces attraction. Perforated amber plastic medicine vial traps successfully captured and killed D. barberi and D. virgifera virgifera (Shaw et al., 1984). Perforated cucurbitacin yellow cup traps baited with sex pheromone were used to monitor D. undecimpunctata howardi in peanut fields (Herbert Jr. et al., 1996).

Traps with carbaryl insecticide captured more beetles than did traps without it, both 2-4 and 8-10 days after trap placement in the field (1.8 and 2 times more beetles, respectively) (Table 3). A similar trend occurred in the remaining periods (0-2, 4-6 and 6-8 days), but no differences were detected. The difficulty beetles have escaping from perforated bottle traps may have hampered any effect of the insecticide treatments (Table 3). Insects also might have been exposed to the insecticide in the trap but died outside. In previous studies with other Diabrotica species, cucurbitacin-baited traps with carbaryl insecticide were used, but comparisons between traps with and without insecticide were not reported. Shaw et al. (1984) used a 50% (wt:vol) dilution in water, and Barberchek et al. (1995) used 2%, while we used the dose recommended by the manufacturer, 0.23%.
Similar captures of both sexes on traps (Tables 1, 2, 3, 4 and 5) suggest the cucurbitacin trap can be widely used. D. speciosa is a multivoltine beetle which disperses into the crop field from other hosts (Ventura et al., 1996). Traps could be placed on the field edges to prevent infestation, thus adhering to the "artificial trap crop" concept (Deem-Dickson & Metcalf, 1995). Probable applications are trap deployments to avoid damage to leaves and fruits. Female beetles could be intercepted before they attack crops in which their larvae develop on roots (e.g. potato roots). Under these conditions, this management strategy could prove to be cost-effective (Hoffman et al., 1996). Females of D. undecimpunctata howardi are reported to be more responsive to cucurbitacins after mating (Tallamy & Halaweish, 1993).

Similar numbers of beetles were captured on perforated, window and sticky bottle traps (Table 5). These results clearly favor cucurbitacin traps as advantageous due to their simplicity (growers refuse to use sticky traps) (Hesler & Sutter, 1993; Whitworth et al., 2002). Commercial sticky traps have been compared with cucurbitacin-baited vial traps; e.g., the yellow sticky Multigard trap baited with volatile attractants caught more D. virgifera virgifera than did the clear and white cucurbitacin-baited traps (Trécé lure trap) (Whitworth et al., 2002). D. virgifera virgifera and D. barberi responded differently to several volatile attractants used as bait in Pherocon AM, white carton (sticky) and vial (cucurbitacin-baited) traps (Lance, 1990). Additional studies should emphasize trap features that enhance D. speciosa capture (size, color, design and durability; insecticide, volatile attractant and cucurbitacin contents; type of volatile and cucurbitacin dispenser, etc.). Trap economic thresholds and injury levels (monitoring) and the number of traps per area (mass trapping) on several crops in which the insect is a key pest must be established, besides the correlation between beetle trap captures and field populations. These studies are necessary to investigate the potential of plant kairomone traps in the management of D. speciosa.

Table 1 - Mean number (± SE) of adults and sex ratio of D. speciosa caught per yellow cup trap baited with 1,4-dimethoxybenzene (200 mg) in different dispensers in common bean crop (April 8 to 20, 2001). 1 Means in the same column with different letters are different based on Duncan's studentized range test (P < 0.05), n = 4. 2 Numbers in brackets refer to sex ratio (females/males). *Significant at 5%; **Significant at 1%.

Table 2 - Mean number (± SE) of adults and sex ratio of D. speciosa caught per yellow perforated or window bottle traps baited with L. vulgaris powder (sprayed with carbaryl insecticide) and 1,4-dimethoxybenzene (200 mg) in common bean crop (June 6 to 16 and 8 to 18, 2001, respectively).

Table 3 - Mean number (± SE) of adults and sex ratio of D. speciosa caught per perforated yellow bottle traps baited with L. vulgaris (sprayed or not with carbaryl insecticide) and 1,4-dimethoxybenzene (200 mg) in common bean crop (June 18 to 28, 2001). 1 Means in the same column with different letters are different based on paired t-test (P < 0.05), n = 10. 2 Numbers in brackets refer to sex ratio (females/males).

Table 4 - Mean number (± SE) of adults and sex ratio of D. speciosa caught per yellow perforated bottle traps baited with L. vulgaris powder sprayed with carbaryl insecticide and baited or not with 1,4-dimethoxybenzene (200 mg) in common bean crop (June 20 to 30, 2001). 1 Means in the same column with different letters are different based on paired t-test (P < 0.05), n = 10. 2 Numbers in brackets refer to sex ratio (females/males).

Table 5 - Mean number (± SE) of adults and sex ratio of D. speciosa caught per yellow bottle traps of three types baited with L. vulgaris powder and 1,4-dimethoxybenzene in common bean crop after 24 h (July 6, 2001) (n = 10).
2018-12-08T03:56:59.038Z
2005-08-01T00:00:00.000
{ "year": 2005, "sha1": "0b169bb368c9be627a9cbbc613fbd99cda634994", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/sa/a/mRtTbCKsq4tx9n7L4BQqSPv/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0b169bb368c9be627a9cbbc613fbd99cda634994", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
209378147
pes2o/s2orc
v3-fos-license
Physicochemical characteristics and photocatalytic performance of TiO2/SiO2 catalyst synthesized using biogenic silica from bamboo leaves

In this work, TiO2/SiO2 composite photocatalysts were prepared using biogenic silica extracted from bamboo leaves and titanium tetraisopropoxide as a titania precursor via a sol-gel mechanism. A study of the physicochemical properties of the materials as a function of their titanium dioxide content was conducted using Fourier transform infrared spectroscopy, a scanning electron microscope, a diffuse reflectance ultraviolet-visible (UV-vis) spectrophotometer, and a gas sorption analyzer. The relationship between physicochemical parameters and photocatalytic performance was evaluated using the methylene blue (MB) photocatalytic degradation process under UV irradiation, with and without the addition of H2O2 as an oxidant. The results demonstrated that increasing the TiO2 content enhances the specific surface area, the pore volume, and the particle size of titanium dioxide, while the band gap energy reaches a maximum of 3.21 eV at 40% and 60% Ti content. The composites exhibit photocatalytic activity for MB degradation with increasing photocatalytic efficiency, since the composites with 40 and 60 wt.% TiO2 demonstrated higher degradation rates than TiO2 in both the presence and absence of H2O2. This higher rate is correlated with their higher specific surface area and band gap energy compared with those of TiO2.

Introduction

Environmental remediation technology is one of the most studied topics in environmental science, as there are many pollutants with varying characteristics produced by a wide range of industrial activities. Advanced oxidation processes involving photocatalytic mechanisms (photo-oxidation, photoreduction, and photodegradation) are techniques that have attracted attention during the five decades since they were introduced by Fujishima and Honda in 1970. Photocatalytic oxidation is a promising method for the environmentally safe degradation of organic wastewater, including dye waste from the textile industry [1,2]. In addition to being a renewable process, the photocatalytic oxidation of dyes and organic molecules carries a lower cost compared to other techniques, such as adsorption, chemical oxidation, and ozonation [3,4,5]. This technique is highly regarded due to its use of photons from solar energy, the lack of chemicals, and the low cost. Highly active photocatalysts are a requirement, and TiO2 is a very well-known photocatalyst material [5,6]. However, some modification of TiO2 is required to enhance its performance for industrial applications. In addition to doping and the structural modification of titania nanoparticles to change the photocatalytic properties, supported titania has been reported to exhibit a different photocatalytic performance from that of titania alone [8,9,10]. The support material gives better results due to its interaction with titanium oxide. Therefore, the titania-silica (TiO2/SiO2) composite is the most well-known and intensively studied material [11,12,13,14]. Silica-supported titanium oxide exhibits a different photocatalytic performance than titania alone [7,8,9,10]. This is partly because of the interaction between titanium oxide and silica, and partly because of the different structures of surface titanate and bulk titania.
Some studies have revealed a relationship between a material's photocatalytic activity and the surface titanium oxide structure, preparation method, and loading amount. To improve the stability and other properties of TiO2/SiO2, a recent approach to TiO2/SiO2 synthesis was developed that involves the formation of a hierarchical structure or aerogel. TiO2/SiO2 aerogel has been reported to have excellent photocatalytic properties because the mesostructure can overcome intrinsic weaknesses, particularly in relation to the electron-transporting ability. Previous studies have provided data that confirm the intensive light harvesting demonstrated by TiO2/SiO2 with a mesoporous structure [11]. TiO2/SiO2 materials possess different performance properties depending on the mechanism of preparation and the synthetic route. Moreover, the porous structure of silica within the composite is also affected by various parameters, including the silica precursor. Silica prepared from plants or other biomass materials (termed biogenic silica) is increasingly used because of its renewable properties. Many attempts have been made to synthesize silica, silica nanoparticles, and mesoporous silica in the form of aerogels, hydrogels, etc. from a range of biomass materials, including rice husks, sugarcane bagasse, peanut shells, and other agricultural waste [9,10]. According to existing studies on biogenic silica synthesis, a silica aerogel with a mesoporous structure is a possible option to enhance the performance of TiO2/SiO2 aerogel. Considering the proliferation of bamboo plants in Indonesia, this research proposes that bamboo leaves be used as a biogenic silica source in the preparation of TiO2/SiO2 materials. The utilization of bamboo leaves for silica aerogel formation has been discussed in the literature, but the characteristics of biogenic silica derived from these leaves have not been intensively studied. Furthermore, the contribution of biogenic silica characteristics to the mechanism of TiO2/SiO2 formation, the specific mechanism involved, and investigation of the effects of synthetic parameters on the final physicochemical characteristics are of interest [12,13]. Studies on the preparation of TiO2/SiO2 using biogenic silica from bamboo leaves have not been undertaken. According to other investigations, the physical performance of TiO2/SiO2 can be described based on such parameters as the specific surface area, crystallinity, and band gap energy as a result of specific preparation variables. As the performance of TiO2/SiO2 materials in the photo-oxidation of organic molecules is significant in terms of their use as photocatalysts, such studies are important for the advancement of environmental technology. The main objective in this study was to investigate the physicochemical characteristics of TiO2/SiO2 materials prepared using biogenic silica from bamboo leaves. These characteristics were measured as a function of preparation variables, such as Ti content and calcination temperature, and their relationship with the photocatalytic activity was investigated.

Materials

The reagent titanium tetraisopropoxide (Ti(OiPr)4) was purchased from Sigma-Aldrich (Germany). Methylene blue (MB), H2O2, tetraethyl orthosilicate (TEOS) and acetic acid were obtained from Merck (Germany). Double-distilled water was used in preparing the photocatalysts. All reagents were used without any further purification.
Bamboo leaves were collected from Gigantochloa apus plants grown in the Sleman District, Yogyakarta, Indonesia. The leaves were washed with water and oven-dried before being calcined at 900 °C for 2 h to produce bamboo leaf ash (BLA).

SiO2 extraction from BLA

SiO2 extraction from BLA was performed by refluxing BLA with 4 M NaOH for 6 h. The reaction during the reflux is as follows:

SiO2(s) + 2NaOH(aq) → Na2SiO3(aq) + H2O(l)

The slurry was then filtered, and the black residue was rinsed with boiling water. The viscous, transparent, and colorless filtrate had a pH of 13. The filtrate was cooled to room temperature, and slow titration was carried out by dropping in 1 M H2SO4 until white SiO2 gel was obtained and the pH reached 8. The gel was neutralized by adding double-distilled water several times to remove excess NaOH and sulphate ions, and was then decanted before slow drying in an oven at 40 °C. To determine the silica content and surface profile of the compact white silica gel product, analyses were performed using a gravimetric method and scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDX).

Preparation of TiO2/SiO2

TiO2/SiO2 composites were prepared using the silica present in the gel. A sol-gel method for the titania and silica reaction was used for the synthesis. As the TiO2 precursor, Ti(OiPr)4 was reacted with the SiO2 obtained from the BLA extraction. In order to study the effect of the Ti content on the physicochemical properties of the composite, various Ti contents in the TiO2/SiO2 were obtained by changing the Ti(OiPr)4 content in the sol-gel reaction. For each preparation, Ti(OiPr)4 was diluted in 100 mL of ethanol, followed by the dropwise addition of 4 mL of acetic acid to initiate the hydrolysis reaction. Acetic acid was added to control TiO2 hydrolysis, as described elsewhere [14,15]. The mixture was then slowly mixed with the silica gel in water. For each reaction, the resulting colloidal solution was continuously stirred for one additional hour, followed by aging for 48 h at room temperature. The colloid was dried in an oven at 80 °C before being calcined at 500 °C. The mass percentage of TiO2 in the composite was set to 20%, 30%, 40%, and 60%. The composites were encoded as 20TiO2/SiO2, 30TiO2/SiO2, 40TiO2/SiO2, and 60TiO2/SiO2, referring to the Ti content. As a reference material, TiO2 was prepared using a similar procedure and precursor as in the TiO2/SiO2 preparation, but without mixing with silica gel.

Characterization

Powder X-ray diffraction (XRD) patterns of the samples were determined using a Shimadzu X6000 diffractometer (Tokyo, Japan) and Ni-filtered Cu Kα radiation operating at 30 mA and 40 kV. The diffraction data were collected using a continuous scan mode with a speed of 4°/min. Fourier transform infrared (FTIR) spectra of the samples were collected in the 400-4000 cm⁻¹ region with a Perkin Elmer spectrometer (Singapore) using the KBr technique. The surface morphologies of the samples were observed using SEM (JEOL, Tokyo, Japan). The specific surface areas (Brunauer-Emmett-Teller [BET] method), pore volumes, and pore radii (Barrett-Joyner-Halenda [BJH] method) of the samples were obtained by N2 physisorption at 77 K using a Quantachrome apparatus (Singapore). All the samples were degassed at 150 °C prior to each analysis.
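The BET surface areas reported below are obtained by linearizing the BET equation over the low relative-pressure region of the N2 isotherm; in practice this is done by the instrument software. The sketch below is a generic illustration with hypothetical isotherm points, not the Quantachrome algorithm or the study's data.

    # Generic BET surface-area estimate from a few N2 isotherm points
    # (hypothetical values; instrument software does this internally).
    import numpy as np

    p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # p/p0
    v_ads = np.array([42.0, 48.0, 52.5, 56.5, 60.5])   # cm3(STP) g-1

    # Linearized BET: 1/(v((p0/p)-1)) = (c-1)/(vm*c)*(p/p0) + 1/(vm*c)
    y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
    slope, intercept = np.polyfit(p_rel, y, 1)
    vm = 1.0 / (slope + intercept)                      # monolayer volume
    # S = vm * N_A * sigma / V_molar, with sigma(N2) = 0.162 nm^2
    s_bet = vm * 6.022e23 * 0.162e-18 / 22414.0         # m2 g-1
    print(f"BET surface area ~ {s_bet:.0f} m2 g-1")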
Characterization

Powder X-ray diffraction (XRD) patterns of the samples were determined using a Shimadzu X6000 diffractometer (Tokyo, Japan) with Ni-filtered Cu Kα radiation operating at 30 mA and 40 kV. The diffraction data were collected in continuous scan mode at a speed of 4°/min. Fourier transform infrared (FTIR) spectra of the samples were collected in the 400-4000 cm⁻¹ region with a Perkin Elmer spectrometer (Singapore) using the KBr technique. The surface morphologies of the samples were observed using SEM (JEOL, Tokyo, Japan). The specific surface areas (Brunauer-Emmett-Teller [BET] method), pore volumes, and pore radii (Barrett-Joyner-Halenda [BJH] method) of the samples were obtained by N2 physisorption at 77 K using a Quantachrome apparatus (Singapore). All the samples were degassed at 150 °C prior to each analysis.

A diffuse-reflectance UV-vis spectrophotometry (UV-DRS) instrument (JASCO V760, JASCO; Tokyo, Japan) was used in the range of 190-850 nm to determine the band gap energy (E_g) using BaSO4 powder as a reference material. The E_g value was calculated using the Kubelka-Munk function (1):

F(R∞) = (1 − R∞)² / (2R∞)   (1)

where R∞ is the measured absolute reflectance of the sample (R_sample/R_standard), and E_g was calculated as the intercept of the plot of (F(R∞)hν)^1/2 versus hν.

Photocatalytic activity evaluation

The evaluation of the photocatalytic performance of the materials for the degradation of MB was performed in a reactor equipped with a UV lamp (Philips, 366 nm, 30 W) serving as the light source. Typically, 0.2 g of photocatalyst was added to 500 mL of MB solution (20 mg L⁻¹). Each mixture was stirred for 15 min in the dark to reach adsorption-desorption equilibrium before being subjected to light irradiation. MB photocatalytic degradation experiments were conducted both in the presence and absence of H2O2. For kinetics analyses, 2 mL of solution was sampled at certain time points of each experiment. The MB concentration was detected using a Hitachi U-2010 UV-vis spectrophotometer (Hitachi; Tokyo, Japan).

Physicochemical characterization of materials

The results for BLA and the extracted SiO2 are summarized in Table 1. The SEM results indicate that both the BLA and SiO2 exhibited irregular shapes and seemed to have an amorphous structure. The EDX spectra of BLA and SiO2 demonstrated the presence of SiO2 as the dominant component, at 27.65% and 47.65%, respectively. Increasing the TiO2 content in the samples led to the appearance of surface aggregates in the TiO2/SiO2 samples. The spherical forms may be ascribed to the formation of titanium dioxide. By comparing the TiO2 contents, we found that a higher Ti content corresponded to larger aggregates on the surface.

Fig. 2 shows the comparison of the FTIR spectra of the SiO2 and TiO2/SiO2 materials. Broad bands at 3430 cm⁻¹ and 1630 cm⁻¹ in both spectra corresponded to the stretching and bending vibrations of hydroxyl groups and surface-adsorbed water, respectively. The TiO2/SiO2 sample exhibited characteristic peaks at 500 cm⁻¹, 1080 cm⁻¹, and 950 cm⁻¹.

From the XRD pattern, it can be seen that only the anatase phase was present, while the rutile phase did not appear. Referring to previous works on the formation of TiO2 and TiO2-SiO2, calcination temperatures in the range of 400-600 °C tend to produce anatase crystals rather than rutile crystals, which form when the temperature exceeds 700 °C [17].

The grain size in the anatase phase was calculated from the full width at half maximum (FWHM) of the peaks at 2θ = 25.1° and 38.0° using Scherrer's equation (2):

D = Kλ / (β cos θ)   (2)

where λ is the X-ray wavelength, β is the FWHM of the diffraction line, θ is the diffraction angle, and K is a constant assumed to be 0.9. The data are presented in Table 2. The crystallite size depends on the TiO2 content in the composite, and the average particle diameter increased with increasing TiO2. These results are in good agreement with those reported by previous studies on the synthesis of SiO2/TiO2 [18,19,20]. A higher TiO2 content in the synthesis causes the particle size of the aggregates to increase as a consequence of the larger size of the pre-formed TiO2 nanoparticles during the sol-gel transition [16].
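As a worked illustration of equation (2), the following sketch evaluates the crystallite size for the anatase reflection near 2θ = 25.1°. The FWHM value used here is hypothetical, since the measured widths are reported only via Table 2.

```python
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_nm: float = 0.15406, K: float = 0.9) -> float:
    """D = K*lambda / (beta*cos(theta)); beta is the FWHM converted to radians
    and theta is half the 2-theta peak position. 0.15406 nm is Cu K-alpha."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical FWHM of 0.8 degrees for the anatase peak at 2-theta = 25.1:
print(f"D = {scherrer_size_nm(25.1, 0.8):.1f} nm")
```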
The microstructural properties of the materials were investigated using N2 adsorption-desorption isotherms at 77 K. Fig. 4 shows the adsorption-desorption isotherms and Barrett-Joyner-Halenda (BJH) pore distributions obtained from the desorption profiles of the SiO2 and TiO2/SiO2 samples. The pure SiO2 material exhibited an isotherm typical of a nonporous material, while the TiO2/SiO2 samples exhibited type IV isotherms, characteristic of a combination of microporous and mesoporous materials. The amount of N2 adsorbed at low relative pressures increased with TiO2 content. The calculated surface areas, pore volumes, and average pore radii are listed in Table 2. The BET specific surface area and BJH pore diameter results suggest that increasing the TiO2 content from 0% to 40% improves the specific surface area and pore volume. However, the average pore size decreased due to the formation of different pores. From the pore distributions, it can be noted that the composites exhibited different features than the pure SiO2 due to the presence of TiO2 species, and their shapes depended on the TiO2 content. The 60TiO2/SiO2 sample exhibited larger pore diameters, consistent with the larger TiO2 aggregates formed at higher TiO2 amounts in the TiO2/SiO2 material.

The band gap energy of TiO2/SiO2 was also affected by the TiO2 content. The band gap energy is theoretically related to the particle size of a semiconductor. Diffuse reflectance spectra (DRS) in the UV-vis interval were analyzed to estimate the band gap of the samples. Plots for the band gap determination of the TiO2/SiO2 materials, together with the calculated band gap energies, are depicted in Fig. 5. Increasing the Ti content in the composite enhanced the band gap energy, which reached a maximum of 3.22 eV for 40TiO2/SiO2 and 60TiO2/SiO2 (Fig. 5). The values did not vary according to the quantum size effect, by which a smaller particle size corresponds to an increasing band gap energy [16]. The reason for this inconsistency is possibly the non-homogeneous distribution of TiO2 particles on the SiO2 surface.
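The band gap extraction described above, the Kubelka-Munk function of equation (1) followed by the linear intercept of the (F(R∞)hν)^1/2 plot, can be sketched as follows. The fitting window over photon energies is an assumption; in practice it would be chosen by inspecting the linear region of the plot.

```python
import numpy as np

def band_gap_ev(wavelength_nm: np.ndarray, reflectance: np.ndarray,
                fit_window_ev: tuple = (3.0, 3.4)) -> float:
    """Estimate Eg from diffuse reflectance: F(R) = (1-R)^2 / (2R), then fit
    a line to (F(R)*hv)^0.5 versus hv inside fit_window_ev and return the
    x-intercept of that line."""
    hv = 1239.84 / wavelength_nm               # photon energy (eV)
    fr = (1.0 - reflectance) ** 2 / (2.0 * reflectance)
    y = np.sqrt(fr * hv)
    m = (hv >= fit_window_ev[0]) & (hv <= fit_window_ev[1])
    slope, intercept = np.polyfit(hv[m], y[m], 1)
    return -intercept / slope                  # hv where the fit crosses zero
```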
Photocatalytic activity

The photocatalytic performances of the prepared materials were assessed based on room-temperature MB photocatalytic degradation in the absence and presence of H2O2 as an additional oxidant. TiO2 is a photoactive material: when a photon impinges on the semiconductor photocatalyst, electrons are excited from the valence band to the conduction band. This process leaves holes (h⁺) that then interact with the solvent or hydroxyl ions in the system to form radicals. The presence of H2O2 contributes to the more rapid formation of OH radicals in the system and leads to the faster oxidation of organic compounds. The mechanism is as follows:

TiO2 + hν → e⁻ + h⁺
h⁺ + H2O → •OH + H⁺
h⁺ + OH⁻ → •OH
H2O2 + e⁻ → •OH + OH⁻

The experiments followed a typical MB photocatalytic degradation protocol: 0.25 g L⁻¹ of catalyst was added to 500 mL of an aqueous solution of MB, which was placed in the solution chamber equipped with a 20 W UV lamp as the photon source. Before light irradiation, the suspension was stirred magnetically for 15 min to develop adsorption-desorption equilibrium. Sampling was performed at certain times, and the treated solution was analyzed spectrophotometrically. Fig. 6 shows the kinetic patterns of MB photocatalytic degradation over the materials in the absence and presence of H2O2. The kinetic patterns are also compared with illumination without the addition of any photocatalyst.

It is seen that the presence of all TiO2/SiO2 materials plays a role in MB decolorization, as there is no significant change in MB concentration with UV light alone and no photocatalyst. The photocatalysis mechanism is also confirmed by the faster decolorization slope in the presence of UV light compared with the pre-treatment period without UV illumination, during which only adsorption takes place.

Five kinetic models were used to analyze the kinetic data relating to the photocatalytic degradation of MB over the TiO2/SiO2 samples. The pseudo-zero-order model describes the degradation process and can be generally expressed as (3):

C_0 − C_t = kt   (3)

The pseudo-first-order model can be expressed as (4):

ln(C_0/C_t) = kt   (4)

The pseudo-second-order model can be expressed as (5):

1/C_t − 1/C_0 = kt   (5)

The parabolic diffusion model can be expressed as (6):

(1 − C_t/C_0)/t = k t^(−1/2) + α   (6)

The modified Freundlich model can be expressed as (7):

(C_0 − C_t)/C_0 = k t^b   (7)

In these equations, C_0 and C_t are the concentrations of dye molecules in the solution at times 0 and t, respectively; k is the corresponding rate constant; α is the constant of the parabolic diffusion model; and b is the Freundlich constant. The fits of the kinetic data to the various models and the corresponding coefficients of determination (R²) are listed in Table 3. We found that in the absence of H2O2, the photocatalytic degradation of MB over SiO2 obeys pseudo-zero-order kinetics, while the kinetic data for the TiO2/SiO2 samples and TiO2 are well fitted by pseudo-first-order kinetics. This suggests that the rate of MB degradation over the TiO2/SiO2 samples depends on the amount of dye molecules in the solution, while MB removal over SiO2 is mainly governed by the adsorption process.

The addition of H2O2 to the reaction system catalyzed by 40TiO2/SiO2 and 60TiO2/SiO2 changes the kinetic models. The MB photocatalytic degradation over the TiO2/SiO2 catalysts in the absence of H2O2 obeys pseudo-first-order kinetics; the same is true for the degradation over 20TiO2/SiO2 and 30TiO2/SiO2 with the addition of H2O2. Meanwhile, the reaction over 40TiO2/SiO2 and 60TiO2/SiO2 in the presence of H2O2 is more accurately fitted by the modified Freundlich model. The pseudo-first-order mechanism is based on the assumption that the rate-limiting step is the chemical sorption of the MB target molecules and that oxidation occurs through photoinduced electron transfer between the reactants and photoactive particles [21,22]. In the modified Freundlich model, the reaction is controlled by heterogeneous diffusion arising from the surface interaction among the reactants on the photocatalyst. This suggests that the system is controlled by an adsorption-desorption mechanism and that the degradation of the dye molecules occurs on the photocatalyst surface before the products desorb from the surface patches. This change reflects the higher specific surface areas of 40TiO2/SiO2 and 60TiO2/SiO2 compared with 20TiO2/SiO2 and 30TiO2/SiO2, which potentially provide a larger active surface to adsorb MB and H2O2. The combination of higher specific surface area and band gap energy accelerates degradation through the more stable radicals formed upon interaction between the photons and the photocatalyst. This assumption is strengthened by the degradation reactions over TiO2 in the absence and presence of H2O2, which obey pseudo-first-order kinetics but show a lower initial rate compared with 40TiO2/SiO2 and 60TiO2/SiO2.
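A minimal sketch of how the pseudo-first-order constant of equation (4) can be extracted from sampled concentrations follows; the zero-intercept fit is a modeling choice made here, and the data arrays are placeholders for measured values, not results from this work.

```python
import numpy as np

def pseudo_first_order_k(t_min: np.ndarray, c_over_c0: np.ndarray):
    """Return (k, R^2) for ln(C0/Ct) = k*t fitted with zero intercept."""
    y = -np.log(c_over_c0)
    k = float(np.sum(t_min * y) / np.sum(t_min ** 2))  # least-squares slope
    ss_res = float(np.sum((y - k * t_min) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return k, 1.0 - ss_res / ss_tot

# Placeholder data: sampling times (min) and normalized concentrations.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
c = np.array([1.00, 0.68, 0.45, 0.31, 0.20])
print(pseudo_first_order_k(t, c))
```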
Based on the kinetic constants and initial rate values, the photocatalytic activity of TiO2 lies between those of the TiO2/SiO2 composites with lower and higher TiO2 contents. This suggests that forming the TiO2/SiO2 composite enhances the photocatalytic efficiency: 40TiO2/SiO2 and 60TiO2/SiO2 achieve a higher degradation rate even though they contain less TiO2 than the pure oxide.

From the perspective of utilizing bamboo leaf ash as the SiO2 source, a comparison was also made with 20TiO2/SiO2 synthesized using TEOS as the SiO2 precursor (designated 20TiO2/SiO2-TEOS), whose physicochemical characteristics are presented in Table 4. The kinetic plots presented in Fig. 7 and the pseudo-first-order plot (Fig. 6c) suggest that there is no significant difference in photocatalytic activity, as reflected by the closely matching kinetic constants in both the presence and absence of H2O2. The kinetic constants for photocatalytic degradation without H2O2 over 20TiO2/SiO2 and 20TiO2/SiO2-TEOS are 5.6 × 10⁻³ min⁻¹ and 6.0 × 10⁻³ min⁻¹, while in the presence of H2O2 the constants are 4.2 × 10⁻² min⁻¹ and 4.6 × 10⁻² min⁻¹, respectively. The higher values correspond to the higher specific surface area.

The role of composite formation of TiO2 with SiO2 can be evaluated by the turnover number (TON). The data depicted in Fig. 6d show that all TiO2/SiO2 samples exhibited higher TON values than TiO2 for the photocatalytic degradation both with and without H2O2 addition. This means that, as a photoactive material, TiO2 tends to be used more effectively when it is formed into a composite with the SiO2 support.
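The TON equation itself is not reproduced in the text above; a common convention, adopted here purely as an assumption, is the number of moles of MB degraded per mole of TiO2 in the catalyst charge. All numbers in the example are illustrative.

```python
# Turnover-number sketch under the assumed convention
# TON = (moles of MB degraded) / (moles of TiO2 charged).
M_MB = 319.85    # g/mol, methylene blue
M_TIO2 = 79.87   # g/mol, TiO2

def turnover_number(c0_mg_per_L: float, conversion: float, volume_L: float,
                    catalyst_g: float, tio2_wt_frac: float) -> float:
    mol_mb = c0_mg_per_L * volume_L * conversion / 1000.0 / M_MB
    mol_tio2 = catalyst_g * tio2_wt_frac / M_TIO2
    return mol_mb / mol_tio2

# 20 mg/L MB, 99% conversion, 0.5 L of solution, 0.2 g of 40TiO2/SiO2:
print(f"TON = {turnover_number(20.0, 0.99, 0.5, 0.2, 0.40):.3f}")
```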
The occurrence of faster oxidation in the presence of the oxidant is evidenced by the spectrophotometric spectra of the treated solutions in Fig. 8. The faster reduction of the MB absorption maximum and the hypsochromic shifts over the course of the treatment are ascribed to chemical changes in the MB structure. The change in the UV-vis spectrum due to the photo-oxidation mechanism reflects the N-demethylation of the dimethylamine group in MB [23,24]. The initial rate values suggest that the presence of H2O2 roughly doubles the degradation rate relative to its absence. The more rapid MB degradation with increasing Ti content is related to the more rapid formation of radicals, with more holes interacting with the solvent and H2O2 during the mechanism. Overall, about 99% MB degradation was reached at about 40 min by all TiO2/SiO2 photocatalyst samples under the photo-oxidation mechanism, whereas similar degradation percentages were reached at around 120 min via the photocatalysis mechanism alone. This is also attributed to the higher band gap energy at higher Ti content and the larger specific surface area and pore volume of the TiO2/SiO2 materials.

Conclusion

A series of TiO2/SiO2 composites was synthesized by a sol-gel process using biogenic silica from bamboo leaves as the silica source and titanium isopropoxide as the titania precursor. The materials showed physicochemical characteristics suitable for use as photocatalysts. TiO2 in the anatase phase was found at all TiO2 contents. It is also noted that, within the 20-60 wt% range of TiO2 studied, a higher TiO2 content increases the particle size in the anatase phase as well as the specific surface area, pore volume, and band gap energy (E_g). These physicochemical properties play an important role in the photocatalysis mechanism. The presence of H2O2, a higher specific surface area, and a higher band gap energy are the prominent factors that accelerate the degradation mechanism. Notably, the kinetics of photocatalytic degradation over 40TiO2/SiO2 and 60TiO2/SiO2 with the addition of H2O2 obey the modified Freundlich kinetic model, while all other photocatalytic degradations using the TiO2/SiO2 samples obey pseudo-first-order kinetics.

Declarations

Author contribution statement

Is Fatimah: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper.

Yoke-Leng Sim, Fethi Kooli, Oki Muraza: Analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper.
Priming with low doses of methyl-CCNU reduces the toxicity of high doses of methyl-CCNU and melphalan, and increases the lifespan of mice implanted with Lewis lung carcinoma.

Pretreatment of mice with low doses of methyl-CCNU was shown to reduce the toxicity of lethal doses of methyl-CCNU or melphalan administered one or two days following the low dose. There was an increase in survival rate, body weight, and thymus and kidney wet weight. Tissue morphology was less affected in the primed mice as compared to mice receiving the high dose or a high-low dose combination. In mice implanted s.c. with Lewis lung carcinoma, priming with 5 mg kg⁻¹ methyl-CCNU 2 days before injection of a very high (35 mg kg⁻¹) dose significantly increased the lifespan as compared to treatment with the high dose alone or with the high-low dose combination. When the dose of methyl-CCNU was further increased to 40 mg kg⁻¹, toxic death occurred, which was, however, significantly reduced by priming with the low dose. When the low-high dose combination was used twice (the high dose was given on day 7 or 9, and 18 or 20, after tumour inoculation), priming with 5 mg kg⁻¹ (but not with 10 mg kg⁻¹) two days prior to the high dose was beneficial in reducing toxic death (in two experiments), and either increased lifespan or did not significantly change it. In no case was the tumour protected by the low-high dose combinations.

Toxicity of certain anti-cancer drugs may be significantly reduced by the administration of low doses of the same or another cytotoxic drug (1 to 7 days) prior to the high dose. The optimal time interval between treatments depends on the specific drug combination and the animal species investigated (Millar et al., 1975; 1978a, b). This has been demonstrated in healthy and in tumour-bearing animals, and in the latter case a beneficial effect of low-high dose combinations was observed (Millar and McElwain, 1978c; Millar et al., 1980; Rose et al., 1975). Our own studies with methyl-CCNU (NSC-95441; 1-(2-chloroethyl)-3-(trans-4-methylcyclohexyl)-1-nitrosourea), a nitrosourea compound and a very effective anti-cancer drug in experimental animals and man, revealed its unique properties in curing a wide variety of experimental leukaemias (meningeal guinea pig leukaemia; B and T myelogenous leukaemia; viral and radiation-induced mouse leukaemias), even when administered in a single dose (Peled et al., 1982; Perk et al., 1974; 1977). Methyl-CCNU is, however, very toxic, causing acute killing of blood-forming bone marrow cells and of epithelial cells along the gastrointestinal tract. It also causes delayed toxicity to other tissues such as the kidney tubular epithelium and the epithelium of the eye lens. In our own experiments with mice, rats and chickens we have demonstrated damage to the testicular germinal cells; a delayed and sustained effect on the kidney tubular epithelium resulting in polydipsia and polyuria and an alteration in calcium and phosphorus metabolism; and the formation of eye lenticular cataracts 4 to 6 months after cessation of treatment with methyl-CCNU (Zimber & Perk, 1978; Zimber et al., 1980). The purpose of this investigation was to test whether the toxicity of methyl-CCNU might be reduced by pretreatment with low doses of the drug, and whether the combination of low and high doses of methyl-CCNU may be beneficial in the treatment of tumour-bearing mice.

Materials

Drugs

Mice were treated with methyl-CCNU and melphalan at the age of 8-10 weeks.
Methyl-CCNU was first dissolved in ethanol; mulgofen (polyoxyethylated vegetable oil, EL-620, GAF Corp., NY) was then added, and finally this solution was brought to volume with sterile 0.9% saline (1:1:16 v/v, respectively). Also, 100 mg melphalan (Alkeran) was dissolved in 1 ml acid alcohol and diluted in 9 ml buffer (materials supplied by Burroughs Wellcome and Co., London). Both drugs were injected i.p. (in 0.2 to 0.4 ml) and always from 9 to 11 a.m. Low doses of methyl-CCNU were 5 and 10 mg kg⁻¹. High doses of methyl-CCNU ranged from 30 to 55 mg kg⁻¹. Melphalan was used in doses of 15 and 20 mg kg⁻¹. The time intervals between treatment with low and high doses of drugs ranged between 1 and 3 days.
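Given the stated i.p. injection volumes of 0.2-0.4 ml, the following is a minimal sketch of the dose-to-volume arithmetic; the stock concentration in the example is hypothetical, as the working concentrations are not stated above.

```python
def injection_volume_ml(body_weight_g: float, dose_mg_per_kg: float,
                        stock_mg_per_ml: float) -> float:
    """Volume (ml) of drug solution delivering the target per-weight dose."""
    dose_mg = dose_mg_per_kg * body_weight_g / 1000.0
    return dose_mg / stock_mg_per_ml

# A 25 g mouse dosed at 5 mg/kg from a hypothetical 0.5 mg/ml stock:
print(f"{injection_volume_ml(25.0, 5.0, 0.5):.2f} ml")  # -> 0.25 ml
```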
Lewis lung carcinoma

Lewis lung carcinoma (3LL) (kindly provided by Dr S. Segal, the Weizmann Institute of Science, Rehovot) was serially transplanted s.c. in C57BL male mice. For implantation, non-necrotic areas of the tumour were treated with 0.25% trypsin and 0.1% DNAase, washed and diluted in sterile PBS (pH 7.4). Cell suspensions containing 2 × 10⁵ or 1 × 10⁶ cells were inoculated s.c. on the abdomen. Eight days after tumour inoculation, mice were treated with a low dose of methyl-CCNU followed 2 days later by treatment with a high dose of this drug. Other experimental groups were treated with the high dose alone, or with the high dose followed 2 days later by the low dose. Tumour size was determined by measuring the two longest diameters with a caliper twice a week. In all the experiments body weights were recorded once a week. Organ wet weights (thymus, spleen, kidneys and testes; also lung and s.c. tumours in mice inoculated with 3LL tumour cells) were recorded for all moribund and dead mice and for those deliberately killed. These organs were processed routinely for histological examination and stained with haematoxylin and eosin.

Survival studies

The effect of low doses of methyl-CCNU administered prior to sublethal and lethal doses in normal mice

In a preliminary experiment we investigated the effect of 10 mg kg⁻¹ of methyl-CCNU given 3 days prior to 30 mg kg⁻¹, a high dose which was used by us previously in studies of experimental leukaemia and drug toxicity (Peled et al., 1982; Perk et al., 1974; Perk & Pearson, 1977; Zimber & Perk, 1978; Zimber et al., 1980). This protocol did not improve the condition of mice observed for a period of 10 weeks as compared to mice receiving the high dose alone or a high-low dose combination. Thus, in all methyl-CCNU treated groups, body weights were significantly decreased, polyuria was evident and the plasma urea level was elevated by 50% (data not shown). Also, pretreatment with 5 mg kg⁻¹ methyl-CCNU 3 days before the administration of a lethal dose of 50 mg kg⁻¹ of this drug did not alter the mortality rate. However, body weights, thymus and kidney wet weights and morphology, examined in mice surviving the low-high (5/12 survivors) and high-low (4/12 survivors) dose combinations 40 days after administration of the high dose, showed better recovery in the first group (Table I). In Table I the number of mice with pathologic changes in the kidney and thymus is given. These were arbitrarily designated +, ++ or +++ depending on the mass of tissue affected and the severity of the changes: kidney, tubular swelling and loss; thymus, thinning of the cortex and hypocellularity.

We then tested the effect of 5 mg kg⁻¹ of methyl-CCNU administered one or two days before the lethal dose of 50 mg kg⁻¹. A protective effect of pretreatment with the low dose was evident, with a significantly higher survival rate (Figure 1; the interval between the low dose, 5 mg kg⁻¹, and the high dose, 50 mg kg⁻¹, was 1 or 2 days). Mice treated with high-low dose combinations or with the high dose only showed 58 to 67% mortality within 3 weeks, as compared to 17% mortality with the low-high dose combinations. Survivors were killed 22 days after treatment, and tissues were wet weighed and examined microscopically. These results showed increased (or better recovery of) thymus weight in mice pretreated with the low dose of methyl-CCNU 1 or 2 days prior to the high dose (Table II). Also, microscopic examination of the thymus and kidney showed a decreased incidence and degree of hypocellularity in the thymus and a decrease in the swelling of the tubular epithelium in the kidneys.

The effect of a low dose of methyl-CCNU administered prior to a lethal dose of melphalan in normal mice

In this experiment low doses (5 and 10 mg kg⁻¹) of methyl-CCNU were given 1, 2 or 3 days prior to a lethal dose (20 mg kg⁻¹) of melphalan, which when given alone caused 100% mortality within 7 days (Figure 2). Mice pretreated with low doses of methyl-CCNU showed a reduced mortality rate. When the drugs were given one day apart, 42% survival was evident in the mice primed with 5 mg kg⁻¹ methyl-CCNU. When the time interval was extended to 2 days, the best effect (50% survival) was obtained using 10 mg kg⁻¹ as the priming dose. Priming with 5 or 10 mg kg⁻¹ 3 days before melphalan had only a slight effect on survival. Based on these results, the effect of priming mice with 10 mg kg⁻¹ methyl-CCNU 2 days before treatment with 15 mg kg⁻¹ melphalan was studied. Results of this experiment (data not shown) showed that this treatment increased survival (13/20 in the melphalan-alone group vs. 20/22 in methyl-CCNU primed mice) and enhanced the recovery of thymus weight. Corrected thymus wet weights were 1.95 ± 0.43, 0.73 ± 0.46 and 1.13 ± 0.33 in control, melphalan-treated, and methyl-CCNU primed and melphalan-treated mice, respectively.

The effect of low-high dose combinations of methyl-CCNU in the treatment of Lewis lung carcinoma

Since priming of mice with low doses of methyl-CCNU showed a protective effect against lethal doses in healthy mice, it was of interest to test whether high doses of methyl-CCNU would also be well tolerated in mice previously implanted with a tumour, and whether a therapeutic gain might be achieved with low-high dose regimens. This could happen if normal tissues were protected by the priming low dose while the tumour was not (Millar et al., 1982). We first used doses of methyl-CCNU which were higher than those previously used by us in experimental tumour systems (Peled et al., 1982; Perk et al., 1974; 1977). Mice were implanted s.c. with 1 × 10⁶ 3LL tumour cells and treated 10 days later (when tumour size was ~0.5-1.0 cm³) with 45 or 40 mg kg⁻¹ methyl-CCNU, and with combinations of 40 and 5 (high-low) and 5 and 40 mg kg⁻¹ (low-high). It appeared, however, that these doses were toxic: 30-60% of the treated mice died without tumours 6-25 days following treatment with methyl-CCNU. The mortality rate was significantly decreased with the low-high combination as compared to the high-low dose combination and to treatment with 45 mg kg⁻¹ (which caused the highest mortality), but was not different from that observed in mice given 40 mg kg⁻¹ methyl-CCNU. Overall, there was no beneficial effect of priming with the low dose on tumour growth.
We then tested the effect of 35 mg kg⁻¹ methyl-CCNU as the high, tumour-killing dose, and the low-high dose combination of 5 and 35 mg kg⁻¹ administered 2 days apart. Results of this experiment are illustrated in Figure 3. All the untreated tumour-inoculated mice died within 30 days (MST was 18 days). In contrast, mice treated with 35 mg kg⁻¹, or with 35 mg kg⁻¹ followed 2 days later by 5 mg kg⁻¹, showed longer survival (MST was 35 and 37 days, respectively). Following a temporary shrinking of the tumours for ~11-18 days, a new burst of tumour growth took place, which caused death within 34-58 days following tumour inoculation. The most pronounced effect was observed in the low-high dose group, with an MST of 44 days. However, even in this group 10/12 mice died within 60 days after tumour inoculation, with lung metastases in most animals and, occasionally, metastases in other organs (i.e. kidney, liver).

Double treatment with low and high dose combinations of methyl-CCNU

In two experiments mice were treated twice with a high dose of methyl-CCNU and with low-high and high-low dose combinations. The time interval between the high and low doses was two days. In the first experiment an initial high dose of 30 mg kg⁻¹ methyl-CCNU was administered on day 9 and a second high dose on day 20 following s.c. inoculation of 2 × 10⁵ 3LL tumour cells. A low dose (5 mg kg⁻¹) was administered 2 days before or after the high dose. In this experiment there was no difference between the low-high and the high-low combinations until day 40, but survival at 50 days was 2/10 in the low-high combination as compared to 6/10 in the high-low dose combination. At 60 days there were 2/10 survivors in both groups. The rate of death was much faster in the group treated twice with the high dose alone; MST was 30 days as compared to 41 and 44 days in the low-high and high-low dose combinations, respectively. All the untreated tumour-inoculated control mice died within 20 days following inoculation. This experiment was then repeated using 30 mg kg⁻¹ as the high dose, given 7 and 18 days after tumour inoculation, and either 5 or 10 mg kg⁻¹ methyl-CCNU as the low dose. Results of this experiment (Table III) showed decreased toxicity, a decreased rate of tumour development and an increased lifespan in the low-high dose treated mice receiving 5 mg kg⁻¹, as compared to all other drug treatments, which showed a similar increase in lifespan. Mortality in this experiment was mostly due to toxicity until day 46. Non-treated tumour-inoculated mice died 18 ± 2 days after tumour inoculation. Treatment with the high dose of methyl-CCNU plus vehicle administration 2 days afterwards served as control. At 90 days there were three survivors in the group primed with 5 mg kg⁻¹ and one survivor in the group primed with 10 mg kg⁻¹ methyl-CCNU, and all these mice were without tumours on macroscopic and microscopic examination.

Discussion

The results herein show that priming with low doses of methyl-CCNU at appropriate times before treatment with a high dose can reduce its toxicity in normal and in 3LL tumour-bearing mice, and that this protocol does not protect the tumour. Thus, 5 or 10 mg kg⁻¹ methyl-CCNU administered 1 or 2 days prior to lethal doses markedly decreased mortality and enhanced body weight gain, thymus and kidney weight, and the normal morphology of these organs. Moreover, this treatment was beneficial when employed with lethal doses of melphalan, another alkylating agent and widely used anti-cancer drug.
Also, and most important, the combination of low and high doses of methyl-CCNU appeared to provide a therapeutic gain in mice bearing the rapidly proliferating and highly metastatic 3LL tumour. Even in experiments with Lewis lung carcinoma in which the lifespan was not significantly increased by priming with a low dose, this treatment was not worse than the administration of the high dose alone. There was evidence for a decrease in toxic deaths in tumour-bearing mice treated with the low-high dose combinations whenever the high doses administered alone were lethal, in either single or double treatment modalities. There was no protection of the tumour by priming with low doses of methyl-CCNU.

This phenomenon of a beneficial effect of low doses of cytotoxic anti-cancer drugs administered prior to high doses, or prior to therapeutic or lethal doses of gamma irradiation, was previously reported by Rose et al. (1975) and by others (Gregory et al., 1971; Blackett & Aguado, 1979; Millar & Hudspith, 1976; Millar et al., 1975; 1978b, c, d). No mechanistic explanation for this effect has been elucidated yet. It was suggested that DNA precursors released from dead cells are involved (Millar et al., 1978a). Since in sheep and man (Millar et al., 1978d; Hedley et al., 1978) the beneficial effect persists, and is actually optimal 7 days after such treatment, when cellular breakdown products are probably not present in large quantities, one should postulate another mechanism. Since the protective effect of low doses of cytotoxic drugs against radiation damage and death was demonstrated for a variety of drugs with different pharmacokinetic behaviour, one may consider the role of changes in drug metabolism by microsomal enzymes, and thus the availability of drugs or their active metabolites to target tissues (i.e. bone marrow, intestinal epithelium) (Conney, 1965; Orrenius et al., 1969; Hill et al., 1975; Oliverio, 1976). BCNU, a nitrosourea compound and anti-cancer drug, was shown to alter the activity of various microsomal enzyme systems (Wilson & Larson, 1981). This occurred, however, 20 days after treatment.

Although the nitrosoureas and other alkylating agents are considered to be cell non-specific, the DNA synthetic phase is the most sensitive to their action (Tannock, 1978). What effect low doses of such drugs have on the progression of stem cells and proliferating cells through the phases of the cell cycle is not known. Such treatment may cause the clustering of some cells in a phase of the cell cycle which is more resistant to the high doses of the drug administered a few days afterwards (Kobayashi et al., 1981). Millar et al. (1978b, c) have shown that the protective effect of priming with low doses of cyclophosphamide (CY) prior to high doses of CY or radiation is due to faster recovery of bone marrow stem cells and not to a reduction in the fraction of stem cells killed. However, it is difficult to distinguish between true stem cells and cells capable of replication and tissue 'regeneration' following acute damage (Potten et al., 1979). Improvement in stem cell survival is more difficult to demonstrate than the recovery of more mature cell populations. Priming with low doses of alkylating agents such as CY and nitrosoureas may kill maturing cells (i.e. in the bone marrow), thus releasing stem cells from their inhibitory control (Fried et al., 1973).
One should note that priming with low doses of CY was also shown to protect slowly proliferating cells of the urothelium and the lung (Millar et al., 1978a; Collis et al., 1980; Evans et al., 1983b). If cell recovery in both rapidly and slowly dividing cells is indeed the main event underlying the effect of low doses of CY, methyl-CCNU and other drugs, one should analyse repair enzymes and processes following such treatment. Priming with low doses of CY was shown to be beneficial in the treatment of patients with melanoma (Hedley et al., 1978) and in the treatment of human neoplasms transplanted into immunodeprived mice (Evans et al., 1983b; 1984). The low-high dose combinations of nitrosoureas may be especially suitable for cancer patients. These compounds are very efficient tumour cell killing drugs (Valeriote et al., 1968) which are, however, very toxic and are therefore used with long time intervals (6 to 8 weeks) between subsequent treatments (Carter & Livingston, 1982). Administration of a low dose of nitrosoureas prior to the high doses should be feasible timewise. A recent report on the inconsistency of the effect of priming with low doses of CY prior to high doses of melphalan in mice (Kulkarni et al., 1985) seems to point to potential difficulties in adopting this approach in man. However, the multitude of data pointing to a beneficial effect of such treatment in tumour-bearing mice, including those transplanted with human tumours, calls for further studies on the effect of low doses of cytotoxic drugs on different cell populations, according to their stage of differentiation and maturation, and on recovery processes and drug metabolism.
Theory of Acceleration of Decision Making by Correlated Time Sequences

Photonic accelerators have been intensively studied to provide enhanced information processing capability and to benefit from the unique attributes of physical processes. Recently, it has been reported that chaotically oscillating ultrafast time series from a laser, called laser chaos, provide the ability to solve multi-armed bandit (MAB) problems, or decision-making problems, at GHz order. Furthermore, it has been confirmed that the negatively correlated time-domain structure of laser chaos contributes to the acceleration of decision-making. However, the underlying mechanism of why decision-making is accelerated by correlated time series is unknown. In this study, we demonstrate a theoretical model to account for the acceleration of decision-making by correlated time sequences. We first confirm the effectiveness of the negative autocorrelation inherent in time series for solving two-armed bandit problems using Fourier transform surrogate methods. We propose a theoretical model that treats the correlated time series subjected to the decision-making system and the internal status of the system therein in a unified manner, inspired by correlated random walks. We demonstrate that the performance derived analytically by the theory agrees well with numerical simulations, which confirms the validity of the proposed model and leads to optimal system design. The present study paves the way for improving the effectiveness of correlated time series for decision-making, impacting artificial intelligence and other applications.

Introduction

Optics and photonics have been extensively studied for high-speed information processing in various applications, especially machine learning [1][2][3][4][5]. One of the important branches of the research frontier is reinforcement learning [6], wherein the impacts of photonics have been intensively examined [7][8][9]. The multi-armed bandit (MAB) problem concerns decision-making for obtaining high rewards from multiple selections, called arms, wherein the best arm is initially unknown. MAB problems involve a difficult tradeoff known as the exploration-exploitation dilemma, which captures a fundamental aspect of reinforcement learning [6]. The physical properties of photons have been utilized in solving MAB problems [7,8]. In particular, chaotically oscillating ultrafast time series generated by semiconductor lasers, called laser chaos, have been successfully utilized in resolving two-armed bandit problems at GHz order, which we hereafter call the laser chaos decision-maker [7]. As introduced below, the principle of the laser chaos decision-maker simply depends on a signal-level comparison between the chaotically oscillating time series and a threshold level. It has also been demonstrated that such a level-comparison-based principle is scalable in a tree architecture, which has been experimentally demonstrated for up to 64 arms [10]. Furthermore, applications of the laser chaos decision-maker have been studied to benefit from its prompt adaptation abilities in dynamically changing, uncertain environments [11][12][13][14]. Takeuchi et al. applied laser chaos decision-making to channel selection problems in wireless communications [11], in which communication channels suffer from dynamically changing disturbances due to traffic, interference, or fading [15]. Kanemasa et al. extended the principle of the laser chaos decision-maker to channel bonding in IEEE 802.11ac networks [12]. Furthermore, Duan et al.
optimized user-pairing in non-orthogonal multiple access (NOMA) systems by the laser chaos decision-maker [13]. Moreover, Kanno et al. combined laser chaos-based decision-making with photonic reservoir computing, where adaptive model selection is realized to enhance the computing capability [14].

In [7], it was demonstrated that the autocorrelation inherent in laser chaos time series impacted the decision-making performance. Indeed, chaotic time series with negative maximum autocorrelation yield superior performances when compared with pseudorandom numbers, colored noise, and random shuffle surrogate data of the original laser chaos time series [7]. Furthermore, Okada et al. extensively examined the decision-making acceleration by laser chaos using surrogate analysis, such as the Fourier transform surrogate [16]. It was found that both the statistical distribution of the amplitude of the time series and the negative autocorrelation therein impact decision-making performance [16].

In the literature, the usefulness of negative autocorrelation in time series has been theoretically analyzed with regard to code division multiple access (CDMA) [17][18][19]. To achieve high performance in CDMA, the cross-correlation between the spreading sequences must be small. The optimal negative autocorrelation to minimize the interference has been mathematically derived, and the chaotic map that generates the smallest cross-correlation was defined. In addition, ref. [19] clarifies that the negative autocorrelation that minimizes cross-correlation accelerates the performance of solution search algorithms for combinatorial optimization problems. An FIR filter to generate the optimal chaotic CDMA sequence was also proposed based on the negative autocorrelation analysis [20]. Moreover, the effectiveness of such optimal negative autocorrelation codes has been experimentally demonstrated using software-defined radio systems [21].

However, regarding decision-making, the fundamental underlying mechanism of how the negative autocorrelation inherent in time series yields superior performance is still unclear. That is, the results in the previous studies [7,16] are all limited to empirical findings. If the effectiveness of the negative autocorrelation in laser chaos or correlated time series for decision-making is theoretically grasped, it allows, for example, a systematic design approach to derive the optimal autocorrelation depending on given problem situations. Besides, the insights gained by mathematical modeling ensure the reliability of the effectiveness provided by the negative autocorrelation in time series.

In this study, we theoretically construct a model to account for the effect of negative autocorrelation on decision-making performance. The theory of this study is inspired by the correlated random walk [22,23]. Contrary to conventional random walks, which have transition probabilities independent of prior events, correlated random walks have probabilities dependent on prior events [22,23]. That is, the notion of correlated random walks allows us to represent state-dependent, differing probability evolution dynamics. Such a theoretical architecture accounts for the interplay between the correlated time series and the evolution of decision-making. We clarify the validity of the proposed theoretical model by confirming the excellent agreement between the decision-making performances derived analytically by the proposed model and those obtained by numerical simulations.
The rest of the article is organized as follows. Section 2 reviews the mechanism of the laser chaos decision-maker. In Section 3, we introduce a numerical method to generate an arbitrary autocorrelation in time series, by which the relevance between autocorrelation and the resultant decision-making performance is systematically examined. Section 4, which is the most important contribution of this study, demonstrates the theoretical model of decision-making based on correlated time sequences. Section 5 demonstrates the agreement of the decision-making performances predicted by the proposed theory and numerical simulations. Section 6 concludes the article.

Laser Chaos Decision-Maker: Using Time Series for Decision-Making

As mentioned in Section 1, laser chaos time series allow ultrafast decision-making. Figure 1(a) schematically illustrates the architecture of the laser chaos decision-maker for a two-armed bandit problem, which is the scope of this study [7]. The two arms are called slot machines A and B. Laser chaos is generated by subjecting a portion of the output light back to the laser via an externally arranged reflector, which is called delayed feedback. We compare the intensity level of the laser chaos with a certain threshold value, which is denoted by T(t). The decision-making is executed as follows: when the sampled value of the time series is above the threshold, the decision is to choose slot machine A; otherwise, slot machine B is selected. The threshold T(t) is updated according to the result of the slot machine play. Overall, the threshold update is conducted under the assumption that the revised threshold will lead to the same decision in the subsequent decisions when the present action is successful, whereas the threshold is revised in the opposite direction when the present action is a failure [7,8,10]. More precisely, the value of the threshold T(t) is determined by

T(t) = k [TA(t)],   (1)

where TA(t) is called the threshold adjuster and [*] is the nearest integer to *. [TA(t)] can take an integer value ranging from −N to N, with N being a natural number. Therefore, the number of levels that the threshold adjuster can take is 2N + 1. Here, k is a coefficient to convert [TA(t)] to T(t). TA(t) is updated depending on the result of the action conducted at t − 1:

TA(t) = α TA(t − 1) − Δ when machine A was selected at t − 1 and won,
TA(t) = α TA(t − 1) + Ω when machine A was selected at t − 1 and lost,
TA(t) = α TA(t − 1) + Δ when machine B was selected at t − 1 and won,
TA(t) = α TA(t − 1) − Ω when machine B was selected at t − 1 and lost,   (2)

where Δ denotes the increment, which is given by Δ = 1 in this study. α is the forgetting parameter for weighting previous threshold adjuster values, ranging from 0 to 1, that is, 0 ≤ α ≤ 1. Ω is called the penalty parameter [7,8].

A hierarchical formation of such two-armed bandit problems has been proposed to deal with problems with more than two arms [10]. The elemental structure is the abovementioned two-armed situation with a dynamically updated threshold. This study focuses on two-armed situations as the first theoretical analysis of the laser chaos decision-maker. The analysis of cases with more than two arms can be done by extending the method proposed in this study; however, it would become a very complicated analysis. Therefore, we focus on a simple case in this study, and the cases with more than two arms will be our future work.
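The following is a minimal sketch of the threshold-adjuster dynamics of equations (1) and (2), in the simplified setting used later (k = Δ = Ω = α = 1, with the adjuster clipped to [−N, N]). The input signal and reward draws are placeholders, and the exact implementation in [7] may differ in detail.

```python
import random

def run_decision_maker(signal, p_a, p_b, N=2, k=1.0, delta=1.0,
                       omega=1.0, alpha=1.0):
    """Compare each incoming sample with T(t) = k*[TA(t)], play the chosen
    machine, and update the adjuster: a win moves the threshold so that the
    same decision becomes more likely; a loss moves it the opposite way."""
    ta, choices = 0.0, []
    for s in signal:
        threshold = k * round(ta)          # [TA(t)] as the nearest integer
        choose_a = s > threshold
        win = random.random() < (p_a if choose_a else p_b)
        if choose_a:
            ta = alpha * ta + (-delta if win else omega)
        else:
            ta = alpha * ta + (delta if win else -omega)
        ta = max(-N, min(N, ta))           # keep the adjuster within [-N, N]
        choices.append(choose_a)
    return choices

# Placeholder input: a white random signal standing in for laser chaos.
sig = [random.gauss(0.0, 1.5) for _ in range(1000)]
print(sum(run_decision_maker(sig, 0.9, 0.7)) / 1000)  # fraction of A-selections
```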
Effectiveness of Correlated Time Series on Decision-Making

As described in Section 1, the performance of the two-armed bandit problem using laser chaos time series depends on the autocorrelation inherent therein [7,16]. The best performance is obtained when the autocorrelation of the time series exhibits its negative maximum [7]. Furthermore, the surrogate data analysis of laser chaos time series clarifies the impact of time-domain correlation [10]. In this study, to examine the influence of correlations in time series in a systematic manner, we introduce an artificially constructed time-correlated time series and analyze its influence on decision-making performance. We construct a time series whose amplitude follows a Gaussian distribution while having a specified autocorrelation by utilizing the Fourier transform surrogate method [24]. The steps involved are as follows:

(1) A time series r(t) is constructed with t ranging from 0 to T − 1, where T is the length of the time series. Here, we suppose that r(t) = r(0) λ^t. Specifically, r(t) = λ r(t − 1) holds, indicating that r(t) carries a time correlation specified by λ with respect to its previous point r(t − 1). We call λ the autocorrelation coefficient in this study.

(2) The Fourier transform of r(t) is computed.

(3) The phases of the Fourier components are replaced with uniformly distributed random phases while their amplitudes are retained, and the inverse Fourier transform yields the surrogate time series r′(t).

Through the process above, the autocorrelation of the resultant r′(t) is equivalent to that of r(t). However, the amplitude distribution of r′(t) follows a Gaussian profile because of the randomized phase factors in the Fourier domain. The above-described process corresponds to a special case of the Fourier transform surrogate [24].

Snapshots of the time series generated when the time correlation is specified by λ = 0.8, 0, and −0.8 are shown in Figures 1(b)-1(d), respectively. All of the time-series signals appear random, but there are distinct differences in their autocorrelation. With λ = 0.8, the signal level at time t is similar to the signals around that point; that is, radically large signal-level differences in consecutive data points are rarely observed (Figure 1(b)). Conversely, with λ = −0.8, meaning a strong negative autocorrelation, the signal at time t has almost the exact opposite value to the surrounding data (Figure 1(d)). As a result, the time series exhibits a highly time-varying structure. Meanwhile, the histograms of the signal levels of these time series follow the same Gaussian distribution.

It should be noted that the above-described Fourier transform surrogate-based procedure does not perfectly reproduce the experimentally observed laser chaos time series. This is because the correlation in the above process is determined only by r(t) = λ r(t − 1) in Step (1), whereas the experimental laser chaos involves very long-range time correlations via delayed optical feedback. However, we consider that the Fourier transform surrogate-based method is quite beneficial to this study for several reasons. The first is that the correlation between two successive points can be specified by an arbitrary number, allowing λ values even smaller than −0.5, which was not experimentally feasible, at least in the previous studies [7,10]. Therefore, a systematic analysis is enabled for a wide range of λ. The second is that the amplitude distributions are kept equivalent to one another even when λ is configured to different values, which also allows a clear examination of the impact of the autocorrelation inherent in the time series. For these reasons, we use the time series r′(t) generated using the above process.
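A minimal sketch of steps (1)-(3), assuming a standard phase-randomization implementation; the normalization details in [24] may differ.

```python
import numpy as np

def correlated_surrogate(length: int, lam: float, seed=None) -> np.ndarray:
    """Fourier transform surrogate of r(t) = lam**t: randomize the spectral
    phases while keeping the amplitudes, so the autocorrelation of r(t) is
    preserved while the amplitude histogram becomes Gaussian-like."""
    rng = np.random.default_rng(seed)
    r = np.concatenate(([1.0], np.cumprod(np.full(length - 1, lam))))
    spec = np.fft.rfft(r)
    phase = np.exp(2j * np.pi * rng.random(spec.size))
    phase[0] = 1.0                   # keep the DC component real
    if length % 2 == 0:
        phase[-1] = 1.0              # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * phase, n=length)

x = correlated_surrogate(10000, -0.8, seed=1)
print(np.corrcoef(x[:-1], x[1:])[0, 1])   # close to the requested lambda
```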
We then analyze how the MAB performance depends on the autocorrelation specified by λ. In evaluating the performance on the MAB problem, we employ the correct decision rate (CDR). CDR(t) is defined as the rate of selecting the slot machine with the highest reward probability at time step t, averaged over m simulations or cycles. That is, CDR(t) is expressed by

CDR(t) = (1/m) Σ_{i=1}^{m} C_i(t),

where m is the number of cycles with different random initial conditions. Here, C_i(t) = 1 when the slot machine with the highest reward probability is selected at the t-th decision (or time t) of the i-th cycle; in other words, correct decision-making is conducted. Otherwise, C_i(t) = 0, meaning that correct decision-making is not executed. In the following simulations, m = 60000.

Figure 2 summarizes the calculated CDR at t = 1000 as a function of the autocorrelation coefficient λ in several different reward environments and settings of the decision-maker. The reward probabilities of the two slot machines, called machine A and machine B, are denoted by P_A and P_B, respectively. For example, in Figure 2(a), P_A and P_B are given as 0.9 and 0.3, respectively. In this situation, the correct decision is to select machine A, as it is the slot machine with the highest reward probability (P_A > P_B). In addition, the number of levels of the threshold adjuster is 5, as specified by N = 2. It should be emphasized that a higher CDR is obtained when the autocorrelation is negative; indeed, the best CDR is given by λ = −0.6.

Table 1 summarizes the reward probabilities of the slot machines and the number of threshold levels N for each MAB problem.

Table 1: The settings of the reward probabilities of slot machines (P_A and P_B) and the parameter N that specifies the number of threshold levels (2N + 1).

In Figures 2(b) and 2(c), P_A and P_B are configured differently while maintaining the same threshold number as in Figure 2(a) (i.e., N = 2). More specifically, the difference between P_A and P_B is only 0.1 in Figure 2(b), with (P_A, P_B) = (0.6, 0.5). Similarly, the difference is 0.2 in Figure 2(c), with (P_A, P_B) = (0.9, 0.7). That is, the difficulties in finding the best machine are configured differently. Here, it should be noted that the highest CDR is accomplished when the autocorrelation coefficient λ is given by −0.8 and −0.3 in Figures 2(b) and 2(c), respectively. That is, the best decision-making is realized with negatively correlated time series. The reward settings of (P_A, P_B) in Figures 2(d)-2(f) are the same as in Figures 2(a)-2(c), respectively. The only difference is in the threshold setting, which is specified by N = 4. The achieved CDR was different because of the change in the value of N. However, it should be noted that the highest CDR performances are all obtained with negative autocorrelation, when λ is given by −0.6, −0.9, and −0., respectively.
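Before moving to the theoretical model, here is a small sketch of the CDR estimator used in these simulations; the flags array below is a simulated placeholder for the recorded indicators C_i(t).

```python
import numpy as np

def correct_decision_rate(correct_flags: np.ndarray) -> np.ndarray:
    """CDR(t) = (1/m) * sum_i C_i(t), with correct_flags of shape (m, T)
    holding the 0/1 indicators C_i(t) of correct decisions."""
    return correct_flags.mean(axis=0)

# Placeholder: m = 1000 cycles of T = 1000 steps of simulated indicators.
flags = (np.random.default_rng(0).random((1000, 1000)) < 0.8).astype(float)
cdr = correct_decision_rate(flags)
print(cdr[-1])   # CDR at t = 1000
```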
Theoretical Model of Decision-Making Using Correlated Time Series

This section presents a mathematical model to account for the impact of correlated time series on decision-making. Here, we focus on two-armed bandit problems where the two slot machines are called machines A and B. Figure 3 shows a conceptual architecture of the proposed model. We assume that slot machine A has a larger reward probability than slot machine B, that is, P_A > P_B. Therefore, the correct decision is to choose slot machine A. Here, we assume that the subjected time sequence takes either of two signal levels, specified by +x and −x, which are denoted by the sky blue marks in Figure 3. In the meantime, remember that the threshold level T(t), given by equation (1), takes in total 2N + 1 different signal levels, each of which is represented by −N, −N + 1, ..., N − 1, N. Furthermore, we assume that the higher-level signal +x satisfies N − 1 < x < N, meaning that the upper signal level of the incoming time series is below the maximum threshold level but greater than the second maximum threshold. Similarly, the lower signal level (−x) satisfies −N < −x < −N + 1, indicating that the lower signal level of the subjected time series is above the minimum threshold level but less than the second minimum threshold.

Based on the decision-making principle described in Section 2, we summarize the decision-making process in the present situation. Let the signal level of the incoming time series at time t and the threshold level at time t be denoted by s(t) and T(t), respectively.

(1) If s(t) is given by +x, the decision is to select machine A because s(t) = +x is greater than any threshold level up to N − 1; only when T(t) = N is machine B selected.

(2) Similarly, if s(t) is given by −x, the decision is to select machine B unless T(t) = −N, in which case machine A is selected.

Furthermore, the incoming signal s(t) contains inherent correlations, as discussed in Sections 1 and 2. Given that the s(t) under study is a two-level signal train, we can consider the probability that the signal level s(t + 1) at time t + 1 differs from s(t) at time t, that is, s(t + 1) = +x after s(t) = −x, or s(t + 1) = −x after s(t) = +x. Since the autocorrelation between two consecutive timings is given by λ, such a signal-level changing probability is given by μ = (1 − λ)/2. Conversely, the probability of exhibiting the same signal level is given by 1 − μ = (1 + λ)/2. Therefore, such stochastic processes are represented by the conditional probabilities

Pr(s(t + 1) = ±x | s(t) = ∓x) = μ and Pr(s(t + 1) = ±x | s(t) = ±x) = 1 − μ,

where Pr denotes probability.
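The two-level correlated signal train just described can be generated directly from the flip probability μ = (1 − λ)/2; a minimal sketch follows, where the value of x is an arbitrary choice within N − 1 < x < N.

```python
import random

def two_level_signal(length: int, lam: float, x: float = 1.5) -> list:
    """Signal train s(t) in {+x, -x}: each step flips polarity with
    probability mu = (1 - lam)/2, giving lag-1 autocorrelation lam."""
    mu = (1.0 - lam) / 2.0
    s = [x if random.random() < 0.5 else -x]
    for _ in range(length - 1):
        s.append(-s[-1] if random.random() < mu else s[-1])
    return s

print(two_level_signal(10, -0.8))   # strongly alternating for negative lambda
```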
The important aspect is that the internal status of the decision-maker, represented by T(t), is tightly coupled with the correlated time series subjected to the system as well as with the betting results of the slot machine plays, which are specified by P_A and P_B. The behavior of the revision of T(t) is described by the following cases. When slot machine A is selected, the threshold is updated to T(t) − 1 when machine A wins (with probability P_A) and to T(t) + 1 when machine A fails (with probability 1 − P_A). When slot machine B is selected, the threshold is updated to T(t) + 1 when machine B wins (with probability P_B) and to T(t) − 1 when machine B fails (with probability 1 − P_B). It should be noted that, regardless of the machine selection and betting result, the threshold level always increases or decreases in this case; staying at the same threshold level is not allowed. The procedure summarized above is a special case of the principle shown in Section 2, obtained by specifying the parameters therein as k = Δ = Ω = α = 1. In addition, we emphasize that upper and lower limits of T(t) are newly imposed: a decrement or increment of the threshold is not permitted beyond the range between −N and N. Hereafter, we refer to this as the stopping rule. This setting is the simplest case for the laser chaos decision-maker, and we use it to keep our analysis model from becoming too complicated. Cases with other settings may be treated by extending the proposed scheme, but this will be a future project.

To theoretically deal with the abovementioned, seemingly complex situations, we introduce a set v_t = (T(t), s(t)), which represents the state of the model at time t. The space spanned by v_t is {−N, −N + 1, ..., N − 1, N} × {+x, −x}. Herein, we can characterize the state transition probability between two states. Suppose, for example, that the current state is specified by (i, +x) while T(t) is not at the border, that is, −N + 1 ≤ i ≤ N − 1, and consider the probability of the state transition to (i + 1, −x). It should be noted that the decision is to select machine A in the given situation (i, +x), since the signal level +x is larger than the current threshold T(t). In this state transition from (i, +x) to (i + 1, −x), the threshold is incremented (i → i + 1) and the incoming signal level is reversed (+x → −x). Such a situation occurs when the play of slot machine A is unsuccessful and the incoming signal level is flipped, whose probability is given by (1 − P_A)μ. Similarly, all transition probabilities are determined. The notion of the correlated random walk allows us to summarize such transitions in a unified manner [22,23].

We first introduce the probability of the state v_t by π_t(v) = π_t(i, σ), meaning the probability of the state with T(t) = i and s(t) = σ. In addition, we define a probability vector π_t(i), given by

π_t(i) = (π_t(i, +x), π_t(i, −x))^T,

which combines the probabilities involving the threshold level being i for the different signal levels of the time series (+x and −x). We denote the probability of the threshold being i at time t, regardless of the incoming signal level, by ||π_t(i)||_1, the L1-norm of π_t(i); that is,

||π_t(i)||_1 = π_t(i, +x) + π_t(i, −x).

Based on these preparations, recurrence formulae for π_t(i) allow us to precisely characterize the behavior of the system.

Case 1. The probability vector for the case when the threshold is between −N + 1 and N − 1 at time t + 1 is given by

π_{t+1}(i) = Q(i − 1) π_t(i − 1) + P(i + 1) π_t(i + 1),   (11)

where the matrices P(i) and Q(i) are given by

P(i) = | P_A(1 − μ)    (1 − P_B)μ       |
       | P_A μ         (1 − P_B)(1 − μ) |   (12)

Q(i) = | (1 − P_A)(1 − μ)    P_B μ       |
       | (1 − P_A)μ          P_B(1 − μ)  |   (13)

Equation (11) clearly implies that the probability vector of the threshold being i comprises the transitions from the states with thresholds i − 1 and i + 1. The elements of the matrices P(i) and Q(i) are intuitively understood as follows; the dynamics given by equation (11) are schematically illustrated in Figure 4(a). The matrix P(i) concerns the probability of decrementing the threshold level. For example, the (1, 1)-element of P(i), or P_{1,1}(i), represents the probability of the transition from the state (i, +x) to (i − 1, +x). The state (i, +x) indicates that the decision is to select machine A, and the decrement of the threshold indicates that the result is a win. The probability of consecutive identical signal levels is given by 1 − μ. Hence, P_{1,1}(i) = P_A(1 − μ). Similarly, P_{1,2}(i) is the probability of the transition from the state (i, −x) to (i − 1, +x); the difference is the change of polarity of the incoming signal level. Therefore, P_{1,2}(i) = (1 − P_B)μ. Similarly, P_{2,1}(i) corresponds to the probability of the transition from the state (i, +x) to (i − 1, −x), and P_{2,2}(i) corresponds to the transition from (i, −x) to (i − 1, −x). The blue arrows in Figure 4(a) schematically represent the role of the matrix P(i), which concerns the decrementing of the threshold level.

Conversely, the matrix Q(i) concerns the probability of incrementing the threshold level. Q_{1,1}(i), for example, represents the probability of the transition from the state (i, +x) to (i + 1, +x), meaning that the threshold is incremented while the signal level is unchanged. This situation represents the decision to select machine A, a losing result, and an unchanged polarity of the incoming signal; the corresponding probability is given by (1 − P_A)(1 − μ). Similarly, the other elements of Q(i) are specified straightforwardly. The red arrows in Figure 4(a) schematically represent the role of the matrix Q(i), which concerns the incrementing of the threshold level.

Case 2.
The probability vector for the case when the threshold is at the edge on the negative side, −N, at time t + 1 is specified by

π_{t+1}(−N) = P(−N)π_t(−N) + P(−N + 1)π_t(−N + 1).   (14)

Edges are to be treated carefully in this case. First, P(−N + 1) in the second term on the right-hand side of equation (14) describes the transition of the decrement of the threshold level from −N + 1 to −N, which has already been defined in equation (12). Second, since there are no threshold levels smaller than −N, transitions involving increments, or any Q matrix, are not included in equation (14). Third, what is different from Case 1 above is that the threshold level can be maintained at the edges, which is indicated by the first term on the right-hand side of equation (14). More specifically, the P matrix at −N is given by

P(−N) = ( P_A(1 − μ)   P_Aμ
          P_Aμ          P_A(1 − μ) ).   (15)

P_{1,1}(−N) means the state transition from (−N, +x) to (−N, +x). This corresponds to the decision to select machine A, the result being a win, and the signal polarity being unchanged. Therefore, P_{1,1}(−N) = P_A(1 − μ). Similarly, P_{1,2}(−N) means the state transition from (−N, −x) to (−N, +x); what is different from P_{1,1}(−N) is the change in polarity. Hence, P_{1,2}(−N) = P_Aμ. Likewise, P_{2,1}(−N) and P_{2,2}(−N) can be obtained. The blue arrows in Figure 4(b) illustrate the role of the matrix P(−N), which concerns keeping the same threshold level.

Case 3. Similar to Case 2, the probability vector for the case when the threshold is N at time t + 1 is specified by

π_{t+1}(N) = Q(N)π_t(N) + Q(N − 1)π_t(N − 1).   (16)

The meaning of equation (16) is similar to that of equation (14). Q(N − 1) on the right-hand side of equation (16) has already been defined in equation (13). As in Case 2, the threshold level can be maintained at the edge, which is shown by Q(N) in equation (16). This is given by

Q(N) = ( P_B(1 − μ)   P_Bμ
         P_Bμ          P_B(1 − μ) ).   (17)

Q_{1,1}(N) means the state transition from (N, +x) to (N, +x). This corresponds to the decision to select machine B, the result being a win, and the signal polarity being unchanged. Therefore, Q_{1,1}(N) = P_B(1 − μ). Similarly, Q_{1,2}(N) indicates the state transition from (N, −x) to (N, +x); what is different from Q_{1,1}(N) is the change in polarity. Hence, Q_{1,2}(N) = P_Bμ. Likewise, Q_{2,1}(N) and Q_{2,2}(N) can be obtained. The red arrows in Figure 4(c) illustrate the role of the matrix Q(N), which concerns keeping the same threshold level. Finally, a remark is needed for the matrix P at N and the matrix Q at −N, which should be different from the ones given by equations (12) and (13), and are given by

P(N) = ( (1 − P_B)(1 − μ)   (1 − P_B)μ
         (1 − P_B)μ          (1 − P_B)(1 − μ) ),   (18)

Q(−N) = ( (1 − P_A)(1 − μ)   (1 − P_A)μ
          (1 − P_A)μ          (1 − P_A)(1 − μ) ).   (19)

This is because the decision at the edges does not depend on the incoming signal level. For example, with the threshold at N, the decision is always to select machine B because both signal levels +x and −x are smaller than the threshold. Hence, P_{1,1}(N) means the probability of the state transition from (N, +x) to (N − 1, +x), meaning that the decision is to select machine B, the result is a loss, and the polarity of the signal is unchanged; that is, P_{1,1}(N) = (1 − P_B)(1 − μ). Similarly, all other elements in equations (18) and (19) are specified. The blue arrows in Figure 4(c) and the red arrows in Figure 4(b) illustrate P(N) and Q(−N), respectively. Figure 5 summarizes the chains of the probability vector π_t(i) by equations (11), (14), and (16). The blue arrows, which regard the decrement of the threshold level, are induced by either a win by selecting machine A or a loss by selecting machine B. In contrast, the red arrows, which represent the increment of the threshold level, are triggered by either a win by selecting machine B or a loss by selecting machine A.
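To make equations (11)–(19) concrete, here is a minimal numpy sketch (function names are ours, not the paper's) that assembles the transition matrices and iterates the chain of probability vectors. One convenient observation, used below as a design choice, is that every P(i) and Q(i) factors into the same polarity kernel K(μ) multiplied by win/loss probabilities that depend on which machine is selected from each incoming state.

```python
import numpy as np

def kernel(mu):
    """Polarity kernel: with probability 1 - mu the signal level is kept,
    with probability mu it flips (mu = (1 - lambda)/2)."""
    return np.array([[1 - mu, mu],
                     [mu, 1 - mu]])

def matrices(PA, PB, N, mu):
    """Transition matrices of equations (12)-(19): P(i) moves probability
    from level i down to i - 1 (or keeps it at -N), Q(i) moves it up to
    i + 1 (or keeps it at +N). Columns are ordered (from +x, from -x)."""
    K = kernel(mu)
    P, Q = {}, {}
    for i in range(-N + 1, N):                  # interior threshold levels
        P[i] = K @ np.diag([PA, 1 - PB])        # decrement: A wins / B loses, eq. (12)
        Q[i] = K @ np.diag([1 - PA, PB])        # increment: A loses / B wins, eq. (13)
    P[-N] = PA * K                              # stay at -N: A (always chosen) wins, eq. (15)
    Q[-N] = (1 - PA) * K                        # leave -N upward: A loses, eq. (19)
    P[N] = (1 - PB) * K                         # leave +N downward: B loses, eq. (18)
    Q[N] = PB * K                               # stay at +N: B (always chosen) wins, eq. (17)
    return P, Q

def evolve(PA, PB, N, lam, steps):
    """Iterate the recurrences (11), (14), (16); pi[i] holds the vector
    (pi_t(i, +x), pi_t(i, -x)) for each threshold level i."""
    mu = (1 - lam) / 2
    P, Q = matrices(PA, PB, N, mu)
    pi = {i: np.zeros(2) for i in range(-N, N + 1)}
    pi[0] = np.array([0.5, 0.5])                # initial condition used in the text
    history = [pi]
    for _ in range(steps):
        new = {-N: P[-N] @ pi[-N] + P[-N + 1] @ pi[-N + 1]}         # eq. (14)
        for i in range(-N + 1, N):
            new[i] = P[i + 1] @ pi[i + 1] + Q[i - 1] @ pi[i - 1]    # eq. (11)
        new[N] = Q[N] @ pi[N] + Q[N - 1] @ pi[N - 1]                # eq. (16)
        pi = new
        history.append(pi)
    return history
```

As a sanity check, the columns of P(i) + Q(i) sum to one at every level, so total probability is conserved by each iteration.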
The thresholds at the edges (−N and N) involve arrows of transitions to an identical threshold. Finally, the CDR can be discussed using the probabilities defined above. Assume that the correct decision is to select machine A. The selection of machine A is realized exclusively in the following two cases: (1) The threshold is −N. In this case, both signal levels −x and +x result in the decision to choose machine A. (2) When the threshold is between −N + 1 and N − 1, the input signal level of +x results in the decision to choose machine A. Hence, the probability of selecting machine A at time t, denoted by CDR^(theory)(t), is given by

CDR^(theory)(t) = π̄_t(−N) + Σ_{i=−N+1}^{N−1} π_t(i, +x).   (20)

Evaluation

With the theoretical model shown in Section 4, we can calculate the time evolution of the probability vector π_t(i) and its L_1-norm π̄_t(i) from any initial conditions. Consequently, CDR^(theory)(t) is derived by equation (20). Here, we examine the case when the reward probabilities are given by P_A = 0.9 and P_B = 0.7 and assume that N is given by 2, meaning that the number of threshold levels is 5. Herein, the initial probability vector is given by π_1(0) = (0.5, 0.5), while all the other vectors are assumed to be zero. The autocorrelation coefficient λ specifies the time-correlated, two-level signal trains. Figure 5(b) shows the analytically calculated chains of probability vectors. (Figure 5: (a) Chains of the probability vector π_t(i) given by equations (11), (14), and (16). (b) An example of the evolution of the probability vector π_t(i) when the initial condition is π_0(0) = (0.5, 0.5), the autocorrelation coefficient λ is −0.8, the threshold number is specified by N = 2, and the reward environment is (P_A, P_B) = (0.9, 0.7).) As time evolves, the probability vector at the edge (i = −2) increases, indicating a high likelihood of choosing machine A, which is the correct decision (since P_A > P_B). To examine the mechanism more deeply, Figures 6(a)–6(c) demonstrate the time evolution of the probability when the threshold is at level i (i = −2, −1, 0, 1, 2) and when the autocorrelation λ is specified by −0.8, 0, and 0.8, respectively. What is commonly observed in these figures is that π̄_t(−2), indicated by the blue curves, increases as time elapses, leading to a high chance of selecting machine A, or correct decision-making. Meanwhile, π̄_t(2), indicated by the green curves, is approximately 0.2 at a time step of 25 when λ is 0.8 (Figure 6(c)), whereas it is nearly zero at the same timing when λ is −0.8 (Figure 6(a)). This indicates that the probability of choosing machine B, which is the wrong decision, is not negligible when λ = 0.8. From another perspective, the blue, red, and yellow markers in Figure 6(d) characterize the probabilities of the threshold at t = 1000, written as π̄_1000(i), when the autocorrelation is specified by λ values of −0.8, 0, and 0.8, respectively. We can clearly observe a large probability, greater than 0.6, at the threshold level of −2, regardless of the λ value. It is remarkable that for λ = −0.8, the probability monotonically decreases as the threshold increases, whereas for λ = 0.8, the probability increases when the threshold increases from 0 to 2. Even with zero autocorrelation (λ = 0), a slight increase in probability is observed at the threshold level of 2. We assume that a positive autocorrelation tends to produce similar decisions consecutively, and hence the decision can be locked in a status which is actually not the
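Equation (20) then reduces to a one-line sum over the state probabilities. The short snippet below (our naming, not the paper's) reuses the evolve function from the preceding sketch to reproduce CDR^(theory)(1000) for the evaluation settings quoted above; it is a continuation of that sketch rather than a standalone program.

```python
def cdr_theory(pi, N):
    # Equation (20): machine A is chosen for both polarities at T = -N,
    # and only for s = +x at the interior levels -N+1 .. N-1 (never at T = N).
    return pi[-N].sum() + sum(pi[i][0] for i in range(-N + 1, N))

# Evaluation settings from the text: PA = 0.9, PB = 0.7, N = 2, pi(0) = (0.5, 0.5)
history = evolve(PA=0.9, PB=0.7, N=2, lam=-0.8, steps=1000)  # evolve() from the sketch above
print(cdr_theory(history[-1], N=2))                          # CDR^(theory)(1000)
```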
optimal one. Indeed, a related tendency is observed in Figures 6(a)–6(c), where the dynamic change of the probabilities, most notably π̄_t(0) indicated by the orange curves, exhibits a strong oscillatory behavior with λ = −0.8, whereas it is attenuated when λ = 0.8. As discussed in Section 4, the decision-making ability can be theoretically derived as CDR^(theory)(t), given in equation (20), using the probability model. We examined CDR^(theory)(t) under a variety of conditions. Herein, the reward probabilities (P_A, P_B) and the number of threshold levels specified by N are summarized in Table 1, and are the same as those discussed in Section 3 and Figure 2. For example, Figure 7(a) concerns the case (P_A, P_B) = (0.9, 0.3) and N = 2. The red curves in Figure 7 show CDR^(theory)(1000) as a function of the autocorrelation coefficient λ, ranging from −0.95 to 0.9 with a 0.05 interval. In addition, λ = −0.99 is examined. For all cases in Figure 7, the maximum CDR^(theory)(1000) is obtained when the autocorrelation coefficient is negative, as indicated by the red arrows therein, which coincides with the numerical observations shown in Figure 2. Furthermore, we numerically simulate the correct decision rate CDR(t) defined in equation (3) based on the original decision-making algorithm described in Section 3 while adopting the stopping rule in Section 4. The results are shown by the blue curves in Figure 7. We observe in all panels of Figure 7 that the results from theory (red) and simulation (blue) match well with each other. Additionally, while the blue marks exhibit fluctuations, since they are obtained as a statistical average of numerical results, the red marks are smooth because they are analytically derived from the theory described in Section 4.

Conclusion

In this study, we construct a theoretical model to account for the acceleration of decision-making by correlated time sequences. Previous studies have shown that the solution of the two-armed bandit problem is accelerated by negative autocorrelation inherent in the time series subjected to the decision-making system; however, its underlying mechanisms were unclear. We begin the discussion by clarifying the impact of time-domain correlation on decision-making by utilizing time series with specific autocorrelation designed via the Fourier transform surrogate. Coinciding with prior reports using experimentally observed laser chaos time series, we confirm that negative autocorrelation accomplishes superior decision-making performance. The difficulty in understanding the underlying mechanism of such acceleration stems from the fact that multiple entities are involved: the dynamical reconfiguration of the internal status of the decision-maker (the threshold level and its revision), the time-domain structure of the incoming time series, and the stochastic attributes of the environment (the reward probabilities of the slot machines). The theoretical model of this study unifies these entities based on correlated random walks. Furthermore, the decision-making performance obtained analytically by the theoretical model agrees with the numerical results from simulations, which validates the proposed theory. Additionally, this indicates that the autocorrelation that maximizes the decision-making performance can be obtained through the model without executing enormous numerical simulations. The proposed scheme to select the laser chaos with the best autocorrelation can accelerate performance in applications such as wireless communication systems [11][12][13].
This study constitutes a foundation for the intellectual mechanisms enhanced by correlated time series, which is important for future information and communications technology. The laser chaos decision-maker can quickly solve MAB problems with GHz-order decisions. Therefore, it will be possible to optimize decisions in wireless communication systems in real time. However, a dedicated device for the laser chaos decision-maker is necessary. In the meantime, a chip-scale photonic implementation has recently been demonstrated [25] on the basis of recent advancements in integrated photonics technology, indicating the potential for system integration and miniaturization.

Data Availability

The data that are used to support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Creating an Entrepreneurship Ecosystem in Higher Education

Entrepreneurship is a crucial element for each country that aims to be competitive and developed within the knowledge-based world economy. In the "Green Paper on Entrepreneurship in Europe" (2003), the European Commission defines entrepreneurship as "the mindset and process to create and develop economic activity by blending risk-taking, creativity and/or innovation with sound management, within a new or an existing organisation". The extant literature on entrepreneurship concludes with the thesis that entrepreneurs can be made (Gorman et al., 1997; Henry et al., 2005). In the words of Drucker (1985), "It's not magic, it's not mysterious, and it has nothing to do with the genes. It's a discipline. And, like any discipline, it can be learned". In this sense, entrepreneurs can be taught, and schools have an important role in this process. Following Kuratko (2005), an "entrepreneurial perspective" can be developed in individuals. Nowadays, Higher Education Institutions have an important role in the improvement of entrepreneurship, being part of an entrepreneurial ecosystem with business and government. The labor market has faced many changes in recent years; unemployment rates have increased in Europe, and the possibility of creating a firm emerges as an important way to add value to the economy and to create jobs. In this environment, universities and colleges must provide entrepreneurship education, develop and adapt competencies and skills, disseminate knowledge and technology, and increase economic development, as well as help students to create new firms and provide an adequate set of training to manage them across their life cycle. Considering the question "How to improve entrepreneurship in higher education?", this chapter aims to present an entrepreneurship ecosystem developed at the Setubal Polytechnic Institute (SPI), Portugal, to discuss some results at an early stage of implementation, and to propose future directions and discuss some barriers faced in the implementation of the model. The literature is particularly intense in the analysis of the effectiveness of a particular course or initiative and less focused on an integrated approach to how Higher Education Institutions can promote entrepreneurship. We propose to study entrepreneurship education as a whole, considering an entrepreneurship ecosystem that comprises three dimensions: curricular entrepreneurship courses; extra-curricular entrepreneurship programs; and support infrastructures. The first dimension refers to entrepreneurship courses included in graduate and postgraduate programs, and intends to present the methodologies and practices applied that support the "learning by doing" methodology, a crucial method in the Bologna Process. The extra-curricular entrepreneurship programs concern a set of national, international and regional programs that aim to improve entrepreneurship in higher education and that involve students and teachers on a voluntary basis (e.g. the European graduate program Junior Achievement; the national program of Polytechnic institutions, Poliempreende). Finally, the third dimension describes the support infrastructures created to advance entrepreneurship in higher schools and in the region (e.g. the Office for Knowledge Transfer and Information, OTIC). Additionally, this research intends to provide an integrated vision of these three dimensions and to map the linkages with the outside community and existent networks that stimulate social capital and facilitate the entrepreneurial process.
The methodology of this research is a case study applied to SPI, which has five schools: the Business School, the Technology School of Setúbal, the Technology School of Barreiro, the Health School and the Education School. In the 2007-2008 academic year, SPI had 6371 students and 505 teachers. We believe that our approach represents a starting point for a better comprehension of the entrepreneurship ecosystem in Higher Education Institutions that fosters economic development. Furthermore, the results of these analyses, supported by a case study, provide a better understanding of the role of institutions of higher education in promoting an integrated approach to entrepreneurship.

PART I - An Approach to Entrepreneurship in Higher Education

Nowadays, entrepreneurship has become a buzzword present in all contexts, from politicians to the media and from academia to business people. Despite this fashionable trend, we cannot ignore the relevance of entrepreneurship for economic development, including economic growth, job creation and innovation (Acs and Armigton, 2003; Autio et al., 2007; Carree and Thurik, 1998), as well as for social inclusion, allowing marginal groups to become active economic actors and promoting equal opportunities for women (Volkmann et al., 2009). The relationship between entrepreneurship and economic development is complex and implies strong and diverse connections and linkages across several institutional players (Bosma et al., 2009). In this vein, the promotion of entrepreneurship demands an entrepreneurial ecosystem in which Higher Education Institutions (HEI) play a crucial role in collaboration with other stakeholders, namely governments (central and local), business associations, entrepreneurs, NGOs, service providers, financial institutions, incubators, and several others. Entrepreneurship education has only recently received attention from the scientific community, and is far from maturity, despite a large number of initiatives, experiences and curricular courses and programmes developed in recent decades across the world (Charney and Libecap, 2000; Li and Matlay, 2005; Solomon et al., 2002; Solomon, 2005). HEI are expected to develop entrepreneurial capabilities as well as to create an entrepreneurial mindset in their students, allowing them to create and to explore opportunities in private firms or in public or non-governmental organizations (EC, 2008; Volkmann et al., 2009). Additionally, HEI can provide a set of competences, such as technical skills, business management skills and personal entrepreneurial skills (Hisrich and Peters, 1998). Moreover, in a knowledge economy, where innovation plays a central role, the R&D developed in HEI can create disruptive technologies and innovative ideas, contributing to the creation of more new firms, especially gazelle ventures acting on a global, international basis. Some studies point to several perceptions of the role of HEI in promoting entrepreneurship. A study developed by Carter and Collison (1999) refers to retrospective perceptions of alumni towards the general provision of entrepreneurship education in Higher Education Institutions. The results of this study reveal an interest among alumni in entrepreneurial activities. However, some constraints are pointed out, namely the lack of both finance and experience when setting up a business, conclusions shared by Linan (2008), who adds students' lack of self-confidence as an important barrier to starting a firm.
The results also pointed to the need for a more practical grounding for graduates, specifically citing financial management and business communication skills as key elements missing from the undergraduate curriculum. There was agreement that HEIs have an essential role to play in providing alumni with both formal post-qualification training and social support networks to promote entrepreneurial activity. In a more strategic analysis, one can say that entrepreneurial universities show some common characteristics, anchored in cross-disciplinary and cross-campus initiatives that all students can apply for (Volkmann et al., 2009). Besides widespread application, top management engagement in these issues, providing a clear vision and institutional support, can contribute to the implementation of an entrepreneurship strategy, where the development of external linkages with entrepreneurs and other organizations should be present (Clark, 1998). The vision also has to incorporate a market orientation, whereby scientific and technological capabilities developed by academics and students should be commercialized in the market through new firms, patents, licenses or other contractual arrangements (Bok, 2003). Despite these features, Schramm (2006) calls for more work in the field, recognising that further efforts should be developed. Research recognises that courses or programmes in entrepreneurship can enhance participants' intentions to start a business as well as develop entrepreneurial capabilities or create more innovative or profitable ventures (Brown, 1990; Dominguinhos and Carvalho, 2009; Henry et al., 2005; Reynolds, 1997), showing the significance of entrepreneurship education. Entrepreneurship education has evolved in waves (Volkmann et al., 2009). If in the beginning it was associated with management courses, it gradually gained its own space, "to generate more quickly a greater variety of different ideas for how to exploit a business opportunity, and the ability to project a more extensive sequence of actions for entering business" (Vesper and McMullan, 1998: 18). In pedagogical terms, lectures were gradually replaced by active methodologies (Bell, 2008; Fayolle et al., 2006; Heinonen and Poikkijoki, 2006; Peterman and Kennedy, 2003), such as problem-based learning, project development, entrepreneur for a day, business drinks, simulations and other similar approaches, allowing students to develop their potential by assuming more responsibility in the learning process. We can argue that a more open policy towards the community's involvement becomes crucial: bringing entrepreneurs into the classroom to talk about their experiences, contact with local entrepreneurs, company visits, involving local business organizations in curricula design, offering workshops and seminars, and inviting business angels and venture capitalists. The success of this strategy depends on teachers' competences in the area as well as on their research work in the field of entrepreneurship (EC, 2003), allowing the development of adequate curricula (Volkmann et al., 2009).

The Methodology

In this section it is our intention to characterize the methodological approach and tools used in this research, and to present the main and specific goals and the propositions of the study. This empirical research applies the case study methodology.
According to Bell (1997), this methodology allows the researcher to focus on one case or specific situation and allows the identification of the interactive processes involved. Yin (1994) considers that the case study method is most appropriate for investigations that address questions such as "how" and "why" about contemporary phenomena over which the researcher has little or no control. Regarding the main sources of evidence referred to by Yin (1994) - documents, interviews, archived data, direct observations, participant observations and physical artefacts - this research used the analysis of documents, direct interviews and direct observations. These multiple sources of evidence are used in order to allow triangulation. Lakatos and Marconi (2001) describe interviews as conversations whose purpose is to provide the necessary information to the researcher. With an interactive nature, this technique allows researchers to study complex subjects that could hardly be investigated in depth by means of questionnaires (Mazzotti and Gewandsznajder, 1993). Yin (1994) refers to the importance of the use of interviews and, according to this author, they represent an important source of evidence for case studies. Following this line, semi-structured interviews were carried out with Junior Achievement (one of the institutions responsible for extra-curricular entrepreneurship programs) and with the person responsible for OTIC. The interviews took place during March and April of 2008 and lasted sixty to ninety minutes. The students' involvement was also quite important because it allowed significant contributions. Through focus groups it was possible to explore how points of view are constructed as well as how they are expressed (Kitzinger and Barbour, 1999).

How to improve entrepreneurship in higher education?

Considering the main question "How to improve entrepreneurship in higher education?", this section presents a case study applied to the Setúbal Polytechnic Institute, Portugal. We propose to study entrepreneurship education as a whole, considering an entrepreneurship ecosystem that comprises three dimensions:
- Curricular entrepreneurship subjects;
- Extra-curricular entrepreneurship programs;
- Support infrastructures.

Curricular entrepreneurship subjects

The Polytechnic Institute of Setúbal (SPI) was created in 1979. Since the beginning, the institution has intended to encourage both professional expertise and scientific knowledge. SPI comprises five Colleges covering such areas as Engineering, Technology, Education, Sports, Art, Communication, Business Administration and Health Care. Our college, BS (Business School), is one of the schools of public higher education of the SPI, and today offers undergraduate and Master's degrees, post-graduation courses and technological specialization courses in the business sciences. Created in 1994, with about 2000 students, BS has a significant size and is well recognized by businesses and other organizations. The school aims to train professionals in business areas with a flexible and dynamic attitude.
Strategically, BS is focused on differentiation based on:
- Satisfaction and employability of graduates - more than 90% of BS students find employment in less than a year after they complete a degree;
- Linkages to the business context - students engage in a compulsory internship for the completion of the degree; teachers promote open classes and guest lectures by entrepreneurs and business people; visits to companies and organizations are common; the case study methodology is often used in classes; business consultancy and training are offered to firms;
- Pragmatic education - in BS, classes are dynamic, pragmatic and oriented towards real business and organizational situations;
- Accessibility of teachers - the relationship between teachers and students is characterized by an open-door policy;
- Conditions for study - BS's facilities are modern;
- Innovative practices - the use of simulation and technological resources, the organization of business hours, language laboratories, workshops and personal development, among other practices, are common in BS.

Entrepreneurship is one of the foundation stones of business education in BS. There are important reasons that justify this importance. The first is that entrepreneurship is an important issue for the world economy. Another important reason is related to the change in the social contract between companies and their employees. In the past, companies offered long-term security in return for loyalty; however, from the 1980s, first in America and then in other advanced economies, companies began downsizing their workforces. This made a huge difference to people's experience at the workplace. In the 1960s, workers had had an average of four different employers by the time they reached 65. Today they have had eight by the time they are 30. Consequently, people's attitudes to security and risk have also changed. If a job in an organisation can so easily disappear, it seems less attractive, and the creation of one's own job can be an attractive option. In this context, SPI sought to promote entrepreneurship education and in 2006 reformulated the curriculum of Entrepreneurship, after an eight-year course on New Business Creation. Besides the name, the methodology was radically changed, to fit the Bologna process and to accommodate the recommendations of the scientific publications in the field of entrepreneurship education. In this vein, a learning-by-doing approach was adopted, anchored in the development of a set of competences connected to more entrepreneurial behaviours. The Entrepreneurship subject is supported by the "learning by doing" methodology, particularly in practical classes. The theoretical classes adopted the expositive method combined with the organization of open classes and conferences. The "learning by doing" methodology allows students to reach entrepreneurial competences through group dynamics and team experiences (Table 1). According to Dominguinhos et al. (2008: 9), the results of the evaluation of the methodological model applied in Entrepreneurship classes justify the importance of this kind of learning, which allows more efficient learning when compared with other traditional teaching methods. These results revealed that: 1) "The activities in the classroom, based on active pedagogical methodologies, contribute to satisfactory results concerning entrepreneurship learning and students' satisfaction. 2) Other similar extra-class activities also contribute to entrepreneurship learning and students' satisfaction.
3) The methodology used - learning by doing - is, in the students' perspective, easy and friendly. 4) The activities developed and the resources available were considered by students adequate to the methodology applied. 5) Students express satisfaction with the evaluation system, confirming that the curricular unit evaluation was well accepted by the students, except concerning the satisfaction and difficulties involved with guest-event invitations and activity planning. This exception shows that students need to improve their competencies related to communication, autonomy and self-confidence in their relations with the external environment, including stakeholders".

Extra-curricular entrepreneurship programs

In addition to curricular entrepreneurship subjects, SPI proposes extra-curricular entrepreneurship programs in which students can participate voluntarily. In this section we present two voluntary entrepreneurship programs: a. Junior Achievement, a European graduate program; b. Poliempreende, a national program of the Polytechnic Institutes. JA Worldwide is the world's largest organization dedicated to educating students about workforce readiness, entrepreneurship and financial literacy through experiential, hands-on programs. Junior Achievement programs help prepare young people for the real world by showing them how to generate wealth and effectively manage it, how to create jobs which make their communities more robust, and how to apply entrepreneurial thinking to the workplace. Students put these lessons into action and learn the value of contributing to their communities. JA allows volunteers from the community to deliver its curriculum while sharing their experiences with students. Embodying the heart of JA, its approximately 384,925 classroom volunteers transform the key concepts of the lessons into a message that inspires and empowers students to believe in themselves, showing them they can make a difference in the world. JA has different entrepreneurship programs according to age and level of education: elementary school programs, middle grades programs, high school programs and other particular events and specific programs. With a range of different programs, Junior Achievement teaches concepts relating to entrepreneurship, financial literacy and work readiness. The volunteers bring real-life business experience and guidance into the classroom at a time that represents an essential crossroads for young people. In this chapter we focus on Junior Achievement's high school programs, which help students make informed, intelligent decisions about their future and foster skills that will be highly useful in the business world. The JA Graduate Program analyzes and explores personal opportunities and responsibilities within a student-led company and is accompanied by a tutor. In the Portuguese case, the tutors are senior managers from a Portuguese private bank (Millenium BCP). The group develops a set of concepts in the program (Business, Choices, Competition, Division of labor, Entrepreneurship, Expenses, Fixed costs, Goods, Incentive, Income, Liquidation, Management, Marketing, Parliamentary procedure, Price, Productivity, Profit, Production, Research and development, Services, Stock, Variable costs) and develops several skills (Assembling products, Consensus-building, Critical thinking, Estimating, Filling out forms, Interpreting data, Math computation, Negotiating, Presenting reports, Problem-solving, Public speaking, Research, Selling, Teamwork).
SPI participates in the JA Graduate Program with two student groups, from ESCE and TS (Technology School), which propose to create a company and participate in the national competition. Poliempreende, the national program of the Polytechnic Institutes, is another extra-curricular program to promote entrepreneurship. This program is a national contest open only to polytechnic institutions; it aims to improve entrepreneurial culture, develop students' entrepreneurial skills and promote the creation of innovative firms in each region, with positive impacts on local development. This project favors the participation of students with ideas supported by knowledge from different scientific areas and schools, in order to mobilize different skills, promote work in multidisciplinary teams and facilitate technology transfer. Poliempreende comprises two training cycles: Workshops E1 and E2. These workshops occur during the academic year and include an ideas competition and a business plan competition. Workshop E1 highlights a hands-on methodology and tries to develop new attitudes, initiative, decision-making, the capability to manage uncertainty, negotiation techniques and communication skills. Workshop E2 aims at the development of several personal competencies (leadership, communication, the valorization of teamwork, ethics and organizational culture) and the development of an entrepreneurial project supported by a Business Plan. The Business Plan evaluation is made in two steps. Firstly, the competitors are evaluated by a regional jury in each Polytechnic Institute. The three best business plans win a prize and the best team advances to the national competition. The second step consists of the national competition, which includes national prizes and special support to set up a new firm.

Support infrastructures

Finally, the third dimension describes the support infrastructures created to foster entrepreneurship in higher schools and in the region (e.g. the Office of Knowledge Transfer and Information, OTIC, and ACTIVLAB). OTIC stimulates and promotes the transfer of ideas and innovative concepts from research developed in the colleges to firms. Additionally, it acts as a forum to match business needs with solutions provided in the SPI. In this sense, the establishment of strong ties with business organizations becomes the main strategic goal of OTIC. OTIC aims to promote the enrichment of SPI's scientific portfolio in conjunction with the real needs of businesses in the region, based on a rigorous exploration of the market and on building an environment of cooperation and trust through the transfer of technology and knowledge in joint projects. The main objectives of OTIC are:
- To identify transferable results, generated by research groups or by individual researchers;
- To detect unmet needs in the business environment and transform them into innovative projects;
- To promote the establishment of multidisciplinary Polytechnic-Enterprise teams for the resolution of specific problems of companies;
- To provide an environment of cooperation between the Polytechnic Institute, businesses and other organizations in the region;
- To promote entrepreneurship and support the processes of business creation.
OTIC promotes: 1) scientific and technical support to small and medium-sized companies based in the region of Setúbal; 2) the development of R&D projects and technology transfer activities in partnership with business; 3) training activities for companies in the region; 4) a cycle of events devoted to technology transfer and international cooperation (workshops, conferences and meetings), as well as events dedicated to promoting an entrepreneurial culture. In 2008, a laboratory for entrepreneurship was created: ACTIVLAB. The main purpose of this infrastructure is to allow entrepreneurs to test their ideas and to give some logistical support in the first six months of new ventures. It is an open space for 8 different firms that offers, free of charge, a personal computer, fax and telephone, access to the internet and to the library and databases. Additionally, teachers offer consultancy to young entrepreneurs in management and technical areas. ACTIVLAB works in close cooperation with OTIC, and entrepreneurs can benefit from their formal and informal contacts.

The Entrepreneurial Ecosystem

Wikipedia defines a natural ecosystem as a unit of interdependent organisms which share the same habitat. Applying this concept to the social sciences, an entrepreneurial ecosystem includes a set of tangible and intangible resources and actors characterized by an interdependence relationship that creates important synergies.

Results and Barriers

This section highlights the main results and barriers found in the implementation and development of the entrepreneurial ecosystem. The figure below summarizes the activities and infrastructures existing in SPI to promote entrepreneurship. (Fig. 3. Activities and Infrastructure to Promote Entrepreneurship in SPI. Source: designed by the authors.) These activities are performed internally but also through the establishment of linkages to external organizations across the entrepreneurial process. In the figure above, the activities and initiatives delivered by SPI are considered, as well as the infrastructure to support those activities across the entrepreneurial process. The first stage concerns opportunities and new ideas. A set of activities is developed in regular entrepreneurship courses, in other short training courses and in R&D projects, where new technologies are created. ACTIVLAB becomes the natural place for those ideas to grow and become more mature, after studying the market. The second stage is where entrepreneurs test the business idea or technological concept before setting up the new venture. In this phase, the entrepreneurial team prepares the business plan to get money from investors. Our experience shows that the vast majority of students stop here and few start a new venture. In the third stage, namely going to the market, entrepreneurs get the support of ACTIVLAB, particularly logistical facilities, and of OTIC, in soft skills. Finally, in the fourth stage, SPI provides some consultancy and training courses, as well as the development of joint R&D projects. Concerning ACTIVLAB, three projects were installed in its facilities. One entrepreneur, who won first place in the regional competition of ideas, is testing a project in the area of alternative energies. Another, a firm created with strong support from OTIC, runs a business in information technologies. The third one, from the competition promoted by Junior Achievement, is trying to set up a consultancy firm. The set of programmes and infrastructures described has allowed SPI to create a more entrepreneurial culture over the last three years.
This statement is supported by some results from a national survey applied to students from public and private Higher Education Institutions, from both universities and polytechnics. Three main indicators were measured: the percentage of students who created a firm; the percentage of students who created a firm or are taking some steps to do it; and the percentage of students who said they will create a firm in the future. The data are shown in Figure 4. Globally, students from private institutions and from polytechnics reveal a higher propensity to create firms. In the case of SPI, the percentage of students who created a firm is almost twice the average. If we consider those who have taken some steps towards this goal, in SPI the percentage is 19.1%, against 11.6% in national terms. But when we consider entrepreneurial intentions, the results in SPI are much the same as the national average. One can say that the entrepreneurial ecosystem is still in its infancy in Portugal (Redford, 2009), and SPI is not an exception. One can speculate that having the support infrastructure allows SPI to transform more ideas into firms. In SPI, two different types of events have contributed to the development of the entrepreneurial ecosystem. On the one hand, scientifically and pedagogically, entrepreneurship has been boosted by some local champions, above all teachers, building on their PhD programmes. They were able to change the goals and methodologies of the courses to create a more student-oriented approach, based on the results of scientific research. This focussed approach, based in the Business School and in a small group of teachers, was complemented by a transversal project (OTIC), with strong support from top management, applied to all schools. At the time, in the engineering, education and health schools entrepreneurship was seen as an unfamiliar subject; now more and more degrees have at least a course on entrepreneurship. In the last two years, investment was carried out to create a more entrepreneurial culture across schools, especially among Deans and Scientific Boards, teachers and students. And, despite all the initiatives taken in classes and by OTIC to interact with the outside community, activities are mostly executed by SPI, internally, with a mix of theoretical and practical classes, supported by infrastructures to promote business creation and by connections to external support organizations. In the next figure, we summarize the activities taken by SPI, based on a framework proposed by Engles et al. (2008). As national and European surveys show, entrepreneurial attitudes in Portugal remain modest (GEM, 2004, 2008; Eurobarometer, 2003). In this sense, it is not easy to convince students to apply to entrepreneurship courses, especially when they are pushed by family and society to get a job after the degree. For teachers, there are no real incentives to become an entrepreneur. For a full-time professor, it is forbidden to accumulate a private activity, and patents and business activities are worth little in academic careers. For Scientific Boards, because only a limited number of teachers are working in the area, it is difficult to accept transversal courses in all degrees. First of all, because of a question of power. Secondly, entrepreneurship only recently gained its scientific legitimacy, and most teachers are not familiar with the subject.
Finally, for Executive Boards, until recently there were no real incentives to promote entrepreneurship, because external evaluations do not take into consideration the number of firms created by students or teachers, or the number of patents registered, but concentrate on the professional inclusion of young graduates in the labor market.

Recommendations and Concluding Remarks

Three main dimensions deserve closer attention in order to reinforce the entrepreneurial ecosystem in SPI, anticipated in the next figure. (Fig. 6. Projected activities in SPI to promote entrepreneurship. Source: Engles et al. (2008), adapted.) First of all, in a more internal perspective, the dissemination of entrepreneurship courses across all five colleges and all degrees. Less than fifty percent of first-cycle degrees offer at least an elective course on entrepreneurship. This is a cultural change and demands a strong commitment from Scientific Boards as well as from Executive Boards. We believe that this change can occur for two sets of reasons. Firstly, in Portugal, a national pro-entrepreneurship movement has started in recent years; there are several programs devoted to the creation of new firms and to the commercialization of R&D. Secondly, internally, the new legal framework and structure, with an Academic Council, facilitates the implementation of transversal courses in all schools. This movement should be complemented by a Master in Entrepreneurship and Innovation, allowing scientific research between teachers and students as well as a strong focus on business creation during the two-year degree, reducing the mortality rate of those who start a business, in line with international results. This Master will allow us to work together with management students and those coming from the sciences and technology, where more innovative ideas can be developed. A second area refers to the creation of a research centre devoted to entrepreneurship. This research centre should act in different areas. First, to reinforce academic research and field work, promoting academic legitimacy. At the same time, it is a laboratory for joint projects with external organizations, an area that deserves closer attention and needs to be emphasized in SPI. Thirdly, there is a need for more involvement of students and outside organizations, as well as entrepreneurs and business organizations. The creation of a Club of Entrepreneurs could overcome this weakness. Such an infrastructure can promote workshops and seminars, counsel and support novice entrepreneurs, provide financial support, such as seed capital or money from business angels, and open SPI to the community through social capital initiatives. Promoting entrepreneurship in SPI shows how complex and difficult it becomes, particularly in a society where entrepreneurship education is still in its infancy (Redford, 2009). If in the early stages the role of local champions is essential to generate examples and to show that it is possible, advanced stages call for more ground support, especially from Executive and Scientific Boards, to spread out all the activities as well as to put some money into the development of all kinds of activities. Additionally, to promote new ventures, Higher Education Institutions need to invest in support infrastructures, such as logistical facilities and specialized consultancy, or establish strong partnerships with organizations that offer these services.
Last but not least, a strong focus on scientific research that pushes HEI to create innovative products and technologies with market orientation becomes essential.
Strategies and Ways of International Communication of Nadam Culture in Inner Mongolia--Taking Ordos International Nadam Congress as an Example

Inner Mongolia Nadam culture, as one of the important minority cultures in China, is of great significance for international communication. Taking the Ordos International Nadam Congress as a case study, this paper discusses the strategies and ways of international communication of Inner Mongolian Nadam culture. Firstly, it introduces the history and characteristics of Nadam culture as well as its status and influence in Inner Mongolia, and then analyses the background, significance and international influence of the Ordos International Nadam Congress. Strategies such as government support and guidance, cultural exchange and cooperation, and the use of modern technology to carry out publicity are then proposed, and the ways of international exchange exhibitions, media communication channels, and academic research and exchange are discussed in detail. Finally, the challenges faced by the international communication of Inner Mongolia Nadam culture are analysed and corresponding countermeasures are put forward. Through the research in this paper, we aim to provide a reference for the international dissemination of Inner Mongolia Nadam culture, promote its dissemination in the international arena, and enhance Chinese and foreign cultural exchange and cooperation.

Introduction

As one of the unique minority cultures in China, Inner Mongolia Nadam culture carries rich historical and cultural connotations and has important historical significance and cultural value. With the accelerating pace of China's opening up to the outside world and the promotion of the "One Belt, One Road" initiative, Inner Mongolia Nadam culture has gradually attracted the attention of the international community. In order to better promote and disseminate Inner Mongolia Nadam culture, the Ordos International Nadam Congress came into being and became a window and platform for Inner Mongolia Nadam culture to reach the world. This paper takes the Ordos International Nadam Congress as an example to discuss the strategies and ways of international dissemination of Inner Mongolia Nadam culture, in order to provide reference and inspiration for its international dissemination.
History and characteristics of Nadam culture

1. Origin of Nadam culture

Nadam culture originated from the productive life and cultural traditions of the ancient Mongolian nomads, and it is a unique traditional form of Mongolian sports and culture. From the 12th century to the beginning of the 21st century, Nadam has gone through eight stages of development: the gestation and formation period from the 12th century to the Mongol Yuan period; the formation period in the Mongol Yuan period; the maturity period in the Ming Dynasty and the early Qing Dynasty; the transitional period from the middle of the Qing Dynasty to the period of the Republic of China; the period of development of a new model from the founding of new China to the early 1980s; the period of diversified development in the 1980s; the budding period of diversification and internationalization in the 1990s; and the period of internationalized development since the beginning of the 21st century. The Nadam activities, with the "Three Great Mongolian Masterpieces" as their main content, have been enriched and improved in the process of inheritance, development and internationalization. [1] Nadam activities play an important role in Mongolian social life, serving not only as an important carrier of Mongolian national culture, but also as an important way to pass on and promote traditional Mongolian culture.

2. Main features of Nadam culture

First, Nadam is a traditional sports and competitive activity, covering a variety of traditional events, such as horse racing, archery and wrestling, which embody the brave and heroic national spirit of the Mongolian people. Nadam is not only a sports competition, but also a traditional cultural event. At Nadam, people can not only watch the competitions of the various traditional events, but also enjoy rich and colorful cultural programs such as song and dance performances and folklore displays, and experience the charm of traditional Mongolian culture. Secondly, Nadam is a cultural tradition with strong national characteristics. In Nadam activities, Mongolian people display their traditional cultural elements such as costumes, songs, dances and instrumental music, inheriting and promoting Mongolian national culture. Finally, Nadam has strong mass appeal and participation. As a traditional cultural activity, Nadam not only attracts a large number of spectators, but also allows the public to participate in it, which promotes cultural exchanges and interactions among people.

3. The status and influence of Nadam culture in Inner Mongolia

Nadam culture has an important status and far-reaching influence in Inner Mongolia. As an important traditional cultural activity of the Inner Mongolia Autonomous Region, the Nadam Assembly is not only an important cultural card of Inner Mongolia, but also an important carrier of national unity, cultural inheritance and the construction of spiritual civilization. The Nadam Assembly not only enriches the cultural life of Inner Mongolia, but also promotes communication and integration among the various nationalities, and enhances national unity and social harmony. At the same time, Nadam culture is also an important support for the development of tourism in Inner Mongolia, attracting a large number of tourists to watch and participate and promoting the prosperity of the local tourism industry.
Background and significance of the Ordos International Nadam Congress

The Ordos International Nadam Congress is one of the largest and most influential Nadam cultural events in the Inner Mongolia Autonomous Region. Since it was first held in 2002, it has been successfully held many times and has become an important cultural brand and tourism festival in Inner Mongolia and in China as a whole. The Ordos International Nadam Congress demonstrates the rich and colorful traditional culture of the Mongolian people by holding traditional events such as horse racing, archery and wrestling competitions, attracting many domestic and foreign tourists to watch and participate. Through this platform, it can enhance the cohesion and sense of belonging of Mongolian people of all ethnic groups and promote the inheritance and development of Mongolian culture. At the same time, it also makes an important contribution to the development of tourism in the Inner Mongolia Autonomous Region, attracts a large number of tourists, promotes the prosperity of the local economy and culture, and is a folklore celebration with comprehensive effects. [2] It was listed as a national intangible cultural heritage in 2006. The organization of the Ordos International Nadam Assembly not only enriches the cultural life of Inner Mongolia, but also has important significance. Firstly, the Nadam Assembly is an inheritance and promotion of traditional Mongolian culture, which helps to stimulate the Mongolian people's national pride and sense of belonging and strengthens the combination of tradition and innovation in Mongolian culture. Secondly, the Ordos International Nadam Assembly is a platform for national unity and cultural exchange, attracting tourists and participants from different regions and nationalities, promoting mutual understanding and exchange among nationalities, and enhancing national unity and social harmony. Thirdly, the Nadam Congress has played a positive role in promoting the development of tourism in Inner Mongolia, enhanced the visibility and influence of Inner Mongolia, and promoted local economic prosperity and social stability. With the deepening of international communication and cooperation, the international influence of the Ordos International Nadam Assembly has also gradually increased. As a traditional cultural activity with strong national characteristics, the Nadam Assembly has attracted the attention of tourists and media from all over the world and has become a window and platform for the Inner Mongolia region to reach the world. Through participation in the Nadam Assembly, the international community has gained a deeper understanding of the traditional culture of Inner Mongolia, enhancing the friendship and exchanges between people of different countries and making positive contributions to the promotion of diversified exchange and sharing among the cultures of different countries.

Challenges of international dissemination of Inner Mongolia Nadam culture

Inner Mongolia Nadam culture, as a treasure of Mongolian culture, carries rich historical heritage and national emotions. In August 2010, the Ordos International Nadam Conference was held, which made the Mongolian Nadam an international "sports event" and gave it the "right of speech" in national sports and culture. International sport began to know the Mongolian Nadam, and Nadam sports culture "shook hands" with the world for the first time.
[3] However, in the process of its international dissemination, it faces many challenges. These challenges not only come from the level of cultural awareness and understanding, but also involve the choice and use of communication channels, as well as cultural output and international discourse.

Cultural awareness and understanding

Inner Mongolia Nadam culture has a low level of international recognition, which is closely related to its history, traditions and cultural characteristics, which differ from those of Western culture. Many countries and regions have cognitive biases and deficiencies, or even misunderstandings and prejudices, about Mongolian culture. For example, in some countries, people may be more inclined to simply categorize Mongolian culture as a nomadic culture or a culture related to grassland life, while ignoring the rich diversity and deep historical deposits of Inner Mongolia Nadam culture. This difference in cultural cognition has brought difficulties to the international dissemination of Inner Mongolia Nadam culture, and it is necessary to strengthen the publicity of its characteristics and connotations through a variety of channels, so as to promote the international community's comprehensive understanding and recognition of it.

Selection and application of communication channels

Facing the challenges of the information age, the international dissemination of Inner Mongolia Nadam culture needs to adapt to an ever-changing communication environment, choose appropriate communication channels and flexibly use various means of communication. However, due to cultural differences and language barriers, it is often difficult to find suitable communication channels and audience groups for Inner Mongolia Nadam culture in international communication. For example, although emerging social media and webcasting platforms provide new opportunities for cultural dissemination, how to attract more target audiences internationally remains a challenge yet to be solved. In addition, the cultural communication habits and preferences of different countries and regions need to be fully considered in order to develop more targeted communication strategies to enhance the international influence of Inner Mongolia Nadam culture.
Cultural export and international discourse

In the context of globalization, the export of national cultures and participation in international discourse have become increasingly important. The position of Inner Mongolia Nadam culture in the international discourse system is, however, relatively weak, and its dissemination is constrained by a Western-dominated international cultural order. Some countries and regions are more receptive to the influence of Western culture and remain reserved toward the spread of non-Western cultures, which leaves Nadam culture with comparatively little influence in international cultural exchange and limited international discourse. To overcome this challenge, Inner Mongolia needs to strengthen its cultural self-confidence, actively advocate multicultural exchange and dialogue, and raise its status and influence in the international cultural discourse system. In sum, the challenges facing the international dissemination of Nadam culture are multi-faceted and must be addressed through policy support, international exchange and cooperation, and other measures. Only by fully understanding and responding to these challenges can Nadam culture be promoted to the world and the long-term goals of its international communication be achieved.

Government support and guidance

As the main promoter and supporter of cultural undertakings, the government should increase financial investment in the international dissemination of Nadam culture, for example by establishing special funds and increasing budget allocations for cultural projects, so as to guarantee the smooth organization of the Nadam Conference and its international communication activities. Government departments should also strengthen policy and regulatory support, providing a legal framework and formulating preferential policies that create favorable conditions for holding the Nadam Congress and attract more international visitors and participants. In addition, the government can build platforms for cooperation with international organizations and foreign governments, expanding channels of international cooperation and enhancing the reach and influence of Nadam culture by hosting Nadam cultural exchange conferences and signing cultural cooperation agreements.

Cultural exchange and cooperation

The international dissemination of Nadam culture requires broad exchange and cooperation with cultural institutions around the world. The government can actively organize cultural delegations to travel abroad for cultural exchange, sign cooperation agreements with foreign cultural institutions, and jointly stage cultural exhibitions, art performances, and other activities that bring Nadam culture to an international audience.
Internationally renowned cultural groups and artists can be invited to take part in Nadam cultural activities, enriching their content and raising their quality. The participation of international cultural groups promotes cultural exchange and mutual understanding and expands the international influence of Nadam culture. Nadam cultural exchange forums and symposiums can also be organized, inviting internationally renowned scholars, experts, and cultural representatives to discuss the history, characteristics, inheritance, and development of Nadam culture; such academic exchange and collision of ideas advance both theoretical research and international dissemination.

Using modern science and technology for publicity

An official website and social media presence for Nadam culture can be established to publish timely information about Nadam cultural activities and to conduct online publicity and promotion. Through the reach of the Internet and social media, Nadam culture can be pushed to a global audience, attracting more international attention. Promotional videos and documentaries can present the historical evolution, rich content, and spectacle of Nadam activities; disseminating them demonstrates the appeal of the culture visually and attracts more international audiences and tourists. Webcasting and online exhibition platforms can likewise carry Nadam cultural activities to all parts of the world in real time, allowing more international viewers to take part and increasing the culture's international exposure.

International exchange and exhibition

As the leading platform for Inner Mongolia Nadam culture, the Ordos International Nadam Congress should make full use of its international influence and invite audiences and representatives from around the world. By displaying the characteristics and charm of Nadam culture, it can attract more international visitors to experience and understand the traditional culture of the Mongolian people. Nadam culture can also demonstrate its distinctive appeal by participating in international cultural festivals and art exhibitions, which introduces it to wider international audiences while promoting exchange and cooperation and expanding its influence. The government can invite foreign officials, cultural delegations, and journalists to attend Inner Mongolia Nadam cultural activities and experience traditional Mongolian culture and folk customs; such diplomatic and people-to-people exchange raises the visibility and influence of Nadam culture in the international arena.
Media dissemination channels

Coverage and publicity can be pursued through the international mainstream media, such as CNN and the BBC, whose influence and reach can carry Nadam culture around the world and enhance its international reputation. International social media platforms such as Facebook, Twitter, and Instagram can be used actively for publicity and promotion; by publishing content and organizing online interaction, more international followers can be attracted to pay attention and participate, expanding the culture's international influence. Cooperation with internationally known travel media is also possible: these outlets typically have wide readerships and professional reporting teams and can help Nadam culture gain exposure and recognition in the international tourism market.

Academic research and exchange

The international dissemination of Inner Mongolia Nadam culture can be promoted by inviting international scholars to take part in research and academic exchange. The government can organize international seminars, academic forums, and similar events on Nadam culture, inviting experts and scholars from around the world to discuss its history, characteristics, and contemporary significance and influence, thereby promoting exchange and cooperation. National scholars and research institutions should be supported in conducting international research on Nadam culture and encouraged to publish their results in international academic journals, conference proceedings, and other venues, introducing the distinctive appeal of the culture to the international academic community and raising its academic influence and recognition. Regular international academic conferences and exchange activities on Nadam culture, involving international scholars, cultural experts, and practitioners, can foster academic cooperation between countries and regions and advance the dissemination and development of Nadam culture in international academia.

Conclusion and prospects

As an important part of China's minority cultures, the international dissemination of Inner Mongolia Nadam culture is of great significance and far-reaching influence. In the new era, a Nadam sports culture accumulated over thousands of years cannot remain confined to the traditional "three manly arts"; it must keep pace with the times, innovate constantly, and use the Ordos International Nadam Conference, with its established success and influence, as a platform for international communication and worldwide dissemination [4].
By formulating scientific and reasonable communication strategies and measures to strengthen the promotion of Inner Mongolia Nadam culture in the international arena, it is possible not only to deepen the understanding and recognition of Chinese culture among people around the world but also to promote international cultural exchange and cooperation, contributing positively to the building of a community with a shared future for mankind. Looking ahead, there is good reason to believe that, with the joint efforts of all parties, the international dissemination of Inner Mongolia Nadam culture will enjoy a broader space for development and make new contributions to the prosperity and progress of world culture.
GBA mutations p.N370S and p.L444P are associated with Parkinson's disease in patients from Northern Brazil

ABSTRACT

Mutations of the GBA gene have been reported in patients with Parkinson's disease (PD) from a number of different countries, including Brazil. In order to confirm this pattern in a sample of PD patients from Northern Brazil, we conducted a case-control study of the occurrence of the two most common mutations of the GBA gene (c.1226A>G; p.N370S and c.1448T>C; p.L444P) in a group of 81 PD patients and 81 control individuals, using PCR-RFLP confirmed by direct sequencing of the PCR products. In the patient group, three patients (3.7%) were heterozygous for the GBA c.1226A>G; p.N370S mutation and three (3.7%) for GBA c.1448T>C; p.L444P. Neither mutation was detected in the control group (p = 0.0284). Patients with the c.1448T>C; p.L444P mutation showed a tendency toward earlier disease onset, but a larger sample is required to confirm this observation. Our results suggest an association between the GBA c.1226A>G; p.N370S and c.1448T>C; p.L444P mutations and the development of PD in patients from Northern Brazil.

Parkinson's disease is characterized by the progressive loss of dopaminergic neurons and by the presence of Lewy bodies, proteinaceous intracytoplasmic inclusions in the remaining neurons of the substantia nigra pars compacta and other regions of the brain. The pathogenesis of PD is still not well understood, although some degree of interaction between environmental factors and genetic predisposition appears to play an important role in the development of the disease 1,2. Significant advances have been made over the past 20 years in understanding the molecular pathogenesis of PD, contributing to the identification of candidate genes with Mendelian patterns of inheritance. Mutations have been identified in SNCA (alpha-synuclein), PRKN (parkin), DJ1 (oncogene DJ1), PINK1 (PTEN-induced putative kinase 1) and LRRK2 (leucine-rich repeat kinase 2) 3,4.

A number of studies have recorded parkinsonian manifestations in patients with Gaucher disease (GD), a lysosomal storage disorder caused by homozygous mutations in the glucocerebrosidase (GBA) gene that encode a glucocerebrosidase enzyme with reduced activity 5,6. This has led to the suggestion that mutant alleles of the GBA gene may be a risk factor for the development of parkinsonian manifestations 7,8,9. Indeed, an international multicenter study with a sample of approximately 5,000 PD patients and an equal number of controls found a strong association between GBA mutations and the risk of PD, with an odds ratio greater than five 10. This finding supports the "loss of function" hypothesis, which postulates that the reduction in enzymatic activity leads to an increase in the levels of glucosylceramide in specific regions of the brain 8. It seems likely that the presence of the mutant enzyme is a contributing risk factor rather than a direct cause of PD. One possible mechanism relates to the defective processing of toxic proteins, aggravated by a relative reduction in glucocerebrosidase activity and the resulting accumulation of glucocerebrosides 11. In 2004, Feany suggested that the binding of alpha-synuclein to lipid membranes would protect this protein from inappropriate, aggregation-prone folding 12.
Mutations of the GBA gene would alter the lipid composition of the membrane, favoring a build-up of alpha-synuclein in the cytosol and subsequently in the Lewy bodies 11. A study has already shown that the affinity of alpha-synuclein for lipid surfaces is sensitive to their composition 13.

The highest frequencies of GBA mutations have been found in PD patients of Ashkenazi Jewish ancestry, at rates of 13.7% to 31.3% compared with 4.5% to 6.2% in control groups 7,14,15,16. The frequencies recorded in PD patients of non-Jewish populations, such as Italians, Caucasian Americans, Greeks, Brazilians, British, and Taiwanese, are invariably much lower, from 3.5% to 12.0%, while controls from the same populations range from 0% to 5.3% 1,17,18,19,20,21. The lowest rate recorded to date is 2.3% in Norwegian PD patients, compared with 1.7% in controls 9. In North Africa, an earlier study found no association between PD and mutations of the GBA gene, but a more recent African study suggests a risk association between GBA mutations and PD 22.

The mutations c.1226A>G; p.N370S and c.1448T>C; p.L444P are the most widely analyzed in studies of the association between GBA mutations and the manifestation of PD 7,16,17,21,22,23,24. Given this, the present study investigated the presence of these two mutations in a PD population from Northern Brazil. In addition to systematic comparisons with previous studies, the primary aim was to contribute to the evaluation of these mutations as a risk factor for the development of PD.

Participants

In this cross-sectional study, 81 PD patients (50 male and 31 female) with a mean age of 69.5 ± 10.6 years (range 44-95) participated, together with 81 controls (52 male and 29 female) with a mean age of 67.3 ± 14.9 years (range 34-96), matched for age and gender. All patients were selected from the João de Barros Barreto University Hospital of the Federal University of Pará, where they were undergoing medical treatment; it was thus a convenience sample. All were diagnosed according to the clinical criteria established by the United Kingdom Parkinson's Disease Society Brain Bank. The patient group was a heterogeneous group of unrelated individuals with cases of both early (<50 years) and late onset of symptoms. The mean age at onset of the first symptoms was 55.12 ± 11.64 years (range 28-78). All patients were from the northern Brazilian city of Belém. The control group comprised individuals with no symptoms of PD or any other neurodegenerative disease and no family history of PD in first- or second-degree relatives; their ages varied from 41 to 96 years. Patients and controls were all volunteers and signed written informed consent. The study was approved by the Research Ethics Committee of the João de Barros Barreto University Hospital at the Federal University of Pará, under protocol number 2547/06.

Genetic analysis

DNA from both groups was analyzed for the GBA c.1226A>G; p.N370S and c.1448T>C; p.L444P mutations in three stages. The first stage consisted of the pre-amplification of a fragment extending from exon 8 to exon 11 of the GBA gene, using the primers F: 5'-ACAAATTAGCTGGGTGTGGC-3' and R: 5'-TAAGCTCACACTGGCCCTGC-3' 25. The second stage used the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) approach. For the GBA c.1226A>G; p.N370S mutation, the internal primers F: 5'-GCCTTTGTCCTTACCCTCG-3' and R: 5'-GACAAAGTTACGCACCCAA-3' were used, and the PCR products were digested with the XhoI restriction enzyme. For the GBA c.1448T>C; p.L444P mutation, the internal primers were F: 5'-TGAGGGTTTCATGGGAGGTA-3' and R: 5'-AGAGTGTGATCCTGCCAAGG-3', and the PCR products were digested with the NciI restriction enzyme. Positive and negative controls were included in all assays. The third stage was confirmation of the mutations by direct sequencing with an ABI PRISM BigDye Terminator Cycle Sequencing kit (Applied Biosystems, USA) in an ABI-PRISM 377 automatic sequencer (Applied Biosystems, USA).
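Genotype calls in a PCR-RFLP assay follow from whether the enzyme's recognition site is present in the amplicon: digestion yields different fragment patterns for the two alleles. As a minimal sketch of that logic (not the authors' pipeline), the snippet below scans a sequence for the XhoI recognition site CTCGAG (used here for p.N370S) and the NciI site CCSGG, where S is C or G (used for p.L444P); the example sequences are invented for illustration and are not the real GBA amplicons.

import re

# Recognition sites of the two enzymes used in the assay
# (IUPAC code S = C or G for NciI).
SITES = {
    "XhoI": "CTCGAG",    # assay for c.1226A>G; p.N370S
    "NciI": "CC[CG]GG",  # assay for c.1448T>C; p.L444P
}

def has_site(amplicon: str, enzyme: str) -> bool:
    """Return True if the enzyme's recognition site occurs in the amplicon."""
    return re.search(SITES[enzyme], amplicon.upper()) is not None

# Hypothetical amplicons for illustration only -- not real GBA sequences.
allele_a = "GCCTTTGTCCTTACCCTCGAAAGGTTTGC"  # no XhoI site -> uncut by XhoI
allele_b = "GCCTTTGTCCTTACCCTCGAGAGGTTTGC"  # substitution creates CTCGAG -> cut

for label, seq in [("allele_a", allele_a), ("allele_b", allele_b)]:
    print(label, "XhoI site present:", has_site(seq, "XhoI"))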
Statistical analysis

Statistical analyses used Fisher's exact test.

RESULTS

The analysis of the GBA c.1226A>G; p.N370S and c.1448T>C; p.L444P mutations began with digestion by the restriction endonucleases, and the presence of the mutations was then confirmed by sequencing analysis (Figure). Six (7.4%) of the 81 PD patients presented one of the two most common mutations of the GBA gene: three (3.7%) were heterozygous for GBA c.1226A>G; p.N370S and the other three for GBA c.1448T>C; p.L444P. By contrast, neither mutation was found in any member of the control group. The frequency of the mutant alleles GBA c.1226A>G; p.N370S and c.1448T>C; p.L444P (1.85% in both cases) in PD patients was significantly different from that of the controls (Fisher's exact test: p = 0.0284).
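The reported p-value can be reproduced from the 2×2 carrier counts (6 of 81 patients versus 0 of 81 controls). A minimal sketch using scipy, assuming a two-sided test on the carrier/non-carrier table:

from scipy.stats import fisher_exact

# 2x2 contingency table of GBA mutation carriers vs. non-carriers
#               carriers  non-carriers
table = [[6, 75],   # PD patients (6 of 81)
         [0, 81]]   # controls    (0 of 81)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.4f}")  # ~0.0284, matching the reported value

# Mutant allele frequency among patients: each of the 3 + 3 heterozygotes
# carries one mutant allele, out of 2 x 81 = 162 alleles per locus,
# i.e. 3/162 ~ 1.85% for each mutation.
print(f"allele frequency = {3 / 162:.4f}")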
Clinically, all six PD patients with one of the mutations presented typical parkinsonian phenotypes, with onset of symptoms between 28 and 71 years of age and a mean age at onset of 49.6 ± 17.4 years. Three of the patients had early onset of symptoms (<50 years), and two had a family history of PD (Table 1). The initial symptoms were asymmetric in all cases, with tremor reported in four patients, rigidity in one, and gait alteration in the other. Autonomic or cognitive dysfunction and psychiatric disturbances were not reported in any case, and most of these patients responded well to treatment with dopaminergic agonists or levodopa. Overall, the clinical symptoms of these patients were indistinguishable from those of patients without GBA mutations (Table 2).

We recorded higher frequencies of heterozygous patients (Table 3). Outside Brazil, studies of GBA mutations have been conducted in a variety of ethnic groups. Ashkenazi Jews have by far the highest frequencies of these mutations in both PD patients (13.7%-31.1%) and controls (4.2%-6.4%) 7,14,15,16. Apart from the Norwegian population, the general pattern of significantly higher-than-expected frequencies of GBA mutations in PD patients is repeated throughout most of the world 2,17,18,19,20,21,22,23,24. Ashkenazi Jews present a relatively high incidence of GD, which affects approximately one in every 10,000 individuals, so the high frequency of GBA mutations observed in Ashkenazi PD patients may be linked to the incidence of GD 17.

In Brazil, however, GD is rare, occurring in one in every 400,000 individuals 27, although the true frequency may be higher, given that not all patients are diagnosed correctly. This estimate nevertheless indicates that GD is around 40 times less common in Brazil than in the general Ashkenazi population, and it implies that roughly one in 500 Brazilian individuals may be heterozygous for GD. The relatively high frequency of PD patients in our sample who are heterozygous for GBA mutations reinforces the role of these mutations in the etiology of PD. While our frequency of PD patients was much lower than that recorded in Ashkenazi Jews, it is consistent with a study of 230 Portuguese patients, in which 6.1% of PD patients were heterozygous for GBA mutations, against only 0.7% (3 of 430) of controls. GBA c.1226A>G; p.N370S is the most common mutation in both the Portuguese and Ashkenazi Jewish populations 28.

The mean age of disease onset was lower in PD patients with GBA mutations (49.6 ± 17.4 years) than in those without (55.1 ± 11.6 years) (Table 2). This finding is consistent with some 2,6,21,22, but not all, other studies of GBA mutations in PD patients 7,9,26,30. These differences may be explained by the fact that the modifier genes contributing to the disease phenotype may vary systematically across populations; epistatic interactions between genes or specific interactions among haplotypes at multiple loci may also contribute to differences in onset 2. In both the Southeastern and Southern Brazilian study populations, all PD patients with GBA mutations had both a family history of the disease and early onset (<50 years) (Table 3). In Southeastern Brazil 17, the two patients had onset ages of 42 and 46 years; in Southern Brazil 24, the ages were 34 and 40 years. Guimarães et al. found no significant difference in family history or age of onset between PD GBA carriers and non-carriers (Table 3), although onset showed a tendency to occur at an earlier age 26. In the present study, half of the six patients with GBA mutations had early onset, whereas the others had their first symptoms at 55 years (no family history), 67 years (PD in the mother) and 71 years (no family history). Interestingly, the PD patients heterozygous for the p.L444P mutation showed a tendency toward earlier onset than those heterozygous for the p.N370S mutation (Table 1). Although no statistical test could be applied in our study, other studies have also observed differences in the PD phenotype according to which mutation is present 15,16. Another observation is that, among the four Brazilian studies, the p.L444P mutation has been more frequent than the p.N370S mutation (Table 3). Among our patients, only a third (2 of 6) of those with GBA mutations had a family history of the disease (Table 3). This corresponds to the pattern observed in some studies 2,9,22,23, but not in others, in which most patients with GBA mutations had a family history 7,17,21,24. Our results suggest that GBA mutations may confer a greater risk of PD both in patients with and in those without a family history of the disease.
It seems reasonable to assume that at least part of the variation in frequency among populations is due to differences in sample size and patient inclusion criteria, as well as in the techniques used to identify the mutations 21,30 (Table 3). The different genetic origins of the populations and varying degrees of interaction with environmental factors may also contribute to the observed differences, as suggested by Moraitou et al. 19. Although less heterogeneous than Brazil, Greek and Italian studies have found significant differences between PD patients and controls from urban and rural areas 19 and from northern and southern regions 21. Given the extremely mixed population of Brazil, research in different regions across the country is therefore necessary to properly characterize GBA mutations in our population. Spitz et al. 17 and Socal et al. 24 worked only with Southeastern and Southern populations, respectively. The population of Pará State that participated in this study has a genetic contribution of 60% European, 12% African and 28% Amerindian ancestry, whereas Southern Brazil has almost exclusively European ancestry 31. Before this study, only Guimarães et al. used samples from the North, along with Southeastern and Midwestern samples, but they did not examine frequencies by region 26.

This study further reinforces the association of GBA mutations with genetic susceptibility to the development of PD in the Brazilian population. It also shows a higher frequency of GBA mutations in a Brazilian region poorly studied in the neurogenetic field and with an ancestry different from that of the Brazilian regions where similar studies have been conducted. However, the exact molecular and cellular mechanisms underlying the association between these mutations and the development of the disease remain unknown.
Two Weeks of High-Intensity Interval Training in Combination With a Non-thermal Diffuse Ultrasound Device Improves Lipid Profile and Reduces Body Fat Percentage in Overweight Women

This study evaluated the effectiveness of an innovative strategy combining low-frequency ultrasound (LOFU) with high-intensity interval training (HIIT) to improve physical fitness and promote body fat loss in overweight sedentary women. A placebo-controlled, parallel-group randomized experimental design was used to investigate the efficacy of a 2-week combined LOFU and HIIT program (3 sessions per week). Participants were allocated to either the Experimental HIIT group (HIITEXP, n = 10) or the Placebo HIIT group (HIITPLA, n = 10). Baseline exercise testing (maximal oxygen uptake, lower limb strength and a substrate oxidation test), dietary assessment, anthropometric measures and blood sampling were completed in week 1 and repeated in week 4 to determine changes following the program (Post-HIIT). During each training session, the HIITEXP and HIITPLA groups wore a non-thermal diffuse ultrasound belt; however, the belt was switched on only for the HIITEXP group. Delta change scores were calculated for body weight, body fat percentage (Fat%), muscle mass, VO2max, hip and waist circumferences, and all lipid variables from Baseline to Post-HIIT. Statistical analysis was completed using a repeated-measures factorial analysis of variance by group (HIITPLA and HIITEXP) and time (Baseline and Post-HIIT). Results showed significant improvements in maximal oxygen uptake in both groups (HIITEXP: Baseline 24.7 ± 5.4 mL·kg−1·min−1, Post-HIIT 28.1 ± 5.5 mL·kg−1·min−1; HIITPLA: Baseline 28.4 ± 5.9 mL·kg−1·min−1, Post-HIIT 31.4 ± 5.5 mL·kg−1·min−1). Significant decreases in Fat% (HIITEXP: Baseline 32.7 ± 3.2%, Post-HIIT 28.9 ± 3.5%; HIITPLA: Baseline 28.9 ± 3.5%, Post-HIIT 28.9 ± 3.4%), waist circumference (HIITEXP: Baseline 95.8 ± 9.6 cm, Post-HIIT 89.3 ± 8.9 cm; HIITPLA: Baseline 104.3 ± 3.5 cm, Post-HIIT 103.6 ± 3.4 cm) and triglycerides (HIITEXP: −29.2%; HIITPLA: −6.7%) were observed in the HIITEXP group only. These results show that HIIT combined with LOFU was an effective intervention to improve body composition, lipid profile, and fitness. This combined strategy allowed overweight, sedentary women to achieve positive health outcomes in as little as 2 weeks.

INTRODUCTION

Sedentary behavior and physical inactivity are closely associated with the development of risk factors for metabolic syndrome, including glucose intolerance, insulin resistance, hypertension, dyslipidemia, and obesity (Eckel et al., 2010). Fat localization, particularly abdominal adipose tissue, is also a major determinant of the occurrence of metabolic disorders (Wajchenberg, 2000). Given the increasing prevalence of overweight and obesity, combined with the significant health costs and economic burden of sedentary behavior, it is important to investigate strategies that induce a loss of body fat and promote long-term weight management (Ogden et al., 2006). In this context, a balanced diet and physical activity interventions are the main approaches used to reduce body fat and improve an individual's blood lipid profile (Donnelly et al., 2009; Johns et al., 2014). Indeed, comparisons between sedentary and physically active groups in cross-sectional studies have shown the positive influence of exercise on the blood lipid profile (Durstine et al., 2001).
Moderate-intensity continuous training (MICT) is currently recommended to promote weight loss (Donnelly et al., 2009), as prolonged exercise has been demonstrated to increase fat mobilization and oxidation (Katzmarzyk et al., 2001; Lazzer et al., 2017). However, evidence suggests that high-intensity interval training (HIIT) is also an effective strategy for reducing body fat (Boutcher, 2011; Maillard et al., 2018) and could lead to a greater loss in fat mass than MICT (Wewege et al., 2017). HIIT sessions involve short periods of high-intensity exercise [80-100% of maximum heart rate (HRmax)] interspersed with passive rest or low-intensity exercise for recovery (Weston et al., 2014). HIIT programs have shown fitness benefits similar to MICT but can be completed in less time and have been perceived as more enjoyable, which can promote exercise adherence (Bartlett et al., 2011). Moreover, several studies have shown that HIIT programs have a positive effect on fat loss (Tremblay et al., 1994; Mourier et al., 1997; Gibala and McGee, 2008; Tjonna et al., 2009; Boutcher, 2011) through metabolic adaptations such as increased fat oxidation (Talanian et al., 2007), increased oxidative metabolism (Tremblay et al., 1994; Gibala and McGee, 2008) and enhanced insulin sensitivity (Babraj et al., 2009). For example, Tremblay et al. (1994) compared the effects of 20-week MICT and HIIT programs (5 sessions per week) on body fat loss. A ninefold greater reduction in subcutaneous body fat was observed following the HIIT program compared with MICT when the exercise sessions were corrected for total energy cost. Moreover, significantly greater expression of 3-hydroxyacyl-coenzyme A dehydrogenase, an enzyme involved in the β-oxidation pathway, was observed following HIIT, lending support to the ability of HIIT to improve fat oxidation to a greater extent than MICT. Similar findings were reported by Trapp et al. (2008) using a shorter session duration (20 min vs. 40 min), where a 15-week program (3 sessions per week) led to a significant decrease in total fat mass (−2.50 ± 0.83 kg) and a reduction in central abdominal fat (−0.15 ± 0.07 kg) compared with MICT and sedentary control groups. Collectively, these results demonstrate the efficacy of HIIT for improving fitness and reducing body fat. However, other non-exercise-based interventions may also promote fat loss and further improve the blood lipid profile.

Low-intensity (up to 17.5 W/cm2), low-frequency (20-200 kHz) ultrasound (LOFU) therapy is a safe, non-invasive body-contouring technique for reducing fat mass (Hotta, 2010; Tonucci et al., 2014; Friedmann, 2015; Juhasz et al., 2018). This technology uses ultrasound to produce a mechanical stress that disrupts the cellular membrane of adipose tissue (Hotta, 2010). Specifically, microcavities are created that subsequently cause cell destruction and fat liquefaction (Tonucci et al., 2014); the mobilized fat is metabolized in the liver or removed via catabolic processes. While LOFU is an emerging tool that requires further well-controlled studies, the initial findings are promising: studies using LOFU have reported a significant reduction in abdominal circumference (−2.1 cm) (Tonucci et al., 2014), decreased total fat mass (−3.5%) and reduced subcutaneous adipose tissue (mean −2.4%) (Milanese et al., 2014) across a series of treatments.
The aim of the present study was to evaluate an innovative weight loss strategy coupling LOFU with a HIIT program, assessing fitness level, body composition and lipid profile in sedentary, overweight women. We hypothesized that, in addition to the benefits of HIIT on fat metabolism and fitness, the increased fat mobilization induced by LOFU would lead to an increased uptake of fat by the exercising muscles and further reductions in fat mass and body weight. It was also anticipated that the added LOFU would allow a short-term 2-week intervention to produce changes in body composition and lipid profile comparable to those of longer 15- to 20-week training interventions.

Participants

Twenty-three healthy, sedentary women aged between 20 and 49 years were initially recruited. Participants volunteered after being fully informed of the study purpose, protocols, procedures, and potential risks of involvement. All participants provided informed consent, and the investigation was approved by a local ethics committee (University of Nice) in compliance with the Declaration of Helsinki. On the initial visit to the laboratory, each participant underwent a physical examination conducted by a cardiologist; on the basis of this examination, participants with high blood pressure, extrasystoles or serious heart rhythm abnormalities were excluded from the investigation. Participants were classified as sedentary on the basis of self-reported ≤15 min of moderate aerobic exercise per week (Bartfay and Bartfay, 2014). A further medical screening was performed, and participants with recent infections or muscular and/or joint disability, smokers, and those using hormone replacement therapy, antioxidant supplementation or medication affecting lipid or lipoprotein metabolism were excluded from the study. The phase of the menstrual cycle was also recorded, and participants began the baseline exercise testing while in the follicular phase. Participants were then randomly allocated to one of two groups via random number allocation. Due to illness, three participants withdrew from the investigation (Experimental n = 2, Placebo n = 1). Thus, we analyzed 20 participants: 10 in the Experimental HIIT group (HIITEXP) and 10 in the Placebo HIIT group (HIITPLA). There was no significant difference in body composition, anthropometric or fitness measures between the groups before the study commenced.

Procedures

A placebo-controlled, parallel-group randomized experimental design was used to investigate the efficacy of a combined HIIT and LOFU program. The investigation was conducted over a period of 4 weeks. Baseline exercise testing (maximal oxygen uptake, lower limb strength, and a substrate oxidation test), dietary assessment, body composition, anthropometric measures and blood sampling were completed in week 1 and repeated in week 4 to determine changes following the program (Post-HIIT). Both groups (HIITEXP and HIITPLA) were asked to complete the same HIIT program in weeks 2 and 3 (3 sessions per week). During each training session, the HIITEXP and HIITPLA groups wore a non-thermal diffuse ultrasound belt (Slim Sonic L-1440, Lausanne, Switzerland) using Sonic Resonance® technology; this device has been described in a previous study (Hafiz et al., 2014). The system operated at frequencies ranging from 30 to 42 kHz, with the applied intensity fixed between 5 and 8 W/cm2 cavitation power. The device weighed 1.6 kg and was placed on each participant's waist.
It was switched on for the HIITEXP group and off for the HIITPLA group for the 45 min of exercise. Participants were asked to maintain their typical dietary practices throughout the investigation and were not informed that energy intake was a variable that would be assessed. An overview of the experimental design is outlined in Figure 1.

FIGURE 1 | Overview of the experimental protocol. One-repetition maximum on leg press (1-RM Test), high-intensity interval training (HIIT), maximal oxygen uptake test (VO2max Test).

Evaluation of the Lower Limb Maximal Strength

Measurements were undertaken 2 days before and 2 days after the six HIIT sessions. Participants were familiarized with the testing procedure during a standardized warm-up consisting of jump squats (2 sets of 5 reps), countermovement jumps (2 sets of 5 reps) and submaximal isokinetic bilateral lower limb extensions at 40% 1RM (2 sets of 15 reps) on a digital leg press (eGym, Munich, Germany). Muscle strength was measured as peak force (Fmax) during an isokinetic maximal voluntary bilateral lower limb extension on the digital leg press over a single concentric movement, with velocity regulated so that the full range of motion took 2 s. Customized software translated the torque values into kilograms and calculated Fmax irrespective of joint angle. Strict care was taken to ensure identical test protocols for all participants, including standardized verbal encouragement and visual feedback provided by a real-time display of force output. Successive trials were performed until Fmax could not be improved further, which typically took seven to nine attempts (Aagaard et al., 2000).

Maximal Oxygen Uptake Test

One week before and 2 to 4 days after the training period, participants undertook an incremental cycle test to exhaustion on a stationary electromagnetically braked cycle ergometer (Monark LC6 novo, Vansbro, Sweden) to determine maximal oxygen uptake (VO2max) and maximal aerobic power (MAP). To minimize the effects of diet on physical performance, participants standardized their diet in the 24 h prior to each maximal oxygen uptake test and, 2 h beforehand, consumed a meal containing at least 2 g/kg body mass of carbohydrate, recommended by a dietician on the basis of each participant's 3-day dietary analysis. Following a 6-min warm-up at 60 W, the workload was increased by 20 W every 2 min until exhaustion. During this test, oxygen uptake (VO2) and expiratory flow (VE) were collected, and the respiratory exchange ratio (RER) was calculated as the ratio of carbon dioxide output to oxygen uptake (VCO2/VO2) using a breath-by-breath gas analyzer (Cosmed Quark CPET, Rome, Italy). Heart rate (HR) was recorded using a chest belt (Cosmed wireless HR monitor, Rome, Italy). Three criteria were used for the determination of VO2max: a plateau in VO2 despite an increase in power output, an RER above 1.1, and an HR above 90% of the predicted maximal HR (Howley et al., 1995). Expired gases and HR values were averaged every 10 s. VO2max and MAP were defined as the averages of the highest consecutive VO2 and power output values recorded over a 1-min period.
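As a minimal sketch of that definition (not the authors' software), the snippet below takes 10-s averaged VO2 samples and reports VO2max as the highest mean over six consecutive samples, i.e. a 1-min window; the sample values are invented for illustration.

import numpy as np

def vo2max_from_bins(vo2_10s_bins):
    """Highest 1-min rolling average of 10-s VO2 bins (mL/kg/min)."""
    vo2 = np.asarray(vo2_10s_bins, dtype=float)
    if vo2.size < 6:
        raise ValueError("need at least 1 min of data (six 10-s bins)")
    # Rolling mean over windows of six consecutive 10-s bins = 1 min.
    rolling_means = np.convolve(vo2, np.ones(6) / 6, mode="valid")
    return rolling_means.max()

# Invented 10-s bin values rising toward exhaustion:
bins = [18.2, 19.5, 21.0, 22.4, 23.8, 24.9, 26.0, 27.1, 27.9, 28.3, 28.1, 27.8]
print(f"VO2max ~ {vo2max_from_bins(bins):.1f} mL/kg/min")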
Substrate Oxidation Assessment

Following the maximal oxygen uptake test, participants returned to the laboratory a minimum of 24-48 h later, at the same time of day and after an overnight (12-h) fast, to determine substrate oxidation. Before this trial, they recorded all food and drink ingested in the previous 24 h in a dietary log and confirmed that they were indeed in a fasted state. This record was photocopied and returned to each participant, who was required to replicate the same dietary intake before the Post-HIIT assessment of substrate oxidation. Before exercise, resting gas exchange data were acquired for 4 min to ensure that participants were not hyperventilating. Exercise consisted of 4 min of cycling (Monark LC6 novo, Vansbro, Sweden) at 40 W, followed by 20-W increases in intensity every 3 min until RER remained above 1.0 for at least 60 s; this protocol is similar to that employed by Achten et al. (2002) but adapted for sedentary women. Cadence was maintained between 60 and 80 rpm. Gas exchange data (Cosmed Quark CPET, Rome, Italy) and HR (Cosmed wireless HR monitor, Rome, Italy) were obtained continuously, and the last 2 min of gas exchange data from each stage were averaged to calculate VO2 and VCO2 and to determine RER. Whole-body rates of carbohydrate (CHO) and fat oxidation (g·min−1) were calculated from the VO2 and VCO2 values measured during the submaximal cycling test, using non-protein RER values and the standard equations of Jeukendrup and Wallis (2005): CHO oxidation = 4.210·VCO2 − 2.962·VO2 and fat oxidation = 1.695·VO2 − 1.701·VCO2.
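These stoichiometric equations translate directly into code. A minimal sketch, assuming VO2 and VCO2 are expressed in L·min−1 so that the oxidation rates come out in g·min−1; the input values are illustrative, not measured data.

def substrate_oxidation(vo2_l_min: float, vco2_l_min: float) -> dict:
    """Whole-body oxidation rates (g/min) from the Jeukendrup & Wallis (2005)
    non-protein equations used in the study."""
    cho = 4.210 * vco2_l_min - 2.962 * vo2_l_min
    fat = 1.695 * vo2_l_min - 1.701 * vco2_l_min
    rer = vco2_l_min / vo2_l_min  # respiratory exchange ratio
    return {"RER": round(rer, 2),
            "CHO_g_min": round(cho, 2),
            "fat_g_min": round(fat, 2)}

# Illustrative values for light cycling:
print(substrate_oxidation(vo2_l_min=1.20, vco2_l_min=1.02))
# RER 0.85 -> a mix of fat and carbohydrate oxidation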
Dietary Intake Assessment

To minimize possible nutritional bias, all participants were instructed to maintain their accustomed dietary habits throughout the investigation, from 3 days before Baseline testing in week 1 to the completion of post-testing in week 4. No attempt was made to modify the nutrient composition of individual diets or total energy intake; however, the participants all worked for the same organization and were asked to eat breakfast, lunch and snacks from the cafeteria each day of the intervention. Participants recorded their dietary intake for 3 days before the Baseline maximal oxygen uptake test, including one weekend day, and for 3 days during the second week of training (HIIT WK2). These 3-day dietary records were analyzed for total energy intake and for carbohydrate, fat and protein composition using commercially available software (Nutrilog, Marans, France).

Body Composition and Anthropometric Assessments

Body composition (body weight, lean mass, and body fat percentage [Fat%]) was assessed after a 12-h fast with a bioelectrical impedance analysis device (Tanita MC780 MA; Tanita Europe B.V., Amsterdam, Netherlands), previously validated against dual-energy X-ray absorptiometry [fat mass (kg) ICC: 0.88; Lin's C: 0.89] (Verney et al., 2016), in compliance with the manufacturer's guidelines. Anthropometric assessment included height and hip and waist circumferences. Circumferences were measured at the same height and under constant tension by the same tester, using a calibrated tape measure and a standardized technique (Bernritter et al., 2011). Participants wore light shorts and stood with arms crossed and hands tucked under the axillae, and were instructed to relax their abdominal muscles, exhale, and hold the exhalation throughout each measurement. Measures were taken in duplicate and the mean value was recorded. These assessments were performed at Baseline and Post-HIIT.

Blood Analyses

Blood samples were collected before and after the 2 weeks of HIIT, following a 12-h fast. Participants were instructed to avoid alcohol and strenuous physical activity for 48 h before collection. Samples were collected from the antecubital region into 4-mL ethylenediaminetetraacetic acid (EDTA) anticoagulant tubes and serum separator tubes (SSTs). Plasma samples were immediately transferred to pre-chilled microtubes, and SSTs were immediately placed on ice and centrifuged at 3,000 rpm for 10 min. All samples were stored at −20 °C for later analysis at a commercial laboratory within 2 weeks of the completion of the training program; Baseline and Post-HIIT samples were analyzed in a single laboratory session to reduce inter-assay variation. Glycerol was analyzed by fluorometric techniques (Randox Laboratories, County Antrim, United Kingdom) (Foster et al., 1978); triglycerides (TGs) (Bucolo and David, 1973) and non-esterified fatty acids (NEFAs) were analyzed by an enzymatic colorimetric technique (NEFA kit, Biomnis, Paris, France) combining acetyl coenzyme A synthetase and acetyl coenzyme A oxidase. Fasting glucose concentration was analyzed using the Atellica® CH Glucose Oxidase test (Siemens Healthcare SA, Renens, Switzerland) (Fossati et al., 1983).

High-Intensity Interval Training Protocol (HIIT)

HIIT was performed on Monark cycle ergometers (Monark LC6 novo, Vansbro, Sweden) three times per week for 2 weeks, with all training sessions completed at a similar time of day. Two familiarization sessions were completed prior to the Baseline VO2max test to allow participants to become accustomed to wearing the LOFU device while exercising for 30 min at low intensity (RPE 11-12) (Borg, 1982). Training intensity was controlled through the maximal HR (HRmax) obtained during the maximal oxygen uptake test, and the HIIT protocol was adapted from the training program previously described by Steckling et al. (2016). All sessions began with a 10-min warm-up at 60% HRmax and included a cool-down at 60% HRmax, for a total session time of 45 min. Sessions 1 and 2 consisted of eight 2-min intervals at 90% HRmax, sessions 3 and 4 of six 3-min intervals, and sessions 5 and 6 of five 4-min intervals, each interval followed by 2 min of active recovery at 60% HRmax. All sessions were supervised by staff.
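The progression across the six sessions can be expressed compactly. A minimal sketch of the session structure described above, with per-participant target heart rates derived from a hypothetical measured HRmax of 180 bpm:

# (n_intervals, interval_min) for sessions 1-6; recovery is 2 min at 60% HRmax.
SESSIONS = [(8, 2), (8, 2), (6, 3), (6, 3), (5, 4), (5, 4)]

def session_plan(session_no: int, hr_max: float) -> list:
    """Ordered (phase, minutes, target_bpm) tuples for one 45-min session."""
    n, dur = SESSIONS[session_no - 1]
    easy, hard = 0.60 * hr_max, 0.90 * hr_max
    plan = [("warm-up", 10, easy)]
    for _ in range(n):
        plan.append(("work", dur, hard))
        plan.append(("recovery", 2, easy))
    cool = 45 - 10 - n * (dur + 2)  # cool-down fills the 45-min session
    plan.append(("cool-down", cool, easy))
    return plan

# Example for session 5 and a hypothetical HRmax of 180 bpm:
for phase, minutes, bpm in session_plan(5, hr_max=180):
    print(f"{phase:<10} {minutes:>2} min @ {bpm:.0f} bpm")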
Participant Satisfaction

At the end of the training period, participants completed a self-administered customer satisfaction questionnaire [Client Satisfaction Questionnaire (CSQ-8)] (Attkisson and Zwick, 1982). The CSQ-8 is easily scored and consists of eight items designed to measure client satisfaction with different services. Each item is scored from 1 to 4, and the final score is calculated by adding up the individual item scores (minimum satisfaction = 8, maximum satisfaction = 32).

Statistical Analysis

All data were stored in an electronic database and analyzed using specialized statistical software (SPSS v20.0, Chicago, IL, United States). Results are expressed as mean ± standard deviation (SD). Delta change scores were calculated for body weight, Fat%, muscle mass, VO2max, hip and waist circumferences, and all lipid variables from Baseline to Post-HIIT. The normality of distribution of each variable was tested using the Shapiro-Wilk test. Statistical analysis was completed using a repeated-measures factorial analysis of variance (ANOVA) by group (HIITPLA and HIITEXP) and time (Baseline and Post-HIIT). Where significant main effects were observed, Tukey's Honest Significant Difference test was performed post hoc to further discern differences. When assumptions of normality or homogeneity of variances were not met, the data were log-transformed before analysis and the means then de-transformed back to their original units. The criteria used to interpret the magnitude of effect sizes were >0.2 small, >0.5 moderate, >0.8 large, and >1.3 very large (Cohen, 1988; Rosenthal, 1996). An a priori sample analysis indicated that 10 pairs of subjects was the minimum required in a matched-pair design to reject the null hypothesis of zero response difference with a power of 0.8 and a Type I error probability of 0.05 (G*Power version 3.1.3, Universität Kiel, Germany). Statistical significance was accepted at P < 0.05. Correlation analysis was employed to explore relationships among the change scores.
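The paper does not give the exact effect-size formula; the sketch below assumes a Cohen's d computed on the Baseline-to-Post-HIIT change scores and applies the magnitude thresholds quoted above, with invented data for illustration.

import statistics

def cohens_d_paired(baseline, post):
    """Cohen's d for paired data: mean change / SD of changes (assumed formula)."""
    diffs = [b - a for a, b in zip(baseline, post)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

def magnitude(d):
    """Label an effect size with the thresholds used in the study."""
    d = abs(d)
    for cut, label in [(1.3, "very large"), (0.8, "large"),
                       (0.5, "moderate"), (0.2, "small")]:
        if d > cut:
            return label
    return "trivial"

# Invented VO2max values (mL/kg/min) for five participants:
pre = [24.1, 22.8, 26.0, 23.5, 25.2]
post = [27.0, 26.1, 29.3, 26.2, 28.4]
d = cohens_d_paired(pre, post)
print(f"d = {d:.2f} ({magnitude(d)})")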
Exercise Testing Sessions

Evaluation of the Lower Limb Maximal Force (Fmax)

Both groups improved their Fmax (HIITEXP: 16.0%; HIITPLA: 14.4%), with no significant difference between groups (P = 0.77) (Table 1).

Substrate Oxidation Assessment

Across all participants, the change in RER (Figure 2) in response to training was examined only up to 100 W in HIITPLA (n = 10) and HIITEXP (n = 9), as many women did not attain a power output greater than 120 W during the Baseline maximal oxygen uptake test. No significant difference was observed between groups. However, there was a tendency toward a greater rate of fat oxidation in HIITEXP following combined HIIT and LOFU at 60 W (Baseline: 0.45 ± 0.03 g·min−1 to Post-HIIT: 0.65 ± 0.4 g·min−1) (Figure 3) and at 80 W (Baseline: 0.31 ± 0.02 g·min−1 to Post-HIIT: 0.41 ± 0.4 g·min−1) (Figure 4).

Dietary Intake Assessment

Nutrient intake was similar (P > 0.05) in both groups at Baseline (Table 2). There was no effect of HIIT (P = 0.29) and no difference between groups (P = 0.37) in total calorie intake. However, a significant increase (16.6%, P < 0.05) in CHO intake was observed in HIITPLA, from 46.8 ± 5.9% at Baseline to 54.6 ± 4.4% at HIIT WK2, and a significant decrease (17.6%, P < 0.05) in fat intake was observed in HIITEXP, from 36.3 ± 6.4% at Baseline to 29.9 ± 3.2% at HIIT WK2. No significant difference was observed for protein (P > 0.05) in either group or period.

Body Composition and Anthropometric Assessment

Changes in body composition and anthropometric measures are presented in Table 3. There was a significant interaction (group × time) effect for body weight (P < 0.01), with a significant −1.8% (P < 0.01) decrease for women in the HIITEXP group following the HIIT program; this decrease led to a decrease in BMI in the HIITEXP group (−1.9%; P < 0.01). No significant variation in body weight or BMI was observed in the HIITPLA group (P > 0.05). Significant changes in body composition were also observed in the HIITEXP group only, with a −4.5% (P < 0.01) decrease in Fat%; in comparison, no change (0.2%, P > 0.05) in Fat% was observed in the HIITPLA group. No difference was noted in either group for lean mass (P > 0.05). An interaction effect (group × time, P < 0.01) was observed for hip circumference, with a significant decrease in the HIITEXP group (−4.1%, P < 0.01) but not in HIITPLA (−0.7%, P > 0.05). An interaction effect (group × time, P < 0.01) was also observed for waist circumference, again with a significant decrease in the HIITEXP group only (−6.7%, P < 0.01 vs. −0.8%, P > 0.05).

Blood Analyses

Changes in biochemical variables are described in Table 4. There was a significant interaction (group × time) effect for TG (P < 0.05), with a significant reduction over time in the HIITEXP group only (−29.2% vs. −6.7% for HIITEXP and HIITPLA, respectively). A significant interaction (group × time) effect was found for NEFA (P < 0.05), with a Post-HIIT decrease in the HIITEXP group (−33.9% vs. −5.7% for HIITEXP and HIITPLA, respectively). A significant interaction effect (group × time) for glycerol was recorded (P < 0.05), with a decrease in glycerol concentration in the HIITEXP group (−31.1% vs. −8.4% for HIITEXP and HIITPLA, respectively). A significant interaction was recorded for fasting glucose (P < 0.05), with a decrease over time in HIITEXP only (−5.9% vs. 0.6% for HIITEXP and HIITPLA, respectively).

Change in Subject Satisfaction

The completed CSQ-8 was returned by 18 women (9 in each group, HIITEXP and HIITPLA), a response rate of 90%. The median CSQ-8 score after the HIIT program was 29.2 for HIITEXP and 16.1 for HIITPLA, indicating a very high level of satisfaction among the women exercising with the diffuse ultrasound compared with the placebo group. Of the HIITEXP group, 92% would have the procedure performed again, compared with only 57% of the HIITPLA group; moreover, the majority (85%) of women in the HIITEXP group would recommend the program to a friend, against only 50% in the HIITPLA group. In addition, we observed a significant correlation between the level of customer satisfaction in the HIITEXP group and the reduction in circumference measurements obtained after the HIIT program (r = −0.88; P < 0.05).

DISCUSSION

The main finding of this study was that HIIT combined with LOFU resulted in a significant reduction in Fat% (−4.5%). Moreover, this improved body composition occurred with …

There is a growing body of evidence to support the efficacy of HIIT for improving fitness (Batacan et al., 2017). In the current investigation, six HIIT sessions over a 2-week period led to significant increases in VO2max in the previously untrained women of both the HIITPLA and HIITEXP groups. This finding supports the beneficial effects of short bouts of HIIT on aerobic fitness (Boutcher, 2011) compared with traditional prolonged steady-state training (Nybo et al., 2010). Indeed, when 36 untrained men completed a 12-week program consisting of either intense interval running (HIIT), strength training or prolonged steady-state moderate-intensity running (MICT), the HIIT group showed a 14% increase in VO2max (Nybo et al., 2010), while only a 7% improvement was observed in the MICT group and VO2max in the strength-training group remained unchanged. It is also noteworthy that HIIT elicited this greater increase in fitness despite participants completing just a third of the total training duration of the MICT and strength-training groups.
This finding supports the notion that training intensity is more important than training volume for the development of cardiorespiratory fitness (Wenger and Bell, 1986). Several other studies have reported increases in VO2max in untrained participants similar to those observed in the present study. For example, Talanian et al. (2007), using a similar 2-week HIIT protocol, reported a 13% increase in VO2max in previously untrained women. These improvements in fitness suggest that the combination of high training intensities (90% HRmax) and short bouts (4 min) with intermittent recovery periods provides a strong stimulus for physiological adaptation.

While reductions in fat mass with HIIT have previously been associated with improvements in the fat oxidation pathway (Astorino et al., 2017), in the present investigation there was only a trend toward a decreased RER across a range of submaximal workloads, and this trend was greater in HIITEXP than in HIITPLA. Prior investigations have linked an improved capacity for fatty acid oxidation following HIIT to an upregulation of key metabolic enzymes in the mitochondria and skeletal muscle (Talanian et al., 2007; Burgomaster et al., 2008; Perry et al., 2008; Boutcher, 2011), including significantly greater expression of 3-hydroxyacyl-coenzyme A dehydrogenase (Tremblay et al., 1994), citrate synthase (Tremblay et al., 1994; Talanian et al., 2007; Perry et al., 2008), muscle β-hydroxyacyl coenzyme A dehydrogenase (Talanian et al., 2007, 2010; Perry et al., 2008), and total muscle fatty-acid-binding protein (Talanian et al., 2007, 2010). Using a number of HIIT sessions (Katzmarzyk et al., 2001), training volume (10 × 4-min bouts) and intensity (90% HRmax) comparable to the current study, Talanian et al. (2007) reported significantly improved fat oxidation. However, in that study, and in others that found positive changes in RER, substrate oxidation during exercise was determined using a different protocol (Burgomaster et al., 2008; Perry et al., 2008; Talanian et al., 2010): rather than measuring RER during graded exercise, a low-intensity, prolonged, steady-state bout of exercise was used. This is a limitation of the current investigation, as changes might have been observed if the participants' RER had been assessed over a longer exercise duration. Conversely, this result may also be explained by the time spent at 90% of HRmax being insufficient to induce whole-body fat oxidation compared with other short-term (2- to 6-week) studies (Burgomaster et al., 2008; Perry et al., 2008).

Similar to the findings for the HIITEXP group in the present study, HIIT has previously been shown to induce fat loss. For example, Tremblay et al.
(1994) noted a reduction in fat mass after 20 weeks of HIIT. Trapp et al. (2008) confirmed these results, reporting a 2.5 kg reduction in fat mass following a 15-week HIIT program. However, the same positive changes in body fat mass were not observed in the HIIT PLA group. Despite completing the same HIIT program, fat mass (−1.5%) and body weight were unchanged. Moreover, not all HIIT programs have induced changes in body mass. A meta-analysis identified that, collectively, short-term (<12 weeks) HIIT showed no effect on body weight and fat mass in either normal-weight or overweight populations (Batacan et al., 2017). This suggests that the addition of LOFU to a HIIT program contributed to the accelerated rate of fat loss observed in the HIIT EXP group. The novel part of the present study was to add LOFU to a HIIT program. Ultrasound devices have emerged with the increasing demand for non-invasive and safe methods to reduce localized fat (Milanese et al., 2014; Tonucci et al., 2014). In this technique, a dome-shaped transducer emits pulsating low-frequency, low-intensity ultrasound waves that are directed onto a small focal point of unwanted fat tissue. Histological analysis and clinical studies have shown that this focused ultrasonic energy is released specifically in the target subcutaneous adipose tissue without causing damage to blood vessels, nerves, connective tissue, or muscles (Brown et al., 2009). This strong mechanical stimulus creates cavitation (breakdown of fat cell membranes), leading to lipolysis in subcutaneous adipose tissue and a reduction of fat deposits (Milanese et al., 2014; Tonucci et al., 2014). In this context, the reduction in abdominal fat (i.e., localized adipose tissue) after LOFU is mainly the result of mechanical disruption of subcutaneous adipocytes. The release of adipose tissue also leads to a subsequent increase in circulating triglycerides within the interstitial fluid. It has been suggested that once the mechanically disturbed triglycerides are released into the circulation, they follow the normal physiological fat metabolism pathways (Hotta, 2010). Therefore, by wearing an ultrasound device during exercise, these disrupted fat cells may more easily enter the fat oxidation pathway to be metabolized as a fuel source. This, combined with the improved ability to oxidize fat through the augmented hormonal response associated with HIIT at both the systemic (i.e., increases in circulating catecholamines and insulin sensitivity) (Boutcher, 2011) and local (i.e., elevated skeletal muscle irisin) (Archundia-Herrera et al., 2017) levels, demonstrates how the two modalities can act synergistically to reduce Fat%. Moreover, with a higher level of circulating free fatty acids, this augmented fat oxidation may also continue during the post-exercise recovery period (Greer et al., 2015; Wingfield et al., 2015). The results of the current investigation show strong support for the efficacy of combined HIIT and LOFU to target localized fat. Significant reductions in circumferences were observed for the HIIT EXP group only (waist: −6.5 ± 1.2 cm; hip: −4.6 ± 0.8 cm). These changes in circumference are greater than those previously reported using ultrasound in isolation (Moreno-Moraga et al., 2007; Teitelbaum et al., 2007; Tonucci et al., 2014). For example, Tonucci et al.
(2014) observed significant, but smaller, reductions of ∼1.5 cm in waist circumference, ∼2.1 cm in abdominal circumference, and ∼1.9 cm in umbilical circumference following five sessions (60 days) of ultrasound therapy in 20 healthy, sedentary females. Moreno-Moraga et al. (2007) observed a slightly larger 3.95 ± 1.99 cm reduction in circumference following three sessions (monthly visits) in 30 healthy patients. It should be noted, however, that these studies used fewer LOFU sessions than the present investigation. Generally, waist circumference is associated with abdominal obesity and is correlated with both visceral and subcutaneous fat (Pou et al., 2009). This is in line with the current findings, where a reduction in both waist circumference and Fat% was observed. Other studies have also shown similar reductions in fat mass with LOFU treatment (Milanese et al., 2014; Shek et al., 2016). For instance, Milanese et al. (2014) completed DXA scans on 28 non-obese women prior to and following a 10-week (2 sessions per week) ultrasound program and found that participants lost an average of 3.4% fat mass, with a 3.9% reduction in the trunk. Comparable results have been observed with HIIT alone, with significant reductions in total abdominal and visceral fat mass (Maillard et al., 2018). Nonetheless, considering the sizeable improvements in body composition observed in the current investigation, it appears that there is a synergistic effect of HIIT and LOFU. A limiting factor of fat oxidation is the release of triglycerides from the adipose tissue (Purdom et al., 2018). With the mechanical stress induced by the LOFU belt, this release is artificially stimulated and is no longer regulated by hormonal or enzymatic factors. HIIT has also been shown to promote increased levels of circulating free fatty acids during and following exercise (Wingfield et al., 2015). It is thought that these increases can promote greater fat oxidation during both exercise and the post-exercise recovery period, leading to body fat loss and an improved lipid profile (Greer et al., 2015). Indeed, in the present investigation there was a tendency toward increased fat oxidation and significant reductions in TG, NEFA, and glycerol concentrations in the HIIT EXP group. While some studies have reported decreases in TG (Elmer et al., 2016) and NEFA (Salvadori et al., 2014), these results have not been consistently observed with HIIT (Batacan et al., 2017). There is also limited evidence in the literature on the impact of LOFU on changes in the blood lipid profile, as most investigations focus on measures of body composition. Nonetheless, the improvement in lipid profile found in the present study following 2 weeks of exercise is comparable to previous HIIT interventions lasting 8 to 12 weeks (Wisloff et al., 2007). This suggests that using LOFU during HIIT can promote greater positive changes in blood lipid markers than exercise alone. A limitation of this study is that while diet was monitored, it was not strictly controlled for the duration of the training intervention. Both total caloric and macronutrient intake can influence fuel utilization during exercise (Fletcher et al., 2017). Moreover, hunger and hormonal responses were not recorded throughout the training period. However, due to the similar total energy intake reported in both groups, we hypothesized that there was no effect of training or LOFU on appetite or hunger perceptions.
However, an increase in %CHO intake at the end of the HIIT program was observed in the control group, with no change in body fat, body weight, or total calorie intake. In this context, an assessment of eating behaviors may have provided additional insight into their lack of weight loss, as many aspects of eating behavior are associated with long-term weight control and changes in brain areas implicated in appetite control (McFadden et al., 2013). A deeper understanding of hunger and hormonal responses could also have provided additional insight into the reasons underlying the significant decrease in fat intake (−6.4%) in the HIIT EXP group in the second week of training. Another limiting factor was the lack of a non-exercise control group to assess the effects of LOFU in isolation. It would also have been of interest to conduct follow-up testing 1-2 months post-intervention to determine whether the changes in body composition and lipid profile were stable. An additional limitation of the investigation was the use of bioelectrical impedance analysis (BIA) to assess body composition rather than dual-energy X-ray absorptiometry (DXA). The BIA device used in the current investigation has been found to be highly correlated with DXA for the measurement of fat mass and Fat% (Verney et al., 2016; Thivel et al., 2018). However, less agreement has been reported between BIA and DXA in the measurement of muscle mass, and lower reliability has been observed with a high initial body weight (Thivel et al., 2018). As changes in fat mass were the key parameter and the groups were closely matched when the present study commenced, BIA was considered a viable and cost-effective alternative to DXA for assessing body composition. Nonetheless, taken together, the collective findings of the study indicate that the combination of HIIT and LOFU can positively impact health, fitness, and body composition, and that further research into this exercise intervention is warranted. The high level of satisfaction among women exercising with LOFU further supports future investigation into the implementation of HIIT and LOFU. In the HIIT EXP group, 91.7% would complete the program again, compared to only 57.1% in the HIIT PLA group. These levels of satisfaction are higher than those reported using LOFU alone (Tonucci et al., 2014). This may be due to the observable changes in body composition achieved with combined HIIT and LOFU in a shorter time period, considering that previous LOFU studies provided fewer treatments over a longer time period (Moreno-Moraga et al., 2007; Tonucci et al., 2014). Moreover, the reduction in circumference measurements was strongly correlated with participant satisfaction (r = −0.88) in the HIIT EXP group. These findings suggest that the level of satisfaction is high when participants experience significant improvements in body composition following a short-term intervention. In summary, a 2-week, six-session HIIT program combined with LOFU was an efficient strategy to increase fitness level and improve body composition in sedentary overweight women. Based on the findings of the present study, it appears that the addition of LOFU to HIIT can further increase the mobilization of free fatty acids from adipose tissue, leading to greater losses in fat mass. This combined approach also accelerated positive health outcomes, such as an improved lipid profile and reductions in waist and hip circumference, in a short-term intervention.
Further research is required to confirm these novel findings and optimize the use of this strategy in both men and women.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the University of Nice. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
CH designed the research and managed the project. L-AM and XN collected and analyzed the data. KS prepared and edited the manuscript.
Female teachers' effect on male pupils' voting behavior and preference formation
This study examines the influence of learning in a female teacher homeroom class in elementary school on pupils' voting behavior later in life, using independently collected individual-level data. Further, we evaluate its effect on preference for women's participation in the workplace in adulthood. Our study found that having a female teacher in the first year of school makes individuals more likely to vote for female candidates and to prefer policy for female labor participation in adulthood. However, the effect is only observed among male, and not female, pupils. These findings offer new evidence for the female socialization hypothesis.
Introduction
As political leaders, women are expected to play a critical role in reducing the gender gap in society. Female council candidates receive more preferential votes when a female mayor has recently been elected into office (Baskaran & Hessami, 2018) 1 . Exposure to female politicians during young adulthood has a long-term influence, increasing the probability of women working in wage employment (Priyanka, 2020) 2 . Regarding social background, the gap between male and female candidates has reduced significantly as a result of changes in social norms (King & Leigh, 2010) 3 . Moreover, changes in the political system, such as women's suffrage, have had a positive impact on education and childhood outcomes (e.g., Carruthers & Wanamaker, 2015; Miller, 2008). Many studies have highlighted the factors that increase the probability of females winning elections (Bauer, 2020; Hogan, 2007; King & McConnell, 2003; Lublin & Brewer, 2003; Moehling & Thomasson, 2020). However, the effect of female teachers on voting behavior has not been examined. Therefore, this study investigates how people come to vote for female candidates and to prefer gender equalization by considering the effect of female teachers in early childhood education. Our investigation of the effect of female teachers on pupils' voting behavior and preferences later in life is inspired by existing studies suggesting a cross-gender effect: mothers influence their sons to prefer working women (Kawaguchi & Miyazaki, 2009), and the wives of men whose mothers worked are significantly more likely to work (Fernandez, Fogli, & Olivetti, 2004). In addition to the effect of working mothers on their sons, various other gender-matching effects have been observed. Having daughters transforms a man's view on women's empowerment in society (e.g., Glynn & Sen, 2014; Milyo & Schosberg, 2000; Oswald & Powdthavee, 2010; Washington, 2008) 6 . These findings emphasize women's influence on men's views and preferences, which is called female socialization. We independently collected individual-level data through an internet survey directly after the election in Japan. In the survey, we asked respondents about their views on active female participation in society and whether they voted for female candidates. We also inquired about the genders of their homeroom teachers in elementary school. Based on the data, we found the following: Generally, women were more likely than men to prefer active female participation in society and to vote for female candidates. Male pupils who had a female teacher in the first year of elementary school were more likely to vote for female candidates and prefer female social participation than those who had a male teacher. However, this effect was not observed among female pupils.
This implies that female teacher-male pupil matching reduces the gender difference in voting behavior and influences one's preference to support active female involvement in society. The study aids in bridging education economics and voting behavior to provide new evidence that early childhood education facilitates a long-term change in male pupils' views, thus promoting female socialization. The remainder of this article is organized as follows. Section 2 proposes the testable hypotheses. Section 3 describes the setting and data. Section 4 presents the empirical methodology. The estimation results and their interpretation are presented in Section 5. The final section presents some reflections and conclusions.
Miller (2008) found that suffrage extension is positively related to public goods expenditures. Further, Miller (2008) indicated that women gaining votes led to reduced child mortality. These findings suggest that increased public goods expenditures were allocated in ways that improved child health. Carruthers and Wanamaker (2015) found that suffrage led to an increase in public school expenditures. From these findings, we can argue that women consider the well-being and future of children. During school life, children are more likely to appreciate women's intentions if they belong to a female teacher's homeroom class. Naturally, pupils are more likely to rely on and trust female homeroom teachers than male teachers. This gives pupils motivation to support female social participation, which persists later in life.
Hypothesis
However, female pupils are motivated to participate in society for themselves, regardless of the gender of their teachers. Hence, the effect of female teachers is observed among male, but not female, pupils. Female socialization is therefore promoted through male pupils. Thus, we propose the following hypotheses:
Hypothesis 1: Having a female teacher influences male pupils to vote for female candidates in elections after they become adults.
Hypothesis 2: Having a female teacher influences male pupils' preference for female labor participation later in life.
The setting and the data
To investigate voting behavior, we obtained individual-level data through a web-based survey in July 2016, conducted immediately after the House of Councilors election in Japan. The Nikkei Research Company was commissioned to conduct the web survey. The survey was openly posted on the Nikkei Research platform and remained active until the target of over 10 000 observations had been collected. A total of 12 176 respondents were asked to complete the questionnaire. In the questionnaire, we asked respondents whether they voted for female candidates and inquired about their views on female participation in the workplace. In addition, we obtained basic economic and demographic data such as sex, age, educational background, parental educational background, household income, job status, marital status, number of siblings, residential prefecture, and residential prefecture at six years of age. Furthermore, we gathered information about their educational experience, such as whether they worked and learned in groups in elementary school. This data was collected because prosocial behavior may be facilitated by teaching practice (Algan et al., 2013). The sample's demographic composition is equivalent to the 2015 Japan Census composition.
As for educational background in Japan, according to OECD statistics, the percentage of individuals graduating from university was approximately 50.5 % in 2016 7 . In our dataset, the percentage of those who graduated from university was 56 %. Hence, to a certain extent, the dataset represents Japanese society. The observations used for estimations are slightly reduced because some respondents did not respond to questions on variables included in the model. Japan has 47 prefectures, and there were 47 election districts, which were equivalent to prefectures. We asked for participants' residential prefecture to identify the election districts where they voted. Out of the 47 prefectures, there were no female candidates in 15 prefectures in the 2016 election. When estimating voting behavior, we limited the sample to prefectures where female candidates stood in the 2016 election. Respondents who did not cast a vote were not included in the sample. Hence, at most, the samples used for the analyses of voting behavior and female participation were 2192 and 3350 observations, respectively. Table 1 provides definitions of the key variables and their descriptive statistics, based on the sample used for the estimation of the view of female participation. In the first year, 81 % of the pupils were assigned to a female teacher class. This rate monotonically declined to 40 % in the sixth year. As is well known, in Japan, female teachers tend to teach lower grades compared to male teachers. This is because the workload is larger in higher grades. For instance, teachers are obliged to accompany higher-grade students on overnight school excursions. Commonly, female teachers balance housework along with their work as teachers. Therefore, they avoid teaching a higher-grade class. Parents cannot choose the gender of the teacher, especially in the first grade. This means that the random assignment criteria of natural experiments have been met (Yamamura & Tsutsui, 2019). Teachers become acquainted with pupils' characteristics and dispositions while teaching them and observing their behavior in school. In higher grades, schools have more information regarding the matching between pupils and teachers. If a conflict arises, the pupil is assigned to a presumably more suitable teacher in the next year. If pupils were inappropriately matched with a female teacher class in the past, they might be assigned to a male teacher class. In other words, the assignment to a female teacher class in higher grades seems to be determined by accumulated information about the compatibility between teacher and pupil genders. Therefore, the assignment to a female teacher's class in the first grade is more random and exogenous than in other grades. Hence, the first-grade assignment is free from selection bias. As explained, the assignment of a class was randomized in the first year. However, the probability of being assigned to a female teacher class may vary according to the female teacher ratio in the area where respondents resided in the year of entering school. From official surveys, we gathered the numbers of male and female teachers for the 47 prefectures in different years, which enabled us to calculate the female teacher ratio 8 . We also gathered information about respondents' residential prefectures at six years of age. We then matched the ratio of female teachers with the respondents by considering their years of entering school and their respective prefectures at the age of six 9 .
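A minimal sketch of this matching step is shown below; the column names and values are hypothetical placeholders rather than the authors' actual dataset.

```python
# Illustrative sketch (hypothetical column names and values) of matching
# each respondent to the female teacher ratio of the prefecture where they
# lived at age six, in the year they entered elementary school.
import pandas as pd

respondents = pd.DataFrame({
    "id": [1, 2],
    "prefecture_age6": ["Tokyo", "Osaka"],
    "year_entered_school": [1975, 1990],
})

# Female teacher ratio by prefecture and year, from official surveys
teacher_ratio = pd.DataFrame({
    "prefecture": ["Tokyo", "Osaka"],
    "year": [1975, 1990],
    "female_teacher_ratio": [0.52, 0.61],
})

merged = respondents.merge(
    teacher_ratio,
    left_on=["prefecture_age6", "year_entered_school"],
    right_on=["prefecture", "year"],
    how="left",
)
print(merged[["id", "female_teacher_ratio"]])
```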
Table 1 shows that the female teacher ratio ranged between 0.24 and 0.73, indicating a wide variation in the probability of being assigned to a female teacher class across time and place. From our original data, we calculated the years of being assigned to a female teacher class during the elementary school period, which ranged between 0 and 6, as there are six grades in Japanese elementary schools. To compare the relationship between the female teacher ratio and the probability of being assigned to a female teacher class, we calculated their standardized values, which are illustrated in Figure 2. Figure 2 shows that both years of female teacher class and the female teacher ratio increase for younger respondents. Figure 3 shows a comparison of years of being in a female teacher class between males and females. We observed a similar trend of being assigned to a female teacher class between them. In Table 2, we compare the ratio of belonging to a female teacher class between male and female pupils. With the exception of the second grade, there was no statistically significant difference. By the second grade, teachers had gathered information about the characteristics of the pupils through teaching them in the first grade. Therefore, some pupils are selectively assigned to a male teacher class in the second grade if they are better suited to male teachers than female teachers. However, in general, there is no bias when pupils are assigned to a female teacher class. Moreover, we also checked the female teacher ratio of residential prefectures and the probability of being assigned to a female teacher class. Table 3 shows the mean difference test of the female teacher ratio in residential prefectures between the group in a female teacher class and the group in a male teacher class in each grade. For male respondents, the female teacher ratio was higher by approximately 0.03 points for those assigned to a female teacher class than for those in a male teacher class, regardless of grade. The statistical significance level was 1 % in all grades. A similar tendency was also observed for female respondents. These observations suggest that the female teacher ratio in the residential prefecture increased the probability of being assigned to a female teacher class despite random assignment. It is plausible that younger respondents are better able to recall their teachers' sex in elementary school, which could cause bias. However, as illustrated in Figure 4, response rates for questions about teachers' sex in elementary school are almost 80 % and are almost the same in each cohort group. Therefore, such bias is unlikely to have occurred. According to a 2015 survey on information technology, over 90 % of the working-age population in Japan are web users. Therefore, selection bias toward web users does not need to be considered. 10 According to the definitions of key variables in Table 1, Vote woman is a dummy variable that takes the value 0 or 1, while Support woman ranges between 1 and 5. The larger these variables, the more likely respondents are to support active female participation in society. To compare these variables, we standardized them in Figure 5. A cursory examination of Figure 5 shows that female respondents are more likely to support women's active roles, which is in line with intuition. The difference in Support woman between male and female respondents was larger than that in Vote woman.
We interpreted this as follows: female candidates hold varied political opinions, including differing beliefs about women's roles in society. Hence, some female candidates may have a traditional view and are therefore less likely to support women's active role in society than male candidates in the same election district. Therefore, Support woman captures the respondents' views more directly than Vote woman. Figures 6 and 7 illustrate the differences in Vote woman and Support woman, respectively, between those assigned to female and male teacher classes in each grade. Figure 6 shows that, overall, male pupils assigned to a female teacher's class in lower grades were more likely to vote for female candidates. In contrast, female pupils assigned to a male teacher class were more likely to vote for female candidates. From Figure 7, we see that both male and female pupils assigned to a female teacher class were more likely to support women's participation in the workplace. The difference in these variables between female and male teachers' classes is largest in the first grade and declines as pupils are promoted. These findings suggest that female teachers may facilitate a positive view of female roles in society among pupils when they become adults. This effect is strongest in the first grade.
Empirical methodology
Our baseline model assesses the influence of a female teacher homeroom class in elementary school on pupils' voting behavior and views on women's roles later in life. The estimated function takes the following form:
Vote woman_i (or Support woman_i) = α0 + α1 (Female teacher in first year)_i + α2 (Years of female teacher from the second to sixth year)_i + X_i B + u_i.
Vote woman_i or Support woman_i is the dependent variable. Vote woman is a dummy variable that takes the value 0 or 1, and thus the Probit model was used. The Support woman variable ranges between 1 and 5, and thus the OLS model was used. The key independent variable is Female teacher in first year because it captures the random assignment to the female teacher class. Its coefficient has a positive sign if female teachers in the first year influence pupils to vote for female candidates and support women's active participation in society later in life. In addition to the full-sample estimation, we use subsamples divided by the respondents' gender to examine teacher-pupil gender-matching effects. If a cross-gender-matching effect exists, Female teacher in first year should show a significant positive sign only for the male sample (e.g., Oswald & Powdthavee, 2010; Washington, 2008). To control for the influence of female teacher classes in higher grades, we included Years of female teacher from the second to sixth year, which aggregates years in female teacher classes in higher grades during the elementary school period. In an alternative specification, instead of Years of female teacher from the second to sixth year, we simply added five female teacher class dummies. In addition, the vector of control variables is denoted by X_i and the vector of estimated coefficients by B. As control variables, we added the female teacher ratio of the respondent's residential prefecture at the respondent's school age. Further, we added the number of female candidates in the respondent's election district. In addition, the control variables were seven dummies for educational background as a proxy for the quantity of education, age, 17 income dummies, and 19 occupation dummies.
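As a rough illustration of this specification, the sketch below estimates a Probit model for Vote woman and an OLS model for Support woman with standard errors clustered by prefecture, as described above. The data are synthetic and the variable names are our own shorthand, not the authors' dataset columns; the actual models include the full set of controls discussed in the text.

```python
# Rough sketch of the baseline estimation (synthetic data; variable
# names are illustrative, not the authors' actual dataset columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "vote_woman": rng.integers(0, 2, n),         # 0/1: voted for a female candidate
    "support_woman": rng.integers(1, 6, n),      # 1-5 Likert scale
    "female_teacher_y1": rng.integers(0, 2, n),  # female teacher in first grade
    "years_female_y2_6": rng.integers(0, 6, n),  # years in female teacher class, grades 2-6
    "age": rng.integers(20, 70, n),
    "prefecture": rng.integers(0, 47, n),        # cluster variable
})

# Probit for the binary voting outcome, SEs clustered by prefecture
probit = smf.probit(
    "vote_woman ~ female_teacher_y1 + years_female_y2_6 + age", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["prefecture"]}, disp=0)
print(probit.get_margeff().summary())  # marginal effects, as reported in Table 4

# OLS for the 1-5 support variable
ols = smf.ols(
    "support_woman ~ female_teacher_y1 + years_female_y2_6 + age", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["prefecture"]})
print(ols.summary().tables[1])
```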
We also controlled for variables such as group work and pro-competition curricula, because specific educational features such as teaching practices could have influenced pupils' preferences and world views (e.g., Algan et al., 2013; Aspachs-Bracons et al., 2008; Milligan et al., 2004). It is plausible that family conditions also influenced the formation of preferences. Parents' education levels are controlled for by including the father's and mother's educational attainment dummies. Further, family composition is an important factor that affects views on social and economic issues (e.g., Borrell-Porta, Costa-Font, & Philipp, 2019; Oswald & Powdthavee, 2010; Washington, 2008). Therefore, the number of siblings and the dummies of marital status are included separately. The estimation results for these control variables are not reported. However, these variables are included in all estimations. 11
Estimation results
As Table 1 shows, the sample included election districts without female candidates. Hence, to examine voting behavior, we used a sub-sample that excluded observations from districts without female candidates. The sample size for the estimation of Support woman is therefore larger than that of Vote woman. We began by examining the results of the Vote woman estimations. Table 4 shows that the coefficient of Female teacher in first year was positive in all columns. We observed statistical significance for the male sample, but not for the female sample. Years of female teacher in higher grades and the other female teacher dummies did not show a significant positive sign, with the exception of Female teacher in sixth year in column (4). These results are consistent with our prediction. The marginal effect of Female teacher in first year is 0.10-0.11 for the male sample, which indicates that a man was 10 or 11 % more likely to vote for female candidates if he was assigned to a female teacher class in the first year of elementary school. These results support Hypothesis 1. Apart from this, the other variables did not exhibit statistical significance. Regarding the results of Support woman, Table 5 shows that the coefficient of Female teacher in first year was positive in all columns. We observed statistical significance for the male sample as well as for the sample composed of male and female respondents, but not for the female sample. The value of the coefficient was 0.15 in column (4). From our analysis, we conclude that female teachers influenced the world view of male pupils directly after entering elementary school, driving them to support female participation in politics as well as in the labor market.
Conclusion
This study explored how education reduces the gender gap in society. In particular, we focused on the effect of female teachers on the formation of male pupils' views in early childhood education. For this purpose, we employed a quasi-natural experiment of teacher-student random gender matching in the first grade of elementary schools in Japan. Using independently collected individual-level data gathered directly after the House of Councilors election, we found that males are more likely to vote for female candidates and to prefer policy for female labor participation if they belonged to a female teacher's homeroom class in the first grade of elementary school. However, this effect was not observed for female respondents. From these findings, we argue that exposure to the opposite gender in early childhood leads males to have female role models.
This holds true not only within one's family (Fernandez et al., 2004; Kawaguchi & Miyazaki, 2009), but also in school. Therefore, female teachers play a role similar to that of a mother for male pupils in early childhood. This study contributes to the endeavor of bridging education and voting behavior to support the female socialization hypothesis. Like all empirical work, this study has some limitations. The key independent variable (the female teacher dummy) is a recall variable that possibly suffers from measurement error bias 12 . Further, the dependent variable of voting for females is binary, which may have resulted in a significant loss of information that could have been captured with a continuous dependent variable. However, these limitations do not undermine the study's value. Rather, they highlight that further work on the effects of teachers' genders on voting behavior is warranted.
References
Borrell-Porta, M., Costa-Font, J., & Philipp, J., 2019. Does parenting daughters alter attitudes towards gender norms? Oxford Economic Papers.
Washington, E., 2008. Female socialization: How daughters affect their legislator fathers' voting on women's issues. American Economic Review, 98, 311-332.
Note: "High" means high school. "Vocational" means vocational school, which is entered after graduating from high school. "Graduate" means graduate school.
Note: Apart from the job dummies indicated, 13 other job dummies were included in the estimation model: (1) chief executive officer, (2) temporary employee, (3) public officer, (4) specialists (lawyers, accountants), (5) self-employment, (6) SOHO (small office/home office), (7) part-time worker, (8) outside worker, (9) house worker, (10) university student, (11) high school student, (12) unemployed or retired, (13) other worker.
Notes: *** and ** denote statistical significance at the 1 % and 5 % levels, respectively. Values without parentheses are marginal effects. Values in parentheses are standard errors, clustered by prefectures. The sample is restricted to areas with female candidates. In all columns, various control variables are included but not reported: dummies for education method (a value of 1 if there was a task in which students worked together as a group at elementary school, and 0 otherwise; a value of 1 if there were running races during sporting events at elementary school and teachers ranked the finishing order, and 0 otherwise), schooling years, age, number of children, household income, marital status dummies, job dummies, father's and mother's educational attainment dummies, and a constant.
Notes: ***, **, and * denote statistical significance at the 1 %, 5 %, and 10 % levels, respectively. Numbers in parentheses are standard errors clustered by prefectures. In all columns, the control variables in Table 4 are included, but these estimates are not reported.
Ingesting Self-Grown Produce and Seropositivity for Hepatitis E in the United States
Background
Hepatitis E virus (HEV) is a major cause of hepatitis in developing and industrialized countries worldwide. The modes of HEV transmission in industrialized countries, including the United States, remain largely unknown. This study is aimed at evaluating the association between HEV seropositivity and consumption of self-grown foods in the United States.
Methods
Cross-sectional data was extracted from the 2009-2012 National Health and Nutrition Examination Survey (NHANES). Data from the dietary interview and the serum HEV IgG and IgM enzyme immunoassay test results were linked and examined. Univariate and multivariable logistic regression models were used to evaluate the significance and effect size of an association between self-grown food consumption and hepatitis E seropositivity.
Results
The estimated HEV seroprevalence in the civilian, noninstitutionalized US population was 6.6% in 2009-2012, which corresponds to an estimated hepatitis E national seroprevalence of 17,196,457 people. Overall, 10.9% of participants who ingested self-grown foods had positive HEV antibodies versus 6.1% of participants who did not consume self-grown foods (P < 0.001; odds ratio (OR) 1.87; 95% CI 1.41-2.48). In the age-stratified multivariable analysis, the correlation between ingesting self-grown foods and HEV seropositivity was significant for participants 40-59 years old, but not overall, or for those <40 years or ≥60 years.
Conclusions
Ingesting self-grown food, or simply the process of gardening/farming, may be a source of zoonotic HEV transmission.
Introduction
HEV causes an enterically transmitted acute viral hepatitis and is a major cause of sporadic and epidemic hepatitis worldwide [1-3]. Infection by HEV is typically self-limited and resolves within 4-6 weeks; however, the severity of infection ranges from subclinical to fulminant hepatitis [2,4,5]. Hepatitis E is unique in its severity among pregnant women, who may face up to a 28% mortality rate, and the immunocompromised, who frequently progress to chronic hepatitis E without antiviral treatment [6-8]. Although identified clinical cases are rare, asymptomatic hepatitis E infection is now known to be widespread in industrialized nations [1,4]. HEV transmission is linked to fecally contaminated drinking water [9,10]. These cases are usually reported as part of a large-scale outbreak, often following a flood or another natural disaster in developing countries [5,11]. The modes of HEV transmission in industrialized countries, including the United States, remain largely unknown. For years, it was suspected that HEV infections diagnosed in industrialized nations stemmed from travel to hyperendemic regions, especially in South Asia and North Africa [2]. Recent studies have demonstrated autochthonous (locally acquired) infection in several industrialized countries [12-14]. Unlike hepatitis A virus, HEV is not readily transmitted by person-to-person contact; therefore, it is postulated that these autochthonous infections originate from a zoonotic source [15]. Hepatitis E is recognized as a zoonotic disease [16]. Swine are a known HEV reservoir, and many wild and domestic animals have been connected to transmission [5,10,16]. People whose occupations involve working with animals, especially livestock, have higher rates of HEV seropositivity [17,18]. More recent studies support zoonotic transmission of HEV from domestic and pet animals [16].
Foodborne transmission has also been clearly described as a mode of sporadic zoonotic transmission in industrialized countries. Reports highlight consumption of raw or undercooked pork products (wild and domestic), game meats, shellfish, and produce as possible sources of human HEV infection [4,10,19-21]. We propose that humans may be exposed to HEV through self-grown foods, such as fruits and vegetables from a home garden contaminated by a zoonotic source. Our aim is to use cross-sectional data to evaluate the association between HEV seropositivity and consumption of self-grown foods.
Materials and Methods
Using data collected in the 2009-2012 National Health and Nutrition Examination Survey (NHANES), we performed a cross-sectional study to assess whether ingesting self-grown foods is associated with detection of HEV-specific IgM and IgG. The NHANES sample is a stratified, multistage probability cluster sample designed to represent the total civilian noninstitutionalized US population. Probability sampling weights are applied to the collected data to account for oversampling and nonparticipation and are used to calculate national estimates. Standardized interviews, physical examinations, and tests of biologic samples are used to collect data in five major categories: demographics, dietary information, physical and dental examinations, laboratory tests, and questionnaires. Additional details concerning NHANES sampling and survey design can be found in the NHANES section of the Centers for Disease Control and Prevention (CDC) website [22]. The NHANES questionnaire asks participants to report demographics including age, gender, race/ethnicity, and birthplace. Race/ethnicity is reported in five categories: Mexican American, other Hispanic, non-Hispanic White, non-Hispanic Black, or other/multiracial. As for the dietary questionnaire, two dietary interviews were administered to all participants. The first interview was administered in person at the designated NHANES site. The second interview was administered over the phone 3-10 days later. Proxy interviews were administered for children 1-11 years old and for individuals unable to reliably answer questions themselves. During each dietary interview, participants were asked to report the types and sources of each food item eaten over the previous 24-hour period. Food items grown, caught, or hunted by the participant or someone the participant knew were categorized as self-grown foods. The recorded self-grown foods were then subcategorized into groups, which included fruits, vegetables, meat products, dairy, and grains. HEV-specific IgM and IgG serologic tests were performed on all participants ≥6 years of age using enzyme immunoassay (EIA) kits provided by DSI-Diagnostics System Italy of Saronno, Italy. The CDC reported the IgM assay (DS-EIA-ANTI-HEV-M) as having 98% sensitivity and 95.2% specificity, and the manufacturer reported the IgG assay (DS-EIA-ANTI-HEV-G) as having 100% sensitivity and 97.5% specificity [13]. Detailed specimen collection and processing protocols can be found in the NHANES Laboratory Procedures Manual [23]. Further information concerning the anti-HEV EIA kits can be found in the CDC Laboratory Procedure Manuals for HEV testing [24,25].
2.1. Statistical Methods
The raw number of survey participants is reported to describe the inclusion and exclusion of subjects when combining data from the demographic questionnaire, dietary interview, and laboratory NHANES datasets.
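To illustrate this linkage step, the sketch below merges the demographic, dietary, and serology components on the respondent identifier. NHANES files share the sequence number SEQN across components, but the file names and the food-source coding used here are illustrative placeholders rather than the exact NHANES variables.

```python
# Sketch of linking NHANES components (file names and the food-source
# code below are illustrative; NHANES files share the respondent
# sequence number SEQN across components).
import pandas as pd

demo = pd.read_sas("DEMO.XPT", format="xport")    # demographics
diet = pd.read_sas("DR1IFF.XPT", format="xport")  # dietary interview, one row per food
hev = pd.read_sas("HEPE.XPT", format="xport")     # HEV IgG/IgM serology

# Flag respondents reporting any self-grown food item
SELF_GROWN_CODE = 5  # placeholder for the "grown or caught by you" source code
self_grown = (
    (diet["DR1FS"] == SELF_GROWN_CODE)  # DR1FS: source-of-food column (illustrative)
    .groupby(diet["SEQN"])
    .any()
    .rename("self_grown")
    .reset_index()
)

analytic = demo.merge(self_grown, on="SEQN", how="left").merge(hev, on="SEQN")
```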
The NHANES demographic and dietary questionnaires are available online [25-27]. All further analyses and seroprevalence estimates accounted for the NHANES survey design and included appropriate survey weights, which account for the differential probabilities of selection, nonresponse, and oversampling. Univariate analysis of the respective variables and seroprevalence was conducted first, with standard errors estimated by Taylor series linearization. Comparison of continuous variables was performed using Student's t-statistic. Univariate and multivariable logistic regression evaluated the significance and effect size of the investigated independent variables, with hepatitis E seropositivity (either IgG or IgM positive) as the dependent variable. The independent variables evaluated were ingestion of self-grown food, sex, race/ethnicity, and foreign versus US birth. The NHANES race/ethnicity categories were used with the modification that the "Mexican-American" and "other Hispanic" categories were combined. The seroprevalence by age was calculated and used to estimate the force of infection, following the method described by Shkedy et al. [28]. Rates of both self-grown food consumption and hepatitis E seropositivity were evaluated to identify whether self-grown food was significantly associated with hepatitis E seropositivity among various age groups. These age groups were based on the force of infection curve. An alpha level of 0.05 was utilized to determine statistical significance. Analyses were conducted using SAS 9.3 (SAS Institute, Cary, NC). The study was reviewed and deemed exempt by the responsible institutional review board because the data is deidentified and publicly available online.
Results
20,293 individuals participated in the 2009-2012 NHANES survey; 16,984 were ≥6 years old and therefore eligible to participate. Out of all eligible individuals, 14,951 had recorded HEV serologies. Participants with insufficient blood drawn or incomplete laboratory exams do not have recorded HEV serologies. Serum analysis showed that 894 (6.0%) individuals were positive for HEV-specific IgG and 146 (1.0%) were positive for HEV-specific IgM. Evaluating participants positive for either HEV-specific IgG or HEV-specific IgM yielded a seroprevalence of 6.6% (95% confidence interval (CI) 5.8%-7.4%), which corresponds to an estimated national HEV seroprevalence of 17,196,457 people. The dietary interviews of participants with recorded HEV serologies included 317,416 foods. A total of 1833 items were reported in the self-grown food category. Self-grown foods reported by HEV-seropositive participants included fruits (45%), vegetables (23%), meat products (15%), dairy products (9%), honey/sweeteners (5%), and grains (3%). In addition to consuming self-grown foods, several other factors were found to be associated with HEV seropositivity. HEV-seropositive individuals were found to be older, with a median age of 57.6 years (IQR 45.0-69.0), compared to a median age of 40.7 years (IQR 23.7-56.3; P < 0.001) in the overall serum-tested population. This finding is consistent with previous reports [29,30]. Furthermore, the estimated seropositivity steadily increases throughout life, with the force of infection peaking at 36 per 100,000 in the 6th decade of life. The prevalence and force of infection by age are presented in Figure 2.
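As a simplified illustration of the force-of-infection idea (the paper follows Shkedy et al. [28]; the sketch below is not their exact estimator), under a catalytic model the seroprevalence π(a) and the age-specific force of infection λ(a) are related by λ(a) = π′(a)/(1 − π(a)), which can be evaluated numerically from an age-prevalence curve. The prevalence values used here are hypothetical.

```python
# Minimal sketch of recovering an age-specific force of infection from
# seroprevalence under a catalytic model (hypothetical prevalence values;
# a simplification of the Shkedy et al. approach used in the paper).
import numpy as np

ages = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)
prev = np.array([0.01, 0.02, 0.04, 0.06, 0.09, 0.12, 0.14])

d_prev = np.gradient(prev, ages)  # numerical derivative pi'(a)
foi = d_prev / (1 - prev)         # lambda(a) = pi'(a) / (1 - pi(a))

for a, f in zip(ages, foi):
    print(f"age {a:.0f}: force of infection = {f * 1e5:.0f} per 100,000 per year")
```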
Univariate and multivariable-adjusted odds ratios for HEV seropositivity by selected characteristics are shown in Table 1, with multivariable-adjusted odds ratios further stratified by age in Table 2. Birth country was a significant associative factor, as the seroprevalence of participants born outside of the United States (9.3%; SEP 0.88; OR 1.78; 95% CI 1.23-2.58) was higher than that of US-born participants (6.0%; SEP 0.45). In gender comparisons, young females (6-39 years) were found to have higher HEV seropositivity compared to males of the same age group (OR 1.44, 95% CI 1.03-2.02). Finally, non-Hispanic Black participants had lower rates of HEV infection than White participants (OR 0.60, 95% CI 0.46-0.79). Other race/ethnicity comparisons were not statistically significant.
Discussion
This study demonstrates an association between ingesting self-grown food and seropositivity for HEV. Based on the 2009-2012 National Health and Nutrition Examination Survey (NHANES), the estimated HEV seroprevalence of the US civilian and noninstitutionalized population is 17.2 million people (6.6%). Since the best-characterized modes of HEV transmission, drinking fecally contaminated water and working with infected swine, are relatively rare in the US, exploring other modes of transmission is critical to understanding HEV epidemiology. HEV transmission through contaminated self-grown produce is plausible based on the ever-expanding range of identified zoonotic hosts and the identification of HEV on agricultural products [19,20,31]. A mounting body of evidence has demonstrated that other animal species play a major role in HEV transmission to humans [10,16,32]. Despite documentation of cross-species HEV transmission to humans, the mechanisms of infection remain unclear [33]. Swine are a well-known reservoir, as multiple studies have reported positive HEV serologies in swine worldwide [5,34]. Recent reports have expanded the range of possible zoonotic hosts after finding similar strains of HEV in cattle, deer, rabbits, wild boar, birds, rats, and several other animals [16,33]. As the list of identified zoonotic sources has expanded, so have the possible modes of transmission. Consuming raw or undercooked meats has been linked to cases of human HEV in multiple countries [4,10,21,35]. Occupations that require frequent contact with livestock are known to increase the likelihood of HEV infection [17,18,36,37]. Gardeners who work with, or live near, these animals may be spreading HEV through irrigation runoff, or transferring the virus directly from animal feces to garden products. Therefore, even the process of gardening (planting, weeding, and preparing soil) may be a cause of HEV exposure. Recent North American and European studies have demonstrated that HEV can be found on agricultural products such as raspberries, strawberries, lettuce, and other vegetables at the primary production level and at the point of sale [19,20,31]. Teshale and Hu reported an increased HEV prevalence in Americans who consumed more green leafy vegetables or lettuce salad [3]. In our study, fruits and vegetables comprised two-thirds of all reported self-grown foods, with tomatoes, cucumbers, and apples representing the most commonly reported self-grown food items among seropositive participants. Still, it is important to bear in mind that the NHANES nutritional survey is a brief snapshot of participant-reported foods ingested by individuals.
The behavior of gardening, farming, and ingesting self-grown foods is likely a pattern that persists across all seasons and types of produce. In our univariate analysis, consuming self-grown food was found to be significantly associated with HEV seropositivity in all age ranges. The multivariable logistic model, however, found ingesting self-grown food to be significantly associated with positive HEV serologies only for participants 40-59 years old. This study identified other factors associated with HEV seropositivity. As expected, increasing age corresponded with increased HEV seroprevalence. Gender played a role in HEV infection in the younger population (6-39 years), as females were found to have a higher HEV seroprevalence than males. Among all races/ethnic groups, our results, in concordance with other reports [12,29], found the highest rates of HEV seroprevalence in the non-Hispanic White population. Finally, foreign birth was significantly associated with positive HEV serologies only for participants in the youngest age category (6-39 years). This finding is consistent with the epidemiology of HEV infection in developing nations, where young adults have the highest rates of HEV detection and disease [38-40]. Unfortunately, travel history was not included in the NHANES survey, and we are therefore unable to determine whether participants had travelled outside of the United States to endemic regions. This study is one of few nationally representative studies looking at HEV transmission in the United States. Furthermore, NHANES' standardized sampling methods, questionnaires, and protocols make it a reliable and powerful source for national epidemiological studies. It is important, however, to interpret our findings in the context of their limitations. First, this is a cross-sectional study and therefore cannot demonstrate self-grown food consumption as a cause of HEV infection. Second, the data may be subject to recall bias, as participants were asked to remember foods eaten during two 24-hour periods. Third, past studies have detected HEV-specific IgG in the blood up to 14 years after infection [5,41], which helps explain why HEV seroprevalence rises throughout life, but may also mask associations, as there could be a lag between exposure to HEV and detection of HEV-specific antibodies. Finally, it is possible that there are other confounding factors that should be considered, such as geography (i.e., urban versus rural region), occupation, and genotype data, which are not reported by NHANES. These variables could be useful in describing HEV epidemiology and associated factors. Documenting the distribution of HEV genotypes could be particularly helpful in clarifying the proportions of travel-related and autochthonous HEV infections in the United States. In conclusion, hepatitis E virus is prevalent in the United States, with an estimated national seroprevalence of 6.6%. Middle-aged participants (40-59 years) in the 2009-2012 NHANES dietary survey who reported consuming self-grown foods, such as vegetables from a home garden, were more likely to be seropositive for HEV. Self-grown foods exposed to HEV-contaminated animal feces may be a source of zoonotic transmission to humans in industrialized countries. In addition, the process of growing produce and working in HEV-contaminated soil may be a mode of zoonotic HEV transmission to humans.
As with all foods, self-grown foods should be carefully cleaned before consumption, especially by people at high risk for HEV-related complications, such as pregnant women and immunosuppressed patients. Further investigation concerning HEV transmission is an essential step toward developing effective preventive measures against hepatitis E infection.
Data Availability
Hepatitis E serologies and dietary surveys analyzed for this article are available from the Centers for Disease Control and Prevention's National Health and Nutrition Examination Survey website (https://wwwn.cdc.gov/nchs/nhanes/search/nnyfs12.aspx).
Disclosure
The views expressed in this article are those of the author(s) and do not necessarily reflect the official policy or position of the Department of the Navy, Department of the Air Force, Department of Defense, or the United States Government. An abstract with preliminary research was presented in poster form at Digestive Disease Week in Chicago, Illinois, in May 2014.
Conflicts of Interest
The authors have no conflicts of interest or financial relationships to disclose.
Dark Patterns, Electronic Medical Records, and the Opioid Epidemic
Dark patterns have emerged as a set of methods that exploit cognitive biases to trick users into making decisions that are more aligned with a third party's interests than with their own. These patterns can have consequences ranging from inconvenience to global disasters. We present a case of a drug company and an electronic medical record vendor who colluded to modify the medical record's interface to induce clinicians to increase the prescription of extended-release opioids, a class of drugs that has a high potential for addiction and has caused almost half a million additional deaths in the past two decades. Through this case, we present the use and effects of dark patterns in healthcare, discuss the current challenges, and offer some recommendations on how to address this pressing issue.
INTRODUCTION
The amount of information required to make sound clinical decisions is enormous and continuously growing [1,2]. The combination of patient attributes, laboratory results, and imaging, along with patient values and preferences, makes this process very complex [3]. Further, the availability of novel genetic and molecular assays that test for hundreds or thousands of genes or proteins, and the emergence of previously unknown diseases, make the task impossible without the support of external systems to aid clinicians and patients in sound decision making. The complexity of such decisions is one of the reasons why patients only receive around half of the recommended health interventions [4,5], a situation with disastrous consequences for their health and well-being. Electronic Medical Records (EMRs) have emerged in the past twenty years as comprehensive information systems used to collect and synthesize patient data and to provide decision support for health professionals. The category of devices and artifacts used to facilitate clinical decision making is collectively known as clinical decision support systems (CDSSs). CDSSs can facilitate the documentation of relevant clinical information, alert clinicians about abnormal laboratory results, suggest relevant clinical pathways, summarize patient variables, and provide many other forms of decision support. Although CDSSs can be implemented through non-digital methods, such as paper reminders [6], most CDSSs are embedded in Electronic Medical Records. Given the diversity of clinical problems, interventions, and possible outcomes, the evidence supporting the use of CDSSs is heterogeneous, but there is a growing number of patient and process outcomes that have been shown to be improved through the use of CDSSs. As an example, a recent overview of systematic reviews on the use of CDSSs to improve outcomes in patients with diabetes found that 83% of all included studies showed positive impacts on processes of care, and one-third of them demonstrated benefits in managing blood pressure and blood glucose, and even a reduction in mortality [7]. The accumulating evidence has made CDSSs an attractive method to influence clinical decision making and to change clinicians' behaviour. However, at the same time that the digitisation of CDSSs has enhanced the speed, accuracy, and scalability of clinical decision making, it has also increased the risk of making the decision process more opaque and of reducing the agency of clinicians.
This risk is amplified by recent advances in artificial intelligence and machine learning, which, despite offering promising improvements in decision-making performance, might not allow for inspection of how the recommendations were reached. This context, combined with competing interests from pharmaceutical companies and medical device manufacturers, creates fertile grounds for the proliferation of dark interface design patterns in CDSSs. We consider dark patterns to be common interface design solutions leveraging cognitive biases and heuristics to trick users into making decisions that are more aligned with third-party interests than with their own. In this paper we discuss a case of dark patterns influencing patient treatment through the modification of a CDSS embedded in a commercial electronic health record. CLINICAL DARK PATTERN Chronic pain is a frequent and difficult-to-treat condition that can generate significant consequences for patients and for the healthcare system overall. Significant efforts have been made to train clinicians to identify and treat patients with chronic pain. A range of possible treatments for chronic pain exists, ranging from surgical implants to psychological therapies. Among them, the use of opioids has long been recommended for the treatment of moderate to severe pain related to cancer or other serious illnesses [8], but their use remains controversial for non-malignant pain management. Though opioids can be effective at managing pain in the short term, there are concerns related to their long-term efficacy, side-effects, and potential for addiction and drug abuse [9]. Such addictive potential raises important ethical issues related to the over-prescription of opioids for chronic pain management. This is especially critical considering the current opioid abuse epidemic in the United States, which was, at least in part, fuelled by drug companies encouraging and incentivizing the prescription of opioids for pain management by emphasizing benefits and downplaying the risk of addiction. It has been estimated that more than 450,000 people died of opioid overdoses between 1999 and 2018 in the United States. In this context, the prescription of opioids for pain management should be done sparingly and cautiously. In 2020, news outlets reported that the provider of a cloud-based electronic medical record (EMR) system had paid 145 million US dollars to settle a lawsuit accusing them of modifying their CDSS to promote the use of long-acting opioids for the treatment of chronic pain. A review of the court documents made public by the US Department of Justice showed that the company added a treatment option, unsupported by clinical evidence, to the options presented to clinicians when deciding the next therapeutic steps for patients suffering from chronic pain. The documents revealed that the software provider solicited remuneration "to design the Pain CDS to cause healthcare providers to extend the duration of ERO [extended-release oxycodone] prescriptions, convert patients receiving IROs [immediate-release oxycodone] to EROs, to increase the overall market of ERO-using patients, and to measure its ability to deliver such results" [10]. In Figure 1 we can see evidence presented in the legal documents showing the options presented to clinicians when defining a follow-up plan for patients with chronic pain (we highlighted the controversial option in red).
The options presented are frequently used to treat patients with chronic pain; however, the company inserted an additional option suggesting the use of opioids, including long-acting and extended-release opioids. The options were presented as being sourced from an academic publication, yet not only did that paper not suggest the use of opioids in these cases, it advocated against their use given the high potential to develop opioid dependence and addiction [11]. From the evidence presented, it is clear that the inserted option was not highlighted in any way that would allow the clinician to identify its different provenance. To further nudge clinicians to prescribe opioids, the system included three alerts. The first prompted doctors to record a pain score from the patient. The second prompted them to collect a BPI (brief pain inventory) score from patients who reported a pain score equal to or higher than 4 (out of 10) twice or more in the past 3 months. The third prompted the creation of a pain management plan for patients who reported a pain score equal to or higher than 4 within the past four months and for patients with chronic pain. The legal documents reveal that marketing professionals were involved in the design of the system, and that they believed that the questions in the BPI would focus clinicians' attention on pain symptoms and would increase the likelihood of them creating a pain management plan for the patient. The legal documents we reviewed also contain extended email chains describing the collaboration between an opioid-producing drug company and the EMR vendor. Two aspects are salient from the email discussions. First, the drug company specifically targeted "opioid-naïve patients", meaning patients who had not previously used the treatment and were at high risk of abuse. Second, the EMR company produced an internal analysis, which it shared with the drug company, showing that extended-release opioids were among the least effective interventions to reduce pain. A preliminary analysis by the EMR company showed that the CDSS had generated alerts during 21 million patient visits for 7.5 million unique patients and almost 100,000 healthcare providers after only 4 months of going live. The CDSS alerts operated until the spring of 2019, firing more than 230,000,000 times, and resulted in tens of thousands of additional opioid prescriptions. ADDRESSING DARK PATTERNS IN CLINICAL DECISION SUPPORT SYSTEMS The case above highlights the serious consequences that dark patterns can produce in medical systems. This is an example of the interface interference pattern, the "manipulation of the user interface that privileges certain actions over others" [12], where commercial deals between the EMR provider and a pharmaceutical company led to an option that is harmful to patients being presented as if it were equivalent to more suitable alternatives. It can also be understood as an instance of the sneaking pattern, "attempting to hide, disguise, or delay the divulging of information that is relevant to the user" [12], as the system is not upfront as to what is known about the limitations and risks of the treatment, nor as to the commercial deals behind the recommendation.
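The alert logic described above can be made concrete; seeing it as code also makes the salience mechanism discussed next easier to recognize. The sketch below is our own minimal reconstruction of the three alerts as characterized in the court documents; all names, data shapes, and the exact time-window encoding are hypothetical illustrations, not the vendor's actual implementation.

from datetime import date, timedelta

def pending_alerts(visit_date: date, pain_scores, bpi_done: bool, has_plan: bool):
    # pain_scores: list of (date, score) tuples from a hypothetical patient record.
    alerts = []
    # Alert 1: always prompt the clinician to record a pain score.
    alerts.append("record pain score")
    # Alert 2: prompt a Brief Pain Inventory (BPI) if the patient reported
    # pain >= 4 (out of 10) two or more times in the past 3 months.
    window_3m = visit_date - timedelta(days=90)
    recent_high = [s for d, s in pain_scores if d >= window_3m and s >= 4]
    if len(recent_high) >= 2 and not bpi_done:
        alerts.append("collect BPI")
    # Alert 3: prompt a pain management plan if pain >= 4 was reported
    # within the past four months and no plan exists yet.
    window_4m = visit_date - timedelta(days=120)
    if any(d >= window_4m and s >= 4 for d, s in pain_scores) and not has_plan:
        alerts.append("create pain management plan")
    return alerts

Even this innocuous-looking rule set funnels every encounter toward pain documentation, and from there toward a management plan, which is precisely the mechanism the biases below exploit.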
The effectiveness of the design decision in advancing the pharmaceutical company's goals can be explained by the misuse of several cognitive biases, including: • Salience bias: By designing alerts that focused the clinician's attention on the pain experienced by the patient, the system made pain more salient in the doctor-patient interaction and therefore made it more likely that doctors would work towards a pain management plan, many of which would include the prescription of opioids. • Authority bias: The options listed in the suggestions for a follow-up plan were supposedly derived from a prestigious academic publication, which increased the likelihood that the clinician would attribute greater accuracy to the recommendation. • Hyperbolic discounting bias: Because people tend to prefer immediate payoffs over later payoffs, the short-term effectiveness of opioids makes them a more attractive option than alternatives that are more effective in the long term. • Automation bias: The natural tendency of users to over-rely on automation means that recommendations provided by CDSSs are often accepted with little scrutiny. The example shows that even relatively simple interface design decisions can lead to disastrous health outcomes. This trend is likely to continue as the complexity of the models behind these decision support systems increases. As these systems begin to incorporate black-box machine learning models, their recommendations tend to become even more inscrutable. Further, the use of embodied intelligent interactive agents will make these recommendations even more persuasive, requiring additional effort on the part of clinicians to overcome the cognitive biases underpinning dark patterns. Completely addressing the problems highlighted in this case requires multiple perspectives. Here we propose a few ways in which design can contribute: • Explainability: systems should be able to explain how they came to their conclusions in a way that humans can understand. Though work has started in this area, especially in the health domain (e.g. [13]), this is an area that requires substantially more research, both in terms of developing effective explainable algorithms and in designing human-centred explanations understandable by their end-users. • Knowledge Provenance: as health information systems keep evolving, they contain not only patient information but also biomedical knowledge. Explicit methods to convey the source of external knowledge being delivered through EMRs are critical [14] (see the sketch at the end of this section). The recent availability of APIs to interact with EMRs [15] should open up the possibility of integrating computable knowledge from trusted and verifiable external sources. • Educating health professionals about the limitations of automated decision-making systems: people often accept recommendations from AI systems as if they were true (e.g. [16]). There should be more awareness about the limitations of these systems, both in general terms (e.g. data bias, overfitting) and within the applications themselves (e.g. visualising the uncertainty of specific recommendations, tracing the provenance of recommendations in a visual manner). • Libertarian paternalism: any interface design will privilege some alternatives over others. The example above shows that even though the system did not overtly highlight the opioid option, the simple presence of that alternative was problematic.
Designers must acknowledge cases where the options are not equivalent and create choice architectures that privilege consensus opinions and well-chosen defaults. The libertarian paternalistic approach is one that preserves the agency of the clinician but that nevertheless nudges them towards directions that will promote the welfare of the patients [17]. Addressing dark patterns in the design of CDSSs is a complex and wicked design problem, requiring expertise from health professionals, user experience and interface designers, cognitive psychologists, digital ethicists, among others. Ultimately, it is critical that the system supports clinicians to make decisions that explicitly prioritise patients' interests and outcomes.
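To make the Knowledge Provenance recommendation concrete, the following sketch shows one way a CDSS could refuse to render a treatment option that lacks a verifiable source and could surface sponsorship instead of blending sponsored content in with guideline-derived options. The types, field names, and the example source string are hypothetical design illustrations, not an implementation of any cited system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TreatmentOption:
    label: str
    source: Optional[str]    # e.g. a DOI or guideline URL; None if unknown
    sponsor: Optional[str]   # any commercial party behind the suggestion

def renderable(option: TreatmentOption) -> bool:
    # Options without a verifiable knowledge source are not shown at all.
    return option.source is not None

def display_label(option: TreatmentOption) -> str:
    # Provenance is always surfaced next to the option; sponsored content
    # is explicitly flagged rather than presented as equivalent.
    label = f"{option.label} [source: {option.source}]"
    if option.sponsor:
        label += f" [sponsored: {option.sponsor}]"
    return label

options = [
    TreatmentOption("physical therapy referral", "doi:10.x/guideline", None),  # hypothetical DOI
    TreatmentOption("extended-release opioid", None, "drug company X"),
]
for opt in options:
    if renderable(opt):
        print(display_label(opt))

Under this design, the opioid option from the case above would either be suppressed (no verifiable source) or rendered with its sponsorship visible, directly countering the interface interference and sneaking patterns.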
2021-05-20T01:16:18.664Z
2021-05-19T00:00:00.000
{ "year": 2021, "sha1": "13d0d2f75a846ed902a8e7af2086206ff98cb76f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "13d0d2f75a846ed902a8e7af2086206ff98cb76f", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119139687
pes2o/s2orc
v3-fos-license
Relating the type A alcove path model to the right key of a semistandard Young tableau, with Demazure character consequences There are several combinatorial methods that can be used to produce type A Demazure characters (key polynomials). The alcove path model of Lenart and Postnikov provides a procedure that inputs a semistandard tableau $T$ and outputs a saturated chain in the Bruhat order. The final permutation in this chain determines a family of Demazure characters for which $T$ contributes its weight. Separately, the right key of $T$ introduced by Lascoux and Sch\"utzenberger also determines a family of Demazure characters for which $T$ contributes its weight. In this paper we show that the final permutation in the chain produced by the alcove model corresponds bijectively to the right key of the tableau. From this it follows that the generating sets for the Demazure characters produced by these two methods are equivalent. Introduction In their 1990 paper [LS] Lascoux and Schützenberger introduced the notion of the "right key" of a semistandard Young tableau. One of the foremost applications of the right key is presented as Theorem 1 of [RS1], which provides a type A Demazure character formula that sums over a set of semistandard Young tableaux whose right keys satisfy a certain condition. Several equivalent methods have since been introduced to compute the right key of a semistandard tableau T . The method from [Wil] produces the "scanning tableau" S(T ) for T , which was shown to equal the right key of T . In their 2007 paper [LP] Lenart and Postnikov introduced the alcove path model. Among many other applications, this model can be used to produce Demazure characters in arbitrary type. When specialized to type A, the Demazure character is given as a sum over certain "admissible subsets". The type A "filling map" is described in Section 3 of [Le2]. Its inverse inputs a semistandard tableau and outputs a saturated chain in the Bruhat order. These chains are in bijection with the admissible subsets. The main result of this paper is as follows: Given a semistandard tableau T , find its scanning tableau S(T ). The scanning tableaux are in bijection with certain permutations; denote the permutation for S(T ) by σ T . Then for the same T apply the inverse of the filling map to produce its saturated Bruhat chain, denoted B T . Theorem 5.4 states that the final permutation in the chain B T is σ T . Thus we have proved that the final permutation in B T , which plays a role in the alcove model world analogous to the role of the right key in the tableau world, has a key tableau that is indeed equal to the right key of Lascoux and Schützenberger. The conjecture of this equality arose during discussions with Lenart. The connection between the two subjects is obtained here by forming the inverse of the filling map from [Le2]. The results presented in Section 5 make this connection completely explicit. From the main result it will follow that not only are the Demazure characters produced by these two methods equal, but their generating sets are as well. We achieve this by providing a set of semistandard tableaux that is in direct correspondence with the appropriate admissible subsets. This set of tableaux is seen to equal the set of tableaux from Theorem 1 of [RS1] mentioned above. This paper is organized as follows: Section 2 provides the necessary familiar definitions for our work. Section 3 recalls the "scanning method" of [Wil] and introduces some new terminology for it. 
Section 4 provides a slightly simplified version of the inverse of the filling map from the type A specialization of the alcove model. Section 5 proves the main result, which is the relationship between the methods in Sections 3 and 4. Section 6 gives the details of the Demazure character equalities that are a consequence of the main result. Background definitions Fix a positive integer n, and consider it fixed henceforth. An n-partition λ = (λ 1 , ..., λ n ) is a sequence of weakly decreasing non-negative integers. Let Λ + n denote the set of all n-partitions with λ n = 0. (Arbitrary n-partitions are fine for Sections 2 through 5, but we will restrict to these partitions that correspond to the dominant weights of type A n−1 .) Fix a non-zero λ ∈ Λ + n . The Young diagram of λ is a diagram consisting of λ i left-justified empty boxes in the i th row for 1 ≤ i ≤ n − 1. Henceforth we will simply use λ to refer to its Young diagram. Define c i to be the number of boxes in the i th column of λ for 1 ≤ i ≤ λ 1 , i.e. the length of the i th column of λ. Let 1 ≤ ζ 1 < ... < ζ d ≤ n − 1 denote the distinct column lengths of λ. Set ζ 0 := 0 and ζ d+1 := n. Let β h denote the index of the rightmost column of length ζ h for 1 ≤ h ≤ d, and set β d+1 := 1. Let (j, i) denote the intersection of the j th column and i th row of λ. (We reverse from the normal convention because the columns play a larger role than the rows in this paper.) Write (j, i) ∈ λ if and only if 1 ≤ j ≤ λ 1 and i ≤ c j . Define a reading order on λ by (l, k) ≤ (j, i) if l < j or l = j and k ≥ i. In this case we say that the location (l, k) occurs (weakly) before (j, i), and thus (j, i) occurs (weakly) after (l, k). We will refer to advancing a location, by which we mean increasing the location via this ordering to one that occurs after it. The location immediately following (j, i) is (j, i − 1), and the location immediately preceding it is (j, i + 1). For the sake of convention, identify (j, i − 1) with (j + 1, c j+1 ) when i = 1, and identify (j, i + 1) with (j − 1, 1) when i = c j . A filling of λ is an assignment of one number to each box in λ. Define the set [n] := {1, 2, ..., n}. An n-semistandard tableau T is a filling of λ with values from [n] such that the values weakly increase from left to right within each row and strictly increase top to bottom within each column. In this case λ is called the shape of T . Let T λ denote the set of all n-semistandard tableaux with shape λ. Given T ∈ T λ , let T (j, i) be the value in T at the location (j, i) ∈ λ. Use C 1 , ..., C λ1 to denote the columns of T from left to right; hence the length of C i is c i . We say that T is a key if the values in C i also appear in C i−1 for 1 < i ≤ λ 1 . The right key of T , denoted R(T ), is a key determined by the values of T that was introduced by Lascoux and Schützenberger in [LS]. (We will not need a computational definition for R(T ).) A permutation is a bijection from [n] to itself, and the set of all permutations is denoted S n . Given φ ∈ S n we will often refer to its one-rowed form (φ 1 , ..., φ n ); here φ i is the image of i under φ for 1 ≤ i ≤ n. Define S λ n to be the set of all φ ∈ S n such that φ ζ h−1 +1 < ... < φ ζ h for 1 ≤ h ≤ d + 1. (This is the set of minimal coset representatives of S n /S λ , where S λ is the subgroup of permutations that fix λ.) Define the λ-key of φ, denoted Y λ (φ), to be the key of shape λ whose columns of length ζ h contain φ 1 , ..., φ ζ h arranged in increasing order for 1 ≤ h ≤ d. 
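Since the λ-key of a permutation is determined directly by this definition, it is straightforward to compute. The following sketch is ours, not from the paper; it takes the one-rowed form of φ and the column lengths c_1 ≥ c_2 ≥ ... of λ (rather than λ itself), and returns the columns of Y_λ(φ).

def key_tableau(phi, col_lengths):
    # Columns of the λ-key Y_λ(φ): the column of length c contains
    # the values φ_1, ..., φ_c arranged in increasing order.
    return [sorted(phi[:c]) for c in col_lengths]

# One-rowed form φ = (3, 1, 4, 2) and column lengths (3, 1):
print(key_tableau([3, 1, 4, 2], [3, 1]))  # [[1, 3, 4], [3]]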
For an example, refer to Figure 1 in Section 4. The figure contains the one-rowed forms of several permutations, written vertically. Let φ be the rightmost permutation and let λ = (4, 4, 3, 2, 1, 1, 1). Then the rightmost tableau in Figure 1 is Y λ (φ). If λ is not strict, i.e. not all parts are distinct, then there are multiple permutations in S n that will yield the same key. Specifically, all permutations in a given coset of S n /S λ have the same key. By the construction of S λ n and the definition of the key of a permutation, we see that if multiple permutations produce the same key, the shortest permutation in the Bruhat order that produces this key is in S λ n . It is well known that this key construction is a bijection from S λ n to the set of keys of shape λ. Further, it can be seen that for φ, ψ ∈ S λ n , one has Y λ (φ) ≤ Y λ (ψ) if and only if φ ≤ ψ in the Bruhat order [BB]. The Scanning Method The following method inputs a semistandard tableau T of shape λ and outputs its "scanning tableau" S(T ), which is a key of the same shape. By Theorem 4.5 of [Wil], the scanning tableau is equal to the right key R(T ) of T . Fix λ ∈ Λ + n and let T ∈ T λ . Fix 1 ≤ j ≤ λ 1 , as the procedure is applied once to each column of λ. Initialize the scanning paths from the j th column by P (T ; j, i) := {(j, i)} for 1 ≤ i ≤ c j . Technically speaking the scanning paths are sets of locations in λ, but we will also refer to the values in a scanning path, which are simply the values in T at the locations in the path. The paths are constructed from bottom to top using the following definition: Given a sequence $x_1, x_2, \ldots$, define its earliest weakly increasing subsequence (EWIS) to be the subsequence $x_{a_1}, x_{a_2}, \ldots$, where $a_1 = 1$ and, for $b > 1$, the index $a_b$ is the smallest index exceeding $a_{b-1}$ such that $x_{a_b} \geq x_{a_{b-1}}$. Consider the values T (l, c l ) for l ≥ j to form a sequence and compute its EWIS. Each time a value is added to this EWIS, append its location to P (T ; j, c j ). When this process terminates, delete the values and boxes in P (T ; j, c j ) from T and λ, then repeat the process for the new lowest box in C j . In general, to compute P (T ; j, i) for 1 ≤ i < c j : Compute and then delete P (T ; j, k) for c j ≥ k > i. Use the values in the lowest box of the j th through rightmost columns of the resulting tableau to create a sequence, and compute its EWIS. Each time a value is added to the EWIS, append its location to P (T ; j, i). For an example, let T be the first tableau in Figure 1, located in Section 4. Its scanning paths that begin in its first column are indicated in the second tableau in Figure 1: A superscript of x on T (j, i) indicates that (j, i) ∈ P (T ; 1, x). To compute P (T ; 1, i) using this figure, one must imagine that the entries with superscripts larger than i and their boxes have been deleted from T and λ. It can be seen that the result of deleting a scanning path always leaves a valid shape and a semistandard tableau. Further, once P (T ; j, 1) has been computed and deleted then C j through C λ1 have been deleted. In other words, every location (l, k) ≥ (j, c j ) is contained in exactly one scanning path that begins in the j th column. To create the scanning tableau S(T ), apply the above process once for each column of T . Then define S(T ; j, i) (the value in S(T ) at the location (j, i)) to be T (l, k), where (l, k) is the final location in P (T ; j, i). Continuing the example, the third tableau in Figure 1 is S(T ).
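Since the EWIS and the scanning paths are fully combinatorial, they are easy to compute directly. The following sketch is ours, not from [Wil]; it takes a tableau as a list of columns, each read top to bottom, assumes the input is semistandard, and returns S(T).

def ewis(seq):
    # Indices of the earliest weakly increasing subsequence of seq:
    # always take index 0, then greedily the next value >= the last taken.
    picked = [0]
    for b in range(1, len(seq)):
        if seq[b] >= seq[picked[-1]]:
            picked.append(b)
    return picked

def scanning_tableau(T):
    # T: list of columns, each a list of values top to bottom.
    S = [[None] * len(col) for col in T]
    for j in range(len(T)):
        cols = [list(c) for c in T[j:]]         # fresh copies: deletions for
                                                # column j do not affect later passes
        for i in range(len(T[j]) - 1, -1, -1):  # paths from the bottom box upward
            live = [c for c in cols if c]       # columns not yet fully deleted
            seq = [c[-1] for c in live]         # current bottom values, left to right
            path = ewis(seq)                    # locations of the scanning path
            S[j][i] = seq[path[-1]]             # final value of the path
            for idx in path:                    # delete the boxes on the path
                live[idx].pop()
    return S

# Columns of a small semistandard tableau (values top to bottom):
T = [[1, 2], [1]]
print(scanning_tableau(T))  # [[1, 2], [1]]; here T is already a key, so S(T) = T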
The scanning tableau can be used to produce a permutation via the inverse of the bijection described in Section 2. Fix T ∈ T λ and find its scanning tableau S(T ). Define σ T to be the permutation such that the values σ ζ h−1 +1 , ..., σ ζ h for 1 ≤ h ≤ d are the values in the columns of length ζ h of S(T ) that are not in the columns of length ζ h−1 , arranged in increasing order, and σ ζ d +1 , ..., σ ζ d+1 = σ n are the values from [n] that do not appear in the first column of S(T ), arranged in increasing order. Since S(T ) is a key, this process is well-defined. By construction we have σ T ∈ S λ n . Further, we see that S(T ) = Y λ (σ T ). Continuing the example, σ T is the rightmost permutation in Figure 1. Given a column l < λ 1 and a location (j, i) > (l, 1), for 1 ≤ k ≤ c l define the most recent location of P (T ; l, k) relative to (j, i) to be the latest location in P (T ; l, k) that occurs before (j, i). Computationally, this restriction simply truncates the sequences from which the EWIS's will be computed, and hence truncates the P (T ; l, k). The most recent value of P (T ; l, k) relative to (j, i) is the value in T at the most recent location of P (T ; l, k) relative to (j, i). Continuing the example, in the second tableau let (j, i) = (3, 3), l = 1, and k = 5. Then the most recent location of P (T ; 1, 5) relative to (3, 3) is (2, 3), and the most recent value is 7. Lemma 3.1. The most recent value of P (T ; l, k) relative to (j, i) decreases as k decreases. Proof. Fix 1 ≤ l < λ 1 . Let 1 ≤ h < k ≤ c l and (j, i) > (l, 1). Let (a, b) and (x, y) be the most recent locations of P (T ; l, h) and P (T ; l, k) relative to (j, i) respectively. Since P (T ; l, k) is computed before P (T ; l, h), the location (a, b) is in the shape used to compute P (T ; l, k) and was not appended to P (T ; l, k). Let (a, b ′ ) denote the bottom location of the a th column when P (T ; l, k) was computed, i.e. the value T (a, b ′ ) is in the sequence whose EWIS is used to determine P (T ; l, k). It is easy to see from its definition that the final value in the EWIS obtained from a finite sequence is the largest value in that sequence. Further, the EWIS contains all occurrences of the largest value. From this we have Lemma 3.2. Fix 1 ≤ l < λ 1 and (j, i) > (l, 1). The path from the l th column that contains (j, i) is the path with the largest most recent value relative to (j, i) that is less than or equal to T (j, i). Proof. As k decrements from c l , compute and delete the P (T ; l, k) until (j, i + 1) has been deleted. The column-strict condition on T and the definition of EWIS imply that the most recent values relative to (j, i) from these paths are larger than T (j, i). The definition of EWIS also guarantees that the remaining paths will not append (j, i) if their most recent value relative to (j, i) is larger than T (j, i). Continue to compute and delete the scanning paths until reaching the first path whose most recent value relative to (j, i) is less than or equal to T (j, i). At least one such path must exist, namely the path that contains (j − 1, i). Lemma 3.1 implies that this most recent value relative to (j, i) is the largest such value. Clearly this EWIS will choose T (j, i) and so this path will append (j, i). Applying the alcove model in type A to semistandard Young tableaux The "filling map" from the type A specialization of the alcove model is presented in Section 3 of [Le2]. Its inverse inputs a tableau T of shape λ and outputs a saturated chain in the Bruhat order. 
We are most interested in the final permutation in the chain, which will be denoted π T . This inverse procedure consists of repeated application of the "greedy algorithm" described in Algorithm 4.9 of [Le2] to the locations in T in a certain order. The author is indebted to Lenart for explaining how to use Algorithm 4.9 to describe the inverse of the filling map. Here we provide the details of a shortened version of this procedure to see how the values in T produce π T . The distinction between it and the full version will be discussed below. Fix λ ∈ Λ + n and T ∈ T λ . The following inverse procedure will produce one permutation for each location in λ between (1, 1) and (λ 1 , 1) inclusive in the reading order. The locations are indicated as superscripts. As the locations advance through λ, the permutations increase in the Bruhat order. The first permutation is π (1,1) := (π (1,1) 1 , ..., π (1,1) n ) whose first c 1 entries are the values in C 1 (maintaining the increasing order), and whose final n − c 1 entries are the remaining values of [n] (also in increasing order). As (j, i) advances from (2, c 2 ) to (λ 1 , 1), the procedure produces π (j,i) from π (j,i+1) based on the relationship between C j−1 and C j . More specifically, the procedure uses π (j,i+1) to produce a permutation whose i th entry is T (j, i), without changing any other entries from π (j,i+1) whose index is less than or equal to c j . From this we see that at an arbitrary location (a, b) > (1, 1), the permutation π (a,b) has π Otherwise, the semistandardness of T and the previous paragraph imply that T (j, i) = π (j,i+1) k for some k > c j . In this case, perform the following greedy algorithm: Initialize the index i 0 := i, so we have π . For 1 ≤ a ≤ n such that a = i x for any 0 ≤ x ≤ m, set π (j,i) a := π (j,i+1) a . This completes the creation of π (j,i) and we say that the greedy algorithm has been executed at (j, i). Once the algorithm has been executed at (λ 1 , 1), our final permutation π (λ1,1) =: π T has been produced. So the column-strict condition on T implies the result at (β h , 1). Note that the h = 1 case has been completely proven, since β 1 = λ 1 . When h = d + 1, the definition of π (1,1) implies the result at (1, 1). Now for any 1 < h ≤ d + 1, assume that π (j,i+1) 1). Let x be the smallest index and y the largest index satisfying ζ h−1 + 1 ≤ x ≤ y ≤ ζ h such that the greedy algorithm executed at (j, i) chooses π (j,i+1) x and π (j,i+1) y . If no such indices exist the result follows trivially. Otherwise, let i ≤ z ≤ ζ h−1 be the largest index less than x such that π (j,i+1) z is chosen by the greedy algorithm at (j, i). Clearly π with equality in the former if and only if z = x − 1. If x = y, then π for all other ζ h−1 + 1 ≤ a ≤ ζ h , and so the result follows. If x < y, the induction hypothesis implies that all of π , ..., π (j,i+1) y are chosen by the greedy algorithm at (j, i). Then π , the result follows for π (j,i) . At (j, i) = (λ 1 , 1), the statement of the lemma for 1 ≤ h ≤ d + 1 says that: The remaining paragraphs in this section describe the connection between the above method and the full version of the inverse of the filling map. First note that if m > 1 when the greedy algorithm is executed at (j, i), then there are permutations between π (j,i+1) and π (j,i) in the Bruhat order. 
The full version of the inverse of the filling map yields m − 1 permutations between π (j,i+1) and π (j,i) : After obtaining π (j,i+1) , produce π (j,i+1;1) by interchanging only π (j,i+1) i0 and π (j,i+1) i1 and leaving the remaining values unchanged. In addition, the full version of the inverse of the filling map begins with the identity permutation which we will denote π (0,0) . Let C 0 be a column of length n whose value in row i is i. Then the process from the previous paragraph is executed at the locations (1, c 1 ) through (1, 1) to obtain a saturated chain from π (0,0) to π (1,1) . Let B T denote the result of combining these chains to produce a saturated chain from π (0,0) to π T . Then B T is the saturated chain produced by the full version of the inverse of the filling map in the alcove model specialization in type A. Lastly, for c ∈ [n − 1] define Γ(c) := ((c, c + 1), (c, c + 2), . . . , (c, n), (c − 1, c + 1), . . . , (c − 1, n), . . . , (1, c + 1), . . . , (1, n)). Then define Γ(λ) to be the concatenation of Γ(c 1 ), Γ(c 2 ), . . . , Γ(c λ1 ). For T ∈ T λ , the set Γ(λ) is the ordered list of all transpositions that may be applied to produce the permutations in B T . Let x denote the number of elements in Γ(λ), and consider the list to be indexed from 1 to x. Each time a transposition in Γ(λ) is applied during the creation of B T , underline it. Define J T ⊆ [x] to be the indices of the underlined transpositions. Then J T is called an admissible subset. (The original [Le1] definition for a subset of [x] to be admissible is that its corresponding chain in the Bruhat order is saturated, which is a consequence of the construction above.) Also note that given an admissible subset J, one can reverse this procedure to find its corresponding tableau T J . That is: given an admissible subset J, form the Bruhat chain corresponding to J and let T J be the tableau whose j th column contains the first c j entries of π (j,1) for 1 ≤ j ≤ λ 1 . See Section 3 of [Le2] for full details. Assume Π (j,i) The value of π i will change to π im = T (j, i), i.e. we have T (j, i) ∈ Π (j,i) Thus the semistandardness of T ensures that b = i or b > c j , and so π b will be chosen by the greedy algorithm. By construction, the index b is the largest index less than or equal to ζ h that the greedy algorithm will choose. In other words, the new index for the value π b in π (j,i) will be larger than ζ h . Thus By the induction hypothesis π b = T (l, k) for some T (l, k) ∈ U (T ; β h , j, i + 1). By the construction of π b , the value T (l, k) is the largest value in U (T ; β h , j, i + 1) less than T (j, i). Again by Lemma 3.2, the path whose most recent value is T (l, k) will append (j, i). So T (j, i) is now the most recent value of a path but T (l, k) is not. The remaining paths are unchanged. In summary U (T ; β h , j, i) = ( U (T ; β h , j, i + 1)\{T (l, k)} ) {T (j, i)}. Since π b = T (l, k), the induction hypothesis implies the result. For 1 ≤ x ≤ n, write π Tx for the x th entry of π T . After the greedy algorithm has been executed at (λ 1 , 1) we have Π Recall that σ T is the permutation corresponding to S(T ). Then for λ ∈ Λ + n and T ∈ T λ , we have: Corollary 5.3. For any T ∈ T λ , we have Y λ (π T ) = S(T ). Corollary 5.3 is the most direct way that we know of to show that S(T ) is indeed a key. Since both π T and σ T are in S λ n , we see that the final permutation produced by the inverse of Lenart's filling map [Le2] is the permutation that corresponds to the scanning tableau: Theorem 5.4. 
Let T ∈ T λ and let π T be the final permutation in the saturated Bruhat chain produced by the inverse of the filling map in the type A specialization of the alcove model. Then π T = σ T . Finally, since [Wil] showed that S(T ) = R(T ), we see that the key of the final permutation produced by the inverse of the filling map is the right key of T introduced by Lascoux and Schützenberger in [LS]: Corollary 5.5. Let T ∈ T λ . Then Y λ (π T ) = R(T ). Demazure Character Consequences To keep this paper purely combinatorial we use Theorem 1 of [RS1] to define the Demazure character d λ,w (x), also known as the "key polynomial". For λ ∈ Λ + n and w ∈ S λ n , define the set of tableaux $D_{\lambda,w} := \{ T \in \mathcal{T}_\lambda \mid R(T) \le Y_\lambda(w) \}$; then d λ,w (x) is the sum of the weights of the tableaux in D λ,w . For the connection to representation theory see, for example, the appendix of [PW]. In particular, all of the Demazure characters for sl n (C) can be produced in this fashion (with their exponents shifted to be integers). Theorem 6.2 below does follow immediately from Corollary 5.5, but a stronger connection can be established: In [RS2] Reiner and Shimozono present a tableau description of a condition from [LMS] that determines whether or not a given tableau is in D λ,w : Fix T ∈ T λ and denote its columns C 1 , ..., C λ1 . Let (1 x ) be the partition consisting of x ones, i.e. whose Young diagram is a single column of length x. A defining chain for T is a sequence of weakly increasing elements in the Bruhat order w 1 ≤ ... ≤ w λ1 such that Y (1 c j ) (w j ) = C j for 1 ≤ j ≤ λ 1 . Then T ∈ D λ,w if and only if there exists a defining chain {w j } for T with w λ1 ≤ w in the Bruhat order. Lemma 7 of [RS2] is attributed to Deodhar and states that every T has a minimal defining chain. That is, there exists a defining chain w 1 ≤ ... ≤ w λ1 for T such that if v 1 ≤ ... ≤ v λ1 is a defining chain for T then w j ≤ v j for all 1 ≤ j ≤ λ 1 . Thus T ∈ D λ,w if and only if its minimal defining chain {w j } has w λ1 ≤ w. They then define the canonical lift w(T ) of T to be the shortest permutation in the Bruhat order such that Y (1 c j ) (w(T )) equals the j th column of R(T ) for 1 ≤ j ≤ λ 1 . Let λ j and T j denote the results of removing C j+1 , ..., C λ1 from λ and T respectively. Lemma 8 of [RS2] states that the minimal defining chain w 1 ≤ ... ≤ w λ1 for T has w j equal to the canonical lift of T j for 1 ≤ j ≤ λ 1 . Equivalently, in the minimal defining chain w j is the shortest permutation such that Y λ j (w j ) = S(T j ) for 1 ≤ j ≤ λ 1 . Proposition 6.1. Let T ∈ T λ . Then the minimal defining chain for T is given by w j = π (j,1) for 1 ≤ j ≤ λ 1 . Proof. For 1 ≤ j ≤ λ 1 , the location (j, 1) is the last location in λ j . Since the production of π T applies the greedy algorithm as the locations advance from (1, 1) to (λ 1 , 1), the construction of π (j,1) is independent of the columns to the right of C j . In other words π (j,1) = π T j , the final permutation when the greedy algorithm is applied to T j . Thus Corollary 5.3 gives Y λ j (π (j,1) ) = S(T j ). By Corollary 4.2 we have π (j,1) ∈ S λ j n , so π (j,1) is the shortest permutation satisfying the previous condition. As an example, when T is the first tableau in Figure 1, the first, fifth, eighth, and tenth permutations in Figure 1 make up the minimal defining chain for T . Applying the proposition for j = λ 1 we see that: Theorem 6.2. Let T ∈ T λ and w ∈ S λ n . Then T ∈ D λ,w if and only if π T ≤ w. Lastly, Theorem 3.6 (3) of [Le1] presents a Demazure character formula for λ and w obtained by summing the weights of all admissible subsets whose corresponding saturated chain has its final permutation less than or equal to w.
Let Ad(λ, w) denote the set of admissible subsets from this theorem that contribute to the Demazure character for λ and w. It is known that the process of constructing the tableau T J from an admissible subset J described in Section 4 is a bijection from Ad(λ, w) to the set of corresponding tableaux. Define A λ,w := {T J ∈ T λ | J ∈ Ad(λ, w)}. The following result is clear once the above constructions are understood: Corollary 6.3. Fix λ ∈ Λ + n and w ∈ S λ n . Then D λ,w = A λ,w . Proof. The process of finding the minimal defining chain for T , expanding it to the saturated chain B T via the inverse of the filling map, and subsequently finding its admissible subset J T is the desired bijection from the generating set D λ,w to Ad(λ, w).
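In display form, and writing $x^{\mathrm{wt}(T)}$ for the monomial weight (content) of $T$ as in [RS1], the equalities established above can be summarized as follows; this merely restates Theorem 6.2 and Corollary 6.3:

$$ d_{\lambda,w}(x) \;=\; \sum_{T \in D_{\lambda,w}} x^{\mathrm{wt}(T)} \;=\; \sum_{\substack{T \in \mathcal{T}_\lambda \\ \pi_T \le w}} x^{\mathrm{wt}(T)} \;=\; \sum_{J \in \mathrm{Ad}(\lambda,w)} x^{\mathrm{wt}(T_J)}. $$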
2015-06-15T20:01:38.000Z
2015-06-15T00:00:00.000
{ "year": 2015, "sha1": "76adf4ccc1e37c969c15ff3504598e28bd1d9cba", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "76adf4ccc1e37c969c15ff3504598e28bd1d9cba", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
53291609
pes2o/s2orc
v3-fos-license
Bacterial Extracellular DNA Production Is Associated with Outcome of Prosthetic Joint Infections In a retrospective study, the association between the production of extracellular DNA (eDNA) in biofilms of clinical staphylococcal isolates from 60 patients with prosthetic joint infection (PJI) and the clinical outcome was investigated. Data from a previous study on eDNA production determined in 24-hour biofilms of staphylococcal isolates (Staphylococcus aureus n=30, Staphylococcus epidermidis n=30) were correlated with the patients' clinical outcome after 3 and 12 months. Statistical analysis was performed using either Spearman's rank correlation test or the t-test. eDNA production of S. epidermidis in 24-hour biofilms correlated with the patients' outcome 'not cured' after 12 months. For S. aureus no such correlation was detected. Thus, eDNA may be a virulence factor of S. epidermidis. Quantification of eDNA production as a surrogate marker for biofilm formation might be a potential predictive marker for the management of PJI. Introduction Periprosthetic joint infections (PJI) are among the most challenging complications of orthopedic implant surgery. With the rapidly increasing number of implanted prostheses, the impact of PJI is steadily increasing. The relative incidence ranges between 2% and 2.4% of total hip (THA) and total knee arthroplasties (TKA) [1]. The pathogenesis of PJI is associated with the formation of bacterial biofilms involving the tissue around the implant and implant surfaces. Biofilm formation is a bacterial strategy to survive under adverse conditions [2]. The production of extracellular polymeric substances (EPS) protects bacteria against environmental damage. Moreover, bacteria coated by EPS are also able to escape the innate immune response [3]. Generally, biofilms with EPS production enable the exchange of genes between the tightly packed bacterial cells. Moreover, their altered metabolic state leads to resistance to antibiotics and consequently to persistence of infection and treatment failure [4]. About two-thirds of implant-associated infections in orthopedic surgery are caused by two staphylococcal species: S. aureus and S. epidermidis [5]. Staphylococci have different mechanisms to form biofilms, which depend on environmental conditions. The most common pathway used by S. epidermidis is the production of polysaccharide intercellular adhesin (PIA). PIA is actively induced by stress conditions, such as shear flow and heat, and enhances EPS production [6]. When bacteria were previously exposed to antibiotics, increased production of extracellular DNA (eDNA) was shown to enhance the physical properties of EPS and the resistance of biofilms to antibiotics [7,8]. eDNA is released either by active secretion or by cell autolysis and was shown to be linked to the ability of bacteria to take up DNA from the environment. This feature, called competence, contributes to the strategy to survive in the environment [9]. The production of eDNA is regulated by the bacterial population density in response to the accumulation of quorum sensing signals of the closely packed bacterial cells [10]. eDNA binds with other biofilm polymers (i.e., polysaccharides and proteins), thus securing the structural stability of the biofilm, and favors bacterial adhesion to abiotic surfaces [11]. Targeting eDNA might be a strategy for the treatment of implant-associated infections and other biofilm-associated infections [7,12,13].
In a previous study, the time course of eDNA production in biofilms of clinical isolates of S. aureus and S. epidermidis was studied. The amount of eDNA (mean % area eDNA) was visualized and quantified using confocal laser scanning microscopy (CLSM) and TOTO-1 staining. ImageJ software was used to score the images of stained biofilms. eDNA production was greater in clinical isolates of S. epidermidis and S. aureus isolated from PJI compared to eDNA production of control isolates from the skin of healthy volunteers. After 24 hours, the amount of eDNA was greater in biofilms of S. epidermidis than in biofilms of S. aureus. The production of eDNA varies extensively during the time course of biofilm development, as well as with the respective staphylococcal species [14]. The aim of the present study was to retrospectively investigate a possible association between the eDNA production of in vitro biofilms of S. aureus and S. epidermidis clinical isolates from patients with PJI and the outcome of the treatment of PJI. The clinical outcomes after 3 and 12 months and the amount of eDNA production of the respective staphylococcal isolates in 24-hour biofilms were correlated. Additionally, other influencing parameters such as age, weight, the Charlson index for comorbidity (CCI), the site of the infection, and laboratory infection parameters including C-reactive protein, fibrinogen, and leukocyte count were studied. Study Design. The study population of this retrospective study was that of a previous study in which the pathogens, 60 clinical S. aureus and S. epidermidis isolates from infected hip and knee prostheses, were examined for eDNA production [14]. The ethics committee of the Medical University of Vienna, Austria, approved the study protocol (Ethics committee no. 19025). Patient Characteristics. Patients' data were retrospectively retrieved from the electronic patient records. Information was collected and anonymously processed using the University of Vienna research documentation and analysis platform (RDA). Patients' characteristics included age, weight, and body mass index (BMI). Comorbidities were collected and categorized using the Charlson Comorbidity Index (CCI; comorbidity-adjusted life expectancy) [15] (Table 1). Implant indwelling time was also collected and infection classification (Table 2) was performed accordingly. Additionally, inflammatory markers such as C-reactive protein (CRP), fibrinogen, and the number of leucocytes were assessed at the time of diagnosis of PJI and three weeks thereafter. The clinical outcomes after 3 and 12 months were defined as (1) cured, if patients were able to walk, no further antibiotic treatment and pain medication were needed, and neither local nor systemic signs of infection were present; (2) not cured, if patients continued taking antibiotics in order to cure or suppress infection or were planned for another revision surgery; or (3) deceased (Table 3). PJI were classified into early (onset < 1 month after implantation surgery), delayed (onset 3-24 months after surgery), or late infections (onset > 24 months after surgery) [16]. The multiple antibiotic resistance (MAR) index was calculated as MAR = a/(b × c), where "a" is the aggregate antibiotic resistance score of all isolates, "b" is the number of antibiotics, and "c" is the number of isolates. The MAR of all tested isolates was 0.183. According to [18], a MAR index of 0.183 indicates that the aggregate antibiotic resistance is low; i.e., the isolates were in general susceptible to the tested antibiotics. Statistical Methods.
Spearman's rank correlation and the t-test were used to assess associations between eDNA production, antibiotic resistance, patients' clinical conditions, and outcomes. A p-value of <0.05 was considered statistically significant. eDNA values were log-transformed and checked for normal distribution before applying the t-test, to account for their approximately log-normal distribution. Calculations were performed using IBM SPSS Version 24.0 (IBM Corp., Armonk, NY, USA). Results Sixty patients were included. The outcome in the majority of patients was classified as cured, and the outcome in 8 patients was classified as not cured, including a patient who died from the infection. A more detailed description of the outcomes with regard to pathogens or type of prosthesis is given in Table 3. PJI considered chronic infections were caused by S. epidermidis. Early or acute late infections were caused by S. aureus (Table 2). Twelve patients were lost during follow-up: 8 patients due to incomplete datasets and 4 patients who died from their comorbidities or other age-related diseases (57-84 years; median 75 years old) during the observation period. eDNA Production and Clinical Outcome. There was a correlation between the amount of eDNA in 24-hour S. epidermidis biofilms and the patients' outcome 'not cured or dead' after 12 months (n=27, r=0.391, p=0.044) but not for S. aureus (Table 4, Figure 1). For all isolates from hip prostheses, there was a positive correlation between eDNA production and the patients' outcome 'not cured or dead' after 12 months (n=21, r=0.605, p=0.004) (Table 4). The Charlson comorbidity index (CCI) showed no correlation with the eDNA production of 24-hour biofilms. Discussion The increasing life expectancy together with the constant progress in medicine increases the number of patients receiving medical implants, e.g., knee and hip prostheses, pacemakers, or many other medical implants and devices [19]. Therefore medical implant-related infections are an increasingly substantial burden on the healthcare system [20,21]. According to the surveillance of the European Centre for Disease Prevention and Control (ECDC), the incidence of surgical site infections (SSIs) after hip and knee surgery was 1.1% (ranging from 0.3% to 3.8%) for THA and 0.6% (range 0.0% to 3.4%) for TKA (http://ecdc.europa.eu/en/healthtopics/Healthcare-associated-infections/surgical-site-infections/Pages/Annual-epidemiological-report-2016.aspx). In order to treat these infections, a thorough understanding of the pathogenesis and the pathogens is pivotal. Clinical outcomes of PJI with respect to their causing pathogen and the respective biofilm formation ability have been the subject of only a few studies. A prospective study in 124 patients with orthopedic implant-related osteomyelitis showed the influence of biofilm formation and antibiotic resistance on the outcome. In the subgroup of 90 patients with lower extremity infections, the increase of S. epidermidis biofilm thickness correlated with decreased cure rates [18]. Mittag et al. assessed outcomes using the WOMAC Index, the Harris Hip Score (HHS), and the Hospital for Special Surgery Score (HSS). They did not demonstrate a correlation between implant infection, classified according to the modified Tsukayama classification system [22], and outcome defined using the WOMAC, HSS, or HHS score [23]. However, in this study, the most frequent pathogens were Enterococcus spp.
followed by a mixture of bacteria causing polymicrobial infections. So far, a correlation between eDNA production in staphylococcal biofilms and the clinical outcome of PJI has not been reported in the literature. In the present study, S. epidermidis isolates showed significantly greater eDNA production than S. aureus isolates in the respective 24-hour biofilms [14]. Infections with S. aureus and S. epidermidis are considered distinguishable by their clinical symptoms and course: S. aureus infections usually present with the classical local signs and symptoms of infection (pain, redness, swelling, warmth, and impaired function) and a systemic immune response (fever, hypotension, leucocytosis, and elevated C-reactive protein). Infections caused by S. epidermidis usually present with subacute signs and symptoms of infection and an unspecific, delayed onset. In the present patient population, infections with S. epidermidis presented as chronic infections. Early or late acute infections were exclusively caused by S. aureus (Table 2). We were able to demonstrate that the eDNA production of S. epidermidis 24-hour biofilms correlated with the clinical outcome 'not cured or dead' after 12 months (p=0.044). eDNA production is a relatively stable characteristic of many S. epidermidis strains [14]. Thus, it may be hypothesized that the production of eDNA by S. epidermidis isolated from PJI contributes to the pathogenesis and may be used to predict clinical outcome. Exposure to antibiotics has been linked to eDNA production in biofilms [24,25]. Perioperative antibiotic prophylaxis is a standard of care in orthopaedic prosthetic surgery [26]. However, Doroshenko et al. reported higher eDNA levels in biofilms of S. epidermidis after prior exposure to vancomycin [25]. Schilcher et al. described that subinhibitory concentrations of clindamycin increased the ability of S. aureus to form biofilms and shifted the composition of the biofilm matrix towards a higher eDNA content [27]. In the present study, isolates resistant to rifampicin and fusidic acid produced lower amounts of eDNA than susceptible ones. However, antimicrobial resistance was tested only using the disk diffusion method, i.e., on planktonic bacteria, in contrast to the biofilm susceptibility testing performed in other studies [27], such as that of Brady and colleagues, who compared the minimum biofilm eradication concentration with the minimum inhibitory concentration breakpoint in planktonic versus biofilm-grown staphylococci [28]. Further investigation into the effects of rifampicin or fusidic acid on eDNA production should therefore be performed with resistance testing in biofilm growth systems. Inflammation biomarkers such as fibrinogen, C-reactive protein, and leucocyte count did not correlate with the eDNA levels of the 24-hour biofilms of the respective pathogens. Yet, a significant difference in the clinical presentation of PJI caused by either S. aureus or S. epidermidis was found in our patient population, as in earlier studies [29,30]: patients with PJI caused by S. aureus exhibited greater serum levels of C-reactive protein and fibrinogen compared to patients with PJI caused by S. epidermidis. The limitations of the present study are inherent to its retrospective nature, in which not all clinical and laboratory data are available, and to the rather small sample size of a nevertheless very well defined patient population. Due to the small sample size, multivariate statistical analysis was not indicated.
Moreover, in vitro conditions of biofilm formation may not fully reflect clinical biofilms in PJI [31]. Conclusion In conclusion, a correlation between increased eDNA production in S. epidermidis 24-hour biofilms and an adverse clinical outcome after 12 months was demonstrated. Quantification of the eDNA production of the pathogen as a surrogate marker for biofilm formation might be a potential predictive marker for the management of PJI caused by S. epidermidis. eDNA might also be a possible therapeutic target. Further prospective and sufficiently powered clinical studies will be needed to substantiate the role of eDNA production of pathogens in the clinical course and its relevance in PJI. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest. Authors' Contributions Beata Zatorska designed the study, performed the analyses, collected the data, and drafted the paper. Nicolas Haffner assisted in data collection, study design, and data interpretation. Carla Renata Arciola contributed substantially to the scientific background of the paper and writing the discussion. Luigi Segagni Lusignani assisted in the statistical data analyses. Elisabeth Presterl assisted in data collection and study design and reviewed the manuscript critically. Magda Diab Elschahawi contributed substantially to the concept, research design, and writing of the paper. All authors have read and approved the final manuscript.
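As a concrete illustration of the aggregate MAR index and of the correlation analysis described in the Methods above, here is a minimal sketch. The authors used IBM SPSS; this Python version (numpy/scipy) is only a stand-in, and the arrays are invented toy values, not study data.

import numpy as np
from scipy.stats import spearmanr, ttest_ind

def mar_index(resistance_matrix):
    # MAR = a / (b * c): a = aggregate resistance score over all isolates,
    # b = number of antibiotics tested, c = number of isolates.
    # resistance_matrix[i][j] = 1 if isolate i is resistant to antibiotic j.
    a = resistance_matrix.sum()
    c, b = resistance_matrix.shape
    return a / (b * c)

# Toy data (invented): eDNA (% area) in 24-hour biofilms vs. outcome
# (0 = cured, 1 = not cured/dead), mirroring the reported Spearman analysis.
edna = np.array([2.1, 5.3, 1.8, 7.9, 4.4, 6.2])
outcome = np.array([0, 1, 0, 1, 0, 1])
rho, p = spearmanr(edna, outcome)

# t-test on log-transformed eDNA between outcome groups, as in Methods.
log_edna = np.log(edna)
t, p_t = ttest_ind(log_edna[outcome == 1], log_edna[outcome == 0])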
2018-11-16T19:36:54.535Z
2018-10-22T00:00:00.000
{ "year": 2018, "sha1": "e0f7d867c7e1fea977623dd90b760322cd3ab3ea", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/bmri/2018/1067413.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9913e9c6767ef9f3fdd5abca3c5eea2b9fbc4e05", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
231904993
pes2o/s2orc
v3-fos-license
Prognostic and predictive value of FCER1G in glioma outcomes and response to immunotherapy Purpose Glioma is the most prevalent malignant form of brain tumors, with a dismal prognosis. Currently, cancer immunotherapy has emerged as a revolutionary treatment for patients with advanced, highly aggressive, therapy-resistant tumors. However, there is so far no effective biomarker reflecting the response to immunotherapy in glioma patients. We therefore aimed to assess the clinical predictive value of FCER1G in patients with glioma. Methods The expression level of FCER1G and its correlation with clinical prognosis were analyzed with data from the CGGA, TCGA, and GEO databases. Univariate and multivariate Cox regression models were built to predict the prognosis of glioma patients from multiple factors. Then the correlation of FCER1G with immune cell infiltration and activation was analyzed. At last, we predicted the immunotherapeutic response in both the high and low FCER1G expression subgroups. Results FCER1G was significantly higher in gliomas of greater malignancy and predicted poor prognosis. In multivariate analysis, the hazard ratio of FCER1G expression (Low versus High) was 0.66 (95 % CI 0.54 to 0.79, P < 0.001), whereas age (HR = 1.26, 95 % CI 1.04–1.52), grade (HR = 2.75, 95 % CI 2.06–3.68), tumor recurrence (HR = 2.17, 95 % CI 1.81–2.62), IDH mutant (HR = 2.46, 95 % CI 1.97–3.01) and chemotherapeutic status (HR = 1.4, 95 % CI 1.20–1.80) were also included. Furthermore, we illustrated that the gene FCER1G stratified glioma cases into high and low FCER1G expression subgroups with distinct clinical outcomes and T cell activation. At last, we demonstrated that high FCER1G levels were associated with a greater immunotherapeutic response in glioma patients. Conclusions This study demonstrated FCER1G as a novel predictor for clinical diagnosis, prognosis, and response to immunotherapy in glioma patients. Assessing the expression of FCER1G is a promising method to discover patients that may benefit from immunotherapy. Introduction Glioma is the most prevalent malignant tumor of the central nervous system, accounting for more than 70 % of intracranial tumors, with a high degree of malignancy [1,2]. Arising from glia cells, gliomas can be subdivided into a broad category of tumors, such as astrocytoma, oligodendroglioma, and glioblastoma (GBM). Regardless of tumor aggressiveness and malignancy, the median overall survival is only 12-18 months [3,4]. Although a variety of therapies are currently available, including surgery, radiotherapy, chemotherapy and immunotherapy, survival remains low. Therapeutic response relies on intra-tumoral heterogeneity and intricacy programmed by genetic and epigenetic effectors. Besides, there are many physiological barriers, like the blood-brain barrier (BBB), that challenge effective treatments. Driven by the infiltrative nature of gliomas, surgical resection seems to be an ineffective long-term procedure, and recurrence often occurs with fatal consequences. Moreover, aggressive therapies compromise the patient's quality of life and drive harmful side effects. Therefore, a thorough understanding of the biological behavior and mechanisms underlying tumor progression is essential to improve clinical diagnosis and therapeutic prognosis, and even for the development of novel effective therapies.
Currently, cancer immunotherapy based on immune checkpoint blockades (ICBs), notably anti-CTLA4 (cytotoxic T-lymphocyte associated protein 4), anti-PDCD1/PD-1 (programmed cell death 1), and anti-CD274/PD-L1, has emerged as a revolutionary treatment for patients with advanced, highly aggressive, therapy-resistant tumors. Unfortunately, the clinical reality is that only a small number of patients benefit from immunotherapy. Moreover, there is so far no effective biomarker that reflects the response to immunotherapy in glioma patients. With the development of high-throughput microarray technology, gene expression profiles have been used to identify genes associated with the progression and clinical prognosis of glioma [5][6][7]. A gene signature identified from four different published microarrays has been validated in GBM and LGG cohorts [8][9][10]. However, the predictive significance of the gene signature in glioma patients is unclear, and it is not currently applied in clinical practice. FCER1G is a key molecule involved in allergic reactions [11]; it is located on chromosome 1q23.3 and encodes the γ subunit of the fragment crystallizable (Fc) receptor (FcR) of immunoglobulins. The FcR γ subunit is a signal-transducing subunit that plays a critical role in chronic inflammatory programs. The binding between the Fc of immunoglobulins and the FcR of immune cells activates cellular effector functions and may trigger destructive inflammation, immune cell activation, phagocytosis, oxidative burst, and cytokine release [12][13][14]. It has been illustrated that FCER1G participates in various diseases, such as squamous carcinogenesis, diabetic kidney disease, multiple myeloma, and clear cell renal cell carcinoma [12,[15][16][17]. However, the role of FCER1G in tumor progression and the underlying molecular mechanisms are poorly understood. This study aimed to demonstrate FCER1G as a promising predictive target for glioma prognosis and response to immunotherapy. Tumor samples collection Human glioma tissues were considered exempt by the Human Investigation Ethical Committee of Shanghai General Hospital affiliated to Shanghai Jiao Tong University. Human tumor samples were consecutively recruited between January 2019 and January 2020 from the Department of Neurosurgery in Shanghai General Hospital. A total of 20 patients with glioma underwent surgery for the first time and had not previously received radiotherapy or chemotherapy. Data source and expression analysis The pan-cancer dataset in The Cancer Genome Atlas (TCGA), which consists of 33 kinds of cancer and adjacent tissue samples, and GTEx expression matrices were analyzed with UCSCXenaShiny [18] (https://hiplot.com.cn/advance/ucsc-xena-shiny). In this study, we analyzed both GBM and LGG. All the glioma datasets were obtained from Gliovis [19] (http://gliovis.bioinfo.cnio.es/), including six datasets containing 2336 samples: 642 grade II patients, 780 grade III patients, and 914 grade IV patients (Additional file 1: Table S1). Immunohistochemical analysis Patient tumor samples were fixed in 4 % paraformaldehyde for 24 hours and then embedded in paraffin. Paraffin blocks were cut into 5 µm sections. Rehydrated tissue sections were blocked with 5 % BSA overnight at 4 ℃ and then stained with an anti-FCER1G antibody (Abcam, ab151986, USA). After washing with PBS, the sections were incubated with biotinylated anti-rabbit IgG (Vector Laboratories, CA, USA). The ABC method (Vector Laboratories) was used. The sections were observed using an AX-80 microscope (Olympus, Tokyo, Japan).
Images were processed with ImageJ software and relative expression was calculated.

Immune cells and bioinformatic analysis
Single-sample gene set enrichment analysis (ssGSEA) was used to compute an enrichment score representing the degree of absolute enrichment of a gene set in each sample of a given dataset, using the R package "GSVA" [20]. Normalized enrichment scores were calculated for each immune category. Gene set signatures for 28 types of immune cells were obtained from a previous study [21] (Additional file 1). Based on the median expression value of FCER1G, the CGGA dataset was divided into a high FCER1G expression group (top 50 %) and a low FCER1G expression group (bottom 50 %). The R package "limma" was used for differentially expressed gene (DEG) analysis. DEGs were considered biologically significant at |logFC| ≥ 1.5 and adjusted P value < 0.05. Gene Ontology (GO) analysis, covering biological process (BP), molecular function (MF), and cellular component (CC) terms, and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis were used for gene set annotation with the R package "clusterProfiler" [22]. Gene Set Enrichment Analysis (GSEA) was further used to investigate functional enrichment with the R package "Pi" [23]. To explore the correlation between FCER1G expression levels and immune status, a total of 25 immunity-related gene sets covering both innate and adaptive responses were taken from a previous study [24] (Additional file 1). Gene Set Variation Analysis from the R package "GSVA" [20] was performed to obtain the immune profile of the glioma samples.

Quantification of the relative abundance of TIICs and prediction of the immunotherapy response
The CGGA dataset (n = 1013; grade II = 291, grade III = 334, grade IV = 388) was used as the discovery set and the TCGA-GBMLGG dataset (n = 620; grade II = 226, grade III = 244, grade IV = 150) as the validation set. Immune Cell Abundance Identifier (ImmuCellAI) [25] (http://bioinfo.life.hust.edu.cn/ImmuCellAI#!/analysis) is a novel algorithm that uses gene set signatures to estimate the abundance of 24 immune cell types from transcriptomic data. In contrast to other known algorithms designed to estimate immune cell composition from transcriptomic data, it focuses on the T-cell subsets associated with tumor progression and initiation. The gene set signatures of the T-cell subsets used in this study, covering 18 subtypes of T cells and 6 other types of immune cells, are listed in the Supplementary Material. Moreover, ImmuCellAI can be used to predict the response to immune checkpoint blockade (ICB) therapy when the ICB response prediction option is checked. To predict their putative response to an anti-PDL1 drug, glioma samples were scored with the GSVA method using the T-cell inflammatory signature (TIS). This signature was derived from a previous study [24] and is listed in Additional file 1. Tumor immune dysfunction and exclusion (TIDE) (http://tide.dfci.harvard.edu/login/) is a computational method developed to predict the immune checkpoint blockade response from pretreatment tumor gene profiles; it integrates expression signatures of T-cell dysfunction and T-cell exclusion to model the mechanisms of tumor immune evasion [26]. Furthermore, the Subclass Mapping (SubMap) method was applied to evaluate the expression similarity between the two subgroups and patients with different immunotherapy responses [27]. P values were used to evaluate the similarity: the lower the P value, the higher the similarity.
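To make the pipeline above concrete, the following R sketch reproduces its three core steps: the median split on FCER1G, the limma differential-expression test with the stated cut-offs, and ssGSEA scoring with GSVA. It is a minimal sketch rather than the authors' code; the log2 expression matrix `expr` (genes x samples) and the immune gene-set list `immune_sets` are assumed inputs, and the `gsva()` call uses the classic interface (newer GSVA releases wrap these arguments in parameter objects).

```r
library(limma)
library(GSVA)

# expr: log2 expression matrix (genes in rows, samples in columns) - assumed input
# immune_sets: named list of 28 immune cell gene-set signatures - assumed input

# 1) Median split on FCER1G expression (top 50% = "high", bottom 50% = "low")
group <- factor(ifelse(expr["FCER1G", ] > median(expr["FCER1G", ]), "high", "low"),
                levels = c("low", "high"))

# 2) Differential expression between the subgroups with limma
design <- model.matrix(~ group)
fit    <- eBayes(lmFit(expr, design))
deg    <- topTable(fit, coef = "grouphigh", number = Inf)
sig    <- subset(deg, abs(logFC) >= 1.5 & adj.P.Val < 0.05)  # cut-offs stated above

# 3) ssGSEA enrichment scores for the immune cell signatures
imm_scores <- gsva(expr, immune_sets, method = "ssgsea")
```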
In this study, we utilized TIDE, TIS, SubMap, and ImmuCellAI to predict the potential immunotherapy responses of patients with gliomas.

Statistical analysis
All statistical analyses were carried out with R software 3.6.1. Kolmogorov-Smirnov tests were used to evaluate the distribution normality of each dataset and to determine whether a non-parametric rank-based analysis or a parametric analysis should be used. Spearman correlation was used for correlation analysis. Fisher's exact test and Wilcoxon rank-sum tests were used to test hypotheses on categorical and continuous variables, respectively. In the survival analysis, associations between characteristics and overall survival were evaluated by Cox proportional hazards models. Kaplan-Meier survival curves were drawn and compared among subgroups using log-rank tests with the R packages "survival" and "survminer". Meta-analysis was performed with the R package "meta". ROC curves, sensitivity, and specificity were generated using the R package "pROC". For all statistical analyses, a P value < 0.05 was considered significant.

Patients in the 33 tumor cohorts were then divided into high and low expression groups according to the median value of FCER1G gene expression. Subsequent survival analysis revealed significant differences across several cancer types. Specifically, patients with a high expression level of FCER1G showed shorter overall survival (OS), progression-free interval (PFI), and disease-specific survival (DSS) than low-expression patients in both the LGG and GBM cohorts (Fig. 1b).

Fig. 1 Pan-cancer analysis of FCER1G expression. a UCSCXenaShiny was used to visualize FCER1G mRNA expression in The Cancer Genome Atlas (TCGA) pan-cancer datasets. *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001; ns, no significance (Wilcoxon test). b Dot plot of the correlation of FCER1G with OS, PFI, DFI, and DSS (red represents HR > 1 and P value < 0.05; blue represents HR < 1 and P value < 0.05; gray represents P value > 0.05).

The expression level of FCER1G increased with the progression of glioma
In the subsequent study, we focused on exploring the clinical value of FCER1G in gliomas. To explore the expression levels of FCER1G mRNA at different stages of glioma, we used six datasets to analyze FCER1G expression. We observed that the expression level of FCER1G increased in gliomas with high malignancy. In the CGGA dataset, a significant increase of FCER1G expression was noted in WHO grade III (n = 334) and grade IV (n = 388) compared with grade II (n = 291) (IV versus III: P < 0.001; IV versus II: P < 0.001; III versus II: P = 0.037, Fig. 2a). In the TCGA-GBMLGG dataset, a remarkable upward trend of FCER1G expression with tumor progression was further confirmed in grade II (n = 226), III (n = 244), and IV (n = 150) glioma patients (IV versus III: P < 0.001; IV versus II: P < 0.001; III versus II: P = 0.0012, Fig. 2b). Furthermore, the same trend was found in the Rembrandt dataset with 98 grade II, 85 grade III, and 130 grade IV patients (IV versus III: P < 0.001; IV versus II: P < 0.001; III versus II: P = 0.31, Fig. 2c). Moreover, in analysis of GEO datasets, the GSE16011 cohort with grade II (n = 24), grade III (n = 85), and grade IV (n = 159) glioma patients (IV versus III: P < 0.001; IV versus II: P < 0.001; III versus II: P = 0.48, Fig. 2d), the GSE43289 dataset with 3 grade II, 6 grade III, and 28 grade IV patients (IV versus III: P = 0.3; IV versus II: P = 0.0071; III versus II: P = 0.38, Fig. 2e), and the GSE4412 dataset (26 grade III and 59 grade IV patients, P < 0.0001, Fig. 2f) all showed higher expression of FCER1G in high-grade glioma.
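The pairwise grade comparisons reported here use the Wilcoxon rank-sum test named in the statistical-analysis section; a minimal R sketch follows, assuming a hypothetical data frame `df` with a numeric `FCER1G` column (log2 expression) and a `grade` factor.

```r
# df: data frame with FCER1G (log2 expression) and grade ("II", "III", "IV") - assumed
df$grade <- factor(df$grade, levels = c("II", "III", "IV"))

# All pairwise Wilcoxon rank-sum tests (IV vs III, IV vs II, III vs II)
pairwise.wilcox.test(df$FCER1G, df$grade, p.adjust.method = "none")

# A single two-group comparison, e.g. grade IV versus grade II
sub <- droplevels(subset(df, grade %in% c("II", "IV")))
wilcox.test(FCER1G ~ grade, data = sub)
```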
To further validate these results, IHC for FCER1G and qRT-PCR were performed to assess FCER1G expression in patient-derived glioma tissue samples. As expected, in comparison with low-grade glioma (LGG) tissues, a significant increase of FCER1G was revealed in high-grade glioma (HGG) tissues (Fig. 2g, h). According to the above data, the expression of FCER1G increased with the development of glioma, suggesting that FCER1G may be involved in the malignant progression of glioma.

Increased FCER1G expression predicts poor prognosis in gliomas
Having established the correlation between FCER1G expression level and tumor progression, we next investigated the prognostic value of FCER1G. According to the median value of FCER1G expression, patients were divided into high and low expression groups. Kaplan-Meier curves and log-rank tests revealed that patients with higher expression of FCER1G in CGGA (HR: 0.69, 95 % CI 0.49-0.98), TCGA (HR: 0.31, 95 % CI 0.23-0.41), Rembrandt (HR: 0.49, 95 % CI 0.39-0.61), and GSE16011 (HR: 0.49, 95 % CI 0.38-0.64) showed significantly poorer overall survival (OS) than those with low expression (Fig. 3a, c, e and f), while patients from the GSE43289 and GSE4412 datasets showed a similar trend without statistical significance (Fig. 3e, f). The sample sizes of the six cohorts differed widely, three exceeding 500 samples and two below 200 samples. To improve the stability of the results, a fixed-effects model was employed to pool the HRs of the six cohorts, and the pooled result also confirmed that patients with a high level of FCER1G expression had shorter OS times than patients with a low expression level (RR = 1.30, 95 % CI 1.24-1.38, Fig. 3g).

To better understand the role of FCER1G expression in patients with glioma, we analyzed the CGGA dataset with clinical data of 1013 glioma patients. We divided the patients into a high expression group (n = 506) and a low expression group (n = 507) based on FCER1G levels. Through univariate analysis of clinical characteristics, we found that high FCER1G expression was associated with older age (P = 0.002), high malignancy (P < 0.001), GBM type (P < 0.001), post-operative relapse (P < 0.001), poorer survival (P < 0.001), IDH wild type (P < 0.001), and different therapeutic options (radiotherapy, P = 0.047; chemotherapy, P = 0.009); however, there was no significant difference by gender (Table 2). The expression level of FCER1G was significantly related to OS in glioma patients, and FCER1G expression was a stable factor affecting their survival.

FCER1G is associated with immune infiltration and immune activation in gliomas
Patients diagnosed with the same histological cancer type may have different immune infiltration levels, which can lead to diverse clinical outcomes. The immune profile relating to prognosis and immunotherapy has been widely reported in several cancers, including gliomas. FCER1G serves as an important regulatory player, involved in initiating the transfer of T cells to the effector T-helper 2 type and in mediating the allergic inflammatory signaling of mast cells and interleukin-4 production from basophils [28,29].

Fig. 2 The expression level of FCER1G increased with the progression of glioma. The X-axis represents the WHO grade and the Y-axis represents FCER1G expression value (log2); based on Wilcoxon tests. a CGGA, b TCGA, c Rembrandt, d GSE16011, e GSE43289, and f GSE4412. g Representative images and h quantification of immunohistochemistry detection of FCER1G in low-grade glioma (LGG) and HGG.
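The Kaplan-Meier comparisons, the Cox models, and the fixed-effects pooling reported in this section can be sketched in R as follows. This is illustrative only: `surv_df` (with columns `time`, `status`, `group`) and the per-cohort vectors `log_hr` and `se_log_hr` are assumed placeholders, not data from the paper.

```r
library(survival)
library(survminer)
library(meta)

# Kaplan-Meier curves with a log-rank test for high vs low FCER1G expression
km <- survfit(Surv(time, status) ~ group, data = surv_df)
ggsurvplot(km, data = surv_df, pval = TRUE)   # pval = TRUE annotates the log-rank P

# Cox proportional hazards model for the same comparison
coxph(Surv(time, status) ~ group, data = surv_df)

# Inverse-variance pooling of the six cohort hazard ratios on the log scale;
# the common-effect (fixed-effects) estimate is part of the default output
pooled <- metagen(TE = log_hr, seTE = se_log_hr, sm = "HR")
summary(pooled)
```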
Therefore, the correlation between FCER1G and immune infiltration levels was evaluated to reveal the possible mechanism by which FCER1G affects the prognosis of gliomas. The relative quantities of the 28 immune cell types in the CGGA dataset were systematically estimated using the ssGSEA algorithm (Fig. 4a). The correlations of FCER1G expression with the infiltrating levels of immune cells were evaluated by the Spearman method, which revealed a close relationship of FCER1G with T cells, macrophages, and B cells (Fig. 4b). These results suggested that FCER1G expression is involved in immune infiltration remodeling of gliomas. Next, to further elucidate the relationship between FCER1G expression and immune infiltration and to explore the molecular mechanisms of FCER1G, we queried the STRING database. The results showed that FCER1G interacts closely with FCGR3A, ITGB2, LYN, and SYK, with FCER1G acting as a core gene (Fig. 4c). Moreover, we analyzed the differential expression between the high and low FCER1G groups: a total of 372 genes were up-regulated and 22 genes were down-regulated (adjusted P value < 0.05, FC > 1.5 or < -1.5, Fig. 4d). We then analyzed the GO terms and KEGG pathways enriched among the DEGs. Among the GO biological process terms, most DEGs were enriched in neutrophil activation, leukocyte migration, collagen-containing extracellular matrix, and cell adhesion molecule binding (Fig. 4e). According to the KEGG analysis, Staphylococcus aureus infection, phagosome, and cell adhesion molecules (CAMs) were remarkably enriched (Additional file 3: Fig. S2). Gene set enrichment analysis (GSEA) was also used to explore the mechanisms of FCER1G in gliomas. The CGGA data were analyzed with "MsigdbC2KEGG" (KEGG gene set, listed in Additional file 1). The enrichment results (nominal P value < 0.05 and FDR < 0.25) are shown in Additional file 1: Sheet 3. Various immune activation and tumor progression associated gene sets were enriched, especially cytokine signaling in immunity, DNA replication, and PD-1 signaling (Fig. 4f), reflecting relatively enhanced tumor progression and activated inflammation.

Identification of the correlation between FCER1G and the immune phenotype of gliomas
To further explore whether malignant gliomas exhibit a hot immune phenotype, manually curated gene sets related to both adaptive and innate immune responses were used to quantify the immune phenotype (Fig. 5a). The heatmap showed that, with increasing FCER1G expression, the immune phenotype tended to become "hot". This is consistent with the conclusion drawn above that FCER1G plays a key role in the activated immune response of glioma. Spearman's test revealed a high correlation of FCER1G expression with PDL1 signaling (r = 0.45, P < 0.05), CTLA4 signaling (r = 0.38, P < 0.05), and T cell mediated immunity (r = 0.42, P < 0.05), which further confirmed the GSEA findings (Fig. 5b, d).

Subgroups divided by FCER1G expression predict potential immunotherapy responses of gliomas
The above findings suggested that FCER1G is closely associated with T cells, which play an important role in immunosurveillance evasion in malignant gliomas [30].
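The immune-phenotype scoring and the Spearman correlations above can be sketched in the same way; `expr` and the list of 25 curated immunity-related gene sets `immunity_sets` are again assumed placeholders rather than the authors' objects.

```r
library(GSVA)

# GSVA scores for the curated immunity-related gene sets
gsva_scores <- gsva(expr, immunity_sets)          # default method = "gsva"

# Spearman correlation of FCER1G expression with each immune score
fcer1g <- expr["FCER1G", ]
rhos <- apply(gsva_scores, 1, function(s)
  cor.test(fcer1g, s, method = "spearman")$estimate)
sort(rhos, decreasing = TRUE)                     # rank gene sets by correlation
```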
Strong correlations were found between PD1 (PDCD1) and PDL1 (CD274)/PDL2 (PDCD1LG2), between CTLA4 and CD80/CD86, and between CXCR4 and CXCL12 in gliomas (Additional file 4: Fig. 3a-c). The relative abundances of 24 types of immune cells in the tumor microenvironment (TME) of gliomas were quantified with ImmuCellAI. Notably, the proportions of TIICs showed marked variation between the FCER1G-high and FCER1G-low subgroups (Fig. 6a). Moreover, FCER1G showed significant correlations with PD1 (r = 0.42, P < 0.01), PDL1 (r = 0.62, P < 0.01), and CTLA4 (r = 0.34, P < 0.01) (Fig. 6b, c); the same conclusions were drawn from the analysis of the TCGA-GBMLGG dataset (Additional file 4: Fig. 3d, e). To verify the transcriptome results from the public datasets, 20 patients from Shanghai General Hospital were included in our study, and quantitative real-time PCR was used to investigate the correlation between the expression levels of FCER1G and PD1; the results showed that FCER1G was positively correlated with PD1 (r = 0.62, P < 0.01) (Additional file 5: Fig. 4a). Patients with high FCER1G expression showed high levels of the therapeutic targets PD1, PDL1, and CTLA4, suggesting immune checkpoint blockade as a hypothetical treatment. Taken together, FCER1G may be a good index for quantifying the tumor immune microenvironment and predicting the immunotherapy responses of gliomas.

Discussion
FCER1G, known as FcRγ, is a key molecule involved in tumor progression. Previous studies have shown that it is an innate immunity gene involved in the development of eczema, clear cell renal cell carcinoma, meningioma, and childhood leukemia [17,31-33]. In our study, greater malignancy and poorer outcomes were confirmed in patients of the FCER1G-high group compared with the FCER1G-low group. To gain insight into the intrinsic mechanisms and signaling pathways, DEGs between the two groups were analyzed. Up-regulated DEGs in the subgroup with poor outcomes were enriched in immune response and inflammatory response, which was also confirmed by both KEGG functional enrichment analysis and GSEA. Tumor progression is a complex process that requires interaction between cancer cells, the microenvironment, and the immune system, influencing both tumor initiation and progression [34]. Recent research suggests that immune system cells have an essential accessory role in preserving tissue integrity and function during homeostasis, infection, and noninfectious perturbations by eliminating pathogens, exerting some influence on the clinical outcomes of tumors [35,36]. Many studies have also demonstrated that high immune infiltration is associated with improved clinical outcomes and better response to treatment in cancers [37-42]. We showed by GSEA that various immune activation and tumor progression associated gene sets were enriched, especially cytokine signaling in immunity, DNA replication, and PD-1 signaling. The cytokine signaling and PD-1 signaling pathways have been identified as key signaling pathways in immunotherapy for glioma. In this study, a Cox regression model was built to predict the prognosis of glioma patients from multiple factors, including FCER1G expression, age, grade, tumor recurrence, IDH status, and chemotherapeutic status.

Fig. 4 (continued) The correlation between the ssGSEA scores of 28 immune cells and the expression of FCER1G in gliomas. c The STRING database shows the PPI network of FCER1G. d Volcano plot of DEG expression between the FCER1G-high and FCER1G-low groups; cut-off criteria for DEG significance were adjusted P value < 0.05 and absolute log2 fold change > 1.5; the Y-axis displays the -log10 P value for each gene, while the X-axis displays the log2 fold change for that gene relative to FCER1G expression. e GO results for differentially expressed genes; the X-axis represents the gene ratio and the Y-axis represents the enriched pathways (BP, biological process; CC, cellular component; MF, molecular function). f Rank-based gene set enrichment analysis shows significantly activated immune-related pathways in the FCER1G-high subgroup compared with FCER1G-low (LFC, log fold change). (See figure on next page.)
Furthermore, we presented the gene FCER1G as a novel diagnostic and therapeutic target for the first time, stratifying glioma cases into high and low FCER1G expression subgroups with distinct clinical outcomes. We then explored the underlying molecular mechanisms of FCER1G in tumor progression and the potential correlation between FCER1G expression, immune cell activation, and response to immunotherapy in patients with glioma.

The treatment of gliomas is highly individualized, and tests are available to guide the use of radiotherapy or chemotherapy. For instance, O6-methylguanine-DNA methyltransferase (MGMT) testing assesses drug resistance in temozolomide-based chemotherapy [43,44]. In addition, radio-sensitivity and XPO1 expression have been combined to predict the effectiveness of radiotherapy [45]. However, there is a lack of diagnostic biomarkers guiding adjuvant immunotherapy, for which immune checkpoints are a possible factor. Currently, the clinical benefit of ICB is observed in only a minority of patients with glioma, many of whom tend to relapse after a short-term benefit. The type, density, functionality, and location of the different immune cells in the tumor microenvironment are major factors predicting the response to ICB. Indeed, tumors infiltrated with pre-existing T cells are more likely to respond to ICBs. Thus, the majority of tumors can be classified as "cold" immune-desert tumors or "hot" inflamed immune-infiltrated tumors [46,47]. In line with this concept, exploring biomarkers to assess the tumor immune microenvironment and predict tumor sensitivity to immunotherapy is a novel strategy. Our research, with a large sample size of 1013 patients, confirmed that FCER1G is a novel independent prognostic predictor that can identify patients who respond effectively to immunotherapy.

The relative abundances of 24 types of immune cells in the TME of gliomas were quantified with ImmuCellAI. Notably, patients with high FCER1G expression showed high levels of the therapeutic targets PDL1 and CTLA4, suggesting immune checkpoint blockade as a hypothetical treatment. PDL1 is a key negative regulator of the immune inhibitory axis controlling T lymphocyte infiltration in solid tumors, and it is widely expressed in glioma cell lines [48,49] and human specimens [50,51]. PD-L1 has recently been regarded as an oncogenic gene: down-regulation of PDL1 significantly decreases the tumor volume of U87 gliomas in nude mice, while overexpression of PDL1 promotes tumor progression [52]. Moreover, CTLA4 is one of the most fundamental immunosuppressive molecules, inhibiting T-cell activation and terminating the T-cell response [53]. The positive correlation of FCER1G with PD-L1 and CTLA4 indicates its predictive value for the response to immunotherapy.
Furthermore, patients in the FCER1G-high subgroup had higher TIS scores, which have been reported to correlate with the response to the checkpoint inhibitor pembrolizumab. The possibility of an immunotherapy response in patients with gliomas was predicted with the ImmuCellAI, SubMap, and TIDE algorithms, all of which suggested that patients with high levels of FCER1G were more likely to respond to immunotherapy.

Despite these findings, this study has limitations. The sample data were downloaded from the CGGA, TCGA, and GEO databases, and particular information about the extent of surgical resection, a critical factor for overall survival, was not provided. Thus, further analysis with more detailed clinical information should be presented in subsequent studies. Moreover, we lack sufficient clinical data to validate the predictive value of FCER1G for the glioma immunotherapy response; we will continue to investigate this potential predictive value in future studies.

In summary, this study demonstrated FCER1G as a novel predictor for clinical diagnosis, prognosis, and response to immunotherapy in glioma patients. Assessing the expression of FCER1G is a promising way to identify patients who may benefit from immunotherapy. These results are of great clinical significance and will contribute to personalized therapy.

Fig. 6 (fragment) Expression levels of PDCD1, CD274, and CTLA4 in CGGA datasets; patients were divided into high and low expression groups by the median expression level. d Expression levels of PDCD1, CD274, and CTLA4 in the FCER1G-high and FCER1G-low subgroups.
2020-11-05T09:09:26.621Z
2020-11-03T00:00:00.000
{ "year": 2021, "sha1": "644520b10b158ec8e458bb1293f829af8725851d", "oa_license": "CCBY", "oa_url": "https://cancerci.biomedcentral.com/track/pdf/10.1186/s12935-021-01804-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8f293b979860ca6211515900c94f4b874374b511", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
254626838
pes2o/s2orc
v3-fos-license
How to make the engage really engaging: A framework for an instructional approach for the pre-service teachers

The five phases of the 5E instructional model, based on constructivist learning theory, encourage inquiry in the science classroom. The first, engage, phase of the 5E inquiry model plays a critical role in piquing students' interest and in pre-diagnostic assessment before beginning the lesson. In this study, 55 pre-service teachers (PSTs) enrolled in a science methods course and participated in a qualitative research study. Using the 5E instructional approach, PSTs planned and implemented peer teaching and field teaching. The data came from the PSTs' inquiry-based peer teaching lesson plans, field teaching lesson plans, peer teaching sessions, and semi-structured interviews.

INTRODUCTION AND LITERATURE REVIEW
Inquiry has been posited as "a technique that encourages students to discover or construct information by themselves instead of having teachers directly reveal the information." Inquiry in science classrooms is considered an amalgamation of "cognitive, social, and physical" practices (NRC, 2012). Inquiry-based science teaching is important for in-depth understanding of science content. According to Furtak (2006), inquiry in science teaching takes two forms: scientific and constructivist. The scientific form affirms that students learn science best by doing what scientists do. The constructivist form affirms that students discover and construct knowledge from their experiences. The 5E model is one such instructional method that uses inquiry to teach students science content (NRC, 2012). The 5E instructional model pushes students to be scientific and constructivist at the same time.

Bybee et al. (2006) established the 5E instructional model, which originated from the three-phase learning cycle. In addition to the existing three phases (exploration, concept introduction, and concept application), the engage and evaluate phases were added. Therefore, the 5E model has five phases: engage, explore, explain, elaborate, and evaluate. The three phases of the learning cycle (exploration, concept introduction, and concept application) align with the explore, explain, and elaborate phases of the 5E model, respectively. The initial engage phase is a new phase during which teachers assess students' prior knowledge and generate students' interest in the topic at hand. Most teacher educators use the 5E lesson plan model as a framework to support pre-service teachers (PSTs) in professional development programs to design and teach science lessons (Duran & Duran, 2004). However, few focus on the engage phase and how teachers can make it truly engaging so as to keep students' attention. The engage phase plays a vital role in assessing students' prior knowledge, addressing misconceptions, and laying a good foundation. Bybee et al. (2006) summarized the engage phase as the activity that "makes connections between the past and present learning experiences, exposes prior conceptions, and organizes students' thinking toward the learning outcomes of current activities" (p. 2). The engage phase sets the stage for the whole lesson and allows students to learn new knowledge. It also plays a crucial role in directing students to the main idea or objective of the lesson.
Tanner (2010) posits that teachers believe the engage phase happens at the beginning of the class, but that teachers can take the liberty of engaging students throughout the lesson. He indicated that engagement could also be structured through homework assignments, writing reflections, reading articles, or watching videos. Previous empirical research mentions the benefits of the engage phase and how, implemented correctly, it sets the stage for a meaningful science lesson. However, there is a paucity of research concerning a specific structure for the engage phase or what constitutes a good engage phase of the 5E model. Knowing that the engage phase is student-centered, a motivational period that creates a desire to learn more and nudges students to ask themselves "What do I already know about this topic?" (Duran & Duran, 2004), we chose to investigate and create a framework for the successful planning and implementation of the engage phase. In this study we explored the ways PSTs chose to engage students, whether they asked questions, and whether they were able to relate the engage phase to the objective of the lesson. We argue that a successful engage phase must satisfy at least one of the following conditions: a) it relates closely to the lesson objective, b) it assesses students' prior knowledge and identifies misconceptions, or c) it creates curiosity among students about the concept being taught.

METHOD
For this study we adopted a qualitative research method with 55 PST participants. The participants were introduced to the 5E inquiry model as part of a science methods course for elementary education. As part of the coursework at a private university in North Texas, the PSTs used the 5E inquiry model to teach their peers and students in the field. PSTs' peer teaching lesson plans, field teaching lesson plans, peer teaching observations, and semi-structured interviews were the data sources for this study. Rubrics for lesson plans and peer teaching sessions were designed to collect and analyze the data. The data from field teaching lesson plans, peer teaching lesson plans, peer teaching sessions, and interviews were constantly compared according to Glaser's (1965) method. The analyzed data indicated the relatedness of the chosen engage phase to the objective of the lesson, the types of questions PSTs asked to assess students' prior knowledge, and the types of engage activities PSTs chose to create and model curiosity in students.

RESULTS
According to the analysis of the data collected in this study, the engage phase should:
• relate to the objective of the lesson,
• assess students' prior knowledge and identify their misconceptions, and
• create curiosity among students.

Relate to the Objective of the Lesson
Abrahams and Millar (2008) posit that "science involves an interplay between ideas and observation" (p. 1965). The activity, question, or video chosen for the engage phase must develop a strong connection between the observations made during the engage phase and the big scientific ideas in the lesson objectives. The credibility of the engage activities, their relatedness to the objective of the lesson, and the science inquiry appeared to be highly motivating for the students. This motivation improved students' desire to push through any initial confusion to grasp authentic scientific information (Schinske et al., 2008).
PSTs cannot simply plan to guide students to link theoretical ideas with the observations made through the activities: students will only be able to link their observations to the big ideas if the PSTs present them with clear learning objectives. Well-planned engage activities not only help students link their ideas and observations but also motivate them to discover the underlying scientific principles. The results showed that 73% of the PSTs had an engage phase related to the objective of the lesson. In their interviews, 24% of the PSTs shared that successful engagement at the beginning of the lesson was directly proportional to students' understanding of the lesson's objective. Table 1 shows some examples of engage phases that successfully relate to the lesson objective and others that fail to do so.

In the first example in Table 1 (PST-28 and PST-29), the PSTs chose an engage phase related to the lesson objective. They chose to ask questions and assess students' prior knowledge of the topic, using a good mix of open-ended and closed-ended questions. The open-ended question "How does an animal cell differ from a plant cell?" promoted discussion among students, stimulated their thinking, and allowed them to hypothesize, speculate, and share their existing ideas. The closed-ended question "What are the different parts of an animal cell?" checked whether students were able to retain and recollect previously learned information. It also helped the teacher see whether students were thinking about and connecting a commonly held set of ideas. The second example shows PST-13 and PST-14 choosing to read a book as an engage activity. The activity was related to the objective of the lesson, and reading the book aloud piqued students' interest immediately; for example, while reading "The Very Hungry Caterpillar", the students asked their peers what the caterpillar would transform into. In the third example, PST-30 and PST-31, while teaching the layers of the earth, asked students to cut a Snickers bar in half. During this engage activity, the PSTs neither provided guiding questions nor guided students to compare the bar to the layers of the earth; the activity was facilitated poorly and failed to engage the students. In the last example, PST-25, PST-26, and PST-27 chose to engage students by showing them a video about static electricity. The video chosen was too long (six minutes) and poorly animated for the age group, and it presented the information directly to the students. The video demonstrated the balloon experiment, in which bits of paper stick to a balloon, and explained the science behind it. The same experiment would have made a great explore activity for students if followed by inquiring questions. The PSTs could have stopped the video after the demonstration and asked questions to create curiosity and check students' understanding. The engage activity failed to intrigue students or stimulate their thinking, and the PSTs did not plan to ask guiding questions, which could have filled the gap between the video and the topic.

Assess Students' Prior Knowledge and Identify Their Misconceptions
The results showed that only 15% of the PSTs were successful in asking good questions in the engage phase of this study. During their interviews, PSTs shared their uncertainty about the type of questions to ask in the engage phase.
The results show that, apart from engaging students with a relevant engage activity, it is crucial to choose one that can assess their prior knowledge of the topic. To assess what students already know, PSTs can incorporate guiding questions into activities. Engaging students can be as simple as asking them what they already know about the day's topic before starting; this strategy has the bonus of revealing what students already know (Allen & Tanner, 2002). As this study takes a constructivist approach, we believe that knowledge is constructed from one's experiences. Students come into the classroom from diverse backgrounds with diverse experiences. When new concepts are introduced in the classroom, students link the concepts with their preconceived notions and life experiences, some of which may lead to misconceptions. It is very important to identify the misconceptions students hold regarding the topic. As Taber (2014) mentioned in his study, good teaching practice requires teachers to acknowledge students' preconceived knowledge, existing conceptions, and misconceptions, which might affect their understanding of scientific ideas. Allen and Tanner (2002) opined that questioning in the engage phase initiates teaching, as the process influences behaviors and attitudes and reveals students' misconceptions and misunderstandings. They also believe that "when practiced artfully, questioning can play a central role in the development of students' intellectual abilities; questions can guide thinking as well as test for it" (p. 63). Table 2 shows examples of questions asked by the PSTs in our research study.

The examples in Table 2 are a good combination of open, closed, and rhetorical questions, which can assess students' prior knowledge of the topic and guide their thinking. For example, PST-3 and PST-4, while teaching "matter" to grade 3 students, asked "Where does an ice cube melt quicker, closer to or away from a flame?" The students answered that melting happens closer to the flame; the PSTs then asked "Why?", and the students talked about heat making the molecules move more freely. The PSTs followed the inquiry by asking "How are the molecules arranged in an ice cube?", to which some students answered closely packed and some said loosely packed. The PSTs addressed the loosely-packed misconception in some students, leading them to understand the three different forms of matter.

Create and Model Curiosity Among Children
As discussed by Millar (2010), teaching science is much more than plainly delivering the content and expecting students to learn what the teacher intends to teach. In a recent study, Hodson (2014) posits that teachers aiming to develop "scientifically literate students" must create curiosity in the science classroom. Curiosity often helps students bridge the gap between what they know and what they want to know, and the teacher should lead students on this journey from "what they know" to "what they want to know". The teacher's mission should be to support students in making sense of new ideas in the light of their existing ideas and to link them to experiential learning (Driver, 1985). Discussing the scientific habits of mind, Lawson (2009) explained "science as a way of thinking, a spirit of inquiry driven by a curiosity to understand nature" (p. 5). Curiosity among students sparks a desire to look for answers, presenting "teachable moments" for the teacher in the classroom.
The teacher should use the engage phase to create such moments and set up the other phases of 5E. To foster curiosity in the science classroom and develop students' scientific literacy, teachers must use a multitude of pedagogical approaches. The teacher's task is to provide opportunities for students to be both curious and critical in the quest for scientific literacy (Higgins & Moeed, 2017). The engage phase of the 5E inquiry model plays a vital role in creating curiosity among students in science classrooms. According to Tanner (2010), exposing students to a challenge statement on a common misconception can help them recognize that they still have things to learn. Table 3 shows examples of good engage activities used by our PSTs.

The first example in Table 3 is a good engage activity in which the teacher provides tangible materials for students to engage with and discover that some objects are attracted by magnets and some are not. The activity was followed by questions that guided students a little further into the inquiry and sparked their curiosity. In example two, by contrast, the teachers chose to ask students an open-ended question without properly engaging them; the question was very direct and not appropriate for the students' age. Teachers should practice asking quality questions, which are a vital medium for curiosity. They should allow students to tinker with materials and thoughts, which also stimulates curiosity and leads to innovative outcomes. Curiosity can also be modeled by exploring students' interests, asking critical questions about their ideas, and inviting students to perceive their scientific questions as mysteries to be solved. Activities like engaging students in examining scientific journals encourage students to commit to difficult scientific objectives, stay on task, and successfully navigate and complete the assignments presented to them (Schinske et al., 2008).

DISCUSSION AND CONCLUSION
PSTs in this study used the 5E instructional model to design and teach science lessons to their peers and to students during their field work. The aim of the methods of teaching science course was to encourage PSTs to design and implement lessons using the inquiry model and to understand the advantages and challenges of each phase while teaching. According to Tanner (2010), the first, engage, phase of the 5E inquiry model is often skipped or neglected by educators. In this study, 73% of PSTs successfully planned and implemented the engage phase, especially one that relates to the objective of the lesson. This shows that 73% of the PSTs understood the role of the engage phase and its relationship to the lesson objective. However, there is still a need for PSTs to perceive engage as a critical phase for piquing students' interest and assessing their prior knowledge. Assessing students' prior knowledge is another major constituent of the engage phase. Students' preconceived notions, big ideas, and misconceptions related to any topic can be assessed through questioning. According to Allen and Tanner (2002), "questions challenge students' thinking, which leads them to insights and discoveries of their own." In this study, 91% of PSTs chose questioning as a way to engage students, but only 15% of them were successful in asking good questions in the engage phase. Quality questioning also plays a crucial role in fostering curiosity among students.
Creating and modeling curiosity is another major constituent of the engage phase when teaching science, as Luce and Hsi (2015) opined that the "discipline of science requires curiosity". In this study, 40% of PSTs chose showing a video as an engage activity, but only 24% of them were successful in engaging students through videos. Though audio-visual media are great teaching resources, they can be distracting in the classroom, with unnecessary dramatization and too much information presented at once. Teachers must also ensure the authenticity of the information in a video and check its suitability for the learners. In this study, only 56% of the PSTs planned lessons with a good engage phase that related to the objective of the lesson, included good questions to assess students, and piqued students' curiosity. This indicates that more work is required for PSTs to design an engage phase that lays the foundation for a good 5E lesson as a whole. They also require more training and support in familiarizing themselves with cognitively appropriate questions and activities.
2022-12-14T16:06:22.720Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "676186cb87b0169febb2fb9b2b9c36153288a4db", "oa_license": "CCBY", "oa_url": "https://www.ejsee.com/download/how-to-make-the-engage-really-engaging-a-framework-for-an-instructional-approach-for-the-pre-service-12706.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "e1ac03173e235d79cff2b22f6860477bd6aa826e", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
134083122
pes2o/s2orc
v3-fos-license
Tropical Forest and Ecosystems Services in Indian Context

Tropical forests are sensitive, adaptive, and vital ecosystems. They cover approximately 7% of the dry land area on earth. The productive, protective, and regulative functions of these forests are economically valuable, to the tune of billions of US dollars per year. They provide goods and services including timber, food, fodder, medicines, the hydrological cycle, shelter, culture, aesthetics, and recreation. Growing development threatens the existence of these useful and important ecosystems. Major threats to these forests are population explosion, growing urbanization, agriculture, industrialization, deforestation, overexploitation of resources, excessive mining, climate change, fragmentation, and habitat destruction. These factors have been destroying the forests very rapidly, putting a great number of plants and animals in danger of extinction. It is therefore necessary to formulate a correct conservation strategy and sound management plan for the restoration of these critical ecosystems.

Introduction
The rich and varied ecosystem providing most goods and services to mankind is none other than the tropical forest. These are among the most important ecosystems in nature on planet earth. They provide rich and varied resources to the world, upon which human society has continued to thrive from time immemorial, and are considered the most complex and species-rich ecosystems of the world 1,2. These forests spread across a wide range of eco-climatic conditions along the equatorial region, from the tropical rain forests of appreciably hot lowlands to snow-clad mountains, and from areas of unusually little seasonality in precipitation to persistently humid conditions 3. Tropical forests are no longer recognized merely as the habitat of charismatic, rare, unique, and endangered species of plants and animals, including reptiles, birds, and mammals, providing particular habitats of special interest; they are also known to provide timber, fuel wood, climate amelioration, soil and water conservation, and clean air, to regulate climate, floods, and pollination, and to supply various cultural and economic services and kinds of raw material with enormous social benefits 4,5,6,7,8. They are in fact the life-support ecosystems on earth. The ecological importance of tropical forests is undoubted, yet they are threatened by population increase, urbanization, deforestation, agriculture, fragmentation, habitat loss, legal and illegal logging, mining, fire, and climate change 6,9,10,11. These factors alter the role of forests, adversely reducing services such as the regulation of floods, biodiversity, and landslides, causing loss of soil productivity and of the provision of food and livelihoods, and diminishing the security of the millions of people living
in and around the forests 12,13. The present review is an effort to lay out the related problems and propose recommendations for achieving conservation goals, along with suitable measures for the restoration of tropical forests.

Function
Tropical forests are fragile ecosystems which perform three major functions: productive, protective, and regulative 6. Productive functions include timber, firewood, food, fodder, fiber, medicinal plants, etc. 14. The protective function is seen in how forest soils readily absorb water, so that surface runoff rarely occurs outside the stream channels in forest areas, causing important water catchments to form the water table beneath the forests 15. Evapotranspiration in forest areas maintains atmospheric moisture, regulating the environmental temperature of the region 16,17,18. Forests also prevent desertification, radiation damage, landslides, and pollution 4,13,19.

Regulative functions involve the biogeochemical cycles, floods, droughts, etc. Forests play an important role in the global carbon cycle. Most carbon enters the ecosystem through photosynthesis. The net carbon sink of forests depends on human behavior: if forests are disturbed by human or natural causes, carbon emissions into the atmosphere increase, or the sequestration potential declines 20. Tropical forests are the largest sink of carbon in the world 21. They contain about 40% of global terrestrial carbon, account for more than half of global gross productivity, and sequester large amounts of CO2 from the atmosphere 22,23. Carbon stocks in forests lie predominantly in live biomass and in soils, with smaller amounts in coarse woody debris 24,25,26,27. In tropical forests worldwide, about 50% of the total carbon is stored in the above-ground biomass and 50% in the top 1 meter of soil 28. However, there are marked differences among the sites observed by various workers in different countries 29,30. Litter fall and the death of organisms add detritus whose decomposition adds organic carbon to the soil through microbial activity 31. During the period from 1995 to 2005, carbon stocks in the forests were estimated to increase from 6244.78 million tonnes to 6621.55 million tonnes, thereby registering an annual increment of 37.68 million tonnes of carbon 32. Several studies have established that carbon sequestration by forests could provide relatively low-cost net emission reductions 33,34. Tropical forests are more effective in carbon sequestration than other forests 35. Forests regulate the hydrological cycle, including increasing precipitation, flood water detention, ground water discharge, and sediment retention, thereby mitigating the consequences of floods 36. They also play an important role in the hydrological cycle by regulating water flows and sub-soil water regimes, recharging aquifers, and maintaining the flow of water in rivers and rivulets, as they are the source of a large number of rivers and rivulets in the country 7.
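The annual increment quoted above follows directly from the two stock estimates over the ten-year period 1995-2005:

$$\frac{6621.55 - 6244.78}{10} = \frac{376.77}{10} \approx 37.68 \ \text{million tonnes of carbon per year}.$$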
Goods and Services
Forests provide valuable goods such as timber, fuel wood, and fodder, which are their major and direct contributions of high economic value, along with non-timber forest products (NTFPs) including fruits, nuts, pods, barks, gums, resins, and medicinal plants 15,37,38,39. Bamboo, commonly known as the poor man's timber due to its utility and accessibility to common people, is an important forest resource. The total bamboo-bearing area of our country has been estimated at 13.96 million ha 38. Paper mills are the main consumers of bamboo, purchasing it at an average rate of Rs 1500 per tonne, and the total economic value of annually harvestable bamboo is estimated to be Rs 24298.25 crore per year 7.

The ecosystem services of forests have been evaluated as their indirect contribution. Climatic amelioration is one of the major roles of forests, providing clean air and a pollution-free environment. In our country, people enjoy clean air and a pollution-free environment in villages located on the fringes of forest areas 40. Various workers have evaluated the services provided by trees from time to time. The services of a tree serving for about 50 years have been estimated in monetary terms at US $31,250 for providing oxygen, US $62,000 for air pollution and soil erosion control, US $31,250 for soil fertility, US $37,500 for water recycling, and US $31,250 for shelter for birds and animals 41. United States forests have been estimated to produce climatic control benefits worth US $18.5 billion annually 42. The role of forests in reducing air pollution through the absorption of SO2 and particulate matter by trees has been established 43, extending the life expectancy of the population and reducing hospitalization. Populations living in pollution-free forest areas benefit from the amounts saved on healthcare expenditure 44. Water performs a number of ecological functions, such as the hydrological cycle, the nutrient cycle, temperature control, and life support for plants and animals 45. Forest ecosystems maintain a large number of rivers and rivulets, and forested watersheds have better-quality water available. The Shimla catchment forest, which supplies drinking water to the township, is the best example 7.

Forests provide seeds for growing local crops in fringe-area villages; these seeds have minimal chances of attack by disease-causing and crop-damaging organisms, providing biological control and minimizing the use of insecticides and pesticides 6. The role of birds, bees, and plants has been estimated to be of enormous economic value 46. Though it is very difficult to assess its exact economic value, it amounts to billions of US dollars per annum globally 47,48.
Forests play an important role in ensuring food security for society, and they provide habitat for animals, birds, and insects of enormous value. Recreational opportunities and amenities are other important services generated by forests. National parks and wildlife sanctuaries in the country attract large numbers of domestic and international tourists, and visits to these areas and other ecotourism destinations are increasing day by day 49,50,51. Another important service is rendered by individual trees: a single tree surviving for about 100 years provides goods and services worth Rs 64 lakh, comprising oxygen worth Rs 11.0 lakh, fertilizer and prevention of soil erosion worth Rs 12.8 lakh, absorption of air pollution worth Rs 21.0 lakh, shelter for animals and birds worth Rs 10.6 lakh, and fruits, flowers, medicines, etc. worth Rs 8.6 lakh 19.

The above description clearly shows the large quantum of goods and services provided regularly by forests for mankind. Of the economic value of the goods and services generated by the forests of India, around 43.79% (Rs 305110.95 crore) has been estimated to come from direct benefits, including fodder, NTFPs, timber, bamboo, and fuel wood, while the remaining 56.21% (Rs 391712.2 crore) is contributed by indirect benefits, including prevention of soil erosion and landslides, climate amelioration, water retention and water supply, pollination, recreation, food and water security, carbon sequestration, and biological control 7.

Biodiversity
Tropical forests are among the richest terrestrial ecosystems, supporting a variety of life forms and maintaining a huge share of global biodiversity. Although they cover only 7% of the earth's land surface, they harbour high biological diversity, supporting about 50% of described species and an even larger number of undescribed species 52,53,54. India is one of the 12 mega-biodiversity countries of the world, with 17000 flowering plant species representing 8% of global biodiversity even though it covers only 2.4% of the earth's surface; it also contains biodiversity hot spots among the richest and most highly endangered eco-regions of the world 55,56.

Biodiversity is essential for human survival and economic well-being and for ecosystem function and stability 57. Studies of biodiversity in relation to ecosystem functioning have revealed that species diversity enhances the productivity and stability of ecosystems 58,59. It affects the strength and capacity of ecosystems to provide the essential goods and services necessary for the well-being and prosperity of human populations in both developed and developing countries 4. The importance of tropical forests and their high biodiversity value for a wide range of endangered species, including insects, amphibians, reptiles, and terrestrial plants, is well established, and they provide essential resources for millions 60,61,62,63.
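The component values quoted above are internally consistent, as a quick check shows:

$$11.0 + 12.8 + 21.0 + 10.6 + 8.6 = 64.0 \ \text{lakh rupees},$$
$$\frac{305110.95}{305110.95 + 391712.2} = \frac{305110.95}{696823.15} \approx 43.79\%.$$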
Threats
Tropical forests are among the most threatened ecosystems of the world due to increasing human population, land use change, deforestation, fragmentation, agriculture, repeated logging, intensive hunting, climatic change, and degradation leading to habitat destruction 64,65,66,67,68,69,70. Deforestation causes loss of biological diversity, with a reduction of ecosystem services such as carbon sequestration and storage, soil quality, and habitat for the bird and insect communities that provide and regulate pollination services 71,72. The effects of fragmentation on plant species include loss of species populations, reduction in remnant population sizes, changes in densities of reproductive individuals, reduced reproductive success, and increased isolation of remnant populations 73,74,75. Anthropogenic effects on tropical forests can be grouped into two broad categories: local effects, which include local land cover changes and invasive species, and global effects, which include changes in atmospheric and climatic conditions caused largely by fossil fuel consumption and remote land cover changes 76,77,78.

The population of tropical countries increased from 1.8 billion in 1950 to 4.9 billion in 2000 and has been projected to grow by a further 2 billion before 2030 79. India is a rural country, with about three fourths of its population residing in villages, and a total population of 147 million is located in the vicinity of forests 80. The vast majority depend upon the forests to meet their basic needs of food, fodder, fuel, timber, medicinal plants, and a pollution-free environment 7,13,40,81,82. The depletion rate of these forests has been very rapid in the recent past. These uncontrolled human activities have to be strictly monitored, or it will be very difficult to recover and restore the forest areas for our future.
Conclusion
Tropical forests are very useful and important ecosystems which harbor a vast share of biodiversity that is declining due to natural and man-made causes. There is an urgent need for scientists and social workers to find measures to protect and conserve these ecosystems, as they are economically important for human societies, especially for those residing in villages near the fringes of these ecosystems, which cover a majority of the Indian population and economy. Some suggested measures are as follows. Awareness programs should be arranged, involving NGOs and mass participation, to educate people in the care, protection, and conservation of forests. Forest products should be taken care of, and plantations on the fringes of villages should be established to lessen villagers' dependence on the forests and let these plant species continue to grow in the forest. Agricultural fields in these areas could be extended to produce such forest products. The modern approach of the government in providing gas fuel for village kitchens should be extended widely to reduce the cutting of wood otherwise used as fuel. Laws should be enforced strictly to save forest wood. Government arrangements should be made to prevent the destruction of forests by fire: fire-fighting steps that are otherwise taken at later stages should be applied at the primary level, because by the time advanced measures are taken, it is often too late to prevent losses in the forest. Illegal mining should be dealt with very strictly, with severe punishment, to conserve these forests in their natural state. There should be strict monitoring by agencies specially constituted for the purpose, acting on the ground to conserve these forests, with the power to recommend action against those involved in illegal acts destroying the forest. It is high time for the government to involve its machinery, with the help of NGOs and the public, in making "save forests" a movement for our own future.
An Accurate Approximation to the Distribution of the Sum of Equally Correlated Nakagami-m Envelopes and its Application in Equal Gain Diversity Receivers

We present a novel and accurate approximation for the distribution of the sum of equally correlated Nakagami-m variates. Building on this result, we study the performance of Equal Gain Combining (EGC) receivers operating over equally correlated fading channels. Numerical results and simulations show the accuracy of the proposed approximation and the validity of the mathematical analysis.

I. INTRODUCTION

The knowledge of the statistics of sums of multiple signal envelopes is important in analytical performance evaluation, such as that of equal gain combining (EGC) systems. However, the evaluation of the probability density function (PDF) and the cumulative distribution function (CDF) of these sums can be rather cumbersome even for statistically independent Nakagami-m or Rayleigh fading channels [2]-[7]. An infinite series technique for computing the PDF of a sum of independent random variables (RVs) was derived in [2]. Applying this technique, the error rate performance of EGC systems under Nakagami fading was presented in [3], whereas, in [4], the problem was analyzed in the frequency domain in terms of semi-analytical expressions with infinite integrals. Two other studies on EGC diversity in Nakagami fading, which use numerical integration over the Gil-Pelaez single infinite integral and Hermite quadrature over a double finite-infinite integral, are presented in [5] and [6], respectively. Closed-form solutions for some modulation schemes are also obtained for dual and triple diversity under Rayleigh fading [2], [5].

All of the above-mentioned works assumed independent fading channels. However, in real-life applications, fading among diversity branches is correlated, which gives the analysis under correlated Nakagami fading particular practical interest. Since the joint PDF of multiple correlated fading branches is not known, the published results for EGC diversity in correlated fading channels deal primarily with the dual-branch case [7]-[9], where error probabilities for binary and QAM signals over correlated Rayleigh channels are expressed in the form of infinite series. Only a few papers address EGC in correlated fading with multiple-order diversity. In [10], EGC performance was determined by approximating the moment generating function (MGF) of its output SNR, where the moments are determined exactly for exponentially correlated Nakagami channels in terms of multi-fold infinite series. A completely novel approach for the performance analysis of diversity combiners in equally correlated fading channels was proposed in [11], where the equally correlated Rayleigh fading channels are transformed into a set of conditionally independent Rician RVs. Based on this technique, the authors in [12] derive the moments of the EGC output SNR in equally correlated Nakagami channels in terms of the Lauricella hypergeometric function, and then use them to evaluate EGC performance measures such as the outage probability (as infinite series) and the error probability (using Gaussian quadrature with weights and abscissas computed by solving sets of nonlinear equations). All of the above approaches yield results that are somewhat complex, not expressed in closed form, and require the computation of infinite series, all of which is attributed to the inherent intricacy of the exact sum statistics.
This intricacy can be circumvented by searching for suitable, highly accurate approximations for the sum of an arbitrary number of Nakagami RVs. Various simple and accurate approximations to the PDF of sums of independent Rayleigh, Rice and Nakagami RVs are proposed in [13]-[15], which are then used for analytical EGC performance evaluation. [15] uses the moment matching method to arrive at the required approximation.

In this paper, we use the moment matching method to obtain a highly accurate closed-form PDF approximation for the sum of an arbitrary number of non-identical equally correlated Nakagami RVs with arbitrary mean powers. We then apply this approximation to efficiently estimate the performance of EGC systems, avoiding many complex numerical calculations inherent to the methods in the above-mentioned previous works. Even approximate closed-form expressions allow one to gain insight into system performance by considering, for example, large-SNR or small-SNR behavior.

II. AN ACCURATE APPROXIMATION TO THE SUM OF EQUALLY CORRELATED NAKAGAMI-m ENVELOPES

Let Z be a sum of L non-identical equally correlated Nakagami-m RVs Z_1, Z_2, ..., Z_L,

Z = Z_1 + Z_2 + ... + Z_L.   (1)

The PDF of each envelope Z_k, 1 ≤ k ≤ L, is given by [1]

f_{Z_k}(z) = (2 m_z^{m_z} z^{2 m_z − 1} / (Γ(m_z) Ω_k^{m_z})) exp(−m_z z² / Ω_k), z ≥ 0,   (2)

having an arbitrary second moment E[Z_k²] = Ω_k, 1 ≤ k ≤ L, the same fading parameter m_z (assumed to be a positive integer) and the same envelope correlation coefficient between each pair of RVs,

ρ = cov(Z_k², Z_l²) / sqrt(var(Z_k²) var(Z_l²)), k ≠ l,   (3)

where E[·], cov(·, ·) and var(·) denote expectation, covariance and variance, respectively. We propose that the unknown PDF of Z be approximated by the PDF of an equivalent RV defined as

R = (R_1² + R_2² + ... + R_L²)^{1/2},   (4)

where R_k, 1 ≤ k ≤ L, represent a different set of L identical equally correlated Nakagami RVs with equal average powers E[R_k²] = Ω_R, equal fading parameters m_R and equal correlation coefficient ρ_R. Additionally, it is assumed that ρ_R = ρ. (5)

Both the MGF and the PDF of R² had been determined in closed form in [16, Eqs. (42a) and (36)], as (6) and (7), respectively, where 1F1(·, ·, ·) is the Kummer confluent hypergeometric function [17, Eq. (9.210)]. The PDF of R is determined by a simple transformation of RVs, f_R(r) = 2 r f_{R²}(r²), which yields (8).

One now needs to determine Ω_R and m_R so that (8) is an accurate approximation of the PDF of Z defined by (1). For this, we apply the moment matching method by respectively matching the second and fourth moments of the RVs Z and R:

E[R²] = E[Z²],   (9)
E[R⁴] = E[Z⁴].   (10)

The second and fourth moments of R, given by (11) and (12), are determined straightforwardly by using the MGF (6) and applying the moment theorem. The second and fourth moments of Z, given by (13) and (14), are determined by applying the multinomial theorem and the results presented in [10, Eq. (21)], [12, Eq. (43)] and Appendix A, with the coefficients (15) and (16) expressed in terms of the Gauss hypergeometric function 2F1(·, ·; ·; ·) [17, Eq. (9.100)] and the Lauricella hypergeometric function F_A(···) of N variables [17, Eq. (9.19)]. Note that the coefficients W(4), W(2,2), W(3,1) and W(2,1,1) can be expressed in terms of the more familiar hypergeometric functions as per (B.1), (B.2), (B.4) and (B.6), respectively. Introducing (11) and (12) into the matching conditions (9) and (10), one obtains the needed parameters for the PDF approximation (8) of Z in closed form as (17) and (18), where E[Z²] and E[Z⁴] are respectively determined from (13) and (14). Note that the fading parameter m_R typically evaluates to a positive real number.
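For readers who want to experiment with the moment-matching idea numerically, the following Python sketch is a minimal illustration and not the paper's exact procedure: it generates equally correlated integer-m Nakagami envelopes from jointly Gaussian components (a construction in the spirit of [11], stated here as an assumption), estimates the second and fourth moments of the sum Z empirically instead of using the closed-form expressions (13)-(16), and fits a single Nakagami surrogate by the classical moment-matching recipe rather than the correlated-sum PDF (8).

```python
import numpy as np

rng = np.random.default_rng(0)

def equally_correlated_nakagami(L, m, omega, rho, n):
    """n draws of L equally correlated Nakagami-m envelopes (integer m).

    Each squared envelope is built from 2m squared Gaussians; a shared
    Gaussian component induces equal correlation across branches. For
    zero-mean jointly Gaussian g_k, g_l, corr(g_k^2, g_l^2) equals
    corr(g_k, g_l)^2, so a mixing weight of rho**0.25 yields a power
    correlation of rho between branches.
    """
    lam = rho ** 0.25
    z2 = np.zeros((n, L))
    for _ in range(2 * m):                    # 2m Gaussian components per branch
        common = rng.standard_normal((n, 1))  # shared across all branches
        own = rng.standard_normal((n, L))     # branch-specific components
        g = lam * common + np.sqrt(1.0 - lam**2) * own
        z2 += g**2
    return np.sqrt(z2 * omega / (2 * m))      # scale so E[Z_k^2] = omega

L, m, omega, rho = 4, 2, 1.0, 0.5             # illustrative values only
Z = equally_correlated_nakagami(L, m, omega, rho, 200_000).sum(axis=1)

# Empirical moment matching to a single Nakagami surrogate:
# Omega = E[Z^2] and m = E[Z^2]^2 / var(Z^2), the classical estimator.
m2, m4 = np.mean(Z**2), np.mean(Z**4)
omega_R = m2
m_R = m2**2 / (m4 - m2**2)
print(f"Omega_R = {omega_R:.3f}, m_R = {m_R:.3f}")
```

Matching only the second and fourth moments is exactly the design choice made in the paper; the difference here is that the moments are sampled rather than evaluated via the Lauricella-function expressions.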
A. Special Case: Sum of Identical Equally Correlated Nakagami RVs

Let the equally correlated Nakagami RVs Z_k, 1 ≤ k ≤ L, have the same second moments E[Z_k²] = Ω_Z (equipowered branches), the same fading parameter m_Z (a positive integer) and the same correlation coefficient ρ between each pair of RVs. In this case, (13) and (14) simplify, by using (A.6), to (19) and (20), where the necessary coefficients W(k_1, k_2, k_3, k_4) are again calculated by (16). The needed parameters for the PDF approximation (8) of Z are then obtained from (17) and (18).

III. APPLICATION IN THE PERFORMANCE ANALYSIS OF EGC RECEIVERS

We consider a typical L-branch EGC diversity receiver exposed to slow, flat Nakagami fading. The envelopes of the useful branch signals Z_k are non-identical equally correlated Nakagami random processes with PDFs given by (2), whereas their respective phases are i.i.d. uniform random processes. Each branch is also corrupted by additive white Gaussian noise (AWGN) with power spectral density N_0/2, which is added to the useful branch signal. In the EGC receiver, the random phases of the branch signals are compensated (co-phased), equally weighted and then summed together to produce the decision variable. The envelope of the composite useful signal, denoted by Z, is given by (1), whereas the composite noise power is given by σ²_EGC = L N_0/2, resulting in the instantaneous output SNR

γ_EGC = Z² / (L N_0) = (G_1 + G_2 + ... + G_L)²,   (21)

where the RVs G_k = Z_k / √(L N_0), 1 ≤ k ≤ L, form a set of L non-identical equally correlated Nakagami RVs with E[G_k²] = γ̄_k / L, the same fading parameter m_z and the same correlation coefficient ρ among the diversity branches. Note that γ̄_k = Ω_k / N_0 denotes the average SNR in the k-th branch. Using the results from Section II, it is now possible to approximate the PDF and MGF of (21) by (7) and (6), respectively, with Ω_R replaced by γ̄ = Ω_R / (L N_0). These closed-form approximations are then used to determine the outage probability and the error probability of L-branch EGC systems in correlated Nakagami fading with high accuracy.

B. Average Error Probability

Comparing (1) and (4), it is obvious that the error performance of an EGC system can be approximated by the performance of an equivalent maximal ratio combining (MRC) system, for which many closed-form solutions exist. For example, [19] derives the error probabilities of L-branch MRC with coherent and non-coherent detection of binary signals in identical correlated Nakagami fading channels. Thus, the average bit error probabilities of the coherent BPSK system and of non-coherent BFSK are respectively expressed as (23), following [19, Eq. (32)], and (24), following [19, Eq. (26)], where F_2(·; ·, ·; ·, ·; ·, ·) is the Appell hypergeometric function (the special case of the Lauricella F_A function of two variables) defined by [17, Eq. (9.180(2))].

IV. ILLUSTRATIVE EXAMPLES AND DISCUSSION

In this section, the proposed approximation for the sum of an arbitrary number of non-identical equally correlated Nakagami channels is validated by Monte Carlo simulations. The simulation of correlated Nakagami random signals is realized using the method proposed in [21, Section VII].
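As a companion to the analysis, the EGC figures of merit can also be estimated by a short Monte Carlo sketch. It reuses equally_correlated_nakagami() from the previous sketch, forms the output SNR of (21), and averages the standard conditional BPSK error probability Q(sqrt(2γ)); the branch parameters and outage threshold are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

L, m, omega, rho = 4, 2, 1.0, 0.5
avg_branch_snr_db = 10.0
N0 = omega / 10 ** (avg_branch_snr_db / 10)   # so gamma_bar_k = omega / N0

# Output SNR of the EGC combiner, gamma_EGC = Z^2 / (L * N0), as in (21).
Z = equally_correlated_nakagami(L, m, omega, rho, 500_000)
gamma_egc = Z.sum(axis=1) ** 2 / (L * N0)

# Average BPSK BER: E[Q(sqrt(2 * gamma))], with Q the Gaussian tail function.
ber = np.mean(norm.sf(np.sqrt(2.0 * gamma_egc)))
# Outage probability for an illustrative 5 dB SNR threshold.
outage = np.mean(gamma_egc < 10 ** (5.0 / 10))
print(f"BER ~ {ber:.3e}, outage ~ {outage:.3e}")
```

Such a simulation plays the role of the validation step described in Section IV, while the paper's contribution is to obtain the same quantities from the closed-form approximations (6)-(8).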
Selection criteria and ranking for sustainable hydrogen production options

- A holistic study of hydrogen production options for a sustainable and carbon-free future.
- It outlines the benefits and challenges of hydrogen production methods.
- Sixteen methods are selected for sustainability investigation based on seven different criteria.
- It covers economic, technical, environmental, and thermodynamic aspects of sustainability.
- It can help stakeholders deploy a hydrogen roadmap for a more sustainable future.

Introduction

We are in an era in which climatic and environmental calamities have become more apparent and more devastating. There is a vital need to implement renewable energy technologies locally and globally to address the energy challenges and develop carbon-free fuel (primarily hydrogen) options. The future role of hydrogen depends upon how effectively sustainable hydrogen production technologies will reach commercial maturity [1]. The path to sustainable hydrogen (Fig. 1) implies that there is a need to link clean and renewable sources to the end-users in the market via sustainable hydrogen production systems.

There are many examples in the open literature on the economic, environmental, and technical aspects of fossil and non-fossil-based hydrogen production methods [2-4]. For instance, Dawood et al. [5] have provided a pathway toward 100% renewable energy systems and investigated hydrogen's role in future energy systems. They have further assessed several hydrogen production options based on efficiencies, environmental impact, and technology maturity level rather than only focusing on cost-effectiveness.

Parra et al. [6] have presented a technoeconomic review of hydrogen energy systems, including power-to-power, power-to-gas, hydrogen refueling, and stationary fuel cells. They have focused on the capital and operating expenses and efficiencies and provided recommendations for policymakers. El-Emam and Ozcan [7] have extensively analyzed the literature on technological, economic, and environmental aspects of clean-hydrogen production. They have introduced hydrogen production routes compatible with renewable and nuclear sources by highlighting the recent advances and developments in the literature. In addition, El-Emam and Ozcan [7] have discussed the current and future expectations on cost aspects of the clean-hydrogen economy. They have also analyzed the recent literature on the environmental aspects of clean-hydrogen production technologies. They have concluded that clean and carbon-free hydrogen production can be economically feasible in the near future.

The technoeconomic assessment of hydrogen production methods [8] shows the current state of the hydrogen supply chain as a promising energy vector, comprising resources, generation and storage technologies, the demand market, and economics. Ji and Wang [9] have reviewed the current status, recent advances, and challenges of fossil- and renewable-based hydrogen production methods. They have compared the life cycle cost and environmental impact of different hydrogen production methods. They have concluded that electrolysis and thermochemical cycles coupled with clean energy sources show considerable potential in terms of economics and environmental friendliness.
Fig. 1. A potential pathway to reach 100% sustainable hydrogen in the energy market.

In another example of sustainability assessment [10], the authors have reviewed various photocatalytic systems for hydrogen production via overall water splitting and photoreforming of biomass-derived organic substances. They have also evaluated the quantum efficiency and solar-to-hydrogen conversion efficiency of existing photocatalysts. Ferraren-De Cagalitan et al. [11] have provided an overview of a variety of hydrogen production methods via biochemical reactions (biohydrogen). They have also discussed the challenges of biohydrogen production and offered solutions to some issues.

With similar motivation, Vyas et al. [12] have reviewed photocatalytic hydrogen production via carbon quantum dots to make photonic and bio-based hydrogen affordable, reliable, clean, and the fuel of the future. Sun et al. [13] have evaluated graphitic carbon nitride heterojunction photocatalysts for solar hydrogen production. They have focused on solar-to-hydrogen and apparent quantum efficiency, the hydrogen evolution reaction rate, charge transfer, and additional material properties for more sustainable photocatalytic hydrogen production.

In a roadmap for current and future exploration of carbon-free hydrogen production and exportation [14], the authors have assessed several available alternatives for Qatar. They have considered using natural gas as a feedstock for hydrogen production through steam methane reforming (SMR), solar-integrated steam methane reforming with carbon capture, and electrolysis. They have identified the potential of each alternative based on selected technical, economic, and environmental criteria. They have concluded that green hydrogen has the potential to serve as a sustainable fuel in Qatar in the near future. They have highlighted that green hydrogen will become quite competitive in the region as technologies associated with clean hydrogen production improve and the cost of renewable energy falls. Dolle et al. [15] have reviewed the recent developments and trends concerning the electro-reforming of biomass. Similarly, Lepage et al. [16] have investigated biological and electrochemical hydrogen production processes with lower technology readiness levels. Both studies indicate that although electro-reforming processes are less mature, they have significant advantages. Some of these advantages are the mild operating conditions; lower energy consumption; clean hydrogen production without downstream purification processes; and the possibility of co-production of value-added compounds at the anode.

Ma et al. [17] have provided insights into the feasibility of using hydrogen and bioethanol blends as energy carriers in the foreseeable future, discussing their advantages and disadvantages. They have provided comprehensive overviews of hydrogen and bioethanol production, storage, and transportation. They have also summarized the current problems and potential solutions. According to the authors, the increasing research on bioethanol reforming to hydrogen and the emergence of solid-state storage methods for hydrogen could make it possible for hydrogen to be used as a carrier of energy sources in the near future.
As discussed above, numerous examples in the literature focus on evaluating and enhancing hydrogen production performance. Several studies also comparatively and quantitatively evaluate the sustainability performance of various hydrogen production options. However, there is a lack of studies combining energetic, exergetic, economic, and environmental performance with technology maturity. Also, there is a lack of studies comparatively assessing the sustainability performance of green, blue, gray, brown, orange, and turquoise hydrogen. The motivation behind this study is to address this gap, which can be a meaningful contribution to the literature.

This study develops selection criteria and a ranking specifically for hydrogen production methods from renewable and non-renewable sources, and these methods are then holistically discussed, assessed, and compared with each other. These sixteen hydrogen production methods are comparatively assessed based on environmental, technical, economic, and thermodynamic performance. The study further aims to show the strengths and weaknesses of the selected hydrogen production methods, potentially providing valuable information to the industry, buildings, and transportation sectors. Such information will help provide a clear roadmap to 100% hydrogen use in all sectors and further guide researchers, policymakers, and industry while transitioning to a hydrogen economy for a sustainable and zero-carbon future.

Hydrogen production methods

This section introduces and discusses the hydrogen production methods selected for the sustainability evaluation. The selected methods, grouped according to their color codes, are shown in Fig. 2. Green hydrogen is produced through processes such as water electrolysis powered by renewable energy sources; it is called green because no CO2 is emitted during the production process. Blue hydrogen is sourced from fossil fuels, but the CO2 is captured and stored (carbon capture and sequestration). Gray hydrogen is produced from natural gas, commonly using the steam methane reforming method; during this process, CO2 is produced and eventually released into the atmosphere. Brown hydrogen is produced from coal. The gasification of coal is a method used to produce hydrogen; however, it is a very polluting process, and CO2 and carbon monoxide are produced as by-products and released into the atmosphere. In this study, biomass gasification is classified as orange because it still releases CO2 into the atmosphere, but biomass is not a fossil fuel. The hydrogen production options that rely on thermal energy are classified as turquoise hydrogen.

The selected hydrogen production methods can be categorized based on their primary energy and material (input) resources. These methods use either one or a combination of the following primary energy sources: electrical, thermal, photonic, and biochemical. Electrical and thermal energy can be obtained from renewables, biomass, nuclear, or fossil fuels. The source of the electrical and thermal energy significantly affects the corresponding methods' emissions.
Electrolysis

Water electrolysis powered by renewable energy sources is expected to enable the scale-up of hydrogen production. During the water electrolysis process, there are no CO2 emissions. Hence, storing surplus renewable energy as hydrogen shows excellent promise. Another advantage is that hydrogen from water electrolysis has high purity (99.9%) [18] and can be used in many industrial processes. Typical characteristics of the leading electrolysis technologies are listed in Table 1 (adapted from Ref. [19]), which compares the electrolyzer types by charge carrier, operating temperature (°C), electrolyte, and application stage.

Note that perovskites are commonly used as anodes in conventional alkaline electrolyzers, and Ni alloys as cathodes. Efficiency is reported to be between 59 and 70% [20]. The advantages are the low capital cost, relatively stable structure, and mature technology. However, the disadvantages are corrosive electrolytes, gas permeation, and slow dynamics. The challenges related to conventional alkaline electrolyzers are improving durability and reliability.

The anodes of solid alkaline electrolyzers are Ni-based, and the cathodes are also Ni alloys. Solid alkaline electrolyzers are in laboratory-scale operation, and therefore there is not sufficient information about their efficiency in the literature. Solid alkaline electrolyzers combine the advantages of alkaline and proton exchange membrane (PEM) electrolyzers. Then again, they have some drawbacks, including low OH− conductivity in polymeric membranes. Improving the electrolyte conductivity is one of the main challenges related to solid alkaline electrolyzers [21].

PEM electrolyzers are in the near-term commercialization stage with 65-82% efficiencies. PEM electrolyzers have numerous advantages, including compact design, fast response and start-up, and highly pure hydrogen production. The disadvantages of these electrolyzers are the use of high-cost polymeric membranes and the requirement for noble metals due to their acidic medium. Reducing noble metal usage in PEM electrolyzers is one of the primary challenges [22].

In the literature, solid oxide electrolyzers (SOE) are grouped into two categories, H+ SOE and O2− SOE. H+ SOE is at laboratory scale, while O2− SOE is in the demonstration stage [23]. Both SOEs have numerous advantages, including enhanced kinetics and thermodynamics, lower energy demand, and low capital costs. The challenges related to SOE are microstructural changes in the electrodes, delamination, and passivation [24].

Another electrolyzer technology, co-electrolysis, is at the laboratory-scale stage of application. This technology shows great potential, especially in the direct production of syngas. Co-electrolyzers and SOE technologies have similar disadvantages, including mechanically unstable electrodes, which cause cracking, safety issues due to high temperatures, and sealing issues. The challenges to be addressed in co-electrolysis are carbon deposition and microstructural changes on the electrodes [25].
Plasma reforming

The plasma reforming method can produce hydrogen from various feedstocks, including alcohols and fossil fuels, without wasting precious or rare resources [26]. In the literature, there are several examples of plasma reforming, such as corona discharge [26], dielectric barrier discharge [27], and gliding arc discharge (GAD) [28]. Gliding arc discharge has lower energy consumption and a smaller reactor volume than corona discharge and dielectric barrier discharge [29].

In the open literature, some of the ideal sources for producing hydrogen via plasma reforming are methane, toluene, dodecane, high-octane gasoline, and diesel [30]. The need for fuels with high energy density arises from the high-temperature requirement of plasma reforming. In plasma reforming, the feedstock fuels need to be exposed to extremely high temperatures, above 2000 °C, in the presence of oxidants to produce syngas and hydrogen [31].

Thermolysis

Thermolysis is a thermal water decomposition reaction that occurs in a single step, following a straightforward chemical principle [32]. In thermal water decomposition reactions, the vast amount of energy needed to break water molecules is supplied by heat. For this reason, thermolysis requires large amounts of heat at high temperatures. In the literature [33], thermal water decomposition is reported to start at temperatures over 1700 °C, and temperatures above 4000 °C are required to achieve complete conversion of water via thermolysis. The heat requirement of the thermolysis reaction can be met by solar thermal, biomass, or geothermal heat [34]. The challenge with solar thermal as the energy source in thermolysis is the supply of uninterrupted heat to support continuous operation [35].

One major disadvantage of thermolysis is its high operating temperatures. It is not easy to find safe, durable, and high-temperature-resistant materials in most cases. Another challenge is the separation of hydrogen and oxygen because, in thermolysis, the product gases are not generated at two different outlets. Since the thermolysis product gases are a mixture of oxygen and hydrogen, there is always a considerable risk of these two gases recombining into water. Moreover, the recombination reaction is very explosive, which is a safety issue. One possible solution to prevent the recombination of oxygen and hydrogen is quenching. However, quenching is effective at lower temperatures: preventing recombination of the product gases via quenching requires the temperature to be reduced dramatically, by 1500-2000 °C, within a few milliseconds after the thermolysis reaction [9]. There have been several efforts in the literature to separate the product gases, using membranes, various reactor centrifugation schemes, and supersonic jets [8]. However, these technologies are quite limited.

Several challenges, as discussed above, slow down the large-scale application, market introduction, and commercialization of thermolysis. However, despite the drawbacks, thermolysis technologies are continually improving, and thermolysis still has significant advantages, especially at small scales [35,36].
Thermochemical water splitting

As mentioned before, single-step water decomposition requires temperatures above 4000 °C for complete water dissociation [37]. Thermochemical cycles comprise a series of chemical reactions with a net reaction of water dissociation. By including multiple steps, the temperature requirement of thermal water dissociation is reduced, which is a significant advantage for integrating heat from renewable sources into thermal hydrogen production. However, among the thermochemical cycles currently studied in the literature, only a few have shown promising results at the lab scale.

Mandal and Jana [38] have designed a reactive distillation column for the S-I cycle to tackle the reaction equilibrium limitations and overcome azeotrope formation in the HI dissociation reaction. HI decomposition and H2 separation occur concurrently in the designed system, leading to lower energy consumption and higher efficiency.

Another example of the S-I cycle has been presented by Yilmaz and Selbas [39], who combined the cycle with solar energy. The authors have reported overall energy and exergy efficiencies of around 32.76% and 34.56%, respectively. Park et al. [40] have presented an integrated system that combines the S-I cycle with a steam boiler, which supplies the heat demand of the thermochemical cycle. The authors have eliminated the H2SO4 dissociation reaction in the S-I cycle, reducing the temperature requirements and associated costs.

Thermochemical S-I cycles require catalysts to enhance the HI dissociation and conversion rates. Since Pt is an expensive catalyst, its requirement increases the system costs and is a significant challenge for the deployment of thermochemical cycles. For this reason, there have been many attempts in the literature to replace Pt with safer, more abundant, and more affordable catalysts. Some catalysts, such as Pd-CeO2-300, show enhanced activity, stability, and HI conversion rates compared to Pt alone [41].

Another example of promising thermochemical cycles in the literature is the Cu-Cl cycle, which has two to five steps. An example is the four-step Cu-Cl cycle, where HCl production and drying co-occur. The advantages are reduced sedimentation and clogging. However, there are several disadvantages, such as high-temperature requirements and low productivity. These challenges are addressed by combining the oxychlorination and dissociation reactions in a single step, called the three-step cycle. Using a single reactor for all thermochemical reactions is an essential advantage of the three-step cycle. Nevertheless, there are still disadvantages, such as the high-temperature heat requirement and the corrosive medium, which increase the system costs [38].

Biomass gasification

Biomass gasification is considered the most convenient and economical method for hydrogen production. In this process, special chemical agents are used to convert biomass into a mixture of gaseous products called syngas or producer gas at relatively high temperatures between 900 °C and 1200 °C. Air, oxygen, steam, or their combinations can be utilized as chemical agents in biomass gasification [42,43]. In the gasification process, biomass quality is critical. For instance, the biomass moisture content must be kept between 9 and 22% [44].
Complex thermochemical reactions occur during the gasification process. As a result, gaseous and solid species are simultaneously interconverted. The critical stages in the process include drying, partial oxidation, pyrolysis, and gasification. Partial oxidation plays two roles: one is to produce heat to be used in the following reaction stages, and the other is to lower the moisture ratio in the biomass feedstock. Pyrolysis occurs between 200 and 700 °C, where the partial oxidation reaction supplies the heat. In this stage, oxidizing agents (O2 or air) are used to produce a mixture of gases such as H2, CO, CO2, and CH4 and to release moisture. In the pyrolysis stage, char and tar are also formed; they then undergo thermal cracking and produce non-condensable gases and light hydrocarbons [45].

Compared to biomass combustion and pyrolysis processes, gasification has significant advantages. Firstly, biomass gasification has a higher conversion efficiency, because the product gases (CO, H2, and CH4) have a higher calorific value [46]. A wide range of biomass feedstocks can be used for gasification, including wood, agricultural waste, and the organic fraction of municipal solid waste [47]. Biomass gasification followed by combustion in cogeneration mode provides further benefits [48,49]. This approach has higher efficiency and can be used for synthesizing different fuels as valuable products.

Biomass gasification effectively converts biomass into energy and is suitable for distributed (decentralized) energy systems. Distributed systems have multiple benefits, such as reduced transmission, delivery, and distribution losses and lower transportation costs. Therefore, biomass gasification can play a crucial role in future hydrogen energy systems.

Photocatalysis

Photocatalysis is a prospective way to efficiently convert and store solar energy, which is beneficial for achieving sustainable hydrogen production [50]. There has been almost a century of research on photocatalysis, and there are still debates on the difference between photocatalysis and photosynthesis [51]. The International Union of Pure and Applied Chemistry (IUPAC) defines photocatalysis as "a change in the rate of a chemical reaction or its initiation under the action of ultraviolet, visible, or infrared radiation in the presence of a substance, the photocatalyst, that absorbs light and is involved in the chemical transformation of the reaction partners" [52].

Photocatalysis research can be grouped into five categories: water splitting for H2 production [53], CO2 reduction [54], nitrogen fixation [55], contaminant degradation [56], and organic synthesis [57]. In all categories, photocatalysts offer the advantage of direct solar energy conversion into valuable products without generating electricity as an intermediate step [58]. Photocatalysis has several limitations, such as the bandgap and thermodynamic spontaneity, which hinder the reaction rate or yield [59]. Osterloh [60] has investigated several photocatalytic and photosynthetic water splitting methods to identify these limitations and has provided several ways to tackle these challenges. Rajeshwar et al. [61] have investigated the free-energy change of photocatalytic and photosynthetic reactions to tackle the thermodynamic limitations. Photocatalysis is recognized as an ongoing and promising research topic in the literature, and many studies focus on the materials science and engineering design aspects [62,63].
Photoelectrochemical cells

The primary mechanism of photoelectrochemical (PEC) water splitting involves the direct conversion of sunlight into hydrogen fuel. The process occurs with the help of an external bias applied to the photoelectrodes immersed in an electrolyte [64]. Four basic steps are generally involved in PEC water splitting. First, sunlight falls on the photoactive working electrode (i.e., the photoanode) and generates electron-hole pairs. Secondly, the photogenerated holes at the photoanode surface cause water oxidation. Thirdly, electrons generated from incident photons transfer through an external wire from the photoanode to the photocathode. Then the reduction of H+ by these electrons at the photocathode surface forms hydrogen gas [65]. Efficient carrier separation is achieved using an external power supply between the two electrodes. In the PEC process, to split water, the photoactive material must have an appropriate bandgap to generate electrons from the incoming photonic energy [66].

The band positions of the semiconductors are another critical factor affecting PEC water splitting performance. The conduction band edge determines the reducing power of the photogenerated electrons, while the valence band edge determines the oxidation power of the photogenerated holes [67].

PEC-based hydrogen production has gained considerable attention during the past years because of its several advantages, such as clean operation and direct sunlight conversion. These advantages provide an outstanding opportunity to link solar energy to sustainable hydrogen production. PEC can also be linked to integrated systems for multigeneration, such as producing power, hydrogen, heat, cooling, and freshwater simultaneously [68]. It is also possible to produce hydrogen from wastewater with PEC [69].

PEC-based hydrogen production technologies are still in the early research stages. Developing, designing, analyzing, building, evaluating, and enhancing PEC systems is crucial for affordable and reliable hydrogen production. For this reason, enhancing PEC performance requires intensive research focusing on the following areas: transport phenomena, thermodynamics, electrochemistry, engineering design and evaluation, materials science, system integration, and comprehensive performance evaluation.

Hybrid thermochemical cycles

Hybrid thermochemical cycles are primarily thermochemical cycles in which heat and some electricity are used simultaneously, which gives two significant advantages. First, they have lower electricity consumption than electrolysis. Second, they have lower temperature and heat requirements than thermal water splitting. The low-temperature heat requirement allows for moderate-temperature heat sources, such as process waste heat [70]. Energy efficiencies of up to 48-50% can be obtained from hybrid processes [71].
Westinghouse's two-step hybrid sulfur (HyS) cycle is the most well-known hybrid thermochemical cycle for hydrogen production. Westinghouse originally proposed HyS as a thermochemical-electrochemical cycle to supply large-scale hydrogen in the 1970s. HyS is the first demonstrated hybrid thermochemical cycle and has just two reactions: the thermal decomposition of sulfuric acid (the heat-consuming step) and the electrochemical oxidation of SO2 with water (the electricity-consuming step) to yield sulfuric acid and hydrogen [72,73]. The voltage requirement of the electrochemical oxidation step is less than 0.20 V. This is significantly lower than the electrolysis voltage requirement, which is around 1.23 V [74].

The most critical challenges of the HyS cycle are reported to be the SO3 reduction temperature and corrosive chemicals [75,76]. The reaction rate can be enhanced by utilizing iron oxide-based catalysts. Silicon carbide (SiC) can be used to make the system components corrosion-resistant [73]. Integrating the HyS cycle with concentrated solar energy is reported to achieve an overall system efficiency of over 25% at a cost between 3 and 6 USD/kg H2 [71].

Coal gasification

Hydrogen production via coal gasification is reported to be the most cost-efficient method [77]. The process also has the advantage of producing high calorific value syngas [78]. Plasma gasification is comparatively new and promising among the existing coal gasification processes [79]. Plasma gasification has higher conversion efficiency than other gasification options because of its higher operating temperatures. For the same reason, it is the only gasification option that allows waste metal recovery. Some of the other advantages of plasma gasification are listed as follows [80-82]:

➢ Higher efficiency than combustion (<50%), pyrolysis (<43%), and other gasification technologies (<19%)
➢ Less tar output and a higher carbon conversion rate than other gasification methods
➢ Production of syngas from organic wastes with higher conversion efficiency
➢ Fewer toxic residues, such as ash, slag, et cetera
➢ Higher hydrogen content in the syngas output compared to other gasification methods

As a result, plasma gasification can be an advantageous coal gasification method for producing hydrogen. The process generates syngas that contains CO and H2. After the gas cleaning process, pure hydrogen (about 99%) can be obtained. In the literature, coal gasification is reported to produce around 0.1 kg of hydrogen from 1 kg of coal [77].
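The voltage figures quoted in the hybrid-cycle discussion above can be turned into a rough energy comparison with Faraday's law. The sketch below is a back-of-envelope check, not a result from the study: it computes the electrical energy per kilogram of hydrogen implied by a given cell voltage, W = z F V / M_H2, with z = 2 electrons per H2 molecule.

```python
# Back-of-envelope check (not from the paper): electrical energy per kg H2
# implied by a cell voltage, W = z * F * V / M_H2, with z = 2 electrons per H2.
F = 96485.0          # Faraday constant, C/mol
M_H2 = 2.016e-3      # molar mass of H2, kg/mol
Z_ELECTRONS = 2

def kwh_per_kg(voltage_v):
    joules_per_kg = Z_ELECTRONS * F * voltage_v / M_H2
    return joules_per_kg / 3.6e6   # convert J to kWh

for label, v in [("water electrolysis (reversible, 1.23 V)", 1.23),
                 ("HyS SO2 oxidation step (~0.20 V)", 0.20)]:
    print(f"{label}: {kwh_per_kg(v):.1f} kWh of electricity per kg H2")
# Roughly 33 kWh/kg versus 5 kWh/kg: the HyS cycle shifts most of the
# water-splitting duty from electricity to heat.
```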
Natural gas reforming

Natural gas is a mixture of gases, and its main constituent (up to 99%) is methane [83]. The primary use of natural gas is direct combustion for heat and electricity generation. Other emerging uses of natural gas turn methane into several other fuels and industrial chemicals [84]. For example, methane can be converted to H2 or syngas via steam methane reforming [85], dry methane reforming [86], and partial methane oxidation [87].

Steam methane reforming is the primary hydrogen and syngas production method used in industry [88]. The process consists of an endothermic reaction between methane and steam at elevated temperatures (about 750-950 °C) and pressures (around 14-20 atm) [89]. In some cases, a water-gas shift reaction occurs during the process, further enhancing the production of H2.

The research and development activities regarding natural gas reforming focus on developing affordable, clean, efficient, and reliable steam methane reforming technologies. Ni/Al2O3 is the most commonly used catalyst because of its high activity and low cost [90]. Nevertheless, it has some disadvantages, such as coke formation and sintering of the Ni particles. For this reason, the design of advanced catalysts with high sintering and coking resistance is needed. Modifying Ni catalysts with promoters, novel metals, self-supports, and solid solutions are some examples [91]. Other approaches aim to find alternatives to Ni-based catalysts. It is also crucial to design catalysts that are not noble metal-based.

There are several approaches in the literature to reducing the energy requirement and enhancing the efficiency of steam methane reforming. Some examples are chemical looping, electro-catalytic reforming, oxidative reforming, photocatalytic reforming, plasma reforming, solid oxide fuel cells, sorbent enhancement, and thermo-photo hybrid reforming [92-96].

Photofermentation

In photofermentation, photosynthetic bacteria capture light energy and convert organic acids generated during anaerobic fermentation to H2 and CO2 in a nitrogen-deficient environment [97]. These photosynthetic microorganisms exist in the natural environment and can process a wide range of substrates over a broad light spectrum. There is no O2 generation in photofermentation, which is advantageous because O2 inhibits H2 production [11].

In photofermentation, photofermenters capture light from the sun or an artificial source. The sun is the cheaper light source, but it cannot support continuous hydrogen production. Artificial light can support uninterrupted biohydrogen production through cloudy, foggy days and during nights. On the other hand, artificial lights require an energy input and have higher capital, operating, and maintenance costs [98,99].

The hydrogen yield of photofermentation depends on the light intensity, medium, microorganism, photofermenter design, substrate, and several other factors [97,98]. Light is the primary energy source of photofermentative hydrogen production [99,100]. Both the type and the intensity of light should be considered when designing photobioreactors and when conducting the photofermentative biohydrogen production process.

The temperature range of photofermentative hydrogen production can be classified into four types: medium temperature (25-40 °C), elevated temperature (40-65 °C), extremely high temperature (65-80 °C), and ultrahigh temperature (above 80 °C). Most photofermentative bacteria produce hydrogen at medium temperature, typically 30-40 °C, although faster metabolism and a higher hydrogen yield and production rate could be achieved at higher temperatures [101].

In addition, the pH value is essential for maintaining the intracellular dynamic equilibrium, hydrogenase activity, cellular redox potential, and many other metabolic activities. The optimal pH for photofermentative biohydrogen production is generally around 5.0-7.0. This interval may vary with different substrates, inocula, or culture conditions. If the pH is not controlled, the final pH after fermentation usually drops to 4.0 due to the production of volatile fatty acids. It is worth noting that such a decrease in pH would impair hydrogen production by inhibiting hydrogenase activity [102].
Artificial photosynthesis

In artificial photosynthesis, a biochemical reaction mimics natural photosynthesis. Titanium oxide nanoparticles imitate the role of chlorophyll and capture the incoming light [103]. Replicating the natural process can simultaneously provide electricity, food, and fuel (e.g., hydrogen, methane, methanol) [104].

Artificial photosynthesis has several challenges, such as the low efficiency of light capture, electron transfer, water splitting, and CO2 reduction [103]. The available catalysts (e.g., a blue dimer, cobalt, iridium, and rhodium) have low efficiency and high cost. For this reason, there have been numerous studies on developing innovative catalysts to enhance the process [105-107].

Despite the disadvantages listed above, artificial photosynthesis has a higher solar-to-hydrogen efficiency than PV-based electrolysis. One reason is that artificial photosynthesis uses light absorbers that utilize a more significant portion of the incoming photonic energy than the semiconductors used in PVs. However, the light absorbers (e.g., natural and synthetic dyes) increase the cost, and artificial photosynthesis has a shorter system lifetime. Therefore, PV electrolysis remains the preferred option due to its lower cost and longer life [106].

Sustainability performance assessment

This study investigates the strengths and weaknesses of the carefully chosen hydrogen production methods based on seven sustainability criteria. The first two criteria are related to hydrogen production efficiency according to the first and second laws of thermodynamics. The first law of thermodynamics helps determine the energy efficiency, while the second law brings the exergy efficiency to the forefront. Since efficiency is generally described as the amount of desired output divided by the amount of required input, the energy efficiency equation becomes

η_en = (ṁ_H2 × LHV_H2) / Ė_in

Here, ṁ_H2 is the hydrogen production rate in kg/s, LHV_H2 is the lower heating value of hydrogen (taken as 121 MJ/kg), and Ė_in is the rate of energy use in the process in MJ/s. The energy efficiency equation can be modified into the exergy efficiency equation as

η_ex = (ṁ_H2 × ex_ch,H2) / Ėx_in

In the exergy efficiency equation, ex_ch,H2 denotes the chemical exergy of hydrogen and Ėx_in is the rate of exergy input into the process. This study takes the energy and exergy efficiencies of the selected hydrogen production methods from the literature [2,4,5].

The third criterion is the cost of hydrogen production, which is particularly important during the commercialization and scaling-up steps. An essential step towards sustainable hydrogen that is widely used in the market is making it more affordable. Currently, natural gas reforming and coal gasification have the lowest hydrogen production costs, but they also have the highest emissions. The emissions can be reduced by carbon capture (CC) technologies; however, carbon capture increases the hydrogen production cost by about 10-20% [9].
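As a small worked example, the two efficiency definitions above can be implemented directly. The LHV of 121 MJ/kg is taken from the text; the chemical exergy value (about 117 MJ/kg, i.e. roughly 236 kJ/mol) is a commonly quoted figure and an assumption here, as is the illustrative operating point.

```python
# Minimal implementation of the two efficiency definitions above.
# LHV_H2 follows the text (121 MJ/kg); EX_CH_H2 (~117 MJ/kg) is a
# commonly quoted chemical exergy value, not a figure from the paper.
LHV_H2 = 121.0    # MJ/kg
EX_CH_H2 = 117.1  # MJ/kg

def energy_efficiency(m_dot_h2, e_in_rate):
    """eta_en = m_dot * LHV / E_in, with m_dot in kg/s and E_in in MJ/s."""
    return m_dot_h2 * LHV_H2 / e_in_rate

def exergy_efficiency(m_dot_h2, ex_in_rate):
    """eta_ex = m_dot * ex_ch / Ex_in, with Ex_in in MJ/s."""
    return m_dot_h2 * EX_CH_H2 / ex_in_rate

# Illustrative (hypothetical) operating point: 1 g/s of H2 from a 0.18 MW input.
print(f"eta_en = {energy_efficiency(1e-3, 0.18):.2%}")
print(f"eta_ex = {exergy_efficiency(1e-3, 0.18):.2%}")
```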
Compared to traditional, fossil-based options, most renewable-based hydrogen production technologies are still in relatively early development stages. As a result, the life-cycle cost of renewable hydrogen is higher than that of the fossil-based one. However, there has been a significant decrease in the cost of renewable-based hydrogen, and further cost reduction is expected in most emerging renewable hydrogen production methods due to developments in materials science and system design. Fossil-based hydrogen with carbon capture can be a transition step while renewable hydrogen becomes more affordable at large scales. In this study, the cost data of the selected hydrogen production methods are taken from Refs. [3,4,7,35,108].

The fourth and fifth criteria are the global warming and acidification potentials based on life cycle assessment (LCA). LCA is a reliable method to investigate the actual environmental impact [109]. The global warming potential (GWP) gives the kg of CO2 emitted per kg of H2 produced. The acidification potential (AP) indicates waste discharge into the soil and water in terms of grams of SOx released per kg of H2 produced. GWP and AP are the most commonly used environmental impact indicators [110]. The literature shows a clear advantage of renewable hydrogen over fossil-based hydrogen in terms of GWP and AP [111,112]. In addition to these resources, the GWP and AP data are gathered from Refs. [2,4].

The sixth criterion is the cost of carbon (CC). CC measures the marginal external cost of a unit of CO2 emissions due to the damage to the environment and health. The estimation of CC is conducted based on different models, which can be found in detail in the literature [113-118]. The CC is taken as 160 USD per ton of CO2 emissions in this study.

The technology maturity level (TML), which is treated as the seventh criterion, is a modified and consolidated rating scale between 1 and 10. The TML helps communicate the level of maturity of a particular technology. A rating of 1 refers to the exceedingly early stages of research, primarily assigned to novel and small-scale options, and a rating of 10 refers to total market integration. The technology maturity level data are obtained from Ref. [5].

In the last step, all performance criteria are normalized and ranked between 0 and 10 to conduct the comparative assessment: 0 indicates the least desirable case, and 10 the ideal option. No normalization is applied to the technology maturity level, since it is already on a 0-10 scale, with 0 and 10 indicating the least and most desirable options, respectively. For all criteria, the sustainability performance of the selected hydrogen production methods increases as their ranking increases from 0 to 10. Energy and exergy efficiencies are normalized by

Rank(i) = 10 × η(i) / 100%   (3)

The remaining criteria (global warming and acidification potentials, production cost, and cost of carbon) are normalized accordingly and ranked as

Rank(i) = 10 × (1 − value(i) / max value)   (4)

Here, (i) represents the selected method. The "max value" in equation (4) denotes the highest value in the corresponding performance category. For example, if a method has the highest emissions among the selected options, it ranks 0. Alternatively, if an option has the lowest cost, it is assigned the highest ranking. It should be noted that a ranking of 10 corresponds to zero cost and emissions; therefore, none of the methods attains the ideal ranking. This approach is intended to show that each option still has improvement potential in all selected performance criteria.
Once all performance criteria for each of the selected hydrogen production options are normalized and ranked between 0 and 10, their total sustainability scores are calculated. The total scores are compared to the hypothetical ideal scenario where hydrogen is produced with 100% energy and exergy efficiencies, zero GWP, AP, and cost, and a TML of ten. Next, the average scores are calculated. There are eight cases for calculating the average scores, as illustrated in the sketch following this list, namely:

EI: all criteria have equal importance
EE: energy efficiency has a weight of 40%, and the rest of the criteria have a weight of 10% each
ExE: exergy efficiency has a weight of 40%, and the rest of the criteria have a weight of 10% each
Cost: production cost has a weight of 40%, and the rest of the criteria have a weight of 10% each
CC: cost of carbon has a weight of 40%, and the rest of the criteria have a weight of 10% each
GWP: GWP has a weight of 40%, and the rest of the criteria have a weight of 10% each
AP: AP has a weight of 40%, and the rest of the criteria have a weight of 10% each
TML: TML has a weight of 40%, and the rest of the criteria have a weight of 10% each
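A minimal sketch of this normalization and weighting scheme is given below. It assumes the reconstructed forms of equations (3) and (4); the criterion values for the two methods are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Criterion order (placeholder values, not the paper's data):
# [energy eff. %, exergy eff. %, cost USD/kg, CC USD/kg, GWP kg CO2/kg,
#  AP g SOx/kg, TML]
methods = {
    "SMR":                      [75, 70, 1.5, 1.6, 10.0, 8.0, 10],
    "Large-scale electrolysis": [70, 55, 5.0, 0.2, 1.2, 2.0, 9],
}
is_efficiency = [True, True, False, False, False, False, False]

def rank(values):
    v = np.array(list(values.values()), dtype=float)   # methods x criteria
    out = np.zeros_like(v)
    for j in range(v.shape[1]):
        if j == 6:
            out[:, j] = v[:, j]                        # TML already on 0-10
        elif is_efficiency[j]:
            out[:, j] = 10 * v[:, j] / 100.0           # eq. (3): 10 * eta
        else:
            out[:, j] = 10 * (1 - v[:, j] / v[:, j].max())  # eq. (4)
    return out

ranks = rank(methods)

# Case "EI": equal weights; case "Cost": 40% on cost, 10% on each of the rest.
w_equal = np.full(7, 1 / 7)
w_cost = np.array([0.1, 0.1, 0.4, 0.1, 0.1, 0.1, 0.1])
for name, r in zip(methods, ranks):
    print(name, "EI:", round(float(r @ w_equal), 2),
          "Cost-weighted:", round(float(r @ w_cost), 2))
```

The total sustainability score out of 70 discussed below is simply the unweighted sum of a method's seven ranks, i.e. `r.sum()` in this sketch.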
Results and discussion

This section presents and discusses the comparative performance evaluation results of the selected hydrogen production methods based on the selected performance criteria. The first criterion is energy efficiency (Fig. 3). High temperature electrolysis has the highest energy efficiency among the selected production options, followed by large-scale electrolysis and steam methane reforming. In high temperature electrolysis, electricity and heat are used together to split water. Since thermal energy supplies part of the required energy input for water dissociation, high temperature electrolysis uses less electricity and has higher energy efficiency. High temperature electrolysis is still in the research and development phase and is not commercialized, but it has many significant advantages; for example, it supports the reverse process (as in a reversible solid oxide electrolyzer/fuel cell). The costs of high temperature electrolysis are expected to become competitive within a couple of decades [119,120].

On the contrary, photofermentation has the lowest energy efficiency, with photocatalysis and artificial photosynthesis having the second and third lowest energy efficiencies. One reason for this poor performance is the lack of photoactive materials that efficiently convert a larger portion of the solar spectrum into hydrogen. Therefore, more research and development activities are needed to develop more effective, reliable, durable, and clean photoactive materials.

As can be seen from Fig. 4, hybrid thermochemical cycles have the highest exergy efficiency, followed by steam methane reforming and coal gasification. Reforming and gasification with carbon capture have slightly lower efficiency due to the energy requirement of the capture processes. On the other hand, photofermentation, photocatalysis, and photoelectrochemical cells have the lowest exergy efficiency. The exergy efficiency of the hydrogen production processes can be enhanced by eliminating waste heat via energy recovery, such as multigeneration. Similar to the energy efficiency results, advanced photoactive materials could significantly enhance the exergetic performance of photonic hydrogen production processes.

The production cost comparison (Fig. 5) shows that steam methane reforming has the lowest cost, followed by plasma reforming and coal gasification. Furthermore, the most expensive option is photoelectrochemical cells, followed by PV electrolysis and photocatalysis. The results show the clear economic advantage of large-scale, commercially developed hydrogen production methods over the relatively new, lab-scale options. The production cost of hydrogen production methods depends on various factors such as energy input prices, capital or investment costs, technological maturity, and carbon prices. Undoubtedly, there are many uncertainties in the future costs of the selected hydrogen production options. Expectations point to cost reductions in renewable, especially solar-based, hydrogen due to technological and engineering improvements, such as advances in materials science towards highly efficient catalysts and greater durability. Another expectation is that increasing carbon prices will make fossil-based hydrogen less affordable, increasing the cost-competitiveness of renewable hydrogen.

In the literature, the production costs of hydrogen are estimated based on the capital and operational costs. For the more expensive hydrogen production options, such as PEC, PV electrolysis, and photocatalysis, the largest portion of the production cost belongs to the capital cost. Reducing the related capital costs by developing new materials and integrating them into energy systems could potentially enhance the economic performance of the green hydrogen production methods.

The global warming potential (GWP) comparison (Fig. 6) shows that coal gasification has the highest CO2 emissions, followed by plasma and steam methane reforming. On the contrary, artificial photosynthesis, photofermentation, and photoelectrochemical cells have the lowest CO2 emissions. As the results show, incorporating carbon capture technologies in fossil-based hydrogen production methods can significantly reduce their GWP. It is estimated that incorporating carbon capture can reduce the GWP of coal gasification by up to 80% and of steam methane reforming by up to 70% [121]. Nevertheless, carbon capture technologies have not been demonstrated to be feasible in practical applications. Besides, even though carbon capture technologies could significantly reduce the global warming potential of fossil-based hydrogen production options, they may still not be considered sustainable because of the depletion of non-renewable resources.

It should be noted that different literature sources report vastly different global warming and acidification potential values. The variation in the reported emissions is especially significant for smaller-scale hydrogen production options (e.g., green hydrogen). The variation may be due to the difficulty of obtaining normative parameters for methods not in large-scale production; the inventory data used for the evaluation differ or require more self-defined parameters. For this reason, in this study, average values of the reported data are used for CO2 and SO2 emissions.
The fifth criterion is the acidification potential, AP (Fig. 7). The results show that biomass gasification has the highest AP, followed by coal gasification. In contrast, photon-based options (artificial photosynthesis, photofermentation, photocatalysis, and photoelectrochemical cells) have the lowest AP. The results are quite similar to the GWP because of the high CO2 and SOx emissions of fossil fuels. The difference here is the biomass gasification performance. Although it can be considered carbon-neutral, biomass gasification has high SOx emissions, which could endanger water and land resources. The GWP and AP both depend on several factors, such as the primary energy and the conversion process. In order to reduce the negative impact of hydrogen production on the environment, hydrogen should be produced from carbon-free sources via environmentally benign processes. One remarkable outcome of the acidification potential assessment is the biomass gasification performance: biomass gasification performs relatively well in terms of global warming potential, but it has a remarkably high acidification potential. Biomass gasification has lower CO2 emissions than fossil fuel-based methods, but its CO2 emissions are higher than those of green hydrogen production. When the current state of the art of green hydrogen production methods is taken into account, biomass gasification can offer an alternative option during the transition from fossil fuels to renewable sources.

The cost of carbon (CC) results (Fig. 8) show the same trends as the GWP data (Fig. 6). Coal gasification and plasma and steam reforming have the highest CC. On the other hand, photon-based options have the lowest CC. This study takes CC as 160 USD per ton of CO2 emissions. In the future, CC might increase, making photonic hydrogen more attractive in terms of its environmental, social, and economic advantages.

In the literature, it is anticipated that hydrogen production, just like electricity generation, could potentially face a carbon price for any emissions occurring onsite. Therefore, the cost of carbon could be a significant criterion for estimating the sustainability performance of hydrogen production options. With the introduction of new carbon taxes and stricter carbon restrictions, fossil-based hydrogen production is expected to become less favorable and less sustainable, even for large-scale operations.

Fig. 9 shows the technology maturity level of the hydrogen production options. Large-scale and commercialized production options, such as steam methane reforming, coal gasification, and biomass gasification, have the highest TML. Photonic hydrogen has the lowest TML. The biggest challenges for most sustainable hydrogen production methods include the cost of hydrogen production, the maturity of the technology, and the scalability of the production process. Currently, a significant share of hydrogen production comes from steam methane reforming, and it remains the most viable option for moving hydrogen into the energy market in the near term. Blue hydrogen is seen as the main route for low-carbon hydrogen production. The current cost of this method is lower than that of green hydrogen production methods because of the high capital investments required to transition to greener technologies.
Fig. 9 shows the technology maturity level (TML) of the hydrogen production options. Large-scale, commercialized production options such as steam methane reforming, coal gasification, and biomass gasification have the highest TML; photonic hydrogen has the lowest. The biggest challenges for most sustainable hydrogen production methods are the cost of hydrogen production, the maturity of the technology, and the scalability of the production process. Currently, a significant share of hydrogen production comes from steam methane reforming, and it remains the most viable option for moving hydrogen into the energy market in the near term. Blue hydrogen is seen as the main route for low-carbon hydrogen production; its current cost is lower than that of green hydrogen production methods because of the high capital investments required to transition to greener technologies.

Hydrogen production via steam methane reforming and coal gasification has higher technological maturity despite significantly high global warming and acidification potentials. The dependence on natural gas and coal as finite resources and the necessity of long-term CO2 storage in limited suitable geologic storage sites restrict the suitability of steam methane reforming and coal gasification as long-term solutions, even with carbon capture technologies. Nevertheless, hydrogen from steam methane reforming and coal gasification may be the bridging technology that facilitates the energy transition.

The results show that most of the methods with better environmental performance have higher costs and lower TML. One significant challenge is to meet the environmental, social, technical, and thermodynamic requirements at once. Therefore, the following aspects should be considered in future research to overcome this challenge:

Materials science and engineering research: Significant achievements are needed to accomplish substantial progress in theory. Equally, computer modeling and calculations are helpful for predicting, evaluating, and optimizing different types of hydrogen production methods. This aspect can guide the development of experimental routes and the optimization of process conditions, thereby improving efficiency and reducing hydrogen production costs. The ultimate aim is to make large-scale green hydrogen more affordable, reliable, and accessible.

Sustainable energy use and system integration: Current hydrogen production technologies involve complicated processes, high energy consumption, and high costs. Therefore, a process-coupling strategy is recommended for combining essential processes to enhance the efficiency of hydrogen production technologies, e.g., electrolysis in integrated energy systems. Moreover, the sustainability of the energy source and the production process is critical. These measures could significantly reduce the costs and environmental impact of hydrogen production. Other advantages include improved efficiency, operational flexibility, convenience, future market competitiveness, and application prospects.

In addition to the TML, the technology readiness level (TRL) is a meaningful criterion describing the extent of development necessary to reach the stage of commercialization. TRL ranges from theoretical principle, at TRL 1, to operational plant, at TRL 9 [33]. When evaluating the TRL, the minimum TRL of any technology component should be taken, as the limiting factor in development is the component with the lowest TRL. SMR, coal gasification, and large-scale electrolysis are commercially available and have a TRL of 9. More information on the TRL of the selected hydrogen production options can be found in Refs. [7,33].
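The minimum-TRL rule just described is simple enough to state in code. The component names and TRL values below are hypothetical, chosen only to show how one immature component (here, CO2 storage) caps the whole system.

```python
# Sketch of the TRL rule above: a system's overall TRL is the minimum TRL
# among its components. Components and values are hypothetical.

def system_trl(component_trls: dict) -> int:
    """Overall TRL is limited by the least-developed component."""
    return min(component_trls.values())

plant = {
    "reformer": 9,          # mature component (hypothetical)
    "carbon capture": 7,    # demonstration stage (hypothetical)
    "co2 storage": 6,       # pilot stage (hypothetical)
}
print(system_trl(plant))    # -> 6: CO2 storage limits the whole system
```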
The ultimate goal is to produce hydrogen in a clean, reliable, safe, affordable, and efficient manner. For this reason, the normalized rankings are comparatively presented in Fig. 10, which also indicates the ideal case. In Fig. 10, the total ranking of each method is given as its sustainability performance. For each criterion, the selected methods are ranked between 0 and 10. The ideal case has a total ranking, i.e., a sustainability performance, of 70, while the lowest possible performance is 0. With this approach, methods whose totals are closer to 70 have a higher sustainability performance. The rankings are normalized based on the procedure explained in Equations (3) and (4); therefore, the ranking results have no units. Fig. 10 shows that large-scale electrolysis has the performance closest to the ideal case when all criteria are taken into account. Steam methane reforming with carbon capture has the second-highest score; it should be noted that this option has lower energy and exergy efficiencies and is more expensive than steam methane reforming without carbon capture. Developing more efficient, cheaper, and practical carbon capture technologies that support large-scale hydrogen generation via steam reforming could significantly accelerate the transition towards hydrogen energy. Thermochemical and hybrid thermochemical cycles also show promising performance; their sustainability performance depends on the thermal and electrical energy sources and on catalyst performance. In Fig. 10, all performance criteria are assumed to be equally important.

In Fig. 11, several case studies are generated to identify the most and least desirable hydrogen production methods; there are eight different case studies in total. All performance criteria are equally important in the first case study (EI). In the other cases, one criterion is chosen to be the most critical, with a weight of 0.4 on the final grade, while the remaining six criteria are each assigned a weight of 0.1. For instance, in the second case study (EE), energy efficiency has the highest weight (0.4), and the remaining criteria have equal weights (0.1). Exergy efficiency has the highest weight in the third case study (ExE), production cost has the highest importance in the fourth case (Cost), and CC has the highest impact on the final decision in the fifth case. Similarly, GWP, AP, and TML have the highest importance in each of the remaining cases, respectively. The results show that large-scale electrolysis has the highest sustainability performance in almost all cases, with one exception: in the fourth case, where production cost is the determining factor, coal gasification with carbon capture has the highest score.
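The scoring scheme above can be sketched compactly. The study's exact normalization (Equations (3) and (4)) is not reproduced in this excerpt, so simple min-max scaling to a 0-10 rank is assumed here; the case-study weights follow the text (equal weights in the EI case, otherwise 0.4 on one criterion and 0.1 on the remaining six). All raw values are illustrative placeholders.

```python
# Sketch of the ranking and case-study weighting described above.

CRITERIA = ["EE", "ExE", "Cost", "CC", "GWP", "AP", "TML"]

def normalize(values, higher_is_better=True):
    """Min-max scale raw criterion values to 0-10 ranks (assumed scheme)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    ranks = [10.0 * (v - lo) / span for v in values]
    return ranks if higher_is_better else [10.0 - r for r in ranks]

def case_weights(priority=None):
    if priority is None:  # "EI" case: all criteria equally important
        return {c: 1.0 / len(CRITERIA) for c in CRITERIA}
    return {c: (0.4 if c == priority else 0.1) for c in CRITERIA}

def weighted_score(ranks, weights):
    return sum(ranks[c] * weights[c] for c in CRITERIA)

# Lower production cost is better, so invert when normalizing (placeholders):
print(normalize([1.5, 2.3, 5.8], higher_is_better=False))

# Hypothetical 0-10 ranks for one method across the seven criteria:
method = {"EE": 7, "ExE": 6, "Cost": 5, "CC": 9, "GWP": 9, "AP": 8, "TML": 9}
print(f"EI case:   {weighted_score(method, case_weights()):.2f}")
print(f"Cost case: {weighted_score(method, case_weights('Cost')):.2f}")
```

Comparing the EI and Cost cases shows how shifting 40% of the weight onto one criterion can reorder the methods, which is exactly the behavior reported for the fourth case study.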
When the overall performance of the green, blue, gray, brown, orange, and turquoise hydrogen production methods is compared (Fig. 12), it can be seen that green hydrogen production methods perform better in terms of the environmental criteria; however, their energy and exergy efficiencies, cost, and technology maturity level performance are relatively low. On the contrary, blue hydrogen production methods have the highest energy efficiency performance, and gray hydrogen production methods have the highest economic performance. Brown hydrogen production methods have the highest exergy efficiency and technology maturity performance; however, they also have the least desirable performance in terms of cost of carbon and global warming potential. Similarly, orange hydrogen has high technology maturity performance but the least attractive acidification potential performance. In terms of technology maturity level, turquoise hydrogen has the lowest performance, followed by green hydrogen.

Overall, when their average performances are taken into account, blue hydrogen has the highest sustainability score (6.91/10), followed by turquoise hydrogen (6.26/10), orange hydrogen (5.57/10), green hydrogen (5.42/10), gray hydrogen (5.26/10), and brown hydrogen (4.40/10). The results show the need to find alternatives to coal-based hydrogen. Green hydrogen production methods require research and development to enhance their technoeconomic performance. The short-term goal for green hydrogen should be to minimize costs, and the long-term goal should include enhancing exergy efficiency, performance, and durability.

Research and development requirements for sustainable hydrogen production can be summarized as more comprehensive technoeconomic analyses, reliability enhancement, effective and feasible integration with renewables, advanced materials, innovation in system design and operation, and intelligent system control and integration. Technoeconomic analysis models should consider the entire lifecycle and the economic impact of emissions, pollution, and nonrenewable resource degradation.

Conclusions

This study comparatively assesses the sustainability performance of selected hydrogen production methods based on their technical, economic, thermodynamic, social, and environmental aspects. The main findings obtained from this study are sixfold, as follows: High-temperature electrolysis and photofermentation have the highest and lowest energy efficiency, respectively. In terms of exergy efficiency, the hybrid thermochemical cycles and photofermentation are the most and least efficient ones.

Fig. 2 – Specific hydrogen production methods selected for assessment and their color codes. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
Fig. 3 – Energy efficiency comparison of the selected hydrogen production options.
Fig. 4 – Exergy efficiency comparison of the selected hydrogen production options.
Fig. 5 – Production cost (US$/kg H2) comparison of the selected hydrogen production options.
Fig. 6 – Global warming potential (kg CO2/kg H2) comparison of the selected hydrogen production options.
Fig. 7 – Acidification potential (g SOx/kg H2) comparison of the selected hydrogen production options.
Fig. 8 – Cost of carbon (US$/kg H2) comparison of the selected hydrogen production options.
Fig. 9 – Technology maturity level (TML) comparison of the selected hydrogen production options.
Fig. 10 – Overall sustainability performance comparison of the selected hydrogen production methods and the hypothetical ideal case.
Figs. 10–12 are generated not only to highlight the strengths and weaknesses of the selected hydrogen production options but also to underline the research and development opportunities related to each method. The different cases shown in Figs. 11 and 12 can help decision-makers, industrial professionals, and researchers identify research directions for different scenarios with different priorities.

Fig. 12 – Overall performance comparison of selected hydrogen production color groups. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
Reduction of renal interstitial fibrosis by targeting Tie2 in vascular endothelial cells

Background: Tie2, a functional angiopoietin receptor, is expressed in vascular endothelial cells and plays an important role in angiogenesis and vascular stability. This study aimed to evaluate the effects of an agonistic Tie2 signal on renal interstitial fibrosis (RIF) and elucidate the underlying mechanisms.

Methods: We established an in vivo mouse model of folic acid-induced nephropathy (FAN) and an in vitro model of lipopolysaccharide-stimulated endothelial cell injury, and an agonistic Tie2 monoclonal antibody (Tie2 mAb) was then used to intervene in these processes. The degree of tubulointerstitial lesions and the related molecular mechanisms were determined by histological assessment, immunohistochemistry, western blotting, and qPCR.

Results: Tie2 mAb attenuated RIF and reduced the level of fibroblast-specific protein 1 (FSP1). Further, it suppressed vascular cell adhesion molecule-1 (VCAM-1) and increased CD31 density in FAN. In the in vitro model, Tie2 mAb was found to decrease the expression of VCAM-1, Bax, and α-smooth muscle actin (α-SMA).

Conclusions: The present findings indicate that the agonistic Tie2 mAb exerted vascular protective effects and ameliorated RIF via inhibition of vascular inflammation, apoptosis, and fibrosis. Therefore, Tie2 may be a potential target for the treatment of this disease.

Impact: This is the first report to confirm that an agonistic Tie2 monoclonal antibody can reduce renal interstitial fibrosis in folic acid-induced nephropathy in mice. The mechanism possibly involves vascular protective effects brought about by inhibition of vascular inflammation, apoptosis, and fibrosis. Our data show that the Tie2 signal may be a novel, endothelium-specific target for the treatment of tubulointerstitial fibrosis.

INTRODUCTION

Renal tubulointerstitial fibrosis is an important factor in the progression of chronic kidney disease (CKD). 1,2,4-6 In one such study, Yuan et al. 7 demonstrated that peritubular capillary loss may result in tubulointerstitial fibrosis in folic acid-induced nephropathy (FAN) mice. Consequently, protecting the damaged renal microvasculature is a crucial approach to ameliorating renal interstitial fibrosis, and angiogenesis may play an important role in this process. 8,9

Tie2 is a transmembrane receptor tyrosine kinase found almost exclusively on endothelial cells. Angiopoietin-1 (Ang-1) is an endogenous ligand that can activate Tie2. 11-14 Previous reports have shown that Ang-1/Tie2 signaling improves endothelial survival, downregulates inflammatory pathways, 15,16 and rescues apoptosis. 17,18 Recent studies have found that dysregulation of Ang-1/Tie2 signaling is a significant feature in patients with CKD. 19 Interestingly, previous research has reported contradictory results regarding the effect of Ang-1 on renal fibrosis. For example, a soluble, stable, and potent Ang-1 variant (COMP-Ang1) was found to protect peritubular capillaries, downregulate inflammation, and delay fibrotic changes in a unilateral ureteral obstruction (UUO) mouse model. 20 Contrary to the findings observed in UUO models, administration of a soluble form of Ang-1 was found to enhance fibrosis and inflammation in a FAN mouse model. 21 In previous studies, Yuan et al. have demonstrated that the agonistic Tie2 monoclonal antibody (Tie2 mAb) stimulates Tie2 activation, 22 which maintains the integrity of recently formed interstitial vessels. 23
Further, Tie2-expressing capillaries were found to undergo proliferation in the fibrotic interstitium between atrophic tubules in a mouse model of FAN. 24 In the present study, we have tried to expand this line of research by exploring the potential renoprotective effects of Tie2 mAb.

In this study, the in vivo FAN mouse model and the in vitro lipopolysaccharide (LPS)-stimulated human umbilical vein endothelial cell (HUVEC) injury model were used to verify that exogenous treatment with Tie2 mAb can improve tubulointerstitial lesions and decrease tubulointerstitial fibrosis by reducing peritubular capillary inflammation and upregulating peritubular capillary density. Our in vivo and in vitro findings indicate that Tie2 mAb may have therapeutic applications in the treatment of CKD.

Animal experiments: the FAN model and Tie2 mAb treatment

Male CD1 mice (n = 18; age, 4 weeks) weighing 14-18 g were purchased from Suzhou Sinosure Biotechnology Co. Ltd. (Suzhou, China). The mice were randomly divided into three groups (n = 6 per group): WT, FAN, and FAN+Tie2. On day 1, mice of the FAN and FAN+Tie2 groups were intraperitoneally administered FA (F7876, 240 mg/kg; Sigma, Saint Louis) in vehicle (0.2 ml of 0.3 mol/L NaHCO3). The FA dose used has previously been shown to reliably induce severe nephrotoxicity. 24 Mice of the WT group were intraperitoneally administered vehicle only. Urine samples were tested to determine the urinary albumin-to-creatinine ratio (UACR) on day 2. Right nephrectomy was performed on each mouse to obtain tissue for routine histological analysis by Masson staining on day 7. The results revealed FA-induced renal injury, as evidenced by a high UACR and renal pathological changes. Mice in the FAN+Tie2 group were treated by intraperitoneal injection of 1 mg of Tie2 mAb (AF313; R&D, Minneapolis) on day 9. The mice were sacrificed by cervical dislocation, and left nephrectomy was performed for renal tissue studies on day 28 (Fig. 1a).

In vitro experiments: LPS-induced HUVEC injury model and Tie2 mAb treatment

HUVECs were obtained from Procell Life Science and Technology Co. Ltd. (Wuhan, China) and were kept at 37 °C in a humidified incubator containing 5% CO2. Dulbecco's modified Eagle's medium (DMEM; Sigma, Saint Louis) supplemented with 10% fetal bovine serum (FBS; Gibco, New York) was used as the culture medium, which was replaced every 2-3 days. The cells were then exposed to LPS (10 µg/ml; L4005, Sigma, Saint Louis) to construct the in vitro model. Following this, HUVECs with LPS-induced injury were treated with Tie2 mAb (5 µg/ml; AF313, R&D, Minneapolis).

Hematoxylin-eosin and Masson staining

Masson staining was performed using the standard procedure as described previously. 25
At 7 days after FA injection, the right kidneys of the mice were freshly dissected, fixed, processed, and embedded in paraffin. The kidney sections were cut to a thickness of 3 μm and stained with Masson's trichrome. At 28 days after FA injection, the left kidneys were freshly dissected, fixed, processed, and embedded in paraffin; 3-μm sections were then cut and stained with hematoxylin-eosin (HE) and Masson's trichrome. To observe the pathological changes in renal tissues, the HE-stained sections were scored on a scale from 0 to 4 (0: no changes; 1: changes affecting <25% of the section; 2: changes affecting 25-50% of the section; 3: changes affecting 50-75% of the section; and 4: changes affecting 75-100% of the section). 26 The tissue sections were visualized and photographed under a light microscope (Olympus Corporation, Tokyo, Japan). Tubulointerstitial fibrosis was evaluated on the Masson's trichrome-stained sections: using the Olympus Cell Soft Imaging System, the area of tubulointerstitial fibrosis and the total area of each visual field were measured, their ratio was calculated, and the mean of all values was then taken.

Western blot analysis

Protein was extracted from renal tissues and HUVECs using radioimmunoprecipitation assay lysis buffer (Beyotime Institute of Biotechnology, Shanghai, China) containing 1% phenylmethylsulfonyl fluoride (Beyotime Institute of Biotechnology, Shanghai, China). The Enhanced BCA Protein Assay kit (Beyotime Institute of Biotechnology, Shanghai, China) was used to determine protein concentrations. Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred onto polyvinylidene fluoride membranes. The membranes were probed with primary antibodies against VCAM-1 (1:2000; ab134047, Abcam, Cambridge, UK) or α-SMA (1:2500; ab32575, Abcam, Cambridge, UK). Western blot analysis for specific protein expression was performed according to established procedures as described previously. 27

Statistical analysis

GraphPad Prism version 6 (GraphPad Prism Software Inc., San Diego, California) was used for data analysis and figure preparation. Results were expressed as mean ± SEM. One-way analysis of variance (ANOVA) was used to compare the means of the three groups. p < 0.05 was considered to indicate significance.
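The fibrosis quantification above (fibrotic area over total field area, averaged across fields) was performed with the Olympus Cell Soft Imaging System. As a rough open-source sketch of the same ratio, assuming RGB Masson's trichrome fields in which collagen appears blue, one might write the following; the blue-pixel rule, threshold, and file names are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: fibrotic (blue-staining) area as a fraction of tissue area
# in a Masson's trichrome field, averaged over several fields.
import numpy as np
from imageio.v3 import imread  # any RGB image reader would do

def fibrosis_area_ratio(path: str, blue_margin: int = 20) -> float:
    img = imread(path).astype(int)           # H x W x 3 RGB field image
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    fibrotic = b > r + blue_margin           # crude collagen mask (assumption)
    tissue = img.sum(axis=-1) < 3 * 240      # exclude near-white background
    return (fibrotic & tissue).sum() / max(tissue.sum(), 1)

fields = ["field_01.png", "field_02.png"]    # hypothetical file names
ratios = [fibrosis_area_ratio(f) for f in fields]
print(f"mean fibrotic area ratio: {np.mean(ratios):.3f}")
```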
Attenuation of renal atrophy and lesions in FAN model mice treated with Tie2 mAb

On day 2, urine samples from FAN model mice were collected to determine the UACR, an established marker of renal dysfunction. 29 The UACR was higher than normal in the model mice. On day 7, the right kidneys were resected for routine histological analysis by Masson staining. Tubular damage was detected 1 day after FA administration. Over the next few weeks, most tubules regenerated but exhibited patchy atrophy, and interstitial fibrosis was also observed. 24 Consistent with these findings, the present results revealed FA-induced renal injury, as evidenced by a high UACR and renal pathological changes (Fig. S1a, b).

We divided the mice into three groups: WT mice, FAN model mice, and FAN model mice treated with the agonistic Tie2 mAb. Figure 1 shows the changes induced in FAN mice treated with Tie2 mAb. The mice were intraperitoneally administered 1 mg of Tie2 mAb on day 9 (Fig. 1a). Visual examination at autopsy of the FAN mice revealed renal shrinkage with uneven surfaces and a lower kidney/body weight ratio than in the WT group (p < 0.01). In the FAN mice treated with Tie2 mAb, the degree of kidney shrinkage was reduced (Fig. 1b), and the kidney/body weight ratio was substantially improved compared with the FAN mice that did not receive this treatment (Fig. 1c, Table 1). These data suggest that treatment with Tie2 mAb ameliorated renal atrophy in FAN mice. We further evaluated histopathological changes in kidney tissue by HE staining. A considerable number of tubular casts and severe cell infiltration, tubular atrophy, and interstitial fibrosis were detected in the FAN mice; these effects were suppressed by Tie2 mAb administration (Fig. 1d). In addition, the tubulointerstitial injury score was decreased after Tie2 mAb treatment in the FAN mice (p < 0.001) (Fig. 1e). These findings suggest that treatment with Tie2 mAb mitigated FA-induced tubulointerstitial lesions in mice.

Suppression of renal tubulointerstitial fibrosis in FAN mice treated with Tie2 mAb

Masson staining was performed to observe collagen deposition and renal interstitial fibrosis. Significantly increased interstitial collagen deposition was observed on day 28 in the FAN mice, but this effect was suppressed in the antibody-treated FAN mice (Fig. 2a). Treatment with Tie2 mAb resulted in a marked reduction in the FA-induced tubulointerstitial fibrosis area (p < 0.01) (Fig. 2b). In addition, the expression of FSP1 in renal tissues was evaluated by immunohistochemical staining (Fig. 2c). FSP1 has been shown to correlate with serum creatinine levels, creatinine clearance, and biopsy-confirmed fibrosis area, and is a predictor of end-stage renal disease. 30 A significant increase in FSP1 staining in renal tissues was observed in the FAN mice on day 28, whereas Tie2 mAb treatment effectively inhibited FA-induced expression of FSP1 (p < 0.05) (Fig. 2d).

Attenuation of inflammation and improved density of peritubular capillaries in FAN mice treated with Tie2 mAb

The expression of VCAM-1, an important mediator of vascular inflammation, was assessed by western blotting (Fig. 3a). Elevated expression of VCAM-1 was observed in the FAN mice, but this effect was diminished by treatment with Tie2 mAb (p < 0.01) (Fig. 3b), indicating that Tie2 mAb had an anti-inflammatory effect. Furthermore, immunohistochemical staining of CD31, a typical marker of endothelial cells (Fig. 3c), showed that peritubular capillaries could be easily observed by CD31 immunostaining. In the FAN mice, a decrease in the number of CD31-positive capillaries was observed in renal tissues, but this effect was mitigated by administration of Tie2 mAb (p < 0.001) (Fig. 3d). These results suggest that treatment with Tie2 mAb prevented the decrease in peritubular capillary density in FAN mice.

In vitro protective effect of Tie2 mAb on vascular endothelial cells

Based on the ameliorative effect of Tie2 mAb on peritubular capillary lesions and renal interstitial fibrosis in the mouse model of FAN, we used HUVECs with LPS-induced injury as an in vitro model to visualize and quantify the effect of Tie2 mAb on vascular endothelial cells. The mRNA expression levels of VCAM-1 (Fig. 4a) and the apoptosis gene Bax (Fig. 4b) in the LPS group were significantly higher than those in the WT group (p < 0.05), while those in the Tie2 mAb treatment group were significantly lower than those in the LPS group (p < 0.05). This suggests that Tie2 mAb had anti-inflammatory and anti-apoptotic effects on injured HUVECs and thereby promoted endothelial cell regeneration. Next, the in vitro effect of Tie2 mAb on the expression of the pro-fibrotic factor α-SMA was examined in HUVECs. As shown in Fig. 4c, d, the protein expression level of α-SMA was significantly increased by LPS stimulation in HUVECs; however, this increase was significantly suppressed by treatment with Tie2 mAb (p < 0.05). These results demonstrate that Tie2 mAb treatment weakened the fibrotic response in endothelial cells.
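The relative mRNA levels in Fig. 4a, b are reported without a stated quantification method in this excerpt. A minimal sketch of the widely used 2^(-ddCt) approach, which is only an assumption here, with GAPDH as a hypothetical reference gene and hypothetical Ct values, would be:

```python
# Hedged sketch of relative qPCR quantification via the 2^(-ddCt) method.
# The text does not state the study's method; the reference gene (GAPDH)
# and all Ct values below are hypothetical placeholders.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # compare with control group
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: VCAM-1 vs GAPDH in LPS-treated vs control HUVECs.
fold = relative_expression(ct_target=24.0, ct_ref=18.0,
                           ct_target_ctrl=26.5, ct_ref_ctrl=18.2)
print(f"VCAM-1 fold change vs control: {fold:.2f}")  # > 1 means upregulated
```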
DISCUSSION

The folic acid-induced nephropathy (FAN) mouse model is a classical model of acute kidney injury (AKI) and subsequent tubulointerstitial fibrosis. FA induces dose-dependent nephrotoxicity in mice, accompanied by acute tubular epithelial apoptosis. 24,32,33 That is, endothelial cell injury is not directly caused by FA; eventually, this leads to peritubular capillary lesions and tubulointerstitial fibrosis in FAN model mice. Similarly, we used LPS as an inflammatory agent to establish an in vitro model of endothelial cell injury in HUVECs. LPS is one of the most common proinflammatory stimuli and promotes the production of various inflammatory cytokines. 34 The FAN model and the LPS-induced injury model are thus analogous to each other. To demonstrate the successful induction of the FAN model, we performed right nephrectomy to obtain tissue for Masson staining and evaluated the tubulointerstitial lesions in murine renal tissues on day 7; this was also performed in the WT group mice. It should be noted that the removal of one kidney did not cause tubulointerstitial lesions at this time.

Our data demonstrate for the first time that administration of Tie2 mAb ameliorates renal tubulointerstitial fibrosis, dampens renal inflammation, and promotes the growth of peritubular capillaries after FA-induced renal injury in a mouse model. 16-18 Recently, Carota et al. 35 suggested that activation of Tie2 by inhibition of vascular endothelial protein tyrosine phosphatase reduces the expression of pro-inflammatory and pro-fibrotic gene targets in diabetic nephropathy. Rübig et al. 36 also provided evidence that in vivo activation of Tie2 by PEGylated VT, a drug-like Tie2 receptor agonist, can counteract microvascular endothelial barrier dysfunction, improve renal recovery, and reduce mortality in ischemic acute kidney injury. Here, we used a reliable model of CKD to study the impact of Tie2 mAb on kidney fibrosis. In accordance with these previous findings, we found that Tie2 mAb treatment prevented FA-induced renal vascular inflammation, improved the density of peritubular capillaries, and significantly attenuated interstitial fibrosis.
In the present study, we observed that the accumulation of FSP1-positive cells and the interstitial fibrotic area were significantly decreased by Tie2 mAb treatment, indicating that Tie2 mAb had an anti-fibrotic effect. Consistent with these findings, the in vitro protein expression level of α-SMA, a putative marker of myofibroblasts, 37 was increased in HUVECs with LPS-induced injury, while it was markedly repressed by treatment with Tie2 mAb. In addition, the anti-inflammatory effect of Tie2 mAb was demonstrated by downregulation of VCAM-1 expression. In accordance with these findings, there is increasing evidence that Tie2 activation is associated with anti-inflammatory effects in endothelial cells, 38,39 and that Tie2 mAb inhibits endothelial VCAM-1 expression. In addition, in the present study, we examined LPS-induced apoptosis in endothelial cells, as evidenced by elevated levels of the proapoptotic gene Bax. 40 Interestingly, Tie2 mAb administration enhanced endothelial cell survival in response to apoptotic injuries. 41 Hence, Tie2 mAb appears to exert an anti-apoptotic effect on endothelial cells.

There is increasing evidence that peritubular capillaries play an important role in CKD and are a key regulator of CKD progression. 4 Peritubular capillary rarefaction is found not only in diabetic nephropathy 42 and hypertensive nephropathy, 43 but also in IgA nephropathy, 44 congenital nephrotic syndrome, 45 lupus nephritis, 46 and polycystic kidney disease. 47-51 In the present study, we observed that Tie2 mAb administration in FAN mice was associated with an increase in the density of peritubular capillaries. This effect might be due to a Tie2 activation-induced elevation in endothelial cell number that directly enhances endothelial cell proliferation 52 or prevents endothelial cell apoptosis. 53 We were unable to explore these mechanisms in detail in the present study, so more research is needed on the signaling pathway involved in renal tubulointerstitial fibrosis and on how it is affected by the administration of Tie2 mAb.

Fig. 1 Attenuation of renal atrophy and lesions in FAN mice with Tie2 mAb treatment. a FA or vehicle only was administered on day 1. Urine samples were tested to determine the UACR on day 2. Right nephrectomy was performed for routine histological analysis by Masson staining on day 7. FAN+Tie2 group mice were intraperitoneally administered 1 mg Tie2 mAb on day 9. All mice were sacrificed on day 28. b Renal appearance was evaluated and c kidney/body weight was measured to assess renal atrophy. d HE staining was performed to evaluate the histopathological changes in murine renal tissues (magnification, ×200). e Renal lesions were evaluated by calculating the mean tubulointerstitial injury score. All data are presented as mean ± SD (n = 6). *p < 0.05, **p < 0.01, ***p < 0.001. FA folic acid, FAN folic acid nephropathy, Tie2 mAb agonistic Tie2 monoclonal antibody, UACR urinary albumin-to-creatinine ratio (mg/mg).
Fig. 2 Suppression of renal tubulointerstitial fibrosis in FAN mice by Tie2 mAb treatment. a Masson staining was performed to evaluate tubulointerstitial fibrosis in murine renal tissues (magnification, ×200). b Semiquantitative score of tubulointerstitial fibrosis in Masson's trichrome-stained sections; ten randomly selected high-power fields were quantified and the average value was calculated for each mouse. c Expression of FSP1 in murine renal tissues was assessed by immunohistochemical staining (magnification, ×400). d The number of infiltrating FSP1-positive cells in the kidneys was counted. All data are expressed as mean ± SD (n = 6). *p < 0.05, **p < 0.01, ***p < 0.001. FSP1 fibroblast-specific protein 1.

Fig. 3 Attenuation of inflammation and improved density of peritubular capillaries in FAN mice by Tie2 mAb treatment. a Protein expression level of VCAM-1 in murine renal tissues was assessed by western blotting. b Densitometry results are presented as the relative ratio of VCAM-1 to β-actin. c The density of PTC was evaluated by immunohistochemical staining of CD31 (magnification, ×400). d Density of PTC in the kidneys. All data are expressed as mean ± SD (n = 6). *p < 0.05, **p < 0.01, ***p < 0.001. VCAM-1 vascular cell adhesion molecule-1, PTC peritubular capillaries, CD31 cluster of differentiation 31.

Table 1. Kidney and body weights.
Clinical Application Effects of Different Preoperative Blood Management Schemes in Older Patients with Delayed Intertrochanteric Fracture Surgery

Introduction: Research on preoperative blood management in older patients with delayed surgery for intertrochanteric fracture is scarce, especially regarding hematopoiesis and hemostasis. We assessed the effectiveness of optimized blood management programs in older patients undergoing delayed surgery for intertrochanteric fractures.

Methods: This retrospective study included 456 patients who underwent delayed surgery for intertrochanteric fractures. According to the optimized blood management plan, the patients were divided into four groups: group A was the control group; group B received 1 g of tranexamic acid (TXA) intravenously at admission; group C underwent sequential TXA treatment after admission until 1 day before surgery (1 g/day); and group D received iron supplements (200 mg/day) in addition to the treatment administered to group C, with or without recombinant human erythropoietin (rHuEPO; 40,000 IU). The primary outcomes were preoperative hidden blood loss (HBL), preoperative allogeneic blood transfusion (ABT) rate, hemoglobin (Hb) change, and actual Hb drop.

Results: The Hb reduction, calculated HBL, and hospitalization duration in groups C and D were significantly lower than those in groups A and B. The preoperative ABT rates in groups C and D were significantly lower than those in groups A and B, with no significant difference between groups C and D.

Discussion: The results of this study suggest that iron supplementation (with or without rHuEPO) combined with the sequential IV TXA scheme did not show a better clinical effect than the sequential IV TXA scheme alone in the management of patients undergoing delayed surgery for intertrochanteric fractures. Therefore, further evaluation is needed before recommending iron supplements and rHuEPO in older patients.

Introduction

Owing to its large and rapidly aging population, China has the largest number of older people worldwide. Hip fractures have become a public health concern owing to their poor prognosis; thus, China currently faces great challenges from an increasing number of patients with hip fractures. 1,2 A systematic analysis reported 1-year mortality rates after hip fracture and intertrochanteric fracture (IF) in China of 13.96% and 17.47%, 3 respectively.

Perioperative anemia is an important factor in mortality after hip fracture surgery and is closely related to the fracture type, such as intracapsular or extracapsular. 4,5 Acute post-traumatic anemia and the need for transfusion in older patients with IFs are major concerns for orthopedic surgeons. Pre-existing anemia increases the severity and incidence of postoperative anemia and increases the need for blood transfusion. 6 Therefore, an ideal blood management program should include an attempt to minimize preoperative blood loss and to correct preoperative anemia. 7

Post-traumatic hyperfibrinolysis is one of the most important causes of preoperative blood loss, and the administration of antifibrinolytic drugs is an effective treatment to reduce hidden blood loss (HBL). 8 Previous studies have focused on tranexamic acid (TXA) to reduce intraoperative and postoperative hemorrhage in IFs; [9][10][11] however, studies on occult hemorrhage before the operation are limited. 12
Intravenous (IV) iron supplementation, with or without recombinant human erythropoietin (rHuEPO) therapy, has been proposed as an intervention to correct preoperative anemia. 13 However, the actual clinical effects remain controversial, 14,15 and strong evidence is lacking to support the preoperative clinical benefit of iron supplements and rHuEPO in older patients with IFs.

Over the past 8 years, we have been committed to perioperative blood management for older patients with hip fractures. Within this patient population, this study focuses on the management of patients undergoing delayed surgery for IFs. We have adopted a variety of intervention programs, including the administration of TXA, iron, and rHuEPO; however, the clinical effects of the different application programs have not been effectively summarized. Therefore, we summarized the original data to evaluate (1) whether the optimized blood management plan was effective and safe; (2) compared with an early IV single injection of TXA, whether sequential TXA effectively reduced post-traumatic blood loss and the allogeneic blood transfusion (ABT) rate; and (3) whether adding iron supplements, with or without rHuEPO, can further maintain hemoglobin (Hb) levels.

Materials and Methods

Patients

We conducted this large-scale retrospective self-controlled study at the Department of Orthopedics and Trauma, Hong Hui Hospital (a level-1 trauma center), among older patients diagnosed with an IF and with an operative delay of >72 h, between January 2013 and October 2020. All data in this study were derived from two randomized controlled studies conducted in our department, which have been registered in the Chinese Clinical Trial Registry (ChiCTR-TRC-1800017754 and ChiCTR-INR-16008134). This study was approved by the hospital ethics committee (No. 201606008) and was conducted in accordance with the principles of the Declaration of Helsinki. All participating patients provided written informed consent.

The inclusion criteria were: older patients with IF, age ≥65 years, injury time ≤12 h, and Hb level ≥110 g/L at post-trauma admission (PTA). The exclusion criteria were: (1) polytrauma; (2) open fractures or continuous bleeding from other parts of the body, such as splenic rupture or gastrointestinal bleeding; (3) recent or continuous thromboembolic events and long-term consumption of oral anticoagulants before injury; (4) known allergies to TXA, rHuEPO, or iron supplements; (5) coagulation dysfunction caused by disseminated intravascular coagulation or liver and kidney dysfunction; (6) severe brain, heart, liver, or kidney dysfunction or inability to tolerate surgery; (7) pathological fractures or tumors; (8) a waiting time from trauma to surgery of >5 days; (9) an American Society of Anesthesiologists (ASA) score of V; and (10) administration of any drugs, other than TXA, iron, and rHuEPO, that promote hemostasis or hematopoiesis.

Study Design and Blood Management

The patients were divided into four groups according to the actual optimized blood management plan that they received at post-trauma admission (Table 1). The details of each strategy are described below.

Single-dose IV TXA: 1 g of TXA administered intravenously at admission. Sequential IV TXA: sequential TXA treatment after admission until 1 day before surgery (1 g/day). Iron supplements, with or without rHuEPO: patients received additional IV iron supplements (200 mg/day; Hengsheng Pharmaceutical Co., Ltd., Nanjing, China), with or without rHuEPO (first dose: 40,000 IU; following doses: 10,000 IU; Shenyang 3SBio Inc., Shenyang, China).
The indications for discontinuation of iron supplementation and rHuEPO were an Hb level >130 g/L for male patients and an Hb level >120 g/L for female patients.

Preoperative Management and Discharge Criteria

All patients were managed according to the standardized IF pathway protocol after admission, including blood pressure control, blood glucose monitoring, and standardized fluid and blood transfusion procedures. 36 The only difference was in the three main preoperative blood optimization management programs described above. In addition, following the recommendations of the China Orthopedic Major Surgery Venous Thromboembolism Prevention Guidelines, all patients received anticoagulant prophylaxis (enoxaparin, 20 mg subcutaneously once daily) and intermittent compression boots to prevent lower extremity venous thrombosis during hospitalization. If a positive result was observed on lower extremity venous ultrasound, the enoxaparin regimen was changed to twice daily. According to the agreement signed upon admission, patients underwent bedside venous ultrasound examination of both lower extremities every day during hospitalization and were examined at the outpatient clinic 14 and 30 days after discharge.

Patients were discharged according to the following criteria: a surgical incision without bleeding, an Hb level of >100 g/L, an albumin level of >30 g/L, and stable vital signs.

Data Collection

The hospital records contained data on age, sex, body mass index, preoperative blood volume, injury side, time from injury to operation, duration of operation, ASA classification, and AO fracture classification (A1/A2/A3). We noted the Hb and hematocrit (Hct) levels at PTA, on day 2 of PTA, preoperatively, and on postoperative day (POD) 1.

Outcome Measurements

Primary Outcomes

The primary outcomes of the present study were preoperative HBL, preoperative ABT rate, Hb change, and actual Hb drop. HBL was calculated using the Gross formula, as follows: 37

HBL = PBV × (Hct1 − Hct2) + Hb_trans

where Hct1 is the initial Hct level upon admission and Hct2 is the lowest Hct level detected before surgery. The patient blood volume (PBV) was calculated using Nadler's equation, as follows: 38

PBV (L) = K1 × height (m)³ + K2 × weight (kg) + K3

where, for male patients, K1 = 0.3669, K2 = 0.03219, and K3 = 0.6041; for female patients, K1 = 0.3561, K2 = 0.03308, and K3 = 0.1833.
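The two formulas above combine directly. A minimal sketch follows; the patient inputs are hypothetical, and the transfusion term (Hb_trans) is treated here as a transfused volume in mL, which is an assumption about the study's bookkeeping.

```python
# Sketch of the hidden-blood-loss (HBL) calculation: Nadler's equation for
# patient blood volume (PBV) plugged into the Gross formula. Example inputs
# are hypothetical, not patient data.

def nadler_pbv(height_m: float, weight_kg: float, male: bool) -> float:
    """Patient blood volume in liters (Nadler's equation)."""
    if male:
        k1, k2, k3 = 0.3669, 0.03219, 0.6041
    else:
        k1, k2, k3 = 0.3561, 0.03308, 0.1833
    return k1 * height_m ** 3 + k2 * weight_kg + k3

def gross_hbl_ml(pbv_l: float, hct_admission: float, hct_lowest: float,
                 transfused_ml: float = 0.0) -> float:
    """Hidden blood loss in mL: PBV x (Hct1 - Hct2) plus the transfusion term."""
    return pbv_l * 1000.0 * (hct_admission - hct_lowest) + transfused_ml

pbv = nadler_pbv(height_m=1.65, weight_kg=60.0, male=False)  # hypothetical
print(f"PBV ~ {pbv:.2f} L")
print(f"HBL ~ {gross_hbl_ml(pbv, 0.38, 0.30):.0f} mL")  # Hct 38% -> 30%
```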
The ABT indications, formulated according to the guidelines of the Chinese Ministry of Health, were: (1) an Hb level <70 g/L; or (2) an Hb level between 70 and 100 g/L in patients with symptoms of dizziness, tachycardia, asthma, or fatigue. Because most older people are in a frail state before injury, our department and the anesthesiology department jointly developed a new blood transfusion strategy in which patients with Hb levels <90 g/L received ABT preoperatively. 11

Secondary Outcomes

The secondary outcomes included intraoperative and POD 1 blood transfusion rates and hospitalization duration. Complications, such as intramuscular venous thrombosis, deep venous thrombosis (DVT), and symptomatic pulmonary embolism (PE), were recorded preoperatively. In addition, the incidences of 30-day mortality and 30-day all-cause readmission were recorded through a review of the department's existing medical records. Clinically suspected PE was diagnosed based on clinical symptoms and findings from enhanced chest computed tomography scans.

Statistical Analyses

Statistical analyses were performed using IBM SPSS Statistics for Windows, version 22.0, and GraphPad Prism, version 8.0. One-way analysis of variance (ANOVA) and Tukey's post-hoc tests were performed to analyze parametric data, while Kruskal-Wallis H and Mann-Whitney U-tests were applied to nonparametric data. Chi-square or Fisher's exact tests were used for the analysis of qualitative variables. Statistical significance was set at P <0.05.
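As a hedged illustration of the parametric pipeline just described (one-way ANOVA followed by Tukey's post-hoc comparisons across the four groups), the SciPy sketch below uses simulated Hb-drop samples; the group sizes are inferred from the percentages reported later, but the values themselves are simulated, not the study's raw data.

```python
# Simulated illustration of the between-group comparison: one-way ANOVA
# plus Tukey's HSD across groups A-D. Data are synthetic, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated preoperative Hb drops (g/L), means roughly matching the report.
groups = {
    "A": rng.normal(21.5, 5, 126),
    "B": rng.normal(21.0, 5, 96),
    "C": rng.normal(17.4, 5, 116),
    "D": rng.normal(16.6, 5, 118),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_val:.4g}")

# Tukey's HSD pairwise comparisons (available in recent SciPy releases).
print(stats.tukey_hsd(*groups.values()))
```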
Patient Demographics

A total of 833 consecutive patients with IFs were screened between January 2013 and October 2020, and their eligibility for participation in this study was assessed. Of these, 377 patients were excluded according to the exclusion criteria: 38 aged <65 years, 124 with an injury time of >12 h, 73 with an Hb level of <110 g/L upon admission, 11 with polytrauma, 27 with long-term oral anticoagulant use before injury, 4 with pathological fractures, 37 with a waiting time from trauma to surgery of >5 days, 26 who received other hemostatic or hematopoietic medications after trauma, 18 lost to follow-up, and 19 who received fixation other than proximal femoral nail anti-rotation. Finally, this study enrolled a total of 456 patients with IFs (Figure 1). No significant differences in demographic data upon admission were found between the groups, and the baseline characteristics of the four groups were comparable (Table 2).

Figure 2A shows the mean Hb concentrations of the four treatment groups from post-trauma admission (PTA) to postoperative day (POD) 1, including the decreasing trends in each group. The comparison between groups showed a significant difference in the mean Hb values preoperatively and on POD 1 (P<0.001).

Primary Outcomes

The preoperative mean Hb values were 105.06±8.52 g/L, 106.59±9.04 g/L, 111.47±10.01 g/L, and 112.86±8.79 g/L in groups A, B, C, and D, respectively. The Hb values in groups A and B differed significantly from those in groups C and D, but did not differ significantly for group A vs group B or group C vs group D (Figure 2B). The mean Hb values on POD 1 showed a similar pattern (Figure 2C). The average preoperative drops in Hb values in the four groups (21.48 g/L, 21.02 g/L, 17.36 g/L, and 16.55 g/L, respectively) differed significantly (P<0.001). Compared with groups A and B, groups C and D showed significant inter-group differences (all P<0.001); however, pairwise comparisons showed that the differences between groups D and C and between groups B and A were not statistically significant (Figure 2D).

Figure 3 shows that the average preoperative HBL was 526±159 (range: 292-1063) mL in group A, 531±154 (range: 221-955) mL in group B, 466±115 (range: 276-997) mL in group C, and 449±117 (range: 242-1168) mL in group D, with significant differences between the groups (P<0.001). Pairwise comparisons showed significant differences for groups C and D compared with groups A and B (P<0.001), but no significant differences between groups A and B or between groups C and D (P>0.05). The preoperative transfusion rates in groups B, C, and D were lower than that in group A, although the difference between the groups was not significant (P>0.05).

Secondary Outcomes

Compared with groups A (n=21) and B (n=17), groups C (n=12) and D (n=10) had fewer patients who required intraoperative ABT (P=0.104). The intraoperative blood loss and operation duration of the four groups were similar, with no significant differences between the groups (all P>0.05). Similarly, fewer patients in groups C and D required ABT on POD 1 compared with groups A and B; however, the difference was not significant (P=0.353). Intermuscular vein thrombosis was the most common preoperative complication, with 22 cases (17.40%) in group A, 15 (15.63%) in group B, 19 (16.37%) in group C, and 20 (16.95%) in group D; the incidence rates were similar among the groups (P=0.986). No patient in any group experienced symptomatic PE. Compared with patients in groups A and B, those in groups C and D had significantly shorter hospitalization durations (P<0.001), with no difference between groups C and D (P>0.05). According to the existing medical records of our department, 16 patients were re-admitted within 30 days: four due to poor blood glucose control, four with hypoproteinemia, two with incisional redness and swelling, four with DVT, and two with stroke (Table 3).

Discussion

The most important findings of the current study are as follows. First, post-traumatic administration of sequential IV TXA reduced HBL during the preoperative waiting period and maintained Hb levels without increasing the incidence of preoperative DVT; it also showed some benefit in reducing transfusion rates. Second, patients did not receive additional benefits from the combined supplementary treatment plan (iron supplementation with or without rHuEPO). Finally, early blood management intervention should follow the conclusions of the CRASH-2 trial, in which the first IV single dose of TXA is administered within 8 h post-trauma.

IFs cause substantial blood loss in older and frail patients, exposing them to preoperative anemia, which negatively impacts clinical outcomes and mortality. 16,17 Wu et al 18 reported that blood loss occurring between the time of fracture and the operation is the main reason for decreased Hb levels, with significantly greater Hb decreases before the operation than after it. In a prospective analysis of perioperative hidden blood loss in 123 older patients with femoral IFs, Li et al 19 observed an HBL during the preoperative waiting period of approximately 375.6 mL, which was 62.4% of the total HBL on POD 1 (602 mL) and 48.6% of the total HBL on POD 3 (772 mL). Therefore, blood management during the preoperative waiting period is particularly important for older patients undergoing delayed surgery.

Two consecutive global multicenter studies showed that early application of TXA effectively reduces mortality due to traumatic bleeding. 8,20 A prospective randomized controlled study by Ma et al 12 confirmed that a single-dose IV TXA (1 g, 200 mL) intervention in the early post-traumatic period effectively reduced HBL in older patients with IF. A pharmacokinetic study of TXA showed that its half-life is approximately 3 h; however, the trauma-induced hyperfibrinolytic state is persistent, and patients with severe trauma more often require transfusion support with red blood cells and plasma within 6 h. 21 In addition, the use of anticoagulants, such as enoxaparin or rivaroxaban, after trauma may cause new bleeding. Thus, a single dose of TXA in the early post-traumatic period cannot sufficiently inhibit hyperfibrinolysis; this finding provides the theoretical basis for multiple doses and sequential administration of IV TXA to inhibit fibrinolysis and reduce HBL after trauma.
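The pharmacokinetic point above is simple first-order decay arithmetic: with a roughly 3 h half-life, the fraction of a single dose remaining is 0.5 raised to (elapsed hours / 3). The short worked example below is illustrative only and ignores dosing route, distribution, and renal clearance differences.

```python
# Worked example: residual fraction of a single TXA dose with a ~3 h
# half-life, the stated rationale for sequential (1 g/day) dosing.

HALF_LIFE_H = 3.0

def fraction_remaining(hours: float) -> float:
    return 0.5 ** (hours / HALF_LIFE_H)

for t in (3, 6, 12, 24):
    print(f"after {t:>2} h: {fraction_remaining(t):.3f} of the dose remains")
# After 24 h only ~0.4% remains, while post-traumatic hyperfibrinolysis
# persists, so a once-daily sequential regimen keeps renewing coverage.
```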
IV iron supplementation, with or without rHuEPO, before surgery is considered a compelling potential intervention to increase Hb levels and reduce perioperative transfusion exposure. Yoon et al 22 applied a restrictive transfusion strategy combined with IV iron supplementation in 859 patients with hip fracture before surgery, reporting less total blood loss, lower blood transfusion rates, and higher Hb levels at 6 weeks after surgery. Muñoz et al 23 conducted a pooled observational analysis of 2547 patients undergoing major orthopedic surgery (lower-limb arthroplasty and hip fracture repair), confirming that very short-term perioperative IV iron supplementation, with or without rHuEPO, was closely associated with reduced transfusion rates and shorter hospital stays. García-Erce et al 24 reported that preoperative administration of rHuEPO was associated with reduced transfusion requirements in patients with anemia due to hip fracture managed with perioperative IV iron and a restrictive transfusion protocol. However, some scholars remain skeptical about the clinical effects of IV iron supplementation. Jeong et al 25 found that, although the serum iron profile of patients receiving IV iron supplementation improved, supplementation did not increase Hb levels or reduce transfusion rates among patients who underwent primary staged bilateral total knee arthroplasty during a single hospitalization. All of the abovementioned studies were observational. To date, there is no broad consensus or established clinical practice guideline supporting the routine use of preventive IV iron supplementation, with or without rHuEPO, for the treatment of anemia before major elective surgery; furthermore, the optimal timing, specific doses, and whether patients with anemia require additional nutritional supplements remain unclear.

This study is the first to investigate the effectiveness and safety of different preoperative blood optimization management programs for older patients with delayed IF surgery. The results showed lower HBL volumes for sequential IV TXA (group C) and iron supplementation treatment (with or without rHuEPO, group D) during the preoperative waiting period compared with single-dose TXA (group B) and the control group (group A); however, the HBL volume did not differ significantly between groups C and D (466.25±114.82 mL vs 448.66±116.83 mL, P=0.247). In the last laboratory results before surgery, the Hb levels of groups C and D were higher than those of groups A and B, with no significant difference between groups C and D; the decreases in Hb levels showed the same trends. These findings indicate that extended use of TXA, with or without iron and rHuEPO, reduced the HBL and the actual Hb loss after admission. The findings on the preoperative transfusion rate also confirmed the advantages of longer-term use of TXA, with rates of 22.22%, 19.79%, 14.66%, and 14.41% for groups A, B, C, and D, respectively.

Another important finding was that, although TXA intervention in the early post-traumatic period can theoretically reduce HBL, in this study the HBL and Hb levels did not differ significantly between groups A and B. Our results showed a mean time from trauma to admission in the single-dose group of 8.77 h, which was longer than those reported in the CRASH-2 trial (8 h) 8 and by Ma et al (2 h). 12 Therefore, early single-dose intervention should be administered within 8 h after trauma.
Preoperative sequential IV TXA (12 patients, 10.34%) and supplementation treatment (10 patients, 8.47%) also showed benefits in reducing the requirement for intraoperative transfusion compared with the single-dose TXA (17 patients, 17.71%) and control programs (21 patients, 16.67%) (P=0.104). In addition, on POD 1, transfusions were required by 11 patients (8.73%) in the control group, 8 (8.33%) in the single-dose group, 8 (6.90%) in the sequential IV TXA group, and 4 (3.39%) in the supplementation treatment group. Groups C and D showed reduced blood transfusion rates during surgery and on POD 1, presumably because good blood preservation before surgery increases the tolerance of older patients to surgical trauma. A reduction in the blood transfusion rate not only reduces the risk of transfusion-transmitted infection but also contributes to a substantial reduction in healthcare costs and resource utilization for these patients. 26 In addition, the length of stay differed significantly between the four groups in this study (P<0.001), being significantly shorter in groups C and D than in groups A and B. Therefore, good preoperative blood management can reduce post-traumatic HBL and transfusion requirements due to IF in older patients, reduce the incidence of anemia, maintain higher Hb levels during hospitalization, and shorten hospitalization duration, which is closely related to postoperative physical function recovery and the concept of enhanced recovery after surgery. 27

While some studies have reported that delaying the operation increases the probability of postoperative complications and mortality in older patients with hip fracture, 28 other studies have shown no significant difference in mortality when surgery is delayed by up to 3 days. 29 In this study, no mortality or serious adverse events occurred within 30 days after the operation in any of the four groups.

Surgeons have focused on the safety of IV TXA. Although many studies have reported that routine doses can be administered without increasing the risk of venous thromboembolism, there remains no consensus regarding the safety of higher doses or prolonged use. 30,31 Conservation of blood products, reduced laboratory costs, and shorter hospital stays are likely the major factors driving the cost savings associated with TXA use. 32 Similarly, the use of erythropoiesis-stimulating agents increases the risk of thrombotic events. 33 In this study, the incidence of venous thrombosis of the lower extremities did not differ significantly between the three intervention groups and was comparable to that of the control group. IV iron supplementation can cause life-threatening hypersensitivity reactions, cardiovascular events, and infections. 34 We did not encounter such adverse events in this study. These results are consistent with those of a previous systematic review that included 10,390 patients participating in 103 trials, which concluded that IV iron supplementation was not associated with adverse drug reactions or an increased risk of infection. 35

Although this was not a prospective randomized controlled study, it still has several strengths. Our study employed strict inclusion criteria; in clinical practice, preoperative anemia and transfusion requirements are easily overlooked in older patients with an Hb level >110 g/L.
However, the results of this study confirmed a declining trend in the Hb levels of patients with delayed surgery. Furthermore, the time from injury to admission was strictly controlled to reduce potential interference. This study also has some limitations. First, it covered only hospitalization and a short follow-up period, which may not be sufficient to evaluate the clinical efficacy and safety of the treatment. Second, as the coagulation index was easily missed by doctors during the post-trauma admission examination, it was excluded from this study. Third, when applying the Gross formula, to account for the impact of dehydration on the Hb level upon admission, the admission Hb level was corrected by a factor of 0.9 to simulate 10% dehydration in all patients; however, rehydration after admission might have interfered with the accuracy of the Hb measurements. Finally, a prospective study might better verify the effectiveness and safety of an optimized blood management program using TXA, iron supplements, and rHuEPO in the older population with hip fractures.

Conclusion

Older patients with delayed IF surgery who received sequential IV TXA combined with iron supplementation, with or without rHuEPO, during the preoperative waiting period did not show better outcomes than those who received sequential IV TXA alone. Therefore, further evaluation is required before recommending iron supplementation in these patients. In addition, if a patient undergoes early TXA intervention, the first IV dose should be given within 8 h post-trauma.

Data Sharing Statement

Correspondence and requests for materials should be addressed to X.H.Z and Z.K.