Accurate kinetic energy evaluation in electronic structure calculations with localised functions on real space grids
We present a method for calculating the kinetic energy of localised functions represented on a regular real space grid. This method uses fast Fourier transforms applied to restricted regions commensurate with the simulation cell and is applicable to grids of any symmetry. In the limit of large systems it scales linearly with system size. Comparison with the finite difference approach shows that our method offers significant improvements in accuracy without loss of efficiency.
Introduction
Density functional theory (DFT) combined with the pseudopotential method has been established as an important theoretical tool for studying a wide range of problems in condensed matter physics [1]. However, the computational cost of performing a total-energy calculation on a system scales asymptotically as the cube of the system size. Consequently, plane-wave pseudopotential DFT can only be used to study systems of up to about one hundred atoms on a single workstation and up to a few hundred atoms on parallel supercomputers. As a result there has been considerable recent effort in the development of methods whose computational cost scales linearly with system size [2].
A common feature of many of the linear-scaling strategies is the expansion of the single-particle density matrix in terms of a set of localised functions. We refer to these functions as 'support functions' [3]. A support function is required to be non-zero only within a spherical region, which we refer to as a 'support region', centred on an atomic position. Here we consider a representation of the support functions in terms of a regular real space grid, which constitutes our basis set. If the set of support functions is {φ_α}, the single-particle density matrix is expressed as

ρ(r, r') = Σ_αβ φ_α(r) K^αβ φ_β(r'),   (1)

where K^αβ is the matrix representation of the density matrix in terms of the duals of the support functions. In general the support function set is not orthonormal. Real space methods have the advantage that they provide a clear spatial partitioning of all quantities encountered in a density functional calculation, a property that is ideal for code parallelisation. As a result, this approach has gained popularity in recent years and a number of such density functional calculations have been reported by different authors [4,5,6]. These approaches use finite difference (FD) methods [7] for the calculation of the kinetic energy. In terms of the support functions the kinetic energy is

E_kin = 2 Σ_αβ K^αβ T_βα,   (2)

where T_αβ denotes kinetic energy matrix elements between support functions, given in Hartree atomic units by

T_αβ = -(1/2) ∫ φ_α(r) ∇²φ_β(r) dr.   (3)

The evaluation of the kinetic energy matrix elements requires the action of the Laplacian operator on the support functions. Here we will show that in the case of localised support functions, fast Fourier transform (FFT) methods can be adapted for the application of the Laplacian, providing an algorithm with essentially the same computational cost as FD but with higher accuracy and also ready applicability to any grid symmetry. In the following two sections we present the FD method and our new FFT-based method and compare them both in theory and in practice.
Theory
For functions represented as values on a regular grid, integrals like the one of equation (3) can be calculated, or rather approximated to increasing accuracy, by a sum over grid points, as long as the value of the integrand is known at every grid point:

T_αβ ≈ -(1/2) w Σ_i φ_α(r_i) (T̂φ_β)(r_i),   (4)

where T̂ is the Laplacian operator in the discrete representation, w is the volume per grid point, and the sum formally goes over all the grid points in the simulation cell.
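As a concrete illustration of this grid-sum quadrature (our own one-dimensional sketch, not code from the paper), the following approximates the integral of a localised function by a sum of its values at uniformly spaced grid points times the volume per point, in the spirit of equation (4):

```python
import math

def grid_integral(f, a, b, n):
    # Approximate the integral of f over [a, b] by a sum over n
    # uniformly spaced grid points times the "volume" (here: length)
    # per grid point.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# A localised function (normalised Gaussian): it is effectively zero
# at the box edges, so the grid sum converges very rapidly with n.
phi = lambda x: math.exp(-x * x) / math.sqrt(math.pi)
approx = grid_integral(phi, -8.0, 8.0, 200)  # exact value is 1
```

The rapid convergence for functions that vanish at the region boundary is the same property that makes the grid sum of equation (4) accurate for strictly localised support functions.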
Finite Differences
The most straightforward approach to the evaluation of the Laplacian operator applied to a function at every grid point is to approximate the second derivative by finite differences of increasing order of accuracy [7]. For example, the ∂²φ/∂x² part of the Laplacian on a grid of orthorhombic symmetry is

∂²φ/∂x² ≈ (1/h_x²) Σ_{n=-A/2}^{A/2} C_n φ(x + n h_x, y, z),   (5)

where h_x is the grid spacing in the x-direction, A is the order of accuracy and is an even integer, and the weights satisfy C_-n = C_n. This equation is exact when φ is a polynomial of degree less than or equal to A. The leading contribution to the error is of order h_x^A. The full Laplacian operator for a single grid point in three dimensions consists of a sum of (3A + 1) terms.
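The weights C_n for any even order A are fixed by the Taylor-expansion moment conditions Σ_n C_n n^k = 2δ_{k,2} for k = 0, ..., A. The sketch below (our own illustration; `fd_weights` is a hypothetical helper, not from the paper) solves these conditions in exact rational arithmetic:

```python
from fractions import Fraction

def fd_weights(A):
    """Centred weights C_n, n = -A/2..A/2, such that
    sum_n C_n * phi(x + n*h) / h**2 approximates phi''(x) with
    leading error of order h**A.  Taylor expansion gives the moment
    conditions sum_n C_n * n**k = (2 if k == 2 else 0) for
    k = 0..A, solved here exactly with Fractions."""
    m = A // 2
    nodes = list(range(-m, m + 1))
    size = A + 1
    rows = [[Fraction(n) ** k for n in nodes] for k in range(size)]
    rhs = [Fraction(2) if k == 2 else Fraction(0) for k in range(size)]
    # Gauss-Jordan elimination in exact rational arithmetic
    for i in range(size):
        p = next(r for r in range(i, size) if rows[r][i] != 0)
        rows[i], rows[p] = rows[p], rows[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        piv = rows[i][i]
        rows[i] = [x / piv for x in rows[i]]
        rhs[i] /= piv
        for r in range(size):
            if r != i and rows[r][i] != 0:
                f = rows[r][i]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[i])]
                rhs[r] -= f * rhs[i]
    return dict(zip(nodes, rhs))
```

For A = 2 this recovers the familiar (1, -2, 1) stencil, and for A = 4 the weights (-1/12, 4/3, -5/2, 4/3, -1/12); note that C_-n = C_n, as stated above.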
In principle, for well behaved functions, the second order form of equation (5) should converge to the exact Laplacian as h → 0. Therefore to increase the accuracy of a calculation one would need to proceed to smaller grid spacings. However, in most cases of interest, this is computationally undesirable and instead, formulae of increasing order are used to improve the accuracy at an affordable cost [8]. Chelikowsky et al. [9], in their finite difference pseudopotential method, have tested the finite difference expression for up to A = 18 on calculations of a variety of diatomic molecules and have suggested A = 12 as the most appropriate for their purpose, as the higher orders did not provide any significant improvement.
Alternative discretisations of the Laplacian operator are possible, such as the Mehrstellen discretisation of Briggs et al. [5]. This is a fourth order discretisation that includes off-diagonal terms, but only nearest neighbours to the point of interest. It is more costly to compute than the standard fourth order formula of equation (5) and it is still not clear whether its fourth order is sufficient. One may also use FD methods on a grid with variable spatial resolution, such as that of Modine et al. [10] which is denser near the ionic positions. Such a scheme, however, has the added overhead of a transformation of the Laplacian from Cartesian to curvilinear coordinates. In this paper we use only the FD scheme of equation (5).
The FD approach has desirable properties, both in terms of computational scaling and parallelisation. The Laplacian in the FD representation is a near-local operator, becoming more delocalised with increasing order. Therefore, the cost of applying it to N grid points is strictly linear (compared to N log N for Fourier transform methods). Also, as a result of its near-locality, ideal load balancing can be achieved in parallel implementations by partitioning the real space grid into subregions of equal size and distributing them amongst processing elements (PEs) while requiring little communication for applying the Laplacian at the bordering points of the subregions.
If N_s represents the size of the system, then the number of support functions will be proportional to N_s and so will the total number of grid points in the simulation cell, resulting in a total computational cost proportional to N_s² for the application of the Laplacian on all support functions. More favourable scaling can be achieved by predicting the region in space within which the values of a particular function will be of significant magnitude and operating only on this region [4,11]. Linear scaling can be achieved by strictly restricting from the outset the support functions to spherical regions centred on atoms [12]. In this case, the cost is qN_s, with q being the cost of applying the Laplacian on the points of a spherical region, which is constant with system size.
FD methods nevertheless have disadvantages that do not appear in the plane-wave formalism. Firstly, there is no a priori way of knowing whether a particular order of FD approximation will be sufficient to represent a particular support function accurately. In addition, while plane-wave methods can handle different symmetry groups trivially through the reciprocal lattice vectors of the simulation cell, real space implementations need to consider every symmetry separately and require considerable modifications to the code and higher computational cost. Briggs et al. [5] have demonstrated this difficulty by performing calculations with hexagonal grids while most common applications of real space methods in the literature are limited to grids of cubic or orthorhombic symmetry [4,6,9,12].
The computational cost for the calculation of the Laplacian of a single support function with the FD method scales as (3A + 1)(1 + A/D)³ N_reg, where N_reg is the number of grid points within the support region, and D is the number of grid points along the support region diameter, which is proportional to N_reg^(1/3). This estimate of cost includes all the nonzero values of the Laplacian, which in general occur not only at the grid points inside the support region but also at points outside, up to a distance of A/2 points from the region's boundary. It is important to include the contribution to the Laplacian from outside the support region in the sum of equation (4) in order to obtain the best possible accuracy for a given order A and also to ensure the Hermiticity of the discretised representation of the Laplacian, T̂, and hence of the kinetic energy matrix elements T_αβ.
Localised discrete Fourier transform
We now present an accurate, linear-scaling method for calculating the kinetic energy matrix elements T_αβ of equation (3). We use a mixed space Fourier transform approach that is applicable to any Bravais lattice symmetry. Fourier transformation is a natural method to adopt for this task since in a total-energy calculation one computes other terms, such as the electron density and the Hartree energy, using reciprocal space techniques. This implicitly defines the basis set that we use to be plane-waves, and for consistency we should calculate the kinetic energy using the same basis set, i.e. using Fourier transform methods. Thus we calculate the ∇²φ term in reciprocal space, where the Laplacian operator is easy to apply, then transform the result back to real space and obtain the matrix elements T_αβ by summation over grid points (4). One way to achieve this would be to perform a discrete FFT on each support function φ, using the periodicity of the entire simulation cell. However, unlike the FD algorithm, the FFT is not a local operation and the cost of applying the Laplacian to all the support functions in this way would be proportional to N_s² log N_s, which clearly does not scale linearly with system size.
It is possible to overcome this undesirable scaling without compromising accuracy by performing the FFT over a restricted region of the simulation cell, which we call the 'FFT box' (figure 1). Before defining the FFT box, there are two points that should be noted. Firstly, the operator T̂ must be Hermitian. This will ensure that the kinetic energy matrix elements T_αβ are Hermitian, and hence the eigenvalues real. Secondly, when calculating two matrix elements such as T_αβ and T_γβ, we require the quantity T̂φ_β in both cases. To be consistent, our method for calculating the matrix elements must be such that T̂φ_β is the same in both cases, i.e. we require T̂φ_β to have a unique and consistent representation throughout the calculation. It is important that both these conditions are satisfied when it comes to optimisation of the support functions during a total-energy calculation, and we shall return to this point later. In order to fulfil the above requirements, it can be seen that for a given calculation the FFT box must be universal in shape and dimensions. As a result, it must be large enough to enclose any pair of overlapping support functions within the simulation cell. To define a suitable FFT box, we first consider a box with the same unit lattice vectors as the simulation cell, but of dimensions such that it exactly circumscribes the largest support region present in the simulation cell. We then define a box that is commensurate with this, but with sides that are twice as long (and hence a volume eight times as large). This we define to be the FFT box. It is clear that this FFT box is large enough to enclose any pair of support functions exhibiting any degree of overlap.
Figure 1. The FFT box and its zero-padding region.
To calculate a particular matrix element T_αβ for two overlapping support functions φ_α and φ_β, we imagine them as being enclosed within the FFT box defined above and we treat this region of real space as a miniature simulation cell. We Fourier transform φ_β using the periodicity of the FFT box and apply the Laplacian at each reciprocal lattice point using standard plane-wave techniques [1]. It is then a simple matter of using one more FFT to back-transform ∇²φ_β to real space and subsequently calculate T_αβ by summation over the grid points of the FFT box, according to equation (4).
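The reciprocal-space application of the Laplacian can be sketched in one dimension as follows (an illustrative toy of ours, not the paper's implementation; a naive O(N²) DFT stands in for the FFT, which would give identical results faster):

```python
import cmath, math

def dft(f):
    # Naive discrete Fourier transform; an FFT computes the same
    # coefficients in O(N log N).
    N = len(f)
    return [sum(f[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(F):
    N = len(F)
    return [sum(F[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def laplacian(f, L):
    # Apply d^2/dx^2 using the periodicity of a box of length L:
    # transform, multiply each component by -k^2, transform back.
    N = len(f)
    F = dft(f)
    for k in range(N):
        m = k if k <= N // 2 else k - N   # signed frequency index
        kx = 2.0 * math.pi * m / L
        F[k] *= -kx * kx
    return [v.real for v in idft(F)]

# A single plane-wave mode of the box is treated exactly:
N, L = 16, 1.0
f = [math.sin(2 * math.pi * n / N) for n in range(N)]
lap = laplacian(f, L)  # equals -(2*pi/L)**2 * f to machine precision
```

Because each plane-wave of the box is an exact eigenfunction of the Laplacian in this representation, the accuracy is limited only by the plane-wave cut-off implied by the grid spacing, not by a finite stencil order.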
The result obtained by this process is equivalent to performing a Fourier transform of φ_β over the whole simulation cell, applying the Laplacian and then interpolating to a coarse, but still regular, reciprocal space grid with only N_box points, N_box being the number of grid points in the FFT box, before back-transforming to real space. This coarse sampling in reciprocal space has a negligible influence on the result because each support function is strictly localised in real space and therefore smooth in reciprocal space.
It is worth noting the implicit approximation that we make in calculating the kinetic energy in the way prescribed above. In general, ∇²φ_β is nonzero outside the support region of φ_β itself, and it is essential to take this into account in the calculations. By construction, we neglect contributions to the kinetic energy from support functions whose support regions do not overlap, as we expect them to be negligibly small. This approximation may be controlled via a single parameter, the FFT box size, with respect to which the calculation may be converged if necessary. The same approximation is of course present in the FD method as well.
We expect certain advantages to the FFT box algorithm over FD based methods. Firstly, the FFT box method should be more accurate than any FD scheme since it takes into account information from every single point of the support function and not only locally. However, it is still perfectly local as far as parallelisation is concerned since we only deal with the points within a single FFT box each time, and this constitutes a very small region of the simulation cell. The parallelisation strategy in this case would still consist of partitioning the real space grid of the simulation cell into subregions of equal size and distributing them amongst PEs. Then, FFTs local to each PE are performed on FFT boxes enclosing pairs of overlapping support regions belonging completely to the simulation cell subregion of the given PE. For pairs of overlapping support regions containing grid points common to the subregion of more than one PE, the pair would have to be attributed to one PE and copied as a whole to it for the local FFT to proceed. This would involve some communication overhead, as in the FD case for pairs of overlapping support functions with points in more than one subregion. Another important advantage of the FFT box method is that it is applicable, without any modification, to regular grids of any Bravais lattice symmetry. This is not true of FD methods.
The number of grid points in a cubic FFT box is N_box (which is related to N_reg by N_box = 8 × 6N_reg/π ≃ 15.3 N_reg). Therefore the computational cost of applying the FFT method to a single support function in such an FFT box is 2N_box log N_box, and thus for all support functions the cost is proportional to 2N_s N_box log N_box, where N_box is independent of N_s. In other words the cost scales linearly with the number of atoms in the system.
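The geometric prefactor quoted above follows from comparing the volume of the cube circumscribing the support sphere (D³) with the sphere itself (πD³/6), and then doubling the box along each direction (a factor of 8 in point count). A one-line check of the arithmetic (ours, purely illustrative):

```python
import math

# cube circumscribing a sphere of diameter D holds D**3 / (pi*D**3/6)
# times as many grid points as the sphere; doubling each side of the
# box multiplies the count by 8, so N_box / N_reg = 8 * 6 / pi
ratio = 8.0 * 6.0 / math.pi  # approximately 15.3
```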
Tests and discussion
We have performed tests of the FD and FFT box methods for calculating the kinetic energy of localised functions. Choosing a particular type of support function φ with spherical symmetry, placing one at R_α and another at R_β, we rewrite the integral of equation (3) as

T(d) = -(1/2) ∫ φ(|r - R_α|) ∇²φ(|r - R_β|) dr,   (6)

where d = |R_α - R_β|. For our first test we calculate the following quantity as a function of the distance d between the centres R_α and R_β:

η_1(d) = T_ap(d) - T_ex(d),   (7)

where T_ex(d) is the exact value of the integral in the continuous representation of the support functions and T_ap(d) is its approximation on the real space grid, either by FD or the FFT box method. We chose φ(r) to be a 2s valence pseudo-orbital for a carbon atom, generated using an atomic norm-conserving carbon pseudopotential [13] within the local density approximation. The pseudo-orbital is confined in a spherical region of radius 6.0 a_0, and vanishes exactly at the region boundary [14]. It is initially generated as a linear combination of spherical Bessel functions, which are the energy eigenfunctions of a free electron inside a spherical box. Our functions are limited up to an energy of 800 eV, resulting in a combination of fourteen Bessel functions. The formula for calculating kinetic energy integrals between Bessel functions is known [15] and we used it to obtain T_ex(d) for our valence pseudo-orbital. We then calculated η_1(d) with a grid spacing of 0.4 a_0 (corresponding to a plane-wave cut-off of 839 eV) in an orthorhombic simulation cell, as we are restricted to do so by the FD method. With these parameters N_box is 60³, and hence it is trivial to perform the FFT of one support function on a single node. η_1(d) is plotted for the FFT box method and for various orders of the FD method in the top graph of figure 2.
It can be seen that low order FD methods are inaccurate as compared to the FFT box method, and only when order 28 FD is used does the accuracy approach that of the FFT box method. The A = 12 FD scheme, the highest order that has been used in practice for calculations [9], gives an error of -3.97 × 10^-5 Hartree at d = 0, as compared to 1.027 × 10^-5 Hartree for the FFT box method. The feature that occurs in the top graph of figure 2, between d = 5 a_0 and d = 7 a_0, is an artefact of the behaviour of our pseudo-orbitals at the support region boundaries, where they vanish exactly but with a finite first derivative. This causes an enhanced error in all the methods when the edge of one support function falls on the centre of another.
The error in the FFT box method is small, yet non-zero, and we attribute this to the inherent discretisation error associated with representing functions that are not bandwidth limited on a discrete real space grid. Convergence to the exact result is observed as the grid spacing is reduced, as expected.
As our next comparison of the FFT box and FD methods we used the same pseudo-orbitals as before, but considered the quantity

η_2(d) = T_ap(d) - T_PW(d)   (8)

as the measure of the error, where T_PW is the result obtained by Fourier transforming the support functions using the periodicity of the entire simulation cell. One may think of T_PW as being the result that would be obtained from a plane-wave code: the support functions may be considered as generalised Wannier functions. Calculating the kinetic energy integrals by performing a discrete Fourier transform on the support functions over the entire simulation cell (an O(N_s² log N_s) process for all support functions) is equivalent to summing the contributions to the kinetic energy from all of the plane-waves up to the cut-off energy determined by the grid spacing. Thus our FFT box method can be viewed as equivalent to a plane-wave method that uses a contracted basis set (i.e. a coarse sampling in reciprocal space). In some ways η_2(d) is a better measure of the relative accuracy of the FD and FFT box methods as our goal is to converge to the 'exact' result as would be obtained using a plane-wave basis set over the entire simulation cell. η_2(d) is plotted in the bottom graph of figure 2.
T_PW was computed using a cell that contained 256 grid points in each dimension. Increasing the cell size further had no effect on T_PW up to the eleventh decimal place (10^-11 Hartree). The plots show that the FFT box method performs significantly better than all orders of FD that were tested. For example, at d = 0 the error for A = 28 FD is -3.49 × 10^-6 Hartree as compared to -1.09 × 10^-9 Hartree for the FFT box method. The fact that the FFT box error is so small shows that coarse sampling in reciprocal space has little effect on accuracy, as one would expect for functions localised in real space.
Our implementation can produce similar FFT box results to the above in regular grids of arbitrary symmetry (non-orthogonal lattice vectors) as long as we include roughly the same number of grid points in the support region sphere. As we described earlier the application of the FD method to grids without orthorhombic symmetry is not straightforward.
Furthermore, in our implementation the kinetic energy matrix elements T_αβ for both the FFT box method and the FD method (of any order) are Hermitian to machine precision. This is a direct consequence of T̂, our representation of the Laplacian operator ∇² on the grid, being Hermitian. As mentioned earlier, this is an important point. The matrix elements T_αβ may always be made Hermitian by construction without T̂ itself being an Hermitian operator. This would ensure real eigenvalues, as is required. However, when it comes to optimisation of the support functions during a total-energy calculation, we require the derivative of the kinetic energy with respect to the support function values [12]:

∂E_kin/∂φ_α(r_i) = 2w Σ_β K^αβ (T̂φ_β)(r_i) + 2w Σ_β K^βα (T̂φ_β)(r_i) = 4w Σ_β K^αβ (T̂φ_β)(r_i),   (9)

where the r_i are grid points belonging to the support region of φ_α. These relations both hold only if T̂ is an Hermitian operator, and support function optimisation can only be performed in a consistent manner if there is one unique representation of T̂φ_β for each support function φ_β. It is also worth noting that the evaluation of these derivatives is the reason why we prefer to perform the sum of equation (4) for the FFT box method in real space, rather than in its equivalent form in reciprocal space. Applying the FFT box method in reciprocal space would be no more costly as far as integral evaluation is concerned, but we would require an extra FFT per support function for the subsequent evaluation of equation (9).

For all the methods we describe in this paper we observe variation in the values of the kinetic energy integrals when we translate the system of the two support functions with respect to the real space grid. This is to be expected as the discrete representation of the support functions changes with the position of the support region with respect to the grid. Such variations may have undesirable consequences when it comes to calculating the forces on the atoms.
In FFT terminology, they result from irregular aliasing of the high frequency components of our support functions as they are translated in real space. Ideally, in order to avoid this effect, the reciprocal representation of the support functions should contain frequency components only up to the maximum frequency that corresponds to our grid spacing; in other words, it should be strictly localised in reciprocal space. Unfortunately this constraint is not simultaneously compatible with strict real space localisation. It should be possible, however, to achieve a compromise, thus controlling the translation error by making it smaller than some threshold. Such a compromise should involve an increase in the support region radii of our functions by a small factor. This situation is similar to the calculation of the integrals of the nonlocal projectors of pseudopotentials in real space with the method of King-Smith et al. [16], which requires an increase of the core radii by a factor of 1.5 to 2. For example, if we consider two carbon valence pseudo-orbitals of support radius 6.0 a_0 and with d = 5.0 a_0 and translate them both in a certain lattice vector direction over a full grid spacing, the maximum variation in the value of the integral with the FFT box method is 8.28 × 10^-6 Hartree. If we then do the same with carbon pseudo-orbitals generated with precisely the same parameters but instead with a support radius of 10.0 a_0, the maximum variation with respect to translation is reduced to 2.05 × 10^-8 Hartree.
Conclusions
In conclusion, we have presented a new and easy to implement method for calculating kinetic energy matrix elements of localised functions represented on a regular real space grid. This FFT box method is based on a mixed real space and reciprocal space approach. We use well established FFT algorithms to calculate the action of the Laplacian operator on localised support functions, whilst maintaining linear scaling with system size and near locality of the operation. This makes our FFT box method suitable for implementation in the order-N code that we are developing. We have performed tests of the FFT box method and various orders of FD. Comparing to the exact integrals of the continuous representation, we have demonstrated that our approach is more accurate than low order FD approximations and only when A = 28 FD is used does the accuracy become comparable to that of the FFT box method. We have also highlighted the connection between the FFT box method and plane-wave methods and shown that our approach is up to three orders of magnitude more accurate than A = 28 FD when compared to the 'exact' result within the plane-wave basis set of the entire simulation cell. Furthermore, our approach for calculating the kinetic energy is consistent with the way in which other quantities in a total-energy calculation, such as the electron density and the Hartree energy, are computed, as these are also calculated using reciprocal space techniques. Finally, we also note that our FFT box method is more versatile than FD as it is applicable to real space grids based on any lattice symmetry, whereas FD schemes are usually only applied to orthorhombic grids.
"year": 2001,
"sha1": "3115fd2f0d11939ddd4e27daf73d56020df6e50f",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "53301fd19d737e9508d354579583736c37b0e1df",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
HSOA Journal of Genetics & Genomic Sciences

Recurrent Pregnancy Loss: Proposal for a Novel Diagnostic Protocol with New Molecular Genetics Insight
Although RPL remains unexplained in about 50% of cases, gynecological, endocrinological, immuno-hematological and genetic factors have been described.
history of RPL [2,3]. However, surgical treatment of malformations does not seem to be associated with an improved reproductive outcome [4].
II. Intramural and submucosal fibroids may increase the risk of miscarriage and decrease the live birth rate. However, there is still limited evidence of benefit from surgical treatment of uterine fibroids on reproductive outcomes [5].
III. There is some evidence that uterine synechiae (Asherman's syndrome) may cause infertility and RPL [6].
IV. Given the factors described above, it may be reasonable to look for uterine factors in young women with RPL, including an ultrasound scan, possibly with a three-dimensional approach or with sonohysterography, according to local resources.
V. Regarding local infections, the potential role of Ureaplasma, Chlamydia, Mycoplasma and other agents is still a subject of debate. Therefore, detection of and therapy for these infections are not routinely recommended [2,3].
VI. The role of progesterone in RPL is controversial. Evidence shows that progesterone supports pregnancy: for example, luteectomy prior to 7 weeks causes miscarriage, low progesterone levels have been linked to an increased risk of first trimester miscarriage, and the progesterone antagonist mifepristone has been successfully used for induction of abortion. The central role of progesterone in early pregnancy therefore led clinicians and researchers to hypothesize that progesterone deficiency could be a cause of some miscarriages. This hypothesis has resulted in numerous clinical trials of progesterone supplementation in early pregnancy bleeding, as well as in women with a history of recurrent miscarriage. However, when studied in well designed placebo-controlled randomized trials, supplementation with progesterone did not result in improved reproductive outcomes in women with RPL [7].
I. Because some data suggest that clinical or subclinical hypothyroidism is associated with RPL [3], many guidelines recommend testing Thyroid-Stimulating Hormone (TSH) levels in RPL patients and treating only those with overt hypothyroidism [8]. Our proposal is to test thyroid hormone levels in RPL patients and refer to an endocrinologist in case of abnormal results.
II. Regarding glucose metabolism, different guidelines report that well controlled diabetes is not a risk factor for RPL [1], so we suggest testing glucose and glycosylated hemoglobin only when there is clinical suspicion.
III. Many studies have examined the possible influence of PCOS on pregnancy and RPL [1,9]; however, strong evidence of an association is lacking.

IV. Elevated prolactin may cause ovulatory dysfunction and infertility, but its role in RPL is still controversial and unclear. The European Society of Human Reproduction and Embryology guidelines do not recommend testing prolactin in the absence of clinical suspicion, while the American Society for Reproductive Medicine states that testing can be considered [1]. There is some weak evidence to suggest that normalising hyperprolactinemia can improve live births in RPL [1]. Our suggestion is to test prolactin levels only in the presence of signs or symptoms and, if necessary, refer to a specialist.
Immuno-Hematologic Causes
Two main immuno-hematological issues have been implicated in RPL: (I) a thrombophilic status, with particular attention to antiphospholipid syndrome, and (II) autoimmune dysregulation.
I. Thrombotic events can cause abortion at any time during pregnancy. A thrombophilic status can be due to specific genetic polymorphisms (see Genetic causes) or to other acquired conditions that can alter the coagulation status of the patient. The most frequent of these acquired conditions in RPL patients is the presence of anticardiolipin and/or lupus anticoagulant antibodies, which, when associated with pregnancy loss or thrombotic events, is also known as Antiphospholipid Syndrome (APS). APS, through direct inhibition of placentation, disruption of adhesion molecules or thrombosis of the placental vasculature, can play an important role in RPL pathogenesis [1]. Many studies have demonstrated the correlation between RPL and APS [11]. In fact, many guidelines recommend testing for APS in RPL women and also suggest treating RPL patients with APS positivity with aspirin and low molecular weight heparin [1][2][3]. Consequently, our suggestion is to check a complete coagulation profile and APS antibodies in RPL patients and to refer to a hemostasis specialist if there are abnormal values.
II. Many studies have investigated the potential role of the immune system in the pathogenesis of RPL. This includes the study of human leukocyte antigen (HLA) typing, natural killer cells, proinflammatory interleukin polymorphisms and immunomodulation with, for example, intravenous immunoglobulin, corticosteroids, intralipid infusion, auto-transfusion of lymphocytes and platelet rich plasma [12][13][14][15][16]. However, there is limited evidence that immune dysregulation and/or immunomodulation has any effect on RPL, so investigations for autoimmunity, outside of APS, are not recommended [1]. Thus we do not suggest any specific autoimmunity analyses beyond those for APS.
Genetic Causes
To date, two principal genetic factors can influence pregnancy outcome: (I) the presence of chromosomal abnormalities and (II) the presence of genomic DNA mutations.
I. Although these anomalies are frequent in spontaneous abortions, they are usually isolated events, and couples experiencing them do not have a significantly increased risk of recurrence in future pregnancies. Unfortunately, in the majority of cases it is not possible to perform genetic tests on abortion specimens, and the miscarriage remains idiopathic. An important fraction of these idiopathic abortions is due to the rearrangement of a balanced chromosomal anomaly present in one of the partners (translocation, inversion, or insertion). It has been shown that in about 5% of couples with RPL, one of the partners carries a balanced asymptomatic chromosomal anomaly [1,2] that can be transmitted in an unbalanced, lethal form to the offspring. Moreover, this chromosomal rearrangement carries a relatively high risk of recurrence. Options for these couples include Pre-implantation Genetic Diagnosis (PGD), spontaneous conception with invasive testing of subsequent pregnancies to detect and avoid the transmission of an imbalance (chorionic villus sampling or amniocentesis), and gamete donation [1]. Preimplantation genetic diagnosis, using medically assisted procreation techniques, can exclude the possibility of transmitting unbalanced chromosomal anomalies through the genetic selection of embryos to transfer. Our suggestion is to perform a peripheral blood standard karyotype for RPL couples without a diagnosis of miscarriage with confirmed aneuploidy. Our proposal is also to perform high-resolution karyotyping, known as Comparative Genomic Hybridization Array, if the couple has a normal standard karyotype but a positive family history of RPL for the female partner. This molecular technique can identify women carrying a rearrangement on the X chromosome that could be lethal or pathogenic for male offspring.
II. If RPL patients have a positive family history of a specific mendelian disease, the causative genes can be screened in the partners in order to identify the pathogenic mutations and eventually use preimplantation genetic diagnosis. This therapeutic option can also be applied to consanguineous partners with RPL if molecular analyses identify the specific mutations causing the recurrent abortions. Many genes potentially involved in the pathogenesis of RPL have been studied, such as DYNC2H1, KIF14, RYR1, GLE1, AMN, THBD, PROCR, VEGF, TP53, NOS3, JAK2 and many others [22,23], but there is no strong evidence of pathogenicity in miscarriage or of recurrent gene mutations to test. So, except for rare cases, our opinion is that there is to date no specific gene or mutation related to RPL in the general population that can be studied or proposed. Many polymorphisms are under investigation, but the evidence is still limited. Among these, the polymorphisms related to thrombotic predisposition have been widely studied and often suggested [24][25][26]. Many papers have reported an association between Factor V Leiden and Prothrombin polymorphisms and RPL [27][28][29][30][31]. Our proposal is to check a complete coagulation biochemical screening, with the addition of only the Factor V Leiden and Prothrombin polymorphisms, for all RPL patients. Finally, because it is now possible to investigate many genes simultaneously using next generation sequencing, we also suggest that all RPL women with a family history suggestive of male abortions be referred to a genetic counsellor in order to screen for X-linked gene diseases that are lethal in males.
"year": 2020,
"sha1": "1ea7edf07bd36da2f4d9b12b86c4d4872ce62348",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.24966/ggs-2485/100018",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4088c6aa6bf5f0f728c5017ca0ab685810eb9deb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Temperature-Dependent Development of the Entomopathogenic Fungus Nomuraea rileyi (Farlow) Samson in Anticarsia gemmatalis (Hübner) Larvae (Lepidoptera: Noctuidae)
The influence of temperature on the development rate of the entomopathogenic fungus Nomuraea rileyi (Farlow) Samson in Anticarsia gemmatalis (Hübner) 3rd-instar larvae was studied under constant temperature regimes. A non-linear model was used to describe temperature-dependent vegetative and reproductive development rates of the fungus. The temperature required to reach the highest vegetative development rate was estimated to be 25.5°C, while the lower and upper developmental thresholds were 11°C and 30°C, respectively. The rate of development for conidia release was stable from 18°C to 26°C. The estimated lower and upper thresholds of vegetative development coincide with conditions during the natural epizootics in central Argentina. Non-linear effects of temperature on vegetative and reproductive development rates and probability distributions of N. rileyi under in vivo conditions are described.
The importance of temperature in the outcome of entomopathogen infection was stressed by Thomas & Blanford (2003), and its effect on N. rileyi development was studied in vitro (Fargues et al. 1992) and in vivo (Getzin 1961, Gardner 1985, Tang and Hou 2001). However, none of the models mentioned above dealt with temperature as a variable driving the speed of vegetative development and sporulation. As with any poikilothermic organism, temperature represents the most important factor governing the speed of development. This is particularly relevant in temperate climate regions, where temperature can show considerable daily fluctuations within one season. The velocity of development of the fungus's vegetative and reproductive stages within the larval body determines how fast the host stops feeding and becomes infective, respectively. The infectious incubation period is of central importance for epizootics. Indeed, a short or long incubation period could represent the difference between secondary infections occurring or not in the host population. Thus, temperature, as a physical factor regulating the developmental rate of the entomopathogen, has epizootiological consequences. In order to make reliable predictions of N. rileyi epizootics in the field, epizootiology models should include temperature. Therefore, the aim of this study was to describe the in vivo temperature-dependent vegetative and reproductive development rates, with their medians and probability distributions, of N. rileyi infecting A. gemmatalis larvae. The sporulation process may also depend on other interacting variables such as humidity. However, for the sake of simplicity in understanding the process and performing the bioassays and models, only temperature-dependent development of this stage was studied.
Material and Methods
Third-instar larvae of A. gemmatalis and the N. rileyi isolate (Nr32) were obtained from the collection at the Laboratorio de Hongos Entomopatógenos (IMYZA, INTA Castelar). The fungal isolate originated from Manfredi (Córdoba, Argentina) and was isolated from an A. gemmatalis larva, with a maximum of two subsequent in vitro passages. The culture medium contained potato, maltose, yeast, and agar (Edelstein et al. 2004).
Ten groups of five 3rd-instar larvae were randomly assigned and dipped in 5 ml of a 0.01% Tween 80 aqueous suspension with 10⁸ conidia/ml or a control suspension. Groups of 50 larvae were placed in individual plastic cages, 3 cm wide, with voile cloth caps, and kept in climatic chambers at constant temperatures of 16, 18, 20, 22, 24, 26, 28, 30 and 34°C (± 0.5°C) for each bioassay. The larvae were fed daily ad libitum with a meridic diet. Mortality in each of the treatments was recorded daily over periods of up to 30 days. Survival analysis was performed using the Kaplan-Meier statistic and the log-rank χ² test (Altmann 1991). Untreated controls were run to test the survival of the host under each thermal condition and the absence of previous infection in the experimental larvae.
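The Kaplan-Meier product-limit estimator used for this survival analysis can be sketched in a few lines. This is a minimal illustration, not the study's analysis: the survival times and event flags below are hypothetical, and a full analysis would also include the log-rank comparison against the control group.

```python
# Sketch of the Kaplan-Meier product-limit survival estimate.
# Times and event flags are hypothetical: 1 = death observed,
# 0 = censored at the end of the observation period.

def kaplan_meier(times, events):
    """Return (time, survival) pairs for the product-limit estimator."""
    at_risk = len(times)
    survival = 1.0
    curve = []
    # Walk through distinct times in order.
    for t in sorted(set(times)):
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        # Everyone whose time equals t (dead or censored) leaves the risk set.
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

# Hypothetical cohort: days to death, or censoring at day 30.
times = [5, 7, 7, 10, 12, 30, 30, 30]
events = [1, 1, 1, 1, 1, 0, 0, 0]
for day, s in kaplan_meier(times, events):
    print(f"day {day:2d}: S(t) = {s:.3f}")
```

Each observed death multiplies the running survival fraction by (1 - deaths/at-risk), which is exactly how the step-shaped survival curves for each temperature regime are obtained.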
It was assumed that vegetative development was complete at the time of death of the host larva. The mycosed dead larvae (white cadavers) were immediately placed in individual 5 cm plastic petri dishes with a small piece of soaked cotton and observed for up to 10 days. The time from death to conidia release was recorded at 16, 18, 20, 22, 26 and 28°C (± 0.5°C). The medians of the rates (time⁻¹) were calculated for each temperature treatment, and the relationship between developmental rate (in days⁻¹) and temperature was described by fitting the following model (Briere et al. 1999):

R(T) = aT(T - T0)(TL - T)^(1/2)

where R(T) is the developmental rate at a given temperature T, a is an empirical fitting parameter, T0 is the lower temperature developmental threshold, and TL is the upper lethal temperature; R(T) = 0 when T ≤ T0 or T ≥ TL. The statistical difference from zero of the estimated parameters was evaluated by a t-test.
A non-predetermined function of temperature was fitted to the median reproductive development rates. Least-square errors, the degrees-of-freedom-adjusted coefficient of determination (DOF adj R²), and algebraic simplicity were used as fitting selection criteria from a collection of models (Table Curve 2D v2.02, Jandel Scientific - AISN Software, 1994).
Since the vegetative development rates at each unit of time, under given thermal conditions, are fractions of a total physiological time, and the t_i are the developmental rates standardized by the median at each temperature, their accumulation through time is an estimate of the physiological age t. The absolute frequencies with respect to t_i were used as weighting variables in the non-linear regression analysis (Curry and Feldman 1987). In order to represent the variability of vegetative and reproductive development rates per cohort, the cumulative relative frequencies of physiological age (t) for each process were fitted to a sigmoidal model (Eq. 2) with fitting parameters a and b.
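The fitting of a sigmoidal cumulative distribution like Eq. 2 can be illustrated with a logit linearization, assuming a two-parameter logistic form F(t) = 1/(1 + exp(-(t - a)/b)); both this functional form and the data points below are assumptions for illustration, not necessarily the paper's exact Eq. 2.

```python
# Fit a two-parameter logistic CDF to cumulative relative frequencies
# of standardized physiological age t (hypothetical data).
import math

t_vals = [0.5, 0.75, 1.0, 1.25, 1.5]
F_vals = [0.08, 0.27, 0.50, 0.73, 0.92]

# logit(F) = (t - a)/b, so a straight-line fit of logit(F) on t yields both
# parameters: the slope equals 1/b, and the location a satisfies F(a) = 0.5.
y = [math.log(F / (1.0 - F)) for F in F_vals]
n = len(t_vals)
mt, my = sum(t_vals) / n, sum(y) / n
slope = sum((t - mt) * (yi - my) for t, yi in zip(t_vals, y)) \
        / sum((t - mt) ** 2 for t in t_vals)
b = 1.0 / slope          # spread of the development-time distribution
a = mt - my * b          # median physiological age (F(a) = 0.5)
print(f"a = {a:.3f}, b = {b:.3f}")
```

The location parameter lands at the physiological age where half the cohort has completed the process, which matches the paper's use of medians as the reference point for standardization.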
Results
Survival of infected larvae at the end of the experiments was significantly lower than the mean of the control at all temperatures (P < 0.001; χ² = 112.48). However, at 30°C and 34°C a high mortality of uninfected A. gemmatalis larvae occurred (Fig. 1) and, consequently, no mycoses were observed. The mean survival of the mycosed larvae was 24% of the total population across all thermal conditions (Fig. 2). In the temperature range between 18°C and 28°C, 50% of the cohort died within 5 to 10 days of infection. The highest mortalities (66 to 90%) were observed between 10 and 16 days. Distinctively, at 16°C, survival of the infected larvae decreased slowly through time, reaching 41% at 26 days and 36% at 29 days after infection.
Discussion
Because moisture was not a limiting factor during the observations, our study helps to understand only the thermal effect on sporulation. However, the sporulation process is clearly also dependent on humidity (Sujii et al. 2002). Thus, future efforts should focus on testing possible interactions between temperature and moisture effects. Besides, the initial inoculum burden in the field may affect the infection probability, making the system less predictable.
Secondary transmissions of entomopathogenic fungi are determined both by the infection process and by conidia release. In the present work it was shown that, although the mycosis incubation, as a fraction of the first process, is strongly affected by temperature, the release of infective bodies is only slightly influenced by thermal conditions. This suggests that a higher number of deaths will occur during temperate days and that the incubation period is more critical than the sporulation period, at least from the thermal point of view. In contrast, relatively high daily mean temperatures can accelerate the development of A. gemmatalis larvae. For example, at 28°C this pest develops approximately 10% faster than at 26°C (Johnson et al. 1983), while our results show that at the same temperature the reproductive development of N. rileyi would occur at around half its optimum rate. In such a case, the pathogen is likely to release infective bodies when most of the hosts have already moulted into less susceptible stages. Consequently, it seems that mild temperatures would facilitate secondary transmissions.
The developmental rate of the fungus under relatively high temperature conditions can be estimated from Eq. 3; the vegetative development rate at 32.4°C was estimated as 0.034 day⁻¹. However, at this temperature, host death caused by factors other than N. rileyi infection was very high, so the fungal development rates were difficult to record. On the basis of previous data (Getzin 1961), personal observations (Edelstein et al. 2004), and the present study, it can be assumed that in vivo development rates are approximately one tenth of the magnitude of in vitro development rates. If the in vitro median radial growth rate of N. rileyi colonies from the same origin was 0.33 mm day⁻¹ at 32°C (± 0.5°C) (R.E. Lecuona, pers. observ.), then under in vivo conditions the development rate is expected to be 0.033 day⁻¹. Temperature lethal effects depend on the time of exposure to the extreme thermal condition. Therefore, the effects of daily variations of temperature on development are relative to the number of hours or days spent at that condition. The in vivo sporulation process was shown to be as sensitive to low temperatures as the in vitro one. The reproductive development rate was quite constant (0.28 to 0.33 day⁻¹) over the 18°C to 26°C range. These temperatures are quite common in temperate regions, like central Argentina, where mean temperatures of 17.1°C to 22.1°C are usually recorded during the February-April period (INTA 2003). During these months, N. rileyi epizootics occur in A.
gemmatalis larvae populations at these latitudes. According to the present study, reproductive development should not be significantly affected by temperature, with the exception of extreme thermal conditions. Indeed, at 28°C the reproductive development rate drops to 0.13 day⁻¹; thus, conidia release is expected to take around three to four days, but during a period of above-average temperatures the vegetative-to-reproductive development period could be extended to seven or eight days.
Vegetative and reproductive development of N. rileyi have similar upper threshold temperatures (ca. 30°C), which is slightly higher than the historical average maximum temperature at Manfredi (Córdoba, Argentina) (28.9°C) in February, the warmest month of the epizootic season (INTA unpubl.). Maximum temperatures would constitute a selection factor for poikilothermal organisms such as entomopathogenic fungi. It is possible, then, that the coincidence of these upper developmental thresholds is a selected trait. This trait allows the fungus to develop, kill the host, and infect other individuals throughout the season. According to the present results, minimum temperature is also a limiting condition for the reproductive development of the fungus, and consequently for secondary infections, particularly during March (mean minimum temperature, 14.49°C) and April (mean minimum temperature, 11.04°C; historical records 1960-2002, INTA unpubl.). At this time, the soybean crop season is finishing and A. gemmatalis larvae are about to pupate. Therefore, with few or no hosts available for the fungus and limiting temperature conditions, vegetative cadavers would likely persist in the field as resistant entities for the next year (Madelin 1963).
Our description of the influence of temperature on the development rate of N. rileyi in infected A. gemmatalis larvae improves the predictive capacity of models designed to describe insect-entomopathogen dynamics relative to previous models such as those by Kish and Allen (1978), Boucias et al. (1984), and Sujii et al. (2002). The assumption of linearity for temperature-dependent development is acceptable only within a narrow range of temperatures. In the present study, non-linear temperature-dependent development equations are proposed for application in epizootiology models similar to that described by Sujii et al. (2002). Consequently, they are likely to yield better descriptions of temporal insect disease dynamics. An argument can be made for a more realistic approach using fluctuating environmental temperatures or the consideration of microclimatic conditions. However, very realistic models would conflict with the simplicity required in the experimental designs used to test them. Moreover, only small differences in thermal requirements would likely be obtained with a variable-temperature method (i.e., re-analyzed from Hagstrum & Milliken 1991). Further compilation of the resulting equations from the present study into a general mechanistic model, and its validation against the temporal dynamics of epizootics under field conditions, should be done.
It has been argued that the estimation of temperature-development relationships of natural enemies can substantially contribute to the selection of the most appropriate biocontrol agent to be used under different environmental conditions (Perdikis & Lykouressis 2002). To the best of our knowledge, this is the first study of N. rileyi in which the non-linear effect of temperature on vegetative and reproductive development rates and their probability distributions under in vivo conditions is described.
Figure 4. Observed cumulative relative frequencies (ARF) and estimated cumulative distribution functions (CDF) of the vegetative and reproductive developmental processes.
"year": 2005,
"sha1": "24f6b2589299f8040ab9be07a19f365c4728eaa1",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/ne/a/BbvCFGzbwKkydmdGTGtH6kQ/?format=pdf&lang=en",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4bed6fde1558493f620f5b8e44d3af5ba0493f6b",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Employee's Satisfaction within the Context of an Organization's Development: Study Results
Purpose: The study's main objective was to assess employee satisfaction at the University of Technology and Life Sciences in Bydgoszcz. Specific objectives included identifying areas of employee satisfaction, assessing the opportunities for professional development, job motivation, and satisfaction, and gathering opinions on the changes necessary at the University to improve job satisfaction. Design/Methodology/Approach: The evaluation study was conducted using a survey questionnaire, which was distributed to employees as an electronic questionnaire via the Limesurvey system, as a traditional paper questionnaire available at the University's lodges, and as a downloadable version available via the INTRA network. Conclusions: The study showed that the respondents are satisfied, among other things, with their professional development at the University, the possibility of using their abilities and competencies, and their achievements. Practical implications: The study also showed that the respondents are satisfied with the opportunity to acquire knowledge and new skills and to achieve self-fulfillment. In this group of questions, the university employees surveyed rate employment stability the lowest: it is insufficient for 22.32% of the respondents, while for 22.03% it is sufficient. Originality/value: Satisfaction testing is a valuable tool for streamlining the organization and increasing employee satisfaction.
Introduction
Currently, issues such as social interest, environmental protection, and relationships with various stakeholder groups are more often taken into account in the development strategies of enterprises. Activities undertaken by enterprises, including activities benefiting local communities, employees, or the broadly understood natural environment, fit the sustainable development trend (Czop and Leszczyńska, 2012). Czop and Leszczyńska indicate that "the primary goal, from the perspective of a company, is to achieve harmony in the social, economic and environmental spheres," while the sustainable development of a company, implemented internally, is focused, among other things, on employees: their development and the creation of optimal work conditions. Employees expect assurance of safety at work and opportunities for development, partner relations, and recognition of their achievements, which in turn provides them with professional satisfaction. The issue of job satisfaction has become an essential element of interest for managers at many institutions and organizations and has been widely described in the literature on the subject.
The concepts of job satisfaction and job contentment are used interchangeably in the literature as synonyms. In occupational psychology, for example, job satisfaction signifies "positive or negative feelings and attitudes towards work" (Schultz, Schultz and Kranas, 2011), while in humanist psychology, satisfaction refers to the wellbeing of an individual resulting from the fact of employment, which is characterized by optimism, hope, and a sense of peace (Dobrowolska, 2010). Other authors treat this concept similarly, identifying it with positive or negative feelings about the work performed and employee expectations (Lu, While, and Barriball, 2005). Job satisfaction, according to E. Locke (as per Springer), is understood as "the result of perceiving one's own work as that enabling achievement of important values from work, provided that these values are in line with the needs or help fulfill basic human needs" (Springer, 2011). According to A. Sowińska, the criterion differentiating the two terms, satisfaction and contentment, is the time of occurrence: job satisfaction signifies a more extended period of contentment, while contentment refers to a short-term condition (Sowińska, 2004).
In recent years, satisfaction with work has been clearly emphasized, because the work environment should be friendly for every employee, and managers can influence its development. According to Juchnowicz, job satisfaction/contentment is both a goal in itself and a measure of the effectiveness of human resource management. What is more, the level of job satisfaction is currently an essential indicator of solid branding; hence it should become a priority for the management staff (Juchnowicz, 2014). Research shows that job satisfaction impacts organizational behavior, including job performance, quality, work discipline, and fluctuation (Bańka, 2007). The data contained in the '2018 Polish Professional Satisfaction' report [original title in PL: Satysfakcja Zawodowa Polaków w 2018] (for linguistic clarity, titles of the Polish publications mentioned in the body text of the article have been translated into English, with the original titles given in square brackets), prepared based on a survey (survey platform HR.pl) covering 14 critical aspects of company and employment assessment, indicate that, compared to 2016, satisfaction with the organization of work in companies has increased significantly. It is worth adding that almost 7% of the respondents are satisfied with the company's image on the consumer market and the company's image as an employer (Raport Satysfakcja zawodowa Polaków, 2018).
Care for integrated development, including the incorporation of training and support in employee development programs that make use of employees' potential while supporting their passions, should be a key issue for managers. Michalak emphasizes that development opportunities are an essential determinant of job satisfaction, and this aspect positively affects the value of the entire organization (Michalak, 2019). Regular diagnosis of employee satisfaction will allow the construction of a solid organizational culture and thus will make it an element of competitive advantage (Mrówka, 2000).
One of the essential features that lead any organization to success is the quality of its human resources. It is well known that where there is motivation and job satisfaction, productivity and performance follow. Motivation and job satisfaction are paramount in achieving success, which can be seen at the organizational level and in any department, project, or plan, thus being an important area of managerial responsibility (Ruşeţ et al., 2007; Toader, 2007).
Research Methodology
The study's main objective was to assess employee satisfaction at the University of Technology and Life Sciences in Bydgoszcz. The study results can promote activities that increase employee satisfaction and improve the functioning of the University. Specific objectives included identifying areas of employee satisfaction, assessing the opportunities for professional development, job motivation, and satisfaction, gathering opinions on the changes necessary at the University to improve job satisfaction and on the University authorities' management, as well as identifying problem areas (Allen and Wilburn, 2002). In order to achieve their objective, the authors took the following steps: a review of the literature, a research study through questionnaires, processing and analysis of data, and the drawing of conclusions (Artz, 2010).
The evaluation study was conducted using a survey questionnaire distributed to the employees using an electronic questionnaire via the Limesurvey system, a traditional paper questionnaire available at the University's lodges, and a downloadable version available via the INTRA network. As part of an additional study on the Information and Promotion System assessment, individual in-depth interviews were conducted with all employees of the Department of Information and Promotion (DIP), using a structured interview questionnaire (Bigliardi et al., 2012).
Additionally, a detailed analysis of the functioning of the Department, as well as the comments provided, among others, by the Deans, constituted the basis for the development of the survey questionnaire. The survey questionnaire consisted of 20 questions, which enabled feedback on the work environment, the information and communication flow, professional development, and the concept for restructuring the DIP. The questionnaire has four sections (Evaluation of the University, Evaluation of the parent unit, Evaluation of professional development, Assessment of motivation and job satisfaction), each question using an evaluation scale. The questionnaire also contains two further questions: one closed single-answer question ('Do you see any signs of mobbing at work?') and one open question ('What do you think should be changed at the University to improve your job satisfaction?'). Three hundred and fifty-four employees took part in the survey, which constitutes 31.6% of all employees (Cahill et al., 2015).
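The kind of tabulation behind percentages such as those reported in the Results section can be sketched as follows; the ratings below are hypothetical, not the collected data, and the scale labels are assumed for illustration.

```python
# Tabulate hypothetical 5-point evaluation-scale responses into the
# percentage breakdown used when reporting survey results.
from collections import Counter

SCALE = {1: "very poor", 2: "poor", 3: "satisfactory", 4: "good", 5: "very good"}

responses = [3, 4, 3, 5, 2, 3, 4, 4, 1, 3]   # hypothetical ratings
counts = Counter(responses)
n = len(responses)
for score in sorted(SCALE):
    pct = 100.0 * counts.get(score, 0) / n
    print(f"{SCALE[score]:>12}: {pct:5.2f}%")

# Scaling the 354 returned questionnaires by the reported 31.6% response
# rate gives the approximate size of the whole staff population.
print(f"354 of ~{354 / 0.316:.0f} employees (31.6%) took part")
```

The same per-category percentages, computed per question and per section, are what statements like "rated satisfactory by 36.44%" summarize.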
Results and Discussions
In the opinion of 36.44% of the employees surveyed, the management of the University by its authorities was satisfactory. Maintenance and security were rated as good by 41.24% of the surveyed. Regional promotion of the University was rated as satisfactory by 35.88% and as good by 24.01% of the employees.
The University's website, as an element of promotion, together with its transparency and timeliness, was rated as good by 35.03% of the respondents. However, it is worth emphasizing that the analysis carried out on the Department of Information and Promotion and its functioning indicated a need for urgent restructuring. Extensive activities have been undertaken in this respect, including, among other things, an increase in current employment as well as website management and its modification in visual and functional terms (Kenny, Reeve, and Hall, 2016).
Analysis of the scope of the DIP employees' duties shows that the description of task distribution was incorrect. The organizational structure is unclear because it does not incorporate in the manager's duties the tasks arising, for example, from direct supervision and control of employees or from the control of the tasks they perform (Chileshe and Haupt, 2010).
What is more, each workplace has an individual, detailed job description card. The study showed that the internal organization within the Department does not meet its needs, while work organization needs improvement. This results from the extensive scope of tasks, which, given insufficient staffing, often causes work to pile up.
Analysis of the main factors causing the respondents' work-related stress points explicitly to 'time' as the primary source of stress. All surveyed employees identified factors that cause stress in their current work, which fall into two groups. The first group is generally related to time, where the surveyed indicated 'work pace,' 'timeliness,' 'time pressure,' or 'tight deadlines.' The second group of factors includes 'lack of clearly defined responsibilities,' 'insufficient technical support,' and, above all, 'an insufficient number of employees at the Department' (Bartolo and Furlonger, 1999).
The need to increase the level of employment has also been confirmed by analyzing the tasks implemented by employees in 2017, which practically from the beginning went beyond the real possibility of their efficient and effective implementation. However, it can be said that the employees form an excellent team that supports and motivates each other. When analyzing the results obtained, it should be added that the employees surveyed often emphasized the fact that they work in a young and well-integrated team. Some of the surveyed stated that the motivation to perform their current job comes from the 'work environment,' 'interpersonal relations, employee-supervisor relations,' or the 'atmosphere at the Department - cooperation with the manager' (Groot and Brink, 1999).
According to the employees, as confirmed by the survey, the current state of employment does not allow efficient and timely performance of the tasks entrusted to them.
In terms of task implementation, staff shortages can be noted, particularly in relation to cooperation with the promotion coordinators at the faculties, the broadly understood popularization of science, and the editing and distribution of the University journal 'Format'. High-quality human resources should be complemented by opportunities to participate in training, courses, or postgraduate studies (Feinstein, 2002).
The specificity of work at the Department requires the continuous acquisition of new skills and knowledge and the further development of competencies and qualifications. All the respondents signaled the need to acquire new skills through participation in, for example, courses or postgraduate studies, which can also serve as a source of non-financial motivation, making employees eager to participate in such courses or to undertake further studies (Musriha, 2013). The need for a flexible response to emerging training needs should be considered here, all the more so because the nature of the work performed requires continuous training.
The specificity of the Department's work and the current shortage of staff raise the need to combine specific processes taking place within individual tasks, which may interfere with the implementation of key tasks. It is recommended to analyze the institutional and personnel structure in terms of the needs identified. An essential element in any work environment is whether the work performed is fascinating for a given person, because this affects the overall approach to the duties performed. This opinion was confirmed in this study, since all employees found their work interesting (Warr and Inceoglu, 2012). A conclusion, therefore, is that, given adequate working and pay conditions, the majority of these employees will remain part of the University's staff resources.
Organizational culture is an established pattern of values, norms, beliefs, attitudes, and assumptions that shape people's behavior and manner of accomplishing tasks (Armstrong, 2000); it is created over a long period. The study showed that the organizational culture was rated very well by all team members. The atmosphere at work is a factor that positively affects the quality of work and the level of employee satisfaction. This is particularly important in a work environment where factors such as stress, periodic layering of work, high responsibility, and new tasks may occur. The respondents evaluated the work atmosphere at their office on a 5-point scale, where 5 was the highest rank. The work atmosphere was rated very highly (Rad and Yarmohammadian, 2006).
The flow of information and communication plays a vital role in the efficient and effective implementation of tasks by the Department and the faculty proxies. Therefore, the respondents were asked to assess the flow of information between departments on a scale of 1 to 5, where 5 meant excellent information flow. Unfortunately, none of the employees rated this area very well; everyone stated that the flow of information deserves only a satisfactory rating (Chang et al., 2009).
Additionally, the respondents stated that more frequent working meetings with the direct superior, i.e., the Vice-Rector for Organization and Development, would significantly improve this area, providing strategic decision-making opportunities (Judge and Klinger, 2007).
Information and communication management requires improvement and inclusion of the departmental proxies for promotion. External communication is also associated with this area, which requires adequately selected tools and channels of information distribution and promotion.
Internal communication and information flow quality affect the quality and comfort of employees' work, especially those who are in direct contact with the beneficiaries and provide them with information. The employees surveyed identified convergent types of improvements that they believe should be implemented. The key, most frequently mentioned improvement was the implementation of proper information flow and the improvement, or even development, of a communication system. Another suggestion was to implement the draft Principles of Cooperation with the University's Department of Information and Promotion. The proposal to develop Principles of Information Provision via the website, the fanpage, and the promotional monitors, in order to promote the departments' events, publicize departmental achievements, and publish content in the media, is noteworthy (Judge and Bono, 2001).
Because information and communication flow has been repeatedly indicated as requiring refinement, it is recommended to thoroughly analyze the solutions used so far and introduce, in consultation with the proxies, adequate improvements (Apkan, 2013).
In the respondents' opinion, it is necessary to introduce organizational improvements at the Department to ensure the effectiveness of the promotional activities and the proper flow of information at the University (Origo and Pagani, 2008). The improvements proposed include: an increase in employment; an alternative division of responsibilities; increasing the capacity for managing the Department; faster reaction on the part of the Supervisor; formal arrangement of the Vice-Rector's meetings with the Department; development of forms of effective cooperation between the departments; extension of the mailing database of institutions, organizations, schools, etc. cooperating with the University by including the Departments' contacts; increased technical support; involvement of younger employees and students in representative activities, e.g., during events and celebrations; and the creation of a Volunteer Center.
The employees stated that when difficult situations arise at the University, the first problems looked for are those associated with promotion. Detailed analysis of the DIP's functioning indicates a need for its restructuring. It is necessary to supplement the staff within the existing job positions and to employ at least two additional persons (Khuong and Tien, 2013).
Communication on the part of the University authorities was assessed as sufficient by 27.4% of the respondents. It is noteworthy that over 21% of the employees rated the communication methods and manners as very poor, while the rules for granting awards were rated as very poor by 28.53% of the surveyed and as satisfactory by 23.16%. The efficiency of the university administration's functioning was rated as good by 34.46% of the respondents. The rules for granting awards, set out in the University's regulations, were rated as good by 37.57% of the surveyed. The scope of the social and living benefits offered by the University was rated well by 39.27% of the respondents.
The functioning of labor unions was assessed as sufficient by 16.95% of the surveyed, while it is noteworthy that 34.46% of the employees selected the 'no assessment' answer. Over 35% of the employees rated the functioning of the internal employee Internet network (intra-University) well, whereas 31.07% of the employees evaluated the functioning of the IT systems used at the University, i.e., USOS, APD, PRIMER, DOCUSAFE, as satisfactory. Adaptation of the library resources to the needs of the research and teaching staff was rated as good by 32.49% of the surveyed. Over 32% of the respondents claim to have seen signs of mobbing at the workplace (Moyes, Shao, and Newsome, 2008).
Conclusion
When the market success of organizations, including universities, is determined by the people they employ, in particular by their knowledge and skills, the development opportunities that employees receive within their organization deserve special attention. This may mean that professional development contributes to mutual benefits: the institution gains educated staff and satisfied employees, while the employees, besides raising their qualifications, gain a sense of satisfaction with the tasks they carry out.
The study indicates that it is necessary to increase the University's promotional activities in the region. Because information and communication flow has been repeatedly indicated as requiring improvement, it is recommended to thoroughly analyze the solutions applied so far and introduce adequate forms of practical cooperation.
The employees surveyed assess their motivation well and their job satisfaction very well. Commitment to the performance of professional duties was assessed exceptionally highly, as very good, by as many as 44.63% of the respondents. In contrast, employment stability was rated the lowest by the employees surveyed: it is insufficient for 22.32% of the respondents and merely sufficient for 22.03%.
Removal of Nitrate in Simulated Water at Low Temperature by a Novel Psychrotrophic and Aerobic Bacterium, Pseudomonas taiwanensis Strain J
Low temperatures and high pH generally inhibit biodenitrification. Thus, it is important to explore psychrotrophic and alkali-resistant microorganisms for the degradation of nitrogen. This research focused on the identification of a psychrotrophic strain and preliminarily explored its denitrification characteristics. The new strain J was isolated using the bromothymol blue solid medium and identified as Pseudomonas taiwanensis on the basis of morphology and phospholipid fatty acid as well as 16S rRNA gene sequence analyses, and was further shown to remove nitrate from wastewater efficiently under low-temperature conditions. This is the first report that Pseudomonas taiwanensis possesses excellent tolerance to low temperature, with 15°C as its optimum and 5°C as viable. Pseudomonas taiwanensis showed an unusual ability for aerobic denitrification, with nitrate removal efficiencies of 100% at 15°C and 51.61% at 5°C. Single-factor experiments showed that the optimal conditions for denitrification were glucose as carbon source, 15°C, shaking speed 150 r/min, C/N 15, pH ≥ 7, and inoculation quantity 2.0 × 10⁶ CFU/mL. The nitrate and total nitrogen removal efficiencies were up to 100% and 93.79% at 15°C when glucose served as the carbon source. These results suggest that strain J has aerobic denitrification ability, as well as a notable ability to tolerate low temperature and high pH.
Introduction
Nitrate, due to its high water solubility, is possibly the major nitrogen contaminant in water [1,2]. High nitrate concentrations can contribute to water eutrophication [3] and even pose a serious threat to human health, such as malformation, carcinoma, and mutation, when nitrate is transformed into nitrosamines [4]. The World Health Organization (WHO) has recommended that the nitrate concentration in drinking water be lower than 10 mg/L [5], and the same value has been proposed by China [6]. Unfortunately, the nitrate concentration is higher than 10 mg/L in numerous aquifers and even exceeds 30 mg/L in some groundwater in China [7][8][9]. Nitrate remediation is therefore a big challenge and has become a matter of great concern in recent years.
Previous research has shown that the most common methods to remove nitrate from wastewater include biological denitrification with microorganisms and physicochemical reduction using ion exchange, electrodialysis, reverse osmosis, zero-valent iron, and zero-valent magnesium [3,10]. Several reports demonstrated that bioremediation has much higher efficiency, has lower cost, and is easier to implement than physicochemical methods for nitrate removal from wastewater [11,12]. Likewise, numerous researchers discovered that biological denitrification is the most promising approach because bacteria can reduce nitrate to harmless nitrogen gas [13,14]. Therefore, many papers have been dedicated to the isolation and identification of nitrate-reducing bacteria, such as Stenotrophomonas sp. ZZ15, Oceanimonas sp. YC13 [15], Bacillus sp. YX-6 [16], Psychrobacter sp. S1-1 [17], Zoogloea sp. N299 [18], and Alcaligenes sp. TB [19]. However, the reported denitrifying bacteria, including those mentioned above, are almost all mesophilic, with optimum denitrification temperatures between 20 and 35°C. These mesophilic denitrifying bacteria might face great challenges in winter months because low temperatures generally drastically inhibit their denitrification ability, cell growth, and proliferation, especially at 10°C or less [20,21]. Thus, it is important to explore bacteria that can effectively remove nitrate nitrogen at low temperatures.
Additionally, nitrate biodegradation may generate OH⁻, which can inhibit the denitrification process and enzyme activity [22]. Zhang et al. [23] discovered that neither cell density increase nor nitrate reduction occurred at pH greater than 8.5; Li [22] reported that nitrate reduction was completely inhibited when the pH reached about 9.5. Accordingly, the optimal pH of most newly isolated aerobic denitrifiers ranges from 6.5 to 7.0, such as Ochrobactrum sp. (6.5-7.0) [24], Alcaligenes sp. S84S3 (7.0) [25], Psychrobacter sp. (7.0) [17], and Pseudomonas mendocina 3-7 (7.0) [26]. It may be difficult for these microorganisms to meet the requirements of alkaline sewage treatment.
In this study, a novel bacterial strain capable of aerobic denitrification with nitrate, a candidate aerobic denitrifier, was identified as Pseudomonas taiwanensis and named J. To the best of our knowledge, few nitrate denitrification studies have focused on the species Pseudomonas taiwanensis. In this research, the tolerance of strain J to low temperature, extreme alkalinity, and high C/N ratio was investigated using nitrate as the sole nitrogen source. The experimental results showed that strain J possesses excellent tolerance to low temperatures, with 15°C as its optimum and 5°C as viable. Furthermore, the nitrogen removal efficiency did not decrease obviously at temperatures between 15°C and 40°C. Significantly, high C/N ratio and strong alkalinity were not limiting factors for cell density increase or nitrate denitrification performance. Accordingly, strain J could be used to treat high-C/N-ratio wastewater and alkaline wastewater in all four seasons.
Bacterium and Media.
The psychrotrophic and aerobic denitrifying bacterium Pseudomonas taiwanensis strain J was stocked in 30% glycerol solution at −20°C.
Identification of the Psychrotrophic and Aerobic Denitrifying Bacterium Strain J. Colony morphologies of strain J were monitored on BTB medium plates after incubation at 15°C for 3 d. Cell morphologies of strain J were observed under a HITACHI S-3000N scanning electron microscope and by atomic force microscopy. Phospholipid fatty acids (PLFAs) were extracted from about 40 mg of pure culture of strain J after incubation for 48 h at 15°C. Each type of PLFA was analyzed on an Agilent 6850.
The nearly full-length 16S rRNA gene sequence was amplified using, as template, DNA extracted with a genomic DNA purification kit (Thermo Scientific). The universal primers 27F (5′-AGAGTTTGATCCTGGCTCAG-3′) and 1492R (5′-GGTTACCTTGTTACGACTT-3′) were used for polymerase chain reaction (PCR) amplification. The PCR amplification was conducted in a 50 μL volume containing 2 μL DNA, 2 μL primer, 25 μL 2× Taq PCR Master Mix, and 19 μL sterile water. The PCR conditions were denaturation for 5 min at 94°C; 30 cycles of 1 min at 94°C, 30 s at 55.5°C, and 1 min at 72°C; and extension for 10 min at 72°C. The 1.5 kb product was separated on a 1.5% agarose gel and purified with a BioSpin gel extraction kit (BioFlux). The purified product was cloned into the pMD520-T vector (Takara) and then sequenced by Invitrogen. The 16S rRNA sequence of strain J was then submitted to NCBI for an accession number. Sequence alignment and multiple alignment were performed using the NCBI BLAST program (https://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastn&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome) and CLUSTAL W, and the phylogenetic tree was constructed with MEGA 6.0 software by the neighbor-joining distance method with bootstrap analysis of 1000 replicates.
Effects of Culturing Conditions on the Denitrification Ability of Strain J. The effects of six cultivation conditions, including temperature (5°C, 10°C, 15°C, 20°C, 25°C, 30°C, 35°C, and 40°C), shaking speed (0 r/min, 50 r/min, 100 r/min, 150 r/min, and 200 r/min), pH (6.5, 7.0, 8.0, 9.0, and 10.0), inoculation quantity (0.5 × 10⁸ CFU, 1.0 × 10⁸ CFU, 1.5 × 10⁸ CFU, 2.0 × 10⁸ CFU, and 2.5 × 10⁸ CFU per 100 mL DM), and carbon source (sodium citrate, sodium succinate, sodium acetate, sucrose, and glucose), on the denitrification performance of strain J were determined by single-factor tests. The amount of carbon was varied to adjust the C/N ratio to 0, 5, 10, 15, 20, and 25 while fixing the amount of NaNO₃ as the nitrogen source. 1.0 × 10⁸ CFU of precultured bacterial suspension was inoculated into a 250 mL sterilized conical flask containing 100 mL DM broth medium for testing, except in the inoculation-quantity tests specified later. After incubation for 48 hours, nitrate and total nitrogen were determined for these samples in order to analyze the influences of the various factors indicated above.
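The C/N adjustment described above is simple mass arithmetic. As an illustrative sketch (the paper does not report its dosing calculation; the carbon mass fractions below are computed from the compounds' molecular formulas, and the 50 mg/L nitrate-N figure is the approximate initial concentration used in the study):

```python
# Estimate the carbon-source dose needed for a target C/N ratio at a fixed
# nitrate-nitrogen concentration. Illustrative only; not the authors' protocol.

# Carbon mass fractions from molecular formulas (12 g/mol per carbon atom).
CARBON_FRACTION = {
    "glucose":          6 * 12 / 180.16,   # C6H12O6
    "sodium succinate": 4 * 12 / 162.05,   # Na2C4H4O4
    "sodium acetate":   2 * 12 / 82.03,    # NaC2H3O2
}

def carbon_source_dose(n_mg_per_l, cn_ratio, source):
    """Return mg/L of the carbon source supplying cn_ratio * n_mg_per_l of C."""
    carbon_needed = cn_ratio * n_mg_per_l           # mg C per litre
    return carbon_needed / CARBON_FRACTION[source]  # mg compound per litre

# Example: ~50 mg/L nitrate-N at the optimal C/N ratio of 15, using glucose.
dose = carbon_source_dose(50.0, 15, "glucose")
print(f"{dose:.0f} mg/L glucose")  # 1877 mg/L glucose
```

The same function covers the other soluble carbon sources tested, by swapping the `source` key.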
The nitrate and total nitrogen removal efficiencies were calculated by the equation Rv = (C1 − C2)/C1 × 100% to assess the denitrification ability of strain J. Note that Rv, C1, and C2 represent the nitrate or total nitrogen removal efficiency, the initial concentration of nitrate or total nitrogen in MDM broth medium, and the final concentration of nitrate or total nitrogen in MDM broth medium after incubation for 48 hours, respectively. All experiments were conducted in triplicate.
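The removal-efficiency formula above is straightforward to compute. A minimal sketch (the function name and sample concentrations are illustrative, chosen to reproduce two efficiencies reported in the paper):

```python
def removal_efficiency(c_initial, c_final):
    """Rv = (C1 - C2) / C1 * 100: nitrate or total-nitrogen removal
    efficiency after the 48 h incubation, in percent."""
    return (c_initial - c_final) / c_initial * 100.0

# ~50 mg/L initial nitrate-N fully removed, as observed at 15 deg C:
print(removal_efficiency(50.0, 0.0))               # 100.0
# Partial removal at 5 deg C: 51.61% corresponds to ~24.2 mg/L remaining.
print(round(removal_efficiency(50.0, 24.195), 2))  # 51.61
```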
Analytical Methods.
The cell optical density (OD600) was monitored by measuring the absorbance at a wavelength of 600 nm using a spectrophotometer (DU800, BECKMAN COULTER, USA). Total nitrogen was calculated as the absorbance value at 220 nm minus two times the background absorbance value at 275 nm after alkaline potassium persulfate digestion. Nitrate was detected in the supernatant after samples were centrifuged at 8000 rpm for 8 min. NO₃⁻-N was calculated as the absorbance value at 220 nm minus two times the background absorbance value at 275 nm [29]. The pH value was measured with a pH meter (PHS-3D, Shanghai Precision and Scientific Instrument Corporation).
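The two-wavelength correction described above (A220 minus twice the background A275) is easy to reproduce. A small sketch of the calculation; the calibration slope and the sample readings are explicit assumptions, since the paper does not report its calibration curve:

```python
def corrected_absorbance(a220, a275):
    """Background-corrected UV absorbance for nitrogen determination:
    A = A220 - 2 * A275, as used for both total N and NO3(-)-N here."""
    return a220 - 2.0 * a275

def nitrate_n_mg_per_l(a220, a275, slope, intercept=0.0):
    """Convert corrected absorbance to NO3(-)-N via a linear calibration
    curve; slope and intercept must come from measured standards (assumed)."""
    return slope * corrected_absorbance(a220, a275) + intercept

# Hypothetical readings: A220 = 0.80, A275 = 0.05 gives corrected A = 0.70.
print(round(corrected_absorbance(0.80, 0.05), 3))  # 0.7
```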
The final results were obtained by averaging at least three independent experiments and are presented as means ± SD (standard deviation of means). All statistical analyses were carried out by one-way ANOVA with Tukey's HSD test (p < 0.05) using Excel and SPSS Statistics 22, and graphical work was carried out with Origin 8.6 software.
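Since all measurements were run in triplicate and reported as mean ± SD, the summary statistic is simple to reproduce with the standard library. A sketch (the triplicate values are invented, not data from the paper):

```python
from statistics import mean, stdev

def summarize(replicates):
    """Return (mean, sample standard deviation) for replicate measurements,
    i.e. the 'mean +/- SD' summaries reported throughout the paper."""
    return mean(replicates), stdev(replicates)

# Hypothetical triplicate nitrate-removal efficiencies (%):
m, sd = summarize([99.2, 100.0, 99.6])
print(f"{m:.2f} +/- {sd:.2f} %")  # 99.60 +/- 0.40 %
```

`stdev` uses the sample (n − 1) formula, the usual choice for small replicate sets like triplicates.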
Results and Discussions
3.1. Identification of Strain J. The colony morphologies of pure strain J were blue with a small white spot, convex, smooth with wet surfaces, regular-edged, and opaque on BTB medium (Figure 1(a)). Strain J was gram-negative, rod-shaped, non-spore-forming, and without flagella (Figures 1(b), 1(c), and 1(d)).
Phospholipid fatty acids (PLFAs), as key components of the cell membrane, are an important indicator for identifying bacteria and fungi. PLFA profiles differ between microbial groups but are relatively constant within the same microbial population, and they have proven relatively simple, fast, and inexpensive to analyze by gas chromatography [30,31]. The PLFAs of strain J showed a 0.627 similarity index with Pseudomonas-syringae-syringae in version 6.0 of the Sherlock Microbial Identification System. As depicted in Table 1, the value of 0.627 is greater than 0.5, which indicated that strain J is a typical Pseudomonas-syringae-syringae according to the principles of PLFA identification. However, temperature, growth phase, carbon substrate, and so on may affect the PLFA composition. Therefore, we further identified strain J by its 16S rRNA gene sequence, which could enhance the credibility of the identification result.
The 16S rRNA gene sequence of strain J (1399 bp) was determined and exhibited 99% similarity with Pseudomonas taiwanensis. The phylogenetic tree constructed with MEGA 6.0 software showed that the evolutionary divergence of strain J was also closely related to Pseudomonas taiwanensis rather than Pseudomonas-syringae-syringae (Figure 2). This discrepancy between the PLFA and 16S rRNA identification results might be because version 6.0 of the Sherlock Microbial Identification System does not contain the species Pseudomonas taiwanensis. Although the species Pseudomonas taiwanensis has been reported by Volmer et al. [32] and Schmutzler et al. [33], whether a strain of Pseudomonas taiwanensis could conduct aerobic denitrification had not been researched. The accession number of strain J in the GenBank nucleotide sequence database is KY927411. Taking the physiological, phospholipid fatty acid, and 16S rRNA gene sequence analyses into account, the strain was identified as Pseudomonas taiwanensis and named J. To date, the species Pseudomonas taiwanensis has hardly ever been reported as a psychrotrophic aerobic denitrifier.
Effect of Temperature on Nitrogen Removal of the Strain J.
Temperature is usually regarded as an environmental stress factor for the survival of bacteria because too high a temperature can induce nucleic acid or protein denaturation, whereas too low a temperature can inhibit cell growth and proliferation, alter protein expression patterns, and weaken metabolic activity [34]. Generally, different bacteria have diverse temperatures for cell growth and functional activity. The effects of different temperatures on cell growth and nitrate removal of strain J in MDM medium are shown in Figure 3(a). Pseudomonas taiwanensis strain J was able to grow and conduct aerobic denitrification with nitrate as nitrogen source over a broad temperature range from 5 to 40°C, which may expand its application scope to different seasons. Strain J possessed tolerance to low temperature, with 15°C as its optimum and 5°C as viable, which suggested that strain J is a psychrotrophic aerobic bacterium. Data in Figure 3(a) showed that the nitrate nitrogen removal efficiency was enhanced with increasing temperature within a certain range. The nitrate and total nitrogen removal percentages increased from 51.61% and 8.76% at 5°C to 100% and 84.48% at 15°C under the condition of 150 r/min, with an initial nitrate nitrogen concentration of about 50 mg/L after 48 h of cultivation. These results demonstrated that higher temperature could promote the nitrate removal efficiency of strain J (p < 0.05). Nitrite nitrogen accumulation was distinctly observed at 5°C and 10°C, with concentrations of 15.0 and 20.12 mg/L, respectively (data not shown in Figure 3). Interestingly, nitrite nitrogen was not detected, or was even below the nitrite detection limit, when the temperature was higher than 15°C and nitrate was used as the sole nitrogen source. These results indicated that too low a temperature might result in high nitrite accumulation.
Similar results were reported in which nitrite accumulated at temperatures below 15°C [8,35]. By contrast, the nitrate and total nitrogen removal efficiencies decreased continuously when the temperature increased further from 15°C to 40°C, falling to 70% and 40.61% at 40°C, which demonstrated that the denitrification ability of strain J could be slightly inhibited by high temperatures. Accompanying the reduction of nitrate and total nitrogen, the OD600 value of strain J first increased from 0.04 to 0.69, corresponding to an average growth rate of 0.345 d⁻¹, and then decreased to 0.35 at 40°C. All the above results suggested that temperature had a pronounced effect on the nitrogen removal efficiencies and cell growth of strain J (p < 0.05).
It is notable that nearly all the reported aerobic denitrifying bacteria are mesophiles with optimum temperatures ranging from 20 to 37°C, whose cell growth and nitrogen denitrification are severely or totally inhibited at temperatures as low as 10°C [16,36,37]. That strain J could conduct aerobic denitrification efficiently at 5°C demonstrates its much greater tolerance of cold conditions, which may offer a new cold-adapted aerobic denitrifier for nitrogen removal in winter.
Effect of Dissolved Oxygen on Nitrogen Removal of the Strain J.
It is widely accepted that the dissolved oxygen (DO) concentration in solution can be adjusted via the rotation speed of the shaker: the faster the shaking speed, the higher the DO level. In the aerobic denitrification process, both dissolved oxygen and nitrate nitrogen can act as electron acceptors. A large number of papers have shown that a certain DO concentration is beneficial to bacterial growth and nitrogen removal [8,38,39]. The influence of different shaking speeds on cell growth and nitrogen removal by strain J is shown in Figure 3(b). Significant differences were observed among shaking speeds of 0-200 r/min (p < 0.05). Poor nitrogen removal efficiencies were obtained in static cultivation, with only 18.36% nitrate and 2.84% total nitrogen removal after 48 h of incubation. Higher rotation speeds significantly promoted cell reproduction and nitrogen removal efficiencies. The optimal shaking speed for strain J growth and denitrification was 150 r/min, which is equivalent to a DO concentration of 6.25 mg/L [37]. The peak nitrate and total nitrogen removal efficiencies were 100% and 84.75% under the conditions of 150 r/min and 15°C. Cell growth and nitrogen removal ability were inhibited at a shaking speed of 200 r/min, possibly because the nitrate reductase is sensitive to high concentrations of dissolved oxygen. The OD600 value showed a similar trend to the nitrogen removal efficiencies, and there was no obvious nitrite accumulation in the experiment. Apparently, excessively low or high DO concentrations were unfavourable for bacterial growth and denitrification, which indicated that proper aeration could improve the nitrogen removal efficiencies of strain J in practical applications.
Effect of Carbon Source on Nitrogen Removal of the Strain J. Carbon sources, with different chemical structures and molecular weights, usually serve as the electron donor and energy source for the heterotrophic aerobic denitrification process. Generally, carbon sources with simpler structures and lower molecular weights are more beneficial for aerobic bacterial denitrification [19]. Figure 3(c) shows that the different carbon sources markedly affected nitrogen reduction efficiencies (p < 0.05) at 15°C and 150 r/min. The experimental results showed that sodium citrate, sodium succinate, glucose, and sodium acetate could well support the growth of strain J and promote nitrate reduction. Strain J exhibited the highest nitrogen removal ability when glucose was used as the sole carbon source, with nitrate and total nitrogen removal percentages of 100% and 93.79%, respectively. That glucose, with its simple and small molecular structure, is the best carbon source for aerobic denitrification has also been found in previous work, such as for the strain Anoxybacillus contaminans HA [34]. Nevertheless, sucrose was not good for either cell growth or the nitrogen removal ability of strain J. No accumulation of nitrite nitrogen was observed in these carbon source experiments. Therefore, it can be concluded that a variety of carbon sources are suitable for strain J to conduct aerobic denitrification, implying that carbon sources are not limiting factors for strain J in the nitrate reduction process at low temperature.
Effect of C/N Ratio on Nitrogen Removal of the Strain J.
Previous reports indicated that a high C/N ratio could promote nitrogen reduction [40,41]. Sodium succinate, located at a central position in the tricarboxylic acid (TCA) cycle, was used as the carbon source while fixing the nitrogen from NaNO₃, which may provide electron donors and energy rapidly for cell multiplication and aerobic denitrification. The effects of different C/N ratios on cell growth and nitrogen removal by strain J are shown in Figure 3(d). Significant differences in nitrogen removal percentage and cell growth were obtained among C/N ratios of 0-15 (p < 0.05). Little nitrogen reduction or bacterial reproduction was observed when the C/N ratio was 0, suggesting that strain J is not an autotrophic denitrifier. With the C/N ratio increasing from 0 to 15, the nitrate and total nitrogen removal percentages reached peak values of 100% and 88.24%. However, as the C/N ratio further increased from 15 to 25, the removal efficiencies of nitrate and total nitrogen remained comparatively constant, which showed that although a higher C/N ratio did not accelerate nitrogen reduction, neither did it inhibit the nitrogen removal ability of strain J. These results differ dramatically from previous studies in which much higher C/N lowered the nitrogen removal efficiency. For instance, Zhang et al. [8] reported that the ammonium removal rate of Microbacterium sp. SFA13 decreased at a C/N of 20, and Huang et al. [18] found that the nitrogen removal efficiency of Zoogloea sp. N299 decreased slightly at C/N ratios higher than 2. It can be concluded that strain J tolerates a much higher concentration of carbon source, and the optimal C/N ratio was 15, consistent with the strain Marinobacter sp., whose optimal C/N ratio is also 15 [42].
Moreover, bacterial proliferation increased continuously with increasing C/N ratio, corresponding to a peak OD600 value of 1.11 at a C/N ratio of 25, and no nitrite was detected at any C/N ratio.
Effect of pH on Nitrogen Removal of the Strain J.
Nitrate and total nitrogen removal by strain J at different pH in the MDM was investigated at 150 r/min and 15°C, as shown in Figure 3(e). The figure shows that pH also had a great effect on nitrogen reduction and bacterial growth (p < 0.05). The growth and denitrification activities of strain J were almost unchanged at an initial pH of 6.5, implying that slightly acidic conditions are harmful to cell reproduction and metabolic activity. Above pH 7, however, the nitrogen removal efficiencies and the OD600 value increased obviously. The maximum nitrate removal efficiency of 100% was obtained at pH 7.0. When the pH increased from 7.0 to 10.0, the nitrate removal percentage was lower than 100% but higher than 90.51%. This result is consistent with the strain Acinetobacter sp. CN86, whose nitrate removal rate was slightly lower as pH increased from 7.0 to 9.0 [43], but conflicts with other papers reporting that nitrate reduction occurred under circumneutral or weakly acidic conditions [44,45]. Compared with the degradation of nitrate, pH values ranging from 7.0 to 10.0 had no distinct effect on the total nitrogen removal ability of strain J, with removal percentages of about 84.0%. Meanwhile, all OD600 values were almost equal to 0.67 except at pH 6.5. Apparently, neutral and alkaline environments are conducive to heterotrophic aerobic denitrification, and strong alkalinity was not a limiting factor for aerobic denitrification.
Effect of Inoculation Quantity on Nitrogen Removal of the Strain J. The nitrate and total nitrogen removal efficiencies were directly affected by the amount of strain J (p < 0.05), as described in Figure 3(f). The aerobic denitrification percentage increased with the inoculation amount from 0.5 × 10⁶ CFU/mL to 2.0 × 10⁶ CFU/mL at 15°C, but a further increase of the inoculation quantity to 2.5 × 10⁶ CFU/mL resulted in a distinct decrease of the nitrogen removal efficiencies. The maximum nitrate and total nitrogen removal efficiencies were 100% and 83.15%, and the maximum OD600 value was 0.65, with an inoculation amount of 2.0 × 10⁶ CFU/mL. It can thus be suggested that cell growth space, oxygen supply, and nitrogen reduction might be constrained when the inoculation quantity is higher than 2.0 × 10⁶ CFU/mL. Furthermore, there was no nitrite nitrogen accumulation in the inoculation quantity experiments. Thus, taking the nitrogen removal efficiencies and cell growth into consideration, the optimal inoculation quantity was 2.0 × 10⁶ CFU/mL. This optimized inoculation quantity of strain J could provide a reference for actual wastewater treatment.
In this research, nitrate nitrogen was used as the sole nitrogen source to preliminarily explore whether strain J has denitrification ability. The removal efficiencies of nitrate nitrogen and total nitrogen denote the denitrification ability of strain J, and the denitrification products (NO, N₂O, and N₂) should be further studied with nitrate as the sole nitrogen source.
Conclusion
The novel psychrotrophic aerobic bacterium, named strain J, was identified as Pseudomonas taiwanensis based on its morphology and phospholipid fatty acids as well as its 16S rRNA gene sequence. Approximately 100.0% of nitrate nitrogen was removed at 15 °C in the different single-factor experiments. The total nitrogen removal efficiency was higher than 83.15%. Strain J possessed excellent cold resistance and was able to remove nitrate at 5 °C, with a removal percentage of 51.61%. Low temperature (below 15 °C) was the only factor that resulted in nitrite accumulation. Most carbon sources could support aerobic denitrification except sucrose. Moreover, a high C/N ratio and alkaline conditions were conducive to increases in cell density and nitrate denitrification.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Usability evaluation of a Gamification-based programming learning platform: Grasshopper
Online learning allows the learning process to be carried out anywhere and at any time. However, empirical studies report several obstacles in the online learning process, one of which is the lack of student involvement. Programming lessons in particular generally show a low level of participation. They therefore require a learning approach that is more attractive, easily understood by students, and promotes engagement. Gamification-based programming learning platforms are widely available today, but research determining the effectiveness of their use and investigating their system usability scale (SUS) score, perceived benefits, level of satisfaction, and user experience is still limited. This study aims to evaluate Grasshopper, one such gamification-based programming learning platform. Thirty-one respondents who had studied various programming languages at the high school and university levels were involved in usability testing sessions. The usability evaluation using the SUS yielded a score above average. The assessment of benefits and satisfaction also showed an average value of 8.6, indicating that most respondents were satisfied with the application.
Introduction
During the COVID-19 pandemic, the use of e-learning became a basic need in the learning process at all levels of education. However, several studies have revealed various challenges and obstacles in conducting online learning, including a lack of student motivation in using online learning media [1][2], a lack of student involvement [3][4], a lack of face-to-face interaction [5], and subject matter that is difficult to learn [1].
The lack of student involvement in the online learning process is caused by several factors, one of which is the lack of content that is attractive for students and the absence of rewards for students who contribute actively on the platform. Online educational sites such as codeacademy.com and khanacademy.org use game elements so that the users will be involved in the learning process. The use of game elements in a non-game context is called gamification [6]. Researchers have found that the use of gamification can increase student commitment in learning activities, students' attendance and participation, students' contribution in answering questions, and the percentage of students passing the course [7].
The difficulty level of a subject is one of the reasons students are less engaged in the learning process. Several studies on the effectiveness of gamification have been carried out in programming courses, because programming courses are subjects with a low level of student involvement and a high failure rate [1][8][9]. Research on the application of gamification elements has mostly been carried out with the main focus on increasing the participation, involvement, and contribution of students. A suitable gamification design in a learning system will improve student learning performance [10]. However, the application of gamification in online learning does not always result in positive student behavior, as found in the research conducted by Kyewski and Krämer on the application of gamification in e-learning courses, where students felt that they were under external control and even social pressure [11]. In that study, Kyewski and Krämer indicated the need to evaluate the effect of badges on the motivation, activities, and performance of students in a more comprehensive way. This proves that applying gamification design is not limited to operational requirements but requires a deep understanding of human psychology [12].
Research on the application of gamification is mostly carried out using the experimental method by comparing one group of students that uses gamification and another group that does not use it. There are several researchers who have added questions as a form of evaluation of the results of the experiment. The theories used in research on the application of gamification elements are also diverse, such as the MDA Framework [1], flow theory [13], self-determination theory [14][15], self-determination theory and social comparison [11], goal setting theory [16], three dimension of engagement [17] [18], and other theories that are in line with the research objectives. In addition to experiments and mixed methods, there are also researchers who only evaluate gamification on the Duolingo platform by using the Game Refinement Theory. The researchers measure Duolingo's refinement value by using data of users who have enrolled in Duolingo on the Duolingo website and the number of courses in each language [19].
Several studies related to the use of gamification in learning applications show that usability analysis of gamification-based platforms is rarely conducted in gamification research. Usability is a qualitative analysis that determines how easy it is for a user to use an application. The usability of the user interface is part of the game aesthetics element [20]; the other two parts are design and visibility. Aesthetics describe the user's expression, feelings, and emotional responses. Therefore, research related to usability analysis should be carried out in conjunction with an analysis of user benefits and satisfaction, which can be extracted from experiments using a platform. The experience of respondents using a gamification-based platform can reveal the benefits obtained by users and the level of user satisfaction.
This study uses usability testing of the Grasshopper application to determine whether the application is feasible and has the potential to help students learn programming languages. We used Grasshopper because it is free to access, has a very attractive design, and comes with brief and clear instructions. Good graphic design is important: reading long sentences, paragraphs, and documents is difficult on screen [21], and an inappropriate design will make users feel frustrated and lose motivation [22]. The methods used in this research are qualitative analysis (observation) and quantitative analysis (questionnaire). Observations were made of students' screen recordings while they used the Grasshopper application. The questionnaire consisted of three parts: part one contained the System Usability Scale (SUS) questions measuring the usability of the application, part two contained questions about the benefits of the application as perceived by the respondents, and part three contained questions about user experience before and after using the application and the level of user satisfaction with it.
The purpose of this research is to analyze the usability scale of the Grasshopper platform using the System Usability Scale (SUS), and to get the perceived benefits and satisfaction levels of the respondents involved by observation and analyzing the results of the questionnaires filled out by respondents after using Grasshopper.
Respondents
Participants involved in this study were students who had learned or were learning programming languages, at the high school, vocational high school, or university level. In the data collection process, thirty-one students met the criteria, as shown in Table 1.
Experimental Design
This study used the usability testing method, with experiments carried out on the use of the Grasshopper application. All respondents used the same application, so there was no control group in this study. The assessment of the experiments was carried out qualitatively, both by observing and by interpreting the respondents' answers to the open-ended questions. In addition, there was a quantitative assessment of the System Usability Scale (SUS) score, the perceived benefits, and the level of respondent satisfaction after using the application.
Observation.
For the purposes of observation, scenarios were provided in the form of tasks that had to be completed by the respondents. The screen recordings sent by respondents were saved, and the length of time it took respondents to complete each given task was recorded. The tasks given in each scenario were determined based on the usefulness of the Grasshopper application. The task scenarios given to the respondents can be seen in Table 2.
Table 2. Task Scenarios.
TS1: Log in to the Grasshopper application with your Google account, Facebook, or Apple ID.
TS2: Answer the question "Have you coded before?" when you first enter the Grasshopper application, then answer "No, I'm new to coding" to enter the questions in the "What's code?" part.
TS3: Work on the French Flag and Gabon Flag puzzles in the "Fundamentals" menu.
TS4: Take the "How Many Blue?" quiz in the "Fundamentals" menu.
TS5: Read and pay close attention to the material given on "Used a Function" in the "Fundamentals" menu.
TS6: Open the Achievement page/menu to see "Concepts Unlocked", "JavaScript Key Used", and "Day Coding Streak".
TS7: Create a New Snippet in the "Code Playground" menu in the form of the Indonesian Flag, "Hello world!", and the respondent's name.
TS8: Complete the "Drawing Boxes" and "Benin Flag" tasks in the "Practices" menu.
IOP Publishing doi:10.1088/1742-6596/1898/1/012020
The task scenarios in Table 2 were used for observational purposes. Observations were made to measure the level of efficiency in usability from the time required to work on a given task; the run time assessed how efficient the system was in repeated use. The working time in each task scenario was measured with Windows Media Player, taking into account the start and end times. The time records from the respondents' use of the Grasshopper application can be seen in Table 3, which shows that on average all features could be completed well by the respondents. Seven respondents started recording their activities at the TS2 section, so the TS1 activities of these respondents could not be observed. Nevertheless, logging into the application was an activity students usually did.
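The per-task efficiency measure described above reduces to the mean completion time across respondents for each task scenario. A minimal sketch; the timings (in seconds) are invented for illustration and are not the study's actual Table 3 data:

```python
from statistics import mean

# task scenario -> completion times in seconds (hypothetical values)
timings = {
    "TS1": [35.0, 42.0, 28.0],
    "TS3": [180.0, 150.0, 210.0],
}

def mean_task_time(records: dict[str, list[float]]) -> dict[str, float]:
    """Average completion time per task scenario, skipping tasks with
    no recorded observations (e.g. missing screen recordings)."""
    return {task: mean(times) for task, times in records.items() if times}

print(mean_task_time(timings))  # {'TS1': 35.0, 'TS3': 180.0}
```

Tasks with no observations (such as TS1 for the seven respondents who started recording at TS2) are simply skipped rather than averaged as zero.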
Usability Evaluation of the Grasshopper Platform Using SUS.
In this section we used the SUS calculation adapted from the original by Sharfina and Santoso [23]. The list of questions in that paper was translated into Indonesian and validated for use. The questionnaire used a Likert scale of 1-5, where 1: Strongly Disagree; 2: Disagree; 3: Neutral; 4: Agree; 5: Strongly Agree. The average SUS value obtained from the overall scores given by the respondents is 69.27. A software product is considered to have good usability if the overall SUS value is equal to or above 68; in other words, the usability of the Grasshopper application was somewhat above average. Table 4 shows the percentage of responses to questions about perceived benefits.
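The 69.27 figure comes from the standard SUS scoring rule: for each respondent, the odd-numbered (positively worded) items contribute (score − 1), the even-numbered items contribute (5 − score), and the sum is multiplied by 2.5 to give a 0-100 score, which is then averaged across respondents. A sketch of that rule; the responses shown are invented:

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS score for one respondent's ten 1-5 Likert answers."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten answers in the range 1-5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs 2,4,6,8,10
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([3] * 10))                        # neutral throughout -> 50.0
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best case -> 100.0
```

Averaging `sus_score` over all thirty-one respondents' questionnaires is what yields the study-level value of 69.27.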
Respondent Satisfaction Analysis of the Grasshopper Platform.
This section contained the question "Rate your satisfaction level in using the Grasshopper application (from 1 to 10)" and one closed question about whether the respondent would recommend the use of Grasshopper to others, and why. The respondents' assessment of their satisfaction level in using the Grasshopper application shows an average value of 8.6, which indicates that most respondents were satisfied with this application, and 96.9% of the respondents would recommend using Grasshopper to learn programming languages. From a total of 30 respondents, one respondent did not recommend the use of Grasshopper because it does not offer the Python programming language.
Conclusion
To measure the perception of the benefits of gamification-based applications, the authors used a 1-5 Likert-scale instrument to find out whether respondents agreed with the statements given in a questionnaire. From the data obtained, it was found that most respondents agreed with the statements about the benefits of the platform they used. As for the level of satisfaction, 51.6% of the respondents gave a score of eight, 29% a score of nine, and 19.4% a score of ten for satisfaction in using the Grasshopper application, and 96.9% of the respondents would recommend the use of this application. The analysis of the use of the Grasshopper application shows that it has a SUS score of 69.27; in other words, the usability of the Grasshopper application is above average. Thus, it can be concluded that the application can be accepted by users. Although most respondents benefited from using Grasshopper and were satisfied with it, they faced some obstacles, such as the language barrier of the instruction used in the application: all the material in the Grasshopper application is in English only. We recommend that Grasshopper add other languages of instruction to meet users' needs.
Understanding the Temporal Dynamics of Accelerated Brain Aging and Resilient Brain Aging: Insights from Discriminative Event-Based Analysis of UK Biobank Data
The intricate dynamics of brain aging, especially the neurodegenerative mechanisms driving accelerated (ABA) and resilient brain aging (RBA), are pivotal in neuroscience. Understanding the temporal dynamics of these phenotypes is crucial for identifying vulnerabilities to cognitive decline and neurodegenerative diseases. Currently, there is a lack of comprehensive understanding of the temporal dynamics and neuroimaging biomarkers linked to ABA and RBA. This study addressed this gap by utilizing a large-scale UK Biobank (UKB) cohort, with the aim to elucidate brain aging heterogeneity and establish the foundation for targeted interventions. Employing Lasso regression on multimodal neuroimaging data, structural MRI (sMRI), diffusion MRI (dMRI), and resting-state functional MRI (rsfMRI), we predicted the brain age and classified individuals into ABA and RBA cohorts. Our findings identified 1949 subjects (6.2%) as representative of the ABA subpopulation and 3203 subjects (10.1%) as representative of the RBA subpopulation. Additionally, the Discriminative Event-Based Model (DEBM) was applied to estimate the sequence of biomarker changes across aging trajectories. Our analysis unveiled distinct central ordering patterns between the ABA and RBA cohorts, with profound implications for understanding cognitive decline and vulnerability to neurodegenerative disorders. Specifically, the ABA cohort exhibited early degeneration in four functional networks and two cognitive domains, with cortical thinning initially observed in the right hemisphere, followed by the temporal lobe. In contrast, the RBA cohort demonstrated initial degeneration in the three functional networks, with cortical thinning predominantly in the left hemisphere and white matter microstructural degeneration occurring at more advanced stages.
The detailed aging progression timeline constructed through our DEBM analysis positioned subjects according to their estimated stage of aging, offering a nuanced view of the aging brain’s alterations. This study holds promise for the development of targeted interventions aimed at mitigating age-related cognitive decline.
Introduction
Bioengineering 2024, 11, 647
Brain aging, also referred to as senescence of the central nervous system, encompasses a gradual decline in cognitive functions and structural integrity [1]. This complex and multifaceted phenomenon is characterized by the progressive loss of neuronal connections, decreased synaptic plasticity, and a diminished capacity for repair and regeneration. As we age, the brain undergoes significant alterations, including volumetric reductions in gray and white matter regions [2], changes in cortical thickness [3], and disruptions in functional connectivity patterns [4]. These alterations affect not only the physical structure of the brain but also its functional capabilities. Understanding the dynamics of both imaging and non-imaging biomarkers during the aging process is crucial, as these biomarkers provide critical insights into the changes occurring in the aging brain. By tracking these biomarkers, researchers can elucidate the trajectory of brain aging, identify early signs of cognitive decline, and develop potential interventions to mitigate age-related impairments.
In the realm of neuroscientific research, brain aging in cognitively normal individuals has been meticulously classified to include resilient brain aging (RBA) [5][6][7], normal brain aging, and accelerated brain aging (ABA) [5,8,9], sometimes colloquially termed advanced brain aging. RBA describes the ability of certain individuals to maintain neural integrity relatively well as they age. Researchers believe that RBA may be influenced by a multifaceted interplay of factors, including genetic predispositions, a healthy lifestyle, and engagement in mentally stimulating activities. Understanding RBA is crucial as it provides insights into protective mechanisms that could potentially be targeted for interventions aimed at promoting healthy cognitive aging across populations. In contrast, ABA refers to a condition where the brain experiences more pronounced structural and functional changes than those typically observed in peers of the same age group. ABA has been associated with several factors, including chronic stress, poor health, exposure to environmental toxins, and genetic predispositions. Considerable documentation exists regarding distinctive brain aging biomarkers associated with ABA or RBA [10]. Developed in response to the need for more precise tools in neuroscience, Brain Age Gap Estimation (BrainAGE) [11] is an emerging method that quantifies the discrepancy between an individual's chronological age and their brain's biological age, as inferred from neuroimaging data. The development of BrainAGE was driven by the recognition that traditional metrics of aging do not capture the complex and heterogeneous nature of brain aging. The estimation of BrainAGE is particularly instrumental in distinguishing between different patterns of brain aging. In the context of RBA, BrainAGE can reveal individuals whose brains exhibit a youthful phenotype relative to their chronological age, indicating a higher level of cognitive and neural resilience [12]. These individuals may possess a more efficient cognitive reserve and better neural maintenance [13]. Conversely, ABA is characterized by a larger-than-expected BrainAGE, where the brain shows signs of aging that exceed the individual's chronological age. This can be indicative of early or more severe neurodegenerative processes and may be associated with an increased risk of developing cognitive impairments or dementia [13][14][15][16]. The underlying mechanisms of ABA and RBA are complex and remain incompletely elucidated. Therefore, estimating the aging progression timeline in ABA or RBA is crucial for understanding the underlying physiological processes associated with these conditions.
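BrainAGE is simply the predicted brain age minus the chronological age, and in this study (see the Participants section) a subject was labeled ABA only when that gap was positive in every imaging modality, and RBA only when it was negative in every modality. A sketch of that sign-consistency rule; the per-modality gap values are invented:

```python
def classify_brainage(gaps: dict[str, float]) -> str:
    """Label a subject from per-modality BrainAGE values
    (predicted brain age minus chronological age, in years)."""
    values = gaps.values()
    if all(g > 0 for g in values):
        return "ABA"      # consistently older-looking brain
    if all(g < 0 for g in values):
        return "RBA"      # consistently younger-looking brain
    return "neither"      # mixed signs: excluded from both cohorts

print(classify_brainage({"sMRI": 4.2, "dMRI": 2.9, "rsfMRI": 1.1}))    # ABA
print(classify_brainage({"sMRI": -3.0, "dMRI": -1.2, "rsfMRI": -0.4})) # RBA
print(classify_brainage({"sMRI": 2.0, "dMRI": -1.0, "rsfMRI": 0.5}))   # neither
```

Requiring agreement across all three modalities is what makes the ABA and RBA cohorts small fractions (6.2% and 10.1%) of the test set.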
There has been growing interest in employing data-driven methodologies to explore the dynamics of imaging biomarkers across various stages of aging or disease progression [17]. Among these methodologies, longitudinal approaches stand out for offering a temporal perspective on the subtle changes occurring within the brain over extended periods. For example, Jedynak et al. [18] demonstrated the efficacy of longitudinal approaches by reconstructing long-term biomarker trajectories using short-term data. Similarly, Donohue et al. [19] employed a self-modeling regression method to estimate these trajectories, while Sabuncu et al. [20] utilized Cox regression for similar purposes. Despite their methodological straightforwardness, longitudinal approaches are often constrained by the stringent requirements of longitudinal data collection. Gathering such data necessitates robust study designs that account for multiple time points, which can be challenging due to issues like participant dropout, variations in imaging data acquisition, and the need for standardized protocols across time points. In response to these challenges, alternative methodologies have emerged that leverage cross-sectional data to infer the temporal sequence of biomarker abnormalities. One notable method is the Event-Based Model (EBM) [21], which provides a potent probabilistic framework tailored to model disease progression dynamics with a specific focus on biomarkers. The EBM operates by representing changes in a biomarker's status as discrete "events", delineating the transition from a "normal" to an "abnormal" state. These events, denoted as Ei and arranged in a sequence S, are accompanied by the corresponding biomarker set X, thereby comprehensively encapsulating the progression trajectory. While the original EBM offered a robust probabilistic framework, a key limitation lies in its assumption that all subjects adhere to a single sequence of events. To address this limitation, researchers have proposed several modifications to the EBM, enhancing its applicability and robustness. For instance, Young et al. introduced two significant modifications [22,23] aimed at refining the EBM approach. The first involves using a dual normal distributions method to model each biomarker independently, thereby offering a more nuanced depiction of biomarker dynamics aligned with real-world data. The second relaxes the EBM assumption of a uniform event sequence between subjects, acknowledging the inherent variability in disease progression trajectories. These modified EBMs, along with the original formulation, belong to the family of generative models focused on maximizing the likelihood P(X|S). In contrast, discriminative models, such as the discriminative EBM (DEBM) [24], estimate the sequence of events for each subject based on the posterior probability of individual biomarkers transitioning to an abnormal state. The adoption of discriminative models like the DEBM has demonstrated improved handling of large datasets and enhanced stability of abnormal event ordering, thereby contributing to more accurate prognostic and diagnostic assessments in neuroimaging research.
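The central event sequence in the DEBM is the ordering closest, under a (probabilistic) Kendall's tau distance, to the subject-specific orderings. A much-simplified sketch of that idea, using the plain Kendall tau distance (the count of discordant event pairs) and a brute-force search over candidate orderings; the real model instead fits a generalized Mallows model to probabilistic orderings, which scales to many events:

```python
from itertools import permutations

def kendall_tau_distance(a: list[str], b: list[str]) -> int:
    """Number of event pairs ordered differently in the two sequences."""
    pos = {event: i for i, event in enumerate(b)}
    d = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if pos[a[i]] > pos[a[j]]:
                d += 1
    return d

def central_ordering(subject_orders: list[list[str]]) -> list[str]:
    """Brute-force the ordering minimizing the total Kendall tau distance
    to all subject-specific orderings (feasible only for few events)."""
    events = subject_orders[0]
    return min(
        (list(p) for p in permutations(events)),
        key=lambda c: sum(kendall_tau_distance(c, s) for s in subject_orders),
    )

subjects = [["E1", "E2", "E3"], ["E1", "E3", "E2"], ["E1", "E2", "E3"]]
print(central_ordering(subjects))  # ['E1', 'E2', 'E3']
```

Here the majority ordering wins because it disagrees with only one subject on a single event pair.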
Throughout this scientific inquiry, Lasso regression was employed to predict brain ages using features extracted from structural magnetic resonance imaging (sMRI), diffusion MRI (dMRI), and resting-state functional MRI (rsfMRI). These feature sets included 207, 144, and 210 features from sMRI, dMRI, and rsfMRI, respectively. By leveraging these multimodal imaging modalities, Lasso regression offered a comprehensive approach to delineating brain aging trajectories. BrainAGE played a crucial role in stratifying the aging population into ABA and RBA categories. Within our dataset, 1949 subjects were representative of the ABA subpopulation, and 3203 subjects were representative of the RBA subpopulation. ABA and RBA subjects were further categorized into middle-old-aged and old-aged groups based on their specific age ranges. To track the progression of aging events, the DEBM was employed. Initially, the DEBM approximated the event sequence for each subject. Subsequently, a generalized Mallows model was applied to these approximate subject-specific event sequences, leveraging the probabilistic Kendall's tau distance to derive a central event sequence encompassing all subjects. This relative distance between events facilitated the construction of an aging progression timeline, positioning subjects according to their estimated stage of aging progression. Through a meticulous dissection of these multimodal MRI features, our investigation aimed to yield novel insights into the complex landscape of ABA and RBA, thereby advancing our comprehension of the mechanisms underlying the aging process.
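The brain-age step pairs an L1-penalized linear model with the BrainAGE subtraction. A toy sketch of that pipeline using a hand-rolled proximal-gradient (ISTA) Lasso solver on synthetic features; the study itself applied Lasso regression to 207/144/210 real per-modality IDPs, and the solver, feature counts, and data below are illustrative only:

```python
import numpy as np

def lasso_fit(X, y, alpha=0.05, n_iter=5000):
    """Minimal ISTA solver for 0.5/n * ||y - Xw||^2 + alpha * ||w||_1."""
    n, p = X.shape
    lr = n / (np.linalg.norm(X, 2) ** 2)   # step = 1 / Lipschitz constant
    w = np.zeros(p)
    for _ in range(n_iter):
        w -= lr * (X.T @ (X @ w - y)) / n  # gradient step on squared loss
        w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))                 # 5 toy imaging features
w_true = np.array([3.0, 0.0, -2.0, 0.0, 0.0])     # sparse ground truth
chron_age = 60 + X @ w_true + 0.1 * rng.standard_normal(300)

w = lasso_fit(X, chron_age - chron_age.mean())    # fit on centered ages
brain_age = chron_age.mean() + X @ w              # predicted brain age
brain_age_gap = brain_age - chron_age             # BrainAGE per subject
print(np.round(w, 1))  # close to the sparse ground truth [3, 0, -2, 0, 0]
```

The L1 penalty drives most coefficients exactly to zero, which is the appeal of the Lasso when only a subset of hundreds of IDPs carries age information.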
The structure of the subsequent sections is as follows: Section 2 provides an extensive introduction to essential aspects, such as the utilization of the UK Biobank (UKB) data, the neuroimaging processing pipeline, the machine learning model applied for brain age prediction, the estimation of the ABA and RBA aging progression timelines, and a thorough exposition of the statistical methodologies foundational to this study. Section 3 presents the study findings in meticulous detail. Section 4 then situates the findings within the broader landscape of neuroimaging research and the understanding of ABA and RBA. Finally, Section 5 provides a concise summary of the study.
Participants
The dataset utilized in this study was derived from the UKB, a large-scale prospective study that enrolled around 500,000 participants aged 37 to 73 years between 2006 and 2010 [25]. The UKB, which is accessible online at www.ukbiobank.ac.uk (accessed on 12 March 2022), obtained ethical approval from the North West Multicentre Research Ethics Committee (reference number 11/NW/0382). Additionally, the research presented in this study received authorization from the UKB under application number 68,382.
A subset of UKB participants underwent neuroimaging examinations. All brain imaging data were collected using a 3 T Siemens Skyra scanner (Siemens Healthcare GmbH, Erlangen, Germany). T1-weighted MRI scans were performed using a magnetization-prepared rapid gradient-echo (MPRAGE) sequence, yielding images with a 1 × 1 × 1 mm voxel size; a 208 × 256 × 256 mm³ image matrix; and inversion time (TI) and repetition time (TR) parameters set at 880 and 2000 ms, respectively. The dMRI utilized two b-values, achieving a spatial resolution of 2 × 2 × 2 mm and sampling 100 unique diffusion directions. The rsfMRI sessions were conducted with parameters yielding a spatial resolution of 2.4 × 2.4 × 2.4 mm, a TR of 0.735 s, and a TE of 39 ms. Cognitive evaluation was based on a neuropsychological battery encompassing nine cognitive domains [26]. Notably, two cognitive scales under scrutiny, reaction time (UKB ID: 20023) and trail-making (UKB ID: 6350), both of which measure time as an outcome, were logarithmically transformed to enhance analytical robustness.
The selection process is detailed in Figure 1. The exclusion criteria were based on the International Classification of Diseases, Tenth Revision (ICD-10) diagnostic classification system. This screening resulted in 388,721 subjects aged 45-83. From these, 31,621 subjects with complete sMRI, dMRI, and rsfMRI data were selected and split into a training set (40%) and a test set (60%) for brain age prediction. Within the test set, individuals with a consistently positive BrainAGE across all imaging modalities were categorized as ABA, and those with consistently negative values as RBA. Participants lacking complete cognitive test data or covariates were excluded, leaving 3203 subjects in the RBA group (mean age of 63.6 years, standard deviation of 7.97) and 1949 subjects in the ABA group (mean age of 64.6 years, standard deviation of 6.96). Given the relatively slow pace of biomarker changes in ABA or RBA compared with neurodegenerative diseases, we categorized the study participants into three age groups (middle-aged, middle-old-aged, and old-aged), with a 7-year interval between each group to ensure distinct biomarker trajectories across the age spans. Specifically, the middle-aged group (serving as the reference group) consisted of all participants under the age of 55 in the test set, while the middle-old-aged and old-aged groups were drawn from the ABA and RBA cohorts.
Image Processing
The UKB provides a broad spectrum of neuroimaging modalities [27]. A standardized methodology is then applied for image preprocessing and initial analysis. This approach results in the creation of a vast collection of imaging-derived phenotypes (IDPs).
T1-Weighted MRI (T1)
T1 MRI is a non-invasive modality crucial for examining the intricate anatomy of the human brain. Volume quantification was performed using the FMRIB software library (FSL, version 5.0.10), which is accessible at http://fsl.fmrib.ox.ac.uk/fsl (accessed on 12 March 2022). Additionally, FMRIB's automated segmentation tool (FAST, version FAST3) was deployed to derive a total of 139 IDPs (ROIs) [28]. The cortical thickness specific to the ROIs was obtained from the FreeSurfer anatomical parcellation [29], utilizing the Desikan-Killiany atlas [30] for cortical regions, identifying 68 ROIs, with 34 in each hemisphere.
DMRI
DMRI assesses the white matter microstructure by mapping water diffusion patterns. In this investigation, the DTIFIT tool (accessible at https://fsl.fmrib.ox.ac.uk/fsl/fdt, accessed on 16 March 2022) computed the fractional anisotropy (FA) and mean diffusivity (MD). Furthermore, NODDI (Neurite Orientation Dispersion and Density Imaging) estimated the isotropic water volume fraction (ISOVF). Tract-based spatial statistics (TBSS) [31] was employed for the spatial statistics. The FA images were aligned onto a white matter skeleton using FNIRT-based warping [32], which was then applied to the other dMRI measures. The resulting skeletonized images for each dMRI measure were averaged across a predefined set of 48 standard spatial tract masks, as defined by Susumu Mori's group at Johns Hopkins University [33], yielding 144 IDPs.
RsfMRI
The analysis of the rsfMRI images employed the MELODIC (Multivariate Exploratory Linear Decomposition into Independent Components) framework [34], integrating group principal component analysis and independent component analysis to extract spatially orthogonal independent components (ICs) representing distinct resting-state neural networks. A low-dimensional group-independent component analysis approach yielded a population-level spatial map of resting-state networks. Prior to the analysis, the functional images underwent preprocessing with 25 fractions (UKB ID: 25752). After a careful exclusion process that eliminated four noise components, 21 components of interest remained, each representing a unique resting-state network. Additionally, a partial correlation matrix derived from the rsfMRI data was utilized to quantify the network connections, yielding 210 values.
Brain Age Prediction Model
The Lasso (least absolute shrinkage and selection operator) is a regression technique that enhances linear models by penalizing coefficients to prevent overfitting, aiding generalization and variable selection. Its suitability for brain age prediction is well documented [34,35]. Hence, we employed the Lasso in our research, with the regularization parameter (alpha) controlling the penalty magnitude. We defined the alpha grid search space (0.001, 0.01, 0.1, 1, 10, 100) and optimized the model performance via fivefold cross-validation on the training dataset.
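The alpha grid search with fivefold cross-validation might be set up along these lines with scikit-learn (synthetic data stands in for the IDP features; the feature-scaling step is an assumption, not stated in the text):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for the IDP feature matrix and chronological age.
X = rng.normal(size=(200, 50))
age = X[:, :5].sum(axis=1) * 2.0 + 60 + rng.normal(scale=2.0, size=200)

# The alpha grid from the text: (0.001, 0.01, 0.1, 1, 10, 100),
# tuned by fivefold cross-validation on the training data.
pipe = make_pipeline(StandardScaler(), Lasso(max_iter=10000))
grid = GridSearchCV(
    pipe,
    {"lasso__alpha": [0.001, 0.01, 0.1, 1, 10, 100]},
    cv=5,
    scoring="neg_mean_absolute_error",
)
grid.fit(X, age)
print(grid.best_params_)
```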
The BrainAGE [12] metric compares the estimated age of a person's brain with their actual chronological age, providing insights into their brain's aging trajectory compared with peers. This assessment not only sheds light on the overall state of brain maintenance (BM) [10,36] but also provides valuable insights into the presence and extent of any underlying neuroanatomical irregularities. The brain age prediction model is a neuroscientific tool designed to estimate an individual's brain age through the analysis of neuroimaging data. It utilizes machine learning algorithms to discern patterns and features that differentiate the biological age of the brain from the chronological age. Computing the BrainAGE score involves determining the difference between the age forecasted by the model and the individual's chronological age (Equation (1)):

BrainAGE = predicted brain age − (α + β × chronological age). (1)

Here, "predicted brain age" refers to the brain age estimated by the Lasso model, while α and β represent the intercept and slope of the regression line of the predicted age on the chronological age. Subjects in the test sets were categorized based on BrainAGE. Those with positive BrainAGE values across all three modalities were allocated to the ABA group, indicating accelerated brain aging. Conversely, individuals exhibiting negative BrainAGE values across the three imaging modalities were assigned to the RBA group, suggesting a more youthful brain state relative to their age. The detailed flowchart is shown in Figure 2.
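A small numerical sketch of this bias-corrected BrainAGE computation follows; the regression-based correction form is inferred from the description of α and β, and the ages are made up:

```python
import numpy as np

# Hypothetical predicted and chronological ages (years).
chron = np.array([50.0, 60.0, 70.0, 80.0])
pred  = np.array([54.0, 61.0, 68.0, 83.0])

# Fit the regression of predicted age on chronological age to obtain the
# slope (beta) and intercept (alpha) used for bias correction.
beta, alpha = np.polyfit(chron, pred, deg=1)

# Bias-corrected BrainAGE: the residual of the predicted age about the
# regression line, so that the score is decorrelated from chronological age.
brainage = pred - (alpha + beta * chron)
print(np.round(brainage, 2))
```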
Estimating Biomarker Ordering Using DEBM
Utilizing cross-sectional data, the EBM characterizes the structured advancement of cumulative degenerative processes associated with aging, while concurrently assessing the inherent uncertainty in this progression. Our study employed the DEBM, a well-established framework accessible at https://github.com/EuroPOND/pyebm (accessed on 8 August 2023). The DEBM is a category of progression modeling that leverages cross-sectional data to predict the most likely sequential occurrence of events, focusing on biomarker degeneration during aging. Individuals are then assigned an aging stage based on their biomarker values within this sequence. The DEBM has been employed in a spectrum of neurological disorders, encompassing Alzheimer's disease (AD) [24,37], Parkinson's disease [38], amyotrophic lateral sclerosis [39], and multiple sclerosis [40].
Extensive validation studies have highlighted the DEBM's superior accuracy and computational efficiency compared with other EBM implementations [24,41]. The DEBM utilizes cross-sectional data to infer the most probable sequence of events, specifically the degeneration of biomarkers that serve as indicators of the aging process. After evaluation of their biomarker values, each individual is categorized into an aging stage within this sequence. The classification is founded upon a probabilistic framework, wherein each biomarker is evaluated to determine its status as either normal or degenerated, with the transition between these states regarded as a pivotal event. The main goal is to reveal the most likely ordered cascade of these events, outlining an individual's trajectory from a state of health to the wide spectrum of manifestations associated with aging.
Figure 3 depicts the procedural steps of the DEBM. Initially, the model calculates a subject-specific sequence based on the posterior probability of biomarker degeneration. This sequence is personalized, derived from the individual's biomarker profile, and indicates a unique progression of degenerative events. Next, the central sequence, regarded as the population's most typical event order, is determined. This sequence minimizes the aggregate of the probabilistic Kendall's tau distances to all subject-specific sequences. The DEBM acknowledges that individual subject sequences are noisy approximations of the central sequence, owing to physiological variations in biomarker expression. The DEBM employs a specialized mixture model. This model is initiated by estimating the distributions of biomarker values for middle-aged and elderly (ABA or RBA) subjects, leveraging data from individuals spanning the extremes of the aging spectrum. A Bayesian classifier is trained to discern and exclude outliers and potentially mislabeled data. This method effectively segregates the Gaussian distributions representing the normal and degenerated states of each biomarker. The initially biased distributions are then refined by integrating a broader dataset encompassing middle-old-aged (ABA or RBA) subjects. This cohort displays a blend of biomarkers in both normal and degenerated states, including instances that may have been previously misclassified. The refinement is facilitated by a Gaussian mixture model (GMM), which applies constraints derived from the relationship between the expected and biased distributions. The GMM objective is iteratively fine-tuned with respect to both the Gaussian parameters and the mixing parameters until convergence is achieved. Optimal sequences are derived as the averages of the orderings obtained from 50 bootstrapped iterations of the DEBM. Employing 10-fold cross-validation, each participant is assigned an aging stage. These delineated aging stages are exclusively
derived from individual biomarker profiles and their alignment along the aging progression continuum, as determined by the estimated sequence of biomarker alterations. Importantly, the characterization of these aging stages is independent of the individual's chronological age, highlighting the model's ability to deepen our understanding of aging beyond chronological years.
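The per-biomarker mixture step can be sketched as follows (synthetic values; in the actual DEBM the fit is constrained by distributions estimated from the extremes of the aging spectrum, which this plain scikit-learn fit omits):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic biomarker: a "normal" component (middle-aged-like values) and a
# "degenerated" component (elderly-like values), mixed in unknown proportions.
normal = rng.normal(loc=2.5, scale=0.3, size=300)
degenerated = rng.normal(loc=1.8, scale=0.3, size=200)
values = np.concatenate([normal, degenerated]).reshape(-1, 1)

# Two-component Gaussian mixture for the normal/degenerated states.
gmm = GaussianMixture(n_components=2, random_state=0).fit(values)

# Posterior probability that each observation is in the degenerated state;
# per-biomarker "event" probabilities like these feed the sequence estimation.
means = gmm.means_.ravel()
degen_comp = int(np.argmin(means))  # lower-mean component = degenerated state
p_event = gmm.predict_proba(values)[:, degen_comp]
print(means.round(2))
```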
Selected Biomarkers
For enhanced interpretability and computational efficiency, not all multimodal neuroimaging biomarkers could be comprehensively utilized in the DEBM analysis. Therefore, a meticulous selection process was undertaken, drawing from a pool of multimodal neuroimaging biomarkers and cognitive data, which resulted in the identification of 34 biomarkers. These selected biomarkers exhibited specific characteristics crucial to the efficacy of our analysis. First, they encompassed a wide range of features, allowing for a holistic representation of the aging process in the ABA. Moreover, these selected biomarkers excelled in distinguishing between normal and degenerated states. Expanding our analysis beyond these IDPs (local features), we incorporated global features, namely, the left and right cortical thickness, alongside mesoscopic scale features. Mesoscopic scale features were determined based on brain lobes or white matter anatomical locations and connectivity functions. The computation of these features relied on the feature values from the included brain regions. For each feature, we further computed the Cohen's d effect size between the middle-aged and elderly groups, leveraging all acquired multimodal imaging IDPs, global features, and mesoscopic scale features, in conjunction with nine cognitive tests. A Cohen's d value of 0.8 or higher indicates a large effect size. In instances where Cohen's d exceeded 0.8, indicating significant differences, biomarker selection was guided by the descending order of the Cohen's d values within each category, reflecting the magnitude of the differences observed. Neuroimaging biomarkers were stratified into three primary classes: global, mesoscopic scale, and brain regions. Global biomarkers encompassed the left and right cortical thicknesses. Mesoscopic scale features retained four each of the gray and white matter features, for a total of eight. The local feature sets included the average cortical thickness, weighted average FA, MD, ISOVF
for white matter fibers, and hippocampus volume. Each of the first four local feature sets retained four features: FreeSurfer (UKB ID: 26781, 26782, 26863, 26883), FA (UKB ID: 25490, 25499, 25502, 25507), MD (UKB ID: 25517, 25518, 25538, 25539), and ISOVF (UKB ID: 25706, 25707, 25727, 25728), resulting in a total of 17. Additionally, four nodes of rsfMRI high-order functional networks and three cognitive tests (reaction time, substitution of numerical symbols, and completion of matrix patterns) were included, resulting in a total selection of 34 features.
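The effect-size screening can be reproduced with a pooled-standard-deviation Cohen's d; the group values below are simulated:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1)
                  + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical cortical-thickness values (mm) for middle-aged vs. elderly.
rng = np.random.default_rng(2)
middle_aged = rng.normal(2.6, 0.15, 400)
elderly = rng.normal(2.45, 0.15, 400)

# Features with |d| >= 0.8 were retained in the selection step.
d = cohens_d(middle_aged, elderly)
print(round(d, 2))
```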
Brain Age Prediction
Within the confines of this investigation, Lasso regression analysis was chosen for forecasting brain ages, with the mean absolute error (MAE) employed as the benchmark to gauge the model effectiveness. Notably, dMRI showed the highest predictive accuracy among the modalities: the implementation of Lasso regression on the dMRI data yielded a low MAE of 4.03 years. Furthermore, the T1 data, including the cortical thickness and gray matter volume, achieved an MAE of 4.17 years, whereas the rsfMRI showed a higher MAE of 5.28 years. The classification of the ABA and RBA groups hinged upon the consistency of positive or negative BrainAGEs across the three modalities within the brain-age prediction test set (n = 18,974). Specifically, subjects were categorized as ABA if the BrainAGE across all modalities was consistently positive; conversely, subjects were classified as RBA if the BrainAGE was consistently negative. As a result, 3203 subjects were classified into the RBA group (mean age = 63.6 ± 7.97) and 1949 subjects into the ABA group (mean age = 64.6 ± 6.96).
Sequence of Biomarker Degeneration for ABA and RBA
The DEBM estimated the sequence for the thirty-four selected biomarkers, assigning them to stages 1 through 34. Figures 4 and 5 illustrate the sequence of biomarker degeneration and the corresponding uncertainty in the ABA and RBA cohorts. Here, each square's color intensity signifies the frequency with which the bootstrap resampling iterations placed the biomarker at a specific position. Thus, the darkest square for each biomarker indicates its most frequent position, representing the mode. This representation aids in grasping the distribution of biomarker positions and provides insight into the mode of the sequence. For the ABA cohort, four functional networks served as the initial biomarkers to undergo degeneration, followed by two cognitive domains. In terms of the whole-brain cortical thickness, degeneration manifested in the right hemisphere before the left hemisphere. Notably, at the mesoscopic scale of cortical thickness, reductions in the temporal lobe cortical thickness preceded those in the frontal lobe. Within the mesoscopic white matter microstructure, degeneration in the FA of association fibers preceded degeneration in the ISOVF of association fibers. At the microscopic brain region level, the FA-related features appeared earlier, the ISOVF-related features appeared later, and the MD-related features exhibited anomalous transitions in the final stages. It is noteworthy that regions vulnerable in neurodegenerative diseases, such as the hippocampus, only demonstrated degeneration in the intermediate stages. In the RBA cohort, the biomarker ordering emphasized the executive control network, anterior default network 2, and basal ganglia network as the earliest markers of degeneration. However, unlike the ABA cohort, degeneration in the whole-brain cortical thickness manifested in the left hemisphere before the right hemisphere. At the mesoscopic scale of cortical thickness, both groups demonstrated alterations in the temporal and frontal lobes; however, in the RBA
cohort, degeneration of the temporal lobe ranked fourth in prominence. Concerning the white matter microstructure, degeneration in the RBA group primarily occurred at relatively advanced stages. Degeneration associated with the MD tended to occur later than for other measures of white matter microstructure, with gray matter degeneration preceding white matter degeneration. Furthermore, in the RBA cohort, the biomarker ordering exhibited a higher degree of uncertainty compared with that of the ABA cohort.
Figures 6 and 7 display the event center variance plots tailored to the ABA and RBA cohorts. These charts comprehensively delineate the trajectory of biomarker degeneration events over the aging timeline, emphasizing the relative temporal positioning of these events in relation to each other. A notable observation is the distinct divergence in aging trajectories between the ABA and RBA cohorts. In contrast to the RBA cohort, the biomarkers within the ABA cohort generally showed smaller event center standard deviations, suggesting a more consistent and predictable trend of biomarker degeneration. This uniformity could stem from similarities in the pathological and physiological changes experienced by individuals within the ABA cohort, leading to relatively stable trajectories of biomarker degeneration. In contrast, the larger event center standard deviations of the biomarkers in the RBA cohort indicate more variability in the timing and sequencing of the biomarker events within this cohort. This heterogeneity may reflect the activation of different protective mechanisms by individuals in the RBA cohort, which may delay the progression of cognitive decline, leading to a slower and more diverse aging process.
Estimation of Aging Stage for ABA and RBA
The aging stage, a summarizing metric for each subject, was determined by estimating the subject's progression along the established timeline of aging (Figure 8). We employed a 10-fold cross-validation approach: within each fold, the DEBM was constructed using the training set, while the estimation of the aging stage was conducted on the test set. Within the ABA cohort, which was known for accelerated brain aging, most individuals within the middle-aged or middle-old-aged (ABA) range were positioned toward the left side, indicating less frequent biomarker degeneration. This suggests a relatively moderate trajectory of neurodegenerative changes in this cohort. Conversely, the elderly population (ABA) was predominantly located toward the right. This distribution suggests a swifter and more pronounced advancement of neurodegenerative processes, consistent with advanced age and increased susceptibility to neurodegenerative diseases. In contrast, the RBA cohort, characterized by its resilience to brain aging, exhibited consistent assignment of low or intermediate aging stages across various age brackets, with minimal variation in aging stages observed between different age groups. This minimal variation suggests that the aging process, especially neurodegeneration, may not manifest as prominently or could progress at a slower rate in this demographic. This phenomenon could stem from genetic, lifestyle, or environmental factors fostering a more resilient aging trajectory, rendering it less prone to the customary neurodegenerative alterations associated with aging. These findings underscore distinct patterns of age-related changes in neural integrity between the cohorts, potentially reflecting varied trajectories of neurodegeneration and resilience across the ABA and RBA populations.
In the ABA cohort, the majority of middle-aged or middle-old-aged subjects exhibited intermediate aging stages, with a relatively low incidence of biomarker degeneration. Conversely, elderly subjects were predominantly assigned to later stages of aging, indicative of pronounced neurodegenerative processes with advancing age. In contrast, within the RBA cohort, individuals across different age brackets were uniformly assigned low or intermediate aging stages, lacking notable variation in aging stages between different age groups. This observation implies that age-related neurodegeneration may not manifest prominently within the RBA population as age advances.
Discussion
Current knowledge of aging-related degeneration in the ABA and RBA cohorts primarily stems from studies focused on various imaging features. This collective body of research has pinpointed regional atrophy, alterations in functional networks, and changes in white matter tract microstructure as notable features. However, the specific sequence of degeneration among these biomarkers has remained largely elusive. To fill this gap, we conducted a data-driven DEBM analysis in this study. The ABA cohort saw an initial decline in four functional networks and two cognitive domains; cortical thinning started in the right hemisphere and in the temporal lobe, FA changes preceded those in the ISOVF, and the MD exhibited anomalous transitions. Conversely, the RBA cohort showed early degeneration in the executive control, anterior default network 2, and basal ganglia networks, with cortical thinning predominantly in the left hemisphere and white matter degeneration at advanced stages. These findings highlight nuanced aging dynamics and complex mechanisms in both cohorts. This analysis sheds light on the temporal dynamics of aging progression in the ABA and RBA cohorts, providing valuable insights into the underlying mechanisms and guiding future research directions.
Central Ordering
The central ordering, resembling the main narrative, depicts the overall sequence of biomarker changes during the aging progression. To establish a central ordering, it is crucial to comprehend the variability in the biomarker changes during the brain-aging process across different subjects. Each individual may manifest a unique sequence of biomarker alterations. To account for these individual variances, a distinct change sequence is initially delineated for each subject, analogous to crafting a personalized timeline for the degradation of their biomarkers. However, the scope of the study transcended individual subjects; it aimed to discern common patterns shared among all the participants. By amalgamating the individual sequences of all the participants, the predominant sequence of changes was identified by employing the generalized Mallows model. This methodology identified the most frequently occurring sequence of biomarker degeneration events through comparative analysis of biomarker degeneration across individuals. This central sequence served as a representative rendition, akin to an average, aiding in the comprehension of the overarching trajectory of biomarker decline during brain aging. Additionally, the Kendall's tau distance was utilized in this process to measure the dissimilarity between different sequences, thereby ascertaining the requisite number of exchanges to align two sequences. Through this methodological approach, a more accurate estimation of the general sequence of biomarker changes associated with aging progression could be achieved.
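For intuition, the (non-probabilistic) Kendall's tau distance between two event orderings counts the pairwise swaps needed to align them; a toy version with hypothetical event names:

```python
from itertools import combinations

def kendall_tau_distance(seq_a, seq_b):
    """Number of event pairs ordered differently in the two sequences."""
    pos_a = {e: i for i, e in enumerate(seq_a)}
    pos_b = {e: i for i, e in enumerate(seq_b)}
    return sum(
        1
        for x, y in combinations(seq_a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )

# Hypothetical central ordering vs. one subject-specific ordering of
# four biomarker degeneration events.
central = ["network", "cognition", "thickness", "hippocampus"]
subject = ["cognition", "network", "thickness", "hippocampus"]
print(kendall_tau_distance(central, subject))  # one adjacent swap -> distance 1
```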
The central ordering analysis revealed heterogeneity in the central ordering of both the ABA and RBA sequences. Early degeneration of functional networks was evident in the ABA sequence. As an individual journeys through the aging process, noticeable changes arise in the activation patterns of individual brain networks. These age-related modifications spur individuals to develop compensatory strategies, utilizing alternative networks to counteract the onset of declining cognitive functions [42][43][44]. This adaptive approach aims to alleviate the impact of age-related cognitive decline throughout the aging process and may manifest before a noticeable decline in cognitive function. However, compensatory mechanisms can only partially offset the decline in cognitive abilities, with further deterioration in cognitive domains occurring as age advances [45,46], affecting crucial functions, such as memory and attention. Hemisphere-specific degeneration in the ABA sequence showed earlier onset in the right hemisphere, which was potentially associated with asymmetrical brain aging [47]. In the adult brain, cortical asymmetry, characterized by the unequal thickness of the left and right sides of the cortex, is a notable phenomenon. This asymmetry is not merely incidental; rather, it serves a functional purpose, optimizing brain performance by assigning distinct roles to each hemisphere. Conventionally, the left hemisphere is closely associated with language processing, logical reasoning, and sequential tasks, whereas the right hemisphere excels in spatial perception, facial recognition, and aesthetic appreciation. In the context of aging brains, two prominent models elucidate cortical asymmetry changes: the Right Hemi-Aging Model and the Hemispheric Asymmetry Reduction in Older Adults (HAROLD) model. The Right Hemi-Aging Model posits a heightened vulnerability of the right hemisphere to age-related alterations [48], a conjecture supported by various studies
[48,49], highlighting a more pronounced change in the right hemisphere relative to its left counterpart. Additionally, the early reduction in temporal lobe cortical thickness aligns with typical pathological features of neurodegenerative diseases like AD, given the pivotal role of the temporal lobe in memory formation. Prior studies have also shown associations between mild cognitive impairment (MCI), AD, and accelerated brain aging [12,13]. In contrast, early degeneration in the RBA sequence highlighted the involvement of the executive control networks, anterior default mode network 2, and basal ganglia networks, which are regions closely associated with executive functions, self-reflection, and motor control. This suggests that these functions may be affected earlier in the RBA sequence. Notably, degeneration in the left hemisphere of the RBA sequence preceded that in the right hemisphere. However, concerning the event centers, the contrast between the hemispheres was less marked than in the ABA sequence. This observation could potentially be elucidated through the HAROLD model, which suggests that with advancing age, the functional asymmetry between the left and right hemispheres of the brain gradually diminishes [50,51]. In essence, older adults tend to engage both hemispheres in cognitive tasks that were previously lateralized to one hemisphere in younger adults. This compensatory mechanism is believed to aid older adults in maintaining cognitive performance despite age-related changes in their brain structure and function. RBA highlights the brain's adaptability and resilience to age-related changes. Models such as brain reserve and cognitive reserve have been employed to explicate RBA [52,53]. Research suggests that individuals with higher cognitive reserve show less disparity between the hemispheres [54,55]. Furthermore, the degeneration of the white matter microstructure in the RBA sequence occurred at later stages, indicating that white matter damage may not be an early marker of
aging in this sequence but rather associated with later-stage progression.The comparison of the two sequences also highlighted differences in white matter microstructural changes.Early degeneration of the FA in the ABA sequence may indicate early reorganization of white matter fibers, while late-stage degeneration of the MD in the RBA sequence may be related to severe impairment of white matter integrity in the later stages of aging.Additionally, the higher uncertainty in biomarker ordering in the RBA sequence may reflect greater variability in aging progression, which is possibly influenced by differences in genetic backgrounds, lifestyles, or environmental factors between individuals in the RBA group.
Event Centers
The event center can be regarded as a pivotal moment in the process of brain aging. Across the aging continuum, shifts in biomarkers mark distinct milestones in the trajectory of aging progression. Analyzing the temporal dynamics of brain aging and pinpointing the timing of various biomarker changes is like identifying the chapters of a storybook in which each significant event happens. Comparing data across subjects yields a plausible sequence of event occurrences, yet merely knowing which events precede others is insufficient; understanding the relative timing of these events, whether early or late in the aging process, is equally important. To identify these time points, the probabilistic Kendall's tau distance was employed, which computes the distance between events by comparing the sequence of events across different subjects. Essentially, events that frequently co-occur have a shorter distance between them, whereas events that rarely co-occur have a greater distance. From these distances, the "event centers" for each event are derived. Event centers represent the average temporal occurrence points for biomarker changes. Additionally, two hypothetical events, one occurring before disease onset and another after disease cessation, are introduced to anchor the earliest and latest points on the timeline. Through the discernment of these event centers, researchers can gain enhanced insight into the gradual progression of aging within both the ABA and RBA cohorts.
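The event-center construction described above can be illustrated with a minimal sketch. This is not the study's implementation: it assumes subject-specific orderings are already available as permutations of event indices, and it reduces the Kendall-distance embedding to each event's average normalised position between the two hypothetical anchor events.

```python
import numpy as np

def event_centers(orderings, n_events):
    """Illustrative event centers from per-subject event orderings.

    orderings: sequence of length-n_events permutations, one per subject,
    listing event indices from earliest to latest.
    Returns a value in (0, 1) per event: its mean normalised position,
    with the timeline padded by two hypothetical anchor events
    (one before onset, one after cessation).
    """
    n_sub = len(orderings)
    # ranks[s, e] = position of event e in subject s's sequence
    ranks = np.empty((n_sub, n_events))
    for s, seq in enumerate(orderings):
        ranks[s, np.asarray(seq)] = np.arange(n_events)

    # probabilistic Kendall-type quantity: fraction of subjects in which
    # event i occurs after event j
    d = np.zeros((n_events, n_events))
    for i in range(n_events):
        for j in range(n_events):
            d[i, j] = np.mean(ranks[:, i] > ranks[:, j])

    # row sum = expected number of events preceding event i; adding the
    # pre-onset anchor and normalising by the padded timeline length
    # gives a centre in (0, 1)
    return (d.sum(axis=1) + 1.0) / (n_events + 1.0)
```

With three subjects all following the order 0, 1, 2, the centres come out evenly spaced at 0.25, 0.5 and 0.75, reflecting equal distances between consecutive events.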
A notable observation was the distinct divergence in aging trajectories between the ABA and RBA cohorts. Compared with the RBA cohort, biomarkers within the ABA cohort typically exhibited smaller event-center standard deviations, implying a more uniform and predictable trend of biomarker degeneration within the ABA cohort. This uniformity may be associated with greater similarity in the pathophysiological changes experienced by individuals within the ABA cohort, resulting in relatively fixed trajectories of biomarker degeneration. In contrast, the larger event-center standard deviations of biomarkers in the RBA cohort suggest greater variability in the timing and sequencing of biomarker events within this cohort. This heterogeneity may reflect the activation of different protective mechanisms [56] by individuals in the RBA cohort, which may delay the progression of cognitive decline, leading to a slower and more diverse aging process.
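The event-center standard deviations discussed here come from bootstrap resampling of subjects (50 resamples, per the figure captions). A generic sketch, with `center_fn` as a placeholder for any function mapping subject orderings to event centers:

```python
import numpy as np

def bootstrap_center_sd(orderings, center_fn, n_boot=50, seed=0):
    """Standard deviation of event centers over bootstrap resamples.

    orderings: array-like (n_subjects, n_events) of per-subject event
    sequences; center_fn maps such an array to a vector of event centers.
    Subjects are resampled with replacement n_boot times.
    """
    rng = np.random.default_rng(seed)
    orderings = np.asarray(orderings)
    n = len(orderings)
    centres = [center_fn(orderings[rng.integers(0, n, n)])
               for _ in range(n_boot)]
    return np.std(centres, axis=0)
```

If every subject shares the same ordering, every resample is identical and the estimated standard deviation is zero, matching the intuition that small standard deviations indicate a uniform degeneration sequence.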
Aging Stages
Aging stages, which consider the relative distance between events, serve as indicators of an individual's progression along the aging continuum and significantly enhance researchers' comprehension of aging dynamics. Building on the preceding biomarker analyses, the hierarchical organization of aging and the focal points of each degeneration event were delineated. Staging relies on each subject's biomarker profile together with this hierarchical organization, using a prior distribution informed by the sequence of biomarker decline. Leveraging the probability chain rule facilitates the management of conditional probabilities, enabling estimation of the likelihood that a subject occupies each aging stage.
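The chain-rule staging described above can be sketched minimally as follows. The function name is illustrative; the sketch assumes a fixed central ordering and per-biomarker posterior probabilities of degeneration, whereas the full model also accounts for uncertainty in the ordering.

```python
import numpy as np

def stage_posterior(p_degen, ordering):
    """Illustrative event-based staging under a fixed central ordering.

    p_degen: per-biomarker probability that this subject's value comes
    from the degenerated state (e.g. a GMM posterior).
    ordering: the central ordering (biomarker indices, earliest first).
    Returns a length-(N+1) vector: the normalised likelihood that exactly
    the first k events in the ordering have occurred.
    """
    p = np.asarray(p_degen)[np.asarray(ordering)]
    n = len(p)
    lik = np.empty(n + 1)
    for k in range(n + 1):
        # events 0..k-1 already degenerated, events k..n-1 still normal
        lik[k] = np.prod(p[:k]) * np.prod(1.0 - p[k:])
    return lik / lik.sum()
```

For a subject whose first two biomarkers look degenerated (posteriors 0.9 and 0.8) and whose third looks normal (posterior 0.1), the most likely stage is 2, i.e. two events along the timeline.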
In the ABA cohort, middle-aged and middle-old-aged subjects typically exhibited intermediate stages of aging, indicating a relatively slower rate of biomarker degeneration. This likely arises from the incipient or mild nature of aging and degenerative processes in the brain during these age brackets. Conversely, elderly subjects were predominantly situated in the later stages of aging, suggesting a more conspicuous neurodegenerative progression with advancing age. This pattern resonates with trajectories observed in numerous neurodegenerative disorders, such as AD, where disease severity and biomarker alterations typically intensify with age [57]. In contrast to the ABA cohort, individuals across diverse age strata in the RBA cohort were uniformly distributed across lower or intermediate stages of aging. Notably, variations in aging stage between age cohorts were negligible. This implies that in the RBA population, age-related neurodegenerative changes may become less prominent as age advances, which could be attributed to protective mechanisms inherent within the RBA cohort exerting influence over the pace of age-related neurodegeneration [58,59].
Limitations
Our research had certain limitations that warrant acknowledgment. One constraint pertained to the notable homogeneity of the UKB cohort, in which 94.6% of participants identified as white. This limited ethnic diversity may restrict the applicability of our findings to populations with more diverse genetic backgrounds, lifestyles, and environmental exposures.
Future studies should prioritize the inclusion of more diverse cohorts to overcome this limitation. Second, the classification of ABA and RBA cohorts typically relies on brain age prediction models. In this study, we employed a more rigorous definition based on neuroimaging data from three modalities, whereas many studies utilize only T1-weighted imaging data; consequently, caution should be exercised when interpreting results, especially when cohort definitions differ. Third, the selection of biomarkers can significantly influence the interpretation of results. We selected 34 biomarkers to achieve a thorough characterization of both the ABA and RBA cohorts. However, as the number of biomarkers increases, their temporal positioning along the timeline becomes less distinct, posing challenges in discerning the sequence of closely related biomarker events. Future work would benefit from a more comprehensive selection of biomarkers. Lastly, the DEBM provides a temporal sequence of biomarker degeneration but lacks explicit time information because the intervals between subsequent events are non-linear. Consequently, categorization into early and late events relied on comparisons with other markers over the aging trajectory. Integrating DEBM with longitudinal data and survival models could provide estimates of aging progression timescales, enabling a more nuanced understanding of the temporal dynamics underlying age-related degeneration and precise delineation of the temporal aspects of the aging process.
Conclusions
Our study addressed a critical gap in current research by investigating the temporal dynamics and neuroimaging biomarkers associated with ABA and RBA. Understanding the intricate processes of brain aging, especially the neurodegenerative mechanisms underlying these phenotypes, is crucial for identifying susceptibilities to cognitive decline and neurodegenerative disorders. This research contributes in two main ways. First, we conducted a comprehensive examination of aging trajectories in ABA and RBA using advanced multimodal neuroimaging. By integrating structural MRI, diffusion MRI, and resting-state fMRI data, we systematically explored the progression of neuroimaging biomarkers associated with ABA and RBA. This represents a pioneering effort to elucidate the timeline of biomarker changes in these distinct aging phenotypes, advancing our understanding of their underlying mechanisms. Second, our study leveraged a large cohort of healthy volunteers (n = 31,621), allowing a nuanced characterization of diverse ABA and RBA profiles. This extensive dataset provided insights into the variability and commonalities within these aging cohorts, offering a comprehensive view of how different individuals experience ABA or RBA. Ultimately, this research advances our understanding of age-related cognitive decline, with the potential to reshape therapeutic strategies toward personalized interventions.
Figure 2. Flowchart illustrating the categorization of ABA and RBA subjects.
Figure 3. Overview of DEBM steps: (A) Biomarkers are transformed into probabilities of degeneration using GMM to estimate normal and degenerated states, as classified via Bayesian methods. (B) Subject-specific biomarker degeneration sequences are inferred and used to derive a central ordering for aging progression timelines. (C) The central ordering stages subjects based on aging severity.
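Step (A) of the caption, converting a raw biomarker value into a probability of degeneration via Bayes' rule over two Gaussian states (as a GMM would fit them), can be sketched as below. All parameter values are illustrative, not values from the study.

```python
import numpy as np

def degeneration_probability(x, mu_norm, sd_norm, mu_degen, sd_degen,
                             prior_degen=0.5):
    """Posterior probability that biomarker value x comes from the
    'degenerated' Gaussian state rather than the 'normal' one."""
    def gauss(v, mu, sd):
        # Gaussian probability density
        return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    p_d = prior_degen * gauss(x, mu_degen, sd_degen)
    p_n = (1.0 - prior_degen) * gauss(x, mu_norm, sd_norm)
    return p_d / (p_d + p_n)
```

A value near the normal mean yields a posterior near 0, and a value near the degenerated mean yields a posterior near 1; these posteriors are the inputs to the sequencing and staging steps.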
Figure 4. The positional variance diagram illustrating the sequence of biomarker degeneration of the ABA cohort. The y-axis (top to bottom) represents the maximum likelihood sequence of biomarker degeneration.
Figure 5. The positional variance diagram illustrating the sequence of biomarker degeneration of the RBA cohort.
Figure 6. The event-center variance diagram illustrating the standard error of the estimated degeneration centers of the ABA cohort. The plot displays the event centers of the various biomarkers, along with their respective standard deviations, as estimated from a batch of 50 independent bootstrap samples.
Figure 7. The event-center variance diagram illustrating the standard error of the estimated degeneration centers of the RBA cohort.
Figure 8. Estimation of aging stages across age groups. The aging stages were estimated utilizing a 10-fold cross-validation methodology. Each histogram illustrates the relative frequency distribution of aging stages within each clinical group, normalized for the respective age cohorts. Box plots display the estimated aging stages for each cohort, depicting the mean value along with the standard deviation of the aging stage.
The brain age prediction model is an advanced neuroscientific tool designed to estimate an individual's brain age from neuroimaging data. It utilizes machine learning algorithms to discern patterns and features that differentiate the biological age of the brain from the chronological age, and it also provides valuable insights into the presence and extent of any underlying neuroanatomical irregularities. Computing the BrainAGE score involves determining the difference between the age forecast by the model and the individual's chronological age (Equation (1)). A score close to zero suggests typical aging, while positive scores indicate accelerated aging and negative scores suggest a younger brain. To mitigate age-related bias, an age-bias correction procedure (Equation (2)) is crucial:

Corrected brain age = Predicted brain age − α − β × Chronological age    (2)
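A minimal sketch of the bias correction in Equation (2), assuming, as is common practice but not stated explicitly here, that α and β are estimated by regressing the raw brain-age gap (predicted minus chronological age) on chronological age within the cohort:

```python
import numpy as np

def corrected_brain_age(predicted, chronological):
    """Age-bias-corrected brain age (Equation (2)).

    alpha (intercept) and beta (slope) are fitted to the raw brain-age
    gap as a function of chronological age, then the bias term is
    subtracted from the predicted age.
    """
    beta, alpha = np.polyfit(chronological, predicted - chronological, 1)
    return predicted - alpha - beta * chronological

# BrainAGE score (Equation (1)) is then: corrected brain age minus
# chronological age; near zero for typical aging.
```

With a prediction that is uniformly 5 years too old, the fit absorbs the constant offset (α = 5, β = 0) and the corrected brain age equals the chronological age, giving BrainAGE scores of zero.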
"year": 2024,
"sha1": "ccdc0b5f7a8c155b1000da67f72b8c2b7158a6fa",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2306-5354/11/7/647/pdf?version=1719313092",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01096a047fcc93eed3ff2deb4990329245790eb3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
Microwave Assisted Bioethanol Production from Sago Starch by Co-Culturing of Ragi Tapai and Saccharomyces Cerevisiae
Problem statement: Environmental issues such as global warming and recent events throughout the world, including the shortage of petroleum crude oil, the sharp increase in the cost of oil and the political instability of some crude-oil-producing countries, have demonstrated the vulnerability of present sources of liquid fuel. These situations have created great demand for ethanol from fermentation as a green fuel. A main challenge in producing ethanol is the production cost. A rapid and economical single-step fermentation process for reliable production of bioethanol was studied by co-culturing commercial ragi tapai with Saccharomyces cerevisiae using raw sago starch. Approach: Enzymatic hydrolysis of sago starch by various amylolytic enzymes was investigated to reveal the potential coupling mechanism of Microwave Irradiation-Enzyme Coupling Catalysis (MIECC). Results: It was shown that enzymatic hydrolysis of starch using typical enzymes may successfully be carried out under microwave conditions. MIECC increased the initial reaction rate by about 2 times. The results testify to specific activation of enzymes by microwaves and support the existence of a non-thermal effect in microwave-assisted reactions. Low-power microwave irradiation (80 W) does not increase the temperature beyond 40°C, and hence denaturation of the enzyme is avoided. The maximum ethanol fermentation efficiency (97.7% of the theoretical value) was achieved using a 100 g L−1 sago starch concentration. The microwave-assisted process improved the yield of ethanol by 45.5% compared with the non-microwave process. Among the other advantages of co-culturing ragi tapai with S. cerevisiae are the enhancement of ethanol production and the prevention of the inhibitory effect of reducing sugars on amylolytic activity; the reaction could be completed within 32±1 h.
Conclusion: The present study demonstrated the feasibility of using cheap and readily available ragi tapai for conversion of starch to glucose, and of sago starch as a feedstock, which is cheaper than other starches such as corn and potato. It also highlighted the importance of a well-controlled microwave-assisted enzymatic reaction in enhancing the overall reaction rate of the process.
INTRODUCTION
Heavy reliance on fossil resources for the generation of transportation fuels and materials has caused rising concern over their cost, sustained availability and impact on global warming and pollution. Motor vehicles account for a significant portion of urban air pollution in much of the developing world, and it is projected that there will be 1.3 billion light-duty vehicles (automobiles, light trucks, SUVs and minivans) on roadways around the world by 2030 (Balat and Balat, 2009). According to Goldemberg (2008), motor vehicles account for more than 70% of global carbon monoxide (CO) emissions and 19% of global carbon dioxide (CO2) emissions; CO2 emissions from a gallon of gasoline are about 8 kg. Bio-fuels are liquid or gaseous fuels made from plant matter and residues, such as agricultural crops, municipal wastes and agricultural and forestry by-products. Liquid biofuels can be used as alternative fuels for transport, alongside other alternatives such as Liquefied Natural Gas (LNG), Compressed Natural Gas (CNG), Liquefied Petroleum Gas (LPG) and hydrogen (Semin et al., 2009). Bioethanol produced from renewable biomass, such as sugar, starch or lignocellulosic materials, is one of the alternative energy resources that is both renewable and environmentally friendly (Balat et al., 2008; Baras et al., 2002). It can be blended with petrol (E5, E10, E85) or used as neat alcohol in dedicated engines, taking advantage of its higher octane number and higher heat of vaporisation, and it is also an excellent fuel for future advanced flexi-fuel hybrid vehicles (Chum and Overend, 2001; Kim and Dale, 2005). Internal combustion engines operating on ethanol also produce fewer greenhouse gas (GHG) emissions, since ethanol is less carbon-rich than gasoline.
Fermentation-derived ethanol can be produced from sugar, starch or lignocellulosic biomass. Sugar- and starch-based feedstocks are currently predominant at the industrial level and are so far economically favorable; starch-based materials are the most utilized for ethanol production in North America and Europe. The hydrolysis of starch may be considered a key step in the processing of starch-based feedstock for bioethanol production. Its main role is to effectively convert the two major starch polymer components, amylose (a mostly linear α-D-(1-4)-glucan) and branched amylopectin, to fermentable sugars that can subsequently be converted to ethanol by yeasts or bacteria. The most commonly used distillers' yeast, S. cerevisiae, is unable to hydrolyse starch. Traditional production of ethanol from starch requires a three-stage process: liquefaction of starch by α-amylase, saccharification of the liquefied starch by enzymes to sugars, followed by fermentation using S. cerevisiae. Amylolytic enzymes from bacteria and fungi are used for the saccharification of starch, and this adds to the overall cost of the bioethanol production process. Simultaneous saccharification of starch with an amylolytic yeast or mold and fermentation of the saccharified starch by distillers' yeast is an effective method for direct fermentation of starch (Somda et al., 2011; Nadir et al., 2009).
The sago palm, Metroxylon sagu, is an important economic species and is now grown commercially in Malaysia, Indonesia, the Philippines and New Guinea for the production of sago starch. The sago palm has great potential to make Malaysia a leading producer of starch: from less than 20,000 hectares in 1991, sago plantations in Malaysia have grown to around 53,000 hectares in 2010. One of the potential uses of the sago palm is the production of bioethanol. More importantly, sago starch is of such quality that ethanol conversion efficiencies of up to 72% can be obtained (for hydrated ethanol). Taking an optimistic yield of 20 tons of clean starch per hectare, this corresponds to an alcohol yield of 14,400 liters per hectare (1,540 gallons per acre), making sago one of the most productive energy crops.
Current world bioethanol research is driven by the need to reduce production costs. For example, improvements in feedstock pretreatment, shortening of fermentation time, lowering of enzyme dosages, improvement of overall starch hydrolysis and integration of the Simultaneous Saccharification and Fermentation (SSF) process could all contribute to cutting production costs. Many microorganisms, including Saccharomyces cerevisiae (yeast), are not able to produce ethanol from starch because they lack starch-decomposing enzymes; specific enzymes such as amylase, amyloglucosidase and pullulanase are needed to hydrolyze starch (Nurachman et al., 2010; Jamai et al., 2007). Tapai is a traditional fermented food popular in Malaysia and Indonesia. To prepare tapai, a carbohydrate source and an inoculum containing the microorganisms are necessary. The inoculum, called ragi tapai, is cheaply available in local markets. Microorganisms found in traditional ragi tapai are moulds (Rhizopus oryzae, Amylomyces rouxii, Mucor sp. and Candida utilis) and yeasts (Saccharomyces cerevisiae, Saccharomycopsis fibuliger, Endomycopsis burtonii); the moulds are strongly amylolytic (Gandjar, 2003). Microwave Irradiation-Enzyme Coupling Catalysis (MIECC) has been proven a useful tool for many enzymatic transformations in both aqueous and organic solutions (Leadbeater et al., 2007; Yadav and Sajgure, 2007; Roy and Gupta, 2003). It has been proposed that at low powers of a high-frequency electromagnetic field, non-thermal activation of the enzyme may be observed (Yadav and Lathi, 2007; Saifuddin et al., 2009). Enzymatic hydrolysis of starch is important not only for bioethanol production but also for many other industrial processes, and the study of amylolytic enzymes operating under microwave conditions is of great scientific and industrial interest.
The objective of this study was to improve bioethanol production from raw sago starch by using a co-culturing approach and microwave irradiation. Microwave pretreatment was carried out on the starch solution, as previous studies have shown that microwave irradiation pretreatment may significantly increase the conversion of starch materials to glucose (Zhu et al., 2006; Palav and Seetharaman, 2007). The bioconversion of the microwave-treated sago starch in a single-step process was performed by co-culturing commercial ragi tapai and Saccharomyces cerevisiae.
Microorganisms, culture conditions and reagents:
Saccharomyces cerevisiae ("Angel" Super Dry Yeast for fuel ethanol) was purchased from Angel Yeast Co., Ltd., China. Bacto-Peptone and yeast extract were purchased from BD Diagnostic Systems, USA. The other culture used was commercial ragi tapai (which provides the amylolytic enzymes), obtained from the local market.
The sago starch flour used in this experiment was from a single brand obtained from a local market in Malaysia. Saccharomyces cerevisiae was used for the fermentation of the hydrolyzed sago starch. Before use as inocula, both the dry S. cerevisiae (5 g) and ragi tapai (10 g) were aerobically propagated separately in 250 mL flasks containing 100 mL YEP broth media (10 g L−1 yeast extract, 10 g L−1 peptone and 5 g L−1 NaCl) at 37°C and 250 rpm for 3 h. The liquid media was autoclaved at 121°C for 15 min before the aerobic propagation.
The chemical reagents were of analytical grade and used without further purification. Sodium hydroxide was purchased from Merck; acetic acid, sulfuric acid, calcium chloride, ammonium sulphate, magnesium sulphate, anhydrous ethanol, calcium hydroxide and anhydrous glucose were purchased from J.T. Baker.
Optimizing microwave irradiation duration and sago starch concentration: In a typical experiment, 25 g of sago starch and 1 mg of CaCl2 were dispersed in 250 mL of deionized water in glass flasks (10% w/v sago starch slurry). The pH was adjusted to 7.2. The mixture was heated to 60°C for 5 min to obtain a starch slurry, which was then heated to 80°C for 45 min to reduce viscosity. The slurry was allowed to cool to 40°C before addition of 10 mL of 10% ragi tapai culture suspension. The mixture was then subjected to microwave treatment in a microwave oven (Sanyo, EM-S9515W). The output power was set at 80 W and heating times of 1-10 min were investigated. Control samples were not subjected to microwave irradiation. After the microwave treatment at each time interval, the flasks were kept in a water bath with shaker at 45°C with agitation at 60 rpm for up to 20 min, at the end of which the amount of glucose released was determined for each flask. The results indicate the optimal microwave exposure time for achieving the highest amount of glucose.
For optimization of the sago starch concentration, similar experiments were conducted using sago starch slurries of 20% (50 g/250 mL deionized water) and 30% (75 g/250 mL deionized water). The microwave output power was set at 80 W and the exposure time at 5 min (the optimal time). The amount of glucose was measured as described above after 20 min incubation at 45°C with agitation at 60 rpm. The results indicate the optimal starch slurry concentration for achieving the highest amount of glucose and the highest percentage conversion to glucose. The theoretical amount of glucose produced is 1110 g per kg starch (100% efficiency), and the percentage conversion to glucose was calculated using the following equation:

Percentage conversion to glucose = [amount of glucose produced (g per kg starch) / theoretical amount of glucose produced (g per kg starch)] × 100%
For optimization of the ragi tapai concentration, another similar experiment was conducted using a fixed sago starch slurry concentration of 10% (25 g/250 mL deionized water) and three ragi tapai concentrations of 10%, 20% and 30%. The microwave output power was set at 80 W and the exposure time at 5 min (the optimal time). The amount of glucose was measured as described above after 20 min incubation at 45°C with agitation at 60 rpm. The results indicate the optimal ragi tapai concentration for achieving the highest percentage conversion to glucose, calculated using the previously described equation.
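The conversion calculation used in these optimization experiments can be written compactly; the 1110 g per kg figure is the theoretical glucose yield stated above.

```python
# Theoretical glucose yield at 100% hydrolysis, as stated in the text
THEORETICAL_GLUCOSE = 1110.0  # g glucose per kg starch

def conversion_to_glucose(glucose_g, starch_kg):
    """Percentage conversion of starch to glucose (equation above)."""
    return glucose_g / starch_kg / THEORETICAL_GLUCOSE * 100.0
```

For instance, recovering 555 g of glucose from 1 kg of starch corresponds to a 50% conversion.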
Microwave assisted simultaneous saccharification and fermentation of sago starch: The hydrolysis of sago starch followed by yeast fermentation in a single step was performed by a sequential co-culture process. The yeast (S. cerevisiae) was added 2 h after the microwave-assisted saccharification process to initiate fermentation. The starch slurry was prepared as described above. After addition of 10 mL of 10% ragi tapai, the sago starch slurry was subjected to microwave treatment at 80 W for 5 min (the optimal time). After the microwave treatment, the flasks were kept in a water bath with shaker at 45°C with agitation at 60 rpm for 2 h to facilitate enzymatic hydrolysis (saccharification) of starch to sugar (glucose). At the end of each saccharification period, the flasks were individually fermented by addition of 10 mL of 5% S. cerevisiae culture suspension (absorbance 3.8-4.0 at 450 nm) along with (NH4)2SO4 (1.3 g L−1), MgSO4·7H2O (0.01 g L−1) and CaCl2 (0.06 g L−1). The pH was adjusted to 5.5. The mixture was then subjected to microwave treatment in a microwave oven at an output power of 80 W for 5 min. After the microwave treatment, the glass flasks were kept in an incubated shaker at 100 rpm and 37°C for 30 h. Ethanol concentration was measured after the 30 h fermentation period.
Conventional saccharification and fermentation was also performed according to the method mentioned previously but without the use of microwave irradiation. This served as the control process.
Analytical methods: The ethanol and glucose concentrations in the samples were measured at several intervals. Samples were collected every 2 h for the first 12 h (6 data points) and every 6 h thereafter for the next 60 h (10 data points), giving a total of 16 data points over the entire 72 h run. During the sago starch hydrolysis and fermentation, the content of reducing sugars, calculated as glucose, was determined by the 3,5-dinitrosalicylic acid (DNS) method (Miller, 1959). A standard curve was drawn by measuring the absorbance of known concentrations of glucose solutions at 570 nm.
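Reading a glucose concentration off the DNS standard curve amounts to fitting a straight line to the calibration points and inverting it. A sketch follows; the calibration values used in the example are made up, not the study's data.

```python
import numpy as np

def fit_standard_curve(conc, absorbance):
    """Least-squares line through the standard curve
    (absorbance at 570 nm vs known glucose concentration)."""
    slope, intercept = np.polyfit(conc, absorbance, 1)
    return slope, intercept

def read_concentration(a_sample, slope, intercept):
    """Invert the standard curve to read an unknown sample off it."""
    return (a_sample - intercept) / slope
```

With illustrative calibration points following absorbance = 0.1 + 0.2 × concentration, a sample absorbance of 0.5 reads back as a concentration of 2.0.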
The ethanol concentrations of the samples were determined using a spectrophotometric method with potassium dichromate reagent (Caputi et al., 1968). The supernatant taken at various intervals was mixed with 25 mL of chromic acid (potassium dichromate reagent) and incubated at 80°C for 15 min. After incubation, 1 mL of 40% sodium potassium tartrate was added. The absorbance was measured in a UV-Vis spectrophotometer (Spectronic 20) at 600 nm. A standard graph was plotted using different concentrations of absolute ethanol (10-100%) and measuring their absorbance at 600 nm. The ethanol concentration at each interval was determined by reading off the standard graph.
The fermentation efficiency was computed from the theoretical ethanol yield and the yield obtained in the various treatments using the equation:

Fermentation efficiency (%) = (ethanol yield obtained / theoretical ethanol yield) × 100
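As an illustrative check of this efficiency calculation, one can combine the theoretical glucose yield of 1110 g per kg starch given earlier with the stoichiometric maximum of 0.511 g ethanol per g glucose. Note that the 0.511 factor is an assumption from standard fermentation stoichiometry (2 mol ethanol per mol glucose), not a value stated in this paper.

```python
GLUCOSE_PER_KG_STARCH = 1110.0   # g, theoretical yield stated in the text
ETHANOL_PER_G_GLUCOSE = 0.511    # g, stoichiometric maximum (assumption)

def fermentation_efficiency(ethanol_g_per_kg_starch):
    """Percent of the theoretical ethanol yield actually obtained."""
    theoretical = GLUCOSE_PER_KG_STARCH * ETHANOL_PER_G_GLUCOSE
    return ethanol_g_per_kg_starch / theoretical * 100.0
```

With the 553 g of ethanol per kg starch reported in the Results, this gives roughly 97.5%, of the same order as the 97.7% efficiency quoted in the abstract.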
RESULTS AND DISCUSSION
Optimization of microwave treatment for hydrolysis of starch: Hydrolysis of starch prior to fermentation to ethanol is a very important step because the yeast S. cerevisiae is non-amylolytic and has been reported to be unable to hydrolyze starch (Jamai et al., 2007). It is, however, a very good candidate for fermentation of sugar to ethanol. Figure 1 presents the influence of the duration of the microwave treatment (80 W) on the concentration of glucose achieved after the liquefaction of the sago starch slurry. A low microwave power level was used in order to avoid enzyme denaturation and to minimize the thermal effects of the process. As shown in Fig. 1, 5 min was the optimal exposure time for the microwave treatment at 80 W output power, and this duration was therefore selected for further experiments, since maximal glucose concentrations were attained within that time. The microwave experiments were carried out at low power and constant temperature because of the well-known phenomenon of enzyme denaturation at high temperature, which decreases the catalytic activity of the enzymes. Microwave treatment helps destroy the crystalline starch structure and hence makes it easier for the enzymes to convert it to glucose. A similarly short duration of microwave treatment was also found appropriate by other investigators for destroying the starch crystalline arrangement (Palav and Seetharaman, 2007). Khanal et al. (2007) reported that ultrasound pretreatment (2 kHz; 20 and 40 s) enhanced glucose yield through reduction in particle size and better mixing; it may also help release starch from its complexes with lipids. However, Nikolic et al. (2010) found that ultrasound treatment consumed a large amount of energy, adding further to the cost of bioethanol production.
On the other hand, a similar increase in the efficiency of the hydrolysis process, probably through the same phenomenon, can be obtained by using microwave irradiation, which consumes much less energy than ultrasound.
The results from the starch concentration optimization experiments (10, 20 and 30%) indicated that as the starch slurry concentration increased, the amount of starch converted to glucose also increased. Even though the quantity of glucose released was greater from the 30% (w/v) slurry (Fig. 2), the percentage conversion to glucose was highest with the 10% (w/v) slurry (Fig. 3). When 20% (w/v) and 30% (w/v) starch slurries were used, the rate of conversion to glucose did not increase proportionally with the increase in starch content; in fact, the rate was fastest with the 10% slurry. In a highly viscous environment, i.e., at higher starch concentrations, inactivation of amylase and other enzymes during microwave irradiation may occur because of hot spots generated in the system by poor heat exchange. At lower viscosity, i.e., at lower starch concentrations, heat exchange is better and hot spots are not created. It is anticipated that at low microwave power levels non-thermal effects play a role: the active site of the enzyme molecules may undergo conformational changes that favor cleavage of the glycosidic bonds, enhancing the efficiency of the enzyme. Non-thermal (microwave) effects have been observed in a number of microwave-assisted catalytic or enzymatic reactions (La Hoz et al., 2007; Yadav and Lathi, 2007; Saifuddin et al., 2011). A previous study on ethanol production from fresh cassava mash also showed that high viscosity caused resistance to solid-liquid separation and lower fermentation efficiency (Srikanta et al., 1992). High viscosity also causes several handling difficulties during processing and may lead to incomplete hydrolysis of starch to fermentable sugar (Wang et al., 2008). There was minimal increase in the percentage conversion of starch when the ragi tapai concentration was increased from 10% to 20% and 30% with the starch slurry at 10% (Fig. 4).
Increasing the concentration of ragi tapai did not result in any further increase in starch hydrolysis. Most studies have shown that S. cerevisiae is a non-amylolytic yeast that is unable to hydrolyze starch (Jamai et al., 2007). However, a study by Azmi et al. (2010) showed that S. cerevisiae can hydrolyze starch, albeit at a very low rate. That study also reported that ragi tapai is more efficient in hydrolyzing raw starch to glucose than the fungus Candida tropicalis (Azmi et al., 2010). This subsequently produces higher ethanol yields.
Microwave assisted simultaneous saccharification and fermentation of sago starch: Microwave assisted fermentation by yeast was performed as a sequential coculture process in a single step. The S. cerevisiae (yeast) was added 2 h after the microwave assisted saccharification treatment, in order to provide a sufficient amount of sugar before the start of fermentation. Two sets of experiments (microwave assisted saccharification and fermentation, and conventional saccharification and fermentation) were performed. In the microwave assisted saccharification and fermentation, samples were subjected to 80 W microwave irradiation for 5 min after addition of ragi tapai. After a saccharification period of 2 h at 45°C, the fermentation process was started by adding the yeast and subjecting the mixture to microwave irradiation at 80 W for 5 min, after which it was incubated at 37°C for up to 36 h. Figure 5 shows the results of both the microwave assisted and the conventional saccharification and fermentation. The microwave assisted process showed higher ethanol production, with 553 g ethanol per kg starch produced after 30 h of fermentation. In the conventional process the amount of ethanol produced was 296.1 g per kg starch at 30 h of fermentation. The coculture after 2 h allows the mixed culture to hydrolyze the raw starch into glucose, making it available for S. cerevisiae to subsequently ferment into ethanol. Simultaneous saccharification and fermentation has the advantage that high sugar concentrations are never reached in the system, which keeps the enzymatic hydrolysis of starch running in the forward direction. Microwave aided ragi tapai hydrolysis followed by yeast fermentation for 36 h showed that the residual reducing sugar in the fermented broth after 30 h was about 2.25 ± 1.50 g L−1. Hence the total amount of glucose utilized was 98%, indicating the end of fermentation.
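The reported yields of 553 and 296.1 g ethanol per kg starch can be put in context of the stoichiometric maximum, assuming starch → glucose (×180/162) and the fermentation C6H12O6 → 2 C2H5OH + 2 CO2 (×92/180, i.e., 0.511 g ethanol per g glucose). A sketch under these assumptions (the `efficiency` helper is ours; the two yields are from the text):

```python
# Sketch: the reported ethanol yields relative to the stoichiometric
# maximum, assuming starch -> glucose (x 180/162) and the fermentation
# C6H12O6 -> 2 C2H5OH + 2 CO2 (x 92/180, i.e., 0.511 g ethanol/g glucose).
# The efficiency() helper is ours; the two yields are from the text.

MAX_ETHANOL_G_PER_KG_STARCH = 1000.0 * (180.0 / 162.0) * (92.0 / 180.0)

def efficiency(ethanol_g_per_kg_starch: float) -> float:
    """Yield as a percentage of the stoichiometric maximum."""
    return 100.0 * ethanol_g_per_kg_starch / MAX_ETHANOL_G_PER_KG_STARCH

print(f"theoretical maximum: {MAX_ETHANOL_G_PER_KG_STARCH:.0f} g/kg starch")
print(f"microwave assisted : {efficiency(553.0):.1f}% of theoretical")
print(f"conventional       : {efficiency(296.1):.1f}% of theoretical")
```

The stoichiometric ceiling works out to roughly 568 g ethanol per kg starch, so the two reported yields correspond to roughly 97% and 52% of theoretical, respectively.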
This indicated that most of the sugars formed were simultaneously fermented to ethanol before they could accumulate and inhibit the fermentation process through osmotic pressure on the cells (Bai et al., 2008). For the non-microwave process, the residual reducing sugar in the fermented broth after 30 h was about 10.60 ± 1.50 g L−1. When the results obtained in the simultaneous saccharification and fermentation of corn meal with microwave (Nikolic et al., 2008) and ultrasound pretreatment (Nikolic et al., 2010) were compared to the control sample, the ethanol concentration was increased by 13.4% by microwave and by 11.15% by ultrasound pretreatment. It was suggested that the mechanism of microwave action on swelling and gelatinisation of starch granules and destruction of the starch crystalline arrangement was probably different from that of ultrasound. However, in this study the improvement recorded with microwave treatment for both processes (saccharification and fermentation) gave a total efficiency improvement of about 45.5%. As stated earlier, low power microwave irradiation does not raise the temperature beyond 40°C and hence denaturation of the enzyme is avoided. The significant contributor is proposed to be the non-thermal effects, while the thermal effect plays only a minor role under these conditions. "Thermal" effects refer to interactions resulting in increased random motion of particles (e.g., atoms, molecules, ions, or electrons) where the kinetic energy statistics of such fluctuations are represented by a single thermodynamic equilibrium distribution (i.e., Maxwell-Boltzmann, Bose-Einstein, or Fermi-Dirac). "Non-thermal" effects refer to interactions resulting in non-equilibrium energy fluctuation distributions or deterministic, time-averaged drift motion of matter (or both) (Kuhnert, 2002; Booske et al., 1997).
This gives molecular collisions under microwave irradiation an extra driving force compared to conventional heating, which results in a higher reaction rate under microwave irradiation as long as the enzyme is not deactivated by the microwaves. The other contribution of the non-thermal effects is that microwave energy can also modulate the configuration of enzyme molecules by accelerating the molecular rotation and electron spin oscillation of the active site of the enzyme, which can give the substrates more chances to fit the enzyme per unit time (Saifuddin et al., 2011). This specific nature of the enzyme increases the yield tremendously. Li and Yang (2008), in the production of zeolite membranes, speculated on the non-thermal effects of microwave reactions. Apparently, microwave heating could result in different membrane morphology, orientation and composition for the zeolite membranes. Huang et al. (2005) reported that there is a microwave effect on the substrate specificity of alcohol. The altered reaction rate can be explained in terms of two important parameters: polarity and steric hindrance effects.
CONCLUSION
Ragi tapai was chosen based on its ability to produce glucose and ethanol directly from starch, as reported previously by other researchers, albeit with low yields (Azmi et al., 2010). Since the yields are low, coculturing with yeast was proposed as a way to increase the yield. The maximum processing time needed was 2 h of hydrolysis and 30 h of fermentation for the ragi tapai-yeast system. Rapid utilization of the glucose by yeast also prevented bacterial contamination in the broth, permitting an almost complete conversion of glucose to ethanol. Simultaneous single-step bioconversion of unhydrolyzed sago starch into ethanol will not only reduce the cost of the enzymes used in the liquefaction and saccharification steps but will also reduce substrate inhibition, especially on yeast cells. Besides the advantage of using cheap and readily available ragi tapai for conversion of starch to glucose, the feedstock is also cheaper than other starches such as corn and potato. The present study highlighted the importance of well controlled microwave assisted enzymatic reactions for enhancing the overall reaction rate of the process. Some general statements are worth pointing out: (1) Enzymatic hydrolysis of starch using typical enzymes may successfully be carried out under microwave conditions; (2) The effect of microwave irradiation strongly depends on: (a) the microwave power level, since higher levels of MW may cause denaturation of the enzyme; (b) the viscosity of the reaction system, which is a function of starch concentration: in less concentrated slurry the diffusion of heat is uniform with no hot spots, allowing the reaction rate to increase without denaturation of the enzyme; (3) The dominant factor in the microwave assisted reaction in this study may be treated as non-thermal effects; (4) The Microwave Irradiation-Enzyme Coupling Catalysis (MIECC) effect on ethanol production showed a reaction rate increase of close to two-fold.
In future work, the ethanol produced could be tested on a spark-ignition engine to monitor the emissions and other thermodynamic parameters. The ethanol can also be tested for compliance with ASTM D4806, the standard for anhydrous denatured fuel ethanol for blending with gasoline, and ASTM D5798, the standard specification for fuel ethanol (Ed75-Ed85) for automotive spark-ignition engines.
Latent-transforming growth factor β-binding protein 2 accelerates cardiac fibroblast apoptosis by regulating the expression and activity of caspase-3
Cardiac fibrosis is a core process in the development of heart failure. However, the underlying mechanism of cardiac fibrosis remains unclear. Recently, a study found that in an isoproterenol (ISO)-induced cardiac fibrosis animal model, there is high expression of latent-transforming growth factor β-binding protein 2 (LTBP2) in cardiac fibroblasts. Whether LTBP2 serves a role in cardiac fibrosis is currently unknown. In the present study, mouse cardiac fibroblasts (MCFs) were treated with 100 µM ISO for 24, 48, or 72 h, and small interfering RNAs (siRNAs) were used to knock down LTBP2. Reverse transcription-quantitative PCR and western blotting were used to determine gene and protein expression levels, respectively. Caspase-3 serves a key role in cell apoptosis and is related to cardiac fibrosis-induced heart failure. Caspase-3 activity was therefore determined using a caspase-3 assay kit, CCK-8 was used to determine the rate of cell proliferation and apoptosis rates were quantified using a cell death detection ELISA kit. The present study demonstrated that cell apoptosis and LTBP2 expression increased in MCFs treated with 100 µM ISO in a time-dependent manner. Expression and activity of caspase-3 also increased in MCFs treated with 100 µM ISO for 48 h compared with the control group. In addition, ISO-induced MCF apoptosis, along with the increased expression of caspase-3, was partly abolished when LTBP2 was knocked down. In conclusion, LTBP2 expression increased in ISO-treated MCFs and accelerated mouse cardiac fibroblast apoptosis by enhancing the expression and activity of caspase-3. LTBP2 may therefore be a potential therapeutic target for treating patients with cardiac fibrosis.
Introduction
Chronic heart failure is a critical end-complication and comorbidity of various cardiovascular diseases, such as hypertension and coronary heart disease (1). Cardiac fibrosis is an inevitable intermediate pathological process during the development of heart failure (2). Cardiac fibrosis is characterized by a net accumulation of extracellular matrix proteins (including fibrillar collagen types I and III, elastin and fibronectin) in the cardiac interstitium and contributes to both systolic and diastolic dysfunction in myocardial remodeling and heart failure (3,4). Cardiac fibroblasts (CFbs), the most prevalent cell type in the heart, serve a key role in the adverse cardiac fibrosis that occurs with heart failure (5,6). CFbs are recognized to make a fundamental contribution to the cardiac response to a variety of injuries (7). For example, when ischemia-reperfusion injury occurs in the myocardium, triggering an inflammatory response, the inflammasome is activated in cardiac fibroblasts but not in cardiac myocytes (8). Furthermore, the inflammatory response generated by this injury is the key pathophysiological process in myocardial remodeling and the progression of heart failure (9). In the fibrotic heart, excess cardiac fibroblasts impair the electromechanical coupling of cardiomyocytes, hence increasing the risk of arrhythmogenesis and mortality (7). Hence, CFbs are regarded as a therapeutic target for heart failure (10).
Recently, a study demonstrated genetic variation in cardiac fibrosis by characterizing the response of CFbs from multiple inbred mouse strains to isoproterenol (ISO) treatment (8). In addition, the study identified latent-transforming growth factor β-binding protein 2 (LTBP2) as a marker for cardiac fibrosis (11). LTBP2 is a protein expressed in elastic tissues, such as heart and muscle (12); it interacts with a number of matrix components, including collagen and fibronectin (13), and is a member of a superfamily of extracellular proteins comprising the fibrillins and latent-transforming growth factor β-binding proteins (13). However, whether LTBP2 serves a role in cardiac fibrosis remains unclear. Recent studies have shown that LTBP2 is associated with apoptosis (14,15); however, whether the upregulation of LTBP2 expression promotes or inhibits apoptosis remains controversial. Liang et al (14) demonstrated that LTBP2 knockdown alleviated apoptosis in osteosarcoma cells in vitro, while Suri et al (15) demonstrated that LTBP2 deficiency induced apoptosis in trabecular meshwork cells in vitro (16). Hence, whether LTBP2 affects cell apoptosis in cardiac fibrosis and its role in CFb apoptosis are important to understand in the context of cardiac fibrosis as well as heart failure development. More importantly, the mechanism through which LTBP2 affects cell apoptosis has not been elucidated. Mitochondria-related apoptosis is a key process in cardiac fibrosis (17,18), and caspase-3, the execution protein of mitochondria-induced cell apoptosis, serves a pivotal role in heart failure (19). A previous study illustrated that transforming growth factor β (TGF-β) induces apoptosis by targeting caspase-3 activity, and LTBP2 has been identified to form latent complexes with TGF-β (20).
Based on these aforementioned results, it was hypothesized that LTBP2 may promote CFb apoptosis by activating caspase-3. The current study was conducted to explore the role and mechanism of LTBP2 in ISO-induced apoptosis in CFb to provide a potential therapeutic target for heart failure.
Materials and methods
Mouse cardiac fibroblast (MCF) culture and treatment. MCFs (cat. no. CP-M074) were obtained from Procell Life Science & Technology Co. Ltd. Cells were cultured in 30 ml of DMEM (Gibco; Thermo Fisher Scientific, Inc.) containing 10% FBS (Gibco; Thermo Fisher Scientific, Inc.) and placed in a 37˚C, 5% CO2 incubator for 90 min of absolute static incubation. When changing the culture medium, the dish was shaken in a cross pattern and the principle of differential cell adhesion was used to wash away and discard non-adherent cells. After repeated washing with PBS at 37˚C, 10 ml of DMEM containing 10% FBS was added to the cells. Cells were then placed in an incubator for static culturing and the medium was changed every 3 days. Cells were passaged when 90% confluency was reached. MCFs were then treated with 100 µM ISO (8) (Sigma-Aldrich; Merck KGaA) in a 37˚C, 5% CO2 incubator for 0, 24, 48 or 72 h, while the control group was treated with an equivalent amount of PBS.
Small interfering (si)RNA transfection. LTBP2 knockdown by siRNA has been reported to reverse myocardial oxidative stress injury, fibrosis and remodelling during dilated cardiomyopathy. Therefore, LTBP2 and scrambled negative control (NC) siRNAs were obtained from Shanghai Shenggong Biology Engineering Technology Service, Ltd. MCFs were grown in 12-well plates at a density of 6-8x10^5 cells/well and incubated in a 37˚C, 5% CO2 incubator for 12 h. Samples were then transfected with a mixture containing 4 µl of Lipofectamine® 2000 (Invitrogen; Thermo Fisher Scientific Inc.) and 3 µl of siRNA in 125 µl of Opti-MEM (Invitrogen; Thermo Fisher Scientific Inc.). Cells were then incubated at 37˚C in 5% CO2 for another 12 h, after which the medium was changed to DMEM containing 100 µM ISO or PBS. Knockdown efficiency was determined by western blotting. The primer sequences used were as follows: siRNA LTBP2 forward, 5'-CCG GGU UAU AAG CGG GUU AUU-3' and reverse, 5'-GAA CCA AAC GUC UGC GCA AUU-3'; scrambled NC siRNA forward, 5'-UUC UCC GAA CGU GUC ACG UTT-3' and reverse, 5'-ACG UGA CAC GUU CGG AGA ATT-3'.
Reverse transcription-quantitative (RT-q)PCR. The thermocycling conditions for qPCR were as follows: pre-denaturation at 95°C for 10 min, followed by 40 cycles at 95°C for 15 sec and 60°C for 60 sec, and melting curve analysis at 60-95°C with a 0.3°C rise every 15 sec. The 2^-ΔΔCq method was used for quantification (21). Experimental methods and conditions were performed according to the instructions of the PrimeScript RT Master Mix kit (Takara Bio, Inc.; cat. no. RR036A). Experimental group MCFs were treated with 100 µM ISO at 37˚C in 5% CO2, while control group cells were treated with an equivalent amount of PBS. After 72 h, all cells were harvested. Total RNA was isolated from MCFs using RNAiso reagent (Takara Bio, Inc.). The PrimeScript RT reagent kit (Takara Bio Inc.) was used to transcribe isolated RNA to cDNA according to the manufacturer's protocol. qPCR was performed using a SYBR Premix Ex Taq II kit (Takara Bio Inc.). The mRNA level of β-actin was used as an internal control. The primer sequences used were: LTBP2 forward,

Western blotting. 5x10^6 MCFs were collected and homogenized in lysis buffer (Beyotime Institute of Biotechnology) containing protease and phosphatase inhibitors (Roche Diagnostics) and incubated on ice for 30 min. Whole cell lysates were centrifuged at 12,000 x g for 15 min at 25˚C and protein concentrations were determined using the bicinchoninic acid (BCA) assay (Beyotime Institute of Biotechnology). Proteins (20 µg) were then separated by 8% SDS-PAGE and transferred onto PVDF membranes (EMD Millipore). Membranes were blocked with 5% skimmed milk in Tris-buffered saline solution with 0.1% Tween-20 (TBS/T) at room temperature for 2 h and subsequently incubated with specific primary antibodies (all Abcam) against LTBP2 (1:1,000; cat. no. ab121193), caspase-3 (1:1,000; cat. no. ab184787) and β-actin (1:2,000; cat. no. ab8226) at 4˚C overnight.
The membranes were washed with TBS/T 3 times and then incubated with an HRP-conjugated secondary antibody (goat anti-mouse IgG; 1:2,000, Beyotime Institute of Biotechnology; cat. no. A0216) at 37˚C for 2 h. Protein bands were visualized by chemiluminescence detection (ChemiDoc MP; Bio-Rad Laboratories, Inc.) using Image Lab software for densitometry (version 5.2; Bio-Rad Laboratories, Inc.).
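The relative-expression step of the RT-qPCR analysis above, the 2^-ΔΔCq method (21), can be sketched as follows; the Cq values here are invented purely for illustration:

```python
# Sketch of the 2^-ddCq relative-quantification step cited above (21),
# normalizing the target gene (e.g., LTBP2) to beta-actin and the treated
# sample to the control. The Cq values are invented for illustration.

def fold_change_ddcq(cq_target_treated: float, cq_ref_treated: float,
                     cq_target_control: float, cq_ref_control: float) -> float:
    dcq_treated = cq_target_treated - cq_ref_treated  # normalize to reference
    dcq_control = cq_target_control - cq_ref_control
    ddcq = dcq_treated - dcq_control
    return 2.0 ** (-ddcq)

# Example: target Cq two cycles earlier (relative to beta-actin) under ISO
fold = fold_change_ddcq(24.0, 18.0, 26.0, 18.0)
print(f"fold change vs control: {fold:.1f}x")  # 2 cycles -> 2**2 = 4.0x
```

Since each PCR cycle roughly doubles the product, a Cq shift of one cycle corresponds to a two-fold change in expression, which is exactly what the exponent encodes.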
Caspase-3 activity. Caspase-3 activity was determined using a colorimetric caspase-3 assay kit (Abcam). 5x10^6 MCFs were collected, resuspended in 50 µl of lysis buffer and incubated on ice for 15 min; subsequently, 50 µl of 2X reaction buffer was added to each sample. DEVD-pNA substrate (1 mM) was added, and samples were incubated for 2 h at 37˚C. Caspase-3 activity was measured at an optical density (OD) of 400 nm. Protein concentration was determined by BCA assay (Beyotime Institute of Biotechnology) as aforementioned. Caspase-3 activity was normalized to the protein concentration.
Cell proliferation. The CCK-8 assay (Beyotime Institute of Biotechnology) was used to determine cell proliferation. MCFs (2x10^4 cells/ml) were seeded onto 96-well plates at 100 µl per well (total of 56 wells) and further supplemented with 100 µl of DMEM containing 10% FBS at 37˚C. Experimental group cells were treated with 100 µM ISO, while controls were treated with PBS. After 24, 48 and 72 h of incubation, 100 µl of culture medium was aspirated and 10 µl of CCK-8 working solution (Beyotime Institute of Biotechnology) was added for 1 h at room temperature. The absorbance (A) values at 450 nm were measured by a microplate reader, with blank control wells used to set zero. By measuring absorbance at 450 nm after continuous detection for 7 days, the proliferation rate of MCFs was calculated.
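A minimal sketch of how relative proliferation might be derived from the CCK-8 absorbance readings described above, with blank subtraction; the absorbance values are hypothetical, not measured data:

```python
# Sketch: relative proliferation from the CCK-8 absorbance at 450 nm,
# with blank wells subtracted as zero. The absorbance values below are
# hypothetical, not measured data from this study.

def relative_proliferation(a_sample: float, a_control: float,
                           a_blank: float = 0.05) -> float:
    """Sample viability as a percentage of the PBS-treated control."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

timepoints_h = (24, 48, 72)
a_iso = (0.62, 0.70, 0.75)  # hypothetical ISO-treated wells
a_ctl = (0.80, 1.10, 1.45)  # hypothetical control wells
for t, s, c in zip(timepoints_h, a_iso, a_ctl):
    print(f"{t} h: {relative_proliferation(s, c):.0f}% of control")
```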
Cell apoptosis. Cell Death Detection ELISA kit (Roche Diagnostics GmBH) was used to evaluate the apoptotic rates according to the manufacturer's instructions. Cells (2x10 4 cells/ml) were treated with ISO or PBS and centrifuged at 11,000 x g at 25˚C for 10 min. After the supernatant was removed, cell pellets were incubated with 200 µl lysis buffer for 30 min at room temperature. Cytoplasmic lysates were transferred to a streptavidin-coated plate, then a mixture of anti-DNA-POD and anti-histone-biotin was added at 25˚C for 2 h. Absorbance at 405 nm was detected with a reference wavelength at 490 nm (Synergy Mx; BioTek Instruments, Inc.). Apoptotic rate was determined by measuring the absorbance at 405 nm.
Statistical analysis. SPSS 20.0 software (IBM Corp.) was used for statistical analysis. Results are expressed as the mean ± standard error of the mean (SEM). All experiments were repeated at least 3 times. An unpaired t-test was performed for comparisons between 2 groups, while multiple groups were compared using one-way analysis of variance (ANOVA) followed by the post hoc Tukey's test. P<0.05 was considered to indicate a statistically significant difference, while P<0.01 was considered to indicate a highly significant difference.
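The comparisons described above (unpaired t-test for two groups, one-way ANOVA with Tukey's post hoc test for more) can be sketched with SciPy rather than SPSS; the three groups below are invented numbers used only to illustrate the workflow:

```python
# Sketch of the statistical comparisons described above, using SciPy:
# an unpaired t-test for two groups, and one-way ANOVA followed by
# Tukey's HSD post hoc test for three. The group values are invented.

from scipy import stats

control = [1.00, 0.95, 1.05, 0.98]
iso = [1.80, 1.95, 1.70, 1.85]
iso_si_ltbp2 = [1.30, 1.25, 1.40, 1.35]

# Two groups: unpaired (independent-samples) t-test
t_stat, p_two = stats.ttest_ind(control, iso)
print(f"t-test control vs ISO: p = {p_two:.2e}")

# Three groups: one-way ANOVA, then Tukey's HSD for pairwise comparisons
f_stat, p_anova = stats.f_oneway(control, iso, iso_si_ltbp2)
tukey = stats.tukey_hsd(control, iso, iso_si_ltbp2)
print(f"one-way ANOVA: p = {p_anova:.2e}")
print(tukey)
```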
Results

Cell apoptosis and LTBP2 expression increased in a time-dependent manner in MCFs treated with ISO.
Compared with the control, MCFs treated with 100 µM ISO showed decreased proliferation at 24, 48 and 72 h (Fig. 1A), while cell apoptosis was enhanced in a time-dependent manner in ISO-treated cells at 24, 48 and 72 h (Fig. 1B). Similar to the changes observed in cell apoptosis, LTBP2 mRNA expression significantly increased in cells treated with ISO and demonstrated a time-dependent increase (Fig. 1C). Since there were no significant changes in LTBP2 mRNA expression at 72 h compared with 48 h, the latter timepoint was chosen for protein detection, which demonstrated increased LTBP2 protein expression following ISO treatment compared with the control (Fig. 1D). These results revealed that cell apoptosis and LTBP2 expression increased in a time-dependent manner in MCFs treated with ISO.
LTBP2 is involved in ISO-induced MCF apoptosis. siRNA transfection was used to knockdown the expression of LTBP2 and the role of LTBP2 in ISO-induced MCFs apoptosis was investigated. RT-qPCR and western blotting demonstrated successful LTBP2 knockdown ( Fig. 2A and B), and results of the CCK-8 assay demonstrated that cell proliferation was inhibited by ISO (Fig. 2C). Knocking down LTBP2 partially reversed the inhibition induced by ISO compared with scrambled siRNA (Fig. 2C), as well as abated the enhanced apoptosis caused by ISO treatment compared with scrambled siRNA (Fig. 2D). The results demonstrated the involvement of LTBP2 in the ISO-induced apoptosis of MCF.
Caspase-3 expression and activity increase in MCFs treated with ISO.
Gene and protein expression of caspase-3, measured by RT-qPCR and western blotting respectively, increased in cells treated with 100 µM ISO for 72 h compared with that in control cells (Fig. 3A and B). In addition, enhanced caspase-3 activity was demonstrated following ISO treatment compared with the control group (Fig. 3C). These results demonstrated that caspase-3 was involved in ISO-induced MCF apoptosis.
LTBP2 increases MCF apoptosis by regulating caspase-3 expression and activity.
To examine whether the effect of LTBP2 on MCF apoptosis is associated with caspase-3, LTBP2 was knocked down and the gene and protein expression of caspase-3 was quantified in cells treated with ISO. The RT-qPCR and western blotting results demonstrated that LTBP2 deficiency partially reversed the increase in caspase-3 expression induced by ISO treatment (Fig. 4A and B). In addition, the results indicated that knocking down LTBP2 inhibited the increase in caspase-3 activity caused by ISO compared with the control group (Fig. 4C). These results revealed that LTBP2 affects ISO-induced MCF apoptosis through caspase-3.
Discussion
LTBP2 is a protein that associates with the extracellular matrix and binds to it in a fibrillin-dependent manner (22,23). In humans, it has been confirmed that LTBP2 serves a pivotal role in primary congenital glaucoma with defects in the trabecular network (16,24), and loss of LTBP2 also results in an autosomal recessive ocular syndrome (25). In addition, a series of clinical studies demonstrated that LTBP2 is related to coronary artery disease and heart failure (26,27), and a recent study revealed increased expression and localization of LTBP2 in the fibrotic regions of the myocardium after injury in mice and in patients with heart failure (11). Together, the findings of the aforementioned studies suggested that LTBP2 may play a role in the development of heart failure. However, its function as well as the underlying mechanism by which LTBP2 may be involved in heart failure remains unknown. Cardiac fibroblasts play a key role in heart failure (5,6); hence, there is great value in investigating the function of LTBP2 in this process.
The findings of the present study demonstrated that LTBP2 expression increased in a time-dependent manner in ISO-treated MCFs, which is consistent with a previous study (11) and implied that LTBP2 is involved in the development of cardiac fibrosis. In addition, the present study demonstrated that compared with control, ISO inhibited the proliferation of MCFs, which was alleviated by knocking down LTBP2 expression. This implies a role for LTBP2 in ISO-induced MCF apoptosis. LTBP2 has been reportedly involved in other types of cell apoptosis in recent studies, but its function in these processes varies. Liang et al (14) demonstrated that depletion of LTBP2 partly abolished the apoptosis of osteosarcoma cells induced by a microRNA-421 inhibitor. Suri et al (15) demonstrated that knockdown of LTBP2 induced apoptosis in trabecular meshwork cells compared with the control group. The present study demonstrated that LTBP2 deficiency partially reversed ISO-induced apoptosis, which is consistent with the results of Liang et al (14), but is contrary to that of Suri et al (15). The reason behind these contradictory results remains unknown; a possible explanation for this may be that LTBP2 is regulated by different pathways in different cells or when treated with different factors. Further studies are required to confirm this.
In addition, the findings of the present study revealed that LTBP2 enhances MCF apoptosis by regulating caspase-3 expression and activity which has not been previously reported. Caspase-3 plays a key role in cell apoptosis (28) and activation of caspase-3 is related to heart failure caused by cardiac fibrosis (19,29). The results of the present study demonstrated that knocking down LTBP2 led to a decrease in caspase-3 expression and inhibition of caspase-3 activity in MCFs treated with ISO. The mechanism through which LTBP2 regulates caspase-3 expression and activity is unclear. As mentioned previously, TGF-β may serve as a mediator between LTBP2 and caspase-3 and is a potential link that needs further studies to be substantiated.
Although studies have shown that serum levels of LTBP2 are increased in patients with heart failure, the effects of LTBP2 on heart failure have not yet been investigated (27). Appropriate proliferation of cardiac fibroblasts is a means through which the heart repairs itself in response to damage and restores function (7). However, excessive fibroblast proliferation can also trigger cardiac fibrosis and lead to the progression of heart failure (7). Whether the apoptosis of cardiomyocytes triggered by upregulation of LTBP2 promotes cardiac fibrosis and the progression of heart failure or aids in its recovery needs to be further verified in animal studies. In addition, since the TGFβ pathway is closely associated with cardiac fibrosis (30), the effect of LTBP2 on cardiac fibrosis may also be related to changes in the expression of TGFβ.
There are several limitations of the present study. Firstly, apoptosis was detected using the ELISA instead of flow cytometry. CCK-8 assay can be used to determine cell viability, proliferation and cytotoxicity, but is not a good measure of apoptosis (31). Secondly, the in vitro results of the present study were not verified in animal models. Further research should be undertaken to investigate whether LTBP2 deficiency in animal models can lead to cardiac fibrosis as well as heart failure, and these results may provide new targets for treatment and prevention of heart failure.
In conclusion, the results revealed that LTBP2 and caspase-3 expression was increased in cells during ISO-induced MCF apoptosis. Inhibition of LTBP2 expression alleviated ISO-induced MCF apoptosis by suppressing caspase-3 expression. The results also indicated that LTBP2 may influence the development of cardiac fibrosis by regulating caspase-3-induced cardiac fibroblast apoptosis. The results of the current study may provide novel ideas for mitigating the progression of heart failure.
Established approaches and inspirations from models of neuronal spikes
Complexity and limited knowledge render it impractical to write down the equations describing a cellular system completely. Cellular biophysics uses hypotheses-based modelling instead. How can we set up models with predictive power beyond the experimental examples used to develop them? The two textbook systems of cellular biophysics, Ca signalling and neuronal membrane potential dynamics, both face this question. Both systems also have a non-equilibrium feature in common: on different time scales and for different observables, they exhibit stochastic spiking, i.e., sequences of stereotypical events that are separated by statistically distributed intervals, the interspike intervals (ISI). Here we review recent progress on the description of Ca spikes in terms of blips, puffs and cellular Ca spikes and focus on stochastic models that can explain the statistics of the single ISI, in particular its mean and variance and the cell-to-cell variability of these statistics. We also review models of the stochastic integrate-and-fire type and measures like the spike-train power spectrum or the serial correlation coefficient that are used to describe neuronal spike trains. These concepts from computational neuroscience might be applicable for understanding long-term memory effects in Ca spiking that extend beyond a single ISI, such as cumulative refractoriness.
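The single-ISI measures named above (mean, variance and, implicitly, the coefficient of variation) together with the serial correlation coefficient can be computed directly from a spike train. This is a generic sketch with synthetic spike times, not data from any study reviewed here:

```python
# Generic sketch: the single-ISI statistics discussed here (mean,
# variance, coefficient of variation) plus the lag-1 serial correlation
# coefficient of consecutive ISIs. Spike times below are synthetic.

import statistics

def isi_stats(spike_times):
    isi = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    mean = statistics.mean(isi)
    var = statistics.pvariance(isi)
    cv = var ** 0.5 / mean  # coefficient of variation
    # lag-1 serial correlation coefficient rho_1 of consecutive ISIs
    num = sum((a - mean) * (b - mean) for a, b in zip(isi, isi[1:]))
    rho1 = num / sum((a - mean) ** 2 for a in isi)
    return mean, var, cv, rho1

spikes = [0.0, 10.0, 22.0, 31.0, 43.0, 52.0, 65.0, 74.0]  # synthetic, in s
mean, var, cv, rho1 = isi_stats(spikes)
print(f"mean ISI = {mean:.2f} s, var = {var:.2f} s^2, "
      f"CV = {cv:.2f}, rho_1 = {rho1:.2f}")
```

An alternating short-long pattern like the one above yields a negative rho_1, whereas a renewal process, in which consecutive ISIs are independent, would give rho_1 close to zero.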
Introduction
The program of theoretical physics for understanding a given system is to specify first principles to it and to solve the resulting equations. That program has been extremely successful and defined our idea of an exact and quantitative science. The predictive power of the first principles originates from the astonishing correspondence between experimental objects and mathematical structures. The mechanics of macroscopic objects corresponds to variational principles and differential equations, the behaviour of microscopic objects corresponds to operator theory in Hilbert spaces.
The biophysics and biochemistry of cells obey the first principles, too. But cells consist of many components and interactions. Specifying the fundamental equations of physics to a living cell is close to impracticable. The approach of theoretical biophysics is consequently what usually is called mathematical modelling. Instead of a derivation from first principles, a hypothesis on the components and interactions assumed to be most relevant for a specific process of interest defines the model equations. The assumptions need to be verified retrospectively by contrasting model predictions with experimental results. Modelling has to find the balance between capturing all relevant components, manageable complexity and the purpose of the model. Within this balance and in particular since modelling lacks the certainty of first principles, it is fundamental to start the formulation of the model equations within the mathematical structures to which the system to be modelled corresponds to gain predictive power. Otherwise a model might capture the experiment used to develop it, but very likely fails in predictions beyond this specific setting.
Only a few cellular dynamical systems are currently characterized well enough for identifying the mathematical structure corresponding to them. Intracellular Ca 2+ dynamics is one of them. The Ca 2+ pathway translates extracellular signals into intracellular responses by increasing the cytosolic Ca 2+ concentration in a stimulus dependent pattern [7,32,94]. The concentration increase can be caused either by Ca 2+ entry from the extracellular medium through plasma membrane channels, or by Ca 2+ release from internal storage compartments. In the following, we will focus on inositol 1,4,5-trisphosphate (IP 3 )-induced Ca 2+ release from the endoplasmic reticulum (ER), which is the predominant Ca 2+ release mechanism in many cell types. IP 3 sensitizes Ca 2+ channels (IP 3 Rs) on the ER membrane for Ca 2+ binding, such that Ca 2+ released from the ER through one channel increases the open probability of neighboring channels. This positive feedback of Ca 2+ on its own release channel is called Ca 2+ -induced Ca 2+ release (CICR). Opening of an IP 3 R triggers a Ca 2+ flux into the cytosol due to the large concentration difference between the two compartments. CICR sometimes strongly multiplies channel opening to a global release and concentration spike. The released Ca 2+ is removed from the cytosol either by sarco-endoplasmic reticulum Ca 2+ ATPases (SERCAs) into the ER or by plasma membrane Ca 2+ ATPases into the extracellular space.
IP 3 Rs are spatially organized into clusters of up to about fifteen channels within an area with a diameter of 100-500 nm. These clusters are scattered across the ER membrane with distances of 1-7 µm [10,53,59,85,92,93]. CICR and Ca 2+ diffusion couple the state dynamics of the channels. Given that the diffusion length of free Ca 2+ is less than 2 µm due to the presence of Ca 2+ binding molecules in the cytoplasm and SERCAs, the coupling between channels in a cluster is much stronger than the coupling between adjacent clusters [96]. The structural hierarchy of IP 3 Rs from the single channel to clusters and cluster arrays on cell level shown in Fig. 1 is also reflected in the dynamic responses of the intracellular Ca 2+ concentration as revealed through fluorescence microscopy and simulations [10,62,97,108]. Openings of single IP 3 Rs (blips) may trigger collective openings of IP 3 Rs within a cluster (puffs). Ca 2+ diffusing from a puff site can then activate neighboring clusters, eventually leading to a global, i.e., cell wide, Ca 2+ spike [35,53,62,63]. Repetitive sequences of these Ca 2+ spikes encode information that is used to regulate many processes in various cell types [7,55,73].
Ca 2+ exerts also a negative feedback on the channel open probability, which acts on a slower time scale than the positive feedback, and has a higher Ca 2+ half maximum value than CICR [10,50,63,67,99,108]. This Ca 2+ -dependent negative feedback helps terminate puffs. Therefore, the puff probability immediately after a puff is smaller than the stationary value but typically not 0. Channel clusters recover within a few seconds to the stationary puff probability from this Ca 2+ -dependent inhibition [10,50,63,67,99,108].
The negative feedback terminating global release spikes causes an absolute refractory period T min as part of the interspike intervals (ISIs) lasting tens of seconds [71,100,107]. The molecular mechanism of this feedback is pathway and cell type specific and not always known. A negative feedback on the IP 3 concentration might be involved [5,69]. Hence, the negative feedback that determines the time scale of interspike intervals is different from the feedback contributing to interpuff intervals. It requires global (whole cell) release events.
Modelling of Ca 2+ signalling has relied heavily on ordinary differential equations in the last decades, established as the rate equations for the average fractions of IP 3 Rs in states corresponding to IP 3 R state schemes and spatially averaged Ca 2+ , IP 3 and buffer concentrations [86-88,104]. This approach neglects noise and fluctuations [89]. However, the experimental evidence on both puffs and sequences of global spikes demonstrated random behavior and, therefore, the relevance of higher moments. Additionally, most models do not distinguish between local and global processes and feedbacks. In the end, this entailed dependencies of system characteristics, e.g., of the average interspike interval (period), on measurable parameters which deviate from experimental observations or require parameter values not supported by measurements [88,104]. The purpose of most models is to simulate cellular behavior, and ordinary differential equations are very convenient to that end. Their derivation, however, has to take the large fluctuations into account, i.e., has to start from stochastic theory as the mathematical structure corresponding to Ca 2+ dynamics. We will illustrate with the Siekmann IP 3 R model how this might be done.

Fig. 1 (caption): An open channel entails Ca 2+ release into the cytosol due to the large concentration difference between the ER and the cytosol. Since channels are clustered, opening of a single channel, which is called a blip, leads to activation of other channels in the cluster, i.e., a puff (middle). The cluster corresponds to a region of Ca 2+ release with a radius R cl that is fixed by the number of open channels. The stochastic local events are orchestrated by diffusion and CICR into cell-wide Ca 2+ waves, which form the spikes on cell level (top). (Figure from ref. [83].)
An alternative to simulating cellular behavior by differential equations is to determine the distribution of cellular properties generated by the noise inherent to the system [38,54]. Such an approach would correspond more to the noisy character of cell dynamics, but will only take hold, if the analysis of experimental results engages into such a view on cellular behavior and measures distributions and/or their moments [54]. We will discuss a concept for calculating the first two moments of the interspike interval distribution.
Ca 2+ spikes and their statistical measures have also some similarity with sequences of neural action potentials, the famous neural spike trains. We will also briefly discuss how concepts from computational neuroscience, such as multidimensional integrate-and-fire models and spike train power spectra, could be useful to model and analyze Ca 2+ spiking [8].

[Ca 2+ ] around or in clusters rises very fast during puffs, easily reaching tens of µM or more, Fig. 2 [6]. These are concentrations in the inhibitory regime (see the discussion of microdomains below), such that Ca 2+ release also has a fast negative feedback component on clusters.
Recent experiments on puff behavior of all three isoforms of the IP 3 R shed light on their local dynamics in form of puff frequency, puff amplitudes, open channels per puff, rise and fall times, and duration [59]. The average puff duration (full duration at half-maximum, FDHM) is about 41 ms ± 3 ms for wild-type IP 3 Rs. While the opening of clusters is explained by CICR within clusters, possible closing mechanisms of single IP 3 Rs and clusters are still being discussed. Among possible puff termination mechanisms are stochastic attrition (there is always a probability for many channels to spontaneously close together within a short time window), local ER-depletion (the ER becomes devoid of Ca 2+ locally, not able to support local cluster Ca 2+ efflux), luminal activation (regulation by Ca 2+ or other molecule species on the ER-side of the IP 3 R), or coupled gating (single closing may trigger closing of the cluster due to coupled channel dynamics) [90]. High [Ca 2+ ] together with the biphasic Ca 2+ dependency is also assumed to be at least a contributing factor of puff termination [106].
Various single channel behaviours in the course of a puff have been measured in experiments. While a steep increase of the fluorescence signal measured directly at puff sites, reflecting the quick opening of coupled channels, is common, the termination of puffs can be realized in numerous ways. Smooth decay, step-wise decay, or closing with infrequent re-openings or bursting re-openings are among the most frequent channel closing scenarios or puff shapes, respectively [106]. Occasionally, multiple IP 3 Rs within one cluster close in near-synchrony, yielding the seldom occurring block puff [106]. This occurred more often than expected for sets of independently closing channels (stochastic attrition).
Observation of neighbouring open IP 3 Rs within clusters with either one or two open channels confirmed deviations from the behavior of pairs of independent channels. This behaviour can be explained neither by fast inhibition at high Ca 2+ (biphasic open probability) nor by local ER depletion, suggesting an important but yet unknown channel coupling mechanism that leads to coupled gating and renders puff duration and channel-coupled puff termination robust.
While regulation of IP 3 Rs by luminal Ca 2+ content or other molecules inside the ER has been a seemingly intractable question for decades, recent experimental studies have found further support for the hypothesis of luminal control. IP 3 Rs have been reported to be regulated by luminal [Ca 2+ ] ER and likely the widely-expressed luminal Ca 2+ buffer protein annexin A1 (ANXA1) which together inhibit IP 3 Rs at high [Ca 2+ ] ER [102].
New findings suggest that IP 3 Rs have two distinct modes of Ca 2+ release: a punctate liberation mode during the rise of the Ca 2+ transient, followed by a diffuse mode that sustains global Ca 2+ release. The punctate mode is terminated before reaching the peak, likely through a yet unknown mechanism regulated by [Ca 2+ ] ER . These two modes could also target different effector species, regulating different downstream elements of the IP 3 -induced Ca 2+ signalling pathway [60].

2.1.2 The dynamic regime of the local dynamics

Intracellular Ca 2+ dynamics is a reaction-diffusion system. The reactions comprise release of Ca 2+ from the ER, pumping by SERCAs, buffering and the binding/unbinding with other Ca 2+ binding sites. The reaction dynamics is local; diffusion provides the spatial coupling. The dynamic regime (excitable, bistable or oscillatory) of a reaction-diffusion system is dominated by the dynamic regime of the local dynamics. From a structural point of view, the local dynamics are the cluster dynamics.
[Ca 2+ ] profiles in the vicinity of single IP 3 Rs and within clusters cannot be measured directly, but can be simulated [96] or calculated analytically in good approximation [6,96]. The [Ca 2+ ] at the cluster locations is about one or two orders of magnitude larger than spatially averaged concentration values, and decreases steeply with increasing distance from the channel, Fig. 2. This leads to the existence of microdomains of large [Ca 2+ ] at clusters with open channels, which are only weakly coupled to neighboring clusters due to the steep concentration gradients between them. It is the local Ca 2+ dynamics that affects cluster dynamics the most.

Fig. 2 (caption): While [Ca 2+ ] peaks at the cluster located at r = 0 µm, [Ca 2+ ] away from the cluster is one to two orders of magnitude smaller. Since IP 3 R dynamics are subject to [Ca 2+ ] in very close proximity to the channels, this makes meaningful cell-wide spatial averaging difficult at best [6].
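The steepness of these microdomain gradients can be illustrated with a minimal sketch: a single open channel treated as a point source with linearized mobile-buffer kinetics, a standard approximation for such profiles. All parameter values below are assumed orders of magnitude, not fits to any measurement.

```python
import math

# Hedged illustration: steady-state [Ca2+] near an open channel modelled as a
# point source with linearized buffer kinetics,
#   c(r) = c_rest + J / (4*pi*D*r) * exp(-r/lam),  lam = sqrt(D / (k_on * B)).
# All parameter values are rough assumptions for illustration only.

D = 220.0           # um^2/s, free Ca2+ diffusion coefficient
k_on = 600.0        # 1/(uM*s), buffer on-rate (assumed)
B = 100.0           # uM, free buffer concentration (assumed)
c_rest = 0.1        # uM (~100 nM resting concentration)
current_flux = 1e5  # ions/s through one open channel (assumed)
NA = 6.022e23
# ion flux -> concentration source strength in uM*um^3/s
J = current_flux / NA * 1e6 * 1e15

lam = math.sqrt(D / (k_on * B))  # characteristic length in um (~60 nm here)

def c(r_um):
    """Linearized steady-state [Ca2+] (uM) at distance r_um (um) from the pore."""
    return c_rest + J / (4.0 * math.pi * D * r_um) * math.exp(-r_um / lam)

for r in (0.01, 0.1, 1.0, 2.0):
    print(f"r = {r:5.2f} um  ->  [Ca2+] ~ {c(r):8.3f} uM")
```

The profile drops by orders of magnitude within a fraction of a micrometre, consistent with the weak coupling between clusters described above.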
The Ca 2+ concentration at closed clusters is the resting concentration in the range of ≤ 100 nM. Concentrations at open channels are >20 µM [6,96]. The dynamic range of the regulatory binding sites for both the positive and negative feedback of Ca 2+ on the open probability extends from a few hundred nM to micromolar values below 10 µM [43,49,95]. Oscillatory dynamics require concentration values in the dynamic range. However, with these large concentration changes, the system essentially never is in this dynamic range, and the regime of the deterministic limit of the cluster dynamics is either excitable or bistable (except for tiny parameter ranges) [97]. This conclusion is supported by an investigation into the time scales on cluster level. Typical interpuff intervals last a few seconds [25,26,53,99]; interspike intervals are in the range from about 20 s to a few minutes. If the local dynamics were oscillatory and caused the sequence of spikes, the time scale of the ISI should be detectable as a temporal modulation of properties of the puff sequence at a given site. However, no modulation of puff sequences on the ISI time scale could be detected and no evidence of an oscillatory regime of the local dynamics has been observed [99]. The ISI time scale has only been observed on cell level.
Replacing local Ca 2+ concentrations with globally averaged [Ca 2+ ] values as the input for IP 3 Rs, even though their values differ by orders of magnitude, leads to misleading IP 3 R and Ca 2+ dynamics [97]. Averaged global concentrations during spikes are in the dynamic range of the IP 3 R regulatory binding sites thus allowing for cluster-cluster coupling. Using them in mathematical models as the Ca 2+ concentration experienced by the IP 3 R entailed oscillatory dynamics. However, that dynamic regime shrinks to negligible parameter ranges, high frequency and tiny global amplitudes with realistic local concentrations [97] and could not be verified by local measurements [99]. Thus IP 3 R Ca 2+ dissociation constants guarantee spatial coupling but do not allow oscillatory local dynamics.
Interspike intervals of global spikes are random
Once a cluster of IP 3 Rs opens to create a puff, the released Ca 2+ diffuses within the cell. If it reaches neighbouring clusters there is a probability of triggering follow-up puffs. This can then become a self-amplifying process, until a critical number of open clusters is reached, resulting in a cell-wide Ca 2+ release event, called a Ca 2+ spike [61]. These global spikes can be measured similarly to local puffs and can be described with the same quantities, like interspike interval (ISI), duration, or amplitude. Measuring a sequence of Ca 2+ spikes over a few minutes to hours yields a spike train from which we obtain the sequence of interspike intervals, Fig. 3. Just like blips and puffs, spike times are inherently random; the ISI, as a property of subsequent spike times, is random as well. A global Ca 2+ spike has an inhibitory effect on subsequent puff events. The recovery from that inhibition takes tens of seconds, i.e., it is a negative feedback on long time scales. It creates an absolute refractory period T min during which no puffs occur.
We can quantify how random the spike timing of a given spike train is by the relation between the standard deviation σ of the ISIs and the average ISI, T av . We see in Fig. 3 that they are linearly related like

σ = α (T av − T min ). (1)

Such a linear relation has been found for all cases investigated (8 cell types and 10 conditions [17,28,31,81,100], see also [68]). The coefficient of variation of the stochastic part T av − T min of the ISIs is CV = σ/(T av − T min ) = α. The larger the CV, the more stochastic is the output of a given process. A CV value equal to 1 indicates a Poisson process, which is maximally random. A vanishing CV indicates a deterministic process. We determined the CV or α, respectively, as the slope of the linear approximation to population data as in Fig. 3, and from 2 experimental conditions with an individual cell. We found both values to agree [80,82], turning α into an observable not subject to cell variability (which is different from the results with puff sites in this respect [99]). Additionally, the value of α turned out to be robust against changes of buffering conditions [81], stimulation strength and three pharmacological perturbations of the Ca 2+ signalling system. That surprising robustness turns Eq. (1) into one of the equations defining Ca 2+ signalling from the perspective of quantitative approaches. The value of α is set by the time scale of recovery from the global negative feedback terminating the release spikes [98].

Fig. 3 (caption): Each point represents the data of one experiment, i.e., the measured spike train of a cell. The wide spread indicates large cell-to-cell variability, but there is a functional σ-T av moment relation visible as a linear fit. Plots from [81].
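The extraction of α as the slope of the population σ-T av relation can be sketched with synthetic spike trains. Each "cell" gets its own mean ISI to mimic cell-to-cell variability; the stochastic ISI part is drawn from a Gamma distribution, chosen here only because its CV can be set independently of its mean. All numbers are illustrative assumptions.

```python
import random
import statistics

# Hedged sketch: recover sigma = alpha*(Tav - Tmin) from synthetic spike trains.
random.seed(2)
ALPHA = 0.4   # assumed CV of the stochastic ISI part
TMIN = 20.0   # s, assumed absolute refractory period

def spike_train(t_av, n_isi=200):
    """Draw n_isi ISIs with mean t_av, refractory period TMIN and CV ALPHA."""
    shape = 1.0 / ALPHA**2
    scale = ALPHA**2 * (t_av - TMIN)
    return [TMIN + random.gammavariate(shape, scale) for _ in range(n_isi)]

# population of cells with widely spread average ISIs (cell variability)
points = []
for _ in range(300):
    t_av_true = random.uniform(40.0, 300.0)
    isis = spike_train(t_av_true)
    points.append((statistics.mean(isis), statistics.stdev(isis)))

# least-squares fit sigma = slope*Tav + intercept over the population
n = len(points)
mx = sum(p[0] for p in points) / n
my = sum(p[1] for p in points) / n
slope = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx)**2 for x, _ in points)
intercept = my - slope * mx

print(f"fitted slope (alpha) ~ {slope:.2f}, -intercept/slope (Tmin) ~ {-intercept/slope:.1f} s")
```

Despite the wide spread of mean ISIs across cells, the fitted slope recovers α and the intercept encodes T min , mirroring the population analysis of Fig. 3.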
The relation between average interspike interval and stimulation
Cells are stimulated by extracellular agonists [A] binding to receptors in the cell membrane. The strength of stimulation controls the intracellular concentration of IP 3 . In general, we observe only puffs at low stimulation, spikes at intermediate agonist concentration, and maintained high Ca 2+ concentration in some cell types and with some pathways at very strong stimulation. Within the spiking regime, cells respond to an increase of agonist concentration with a decrease of the average ISI, T av [36,100]. It was found for all pathways tested that the population averaged response could be well fit to a single exponential function which depends on the strength of the stimulus, given by the extracellular agonist concentration [A], Fig. 4, that is

T av = T min + (T ref − T min ) exp(−β([A] − [A ref ])). (2)

Here T min is the smallest ISI reached at strong stimulation, i.e., the absolute refractory period plus spike duration, and T ref the reference ISI at a reference agonist concentration [A ref ]. β is a constant for a given cell type and signalling pathway. We also found β to be the same for all individual cells. Hence, it is another observable defining Ca 2+ signalling from the perspective of quantitative approaches. That exponential dependency on stimulation in Eq. (2) follows from paired stimuli experiments, i.e., it runs much deeper than a simple direct fit of an ansatz to experimental data. The change of the average stochastic part of the ISI ΔT av due to an agonist concentration step Δ[A] is proportional to the average stochastic part T av1 − T min at the lower agonist concentration [100]:

ΔT av = [exp(−β Δ[A]) − 1] (T av1 − T min ). (3)
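The link between the concentration-response curve and the step relation can be checked numerically. The code below encodes the single-exponential form of Eq. (2) as reconstructed from the text (T av → T min at strong stimulation, T av = T ref at [A ref ]); all parameter values are invented for illustration.

```python
import math

# Hedged sketch of the concentration-response relation Eq. (2),
#   Tav([A]) = Tmin + (Tref - Tmin) * exp(-beta*([A] - [Aref])),
# and the step relation Eq. (3): the change of the stochastic ISI part under a
# stimulation step is proportional to the stochastic part at the lower
# concentration. Numbers are illustrative assumptions, not measured values.

TMIN, TREF, AREF = 25.0, 120.0, 10.0   # s, s, uM (assumed)
BETA = 0.05                            # 1/uM, assumed agonist sensitivity

def t_av(a):
    return TMIN + (TREF - TMIN) * math.exp(-BETA * (a - AREF))

# step [A1] -> [A2]: dTav = (exp(-beta*dA) - 1) * (Tav1 - Tmin)
a1, a2 = 10.0, 30.0
d_tav = t_av(a2) - t_av(a1)
predicted = (math.exp(-BETA * (a2 - a1)) - 1.0) * (t_av(a1) - TMIN)
print(f"Tav({a1}) = {t_av(a1):.1f} s, Tav({a2}) = {t_av(a2):.1f} s, step change {d_tav:.1f} s")
```

The identity between the direct difference and the proportionality form holds exactly for the exponential curve, which is why paired stimulation steps probe Eq. (2) beyond a mere curve fit.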
Long time scales from slow global processes and small spike probabilities
With some cell types, individual cells or experimental situations, T av is much longer than any time scale that is relevant for the state dynamics of clusters or even global cellular dynamics. From a dynamical systems point of view applying to deterministic models, this should not be possible, since each time scale requires a process setting it. However, long time scales might result simply from small probabilities and not from a slow process. Decay of a single radioactive atom for example happens at a random moment in time. If the atom is rather stable, decay is unlikely and it takes a long time on average to happen. But there is no process leading to the decay event. The state of the atom is stationary up to the time of the event. That may also apply to spike generation with the cell in the role of the atom and generation of a spike corresponding to the decay event. If the spike generation probability is small, we may observe long average ISIs and the state of the cell before the spikes is essentially stationary.
There is no process setting the long time scale in that case. Alternatively, there might be a slow process setting a long average ISI. The recovery from the negative feedback, which terminates spikes, is a prime candidate for such a slow process. The negative feedback might for example decrease [IP 3 ] [5], which then needs to recover before the next spike can occur. The inhibitory effect is a substantial decrease of the puff probability, which entails an absolute refractory period.
We can use the CV or α to assess the relative weight of small probability vs slow process in setting T av . If the CV is equal to 1, the ISIs follow an exponential distribution and are maximally random. There is no slow process setting the long time scale in that case, very similar to the radioactive decay of an atom. A CV of 0 would indicate a purely deterministic and noise-free process with vanishing deviation. If the CV value is between 0 (deterministic) and 1 (pure randomness), a slow process changes the spike probability without rendering spike generation deterministic. Note, the average ISI is not simply the inverse of the recovery rate in that regime [39]. Measured CVs are between 0.2 (e.g., stimulated hepatocytes) and 0.98 (e.g., spontaneous spiking in microglia).
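How a slow recovery process pushes the CV below 1 can be sketched with an inhomogeneous Poisson model of spike generation: after each spike the rate recovers as λ(t) = λ0(1 − exp(−γt)). The rate form and all parameters are assumptions for illustration; ISIs are drawn by the standard time-rescaling method.

```python
import math
import random
import statistics

# Hedged sketch: ISI statistics when the spike probability recovers from global
# negative feedback, modelled as an inhomogeneous Poisson process with rate
# lambda(t) = lam0*(1 - exp(-gamma*t)) after the last spike (t = 0).
# Instantaneous recovery gives the Poisson limit CV = 1; slow recovery lowers
# the CV, as discussed in the text. Parameters are illustrative.

random.seed(7)

def draw_isi(lam0, gamma):
    """Time-rescaling: solve Lambda(T) = E with E ~ Exp(1) by bisection."""
    e = random.expovariate(1.0)
    big_lambda = lambda t: lam0 * (t - (1.0 - math.exp(-gamma * t)) / gamma)
    lo, hi = 0.0, 1.0
    while big_lambda(hi) < e:          # bracket the root
        hi *= 2.0
    for _ in range(60):                # bisect to machine-level precision
        mid = 0.5 * (lo + hi)
        if big_lambda(mid) < e:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cv(lam0, gamma, n=10000):
    isis = [draw_isi(lam0, gamma) for _ in range(n)]
    return statistics.stdev(isis) / statistics.mean(isis)

cv_fast = cv(lam0=0.05, gamma=10.0)   # recovery much faster than mean ISI
cv_slow = cv(lam0=0.05, gamma=0.01)   # recovery comparable to mean ISI
print(f"fast recovery: CV ~ {cv_fast:.2f}; slow recovery: CV ~ {cv_slow:.2f}")
```

The fast-recovery case reproduces the Poisson limit CV ≈ 1, while a recovery time comparable to the mean ISI yields a clearly sub-Poissonian CV, illustrating the diagnostic use of α described above.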
Open problems
We consider as open problems what is still lacking for a theory able to derive the cellular signals from molecular properties. The large cell-to-cell variability defines here what is meaningful for theory to describe. An intuitive explanation for cell variability, among many other possibilities, might be the differences in the relative cluster positions. However, this question has not been exhausted yet either.
The puff property distributions for amplitude, duration and IPI have been simulated or described by ansatzes by a variety of groups [14-16], but have not been analytically derived yet. We cannot expect analytic expressions using realistic channel models (see below), but the distributions have not been written down even for strongly simplified models. Lock et al. recently demonstrated that all three IP 3 R isoforms generate similar puff property distributions sampled from many puff sites [59]. Hence, the distributions cannot depend on detailed molecular properties, and a simplifying approach as common ground would make sense and would be a starting point providing conceptual understanding.
The situation with respect to global signals is similar. The interspike interval distribution for ISI sequences normalized by the average has been measured for HEK cells and spontaneously spiking astrocytes [84] and simulated [37], but it has not been derived yet. The robustness of the coefficient of variation CV has been very well confirmed experimentally [81,98,100] and reproduced in simulations [72,82,83,98], but it has not been derived analytically either.
The concentration response curve of the average ISI shows an exponential dependency on the extracellular agonist concentration stimulating the cell [100]. The agonist sensitivity in the exponent is cell type and pathway specific [100]. The pre-factor of the exponent picks up all the cell variability. This detailed knowledge on the concentration response curve also awaits its theoretical explanation.
Open problems with respect to methods mainly concern the role of fluctuations. The large values of the coefficients of variation on all structural levels demonstrate that fluctuations are not negligible. Their potential role becomes more tangible by considering intracellular Ca 2+ signalling as a deterministic reaction-diffusion system. The dynamic regime is then fixed by the local dynamics. We have no experimental evidence for oscillatory local dynamics of intracellular Ca 2+ signalling [99], and the whole literature on puffs suggests the local dynamics to exhibit only time scales of a few seconds. The experimental results are compatible with an excitable regime of the local dynamics. Consequently, spikes are due to fluctuations. Concepts taking fluctuations along in systems of ordinary differential equations (ODEs) exist [109], but have not been applied to the system yet. We will discuss them also below.
Modelling concepts from molecular properties to global dynamics including fluctuations and noise
The essence of the Ca 2+ signalling system is defined by its general properties, which are also the basic requirements models should meet:

- The sequence of dynamic regimes with increasing stimulation: puffs, spikes, permanently elevated Ca 2+ . Pathway dependent, a bursting regime may follow or replace the spiking regime.
- The ISI statistics obeying relations like Eqs. (2) and (3), with T min , α and γ being cell type and pathway specific but not subjected to cell variability.
- ISIs depend sensitively on parameters of spatial coupling.
These general properties apply to all cells. Cells exhibit variability with respect to concentrations of the functional proteins, geometry of clusters and the cell-wide cluster array, ER luminal Ca 2+ content etc. The general properties of Ca 2+ signalling cannot depend on the details of these highly variable cellular characteristics, which calls for models as simple as possible but meeting the above requirements.
Puff models should start from the molecular properties of the IP 3 R. Its random state changes are the source of noise. We will use one of the most recent models of the IP 3 R to describe concepts, the Siekmann model [79], which is a Markov model based on single-channel data. We will discuss in that context, how fluctuations might enter ODE-focused modelling approaches.
Puff property distributions form the basis for modelling of global dynamics. We will discuss concepts for calculating moments of ISI distributions. Most current models adapt molecular rate constants to global time scales to reproduce measured average ISI values. However, the origin of the time scales on global level are global processes. We will sketch how to introduce these global processes into the coupling between the puff dynamics and global dynamics which allows for using realistic molecular parameters.
IP 3 R clusters as ensembles of receptors described by the Siekmann model
Several channels (up to fifteen) form a cluster. The opening of one receptor channel within a cluster ('blip') increases the open probability of the other channels in the cluster due to strong channel coupling by Ca 2+ diffusion, which may cause a puff. We consider a cluster consisting of a stochastic ensemble of N channels.
We denote the number of channels in state i according to Fig. 5 as n i ≥ 0 with N = Σ i n i , effectively removing one degree of freedom due to this requirement. A puff occurs if some critical number of channels is in the open state, motivating the study of the expectation values of the state occupation numbers n i .

Fig. 5 (caption): Only the rates connecting these two modes are Ca 2+ dependent [40,79].
The total change in probability for a set {n i } = {n 1 , n 2 , ..., n 6 } = {n 1 , n 2 , ..., n 5 , N − n 1 − ··· − n 5 } is given by the probability fluxes of the single channel transitions from state j to state k, each changing {n i } as n j → n j − 1, n k → n k + 1.
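Such an ensemble can be simulated directly with the Gillespie algorithm. The sketch below is a deliberately minimal toy version: it uses an invented three-state channel scheme (closed, open, inhibited) with made-up rates, not the 6-state Siekmann scheme, and an opening rate that grows with the number of open channels to mimic CICR inside the cluster.

```python
import random

# Hedged toy ensemble simulation of a cluster of N stochastic channels.
# States: Closed, Open, Inhibited. The scheme and all rates are invented
# placeholders for illustration, not the Siekmann model.

random.seed(1)
N = 10           # channels per cluster
K_OPEN0 = 0.05   # 1/s, spontaneous (blip) opening rate (assumed)
K_CICR = 2.0     # 1/s per open channel, intra-cluster coupling (assumed)
K_CLOSE = 10.0   # 1/s, open -> inhibited (assumed)
K_RECOVER = 0.5  # 1/s, inhibited -> closed (assumed)

def gillespie(t_end=200.0):
    """Direct Gillespie simulation; returns the largest number of open channels."""
    n_c, n_o, n_i = N, 0, 0
    t, max_open = 0.0, 0
    while t < t_end:
        r_open = n_c * (K_OPEN0 + K_CICR * n_o)   # CICR: grows with n_o
        r_close = n_o * K_CLOSE
        r_rec = n_i * K_RECOVER
        total = r_open + r_close + r_rec
        t += random.expovariate(total)
        u = random.uniform(0.0, total)
        if u < r_open:
            n_c, n_o = n_c - 1, n_o + 1
        elif u < r_open + r_close:
            n_o, n_i = n_o - 1, n_i + 1
        else:
            n_i, n_c = n_i - 1, n_c + 1
        max_open = max(max_open, n_o)
    return max_open

amp = gillespie()
print("largest number of simultaneously open channels (puff amplitude):", amp)
```

Even this crude scheme produces blip-triggered collective openings: once one channel opens, the CICR term raises the opening rate of the remaining channels, generating puff-like amplitude excursions.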
We suggest to study whether higher moments may drive puff dynamics and where the hierarchy of moment equations can be cut off. This might lead to a set of ODEs as Ca 2+ signalling model with realistic parameter values, which establishes the ability to simulate time courses with all the computational comfort ODEs provide.
Spike generation as first passage process with time dependent transition probabilities
We would like to present in this section a concept for calculating the moments of the ISI distribution, as it naturally corresponds to the random spike timing. We also suggest a method to invoke global processes modulating the local dynamics.
The stochastic element of such a formulation of spike generation is a single cluster described by its IPI, puff duration and amplitude distributions. Such an approach dispenses with detailed intracluster concentration dynamics [98]. A model in the same spirit set up to simulate time courses has been developed by Calabrese et al. [14]. Clusters open sequentially. Once a critical number N cr of open clusters has been reached, the remaining ones will open with almost certainty due to coupling by Ca 2+ diffusion and the positive feedback by CICR. There are many (N paths ) paths from all clusters closed to this critical number (see Fig. 6). The ISI distribution is the distribution of first passage times from 0 to N cr open clusters with this approach.
The negative feedback terminating spikes entails a very small cluster open probability just after a global spike, from which all clusters slowly recover. Thus, slow time scales from global processes enter as a slow time dependence of the cluster IPI, puff duration and amplitude distributions.
Linear chain of states
We suggest to radically simplify the problem, in order to reach a system which describes general properties without assumptions that restrict the validity of the results too much, and to reach possibly analytically tractable equations. We obtain a linear chain of states by averaging over all possible paths from 0 to N cr open clusters. That chain of states is indexed by the number of open clusters. The states are connected either by transition rate functions f i,i±1 (t, γ) or waiting time distributions Ψ i,i±1 (t, t − t′, γ) that both result from puff properties.
The transition probabilities pick up slow time scales by their dependence on the time t since the last global spike. The probability to leave the initial state 0 and go further up at early times after a global spike is very small, such that no puffs occur early. One can almost only move to the left in the linear chain. Figure 8 shows exemplary waiting time distributions with recovery from global negative feedback for initial and later times.
Recovery from global negative feedback is described by a transient with rate γ. For the case of transition rates, we have

f i,i+1 (t) = λ i,i+1 (1 − exp(−γt)).

After about t r = 5γ −1 the inhibitory effect vanishes and the system has recovered globally, i.e.,

f i,i+1 (t ≥ t r ) ≈ λ i,i+1 .

The description with transition rates uses asymptotically Markovian rates that are asymmetric in the sense that the recovery from global negative feedback only affects the up rates. This is the case because negative feedback influences the probability of clusters opening, not closing. For the case using waiting time distributions, they are the probability distributions from which a time value is drawn that determines when to jump to the next state, i.e., the time to the next opening or closing of a cluster. They depend on the time t since the last global spike, the relative time spent in a state Δt = t − t′, where t′ is the time of entering the current state, and the current and target state i and i ± 1, respectively. The direction of the jump is drawn from the splitting probabilities, which are the relative weights, i.e., time integrals over Ψ i,i±1 at t, of the possible outgoing transitions, and add up to one. This allows evaluating the system also for double-exponentially distributed waiting times. The first time reaching the critical nucleus N cr is equivalent to generating a cell-wide spike. We are, therefore, interested in the moments of the first passage time probability distribution to reach N cr .
Experiments show that puff times often do not strictly follow an exponential distribution, but rather a double exponential in some cases, Fig. 7. This requires using waiting time distributions instead of rate functions and to use the general master equation, which is formulated more generally in terms of probability fluxes.
Apart from choosing the state variables of the state scheme, the Ψ 's or f 's contain all the physics. This includes effects of stimulation, and positive and negative feedback from CICR on short time scales, but also recovery from global negative feedback on long time scales.
Positive feedback by CICR means that the more clusters are open, the larger is the open probability of the closed clusters. In mathematical terms, the λ i,i+1 are increasing functions of i. One possible choice is

λ i,i+1 = λ 0 (N T − i)(1 + i) k ,

where stimulation strength is included via the IP 3 -sensitive puff frequency λ 0 [26], N T is the total number of clusters, and k ∈ {1, 2, 3} is a model parameter quantifying the strength of the positive feedback. Left-going rates in their most simple form account for the independent closing of the i open clusters with a single cluster closing rate λ − ,

f i,i−1 = i λ − .

The first and second moments of the first passage time distribution from 0 to N cr open clusters can be calculated for very general f i,i±1 or Ψ i,i±1 with the method described in Falcke and Friedhoff [39]. The only requirement is that the f i,i±1 or Ψ i,i±1 can be Laplace transformed. This is possible for the Ψ i,i±1 despite their dependency on t and t − t′ if the t-dependency is exponential-like [39]. Therefore, this method provides a basis for investigating a large variety of positive feedbacks by the choice of the i-dependency of rates and waiting times, puff duration properties by the choice of left-going rates and (t − t′)-dependency, pathway properties by the choice of [IP 3 ]-dependency, etc.
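The first passage from 0 to N cr open clusters with a recovery transient in the up rates can be explored by Monte Carlo simulation. The sketch below assumes the positive-feedback form λ i,i+1 = λ 0 (N T − i)(1 + i)^k and the transient factor (1 − exp(−γt)) as illustrative choices; the time-dependent up rate is handled by thinning against its asymptotic bound. All parameter values are invented.

```python
import math
import random
import statistics

# Hedged Monte Carlo sketch of the first passage from 0 to N_CR open clusters.
# Up rates: assumed form lam0*(N_T - i)*(1 + i)**k times the recovery transient
# (1 - exp(-gamma*t)); down rates: i*lam_minus. Thinning rejects candidate
# up-jumps with probability 1 - (1 - exp(-gamma*t)).

random.seed(3)
N_T, N_CR, K = 20, 5, 3
LAM0, LAM_MINUS = 0.01, 2.0   # 1/s, illustrative

def first_passage(gamma):
    t, i = 0.0, 0
    while i < N_CR:
        up_max = LAM0 * (N_T - i) * (1 + i)**K   # asymptotic (recovered) up rate
        down = i * LAM_MINUS
        bound = up_max + down
        t += random.expovariate(bound)
        up_now = up_max * (1.0 - math.exp(-gamma * t))
        u = random.uniform(0.0, bound)
        if u < up_now:
            i += 1
        elif u < up_now + down:
            i -= 1
        # otherwise: thinned (rejected) up-event, state unchanged
    return t

def isi_stats(gamma, n=1500):
    fp = [first_passage(gamma) for _ in range(n)]
    m = statistics.mean(fp)
    return m, statistics.stdev(fp) / m

mean_fast, cv_fast = isi_stats(gamma=1.0)    # quick recovery from inhibition
mean_slow, cv_slow = isi_stats(gamma=0.02)   # slow recovery from inhibition
print(f"gamma=1.0 : mean ISI {mean_fast:6.1f} s, CV {cv_fast:.2f}")
print(f"gamma=0.02: mean ISI {mean_slow:6.1f} s, CV {cv_slow:.2f}")
```

Slowing the recovery lengthens the mean first passage time, illustrating how a global process with rate γ imprints a long time scale on the ISI even though all cluster rates act on the scale of seconds.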
Calculating the CV
The state scheme presented in Fig. 6 and its transition rate functions (or waiting time distributions) define a (generalized) master equation, which can be solved using Laplace transforms to determine the moments of the first passage time distribution to reach state N cr [39]. The only requirement on the waiting time distributions is that their Laplace transform exists. In the case of transition rate functions, solving the master equation yields the Laplace transform (denoted by a tilde) of the probability vector P̃ i (s) for a process that started in state i at t = 0. Solving the generalized master equation including the waiting time distributions gives as a solution the Laplace transforms of the probability flux vector, where A, B, E, and G are matrices that depend on the length of the chain of states N and on the transition rates or waiting time distributions, f i,i±1 and Ψ i,i±1, respectively, and r i and q̃ contain the initial conditions, as explained in [39]. Application of this theory to a chain with state-independent transitions, as it might result from a random walk, found that the CV has a minimum for a certain resonant length N̄. For a given set of parameters, CV(N), and therefore the value of N̄, can be controlled by varying the rate of recovery from negative feedback γ. The stochastic process of reaching state N̄ for the first time is, therefore, more precise than reaching smaller or larger values of N for the first time. This is interesting in its own right and for the general theory of stochastic physics, but it does not resemble the robustness of the CV against changes in N cr found in Ca 2+ signalling. There, the CV is constant and independent of the various numbers of IP 3 R clusters per cell found in experiments due to cell-to-cell variability. Hence, while the approach described in Ref. [39] provides the tools, it has not solved the problem yet.
Future modelling of Ca 2+ signalling, therefore, needs to properly define the transition rate functions f i,i±1 or the waiting time distributions Ψ i,i±1 to reproduce the measured properties of the CV, in particular its robustness against cell variability and variable conditions. CICR and spatial coupling of clusters have to be reflected by the transition probabilities to model Ca 2+ spike generation. The probability for opening of more clusters, derived from Ψ i,i+1 (t, t − t′, γ) or f i,i+1 (t), increases with the Ca 2+ concentration due to CICR, i.e., it increases with the number of open clusters. Due to spatial coupling by Ca 2+ diffusion, it also increases with the number of closed neighbors of open clusters and could thus pick up geometrical or spatial aspects. Ca 2+ -binding molecules in the cytosol decreasing Ca 2+ diffusion would decrease the probability of opening more clusters. However, this still has to be worked out.
Similarities and differences to neural spiking
It is interesting and potentially useful to discuss in which respects Ca 2+ -spiking resembles or differs from the spiking activity of neurons, a biological problem that has been quantitatively explored by mathematical modeling to an impressive extent [48,52]. This concerns the single neuron's spontaneous activity and its characterization by interspike interval (ISI) histograms, ISI correlation coefficients, and spike train power spectra, the autonomous activity of many neurons in recurrent networks, and the encoding of time-dependent stimuli.
Obvious differences between the two forms of spiking are (i) the physical quantity that undergoes spiking (Ca 2+ concentration vs trans-membrane voltage), (ii) the time-scales and typical mean ISIs (several seconds to minutes for Ca 2+ spikes vs several to hundreds of ms for neurons), and (iii) the constancy of the spike form (the shape of Ca 2+ spikes is more variable than that of neural action potentials). A technical but important difference is the typical length of experimental recordings: neural spike trains may contain many thousands of spike pulses in a quasi-stationary setting, whereas Ca 2+ spike trains are mostly limited to less than a hundred spikes. This poses a severe limitation for the determination of certain higher-order statistics, such as interspike interval correlations. Related to this, for many sensory neurons, researchers can systematically explore the information transmission by presenting well-defined sensory (e.g., acoustic, visual or electric) stimuli in the form of harmonic or broadband signals. This allows one to study whether neurons preferentially encode information about slow, intermediate or fast stimulus components (see, for instance, [9,27,77]). In Ca 2+ experiments, one is mostly concerned with presenting a certain amount of signaling molecules in a step-like manner, which resembles the first experiments in neuroscience, see e.g. the famous work of Lord Adrian [1]. This, however, seems to be only a consequence of current technical limitations, and the question of how sequences of Ca 2+ spikes encode truly time-dependent signals may come into focus once more spikes can be recorded in experiment and stimuli can be better controlled.
Biophysically, it is interesting that both spiking phenomena rely on the opening and closing of ionic channels, and that the positive and negative feedback loops are mediated by the Ca 2+ - or voltage-dependence of the opening and closing rates of these channels. The main players in the neural dynamics are the Na + - and K + -selective voltage-dependent ion channels. This is described in the framework of the famous Hodgkin-Huxley model for the voltage across the neuron membrane V(t) (see standard textbooks on the topic, e.g., [24,52]), where C is the membrane capacitance and I Ext is an external current that can serve as a stimulus. The variables I K, I Na, I L describe ionic potassium, sodium and leak currents, respectively. The parameters g K, g Na, g L and E K, E Na, E L are the corresponding maximal conductances and reversal potentials. The remaining variables m(t), n(t) and h(t) are the gating variables that are of particular importance for the generation of an action potential. They are described by a relaxation equation in which x can be substituted by n, m or h. Much like in early modeling of CICR by Ca 2+ channels, the variables m and n describe two fast binding processes that activate certain channels, while h describes a slow process that inactivates the sodium-selective channels after a depolarization of the membrane. In a Ca 2+ channel model this would correspond to the fast activation due to the binding of activating Ca 2+ and IP 3 and the slow inactivation due to the binding of inhibitory Ca 2+. The positive feedback loop that is essential to understand the upstroke of the action potential is the sodium dynamics: sodium is in excess outside the cell, and the opening probability of the Na + -selective channels increases upon depolarization. A small depolarization will thus lead to the opening of some channels, which causes Na + ions to rush into the cell, which depolarizes the membrane further, leading to more channel openings and so forth.
This positive feedback loop can be compared to puff generation via Ca 2+ -induced Ca 2+ release (CICR), but also to the accelerated puff generation via the global Ca 2+ concentration prior to a cell-wide Ca 2+ spike.
Inactivation of Na + channels and activation of K + -selective channels (with potassium being in excess inside the cell) lead to the termination of the action potential. Put in mathematical terms, negative feedback loops on a somewhat slower timescale explain the second half of the neural spike; this again is very similar on a mathematical level to the mechanism at work in Ca 2+ spiking.
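The Hodgkin-Huxley equations sketched above can be made concrete. The following sketch uses the common textbook rate functions and squid-axon parameter values (these specific numbers are the standard textbook choice, not taken from the present paper) and integrates the model with a simple Euler scheme, so that a constant external current elicits action potentials:

```python
import math

# Standard Hodgkin-Huxley parameters (textbook squid-axon values)
C = 1.0                                 # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3       # maximal conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4     # reversal potentials, mV

def alpha_beta(v):
    """Voltage-dependent opening/closing rates of the gating variables n, m, h."""
    a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    return (a_n, b_n), (a_m, b_m), (a_h, b_h)

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01):
    """Euler integration; returns the voltage trace in mV (time step in ms)."""
    v, n, m, h = -65.0, 0.317, 0.053, 0.596   # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        (a_n, b_n), (a_m, b_m), (a_h, b_h) = alpha_beta(v)
        i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
        i_k = G_K * n**4 * (v - E_K)          # potassium current
        i_l = G_L * (v - E_L)                 # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C
        n += dt * (a_n * (1.0 - n) - b_n * n)   # gating: dx/dt = a(1-x) - b x
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        trace.append(v)
    return trace
```

With a depolarizing current of about 10 uA/cm², the trace overshoots 0 mV during each action potential, illustrating the fast m-driven upstroke and the slower n/h-driven repolarization discussed above.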
There are features in the neural membrane dynamics that are sensitive to time-dependent input currents and there are features which are not. Among the latter is the exact shape of the action potential: once the voltage is sufficiently depolarized, a largely stereotypical action potential is generated. To simplify the description, one may cut out this stereotypical part of the dynamic response, as it cannot contribute to the signal transmission properties of a neuron, and focus on what is really the signal-dependent part. This is what is done in an integrate-and-fire (IF) model, where the more involved dynamics of the different ion channels and corresponding currents are subsumed in a simplified function f(V) that describes the currents up to some threshold V T. Interestingly, the particular shape of f(V) can be obtained experimentally [3,4]. Brette [11] argues that the positive Na + feedback that sets in after a particular voltage is crossed is so abrupt that a simple linear model f(V) = μ − V with constant parameter μ, i.e., the famous leaky integrate-and-fire (LIF) model, describes the sub-threshold dynamics of a real neuron best. The function s(t) could be a time-dependent signal or a stochastic process accounting for intrinsic and/or external noise. Indeed, especially the generation of the action potential is a stochastic process due to the presence of multiple sources of noise. These include channel noise, quasi-random input from other neurons (network noise) and the unreliability of synapses [101]. Many of these noise sources can be approximated by a Gaussian stochastic process, and often the simplifying assumption of strictly uncorrelated (white) Gaussian noise is made, as we will do in the remainder of this paper. We would like to mention the limitations of this assumption: for some sorts of channel noise [42,75], and very often for network noise [30,103], fluctuations display significant correlations.
Furthermore, for strong synaptic connections, the shot-noise character of neural network noise invalidates the Gaussian approximation in some cases [70].
Hence, when we want to mimic spontaneous stochastic spiking, a simple choice for the driving current is to set s(t) = √(2D) ξ(t), i.e., to use a white Gaussian noise of intensity D with ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t + τ)⟩ = δ(τ). For concreteness, we state again the standard stochastic model, the leaky integrate-and-fire model with white Gaussian noise. Note that the spike is not explicitly modelled; instead, if V(t) reaches the threshold, a spike is said to be emitted at time t i = t and the voltage variable is reset to the reset value V R. The abstract spikes are described by delta functions δ(t − t i) and form the spike train, i.e., the sum of all spikes, x(t) = Σ i δ(t − t i). The spike train is the essential output of an IF model, and its different statistics under the influence of noisy stimulation currents have been the subject of many studies (see [13,44,51,101] for reviews of stochastic IF models). We note that the reset after a spike may occur instantaneously or with some refractory period τ ref that accounts for the temporal extent of the action potential in a conductance-based model. Several statistics of neural spike trains are also routinely studied for Ca 2+ spikes. The stationary spike rate is given by an ensemble average, r 0 = ⟨x(t)⟩, but can also be determined via a time average, r 0 = N(T)/T (where N(T) is the number of spikes in the time interval T). Statistics of the interspike interval I i = t i − t i−1 (the time between two consecutive spikes) have already been discussed for Ca 2+ spikes: the mean interval ⟨I⟩ = 1/r 0, the coefficient of variation CV = √⟨(I − ⟨I⟩)²⟩ / ⟨I⟩, and, of course, the most complete description of the single interval, the full ISI probability density function (PDF) ρ(I). There are, however, also a number of statistics that are not as common in the study of Ca 2+ spikes but well established in the computational neuroscience community.
These include (i) count statistics, especially the Fano factor F(T) = ⟨(N(T) − ⟨N(T)⟩)²⟩ / ⟨N(T)⟩ that compares the growth of the spike count's variance to its mean (see, e.g., [21,105] for studies that highlight the importance of the Fano factor and [84] for a study that investigates the Fano factor in the context of Ca 2+ spiking), and (ii) the spike-train correlation function C(τ) = ⟨x(t)x(t + τ)⟩ − ⟨x(t)⟩², which describes the probability to find a spike at time t i + τ if a reference spike occurred at time t i. This statistic bears information on the spike generation process; for instance, experimentally and theoretically obtained spike-train correlation functions usually show a decreased firing probability right after a spike has occurred, reflecting refractory processes similar to what is observed in Ca 2+ puffs. Often, oscillatory activity is better characterized in the Fourier domain by the spike-train power spectrum. According to its first defining equation, the power spectrum is given by the variance of the Fourier coefficients x̃(f) of the spike train in a time window T. However, according to the second equation and the Wiener-Khinchine theorem [46], it is also given by the Fourier transform of the autocorrelation function. Turning back to the interspike intervals, we finally mention the serial correlation coefficient (SCC), which puts the covariance between two ISIs lagged by an integer k in relation to the variance of the single interval, providing a number between −1 and 1. [Fig. 9 caption: The high-frequency limit reflects the mean firing rate r 0, while the low-frequency limit bears information about the variability of the spike train. In the considered case of strong mean input μ ≫ v T, the interspike interval PDF can be approximated by an inverse Gaussian distribution that is fully characterized by r 0 and CV. Parameters: μ = 10, D = 1, v R = 0 and v T = 1.]
Correlations among ISIs may reflect slower processes that are at work in the driving input or in the intrinsic dynamics of the neuron. For instance, a negative SCC of adjacent intervals indicates that an ISI longer than the mean is on average followed by an interval shorter than the mean and/or the other way around. Such correlations have been found in many neurons (see [2,41] for reviews) and may lead to an improved information transmission [18,19]. Many of these statistics are related, as can be easily demonstrated by means of the power spectrum.
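Several of the statistics listed above (firing rate, CV, Fano factor) can be estimated from a simulated spike train. A minimal sketch, using the strongly mean-driven LIF parameters μ = 10, D = 1 mentioned in this section (the Euler-Maruyama discretization and the window length are my choices):

```python
import math
import random
import statistics

def lif_spike_times(mu=10.0, d=1.0, v_r=0.0, v_t=1.0,
                    dt=2e-4, n_spikes=1000, seed=0):
    """Euler-Maruyama integration of dV/dt = mu - V + sqrt(2D) xi(t);
    returns the list of spike times (threshold crossings with reset)."""
    rng = random.Random(seed)
    v, t = v_r, 0.0
    amp = math.sqrt(2.0 * d * dt)
    spikes = []
    while len(spikes) < n_spikes:
        v += (mu - v) * dt + amp * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_t:              # emit spike and reset
            spikes.append(t)
            v = v_r
    return spikes

spikes = lif_spike_times()
isis = [b - a for a, b in zip(spikes, spikes[1:])]
r0 = 1.0 / statistics.fmean(isis)            # stationary firing rate
cv = statistics.stdev(isis) * r0             # coefficient of variation

# Fano factor of the spike count in windows of length t_win
t_win, t_total = 1.0, spikes[-1]
counts = [0] * int(t_total / t_win)
for s in spikes:
    if int(s / t_win) < len(counts):
        counts[int(s / t_win)] += 1
fano = statistics.pvariance(counts) / statistics.fmean(counts)
```

For this (renewal) model and long windows, the Fano factor should approach CV², one instance of the relations among these statistics discussed in the text.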
We have already pointed out the relation between the spike-train correlation function and the spike-train power spectrum via the Wiener-Khinchine theorem. The power spectrum, however, also contains information on the interval statistics (see [22]). In the high-frequency limit of a stationary stochastic spike train, the spectrum saturates at the firing rate (the inverse of the mean ISI), lim f→∞ S(f) = r 0 = 1/⟨I⟩. If intervals are independent, i.e., if we deal with a renewal spike train, the spectrum also attains a simple form in the opposite limit of vanishing frequency, S(f → 0) = r 0 CV², which means that by comparing the high- and low-frequency limits we can read off how regular the renewal spike train is. More generally, the full spike-train power spectrum of a renewal point process can be obtained from the knowledge of the interspike interval probability density, more specifically its one-sided Fourier transform ρ̃(f), via the expression [91] S(f) = r 0 (1 − |ρ̃(f)|²) / |1 − ρ̃(f)|². The spectrum can thus be calculated for the leaky IF model driven by white Gaussian noise [57] (using much earlier results for the Laplace transform of the first-passage-time density of an Ornstein-Uhlenbeck process [23]), because in this model the reset of the voltage erases any memory about previous ISIs, and the driving noise is uncorrelated and thus does not carry memory either. Since the exact result for the power spectrum involves higher mathematical functions (the parabolic cylinder functions), it is instructive to look for a further simplification, which can be achieved if the system is in the strongly mean-driven regime μ ≫ V T − V R. In this case, the statistics of the LIF model is close to that of a perfect IF model with f(V) = μ (omitting the leak term on the right-hand side of Eq. (17)). For this model the ISI density is an inverse Gaussian probability density [47], the Fourier transform of which is a simple exponential function. In Fig. 9, we display a simulated spike-train power spectrum for an LIF model with strong mean input (μ = 10 ≫ V T − V R = 1) and highlight the limiting cases, from which both the firing rate r 0 and the coefficient of variation CV can be readily obtained. The simulation is also compared to Eq. (22), using as an approximate description ρ̃(f) from Eq. (24); the approximation agrees very well for this example, because the constant drift μ dominates the subthreshold dynamics so strongly that the LIF dynamics is close to that of a perfect IF model. Comparable power spectra have indeed been reported for Ca 2+ spiking [82].
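The renewal-spectrum limits can be verified numerically. A sketch under stated assumptions: the ISI density is the inverse Gaussian of the perfect IF model with drift μ, noise intensity D and threshold distance Δ = V T − V R, whose one-sided Fourier transform follows from the standard first-passage result for drifted Brownian motion; the parameter values repeat the example above:

```python
import cmath

MU, D, DELTA = 10.0, 1.0, 1.0   # drift, noise intensity, threshold distance

def isi_fourier(f):
    """One-sided Fourier transform of the inverse Gaussian ISI density of a
    perfect IF model (standard first-passage result for drifted Brownian motion)."""
    s = 2j * cmath.pi * f
    return cmath.exp(DELTA * (MU - cmath.sqrt(MU * MU + 4.0 * D * s)) / (2.0 * D))

def renewal_spectrum(f):
    """Spike-train power spectrum of a renewal process from the ISI transform."""
    rho = isi_fourier(f)
    r0 = MU / DELTA                  # firing rate of the perfect IF model
    return r0 * (1.0 - abs(rho) ** 2) / abs(1.0 - rho) ** 2

# Limits: S(f -> inf) -> r0 = 10, S(f -> 0) -> r0*CV^2 = 2 (CV^2 = 2D/(MU*DELTA))
s_high = renewal_spectrum(1e3)
s_low = renewal_spectrum(1e-3)
```

Comparing `s_high` and `s_low` reads off the regularity of the renewal spike train, exactly as described in the text.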
Many extensions of the simple one-dimensional IF model have been studied analytically, such as models with time-dependent threshold [56] or models with colored [12,64,74] or non-Gaussian noise [29,65,70]. In higher-dimensional stochastic IF models we can also reproduce the non-renewal behavior observed in many neurons in the form of a non-vanishing serial correlation coefficient, ρ k ≠ 0 for k > 0. As in Ca 2+ spiking, also in neural spiking there are often slower processes at work that steer the pulse-generating process, either as a simple external control or as a feedback of the spike train onto the spike generator. This can easily be incorporated into the integrate-and-fire framework by adding a slow variable. Consider the following example, where the membrane potential is affected by an additional negative adaptation current a(t). [Fig. 10 caption fragment: ...interval correlations and leads to a renewal spike train). Accordingly, the low-frequency limit of the power spectrum of the renewal spike train has a higher power compared to the nonrenewal case. Generally, decreased power at low frequency may improve the signal-to-noise ratio for potential low-frequency signals, hence improving the information transmission properties of the neuron.] In the last line, we complemented the usual reset rule for the voltage by an incrementation rule for the adaptation variable a(t): it is increased by a value Δ when a spike occurs. In between spikes, according to Eq. (26), the adaptation variable decays exponentially with the time constant τ a, which is typically larger than the membrane time constant or the mean interspike interval and ranges between 50 ms and several seconds. A sequence of spikes occurring in rapid succession (as we observe, for instance, when the neuron is subject to a depolarizing current step) leads to a large value of the adaptation variable, which has an inhibiting effect on the voltage dynamics in Eq.
(25): the response to the current step will initially be a rapid increase, followed by an adaptation to a much lower value. The IF model endowed with an adaptation current is conveniently termed an adaptive IF model and can be thought of as a simplification of a conductance-based model with a Ca 2+ -gated K + current. Since the adaptation current is not reset to a fixed value but increased upon spiking, it carries information about past ISIs and may lead to ISI correlations. Indeed, the model and some related ones, for instance the IF model with dynamical threshold, have been shown to generate nonrenewal spike trains and, specifically, negative interspike interval correlations [20,58]; analytical methods to calculate these correlations have been worked out over the last years [76,78].
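A sketch of such an adaptive LIF model (all parameter values below are illustrative choices, not taken from the paper) that estimates the lag-1 serial correlation coefficient from simulated ISIs; for noise acting on the voltage and deterministic adaptation, adjacent ISIs tend to be negatively correlated:

```python
import math
import random
import statistics

def adaptive_lif_isis(mu=3.0, d=0.1, tau_a=2.0, delta_a=0.5,
                      v_r=0.0, v_t=1.0, dt=1e-3, n_spikes=1000, seed=3):
    """Adaptive LIF: dv/dt = mu - v - a + sqrt(2D) xi, da/dt = -a/tau_a,
    with v -> v_r and a -> a + delta_a at each spike. Parameters illustrative."""
    rng = random.Random(seed)
    v, a, t, t_last = v_r, 0.0, 0.0, 0.0
    amp = math.sqrt(2.0 * d * dt)
    isis = []
    while len(isis) < n_spikes:
        v += (mu - v - a) * dt + amp * rng.gauss(0.0, 1.0)
        a += -a / tau_a * dt                 # exponential decay between spikes
        t += dt
        if v >= v_t:
            isis.append(t - t_last)
            t_last, v = t, v_r
            a += delta_a                     # spike-triggered adaptation increment
    return isis

def serial_corr(isis, k=1):
    """Lag-k serial correlation coefficient of the interspike intervals."""
    mean = statistics.fmean(isis)
    var = statistics.pvariance(isis)
    cov = statistics.fmean((isis[i] - mean) * (isis[i + k] - mean)
                           for i in range(len(isis) - k))
    return cov / var

rho1 = serial_corr(adaptive_lif_isis())      # typically negative in this regime
```

A long interval lets a(t) decay far, making the next interval short, and vice versa, which is the mechanism behind the negative SCC discussed in the text.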
As a consequence of the nonrenewal character of the spike train, the power spectrum is no longer described by Eq. (22) but by a more complex expression involving higher-order interval distributions (see, e.g., [51]). While the high-frequency limit of the spectrum is still given by the firing rate (black dashed line in Fig. 10), the zero-frequency limit now also involves the serial correlation coefficients, S(f → 0) = r 0 CV² (1 + 2 Σ k≥1 ρ k), which in the renewal case reduces to Eq. (21). The effect of the negative ISI correlations is thus to reduce power at low frequencies, which can be clearly seen by comparing the original spectrum to the power spectrum of the shuffled spike train. This effect is especially important for the transmission of slow stimuli (so far not included in the model Eqs. (25), (26)): the power spectrum of the spontaneous state is a good approximation for the noise background in the case of a time-dependent signal (e.g., a cosine signal with low frequency) being present. If noise power is reduced at low frequencies, this can increase the signal-to-noise ratio [18,19]. It is conceivable that some of the concepts reviewed here for neural spike trains may become relevant and applicable to Ca 2+ spike trains once longer recordings and better temporal control of stimuli become possible. Since slower processes in particular are at work in the intracellular Ca 2+ dynamics, models like the adaptive integrate-and-fire model that we discussed may serve as an inspiration to capture the cumulative refractoriness of Ca 2+ spikes.
Conclusion
In recent years, modelling of Ca 2+ signalling has taken place in the trade-off between models accounting for the randomness of puffs and spikes, cell variability and measured parameter dependencies on one side, and rate equation models convenient for simulating time courses on the other side. Rate equation models need further development to reproduce measured parameter dependencies. We suggest including the dynamics of higher moments derived from the master equation to account for spike generation by fluctuations. Alternatively, approaches like spike generation as the first passage of a random walk on a linear chain of states, as presented in Sects. 5 and 5.1, might be used.
Stochastic theory of neuronal spiking might serve as a role model for what can be achieved with a stochastic theory of Ca 2+ spiking. The main challenges ahead are to go beyond simple renewal approaches for spike generation towards ISI correlations, cumulative refractoriness and other phenomena comprising several ISIs, and to explain the concentration-response relation of the ISI and the robustness properties of the moment relation.
The task of mechanistic mathematical modelling in cell biology is to identify mechanisms by formulating them as hypotheses in mathematical models. Here, agreement with experimental data serves as part of the hypothesis verification. Rate equation models fail in this respect regarding the agonist concentration-response relation of the average interspike interval, the sensitive dependence of the average interspike interval on parameters of spatial coupling (diffusion, buffers, geometry) and, of course, the moment relation as a defining property of Ca 2+ spiking. Stochastic models still have to be developed to address these problems.
Such a model development may lead to answers to obvious questions in the field. Frequency encoding is one of the generally accepted and experimentally supported concepts providing meaning to Ca 2+ signals [66]. However, spike timing is random. The spectrum of a spike train with exponentially distributed ISIs is flat. The absolute refractory period introduces frequencies with moderately larger power in the spectrum than the average power [82], but essentially there is no typical frequency in many IP 3 -induced Ca 2+ spike sequences. Taking into account the large cell-to-cell variability at a given agonist concentration, there is no defined relation between agonist concentration and Ca 2+ spike frequency applying to all cells of a given type; rather, each cell has its own relation. How can frequency encoding work with these properties of spiking? What are the reasons for the large cell variability and what do they mean? Addressing these questions requires models that faithfully reproduce the properties of spike sequences including their fluctuations, but that also have predictive power, e.g., by telling us how the spike statistics will vary if biophysical parameters are changed.
Funding Open Access funding enabled and organized by Projekt DEAL.
Author contribution statement
All authors wrote the introduction (Sect. 1) and conclusion (Sect. 7). Sections 2-5 were written by VNF and MF, and Sect. 6, including simulations and figures, by LR and BL. All authors edited the manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Abstract A new mayfly species, Sparsorythus sescarorum sp. n. (Tricorythidae), is described from Mindoro Island, Philippines. Nymphs are characterized by the combination of the following characters: compound eyes of approximately equal size in both sexes, shape and setation of legs, presence of rudimentary gills on abdominal segment VII, and some details of mouthparts. Male imagines are characterized by the coloration pattern of the wings and details of the genitalia. The developmental stages are matched by DNA barcodes.
Introduction
The order Ephemeroptera (mayflies) is a monophyletic group of pterygote hemimetabolous insects with aquatic larvae and delicate membranous wings in the adult stage. The presence of a subimaginal winged instar is unique within recent pterygote insects. Despite the notable organismic diversity in the Philippine Archipelago, only 38 species of mayflies (Insecta: Ephemeroptera) have been recorded so far. The last catalog by Hubbard and Pescador (1978) listed 20 species. New species and genera have been recorded afterwards from several parts of the Philippines by Flowers and Pescador (1984) and Müller-Liebenau (1980, 1982), with the most recent studies conducted by Braasch and Freitag (2008), Sroka and Soldán (2008), Braasch (2011) and Batucan et al. (2016). From these works, it can be inferred that there are eight families present in the Philippines: Baetidae, Caenidae, Ephemeridae, Heptageniidae, Leptophlebiidae, Prosopistomatidae, Teloganodidae and Tricorythidae. Some papers on mayflies of the country have been limited to ecological studies concerning mayfly nymphs (Realon 1979) and macroinvertebrate composition in certain freshwater bodies (e.g., Freitag 2004, Flores and Zafaralla 2012, Dacayana et al. 2013), albeit limited in number and scope as well. Nevertheless, the records regarding Philippine mayflies remain scattered and species diversity appears clearly underestimated.
In an effort to increase knowledge on the Philippine mayfly fauna, extensive sampling was done in Mindoro as part of the Baroc River Catchment Survey of the Ateneo de Manila University. The research group, as part of a Bachelor of Science thesis, focused on the Key Biodiversity Area "69 Hinunduang Mt.", classified as a terrestrial and inland water area of very high biological importance and extremely high critical conservation priority.
A new species, Sparsorythus sescarorum sp. n., belonging to the family Tricorythidae is described in this paper. The genus Sparsorythus Sroka & Soldán, 2008 (considered by Kluge 2010 to represent a subgenus of Tricorythus Eaton, 1868) has been recorded from India, Indonesia, Sri Lanka, Vietnam and the Philippines, but is probably widespread in South and Southeast Asia. Listed below are the currently described species within the genus.
Materials and methods
Nymphs were collected from rocks partially or fully submerged in the riffle section of the stream (Figs 10a-d). Winged specimens were attracted using a "black light" trap set-up from 6:30 PM to 8:00 PM under overcast skies near the streams or rivers. Insects were manually collected and stored in 96% ethanol to allow for genetic sequencing. Sample preparation for diagnosis under the dissecting microscope and compound microscope followed Braasch (2011) using Liquid de Faure (Adam and Czihak 1964) as mounting medium. Morphological examinations were performed using a Leica EZ4 stereo microscope and Olympus CX21 microscope. Processing and digital imaging of dissected parts was done using the latter stereo microscope equipped with DinoEye Eyepiece camera; the pictures were combined using CombineZP software (Hadley 2010) and were subsequently enhanced with Adobe Photoshop CS6. Full habitus photographs were taken under a Zeiss Axio Zoom V 16 microscope using diffuse LED lighting at magnifications up to 160×, with Canon 5D Mark II SLR attached to the microscope. Images were captured at various focus planes and subsequently stacked using the Zerene Stacker software. Morphological terminology followed Sroka and Soldán (2008) for nymph and imago, Koss and Edmunds (1974) for eggs and Edmunds and McCafferty (1988) for subimagines.
Specimens examined have been deposited in the following institutions: Museum of Natural History of the Philippine National Museum, Manila, Philippines (PNM); Ateneo de Manila University, Quezon City, Philippines (AdMU), Collection Jhoana Garces, Philippines (CGM), currently deposited in AdMU, and Museum für Naturkunde Berlin, Germany (MNB); and Naturhistorisches Museum Wien, Austria (NMW). Specimens at the latter repository are older and not collected by any of the authors, but they are presumably from the same locality.
Mitochondrial DNA extraction was done by elution with a Qiagen DNeasy kit (Qiagen, Hilden, Germany) following the protocol for animal tissues (Qiagen 2002). For samples with successful DNA isolations, polymerase chain reactions (PCR) were performed using the modified primers LC01490_mod (5'-TTTCAACAAACCATAA-GGATATTGG-3') and HC02198_mod (5'-TAAACTTCAGGATGRCCAAAAAAT-CA-3') for amplification of a partition of the cytochrome c oxidase subunit I (COI) gene. The PCR temperature progression was set as follows: 180 s at 94 °C; 30 s at 94 °C, 30 s at 47 °C, 60 s at 72 °C (× 35 cycles); 300 s at 72 °C. Amplification success was checked by gel electrophoresis. PCR products of successful amplifications were sent to a commercial service for cleaning, cycle sequencing PCR and sequencing.
The sequences were manually traced and aligned using the software BioEdit version 7.2.5 (Hall 1999). Ends of each partition were trimmed to obtain a complete matrix of all sequences used. The corresponding fragments of COI sequences of Sparsorythus gracilis and Sparsorythus buntawensis available from GenBank (Table 1; Batucan et al. 2016; Selvakumar et al. 2016) were included in the statistical parsimony analysis conducted with TCS 1.21 (Clement et al. 2000). The network connection limit was set manually to 1000 steps in order to keep sub-networks of different species connected and show their inter-specific genetic distance.

Description

Nymph. Body length 5.0-5.2 mm; ♂ cerci 0.8 and paracercus 0.9 times body length; ♀ cerci and paracercus 0.9 times body length; head 1.9-2.0 times wider than long; antennae twice as long as head length (n = 10). General coloration of body brownish-yellow when preserved in alcohol. Head (Figure 1) pale brownish-yellow. Male compound eyes blackish. Antenna yellowish, pedicel approx. 2.5 times longer than scape; surface of scape with almost transparent ribbon-shaped bristles, a few hair-like setae and a finely shagreened area dorsally. Labrum (Figure 2g) oval, 2.8-3.0 times wider than long, with bristles medially diminishing in length along the anterior margin and laterally, and uniformly scattered fine bristles on the dorsal surface. Two lateral groups of bristles on the ventral side.
Hypopharyngeal lingua (Figure 2i) approximately as wide as long, with a short and shallow medio-longitudinal groove and wide apico-medial emargination; medial indentation relatively shallow, not exceeding 0.33 of hypopharyngeal lingua length, with uniformly scattered, extremely small bristles; postero-lateral margin with 3-4 short, strong, evenly spaced bristles; superlingua rounded, bluntly pointed at apex, with a row of bristles in distal half of outer margin; bristles decreasing in length toward apex; inner margin of superlinguae straight (strongly concave in S. buntawensis). Mandibles (Figure 3a, b) as typical for the genus (Sroka and Soldán 2008); both outer incisors triangular; dorsal margin with numerous long filtering setae. Right prostheca (Figure 3d) 1/3 shorter than left, notched, expanded apically and bifurcate, with one long curved projection at distal part, bearing 3 finely fringed setae on the inner side. Distal part of left prostheca (Figure 3c) extended, with several short pointed teeth (blunt when worn); usually three long bristles (approximately ¾ of prostheca length) with feathery margins situated at base of prostheca (and frequently difficult to see). Maxilla (Figure 2e) oblong-shaped with truncate apex and anterolateral part with a group of strong bristles; a dense group of bristles medially and a regular oblique transversal row of slightly shorter bristles submarginally; maxillary palps absent; no sclerotized structures.

Thorax (Figure 1) dorsally dull yellowish with blackish smudges and maculae, paler ventrally; pronotum laterally slightly enlarged with convex margins, distal margin more or less straight (in both sexes); wing pads dark, veins inconspicuous, in last instar larvae wing pads reaching the middle of abdominal segment II. Legs (Figures 2a-c) relatively robust; length ratio of femur : tibia : tarsus = 2.5 : 3.0 : 1.0 (fore legs), 2.5 : 2.5 : 1.0 (mid legs), 3.6 : 3.3 : 1.0 (hind legs).
Fore femora (Figure 2a) flat, shorter than tibia; ratio length : width = 2.3 : 1.0; apically rounded strong spatulate bristles (Figure 4b), about 3.5-4.2 times longer than wide, arranged in a slightly irregular row almost perpendicularly crossing the femur, the row then abruptly bent basad and sinuously extending along the posterior margin of femur (somewhat similar to the "bow-shaped" arrangement in S. ceylonicus; Sroka & Soldán, 2008); transverse row usually consisting of five bristles; the median part of the posterior margin with a scattered row of strong pointed bristles, anterior margin with a few bifid hair-like setae and submarginally a few almost transparent ribbon-shaped bristles; otherwise surface of femur glabrous, without setae or bristles. Fore tibiae with conspicuous inner submarginal row of apically pointed bristles, slightly longer than tibia width, and a few (4-7) long marginal bristles. Fore tarsus with a row of 6-10 strong pointed bristles along the inner margin and a few irregularly scattered bifid setae. Surface of middle and hind femora sparsely covered with stout spatulate bristles (Figures 4c, e) one-third of marginal bristle size and fine ribbon-shaped bristles. Middle femora (Figure 2b) with a dense row of blunt, slender spatulate (rarely pointed) bristles along the dorsal (posterior) margin, the basal half of posterior margin submarginally with some small spatulate bristles; ventral margin with a scanty row of medium-sized blunt or slender spatulate bristles, more numerous and slightly longer in basal part; surface of femur with some very small oval bristles and fine transparent ribbon-shaped bristles, the latter more numerous submarginally. Middle tibiae with an inner submarginal row of apically bluntly pointed bristles, about ½ of tibia width, outer margin with about a dozen long pointed bristles and scattered bifid setae.
Hind femora (Figure 2c) with a dense row of blunt, slender spatulate (rarely pointed) bristles along the dorsal (posterior) margin, ventral (anterior) margin with several rows of distinctly smaller, slender spatulate and oval-shaped bristles. Surface with scattered small oval bristles and fine ribbon-shaped bristles. Hind tibiae with inner marginal row of slender spatulate bristles, almost as long as bristles along posterior margin of femur; outer margin of tibia with a dense row of long, bluntly pointed bristles, interspersed with acutely pointed bristles (with finely feathery margins), scattered bifid setae and long hair-like setae (especially in distal half). Claws strongly hooked, with 2-3 teeth and a pair of strong pointed processes approximately in the middle. Dark tracheization conspicuous on all femora.
Abdominal terga (Figure 1) brownish with fine darker stippling, a small light medial dot and two pale yellowish brown paramedial patches; posterior part of terga VIII and IX darker; terga darker than sterna with greyish-black stippling; segments II-VII with gills. Gills on segments II-VI similar in shape (Figure 4e) and diminishing in size, each consisting of a dorsal ellipsoidal plate and two branched ventral membranous parts with dense filaments; gill plate on segment II reaches middle of abdominal segment IV, gill plate on segment VI reaches almost end of abdominal segment VII; gill plates simple, thin, not reinforced, with scattered hair-like marginal bristles; rudimentary gill on segment VII (Figure 4d) small, tubular with bifurcate tip and frequently missing (or lost subsequent to collecting), without plate. Surface of terga with small denticles and ribbon-shaped bristles, the latter more densely distributed in lateral parts and a few scattered hair-like setae; posterior margin of terga (Figure 4a) with rather tongue-shaped teeth, acutely pointed, blunt or with somewhat frayed tips (worn). Abdominal terga without postero-lateral processes. Abdominal sterna with a few narrow ribbon-shaped bristles in posterior lateral area, hind margin of sterna smooth. Posterior margin of sternum IX equally shaped in male and female larvae.
Paracercus (Figure 1) in male nymphs usually slightly longer than cerci, subequal in female nymphs; surface of segments without bristles; posterior margin of segments with strong, slender spatulate or bluntly pointed bristles of approx. ½ (basal segments) to 1/3 of segment length (Figure 2f), tips of bristles extremely finely frayed. Sexual dimorphism in the spatial arrangement and width of cerci: ♂ with basal segments of cerci and paracercus broader and contiguous; ♀ basal segments of cerci and paracercus distinctly more slender and not touching.
Male imago. Body length 4.5-4.8 mm; fore wing 4.0-4.5 mm; antenna 1.2 mm long; tibia 1.0 mm; cerci and paracercus length approx. 10-12 mm. General color of head and prothorax dark, blackish (Figure 5); antennal pedicle and posterior margin of eyes paler; mesothorax pale yellowish brown; abdomen white to pale greyish with black stippling and maculation on posterior margin; ventral thorax and abdomen paler, whitish and more transparent than dorsal side; tracheization not pigmented; cerci white to pale greyish, at least basal segments frequently with narrow black posterior border; forceps whitish to transparent; legs pale greyish, femora darker, finely stippled with black along margins. Fore wings transparent with minimal dark grey smudges in basal half; most dark smudges in the costal and subcostal areas, clustered in basal and apical regions; pterostigmatic region milky, usually no cross veins in costal space discernible; venation mostly whitish, black in the center of the wing, almost transparent towards the margins; veins costa, subcosta and radius anterior rather transparent, broadly bordered with intense black stippling and conspicuous along their entire length. Intensity of dark stippling on body, legs, and wings varies considerably between individuals.
Head (Figure 6b) with globular compound eyes, of approximately the same size as in females, separated by approximately half of the mesothorax width; antennal pedicle approximately 2.5 times longer than scape. Prothorax (Figure 6b) slightly longer than head. Tarsal claws double on all legs; fore legs with two rounded claws, mid and hind legs with one claw rounded and the other pointed (ephemeroid). Femur slightly longer than tibia, length ratio 1.2 : 1. In the fore wing, vein media forked at approximately ½ of its length; veins cubitus posterior and analis frequently not visible along their entire length, transparent in apical part; posterior wing margin with fine setae, more scattered distally.
Genitalia (Figure 7) with subgenital plate entire. Forceps two-segmented; basal segment shorter than distal one, length ratio approximately 1.0 : 2.2; forceps segment I cylindrical, widest at base, slightly constricted in the middle; hind margin of forceps base sclerotized in medial part with a few tiny bristles; inner margin of forceps segment II covered with numerous leaf-shaped attachment structures. Penis lobes simple, straight and tubular, slightly bent in dorsal direction, only slightly constricted subapically; penis apex reaching approximately the basal quarter of second forceps segment; apex of penis rounded, with a distinct medial emargination bisecting the penis apex. Caudal filaments more than twice the body length, approx. 10-12 mm, cerci glabrous but paracercus sparsely covered with fine setae.
Male subimago. Similar to imago, but wings uniformly greyish and with microtrichae on wing surface; tarsus of fore leg with one pointed and one obtuse claw ( = 'ephemeropteroid' sensu Kluge 2004: 34, Kluge 2010); fore femur slightly shorter than tibia, length ratio 0.9 : 1.0; cerci and paracercus longer than body, but distinctly shorter than in imago. Male genitalia almost as in imago, but forceps segment I stouter.
Female subimago. Body length 4.0-4.6 mm; fore wing 5.0-5.2 mm; cerci and paracercus length 3.5-4.0 mm. General coloration of head, prothorax, dorsal mesothorax and dorsal abdomen dark, brownish or blackish ( Figure 8); ventral mesothorax yellowish brown; cerci whitish, densely covered with long setae. Head (Figure 6a) with globular compound eyes, of approximately the same size as in male imagines, distanced approximately half of mesothorax width; antennal pedicle approximately 2.5 times longer than scape. Femora blackish, basal end of fore femur paler than the rest, tibia and tarsus transparent. Tarsal claws double on all legs, one rounded and the other pointed (ephemeroid). Length ratio femur: tibia: tarsus = 3.0: 3.2: 1.0 (fore legs), 3.1: 3.0: 1.0 (middle legs), 4.8: 4.1: 1.0 (hind legs). Fore wings (Figure 8) gray with dark smudges in basal half; most dark smudges in the costal and subcostal space clustered in two regions; veins costa and subcosta distinctly darker and conspicuous over all their length; longitudinal venation darker anteriorly and proximally. Subimaginal falciform microtrichia present on wing surface, body surface, and legs. Outer and inner edges of wings (wing margin) with a seam of long and fine setae, slightly shorter towards the wing tip. Subanal plate (sternum IX) approximately as wide at base as long, smoothly rounded in distal half and more than one third longer than sternum VIII (compare Sroka and Soldán 2008: fig. 64).
The resulting network tree (Figure 9) demonstrates that the conspecific specimens of different life stages of Sparsorythus sescarorum sp. n. have only a maximum of five substitutions compared to the much higher divergence of the other Sparsorythus species sampled. This Statistical Parsimony tree is solely intended to provide evidence for matching larval and imaginal stages.
Differential diagnosis. The nymph of Sparsorythus sescarorum sp. n. differs from all known Oriental tricorythid taxa in the combination of the following characters: apex of hypopharyngeal lingua with wide medial indentation (similar in S. buntawensis), wing pads reaching the middle of abdominal segment II in last instar larvae, hind femora longer than tibia (length ratio of femur : tibia : tarsus = 3.6 : 3.3 : 1.0) with central femur surface glabrous (only a few tiny bristles submarginally), and bifurcate rudimentary gill on segment VII present. In some respects, the new taxon resembles S. bifurcatus and S. gracilis, but the leg ratio of hind femur : tibia : tarsus and the setation of the femora are distinctive. Unlike S. jacobsoni (sensu Ulmer 1939: Abb. 334), S. sescarorum has no small nick in the median anterior margin of its labial plate and possesses a specifically shaped transverse row of setae on the fore femora, and the rudimentary gill is bifurcate instead of filamentous. Unlike S. buntawensis, S. sescarorum sp. n. has a straight inner margin of the superlinguae, a bifurcate rudimentary gill, and cerci and paracercus shorter than body length. They can be easily differentiated using the leg ratios of femur : tibia : tarsus and fore femora length : width. The arrangement of apically rounded setae on the fore femur resembles the bow-shaped arrangement of S. ceylonicus.
Male genitalia are comparatively similar within the genus Sparsorythus. The male imago of Sparsorythus sescarorum sp. n. can be differentiated from other Oriental tricorythid taxa based on the pattern of dark smudges in the fore wing, the medial sclerotization along the hind margin of the forceps base, and the length ratio of the forceps segments. The color pattern of the wings is rather similar in S. multilabeculatus, but male imagines of S. sescarorum are significantly larger (4.5-4.8 mm vs. 3 mm in S. multilabeculatus). Male imagines of S. sescarorum have globular compound eyes of approximately the same size as in females, in contrast to S. bifurcatus and S. dongnai, whose compound eyes are distinctly larger than those of females. Identification of female subimagines remains rather difficult (except by direct comparison of specimens) and is mainly based on coloration, color pattern of the wings, length ratio of the legs, shape of the subanal plate (sternum IX), and exochorionic structures of the eggs.

Distribution. The species is so far only known from the type locality, the lower reach of the Taugad River, Oriental Mindoro, Philippines.
Ecology. All material was collected from or near permanent rivers in Oriental Mindoro. This province has an equatorial monsoonal (Am) climate based on the Köppen-Geiger Classification and is nationally recognized as the Type III climate according to the Modified Corona Classification (Kintanar 1984), characterized by absence of a very pronounced maximum rain period and a short dry season, in Oriental Mindoro during the period of February to April. Average temperature is around 27.4 °C and the average annual rainfall about 2000 mm (PAGASA 2018), however with considerable annual and local variations. All collection sites are at low altitudes of 5-250 m a.s.l. at meandering alluvial rivers of small to medium size (2-12 m wide) comparable to the hyporhithral section (Figures 10a-c) with estimated water discharge ranging from 0.006 to 7.0 m³/s during the respective times of collection. Most of these sites were surrounded by secondary vegetation, rarely secondary forest, with few houses and farmland in some distance from the river bed.
Larvae were collected in lotic river sections at water depths ranging from 3 to 35 cm, predominantly from mineral bottom substrates (typically small to medium-sized boulders in riffles (Figure 10d)), rarely from submerged wood. The water currents at these microhabitats were estimated to range from 0.08 to 0.79 m/s (usually ca. 0.2-0.4 m/s). The temperature of the water ranged from 23.0 to 28.7 °C, the pH from 6.8 to 8.3, dissolved oxygen from 3.8 to 8.3 mg/l (mostly, but not always, near 100% saturation), and biochemical oxygen demand (BOD5) from 0.1 to 1.3 mg/l. The maximum values measured for selected dissolved nutrients were as follows: phosphate 0.7 mg/l, ammonium 0.5 mg/l, nitrate 1.0 mg/l. Dissolved nitrites were always below detectable values (< 0.2 mg/l). Imagines and subimagines were collected from light traps placed along the same river sections. They seemed to be most attracted by black light used shortly after sunset. No information on feeding, type of emergence, or life cycle is available at present. Presumably, subimagines emerge on the water surface and male subimagines moult almost immediately after emergence, whereas females retain the subimaginal stage.
Etymology. The name of this new species is given to acknowledge the efforts of Baranggay Captain Ronel S. Sescar, Baranggay Kagawad for Environmental and Agriculture concerns Rodel S. Sescar, and the rest of their family members, who were instrumental in the protection and preservation of the Baroc River. Assessments of aquatic biodiversity and training of student researchers would not have been possible without their support for the past few years.

Sroka and Soldán (2008) subgenera of Tricorythus. Lineages within Afrotropical Tricorythus, however, are still poorly known (Barber-James 2008) and for the present the opinion of Sroka and Soldán (2008) is followed in this paper.
Discussion
Several characters of Sparsorythus sescarorum sp. n. merit comment. The nymphs of S. sescarorum sp. n. exhibit a sexual dimorphism in the spatial arrangement and width of cerci and paracercus, as observed in other Tricorythidae. The size of the eyes is about equal in male and female specimens in the larval and winged stages, whereas S. bifurcatus and S. dongnai exhibit distinctly larger eyes in male specimens. Kluge (2010) suggested that some species of Tricorythus, such as T. exophthalmos, show a correlation between enlarged male eyes and the sexually dimorphic shape of the pronotum, where the male pronotal fore margin expands medially, forming a semicircular flap that overlaps the hind part of the head, while the female fore margin is straight. The fore margin of the larval pronotum of S. sescarorum sp. n. is more or less straight in both sexes, lending some support to the opinion of Kluge. Female adults obviously retain the subimaginal stage. This has also been observed at least in Tricorythus varicauda, Sparsorythus celebensis, and some other tricorythid taxa (Kluge 2010). Male subimagines of the new species have never been collected at light traps; however, a single specimen from the type locality, obtained by rearing nymphs, is available and obviously represents a subimago. This suggests that the subimaginal-imaginal molting of males occurs immediately after emergence, before the first flight.

The molecular genetic studies of the first author were kindly enabled through funding by the German Federal Ministry of Education and Research (BMBF project BIOPHIL 01DP14002) and the German Academic Exchange Service (DAAD project BIOPHIL 57393541).
Machine Learning Approaches for Discriminating Bacterial and Viral Targeted Human Proteins
Infectious diseases are one of the core biological threats to public health. It is important to recognize pathogen-specific mechanisms to improve our understanding of infectious diseases. Differentiating between bacterial- and viral-targeted human proteins is important for improving both prognosis and treatment for the patient. Here, we introduce machine learning-based classifiers to discriminate between the two groups of human proteins. We used the sequence, network, and gene ontology features of human proteins. Among the different classifiers and features, the deep neural network (DNN) classifier with amino acid composition (AAC), dipeptide composition (DC), and pseudo-amino acid composition (PAAC) features (445 features) achieved the best area under the curve (AUC) value (0.939), F1-score (94.9%), and Matthews correlation coefficient (MCC) value (0.81). We found that the selected top 100 bacteria- and virus-targeted human proteins, drawn from candidate pools of 1618 and 3916 proteins, respectively, were part of distinct enriched biological processes and pathways. Our proposed method will help to differentiate between bacterial and viral infections based on the targeted human proteins on a global scale. Furthermore, identification of the crucial pathogen targets in the human proteome would help us to better understand pathogen-specific infection strategies and develop novel therapeutics.
Introduction
Despite the current improvements in antimicrobial therapy and vaccination, infectious diseases remain a major threat to public health worldwide. They cause significant morbidity across nations, pose a major burden on the economy, and cause a substantial number of deaths in less developed countries [1]. The majority of infectious diseases are caused by pathogenic bacteria and viruses. Pathogens interact with the host system right from the point of entry into the host, primarily to evade the host immune response and create their own niche for survival and growth [2]. The identification of host proteins targeted by pathogens and of pathogen-host protein-protein interactions (PPIs) is crucial to understanding the mechanisms underlying infectious diseases [3]. Differentiating between bacterial- and viral-targeted host proteins is critical to delineate the specific infection strategies of these two groups of pathogens. While this may help in the diagnosis of the etiology, it is particularly important from the treatment perspective, which is distinct for bacterial and viral infections. Antibiotics kill bacterial pathogens but are ineffective against viruses. Finally, identification of the specific biological processes for bacterial- and viral-targeted human proteins could improve disease prognosis and treatment.
Several studies attempted to explore the mechanisms underlying infectious diseases through the study of pathogen-host PPIs [4][5][6][7][8][9][10][11][12][13]. The availability of experimentally verified pathogen-host PPIs in the public domain significantly helped these efforts [14][15][16][17][18][19][20]. However, only one study compared pathogen-host PPIs for bacterial and viral infections [21]. This study addressed common as well as distinct infection strategies for bacterial and viral infections. To distinguish between bacterial- and viral-targeted human proteins, the authors used only the degree centrality, betweenness centrality, and gene ontology (GO) features of different proteins. They drew the general conclusion that viruses tend to interact with human proteins having much higher connectivity and centrality values than those targeted by bacteria. They proposed that viral-targeted human proteins function in the cellular processes the virus manipulates, while bacteria-targeted human proteins interact with the immune system. Here, we used more rigorous techniques, namely machine learning algorithms, to differentiate bacteria-targeted human proteins from virus-targeted proteins. To this end, we made extensive use of the sequence, network, and gene ontology features of human proteins. We identified the best feature set for discriminating between bacterial- and viral-targeted proteins and listed the top predicted targets. Finally, the differences between the bacterial- and viral-targeted human proteins were validated by GO and pathway enrichment analysis.
Data Collection
All the experimentally validated bacteria-human and virus-human protein-protein interaction (PPI) datasets were collected from PHISTO, a pathogen-host interaction search tool [22]. We found 8993 bacteria-human and 35,120 virus-human PPIs, and detected 3673 bacterial- and 5887 viral-targeted human proteins. Of these, 1780 proteins were common targets of both bacteria and viruses (shown in Figure 1) and were excluded from our analysis. We searched the remaining 1893 bacterial- and 4107 viral-targeted human proteins in UniProt, a worldwide hub of protein knowledge [23]. We found 1618 and 3916 reviewed bacterial- and viral-targeted human proteins, respectively, in UniProt (Supplementary Tables S1 and S2), which were considered for further analysis.
Sequence Features
All the above human protein sequences were downloaded from the UniProt database. For the prediction of proteins and PPIs, the sequence features, such as the amino acid composition (AAC), dipeptide composition (DC), pseudo-amino acid composition (PAAC), and composition-transition-distribution (CTD) were reported as important features [24][25][26]. We computed AAC, DC, PAAC, and CTD using PyDPI, a freely available python package for chemoinformatics, bioinformatics, and chemogenomics studies [27]. We used these sequence features to discriminate between the bacterial-from the viral-targeted human proteins.
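The paper computes these descriptors with PyDPI; purely as an illustration of what AAC (20 features) and DC (400 features) encode, they can be sketched in plain Python as follows (the example sequence is made up, not taken from the dataset):

```python
from collections import Counter
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def amino_acid_composition(seq):
    """AAC: fraction of each of the 20 standard amino acids (20 features)."""
    counts = Counter(seq)
    n = len(seq)
    return {aa: counts.get(aa, 0) / n for aa in AMINO_ACIDS}

def dipeptide_composition(seq):
    """DC: fraction of each ordered amino-acid pair (400 features)."""
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    total = max(len(seq) - 1, 1)
    return {a + b: pairs.get(a + b, 0) / total
            for a, b in product(AMINO_ACIDS, repeat=2)}

seq = "MKVLAAGK"                      # illustrative toy sequence
aac = amino_acid_composition(seq)     # e.g. aac["A"] == 2/8
dc = dipeptide_composition(seq)       # e.g. dc["AA"] == 1/7
```

Concatenating such per-protein descriptors (AAC + DC + PAAC) is what yields fixed-length feature vectors suitable for the classifiers below.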
Network Features
To compute network features for human proteins, we retrieved expert-curated human PPIs from the Human Protein Reference Database (HPRD) (Release 9) [28] and constructed a network from these PPIs. NetworkAnalyzer (a Cytoscape plugin) was used to compute the network properties, such as degree, closeness centrality, neighborhood connectivity, average shortest path length, betweenness centrality, clustering coefficient, topological coefficient, eccentricity, and radiality [29].
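As a sketch of how two of these graph measures are defined (the actual computation used NetworkAnalyzer on the full HPRD network; the tiny adjacency list below is purely illustrative):

```python
# Toy undirected PPI network as an adjacency dict; node names are illustrative.
ppi = {
    "P1": {"P2", "P3", "P4"},
    "P2": {"P1", "P3"},
    "P3": {"P1", "P2"},
    "P4": {"P1"},
}

def degree(g, v):
    """Number of interaction partners of protein v."""
    return len(g[v])

def clustering_coefficient(g, v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = list(g[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in g[nbrs[i]])
    return 2.0 * links / (k * (k - 1))
```

Here `degree(ppi, "P1")` is 3, and only one of the three neighbour pairs of P1 is connected, giving a clustering coefficient of 1/3.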
Gene Ontology (GO) Features
All the GO identifiers (IDs) for the respective 1618 bacterial- and 3916 viral-targeted human proteins were downloaded from UniProt. We found a total of 23,737 GO IDs for the 1618 bacteria-targeted human proteins, while the number of GO IDs for the viral-targeted human proteins was 67,035. The occurrence of each GO ID was counted separately for the above two groups, followed by sorting based on the occurrence value. The top 100 and 280 GO IDs for the bacterial- and viral-targeted human proteins, respectively, were extracted as GO features. However, only 282 were unique among the top 380 GO IDs (Supplementary Table S3). Therefore, we considered the unique IDs as GO features (Supplementary Figure S1). For each human protein, the presence or absence of a top GO ID was encoded as 1 or 0, respectively.
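A minimal sketch of this presence/absence encoding (the GO IDs below are illustrative placeholders, not the actual top-282 list):

```python
def go_feature_vector(protein_go_ids, top_go_ids):
    """Binary vector: 1 if the protein carries the top GO ID, else 0."""
    annotated = set(protein_go_ids)
    return [1 if go in annotated else 0 for go in top_go_ids]

top_go = ["GO:0005515", "GO:0005634", "GO:0046872"]   # illustrative IDs
vec = go_feature_vector(["GO:0005634"], top_go)       # one annotation present
```

With the actual list of 282 unique top GO IDs, each protein is thus mapped to a fixed 282-dimensional binary vector.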
Classification
The distinction between the bacterial-and viral-targeted human proteins may be viewed as a binary (two-class) classification problem. To differentiate between the proteins, we used well-known classifiers, such as SVM, RF, and DNN.
Support Vector Machines (SVM)
The SVM classifier explicitly maps the data to a vector space to find a decision surface that maximizes the margin between the data points of the two classes. For the SVM classifier, we used the scikit-learn Python package [30]. To find the best performance of the SVM classifier, we tested different combinations of the cost and gamma parameters of the radial basis function (RBF) kernel.
Random Forest (RF)
In RF, several decision trees (DTs) are grown simultaneously, each using a random subset of the features. Each tree classifies a new object and "votes" for a class; based on the majority of votes, the forest selects the classification. We also used the scikit-learn Python package for the RF classifier. Optimal parameters were utilized to find the best performance.
Deep Neural Networks (DNN)
The DNN method was shown to perform well with diverse problems. DNN is more robust and useful than other methods for complex classification problems and is becoming a popular algorithm in the field of modern computational biology. We used TensorFlow DNN, which is a widely-used deep learning package for classification, to discriminate between the bacterial-and viral-targeted human proteins [31].
10-Fold Cross-Validation
To avoid performance bias of the prediction methods, we used the 10-fold cross-validation technique. In 10-fold cross-validation, the whole dataset is divided into 10 sets (folds) of equal or nearly equal size. Training and testing are repeated 10 times, so that each time a different set (fold) is held out for testing, while the remaining 9 sets (folds) are used for training. The average performance measures over the 10 folds are taken as the overall performance of the model.
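The fold construction described above can be sketched with the standard library alone (a simplified illustration, not the paper's actual implementation):

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k (nearly) equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validation_splits(n_samples, k=10):
    """Yield (train, test) index lists; each fold is the test set exactly once."""
    folds = k_fold_indices(n_samples, k)
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

splits = list(cross_validation_splits(25, 5))  # toy: 25 samples, 5 folds
```

Averaging the per-fold metrics over all k iterations then gives the reported cross-validated performance.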
Feature Selection
We used several feature selection methods, such as univariate feature selection (UFS), recursive feature elimination (RFE), feature selection using SelectFromModel (SFM), and tree-based feature selection (TBFS). In UFS, the K best features were selected based on the univariate statistical tests. We used all the univariate statistical test methods available in scikit-learn for the purpose of classification. In RFE, the least important features are excluded in each recursive step, until the desired number of features is reached. The important features are selected from the model in SFM. In TBFS, a tree-based estimator computes the importance of the features and irrelevant features are discarded.
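To illustrate the univariate idea behind UFS (scikit-learn's selectors were used in practice; the scoring function here is a deliberately simplified t-like statistic, and the data are toy values):

```python
from statistics import mean, pstdev

def univariate_scores(X, y):
    """Score each feature by the absolute difference of the two class means,
    normalised by the pooled standard deviation (a simple t-like statistic)."""
    n_features = len(X[0])
    scores = []
    for f in range(n_features):
        a = [row[f] for row, label in zip(X, y) if label == 1]
        b = [row[f] for row, label in zip(X, y) if label == 0]
        spread = pstdev(a + b) or 1.0   # guard against zero variance
        scores.append(abs(mean(a) - mean(b)) / spread)
    return scores

def select_k_best(X, y, k):
    """Return the indices of the k highest-scoring features."""
    scores = univariate_scores(X, y)
    return sorted(range(len(scores)), key=lambda f: scores[f], reverse=True)[:k]

X = [[1.0, 0.0], [0.9, 1.0], [0.1, 0.0], [0.0, 1.0]]  # toy data
y = [1, 1, 0, 0]
best = select_k_best(X, y, 1)   # feature 0 separates the classes
```

RFE, SFM, and TBFS differ mainly in how the per-feature score is obtained (model weights or tree-based importances instead of a univariate statistic).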
Performance Measures
The performance measures of the classification problem, such as sensitivity, specificity, accuracy, positive predictive value (PPV or precision), Matthews correlation coefficient (MCC), and F1-score were calculated using the following equations:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
PPV = TP / (TP + FP)
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
F1-score = 2TP / (2TP + FP + FN)

where True Positive (TP): Bacterial-targeted human proteins are correctly identified as bacterial-targeted human proteins.
False Positive (FP): Viral-targeted human proteins are incorrectly identified as bacterial-targeted human proteins.
True Negative (TN): Viral-targeted human proteins are correctly identified as viral-targeted human proteins.
False Negative (FN): Bacterial-targeted human proteins are incorrectly identified as viral-targeted human proteins.
The area under the receiver operating characteristic curve (AUC), for all the cases, was also computed.
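These measures follow directly from the confusion-matrix counts; a small self-contained sketch (the counts below are made up for illustration):

```python
from math import sqrt

def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification measures from the confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)                 # PPV
    f1 = 2 * tp / (2 * tp + fp + fn)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) or 1.0
    mcc = (tp * tn - fp * fn) / denom
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "precision": precision,
            "f1": f1, "mcc": mcc}

m = classification_metrics(tp=40, fp=10, tn=40, fn=10)  # toy counts
```

For the toy counts above, every rate works out to 0.8 and the MCC to 0.6, showing how MCC stays informative even when the classes are balanced.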
GO Enrichment Analysis
The top 100 bacterial-targeted and the same number of viral-targeted human proteins predicted by our method were considered for GO enrichment analysis. To this end, we used Enrichr, a comprehensive gene set enrichment analysis web server, 2016 update [32]. We considered only the biological process terms with p-values < 0.05 for the GO enrichment analysis.
Pathway Enrichment Analysis
The above-mentioned 200 human proteins (100 each of the bacterial- and viral-targeted proteins) were also considered for pathway enrichment analysis. We used the Reactome Pathway Knowledgebase for this purpose [33]. Pathways with p-value < 0.05 were treated as enriched pathways.
Selection of Features
Important features of human proteins, namely the sequence, GO, and network features, were considered to discriminate between the bacteria- and virus-targeted human proteins. Among the individual sequence features, dipeptide composition (DC) achieved the highest AUC of 0.931, with an F1-score of 90.3% and MCC of 0.67 (Table 1 and Supplementary Table S4). However, the sequence features AAC, PAAC, and CTD showed poor performance, with CTD being the poorest. We tested different combinations of the above features to achieve a high performance. We observed that a combination of AAC, DC, and PAAC achieved the best AUC of 0.939, F1-score of 94.9%, and MCC of 0.81.
Of the other features, the GO feature attained a maximum AUC of 0.886, F1-score of 86.4%, and MCC of 0.51. On the other hand, the network feature was unable to distinguish between the bacteria- and virus-targeted human proteins. We also tested mixed feature sets to measure the performance. We found that the combination of AAC, DC, PAAC, and GO features achieved the highest AUC of 0.914, F1-score of 88.3%, and MCC of 0.60. Together, the above results suggested that the combination of the AAC, DC, and PAAC features attained the highest level of performance. We applied multiple feature selection methods, such as UFS, RFE, SFM, and TBFS, to the combination of AAC, DC, and PAAC features. We observed that TBFS achieved the highest AUC of 0.805, F1-score of 84%, and MCC of 0.44 (Table 2 and Supplementary Table S5). However, the features selected by these methods were unable to attain a performance similar to that of the original feature set. This result suggests that the feature selection methods tested were unable to perform better than the primary features. As a result, we selected the combination of AAC, DC, and PAAC (445 features) as the best feature set.
Performance Comparison of Different Classifiers
To find the best classifier for our dataset, we compared the performance of the SVM, RF, and DNN classifiers. Performance was calculated for different parameter settings of each classifier and only the best result is reported here. In the majority of cases, the DNN classifier achieved the best performance (Tables 1 and 2). As shown in Figures 2 and 3, the performance of the DNN classifier is far superior to that of SVM and RF. Together, the results suggested that the DNN performed better than the conventional MLTs.
Gene Ontology Enrichment Analysis
Prediction probability scores of all the bacteria- and virus-targeted human proteins were sorted (Supplementary Tables S6 and S7). The top 100 bacteria-targeted and the top 100 virus-targeted human proteins were investigated further to understand the specific infection strategies. GO enrichment analysis of the predicted bacteria-targeted proteins displayed processes such as negative regulation of catalytic activity, cellular response to hypoxia, cellular catabolic process, nitric oxide biosynthetic process, nitric oxide metabolic process, calcium ion import, RIG-I signaling pathway, cell adhesion mediated by integrin, and heart rate (Table 3). In contrast, virus-targeted human proteins showed biological processes such as the peptide biosynthetic process, translation, mitochondrial ATP synthesis-coupled electron transport, mitochondrial translation elongation, cellular macromolecule biosynthetic process, mitochondrial translational termination, respiratory electron transport chain, and translational termination upon GO enrichment analysis (Table 4). Overall, the top bacteria- and virus-targeted human proteins were related to 48 and 96 enriched biological processes, respectively. Most of the enriched biological processes were distinct for bacteria- and virus-targeted human proteins (Figure 4).
Table 3. Top 20 GO biological processes for bacteria-targeted human proteins.
Pathway enrichment analysis of the top 100 bacteria-targeted human proteins showed the uptake and function of anthrax toxins, defective NEU1 causing sialidosis, and vitamin B1 (thiamin) metabolism pathways (Table 5). Likewise, the top predicted virus-targeted human proteins showed enrichment of pathways including the formation of the cornified envelope, keratinization, translation, and mitochondrial translation termination (Table 6). The enriched pathways for bacteria- and virus-targeted human proteins were different (Figure 5).
The above results suggested that bacteria-targeted human proteins were enriched in gene ontology (GO) terms and pathways distinct from those of virus-targeted human proteins.
Table 6. Top 5 pathways for virus-targeted human proteins.
Discussion
Rapid, safe, cost-effective, and accurate tools for the etiological diagnosis of suspected infections are of paramount importance for individual and public health. Particularly important is discriminating between the bacterial and viral causes of infectious diseases, given the alarming rise of antibiotic resistance due to indiscriminate and unnecessary antibiotic use. An estimated 30-50% of antibiotics prescribed to hospitalized patients in the United States are for wrong indications, most commonly viral infections (https://www.cdc.gov/antibiotic-use/stewardship-report/outpatient.html, accessed on 21 October 2021) [34]. Traditional culture methods for bacterial infections are low throughput, time consuming, and labor intensive; additional challenges include sample collection from some infected tissues and the lack of wide availability of culture techniques for many pathogen species. On the other hand, the diagnosis of viral infections by serology may lack specificity, while nucleic acid detection methods require sophisticated equipment and technical expertise. Thus, no reliable methods or markers are currently available for the rapid diagnosis of bacterial and viral etiologies of infectious diseases.
Attempts have been made to develop complementary diagnostics for infectious diseases by focusing on specific host responses. In addition to being capable of discriminating between colonization and infection, this approach is not limited by the availability of infected tissue samples. Moreover, host response-based categorization of infections provides additional insights into the disease pathogenesis and immune response and may help to identify new targets for therapeutic intervention.
Multiple attempts have been made to diagnose infectious diseases based on host-specific biomarkers. Widely used parameters, such as WBC counts and C-reactive protein (CRP), may help to differentiate between bacterial and viral infections but lack sensitivity and specificity, leading to frequent misdiagnosis. Newer bacterial infection markers, such as presepsin, procalcitonin, and CD64, are used for severe sepsis, while proADM may predict the prognosis of the disease [35,36]. In contrast, cytokines such as IL-2, IL-8, and IL-10 have been suggested as early biomarkers for viral infection [37]. Several research groups reported that the antiviral host protein MxA is a clinically useful marker for acute viral infection and, combined with CRP and/or procalcitonin, may distinguish between bacterial and viral infections [38]. A double-blind, multicenter study found that a strategy integrating CRP, tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), and interferon γ-induced protein-10 (IP-10) performed significantly better than the individual markers in identifying acute viral infection in pediatric patients [39]. However, the authors did not validate their tool against reference diagnostic methods, limiting its utility. Other studies also suggested that a combination of markers may perform better than a single biomarker [40]. In a different study, however, combining CRP with other markers did not improve CRP's ability to differentiate between bacterial and viral lower respiratory tract infections [41].
High throughput genomic and proteomic studies have been employed to identify infection-specific host gene sets. Although they were useful for novel biomarker discovery, the gene sets often contained a large number of candidates, making them difficult to apply clinically [42][43][44]. Through multi-cohort analysis of these large datasets, smaller gene sets optimized for the diagnosis of bacterial and viral infections were identified later on [45].
Machine learning techniques have been extensively used for disease biomarker discovery, including in infectious diseases. However, they have mostly been applied to individual microbial species or groups of pathogens. The increasing availability of bacteria-human and virus-human PPIs now permits researchers to compare bacterial- and viral-specific infection strategies and identify host proteins that are differentially targeted by these two classes of pathogens. We employed well-known machine learning methods, such as SVM, RF, and DNN, on the available PPI datasets to distinguish between bacteria- and virus-targeted human proteins.
We considered all the updated and comprehensive sets of experimentally validated bacteria-human and virus-human PPIs from PHISTO. We found 1780 human proteins that are common targets of bacteria and viruses. During bacterial and viral infection, these common proteins might help to execute several commonalities, such as immune response patterns, acute onset, and response to antimicrobial agents in humans. Since the primary goal of the current study was to differentiate between bacteria- and virus-targeted human proteins, we excluded these 1780 proteins from our analysis. The proposed method used 1618 bacteria-targeted and 3917 virus-targeted human proteins. To utilize the larger dataset of the two classes, we considered the complete dataset for building the model. For imbalanced datasets, performance measures such as the AUC, MCC, and F1-score are more informative than sensitivity, specificity, and accuracy; we therefore compared the AUC, MCC, and F1-score in all cases. We found that the sequence and gene ontology features performed far better than the network features. The network properties of human proteins were unable to distinguish between bacteria- and virus-targeted human proteins (Table 1), suggesting indistinguishable network feature patterns for the two classes. The majority of frequent GO IDs for bacteria- and virus-targeted human proteins are common (Supplementary Figure S1); therefore, gene ontology features were unable to outperform the sequence features. Among the sequence features, DC achieved better performance than the others, and the combination of AAC, DC, and PAAC features (445 features) achieved the best performance (Table 1). In addition, the feature sets selected by different feature selection techniques also showed poorer performance than this feature set.
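The preference for MCC and the F1-score over accuracy on imbalanced data can be illustrated with a small sketch (my own function names, not the study's code): a classifier that always predicts the majority class reaches 97% accuracy on a 3:97 split, yet both MCC and F1 collapse to zero.

```python
import math

def confusion(y_true, y_pred):
    """Return (TP, TN, FP, FN) for binary labels 0/1."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def f1(tp, tn, fp, fn):
    """Harmonic mean of precision and recall."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient; 0 when any margin is empty."""
    d = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / d if d else 0.0

# Imbalanced toy set: 3 positives, 97 negatives.
# Always predicting 'negative' is 97% accurate, yet MCC = F1 = 0.
tp, tn, fp, fn = confusion([1] * 3 + [0] * 97, [0] * 100)
```

Because MCC uses all four cells of the confusion matrix, it only rewards models that do better than the class prior, which is exactly the property needed when one class dominates.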
We therefore report that the combination of AAC, DC, and PAAC (445 features) is the best feature set for discriminating between bacteria- and virus-targeted human proteins. If the two classes are distinct for true biological reasons, conventional MLTs such as SVM and RF should also perform well, as was indeed observed (Table 1, Figures 2 and 3). The DNN performed well owing to the large number of samples and features. Furthermore, we identified the top 100 human proteins targeted by bacteria and the top 100 targeted by viruses. GO enrichment analysis of these 200 proteins showed a greater number of enriched biological processes for virus-targeted than for bacteria-targeted human proteins (Figure 4). Similarly, we observed a greater number of enriched pathways for virus-targeted than for bacteria-targeted human proteins. These results imply that viruses influence more biological processes and pathways than bacteria. As viruses are totally dependent on the host, they exploit more host machinery than bacteria, and the above results indicate the same. In addition, the majority of the enriched biological processes and pathways were different for bacteria- and virus-targeted human proteins. These functional annotations further validated our method for discriminating between bacteria- and virus-targeted human proteins.
Conclusions
We proposed a computational method to distinguish between bacteria- and virus-targeted human proteins. We employed widely used, state-of-the-art machine learning techniques, such as SVM, RF, and DNN, and integrated important biological information on human proteins, including sequences, networks, and GO, to achieve this goal. The best performance was obtained with the sequence features and the DNN classifier.
We developed the prediction model to maximize the performance measures and to identify the best features for doing so; therefore, we did not use the prediction for future data. However, the proposed model may be utilized for predicting and discriminating the possible interactions of human proteins with bacterial and viral proteins. We identified distinct targets for bacterial and viral infections upon GO and pathway enrichment analysis of highly predicted human proteins. Bacterial targets predominantly included immune response-related genes and transcriptional machinery, while viruses targeted protein translation and mitochondrial energy metabolism. The distinction between bacteria- and virus-targeted human proteins might help to improve infection-specific diagnosis and treatment. In the future, we will examine the differences between RNA and DNA viruses, and between Gram-positive and Gram-negative bacteria, to understand their specific infection strategies.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/pr10020291/s1. Figure S1: Venn diagram of Gene Ontology (GO) IDs of bacteria- and virus-targeted human proteins; Table S1: Bacteria-targeted reviewed human proteins; Table S2: Virus-targeted reviewed human proteins; Table S3: Top GO IDs for bacteria- and virus-targeted human proteins; Table S4: Full table of feature-wise performance measures on bacteria- and virus-targeted human proteins; Table S5: Full table of selected feature-wise performance measures on bacteria- and virus-targeted human proteins; Table S6: Probability scores of the top 100 bacteria-targeted human proteins; Table S7: Probability scores of the top 100 virus-targeted human proteins.
"year": 2022,
"sha1": "c8c233221b5b09f3a7959c01b8a62c152ef0dffe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9717/10/2/291/pdf?version=1643642675",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a6ba13ffe36b9087ce2723bad440304d30f7e831",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
Development of prediction models to select older RA patients with comorbidities for treatment with chronic low-dose glucocorticoids
Abstract Objective To develop prediction models for individual patient harm and benefit outcomes in elderly patients with RA and comorbidities treated with chronic low-dose glucocorticoid therapy or placebo. Methods In the Glucocorticoid Low-dose Outcome in Rheumatoid Arthritis (GLORIA) study, 451 RA patients ≥65 years of age were randomized to 2 years 5 mg/day prednisolone or placebo. Eight prediction models were developed from the dataset in a stepwise procedure based on prior knowledge. The first set of four models disregarded study treatment and examined general predictive factors. The second set of four models was similar but examined the additional role of low-dose prednisolone. In each set, two models focused on harm [the occurrence of one or more adverse events of special interest (AESIs) and the number of AESIs per year) and two on benefit (early clinical response/disease activity and a lack of joint damage progression). Linear and logistic multivariable regression methods with backward selection were used to develop the models. The final models were assessed and internally validated with bootstrapping techniques. Results A few variables were slightly predictive for one of the outcomes in the models, but none were of immediate clinical value. The quality of the prediction models was sufficient and the performance was low to moderate (explained variance 12–15%, area under the curve 0.67–0.69). Conclusion Baseline factors are not helpful in selecting elderly RA patients for treatment with low-dose prednisolone given their low power to predict the chance of benefit or harm. Trial registration https://clinicaltrials.gov; NCT02585258.
Introduction
RA is a systemic, inflammatory disease primarily located in the joints, resulting in pain, joint damage, functional disability and reduced quality of life. Treatment of RA is essential to prevent these outcomes, but the treatment itself may also result in adverse events (AEs) and comorbidity [1].
Rheumatology key messages
• Low-dose prednisolone has strong effects on benefit and harm in RA patients ≥65 years of age.
• Other variables are of little clinical relevance to predict benefit or harm in RA patients.
Current RA treatment strategies are mostly treat-to-target [2] and consist of conventional DMARDs (cDMARDs), biologic DMARDs (bDMARDs), NSAIDs and glucocorticoids (GCs), such as prednisolone, in different doses and combinations [1]. Ideally, treatment strategies are tailored to individual patients, taking into account the respective estimates of the probabilities of risks and benefits [3]. With this knowledge, rheumatologists could be more efficient in selecting the most appropriate treatment and in preventing and promptly managing AEs, thus reducing the disease burden for individuals and society [3].
RA treatment strategies are increasingly targeted on individual patients [4], but this remains difficult [5] because of the lack of individualized treatment guidelines [6,7] and prediction models [3,8,9]. Currently, no prediction models for daily clinical practice are available [10]. In previous studies, a variety of predictive factors for benefit of antirheumatic drugs was found. However, most factors have a low predictive value [8] and they have not always been combined in a prediction model.
In the Glucocorticoid Low-dose Outcome in Rheumatoid Arthritis (GLORIA) study, low-dose prednisolone (5 mg/day) given in addition to background treatment was proven more effective than placebo in reducing disease activity [11,12] and damage progression [13] in a high-risk elderly RA trial population [14]. The co-primary outcome harm, which was expressed as the occurrence of at least one predefined AE of special interest (AESI), was higher for the prednisolone group [14].
Further information is needed to obtain more knowledge about individualized treatment strategies and the use of prediction models in clinical practice. Therefore, the aim of this study was to develop internally validated prediction models from the GLORIA study dataset to determine individual harm and benefit outcomes for elderly RA patients with comorbidities treated with chronic low-dose GCs.
Study design and population
The prediction models for harm and benefit outcomes were developed from the dataset of the 2-year, pragmatic, multicentre, investigator-initiated, double-blind, placebo-controlled, randomized GLORIA study. The GLORIA study was approved by the medical ethical committee of VU University Medical Center and all patients provided written informed consent. The study was executed according to Good Clinical Practice (GCP) guidelines and the Declaration of Helsinki.
The study population consisted of 451 patients with RA [15,16] with a 28-joint DAS (DAS28) ≥2.60 and age ≥65 years. Patients were recruited from 28 hospitals in seven European countries between June 2016 and December 2018. Patients were randomized to receive 5 mg/day prednisolone or matching placebo. All co-medications, except for oral GCs, were allowed. Details about the study have been reported previously [14,17].
Models with an outcome at 2 years were developed in the dataset of the modified intention-to-treat population (n = 444). This comprised patients who took at least one capsule of study medication and had at least a baseline and follow-up assessment. Models with an outcome at 3 months were developed in the dataset of the per-protocol population (n = 304). This comprised patients from the above population who had complete data, ≥80% adherence, no modification of antirheumatic treatment and no protocol violations in the first 3 months of the study.
Outcomes
Eight prediction models were developed (Fig. 1). The first set of four models disregarded study treatment and examined general predictive factors. The second set of four models was similar but examined the additional role of treatment (low-dose prednisolone). Each set of four comprised two models for harm (the occurrence of one or more AESIs after 2 years and the number of AESIs per year) and two for benefit [early response of disease activity after 3 months (EULAR good response [18] or a 50% improvement in the ACR score [19]) and a lack of joint damage progression after 2 years (i.e. less than one point of progression over 2 years)]. Joint damage progression was measured with the Sharp-van der Heijde score [20]. AESIs included serious adverse events (SAEs) according to the GCP definition and the following ('other AESIs'): any AE (except worsening of disease) leading to discontinuation; myocardial infarction, cerebrovascular or peripheral arterial vascular event; newly occurring hypertension, diabetes, infection, cataract or glaucoma requiring treatment; or symptomatic bone fracture.
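As an illustration of the benefit outcome, the EULAR good-response criterion can be evaluated from two DAS28 measurements. This is a minimal sketch assuming the standard EULAR cut-offs (improvement greater than 1.2 and an attained DAS28 of at most 3.2); the function name is my own, and the alternative ACR50 criterion is not shown.

```python
def eular_good_response(das28_baseline, das28_followup):
    """EULAR 'good' response: DAS28 improvement of more than 1.2
    AND an attained DAS28 of at most 3.2 (low disease activity)."""
    improvement = das28_baseline - das28_followup
    return improvement > 1.2 and das28_followup <= 3.2

# A patient going from DAS28 4.8 to 3.0 qualifies; 4.0 to 3.5 does not
# (attained DAS28 too high and improvement too small).
```

Note that both conditions must hold: a large improvement that still leaves the patient above DAS28 3.2, or a low attained score reached by a small improvement, does not count as a good response.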
AESIs were recorded during study visits and adjudicated without knowledge of treatment allocation. The number of AESIs per year was calculated by dividing the total number of AESIs during the study by the study duration.
Predictors
At baseline, several clinical measurements were performed and questionnaires regarding health and quality of life were collected. These variables were used as possible predictors. To limit excessive statistical testing and false-positive results, the 38 possible predictors were first grouped into five predictor sets based on prior knowledge and examined per set. The sets were termed, for example, 'personal' (e.g. age, gender) and 'disease' (e.g. disease activity, damage) (full list in Supplementary Data S1, available at Rheumatology online). These sets were applied in all models.
Two stratification factors applied in the study (start/switch antirheumatic drugs at baseline and prior use of GCs) were also assessed as possible predictors. Treatment centre was the third stratification factor, but this factor was not included for several reasons. First, there was large variability in the number of patients per centre. Second, it was not a significant random factor in the main analysis of the GLORIA study due to the small cluster effect. Finally, prediction models with a random factor quickly become too complex and hard to interpret.
Missing data
Missing value analysis was used to examine the amount and patterns of missing data in the possible predictors and outcomes. Based on this analysis, we assumed that data were missing at random. Missing data were imputed with Bayesian single stochastic regression imputation (predictive mean matching) [21] for each of the first four models. We decided to include a maximum of 25 variables in the imputation model, i.e. the outcome measure, all variables with missing data and the variables with the highest correlation to the variables with missing data. These five variables were DAS28, gender, count of active comorbidities, history of RA surgery and adherence measured with pill count. For the prediction model with the outcome 'early response' we used the DAS28 values that were imputed with single stochastic imputation by chained equations in the main GLORIA analysis [14].
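The core idea of predictive mean matching can be conveyed with a deliberately simplified sketch (my own function names, not the study's implementation, which is Bayesian, drawing regression coefficients from their posterior rather than using point estimates, and conditions on up to 25 variables rather than one): predict the missing value from a regression, then copy the observed value of a randomly chosen 'donor' among the complete cases with the closest predictions.

```python
import random
import statistics

def pmm_impute(x_obs, y_obs, x_miss, k=3, seed=0):
    """Predictive mean matching with a single predictor x:
    fit y ~ x on complete cases, then for each case with missing y,
    pick a random donor among the k complete cases whose predicted
    y is closest, and copy that donor's *observed* y."""
    rng = random.Random(seed)
    mx, my = statistics.fmean(x_obs), statistics.fmean(y_obs)
    b = (sum((x - mx) * (y - my) for x, y in zip(x_obs, y_obs))
         / sum((x - mx) ** 2 for x in x_obs))
    a = my - b * mx
    pred_obs = [a + b * x for x in x_obs]          # predictions for complete cases
    imputed = []
    for x in x_miss:
        p = a + b * x                              # prediction for the missing case
        donors = sorted(range(len(x_obs)),
                        key=lambda i: abs(pred_obs[i] - p))[:k]
        imputed.append(y_obs[rng.choice(donors)])
    return imputed
```

Because every imputed value is an actually observed value, PMM cannot produce implausible imputations such as negative counts or out-of-range scores, which is one reason it is popular for clinical data.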
Statistical analyses
Linear and logistic multivariable regression models were used to develop the models. The strategy to develop models 1-4 (disregarding treatment) was as follows: starting with the first model (occurrence of one or more AESIs), the variables in the first predictor set ('personal') were tested for significance (P < 0.05); this was repeated for the other predictor sets. Variables found significant in a set were further tested together for significance (P < 0.05) to build the final model. These steps were repeated for models 2-4. We chose backward selection as the most practical method given the large number of possible predictors. The strategy to develop models 5-8 (including the effect of treatment) was as follows: starting with the first model (model 5; the occurrence of one or more AESIs including study treatment), interaction terms were made with all variables of the first predictor set ('personal') that were significant in model 1. Then, backward selection with these main effects (including the main effect of treatment) and interactions was run again and significant effects were retained (P < 0.05 for main effects, P < 0.10 for interactions; in case of a significant interaction, the corresponding main effects always remained in the model regardless of significance). The procedure was repeated for the other predictor sets. Variables and interactions found significant in a set were further tested for significance to build the final model. These steps were repeated for the other models. All prediction factors were measured at baseline, unless indicated otherwise.
[Figure 1. Overview of main, exploratory and post hoc prediction models.]
The strategy for the third ('comorbidities') and fourth ('medication') predictor sets was slightly different. The variables 'count of active comorbidities' and 'Rheumatic Disease Comorbidity Index (RDCI)' in the 'comorbidities' predictor set were probably highly correlated. Therefore, the tests for significance in the 'comorbidities' predictor set were done for all variables of the set plus 'count of active comorbidities' but excluding 'RDCI', and then again for all variables of the set plus 'RDCI' but excluding 'count of active comorbidities'. The same strategy was applied to the 'medication' predictor set and the variables 'medication adherence, measured with pill count' and 'medication adherence, measured with the MMAS-8 questionnaire'. We compared the P-values of the variables in the differently composed predictor sets and examined whether the significance of the variables differed. Based on this information, we decided which of the collinear variables of the 'comorbidities' and 'medication' predictor sets to use. The required sample size for the models was calculated with the 'pmsampsize' package [22] in R (R Foundation for Statistical Computing, Vienna, Austria).
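The backward-selection idea can be sketched as follows. This is illustrative only: the study tested P-values from multivariable regression, whereas this dependency-free toy version drops the predictor whose removal increases the residual sum of squares least, stopping once the relative increase exceeds a tolerance (a crude stand-in for the P ≥ 0.05 rule); the function names are my own.

```python
def ols_sse(X, y):
    """Sum of squared errors of the least-squares fit of y on X
    (an intercept column is added automatically)."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    # Normal equations A b = c, solved by Gaussian elimination.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]
        c[i], c[piv] = c[piv], c[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            A[k] = [a - f * b for a, b in zip(A[k], A[i])]
            c[k] -= f * c[i]
    b = [0.0] * p
    for i in reversed(range(p)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    return sum((yi - sum(bj * rj for bj, rj in zip(b, r))) ** 2
               for r, yi in zip(rows, y))

def backward_select(X, y, names, tol=0.05):
    """Repeatedly drop the predictor whose removal raises the SSE least,
    as long as the relative increase stays below tol (a crude proxy
    for the P-value test used in the paper)."""
    keep = list(range(len(names)))
    while len(keep) > 1:
        full = ols_sse([[row[i] for i in keep] for row in X], y)
        best_j = min(keep, key=lambda j: ols_sse(
            [[row[i] for i in keep if i != j] for row in X], y))
        best = ols_sse([[row[i] for i in keep if i != best_j] for row in X], y)
        if full > 1e-9 and (best - full) / full > tol:
            break  # every remaining predictor is 'significant'
        keep.remove(best_j)
    return [names[i] for i in keep]
```

The grouped per-set screening used in the paper would simply run this procedure once per predictor set and then once more on the pooled survivors.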
Exploratory analyses
As exploratory analyses, eight prediction models with AEs or infection as outcome were developed (Fig. 1). Again, the first four models disregarded treatment allocation to examine general predictive factors. The second set of four models was similar but examined the additional role of study treatment. For each set of four models there were two for harm in general (occurrence of one or more AEs after 2 years and the number of AEs after 2 years) and two for infections (occurrence of one or more infections after 2 years and the number of infections after 2 years).
Post hoc analyses
As post hoc analysis ( Fig. 1), the outlier in the outcome AESI rate (52.1) was adjusted to the second highest value (19.2) because 52.1 was an unrealistically high value due to a patient with a few AESIs while the study participation was only 1 week. The two outliers (111% and 154%) in 'medication adherence measured with pill count' were adjusted to 110% [23].
To increase insight, we also performed analyses stratified for treatment in models where interaction terms proved significant (Supplementary Data S3, available at Rheumatology online).
Performance of models
The performance of the final models was assessed with the explained variance (Nagelkerke's R2), calibration (Hosmer-Lemeshow test; P ≥ 0.05 indicates a good model fit) and the amount of discrimination [concordance index (C-index), equal to the area under the receiver operating characteristic (ROC) curve] for the models with a dichotomous outcome. For the models with a continuous outcome, R2 was calculated to assess the quality of the models.
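For reference, Nagelkerke's R2 can be computed directly from the log-likelihoods of the intercept-only and fitted logistic models; this is the standard formula (the function name is my own), which rescales the Cox-Snell R2 so that a perfect model reaches 1.

```python
import math

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke's R^2: the Cox-Snell R^2 divided by its maximum
    attainable value for a sample of size n."""
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)
    max_r2 = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell / max_r2

# With n = 100 balanced outcomes, ll_null = 100 * ln(0.5); a perfect
# model (ll_model = 0) gives R^2 = 1, and ll_model = ll_null gives 0.
```

This rescaling is why Nagelkerke's R2, unlike the raw Cox-Snell version, can span the full 0-1 range for binary outcomes and is reported as the 'explained variance' in this paper.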
Internal validation
The final models were internally validated with the 'validate' function in R [24]. Each model was bootstrapped 250 times [25] and backward selection was used to run the models. The bootstrap-corrected C-index (AUC), bootstrap-corrected explained variance (R 2 ) and calibration slope (shrinkage factor) were reported to indicate the extent of optimism of each model. SPSS version 26 was used to impute missing data, perform the model exercise and test the quality of the models. R version 4.0.3 was used to calculate the sample size and to internally validate the models.
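The optimism-correction logic behind this internal validation (in the study, R's 'validate' function with backward selection repeated inside each resample) can be sketched as follows. This simplified version, with my own function names and without the inner variable selection, refits a user-supplied model in each bootstrap sample and subtracts the average optimism from the apparent AUC.

```python
import random

def auc(y, scores):
    """Rank-based AUC: probability that a random case outscores
    a random control (ties count half)."""
    pos = [s for yi, s in zip(y, scores) if yi == 1]
    neg = [s for yi, s in zip(y, scores) if yi == 0]
    if not pos or not neg:
        return 0.5
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def optimism_corrected_auc(x, y, fit, predict, n_boot=250, seed=1):
    """Harrell's bootstrap: optimism = mean over resamples of
    (AUC of the resample's model on the resample) minus
    (AUC of that model on the original data); the corrected
    AUC is the apparent AUC minus this optimism."""
    rng = random.Random(seed)
    apparent = auc(y, predict(fit(x, y), x))
    optimism, used = 0.0, 0
    for _ in range(n_boot):
        idx = [rng.randrange(len(y)) for _ in y]
        xb, yb = [x[i] for i in idx], [y[i] for i in idx]
        if len(set(yb)) < 2:
            continue  # resample lost one class; skip it
        mb = fit(xb, yb)
        optimism += auc(yb, predict(mb, xb)) - auc(y, predict(mb, x))
        used += 1
    return apparent - (optimism / used if used else 0.0)
```

The same recipe yields the bootstrap-corrected R2 and the calibration slope when the AUC is replaced by the corresponding statistic, which is how the shrinkage factors in Table 4 arise.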
Results
In total, 444 of 451 randomized patients were included in the analyses because 2 patients never started study medication and 5 patients discontinued before the first follow-up. About two-thirds of the patients completed the 2-year trial. Reasons for discontinuation were 'other' reasons [including coronavirus disease 2019 (COVID-19)-related access issues and unwillingness to continue the trial, 20%], AEs (14%) and lack of efficacy of the study medication (4%) [14].
Data on predictors were quite complete except for anti-CCP status (13% missing; Table 1). For outcome, many patients had missing values for joint damage progression (42%), as this was measured only at baseline and 2 years.
The required sample size was 438 patients if the model contained 20 variables, R2 was set at 0.3 and the shrinkage factor was set at 0.8 [22].
A few variables were found to be predictive for the outcome in one of the models disregarding treatment, and likewise in the models that included the effect of treatment, with partial overlap (Table 2, Fig. 2). The results of the post hoc analyses for the models with a benefit outcome and for the models predicting the occurrence of an AESI did not differ from the original analyses. For the models predicting the number of AESIs per year, only the results of the post hoc analyses are shown, because these were considered more reliable than the results including the extremely high outliers. The interpretation of the predictors in all final models (Table 2) and their relationship with the outcomes are presented in Fig. 2 (see also Supplementary Data S2, available at Rheumatology online).
Performance of models
The performance of the models was sufficient (Table 3). For example, for the model with the outcome having one or more AESI including treatment interaction, 12% of the variance was explained by the predictive variables in the model. The AUC was 0.67, which means that the ability to discriminate between patients with and without an AESI was poor.
Internal validation
The models were internally validated and their performance was reasonable (Table 4). For example, for the model predicting which patients have one or more AESIs after 2 years (including treatment interaction), the C-index was 0.64, meaning that the model correctly discriminates between a patient who develops an AESI and one who does not in 64% of such patient pairs. The explained variance (R2) was 0.09, meaning that 9% of the variance in the outcome can be explained by the predictive factors in the model. The calibration slope (shrinkage factor) was 0.87, indicating that some overoptimism is to be expected if the prediction rule is applied to a new RA population with the same characteristics.
Table 2 footnotes: a The outcome early response of disease activity after 3 months was calculated for the per-protocol population (n = 304; prednisolone, n = 156; placebo, n = 148). b Lower education level is defined as primary or secondary school; higher education level is defined as higher education (non-university and university). c A higher score means a worse outcome. f The OR is probably artificially inflated by the small number of observations of patients who had no joint damage, zero previous comorbidities and were previously treated with biologics. g The models including the variables that were found to be predictive in the models with interaction with study treatment, stratified for prednisolone and placebo (without interaction terms), can be found in Supplementary Data S3, available at Rheumatology online. h Joint damage: OR refers to joint damage (>0.5 point) at baseline. i Medication adherence (pill count) at 3 months: OR refers to 1% more medication adherence (measured with pill count) after 3 months of study treatment. j Difficulty in daily functioning (HAQ) score: OR refers to a change of one point in HAQ score (range 0-3). k Utility (quality of life score, EQ-5D): OR refers to a change of one point in utility score (range −0.446 to 1). l Joint damage score: OR refers to a change of one point in joint damage score (range 0-448). OR: odds ratio; EQ-5D: European Quality of Life 5-Dimensions questionnaire.

Exploratory analyses

In the exploratory analyses, for the model with the AE rate as an outcome, a change of antirheumatic treatment at baseline appeared to be predictive (Supplementary Data S4, available at Rheumatology online). Serious infections were rare, with 35 patients reporting a serious infection, thus we did not develop a model with serious infection as an outcome.
Figure 2. Interpretation of predictors in the harm and benefit models disregarding and examining the effect of study treatment (i.e. low-dose prednisolone). For the benefit model, only the model disregarding the effect of study treatment is shown (panel C), because no effect of study treatment was found. (A) Baseline predictive factors for harm, disregarding the effect of prednisolone (red: an increase in harm; green: a decrease in harm; white: the variable is not included in the model; ?: a counterintuitive relationship). (B) Baseline predictive factors for the harm prediction model and the interaction with prednisolone, with the addition of the variables that were found to be predictive in the models disregarding the effect of prednisolone (red: an increase in harm; green: a decrease in harm; white: the variable is not included in the model; ?: a counterintuitive relationship). (C) Baseline predictive factors for the benefit prediction model, disregarding the effect of prednisolone (red: less benefit; green: more benefit; white: the variable is not included in the model; ?: a counterintuitive relationship). Full colour figure is available at Rheumatology online. * Neutralized means that the addition of prednisolone to the model counteracted the adverse effect of the baseline predictive factor. In other words, more joint damage is associated with an increased likelihood of at least one AESI, but this increase is gone after the addition of prednisolone to the model. Similarly, no change of antirheumatic treatment at baseline is associated with a greater number of AESIs, but this increase is gone after the addition of prednisolone to the model. ** More adherence was associated with a lower number of AESIs; the addition of prednisolone to the model partially counteracted this effect. EQ-5D: European Quality of Life 5-Dimensions questionnaire.

Linda Hartman et al.

Discussion

In the many models studied in the GLORIA study dataset, apart from study treatment, we found only a few variables to be predictive for the outcome. The relationships of these factors with the outcomes were weak, sometimes counterintuitive and thus of little clinical relevance. In other words, we were unable to build a useful model to identify patients at greater risk of benefit or harm of low-dose prednisolone treatment. Previous literature is scarce, with only a few studies with data of limited quality and generalizability. Variables to predict the occurrence or number of AESIs have not, as far as we know, been examined in other studies. A few studies found that the infection risk increased after GC [26,27] or biologic treatment [26,28]. These findings are in line with our finding that low-dose prednisolone increased the number of infections and that prior biologic treatment was slightly predictive for this outcome. Current bDMARD treatment was not found to be predictive, probably because of the small number of patients treated. In previous studies [27,29,30], the Rheumatoid Arthritis Observation of Biologic Therapy risk score [26] and serious infections (with variable definitions) were strongly associated with increasing age, comorbidities, higher GC dosages and prior serious infections. We did not specifically ask about prior serious infections in the medical history, so these are most likely underreported. Nevertheless, it is likely that such a history will increase the risk of subsequent infections in our population.
Our observation that difficulty in daily functioning is associated with a slightly lower chance of early response is in line with previous findings for anti-TNF therapy [31] and for sustained remission [32]. In previous studies, a variety of other variables were found to predict treatment response. These variables include the presence of comorbidities [33], male [34,35] or female [36] gender, older age [34], higher tender [34] or swollen [32] joint counts, a lower number of erosions [32], RF [34] or anti-CCP [34] positivity, obesity [37,38], smoking [39], shorter disease duration [36], methotrexate treatment [31], prior DMARD use [36] and lower baseline disease activity [36,40]. We also assessed most of these variables, but they were not predictive for early response in our dataset.
Reasons for differences between our study and previous studies regarding predictive factors could be the higher mean age (73 years compared with 50-55 years in most studies), and consequently more comorbidities and frailty, a longer disease duration and heterogeneity in antirheumatic treatment between studies. Frailty was indirectly captured by prior comorbidities and questionnaires about health and daily functioning. The absence of age itself as a predictor is best explained by the limited spread, since all patients were ≥65 years of age.
Performance of the models
The performance of the models including the effect of study treatment was sufficient, with explained variances ranging from 12 to 15% and AUC values around 0.68. A possible explanation for the moderate performance is the limited number of predictors in all models. As in most studies, the explained variance was low. This means that the predictive factors only explain a small part of the variance between patients.
A few other studies have examined the performance of their prediction models in an RA population. In two studies, models to predict damage progression were developed with an AUC of 0.77 [10] and 0.87 [42], somewhat better than the 0.69 in our study. However, in two other models the AUC was worse: 0.60 [48] and 0.61 [49]. Guillemin et al. [43] developed a model with an explained variance of 77%, which was high compared with the explained variance of 13% in our model. This might mean that we missed some variables that are predictive for joint damage. However, three of the five predictors in that study were also assessed as predictors in our model. An additional explanation might be the overall low rate of damage progression, limiting the power to detect associations.
Regarding early response, the AUC was 0.62 in a model to predict treatment response [40] and 0.66 in a model to predict remission [32] compared with 0.69 in our model. For the model with occurrence or number of AESIs as the outcome, no performance measurements from other studies are available.
The limitations of most previous studies are that the quality of the prediction models was not assessed and that the models were not internally or externally validated.
Strengths and limitations
A unique characteristic of our study is that the prediction models apply specifically to an elderly RA population, while previous studies targeted their models on younger patients. Another strength is that we assessed a high number of possible predictors in a pre-planned way to limit the amount of statistical testing. Moreover, we did not focus on one outcome, but developed models with four different outcomes that are all relevant for RA patients. In addition, the pragmatic design of the trial with permission to use all antirheumatic medication leads to a situation that is similar to clinical practice and thus high generalizability of the findings. A final strength of our study is that we performed an internal validation with bootstrapping techniques to correct for optimism.
The most important limitation is of course the moderate performance of the models, with variables that are of questionable clinical relevance. Another limitation is that we had missing data at the end of the study, mainly in the outcome joint damage progression, because of premature discontinuation and missed assessments due to COVID-19. This was addressed through imputation. A final limitation is that we were unable to test the generalizability of our models by external validation. We only internally validated our models with bootstrapping techniques. This method is not as good as external validation, but internal validation is seen as a good alternative [50].
Conclusion
We previously reported that low-dose prednisolone has strong effects in RA patients ≥65 years of age, with a favourable balance of benefit and harm. In the current study we found little or no evidence to suggest that other factors are important to predict risks and benefits of such treatment in elderly RA patients.
Supplementary data
Supplementary data are available at Rheumatology online.
Data availability statement
The data underlying this article will be shared on reasonable request to the corresponding author.
New insights into the identity of Discolaimium dubium Das, Khan and Loof, 1969 (Dorylaimida) as derived from its morphological and molecular characterization, with the proposal of its transference to Aporcella Andrássy, 2002
Abstract Three Iranian populations of Discolaimium dubium are studied, including their morphological and morphometric characterization, molecular analysis (LSU-rDNA) and the description of the male for the first time. For comparative purposes, this species is distinguished by its 1.10 to 1.40 mm long body, lip region offset by constriction and 8 to 10 µm wide, odontostyle 7.5 to 10.5 µm long with aperture occupying 59 to 76% of total length, neck 300 to 362 µm, pharyngeal expansion 127 to 181 µm long or 44 to 46% of the total neck length, uterus simple and 38 to 53 µm or 1.2 to 1.5 times the body diameter long, V = 52 to 58, tail conical (32-38 µm, c = 32-43, c′ = 1.6-2.0) with rounded tip and a hyaline portion occupying 14 to 15% of tail length, spicules 30 to 32 µm long, and two or three widely spaced ventromedian supplements with hiatus. Both morphological and molecular data support its belonging to the genus Aporcella, whose monophyly is confirmed and to which the species is formally transferred as A. dubia (Das et al., 1969) comb. n.
In their revision of the genus Discolaimoides Heyns, 1963, Das et al. (1969) described Discolaimium dubium based on the holotype plus 25 females collected from several European enclaves in Italy, the Netherlands and Switzerland. Later, it was recorded from the Netherlands again (Bongers, 1988), Spain (Liébanas et al., 2003; Peña-Santiago et al., 2005), Hungary (Andrássy, 2009) and India (Sharma, 2011). Thus, it seems to be a component of the nematode fauna inhabiting northern territories, widely spread out in Europe.
When describing it, Das et al. (1969) characterized D. dubium by having, among other features, a 1.06 to 1.33 mm long body, lateral chord without conspicuous lateral organs, lip region slightly expanded, odontostyle 8 to 10 µm long with aperture occupying slightly more than one-half of total length, anterior portion of pharynx muscular and enlarging very gradually, absence of pars refringens vaginae, and conical tail. This was a peculiar combination of traits; therefore, the authors raised a doubt about its identity and named it dubium, that is, dubious or doubtful (cf. Andrássy, 2009), and pointed out (p. 486) that '… The systematic position of this species is uncertain. Because of general body shape, shape of lips, structure of cuticle and vagina it is included in Discolaimium, but it differs from all species of this genus in the thick anterior part of the esophagus, the anterior position of DO and the absence of conspicuous lateral organs. For the moment, however, it cannot be placed elsewhere; the unsclerotized vagina keeps it outside Eudorylaimus'. Patil and Khan (1982) were aware of the intricate taxonomy of D. dubium and solved the matter with the proposal of the new genus Neodiscolaimium, with N. dubium as its type and only species. Nevertheless, this action was not followed by other authors (Andrássy, 1990, 2009; Jairajpuri and Ahmad, 1992), who regarded Neodiscolaimium as a junior synonym of Discolaimium Thorne, 1939.
A general survey conducted during recent years to explore the dorylaimid fauna of Iran resulted in the recovery of, among other forms, several populations identified as belonging to D. dubium. Their morphological and molecular study should elucidate the evolutionary relationships of this species and provide new information to better understand the phylogeny of these dorylaimid taxa.
Materials and methods
Extraction and processing of nematodes

Several soil samples were collected from depths ranging from 10 to 40 cm, in the active plant root zone, in East Azarbaijan province, northwest Iran, during the period 2016 to 2017. Nematodes were extracted from the soil samples using the Whitehead and Heming (1965) method, transferred to anhydrous glycerin according to De Grisse (1969), and mounted on glass slides for handling.
Light microscopy
Measurements were made using a drawing tube attached to an Olympus BX-41 light microscope. The digital images were prepared using a DP50 digital camera attached to the same microscope with differential interference contrast (DIC). Morphometrics included Demanian indices and the usual measurements and ratios. Line illustrations were prepared using CorelDRAW ® software version 12. Photographs were edited using Adobe ® Photoshop ® CS software.
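The "Demanian indices" mentioned above are the classic de Man ratios used throughout the descriptions that follow (a, b, c, c′, V). As a sketch, they can be computed from the raw measurements like this; the function and the illustrative measurement values are ours, chosen to fall inside the ranges reported for this species.

```python
def demanian_indices(body_len, max_width, neck_len, tail_len,
                     anal_body_diam, vulva_dist):
    """Classic de Man ratios used in nematode descriptions:
    a  = body length / greatest body width
    b  = body length / neck (pharynx) length
    c  = body length / tail length
    c' = tail length / anal body diameter
    V  = % of body length from anterior end to vulva
    All lengths must be in the same unit (here micrometres)."""
    return {
        "a":  body_len / max_width,
        "b":  body_len / neck_len,
        "c":  body_len / tail_len,
        "c'": tail_len / anal_body_diam,
        "V":  100.0 * vulva_dist / body_len,
    }

# Illustrative female consistent with the ranges reported for this
# material: 1.25 mm body, neck 330 um, tail 35 um, vulva at 690 um.
idx = demanian_indices(body_len=1250, max_width=28, neck_len=330,
                       tail_len=35, anal_body_diam=19, vulva_dist=690)
```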
DNA extraction, PCR, and sequencing
For the molecular phylogenetic studies, DNA samples were extracted from one or two females selected from each population, studied individually on temporary slides, placed on a clean slide containing a drop of distilled water or worm lysis buffer (WLB), and crushed with a sterilized scalpel. Then, the suspension was transferred to an Eppendorf tube containing 25.65 μl ddH2O, 2.85 μl 10 × PCR buffer and 1.5 μl proteinase K (600 μg/ml) (Promega, Benelux, the Netherlands). The tubes were incubated at −80°C (1 hr), 65°C (1 hr) and 95°C (15 min). Each sample was regarded as an independent DNA sample and stored at −20°C until used as polymerase chain reaction (PCR) template. Primers for 28S rDNA D2-D3 amplification/sequencing were forward primer D2A (5´-ACAAGTACCGTGAGGGAAAGTTG-3´) and reverse primer D3B (5´-TCGGAAGGAACCAGCTACTA-3´) (Nunn, 1992). The 25 μl PCR reaction mixture contained 10 μl ddH2O, 12.5 μl PCR master mix (Ampliqon, Denmark), 0.75 μl of each of the forward and reverse primers and 1 μl of DNA template. PCR was carried out using a BIO-RAD thermocycler in accordance with Archidona-Yuste et al. (2016). The thermal cycling program for amplification was as follows: denaturation at 94°C for 2 min; 35 cycles of denaturation at 94°C for 30 sec, annealing of primers at 55°C for 45 sec and extension at 72°C for 3 min; followed by a final elongation step at 72°C for 10 min. The PCR products were purified and sent for sequencing to Bioneer Company, South Korea. The newly obtained sequences were submitted to the GenBank database under the accession numbers MT079121, MT079122 and MT079123, as indicated on the phylogenetic tree (Fig. 4).
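As a quick sanity check on the primer pair quoted above, the sequences can be handled as plain strings; the GC-content and Wallace-rule melting-temperature helpers below are our additions (the Wallace 2+4 rule is only indicative at these primer lengths), not part of the authors' protocol.

```python
# The two 28S D2-D3 primers quoted in the text, as plain strings.
D2A = "ACAAGTACCGTGAGGGAAAGTTG"   # forward
D3B = "TCGGAAGGAACCAGCTACTA"      # reverse

def gc_fraction(seq: str) -> float:
    """Fraction of G/C bases, a quick sanity check on a primer."""
    s = seq.upper()
    return sum(s.count(b) for b in "GC") / len(s)

def wallace_tm(seq: str) -> float:
    """Rough melting temperature by the Wallace (2+4) rule:
    2 degC per A/T base plus 4 degC per G/C base."""
    s = seq.upper()
    gc = sum(s.count(b) for b in "GC")
    return 2.0 * (len(s) - gc) + 4.0 * gc
```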
Phylogenetic analyses
The Basic Local Alignment Search Tool (BLAST) homology search program was used to compare the newly generated sequences with other available sequences in the GenBank database. The newly generated sequences were aligned with other segments of 28S rDNA gene sequences retrieved from the database using MEGA6 software (Tamura et al., 2013). Paravulvus hartingii (de Man, 1880; Heyns, 1968) (AY593062) was chosen as outgroup. Bayesian analysis (BI) was performed with MrBayes 3.1.2 (Ronquist and Huelsenbeck, 2003). The best-fit model of nucleotide substitution used for the phylogenetic analysis was selected using MrModeltest 2.3 (Nylander, 2004) with the Akaike-supported model in conjunction with PAUP* v4.0b10 (Swofford, 2003). The BI analysis of the 28S rDNA gene was run under the GTR + I + G model, i.e. the general time-reversible model with invariable sites and a gamma-shaped rate distribution. After discarding burn-in samples and evaluating convergence, the remaining samples were retained for further analyses. Posterior probabilities (PP) are given on appropriate clades. The tree was visualized and saved using the program
Adult
Slender to very slender (a = 39-49) nematodes of medium size, 1.10 to 1.40 mm long. Body cylindrical, tapering toward both ends, but more so toward the posterior extremity as the tail is conical. Upon fixation, habitus regularly curved ventrad, to an open C shape in females and a nearly J shape in males. Cuticle two-layered, 1 µm thick at anterior region, 1.5 to 2.5 µm in mid-body and 2.5 to 3.5 µm on tail, consisting of a thin outer layer bearing fine but conspicuous transverse striation, and a thicker inner layer, more appreciable at caudal region. Lateral chord 7 to 10 µm broad, occupying 21 to 28% of mid-body diameter, lacking any differentiation. Lateral pores obscure. Lip region somewhat truncate anteriorly, slightly expanded (but not discoid), that is, slightly wider than the adjoining body, 2.6 to 3.3 times as wide as high and ca one-third (26-37%) of body diameter at neck base; lips mostly amalgamated, with weakly protruding papillae. Amphidial fovea cup-like, its aperture occupying 4.5 to 6 µm or more than one-half (50-63%) of lip region diameter. Cheilostom comparatively short and broad, with thick walls. Odontostyle relatively short and robust, 4.1 to 4.6 times as long as wide, nearly equal (0.9-1.0 times) to the lip region diameter, and 0.66 to 0.76% of the total body length; aperture large, 5.5 to 6.5 µm long, occupying more than one-half (59-76%) of the odontostyle length. Guiding ring simple, but conspicuous, situated at 4.5 to 5.5 µm or 0.5 to 0.6 times the lip region diameter from the anterior end. Odontophore rod-like, 1.7 to 2.0 times the odontostyle length, lacking any differentiation. Pharynx entirely muscular, consisting of a slender section gradually enlarging into the basal expansion, which is 7.7 to 9.5 times as long as wide, 4.5 to 5.5 times the body diameter at neck base, and occupies 44 to 46% of the total neck length; gland nuclei located as follows (n = 2): DO = 58, 60; DN = 63; S1N1 = 72, 75; S1N2 = ?82; S2N = 88, 90. Pharyngo-intestinal junction with a rounded conoid cardia 5.5-9.5 × 8.5-12.5 µm, surrounded by intestinal tissue. Tail conical, with rounded tip, ventrally nearly straight, dorsally regularly convex; inner core not reaching the tail tip, therefore a hyaline portion is present, occupying 5 to 5.5 µm or 14 to 15% of tail length; caudal pores two pairs, at the middle of tail, one nearly dorsal, the other subdorsal.
Female
Genital system didelphic-amphidelphic, with both branches equally and moderately developed, the anterior 111 to 203 µm long or 9 to 16% of body length, the posterior 108 to 202 µm or 8 to 17% of body length. Ovaries reflexed, relatively small, often not reaching the oviduct-uterus junction, the anterior 52 to 74 µm, the posterior 54 to 90 µm long. Oviduct 44 to 74 µm or 1.3 to 2.1 times the body diameter long, consisting of a slender distal portion made of prismatic cells and a moderately developed pars dilatata. A marked sphincter separates oviduct and uterus. Uterus a simple, tube-like structure, 38 to 53 µm or 1.2 to 1.6 times the body diameter long. Vagina extending inwards 11 to 15 µm or ca two-fifths (33-43%) of body diameter: pars proximalis 7-10 × 8-11 µm, with slightly convergent walls surrounded by weak circular musculature and pars distalis 3.5 to 4.5 µm long. Vulva a somewhat posterior, transverse slit. Prerectum 3.0 to 6.6, rectum 1.2 to 1.5 times the anal body diameter long.
Male
Genital system diorchic, with opposed testes. In addition to the adcloacal pair, located at 8 to 9 µm from the cloacal aperture, there are two or three ventromedian supplements, widely spaced 13.5 to 16 µm apart, the most posterior of them situated far from the adcloacal pair, at 59 to 79 µm. Spicules dorylaimid, regularly curved ventrad and relatively robust, 3.9 to 4.9 times as long as wide; head small, 2.5 to 3 μm long, occupying 8 to 10% of total spicule length, with its dorsal side slightly longer than the ventral one; ventral side bearing a visible hump and hollow; median piece occupying 85 to 87% of spicule length. Lateral guiding pieces 8.5 to 9 µm long. Prerectum 2.4 to 4.5, cloaca 1.4 to 1.5 times body diameter at level of cloacal aperture.
Molecular characterization
Three D2-D3 28S rDNA sequences were obtained, ca 800 bp long. Their analysis has allowed the elucidation of the evolutionary relationships of the species. The phylogenetic results are presented in Fig. 4.
Discussion
Comparison of Iranian specimens with type material of D. dubium and its closely related species from the genus Aporcella

The description of D. dubium by Das et al. (1969) was not excessively detailed, but the available information is enough for comparative purposes. Table 1 shows that the most relevant morphometrics of the Iranian nematodes and the type material (consisting of specimens from at least three European enclaves, see above) are nearly identical, with totally coincident or widely overlapping ranges. The general description, in particular the morphology of the pharynx, genital system and caudal region, is identical as well. Thus, there is no doubt that the Iranian material belongs to the species. The material described here therefore confirms the original description and provides an updated morphometric and morphological characterization of the species. Morphometrical characters of the Iranian population of D. dubium and closely related species in the genus Aporcella are shown in Table 2.

Figure 4. Discolaimium dubium Das et al., 1969: Bayesian 50% majority rule consensus tree as inferred from D2-D3 expansion segments of 28S rDNA sequence alignments under the GTR + G + I model. Posterior probabilities are given for appropriate clades. Newly obtained sequences are indicated by bold letters.
Proposal of a new identity for D. dubium
Since its original description, D. dubium has been regarded as an atypical species within the genus Discolaimium (Das et al., 1969; Patil and Khan, 1982). This study of Iranian specimens confirms this opinion, as several traits (pharynx entirely muscular and enlarging gradually, anterior position of DO, absence of lateral gland bodies) are not usual in members of this genus. Besides, the odontostyle aperture is appreciably longer than one-half of the total length, a typical feature of aporcelaims, the members of the family Aporcelaimidae Heyns, 1965. Molecular analyses of LSU rDNA sequences of three Iranian specimens of D. dubium shed some light on the identity of this species. Their evolutionary relationships, as observed in the tree provided in Figure 4, reveal that these sequences form part of a highly supported (98%) large clade including several taxa, all of them having in common the absence of pars refringens vaginae, so confirming previous findings (Álvarez-Ortega and Peña-Santiago, 2016; Álvarez-Ortega et al., 2013; Imran et al., 2019; Naghavi et al., 2019). The most relevant result is, however, that the D. dubium sequences form a very robust (100%) subclade with members of the genus Aporcella Andrássy, 2002. Other discolaims constitute another robust (98%) subclade. This strongly supports the membership of D. dubium in Aporcella, which is totally compatible with the updated morphological characterization of the species provided above. Actually, in its general morphology, D. dubium resembles several conical-tailed members of Aporcella, from which it can be easily distinguished by several relevant morphometrics (see below). Consequently, it is transferred to Aporcella as A. dubia (Das et al., 1969) comb. n.

Table 2. Main morphometrics of Iranian material of Discolaimium dubium Das et al., 1969 and its seven close species of the genus Aporcella.
Separation of A. dubia from similar Aporcella species
For comparative purposes, and taking into consideration the results provided in the present study, A. dubia is characterized by its 1.10 to 1.40 mm long body, cuticle bearing fine but perceptible transverse striation, lip region offset by constriction and 8 to 10 µm wide, odontostyle 7.5 to 10.5 µm long with aperture occupying 59 to 76% of total length, neck 300 to 362 µm, pharyngeal expansion 127 to 181 µm long or 44 to 46% of the total neck length, female genital system didelphic-amphidelphic, uterus simple and 38 to 53 µm or 1.2 to 1.6 times the body diameter long, vulva a transverse slit (V = 52-58), tail conical (32-38 µm, c = 32-43, c′ = 1.6-2.0) with rounded tip and a hyaline portion occupying 14 to 15% of tail length, spicules 30 to 32 µm long, and two or three widely spaced ventromedian supplements with hiatus.
Remarks
The transference of D. dubium to Aporcella raises the question of the taxonomy of the large clade morphologically characterized by the absence of pars refringens vaginae and robustly supported by molecular data. This clade consists of representatives of at least three traditional dorylaimid families, namely Aporcelaimidae, Qudsianematidae Jairajpuri, 1965, and Tylencholaimidae Filipjev, 1934. Leaving aside the tylencholaims (Tylencholaimidae), in general easily distinguishable thanks to the leptonchid (= tylencholaimid) nature of their cuticle, the morphological separation of the aporcelaims (Aporcelaimidae, represented by Aporcella sequences) and discolaims (Qudsianematidae, Discolaiminae Siddiqi, 1969, represented by Carcharolaimus, Discolaimus and Discolaimoides sequences) of this clade becomes controversial and problematic. The results obtained in the present contribution suggest that some species currently classified under Discolaimium (and perhaps other species under Discolaimoides too) might belong to Aporcella and that, as a consequence, a re-evaluation and/or re-definition of these genera is recommendable and necessary.
Pseudohypoaldosteronism Type 1 with a Novel Mutation in the NR3C2 Gene: A Case Report
Pseudohypoaldosteronism type 1 (PHA1) is a rare salt-wasting disorder caused by resistance to mineralocorticoid action. PHA1 is of two types with different levels of disease severity and phenotype, as follows: the systemic type with an autosomal recessive inheritance (caused by mutations of the epithelial sodium channel) and the renal type with an autosomal dominant inheritance (caused by mutations in the mineralocorticoid receptor). The clinical manifestations of PHA1 vary widely; however, PHA1 commonly involves hyponatremia, hyperkalemia, metabolic acidosis and elevated levels of renin and aldosterone. The earliest signs of both types of PHA1 also comprise insufficient weight gain due to chronic dehydration and failure to thrive during infancy. Here, we report a case of renal PHA1 in a 28-day-old male infant harboring a novel heterozygous mutation in the NR3C2 gene (c.1341_1345dupAAACC in exon 2), showing only failure to thrive without the characteristic dehydration.
Introduction
Hyponatremia and hyperkalemia can develop in various renal and genetic disorders, with significant long-term health consequences 1) . Pseudohypoaldosteronism type 1 (PHA1) is one of these disorders, characterized by resistance to aldosterone action 2) . It presents with renal salt wasting, dehydration, and failure to thrive. The cardinal biochemical features are hyponatremia, hyperkalemia, and metabolic acidosis, despite elevated plasma renin activity and aldosterone levels 3) . There are two clinically and genetically distinct forms of PHA1: systemic PHA1 and renal PHA1. Systemic PHA1 shows autosomal recessive inheritance and is characterized by severe resistance to aldosterone action in multiple organs, including the kidney, colon, sweat and salivary glands, and lung. In contrast, renal PHA1 shows autosomal dominant inheritance and is characterized by aldosterone resistance restricted to the kidneys 2) . The earliest signs of both types of PHA1 are insufficient weight gain due to chronic dehydration and failure to thrive during infancy [3][4][5] . Although treatment is often straightforward with oral salt supplementation, the clinical manifestations of this condition can vary widely 6) . A differentiation between systemic and renal PHA1 may be made based on salt requirements, ease of management of electrolyte imbalance, sweat test results and genetic testing.
Here we report a case involving a 28-day-old male infant who only showed poor weight gain at the time of admission and was later diagnosed with renal PHA1 with a novel de novo heterozygous mutation in the NR3C2 gene.
Case report
A 28-day-old male infant was admitted to our hospital because of poor weight gain. There were no other remarkable perinatal complications or previous illnesses, including urinary tract infection. He was born at term with a birth weight of 3.65 kg (50th-75th percentile). He was the first child of healthy parents, and his family history was unremarkable. At admission, his weight was 3.5 kg (below the 3rd percentile); height, 53.7 cm (25th-50th percentile); and head circumference, 35.5 cm (10th-25th percentile). He was in a stable condition with vital signs appropriate for his age (pulse, 140 beats/minute; blood pressure, 82/50 mmHg; respiratory rate, 30/min; and normal peripheral perfusion) and no signs of dehydration. Physical examination revealed no abnormal findings, except for mild scrotal hyperpigmentation. During the first 4 weeks of life, the patient consumed an adequate amount of milk, and gastrointestinal signs such as vomiting and diarrhea were not observed.
The initial laboratory findings were as follows: serum sodium level, 124 mEq/L; serum potassium level, 7.0 mEq/L; serum chloride level, 97 mEq/L; blood glucose level, 86 mg/dL; blood urea nitrogen level, 21.0 mg/dL; and serum creatinine level, 0.41 mg/dL. The spot urine sodium and potassium levels were 23 and 49.1 mEq/L, respectively.
The serum and urine osmolality values were 265 and 267 mOsm/kg, respectively. His venous blood gas analysis revealed a pH of 7.382, pCO2 of 34.8 mmHg, HCO3 of 20.2 mmol/L, and base excess of -3.7 mmol/L, reflecting mild metabolic acidosis compensated by respiratory alkalosis. His urinalysis revealed a specific gravity of 1.015 and a pH of 5.0, and was negative for white and red blood cells.
Renal ultrasonography revealed no abnormality except mild hydronephrosis of the left kidney (Fig. 1). A novel heterozygous mutation (Fig. 2) in exon 2 of the NR3C2 gene was detected. Based on the American College of Medical Genetics and Genomics/Association for Molecular Pathology guidelines, this novel variant can be classified as "pathogenic". Direct sequencing of the NR3C2 gene was performed in the parents; however, both displayed the wild-type sequence. The serum electrolyte levels of the patient were normalized with initial parenteral hydration and subsequent oral sodium replacement (7.5 mEq/kg/day). He showed gradual weight gain and is currently 10 months old, with height and weight profiles within the 25 th -50 th percentile, and is achieving adequate developmental milestones.
Discussion
Our patient presented with hyponatremia, hyperkalemia, and metabolic acidosis. The differential diagnosis based on the serum electrolyte levels included hypoaldosteronism, adrenal hypoplasia, and congenital adrenal hyperplasia (CAH). We initially suspected CAH, the most frequent cause of such electrolyte disturbance, supported by a high initial ACTH level. However, the ACTH level decreased to within the normal range at follow-up, and plasma renin activity and serum aldosterone levels were high, suggesting PHA1. Although we could not precisely account for the initially elevated ACTH level, it is likely that the value was falsely increased. ACTH may be unstable in blood at room temperature; therefore, the methods of blood collection and the preparation and storage of plasma may have markedly affected the measured ACTH concentration 7) . If the values are not consistent with the suspected diagnosis, a repeat assessment may be required for confirmation.
PHA1 is a rare syndrome of resistance to mineralocorticoid action. PHA1 manifests in two forms with different levels of severity and phenotypes: a systemic type with autosomal recessive inheritance and a renal type with autosomal dominant inheritance 8) . Systemic PHA1 is caused by mutations in the SCNN1A, SCNN1B, or SCNN1G genes 9) , which encode the α, β, and γ subunits of the epithelial sodium channel (ENaC) 10) . As ENaC is expressed not only in the distal tubules, but also in the sweat glands, salivary glands, colon, and lung, excessive salt wasting occurs from these organs 11) . Moreover, the salt loss in systemic PHA1 is severe, and symptoms do not improve with age; most patients require lifelong salt supplementation 12) . Renal PHA1 is caused by inactivating mutations of the NR3C2 gene, which encodes the mineralocorticoid receptor, expressed predominantly in the kidney 10) , and is characterized by aldosterone resistance restricted to the kidney. The inheritance is mainly autosomal dominant; however, in some sporadic cases, de novo mutations are noted. The main clinical manifestations are insufficient weight gain due to chronic dehydration 4) and biochemical abnormalities, including metabolic acidosis, hyponatremia, and hyperkalemia, with elevated plasma renin activity and aldosterone levels 3) . Compared with systemic PHA1, the symptoms of renal PHA1 are less severe and improve with age in most cases 6,13) . Although mild in nature, renal PHA1 is associated with a high mortality rate among infants 14,15) . Several cases of unexplained neonatal mortality were identified among infants at risk of renal PHA1, which suggests that the seemingly benign renal PHA1 is potentially fatal to neonates 15) . Therefore, early diagnosis and immediate electrolyte correction in neonates at risk of renal PHA1 are important.
In most cases, the symptoms of PHA1 in infants, such as vomiting, poor weight gain, dehydration, and failure to thrive, are non-specific. In addition, the laboratory findings of PHA1 may be confused with those of other salt-wasting disorders, such as CAH or hypoaldosteronism 2) . For the differential diagnosis, samples should be tested for plasma renin activity and aldosterone, 17-hydroxyprogesterone, ACTH, and cortisol levels before the initiation of treatment. In PHA1, metabolic acidosis and hyponatremia that do not improve despite high-dose mineralocorticoid therapy, increased urine sodium excretion, and hyperkalemia with decreased urine potassium excretion are present, even though plasma renin activity and aldosterone levels are high 5) . Our patient presented with the non-specific symptom of insufficient weight gain, without characteristic dehydration or other typical clinical findings at the time of admission.
Owing to the rarity of the disease, diagnosis of PHA1 is difficult in the clinical setting. Thus, careful examination, including analysis of electrolyte levels, urinalysis, and, if required, hormonal evaluation, should be performed.
In conclusion, we identified a novel mutation in NR3C2 in an infant who presented only with poor weight gain. Clinicians should consider PHA1 in the differential diagnosis of faltering growth, a common health concern in infants.
"year": 2020,
"sha1": "618deba14f5f65ad7761fc15433b614dd89c1352",
"oa_license": "CCBYNC",
"oa_url": "http://chikd.org/upload/ckd-24-1-58.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e174f2b3af75a1397079391f426410169e0de8e5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
54952313 | pes2o/s2orc | v3-fos-license | Discrete Complex Analysis and Probability
We discuss possible discretizations of complex analysis and some of their applications to probability and mathematical physics, following our recent work with Dmitry Chelkak, Hugo Duminil-Copin and Clément Hongler.
Introduction
The goal of this note is to discuss some of the applications of discrete complex analysis to problems in probability and statistical physics. It is not an exhaustive survey, and it lacks many references. Forgoing completeness, we try to give a taste of the subject through examples, concentrating on a few of our recent papers with Dmitry Chelkak, Hugo Duminil-Copin and Clément Hongler [CS08,CS09,CS10,DCS10,HS10]. There are certainly other interesting developments in discrete complex analysis, and it would be a worthy goal to write an extensive exposition with an all-encompassing bibliography, which we do not attempt here for lack of space.
Complex analysis (we restrict ourselves to the case of one complex or equivalently two real dimensions) studies analytic functions on (subdomains of) the complex plane, or more generally analytic structures on two dimensional manifolds. Several things are special about the (real) dimension two, and we won't discuss an interesting and often debated question, why exactly complex analysis is so nice and elegant. In particular, several definitions lead to identical class of analytic functions, and historically different adjectives (regular, analytic, holomorphic, monogenic) were used, depending on the context. For example, an analytic function has a local power series expansion around every point, while a holomorphic function has a complex derivative at every point. Equivalence of these definitions is a major theorem in complex analysis, and there are many other equivalent definitions in terms of Cauchy-Riemann equations, contour integrals, primitive functions, hydrodynamic interpretation, etc. Holomorphic functions have many nice properties, and hundreds of books were devoted to their study.
Consider now a discretized version of the complex plane: some graph embedded into it, say a square or triangular lattice (more generally one can speak of discretizations of Riemann surfaces). Can one define analytic functions on such a graph? Some of the definitions do not admit a straightforward discretization: e.g. local power series expansions do not make sense on a lattice, so we cannot really speak of discrete analyticity. On the other hand, as soon as we define discrete derivatives, we can ask for the holomorphicity condition. Thus it is philosophically more correct to speak of discrete holomorphic, rather than discrete analytic functions. We will use the term preholomorphic introduced by Ferrand [Fer44], as we prefer it to the term monodiffric used by Isaacs in the original papers [Isa41,Isa52] (a play on the term monogenic used by Cauchy for continuous analytic functions).
Though the preholomorphic functions are easy to define, there is a lack of expository literature about them. We see two main reasons: firstly, there is no canonical preholomorphicity definition, and one can argue which of the competing approaches is better (the answer probably depends on potential applications). Secondly, it is straightforward to transfer to the discrete case beginnings of the usual complex analysis (a nice topic for an undergraduate research project), but the easy life ends when it becomes necessary to multiply preholomorphic functions. There is no easy and natural way to proceed and the difficulty is addressed depending on the problem at hand.
As there seems to be no canonical discretization of complex analysis, we would rather adopt a utilitarian approach, working with definitions corresponding to interesting objects of probabilistic origin, and allowing for a passage to the scaling limit. We want to emphasize that we are concerned with the following triplets:
1. a planar graph,
2. its embedding into the complex plane,
3. discrete Cauchy-Riemann equations.
We are interested in triplets such that the discrete complex analysis approximates the continuous one. Note that one can start with only a few elements of the triplet, which gives some freedom. For example, given an embedded graph, one can ask which discrete difference equations have solutions close to holomorphic functions. Or, given a planar graph and a notion of preholomorphicity, one can look for an appropriate embedding.
The ultimate goal is to find lattice models of statistical physics with preholomorphic observables. Since those observables would approximate holomorphic functions, some information about the original model could be subsequently deduced. Below we start with several possible definitions of the preholomorphic functions along with historical remarks. Then we discuss some of their recent applications in probability and statistical physics.
Discrete holomorphic functions
For a given planar graph, there are several ways to define preholomorphic functions, and it is not always clear which way is preferable. A much better known class is that of discrete harmonic (or preharmonic) functions, which can be defined on any graph (not necessarily planar), and also in more than one way. However, one definition stands out as the simplest: a function H on the vertices of a graph is said to be preharmonic at a vertex v if its discrete Laplacian vanishes:

∆H(v) := Σ_{u: u∼v} (H(u) − H(v)) = 0. (1)

More generally, one can put weights on the edges, which would amount to taking different resistances in the electric interpretation below. Preharmonic functions on planar graphs are closely related to discrete holomorphicity: for example, their gradients, defined on the oriented edges by

F(uv) := H(v) − H(u), (2)

are preholomorphic. Note that the edge function above is antisymmetric, i.e. F(uv) = −F(vu).
Both classes, with the definitions as above, are implicit already in the 1847 work of Kirchhoff [Kir47], who interpreted a function defined on oriented edges as an electric current flowing through the graph. If we assume that all edges have unit resistance, then the sum of the currents flowing from a vertex is zero by the first Kirchhoff law:

Σ_{u: u∼v} F(vu) = 0, (3)

and the sum of the currents around any oriented closed contour γ (for planar graphs it is sufficient to consider contours around faces) is zero by the second Kirchhoff law:

Σ_{uv∈γ} F(uv) = 0. (4)

The two laws are equivalent to saying that F is given by the gradient of a potential function H as in (2), and that the latter function is preharmonic (1). One can equivalently think of a hydrodynamic interpretation, with F representing the flow of a liquid. Then conditions (3) and (4) mean that the flow is divergence- and curl-free, respectively. Note that in the continuous setting, similarly defined gradients of harmonic functions on planar domains coincide, up to complex conjugation, with holomorphic functions. In higher dimensions, harmonic gradients were proposed as one of their possible generalizations.
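As a small illustration (the grid and the potential are our choices for the example, not taken from the paper), the definitions (1)-(4) can be checked directly on a 3x3 piece of the square lattice with unit resistances:

```python
# Sketch: verify preharmonicity (1), the gradient definition (2) and both
# Kirchhoff laws (3)-(4) on a 3x3 piece of Z^2 with unit resistances.
N = 3
verts = [(x, y) for x in range(N) for y in range(N)]

def neighbours(v):
    x, y = v
    return [u for u in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)] if u in verts]

# H(x, y) = x^2 - y^2 is exactly preharmonic on Z^2.
H = {(x, y): x**2 - y**2 for (x, y) in verts}

# (1): the discrete Laplacian vanishes at interior vertices.
for v in verts:
    if len(neighbours(v)) == 4:
        assert sum(H[u] - H[v] for u in neighbours(v)) == 0

# (2): gradient on oriented edges; antisymmetric by construction.
def F(u, v):
    return H[v] - H[u]

assert all(F(u, v) == -F(v, u) for v in verts for u in neighbours(v))

# (3): first Kirchhoff law -- currents out of an interior vertex sum to zero.
for v in verts:
    if len(neighbours(v)) == 4:
        assert sum(F(v, u) for u in neighbours(v)) == 0

# (4): second Kirchhoff law -- the current around any unit face vanishes.
for x in range(N - 1):
    for y in range(N - 1):
        a, b, c, d = (x, y), (x+1, y), (x+1, y+1), (x, y+1)
        assert F(a, b) + F(b, c) + F(c, d) + F(d, a) == 0
```

Here (3) is just a restatement of (1) for gradients, and (4) holds for any potential, which is exactly the content of the equivalence noted above.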
There are many other ways to introduce discrete structures on graphs, which can be developed in parallel to the usual complex analysis. We have in mind mostly such discretizations that restrictions of holomorphic (or harmonic) functions become approximately preholomorphic (or preharmonic). Thus we speak about graphs embedded into the complex plane or a Riemann surface, and the choice of embedding plays an important role. Moreover, the applications we are after require passages to the scaling limit (as mesh of the lattice tends to zero), so we want to deal with discrete structures which converge to the usual complex analysis as we take finer and finer graphs.
Preharmonic functions satisfying (1) on the square lattices with decreasing mesh fit well into this philosophy, and were studied in a number of papers in early twentieth century (see e.g. [PW23,Bou26,Lus26]), culminating in the seminal work of Courant, Friedrichs and Lewy. It was shown in [CFL28] that solution to the Dirichlet problem for a discretization of an elliptic operator converges to the solution of the analogous continuous problem as the mesh of the lattice tends to zero. In particular, a preharmonic function with given boundary values converges in the scaling limit to a harmonic function with the same boundary values in a rather strong sense, including convergence of all partial derivatives.
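The Courant-Friedrichs-Lewy type convergence can be illustrated numerically. The following sketch (the test function and the plain Jacobi solver are our illustrative choices, not anything from [CFL28]) solves the discrete Dirichlet problem on [0,1]^2 with boundary values of the harmonic function Re(z^4) and checks that refining the mesh brings the preharmonic solution closer to the continuous one:

```python
import numpy as np

# Discrete Dirichlet problem on [0,1]^2 for h = Re(z^4) = x^4 - 6x^2y^2 + y^4,
# a harmonic function that is NOT exactly preharmonic on the grid, so the
# discretization error is visible and shrinks with the mesh.
def dirichlet_error(n, sweeps=6000):
    xs = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    h = X**4 - 6 * X**2 * Y**2 + Y**4
    U = h.copy()
    U[1:-1, 1:-1] = 0.0                 # keep boundary values, reset interior
    for _ in range(sweeps):             # Jacobi iteration for the 5-point Laplacian
        U[1:-1, 1:-1] = 0.25 * (U[2:, 1:-1] + U[:-2, 1:-1]
                                + U[1:-1, 2:] + U[1:-1, :-2])
    return float(np.abs(U - h).max())

coarse, fine = dirichlet_error(9), dirichlet_error(17)
assert coarse < 0.05   # preharmonic solution is already close to h
assert fine < coarse   # and refining the mesh improves the match
```

The observed error decays like the square of the mesh, consistent with the strong convergence described above.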
Preholomorphic functions distinctively appeared for the first time in the papers [Isa41,Isa52] of Isaacs, where he proposed two definitions (and called such functions "monodiffric"). A few papers of his and others followed, studying the first definition (5), which is asymmetric on the square lattice. More recently the first definition was studied by Dynnikov and Novikov [DN03] in the triangular lattice context, where it becomes symmetric (the triangular lattice is obtained from the square lattice by adding all the diagonals in one direction).
The second, symmetric, definition was reintroduced by Ferrand, who also discussed the passage to the scaling limit [Fer44,LF55]. This was followed by extensive studies of Duffin and others, starting with [Duf56].
Both definitions ask for a discrete version of the Cauchy-Riemann equations ∂_{iα}F = i ∂_α F, or equivalently that the z-derivative is independent of direction. Consider a subregion Ω_ε of the mesh ε square lattice εZ² ⊂ C and define a function on its vertices. Isaacs proposed the following two definitions, replacing the derivatives by discrete differences. His "monodiffric functions of the first kind" are required to satisfy inside Ω_ε the following identity:

F(z + iε) − F(z) = i (F(z + ε) − F(z)), (5)

which can be rewritten as

(F(z + iε) − F(z)) / (iε) = (F(z + ε) − F(z)) / ε.

We will be working with his second definition, which is more symmetric and also appears naturally in the probabilistic context (but otherwise the theories based on the two definitions are almost the same). We say that a function is preholomorphic if inside Ω_ε it satisfies the following identity, illustrated in Figure 1:

F(z + iε) − F(z + ε) = i (F(z + ε + iε) − F(z)), (6)

which can also be rewritten as

(F(z + iε) − F(z + ε)) / (iε − ε) = (F(z + ε + iε) − F(z)) / (ε + iε).

It is easy to see that restrictions of continuous holomorphic functions to the mesh ε square lattice satisfy this identity up to O(ε³). Note also that if we color the lattice in the chess-board fashion, the complex identity (6) can be written as two real identities (its real and imaginary parts), one involving the real part of F at black vertices and the imaginary part of F at white vertices, the other one vice versa. So unless we have special boundary conditions, F splits into two "demifunctions" (real at white and imaginary at black vs. real at black and imaginary at white vertices), and some prefer to consider just one of those, i.e. ask F to be purely real at black vertices and purely imaginary at white ones. The theory of preholomorphic functions so defined starts much like the usual complex analysis.
It is easy to check that for preholomorphic functions, sums are also preholomorphic, discrete contour integrals vanish, the primitive (in a simply-connected domain) and the derivative are well-defined and are preholomorphic functions on the dual square lattice, real and imaginary parts are preharmonic on their respective black and white sublattices, etc. Unfortunately, the product of two preholomorphic functions is in general no longer preholomorphic: e.g., while the restrictions of 1, z, and z² to the square lattice are preholomorphic, the higher powers are only approximately so.
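These claims are easy to test numerically. In the following sketch the residual of the symmetric identity (6) vanishes exactly for 1, z and z², while for z³ a direct computation shows it equals (1+i)ε³, independently of z:

```python
# Residual of Isaacs' second (symmetric) definition (6) at a lattice point z:
# it vanishes exactly when F is preholomorphic there.
def residual(f, z, eps):
    return (f(z + 1j*eps) - f(z + eps)) - 1j * (f(z + eps + 1j*eps) - f(z))

eps, z = 0.1, 0.3 + 0.7j

# Restrictions of 1, z and z^2 to the lattice are exactly preholomorphic...
for f in (lambda w: 1.0, lambda w: w, lambda w: w * w):
    assert abs(residual(f, z, eps)) < 1e-12

# ...while z^3 fails at order eps^3: the residual is exactly (1+i)*eps^3.
r3 = residual(lambda w: w**3, z, eps)
assert abs(r3 - (1 + 1j) * eps**3) < 1e-12
```

This makes concrete the statement that restrictions of holomorphic functions satisfy (6) only up to O(ε³).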
The situation with other possible definitions is similar, with much of the linear complex analysis being easy to reproduce, and problems appearing when one has to multiply preholomorphic functions. Pointwise multiplication cannot be consistently defined, and though one can introduce convolution-type multiplication, the possible constructions are non-local and cumbersome. Sometimes, for different graphs and definitions, problems appear even earlier, with the first derivative not being preholomorphic.
Our main reason for choosing the definition (6) is that it naturally appears in probabilistic context. It was also noticed by Duffin that (6) nicely generalizes to a larger family of rhombic lattices, where all the faces are rhombi. Equivalently, one can speak of isoradial graphs, where all faces are inscribed into circles of the same radius -an isoradial graph together with its dual forms a rhombic lattice.
There are two main reasons to study this particular family. First, this is perhaps the largest family of graphs for which the Cauchy-Riemann operator admits a nice discretization. Indeed, restrictions of holomorphic functions to such graphs are preholomorphic to higher orders. This was the reason for the introduction of complex analysis on rhombic lattices by Duffin [Duf68] in late sixties. More recently, the complex analysis on such graphs was studied for the sake of probabilistic applications [Mer01,Ken02,CS08].
On the other hand, this seems to be the largest family where certain lattice models, including the Ising model, have nice integrability properties. In particular, the critical point can be defined with weights depending only on the local structure, and the star-triangle relation works out nicely. It seems that the first appearance of related family of graphs in the probabilistic context was in the work of Baxter [Bax78], where the eight vertex and Ising models were considered on Z-invariant graphs, arising from planar line arrangements. These graphs are topologically the same as the isoradial ones, and though they are embedded differently into the plane, by [KS05] they always admit isoradial embeddings. In [Bax78] Baxter was not passing to the scaling limit, and so the actual choice of embedding was immaterial for his results. However, his choice of weights in the models would suggest an isoradial embedding, and the Ising model was so considered by Mercat [Mer01], Boutilier and de Tilière [BdT08,BdT09], Chelkak and the author [CS09]. Additionally, the dimer and the uniform spanning tree models on such graphs also have nice properties, see e.g. [Ken02].
We would also like to remark that rhombic lattices form a rather large family of graphs. While not every topological quadrangulation (graph all of whose faces are quadrangles) admits a rhombic embedding, Kenyon and Schlenker [KS05] gave a simple topological condition necessary and sufficient for its existence.
So this seems to be the most general family of graphs appropriate for our subject, and most of what we discuss below generalizes to it (though for simplicity we speak of the square and hexagonal lattices only).
Applications of preholomorphic functions
Besides being interesting in themselves, preholomorphic functions found several diverse applications in combinatorics, analysis, geometry, probability and physics.
After the original work of Kirchhoff, the first notable application was perhaps the famous article [BSST40] of Brooks, Smith, Stone and Tutte, who used preholomorphic functions to construct tilings of rectangles by squares.
Several applications to analysis followed, starting with a new proof of the Riemann uniformization theorem by Ferrand [LF55]. Solving the discrete version of the usual minimization problem, it is immediate to establish the existence of the minimizer and its properties, and then one shows that it has a scaling limit, which is the desired uniformization. Duffin and his co-authors found a number of similar applications, including construction of the Bergman kernel by Dieter and Mastin [DM71]. There were also studies of discrete versions of the multi-dimensional complex analysis, see e.g. Kiselman's [Kis05].
In [Thu86] Thurston proposed circle packings as another discretization of complex analysis. They found some beautiful applications, including yet another proof of the Riemann uniformization theorem by Rodin and Sullivan [RS87]. More interestingly, they were used by He and Schramm [HS93] in the best result so far on the Koebe uniformization conjecture, stating that any domain can be conformally uniformized to a domain bounded by circles and points. In particular, they established the conjecture for domains with countably many boundary components. More about circle packings can be learned form Stephenson's book [Ste05]. Note that unlike the discretizations discussed above, the circle packings lead to non-linear versions of the Cauchy-Riemann equations, see e.g. the discussion in [BMS05].
In this note we are interested in applications to probability and statistical physics. Already the Kirchhoff's paper [Kir47] makes connection between the Uniform Spanning Tree and preharmonic (and so preholomorphic) functions.
Connection of Random Walk to preharmonic functions was certainly known to many researchers in early twentieth century, and figured implicitly in many papers. It is explicitly discussed by Courant, Friedrichs and Lewy in [CFL28], with preharmonic functions appearing as Green's functions and exit probabilities for the Random Walk.
More recently, Kenyon found preholomorphic functions in the dimer model (and in the Uniform Spanning Tree, in a way different from the original considerations of Kirchhoff). He was able to obtain many beautiful results about the statistics of dimer tilings and, in particular, showed that those have a conformally invariant scaling limit, described by the Gaussian Free Field, see [Ken00,Ken01]. More about Kenyon's results can be found in his expositions [Ken04,Ken09]. An approximately preholomorphic function was found by the author in critical site percolation on the triangular lattice, allowing a proof of Cardy's formula for crossing probabilities [Smi01b,Smi01a].
Finally, we remark that various other discrete relations were observed in many integrable two dimensional models of statistical physics, but usually no explicit connection was made with complex analysis, and no scaling limit was considered.
Here we are interested in applications of integrability parallel to that for the Random Walk and the dimer model above. Namely, once a preholomorphic function is observed in some probabilistic model, we can pass to the scaling limit, obtaining a holomorphic function. Thus, the preholomorphic observable is approximately equal to the limiting holomorphic function, providing some knowledge about the model at hand. Below we discuss applications of this philosophy, starting with the Ising model.
The Ising model
In this Section we discuss some of the ways in which preholomorphic functions appear in the Ising model at criticality. The observable below was proposed in [Smi06] for the hexagonal lattice, along with a possible generalization to the O(N) model. Similar objects appeared earlier in Kadanoff and Ceva [KC71] and in Mercat [Mer01], though boundary values and conformal covariance, which are central to us, were never discussed.
The scaling limit and properties of our observable on isoradial graphs were worked out by Chelkak and the author in [CS09]. It is more appropriate to consider it as a fermion or a spinor, writing F(z)√dz, and in a more general setup one has to proceed in this way.
Earlier we constructed a similar fermion for the random cluster representation of the Ising model, see [Smi06,Smi10] and our joint work with Chelkak [CS09] for generalization to isoradial graphs (and also independent work of Riva and Cardy [RC06] for its physical connections). It has a simpler probabilistic interpretation than the fermion in the spin representation, as it can be written as the probability of the interface between two marked boundary points passing through a point inside, corrected by a complex weight depending on the winding.
The fermion for the spin representation is more difficult to construct. Below we describe it in terms of contour collections with distinguished points. Alternatively it corresponds to the partition function of the Ising model with a √ z monodromy at a given edge, corrected by a complex weight; or to a product of order and disorder operators at neighboring site and dual site.
We will consider the Ising model on the mesh ε square lattice. Let Ω_ε be a discretization of some bounded domain Ω ⊂ C. The Ising model on Ω_ε has configurations σ which assign ±1 (or simply ±) spins σ(v) to vertices v ∈ Ω_ε and a Hamiltonian defined (in the absence of an external magnetic field) by

H(σ) := − Σ_{⟨u,v⟩} σ(u) σ(v),

where the sum is taken over all edges ⟨u,v⟩ inside Ω_ε. Then the partition function is given by

Z := Σ_σ exp(−β H(σ)),

and the probability of a given spin configuration becomes

P(σ) := exp(−β H(σ)) / Z.

Here β ≥ 0 is the temperature parameter (behaving like the reciprocal of the actual temperature), and Kramers and Wannier have established [KW41] that its critical value is given by β_c = log(√2 + 1)/2. Now represent the spin configurations graphically by a collection of interfaces: contours on the dual lattice, separating plus spins from minus spins, the so-called low-temperature expansion, see Figure 2. A contour collection is a set of edges such that an even number emanates from every vertex. In that case the contours can be represented as a union of loops (possibly in a non-unique way, but we do not distinguish between different representations). Note that each contour collection corresponds to two spin configurations which are negatives of each other, or to one if we fix the spin value at some vertex. The partition function of the Ising model can be rewritten (up to an explicit multiplicative constant) in terms of the contour configurations ω as

Z = Σ_ω x^{length of contours}.
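The low-temperature expansion can be verified by brute force on a tiny illustrative graph (a single 4-cycle, our choice): grouping spin configurations by the number of "disagreement" edges reproduces the direct spin sum exactly, with each contour edge carrying the weight x = exp(−2β).

```python
import itertools, math

# Brute-force check of the low-temperature expansion on the 4-cycle:
# Z = sum_sigma exp(-beta*H) equals e^{beta*#edges} * sum_sigma x^k,
# where k counts edges with disagreeing spins and x = exp(-2*beta).
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]
beta = math.log(math.sqrt(2) + 1) / 2        # critical beta_c
x = math.exp(-2 * beta)
assert abs(x - (math.sqrt(2) - 1)) < 1e-12   # x_c = sqrt(2) - 1

Z_direct = Z_contour = 0.0
for sigma in itertools.product((-1, 1), repeat=4):
    energy = -sum(sigma[u] * sigma[v] for u, v in edges)   # Hamiltonian H(sigma)
    Z_direct += math.exp(-beta * energy)
    k = sum(1 for u, v in edges if sigma[u] != sigma[v])   # contour length
    Z_contour += x ** k
Z_contour *= math.exp(beta * len(edges))
assert abs(Z_direct - Z_contour) < 1e-9
```

The identity holds configuration by configuration: an agreeing edge contributes e^β and a disagreeing one e^{−β} = e^β · x, which is exactly the factor-of-x claim made below.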
Each neighboring pair of opposite spins contributes an edge to the contours, and so a factor of x = exp(−2β) to the partition function. Note that the critical value is x_c = exp(−2β_c) = √2 − 1. We now want to define a preholomorphic observable. To this effect we need to distinguish at least one point (so that the domain has a non-trivial conformal modulus). One of the possible applications lies in relating interfaces to Schramm's SLE curves, in the simplest setup running between two boundary points. To obtain a discrete interface between two boundary points a and b, we introduce Dobrushin boundary conditions: + on one boundary arc and − on the other, see Figure 2. Now, to define our fermion, we allow the second endpoint of the interface to move inside the domain. Namely, take an edge center z inside Ω_ε, and define

F(z) := Σ_{ω(a→z)} x^{length of contours} W(ω(a → z)),

where the sum is taken over all contour configurations ω = ω(a → z) which have two exceptional points: a on the boundary and z inside. So the contour collection can be represented (perhaps non-uniquely) as a collection of loops plus an interface between a and z. Furthermore, the sum is corrected by a Fermionic complex weight depending on the configuration:

W(ω(a → z)) := exp(−i s winding(γ, a → z)).
Here the winding is the total turn of the interface γ connecting a to z, counted in radians, and the spin s is equal to 1/2 (it should not be confused with the Ising spins ±1). For some collections the interface can be chosen in more than one way, and then we trace it by taking a left turn whenever an ambiguity arises. Another choice might lead to a different value of the winding, but if the loops and the interface have no "transversal" self-intersections, then the difference will be a multiple of 4π and so the complex weight W is well-defined. Equivalently we can write

W(ω(a → z)) = λ^{# signed turns of γ}, λ := exp(−i s π/2),

see Figure 3 for the weights corresponding to different windings. Remark 1. Removing the complex weight W, one retrieves the correlation of spins on the dual lattice at the dual temperature x*, a corollary of the Kramers-Wannier duality.
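The winding and the weight W can be computed for simple paths. The following sketch (the path encoding as a list of unit steps is our illustrative choice) checks that one left quarter-turn contributes a factor λ = exp(−iπ/4) and that shifting the winding by 4π leaves W unchanged:

```python
import cmath, math

# Winding of a lattice path given as a list of unit steps, and the
# Fermionic weight W = exp(-i * s * winding) with spin s = 1/2.
def winding(steps):
    return sum(cmath.phase(b / a) for a, b in zip(steps, steps[1:]))

def weight(steps, s=0.5):
    return cmath.exp(-1j * s * winding(steps))

lam = cmath.exp(-1j * math.pi / 4)     # lambda = exp(-i*s*pi/2) for s = 1/2

assert abs(weight([1, 1, 1]) - 1) < 1e-12         # straight path: no turns
assert abs(weight([1, 1j]) - lam) < 1e-12         # one left turn: factor lambda
assert abs(weight([1, 1j, -1]) - lam**2) < 1e-12  # two left turns: lambda^2

# A winding shift of 4*pi multiplies W by exp(-i*0.5*4*pi) = 1, so W is
# well-defined for interfaces without transversal self-intersections.
assert abs(cmath.exp(-1j * 0.5 * 4 * math.pi) - 1) < 1e-12
```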
Remark 2. While such contour collections cannot be directly represented by spin configurations, one can obtain them by creating a disorder operator, i.e. a monodromy at z: when one goes one time around z, spins change their signs.
Our first theorem is the following, proved for general isoradial graphs in [CS09], with a shorter proof for the square lattice given in [CS10]: Theorem 1 (Chelkak, Smirnov). For the Ising model at criticality, F is a preholomorphic solution of a Riemann boundary value problem. When the mesh ε → 0,

F(z) ⇒ √(P′(z)) uniformly inside Ω,

where P is the complex Poisson kernel at a: a conformal map Ω → C₊ such that a ↦ ∞. Here both sides should be normalized in the same chart around b.
Remark 3. For non-critical values of x the observable F becomes massive preholomorphic, satisfying a discrete analogue of the massive Cauchy-Riemann equations ∂̄F = i m F̄. Remark 4. The Ising model can be represented as a dimer model on the Fisher graph. For example, on the square lattice, one first represents the spin configuration as above, by the collection of contours on the dual lattice separating + and − spins. Then the dual lattice is modified, with every vertex replaced by a "city" of six vertices, see Figure 4. It is easy to see that there is a natural bijection between contour configurations on the dual square lattice and dimer configurations on its Fisher graph. Then, similarly to the work of Kenyon for the square lattice, the coupling function for the Fisher lattice satisfies difference equations, which upon examination turn out to be another discretization of the Cauchy-Riemann equations, with different projections of the preholomorphic function assigned to the six vertices in a "city". One can then reinterpret the coupling function in terms of the Ising model, and this is the approach taken by Boutilier and de Tilière [BdT08,BdT09]. This is also how the author found the observable discussed in this Section, observing jointly with Kenyon in 2002 that it has the potential to imply the convergence of the interfaces to Schramm's SLE curve.
The key to establishing Theorem 1 is the observation that the function F is preholomorphic. Moreover, it turns out that F satisfies a stronger form of preholomorphicity, which implies the usual one, but is better adapted to fermions.
Consider the function F on the centers of edges. We say that F is strongly (or spin) preholomorphic if for every pair of centers u and v of two neighboring edges emanating from a vertex w, we have

Proj(F(u), α) = Proj(F(v), α), (8)

where α is the unit bisector of the angle uwv, and Proj(p, q) denotes the orthogonal projection of the vector p on the vector q. Equivalently, we can write

F(u) + α² F̄(u) = F(v) + α² F̄(v).

This definition implies the classical one for the square lattice, and it also easily adapts to isoradial graphs. Note that for convenience we assume that the interface starts from a in the positive real direction, as in Figure 2, which slightly changes the weights compared to the convention in [CS09]. The strong preholomorphicity of the Ising model fermion is proved by constructing a bijection between the configurations contributing to F(v) and F(u). Indeed, erasing or adding the half-edges wu and wv gives a bijection ω ↔ ω̃ between the configuration collections {ω(u)} and {ω(v)}, as illustrated in Figure 5. To check (8), it is sufficient to check that the sum of contributions from ω and ω̃ satisfies it. Several possible configurations can occur, but essentially all boil down to the two illustrated in Figure 5.
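The orthogonal projection entering the strong preholomorphicity condition has a convenient complex form. The following sketch checks the standard formula Proj(p, α) = (p + α² p̄)/2 for a unit vector α (a general fact about projections in C, not specific to the Ising observable):

```python
import cmath

# Orthogonal projection of p onto the line spanned by a unit vector alpha,
# written in complex notation: Proj(p, alpha) = (p + alpha^2 * conj(p)) / 2.
def proj(p, alpha):
    return (p + alpha * alpha * p.conjugate()) / 2

alpha = cmath.exp(0.3j)               # an arbitrary unit vector
p = 1.7 - 0.4j

q = proj(p, alpha)
assert abs((q / alpha).imag) < 1e-12                           # q lies on R*alpha
assert abs(q - (p * alpha.conjugate()).real * alpha) < 1e-12   # q = <p,alpha> alpha
assert abs(proj(q, alpha) - q) < 1e-12                         # idempotent
```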
Plugging the contributions from Figure 5 into equation (8), we are left to check the two identities (9). The first identity always holds, while the second one is easy to verify when x = x_c = √2 − 1 and λ = exp(−πi/4). Note that in our setup on the square lattice λ (or the spin s) is already fixed by the requirement that the complex weight be well-defined, and so the second equation in (9) uniquely fixes the allowed value of x. In the next Section we will discuss a more general setup, allowing for different values of the spin, corresponding to other lattice models.
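The value x_c = √2 − 1 can be cross-checked against the classical Kramers-Wannier self-duality condition sinh(2β_c) = 1, using the standard identification x = tanh β of the high-temperature expansion weight with the inverse temperature (this identification is standard but not spelled out in the excerpt above). A minimal numerical check:

```python
import math

# High-temperature expansion weight x = tanh(beta); the critical value
# quoted in the text is x_c = sqrt(2) - 1.
x_c = math.sqrt(2.0) - 1.0

# Kramers-Wannier self-duality locates the critical inverse temperature
# of the square-lattice Ising model at sinh(2*beta_c) = 1.
beta_c = math.atanh(x_c)
assert math.isclose(math.sinh(2.0 * beta_c), 1.0, rel_tol=1e-12)

# Equivalently, beta_c = log(1 + sqrt(2)) / 2.
assert math.isclose(beta_c, 0.5 * math.log(1.0 + math.sqrt(2.0)), rel_tol=1e-12)
```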
Figure 5. Involution on the Ising model configurations, which adds or erases the half-edges vw and uw. There are more pairs, but their relative contributions are always easy to calculate, and each pair taken together satisfies the discrete Cauchy-Riemann equations. Note that with the chosen orientation the constants C_1 and C_2 above are real.
To determine F using its preholomorphicity, we need to understand its behavior on the boundary. When z ∈ ∂Ω_ε, the winding of the interface connecting a to z inside Ω_ε is uniquely determined and coincides with the winding of the boundary itself. This amounts to knowing Arg(F) on the boundary, which is sufficient to determine F given the singularity at a or the normalization at b.
In the continuous setting the condition obtained is equivalent to the Riemann Boundary Value Problem (a homogeneous version of the Riemann-Hilbert-Privalov BVP)

Im [ F(z) · (tangent to ∂Ω)^{1/2} ] = 0 ,   (10)

with the square root appearing because of the fermionic weight. Note that the homogeneous BVP above has conformally covariant solutions (as √dz-forms), and so is well defined even in domains with fractal boundaries. The Riemann BVP (10) is clearly solved by the function (P′_a(z))^{1/2}, where P_a is the Schwarz kernel at a (the complex version of the Poisson kernel), i.e. an appropriate conformal map. Showing that on the lattice F_ε satisfies a discretization of the Riemann BVP (10) and converges to its continuous counterpart is highly non-trivial and a priori not guaranteed: there exist "logical" discretizations of Boundary Value Problems whose solutions have degenerate or no scaling limits. We establish convergence in [CS09] by considering the primitive ∫_{z_0}^{z} F²(u) du, which satisfies the Dirichlet BVP even in the discrete setting. The big technical problem is that in the discrete case F² is no longer preholomorphic, so its primitive is a priori not preholomorphic or even well defined. Fortunately, in our setting the imaginary part is still well defined, so we can set H(z) := Im ∫_{z_0}^{z} F²(u) du. While the function H is not exactly preharmonic, it is approximately so; it vanishes exactly on the boundary and is positive inside the domain. This allows us to complete the (at times quite involved) proof. A number of non-trivial discrete estimates are called for, and the situation is especially difficult for general isoradial graphs. We provide the needed tools in a separate paper [CS08]. Though Theorem 1 establishes convergence of but one observable, the latter (when normalized at b) is well behaved with respect to the interface traced from a. So it can be used to establish the following, see [CS10]:

Corollary 1.
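The reason the primitive has Dirichlet boundary values is worth recording; the following is a continuum-level sketch (the discrete statement in [CS09] requires more care), written in terms of the unit tangent t(z) to ∂Ω:

```latex
% Continuum sketch: why H = Im \int F^2\,dz is constant along the boundary.
% The BVP  Im\bigl(F(z)\sqrt{t(z)}\bigr)=0  means  F(z)=\rho(z)\,t(z)^{-1/2}
% with \rho(z) real.  Since dz = t(z)\,|dz| along the boundary,
\[
  F(z)^2\,dz \;=\; \rho(z)^2\,t(z)^{-1}\,t(z)\,|dz| \;=\; \rho(z)^2\,|dz|
  \;\in\; \mathbb{R},
\]
\[
  \text{so}\qquad dH \;=\; \operatorname{Im}\bigl(F(z)^2\,dz\bigr) \;=\; 0
  \qquad\text{along } \partial\Omega .
\]
% Hence H is locally constant on the boundary; normalizing the constant
% to zero yields the Dirichlet problem used in the proof.
```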
As the mesh of the lattice tends to zero, the critical Ising interface in the discretization of the domain Ω with Dobrushin boundary conditions converges to Schramm's SLE(3) curve.
Convergence is almost immediate in the topology of (probability measures on the space of) Loewner driving functions, but upgrading to convergence of curves requires extra estimates, cf. [KS09,DCHN09,CS10]. Once interfaces are related to SLE curves, many more properties can be established, including values of dimensions and scaling exponents.
But even without appealing to SLE, one can use preholomorphic functions to a stronger effect. In a joint paper with Hongler [HS10] we study a similar observable, where both ends of the interface are allowed to lie inside the domain. It turns out to be preholomorphic in both variables, except on the diagonal, and so its scaling limit can be identified with the Green's function solving the Riemann BVP. On the other hand, when the two arguments are taken nearby, one retrieves the probability of an edge being present in the contour representation, or equivalently of the nearby spins being different. This allows us to establish conformal invariance of the energy field in the scaling limit:

Theorem 2 (Hongler, Smirnov). Let a ∈ Ω and let x_ε y_ε be the edge of Ω_ε closest to a. Then, as ε → 0, we have an asymptotic formula in which the subscripts + and free denote the boundary conditions and l_Ω is the element of the hyperbolic metric on Ω.
This confirms the Conformal Field Theory predictions and, as far as we know, for the first time provides the multiplicative constant in front of the hyperbolic metric. These techniques were taken further by Hongler in [Hon10], where he showed that the (discrete) energy field in the critical Ising model on the square lattice has a conformally covariant scaling limit, which can then be identified with the corresponding Conformal Field Theory. This was accomplished by showing convergence of the discrete energy correlations in domains with a variety of boundary conditions to their continuous counterparts; the resulting limits are conformally covariant and are determined exactly. A similar result was obtained for the scaling limit of the spin field on the domain boundary.
The O(N ) model
The Ising preholomorphic function was introduced in [Smi06] in the setting of general O(N) models on the hexagonal lattice. It can be further generalized to a variety of lattice models, see the work of Cardy, Ikhlef and Rajabpour [RC07, IC09]. Unfortunately, the observable seems only partially preholomorphic (satisfying only some of the Cauchy-Riemann equations) except in the Ising case. One can make an analogy with divergence-free vector fields, which are not a priori curl-free.
The argument in the previous Section was adapted to the Ising case, and some properties remain hidden behind the notion of strong holomorphicity. Below we present its version generalized to the O(N) model, following our joint work [DCS10] with Duminil-Copin. While for N ≠ 1 we only prove that our observable is divergence-free, this still turns out to be enough to deduce some global information, establishing the Nienhuis conjecture on the exact value of the connective constant of the hexagonal lattice:

Theorem 3 (Duminil-Copin, Smirnov). On the hexagonal lattice the number C(k) of distinct simple length-k curves from the origin satisfies

lim_{k→∞} C(k)^{1/k} = √(2 + √2).   (11)

Self-avoiding walks on a lattice (walks without self-intersections) were proposed by the chemist Flory [Flo53] as a model for polymer chains, and turned out to be an interesting and extensively studied object; see the monograph [MS93].
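For context, the existence of the limit in (11) is classical and independent of its value: splitting a self-avoiding curve of length k + l at its k-th step yields a self-avoiding curve of length k from the origin and a translate of one of length l, so the counts are submultiplicative and Fekete's lemma applies (a standard argument, not specific to [DCS10]):

```latex
% Existence of the connective constant via submultiplicativity.
\[
  C(k+l) \;\le\; C(k)\,C(l)
  \quad\Longrightarrow\quad
  \log C(k) \ \text{is subadditive,}
\]
\[
  \text{so by Fekete's lemma}\quad
  \mu := \lim_{k\to\infty} C(k)^{1/k} = \inf_{k\ge 1} C(k)^{1/k}
  \ \text{exists;}
\]
% the content of Theorem 3 is the identification \mu = \sqrt{2+\sqrt{2}}.
```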
Using the Coulomb gas formalism, the physicist Nienhuis argued that the connective constant of the hexagonal lattice is equal to √(2 + √2), meaning that (11) holds. He even proposed a more precise description of the asymptotic behavior, given by (12). Note that while the exponential term with the connective constant is lattice-dependent, the power-law correction is supposed to be universal. Our proof is partially motivated by Nienhuis' arguments, and also starts by considering the self-avoiding walk as a special case of the O(N) model at N = 0. While the "half-preholomorphic" observable we construct does not seem sufficient to imply conformal invariance in the scaling limit, it can be used to establish the critical temperature, which gives the connective constant.
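As a finite-size sanity check of (11), one can enumerate self-avoiding walks on the honeycomb lattice directly. The sketch below is an illustration, not part of [DCS10]; it uses the standard "brick-wall" embedding of the hexagonal lattice, in which every vertex (x, y) has horizontal neighbours (x ± 1, y) and one vertical neighbour, up or down according to the parity of x + y:

```python
import math

def neighbours(v):
    """The three neighbours of v in the brick-wall honeycomb embedding."""
    x, y = v
    vertical = (x, y + 1) if (x + y) % 2 == 0 else (x, y - 1)
    return [(x - 1, y), (x + 1, y), vertical]

def count_saws(k, v=(0, 0), visited=None):
    """Number C(k) of self-avoiding walks of length k starting at v."""
    if visited is None:
        visited = {v}
    if k == 0:
        return 1
    total = 0
    for w in neighbours(v):
        if w not in visited:
            visited.add(w)
            total += count_saws(k - 1, w, visited)
            visited.remove(w)
    return total

# Up to length 5 every non-backtracking walk is self-avoiding (the girth
# is 6), so C(k) = 3 * 2**(k-1); at k = 6 exactly the six directed
# hexagons through the origin are excluded.
assert [count_saws(k) for k in range(1, 7)] == [3, 6, 12, 24, 48, 90]

# C(k)**(1/k) approaches the connective constant sqrt(2 + sqrt(2)) slowly;
# at k = 12 it is still a crude overestimate.
mu = math.sqrt(2.0 + math.sqrt(2.0))   # ~ 1.8478
print(count_saws(12) ** (1.0 / 12.0), mu)
```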
The general O(N) model is defined for positive integer values of N, and is a generalization of the Ising model (to which it specializes for N = 1), with ±1 spins replaced by points on a sphere in N-dimensional space. We work with the graphical representation, which is obtained using the high-temperature expansion and makes the model well defined for all non-negative values of N.
We concentrate on the hexagonal lattice in part because it is trivalent, so at most one contour can pass through a vertex, creating no ambiguities. This simplifies the reasoning, though general graphs can also be addressed by introducing additional weights for multiple visits of vertices. We consider configurations ω of disjoint simple loops on the mesh-ε hexagonal lattice inside a domain Ω_ε, and two parameters: a loop-weight N ≥ 0 and a (temperature-like) edge-weight x > 0. The partition function is then given by

Z = Σ_ω x^{# edges in ω} N^{# loops in ω}.

A typical configuration is pictured in Figure 6, where we introduced Dobrushin boundary conditions: besides loops, there is an interface γ joining two fixed boundary points a and b. It was conjectured by Kager and Nienhuis [KN04] that in the interval N ∈ [0, 2] the model has conformally invariant scaling limits for x = x_c(N) := 1/√(2 + √(2 − N)) and for x ∈ (x_c(N), +∞). The two different limits correspond to the dilute and dense regimes, with the interface γ conjecturally converging to Schramm's SLE curves for an appropriate value of κ ∈ [8/3, 4] and κ ∈ [4, 8] respectively. The scaling limit for low temperatures x ∈ (0, x_c) is not conformally invariant.
Note that for N = 1 we do not count the loops, thus obtaining the low-temperature expansion of the Ising model on the dual triangular lattice. In particular, the critical Ising model corresponds to x = 1/√3 by the work [Wan50] of Wannier, in agreement with the Nienhuis predictions. And for x = 1 one obtains critical site percolation on the triangular lattice (or equivalently the Ising model at infinite temperature). The latter is conformally invariant in the scaling limit by [Smi01b, Smi01a].
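The critical curve x_c(N) = 1/√(2 + √(2 − N)) can be evaluated at the special points just mentioned; note that N = 1 indeed reproduces Wannier's value 1/√3 (a small consistency check, written here as an illustration):

```python
import math

def x_c(N):
    """Nienhuis' predicted critical edge-weight for the O(N) loop model
    on the hexagonal lattice (dilute branch), for 0 <= N <= 2."""
    return 1.0 / math.sqrt(2.0 + math.sqrt(2.0 - N))

assert math.isclose(x_c(1.0), 1.0 / math.sqrt(3.0))   # critical Ising (Wannier)
assert math.isclose(x_c(2.0), 1.0 / math.sqrt(2.0))   # N = 2
assert math.isclose(x_c(0.0), 1.0 / math.sqrt(2.0 + math.sqrt(2.0)))  # SAW
```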
Note also that the Dobrushin boundary conditions make the model well-defined for N = 0: then we have only one interface, and no loops. In the dilute regime this model is expected to be in the universality class of the self-avoiding walk.
Analogously to the Ising case, we define an observable (which is now a parafermion of fractional spin) by moving one of the ends of the interface inside the domain. Namely, for an edge center z we set F(z) to be the weighted sum over all configurations ω = ω(a → z) consisting of disjoint simple contours: a number of loops and an interface γ joining two exceptional points, a on the boundary and z inside. As before, the sum is corrected by a complex weight with spin s ∈ ℝ:

W(ω(a → z)) := exp(−i s · winding(γ, a → z)),

or equivalently

W(ω(a → z)) = λ^{# signed turns of γ},  λ := exp(−is π/3).
Note that on the hexagonal lattice one turn corresponds to π/3, hence the difference in the definition of λ.
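The complex weight is concretely computable: representing the steps of γ as unit complex numbers, the winding is the accumulated turning angle, and each turn on the hexagonal lattice contributes ±π/3. The sketch below is an illustration (the function names are ours, and the spin s is left as a parameter; s = 5/8 is the self-avoiding-walk value discussed below):

```python
import cmath
import math

def winding(steps):
    """Total turning angle of a lattice path given as a list of
    unit-modulus complex step directions."""
    return sum(cmath.phase(b / a) for a, b in zip(steps, steps[1:]))

def weight(steps, s):
    """Parafermionic complex weight W = exp(-i * s * winding)."""
    return cmath.exp(-1j * s * winding(steps))

turn = cmath.exp(1j * math.pi / 3)   # one counterclockwise hexagonal turn

straight = [1, 1, 1]                 # no turns: winding 0, weight 1
assert math.isclose(winding(straight), 0.0)
assert abs(weight(straight, 5.0 / 8.0) - 1.0) < 1e-12

one_left = [1, turn]                 # a single +pi/3 turn
assert math.isclose(winding(one_left), math.pi / 3)
# With s = 5/8 each turn multiplies the weight by lambda = exp(-i*5*pi/24).
assert abs(weight(one_left, 5.0 / 8.0) - cmath.exp(-1j * 5 * math.pi / 24)) < 1e-12
```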
Our key observation is the following lemma: the observable F satisfies, for every vertex v inside Ω_ε, the relation

(p − v) F(p) + (q − v) F(q) + (r − v) F(r) = 0,   (16)

where p, q, r are the mid-edges of the three edges adjacent to v. Above, the solution (14) corresponds to the dense regime, and (15) to the dilute one. Note that the identity (16) is a form of the first Kirchhoff law, but apart from the Ising case N = 1 we cannot verify the second one.
To prove Lemma 4, we note that configurations with an interface arriving at p, q or r can be grouped in triplets, so that the three configurations in a triplet differ only in the immediate vicinity of v, see Figure 8. It is then enough to check that the contributions of the three configurations to (16) sum to zero. But the relative weights of the configurations in a triplet are easy to write down, as shown in Figure 8, and the coefficients in the identity (16) are proportional to the three cube roots of unity: 1, τ := exp(i2π/3) and τ̄ (if the neighbors of v are taken in counterclockwise order). Therefore we have to check just two identities, which, recalling that λ = exp(−isπ/3), can be recast as (17).
The first equation constrains the spin s, and the second equation then determines the allowed value of x uniquely. Most of the solutions of (17) lead to observables symmetric to the two main ones, which are provided by the solutions of equations (14) and (15). When we set N = 0, there are no loops, and configurations contain just an interface from a to z, weighted by x^{length}. This corresponds to taking θ = π/2, and one of the solutions is given by s = 5/8 and x_c = 1/√(2 + √2), as predicted by Nienhuis. To prove his prediction, we observe that summing the identity (16) over all interior vertices leaves only boundary terms, with the sum taken over the centers z of oriented edges η(z) emanating from the discrete domain Ω_ε into its exterior. Since F(a) = 1 by definition, we conclude that F over the other boundary points sums up to 1. As in the Ising model, the winding on the boundary is uniquely determined, and (for this particular critical value of x) one observes that by considering the real part of F we can get rid of the complex weights, replacing them by explicit positive constants c(z) (depending on the slope of the boundary). Thus we obtain an estimate

Σ_{z∈∂Ω_ε\{a}} Σ_{ω(a→z)} c(z) x^{length of contours} ≍ 1,

regardless of the size of the domain Ω_ε. A simple counting argument then shows that the series converges when x < x_c and diverges when x > x_c, clearly implying the conjecture.
Note that establishing the holomorphicity of our observable in the scaling limit would allow us to relate the self-avoiding walk to Schramm's SLE with κ = 8/3 and, together with the work [LSW04] of Lawler, Schramm and Werner, to establish the more precise form (12) of the Nienhuis prediction.
What's next
Below we present a list of open questions. As before, we do not aim for completeness; rather, we highlight a few directions we find particularly intriguing.
Question 1. As was discussed, discrete complex analysis is well developed for isoradial graphs (or rhombic lattices), see [Duf68, Mer01, Ken02, CS08]. Is there a more general discrete setup where one can obtain similar estimates, in particular convergence of preholomorphic functions to holomorphic ones in the scaling limit? Since not every topological quadrangulation admits a rhombic embedding [KS05], can we always find another embedding with a sufficiently nice version of discrete complex analysis? The same question can be posed for triangulations, with variations of the first definition of Isaacs (5), like the ones in the work of Dynnikov and Novikov [DN03], being promising candidates.
Question 2. Variants of the Ising observable were used by Hongler and Kytölä to connect interfaces in domains with more general boundary conditions to more advanced variants of SLE curves, see [HK09]. Can one use some version of this observable to describe the spin Ising loop soup by a collection of branching interfaces, which converge to a branching SLE tree in the scaling limit? A similar argument is possible for the random cluster representation of the Ising model, see [KS10]. Can one construct the energy field more explicitly than in [Hon10], e.g. in the distributional sense? Can one construct other Ising fields?

Question 3. So far "half-preholomorphic" parafermions similar to the ones discussed in this paper have been found in a number of models, see [Smi06, RC06, RC07, IC09], but they seem fully preholomorphic only in the Ising case. Can we find the other half of the Cauchy-Riemann equations, perhaps for some modified definition? Note that it seems unlikely that one can establish conformal invariance of the scaling limit operating with only half of the Cauchy-Riemann equations, since there is no conformal structure present.
Question 4. In the case of the self-avoiding walk, an observable satisfying only half of the Cauchy-Riemann equations turned out to be enough to derive the value of the connective constant [DCS10]. Since similar observables are available for all other O(N) models, can we use them to establish the critical temperature values predicted by Nienhuis? Our proof cannot be directly transferred, since some counting estimates use the absence of loops. A similar question can be asked for other models.
Question 5. If we cannot establish the preholomorphicity of our observables exactly, can we try to establish it approximately? With appropriate estimates, that would allow us to obtain holomorphic functions in the scaling limit and hence to prove conformal invariance of the models concerned. Note that such a more general approach worked for critical site percolation on the triangular lattice [Smi01b, Smi01a], though there approximate preholomorphicity was a consequence of exact identities for quantities similar to discrete derivatives.
Question 6. Can we find other preholomorphic observables besides the ones mentioned here and in [Smi06]? It is also peculiar that all the models where preholomorphic observables have been found so far (the dimer model, the uniform spanning tree, the Ising model, percolation, etc.) can be represented as dimer models. Are there any models in other universality classes admitting a dimer representation? Can Kenyon's techniques [Ken04, Ken09] then be used to find preholomorphic observables by considering the Kasteleyn matrix and the coupling function?
Question 7. Throughout this paper we were concerned with linear discretizations of the Cauchy-Riemann equations. Those seem more natural in the probabilistic context; in particular, they might be easier to relate to SLE martingales, cf. [Smi06]. However, there are also well-known non-linear versions of the Cauchy-Riemann equations. For example, the version (18) of the Hirota equation for a complex-valued function F arises in the context of circle packings, see e.g. [BMS05]. Can we observe this or a similar equation in the probabilistic context and use it to establish conformal invariance of some model? Note that plugging a smooth function into the equation (18), we conclude that to satisfy it approximately the function must obey the identity (∂_x F(z))² + (∂_y F(z))² = 0.
So in the scaling limit (18) can be factored into the Cauchy-Riemann equations and their complex conjugate, and is thus in some sense linear. It does not seem possible to obtain "essential" non-linearity using just four points, but using five points one can create it, as in the next question.
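The factorization just mentioned is immediate in Wirtinger derivatives (∂ = (∂_x − i∂_y)/2, ∂̄ = (∂_x + i∂_y)/2):

```latex
% Factoring the limiting Hirota constraint into CR and anti-CR equations.
\[
  (\partial_x F)^2 + (\partial_y F)^2
  \;=\; \bigl(\partial_x F + i\,\partial_y F\bigr)\bigl(\partial_x F - i\,\partial_y F\bigr)
  \;=\; 4\,(\bar\partial F)(\partial F),
\]
% so the constraint vanishes exactly when at each point either
% \bar\partial F = 0 (the Cauchy--Riemann equations) or
% \partial F = 0 (their complex conjugate).
```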
Question 8. A number of non-linear identities were discovered for the correlation functions in the Ising model, starting with the work of Groeneveld, Boel and Kasteleyn [GBK78, BK78]. We do not want to analyze the extensive literature to date, but rather pose a question: can any of these relations be used to define discrete complex structures and pass to the scaling limit? In two of the early papers by McCoy, Wu and Perk [MW80, Per80], a quadratic difference relation was observed in the full-plane Ising model, first on the square lattice and then on a general graph. To better adapt it to our setup, we rephrase this relation for the correlation C(z) of two spins (one at the origin and another at z) in the Ising model at criticality on the mesh-ε square lattice; in the full plane one has the relation (19). Note that C is a real-valued function, and the equation (19) is a discrete form of the identity C(z)∆C(z) = |∇C(z)|².
The latter is conformally invariant and is solved by moduli of analytic functions. Can one write an identity analogous to (19) in domains with boundary, perhaps approximately? Can one deduce a conformally invariant scaling limit of the spin correlations in this way?
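The claim that moduli of analytic functions solve the continuum identity can be checked directly (a short computation, included here for completeness):

```latex
% For analytic, non-vanishing f and C = |f|:  \log C = \operatorname{Re}\log f
% is harmonic, hence
\[
  0 \;=\; \Delta \log C
   \;=\; \frac{\Delta C}{C} - \frac{|\nabla C|^2}{C^2}
  \qquad\Longrightarrow\qquad
  C\,\Delta C \;=\; |\nabla C|^2 ,
\]
% with |\nabla C| = |f'| for this choice of C.  Harmonicity of \log C is
% preserved by conformal maps, which explains the conformal invariance.
```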
Question 9. Recently there has been a surge of interest in random planar graphs and their scaling limits, see e.g. [DS09, LGP08]. Can one find observables on random planar graphs (weighted by the partition function of some lattice model) which, after an appropriate embedding (e.g. via a circle packing or a piecewise-linear Riemann surface), are preholomorphic? This would help to show that planar maps converge to Liouville Quantum Gravity in the scaling limit.
Question 10. The approach to two-dimensional integrable models described here is in several aspects similar to the older approaches based on the Yang-Baxter relations [Bax89]. Some similarities are discussed in Cardy's paper [Car09]. Can one find a direct link between the two approaches? It would also be interesting to find a link to the three-dimensional consistency relations as discussed in [BMS09].
Question 11. Recently Kenyon investigated the Laplacian on vector bundles over graphs in relation to spanning trees [Ken10]. A similar setup seems natural for the Ising observable we discuss. Can one obtain more information about the Ising and other models by studying difference operators on vector bundles over the corresponding graphs?
Question 12. Can anything similar be done for the three-dimensional models? While preholomorphic functions do not exist there, preharmonic vector fields are well defined and appear naturally for the Uniform Spanning Tree and the Loop Erased Random Walk. To what extent can they be used? Can one find any other difference equations in three-dimensional lattice models?
"year": 2010,
"sha1": "371d8355d798fa86bd4bc388529a02d095980d01",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1009.6077",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f971691ea4726f8702fa908379c047bf1a307363",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Physics",
"Mathematics"
]
} |
Effectiveness of Solution-Focused Brief Therapy for an Adolescent Girl with Moderate Depression
Globally, solution-focused brief therapy is practiced with persons with depression. In India, few studies have documented the treatment outcome of solution-focused therapy among persons with depression. The current study was carried out with a 19-year-old girl studying in SSLC (10th standard) who was diagnosed with moderate depression. She had difficulty with attention, concentration, and memory; irritability and sad mood; poor academic performance; guilt feelings; lethargy; anhedonia; decreased sleep; and decreased appetite. The case worker provided six sessions of solution-focused therapy for depression. There was considerable improvement in her symptoms and in her scholastic performance. The current study supports the effectiveness of solution-focused therapy in persons with depression.
well as whatever she was reading. Gradually, her academic performance had come down significantly by 2008. Since 2008, her mother reported, Ms. S would get irritable over trivial issues, have anger outbursts, and occasionally beat her sister. Subsequently, Ms. S became dull and lethargic. In spite of this, there was pressure from her mother to perform better in her studies, as she was performing very poorly academically. The client started to think that she was a failure, that she could not clear her exams, and that it would be better to die than to live. However, the client did not attempt suicide. She also felt guilty that she was unable to fulfill her parents' dreams with respect to academics.
Ms. S was not attending Bharatanatyam classes and meditation classes, which she used to attend before the onset of the symptoms. The client's sleep and food intake had gradually come down. Ms. S was given anti-depressant medication by the consultant psychiatrist for 2 months, and there was no adequate improvement in her depressive symptoms except for her sleep.
The therapist followed a single-subject A-B research design, with baseline and treatment phases, to test the treatment gains. The Hamilton Depression Rating Scale (HAM-D) was used to assess the severity of the depression. The scale has 21 items; a total score of 0-7 indicates normal, 8-13 mild, 14-18 moderate, 19-22 severe, and above 23 very severe depression. The HAM-D has demonstrated high levels of reliability (r_α = .91 to .94 and .95 to .96). Extensive validity evidence has been presented, including content, criterion-related, and construct validity and the clinical efficacy of the HDI cutoff score. The client had scored 21 on the HAM-D (moderate depression) at baseline, and it had come down to 6 (normal) after solution-focused therapy was provided [Figure 1].
The case worker listened to the client attentively and empathetically about her concerns. The client was anxious and worried about her academic difficulties and her parents' pressure to perform better academically. Detailed information was elicited from the client about her problems and experiences, which increased the client's understanding of her problem. The case worker validated her feelings and concerns. This brought about some positive change in the client, as she felt that the case worker had understood her problem. The case worker and the client sat together and discussed future goals. The goals were clear, simple, and attainable. Though the client felt hopeful, she did not have clarity about how to achieve her goals.
The identified goals were: (1) managing academic stress, (2) enhancing attention and concentration, and (3) planned preparation for exams.
The miracle question
The miracle question devised by Steve de Shazer (1988) was asked of the client in each session.
Imagine when you go to sleep one night a miracle happens and the problems you have been talking about disappear. As you were asleep, you did not know that a miracle happened.
Scaling
On a 0 to 10 scale, 10 represented the best things could be and 0 the worst they had been. The aim of scaling was to measure progress as the sessions went on. The client initially rated herself at 2. As the sessions progressed, her scores increased, indicating that Ms. S had improved significantly.
Discovering the client's resources
The case worker discussed the client's problems in the past and how they had been managed, and found that the client had been practicing Bharatanatyam and going for evening walks with friends. She also used to practice western dance with her sister. According to the client, these activities were her main source of coping. The client reported that she used to feel free and active after doing Bharatanatyam. While she was in SSLC, the client stopped doing all these things due to academic pressure. The case worker discussed the possibility of resuming these activities, and also discussed with the mother the need to send the client to Bharatanatyam classes. The client started going to Bharatanatyam classes and for evening walks with her friends. Gradually, her score on the scaling question increased. The case worker gave positive feedback at the end of each session. The client's efforts to change, regularity in attending sessions, honesty, and commitment were identified and reflected back to her.
DISCUSSION
In developing and underdeveloped countries, the cost of mental health care is high. Medication side-effects, costly medicines, lower socio-economic status, non-availability of drugs, and irregular follow-up may cause relapse of depression, and depression shows a high percentage of relapse. [6] Hence, a brief solution-focused therapy would help the client by reducing health care costs and preventing relapse.
The results of the current study support previous findings that solution-focused therapy for clients with mild to moderate depression can be successful with a small number of sessions. [7] After undergoing the therapy, the client not only improved from depression, but her academic performance also improved significantly. Solution-focused therapy can be tailored to the specific symptoms and resources of the client. However, the therapist needs professional training before practicing solution-focused therapy.
CONCLUSION
Overall, the current study builds on the existing empirical base supporting the use of solution-focused therapy in the treatment of mild to moderate depression. In this era of brief therapies, where clients do not have much time to come for a large number of sessions, solution-focused therapy is very helpful. Intervention research studies on the efficacy and cost-effectiveness of solution-focused therapy in clients with mild to moderate depression can be carried out.
"year": 2015,
"sha1": "05dd11d88344ef582f427ae5dbb91bb456123fda",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4341318",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b48a69244fd25b7269a2e46452fa77b74c902843",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
The numerical simulation of Taylor-Couette flow with radial temperature gradient
The Taylor-Couette flow with a radial temperature gradient is a canonical problem for the study of heat transfer in engineering. However, gaining insight into the transitional Taylor-Couette flow with a temperature gradient still requires detailed experimental and numerical investigations. In the present paper we have performed computations for the cavity of aspect ratio Γ = 3.76 and radii ratios η = 0.82 and 0.375, with a heated rotating bottom disk and a stationary outer cylinder. We analyse the influence of the end-wall boundary conditions and the thermal conditions on the flow structure, and on the distributions of the Nusselt number and torque along the inner and outer cylinders. The averaged values of the Nusselt number and torque along the inner cylinder obtained for different Re are analysed in the light of the results published in [2, 16, 17].
Introduction
Transitional flow driven by the combination of rotation and thermal gradients determines the dynamics of complex industrial flows. The Taylor-Couette flow is one of the paradigmatic systems in hydrodynamics, very well suited for studying the primary instability, transitional flows and fully turbulent flows in varying temperature fields. An overview of issues related to the Taylor-Couette flow with heat transfer can be found in [1, 2]. Beyond its significance for basic research, the results obtained for the geometrically simple Taylor-Couette flow can be directly used in designing and optimizing many devices, such as cooling systems in gas turbines and axial compressors, ventilation installations, desalination and waste water tanks, and nuclear reactor fuel rods [3, 4, 5, 6]. The Taylor-Couette flow is governed by the following parameters: the radii ratio η = R1/R2 (where R1 and R2 are the radii of the inner and outer cylinders, respectively), the curvature, and the aspect ratio Γ = H/(R2 − R1), where H is the axial dimension of the domain, figure 1. The Reynolds number is defined in terms of the rotation rate Ω of the inner cylinder and the bottom disk and the kinematic viscosity ν of the fluid. The heat transfer is characterized by the thermal Rossby number B = β(T2 − T1), where T2 and T1 are the temperatures of the heated and cooled walls (β is the thermal expansion coefficient).
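For concreteness, the dimensionless groups can be tabulated for the cavities studied here. The sketch below is an illustration with made-up dimensional values; in particular, the gap-based Reynolds-number convention Re = ΩR1(R2 − R1)/ν is an assumption (a common choice), since the paper's exact formula is not reproduced in this excerpt:

```python
import math

def taylor_couette_params(R1, R2, H, omega, nu, beta, T1, T2):
    """Dimensionless groups for the heated Taylor-Couette cavity.
    The Re convention (gap-based) is an assumed, common choice."""
    d = R2 - R1                          # gap width
    return {
        "eta": R1 / R2,                  # radii ratio
        "Gamma": H / d,                  # aspect ratio
        "Re": omega * R1 * d / nu,       # Reynolds number (assumed convention)
        "B": beta * (T2 - T1),           # thermal Rossby number
    }

# Made-up dimensional values chosen so that eta = 0.82 and Gamma = 3.76,
# matching the cavity studied in the paper.
p = taylor_couette_params(R1=0.082, R2=0.1, H=3.76 * 0.018,
                          omega=50.0, nu=1.0e-6, beta=2.1e-4,
                          T1=293.0, T2=313.0)
assert math.isclose(p["eta"], 0.82)
assert math.isclose(p["Gamma"], 3.76)
```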
The literature on the Taylor-Couette flow with heat transfer includes experimental, theoretical and numerical studies [7, 8, 9]. Instability and the transitional process in cavities of large aspect ratio Γ > 100 and narrow gap were investigated experimentally, among others, in the papers [10, 11]. Experimental results on the heat transfer obtained in a cavity of aspect ratio Γ = 31.5 (η ≈ 0.5) were published in [12, 13]. The numerical simulations of [14], obtained for a cavity of Γ = 10, η = 0.5, delivered detailed information on the flow structure, but revealed discrepancies between the experimental and numerical results. These discrepancies were attributed to the influence of the end-wall boundary conditions. The effect of the end-walls was intensively investigated for isothermal fluid flow in [15], where the authors applied asymmetric end-wall boundary conditions. In most numerical simulations of the Taylor-Couette flow, the assumption of a periodicity condition in the axial direction has been used. This assumption significantly reduces the computational cost, because the variables can then be expanded as a Fourier series in the axial direction, which simplifies the numerical approach. Lopez et al. [2] investigated numerically the heat transfer in fluid flows between rotating cylinders (5 ≤ Γ ≤ 80) using no-slip boundary conditions at the end-walls and also, for comparison, using the periodicity condition in the axial direction. They showed that the numerical results obtained with the axial periodicity condition agree with those obtained for closed cavities only for small Rayleigh numbers.
In the present paper we analyse the transition process in the Taylor-Couette flow in cavities of small aspect ratio Γ = 3.76 and radii ratios η = 0.82 and 0.375, with a heated rotating bottom disk and a heated stationary outer cylinder (the rotating inner cylinder and the stationary top disk are cooled). The Boussinesq approximation is used to take into account the buoyancy effect induced by the involved body forces, i.e. the Coriolis force, the centrifugal force resulting from the angular velocity of the rotor, and the centrifugal force caused by the curvature of the particle track. In the paper we analyse the influence of the asymmetric end-wall boundary conditions and the thermal boundary conditions on the flow structure and on the axial and radial distributions of several physical parameters. We focus particularly on the dependence of the Nusselt number and the torque [16,17], averaged along the inner and outer cylinders, on the Reynolds number. A further objective is to examine how the influence of the end-wall boundary conditions on the flow structure depends on the curvature of the cylinders, parameterized in the paper by η. These results are discussed in the light of the data obtained from the correlation formulas proposed by Lopez et al. [2] for an infinitely long cavity and for a cavity of aspect ratio Γ = 10. The analysis is intended to show to what extent the existing data obtained for infinitely long cylinders can be used to predict processes in cavities of small aspect ratio closed by end-walls.
The outline of the paper is as follows: the mathematical and numerical models are given in section 2. The flow structure, the radial profiles of the mean angular velocity and momentum, and the radial profiles of the dimensionless temperature obtained for the cavity of Γ = 3.76, η = 0.82 are presented in section 3.1. The distributions of the Nusselt number and torque along the inner and outer cylinders, as well as the dependence of the averaged values on Re, are analyzed in section 3.2. In section 4 the flow structures obtained for the isothermal and non-isothermal flow cases of Γ = 3.76, η = 0.375 are discussed. The conclusions are given in section 5.
The mathematical and numerical approaches
We consider the flow with heat transfer between two concentric cylinders of aspect ratio Γ = 3.76 and radii ratios η = 0.82 and 0.375, closed by end-walls. The inner cylinder of radius R1 and the bottom disk rotate at a constant angular velocity Ω, while the outer cylinder of radius R2 and the top disk are at rest. The flow is described by the Navier-Stokes, continuity and energy equations written in a cylindrical coordinate system (R, φ, Z) with respect to the rotating frame of reference, where t is time, R is the radius, P is the pressure, ρ is the density, V is the velocity vector, a is the thermal diffusivity and μ is the dynamic viscosity. The axial and radial coordinates are made dimensionless using the dimensions of the cavity. The velocity vector components in the radial, azimuthal and axial directions are denoted by U, V and W, respectively, and T is the temperature. The Boussinesq approximation is used to take into account the buoyancy effects induced by the involved body forces; for the validity of the Boussinesq approximation the thermal Rossby number must remain small. The Prandtl number is equal to 0.71. The velocity components are normalized by the velocity of the rotating inner cylinder, ΩR1, and the dimensionless temperature is defined as Θ = (T − T1)/(T2 − T1).
The no-slip boundary conditions are applied on all rigid walls.
For the azimuthal velocity component the boundary conditions are: v = 0 on the rotating inner cylinder and the rotating bottom disk (the equations are formulated in the rotating frame of reference), with the corresponding counter-rotating value on the stationary outer cylinder and the stationary top disk. The bottom disk and the outer cylinder are heated (Θ = 1), and the top disk and the inner cylinder are cooled (Θ = 0). In order to eliminate singularities of the azimuthal velocity component at the junctions between the rotating and stationary walls, the azimuthal velocity is regularized by exponential profiles. Exponential profiles are also used for the temperature at the junctions between the heated and cooled walls.
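The regularization at a rotating/stationary junction can be sketched as below. The exact exponential profile and decay length used by the authors are not given in the excerpt, so the functional form and the parameter `eps` here are illustrative assumptions, not the actual implementation:

```python
import math

# Hedged sketch of smoothing the azimuthal-velocity jump at the junction
# z = 0 between a rotating wall (nominal v = 1 far away on the z < 0 side)
# and a stationary wall (nominal v = 0 far away on the z > 0 side) with an
# exponential profile. Profile shape and decay length 'eps' are assumptions.

def regularized_azimuthal_velocity(z, eps=0.006):
    if z <= 0.0:
        return 1.0 - 0.5 * math.exp(z / eps)
    return 0.5 * math.exp(-z / eps)

# Far from the junction the nominal boundary values are recovered,
# while the profile passes continuously through 0.5 at the junction.
print(round(regularized_azimuthal_velocity(-0.1), 6))
print(round(regularized_azimuthal_velocity(0.0), 6))
print(round(regularized_azimuthal_velocity(0.1), 6))
```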
The numerical simulations (DNS/SVV) are based on a pseudo-spectral Chebyshev-Fourier-Galerkin collocation approximation. For the time discretization we use a second-order semi-implicit scheme, which combines an implicit treatment of the diffusive terms with an explicit Adams-Bashforth scheme for the non-linear convective terms. In the non-homogeneous radial and axial directions we use Chebyshev polynomials with the Gauss-Lobatto point distributions to ensure high accuracy of the solution. A predictor/corrector method is used. All dependent variables, i.e. the predictors of the three velocity components u, v and w, the predictors of the pressure and temperature, and the corrector for the pressure, are obtained by solving Helmholtz equations [18,19,20].
For higher Reynolds numbers the SVV (spectral vanishing viscosity) method is used. In this method an artificial viscous operator is added to the Laplace operator to stabilize the computational process [20]. The SVV operator is sufficiently effective to suppress Gibbs oscillations and at the same time does not affect the solution accuracy. More detailed information about the SVV algorithm can be found in [21,22,23]. For visualization purposes we use the λ2 criterion. The computations have been performed using meshes of 5-10 million collocation points; the time step is chosen accordingly. The verification of the DNS/SVV algorithm has been done in [22,23].
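The λ2 criterion identifies vortex cores as the regions where the second eigenvalue of S² + W² is negative, S and W being the symmetric and antisymmetric parts of the velocity gradient tensor. The following is a generic pointwise sketch of that criterion (not the authors' post-processing code):

```python
import numpy as np

# Sketch of the lambda_2 vortex-identification criterion (Jeong & Hussain):
# a point lies in a vortex core if the second eigenvalue of S^2 + W^2 is
# negative, with S and W the symmetric/antisymmetric parts of grad(u).

def lambda2(grad_u):
    """grad_u: 3x3 velocity gradient tensor du_i/dx_j at one grid point."""
    J = np.asarray(grad_u, dtype=float)
    S = 0.5 * (J + J.T)                          # strain-rate tensor
    W = 0.5 * (J - J.T)                          # rotation-rate tensor
    eigvals = np.linalg.eigvalsh(S @ S + W @ W)  # symmetric -> real eigenvalues
    return np.sort(eigvals)[1]                   # second (middle) eigenvalue

# Solid-body rotation about the z axis is a vortex, so lambda_2 < 0:
rotation = [[0.0, -1.0, 0.0],
            [1.0,  0.0, 0.0],
            [0.0,  0.0, 0.0]]
print(lambda2(rotation))
```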
Distributions of Nusselt number and torque
The heat transfer across the gap is characterized by the distributions of the Nusselt number along the inner and the outer cylinder. The local Nusselt number is obtained from the heat transfer coefficient h and the thermal conductivity k; the averaged value is denoted by Nu. Following [16,17] we also analyze the distributions of the local transverse angular momentum current and, after averaging in the axial direction, its mean value J. The mean J is normalized by its laminar value [16]. In [16] the authors showed theoretically that this mean current is conserved across the gap. The mean current is compared with the correlation proposed in the paper [2], where the authors investigated numerically (DNS) the flow in an infinitely long cavity with the periodicity condition in the axial direction (a cavity with a heated rotating inner cylinder and a cooled stationary outer one, η = 0.5, Pr = 0.71, Ra = 1420 [2]). They also performed simulations in a cavity of Γ = 10 with the no-slip velocity condition at the end-walls and proposed a corresponding correlation formula (Γ = 10). Our results agree with the latter in spite of the differences in the thermal boundary conditions. The differences between the results are larger, for Re > 1000, when we compare our results with the data [2] obtained with the periodicity condition in the axial direction. The Nusselt numbers at the outer cylinder, Nu2, are smaller in comparison to Nu1. From figure 6 we can also see that our Nusselt number distribution Nu1 agrees well with the correlation distribution.

At about Re = 270 the middle vortex is squeezed by the growth of the vortex adjacent to the bottom rotating disk (this process was previously described in [15]). Finally, the steady three-cell structure collapses to a one-cell structure. The same computations have been performed for the thermal Rossby number B = 0.1 with the heated stationary outer cylinder and the rotating bottom disk. The use of the non-isothermal boundary conditions changes the scenario of the transition from the three-cell structure to the one-cell structure.
Again, the three-cell structure is formed at approximately Re = 80; then, at approximately Re = 286, the top vortex is squeezed by the growth of the vortex adjacent to the bottom rotating disk, which results in the appearance of the two-cell structure at Re = 288 (figure 7h). The flow is pumped radially outward along the rotating bottom disk, in accordance with the boundary condition, and it is also pumped radially outward along the stationary top disk, in the direction opposite to what we would expect from the boundary condition. However, in the corner between the inner rotating cylinder and the stationary top disk there is a very small vortex (hardly visible in figure 7h) which rotates in accordance with the boundary condition. The transition to unsteadiness takes place at Re = 553, above which we observe four spiral vortices. From figures 7f-7i we can see that with increasing Re the top vortex gradually shrinks and finally the one-cell structure is formed. We can conclude that the use of the non-isothermal boundary conditions delays the formation of the one-cell structure in comparison to the isothermal flow case (and generally slightly weakens the effect of the end-wall boundary conditions). A similar scenario of the transition from the three-cell structure to the one-cell structure is observed in the flow case of Γ = 3.76, η = 0.524, B = 0. For higher radii ratios η the classic Taylor-Couette laminar-turbulent transition takes place.
Conclusions
In the paper we investigate, with the use of DNS, the transitional Taylor-Couette flow with heat transfer. For all flow cases we observe a strong effect of the Ekman boundary layers on the flow structure; however, this effect depends on the curvature of the cylinders (η). For the flow case of η = 0.375 (B = 0), the effect of the end-walls is so strong that it leads to a rapid transition from the three-cell structure to the one-cell structure. The use of the non-isothermal boundary conditions results in the appearance of an intermediate two-cell structure which delays this transition. The same scenario takes place in the flow case of η = 0.524. For the other values, η = 0.756 and 0.821, we observe the gradual development of the three-cell structure illustrated in section 3.1.
In order to better characterize the influence of the imposed boundary conditions on the flow, we examine the axial distributions of the local transverse angular momentum current. The comparison with the correlation [2] obtained for Γ = 10 shows agreement. However, we observe discrepancies for Re > 1000 in comparison to the correlation proposed in [2] for an infinitely long cavity. The results obtained for Γ = 3.76 show that in a configuration of such low aspect ratio the influence of the end-walls is very large, particularly for smaller η. (2016, doi: 10.1088/1742-6596/760/1/012035)
A workflow for standardising and integrating alien species distribution data
Biodiversity data are being collected at unprecedented rates. Such data often have significant value for purposes beyond the initial reason for which they were collected, particularly when they are combined and collated with other data sources. In the field of invasion ecology, however, integrating data represents a major challenge due to the notorious lack of standardisation of terminologies and categorisations, and the application of deviating concepts of biological invasions. Here, we introduce the SInAS workflow, short for Standardising and Integrating Alien Species data. The SInAS workflow standardises terminologies following Darwin Core, location names using a proposed translation table, taxon names based on the GBIF backbone taxonomy, and dates of first records based on a set of predefined rules. (Hanno Seebens et al., NeoBiota 59: 39–59 (2020), doi: 10.3897/neobiota.59.53578)
Introduction
In recent years, we have observed a tremendous rise in the availability of data in all fields of biodiversity research (La Salle et al. 2016), including invasion ecology. In particular, initiatives have emerged to map the occurrence of specific taxa with alien populations (called 'alien taxa' in the following) for major groups such as plants, birds, amphibians and reptiles (van Kleunen et al. 2015; Dyer et al. 2017a; Capinha et al. 2017); to assess the extent of invasions in particular geographical regions (e.g., Europe, DAISIE 2009) and habitats (e.g., marine, Ahyong et al. 2019); to document particular events (e.g., dates of record, Seebens et al. 2017); or to identify and record the presence of alien species that have negative impacts (e.g., Pagad et al. 2018). Although analyses of these data sources have led to valuable insights on the historic and current spatial and temporal patterns and processes of biological invasions (Dyer et al. 2017a; Dawson et al. 2017; Pyšek et al. 2017; Bertelsmeier et al. 2017; Seebens et al. 2018), these new aggregations of alien species data differ in various respects and are not interoperable.
Biodiversity data sources are often not standardised or directly comparable , which limits their value for conservation and research (Bayraktarov et al. 2019). In invasion ecology, new databases have recently been produced for a range of different purposes, although they have, to date, been produced largely in isolation. To remedy this, individual workflows have been created to harmonise and integrate the information in order to meet particular project goals. These workflows have used different taxonomic and geographical standards and practices, but such standardisations are not always clearly documented. As a result, databases are often not comparable and cannot be readily linked, which hampers progress towards improving the taxonomic and geographic coverage of alien species data and potential insights for research and management that might be derived as a consequence (McGeoch et al. 2012). The widespread lack of standardisation across key data sources on alien species also hinders clear communication with managers and policy makers (Gatto et al. 2013;McGeoch and Jetz 2019).
Progress in biodiversity research has been facilitated by the development of data standards (Guralnick and Hill 2009), powerful analytical tools and coherent work-flows to, for instance, develop and calculate Essential Biodiversity Variables (EBVs, Kissling et al. 2018;Jetz et al. 2019) or to clean biodiversity data (Mathew et al. 2014;Jin and Yang 2020). Recently, using three exemplar alien species, a workflow was constructed and tested to integrate data from multiple sources for alien species (Hardisty et al. 2019). For most comprehensive databases in invasion ecology, the publication of such workflows and detailed descriptions of database generation remains rare (but see Dyer et al. 2017b;Pagad et al. 2018). Thus, data management in invasion ecology does not often meet open science principles, and the databases produced do not qualify as FAIR, i.e. Findable, Accessible, Interoperable, and Reusable (Wilkinson et al. 2016). Although the procedures for collating data are often described, the descriptions and associated metadata are generally insufficient for the workflow to be reproduced. Computer scripts and guidance documents are often not publicly available, which further impedes reproducibility. Using a standardised, publicly available workflow would enable alien species databases to be combined in a transparent and repeatable way, and improve the format, contents, and interoperability of databases (Mathew et al. 2014). Such annotated workflows would also guide future data collation efforts such that they achieve both their own goals and contribute to community-wide efforts to enhance the quality and quantity of data on alien and invasive species . In particular, any integration of species databases requires a well-documented, repeatable, coherent, and standardised workflow to match nomenclature and taxonomy based on a standard concept (e.g., Boyle et al. 2013;Murray et al. 2017), or even to map different taxonomic concepts to each other (Berendsohn 1995). 
The availability of large online infrastructures for biodiversity research, such as the Global Biodiversity Information Facility (GBIF), enables taxonomic standardisation in a reproducible and standardised way, but the potential is still not fully exploited in studies addressing biological invasions.
Here, we introduce the SInAS (Standardising and Integrating Alien Species data) workflow that was developed within the course of the synthesis working group "Theory and Workflows for Alien and Invasive Species Tracking" (sTWIST) at sDiv, Leipzig, Germany. Following Hardisty and Roberts (2013), we use the term "workflow" as a description of a series of processes of data manipulation and integration, including the codes allowing a largely automated approach (see also van der Aalst and van Hee 2002, who use the term "workflow" for a series of standardised processes). The SInAS workflow serves to integrate databases of regional checklists including information on spatial and temporal dynamics of alien species using a standardised protocol to merge taxon and location names. The SInAS workflow combines public taxonomic infrastructures with procedures, resolutions, and concepts commonly used in biodiversity research in general and invasion ecology in particular. In the following, we provide a detailed description of the SInAS workflow and its implementation in R. We demonstrate its functionality using an example of merging five of the most comprehensive open access alien species databases currently available. Although the workflow was developed for merging databases of alien species occurrences, it can be readily adapted to other databases, including those with associated spatial information.
The SInAS workflow
The SInAS workflow was created to integrate databases organised as individual spreadsheet tables, which is the most common format for alien species occurrence information. In contrast to databases of native species, alien species occurrences are often associated with a date of first introduction or first date of report for a region as an alien or naturalised species. Here, we adopt a common use of these "first records", which represent the first record of a taxon in a particular region. Following Darwin Core terminology (Darwin Core Task Group 2009), first records are called "event dates" in the following.
Three major steps, organised in sequence, form the primary components of the workflow: 1) initial check and preparation of the original databases; 2) standardisation of the databases; and 3) merging of the standardised databases (Fig. 1). Standardisation (step 2) is the most complex step and can be subdivided into specific tasks, each of which involves the standardisation of one of eight variables: taxon names, location names, event dates, occurrence status, establishment means, degree of establishment, pathway, and habitat. An overview of all variables used in this workflow together with definitions and explanations is given in Suppl. material 2: Tables S1-S4. Each specific task requires a reference against which data will be standardised (e.g., a list of location names in a particular format or a list of accepted taxon names and their synonyms). Each task produces intermediate output tables to report where there was standardisation (e.g., replacements of original names) and where standardisation was not possible (e.g., missing names and unresolved names). Each step of the workflow requires the output of the previous step as input, except for step one, where the original database and its metadata have to be provided (currently implemented as *.xlsx files). In the following section, a comprehensive overview of the SInAS workflow is provided, while the detailed description can be found in Suppl. material 1. The full workflow implemented in R, together with all required input files, example databases, and a manual, is provided as the SInAS workflow package (see section 'Data and code availability' below).
Step 1: Preparation of databases
The first step includes a check of the availability of variables in the original databases. Variables are categorised into three classes: i) required variables, which must be provided (i.e., taxon and location names); ii) optional variables, which are associated to the taxon occurrence (e.g., occurrence status or pathway) or represent entries potentially useful for data standardisation (e.g., extra taxonomic information); and iii) additional variables, which are not used within the workflow, but are retained as presented in the original databases throughout standardisation (e.g., traits). An overview of variables and definitions is provided in Suppl. material 2: Table S1. The column names of the required and optional variables in the input databases are harmonised.
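Step 1 can be sketched as follows (a minimal Python illustration; the published workflow is implemented in R, and the column names and mapping shown here are hypothetical examples):

```python
# Sketch of step 1 of a SInAS-style preparation: check that the required
# variables are present and harmonise column names using user-supplied
# metadata. Column names are illustrative, not the workflow's exact ones.

REQUIRED = {"taxon", "location"}

def prepare(records, column_map):
    """records: list of row dicts; column_map: original name -> workflow name."""
    renamed = [{column_map.get(k, k): v for k, v in row.items()} for row in records]
    missing = REQUIRED - set(renamed[0])  # additional variables pass through as-is
    if missing:
        raise ValueError(f"required variables missing: {sorted(missing)}")
    return renamed

rows = [{"Species": "Acacia dealbata", "Country": "Portugal", "FirstRecord": "1850"}]
mapping = {"Species": "taxon", "Country": "location", "FirstRecord": "eventDate"}
print(prepare(rows, mapping))
```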
Step 2: Standardisation

2a: Terminology

Records of alien species are often associated with information about their occurrence status, the degree of establishment, and their pathway(s) of introduction. Such information is standardised in this step using translation tables (Suppl. material 1). Translation tables provide information about the entries in the original databases and the corresponding terms that are to be used in the merged database. These are part of the workflow package (see section 'Data and code availability' below), and follow the recommendations by Groom et al. (2019) in standardising the Darwin Core terms 'establishmentMeans', 'occurrenceStatus' and 'pathway', and adopting their suggestion to include a new term 'degreeOfEstablishment', describing the status of the taxon at a particular location (Suppl. material 2: Table S1). Strictly speaking, this status is not associated with a taxon, but with a specific population. This means, as Colautti & MacIsaac (2004) already pointed out, that alien or non-indigenous species are misnomers and these attributes, frequently referred to simply as "status", are associated at the population level (i.e., intersecting taxon name with locality). In databases covering large regions, such attributes must properly be assigned at the right level.

Figure 1. Overview of the Standardising and Integrating Alien Species data (SInAS) workflow that can be used to merge alien species databases. The workflow consists of three consecutive steps: 1. preparation of databases, 2. standardisation, and 3. merging. The standardisation step is subdivided into the standardisation of: 2a. terminology, 2b. location names, 2c. taxon names, and 2d. event dates (i.e., first records). The user can modify the workflow by adjusting the reference tables under 'user-defined input'. At each step of standardisation, changes and missing entries are exported as intermediate output that can be used to check the workflow, the reference tables, or the input data.
However, to be comparable with the wealth of invasion literature that does not properly attribute "status", and for reasons of linguistic simplicity, we still refer to alien species rather than using the correct alien populations. Although the proposal by Groom et al. (2019) has not yet been ratified by the Biodiversity Information Standards organisation, we used it in the workflow as the proposed terminology covers dimensions critical to invasion biology, policy, and management (McGeoch and Jetz 2019), and thus will provide helpful information irrespective of its official incorporation into Darwin Core. The Darwin Core term 'habitat' is also standardised within the workflow; however, as a categorisation of different habitats is not provided by Darwin Core, we provide one in the respective translation table (Suppl. material 1) based on the distinction between terrestrial, freshwater, marine, and brackish habitats. The translation tables can be adjusted by the user in any way, but we highly recommend adhering to the proposed Darwin Core terminology to avoid having incomparable entries. Nonmatching terms are exported so they can be manually checked.
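The use of a translation table in step 2a can be sketched as below (Python used for illustration; the table entries are invented examples, not the tables shipped with the workflow package):

```python
# Sketch of terminology standardisation with translation tables: original
# entries are mapped to Darwin Core style terms, and non-matching entries
# are collected for manual checking. Table contents are invented examples.

TRANSLATION = {
    "occurrenceStatus": {"exotic": "present", "alien": "present", "absent": "absent"},
    "establishmentMeans": {"introduced": "introduced", "non-native": "introduced"},
}

def standardise_terms(rows):
    unmatched = []
    for row in rows:
        for field, table in TRANSLATION.items():
            if field in row:
                original = row[field].strip().lower()
                if original in table:
                    row[field] = table[original]
                else:
                    unmatched.append((field, row[field]))  # exported for checking
    return rows, unmatched

rows = [{"occurrenceStatus": "Exotic", "establishmentMeans": "non-native"},
        {"occurrenceStatus": "cryptogenic"}]
out, missing = standardise_terms(rows)
print(out[0])      # both terms translated to the standard vocabulary
print(missing)     # the unresolved term is reported, not silently dropped
```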
2b: Location names
Location names are standardised using a user-defined translation table (Suppl. material 1), which includes the master location names and the corresponding alternative formats, languages, and spellings. Locations represent administrative units such as countries, states or islands. The majority of location names (89%) conform to the 2-digit ISO code (ISO 3166-1 alpha-2) classification. For the remaining locations, countries were split into sub-national units which are geographically separated from each other (be they islands, states or mainland areas). For instance, Alaska, Hawaii, and US Minor Outlying Islands were separated from mainland United States; the Azores were distinguished from Portugal; and Tasmania from Australia. The full list of location names can be found in the input file "AllLocations.xlsx" as part of the workflow package. Altogether, we used a set of 262 non-overlapping locations covering the terrestrial surface of the world. Similar resolutions are used in many studies of biological invasions Capinha et al. 2017;Dyer et al. 2017b). The location categorisation can be easily adjusted to any spatial delineation in a user-friendly way by modifying the input file. Additional information for the location such as two-and three-digit ISO codes of countries, continents or the World Geographical Scheme for Recording Plant Distributions regions (WGSRPD, Brummitt 2001) are also provided. Non-matching location names are exported for reference. A shapefile is provided, which relates the location to georeferenced polygons for mapping.
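The location standardisation of step 2b can be sketched in the same spirit (the table entries below are illustrative examples; the full table in "AllLocations.xlsx" covers 262 locations with alternative spellings and languages):

```python
# Sketch of step 2b: alternative spellings are mapped onto master location
# names, and selected sub-national units are kept separate from their
# country, as described in the text. Table entries are examples only.

LOCATION_TABLE = {
    "united states": "United States",
    "usa": "United States",
    "alaska": "Alaska",     # separated from mainland United States
    "hawaii": "Hawaii",
    "azores": "Azores",     # distinguished from Portugal
    "portugal": "Portugal",
}

def standardise_location(name):
    key = name.strip().lower()
    return LOCATION_TABLE.get(key)  # None -> exported for manual checking

print(standardise_location("USA"))
print(standardise_location(" Azores "))
print(standardise_location("Atlantis"))  # non-matching, would be exported
```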
2c: Taxon names
Taxonomic standardisation is one of the most important and challenging tasks in biodiversity data integration (Rees and Cranston 2017) as taxon names are often considered the fundamental unit to which other information types are linked (Patterson et al. 2010;Koch et al. 2018). This, however, necessitates the use of a taxonomic backbone against which all species names are assessed during the standardisation process. In the absence of a single authoritative nomenclature across all taxa (Bánki et al. 2018), we used the GBIF taxonomic backbone, which is itself primarily based on the Catalogue of Life (Bánki et al. 2018) (43 % overlap of GBIF backbone taxonomy and Catalogue of Life at the time of access) and complemented with 50+ other sources of taxonomic information. The details of these taxonomic sources can be found at the GBIF Secretariat (2019) and the full taxonomy is available for download (http://rs.gbif.org/datasets/backbone/). If the taxon name could be found in GBIF either as an exact match, a synonym or a fuzzy match with a high confidence (see Suppl. material 1), the obtained 'accepted taxon name' according to GBIF, as well as its given synonym and further taxonomic information, are returned and stored. Taxon names identified as synonyms according to GBIF are replaced with the accepted name obtained from GBIF. To avoid mismatches due to spelling errors, GBIF performs fuzzy matching of the full taxon names. This involves a calculation of similarity between the provided taxon names and the record provided by GBIF. GBIF returns the result of fuzzy matching by the summary metric "confidence", which involves cross-checks of taxon names, authorities and taxonomic information with different weightings (see http://www.gbif.org/developer/ species#searching for more details). In addition to the taxon names, the taxonomic tree (species, genus, family, order, class, phylum, and kingdom) is obtained from GBIF. 
In the SInAS workflow, all taxon names that could not be resolved are exported as a list of missing taxon names for further reference. A complete list of all taxon names (including the original names provided in the individual databases, taxonomic information, taxonomic status of the name, and search results) is exported as a separate list of taxon names (Suppl. material 1). The user can provide a list of species names and synonyms to resolve conflicts and errors in GBIF entries.
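The decision logic applied to a GBIF match result can be sketched as below. The field names follow the GBIF species match API (`/v1/species/match`: `matchType`, `confidence`, `synonym`, `species`); the confidence threshold of 90 is an illustrative assumption rather than the workflow's actual value, and the canned response dictionary stands in for a live API call:

```python
# Sketch of resolving a taxon name from a GBIF species-match response:
# exact matches and confident fuzzy matches are accepted, synonyms are
# replaced by the accepted name, and everything else is exported as missing.

def resolve_taxon(original_name, gbif_response, min_confidence=90):
    if gbif_response.get("matchType") == "NONE":
        return None   # export to the Missing_Taxa list
    if (gbif_response.get("matchType") == "FUZZY"
            and gbif_response.get("confidence", 0) < min_confidence):
        return None   # fuzzy match too uncertain, export for checking
    if gbif_response.get("synonym"):
        return gbif_response["species"]   # replace synonym by accepted name
    return gbif_response.get("species", original_name)

# canned response mimicking a GBIF match for a synonym
synonym_hit = {"matchType": "EXACT", "confidence": 98, "status": "SYNONYM",
               "synonym": True, "species": "Ailanthus altissima"}
print(resolve_taxon("Ailanthus glandulosa", synonym_hit))
print(resolve_taxon("Nonsensia maxima", {"matchType": "NONE"}))
```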
2d: Event dates
In the SInAS workflow presented here, event dates represent the time of the first documented occurrence of a species in a region outside its native range, which is also called 'first record' . Ideally, event dates for the first record of an alien species are provided as a single year, which is then retained in the workflow. But often other time ranges are provided. To enable merging and cross-checking of first records among databases and further analysis, it is necessary to translate these different time ranges into single years. Such an adjustment of first records requires a set of rules (e.g., Seebens et al. 2017;Dyer et al. 2017b), which define how a time range should be treated to obtain a single year. In the simplest case, the start and the end years of the time range are provided, and their arithmetic mean is used as the new single event date. In other cases, time ranges are described in alternative ways such as "1920ies" or "<1920". In translating this information, we followed primarily the rules defined in table 3 of Dyer et al. (2017b). The rules are currently provided as a textual description and the user has to "translate" non-standard event dates into a single year format according to the guidelines and examples provided in the file 'Guidelines_eventDate.xlsx' as part of the workflow package. The user has the opportunity to modify the rules, but we recommend sticking to the proposed ones as a standard in biological invasions. Cases of entries that could not be adjusted are exported from the workflow for cross-checking.
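The translation of heterogeneous first-record entries into single years can be sketched as follows. The range rule (arithmetic mean of start and end year) is the one stated above; the handling of "<year" and decade entries is an illustrative reading of rules such as those in table 3 of Dyer et al. (2017b), not a verbatim reproduction:

```python
import re

# Sketch of turning mixed first-record formats into single event years;
# unresolvable entries return None and would be exported for checking.

def event_year(entry):
    entry = str(entry).strip()
    if re.fullmatch(r"\d{4}", entry):              # plain year: keep as-is
        return int(entry)
    m = re.fullmatch(r"(\d{4})\s*-\s*(\d{4})", entry)
    if m:                                          # range: arithmetic mean
        return (int(m.group(1)) + int(m.group(2))) // 2
    m = re.fullmatch(r"<\s*(\d{4})", entry)
    if m:                                          # "<1920": illustrative rule
        return int(m.group(1))
    m = re.fullmatch(r"(\d{3})0ies", entry)
    if m:                                          # "1920ies": middle of decade
        return int(m.group(1)) * 10 + 5
    return None                                    # unresolved, exported

print(event_year("1900-1920"))  # -> 1910 (mean of the range)
print(event_year("1920ies"))    # -> 1925
print(event_year("ca. 1900?"))  # -> None
```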
Step 3: Merging

In the final step of the workflow, the standardised databases are merged into a single master database. Merging is based on the entries of taxon and location names. That is, all entries with exactly the same taxon and location name will be merged to obtain a single entry for each existing combination of taxon and location. This is achieved by first merging columns of the standardised databases to concatenate their contents and, second, by merging rows of the final database to remove duplicate entries. Conflicts of multiple event dates for the same event are resolved by adopting the earlier of the first records. In cases where conflicts cannot be resolved, the respective entries of all databases are combined into a single entry of the master database. For instance, if a taxon X in location Y is classified as 'introduced' in one database and 'uncertain' in another, the entry in the final master database for X in Y will be 'introduced; uncertain'. The user will be informed that conflicts still exist, which might be solved by adjusting the translation tables or by checking the original data.
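The merging rules of step 3 can be sketched as follows (a plain-Python illustration of the behaviour described above, not the R implementation):

```python
# Sketch of step 3: entries with identical taxon and location are collapsed
# into one record; the earliest event date wins, and conflicting status
# values are concatenated with "; " so the user can resolve them later.

def merge_databases(databases):
    merged = {}
    for db in databases:
        for row in db:
            key = (row["taxon"], row["location"])
            if key not in merged:
                merged[key] = dict(row)
                continue
            entry = merged[key]
            # earliest first record wins
            dates = [d for d in (entry.get("eventDate"), row.get("eventDate")) if d]
            entry["eventDate"] = min(dates) if dates else None
            # combine conflicting status values into a single entry
            old, new = entry.get("occurrenceStatus"), row.get("occurrenceStatus")
            if new and new != old:
                entry["occurrenceStatus"] = f"{old}; {new}" if old else new
    return list(merged.values())

db1 = [{"taxon": "X", "location": "Y", "eventDate": 1950, "occurrenceStatus": "introduced"}]
db2 = [{"taxon": "X", "location": "Y", "eventDate": 1930, "occurrenceStatus": "uncertain"}]
print(merge_databases([db1, db2]))
```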
In principle, the SInAS workflow is fully automated once metadata are provided at step 1. This, however, requires accepting all defaults such as location names and taxonomic classification by GBIF and, more importantly, keeping all unresolved conflicts that might include unmatched location names or misspellings in the original data. We therefore recommend running the workflow in an iterative process of running the workflow, checking warnings and intermediate output tables, resolving conflicts and errors, and re-running the workflow. Such an iterative process should increase the match between databases, and therefore the coverage of the final merged database.
A case study
We applied and tested the workflow using five global databases of spatio-temporal alien species occurrences (Table 1). Variables from the different databases were mapped onto the variables provided in the SInAS workflow as outlined in Suppl. material 2: Tables S1-S4. As location names were provided in different columns in GloNAF and GAVIA, these were merged manually to obtain a better match with the classification of locations used in the SInAS workflow.
Merging of the five databases resulted in a new database (the sTWIST database) consisting of two interlinked tables containing records of alien species per location and a full list of taxa including further taxonomic information (Suppl. material 3). Depending on the success of the integration of the specific databases, several additional files will be created during the workflow providing missing taxa and location names, unresolved terms (e.g., of occurrence status and pathways), translated location names and event dates, and unresolved event dates. In our case, 17 of these tables were exported from the workflow for further cross-checking (Suppl. material). One consequence of the workflow was that, after cleaning and standardisation, the number of records dropped (Table 1). For example, the merged sTWIST database contained only ~30% of the original GloNAF records. This was mostly due to the GloNAF database having a finer spatial resolution than the sTWIST database (1,029 vs. 257 regions). Consequently, many regions were combined and records merged.
Altogether, 53,546 taxon names were obtained from all five databases, including synonyms and multiple entries of individual taxa due to different spellings. A small proportion (5%) of these taxon names could not be found in GBIF for different reasons such as misspellings, missing information or unresolved taxonomies. This often involved subspecies, varieties or hybrids and can be checked in the output files "Missing_Taxa_*" for the individual databases. Most of these unresolved taxon names were obtained from GRIIS (1,610; 6% of GRIIS taxa), followed by FirstRecords (802; 5%), AmphRep (10; 4%), GloNAF (261; 2%) and GAVIA (8; <1%). Unresolved taxon names were kept in the final database but flagged as such in the full list of taxon names "Taxa_FullList.csv". Standardisation during the SInAS workflow identified 7,174 synonyms (13%), which were replaced by the accepted names provided by GBIF. This finally reduced the number of taxa to 35,150 distinct taxon names. After standardisation of taxon and location names, the overlap of taxon-specific databases with the cross-taxon ones was surprisingly low (Table 2). Most regions were represented in all databases; however, the overlaps for taxa and taxon-by-location combinations were often far below 50%. For instance, only 26% of all records in GAVIA can also be found in GRIIS, while 20% of the GloNAF records were also included in FirstRecords.

Table 1. The taxonomic coverage and size of the original databases on the occurrence of alien taxa before and after standardisation and merging using the Standardising and Integrating Alien Species data (SInAS) workflow (see Figure 1). Records were counted multiple times when they were obtained from different databases. Reductions in total record number were mostly a result of aggregation from the finer spatial resolution of the original databases to the coarser spatial resolution used in the SInAS workflow.
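The synonym-replacement step described above can be sketched as follows. This is a minimal illustration with hypothetical function and table names; in the actual SInAS workflow the accepted names come from the GBIF taxonomic backbone rather than a hand-built dictionary:

```python
# Minimal sketch of the synonym-replacement step. The lookup table stands in
# for the GBIF taxonomic backbone; unmatched names are kept but flagged,
# mirroring the "Taxa_FullList.csv" output described above.
def standardise_taxa(records, accepted_names):
    """records: list of dicts with a 'taxon' key;
    accepted_names: dict mapping a raw name to its accepted name."""
    out = []
    for rec in records:
        new = dict(rec)
        if rec["taxon"] in accepted_names:
            new["taxon"] = accepted_names[rec["taxon"]]
            new["status"] = "resolved"
        else:
            new["status"] = "unresolved"  # kept in the database, but flagged
        out.append(new)
    return out
```

Running the lookup per database (rather than on the merged table) is what allows the per-database "Missing_Taxa_*" diagnostics described above.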
The comparatively low overlap of locations in GRIIS with the taxon-specific databases stems from a few locations that are considered separately only in GRIIS.

Table 2. Overlap (in %) of locations, taxa, and taxon-by-location records between taxon-specific and cross-taxon databases. An overlap between two databases is defined as the number of entries in the taxon-specific database shared with the cross-taxon database, divided by the total number of entries of the taxon-specific database. It therefore shows how many records of the taxon-specific databases are found in the cross-taxon ones.
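The overlap statistic reported in Table 2 can be expressed compactly. This sketch (with hypothetical names) treats each database as a set of taxon-by-location records:

```python
# Overlap as defined in the Table 2 caption: the share of entries of the
# taxon-specific database that are also present in the cross-taxon database.
def overlap_percent(taxon_specific, cross_taxon):
    """Both arguments are sets of hashable records, e.g. (taxon, location)."""
    if not taxon_specific:
        return 0.0
    return 100.0 * len(taxon_specific & cross_taxon) / len(taxon_specific)
```

Note that the measure is asymmetric by construction: it is normalised by the size of the taxon-specific database only.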
Discussion
The SInAS workflow is, to the best of our knowledge, the most comprehensive workflow to standardise and integrate alien species occurrence databases to date. It is also in full compliance with the FAIR data principles (Wilkinson et al. 2016). The workflow provides a foundation to develop and apply standards for the harmonisation of taxon names, geographic resolutions, and event dates. It achieves this using translation tables and rules that are transparent and linked to existing international schemes such as accepted taxonomic backbones that can be easily updated as needed. The SInAS workflow also offers the opportunity to adapt individual steps to the respective user's needs, and enables the user to conveniently report on deviations from the suggested workflow. Reporting of such adjustments is essential for reproducibility, particularly in the field of invasion ecology, which is rich in competing concepts and terminologies (Falk-Petersen et al. 2006). Thus, the SInAS workflow will help to differentiate and integrate the various approaches, and finally will increase trust not only in data but also in study results and conclusions communicated to the decision makers and the general public (Franz and Sterner 2018). The potential to customise and extend the workflow increases the range of possible applications such as the calculation of indicators (e.g., Wilson et al. 2018), the ability to conduct global and regional assessments of invasive alien species and their control, and the global collaboration being proposed as essential for dealing with priority invaders (Blackburn et al. 2020).
We introduced the SInAS workflow as a tool to integrate databases, but it can also assist with standardisation within a database to ensure that region or taxon names are consistent, and that terminologies of individual checklists are reported in a more standardised way. Although the flexibility built into the SInAS workflow makes it more broadly useful, providing flexibility in a workflow does bear the risk that databases remain incompatible. For instance, users of the workflow can define their own categorisation of locations, which might result in even more heterogeneous databases in addition to those that already exist. It is essential, therefore, that modifications of the workflow are clearly communicated. As best practice, we recommend that modifications of the input files such as translation tables, taxon names or any modification of the workflow itself are clearly reported and published together with the final database. For instance, a change in the list of geographic regions can be easily attached as a table to the respective publication together with the link to our workflow. In this way, modifications can be traced back to their origin and databases remain comparable despite adaptations to individual project goals. We believe that our proposed workflow will smooth this process and make it easier for individual researchers to publish not only scientific results in a more consistent way, but also the underlying workflows to enhance the transparency and reproducibility of the science.
The comparison of the individual databases that resulted from the integration work done here highlighted an unexpectedly low degree of overlap between them. This re-emphasizes, in spite of significant recent advances in alien species data collation, the importance of: 1) joint collaborative work, 2) freely available data, and 3) shared workflows to improve the taxonomic, geographic, and temporal coverage and resolution of alien species data (Hardisty et al. 2019). The low degree of overlap was obviously related to the scope of the individual databases: the taxon-specific databases focus on a high level of spatial and taxonomic coverage, while the cross-taxon databases harvest information on a specific topic such as event dates or impact. Moreover, the databases drew original data records from different sources, and so each database was constructed using different workflows with divergent assumptions and supporting concepts. This clearly shows that not only does the merging of individual databases have to be standardised as proposed here, but the integration of primary data from the original sources needs to be done in a more reproducible and transparent way as well (Vanderhoeven et al. 2017; Pagad et al. 2018). Our case study also highlights that the SInAS workflow and the associated scripts could be used to assess the reliability of different databases and their components (e.g., Cano-Barbacil et al. 2020) and to identify potential areas of improvement for the respective databases.
Our workflow was developed to integrate taxon lists for individual regions, so-called checklists. Checklists represent by far the most common representation of spatial information on alien species occurrences (Pyšek et al. 2012; Brundu and Camarda 2013). This is somewhat different to other fields of biodiversity research, where occurrence data are often provided as range maps, grids, plot-based lists or point coordinates. In contrast to populations of native taxa, alien taxa populations are categorised as being alien only for a particular region and timeframe. The importance of decision-making in an applied science, such as invasion ecology, means that policies are commonly made for the administrative units (such as countries or states/provinces) responsible for control efforts, and the spatial resolution of presence-absence data is kept low to accommodate both uncertainty and the precautionary principle when data are intended to inform policy and management. As a consequence, the decision of what is considered as being alien is often taken for administrative regions. This is somewhat different for aquatic alien species, which are categorised depending on marine regions or watersheds, but these spatial units can be easily incorporated as additional entries in the table of geographic regions. In its current form, the SInAS workflow is not capable of handling coordinate-based occurrences. While including point-wise occurrences might be possible in future versions of the workflow, a practical solution would be to assign the coordinate-based location to a region and add the region to the workflow. For example, point-wise occurrence data for the Western Mediterranean Sea could be attributed to this region and added to the workflow.
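The workaround suggested above, assigning a coordinate-based occurrence to a named region before running the workflow, could look like this. This is a toy sketch with hypothetical names: real region delineations are polygons handled with GIS tools, not the bounding boxes used here:

```python
# Toy point-to-region assignment using bounding boxes as stand-ins for the
# real region polygons: (lon_min, lat_min, lon_max, lat_max).
def assign_region(lon, lat, region_boxes):
    for name, (x0, y0, x1, y1) in region_boxes.items():
        if x0 <= lon <= x1 and y0 <= lat <= y1:
            return name
    return None  # unassigned records would need manual resolution
```

Once every point record carries a region name, it can be treated like any other checklist entry in the table of geographic regions.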
The pervasive challenge in the integration of alien species data from multiple sources is the variability in the use of terminology (McGeoch et al. 2012). For example, the term 'invasive species' has at least three working definitions: alien populations that are self-sustaining and have naturally spread; alien populations that negatively impact native species, ecosystems, the economy or human health; or populations (be they native or alien) that have recently increased in abundance or extent (Richardson et al. 2000; Blackburn et al. 2011; Carey et al. 2012). As a consequence, merging databases that use different definitions of alien and invasive alien species could result in a misleading collation of taxa. Currently, terminologies are not consistently used across databases, although standard concepts have been published (Blackburn et al. 2011). In the SInAS workflow, we provide a translation of terms following common standards (Darwin Core Task Group 2009; Groom et al. 2019), but the definitions of these terms may vary among primary sources and projects, which often cannot be standardised retrospectively. It is therefore essential to adopt common definitions and transparent workflows already in the primary literature, and to clearly specify which definition is used.
A further difficulty in combining species data lies in the application of different taxonomic concepts (Berendsohn 1995) by the data recorders. This is a general problem in biodiversity and taxonomic research and is not solved within the SInAS workflow: it requires collaborative solutions from the relevant research community. While resolving such taxonomic conflicts would make the SInAS workflow more useful, one should keep in mind that a complete taxonomic resolution is not necessarily required to provide useful information (Gerwing et al. 2020). Unless this workflow is used by experienced taxonomists for taxonomic resolution, we recommend sticking to standards offered by authorities such as GBIF and reporting deviations from these standards. Our workflow eases this reporting process by providing the opportunity to submit information about modifications together with the databases.
While advancements have been made in other fields of biodiversity research, with online platforms such as GBIF including a full and citable version control, many databases on biological invasions are still curated by individuals or research groups and might not be publicly available at all. Changing this situation will require there being: 1) an incentive for researchers to publish their data online, ideally with a digital object identifier (DOI) and versioning as provided by online platforms such as GBIF or long-term archives such as Zenodo (https://zenodo.org/) or Dryad (https://datadryad.org), and following the FAIR principles of data management; 2) professional training and technical support for data management; and 3) clear guidelines and standards to ease such data publications (Groom et al. 2019). For some of these aspects, support is already available but still not widely adopted, such as the "Guide to Data Management in Ecology and Evolution" published by the British Ecological Society (2014). For other aspects, financial and personnel support is required, as individual researchers often do not have the capacity to ensure long-term maintenance and support, which can only be achieved by institutions. The importance of adopting the FAIR data principles has been increasingly recognised by international institutions such as the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services [IPBES, currently conducting a thematic assessment on invasive alien species and their control (https://ipbes.net/invasivealien-species-assessment) that depends on the integration of data sources as we have discussed here] and the European Commission, which provide incentives to scientists to make their data comparable and available. We believe the workflow presented here addresses these challenges by providing an example of how to achieve standardisation across databases and to facilitate the kind of standardisation chosen by the researchers.
The modular structure of the SInAS workflow means that it can form the basis for the development of future data integration workflows. We foresee several opportunities for extensions. Translation tables of additional variables such as taxon traits and variables related to regions and relevant for understanding drivers of biological invasions (environmental, socio-economic, historic) would add another level of value for both research and application. The workflow could also be extended to allow for coordinate-based occurrence records by integrating information on region delineations using Geographic Information System (GIS) tools. Thus, the SInAS workflow, focussed as it is on essential variables for tracking biological invasions (distribution, time, and impact, Latombe et al. 2017), can be considered the core of an integrated comprehensive workflow of data on biological invasions. Global collaborative efforts, supported by readily accessible, globally representative evidence, are key to stemming the invasion tide.
Data and code availability
The full SInAS workflow including all required R scripts, input files, example databases and a manual is made freely available at a repository at Zenodo (https://doi.org/10.5281/zenodo.3944432) together with the coordinate-based delineations of regions. The releases at Zenodo are linked to a GitHub repository, which ensures full version control of the code. New releases will be provided under the same DOI. All additional files related to the case study are attached to this publication as supplementary materials.
Lower resolvent bounds and Lyapunov exponents
We prove a new polynomial lower bound on the scattering resolvent. For that, we construct a quasimode localized on a trajectory $\gamma$ which is trapped in the past, but not in the future. The power in the bound is expressed in terms of the maximal Lyapunov exponent on $\gamma$, and gives the minimal number of derivatives lost in exponential decay of solutions to the wave equation.
In this paper, we study lower bounds on the scattering resolvent in the lower half-plane. To fix concepts, we consider the semiclassical Schrödinger operator
$$P_h = -h^2 \Delta_g + V(x), \qquad V \in C_0^\infty(M; \mathbb{R}), \eqno(1.1)$$
where $(M, g)$ is a Riemannian manifold which is isometric to $\mathbb{R}^n$ with the Euclidean metric outside of a compact set, and $n$ is odd. See §1.2 for other possible settings.
We study the $h$-dependence of the norm of $R_h(\omega)$. We consider the Hamiltonian flow $e^{tH_p}$ of the semiclassical principal symbol $p$ of $P_h$, and make the following assumptions:

(1) $E$ is a regular value for $p$; that is, $dp \neq 0$ on $p^{-1}(E)$; (1.4)

(2) there exists a trajectory
$$\gamma(t) = (x(t), \xi(t)) = e^{tH_p}(x_0, \xi_0) \subset p^{-1}(E) \eqno(1.5)$$
which is trapped in the past but not in the future; that is, $x(t)$ stays in a compact subset of $M$ for $t \le 0$, but $x(t) \to \infty$ as $t \to +\infty$.
1.1. Application to the wave equation. To present the application of our result in the simplest setting, let $V \equiv 0$; then $R_h(\omega) = h^{-2} R_g(\omega/h)$, where $R_g(z)$ is the meromorphic continuation of the resolvent
$$R_g(z) = (-\Delta_g - z^2)^{-1} : L^2(M) \to L^2(M), \qquad \operatorname{Im} z > 0.$$
The estimate (1.8) can then be rewritten in terms of $R_g(z)$. Consider a solution $u \in C^\infty(\mathbb{R}_t \times M_x)$ to the inhomogeneous wave equation
$$(\partial_t^2 - \Delta_g)\, u = f(t, x), \qquad u|_{t < 0} \equiv 0, \eqno(1.9)$$
where $\Delta_g$ is the Laplace-Beltrami operator associated to the metric $g$.
Take the Fourier transform in time,
$$\hat u(z) := \int_0^\infty e^{izt}\, u(t)\, dt \in C^\infty(M), \qquad \operatorname{Im} z > 0, \eqno(1.10)$$
where the integral converges in every Sobolev space on $M$ by the standard energy estimates for the wave equation. Taking the Fourier transform of (1.9), we see that $\hat u(z) = R_g(z) \hat f(z)$ for $\operatorname{Im} z > 0$, and thus by the Fourier inversion formula,
$$u(t) = \frac{1}{2\pi} \int_{\operatorname{Im} z = \nu} e^{-izt}\, R_g(z)\, \hat f(z)\, dz, \qquad \nu > 0. \eqno(1.11)$$
Deforming the contour in (1.11) to $\{\operatorname{Im} z = -\nu\}$, $\nu > 0$ (see for instance [Dy11, Proposition 2.1] or Christianson [Ch08, Ch09] for details), we see that an upper resolvent bound (1.12), where $s \ge 0$ and $\chi_2 \in C_0^\infty(M)$ is equal to 1 near $\operatorname{supp} f$, implies an exponential energy decay estimate for $u$:
$$\| e^{\nu t} \chi_1(x)\, u \|_{H^1_{t,x}} \le C\, \| e^{\nu t} f \|_{H^s_{t,x}}. \eqno(1.13)$$
We note that the exponent $s$ in the estimate (1.12) gives the number of derivatives lost in the exponential decay bound (1.13), compared to the local in time estimate, which has $s = 0$. In control theory, $s$ is called the cost of the decay estimate.
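The contour-deformation step can be written out schematically (suppressing the precise mapping properties of the cutoffs; this is the standard computation behind the implication (1.12) $\Rightarrow$ (1.13)):

```latex
% Shift the contour from Im z = \nu to Im z = -\nu:
u(t) = \frac{1}{2\pi} \int_{\operatorname{Im} z = -\nu}
  e^{-izt}\, R_g(z)\, \hat f(z)\, dz, \qquad t > 0.
% On this contour |e^{-izt}| = e^{-\nu t}, so a polynomial bound on
% \chi_1 R_g(z) \chi_2 along Im z = -\nu (costing s derivatives) yields
\| e^{\nu t} \chi_1(x)\, u \|_{H^1_{t,x}}
  \le C\, \| e^{\nu t} f \|_{H^s_{t,x}}.
```

The factor $e^{-\nu t}$ picked up on the shifted contour is the source of the exponential decay rate $\nu$, while the resolvent bound contributes the loss of $s$ derivatives.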
A classical result of Ralston [Ra69] states that a no-cost local energy decay estimate (which is similar to (1.13) with $s = 0$) cannot hold when the flow $e^{tH_p}$ has trapped trajectories. We make this result quantitative, providing a lower bound on the cost depending on the rate of exponential decay and a local Lyapunov exponent:

Theorem 2. Under the assumptions of Theorem 1, suppose that the exponential decay estimate (1.13) holds for some $\nu > 0$, $s$, and all $u$ satisfying (1.9), where the constant $C$ is allowed to depend on the support of $f$ in $x$. Then $\lambda_{\max} > 0$ and $s \ge \lambda_{\max}^{-1}$.
To see Theorem 2, assume that (1.13) holds for some $\nu$; then the integral in (1.10) is well-defined for $\operatorname{Im} z \ge -\nu$ and (1.12) holds. (To pass from the resulting semiclassical Sobolev spaces to $L^2$, we may argue as in the proof of [Dy11, Proposition 2.1].) It remains to apply Theorem 1.
In the related setting of damped wave equations, the idea of using resolvent estimates to examine energy decay has a long history; see Lebeau [Le], Burq-Gérard [BuGé], and Lebeau-Robbiano [LeRo]. Fourier transforming the time variable to reduce the problem to a semiclassical one is a common method of examining the equation; see for example Bouclet-Royer [BoRo], Burq-Zuily [BuZu], Léautaud-Lerner [LéLe], and Burq-Zworski [BuZw]. In particular, lower resolvent bounds can similarly be used to indicate the minimal cost of exponential decay; for the special case of a single undamped hyperbolic trajectory, see Burq-Christianson [BuCh]. For an abstract approach to the relation between decay estimates and resolvent estimates, see Borichev-Tomilov [BoTo] and the references given there.
1.2. Example: surfaces of revolution. Theorem 1 is formulated for Schrödinger operators on Riemannian manifolds which are isometric to the Euclidean space outside of a compact set. However, it applies to much more general situations. In fact, the proof only requires existence of a meromorphic continuation $R_h(\omega)$ which is semiclassically outgoing (more precisely, the free resolvent $R_h^0$ in the proof of Lemma 5.1 has to be replaced by a semiclassically outgoing parametrix). In particular, one can allow several Euclidean infinite ends, dilation analytic potentials (see for instance [Sj]), or asymptotically hyperbolic manifolds (see the work of Vasy [Va13a, Va13b] and in particular [Va13b, Theorem 4.9]).
Define the trajectory $\gamma(t) \subset p^{-1}(E)$ as follows: $r(t)$ is the solution to the ordinary differential equation
$$\dot r(t) = 2 r(t)\, a(r(t)), \qquad r(0) = 1,$$
and $\theta(t)$ is defined by the corresponding equation for $\dot\theta(t)$. Then $r(t) \to \infty$ as $t \to \infty$ and $r(t) \to 0$ as $t \to -\infty$. It follows that $\gamma(t)$ escapes as $t \to \infty$ and converges to $\gamma_{tr}(t)$ as $t \to -\infty$. Using the linearization of the flow at $\gamma_{tr}$, we find $\lambda_{\max} = 2a(0)$; therefore (1.8) becomes the bound (1.15), where $\beta > 0$ is any number satisfying $a(0)\beta < \frac{1}{2}$. In particular, in the case when $a(0) = 0$ (that is, $\{r = 0\}$ is a degenerate equator for the surface $M$), for all $\nu > 0$ the norm of the resolvent $R_h(1 - ih\nu)$ grows faster than any power of $h$. In other words, the point $h^{-1} - i\nu$ is an $O(h^\infty)$ quasimode for the nonsemiclassical resolvent $R_g(z)$. This gives an example of $h^\infty$ quasimodes which do not give rise to resonances (as the quasimodes fill in a whole strip, but the number of resonances in a disk grows at most polynomially, see [DyZw, §§3.4, 4.3]). This is in contrast with the work of Tang-Zworski [TaZw] concerning quasimodes on the real line. See [ChWu] for an investigation of the related question of local smoothing for surfaces of revolution.
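The escape of the radial coordinate can be checked numerically. The sketch below integrates the ODE $\dot r = 2 r\, a(r)$ with a classical RK4 step; for the test we assume the constant profile $a \equiv a_0$, where the exact solution is $r(t) = e^{2 a_0 t}$:

```python
def integrate_r(a, t_end, n_steps=10000):
    """Integrate r'(t) = 2 r(t) a(r(t)) with r(0) = 1 using classical RK4."""
    f = lambda r: 2.0 * r * a(r)
    dt = t_end / n_steps
    r = 1.0
    for _ in range(n_steps):
        k1 = f(r)
        k2 = f(r + 0.5 * dt * k1)
        k3 = f(r + 0.5 * dt * k2)
        k4 = f(r + dt * k3)
        r += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return r
```

For any profile with $a > 0$ the right-hand side is positive, so $r(t)$ is increasing and the forward trajectory escapes, consistent with the discussion above.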
For the case $a(0) > 0$, under the additional assumption that $a > 0$ everywhere, the surface $M$ has a normally hyperbolic trapped set. Upper resolvent bounds for such trapping have been obtained by Wunsch-Zworski [WuZw], Nonnenmacher-Zworski [NoZw], and Dyatlov [Dy15, Dy14]. In particular, an upper bound valid for each fixed $\varepsilon > 0$ is a corollary of [Dy14, Theorem 2] and Remark (iv) following it (calculating $\nu_{\min} = \nu_{\max} = a(0)$ in the notation of that paper). Therefore, in this case the lower bound (1.15) becomes sharp as $\nu \to a(0)$.
1.3. Outline of the proof and previous results. Our proof proceeds by constructing a Gaussian beam $u$ which is localized on the segment $\gamma([-2t_e, 0])$, where $t_e$ is just below the local Ehrenfest time for $\gamma$. For that, we take a Gaussian beam localized $h^{1/2}$-close to the segment $\gamma([t_e - t_0, t_e + t_0])$, where $t_0 > 0$ is small; see Lemma 3.1. The name 'Gaussian beam' comes from the formula for the beam in a model case, see (3.8). We next propagate this fixed-time beam for all times $t \in [-t_e, t_e] \cap t_0\mathbb{Z}$ using the evolution operator $e^{-it(P_h - \omega^2)/h}$, and sum the resulting terms; see Lemma 4.1. The resulting function $u$ is a quasimode for $P_h - \omega^2$ with the right-hand side consisting of two parts: one localized near $\gamma(-2t_e)$ and the other one near $\gamma(0)$. The $L^2$ norm of the part corresponding to $\gamma(-2t_e)$ decays like a power of $h$, due to the negative imaginary part of $\omega$; this power determines the exponent in (1.8). The part corresponding to $\gamma(0)$ is cancelled by adding to $u$ an outgoing function localized on $\gamma([0, \infty))$. The Gaussian beam construction uses the fact that the trajectory $\gamma$ escapes in the forward direction, as otherwise the results of propagating the basic beam for different times may overlap and cancel each other out. In particular, unlike [EsNo] our construction does not apply to closed trajectories of the flow. See Figure 3 in §5.
To show that u is a quasimode, we need to understand the localization of Gaussian beams propagated for up to the Ehrenfest time. For bounded times, this was done by many authors, in particular Hagedorn [Ha] and Córdoba-Fefferman [CoFe]; see also Laptev-Safarov-Vassiliev [LSV]. More recently, Gaussian beams for manifolds with boundary have been applied to study inverse problems; see for instance Kenig-Salo [KeSa], Dos Santos et al. [DKLS], and the references given there. They have also been used in control theory to give necessary geometric conditions for control from the boundary, see for instance Bardos-Lebeau-Rauch [BLR] and the references given there. In both of these applications, only bounded time propagation was necessary; in the first one this is due to the use of Carleman weights and in the second one, to the bounded range of times taken in the setup. In §3, we use a simple version of a bounded time Gaussian beam as the starting point of our construction.
Combescure-Robert [CoRo] describe propagation of Gaussian beams up to time $\frac{1}{3} t_e$ in terms of squeezed coherent states (where $t_e$ is just below the Ehrenfest time), and the recent work of Eswarathasan-Nonnenmacher [EsNo] gives such a description until time $t_e$ for the case of closed hyperbolic trajectories.
The present paper describes the localization of Gaussian beams propagated up to the Ehrenfest time, using mildly exotic semiclassical pseudodifferential operators and a Riemannian metric on $T^*M$ adapted to the linearization of the Hamiltonian flow $e^{tH_p}$ on $\gamma$; see §4. The resulting description is however less fine than that of bounded time Gaussian beams, which have oscillatory integral representations with complex phase functions; see for instance Ralston [Ra82] and Popov [Po]. Moreover, the use of pseudodifferential calculus requires us to restrict ourselves to the class of smooth metrics and potentials.
Preliminaries
Our proofs rely on semiclassical analysis; we briefly present here the relevant parts of this theory and refer the reader to [Zw] and [DyZw, Appendix E] for a comprehensive introduction to the subject.
Let $M$ be a manifold. We consider the algebra $\Psi^k(M)$ of pseudodifferential operators on $M$ with symbols in the class $S^k_{1,0}(T^*M)$, defined by seminorm bounds where $K \subset M$ ranges over compact subsets and $\alpha, \beta$ are multiindices. In the case when $M = \mathbb{R}^n$ and $a \in S^k_{1,0}(T^*M)$ is compactly supported in $x$, one can define an element of $\Psi^k(\mathbb{R}^n)$ using the quantization procedure (2.1). To define pseudodifferential operators on a general manifold $M$, we fix a family of local coordinate charts $\varphi_j : U_j \to \mathbb{R}^n$, where $U_j \subset M$ is a locally finite covering, and take cutoff functions $\chi_j, \widetilde\chi_j \in C_0^\infty(U_j)$ such that $\sum_j \chi_j = 1$ and $\widetilde\chi_j = 1$ near $\operatorname{supp} \chi_j$. For $a \in S^k_{1,0}(T^*M)$, we define the quantization (2.2) chart by chart. We refer the reader to [DyZw, §E.1.5] for details.
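For the reader's convenience, the definitions referenced above can be recalled explicitly; these are the usual Kohn-Nirenberg symbol class and standard quantization as in [Zw], reproduced here from the standard references rather than from the elided displays:

```latex
% Symbol class: a \in S^k_{1,0}(T^*M) iff for each compact K \subset M
% and all multiindices \alpha, \beta,
\sup_{x \in K}
  \big| \partial_x^\alpha \partial_\xi^\beta a(x,\xi) \big|
  \le C_{\alpha\beta K}\, \langle \xi \rangle^{k - |\beta|}.
% Quantization on R^n (compare (2.1)):
\operatorname{Op}_h^0(a)\, u(x)
  = \frac{1}{(2\pi h)^n}
    \int_{\mathbb{R}^{2n}} e^{\frac{i}{h} \langle x - y, \xi \rangle}\,
    a(x,\xi)\, u(y)\, dy\, d\xi.
```

The chart-by-chart definition on a manifold then applies this quantization to $(\varphi_j)_* (\chi_j a)$ and patches with the cutoffs $\widetilde\chi_j$.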
We will also often use the mildly exotic symbol class $S^{\mathrm{comp}}_\rho(T^*M)$, $\rho \in [0, 1/2)$, consisting of symbols $a$ such that:
• $\operatorname{supp} a$ lies in some $h$-independent compact subset of $T^*M$; and
• for all multiindices $\alpha, \beta$, there exists a constant $C$ such that $|\partial_x^\alpha \partial_\xi^\beta a| \le C h^{-\rho(|\alpha| + |\beta|)}$.
Applying the quantization procedure (2.2) to symbols of class $S^{\mathrm{comp}}_\rho(T^*M)$, and allowing $O(h^\infty)_{\mathcal{D}'(M) \to C_0^\infty(M)}$ remainders, we obtain the pseudodifferential class $\Psi^{\mathrm{comp}}_\rho(M)$. We require that operators in this class be compactly supported uniformly in $h$. The class $\Psi^{\mathrm{comp}}_\rho$ enjoys properties similar to the standard pseudodifferential class $\Psi^k$; see for instance [Zw, §4.4] or [DyGu, §3.1]. For $\rho = 0$, we recover the class $\Psi^{\mathrm{comp}}$ of pseudodifferential operators with compactly supported $S_{1,0}$ symbols. It can be seen directly from (2.1) and (2.2) that $\operatorname{Op}_h(1)$ is the identity operator. We will also use the notion of the wavefront set $\operatorname{WF}_h(A)$, a subset of the fiber-radially compactified cotangent bundle. If $A$ is a pseudodifferential operator (in either of the classes discussed above), then it is pseudolocal in the sense that $\operatorname{WF}_h(A)$ is contained in the diagonal, and we then view $\operatorname{WF}_h(A)$ as a subset of $T^*M$. We will use a standard property valid for properly supported pseudodifferential operators $A$; see [DyZw, §E.2.3] for details.
For U j ⊂ T * M j and two h-tempered operators A, B : C ∞ 0 (M 2 ) → D (M 1 ), we say that Finally, we review the classes I comp (κ) of semiclassical Fourier integral operators. Here κ : U 2 → U 1 , U j ⊂ T * M j , is an exact canonical transformation (with the choice of antiderivative implicit in the notation) and elements of I comp (κ) are h-dependent families of smoothing compactly supported operators This is a version of Egorov's Theorem and follows by a direct calculation in local coordinates involving the oscillatory integral representations of B, B and the method of stationary phase; see for instance [GrSj,Theorem 10.1]. Moreover, we may choose b so that supp b ⊂ κ −1 (supp a); indeed, every term in the stationary phase expansion for b satisfies this support condition and the full symbol b may be constructed from this expansion by Borel's Theorem [Zw,Theorem 4.15]. [Zw,Theorem 10.4] for the proof. Combining this with (2.4), we see that for each a ∈ S comp (2.5)
Short Gaussian beam
In this section, we construct a Gaussian beam localized on a short segment of a Hamiltonian flow line For U ⊂ R and ρ ∈ [0, 1/2), denote by the h ρ -neighborhood of the set γ 0 (U ) (with respect to any fixed smooth distance function on T * M ). In this section, we prove the following If (x 0 ,ξ 0 ) varies in a compact subset of p −1 (E), then the constants above can be chosen independently of (x 0 ,ξ 0 ).
3.1. Model case. We start the proof of Lemma 3.1 by considering the model case. Here we write elements of $\mathbb{R}^n$ as $(x_1, x')$, with $x' \in \mathbb{R}^{n-1}$, and elements of $T^*\mathbb{R}^n$ as $(x_1, x', \xi_1, \xi')$.
Let t 0 > 0, choose a function Define the following h-dependent families of functions on R n : (3.8) It is easy to see that Moreover, the following analog of (3.2) holds: We next claim that there exist a m u , b m u , a m f ∈ S comp ρ (T * R n ) such that, with Op 0 h defined in (2.1) and γ m h ρ defined similarly to (3.1), (3.12) Indeed, take χ m ∈ C ∞ 0 (R) such that supp χ m ⊂ (−2/3, 2/3) and χ m = 1 near To check (3.10), it remains to show that each of the functions The third one follows since e − |x | 2 2h (1−χ m (|x |/h ρ )) = O(h ∞ ) L 2 (R n−1 ) as long as ρ < 1/2. The second and fourth operators are Fourier multipliers; to handle them, it suffices to calculate the semiclassical Fourier transform of u m : where ϕ m is the nonsemiclassical Fourier transform of ϕ m , which is an h-independent Schwartz function. Using the bounds and the fact that ω 2 = E + O(h) (following from (1.2)), we finish the proof of (3.10).
We next put Then (3.11) follows from the following fact, which is proved similarly to (3.10): The bound (3.12) is proved similarly to (3.10), taking is supported in (t 0 /3, 2t 0 /3) and equal to 1 near supp ψ m .
3.2. General case. We now prove Lemma 3.1. For that, we reduce to the model case of §3.1 using conjugation by Fourier integral operators.
We now put $u_0 := B' u^m$ and $f_0 := B' f^m$, with $u^m, f^m$ defined in (3.8).
Since B L 2 (R n )→L 2 (M ) = O(1), we have u 0 L 2 , f 0 L 2 ≤ C. Note that (3.6) holds for u m , f m , γ m by (3.10) and (3.12); since WF h (B ) lies inside the graph of κ −1 , we see that (3.6) holds for u 0 , f 0 , γ 0 . In particular, it will be enough to argue microlocally near γ 0 ([−t 0 , t 0 ]). The identity (3.2) follows from (3.9), (3.13), (3.15), and the following statement: (3.16) Since (3.16) is true for t = 0, it suffices to show that This in turn can be rewritten as . The estimates (3.3)-(3.5) follow from (3.10)-(3.12), if we choose a u , b u , a f such that B Op h (a m u ) = Op h (a u )B + O(h ∞ ) L 2 →L 2 , and similarly for b u , a f . To do that, it suffices to multiply (2.4) on the right by B and use (3.14). If we carry out the arguments of §3.1 with ρ replaced by some ρ ∈ (ρ, 1/2), then we have for small h and similarly for a f ; this finishes the proofs of (3.3), (3.5).
For (3.4), we additionally use that
This finishes the proof of Lemma 3.1.
Long Gaussian beam
We now construct a Gaussian beam localized on a $\sim \log(1/h)$ long trajectory of the flow $e^{tH_p}$. Recall the trajectory $\gamma$ defined in (1.5) and the associated constant $\lambda_{\max} \ge 0$ defined in (1.6).
The remaining parts of Lemma 4.1 use the following localization statement for u j , f j , proved in §4.1 (see Figure 2): bounded uniformly in j, such that, with remainders uniform in j, where C is independent of h and j and γ ε (U ) denotes the ε-neighborhood of γ(U ).
We remark that by (4.2), therefore the sets in (4.6), (4.8), and (4.10) are contained in o(1) neighborhoods of the corresponding segments of γ. Figure 2. The shaded region represents microlocal concentration of the function u from (4.4), where we put t 0 = 1 for simplicity of notation. The darker regions represent the places where the summands u j and u j+1 overlap, and the blue regions at the ends correspond to f ± .
Finally, for part 4 of Lemma 4.1, we put $b := b_u^{(N_0)}$. By (4.7), we have $\|\mathrm{Op}_h(b)u\|_{L^2} \ge C^{-1}$; this follows from (4.5) and the identity (4.13). The identity (4.13) follows from (4.6), (4.8), and the fact that there exists $\varepsilon > 0$ such that (4.14) holds. To show (4.14), we note that $\gamma(t)$ is not trapped in the forward direction, thus it is not a closed trajectory; it follows that $\gamma(t_1) \ne \gamma(t_2)$ for $t_2 \le -t_0/3 < -t_0/4 \le t_1$. It remains to show that for each $t_j \to -\infty$, $\gamma(t_j)$ cannot converge to a point in $\gamma([-t_0/4, t_0/4])$; this follows from the fact that $\gamma([-t_0/4, t_0/4])$ does not intersect the trapped set, while the backwards trapped trajectory $\gamma(t)$ converges to the trapped set as $t \to -\infty$; see for instance [Dy15, Lemma 4.1]. This finishes the proof of Lemma 4.1.
We similarly have the corresponding bound, where the last inequality follows from (4.17) with $t, v$ replaced by $t - t_0 - s$, $de^{-(t_0+s)H_p}(\gamma(t))v$. This proves the '$-$' part of (4.15).
We next construct tubular neighborhoods of segments of $\gamma$. Fix small $\delta > 0$ to be chosen later. For each $t \le t_0$, define the corresponding manifold, where $\exp^{\tilde g_\pm}_\bullet(\bullet)$ denotes the geodesic exponential map of the metric $\tilde g_\pm$. By (1.4), for $t_0$ and $\delta$ small enough the maps $\Phi_t^\pm$ are diffeomorphisms onto their images, uniformly in $t \le t_0$. Note that $\Phi_t^\pm(s, 0) = \gamma(t+s)$.
Lemma 4.4. For $\varepsilon > 0$ small enough and all admissible $(s, v)$, the conclusion below holds. Since all derivatives of $\Phi_t^\pm$ and its inverse are bounded uniformly in $t$, we deduce the existence and uniqueness of $S_\pm(s, v)$, $v_\pm(s, v)$ for $|s| \le \tfrac{3}{4}t_0$ and $|v|$ small enough. Next, note that $\partial_s S_\pm(s, 0) = 1$, $\partial_s v_\pm(s, 0) = 0$.
Next, (4.7) for $j \in [0, N_0 + 1]$ follows by induction on $j$ together with the estimate (4.24). To show (4.24), note first that $\chi$ on the right-hand side may be replaced by 1 by (4.3). By (2.5), there exists $\tilde b$ for which the last line above follows from (2.3) and the analog of (4.23) for $b_u^{(j)}$. To prove (4.24), it remains to use the norm bound (4.25). To show (4.25), we first note that $\tilde b_u$ is supported in some coordinate chart on $M$; thus it suffices to show the bound (4.26), where $\mathrm{Op}_h^0$ is defined in (2.1). The bound (4.26) follows from [Zw, Theorem 4.23(ii)]. We have proven (4.5)–(4.10) for $j \in [0, N_0 + 1]$. The case $j \in [-N_0, 0]$ is considered in the same way, using the metric $\tilde g_-$ instead of $\tilde g_+$ in the definitions of $a^{(j)}$ and replacing $e^{-it_0 P_h/h}$ by $e^{it_0 P_h/h}$, $e^{t_0 H_p}$ by $e^{-t_0 H_p}$, etc. in the proofs of (4.5), (4.7), and (4.9). The cases $j \in [0, N_0 + 1]$ and $j \in [-N_0, 0]$ produce different symbols $a_f$; however, both options satisfy (4.5)–(4.10), so we may choose either one of them. This finishes the proof of Lemma 4.2.
Proof of Theorem 1
To prove the lower norm bound (1.8), we construct families of functions $\tilde u(x; h) \in C^\infty(M)$, $\tilde f(x; h) \in C_0^\infty(M)$ such that for some $h$-independent constant $C$: (1) $\tilde u = hR_h(\omega)\tilde f$; (2) $\tilde f$ is supported inside some $h$-independent compact set; (3) $\|\tilde f\|_{L^2} \le Ch^{2\sqrt{E}\beta\nu}$; (4) $\|\chi_1 \tilde u\|_{L^2} \ge C^{-1}$ for some $h$-independent $\chi_1 \in C_0^\infty(M)$. Theorem 1 follows immediately from here; indeed, if $\chi_2 \in C_0^\infty(M)$ is such that $\tilde f = \chi_2 \tilde f$ for all $h$, then the bound follows. The function $\tilde u$ consists of two components. One of them is the long Gaussian beam $u$ constructed in Lemma 4.1; recall that $u$ is supported inside some $h$-independent compact set; its components $u$, $u_\infty^0$, $u_\infty^1$, the functions $f_+, f_-$ from (5.1), the propagated function $\tilde f_+ := e^{-iT_0(P_h - \omega^2)/h} f_+$, and the supports of the cutoffs $\chi_0, \chi_1$ are shown in Figure 3. Here $t_e = \frac{\beta}{2}\log(1/h)$ is just below the Ehrenfest time of the trajectory $\gamma$. Our construction is as follows: starting from a basic beam near $\gamma(-t_e)$, we propagate it for times in $[-t_e, t_e]$ to obtain $u$; see Figure 2. We next propagate $f_+$ forward for time $T_0$, which is large enough so that $P_h = -h^2\Delta_{g_0}$ on $\gamma([T_0, \infty))$, to obtain $u_\infty^0$. We finally apply the free resolvent to $\tilde f_+$ to obtain $u_\infty^1$.
where $f_\pm$ are also defined in Lemma 4.1. See Figure 3.
Since $\|f_-\|_{L^2} \le Ch^{2\sqrt{E}\beta\nu}$, it remains to construct a function which compensates for the $f_+$ term in (5.1). This is done by the following Lemma 5.1. There exist $h$-dependent families of functions such that, for some $h$-independent constants $C$, $C_\chi$, the estimates below hold. Proof. Since $M$ is diffeomorphic to $\mathbb{R}^n$ outside of a compact set, we may write, for $r_0 > 0$ large enough, $M = M_{r_0} \cup (\mathbb{R}^n \setminus B(0, r_0))$, where $B(0, r_0) \subset \mathbb{R}^n$ is the closed Euclidean ball of radius $r_0$ and $M_{r_0} \subset M$ is compact. We choose $r_0$ such that the potential $V$ is supported in $M_{r_0}$ and $g$ is equal to the Euclidean metric $g_0$ on $\mathbb{R}^n \setminus B(0, r_0)$; here $P_h^0$ is the semiclassical Euclidean Laplacian on $\mathbb{R}^n$: $P_h^0 = -h^2\Delta_{g_0}$. Since the trajectory $\gamma(t)$ escapes as $t \to +\infty$, there exists $T_0 > 0$ such that the trajectory lies in the Euclidean region for $t \ge T_0$. We choose cutoff functions $\chi_0, \chi_1 \in C_0^\infty(M)$ (viewing them as functions on $T^*M$ if necessary). Consider the free resolvent $R_h^0(\omega) = (P_h^0 - \omega^2)^{-1}: L^2(\mathbb{R}^n) \to L^2(\mathbb{R}^n)$, $\operatorname{Im}\omega > 0$; we continue it meromorphically to a family of operators (see [DyZw, §3.1] or [Va, §7.2]).
We now finish the construction of the functions $\tilde u, \tilde f$ and thus the proof of Theorem 1. Put $\tilde u$ accordingly. Note that, since $u \in C_0^\infty(M)$, we have by (5.12) $u = R_h(\omega)(P_h - \omega^2)u$.
It follows that $\tilde u = hR_h(\omega)\tilde f$. Also, since both $u$ and $f_\infty$ are supported in some $h$-independent compact set, so is $\tilde f$. We next use (5.1); together with part 5 of Lemma 5.1, this implies the required estimate. Combining this with part 4 of Lemma 4.1, we see that $\|\mathrm{Op}_h(b)\tilde u\|_{L^2} \ge C^{-1}$.
Since $\mathrm{Op}_h(b)$ is compactly supported in an $h$-independent set and its $L^2 \to L^2$ norm is bounded uniformly in $h$, we obtain property (4) of $\tilde u$, finishing the proof.
A Paradigm Shift: Rehabilitation Robotics, Cognitive Skills Training, and Function After Stroke
Introduction: Robot-assisted therapy for upper extremity (UE) impairments post-stroke has yielded modest gains in motor capacity and little evidence of improved UE performance during activities of daily living. A paradigm shift that embodies principles of motor learning and exercise-dependent neuroplasticity may improve robot therapy outcomes by incorporating active problem solving, salience of trained tasks, and strategies to facilitate the transfer of acquired motor skills to use of the paretic arm and hand during everyday activities. Objective: To pilot and test the feasibility of a novel therapy protocol, the Active Learning Program for Stroke (ALPS), designed to complement repetitive, robot-assisted therapy for the paretic UE. Key ALPS ingredients included training in the use of cognitive strategies (e.g., STOP, THINK, DO, CHECK) and a goal-directed home action plan (HAP) to facilitate UE self-management and skill transfer. Methods: Ten participants with moderate impairments in UE function >6 months after stroke received eighteen 1-h treatment sessions 2–3×/week over 6–8 weeks. In addition to ALPS training, individuals were randomly assigned to either robot-assisted therapy (RT) or robot therapy and task-oriented training (RT-TOT) to trial whether the inclusion of TOT reinforced participants' understanding and implementation of ALPS strategies. Results: Statistically significant group differences were found for the upper limb subtest of the Fugl-Meyer Assessment (FMA-UE) at discharge and one-month follow-up favoring the RT group. Analyses to examine overall effects of the ALPS protocol in addition to RT and RT-TOT showed significant and moderate to large effects on the FMA-UE, Motor Activity Log, Wolf Motor Function Test, and hand portion of the Stroke Impact Scale. Conclusion: The ALPS protocol was the first to extend cognitive strategy training to robot-assisted therapy.
The intervention in this development of concept pilot trial was feasible and well-tolerated, with good potential to optimize paretic UE performance following robot-assisted therapy.
INTRODUCTION
Rehabilitation efforts to optimize motor function, activity performance and participation after stroke require an understanding of factors that contribute to stroke recovery and an intervention approach focused on the individual's goals and desire to re-engage in valued life roles. Despite recent advances in acute medical interventions to reduce the impact of stroke, residual upper extremity (UE) motor deficits persist long term in up to 65% of stroke survivors, contributing to a loss of independence in activities of daily living and negatively impacting quality of life (1). To advance rehabilitative practice and facilitate satisfaction and participation after stroke, improved methods are needed to optimize the recovery of motor function for home and community activities.
Evidence of neural recovery following highly intensive therapy and the high cost of health care have driven the development of rehabilitation robots to treat motor impairments after stroke. Rehabilitation robots have provided researchers and clinicians with new treatment options to improve UE motor capacity and performance after stroke. The number of robot-assisted therapy trials to address UE function has grown significantly over the past 20 years. Previous studies have shown robot-assisted therapy to be as effective as repetitive task-specific training at increasing motor capacity, as measured by standard assessments in clinical settings (2,3). While systematic reviews of robot-assisted therapies confirm gains in motor capacity after stroke, they provide little evidence for the transfer of trained motor skills to paretic UE performance during activities of daily living (4,5). This disparity between improved UE motor capacity (i.e., what a person can do in a standardized, controlled setting) and daily use of the paretic arm and hand is a significant clinical issue (6) and critical barrier to the integration of robotic technology into clinical practice. These findings may be attributed to the limited development of rehabilitation robots that specifically train voluntary control of finger flexion and extension of the paretic hand, and a primary focus on intensity of practice with little regard for other principles of motor learning and experience-dependent neuroplasticity (7,8). These principles, including the salience of training tasks, transfer of acquired skills to similar activities, and active engagement and problem solving, are key to task-oriented training paradigms in stroke but have not been well-integrated into robot-assisted therapy protocols.
Recent studies on the use of active problem solving and guided discovery to facilitate skill acquisition during task-oriented training have demonstrated transfer to untrained tasks (9) and significant improvements on measures of UE motor capacity and performance after stroke (10). While these treatment components are instrumental to the transfer of motor skills acquired during task-oriented training, they previously have been absent in robot-assisted therapy trials.
Objectives
The primary aim of this pilot study was to develop and refine a theory-based stroke therapy protocol, the Active Learning Program for Stroke (ALPS), to facilitate the transfer of robot-trained UE motor skills to functional use of the paretic arm and hand during everyday activities. The secondary aim was to examine effects of ALPS training combined with either robot-assisted therapy or robot therapy + task-oriented training. We hypothesized that the intervention would be feasible and well-tolerated by participants and would yield positive outcomes on standard measures of paretic UE motor capacity and performance across domains of the International Classification of Functioning, Disability and Health (ICF) (11). This study has potential for improving the effectiveness of robot-assisted therapy by facilitating UE self-management and specifically addressing the transfer of acquired skills (e.g., UE motor capacity) to the performance of UE tasks during activities of daily living. The ALPS protocol is relevant to clinical practice because it provides clinicians with a structured, client-centered motor learning approach to optimize use of the paretic arm and hand.
Active Learning Program for Stroke (ALPS): Conceptual Framework and Application
The ALPS protocol is based upon principles of experience-dependent neuroplasticity as described by Kleim and Jones (7); empirical evidence from UE motor learning and task-oriented training programs for individuals with stroke (8,12); and a conceptual framework for integrating skill, capacity and motivation as described in multiple publications by Winstein et al. (12)(13)(14). While principles of repetition, intensity, and specificity of training are active ingredients of robot-assisted therapy protocols to improve motor capacity, other motor learning principles, such as salience and transference, have not been well-infused into prior robot training programs. The ALPS protocol incorporates these principles during robot-assisted therapy sessions, and they are an integral component of each participant's home action plan (HAP) aimed to facilitate UE performance in the home and community. Examples of learning principles are highlighted in Table 1.
The ALPS protocol involves instructions to engage in active problem solving, activity analysis and use of general cognitive strategies (e.g., STOP, THINK, DO, CHECK), modeled after the Cognitive Orientation to daily Occupational Performance (CO-OP) (15), during paretic UE tasks. We purposely altered our strategy approach from that used in CO-OP because we found that individuals typically don't explicitly establish goals for performance prior to activity engagement. Rather, when they run into challenges while attempting to use their paretic UE functionally, they benefit from cues to stop and identify factors impeding performance. Examples of general and domain-specific movement strategies are shown in Appendix A.
In conjunction with cognitive strategy training, individuals are provided with a HAP to encourage the application of ALPS principles and use of the paretic UE when engaged in everyday activities in the home and community. Participants identify specific, achievable tasks for their HAP based on personal interests. The clinician may use scores from the upper limb subtest of the Fugl-Meyer Assessment (FMA-UE) (16,17) when providing input to select appropriate tasks based on the participant's current level of function. Due to this participant-centered approach, there are no core tasks included in every HAP; however, similarities do occur across individuals. Participants identify 3-5 UE tasks to be completed daily at home and are taught general and specific ALPS strategies that may facilitate performance. Participants are encouraged to engage in HAP tasks for at least 30 min each day.
Study Design
While the primary aim was to develop and refine the ALPS protocol for use with robot-assisted therapy, we were also interested in learning whether the inclusion of both robot-assisted therapy and task-oriented training during treatment sessions reinforced participants' understanding and implementation of ALPS strategies. This single-blind randomized control pilot study examined effects of the ALPS protocol combined with robot-assisted therapy alone, or robot-assisted therapy plus task-oriented training, as described below. The clinical evaluator was blinded to group assignment and research hypotheses (Figure 1).
Recruitment
Individuals between the ages of 18-82 years and diagnosed with stroke more than 6 months prior to study enrollment were recruited for this study. Informational flyers were provided to attending physicians, outpatient therapists and stroke survivors who previously had given permission to be contacted about research opportunities at Spaulding Rehabilitation Hospital, Boston MA. Inclusion criteria were: moderate UE hemiparesis with an initial score on the upper limb subtest of the Fugl-Meyer Assessment (FMA-UE) between 21 and 50 (out of 66) (18); and intact cognitive function to understand and actively engage in the ALPS protocol as measured by a Montreal Cognitive Assessment score of ≥26/30 (19) during the initial evaluation visit. Exclusion criteria were: more than moderate impairments in paretic UE sensation, passive range of motion, and pain as assessed with the Fugl-Meyer Assessment (18); increased muscle tone as indicated by a score of ≥3 on the Modified Ashworth Scale (20); hemispatial neglect or visual field loss measured by the symbol cancellation subtest on the Cognitive Linguistic Quick Test (21); and aphasia sufficient to limit comprehension and completion of the treatment protocol. Participants could not be enrolled in other UE therapy or research during the study period or present with contraindications for robot-assisted therapy, including recent fracture or skin lesion of the paretic UE.
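To make the numeric screening thresholds above concrete, they can be collected into a single check. This is a hypothetical helper written for illustration only (it is not part of the study protocol, and clinical exclusions such as neglect, aphasia, and robot contraindications are omitted):

```python
def eligible(age, months_post_stroke, fma_ue, moca, ashworth_max):
    """Illustrative eligibility screen against the trial's numeric criteria.

    age: years; months_post_stroke: time since stroke onset;
    fma_ue: upper limb Fugl-Meyer score (/66); moca: Montreal Cognitive
    Assessment (/30); ashworth_max: highest Modified Ashworth Scale score.
    """
    return (18 <= age <= 82
            and months_post_stroke > 6      # chronic stroke (>6 months)
            and 21 <= fma_ue <= 50          # moderate UE hemiparesis
            and moca >= 26                  # intact cognition
            and ashworth_max < 3)           # exclude Modified Ashworth >= 3
```

For example, a 60-year-old participant 12 months post-stroke with FMA-UE 35, MoCA 28, and Ashworth 1 would pass this screen, while an FMA-UE of 55 would not.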
The study protocol was reviewed and approved by the Partners Human Research Committee, the Institutional Review Board for Partners HealthCare, and registered at https://clinicaltrials.gov (NCT02747433). All participants provided written informed consent in accordance with the Declaration of Helsinki.
Intervention
All enrolled participants were administered the ALPS protocol and were randomly assigned to one of two treatment groups: (1) Robot-Assisted Therapy (ALPS + RT) or (2) Robot-Assisted Therapy + Task-Oriented Training (ALPS + RT-TOT).
The Armeo®Spring is a passive exoskeletal spring suspension system that provides repetitive practice of virtual goal-directed reaching tasks for the paretic UE. A distal sensor that detects grip pressure allows the grasp and release of virtual objects during computer-generated games. The amount of gravity assistance and virtual task demands are selected by the clinician to provide challenging yet achievable movement therapy.
During the first treatment session, the Armeo®Spring was adjusted for the participant's arm size and required angle of suspension (~45° shoulder flexion, 25° elbow flexion) and the workspace was measured via standard device operation procedures. The versatility of the Armeo®Spring system allowed repetitive practice of single degree-of-freedom motions (e.g., elbow flexion/extension, supination/pronation) as well as multiple degree-of-freedom training for the paretic shoulder, elbow, forearm, wrist, and hand.
The Amadeo™ robotic system provides position-controlled exercises during computerized games that emphasize grasp and release of the paretic hand. Participants were seated comfortably with the paretic forearm and wrist strapped to an adjustable support attached to the robot device with the wrist in approximately neutral position. A small magnetic disc was secured to the distal phalanx of each digit for connection to the robotically controlled slide that guides movement. Each 1-h session included visually evoked games that provided active-assistive training of collective and individual flexion and extension of the digits, isometric flexion/extension contractions, and continuous passive motion with visual feedback to rest and relax digits when fatigue or increased muscle tone began to impact motor performance.
All participants received 1-h sessions, 2-3×/week for 6-8 weeks (total 18 sessions), divided into two 9-session treatment blocks. The two treatment blocks were given in order, with all participants receiving proximal training via Armeo®Spring during the first block followed by Amadeo™ distal training during the second block. All training sessions for one treatment block were completed before proceeding to the next. The robot training sessions provided highly repetitive movement training, and the robot training time completed during each session was recorded. Rest periods were offered between computer-generated games, as needed. Task challenge for each training device was incrementally increased or decreased based on participant performance.
Task-Oriented Training (TOT)
Participants randomized to the robot and task-oriented training (RT-TOT) group received therapist-guided task-oriented training in addition to RT during 20-30 min of each 1-h treatment session. The participant's baseline performance on the FMA was reviewed, and the FMA keyform and patient-targeted treatment activities outlined by Woodbury et al. (17) aided the selection of UE tasks with greatest potential for improvement during TOT. While we tracked the number of repetitions performed and/or time that participants engaged in continuous motions (e.g., wiping table) the actual dose of TOT differed among participants, based on their activity tolerance and level of function. We attempted to control for this difference by assuring that the overall treatment dose (duration and frequency of therapy sessions) was comparable across RT and RT-TOT groups.
ALPS Protocol
Participants randomly assigned to both intervention groups (RT and RT-TOT) received ALPS cognitive strategy training (e.g., STOP, THINK, DO, CHECK), as described above, during each treatment session. The UE training during RT and RT-TOT reinforced the importance of repetitive practice to optimize motor capacity and performance. Guided discovery during RT facilitated participant understanding of how robot-trained motor skills could generalize to everyday tasks. Individuals randomized to the RT-TOT group also engaged in dynamic performance analysis to identify breakdowns in task completion and attempt solutions during "real-life" activities, such as retrieving objects from the fridge (15). Clinician feedback encouraged self-assessment and knowledge of performance, and participants were motivated to explore ways to use their paretic UE better for HAP tasks. Level of engagement, strategy use, achievements, and concerns regarding the completion of the HAP were reviewed at each session. Participants engaged in active problem solving to identify specific strategies to facilitate success by modifying motor actions (e.g., changing body position, assisting with the less affected UE) or activity demands. The HAP was updated weekly to include new everyday activities and strategies to optimize performance and transfer of motor skills trained during robot therapy.
Outcomes
Clinical assessments were administered at baseline, discharge (<1 week after intervention), and at a 1-month follow-up visit. Evaluation sessions lasted ~1.5 to 2 h, and the standardized measures listed in Table 2 were administered. All are reliable and valid measures of UE motor function, activity performance and participation for individuals post-stroke.
Statistical Analysis
We first performed non-parametric Mann Whitney U tests to examine effects of ALPS training combined with RT vs. RT-TOT from admission to discharge, and from admission to the 1-month follow-up assessment. To determine whether the addition of ALPS training to RT and RT-TOT resulted in significant gains on measures across ICF domains, raw scores from both groups were combined and Friedman tests examined whether changes in performance at these three time points were significant. Post-hoc analyses with Wilcoxon signed-rank tests were then used to examine pairwise differences between time points.
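To make the two rank-based statistics concrete, they can be sketched in plain Python. This is illustrative only: the sample data are hypothetical, and the study's p-values would come from the corresponding reference distributions (e.g., via statistical software) rather than from these raw statistics.

```python
def mann_whitney_u(a, b):
    """U statistic for group a vs. b: count of (a, b) pairs where the
    a-value exceeds the b-value, with ties counting one half."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in a for y in b)

def rank_row(row):
    """Average 1-based ranks of one subject's scores across time points."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(row):
        j = i
        while j + 1 < len(row) and row[order[j + 1]] == row[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # shared (average) rank for the tie
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def friedman_chi2(scores):
    """Friedman chi-square for n subjects x k repeated measures
    (rows = subjects, columns = time points)."""
    n, k = len(scores), len(scores[0])
    col_sums = [0.0] * k
    for row in scores:
        for j, r in enumerate(rank_row(row)):
            col_sums[j] += r
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in col_sums) \
        - 3 * n * (k + 1)
```

For instance, if every one of five subjects improves monotonically across baseline, discharge, and follow-up, the Friedman statistic reaches its k = 3 maximum of 2n = 10, consistent with a significant overall time effect.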
RESULTS
Ten individuals (53.19 ± 19.83 years of age) more than 6 months post-stroke onset participated in this study between July 2016 and November 2018. Participant characteristics for each group are reported in Table 3. Group differences in baseline demographics and FMA-UE scores were non-significant.
The ALPS protocol was feasible and well-tolerated, as participants (n = 10) completed all assessment and intervention sessions, described use of ALPS cognitive strategies during their HAPs, and reported high satisfaction with the therapy process.
Mann Whitney U tests revealed statistically significant between-group differences in FMA-UE gains from admission to discharge (Z = −2.32, p = 0.02) and from admission to the 1-month follow-up assessment (Z = −2.64, p = 0.008), with the RT group outperforming those who received RT-TOT. No between-group differences were found for the remaining clinical outcome measures following intervention. Friedman tests and post-hoc Wilcoxon analyses to evaluate effects of the ALPS protocol in addition to RT and RT-TOT (n = 10) revealed statistically significant improvements at discharge and follow-up for the FMA-UE, WMFT, MAL (AOU and HW scales), and the hand portion of the SIS (see Table 4).
Wilcoxon post-hoc tests of participant ratings on the Confidence in Arm and Hand Movement (CAHM) scale indicated that confidence in use of the paretic UE for a variety of functional activities (e.g., cutting food with a knife and fork or performing tasks in public) trended upward at the one-month follow-up visit, with admission to follow-up results reaching statistical significance (p = 0.037). Moderate to large Cohen's d effect sizes for these measures are reported in Table 5.
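For reference, one common convention for computing the Cohen's d effect sizes reported above uses a pooled-SD denominator for pre/post scores. The paper does not specify its exact formula, so the choice of denominator here is an assumption:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(pre, post):
    """Effect size: mean change divided by the pooled standard deviation.

    NOTE: the pooled-SD denominator is one common convention, assumed here;
    other paired-data conventions (e.g., dividing by the SD of the
    difference scores) also exist and give different values.
    """
    s_pooled = sqrt((stdev(pre) ** 2 + stdev(post) ** 2) / 2)
    return (mean(post) - mean(pre)) / s_pooled
```

By the usual benchmarks (0.2 small, 0.5 medium, ≥0.8 large), a d of 2.0 from, say, `cohens_d([1, 2, 3], [3, 4, 5])` would count as a large effect.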
DISCUSSION
The clinical acceptance and widespread use of rehabilitation robots for UE therapy post-stroke has been limited, in part, by the lack of empirical evidence for its impact on UE performance and engagement in meaningful activities of daily living (4,5). This development of concept pilot trial (25) is the first to test an ALPS that shifts robot-assisted therapy away from an impairment-focused intervention to one aimed to facilitate the transfer of robot-trained motor skills to functional use of the paretic arm and hand after stroke. This new paradigm is based upon principles of experience-dependent neuroplasticity (7) and cognitive strategy training (15), and embraces the distinct strengths of robot-assisted technology and clinician-driven interventions. The rehabilitation robots deliver a higher dose of repetitive task-specific training than is possible in conventional rehabilitation settings, while the clinician empowers participants with a step-by-step problem-solving approach to facilitate use of trained motor skills during meaningful everyday activities, thereby adding salience and transference to the rehabilitation process. The Mann Whitney U group analyses revealed statistically and clinically significant improvements in motor capacity, as measured by the FMA-UE, with the ALPS + RT group improving more than those who received a combination of ALPS + RT-TOT. Participants in the ALPS + RT group received on average a total of 524.0 min of Armeo®Spring and Amadeo™ training during the study protocol, as compared to 303.0 min in the ALPS + RT-TOT group. Although individuals randomized to the RT-TOT group also received repetitive task-oriented training during 20-30 min of each treatment session, it was not possible to achieve as many movement repetitions during this time due to the nature of the training, which was focused on guided discovery and problem solving during challenging, yet achievable UE tasks. The number of repetitions, choice of discrete vs.
continuous tasks (e.g., reaching vs. stirring), and practice of unilateral and bilateral tasks during task-oriented training was individualized, based on the participant's UE motor capacity and target of intervention. Therefore, it is likely that individuals in the ALPS + RT group completed more movement repetitions than those in the RT-TOT group, which may have contributed to greater improvement in UE motor capacity, as measured by the FMA-UE. Whyte et al. (26,27) have developed the Rehabilitation Treatment Specification System to specify and study the effects of rehabilitation treatments and uncover the "black box" of rehabilitation. This framework is useful for describing the treatment outcomes or targets as well as the many treatment ingredients that comprise a given intervention and their potential mechanisms of action. The primary target for most robot-assisted therapy studies has been a reduction in motor impairment, with less attention to measuring gains in functional use of the paretic arm and hand during everyday activities. A missing element in much of this research is the examination of what treatment ingredients other than the number of repetitions delivered (e.g., type of human-machine interface, instructions, motor skills practiced by robot therapy games) are integral to the intervention protocol, and how they contribute to changes in performance. An intervention study that compared effects of Amadeo™ robot-assisted therapy to conventional hand training by an occupational therapist revealed significantly greater improvements on neurophysiological measures of cortical plasticity and interhemispheric inhibition in the Amadeo™ group that paralleled gains in clinical outcome scores (28).
Controlled studies such as this are essential to our understanding of the relationship between treatment ingredients delivered by these different forms of hand training and potential mechanisms of action that contribute to observed changes on standardized clinical assessments and in functional use of the paretic arm and hand after stroke. The recently published RATULS randomized controlled study of more than 700 stroke participants who received robot-assisted therapy, enhanced upper limb training (EULT) by a rehabilitation clinician, or usual care reported that the intensive training interventions (robot therapy and EULT) did not significantly improve its targeted outcome, UE function as measured by the Action Research Arm Test (ARAT) (29). In addition, the small gains that were observed in UE function did not transfer to activities of daily living. These findings, and similar reports from systematic reviews of robot-assisted therapy (4,5), indicate that greater attention is warranted to treatment ingredients other than repetition. While rehabilitation robots are highly capable of repetitive movement training, it is apparent that robot-assisted therapy alone is not sufficient for optimizing UE activity engagement and participation in persons with UE motor impairments after stroke. In the current ALPS protocol, treatment ingredients to specifically enhance the transfer of robot-trained motor skills included instruction in cognitive strategies to enhance problem solving during UE activities and a HAP to encourage carry-over of robot-trained motor skills to daily activities in the home and community. While the ALPS pilot was not designed to differentiate the effects of these treatment ingredients, the statistically significant gains and medium to large effect sizes for outcomes across ICF domains, coupled with clinically significant improvements in FMA-UE scores at follow-up (n = 10, mean = 7.3/66 points), are promising.
They far exceed gains reported in the 36-session RATULS study (adjusted mean FMA-UE difference of 2.79/66 points between robot and usual care groups at 3 months) and in systematic reviews of robot-therapy outcomes (5,29). The present findings align with assertions by Valero-Cuevas et al. (30) that changes in performance are multidimensional and cannot be measured by a single primary outcome, such as the Fugl-Meyer Assessment or ARAT.
A systematic review of UE rehabilitation methods after stroke (31) emphasized the importance of tailoring evidence-based treatments to the needs of the individual. Each component of the ALPS protocol (robot therapy, cognitive strategies, and HAP) was individualized, based on the participant's level of UE functioning and identified task goals. The HAPs provided to ALPS participants were tailored to their individual interests and contexts, and were based upon prior research on the effectiveness of cognitive strategy training for individuals post-stroke (10,32). While adherence to daily HAP completion varied among participants, semi-structured interviews administered more than 6 months post-ALPS training revealed that the HAPs were a separate, yet valued ingredient of the intervention. Participants applied ALPS strategies (e.g., STOP, THINK, DO, CHECK) to problem-solve challenges encountered during everyday tasks. Those with greater distal function at baseline were more likely to follow through with HAP activities for the paretic arm and hand and reported greater ability to independently apply problem solving strategies during HAP activities. Participants who did not consistently complete HAP activities suggested ways to improve adherence, including discussions to better manage fatigue, time management, and potential benefits of a computer or mobile application to improve ease of reporting. Thematic analysis of post-intervention interviews has begun, and the initial results have contributed to our understanding of the treatment ingredients most beneficial to past participants. Many reported continued use of the ALPS strategies more than 1-year post-intervention and viewed each treatment component as essential to improving use of their paretic arm and hand during daily activities. Participant input has been used to refine the intervention manual prepared for our next ALPS trial.
Limitations of this research, including its small sample size and variable daily adherence to the HAP across participants, suggest caution when interpreting study outcomes. The inclusion criteria limited our participant sample to individuals with moderate upper extremity impairments as measured by the Fugl-Meyer Assessment (inclusion range 21-50/66 points); therefore, generalization of findings to individuals with milder or more severe impairments is limited. Also, our participants were individuals more than 6 months post-stroke onset, and many had developed learned non-use of the paretic arm and hand during this time. Earlier training and implementation of ALPS strategies during acute and subacute phases of recovery may facilitate greater ease of transfer and adherence to HAP activities.
CONCLUSIONS
The novel Active Learning Protocol for Stroke (ALPS) has the potential to shift current research paradigms for intensive robot-assisted therapy by training stroke participants to engage in self-analysis and active problem solving to better utilize recovered UE motor skills during daily living tasks. This innovative project is the first to extend this cognitive strategy and motor learning approach to robot-assisted therapy for persons with moderate UE impairments after stroke: individuals who may not qualify for task-oriented training protocols. The ALPS protocol and client-centered HAP are derived from principles of experience-dependent neuroplasticity (7), motor learning strategies applied to task-oriented training (8,12) and the Cognitive Orientation to daily Occupational Performance (15). Although this initial pilot study to develop and test the ALPS protocol was well-tolerated and produced significant gains in paretic UE capacity and performance, we are in the process of refining and formalizing the intervention protocol in preparation for a larger confirmatory trial.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Partners Human Research Committee (PHRC), the IRB for Partners Healthcare in Boston, MA. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
SF developed the Active Learning Program for Stroke research protocol, obtained IRB approval, administered the intervention, and was primarily responsible for data analysis, interpretation, and writing of the manuscript. CA-D was a blinded evaluator, assisted with data analysis, interpretation of results, and editing of the manuscript.
Comparing the occupants' comfort between perimeter zone and interior zone in Asian office
Asian office buildings receive plenty of heat and daylight because of their glazing facades, which also allow the occupants to view the outside. These effects depend on the seat position at different distances from the window. A study of building performance regarding these different effects is required to clarify the occupants' comfort under contextual conditions. Therefore, this study aims to compare the impact of glazing facades on occupants' comfort between occupants in the perimeter zone and the interior zone by analyzing building performance in terms of thermal comfort and visual comfort. Measuring devices were installed to investigate temperature, humidity, and daylight in three office buildings in Thailand and three in Singapore. Simultaneously, occupants' satisfaction was investigated using questionnaires during working hours. In total, 1,356 samples were surveyed. The questions addressed both the thermal environment and the visual environment in terms of sensation and satisfaction. Furthermore, the opening view and the internal blind occlusion rate were noted by visual inspection. The results showed that the thermal and visual environments in the perimeter zone were affected by the outdoor environment more than those in the interior zone. However, most of the occupants were satisfied because they were able to adapt to a wide range of indoor environment conditions. The occupants in the perimeter zone were more satisfied in terms of temperature and view. On the contrary, occupants in the interior zone were more satisfied with the lighting environment. The dissatisfaction survey revealed that the thermal environment has the most influence on occupants' comfort; however, daylight access was revealed to have the highest impact on occupants' comfort in terms of the building-facade effect. The results show that occupants' comfort levels differed depending on the seat position in the current situation for Asian office buildings.
The optimisation of building-facade performance considering its influence on occupants’ comfort is necessary.
Green office building in Southeast Asia
Green buildings have been gaining popularity worldwide, especially in Southeast Asia, and the number of new buildings is predicted to increase because of economic growth. Most of these buildings are anticipated to obtain green building certification for contributing positively to the health and wellbeing of the occupants in terms of occupants' comfort. However, this is difficult to achieve. Because office buildings usually have facades made of glass, the associated transparency results in a large amount of heat gain and daylight entering. These two factors, corresponding to the transmitted solar radiation, affect the occupants' comfort, especially for a person seated in the perimeter zone. The area within approximately 5 meters of the building facade should be investigated separately [1], as it experiences the largest fluctuations in both thermal and visual comfort. The thermal comfort index is mentioned in the green-building rating systems based on ASHRAE Standard 55, although only a few visual comfort criteria are specified. However, the effects of heat and light from solar radiation are experienced simultaneously by the occupants, and thus it is necessary to consider both aspects when adopting steps to improve the occupants' comfort.
Building facades' effect
The criteria for window function have been stated as follows: daylight, ventilation, sunlight, and view [2]. Based on the case study buildings, office buildings usually have facades made of glass. The associated transparency of these buildings can result in a large amount of solar heat gain, daylight, and an exterior view. Therefore, the impact of building facades on the indoor environment was studied in terms of temperature, lighting, and view.
The building facade is one of the most important elements and has a significant influence on comfort conditions and environmental impact [1]. Occupants situated near windows often experience thermal discomfort because of radiant temperature asymmetry and increased operative temperature [3]. In addition, solar radiation falling directly on the occupant can exacerbate visual discomfort [1]. The occupants in the perimeter zone experience stronger fluctuations of the indoor environment than those in the interior zone because of negative facade effects. Therefore, the satisfaction of perimeter occupants should be treated separately from that of interior occupants to improve comfort conditions in both areas.
Internal blind usage
When a building is occupied, the occupants usually install blinds as internal shading devices to reduce the strong effect from the glass facade and allow themselves to maintain their own comfort [4]. The adjustment of internal blinds can result in different indoor environment conditions. Glass facades that are continually obstructed permit no daylight use [5] and can also reduce the mean radiant temperature [6]. Thus, it has been shown that the way users interact with internal blinds can influence building performance. However, the occupants' comfort in actual situations has not yet been studied. There is a consensus that disregarding the influence of user behavior in buildings may result in erroneous estimates, because users play a key role in the performance of buildings. As the trend toward highly glazed facades in office buildings keeps growing, it is important to study building performance and occupants' behavior considering their influence on the occupants' comfort.
Case study buildings
Six air-conditioned office buildings were selected. The specifications of the three buildings (Offices A, B, and C) located in Singapore and the other three (Offices D, E, and F) located in Thailand are listed in Table 1. On-site measurements were performed over 3-5 days during working hours with the collaboration of the occupants to investigate the indoor environment and occupants' comfort. The solar heat gain coefficient (SHGC) of the glazing facades was taken as the optical property governing the thermal environment and thermal comfort, while the visible light transmittance (VLT) governs the visual environment and visual comfort. Internal blind lowering/closing influences the indoor environment and occupants' comfort; the blind's position and specification strongly affect building-facade performance, since the glazing facade combined with internal blinds acts as an effective layer inside the building. The users' interaction with internal blinds was evaluated by visual inspection in the case study buildings at 11:00 a.m. and 3:00 p.m. The blind occlusion level was segmented into 10 percentage steps from 0% (fully open) to 100% (fully closed). Moreover, the rate of facing the opening-view area at different seat positions was measured individually in terms of projected area factors in six directions (up-down, front-back, left-right), using a 360° spherical camera with the images processed in SPCONV software. When a building is occupied, the occupants usually install blinds as internal shading devices, so the actual facade performance differs from the design-phase performance depending on the occlusion of the internal blinds. The integrated performance of glass with an internal blind was therefore inspected to clarify its actual performance in the holistic facade system.
The SHGC was determined by using WINDOW 7.4 software to simulate the glass layers covered with an internal blind layer. Meanwhile, the VLT was determined from on-site measurement by installing two pyranometers and two illuminance sensors inside and outside the building at the same level, to find the difference in solar radiation and daylight between outdoors and indoors. Moreover, thermocouples were installed to measure the interior surface temperatures of the glass and the internal blind.
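As a rough illustration of how the VLT of the combined glass-plus-blind layer can be derived from such paired measurements, the sketch below takes the mean ratio of indoor to outdoor illuminance. The readings are invented for illustration; the study's actual sensors and sampling are not reproduced here.

```python
# Estimate visible light transmittance (VLT) of the facade system from
# paired indoor/outdoor illuminance readings taken at the same level.
# The ratio E_in / E_out approximates the transmittance of the glass
# plus any lowered internal blind. Readings below are hypothetical.

def estimate_vlt(indoor_lux, outdoor_lux):
    """Mean ratio of paired indoor/outdoor illuminance samples."""
    pairs = [(i, o) for i, o in zip(indoor_lux, outdoor_lux) if o > 0]
    if not pairs:
        raise ValueError("no valid samples")
    return sum(i / o for i, o in pairs) / len(pairs)

# Example: blinds lowered, so only a small fraction of daylight enters.
vlt = estimate_vlt([800.0, 950.0, 700.0], [20000.0, 25000.0, 17500.0])
```

The same ratio computed with and without blinds lowered would show the VLT reduction attributed to blind covering later in the paper.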
Indoor environment measurement
To understand the indoor environment, a set of measuring devices comprising a T&D TR-74, a TR-52, and a globe ball was installed in the perimeter zone and interior zone at working-partition level to record air temperature, globe temperature, humidity, and work plane illuminance all day long. The mean radiant temperature (MRT), which governs the radiative heat exchange between a seated person and the surrounding enclosure, was determined from the interior surface temperatures and projected area factors. A thermography camera evaluated the surface temperatures, while the projected area factors in six directions (up-down, front-back, left-right) were measured by a 360° spherical camera, with the results processed using SPCONV software. These factors were used not only for the MRT calculation but also for the view analysis. A projected area factor of 1 represents the entire environment facing that seat position.
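The MRT calculation described above can be sketched as follows. The angle-factor weighting of absolute surface temperatures is the standard form used in thermal comfort work; the surface temperatures and factors below are illustrative values, not the study's measured data.

```python
# Mean radiant temperature (MRT) of a seated occupant from surrounding
# surface temperatures (degC) weighted by projected area (angle) factors,
# such as those measured with the 360-degree spherical camera. Factors
# are assumed to sum to 1; numbers below are illustrative only.

def mean_radiant_temp(surfaces):
    """surfaces: list of (projected_area_factor, surface_temp_C)."""
    total_f = sum(f for f, _ in surfaces)
    if abs(total_f - 1.0) > 1e-6:
        raise ValueError("projected area factors must sum to 1")
    fourth_power_mean = sum(f * (t + 273.15) ** 4 for f, t in surfaces)
    return fourth_power_mean ** 0.25 - 273.15

# Window area with lowered blind (warmer) plus the remaining enclosure.
mrt = mean_radiant_temp([(0.32, 26.0), (0.68, 23.0)])
```

Because the weighting is on the fourth power of absolute temperature, a warm facade surface facing a large projected area raises the MRT noticeably even when the air temperature is unchanged.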
A daylight glare index, the daylight glare probability (DGP), was evaluated in daylit spaces using Evalglare software with high-dynamic-range (HDR) photographs taken by a digital single-lens reflex (DSLR) camera at 11:00 a.m. and again at 3:00 p.m. The camera was set at the eye level of a seated person, around 1.20 meters. The appropriate range of the DGP is 0.35-0.40 [7].
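For readers unfamiliar with the metric, the sketch below implements the published Wienold-Christoffersen DGP formula that Evalglare reports, combining vertical eye illuminance with a sum over glare sources. The illuminance, source luminance, solid angle, and position index values are invented for illustration; real inputs come from the HDR image analysis.

```python
import math

# Daylight glare probability (Wienold & Christoffersen):
#   DGP = 5.87e-5 * Ev
#       + 9.18e-2 * log10(1 + sum_i Ls_i^2 * omega_i / (Ev^1.87 * P_i^2))
#       + 0.16
# Ev: vertical eye illuminance (lux); per source: luminance Ls (cd/m^2),
# solid angle omega (sr), Guth position index P. Inputs are illustrative.

def dgp(ev, sources):
    s = sum(ls ** 2 * omega / (ev ** 1.87 * p ** 2)
            for ls, omega, p in sources)
    return 5.87e-5 * ev + 9.18e-2 * math.log10(1.0 + s) + 0.16

# Dim daylit interior with one weak glare source -> low DGP, consistent
# with the low values reported for these blind-covered offices.
value = dgp(500.0, [(2000.0, 0.01, 1.2)])
```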
Occupants satisfaction
Paper questionnaires were given to 1,356 samples in the perimeter zone and interior zone. The data survey was divided into subjective and objective variables. The objective variables measured include gender, age group, clothing, and various types of behavior inside the workspace environment. The subjective variables measured are satisfaction levels. The occupants were asked to answer or rate the level for the same questions at 11:00 a.m. and then again at 3:00 p.m. Occupants' comfort was measured in terms of sensation and satisfaction for temperature, lighting, and exterior view. The satisfaction questions used a five-point semantic differential scale with endpoints "dissatisfied" and "satisfied." For the purposes of comparison, it was assumed that the scale was roughly linear, and ordinal values were assigned to each point along the scale, from −2 (dissatisfied) to +2 (satisfied), with 0 as the moderate midpoint. In the event that respondents indicated dissatisfaction with a survey topic, they were given sensation questions rating the level of the surrounding environment on a nine-point semantic differential scale with endpoints of extreme conditions, such as "extremely dark" and "extremely bright" for the lighting environment.
This study focused on occupant satisfaction with thermal comfort, lighting comfort, and view comfort. In a given building, the satisfaction score was derived from the mean of all occupants' votes on satisfaction questions in that category. Similarly, the mean satisfaction scores in each zoning of the building layout were computed through a "one zoning, one vote" method to give zonings of various occupant population numbers equal weight in the analysis.
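The "one zoning, one vote" aggregation described above can be sketched as follows: votes are first averaged within each zoning of the layout, and the zoning means are then averaged, so a zone with many occupants does not dominate. The vote data are invented, and the two-stage averaging is our reading of the stated method.

```python
# "One zoning, one vote": average the -2..+2 satisfaction votes within
# each zoning, then average the zoning means. Vote data are illustrative.

def zoning_mean(votes_by_zone):
    """votes_by_zone: dict mapping zone name -> list of votes (-2..+2)."""
    zone_means = [sum(v) / len(v) for v in votes_by_zone.values() if v]
    return sum(zone_means) / len(zone_means)

# Six perimeter votes vs. two interior votes: each zone counts once.
votes = {"perimeter": [2, 1, 1, 0, 2, 1], "interior": [0, 1]}
score = zoning_mean(votes)
```

Note that a simple pooled mean over all eight votes would differ, because it would weight the better-populated perimeter zone more heavily.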
Internal blind usage
The inspection results showed that most of the internal blinds were closed all the time during the investigation. The average blind occlusion rates of Offices A, B, C, D, E, and F were 72.57%, 80.75%, 85.61%, 72.75%, 76.36%, and 92.71%, respectively. Orientation is one of the most influential factors for user interaction with internal blinds; different orientations receive different rates of direct solar radiation, and this can influence the way in which users adapt to their surroundings by opening or closing the blinds. The average blind occlusion rate for the six buildings was 81.73%. The occlusion rates in the east, northwest, west, southwest, south, southeast, northeast, and north orientations were 87.56%, 85.33%, 80.06%, 75.55%, 75.01%, 72.86%, 70.12%, and 66.77%, respectively. The east-oriented and west-oriented offices had high internal blind occlusion rates, reflecting the strong effect of direct solar radiation. The reasons for closing the blinds were evaluated using questionnaires and can be attributed to the negative solar effects of the external environment. The prominent negative factors noted in this study were solar heat gain and overbrightness from solar radiation. The reasons for closing internal blinds were primarily to reduce solar heat gain (58%), control overbrightness (25%), cut off the connection between outdoors and indoors for privacy (10%), and avoid an unpleasant window view (7%). It can be concluded that the impact of solar heat gain has the most influence on internal blind adjustment by the occupants.
Owing to the high rate of blind occlusion, the opening-view area was reduced drastically in every building orientation. Of the glass facade area, 83.25% was covered by internal blinds lowered or closed by the occupants; only 16.75% of the glass area remained open to connect indoors and outdoors. The projected area factor of the window area for the occupants in the perimeter zone was 0.32, while it was 0.11 for those in the interior zone. In contrast, focusing on only the glass surface area not covered by internal blinds, the projected area factor of the opening area for the occupants in the perimeter zone was 0.06, while it was 0.02 for those in the interior zone, as shown in Figure 1.
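A plausible reading of how the occlusion reduces the opening-view projected area factor is to scale the window's factor by the uncovered fraction of glass. The sketch below uses the reported 83.25% coverage and the 0.32/0.11 window factors; the results come out close to (though not exactly matching) the 0.06 and 0.02 reported, which may reflect zone-specific occlusion rates in the actual analysis.

```python
# Effective opening-view projected area factor: the window's projected
# area factor scaled by the fraction of glass left uncovered by blinds.
# Scaling by a single building-wide occlusion rate is our assumption.

def open_view_factor(window_paf, occlusion_rate):
    return window_paf * (1.0 - occlusion_rate)

perimeter_open = open_view_factor(0.32, 0.8325)
interior_open = open_view_factor(0.11, 0.8325)
```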
Optical properties
The way users interact with internal blinds can influence building performance. In the field investigation of glass facade performance, the occupants usually installed internal blinds as internal shading devices to reduce the strong effect from the glass facade and allow themselves to maintain their own comfort. Glass facades that are continually obstructed prevent daylight use and also reduce heat transmittance.
Of the blinds, 81.73% were closed or lowered all day long during the investigation. In terms of thermal performance, the on-site measurement with thermocouples showed that the interior surface of the glass facades was mainly affected by the internal blind surface, owing to the high occlusion rate (Figure 1). Comparing the interior internal blind surface with the interior glass surface, it was 0.18% to 9.24% lower, as shown in Figure 2. Moreover, the simulation of glass facades with internal blinds in WINDOW 7.4 (Office D and Office F) in Figure 3 shows that the SHGC is 5.88% to 7.68% lower. The internal blind can therefore be considered an effective additional layer of the whole facade system that lowers heat transmittance from transmitted solar radiation. In terms of visual performance, the high occlusion rate of the internal blinds reduced the opening-view projected area factor by 81.25% in the perimeter zone and 81.81% in the interior zone, as shown in Figure 1. The occupants hardly experienced the exterior view, especially those seated in the interior zone. Furthermore, the internal blinds also obstructed daylight utilization. The illuminance level at the work plane can be drastically lower, by 0.08% to 5,145.77%, with blind covering [8], because the VLT of the facade system was reduced by 55.17% to 73.41%, as shown in Figure 4. The facade's transparency was effectively disabled by internal blind lowering.
Thermal environment
To understand the actual thermal environment performance in the case study office buildings, on-site measurement of operative temperature was conducted. The results in Figure 5 show that the mean operative temperature of the case studies ranged between 21.25 °C and 23.76 °C. The operative temperature for occupants in the perimeter zone ranged from 22.48 °C to 23.76 °C, but from 21.25 °C to 23.51 °C for those in the interior zone. The operative temperature in the perimeter zone was 1.23% to 5.46% higher than in the interior zone because of the strong effect of the external heat load from transmitted solar radiation. The closer a seat position is to a window, the greater the impact. The thermal environment inside the perimeter zone usually fluctuated with solar heat gain: the air temperature measurements showed that the perimeter zone was 0.41% to 4.85% higher than the interior zone, and the mean radiant temperature showed the same trend, with the perimeter zone 0.31% to 7.14% higher than the interior zone, as shown in Figure 6.
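At the low air speeds typical of these air-conditioned offices, the operative temperature reduces to the simple average of air temperature and mean radiant temperature, a standard approximation in thermal comfort practice. The temperatures below are illustrative, not the study's measured values.

```python
# Operative temperature at low indoor air speed: the simple average of
# air temperature and mean radiant temperature (a common low-air-speed
# approximation). Input temperatures are illustrative.

def operative_temp(t_air, t_mrt):
    return (t_air + t_mrt) / 2.0

# Nearer the facade both air and radiant temperatures tend to be higher,
# pushing the perimeter operative temperature above the interior one.
perimeter = operative_temp(23.4, 24.1)
interior = operative_temp(22.9, 23.1)
```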
In addition, the mean radiant temperature accounted for the effect of the surrounding surface temperatures. Comparing the same seat position near the glass facades with blinds controlled by the occupants and without blinds, the mean radiant temperature was 0.61% to 2.63% lower [8]. This is because the internal blind surface temperature was 1.66% to 13.24% lower than the interior glass surface temperature, as shown in Figure 2. The impact of the external heat load from the glass facades on the mean radiant temperature thus mainly arises from the internal blind surface temperature.
Visual environment
In this study, on-site measurement was conducted inside the daylit space by analyzing the daylight glare probability, which is commonly used for evaluating daylight discomfort glare from a source with non-uniform luminance levels. The results in Figure 7 show that the mean daylight glare probability of the case studies ranged between 0.01 and 0.31. The daylight glare probability for occupants in the perimeter zone ranged from 0.12 to 0.31, while it was 0.01 to 0.18 for those in the interior zone. The perimeter zone value was 1.23% to 5.46% higher than that of the interior zone because of the impact of daylight entering.
The daylight glare probability inside the Southeast Asian office buildings was low in every situation. The international standards mention a comfortable range within 0.35-0.40; however, most of the daylight conditions were distributed between 0.01 and 0.31. This is because 81.73% of the internal blinds were lowered or closed. The opening-glass area was drastically low, as shown in Figure 1, and this obstructed condition can lower the VLT of the whole facade, as shown in Figure 4.
In addition, the high internal blind occlusion rate resulted in not only less daylight entering, but also a lower projected area factor for the opening area. The opening rate of the perimeter zone was 9.87% to 71.15% higher than that of the interior zone, as shown in Figure 8. However, focusing on only the actual opening area without internal blind covering, the glass-opening area was 12.36% to 77.13% lower than the whole window area. The opening rate in the interior zone was 8.45% to 64.36% lower than that of the perimeter zone, as shown in Figure 9. Therefore, the occupants in the interior zone had a smaller projected area factor for the opening-view area compared with those in the perimeter zone, so they rarely had a window view.
Occupants satisfaction in perimeter zone and interior zone
In the questionnaire survey, the occupants were asked to rate their level of satisfaction with the indoor environment in terms of temperature, lighting, and view. The mean occupant satisfaction in the perimeter zone and interior zone is plotted in Figure 10. The mean of these two zones is marked by horizontal dashed lines, which show the range of the overall mean satisfaction inside the grey area. The results show that, on average, occupants in the perimeter zone were more satisfied in terms of temperature and view, whereas occupants in the interior zone were more satisfied with lighting, because their mean score was higher.
For thermal satisfaction, the mean satisfaction score of most buildings (Offices A, B, C, E, and F) in the perimeter zone was 9.21% to 33.41% higher than in the interior zone. The satisfaction scores present positive mean values except for thermal satisfaction inside the interior zone of Office F. When occupants expressed dissatisfaction with a survey category, they were simultaneously asked sensation questions. The sensation results show that most of the interior occupants in Office F were dissatisfied because of the cool environment: 92.34% of them felt slightly cool, cool, or cold. Moreover, the satisfaction score of the occupants inside the interior zone in four buildings (Offices A, C, E, and F) was rated lower than that of those in the perimeter zone because of dissatisfaction with the cool environment.
Not only were the occupants in the interior zone dissatisfied with the cool environment; 72.63% of the occupants in the perimeter zone were also dissatisfied with it, even though the air temperature, operative temperature, and mean radiant temperature inside this area were higher than in the interior zone because of the heat gain from transmitted solar radiation. The thermal environment inside the perimeter zone thus mainly produced "cool" answers from dissatisfied occupants. It can be concluded that the negative effect of solar heat gain matters less for thermal comfort because of the high rate of internal blind occlusion, which lowers the SHGC and the interior surface temperature of the window. Based on the questionnaire results, the primary reason for lowering/closing internal blinds was to reduce solar heat gain. However, the sensation survey showed that most of the occupants were dissatisfied with a cool rather than a warm environment; only the occupants seated very close to the building facades answered slightly warm, warm, or hot to the sensation questions. Closing the internal blinds may originate from heat-reduction motives, but when the blinds were closed all day long, the environment became cool, so the occupants were dissatisfied with the cool environment in terms of thermal comfort.
For view satisfaction, the mean satisfaction score of most buildings (Offices B, C, D, E, and F) in the perimeter zone was 4.27% to 57.68% higher than in the interior zone. Only in Office A was view satisfaction in the perimeter zone lower than in the interior zone; this was because its working space layout was small and narrow, so the view satisfaction votes of the occupants in the two zones were not drastically different. The satisfaction scores present positive mean values: all the occupants were satisfied in the view category. The opening-view areas, based on the projected area factors of each seat position, were 8.45% to 64.36% higher inside the perimeter zone than inside the interior zone, as shown in Figures 8-9. It can be concluded that a higher degree of view facing in the perimeter zone resulted in a higher view satisfaction score for the perimeter occupants. However, the actual opening-glass area was very small considering the high rate of blind occlusion. Only the occupants in the perimeter zone had a chance to experience a window view, whereas the occupants inside the interior zone hardly saw the outside view. The mean view satisfaction score of each building nonetheless presented positive values on the satisfied side.
For lighting satisfaction, the mean satisfaction scores of most buildings (Offices A, B, C, and F) in the interior zone were 9.21% to 33.41% higher than in the perimeter zone. The satisfaction scores present positive mean values on the satisfied side. The satisfaction score of the occupants inside the perimeter zone in four buildings (Offices A, C, E, and F) was lower than that of those in the interior zone because of dissatisfaction with an overly bright environment. The questionnaire results revealed that 84.14% of the occupants in the perimeter zone and 50.47% of those in the interior zone were dissatisfied and reported that the environment was too bright.
The answers of the satisfied occupants, whose satisfaction scores ranged from 0 to +2 (moderate to satisfied), were cross-referenced with the indoor conditions recorded by the measuring devices in their area, as shown in Figures 11-13. The results revealed different comfortable ranges of the indoor environment for the occupants in the perimeter zone and the interior zone. The satisfied occupants mainly maintained their own thermal and visual comfort under the conditions of the area around their seat position. For thermal satisfaction, the mean operative temperature for the satisfied occupants in the perimeter zone was 23.06 °C to 23.95 °C, while it was 22.13 °C to 23.71 °C for those in the interior zone, as shown in Figure 11. Meanwhile, for visual satisfaction, the mean daylight glare probability was 0.17 to 0.26 for the perimeter zone but 0.03 to 0.14 for the interior zone, as shown in Figure 12, and the opening view based on the projected area factor was 0.13 to 0.16 for the perimeter zone but 0.03 to 0.10 for the interior zone, as shown in Figure 13. This means that the occupants were able to adapt to and accept a wide range of indoor conditions based on their surrounding environment. Moreover, the occupants in the perimeter zone appear to have a wider comfortable range than those in the interior zone, because they always experience more fluctuating conditions from the building-facade effect.
Figure 13. Frequency distribution of projected area factor for opening-glass area.
Returning to the overall satisfaction analysis, it can be seen that, on average, as shown in Figure 14, the mean thermal satisfaction votes were the lowest compared with those for lighting and view. Thermal comfort has the most influence on occupants' dissatisfaction. However, the questionnaire results revealed that 68.45% of the dissatisfied occupants in the perimeter zone and 88.54% of those in the interior zone answered the sensation questions on the cool side.
This was because of the low operative temperature, resulting from the low air temperature and the low heat transmission of the building facade given the high rate of internal blind occlusion. The impact of solar heat gain was not the main contribution of the building-facade effect.
Figure 14. Comparing the occupants' satisfaction scores between the perimeter zone and interior zone.
View satisfaction had a higher score than lighting satisfaction, so lighting comfort has the higher impact on occupants' satisfaction. Considering only the dissatisfied occupants in both the perimeter zone and the interior zone, most of them were dissatisfied with overbrightness according to the sensation questions. The brightness in the perimeter zone can be extremely high, especially for a seat position close to the building facade, but the high rate of internal blind occlusion results in a lower daylight glare index for both the perimeter zone and the interior zone. The DGP index was distributed 7.19% to 13.26% below the comfortable level of 0.35 to 0.40 based on the international standard. However, the questionnaire results revealed that most of the dissatisfied occupants in the perimeter zone and interior zone answered the sensation questions as too bright, even though the daylight index was small because of the high blind occlusion. It can be concluded that the occupants were highly sensitive to the lighting environment. Lighting comfort has the most influence on occupants' satisfaction in terms of the building-facade effect.
Conclusion
As the trend toward highly glazed facades for commercial buildings continues to grow, it is becoming increasingly important to design facades taking into account their influence on occupants' comfort, especially for occupants inside perimeter zones, who usually experience the largest fluctuations compared with those in the interior zone. The occupants therefore usually close/lower internal blinds to reduce solar heat gain and control excessive brightness from transmitted solar radiation. When a building is occupied with a high rate of internal blind occlusion, the building facade's performance, considering the blind-covering impact, differs from that of the design-phase building: the opening-view area, the visible light transmittance (VLT), and the solar heat gain coefficient (SHGC) were all lowered. Nevertheless, the thermal and visual environments inside the perimeter zone were distributed over a wider range and at a higher level than in the interior zone because of the strong effect of the transmitted solar radiation.
The satisfaction questionnaire results show that the occupants in the two zones were able to adapt to a wide range of indoor conditions based on their surrounding environment. On average, occupants in the perimeter zone were more satisfied in terms of temperature and view; on the contrary, occupants in the interior zone were more satisfied with lighting, with a higher mean satisfaction score. In the overall satisfaction analysis, the thermal environment had the most influence on occupants' dissatisfaction, based on the lowest mean satisfaction score. However, occupants were dissatisfied because of a cool environment, even when seated inside the perimeter zone affected by strong solar radiation, because the operative temperature was low. Moreover, the internal blind covering reduces the negative effect, with a lower SHGC and a lower window interior surface temperature compared with the bare glass facade, so heat transmittance from the building facades has less effect on thermal satisfaction. Most occupants were not disturbed by solar heat gain from the building facades in terms of thermal comfort.
Overbrightness from incoming daylight had the greatest influence on occupants' satisfaction with respect to the building facade. Most dissatisfied occupants in both the perimeter and interior zones felt it was too bright, even though daylight was blocked by the high occlusion rate of the internal blinds and the discomfort glare index was distributed below the international standard. Optimising lighting comfort together with the use of internal shading devices is therefore necessary to provide a comfortable visual environment for occupants.
Acknowledgement
This research was supported by a Tokyo Metropolitan Government Platform collaborative research grant (representative: Assoc. Prof. Dr. Masayuki Ichinose). All of the kind support for the case study is gratefully acknowledged.
Development of a Machine Learning Model with a Function of Amygdala for Rapid Image Processing
In this paper, a convolutional neural network that can quickly deal with dangerous situations is designed based on the processing rules of the amygdala in the human brain. By studying these processing rules, we can understand neuron activity when humans are at risk, build similar models, and test them on relevant data. The neural network model is constructed by changing the structure, the loss function, and the number of filters of a general convolutional neural network. The designed model can quickly and accurately predict dangers; it was evaluated on a test data set and good results were obtained.
Introduction
With the continuous development of technology, people have begun to use machines to do a great deal of work. Using machines to identify dangers is one of the most important technologies that many developers are working on. People gather information from the outside world with their eyes and ears and can easily identify an event, or determine whether it is dangerous, but to a machine this information is just a string of zeros and ones.
For people, danger can be understood as a situation that can cause great losses and even cause people to lose their lives. It is of great importance to spot dangers accurately, but it is even more important to spot them quickly and make quick decisions.
As we all know, the human brain is a very complex organ with many functions, and one of the most important is handling dangerous situations. For example, when a person is walking down a road and a snake-like shape suddenly appears, the first reaction is usually not to judge what the object is, but to feel fear and retreat. Only after reaching a safe distance does the person look back to determine what it is. If he or she chose to observe the object first instead of escaping, there would be a real risk of being bitten and thus falling into danger.
Developers have developed neural networks and convolutional neural networks based on models of human brain. But many developers only care about the accuracy of model predictions rather than going deeper into the human brain and testing whether the developed model is similar to the human brain.
It is necessary to delve into whether the neural network model has the same structure and processing as the human brain, and whether more brain functions can be added to the neural network model. Such researches can be used to refine existing models or develop a new model based on the human brain.
My goal is to build models of brain functions through related research. In this paper, I implement the brain's mechanism for processing danger in a convolutional neural network. By changing the model structure, the loss function, and the number of filters, a neural network model is constructed that can determine danger quickly and accurately, just as the brain does. With such a network, dangerous situations can be identified rapidly, reducing the losses they cause.
Research of the Brain
The human brain receives information from the outside world through sensory organs such as eyes and ears, and then transmits and processes information in the brain through organs such as thalamus, cortex, amygdala and hippocampus. The connections between these organs form multiple circuits, and each function in the human brain consists of multiple circuits [1].
The information received by the brain through the sensory organs is collected in the thalamus, which acts as an intermediate station for the simple processing of information, and then such information is transmitted to cortex and other organs [2]. The cortex processes the information in detail and then transmits it to other organs, such as hippocampus and amygdala. The hippocampus is an organ related to memory storage [3], and amygdala has functions such as sensing danger and responding to dangerous situations [4].
Neurons and synapses play a very important role when the brain performs a certain function. All the information in the brain is transmitted through neurons and synapses [5]. Neurons are composed of a nucleus, dendrites, and axons. The nucleus stores genetic material and provides the materials required for the survival of the neuron. The dendrites act as input sites, receiving information from the previous neuron. The axons act as output sites, communicating information to the next neuron by releasing chemical transmitters. When a neuron receives a signal and is activated, an electrical impulse propagates within the neuron, passing information from the dendrites to the axons for output. There is a synaptic gap between two neurons; when information is transmitted, the chemical transmitters released at the axon terminals cross the synapse and bind to the next neuron's dendrites [5], [6].
The amygdala, the most important organ of the brain for dealing with dangerous situations, can sense potential dangers, make judgments, and respond to perceived dangers. For example, a mouse will instantly become stiff and enter a state of suspended animation if it meets a cat, its natural enemy. This is the amygdala in the brain of the mouse that senses danger (cat) and reacts to danger (suspended animation).
The amygdala's perception of danger falls into two categories: innate perception of dangers inherited genetically, and perception of dangers the brain has learned, which we call fear conditioning [7]. For example, a mouse will identify an electric shock as dangerous and its body will stiffen. If a bell rings at the same time as the shock, then after repeated learning the amygdala in the mouse's brain will identify the bell alone as dangerous, and the body will stiffen whenever the bell rings.
The amygdala constantly receives information but ignores most of it, responding only to potentially dangerous information. The amygdala receives information simultaneously from two sources, the thalamus and the cortex. Information obtained from the sensory organs is collected in the thalamus, which processes it simply and sends it to the cortex for more detailed processing [8]. At the same time, the thalamus sends this information directly to the amygdala; this route is known as the low path. The information the thalamus sends to the cortex is processed there in detail and then transmitted to the amygdala; this route is known as the high path. The information on the low path has not been processed in detail, so it reaches the amygdala quickly, and the amygdala can make danger judgments fast but with relatively low accuracy. The information on the high path is processed by the cortex in detail; it reaches the amygdala more slowly, but the resulting judgment is relatively accurate [9], [10], [11].
Convolutional Neural Network
Image recognition has always been an area of concern. Convolutional neural networks, as a neural network for image recognition, are more in line with the way human brains process visual information than ordinary image classification neural networks. When visual information such as a picture is transmitted into the human brain through eyes, the brain does not analyze the picture as a whole, but divides it into many small pieces, and local analysis on features such as contours, edges, textures is carried out in each small piece.
The convolutional neural network splits the picture into smaller pieces and looks for features within those small pieces. The features are extracted by convolution operations between the input image and the model's filters, which are themselves learned during training.
The convolutional neural network is mainly composed of the Input layer, Convolutional Layer, Pooling Layer, and Fully-Connected Layer.
In the convolutional layer, the filter performs convolution on the input image by sliding the window. Each filter corresponds to a different feature. After the convolution of images by multiple filters, multiple feature images can be extracted. The resulting feature map is then sent to the pooling layer.
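The sliding-window convolution described above can be sketched in a few lines of NumPy. This is an illustration, not the paper's code; like most CNN libraries, it actually computes cross-correlation, and the image and edge-detector kernel are made up for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' used in CNNs):
    slide the kernel over the image and take an elementwise product-sum."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny image with a vertical edge
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
feature_map = conv2d(image, kernel)
print(feature_map)  # each row is [0. 2. 0.]: the edge shows up at column 1
```

Each filter in a convolutional layer plays the role of `kernel` here, producing one feature map per filter.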
Max Pooling is a common pooling method used in the Pooling Layer. Max Pooling is to slide the window with a filter of a certain size, leaving only the largest value. Thus, the pooling layer compresses the feature map, extracts the main features, and makes the model less complicated to improve the calculation speed.
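The max-pooling compression step described above can be sketched the same way (again a NumPy illustration with a made-up feature map, not the paper's implementation):

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """Slide a size x size window over the map and keep only the largest
    value in each window, compressing the feature map."""
    h, w = feature_map.shape
    oh, ow = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = feature_map[i * stride:i * stride + size,
                                j * stride:j * stride + size]
            out[i, j] = patch.max()
    return out

fm = np.array([[1, 3, 2, 1],
               [4, 6, 5, 0],
               [7, 2, 9, 8],
               [0, 1, 3, 4]], dtype=float)
pooled = max_pool(fm)
print(pooled)  # [[6. 5.] [7. 9.]]
```

The 4x4 map shrinks to 2x2 while the strongest response in each region survives, which is why pooling preserves the main features at lower cost.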
Finally, the compressed feature map is sent to the Fully-Connected layer to calculate the prediction result. When the model is trained, the loss function is calculated by the predicted result to train the weight of the Fully-Connected layer and each filter of the Convolution layer.
Model Development and Experiment
People have multiple sensory organs, and amygdala combines the information collected by various sensory organs to sense and identify dangers. But in this paper, I only model, experiment, and analyze visual senses.
Connection of Thalamus and Cortex
As mentioned earlier, thalamus receives information from sensory organs and sends it to cortex for more detailed processing. The detailed information is transmitted to other organs, compared with the memory extracted from hippocampus, and compared to determine what is perceived by our sensory organs. The general Convolutional Neural Network is consistent with this, and the picture is extracted by the Convolution Layer, and then classified by Fully-Connected Layer.
So here, we use the general Convolutional Neural Network structure for modelling to obtain the filter needed to classify specified data set. These filters are like the cerebral cortex, which processes the information in detail and transmits the processed information to the next organ to further classify and identify the original data.
Connection of Cortex and Amygdala
In the brain, cerebral cortex processes information both for categorizing primitive objects and for perceiving danger. After cortex processes information in detail, it transmits the processed information to amygdala. Based on the processed information, it is perceived whether a person is in a dangerous environment.
In this part of the model, we again use the general convolutional neural network model, but we do not need to train the filters in the convolution layer again because in the previous link, we have obtained the filters to process specific data sets. By reusing the trained filters, the time for model training can be greatly reduced. In this part of the model, what we need to do is to determine whether it is dangerous or not. Therefore, the predicted results fall into two categories, namely dangerous and safe. When an error is predicted and a loss occurs, the model trains the weight by calculating the loss function. After the training is completed, we can get the weights in the Fully-Connected layer, which can be further used to input the information, and after calculation, a result indicating whether it is dangerous will be output.
Connection of Thalamus and Amygdala
The role of the connection between thalamus and amygdala is to quickly perceive danger through simple, unprocessed information.
In this part of the model, we do not retrain the network; we directly use the filters and weights trained in the first two links. However, this link does not require complete, detailed processing of the information, so we select only the filters that respond most strongly to danger, process the images with them, and then make the danger judgment directly.
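The quick path can be sketched as a simple filter-selection step. Everything here, the filter bank, the per-filter `danger_scores`, and the ranking criterion, is a hypothetical illustration of "keep only the most danger-relevant filters"; the paper does not publish its code or say how filter relevance is scored.

```python
import numpy as np

# Hypothetical stand-ins (assumptions, not from the paper):
# 'filters' is a bank of already-trained conv filters, and 'danger_scores'
# ranks how strongly each filter responds to dangerous inputs.
rng = np.random.default_rng(0)
filters = rng.normal(size=(32, 3, 3))   # 32 trained 3x3 filters
danger_scores = rng.random(32)          # assumed per-filter danger relevance

k = 5                                   # the paper's quick path keeps 5 filters
top_idx = np.argsort(danger_scores)[-k:]  # indices of the k most relevant filters
quick_filters = filters[top_idx]        # reduced filter bank for the fast judgment

print(quick_filters.shape)  # (5, 3, 3)
```

The quick model then convolves inputs with only these five filters, trading some accuracy for speed, mirroring the low path from thalamus to amygdala.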
The complete model structure is shown in the figure.
Experimental Analysis
The experiment used the CIFAR-10 dataset, selecting five kinds of animals and treating 'dog' as the dangerous class. Each image is 28x28. The data set comprises 20,000 training samples, 5,000 test samples, and 5,000 validation samples.
When we train the model to determine whether a situation is dangerous, the general loss function penalizes all errors equally. Analyzing the prediction errors, we can see from the figure that they fall into two types: a Type 1 Error judges a safe situation as dangerous, while a Type 2 Error judges a dangerous situation as safe. In reality, if a Type 2 Error occurs, that is, when we judge a dangerous situation as safe, there may be major losses or even loss of life. Yet with the general loss function, Type 1 and Type 2 Errors produce losses of the same magnitude.
So here I make a change to the loss function: the loss produced by a Type 2 Error is scaled by a factor α, so that a greater loss is produced when a Type 2 Error occurs. We tested several values of α; the larger the value, the greater the loss caused by a Type 2 Error. The training results are shown in Figure 2. As can be seen from the figure, as α increases the model becomes more sensitive to dangerous data, and the number of samples judged dangerous keeps increasing.
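Since the exact form of the modified loss is not reproduced in the text, the following is one plausible reconstruction: a binary cross-entropy in which the danger-class term is scaled by α, so that missing a danger (Type 2 Error) costs more than a false alarm. Treat it as an assumed sketch, not the paper's definition.

```python
import numpy as np

def weighted_bce(y_true, y_pred, alpha=1.0, eps=1e-12):
    """Binary cross-entropy with the positive-class (danger) term scaled
    by alpha. alpha > 1 makes Type 2 Errors (danger predicted safe) more
    costly than Type 1 Errors. Assumed reconstruction, not the paper's loss."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    loss = -(alpha * y_true * np.log(y_pred)
             + (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()

y_true = np.array([1.0, 0.0])   # one dangerous, one safe example
y_pred = np.array([0.1, 0.9])   # both mispredicted by the same margin
base = weighted_bce(y_true, y_pred, alpha=1.0)   # symmetric penalty
harsh = weighted_bce(y_true, y_pred, alpha=3.0)  # missed danger dominates
print(base < harsh)  # True
```

With α = 1 the two mistakes cost the same; with α = 3 the missed danger contributes three times as much loss, which pushes the trained model toward danger sensitivity, as Figure 2 shows.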
In Figure 2, 'full' denotes the model that uses all the filters, and 'quick' denotes the simplified model that uses only five filters. Comparing the test results of the two models, the quick model shows higher sensitivity to danger than the full model, which matches people's quick judgment of and response to dangerous situations. Table 2 shows the time required for training and testing each model, where Connection 1, 2, and 3 are the three connections between the thalamus, cortex, and amygdala, respectively. We can see that the full model and the quick model need almost the same test time, but the quick model does not need to be trained, so it saves a great deal of time. A further advantage of the quick model is that it is more sensitive to danger than the full model, preventing losses as far as possible.
Conclusion
In this paper, we established a connection model of the sensory organs, thalamus, cortex, amygdala, and hippocampus by studying the human brain. A neural network model with the amygdala's danger-processing function was realized with a convolutional neural network. The experiment was carried out on the CIFAR-10 data set, and the expected results were obtained.
The main purpose of this paper is not to obtain a high-precision model on the test set, but to study how to make the model as sensitive to danger as the human brain is. Such a model can better prevent the huge losses caused by Type 2 Errors.
Enriching the Study Population for Ischemic Stroke Therapeutic Trials Using a Machine Learning Algorithm
Background Strokes represent a leading cause of mortality globally. The evolution of developing new therapies is subject to safety and efficacy testing in clinical trials, which operate in a limited timeframe. To maximize the impact of these trials, patient cohorts for whom ischemic stroke is likely during that designated timeframe should be identified. Machine learning may improve upon existing candidate identification methods in order to maximize the impact of clinical trials for stroke prevention and treatment and improve patient safety. Methods A retrospective study was performed using 41,970 qualifying patient encounters with ischemic stroke from inpatient visits recorded from over 700 inpatient and ambulatory care sites. Patient data were extracted from electronic health records and used to train and test a gradient boosted machine learning algorithm (MLA) to predict the patients' risk of experiencing ischemic stroke from the period of 1 day up to 1 year following the patient encounter. The primary outcome of interest was the occurrence of ischemic stroke. Results After training for optimization, XGBoost obtained a specificity of 0.793, a positive predictive value (PPV) of 0.194, and a negative predictive value (NPV) of 0.985. The MLA further obtained an area under the receiver operating characteristic (AUROC) of 0.88. The Logistic Regression and multilayer perceptron models both achieved AUROCs of 0.862. Among features that significantly impacted the prediction of ischemic stroke were previous stroke history, age, and mean systolic blood pressure. Conclusion MLAs have the potential to more accurately predict the near risk of ischemic stroke within a 1-year prediction window for individuals who have been hospitalized. This risk stratification tool can be used to design clinical trials to test stroke prevention treatments in high-risk populations by identifying subjects who would be more likely to benefit from treatment.
INTRODUCTION
As the second most common cause of mortality globally, stroke poses a significant health burden (1). It is associated with long term disabilities, increased healthcare expenditures, and an overall decline in quality of life for individuals who have suffered a stroke (1,2). In the U.S., over 795,000 strokes occur per year, putting this disease in the top five causes of mortality (3). It is estimated that over $34 billion in healthcare expenditures in the U.S. are directly related to stroke, including lost income, costs associated with management of comorbidities, and use of health services (1,3). Risk factors for stroke include those that are nonmodifiable and modifiable (1). Non-modifiable factors include individual demographics, such as being female, being older than 55, or being a racial-ethnic minority (3)(4)(5). Modifiable risk factors include inadequate physical activity, obesity, smoking, and isolation (6,7).
Ischemic strokes, the most common type of stroke, result from the sudden shortage of blood supply to the brain and account for 80% of strokes in the U.S. and 87% globally (1,3). Complications can be permanent and pose a range of challenges for stroke survivors, both physically and psychologically (1). For example, a study by Crichton et al. found that nearly 40% of stroke survivors had diagnosed depression following the event and approximately one-third experienced a decline in cognitive abilities (8).
Clinical trials have focused on secondary stroke prevention to influence modifiable risk factors and examine the efficacy of various therapeutic interventions for limiting the recurrence of stroke (9,10). Anticoagulant therapy has been shown to be an effective tool for primary prevention to reduce stroke risk in patients with comorbidities that put them at a high risk for stroke, such as atrial fibrillation (AF) (11,12). Given the continued high prevalence of stroke and its lethality, clinical trials are needed to explore the effective use of various therapeutics as both primary and secondary prevention of ischemic strokes in both high risk populations and populations without traditional risk factors. However, clinical trials often stall due to patient attrition or other factors. Per a study by Herrer et al. over one third of all Phase III clinical trials fail due to poor subject selection, resulting in lost expenditures and time for research and development (13).
Artificial intelligence (AI) and machine learning (ML) may serve as tools to supplement the patient selection process for clinical trials by identifying individuals at a high risk for stroke within the window of the study, versus other stroke risk assessments that provide a longer window of prediction. While there has been much progress in the prediction of outcomes of acute stroke using ML-based models (14)(15)(16)(17), there is a need for research regarding the utilization of ML tools for the prediction of future stroke. The goal of this study was to examine the ability of ML models to predict an individual's 1-year stroke risk in order to identify individuals for whom preventive interventions, such as anticoagulant therapies, may mitigate this risk. This research may enhance clinical study protocols regarding patient selection, dosage and timing of a study subject's therapy, as well as streamlining the process of patient selection (18).
Data Sources
Data were obtained from a proprietary longitudinal electronic health record (EHR) repository that includes over 700 inpatient and ambulatory care sites located in the U.S. Encounter level data were extracted from individuals between January 2017 and December 2020 (Figure 1). Having had these prior encounters ensured that there was comparison data for these patients in the EHR system. Patient data became eligible for analysis at the patient's second encounter within the same hospital system in either the intensive care unit (ICU) or inpatient wards. Inputs for the analysis included patient demographics, diagnoses, and medication usage both at the time of the first inpatient encounter as well as any prior medication usage recorded in the EHR during the data collection period. Data were collected passively, and to comply with the Health Insurance Portability and Accountability Act (HIPAA), data were de-identified to maintain patient privacy. As data were de-identified, this project did not constitute research using human subjects and approval was not required.
Patient Selection
Patients who experienced an ischemic stroke between 1 day to 1 year after their first inpatient encounter were identified using international classification of diseases (ICD) codes within EHRs to indicate stroke ( Table 1). All patients who had an inpatient encounter, did not meet the criteria for ischemic stroke, and who did not meet the hemorrhagic stroke exclusion criteria were considered to be the negative class ( Table 1, Supplementary Table S1). The minimum and maximum timeline for the input window for collecting laboratory and vital measurements was between 24 h and 1,000 h during the patient's length of stay. We excluded encounters that did not fall within that window. Wherever applicable, we used summary statistics (mean value, standard deviation, and last measurement) of collected feature data at any time within the visits. Patients with characteristics indicative of high risk of hemorrhagic stroke at the first encounter were excluded to further improve the ability of the algorithm to only identify patients at risk of ischemic stroke. This software feature has the potential to serve as a tool to reduce the risk of enrolling patients who are at risk for hemorrhagic stroke as opposed to ischemic stroke, as anticoagulant therapy may increase the risk of hemorrhagic stroke (19). Risk factors for hemorrhagic stroke included patients who were given anticoagulants during the first inpatient encounter, had a surgery within 30 days of their first encounter, had a gastrointestinal bleed, amniotic embolism, intracranial hemorrhage, ulcers, and/or had a high risk of falling, or were pregnant. Patients with coagulopathy were also excluded, as these patients were unlikely to be suitable candidates for a clinical trial.
Algorithm inputs included demographic information, medical history, and clinical and laboratory data which were identified from EHRs by the use of clinical measurements, ICD codes, procedure data, medicine (self-administered prescription or in-hospital medication) data, and other patient data. An analysis of the correlation between features used in the study was performed, and if two features had a very high magnitude of correlation (>0.8), then one of the features was removed. This included the following sets of features: male and female; antihypertensive medication and antidiabetic medication; white blood cell count and platelet count; weight and body mass index (BMI). The list of features used in the model is presented in Table 2.

FIGURE 1 | Study design timeline. Patients identified in the positive class according to our gold standard had to have been diagnosed with ischemic stroke within the prediction window, i.e., 1 day after the end of visit to within 1 year from the end of visit. The negative class included patients in whom no diagnosis of ischemic stroke was identified within the prediction window, and they must have had at least 1 year of data after the end of visit.
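The correlation-based feature pruning described above can be sketched as follows. This is a NumPy illustration with synthetic data (the variable names and the rule of keeping the first member of each correlated pair are assumptions; the paper does not publish its preprocessing code):

```python
import numpy as np

def drop_correlated(X, names, threshold=0.8):
    """Drop one feature from every pair whose |Pearson correlation| exceeds
    the threshold, mirroring the paper's >0.8 rule. Which member of a pair
    is kept (here, the earlier column) is an arbitrary choice."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = X.shape[1]
    dropped = set()
    for i in range(n):
        for j in range(i + 1, n):
            if i not in dropped and j not in dropped and corr[i, j] > threshold:
                dropped.add(j)          # keep i, drop its near-duplicate j
    kept = [k for k in range(n) if k not in dropped]
    return X[:, kept], [names[k] for k in kept]

# Synthetic example: BMI is almost a linear function of weight, age is independent
rng = np.random.default_rng(1)
weight = rng.normal(70, 10, 200)
bmi = weight / 3.0 + rng.normal(0, 0.1, 200)
age = rng.normal(60, 15, 200)
X = np.column_stack([weight, bmi, age])
X_kept, names_kept = drop_correlated(X, ["weight", "bmi", "age"])
print(names_kept)  # ['weight', 'age']
```

As in the study, the near-duplicate column (here BMI, standing in for weight/BMI) is removed and independent features survive.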
Machine Learning Model
This research utilized a gradient boosting decision tree classifier to predict ischemic stroke within a year. The Extreme Gradient Boosting (XGBoost v1.3.3) method in Python (v3.6.13) (20-24) was used to implement the decision tree model (25). In this method, multiple trees are generated based on the values of the various input features and a prediction score is generated by combining the results from various trees. During training, future decision trees are constructed with the goal of minimizing the error calculated in previous iterations of tree building. This allows the model to construct targeted trees which optimize the accuracy of the final output. The training process iteratively determines the best variables (and respective thresholds) that can be used to differentiate which patients will have an ischemic stroke within 12 months, and which patients will not. The result of this process is a decision tree that uses a patient's data to predict if they are likely to have a stroke. In handling missing data, we did not include features that had a missing rate of >50%. Furthermore, the XGBoost model was also chosen as it is particularly robust in handling missing data (26,27) and often outperforms simpler ML models (22,23). Supplementary Figure S3A shows the missingness of non-categorical features that were used as inputs.
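The residual-fitting idea behind gradient boosting, each new tree corrects the error left by its predecessors, can be illustrated with decision stumps on a toy regression problem. This is a from-scratch sketch of the principle only, not the XGBoost implementation used in the study.

```python
import numpy as np

def best_stump(x, residual):
    """Find the split on x whose two constant leaves best fit the residual."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]

def boost(x, y, n_trees=50, lr=0.2):
    """Each stump is fit to the CURRENT residual (y minus the running
    prediction), so later trees target the errors of earlier ones."""
    pred = np.full_like(y, y.mean())
    stumps = []
    for _ in range(n_trees):
        t, lv, rv = best_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return pred, stumps

x = np.arange(20, dtype=float)
y = (x > 9).astype(float)          # a simple step function to learn
pred, _ = boost(x, y)
print(np.abs(pred - y).max())      # shrinks toward 0 as trees accumulate
```

With a learning rate of 0.2 each round removes 20% of the remaining error, so after 50 stumps the step function is fit almost exactly; XGBoost applies the same principle with deeper trees, regularization, and gradient-based objectives.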
No more than five branching levels were permitted in each tree in the final model. The XGBoost parameter for learning rate was set to 0.2 with no more than 100 total trees to avoid a computational burden. Patients were assigned one of the two groups (predicted ischemic stroke or not predicted ischemic stroke) based on whether or not the final score from the model exceeds a predefined threshold.
Other hyperparameters of the model, including the learning rate and the total number of trees, were selected using a cross-validated grid search. To ensure that model overfitting did not occur, a hyperparameter to prevent iterative tree-addition was built into the training algorithm and optimized through 3-fold cross-validation. Another parameter, "scale_pos_weight", was introduced and set to a value equivalent to the ratio of negative class examples to positive class examples in order to tackle the imbalance in the dataset. This parameter was optimized as it is useful for unbalanced classes in that it controls the balance of positive and negative weights. This was followed by further optimization of hyperparameters across a sparse parameter grid and cross-validation across a grid search to ensure that an optimal combination of candidate hyperparameters was included in the algorithm. The final XGBoost model was calibrated post training using the method of isotonic regression (28). Calibration was implemented using the scikit-learn package in Python (23). When a model is well-calibrated, the probability associated with the predicted label reflects the likelihood of the correctness of the actual label (29). The reliability curves showing the true probability vs. the predicted probability of the XGBoost model before and after calibration are presented in Supplementary Figure S4.

FIGURE 2 | Patient encounter inclusion diagram. Initially, more than 28 million inpatient visits were included in the analysis; patient encounters were then filtered by the exclusion criteria and the prediction window requirements. Forty-one thousand nine hundred seventy patients were identified as positive for ischemic stroke based on our gold standard. The prevalence of ischemic stroke encounters was 5.9% in the training set, 5.8% in the hold-out test set, and 6.7% in the external validation set.
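With the cohort sizes reported in this study, the scale_pos_weight setting described above (the ratio of negative to positive class examples) works out as follows; the computation is trivial, but it makes the imbalance concrete:

```python
# scale_pos_weight compensates for class imbalance by up-weighting the rare
# positive class; the paper sets it to the negative:positive ratio.
n_positive = 41_970      # ischemic stroke encounters (from the paper)
n_negative = 673_866     # control encounters (from the paper)
scale_pos_weight = n_negative / n_positive
print(round(scale_pos_weight, 1))  # 16.1
```

In practice this value would be passed to the XGBoost classifier's `scale_pos_weight` parameter, so each stroke-positive example counts roughly sixteen times as much as a control during training.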
Statistical Analysis
Model performance was determined using a 80-20 train-test split assessed through area under the receiver operating characteristic (AUROC), equivalent to the c-statistic. We reported performance of the model on the test data and an additional external validation dataset (see Supplementary Information). The external validation data comes from a healthcare site and patients separate from those included during model training and testing. The performance of the model against the comparator, the CHA 2 DS 2 -VASc Score (Congestive heart failure, Hypertension, Age > 75, Diabetes Mellitus, Prior Stroke or transient ischemic attack (TIA) or thromboembolism, Vascular disease, Age 65-74 years, Sex category), was assessed by comparing the AUROCs of the model against the comparator on the 20% hold out test set. The 95% confidence intervals of the AUROC curves were calculated by bootstrapping the AUROC curves. The CHA 2 DS 2 -VASc Score was compared in a binary manner (low risk vs. high risk) rather than using risk stratification.
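The bootstrapped AUROC confidence interval described above can be sketched with a rank-based AUROC estimate. This is an illustration on synthetic labels and scores, not the study's code:

```python
import numpy as np

def auroc(y_true, scores):
    """Rank-based AUROC: the probability that a random positive outscores
    a random negative (Mann-Whitney statistic)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auroc_ci(y_true, scores, n_boot=1000, seed=0):
    """Percentile bootstrap: resample patients with replacement, recompute
    the AUROC each time, and take the 2.5th and 97.5th percentiles."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if 0 < y_true[idx].sum() < len(idx):   # need both classes present
            stats.append(auroc(y_true[idx], scores[idx]))
    return np.percentile(stats, [2.5, 97.5])

rng = np.random.default_rng(42)
y = rng.integers(0, 2, 500)                 # synthetic outcome labels
s = y + rng.normal(0.0, 0.8, 500)           # scores loosely tied to the labels
lo, hi = bootstrap_auroc_ci(y, s)
print(f"AUROC {auroc(y, s):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The point estimate should fall inside the resulting interval, and the interval narrows as the test set grows, which is why the study's very large hold-out set yields such tight confidence bounds.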
RESULTS
In total, 28 million inpatient encounters were initially included in our analysis and 715,836 adult patients were included after applying exclusion criteria and the prediction window condition requirements (Figure 2). Of these encounters, 41,970 patients were identified as positive for ischemic stroke based on our gold standard and 673,866 patients with no stroke diagnosis were classified as the control group. The external validation set consisted of 813,107 total inpatient visits, 56,143 of which were included after applying exclusion filters. Of the 56,143 encounters in the external validation set, 3,790 were identified as positive for ischemic stroke and 52,353 remained in the control group.
Patients who experienced an ischemic stroke were, on average, likely to be older and were more likely to have hypertension, a history or stroke, diabetes or cardiovascular comorbidities (Tables 3, 4).
A total of 41,970 patients with ischemic stroke were included in training and testing of the prediction model. In the test set, XGBoost achieved an area under the receiver operating characteristic (AUROC) curve of 0.880 (95% CI [0.873-0.879]) for prediction of ischemic stroke ( Table 5). Logistic Regression and multilayer perceptron (MLP) models both achieved comparable AUROCs of 0.862. Though XGBoost and Logistic Regression both performed well, XGBoost may have achieved a slightly higher AUROC for this task because Logistic Regression does not process null values: imputation of missing data must be done manually for Logistic Regression, which is not the case for XGBoost. The XGBoost model had a higher specificity than the Logistic Regression model on the hold-out test set. Also of note, several prior studies have utilized the XGBoost algorithm to construct models with superior predictive capacity over existing risk-scoring systems, across a wide range of indications (30)(31)(32). The comparator, the CHA 2 DS 2 -VASc risk score, achieved an AUROC of 0.7565 (95% CI [0.7531-0.7569]) (Figure 3).
Feature importance was also assessed using SHAP (SHapley Additive exPlanations; v0.39.0) (33) analysis to determine the model features that most significantly impacted ischemic stroke predictions. The SHAP analysis of feature correlation and distribution identified the three most significant features for prediction of ischemic stroke: history of stroke, age, and systolic blood pressure (Figure 4). Important features also identified in the analysis included hypertension, mean hemoglobin, blood urea nitrogen, and temperature. A feature correlation plot is also presented as Supplementary Figure S3B.
Study Summary
This study describes the development of a machine learning algorithm to accurately predict the onset of ischemic stroke from 1 day up to 1 year following the patient encounter using only data automatically collected from the patient EHR. Although there are existing tools for stroke risk assessment over longer prediction windows (34,35), the goal of this study was to develop an MLA tool to aid in the patient selection process for clinical trials by identifying patients at high risk for ischemic stroke within the time period of a study. The XGBoost algorithm obtained an AUROC, PPV, NPV, sensitivity, and specificity of 0.864, 0.188, 0.981, 0.800, and 0.749, respectively, on the external test set, indicating the tool's ability to maintain high performance in stroke prediction up to 1 year after an initial inpatient encounter. The use of EHR-based machine learning provides a fast and cost-effective means to identify patients at higher risk of stroke and may improve patient cohorts for clinical trials by accurately predicting shorter-term stroke risk. The ability to classify patients as high risk or low risk may guide inclusion and exclusion criteria, helping ensure that included individuals stand to benefit from successful therapies through improved quality of life and decreased incidence of stroke. Importantly, the high negative predictive value of 98.1% indicates the algorithm's ability to help researchers exclude patients who may have otherwise qualified for a clinical trial based on qualitative assessments or patient disclosure of factors indicating a higher risk for stroke. The MLA developed and validated in this study outperformed the CHA2DS2-VASc scoring system, which has been shown to be an effective clinical tool for predicting the 1-year risk of stroke and thromboembolism (TE) in patients both with and without AF (34)(35)(36).
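The external-set metrics quoted above are internally consistent, which can be checked directly: given the reported sensitivity, specificity, and the stroke prevalence of 3,790 out of 56,143 external-set encounters, PPV and NPV follow from Bayes' rule. This is a quick verification, not part of the study's code.

```python
# Reported external-set operating point and prevalence:
sens, spec = 0.800, 0.749
prevalence = 3790 / 56143  # stroke encounters / total external-set encounters

# Bayes' rule for predictive values at this prevalence:
ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)

print(round(ppv, 3), round(npv, 3))  # ~0.187 and 0.981, consistent with the
                                     # reported 0.188 / 0.981 given input rounding
```

The low PPV alongside a 98.1% NPV is exactly what a ~6.8% prevalence implies at this sensitivity/specificity, which is why the authors emphasize the algorithm's value for *excluding* high-risk patients.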
While the gold-standard scoring system in wide use for stroke risk assessment is the Framingham Stroke Risk Profile (FSRP) (34,35), the FSRP predicts stroke risk 5 to 10 years prior to the occurrence of stroke and partially relies on subjective information received directly from patients via a technician-administered questionnaire and a self-administered questionnaire (37). The ability to predict stroke within 1 year may identify patients who have a more immediate risk than those identified by the FSRP, making them viable participants for clinical trials, which occur over limited timeframes. For this study, we chose the CHA2DS2-VASc score as a comparator in order to compare the MLA in this study with a similarly objective risk score that can provide 1-year predictions (36).
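For reference, the CHA2DS2-VASc comparator is a simple additive score. A minimal sketch using the standard published weights follows; this is an illustration, not the study's implementation, and the exact variant used in the study may differ.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """CHA2DS2-VASc score using the standard published weights
    (a sketch, not the study's implementation)."""
    score = 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A
    score += 1 if female else 0                          # Sc (sex category)
    score += 1 if chf else 0                             # C (congestive heart failure)
    score += 1 if hypertension else 0                    # H
    score += 1 if diabetes else 0                        # D
    score += 2 if prior_stroke_tia else 0                # S2 (prior stroke/TIA/TE)
    score += 1 if vascular_disease else 0                # V
    return score

# A 78-year-old woman with hypertension and a prior TIA: 2 + 1 + 1 + 2 = 6
print(cha2ds2_vasc(78, True, False, True, False, True, False))  # 6
```

Because the score uses only seven coarse binary/age inputs, it cannot exploit the continuous vitals and lab trends available to the MLA, which is one plausible reason for its lower AUROC.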
Significant Features
ML methods can provide insight into the importance of individual variables in predicting stroke. The abc (age, biomarker, and clinical history) stroke score was recently shown to provide short-term stroke risk assessment in AF patients (38). (Supplementary Table S2 shows performance metrics for our XGBoost, Logistic Regression, and MLP MLAs on the hold-out test set and external validation test set using the same inputs as the CHA2DS2-VASc risk score.) In line with these previous findings, history of prior stroke and age were identified as the two most important ML features in our study (Figure 4). Further experimentation examined the performance of the MLAs when stroke history was removed; results are presented in Supplementary Table S3 and Supplementary Figure S2. Epidemiological studies continue to support the benefits of blood pressure reduction for lowering the risk of stroke (39), as elevated blood pressure levels (>115/75 mm Hg) contribute to almost two-thirds of the global stroke burden. Additionally, both systolic and diastolic blood pressure were ranked among the most important features (top 20), with higher values indicating a higher risk of stroke onset. While diabetes is a known independent risk factor for stroke onset, recent studies have shown that elevated glucose levels and glucose fluctuations (variance) can increase stroke risk, even among individuals without diabetes (40). Similarly, we found that a high variance in glucose level correlated positively with stroke onset. Although a diagnosis of diabetes increased the risk of stroke, the association between mean glucose level (the least important feature on the SHAP plot) and stroke onset was not straightforward. It is plausible that fluctuation in glucose level is more informative than the mean glucose measurement, particularly in non-diabetic subjects. Fluctuations in BMI, as measured by standard deviation, were also positively correlated with stroke risk.
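Variability features like the glucose fluctuation discussed above are typically derived by aggregating repeated measurements per patient. A minimal pandas sketch is shown below; the column names and values are hypothetical, not the study's schema.

```python
import pandas as pd

# Hypothetical long-format lab table (schema invented for illustration):
# one row per measurement, multiple measurements per patient.
labs = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "glucose":    [95, 180, 110, 100, 102, 98],
})

# Per-patient mean and standard deviation: the kind of fluctuation
# feature (glucose variance, BMI std) discussed above.
feats = labs.groupby("patient_id")["glucose"].agg(["mean", "std"])
print(feats)  # patient 1 has a much larger std (~45.4) than patient 2 (2.0)
```

Two patients can have similar mean glucose yet very different variability, which is precisely the signal the mean-only feature misses.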
These findings are consistent with several previous studies showing that the risk of stroke increases in individuals who lose or gain weight (41). The associations between BMI and stroke risk were inconclusive, possibly reflecting a previously observed weight paradox in stroke outcomes, particularly in the elderly (>75% of our study participants were over 60 years) (42,43). We also found that a higher potassium concentration was associated with a lower risk of stroke, whereas lower potassium level was associated with a higher stroke risk. These findings are consistent with previous studies reporting associations between low serum potassium and stroke in healthy populations (44) and in adults with hypertension (45).
Comparison to Other AI Studies
Several studies have examined the use of ML and artificial intelligence (AI) based tools for patient care related to stroke. Ding et al. broadly discuss the role of AI and ML in stroke care and its implications for future stroke management (46). This includes the use of AI to analyze electrocardiogram and ultrasound data for risk stratification and projection of stroke outcomes in patients with known risk factors, and to aid with stroke diagnosis using imaging data (46).
Study Limitations
This study has several limitations. First, the performance of the stroke prediction algorithm was not assessed in prospective settings due to the retrospective nature of the study. To determine how clinicians may respond to predictions of stroke risk, prospective validation is necessary. Prospective validation is also required to determine the extent to which algorithm predictions may affect resource allocation or patient outcomes. Second, stroke risk factors were identified solely via EHR data, and healthcare providers may not properly code stroke risk factors or relevant inputs in the EHR (54). Previous studies have reported limited accuracy associated with ICD-9 stroke codes in identifying ischemic strokes (55,56). However, ICD-10 stroke codes, as used in this study, are more specific; for instance, ICD-10 codes specify hemorrhage locations and distinguish between thrombotic and embolic ischemic stroke. Moreover, recent studies have validated the performance of ICD-10 codes for identifying acute ischemic stroke (57). Finally, it is important to note that while the CHA2DS2-VASc score is a widely used clinical risk-scoring tool for predicting stroke in AF patients (36,(58)(59)(60), the cohort utilized in the current study included both AF and non-AF patients. Although the CHA2DS2-VASc score has been validated for use in non-AF patients, and several clinical studies have demonstrated its effectiveness in predicting stroke incidence in non-AF patients (61)(62)(63)(64), these validation studies are all based on retrospective datasets. The incidence of stroke was predicted by the combination of a large number of EHR features, including several vital signs. While the variation of individual vital signs and lab measures within the normal range is not informative for disease prediction, the ML algorithm can use the variation of a large number of variables to capture a latent pattern for disease prediction.
Nevertheless, the biological basis for the contribution of individual vital signs to the ML prediction model is not readily interpretable.
CONCLUSION
Clinical trials ensure the safety and efficacy of therapeutics as they transition from development to human testing. However, the success of these measures relies upon a well-identified study cohort. The machine learning algorithm presented in this paper can be used to more accurately identify patient cohorts at risk for ischemic stroke within 1 year who are appropriate candidates for anticoagulant therapy studies. This may enable more effective clinical trials of potential ischemic stroke preventative therapies.
DATA AVAILABILITY STATEMENT
The data analyzed in this study was obtained from a proprietary longitudinal electronic health record (EHR) repository that includes over 700 inpatient and ambulatory care sites located in the U.S. Requests to access the processed data and statistical information should be directed to Qingqing Mao, qmao@dascena.com.
AUTHOR CONTRIBUTIONS
RD, QM, and JC contributed to conception and design of the study. JM, YE, and LR assembled the dataset, performed the experiments, and performed the statistical analysis. JM, YE, LR, GB, SS, and AG-S wrote the manuscript. All authors contributed to the article and approved the submitted version.
Unlocking the Potential: A Systematic Literature Review on the Impact of Donor Human Milk on Infant Health Outcomes
Human breast milk is considered the healthiest and best source of nutrition for both premature and full-term infants, with many health benefits associated with its consumption. Some mothers are unable to produce an adequate quantity of milk to meet their infants' needs, particularly in cases of premature birth or challenges with breastfeeding. Especially for the most vulnerable premature infants, donor human milk (DHM) provides a helpful bridge to effective breastfeeding. Even with advances in infant formulas, no other dietary source can match the bioactive matrix of benefits found in human breast milk. This literature review discusses the risks associated with prematurity and explores the use of DHM in the care of premature infants. DHM helps prevent substantial preterm complications, especially necrotizing enterocolitis, bronchopulmonary dysplasia, and late-onset sepsis, which are more commonly seen in infants given formula made from cow's milk. The review gives insights into the benefits of DHM, such as its immunological and nutritional value, which meet basic infant needs. When medical distress prevents mothers from producing enough breast milk for their infants, pasteurized donor human breast milk should be made accessible as an alternative feeding option to ensure infants remain healthy and nourished. A systematic literature search was conducted using the PubMed and Google Scholar databases and other sources. A total of 104 articles were identified, of which 35 were included after filters were applied, eligibility was checked, and out-of-scope references were excluded. Human milk banking should be incorporated into programs encouraging breastfeeding, highlighting lactation in mothers and using DHM only when required.
Introduction And Background
Worldwide, merely 38% of infants meet the WHO's recommendation to breastfeed exclusively for the initial 180 days (six months) of life, even though the guidelines support this practice [1]. While a mother's own milk is the preferred choice for all newborns, many preterm and critically ill infants may not receive an adequate amount of breast milk during the initial days of their lives. Despite the numerous benefits linked with providing infants with their mother's milk, many mothers of low birth weight (LBW) infants face challenges in expressing sufficient quantities of milk due to factors such as illness, stress, immaturity of mammary secretory cells, and other issues related to premature birth [2]. Donor human milk (DHM) is the prescribed alternative when the mother's milk is unavailable or insufficient. This suggestion aligns with recommendations from the European Society for Paediatric Gastroenterology, Hepatology, and Nutrition, the American Academy of Pediatrics, and the WHO [3]. The 2018 recommendations from the WHO and UNICEF emphasize providing DHM to infants who need supplementation or cannot consume their mother's milk. This is particularly crucial for LBW infants, including those with very low birth weight (VLBW), and other vulnerable infants [4]. DHM refers to milk expressed and willingly contributed by lactating women who are not the biological mothers of the intended recipients [5]. DHM is beneficial because it enhances feeding tolerance and reduces the likelihood of necrotizing enterocolitis, bronchopulmonary dysplasia, and late-onset sepsis [6].
Since LBW is known to be a significant risk factor for both infant mortality and morbidity, it has grown into a significant global public health issue [7]. In 2020, 19.8 million babies were born with LBW, accounting for approximately 14.7% of all births worldwide that year. This rate was noticeably higher in Southern Asia, reaching 24.4% [8]. The Academy of Breastfeeding Medicine advises selecting DHM as the primary supplement, giving it precedence over formula. This emphasis is particularly notable in cases where supplementation becomes necessary, such as dehydration, hypoglycemia, and hyperbilirubinemia [9]. Over recent decades, technological advancements in medical treatments have significantly enhanced the chances of survival of infants with VLBW. Nevertheless, despite these advances, there has not been a proportional reduction in the morbidity associated with VLBW [10].
The utilization of donor milk in hospitals is influenced by several factors at both the institutional and individual levels. These include the absence of standardized policies and insufficient staff training on its use. Additionally, the perspectives and knowledge of staff members and parents regarding the health benefits and safety of donor milk play a significant role in shaping its adoption [11]. Establishing the acceptability of human milk banking involves validating its value and addressing community beliefs and misconceptions. This can be achieved by offering comprehensive information and increasing awareness about the crucial role of DHM. This strategy is designed to enhance community conviction and promote acknowledgment of breast milk donation and the use of DHM [12]. This systematic review aims to provide insights into the importance of DHM in improving outcomes in premature infants. We searched for information using terms such as "lactation management," "human milk bank," "low birth weight," "immunity," and "preterm." Articles available in English and published in the previous five years, as well as free full-text and open-access articles, were included in this study. A total of 104 articles were identified, of which 35 were included; filters were applied, eligibility was checked, and references that were out of scope, of limited rigor, or with insufficient information were excluded.
Review
Figure 1 shows the selection process used in the study.
FIGURE 1: The selection process used in this study. Adapted from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).
Discussion
Donor Human Milk

DHM is obtained from donors and undergoes extensive screening, heat treatment, and microbiological testing before being supplied to patients in the community or in medical facilities [13]. DHM represents human milk that has been pooled and pasteurized, meeting international standards for processing. Its application in premature infants has been associated with several beneficial health outcomes, including mitigation of feeding intolerance, necrotizing enterocolitis, late-onset sepsis, bronchopulmonary dysplasia, and various other conditions [14]. Substantial research suggests that mother's own milk and donated human milk complement each other, proportionately improving child health and survival rates [15]. The complementing effect is particularly noticeable when all newborns are fed only breast milk. At birth, an infant's gastrointestinal tract and respiratory system possess low levels of the antioxidant and anti-inflammatory properties associated with the immune system. In addition, the infant has poorly developed physical barriers, including tight junctions, low gastrointestinal acidity (a chemical barrier), a delayed T-cell response, and decreased production of immunoglobulins, especially secretory immunoglobulin A (IgA). It has been shown that healthy consumption of human milk influences the composition of gut bacteria, stimulates the gastrointestinal mucosa, and strengthens the developing child's immune system, probably because of the bioactive components in human milk [16]. Growth factors found in human milk, such as epidermal growth factor, may lower the permeability of the intestinal lumen, support the growth of the small intestinal epithelium, and protect the gastrointestinal barrier, acting as a defense against ailments in a fragile gut [17].
Necrotizing enterocolitis: Necrotizing enterocolitis, which affects 5-12% of newborns with extremely low birth weight, is the leading cause of death from gastrointestinal disorders in preterm neonates. It can present with slow, subtle symptoms at first; some newborns show early signs such as feeding intolerance [18]. Indications of feeding intolerance include elevated gastric residual volume resulting in vomiting, abdominal distention, absence of stool, and more frequent apnea episodes [19]. The antitoxin activity of breast milk IgA against the enterotoxins of Vibrio cholerae and Escherichia coli may be important in preventing infantile diarrhea [20].
In contrast to premature infants fed formula, premature infants fed human milk have a lower prevalence of necrotizing enterocolitis. Mother's milk contains bioactive substances with both bactericidal and immune-regulating properties that can protect against sepsis. Human milk has antibacterial properties that resist the growth of Staphylococcus aureus, Candida species, and Escherichia coli. It is recognized that DHM can be used in place of mother's milk. Human milk contains probiotics, prebiotics, L-arginine, glutamine, growth factors, oligosaccharides, and lactoferrin. These substances can help colonize beneficial intestinal bacteria, prevent the multiplication of infectious microorganisms, maintain intestinal mucosal integrity, and increase resistance. As a result, this reduces the dangers of sepsis and necrotizing enterocolitis [14]. To reduce the risk of transmitting pathogens from donor mothers to preterm infants, DHM is commonly pasteurized. The absence of milk banks can lead to detrimental consequences for infants [21].
Late-onset sepsis: Sepsis manifesting more than 72 hours postpartum is referred to as late-onset neonatal sepsis. It affects around 10% of babies born preterm and is linked to long-term neurodevelopmental problems [22]. DHM includes various immunological components, including antibodies such as IgA, IgG, and IgM, white blood cells, and immune-regulating substances. These elements enhance the infant's immune system, providing passive immunity and protecting against bacterial pathogens that could potentially cause sepsis [16]. Moreover, it promotes the development of a resilient gut barrier that helps avert intestinal permeability. Consequently, this lowers the likelihood of bacteria moving from the gut to the bloodstream, decreasing systemic infections, including sepsis [23].
Bronchopulmonary dysplasia: Bronchopulmonary dysplasia is the predominant complication arising from extremely premature birth. Affected infants show abnormal or impaired lung development, which may cause permanent changes in the function of the lungs and heart. One of the most critical components of its treatment and prevention is adequate nutritional support. The protective benefits of DHM against the development of bronchopulmonary dysplasia have been highlighted by several studies [24]. Preterm infants who receive human milk, whether from their biological mothers or through donors, may experience improved feeding tolerance compared to those fed formula. This enhanced feeding tolerance can positively impact overall health and alleviate stress on the respiratory system [25]. Antibodies and other protective elements found in DHM prevent infections and also reduce the risk of lung inflammation and damage [20]. The well-balanced combination of proteins, fats, and carbohydrates in human milk has the potential to promote the growth and steady progress of premature infants [26].
Why Human Milk Over Formula Milk?
The infant's developing defense mechanism and shield against infections depend on a variety of bioactive substances and immune factors found in human milk, such as microRNAs, antibodies, immunoglobulins, lactoferrin, lysozyme, growth factors, antimicrobial peptides, white blood cells, and human milk oligosaccharides [27]. In a comparison between infants fed DHM and those fed their mother's milk, the latter had lower levels of Clostridiaceae and higher amounts of bifidobacteria. However, when it comes to the metabolic and functional traits of the microbiota, which differ significantly from those of formula-fed newborns, there is no discernible difference between mother's milk and DHM [15].
DHM's advantages might result from less exposure to formula, which can exacerbate inflammation and intestinal permeability [17]. Formula feeds are expensive and may also result in dyspepsia. Higher energy and protein levels can be obtained from formula prepared from cow's milk; still, formula lacks many of the beneficial ingredients of human milk, such as the "personalized" ingredients exclusive to the mother's milk. Additionally, formula may exacerbate inflammation and raise the possibility of necrotizing enterocolitis [28].
Breastfeeding Practices and Infant/Child Mortality Rates in India
LBW is defined as a birth weight below 2,500 grams, and it is a significant public health problem in India and throughout the world. Infants weighing less than 1,500 grams are considered VLBW, which is associated with significant morbidity and mortality [29]. Even though there is a global decline in the child mortality rate, India's infant mortality rate remains higher than 30 per 1,000 live births. Lack of access to healthcare, female illiteracy, and ignorance of the dietary requirements of newborns and premature infants are the three main factors contributing to childhood malnutrition. Because of their medical conditions, these infants are often fed poorly and have problems with longevity and cognitive development. The National Family Health Survey-5 (2019-21) provides a thorough report on population, health, and nutrition for each Indian state and union territory. According to this survey, the rate of children (below age 3) breastfed within an hour of birth is 41.8%, whereas 63.7% of infants younger than six months are exclusively breastfed. Neonatal, infant, and under-five mortality are reported to be 24.9, 35.2, and 41.9, respectively, per 1,000 live births [30].
Human Milk Banking Process
In 2019, an international expert gathering, co-sponsored by the WHO and the University of Zurich, examined human milk banks' setup, operation, and supervision. The meeting underscored the need for, and consequences of, handling the utilization of DHM in a way that gives precedence to safeguarding, promoting, and enhancing mother's milk whenever feasible [4]. Milk banks typically follow standardized procedures for gathering and handling donated milk. The milk bank provides donors with guidelines on proper techniques for cleaning and pumping breasts. It is standard practice to combine milk from different pumping sessions, and the collected milk is kept in containers provided by the bank. Each container must be properly labelled with the donor's name and the time and date of expression [31,32]. A thorough screening procedure involving an interview, serological testing, and medical clearance is required of all donors. Serology includes testing for hepatitis B and C, HIV, and the human T-cell leukemia virus. Once a mother is accepted as a donor, she receives training on the safe collection and storage of her milk. Subsequently, the milk is refrigerated, stored, and transported to the milk bank. After the milk has thawed, a bacterial culture is obtained. The milk is pasteurized for 30 minutes at 62.5°C in an industrial pasteurizer and then cultured again. While the final culture results are awaited, the milk is frozen again. Once human milk is requested from the milk bank, it is delivered, thawed, and administered as needed (Figure 2) [31].
Human Milk Bank Establishment in India
The Mother Milk Bank "Jeevan-Dhara" was established in Udaipur in 2015 by an NGO with ties to the government. The first community milk bank, "Divya Mother Milk Bank," was founded in Udaipur as a private institution. Compared to other states, Rajasthan currently has the highest number of milk banks, roughly 19, called Aanchal Mother Milk Banks [33]. Notably, over five years of bank operation, 13% of all donations came from mothers who delivered babies weighing less than 500 grams, with roughly 40% of this number referring to women who gave birth at exceptionally low gestational ages of less than 25 weeks. The composition data validate the exceptional value of human milk provided by mothers of premature newborns, making it a genuine biological jewel [15].
In India, the establishment of human milk banks follows a structured framework known as the Lactation Management Centre (LMC), which operates across three distinct levels. Lactation Management Units are located at all delivery sites; LMCs collaborate with Special Newborn Care Units at the district level; and Comprehensive LMCs are established in collaboration with medical colleges at the tertiary level. The National Guidelines on Lactation Management Centres in Public Health Facilities indicate the presence of approximately 80 operational milk banks nationwide (Figure 3) [34]. All hospitalized mothers can receive comprehensive lactation support and management from the Comprehensive LMC in a medical facility. It offers facilities for collecting, screening, filtering, storing, and distributing donated human milk to infants who cannot receive their mother's milk. Furthermore, the Comprehensive LMC makes it easier for a woman to express and store breast milk for her child to consume [5].
The LMC is housed within a medical facility and was created to provide lactation support to all mothers who are patients there. Its primary purpose is to gather, store, and distribute a mother's breast milk so that her child can be fed [5].
The Lactation Support Unit is located at delivery points such as primary health centers, community health centers, and sub-district hospitals. Its purpose is to provide lactation support to all mothers receiving care at these healthcare institutions [5].
Problems Relating to Donating Human Milk
Individual barrier: The most often mentioned obstacles were mothers' ignorance of the donation process, their lack of time because of everyday chores, the physical and mental strain of pumping, freezing, or storing milk, and the expense and travel time to the milk bank [35].
Social barrier: Women face barriers because societal taboos prohibit them from donating or receiving milk. The time that donating milk requires also curbs their willingness to do so. Religious customs were the primary cause of reluctance both to accept donated DHM for feeding and to donate breast milk. The primary obstacle to counselling mothers was cultural reluctance, with milk banking viewed as unethical. For various reasons, including the possibility of infection, the loss of their child's attachment, safety concerns, and the loss of milk nutrients, some mothers declined to accept donated milk [33].
Systemic barrier: The health system-related barriers to breast milk donation were identified as inadequate infrastructure; improper collection and storage of breast milk; the absence of trained staff; gaps in laboratory testing and equipment upkeep; and the potential for milk contamination, particularly during thawing. These issues must be addressed by ensuring adequate safety measures. Hospitals discouraging donation due to infection risk were also identified as a barrier [35].
Recommendation
Human milk banking should be incorporated into programs encouraging breastfeeding, highlighting lactation in mothers and using DHM only when required [32]. Promote the creation and advancement of plans of action that ease the use of DHM within healthcare institutions. Urge healthcare providers to inform parents about the accessibility and advantages of DHM, especially in neonatal intensive care units and similar settings catering to preterm infants in need of extra assistance.
Training should be provided to staff to address any potential cultural barriers that might affect the relationships between donors and recipients. Social behavior communication change interventions should target specific groups to remove obstacles and eliminate common misconceptions about milk banking procedures among communities.
We can address worldwide inequities in access to DHM by focusing on the expansion and long-term sustainability of human milk banking in low-resource areas [26]. Public policies must be created to improve access to DHM and raise public knowledge of its advantages, particularly among groups with low breastfeeding rates [32].
An ethical framework should be provided to direct national laws regarding the proper procurement and application of DHM. The framework would be based on the ideas of justice and equity, and it would include safeguards for both milk donors and recipients as well as respect for autonomy and human rights. Before proceeding, both the donor and the recipients should provide informed consent. Table 2 shows the summary of the included studies.
Author, Year, Finding
Bramer et al. [1], 2021: DHM can lessen the incidence and severity of bronchopulmonary dysplasia and necrotizing enterocolitis in preterm infants.
Unger et al. [2], 2014: Donor milk banks handle large volumes of products and require strict protocols to prevent contamination, misidentification, or infection; they require a significant financial outlay.
Piemontese et al. [6], 2019: One benefit of using pooled milk is that it contains donor milk from various lactation stages.
2019: DHM would reduce morbidity, particularly from the adverse effects of delayed breastfeeding and baby formula allergies.
Shenker et al. [13], 2023: The possibility that mothers may become less motivated to establish the milk supply if DHM is readily available has been one of the main arguments made against its use.
Young et al. [14], 2020: During the first year of lactation, milk IgA gradually drops; DHM total IgA was not correlated with milk postpartum age.
Hoban et al. [17], 2020: The advantages linked to the DHM era in this research might simply result from a reduction in formula exposure, which has been known to raise intestinal permeability and inflammation.
Costa et al. [19], 2018: In comparison to donor breast milk, formula feeding may boost short-term growth rates in preterm or low-birth-weight infants, but it also doubles the risk of necrotizing enterocolitis development, according to a Cochrane review by Quigley and McGuire.
Arboleya et al. [23], 2020: A distinct gut microbiota profile is observed in premature babies fed DHM as compared to babies breastfed by their mothers.
Villamor-Martínez et al. [24], 2018: In very preterm/VLBW infants, DHM may protect against bronchopulmonary dysplasia. Pasteurization, however, seems to lessen the advantageous qualities.
Kumbhare et al. [28], 2022: The gut microbiome development of VLBW infants was found to be significantly influenced by the source of human milk (own mother versus donor milk), while the type of fortifier (human versus bovine) had a negligible effect.
Mantri et al. [33], 2021: Findings from Ethiopia are consistent with this study, which found that some mothers were hesitant to give their infants donated human milk because of the possibility of disease transmission.
Mathias et al. [35], 2023: According to many studies, one of the biggest obstacles to milk donation is people's ignorance of milk banking. These results align with those obtained by Wambach et al.

Exclusive breastfeeding for the first six months of life is the most nutritious feeding option. It is well-known that human milk helps prevent various infant diseases and is beneficial to the health of premature infants.
In situations where nursing presents difficulties because of illness or for any other reason, DHM may be used. For premature babies, DHM is crucial because it serves as a transitional food source on the way to effective breastfeeding. Although it is handled and processed by human milk banks, its qualities help to prevent postnatal growth deficits and offer health benefits. In addition, DHM lessens the need for formula, a known risk factor for bronchopulmonary dysplasia, late-onset sepsis, and necrotizing enterocolitis. Milk must be pasteurized using high-quality Holder pasteurization to preserve essential nutrients and strengthen newborns' immune systems. Particularly for premature infants, human milk banking is an absolute necessity. Inequities in the prevention of diseases with high death rates may result from a lack of access to socio-health infrastructures, such as milk banks. To sum up, DHM enhances susceptible infants' overall health and well-being by helping them meet their nutritional needs.
Methodology
This review explores the vital role DHM plays in the health outcomes of premature infants. A systematic literature search was carried out using the PubMed and Google Scholar databases in conjunction with reliable sources such as the websites of UNICEF and WHO. The search strategy used was ((((((((((((infant) OR (newborn)) OR (neonate)) AND (low birth weight)) OR (preterm)) OR (early birth)) OR (premature)) AND (donor human milk)) OR (donor breast milk)) OR (donated mother milk)) OR (donor milk)) AND (milk bank)) OR (lactation management[MeSH Terms]), and the filters applied were free full text and publication within the last five years.
FIGURE 2: A flow chart illustrating the human milk banking process.
FIGURE 3: The roles and levels of centers that manage lactation, located in Indian facilities.NICU, neonatal intensive care unit; SNCU, special newborn care unit
Table 1 displays India's infant and child mortality rate and breastfeeding practices.
Association between the Preoperative C-Reactive Protein-to-Albumin Ratio and the Risk for Postoperative Pancreatic Fistula following Distal Pancreatectomy for Pancreatic Cancer
Postoperative pancreatic fistulas (POPFs) are major postoperative complications (POCs) following distal pancreatectomy (DP). Notably, POPF may worsen the prognosis of patients with pancreatic cancer. Previously reported risks for POCs include body mass index, pancreatic texture, and albumin levels. Moreover, the C-reactive protein-to-albumin ratio (CAR) is a valuable parameter for prognostication. On the other hand, POCs sometimes lead to a worse prognosis in several cancer types. Thus, we assumed that CAR could be a risk factor for POPF. This study investigated whether CAR can predict POPF risk in patients with pancreatic cancer following DP. This retrospective study included 72 patients who underwent DP for pancreatic cancer at Ehime University between January 2009 and August 2022. All patients underwent preoperative CAR screening, and risk factors for POPF were analyzed. POPF was observed in 17 of 72 (23.6%) patients and was significantly associated with a higher CAR (p = 0.001). Receiver operating characteristic curve analysis determined the cutoff value for CAR to be 0.05 (sensitivity: 76.5%, specificity: 88.9%, likelihood ratio: 6.88), indicating an increased POPF risk. Univariate and multivariate analyses revealed that CAR ≥ 0.05 was a statistically independent factor for POPF (p < 0.001, p = 0.013). Therefore, CAR has the potential to predict POPF following DP.
Introduction
Distal pancreatectomy (DP) is the standard surgical procedure for tumors located in the pancreatic body or tail, such as pancreatic cancer, neuroendocrine neoplasm, and mucinous cystic neoplasm [1]. A postoperative pancreatic fistula (POPF) is one of the most serious postoperative complications (POCs) of DP. Despite the development of energy devices and perioperative management, the incidence of POPF remains between 17% and 40% [2][3][4]. Additionally, morbidity rates of POPFs reach up to 30% because of their potential to lead to intraabdominal bleeding or abscess [5], with the mortality rates of DP reaching approximately 5%, even in high-volume centers [6]. Recent evidence showed that variables such as obesity, estimated blood loss, nutritional status, and surgical methods for pancreatic resection are clinical predictors of POPF after DP [7][8][9]. Additionally, more recent reports showed that several surgical methods, including spraying fibrin glue, wrapping hydrogel [10] or a polyglycolic acid sheet [11], and using fibrin sealant [12], could reduce the incidence of POPF. In contrast, recent studies concluded that POPF occurrence could not be predicted using any clinical variables [13] and found that reinforced staplers did not reduce POPF incidence [14]. Thus, there is an urgent need to identify more robust factors that may help predict the risk of POPF. The C-reactive protein (CRP)-to-albumin ratio (CAR) was initially developed as a prognostic factor for patients with sepsis [15]. However, many studies showed that CAR is associated with prognosis in several types of cancer, including pancreatic cancer [16][17][18]. Moreover, a recent meta-analysis revealed that CAR is a predictive factor for pancreatic cancer patients [19]. On the other hand, CAR has also been associated with POCs such as anastomotic leakage in esophageal and colorectal surgery [20,21].
Considering this evidence, we hypothesized that CAR can predict not only prognosis but also POCs such as POPF. In addition, based on the relationship between POPF and malnutrition, this study aimed to determine whether CAR could be a potential predictor of POPF in patients who underwent DP for pancreatic cancer.
Patients
Between January 2009 and August 2022, 72 patients underwent DP for pancreatic cancer at Ehime University Hospital. We retrospectively analyzed the medical records of these patients. The inclusion criteria were as follows: (1) pancreatic cancer patients with preoperative or postoperative pathological diagnosis, (2) cases with resectable pancreatic cancer, and (3) patients with a tolerance for curative surgery. The exclusion criteria were as follows: (1) non-radical resection, (2) DP with celiac artery resection, and (3) peritoneal dissemination. However, the presence of neoadjuvant therapy such as chemotherapy and radiation was not included in the exclusion criteria. The study protocol was reviewed and approved by the ethics committee of the Ehime University Hospital in 2022. All patients or their guardians had verbally consented to use their medical information for scientific research (Ethics approval number: 2206005). Obtaining informed consent from all patients was waived because of the retrospective nature of the study. All patients underwent DP with splenectomy and lymph node dissection, with the closure of the pancreatic remnant performed using a stapler. The drainage tube was placed into the subphrenic space or pancreatic stump, depending on the surgeon's decision.
Clinicopathological Data
The following data were collected from medical records: occurrence of POPFs, demographic variables (sex and age), anthropometric parameters (height, weight, and BMI), comorbidities, American Society of Anesthesiologists (ASA) physical status classification, blood transfusions, estimated blood loss, operative time, and serum albumin levels. POPFs were classified according to the International Study Group of Pancreatic Fistula (ISGPF) definition and grading [22]. In this study, grade B and higher indicated clinically relevant POPFs, which are symptomatic and require interventions such as antibiotic therapy or drainage for grade B fistulas and resuscitation or exploratory laparotomy for grade C fistulas. Drain amylase was monitored on postoperative days 1, 3, 5, and 7.
Nutritional Assessment Using CAR
CAR was calculated as CAR = [CRP (mg/dL)]/[albumin (g/dL)]. This calculation was applied in the same way regardless of sex [15].
Statistical Analysis
All statistical analyses were performed using SPSS, version 24 (SPSS Inc., Chicago, IL, USA). Differences between patients with and without POPFs were compared using Mann-Whitney's U test, Fisher's exact test, or a chi-squared test. Additionally, patients' backgrounds were expressed as the median and interquartile ranges for nonparametric distribution. The chosen cutoff value of CAR was based on a receiver operating characteristic (ROC) curve analysis using Youden's index. Similarly, the cutoff values for continuous variables were calculated using their respective ROC curves. The potential risk factors for POPFs were evaluated using univariate and multivariate analyses. Univariate analysis was conducted using the chi-squared or Fisher's exact test, followed by multivariate analysis using logistic regression to identify risk factors for POPFs. The results are presented as odds ratios and 95% confidence intervals. p values < 0.05 were considered to indicate statistical significance.
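The cutoff-selection step described above (maximizing Youden's J = sensitivity + specificity − 1 over candidate thresholds derived from the ROC curve) can be sketched as follows. This is an illustrative sketch only: the CAR values and outcome labels below are invented, not the study's data.

```python
import numpy as np

# Hypothetical illustrative data (NOT the study's): 1 = POPF occurred, 0 = no POPF.
popf = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
car  = np.array([0.09, 0.07, 0.06, 0.08, 0.02, 0.03, 0.01, 0.04, 0.02, 0.03])

best_j, best_cut = -1.0, None
for cut in np.unique(car):          # candidate thresholds from the observed values
    pred = car >= cut               # classify as "high CAR" at this threshold
    sens = (pred & (popf == 1)).sum() / (popf == 1).sum()   # sensitivity
    spec = (~pred & (popf == 0)).sum() / (popf == 0).sum()  # specificity
    j = sens + spec - 1.0           # Youden's J for this threshold
    if j > best_j:
        best_j, best_cut = j, cut

print(best_cut, best_j)             # threshold maximizing Youden's J
```

In practice an ROC routine (e.g., from a statistics package) yields the same candidate thresholds; the loop above just makes the maximization explicit.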
Patient Characteristics
Among the 72 patients included, 35 were men and 37 were women. The median age was 71 (range 42-87) years. POPFs occurred in 17 (23.6%) patients. There was no mortality due to POPFs in this study. There were no statistically significant differences between patients with POPFs and those without with respect to age, sex, ASA classification, neoadjuvant chemotherapy, surgical approach, and diabetes mellitus. However, preoperative albumin, CRP, and CAR differed significantly between patients with POPFs and those without (p = 0.001) (Table 1). Additionally, estimated blood loss, blood transfusions, the presence of a soft pancreas, and a Clavien-Dindo (CD) classification of grade III or higher showed no significant differences. In contrast, the operation time differed significantly (Table 2).
Calculation of Optimal CAR Cutoff Value
The ROC analysis showed that the areas under the curve of albumin, CRP, and CAR were 0.669, 0.866, and 0.888, respectively (Figure 1). Thus, CAR was a better predictive marker for POPFs following DP. Using the Youden index, a CAR of 0.05 was determined to be the appropriate cutoff value, with a sensitivity of 76.5%, a specificity of 88.9%, and a likelihood ratio of 6.88. Patients were categorized into two groups based on the CAR cutoff value: the High-CAR group (CAR ≥ 0.05, n = 21) and the Low-CAR group (CAR < 0.05, n = 51). POPFs were observed in 61.9% of patients in the High-CAR group and 7.8% in the Low-CAR group. Univariate analysis showed that a CAR ≥ 0.05 was a risk factor for POPFs after DP (p < 0.001). Similarly, multivariable logistic regression analysis revealed that a CAR ≥ 0.05 was an independent predictor of POPFs following DP (p = 0.013) (Table 3). Figure 1. Comparison and determination of the C-reactive protein-to-albumin ratio cutoff value using receiver operating characteristic curve analysis.
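As a quick consistency check (our arithmetic, not a computation from the paper): the reported likelihood ratio follows from the sensitivity and specificity via the positive likelihood ratio, LR+ = sensitivity / (1 − specificity):

```python
sensitivity = 0.765                          # 76.5% at CAR >= 0.05
specificity = 0.889                          # 88.9%
lr_plus = sensitivity / (1.0 - specificity)  # positive likelihood ratio
print(round(lr_plus, 2))                     # ~6.89, consistent with the reported 6.88
```

The small difference from the published 6.88 is attributable to rounding of the sensitivity and specificity to one decimal place.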
Discussion
POCs following pancreatectomy, including POPF, may worsen patient prognosis [22][23][24][25][26]. The incidence of POPF was approximately 21-40% in patients who underwent DP [4,27]. Several surgical techniques for pancreatic stump creation or pancreatic transection have been introduced to reduce the risk for POPF [4,5,28]. However, no robust evidence has been established to support these surgical techniques. In contrast, a number of POPF risk factors have been suggested, such as a soft pancreas, obesity, diabetes mellitus, a lower geriatric nutritional risk index (GNRI), lower albumin levels, blood loss, and an extended operation time [29][30][31]. Notably, a meta-analysis revealed that a soft pancreas, a higher BMI, blood transfusion, blood loss, and the operative time were major predictors of POPF [7]. In particular, BMI is a well-known risk factor for POPF following pancreatectomy, as the alternative fistula risk score for pancreatoduodenectomy includes BMI as one of its components [32,33]. In the present study, those parameters indeed showed a statistical relationship in the univariate analysis. However, recent data contrastingly indicated that definitive indicators for predicting POPF do not exist [13]. Therefore, exploring more reliable factors for POPF is an important point of clarification for surgeons. Low albumin, reflecting malnutritional status, is the most commonly reported risk factor for POPF after pancreatectomy [7,34,35]. Giardino et al. showed that preoperative elevated CRP levels were associated with an increased risk of POCs after pancreatectomy [36]. Under these circumstances, we hypothesized that preoperative CAR could be a novel predictor for POPF following DP. It is important to perform surgery based on preoperative POPF risk because POPF may result in increased medical costs and worsened patient prognosis [25,37]. Given these clinical issues, a parameter or strategy for simple preoperative assessment is needed.
Recent reports revealed that some parameters reflecting nutritional status and inflammation might predict the development of POCs following pancreatectomy, including the prognostic nutritional index [37], the neutrophil-to-lymphocyte ratio [38,39], the GNRI, and CAR [25]. Moreover, Gililland et al. suggested that albumin levels < 2.5 mg/dL or weight loss > 10% warranted the postponement of surgery to improve operative outcomes [40]. Preoperative immunonutrition has also been reported to improve the outcomes of patients with pancreatic cancer [41,42] and to reduce the risk for POPF [43].
CAR was originally developed to predict prognosis in patients with sepsis [15]. In this study, 23.6% of patients developed POPF following DP. A CAR ≥ 0.05 was associated with an increased POPF risk, suggesting that preoperative improvement of nutritional or inflammatory status might decrease POPF incidence. Our results also showed that nutritional or inflammatory status affected the risk of POCs, which was consistent with the findings of previous studies [25,[44][45][46][47]. Previous data revealed that the CAR on postoperative day 3 is a risk factor for POPFs following pancreatoduodenectomy [48,49]. By the ISGPF definition, a POPF is diagnosed based on the drain amylase level on postoperative day 3. For surgeons, however, the risk of POPF should be known preoperatively in order to perform safe procedures.
This study had a few limitations. First, the sample size was small, and only a single-center study was conducted, so we cannot definitively claim that preoperative CAR is a novel POPF risk factor. Second, the retrospective nature of this study further limits the scope of the conclusions. Finally, the level of CRP may depend on several factors including sex, body weight, and race [50]. Therefore, a larger prospective study should be conducted to validate this result. Despite these limitations, we believe that this predictor is simple and valuable for clinical application.
Conclusions
This study showed that a preoperative CAR ≥ 0.05 may be a risk factor for POPF following DP.

Funding: This research received no particular grant from any funding agency in the public, private, or not-for-profit sectors.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Ehime University Hospital (protocol code: 2204007; date of approval: 13 April 2022) for studies involving humans.

Informed Consent Statement: All patients or their guardians had verbally consented to use their medical information for scientific research.
Data Availability Statement: The data will be available upon request from the corresponding author.
The Quest for System-Theoretical Medicine in the COVID-19 Era
Precision medicine and molecular systems medicine (MSM) are highly utilized and successful approaches to improve understanding, diagnosis, and treatment of many diseases from bench-to-bedside. Especially in the COVID-19 pandemic, molecular techniques and biotechnological innovation have proven to be of utmost importance for rapid developments in disease diagnostics and treatment, including DNA and RNA sequencing technology, treatment with drugs and natural products and vaccine development. The COVID-19 crisis, however, has also demonstrated the need for systemic thinking and transdisciplinarity and the limits of MSM: the neglect of the bio-psycho-social systemic nature of humans and their context as the object of individual therapeutic and population-oriented interventions. COVID-19 illustrates how a medical problem requires a transdisciplinary approach in epidemiology, pathology, internal medicine, public health, environmental medicine, and socio-economic modeling. Regarding the need for conceptual integration of these different kinds of knowledge we suggest the application of general system theory (GST). This approach endorses an organism-centered view on health and disease, which according to Ludwig von Bertalanffy who was the founder of GST, we call Organismal Systems Medicine (OSM). We argue that systems science offers wider applications in the field of pathology and can contribute to an integrative systems medicine by (i) integration of evidence across functional and structural differentially scaled subsystems, (ii) conceptualization of complex multilevel systems, and (iii) suggesting mechanisms and non-linear relationships underlying the observed phenomena. We underline these points with a proposal on multi-level systems pathology including neurophysiology, endocrinology, immune system, genetics, and general metabolism. An integration of these areas is necessary to understand excess mortality rates and polypharmacological treatments. 
In the pandemic era, this multi-level systems pathology is most important for assessing potential vaccines, their effectiveness, and their short- and long-term adverse effects. We further argue that these conceptual frameworks are not only valid in the COVID-19 era but are also important to integrate into the medical curriculum.
THE COMPLEXITY OF COVID-19 QUESTS FOR AN INTEGRATIVE FRAMEWORK WITH A FOCUS ON A BIO-PSYCHO-SOCIAL MODEL
Molecular systems medicine and computational medicine were helpful for the understanding and management of the COVID-19 pandemic. Moreover, the importance of mathematical modeling in epidemiology demonstrates the benefits of generic system models that can be used to compare causally different cases (such as the Spanish flu and Ebola) and that are also necessary to guide political decisions (1,2). The societal importance of integrated medical knowledge has become particularly obvious during the COVID-19 pandemic. The understanding and management of the COVID-19 pandemic expose the dissociation among medicine's specialized fields such as virology, epidemiology, public health, internal medicine, etc. Molecular analysis of mechanisms of infection, epidemiological data on spreading, and their mathematical extrapolation alone are insufficient to foresee and avoid catastrophic developments. This exemplifies that empirical research on COVID-19 necessitates analysis of systemic feedback and feedforward mechanisms as well as collateral effects on all levels of organismic organization, on the "ecology of the person" (3), and on the level of institutional management of the pandemic. For example, a differential understanding of the high rate of case fatalities at the population level requires integrating clinical knowledge of individual disease courses with basic research in various disciplines, not only immunology but also endocrinology and even neurobiology. Regarding adherence to prevention regulations, the psychology of distancing and the social science of lockdowns must be considered in order to depict the real-life situation of people (3,4). Higher mortality rates among some population groups cannot be understood through molecular, or even "biological," factors alone; their understanding also involves consideration of the psycho-social conditions of life, socioeconomic disparities, and sociocultural orientations; COVID-19 is a syndemic (5).
An integration of all these factors into a comprehensive conceptual framework for COVID-19 is proposed in Figure 1. A general system-theoretical bio-psycho-social model for medicine and the education of health professionals was suggested already in the 1970s by George Engel (9). Building on this (nearly forgotten) integrative and multidimensional model, a bio-psycho-social pathology at the individual and population levels could improve the understanding of variation, for example in clinical COVID-19 courses, and could also have practical effects on the management of the pandemic. It would also enable a wider understanding of societal conflicts between restrictive hygiene recommendations, rights to freedom, and impaired economic vitality (see Figure 1). These conflicts exert a strong disturbance on the organization of everyday life and need a comparative and sophisticated discussion. Smart health care delivery can lower thresholds for access and improve acceptance in the targeted population. In the case of in-patient treatment units, a human-centered management structure and style, e.g., "systemic management concepts," could prevent burn-out of the staff and could enhance the success of public health goals (6,7,10).
Below, we sketch these issues focusing on somatic processes while remaining aware of environmental factors as contexts. In the context of the human sciences, the term "environment" has two epistemic meanings: the "subjective" environment according to Jacob von Uexküll (11), and the "objective" environment in the sense of Ernst Haeckel (12). This difference shows up in clinical data, e.g., when someone is clinically obese but subjectively not aware of it. Integration of data across such different domains is a key task, and it is exactly the domain of system theory, which aims to transform bio-psycho-social data sets into a framework of functional language that represents an ontology of functions as well as their quantification, as discussed later. Furthermore, the need for the conceptual integration of different organismal subsystems to understand the mechanisms of the pandemic raises basic questions about the epistemic power of contemporary systems medicine, which is discussed below. Regarding the multiorgan manifestation of COVID-19, a co-evolution of new disease ontologies, data integration, and interoperability strategies that use omics-based classifications and combine them with clinical ontologies is highly fruitful (13).
FROM MOLECULAR TO ORGANISMAL SYSTEMS MEDICINE
Medicine today relies on three major pillars: (i) clinical knowledge and practical experience, (ii) classical diagnostics and evidence-based treatment on the basis of expected-value decision making, and (iii) multiomics combined with advanced statistical/mathematical analysis such as machine learning and mathematical modeling. Here, precision medicine and molecular systems medicine (MSM) are highly utilized and successful approaches to improve understanding, diagnosis, and treatment of many diseases, based on data from multiomics technologies, data statistics, and modeling (14)(15)(16)(17)(18)(19)(20)(21). Genome-wide association studies (GWAS) of COVID-19 are still at an early stage but are highly promising for identifying cases of critical illness (22,23). For data gathering, diagnostics, and prediction models, machine and deep learning techniques and applications of artificial intelligence are of utmost importance but also need to be critically reviewed (24)(25)(26)(27)(28)(29). Altogether, there is a risk that classical medical knowledge, especially qualitative and intuitive knowledge of organismic pathology, will be lost before the transition to MSM can be implemented clinically. Also, these technologies neglect the bio-psycho-social dimension of medicine as discussed above (4). Interestingly, the technology- and data-driven epistemology of MSM is not yet sufficiently understood regarding the gap between correlation and causation [(30); Table 1 and discussion below], especially when comparing it with the very special bedside epistemology of physician-based and patient-based observations and experiences of health and disease. The combination of the implicit biochemical reductionism of physiology and the data-reductionism of health phenomena implies a lack of conceptual inter-level and interdisciplinary integration, namely the neglect of the epistemic weight of clinical experience that is concerned with the whole person.
FIGURE 1 | Systemic compartment model of the individual person with their respiratory system, which is primarily confronted with and affected by SARS-CoV-2; the virus invades cells within tissues of respiratory (and other) organs of the organism, step by step. Invasion can, however, be attenuated by local defense mechanisms (circular lines with transoms). Spreading of the virus can occur (stippled arrow). The person can experience the sickness through symptoms (stippled double arrow) and/or obtains and utilizes the information provided by the respective socio-economic system at the micro-, meso-, and macro-levels and its knowledge about the virus and its prevention and treatment. With this mindset, and depending on social context, the person might change behavior by reducing exposure (-) through lockdown, distancing, quarantine, etc., or will engage in risky behavior (+) with respective consequences for the social environment. A major feedback loop is also the organization of health care and other services (6,7) as well as access to these health care services, which is not equally distributed in society (5,8).

Current documentation within electronic health records (EHRs) is not designed to treat the patient as an organism, but rather as a suite of documentation regarding the clinical encounter and for billing purposes (13). We can learn a lot from the rare disease community (among others), where there is a need to support multi-species and multi-modal data integration to inform diagnostics and treatment discovery. Most of this documentation regarding the patient happens outside the EHRs in order to support the observations of the patient as an organism (13). We have not yet applied this approach well to COVID-19 patients. In consequence, not only a unidirectional but rather a bidirectional relation between bench and bed (physician and patient) could improve the efficacy of translational medicine. This could be a "vertical transdisciplinarity" that complements "horizontal transdisciplinarity" (conventional interdisciplinarity) and that combines scientific knowledge production with physicians' and patients' observations and experiences (49)(50)(51)(52).
In consequence, we take a conceptual and theoretical approach that is organism-centered, conceiving the organism as a system of organs, tissues, and cells, and that also envisions the organism as a living system-in-the-world. We appeal to the need for an "Organismal Systems Medicine" (OSM) in the sense of Bertalanffy (53, 54) to complement MSM by accounting for the systemic and ecological context of the organism.
This approach can be seen as a complementary procedure to the bottom-up methodology of current MSM, as it is an organism-centered (or: person-centered) top-down functional analysis. This holistic starting point of biomedical research is aware of contextual factors, such as psycho-social factors, that come up as risk factors and/or protective factors for health and disease. One of the central concepts of OSM is the adaptive, self-organized dynamic equilibrium ("flow equilibrium") of partially antagonistically operating components of the system. Dynamic equilibrium is constituted by the assumption of hierarchical partial antagonisms and synergisms between activators and inhibitors that converge on operators at different organizational levels of complex organisms (organs, tissues, cells, and molecules). This is a heuristically fruitful concept for organizing observed phenomena and data, and it is also a guiding principle for organizing experimental and field research, as we will show later.
EPISTEMOLOGY OF MOLECULAR AND ORGANISMAL SYSTEMS MEDICINE
Altogether, a differentiated but integrated systemic methodology could improve our understanding and management of COVID-19 as well as of future challenges. Accordingly, it has to be considered that the epistemic limitations of valuable MSM show up focally but have not yet been worked out in a broader way. In contrast, systems biologists and philosophers have alluded to methodological difficulties and limitations of the claims of early systems biology (55, 56). Some examples of important metatheoretical topics that would frame a methodology of systems medicine are listed here [(30); see Table 1]: the part-whole problem, bottom-up vs. top-down causation, mechanistic vs. nomological explanations, complexity reduction, epistemics of interdisciplinarity, determinism and self-organization, correlation and causation, emergence vs. reductionism, robustness vs. homeostasis. Several of these problems are relevant for a holistic MSM (57). Selected aspects are discussed in the following.

TABLE 1 | Metatheoretical problems and related literature that frame a methodology of systems medicine.
- Bilateral relations between data and theory; difficulties of causal inference based on correlations (31, 32)
- Reduction and holism, whole-parts relations: can the knowledge of molecular biology explain higher functions of the whole organism? Limits of bottom-up explanations of social phenomena by molecular biology (34)
- Systemic multilevel ontologies and emergence (35)
- Chance and necessity: is there a significant difference between randomness and determinism? Examples of top-down causation (37)
- Epistemology of computational modeling (38)
- The meaning of terms like "information," "function," and "structure"
- Scaling problems (40-42)
- The structure of explanations and theories in biology (43)
- The ontology of life (44)
- The limited explanatory power of evolutionary theory (45, 46)
- The concept of goal-directedness as a teleonomic but not teleological property of living systems (47)
- The relevance of the concept of self-organization (48)
These topics deal with conceptual problems that underlie the need for systems medicine.
Functional Organization of the Organism
A major challenge in biomedical research is to understand the functional organization of a living system across multiple levels. Several taxonomies have been presented in the history of systems science that intend to capture conceptually a limited set of "essential functions," such as respiration, circulation, and reproduction, or more general functions such as adaptation, assimilation, integration, and differentiation (58). Biological functions (e.g., defense functions such as inflammation) can be localized at different levels of organization (e.g., molecules, cells, tissues, organs), using different methodologies. Biases sometimes occur when factors are downgraded or left out of the analysis because the focus is directed at a specific functional level. The philosopher William Wimsatt called this type of bias a functional localization fallacy (59). For example, research in recent decades has focused on the impact of specific genetic mutations on cancer development and treatment response. While successful, this approach has also created blind spots, since environmental and biomechanical factors at higher scales are often ignored or held fixed as a methodological necessity. Genetic analysis can reveal a correlation between cancer treatment and treatment success. But this phenomenological association cannot be explained by genomics alone, as the success of cancer treatment is also influenced by other systems, such as the immune and endocrine systems. Thus, the analysis of the cancer alone is insufficient, as it neglects the impact of the treatment on other parts of the organism.
Level of Conceptual Resolution
It is often assumed that the precision of models increases as more details are incorporated, and some have even argued that the principle of Occam's razor does not apply to biology in the computational age (60). Yet the inclusion of ever more molecular details can impede the predictive capacity of models, especially if these are not contextualized within the overall system organization: living systems exhibit what Mihajlo Mesarovic termed the bounded autonomy of levels (61). Cross-level relations in biology are neither independent nor linearly coupled; rather, they are dynamically autonomous within certain boundaries. Examples include phenotypic states that are often resilient to genetic mutations or to changes in expression levels. As living systems are not homogeneously organized, upscaling of models is not accomplished by simple averaging of lower-scale details. In contrast, multi-scale modeling of living systems requires an understanding of how the system is hierarchically organized, and of how higher-scale structures can exert top-down control over lower scales through constraining relations (62). An example, elaborated below, is the stress-dependent release of cortisol with downstream effects at the organ level (63).
Top-Down Causation
The relevance of top-down influences is exemplified by recent insights from multi-scale modeling of the human heart and the cardiovascular system (41), embryonic development (64), fracture risk in bone (42), organogenesis, and cancer (65, 66). In these contexts, macro- and meso-scale models represent higher-level features that act as boundary conditions for models at lower scales (67). Top-down causation occurs when higher-level structures shape lower-level interactions and channel dynamic possibilities, some of which would be impossible to reach for an unconstrained system (62). Just as the heart rhythm is made possible by constraints of the cell membrane and higher-level structures (37), the wiring of biological networks constrains lower-level states and gives rise to generic functions such as feedback control or signal amplification. Systems theory can help to identify similarities in the patterns of organization of different systems, and hence to recontextualize the inputs from data-intensive fields (40).
ORGANISMAL SYSTEMS MEDICINE AS AN INTEGRATIVE FRAMEWORK
As already mentioned briefly, some of these theoretical challenges for systemic thinking were already tackled by the philosopher and biologist Ludwig von Bertalanffy, who made significant contributions to the field through his formulation of General System Theory (GST) (54). This theory was meant to enable researchers from different disciplines to conceive of their epistemic objects as dynamic systems. Although his conception was already grounded in early biochemistry, molecular biology, and mathematics, he proposed an organism-centered view (organismic systems biology) (53). He underlined this idea by defining a system as a "structured whole." In addition, the perspective of developmental biology was crucial for his concept of a theoretical, function-oriented biology.
Concepts and Models
Ludwig von Bertalanffy, Mihajlo Mesarovic, and other founders of systems biology, like James G. Miller, proposed explicit conceptual multi-level models to describe, explain, and predict the dynamics of states of living systems. These ideas also had interdisciplinary relevance, as GST was likewise developed in the context of sociology by Talcott Parsons, who designed a heuristic scheme assuming that social systems have to fulfill four basic functions: adaptation, goal attainment, latent pattern maintenance, and integration (68). In consequence, these concepts could probably be useful for a general functional understanding of the organism and its pathology.
One of the most significant basic and already elaborated concepts is the notion of "dynamic equilibrium" (German: Fliessgleichgewicht), which governs processes at different levels of the organism. It is constituted by asymmetric antagonistic convergence operations of systemic cellular and molecular components, e.g., in the nervous system (the "autonomic network" of the ergotropic sympathetic vs. the trophotropic parasympathetic autonomous nervous system) (69), the endocrine system (blood glucose regulation by antagonistic hormones), and the immune system (pro- vs. anti-inflammatory agents). The actions of these subsystems converge and overlap at sites of homeostasis of organ functions, such as cerebral stress reactions, glucose homeostasis, or balanced defense reactions. This concept of a dynamic interplay between accelerators and brakes can be a guiding principle not only to describe, but also to explain and understand temporal patterns of organismic processes in health and disease.
Systemic Methods
During the last decades, several methods of systemic modeling have been developed, ranging from qualitative models in the context of system dynamics, via mathematical models and computerized modeling tools, to data-driven inverse modeling (32). Aiming to construct comprehensive models with high ecological validity, transdisciplinary approaches that connect practitioners and researchers from various fields of relevance are the basis of successful modeling, as has been elaborated in sustainability science (70, 71). Qualitative conceptual models are developed first, formulated with simple verbal, graphical, and tabular tools (Figure 2). In a next step, the model is transposed into a data-based mathematical formulation that can be used for exploratory computer simulations requiring little mathematical literacy, e.g., with programs such as Stella or Vensim (55). Notably, in this procedure the conceptualization of a system model can easily be done with graphical tools that facilitate interdisciplinary communication (Figures 2, 3). Even simple graphs of process structures, such as feedback and feedforward loops, can capture features of systems dynamics. Complex structures of process conditions (e.g., biochemical pathways) can be studied in this view qualitatively by identification of generic dynamic principles or "motifs" (72). This type of modeling is a classical forward approach; however, inverse or reverse approaches feeding data directly into model building are even more promising (19, 32). Data-driven medical diagnostics and anamnesis are inverse problems per se. A classic example of biomedical inverse problems is imaging techniques, from tomography to microscopy. Relating the acquired data to the unknown object is an inverse problem and requires mathematical modeling (73).
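The forward workflow just described (a verbal model of interacting components, transposed into equations and then simulated) can be sketched in very few lines. The following is a hedged illustration only: the two-variable negative feedback loop and all rate constants are invented for demonstration and are not taken from any cited model.

```python
# Minimal forward simulation of a qualitative model: an activator x
# drives an inhibitor y, and y feeds back to suppress production of x.
# Integrated with explicit Euler steps; all rates are placeholders.

def simulate_feedback(steps=5000, dt=0.01):
    x, y = 0.0, 0.0               # initial stocks
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = 1.0 / (1.0 + y) - x  # production of x, inhibited by y, minus decay
        dy = x - y                # production of y, driven by x, minus decay
        x += dx * dt
        y += dy * dt
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate_feedback()
# The loop settles into a dynamic equilibrium where x* = y* and
# x*(1 + x*) = 1, i.e., x* = (sqrt(5) - 1)/2, about 0.618.
```

Even such a toy already exhibits the qualitative behavior (approach to a flow equilibrium) that the graphical causal-loop model predicts; only afterwards would parameters be fitted to data.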
SYSTEMS PATHOLOGY
Pathology is a key discipline in medicine, and nowadays its subject is split into subsystems (organs and tissues) which are well-understood at the anatomical, physiological, cellular, molecular, and biochemical level. In line with our proposition for the utility of a function-oriented structural analysis, physiology, anatomy, and histology are central to making sense of the richness of these molecular data. Morphological macro- and micro-structures provide the boundaries and connecting structures that enable functions such as metabolism, respiration, circulation etc. The central challenge is to develop an integrative multi-level "systems pathology" that connects classical physiological and clinical knowledge with current rich molecular biological knowledge. The linkage between molecular and cellular models (bottom-up) and the organ and organism level (top-down), as in top-down causation models, is a rather underrepresented approach (Figure 3). This step requires systematic methods of vertical integration of model inputs, translating from the molecular to the whole-system level and vice versa. Elaborated and validated examples are the integrative multi-level model of heart activity covering molecular and cellular mechanisms (75) and a multi-organ model of the thyroid control loops (76). Regarding COVID-19, only a few attempts have been made to design a multistage systems view on the pathology of this disease (77, 78). Such models should be constructed basically as multi-level models that integrate organismal physiology with molecular studies. Methods of sensitivity analysis can be used in order to study the effects of molecular properties and events (e.g., enzyme kinetics, receptor binding, and certain genetic variants) on the behavior of the whole system, from cells, tissues, and organs up to the level of the organism.
Conversely, "reverse sensitivity analysis" allows one to infer, from the behavior at the organismic tier, an affine subspace of sensible parameter values at the molecular level (Figure 3).
FIGURE 2 | Methodology for iterative optimization of exploratory and quantitative models in systems medicine. The workflow is as follows: (i) data collection and observations in "transdisciplinary" groups; (ii) generation of data matrices; (iii) drafting a verbal model of the interactions of components, with definition of the system, elements, relations, and boundaries necessary for the subsequent formal models; (iv) construction of a graphical model of causal loops and/or of stocks and flows/effects (deciding on a graphical language); (v) drafting mathematical equations; (vi) parameter estimation and fitting using existing data, state and flow variables, and coefficients (if not possible: "educated guess" by expert-based estimations); (vii) transposition to a computational model, simulation, model tests, validation, and scenarios ("...if, then..."); (viii) model prediction and extrapolation, comparison with data, further validation, and model optimization with backward improvements (for further details see text).

Preconditions for this advanced modeling technique include certain measures on both the systems and the molecular levels. On the higher level, it is required that the network structure follows a "parametrically isomorphic" paradigm, i.e., that the model is constructed from building blocks that can be mapped to knowledge from molecular research in a bijective manner. On the other hand, research on the lower level has
to deliver quantitative results providing meaningful information on stimulus-response relations, temporal dynamics etc., thereby enabling parameterization of the high-level models (Figure 3). Reusable libraries of universal motifs and building blocks (e.g., feedback loops, feedforward motifs, antagonisms, and redundancy) help to speed up and simplify the modeling process (79-81).
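A minimal sketch of the local sensitivity analysis described above can be written with central finite differences. The model here is a deliberately trivial placeholder (steady-state concentration = production rate / clearance rate, with invented parameter values); in a real application the wrapped function would run a full multi-level simulation.

```python
# Local parameter sensitivity by central finite differences:
# S_p = d(output)/d(p), estimated as (f(p+d) - f(p-d)) / (2d).

def model_output(params):
    # Placeholder model: steady-state concentration of a substance
    # produced at rate k_prod and cleared at rate k_clear.
    return params["k_prod"] / params["k_clear"]

def central_sensitivity(f, params, name, delta=1e-4):
    hi = dict(params); hi[name] += delta
    lo = dict(params); lo[name] -= delta
    return (f(hi) - f(lo)) / (2.0 * delta)

params = {"k_prod": 2.0, "k_clear": 1.0}
sens = {name: central_sensitivity(model_output, params, name)
        for name in params}
# sens["k_prod"] is about +1.0 (more production raises the output);
# sens["k_clear"] is about -2.0 (faster clearance lowers it).
```

Ranking such sensitivities identifies which molecular parameters the organism-level behavior actually depends on, which is exactly the information needed to prioritize measurements on the lower tier.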
MULTI-LEVEL/MULTI-FUNCTION PATHOLOGY INTEGRATING NEUROPHYSIOLOGY, ENDOCRINOLOGY, AND IMMUNE SYSTEM
Even on a semi-quantitative level, systemic modeling is heuristically relevant. Regardless of the details available on subtypes of cells, receptors, signaling molecules etc., there is a lot of evidence for the antagonistic organization and regulation of an integrated neurophysiological, endocrine, and immune system. The integration of these subsystems has implications for neurophysiological complications, excess mortality rates, and polypharmacological treatment of COVID-19 (77, 82-84), but also for the assessment of potential novel vaccines such as the mRNA vaccines and their short- and long-term adverse effects under mass vaccination, e.g., autoimmune reactions and neurophysiological manifestations (82, 85-88). The interaction of the neurophysiological, endocrine, and immune system is discussed in the following.
The Neurochemical Antagonism
FIGURE 3 | Today's biomedical research is faced with the challenge that descriptions of the systems of interest are restricted to different sublevels. A high-level theory covers a systems view, addressing large subsystems, e.g., multi-organ feedback control systems or even the whole organism including psychosocial relations. This is complemented by a low-level description of molecular structures and reactions. Unfortunately, the vertical translation, e.g., how molecular data are integrated into the high-level system description, is difficult. Methods of vertical integration include affine subspace mapping (top-down inference), sensitivity analysis (bottom-up reasoning), and graphical tools. They require, however, certain preparative steps on both tiers of research to be feasible (74).

Most clinical observations in neurology and psychiatry confirm the view that a delicate dynamic equilibrium exists between activating and inhibiting neurotransmitter systems, although the anatomic and pharmacological details (e.g., receptor subtypes) are more complicated (55): activating (ergotropic) noradrenaline (NA) and inhibiting (trophotropic) acetylcholine (ACh) oppose
each other partially, with different patterns of dynamics. They represent a body-wide antagonistic regulation of functions (heart, blood vessels, lungs, pancreas, gut system, etc.). On a second operational level, the neurotransmitters dopamine (DA) and serotonin (5-hydroxytryptamine, 5-HT), synergistically connected with NA, exhibit a partial antagonism. On the side of ACh, fast-operating excitatory glutamate (Glu) and inhibiting GABA also show partial antagonism. Clinically, these interactions can be seen in disorders like Parkinson's disease, with a dominance of acetylcholine over dopamine because of the loss of dopamine cells: substitution of dopamine can induce a psychotic syndrome, and in turn neuroleptic treatment of psychoses can evoke a Parkinson's syndrome. By integrating such antagonistic effects into a neurochemical network model that can be simplified as a "neurochemical mobile," the neurochemical basis of several neuropsychiatric syndromes can be described and explained, even quantitatively, by computer simulations (89, 90). Effects of new medications (glutamate antagonists) can be predicted as well (anti-depressive effects): in depression, NA, DA, and 5-HT exhibit a hypofunction in neurotransmission, compared to a hyperfunction of ACh, Glu, and GABA. In consequence, selective serotonin reuptake inhibitors (SSRIs) that enhance 5-HT transmission work as antidepressants, and glutamate antagonists such as ketamine can also reduce depressive syndromes (91). Interestingly, all these neurotransmitters probably operate on all organismic cells, and many body cells even produce transmitters (e.g., in the immune system).
The Endocrine System
The endocrine system is a multi-organ system partially centered around the pituitary gland (92). The principle of asymmetric antagonistic convergence is only weakly confirmed in the endocrine system, but at the peripheral organ level the interplay of glucagon and insulin confirms this concept. The most important hormone is cortisol, a steroid hormone produced in the adrenal glands. It is involved in a range of processes related to metabolism, stress, and immune response. It acts ergotropically and is partially synergistic with NA and the classic thyroid hormones. The main feature of the cortisol system is its multilevel upstream connection with the CNS via the hypothalamus and the pituitary gland. This connectivity is usually referred to as the hypothalamic-pituitary-adrenal (HPA) axis: the HPA axis elevates cortisol levels, which in turn inhibit the HPA axis via glucocorticoid receptors in the hypothalamus and pituitary gland as a feedback inhibition. This feedback loop exhibits several cybersystemic features that make several pathologies [e.g., stress; (63)] understandable: delayed feedback with consecutive oscillations, delays, adaptation, allostasis etc. characterize the dynamics of endocrine systems.
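The cybersystemic features named above, in particular delayed feedback with consecutive oscillations, can be illustrated with a toy delay equation. To be clear, this is not a validated HPA model: the single-variable equation, the Hill-type inhibition, and every parameter below are invented purely to show how a late-arriving inhibition produces overshoot and oscillation.

```python
# Toy delayed negative feedback: dF/dt = 1/(1 + F(t - tau)^4) - 0.5*F.
# F stands in for a hormone level whose production is inhibited by its
# own value tau time units earlier. The fixed point is F* = 1, since
# 1/(1 + 1**4) = 0.5 * 1, but the delay lets F overshoot it first.

def simulate_delayed_feedback(t_end=100.0, dt=0.01, tau=5.0):
    n_delay = int(tau / dt)
    traj = [0.0]                      # history before t=0 taken as zero
    for step in range(int(t_end / dt)):
        f = traj[-1]
        f_delayed = traj[step - n_delay] if step >= n_delay else 0.0
        df = 1.0 / (1.0 + f_delayed ** 4) - 0.5 * f
        traj.append(f + df * dt)
    return traj

traj = simulate_delayed_feedback()
# Because inhibition arrives tau time units late, F first climbs far
# above F* = 1 and is then pulled back toward equilibrium.
```

The same structural point carries over to the HPA axis: the delay in the feedback arm, not any particular molecular detail, is what makes the dynamics oscillation-prone.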
Interestingly, the HPA axis has multiple antagonists on various anatomical levels (each of them antagonizing certain partial functions) (Figure 4). They include growth hormone (anabolic action), insulin (glucose-lowering and anabolic function), hormones of the non-classical renin-angiotensin system (angiotensin 1-7, angiotensin 1-9, angiotensin A, and alamandine with hypotensive and hyponatremic actions) and thyroid hormones (HPT axis, central antagonism). It is therefore not surprising that the HPA axis is upregulated in critical infectious diseases, including COVID-19, while the HPT axis is downregulated (93).
The Immune System
Regarding the principle of antagonistic convergence, the immune system has a (fast) pro-inflammatory and a (slow) anti-inflammatory functional differentiation, for instance through signaling via Th1 and Th2 cells (94). Interferon (IFN) and tumor necrosis factor (TNF) are secreted by Th1 cells and, among other effects, activate macrophages and inhibit the activity of Th2 cells, which in turn can also inhibit Th1 cells via interleukins IL-4, IL-10 etc. In acute inflammation, the Th1 subsystem dominates the Th2 subsystem; in chronic inflammation, the Th2 subsystem dominates the Th1 subsystem. In the case of COVID-19, pro-inflammatory components exhibit a persistent overactivation. In response to a local pathogenic challenge, an innate immune response is initiated by type I interferons (IFN) and pro-inflammatory cytokines like tumor necrosis factor alpha (TNF-alpha), interleukin 1-beta (IL-1beta), and interleukin-6 (IL-6). Later on, when the adaptive immune response kicks in, an overreaction of the immune system is prevented by anti-inflammatory factors like TGF-beta and interleukin-10, thus generating a negative feedback loop onto the immune response. In the case of COVID-19, this latter step sometimes fails to keep the immune response under control (84).
The Multi-Level/Multi-Function Interaction
The complexity of these and other regulatory systems can be structured conceptually by a multi-level/multi-function interaction network. Regarding the immune system, it is well-known that ACh inhibits the secretion of TNF by macrophages, whereas NA can stimulate TNF secretion via alpha- and beta-receptors (95). In synergy with ACh, cortisol also suppresses macrophage activity. Several other examples can be worked out (96), e.g., multimorbidity and the problem of polypharmacy, which affects about 20% of the population (97, 98). For instance, the comorbidity of diabetes mellitus and depression can be revisited by looking at molecular signaling cross-overs between the CNS (relative hypofunction of noradrenaline, dopamine, and serotonin compared to acetylcholine, glutamate, and GABA in depression) and the physiological control of beta and alpha cells in the pancreas by these neurotransmitters, as well as the effects of insulin in the brain, etc. (99). In addition, imbalances within the immune system (elevated IL-6) contribute to the occurrence of depression (e.g., as a side effect of interferon therapy) and diabetes.
In consequence, these subsystem interactions need to be analyzed in detail on the basis of a reference network model. With regard to COVID-19, chronic bio-psycho-social stress situations could evoke the severe, persisting shift in the immune system toward pro-inflammatory mechanisms.

FIGURE 4 | The HPA axis is stimulated by catecholamines (representing the fast stress system) and inhibited by thyroid hormones (as slow mediators of stress and allostatic load). In addition, it has multiple antagonists inhibiting partial functions at peripheral levels of the processing structure, some of them (marked by *) resulting from ACE2 activation. AT, angiotensin; STH, somatotropic hormone. For more details see text.
A SYSTEMS VIEW ON COVID-19: INTEGRATION OF EPIDEMIOLOGY AND SYSTEMS PATHOLOGY
The utility of systems thinking in medicine, especially in the case of COVID-19, is obvious in epidemiology through the universal application of SIR compartment models and their derivatives, which help to understand and explore the dynamics of the spreading of the virus (100). The diverse exposure features (e.g., asymptomatic carriers) are crucial for infection, so that models have to be extended (2). At this population level, data analysis and modeling demonstrate the dangerous dynamics of exponential growth. Several theoretical challenges remain in representing the mechanisms of focal spreading and in evaluating countermeasures. They can only be partially solved by agent-based modeling, and only if the collateral effects of public health measures (home quarantine, lockdown) are also included in an ecological perspective on human beings (Figure 1). Systems-theoretical analyses can help to explore and design management strategies balancing health and the economy, e.g., by cyclic management of lockdowns (101).
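The SIR compartment models referred to above can be written down in a few lines. As a hedged sketch, the parameter values below (transmission rate beta = 0.3 per day, recovery rate gamma = 0.1 per day, giving a basic reproduction number of 3) are chosen for illustration only and are not fitted to SARS-CoV-2 data.

```python
# Classic SIR compartment model, integrated with explicit Euler steps.
# S: susceptible, I: infectious, R: removed; R0 = beta/gamma.

def simulate_sir(beta=0.3, gamma=0.1, n=1000.0, i0=1.0,
                 dt=0.1, t_end=300.0):
    s, i, r = n - i0, i0, 0.0
    s_traj, i_traj, r_traj = [s], [i], [r]
    for _ in range(int(t_end / dt)):
        new_inf = beta * s * i / n    # flow S -> I
        new_rec = gamma * i           # flow I -> R
        s -= new_inf * dt
        i += (new_inf - new_rec) * dt
        r += new_rec * dt
        s_traj.append(s); i_traj.append(i); r_traj.append(r)
    return s_traj, i_traj, r_traj

s_traj, i_traj, r_traj = simulate_sir()
# With R0 = 3 the epidemic peaks at roughly 30% prevalence and leaves
# only a few percent of the population uninfected; the extensions
# mentioned above (asymptomatic carriers, behavior, measures) build
# additional compartments and feedbacks onto this core.
```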
FIGURE 5 | Interaction of the HPA axis with the immune system in COVID-19: antagonistic convergence on ACTH production by inhibitory cortisol feedback and activating interleukin-6, which is released by macrophages after contact with SARS-CoV-2. The structure of the feedback loop explains the glucocorticoid paradox in COVID-19, i.e., that elevated serum concentrations of cortisol are associated with poor prognosis but that pharmacological use of glucocorticoids like prednisolone or dexamethasone leads to improved outcomes. For more details see text.

In addition, we propose an integrative, compartment-based, multi-layer- and multi-level-oriented systems pathology as a systemic understanding of COVID-19. It could help to explain the causes of asymptomatic clinical courses in SARS-CoV-2-infected persons. The complex pathophysiology of COVID-19 starts at the entrance of the virus into the compartment of the upper airways (nose, throat), with its local defense mechanisms at the layer of fluids that protect the mucosa (nasal mucus), the local expression of ACE2 on cells, the local presence of immune cells, etc. In this view, even the seemingly trivial pathophysiological question of whether tonsillectomized individuals are at higher risk that the infection "jumps" down to the second compartment, the lower airways, respectively to the alveoli, where the fatal mechanisms of hyper-inflammation occur, has not been clarified: there might be a higher risk of respiratory dysfunction in tonsillectomized persons (102). Thus, in the case of airborne virus invasion, the respiratory system must be explored as a "structured whole" (compartment model), connected with the circulatory system via the alveoli, thus providing oxygen for the whole organism and emitting carbon dioxide. In addition, each compartment should be conceived as being composed of a heterogeneous multi-layer tissue and should be modeled from tissue to cells to molecular processes of
viral pathology addressed by molecular systems biology tools (103, 104). Here, one should not only look at effects on and of the molecular mechanisms of the endocrine system (renin-angiotensin system vs. cortisol system) in both directions, but also consider the molecular effects on and of the autonomous nervous system (Figure 5). The crucial clinical problem of COVID-19 is that it appears as a hyper-inflammatory process resulting from a dynamic imbalance between pro-inflammatory and anti-inflammatory components of the immune system: at the immune-system level, a macrophage and IL-6 excess is often reported that seems to lead to severe courses of COVID-19 (105). Also, a very high cortisol level is observed at hospital admission, which could be functionally understood as an ineffective counter-reaction, maybe because of down-regulation or desensitization of glucocorticoid receptors in macrophages (106). However, at first, the cellular invasion of the virus is based on utilization of the ACE2 receptor, with the consequence that a lower amount of ACE2 is available to convert angiotensin II to angiotensin 1-7 and angiotensin 1-5, which attach to the Mas receptor and operate antagonistically to the pro-inflammatory ATR1 (107, 108). In consequence, the anti-inflammatory effects, compared to the pro-inflammatory effects, are persistently lower than under normal conditions. This imbalance could explain the heavy structural and functional changes in the alveoli. A next step might be the modeling of tissue dynamics in inflammation, which can be explored by computer simulations of models of cell-cell interactions, as was demonstrated for macrophages and fibroblasts, showing that interaction structures based on growth factors can reach bistable homeostasis. This systems-theoretical concept, which assumes a strong attractor basin in the pro-inflammatory state space, facilitates the understanding of pathological locked-in states of cell systems as they are found in alveolar pathology in COVID-19.
As a starting point for a systemic view on COVID-19, a simple model could integrate the activation of the HPA axis by the inflammatory response triggered by virus invasion, where cortisol again has an immunosuppressant effect (Figure 5). The interaction of the three feedback loops involved could explain both the markedly deranged blood glucose levels in diabetics infected with SARS-CoV-2 (109), especially in severe cases (110-112), and the apparent paradox that therapy with glucocorticoids is able to improve the outcome of COVID-19 (113), whereas patients with elevated cortisol concentrations face a poor prognosis (114).
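The interplay of these loops can be caricatured in a deliberately minimal steady-state model. Everything below is an invented toy, not the cited models (109-114): IL-6 drives cortisol one-to-one, cortisol suppresses IL-6 production with a receptor sensitivity s, and an exogenous glucocorticoid dose suppresses IL-6 with unimpaired potency even when endogenous receptor sensitivity is reduced.

```python
# Toy steady-state sketch of the glucocorticoid paradox in COVID-19.
# IL-6 production v is damped by (1 + s*C + dose); cortisol C tracks
# IL-6, so C = I at steady state. Solved by fixed-point iteration.
# s: glucocorticoid receptor sensitivity; all values are invented.

def steady_il6(v=10.0, s=1.0, dose=0.0, iters=200):
    i = 1.0
    for _ in range(iters):
        i = v / (1.0 + s * i + dose)   # with cortisol C = i
    return i

il6_healthy = steady_il6(s=1.0)              # intact feedback
il6_desens = steady_il6(s=0.2)               # desensitized receptors
il6_treated = steady_il6(s=0.2, dose=10.0)   # plus exogenous steroid

# Receptor desensitization raises BOTH cortisol (equal to IL-6 here)
# and inflammation: "high cortisol, poor prognosis". The exogenous
# dose still suppresses IL-6, mirroring the benefit of dexamethasone.
```

The point of such a caricature is structural: once the feedback loop, rather than the hormone level alone, is taken as the unit of analysis, the apparently paradoxical observations become two faces of the same mechanism.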
CONCLUDING REMARKS
A systems-theoretical framework can provide a more consistent picture of complex diseases like COVID-19 by bridging current gaps in medical knowledge, especially by incorporating clinical knowledge and experience. Systems theory enables the integration of multiscale top-down (organismal view) and bottom-up (molecular systems medicine) approaches. We propose the following, as yet unsettled, strategies in systems medicine: (i) integration of biochemically based and physiology-related dynamic models considering adaptive dynamic equilibrium, antagonism, and synergism; (ii) development of models of humans and human health in a socio-ecological context, with consequences for health status; and (iii) extension of the methodology of systemic modeling, including qualitative, pre-formal conceptualization techniques, and implementation of system-theoretical thinking and modeling technologies in the medical curriculum.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
Nurse-Physician Collaboration in General Internal Medicine: A Synthesis of Survey and Ethnographic Techniques
BACKGROUND: Effective collaboration between hospital nurses and physicians is associated with patient safety, quality of care, and provider satisfaction. Mutual nurse-physician perceptions of one another's collaboration are typically discrepant. Quantitative and qualitative studies frequently conclude that nurses experience lower satisfaction with nurse-physician collaboration than physicians. Mixed-methods studies of nurse-physician collaboration are uncommon; results from one of the two approaches are seldom related to or reported in terms of the other. This paper aims to demonstrate the complementarity of quantitative and qualitative methods for understanding nurse-physician collaboration.
METHODS: In medicine wards of 5 hospitals, we surveyed nurses and physicians, measuring three facets of collaboration: communication, accommodation, and isolation. In parallel, we used shadowing and interviews to explore the quality of nurse-physician collaboration. Data were collected between June 2008 and June 2009.
RESULTS: The difference between nurse and physician ratings of one another's communication was small and not statistically significant; communication timing and skill were reportedly challenging. Nurses perceived physicians as less accommodating than physicians perceived nurses (P<.01), and the effect size was medium. Physicians' independent schedules were problematic for nurses. Nurses felt more isolated from physicians than physicians from nurses (P<.0001), and the difference was large in standardized units. Hierarchical relationships were related to nurses' isolation; however, this could be moderated by leadership support.
CONCLUSION: Our mixed-methods approach indicates that longstanding maladaptive nurse-physician relationships persist in the inpatient setting, but not uniformly. Communication quality seems mutually acceptable, while accommodation and isolation are more problematic among nurses.
Received: 07/22/2013 Accepted: 08/14/2013 Published: 03/28/2014 © 2014 Gotlib Conn et al. This open access article is distributed under a Creative Commons Attribution License, which allows unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
There is strong international promotion of interprofessional collaboration to improve patient safety, quality of care, and to enhance health professionals' working relationships (e.g., Department of Health, 2000, 2001; Institute of Medicine, 2001, 2008; Health Canada, 2010). Interprofessional collaboration is a complex and dynamic process that involves the establishment of trust, familiarity, and goal-sharing between health care professionals, as well as a supportive work environment and culture. Communication is a central component of collaboration, and among nurses and physicians it has been a topic of much interprofessional interest. Because they are a core dyad of the inpatient care team, it is important to understand the evolving relationship between nurses and physicians in terms of the quality of their communication, the nature of their interactions, and their perceptions of their relationship in order to continually work toward cohesive interprofessional care.
Literature Review
Separate quantitative and qualitative research has been conducted on nurses' and physicians' experiences of collaboration. Efforts to validate nurse-physician collaboration measurement scales are ongoing (Baggs, 1994; Kenaszchuk, Reeves, Nicholas, & Zwarenstein, 2010b; Ushiro, 2009). Quantitative reports of nurse-physician relationships have shown that nurses' and physicians' opinions of each other's collaboration are discrepant. Quantitative studies consistently find that nurses experience lower satisfaction with nurse-physician collaboration than doctors do, and that nurses are more critical of physicians' collaboration efforts than doctors are of nurses' efforts (Krogstad, Hofoss, & Hjortdahl, 2004; O'Leary et al., 2010; Verschuren & Masselink, 1997). Nurses report lower levels of communication openness with doctors (Reader, Flin, Mearns, & Cuthbertson, 2007) and lower quality of collaboration and communication than doctors report about nurses (Makary et al., 2006; Sexton et al., 2006). Nurses are more likely to report problematic team- and communication-related behaviours that might endanger patient safety than either physicians or non-clinician managers (Singer et al., 2003). Quantitative research has also revealed associations between nurse-physician collaboration and patient satisfaction as well as with health outcomes and nurses' job satisfaction (Baggs et al., 1999; Kenaszchuk, Wilkins, Reeves, & Zwarenstein, 2010c; Wanzer, Wojtaszczyk, & Kelly, 2009).
Qualitative studies exploring nurse-physician collaboration in hospitals have elaborated on when, where, and why the relationship succeeds or fails. These studies highlighted a range of issues about nurse-physician experiences with collaboration. For instance, interviews in the intensive care unit suggested good collaboration was regarded as availability and receptivity of one profession to and by another (Baggs & Schmitt, 1997). A study of interprofessional narratives illustrated that high collaboration was experienced by nurses and physicians alike when unplanned opportunities for joint problem-solving were available and appreciation for the knowledge and contribution of the other professional was demonstrated (McGrail, Morse, Glessner, & Gardner, 2009). Other qualitative studies found that establishing trust and respect between nurses and physicians promotes positive collaboration (e.g., Pullon, 2008).

Implications for Interprofessional Practice

• Nurses and physicians experience sub-optimal communication with one another that impedes opportunity and efforts toward interprofessional collaboration.
• The flexibility of physicians' schedules is discrepant with other health care providers' fixed schedules, lending to the perception that physicians are less team-oriented.
• Nurses feel more isolated from physicians than physicians do from nurses, citing poor understanding and an entrenched professional hierarchy as contributing factors.
• Physicians and nurses believe that interprofessional collaboration is desirable and can be enhanced by strong professional leaders who are willing and able to collaborate by example.
Other qualitative studies have indicated a range of more problematic experiences of nurse-physician collaboration, and how this is part of a wider historical system of power dynamics within which the physician maintains higher status and authority (Corser, 2000; Greenfield, 1999; Stein, 1967), and which endures (Reeves, Nelson, & Zwarenstein, 2008; Stein, Watts, & Howell, 1990; Stein-Parbury & Liaschenko, 2007). An interview study of medical residents, for example, found that their perceptions of nurses were consistent with nurses' experiences of being viewed in a mechanistic way, i.e., as a tool to carry out physicians' orders rather than as a professional with expertise (Weinberg, Miner, & Rivilin, 2009). It has also been suggested that sizeable proportions of nurses are dissatisfied with their interprofessional relationships with doctors (Sirota, 2008). This may be related to physicians' tendencies to de-emphasize relational aspects of patient care in favour of 'case knowledge', which emphasizes medical diagnostic and treatment-of-disease approaches (Stein-Parbury & Liaschenko, 2007). In the operating room, observational research showed how nurses use a variety of communication strategies, including not communicating at all, to negotiate constraints on their role autonomy in relation to physicians (Gardezi et al., 2009). Observations of nurses' disengagement from collaborative practice have found that physicians neglect to incorporate the core values of nursing practice into interprofessional care, thus affecting nurses' willingness to collaborate (Miller et al., 2008).
Despite substantial research on the topic, mixed methods techniques (e.g., Sandelowski, 2000) are used infrequently to understand nurse-physician collaboration. This paper reports quantitative and qualitative results from studying nurse-physician collaboration in general internal medicine (GIM) units. We have integrated survey and ethnographic methods to understand the range of collaboration experiences. We believe that an integrated analytic approach to nurse-physician collaboration allows us to quantitatively identify and qualitatively explore three meaningful components of collaboration as they shape and are shaped by the nurse-physician dyad: communication, accommodation, and isolation. Our synthesis of survey and ethnographic data offers new insights on the evolving nurse-physician relationship in acute care.
Methods
The aim of the study was to explore interprofessional collaboration in the GIM units of community hospitals. We analyzed survey and ethnographic data to understand how nurses and physicians rate one another on communication, accommodation, and isolation, and to qualitatively understand their collaboration experiences along these dimensions. Communication, accommodation, and isolation are three aspects of nurse-physician collaboration that some authors of this article identified earlier with confirmatory factor analysis (Kenaszchuk et al., 2010b). The study design was a sequential mixed methods approach with equal parts quantitative and qualitative, using survey and ethnographic methods.
Quantitative
The survey was fielded in the inpatient GIM units of five hospitals. We attempted to obtain a completed survey from all nurses and resident and attending physicians who worked full- and part-time during the study period; surveys were administered during interprofessional team meetings when the GIM unit administrator requested it.
Power and sample size calculations were not performed in planning for survey administration and data analysis. Based on past experiences using the scale, and on recent published literature, we anticipated obtaining sizable mean score differences (Makary et al., 2006), small standard deviations, and effect sizes in the medium range or greater with nurse and physician sample sizes similar to those in our study (Reader et al., 2007). Current literature also indicates that substantial mean score differences in nurse-physician mutual ratings were possible (O'Leary et al., 2010).
Qualitative
A purposive sample of key informants was recruited from each participating hospital. Participants were selected based on their professional roles on the health care team and within the medicine programs. The study researcher contacted potential participants by e-mail or telephone and invited them to an interview. Twenty interviews were completed with nurses and physicians, including six direct care nurses, six unit managers trained in nursing, one program director trained in nursing, and seven physicians (including one former and four current chiefs of medicine, and two staff physicians). Participating nurses were trained as registered nurses or advanced practice nurses. No individuals declined participation. Participants were recruited to the point of thematic data saturation, that is, when no new findings or themes emerged from the interview data. Five shadowing episodes (one per site) were also carried out to confirm or disconfirm interview data. All participants were made aware that the purpose of the research was to enhance our understanding of interprofessional communication and collaboration in the general internal medicine setting. There was no further specific information given to participants that might introduce any bias to the observational data. Observations were used to develop a more in-depth and rich understanding of participant experiences with interprofessional collaboration as described in the interviews. Observations included staff members who were not interviewed for the study but were made aware of the observations and their purpose in advance.
Quantitative
The outcome measurement scale is a major adaptation of the Nurses' Opinion Questionnaire (Adams, Bond, & Arber, 1995). The adapted scale uses a new 3-factor structure to measure dimensions of nurse-physician relationships in inpatient care settings: communication between nurses and physicians, accommodation by one group to the other's optimal work practices, and isolation resulting from excessive detachment between the groups (see Table 1, following page).
Summated scores were calculated based on five items in each of the communication and accommodation subscales and three items in the isolation subscale. Four response options were available for the items: strongly disagree (1), disagree (2), agree (3), and strongly agree (4). Maximum possible scores were 20, 20, and 12, respectively. The scale was designed for administration to multiple health professional groups. Its 13 items request nurses to rate physicians on the construct items and physicians to rate nurses likewise. This is a round robin design in which respondents are proxy reporters on the collaboration behaviours of group targets, and each group is a target of other-group respondents.
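The item-summing scheme above can be sketched in a few lines. This is an illustrative sketch only: the item values and the reverse-coded index are invented, not the published scale items.

```python
# Likert coding from the text: strongly disagree=1 ... strongly agree=4.
LIKERT_LOW, LIKERT_HIGH = 1, 4

def reverse_code(response, low=LIKERT_LOW, high=LIKERT_HIGH):
    """Flip a Likert response so that 1 <-> 4 and 2 <-> 3."""
    return low + high - response

def subscale_score(responses, reverse_items=()):
    """Summated subscale score: sum the item responses, reverse-coding
    the item positions listed in `reverse_items` (0-based indices)."""
    return sum(reverse_code(r) if i in reverse_items else r
               for i, r in enumerate(responses))

# A respondent answering 'agree' (3) to all five communication items:
print(subscale_score([3, 3, 3, 3, 3]))                      # 15 of 20
# Same pattern, but with a (hypothetical) reverse-coded fifth item:
print(subscale_score([3, 3, 3, 3, 3], reverse_items={4}))   # 14 of 20
```

With five 4-point items the communication and accommodation subscales reach the maximum of 20 noted above, and the three isolation items reach 12.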
Qualitative
Ethnographic methods were used to develop contextually relevant understandings of participants' beliefs and behaviours (Hammersley & Atkinson, 1995). We used semi-structured interviews and participant shadowing to understand nurses' and physicians' experiences with collaboration. Interview questions were designed to elicit participants' experiences with communication and the collaborative nature of their work. Shadowing participants on the wards was subsequently used to contextualize and triangulate interview findings.
Data were collected between June 2008 and October 2008. Interviews were arranged at the convenience of participants. Interviews lasted 25-45 minutes and were recorded by handwritten notes, which were immediately transcribed into reflective, reconstructed field notes by the researcher (Sanjek, 1990). Observational data were collected following the interviews in order to further explore interprofessional communication patterns and the nature of the GIM context, as well as to compare the emerging findings from the interview data. For instance, if an interviewee described communication tensions during interprofessional rounds, the observer would shadow that participant during rounds to gain insight into his or her experience therein. Shadowing occurred on weekdays only. Descriptive observational notes were written by hand and later transcribed into reconstructed field notes by the researcher (Sanjek, 1990).
Ethical considerations
Research ethics approval was obtained from each participating hospital. For the survey, participant anonymity was achieved by requesting minimal personally identifying information. A statement on the survey cover page indicated that submitting the survey was implied consent to participate. For the qualitative component of the study, informed consent was obtained from all interview participants. All transcripts and field notes were anonymized.
Quantitative
Item responses were screened for missing data. Among physicians no more than 4% of returned surveys had item-level missing data within a subscale. Among nurses the figure was 12%. The 13 scale items were analyzed for data missing completely at random (MCAR) using Little's (1988) omnibus test. The chi-square statistic was not statistically significant (χ² = 328.4, P = .29), suggesting that data were missing completely at random, although this test is not definitive (Enders, 2010). Listwise deletion of observations with any missing item-level data was employed for the main analyses. Replication analyses were performed using other methods for missing data.
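The screening and listwise-deletion step can be illustrated with a toy example (the response rows below are invented; Little's MCAR test itself requires a dedicated statistical routine and is not reproduced here):

```python
# Each row is one returned survey's item responses for a subscale;
# None marks an item-level missing value.
surveys = [
    [3, 2, 4], [2, 2, 3], [4, None, 4], [3, 3, 3], [1, 2, 2],
    [4, 4, 4], [2, 3, 2], [3, 3, None], [2, 2, 2], [3, 4, 3],
]

def screen_missing(rows):
    """Return (fraction of surveys with any missing item, complete rows).
    Dropping the incomplete rows is listwise deletion."""
    incomplete = sum(any(v is None for v in row) for row in rows)
    complete = [row for row in rows if all(v is not None for v in row)]
    return incomplete / len(rows), complete

frac, complete = screen_missing(surveys)
print(frac, len(complete))   # 0.2 8
```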
Internal consistency reliability of nurse and physician responses was estimated with Cronbach's coefficient alpha. The structural equation modelling approach of Maydeu-Olivares, Coffman, Garcia-Forero, and Gallardo-Pujol (2010) was used to calculate the 95% confidence interval (CI) for alpha and to test the significance of the difference between alpha from physicians' and nurses' mutual ratings on subscales (Fan & Thompson, 2001).

[Table 1 fragment: a sample item read "^<They> would not be willing to discuss their new practices with <us>." Note: ^ reverse-coded; terms in angle braces (<…>) are replaced with 'nurses' and 'physicians' according to the respondent and target profession.]
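Coefficient alpha itself is straightforward to compute from item-level data. A minimal stdlib sketch (the structural-equation CI machinery cited above is not reproduced):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k items: items[i][j] is respondent j's answer
    to item i.  alpha = k/(k-1) * (1 - sum(item variances)/var(totals));
    the variance divisor cancels in the ratio, so population variance
    is used throughout."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three perfectly parallel items give the ceiling value:
print(round(cronbach_alpha([[1, 2, 3, 4]] * 3), 3))   # 1.0
```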
We used robust statistics (Wilcox & Keselman, 2003) to test mean scale score differences for nurse-physician mutual ratings, with Yuen's t-test statistic (Yuen, 1974) and a robust standardized effect size (2005b). In a two-group problem the method permits the standard deviation of either group to be used to standardize the difference. Because of the elevated prominence of nurses in medicine wards, we scaled the effect size on nurses' data. Therefore we used the variances of nurses' ratings to calculate effect sizes and estimate confidence intervals for each subscale. The probability of superiority was read from Grissom's (1994) Table 1 based on the obtained robust d effect sizes.
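Yuen's statistic is a Welch-type t-test on trimmed means with Winsorized variances. A stdlib sketch with the conventional 20% trimming follows; the scores are invented, and the p-value lookup against the t distribution with the returned degrees of freedom is omitted:

```python
from math import floor, sqrt

def trimmed_mean(x, trim=0.2):
    """Mean after dropping the lowest and highest `trim` fraction."""
    xs = sorted(x)
    g = floor(trim * len(xs))
    kept = xs[g:len(xs) - g]
    return sum(kept) / len(kept)

def winsorized_var(x, trim=0.2):
    """Sample variance after clamping each tail to the trim boundary."""
    xs = sorted(x)
    n, g = len(xs), floor(trim * len(xs))
    w = [xs[g]] * g + xs[g:n - g] + [xs[n - g - 1]] * g
    m = sum(w) / n
    return sum((v - m) ** 2 for v in w) / (n - 1)

def yuen(x, y, trim=0.2):
    """Yuen's (1974) two-sample trimmed-means t statistic and df."""
    hx = len(x) - 2 * floor(trim * len(x))
    hy = len(y) - 2 * floor(trim * len(y))
    dx = (len(x) - 1) * winsorized_var(x, trim) / (hx * (hx - 1))
    dy = (len(y) - 1) * winsorized_var(y, trim) / (hy * (hy - 1))
    t = (trimmed_mean(x, trim) - trimmed_mean(y, trim)) / sqrt(dx + dy)
    df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
    return t, df

# Invented subscale scores (not the study's data):
physicians = [14, 13, 15, 14, 12, 16, 13, 14, 15, 13]
nurses = [12, 13, 11, 14, 12, 13, 12, 11, 13, 12]
t, df = yuen(physicians, nurses)
print(t > 0)   # True: physicians' trimmed mean is higher
```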
Qualitative
The qualitative data were analysed using an inductive approach (Hammersley & Atkinson, 1995) and a constant comparative analysis of the data was performed (Pope, Ziebland, & Mays, 2000). One author read and coded interview transcripts and observational field notes for common themes relating to nurse-physician and physician-nurse experiences with collaboration. The codes were shared with two members of the research team for discussion and refinement of themes.

To synthesize the quantitative and qualitative findings we aligned our three survey subscales with our thematic categorization of the qualitative data. Clusters of qualitative data that reflected facets of communication, accommodation, and isolation were explored to generate the analytic categories presented here. We used these facets as sensitizing concepts to capture nuances in the qualitative data pertaining to the scale items and their factors. Here, integrating quantitative and qualitative findings serves the conceptual purpose of complementarity (Sandelowski, 2000), elaborating the results of the quantitative study and exploring the relationship between the two.
Quality
Psychometric properties and scale items of the survey were published earlier (Kenaszchuk et al., 2010b).
Internal consistency reliability for each of the subscales was acceptable (>.70). Qualitative findings were validated in two ways. First, multiple data collection methods (interviews and observations) provided one form of triangulation, which enhanced the trustworthiness of the data (Creswell & Miller, 2000).
Observations were conducted sequentially and used to confirm or disconfirm emergent findings from the interviews.Second, different researcher perspectives were used for validation in the analysis stage where quantitative and qualitative findings were integrated.
Results
Quantitative and qualitative results are presented together.Themes generated from qualitative data that align with the survey constructs are elaborated to provide deeper insight into nurse-physician and physician-nurse collaboration.
Survey Response and Reliability
Surveys were returned from 51 physicians and 190 nurses. Results are in Table 2 (following page). There were 49 useable surveys returned from physicians for each of the three subscales (communication, accommodation, and isolation) and between 169 and 178 from nurses (Table 2, column 1). Most of the coefficient alpha values were near 0.60; the mean was 0.63. The 95% CIs were usually wide and indicated that the upper limits were likely >0.70 (Table 2, column 2). In hypothesis tests, coefficient alpha differences for nurses and physicians were not statistically significant (Table 2, column 3).
Communication
For communication, nurse-physician mutual mean score ratings were equivalent (12.8); the difference was small and not statistically significant.
Timing
Qualitative data revealed that nurses did not generally feel that patients were discussed with physicians in a timely fashion. A majority of nurse participants felt that in spite of being able to page physicians to discuss patient care, pages were either not returned within a reasonable amount of time or were returned with contempt. Two nurse informants described delayed nurse-physician communications that affected the timely exchange of patient information:

There are some doctors that are not as…diligent or caring as others. They don't return pages and we only page when it's serious. We'll get pretty mad when it's not returned and then we have to send a patient to the ICU as a result. I believe that if you take responsibility as a doctor then you'd better turn that damn pager on. [RN]

Most nurses can give you an example of a time that they paged a physician and got yelled at for it. One physician gets agitated if the nurse who paged him doesn't have answers to his questions when he asks them. Some physicians will just hang up on them. [Unit Manager]

By comparison, one physician felt that he was readily available for discussion with nurses, explaining, "People don't need to page me really, they will just see me on the floor and approach me with questions."
Patient care discussion
Nurse managers agreed that while some doctors were effective communicators willing to review patient information jointly with nurses, this did not apply universally to all doctors on their units:

Some physicians are excellent with the charge nurse, for example, sitting down to go over patients. Other physicians, it's like you don't even know they are on the floor. [Unit manager]

From the physician perspective there was a similar appreciation for patient care discussion during planned patient rounds.
Skill
Nurse descriptions of physicians' communication skill ranged from offensive to non-existent. This was particularly noted on consultant-staffed wards:

[We] have some physicians who are mean and rude and no one wants to approach them. Some of them come up to the unit and everyone scatters. There is one person who is a great physician but he's hot and cold so nurses will get the charge nurse to approach him for them. Some nurses have had the physicians yell at them directly and so will not approach them again. [Unit manager]

On the non-hospitalist units the communication is not there. The physicians are not speaking to the direct care nurses or the allied health unless they are approached by them. [Unit manager]

A physician also expressed frustration with nurses' communication skill by saying that she struggled to understand why one particular nurse received "all of these nursing awards when she's the most useless person."
Accommodation
For accommodation, physicians' mean rating of nurses (13.9) was about 1 point higher than nurses' mean rating of physicians, and the difference was statistically significant (P<.01). The effect size was 0.44, with a 95% CI from 0.20 to 0.70. The probability of superiority was 0.62.
Based on these scores, our qualitative data yielded the following insight into the area of accommodation.
Scheduled versus unscheduled care time
Front line nurses and nurse managers believed collaboration with some physicians was significantly impacted by physicians' schedules, which were independent from those of other team members. In some units the physicians were said to "come and go" and were unavailable to participate in pre-planned discussions. Nurses believed that physicians did not consider others' schedules; they made important patient-related decisions without team discussion:

The key for this site is physician engagement to bring the team together. We are still floundering at the end when physicians come in and discharge the patient without giving the team enough notice. [RN program director]
Effective nurse-physician communication was sometimes opportunistic, performed at the convenience of physicians passing the nursing station:

A male MD is leaving the nurses' station when the charge nurse calls out to him, "Dr. [last name]," she says. She asks for an update on a patient, which he gives before leaving. [Observational field note]
Isolation
For the isolation subscale, lower scores indicated worse nurse-physician working relationships and more isolation. Nurses' mean rating of physicians was 1.3 points lower than physicians' mean rating of nurses, 7.3 versus 8.6. The 95% CI for the difference was 1.0-1.7 points (P<.0001). The effect size was 0.75, with a 95% CI from 0.52 to 0.97. The probability of superiority was 0.70.
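The reported probability-of-superiority values follow from the normal model underlying Grissom's (1994) table, P = Φ(d/√2). A small check (the standard normal CDF is computed via the error function):

```python
from math import erf, sqrt

def prob_superiority(d):
    """P(a random member of the higher-rated group scores above a random
    member of the other group) for two normal populations separated by
    d standard deviations: Phi(d / sqrt(2))."""
    z = d / sqrt(2.0)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z

print(round(prob_superiority(0.44), 2))   # 0.62 (accommodation)
print(round(prob_superiority(0.75), 2))   # 0.70 (isolation)
```

Both values match the figures reported for the accommodation and isolation subscales.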
Based on these scores, our qualitative data offered illumination in three areas of isolation: leadership support, physician authority, and changing perceptions.

Leadership support

Another physician chief confirmed the importance of willingness to discuss issues between the nursing and physician leaders to enhance collaboration on the front line:

When a nurse handles herself poorly at a code, I will speak to her and then if I didn't feel satisfied that it was a one-time event I would speak to the charge nurse or the manager. I feel I have outstanding communication with my co-administrators. If they have a problem with a doctor they will take it to me. [MD]
Physician authority
Physicians' collaboration efforts were perceived by interviewees to be poor in units where physicians exhibited attitudes of authority and upheld "traditional" ideas about the role of the nurse and her standing in the professional hierarchy. Nurses believed that these physicians viewed themselves as being above the nurses. This created hostility and isolation. One nurse said, "I watch the nurses with the physicians and [physicians] treat them like they're clerical staff. I'm proud when the nurses say 'it's not clear' or 'I don't agree with that.'" A physician also described how poor role understanding and lack of respect for a new nursing role among traditional-thinking physicians contributed to detachment between his unit staff:

It was really difficult to create an appreciation for the nurse practitioner among the internists because she wasn't a doctor. There's a general lack of connectedness between the doctors, nurses and allied health staff. [MD]
Changing perceptions
Nurses and physicians believed that positive changes in nurse-physician collaboration occur as physician perceptions of the nurse's role modernize. In some units, a lack of traditional barriers and nursing professionalization helped decrease nurse-physician isolation:

There used to be a bullying mentality on medicine but today the communication with the doctors is good. The informal communication lines are strong and it's pretty darn positive and pleasant to work here. There's no power struggle. [Unit Manager]

We also have a great nursing leader and a culturally diverse staff. There used to be the traditional barriers between physicians and other staff and a gender barrier as well but this has disappeared. [MD]

In addition, some nurses believed more junior physicians are sensitized in training to issues of interprofessional collaboration, which is enhancing nurse-physician relations.
Discussion
Previous nurse-physician studies have primarily been single-method designs that used survey or interview data to investigate interprofessional collaboration. Previous quantitative research finds inequality between nurse and physician ratings of collaboration, with nurses reporting lower levels of satisfaction with physicians' collaboration than physicians report of nurses' collaboration. Some qualitative studies support this finding (Weinberg et al., 2009); others highlight mutually positive experiences of nurse-physician collaboration (McGrail et al., 2009).

The synthesis of our quantitative and qualitative techniques has allowed us to identify forms of nurse-physician collaboration quantitatively and subsequently to illustrate them and their nuances with qualitative data on those participants' experiences.
Our survey results support previous research on discrepant nurse-physician collaboration ratings, particularly in the areas of accommodation and isolation. Qualitative data from the same units revealed a number of nuances that can be targeted to enhance interprofessional collaboration. Nurses and physicians mutually rated one another below average on the communication items. Our qualitative exploration of communication identified areas for improvement in terms of the structure and processes of communication. For nurses, communication timing and delayed physician response to paging was a conflict point that obstructed collaboration. In addition, many physicians were perceived to lack the communication skill necessary for effective collaboration. Physicians, however, did not express these frustrations to the same degree as nurses. For both groups, planned one-on-one patient rounds were experienced as positive communication opportunities, and both appreciated the ability to discuss patients at length. This can be attributed to the fact that regular interprofessional rounds meet aspects of the communication needs of both groups: they are structured, pre-planned conversations that focus on exchanging specific information about patients.
Physicians rated nurses higher on accommodation than nurses rated physicians. In some units physicians divided their time between inpatient and outpatient care while nurses cared exclusively for inpatients. Our interviews revealed that the discrepancy between physicians' flexible schedules and others' fixed schedules can cause strain. When physician work became more patient-oriented, it began to appear less team-oriented: when, for instance, nurses and other health professionals were uninformed about a physician-led process like patient discharge, they perceived this to be poor physician collaboration. Both physician and nurse interviewees believed that strong support from professional practice leaders, and leaders' ability to work together, enhanced collaboration.
Nurses rated physicians lower on the isolation items than physicians rated nurses. Nurses felt more disengaged from physicians than physicians from nurses. Both groups cited poor role understanding and the entrenched professional hierarchy among many senior physicians as contributing factors. Yet our qualitative findings also highlighted an important shift in this facet that was not apparent in the survey results. Physicians' attitudes about the nursing profession and nursing roles are changing. In some units the variation in nurses' experiences of isolation from junior physicians was attributed to the change in junior physicians' attitudes toward the nurses with whom they work. A comparative look at the differences between junior (<10 years in practice) and senior physicians' approaches to interprofessional work, as well as between junior and senior nurses', is an area for future research that can explore more closely the evolution of this aspect of the nurse-physician relationship.
Our measurement scale tapped communication, accommodation, and isolation as facets of nursephysician collaboration.Incongruous nursephysician mutual perceptions of team collaboration are familiar to health services researchers.Less well known are the scale-free standardized effect sizes associated with nurse-physician differences.
Earlier studies of nurse-physician relationships were not designed with mutual-group ratings and effect sizes in mind.Several recent articles either test significance of mutual ratings differences without presenting effect sizes (Makary et al., 2006;O'Leary et al., 2010), or present effect sizes for items that were not clearly developed as elements of summated scales whose data validity and reliability (Reader et al., 2007) could be examined.
By synthesizing quantitative and qualitative data, we attempted to provide new perspectives on interprofessional care by using ethnographic data to contextualize effect sizes.On two scale facets there were mean score differences of mutual ratings that translated to effect sizes conventionally considered medium to large.The effect size CI for isolation covered a range between medium and large.The probability of superiority estimates calibrate nursephysician relationships clearly on two subscales.There was a substantially greater probability that a physician's ratings of nurses' accommodation and isolation was higher than a nurse's ratings of physicians' .Perhaps as expected, physicians were more likely to judge nurses positively than they were to be judged positively by nurses.This relationship was most apparent when isolation was considered.These effect sizes are likely generalizable and representative of other inpatient medical/surgical units.For example, we were able to calculate d-type effect sizes (Cohen,1988) from published statistics relating to two items resembling items in our study that Thomas, Sexton, and Helmreich (2003) administered to critical care nurses and physicians.Both effect sizes were in medium-to-large ranges.Mutual nurse-physician discrepancies in perceptions of working practices of the other group seem to be pervasive.Therefore, the relevant communitiesincluding academic researchers, hospital professional practice leaders, and clinical educators-should be aware that mutual differences on some dimensions of the nurse-physician relationship may be substantial in quantitative effect size terms and are likely larger than previous quantitative studies suggest.
These results are relevant for future mixed methods research on nurse-physician collaboration. In earlier work, we analyzed rank agreements across hospital sites between measurement scale data from nurses and qualitative observation and interviews (Kenaszchuk et al., 2010a). Agreement on rank orderings was highest, and significantly greater than chance, between fieldwork observation and the scale constructs of accommodation primarily and isolation secondarily. Consequently, we believe that modest statistical agreement between qualitative and quantitative data collected in inpatient settings indicates that medium or large effect size differences in mutual nurse-physician scale ratings could be co-existent. In other words, other qualitative results similar to ours may well characterize hospital settings where nurse-physician differences of mutual perceptions are medium or large when interpreted as effect sizes. Furthermore, the variation of experiences revealed in our qualitative results suggests that the implications of nurses rating physicians lower on collaboration scales are not entirely clear.
Quantitative and qualitative methods are helpful for understanding this topic.
Interventions to improve health services quality may be implemented as cluster-randomized trials delivered to practicing ward teams. When teams, wards, or hospitals are used in group-randomized trials, researchers can employ quantitative and qualitative methods in planning stages. If they investigate consensus between quantitative and ethnographic data sources, they should expect to find at least modest agreement if cross-site data are transformed to relative rank orderings. Overall nurse-physician discrepancies should be expected, and could vary in size between research sites. Our multidimensional measurement scale can identify nurse-physician relationship problems for quality improvement interventions. An integrated multi-method approach to this topic can achieve a more meaningful understanding of the spectrum of interprofessional collaboration in the clinical setting, and begin to help conceptualize its improvement.
Limitations
The qualitative data here may be limited by the small sample size. However, key physician and nurse informants in this study had many years of experience from which to contextualize the quality of their interprofessional relations and work.
The response rate to the interprofessional survey could not be determined with confidence. At some hospitals it was difficult to enumerate the nurses and physicians who worked in the GIM units and to monitor survey distribution. The overall response rate among both professional groups is believed to be less than 50%, and higher among nurses than physicians. It is possible that the obtained responses are not representative of survey non-responders.
The project was a multi-site design. The analysis did not incorporate possible between-hospital differences. In multilevel modeling terminology, the GIM units are clusters. It is reasonable to expect that within GIM units participants' survey responses are non-independent to some degree, i.e., correlated. Non-independence may have been consequential for reported statistical significance levels. Other nurse-physician studies should consider whether the data have a hierarchical structure that could be modeled with a mixed-effects model.
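The within-cluster non-independence described here is conventionally summarized with an intraclass correlation coefficient (ICC). A minimal one-way ANOVA ICC(1) sketch for equal-sized clusters follows; the GIM-unit groupings and scores are hypothetical, and this is not the analysis the study performed:

```python
def icc1(groups):
    """One-way ANOVA ICC(1) for equal-sized clusters of survey scores."""
    k = len(groups)                # number of clusters (e.g. GIM units)
    n = len(groups[0])             # respondents per cluster
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    # Between-cluster and within-cluster mean squares
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Perfectly clustered scores: responses within each unit are identical
print(icc1([[1, 1, 1], [5, 5, 5], [9, 9, 9]]))  # 1.0
```

A non-trivial ICC means effective sample sizes are smaller than the raw respondent count, inflating nominal significance levels - which is exactly the concern raised above about the reported tests.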
Conclusion
This mixed method approach to understanding the nurse-physician dyad in general internal medicine has identified and elaborated upon three dimensions of collaboration: communication, accommodation, and isolation. The synthesis of quantitative and qualitative findings is useful for identifying sites of tension between the professions, and subsequently exploring the meaningfulness of specific experiences that contribute to these. In our global evolution toward interprofessional patient centered care, this methodological approach can be used to understand the significance of health care professionals' attitudes and beliefs about interprofessional collaboration, and to develop tailored interventions that will maximize opportunities for them to engage with one another in mutually meaningful ways to achieve this.
Table 1. Interprofessional Collaboration Scale factors and items
Table 2. Statistical test results for nurse-physician mutual ratings: 3 subscales
Note: 1 = Z statistic; 2 = Yuen's test; 3 = robust Cohen's d based on variance of nurse distribution; 4 = probability of superiority.
Insights from Participatory Prospective Analysis (PPA) workshops in Nepal
• Forestry experts identified several external factors likely to impact the rights of community forest user groups under the new federal structure. These factors include national legal frameworks, macroeconomic policies, the emergence of new sub-national governments, and a changing political context. Experts also considered how the future of community forestry might be influenced by internal factors, such as the rules of community forest user groups, governance arrangements, strategies, plan implementation, conflict management systems, and relationships with local governments.
• Participatory Prospective Analysis was found to be a good methodological tool for effective planning, and participants thought it could improve local environmental planning. With some customization and contextual refinement, it can be adopted by community forestry groups, local government ward offices and municipalities to assist Nepal's forestry sector in its transition to a decentralized system.
Background
Introduced in the early 1990s in response to serious environmental degradation, Nepal's community forestry program has become the country's most successful forest tenure reform initiative, hailed internationally for its contribution to biodiversity conservation and climate change mitigation efforts. Almost 40% of Nepal's population has been involved in managing a third of national forests through more than 22,000 locally formed forest user groups (DOFSC 2018).
However, Nepal has experienced considerable social and political conflict during most of this period. With the absence of local governments for almost two decades, community forestry institutions have been the only functioning rural bodies to sustain grassroots democracy, to offer vital social services and community infrastructure, and to support livelihood activities. Recently, Nepal emerged from this protracted conflict, adopted a federal political structure and has established stable governments at federal, provincial and local levels. Under its new constitution, provincial and local governments enjoy substantial power, particularly in development activities, service provisioning and natural resource management. Now, the key concern of forest user groups is whether their existing rights will be secured under federalism in Nepal. The newly emerged provincial governments and local municipalities will have greater authority and responsibility in community forestry than under the highly centralized system of former times. However, the details of their roles, and what services they will offer to forest user groups, are still evolving. Although there is considerable optimism regarding the ability of communities and local governments to cooperate and collaborate, there are also fears of power struggles that could reduce the role of community forest governance. Likewise, most of the debate on forest governance under federalism has taken place at a national level, with limited feedback from provincial and municipal actors. Against this background, CIFOR and ForestAction aimed to better understand emerging issues at the grassroots level, to include them in provincial and national forest policy making, and to introduce Participatory Prospective Analysis (PPA) as a process for collaborative environmental planning at a local level.
The team held workshops in Nepal's hill and Terai districts before bringing analysis and insights to stakeholders at the municipal, provincial and national levels.
This collaborative project formed part of a series of research and engagement activities under the CIFOR-led Global Comparative Study on Forest Tenure Reform (GCS Tenure, www.cifor.org/gcs-tenure). The aim is to improve understanding, communication and stakeholder engagement in developing countries in order to support stronger tenure security, livelihoods and sustainable forest management.
This infobrief summarizes the PPA process and provides key insights from four PPA workshops held in two municipalities in Nepal. The next section provides a brief outline of the PPA process. This is followed by a summary of the workshops, which are compared and contrasted across the two sites. The final section reflects on the workshop outcomes.
Participatory Prospective Analysis
PPA is a planning tool that aims to anticipate plausible scenarios in the future. Its foresight approach explores future alternatives by developing different visions and their associated key components (or drivers). The PPA process identifies relevant stakeholders and recognizes them as experts. The exploration of future options is then used to conduct exercises for planning strategies and actions in pursuit of an ideal future selected by the collective.
PPA was implemented in a series of workshops in Nepal and was instrumental in developing a shared understanding among stakeholders with regard to many dimensions of forest tenure reform, as well as factors affecting tenure security. Discussion about the drivers of change is key to building a better future while being cognizant of the other possible future scenarios.
The PPA process involves working through these six sequential steps (see Figure 1). These steps are described in more detail in Box 1.
In Nepal, facilitator training was organized in November 2016, attended by 23 professionals and practitioners representing the government forestry service, NGOs, academic institutions and forestry networks. PPA workshops were then implemented in Buddhabhumi municipality (Kapilvastu district) and Chautara Sangachowkgadhi municipality (Sindhupalchowk district) (see Figure 2) to represent the regions of Terai and the hills respectively. Terai and the hills were chosen due to their significant differences in the productive potential of forests; the value of forest products; the socio-economic complexities; and the role of actors in forest governance. Experts attending the workshops -held in June and July 2018 -included leaders of selected forest user groups, representatives from local municipalities, the government forest service, NGOs and forest-based entrepreneurs.
The PPA workshops followed the same format in both locations. After PPA implementation workshops, a meeting was held at the municipality level in each district, followed by state-level meetings, to inform stakeholders and to solicit feedback.
Defining the forces of change
Participants were asked to discuss their potential hopes and fears for the coming decades in relation to community forestry rights. Local stakeholders expressed fears that forest user groups might eventually lose their rights and autonomy under federalism, due to conflicting regulatory provisions and corresponding stakeholder claims.

Box 1. The six steps of the PPA process (see Bourgeois et al. 2017 for detailed guidelines)

Defining the system

The first step in the process is to clarify four questions: what, where, how long and who (which collectively define the system). The main 'what' question used to define the system is: What could be the future scenarios for forest tenure security? The 'where' question leads to defining a geographical territory. In the case of PPA implementation in Nepal, the territory covered multiple community forest user groups within a specified municipality. 'How long' defines the time period considered for the future scenarios; in our cases, stakeholders defined a 10-year time frame. 'Who' means identifying relevant stakeholders, whose stakes are linked with the future of forest tenure security.
Identifying and defining forces of change
The second step in the PPA process is to list forces of change, which have the capacity to significantly transform the system in the future. To identify these forces, the stakeholders were asked to discuss their greatest fears and hopes for the future of forest tenure security in their area over the next 10 years. The factors that potentially trigger a change toward a hope or fear are known as forces of change. These forces could be external or internal to the system. The external forces are beyond the capacity of local stakeholders to influence, so the definition of driving forces and the scenario building are based largely on the internal forces.
Selecting driving forces
Once the list of the internal forces is finalized and each force is defined, interrelationships among these forces are discussed. The interrelationship is defined by evaluating the direct influence of each force on the others (i.e. whether Force A has a direct influence on Force B and vice versa). A value of 1 is given when there is direct influence, otherwise a value of 0 is given. This binary evaluation is then used to analyze the influence and dependence of each force by using structural analysis software. The value generated for each force in terms of influence and dependence is then plotted on a graph. The forces with low influence and low dependence are called 'outliers'; forces with low influence and high dependence are called 'outputs'; forces with high influence and high dependence are called 'leverages'; and forces with high influence and low dependence are called 'drivers.' These driving forces are then used in developing the scenarios.
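The influence/dependence calculation behind this quadrant classification can be sketched from the binary cross-impact matrix alone. This is an illustrative re-implementation, not the actual PPA structural analysis software (which also weighs indirect influence); the four example forces and their impact matrix are hypothetical:

```python
def classify_forces(names, matrix):
    """matrix[i][j] == 1 means force i directly influences force j.
    Influence = row sum, dependence = column sum; quadrants are cut at the
    mean of each score, as on the PPA influence/dependence plot. Forces on
    a boundary fall into the lower quadrant."""
    n = len(names)
    influence = [sum(row) for row in matrix]
    dependence = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    mean_inf = sum(influence) / n
    mean_dep = sum(dependence) / n
    quadrant = {}
    for i, name in enumerate(names):
        hi_inf, hi_dep = influence[i] > mean_inf, dependence[i] > mean_dep
        quadrant[name] = ("driver" if hi_inf and not hi_dep else
                          "leverage" if hi_inf and hi_dep else
                          "output" if hi_dep else "outlier")
    return quadrant

forces = ["user group rules", "governance system",
          "working strategy", "plan implementation"]
impact = [[0, 1, 1, 1],   # rules influence every other force
          [0, 0, 0, 1],
          [0, 0, 0, 1],
          [0, 0, 0, 0]]   # implementation influences nothing here
print(classify_forces(forces, impact))
```

With this toy matrix, 'user group rules' lands in the drivers quadrant and 'plan implementation' in the outputs quadrant, mirroring the roles those forces played in the workshops.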
Building future scenarios
A future scenario is an anticipated plausible state in the future of forest tenure security and is produced by combining the coherent states of the driving forces. Hence, the first step to building a future scenario was to identify different contrasting states that a driving force could have in the future. For example, the contrasting states identified by the participants for the driving force 'the rules of user group' are listed in the table below. Once different states were agreed upon for each driving force, the next step was to identify the combinations of states that cannot co-exist. While developing scenarios, those incompatible states were discarded. Each scenario was then developed by combining one particular state for each driving force. The multiple scenarios were therefore constructed with a combination of different states of the driving forces. The next step was to develop a narrative for each scenario - e.g. how this would evolve over the selected timeline. The participants then selected the most desirable and undesirable scenarios before working on the action plans.
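The combinatorial step described here - taking one state per driving force and discarding any combination that contains an incompatible pair of states - can be expressed compactly. The two forces, their states and the incompatibility below are illustrative placeholders, not the workshops' full sets:

```python
from itertools import product

def build_scenarios(states, incompatible):
    """states: {driving force: [possible future states]};
    incompatible: set of frozensets of states that cannot co-exist."""
    forces = list(states)
    scenarios = []
    for combo in product(*(states[f] for f in forces)):
        chosen = set(combo)
        if any(pair <= chosen for pair in incompatible):
            continue  # discard combinations containing incompatible states
        scenarios.append(dict(zip(forces, combo)))
    return scenarios

states = {
    "rules of user group": ["pro-poor", "dominating"],
    "governance system": ["transparent", "elite-captured"],
}
# Assumed incompatibility: pro-poor rules cannot co-exist with elite capture
incompatible = {frozenset({"pro-poor", "elite-captured"})}
print(len(build_scenarios(states, incompatible)))  # 3
```

Each surviving combination is then given a narrative, as the workshops did for the scenarios in Boxes 2 and 3.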
Action plan
Different community forest user groups (CFUGs) and local government representatives were present in the workshops and had agreed to work in their respective institutions on a pathway from scenarios to action. Together, however, they developed a list of common strategies that they should apply across institutions in order to make the desirable scenario a reality.
Driving force A. Rules of user group - contrasting states identified by the participants:

1. Pro-poor: inclusive, democratic and powerful; sustainable forest management; pro-poor, livelihood-focused provision
2. Dominating (Haikamvadi): self-centered, rich-focused; autocracy of leadership; no forest management system
3. Land use change: policies of converting forest land to other land uses
4. Anarchism: no more rules and regulations

There were also fears that local governments may start levying more taxes, and that partisan politics could jeopardize the success of community forestry. On the other hand, stakeholders were also hopeful that local government services would be provided faster, access to services would be easier, tenure security would be stronger, and collaboration with local governments would be simpler and more effective. These reflections were used to identify potential forces of change, many of which were related to the national legal frameworks, macroeconomic policies and changing political context. In order to focus on the aspects that they were able to influence, participants created a list of internal forces.
These were largely the same across the two municipalities, with both identifying the following key internal factors as especially relevant: forest management technology; working strategy; plan implementation; community forestry's contribution to community development; benefit sharing/distribution; information and communication; user group governance system; relationships between stakeholders; forest condition; condition of forest enterprises; user group rules; transparency; allocation of rights and responsibilities; conflict management system; awareness levels; and participation.
Chautara Sangachowkgadhi municipality identified a few additional forces of change, such as user group leadership; inclusion in community forestry; condition of physical infrastructures; and the contribution of local government representatives. Buddhabhumi municipality stakeholders additionally identified the relationship between local government and community forest user groups; attitude; self-motivation; sources of motivation; types of rights; the relationship between rules and regulations; and financial management.
Overall, the forces that the experts identified as most relevant can be summarized in terms of extent and security of rights obtained; group governance; relationships with other actors; behavioral aspects; the condition of forest resources and infrastructure; technology; and the market.
Unveiling the driving forces
The next step was to assess the interdependence between the forces of change and to identify five or six driving forces based on their influence on other factors. After finalizing 20-23 forces of change, data were entered in the PPA structural analysis software, which revealed the direct and indirect influence, dependence, strength and weighted strength. Using a combination of direct and indirect influence and dependence, this software highlighted the status of each force of change in terms of its influence and dependence on other forces.
Five forces of change were identified in both municipalities as key driving forces (see Figure 3): user group rules; user group governance system; working strategy; conflict management system; and plan implementation. Buddhabhumi municipality also identified the relationship between local government and community forest user groups as a key driving force.
In the following stage of developing scenarios and strategies, those forces that lie in the 'leverages' quadrant (e.g. participation, attitude, level of awareness, and information and communication) are key to improving the effectiveness and efficiency of driving forces, while 'outputs' forces should benefit from interventions.
Figure 2: Map of municipalities where PPA activities were conducted
From driving forces to future scenarios
A scenario is a description of how the future may unfold according to an explicit, coherent and internally consistent set of assumptions about key relationships between driving forces. Future scenarios of community rights over forests under federalism were constructed with the premise that it was possible to foresee plausible futures based on the different potential states of the driving forces. Experts worked to identify all possible scenarios based on all plausible combinations of the driving forces' states. Once there was agreement on scenarios considered to be sufficiently contrasting and diverse enough to explore, each scenario required a more complete description (i.e. a narrative). Experts developed scenario narratives based on the combination of diverse states of the driving forces, while also thinking about the influence of other forces of change. Examples of the most and least desirable scenarios are presented here (see boxes 2 and 3 respectively).
Developing strategies
Facilitators helped experts to identify strategies that support desirable scenarios and avoid undesirable ones. In total, participants finalized 15 strategies for Buddhabhumi municipality of Kapilvastu district (K) as well as 17 strategies for Chautara Sangachowkgadhi municipality of Sindhupalchowk district (S) to strengthen the rights of community forest users in federal Nepal.
Inclusive and integrated planning: Experts in both locations will adopt strategies for an inclusive, informed, integrated and nested planning process. This will include the poor, women, Dalits and other forest-dependent groups; integrate forest management plans with community development plans and other sectoral plans; and maintain the coherence of plans between community forest user groups and local governments. Nested planning will integrate community forestry plans with those of local governments.

Box 2. Most desirable scenario: Our prosperous community forest

In 2028, all of Chautara Sangachowkgadhi municipality's forest is very active and almost all community forest user groups have implemented a sustainable forest management system. The governance system has been enhanced with proper transparency and accountability. Inclusive democracy and collective action are institutionalized in community forests. Coordination, co-existence and cooperation are developed across the municipality, district forest office and all other stakeholders. A strategic plan for integrated resource management has been prepared in a participatory way, and communities are actively implementing these plans while taking full ownership. The use of advanced technology in forest management and enterprise development has been contributing to community development and the livelihood improvement of forest-dependent poor people. A remarkable contribution has been made to the local and national economy, while the condition and availability of forest, biodiversity and ecosystem services have been sufficiently improved.
Adopting principles of good governance: Experts in both places placed an emphasis on applying good governance principles to community forest user groups. This includes securing representation and participation of marginalized groups in the executive committee and other decision-making bodies; ensuring accountability and responsiveness; maintaining transparency; improving communication and information exchange; and developing an appropriate mechanism for conflict management. Forming a rapid response team was also suggested for increased responsiveness of authorities (S).
Networking and capacity building: Experts recommended institutionalizing a community forest user group forum at municipality level to secure and extend the rights of communities over forests, while effectively facilitating technical support and market access (K). This forum could also share learning among user groups, and help them develop consistent policies (K). The need for institutional capacity building was also highlighted, particularly on forest management, account keeping, leadership development and conflict transformation (K).
Strengthening relationships with local government and other stakeholders: Experts recommended that municipal regulations and programs related to forestry and the environment be formulated through appropriate consultation and cooperation with user groups (K). The municipality should also provide technical, institutional and other services to community forestry, and prioritize community forestry in annual planning and budgeting. Strong relationships with other actors -e.g. the ward office, municipality, district forest office and the Federation of Community Forestry Users Nepal (FECOFUN) -were also highlighted for effective plan development and execution (S).
Use of improved technology: A major strategy was related to the use of advanced technology for the effective management of community forestry (S). The aim was to increase productivity by applying appropriate silvicultural techniques, and to promote ecosystem services. This strategy would run alongside an appropriate incentive (K) and appropriate punishment mechanism to enforce rules and curb deforestation (S).
Entrepreneurship development: Strategies also involved enterprise development; this would increase income generation and employment opportunities through forest enterprises, while exploiting market opportunities through better organization and transparency in the trade of timber and other forest products (K). User groups should prioritize employment generation through forest-based entrepreneurship (i.e. sawmills, timber-and fruit-based enterprises, and through promoting ecotourism) (S).
Livelihoods and poverty reduction initiatives: As existing poverty reduction initiatives by community forestry are ineffective, special programs will be prioritized for the forest-dependent poor (K). Similarly, an equity-oriented forest product distribution system will be implemented (S) with a focus on poverty reduction.
Developing an action plan
All community forest user groups in attendance agreed to develop an action plan for their own community forest in order to achieve the desired scenario. User group representatives made written commitments for action plan development with the help of leaders from FECOFUN, the district forest office and local government.
Box 3. Least desirable scenario: Unmanaged community forest
In 2028, Chautara Sangachowkgadhi municipality's forests are unmanaged and haphazard. Problems such as deforestation, environmental degradation, forest fires and shortages of forest products are increasing. Forest user groups are largely inactive and hardly holding any meetings or discussions. Users are threatened if any complaints are made regarding committee activities. Most of the decisions are made by very few elites, and all the benefits are also taken by elites. Similarly, the contribution of forests to the livelihoods of forest-dependent poor, women and marginalized groups is very limited. There is no participation in the activities of the user group. Illegal forest activities are increasing. Community forests do not cooperate with the municipality and the ward office, so conflicts are increasing in the forest. The forest is in poor condition. The government is almost at the stage of taking back the community forest.
Conclusions
Based on two training programs and municipality-level workshops, participants thought that PPA was a useful methodological tool for effective planning, and that its robust design and integrated approach would have long-lasting effects on community forestry in the region. As one user group representative from Buddhabhumi expressed, "The scenarios, particularly the most desirable scenario, provided us with a vision, and strategies that we have developed are equally applicable to all of our community forest user groups…the complete list provides us with a framework, which we can use as a benchmark for strengthening community rights, improving governance and reducing poverty."

With some customization and contextual refinement, PPA can be adopted as a planning tool by community forestry groups, local government ward offices and at the municipality level. ForestAction is now adopting the PPA approach in another project to facilitate collaborative planning between local governments and community forest user groups across five project sites in two districts.

Some considerations were highlighted for future adoption of PPA in Nepal. Participants emphasized that the nature and technicalities of the training required educated, literate, well-informed and committed forestry leaders who were able to attend both workshops for continuity. It was likewise considered important to have experienced facilitators, including female facilitators, who can internalize the concepts and rationale of PPA while possessing some knowledge and skills in running computer software. The time availability of local leaders must also be considered; these workshops were condensed from the currently practiced six-day session.
Center for International Forestry Research (CIFOR)
CIFOR advances human well-being, equity and environmental integrity by conducting innovative research, developing partners' capacity, and actively engaging in dialogue with all stakeholders to inform policies and practices that affect forests and people. CIFOR is a CGIAR Research Center, and leads the CGIAR Research Program on Forests, Trees and Agroforestry (FTA). Our headquarters are in Bogor, Indonesia, with offices in Nairobi, Kenya; Yaounde, Cameroon; Lima, Peru and Bonn, Germany.
Preclinical evaluation of [18F]SYN1 and [18F]SYN2, novel radiotracers for PET myocardial perfusion imaging
Background

Positron emission tomography (PET) is now an established diagnostic method for myocardial perfusion imaging (MPI) in coronary artery disease, which is the main cause of death globally. The available tracers show several limitations; therefore, an 18F-labelled tracer is in high demand nowadays. The preclinical studies on normal Wistar rats aimed to characterise two potential, novel radiotracers, [18F]SYN1 and [18F]SYN2, to evaluate which is the better candidate for a PET MPI cardiotracer.

Results

The dynamic microPET images showed rapid myocardial uptake for both tracers. However, the uptake was higher and also stable for [18F]SYN2, with an average standardized uptake value of 3.8. The biodistribution studies confirmed that [18F]SYN2 uptake in the cardiac muscle was high and stable (3.02%ID/g at 15 min and 2.79%ID/g at 6 h) compared to [18F]SYN1 (1.84%ID/g at 15 min and 0.32%ID/g at 6 h). The critical organs determined in dosimetry studies were the small intestine and the kidneys. The estimated effective dose for humans was 0.00714 mSv/MBq for [18F]SYN1 and 0.0109 mSv/MBq for [18F]SYN2. The tested dose level of 2 mg/kg was considered to be the No Observed Adverse Effect Level (NOAEL) for both candidates. The better results were achieved for [18F]SYN2; therefore, further preclinical studies were conducted only for this tracer. Radioligand binding assays showed significant responses in 3 of 68 assays: muscarinic acetylcholine M1 and M2 receptors and the potassium channel hERG. The compound was mostly metabolised via oxidative N-dealkylation, while the fluorine substituent was not separated from the molecule.

Conclusion

[18F]SYN2 showed a favourable pharmacodynamic and pharmacokinetic profile, which enabled clear visualization of the heart in microPET. The compound was well tolerated in studies in normal rats, with moderate radiation exposure. The results encourage further exploration of [18F]SYN2 in clinical studies.
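The two uptake measures quoted in the abstract (SUV and %ID/g) are simple ratios of tissue activity concentration to injected activity. The sketch below uses invented numbers - the formulas are standard, but nothing here reproduces the study's actual measurements, and decay-corrected activities are assumed:

```python
def suv(tissue_kbq_per_g, injected_kbq, body_weight_g):
    """Standardized uptake value: tissue activity concentration
    normalized to injected activity per gram of body weight."""
    return tissue_kbq_per_g / (injected_kbq / body_weight_g)

def pct_id_per_g(tissue_kbq_per_g, injected_kbq):
    """Percent of injected dose per gram of tissue."""
    return 100.0 * tissue_kbq_per_g / injected_kbq

# Hypothetical rat-study numbers (activity in kBq, weight in g)
print(suv(5.0, 370.0, 296.0))              # 4.0
print(round(pct_id_per_g(5.0, 370.0), 2))  # 1.35
```

An SUV of 1 would mean the tracer distributed uniformly through the body; the reported myocardial SUV of 3.8 for [18F]SYN2 therefore indicates strong cardiac enrichment.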
Supplementary Information The online version contains supplementary material available at 10.1186/s13550-024-01122-5.
Background
Ischemic heart disease is a common clinical problem and accurate diagnostic tools are warranted. Among others, nuclear cardiac imaging techniques such as single photon emission computed tomography (SPECT) and positron emission tomography (PET) have been used to detect myocardial ischemia and to guide therapy options such as revascularization [1][2][3][4].
The current limiting factor of PET myocardial perfusion imaging (MPI) is the availability of tracers, especially in clinical sites without a cyclotron [5,6]. Given the technical superiority of PET over SPECT for myocardial perfusion imaging and its feasibility for quantitation of myocardial blood flow in absolute terms [7,8], the development of new radiotracers for PET MPI may lead to wider use of PET diagnostic imaging in cardiology.
[82Rb]Rubidium chloride ([82Rb]RbCl) is currently the most commonly used tracer for PET MPI, especially in the USA, but it is also registered in some countries of the European Union (Luxembourg, Denmark) [9]. The use of [82Rb]RbCl requires a specific generator, which is costly, especially in sites with a small number of patients per day. The other PET MPI tracers, [13N]ammonia ([13N]NH3) and [15O]H2O, require an on-site cyclotron for their production.
Studies on novel PET MPI tracers mainly focus on 18F-labelled molecules. 18F seems to be an optimal radioisotope for labelling: its chemistry has been well studied, it can be easily produced in cyclotrons, and it can be transported over considerable distances due to its relatively long half-life.
At present, there are several 18F-tracers under development, but none of them is currently commercially available [10][11][12]. One of them, [18F]flurpiridaz, completed a phase 3 clinical trial showing greater diagnostic accuracy with PET as compared to SPECT MPI [13]. A comparison of available MPI tracers is shown in Table 1.
Myocardial uptake of a tracer depends both on its transport into the cell and on its subsequent extraction/retention in the cell. The first process depends on blood flow and the latter on cell membrane integrity and viability. The ideal myocardial perfusion agent should have a high first-pass extraction fraction and retention by the myocytes, rapid clearance from blood, low extracardiac uptake, minimal myocardial redistribution, intrinsic chemical stability, and a relatively simple radiochemical synthesis and purification process [20]. In addition, the extraction fraction should not be substantially influenced by the level of flow, as this characteristic makes quantitation of myocardial perfusion more feasible.
There is a group of compounds, widely tested on animals, that show these features: 18F-labelled derivatives of the triphenylphosphonium cation with a symmetrical structure [10]. Other compounds, which were promising in MPI in clinical studies, are 18F-labelled dyes such as rhodamine B and boron-dipyrromethene (BODIPY) derivatives [12,21]. Taking into account the collected data and the fact that [13N]ammonia is an excellent PET MPI tracer, we concluded that the desired active pharmaceutical ingredient (API) structure should contain a nitrogen cationic centre and at least three aromatic rings, and be structurally similar to well-known dyes used in staining cell material and organelles. Quaternary ammonium salts whose main core consists of three fused aromatic rings, commonly used in molecular biology, are ethidium bromide (EthBr) and N-nonyl acridine orange (NOA). We chose these compounds as model molecules for developing a novel PET MPI tracer. EthBr is used to stain nucleic acids by intercalation between adjacent base pairs. NOA specifically incorporates into the mitochondrial membrane through primary binding to cardiolipin, a phospholipid present only in the inner layer of the cell membrane [22].
We developed and investigated [18F]SYN1 and [18F]SYN2, which are 18F-labelled derivatives of EthBr and NOA, respectively.
Methods
The presented preclinical studies were conducted using the radioactive tracers [18F]SYN1 and [18F]SYN2, as well as their non-radioactive reference standards (when a high concentration of the compound was needed to carry out the tests).
Precursor synthesis
The precursors of [18F]SYN1 and [18F]SYN2 were synthesized by the method illustrated in Fig. 1. The reagent bearing two groups suitable for a nucleophilic substitution reaction was dissolved in dichloromethane (DCM) and cooled. The phenanthridine derivative or acridine derivative was also dissolved in DCM and added dropwise to the reagent. The contents of the reaction flask were stirred for a given time, and at the end of the reaction, cold Et2O was added. The precipitate formed was filtered off under reduced pressure. The details are given in patent EP 3 814 325 B1 [23].
Radiosynthesis
The radiosynthesis of [18F]SYN1 and [18F]SYN2 is based on an SN2 substitution reaction of [18F]F− with the precursor described above (Fig. 2). The process was carried out using a radiosynthesis module (Modular-Lab Standard, Eckert & Ziegler, Germany). The [18F]F− was produced using an Eclipse HP cyclotron (11 MeV, Siemens, USA). The usual irradiation condition was a proton current of 60 µA. The produced [18F]F− was captured on a Waters QMA light SPE column and eluted with a solution of phase-transfer catalyst (Kryptofix® 2.2.2, K2CO3, water, acetonitrile) into the reactor. Afterwards, the water/acetonitrile mixture was removed under reduced pressure, and azeotropic distillation was carried out. The precursor solution was added to the dried complex of [18F]F− with the phase-transfer catalyst ([(Kryptofix® 2.2.2)K]+), and the mixture was heated. The crude product was diluted with water/ethanol and purified on a semipreparative HPLC column, where the mobile phase was acetate buffer with the addition of ethanol. The product solution was formulated by dilution with saline to a suitable radioactive concentration and then sterilized by filtration.
Quality control
The final product was analysed by HPLC to confirm identity and determine radiochemical purity: an Xterra C18 250 × 4.6 mm column (Waters) was used with a flow rate of 1 mL/min, gradient elution (phase A: water + 0.05% TFA, phase B: acetonitrile), UV detection at 270 and 500 nm, and on-line radiometric detection; sample volume: 20 µL. Analysis time was ca. 30 min. The free [18F]fluoride amount was determined by TLC using silica gel on aluminium plates and 95% ACN/0.9% saline (V:V; 1:1) as the mobile phase. A GC method was used to determine the ethanol and residual solvent amounts using an HP-Fast Residual Solvent column (30 m × 0.53 mm, 1.00 µm, Agilent). The formulation stability was tested for 8 h.
Animals
All animal procedures were conducted according to the National Legislation and the Council Directive of the European Communities on the Protection of Animals Used for Experimental and Other Scientific Purposes (2010/63/EU) and the "ARRIVE guidelines for reporting animal research" [24]. Wistar rats (6-7 weeks old) were purchased from the M. Mossakowski Institute of Experimental and Clinical Medicine, Polish Academy of Sciences in Warsaw, Poland (microPET and biodistribution studies), the Experimental Medicine Centre at the Medical University of Białystok, Poland (toxicity study) or Envigo RMS Spain S.L. (pharmacokinetics study). On arrival, rats were quarantined and observed for at least five days in groups of not more than five in standard cages in the animal facility of the site conducting the study. They were housed in a quiet room under constant conditions (22 ± 2 °C, 55 ± 10% RH, 12 h light/dark cycles) with free access to standard food and water. Veterinarian staff and investigators observed the rats daily to ensure animal welfare and determine whether humane endpoints were reached (e.g. hunched and ruffled appearance, apathy, ulceration, severe weight loss).
For the preclinical studies, the animals were randomized into groups containing from 3 to 10 rats per group and time point, depending on the experiment.
Biodistribution and pharmacokinetics
The physiological distribution and pharmacokinetics of [18F]SYN1 and [18F]SYN2 were examined in rats according to the approved protocol. Approximately 0.2 mL/31.1 MBq of compound was administered i.v. At established time points after injection (15 min, 30 min, 1 h, 2 h, 4 h and 6 h), the animals were euthanized by cervical dislocation and dissected. Selected organs and tissues were weighed, and their radioactivity was measured using a NaI(Tl) crystal gamma counter. The results were adjusted for the radioactive decay of 18F. The physiological distribution was calculated and expressed in terms of the percentage of the administered dose found in each of the selected organs (%ID) or per gram of tissue/organ (%ID/g), using suitable standards of the injected dose.
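The decay correction and %ID/g normalisation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; the 18F half-life (109.77 min) is a physical constant, and the helper names are ours:

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of 18F in minutes


def decay_correct(measured_activity, minutes_since_injection,
                  half_life=F18_HALF_LIFE_MIN):
    """Back-correct a measured activity to the time of injection.

    A0 = A(t) * exp(ln(2) * t / T_half)
    """
    return measured_activity * math.exp(
        math.log(2) * minutes_since_injection / half_life)


def percent_id_per_gram(corrected_activity, injected_activity, organ_mass_g):
    """%ID/g: share of the injected dose found per gram of tissue."""
    return 100.0 * corrected_activity / (injected_activity * organ_mass_g)
```

As a sanity check, a measurement made exactly one half-life after injection is doubled by the correction.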
The pharmacokinetic study was also conducted using the non-radioactive reference standard of [18F]SYN2 after a single i.v. administration at a dose of 2 mg/kg in rats. Blood samples of about 0.5 mL were collected in K3-EDTA tubes at the pre-dose time and 4 min, 15 min, 30 min, 60 min, 4 h, and 24 h after injection (three rats were used at each time point; each rat was bled twice). Blood samples were then centrifuged at approx. 3270 g at 5 ± 3 °C for 10 min to extract plasma, divided into two aliquots and analysed (54 plasma samples in total). The concentrations of SYN2 were determined using an LC-MS/MS method validated according to the Guidance for Industry (FDA) and the Guidance on Validation of Bioanalytical Methods (EMA).
Dosimetry
The pharmacokinetic data from ex vivo biodistribution studies in rats were extrapolated to humans using the simplest allometric scaling model to estimate an effective dose in mSv/MBq. The dose calculation was done using the computer software for internal dose assessment OLINDA/EXM® (Version 1.1, copyright Vanderbilt University, 2007).
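The paper does not specify which scaling model was used; one common choice (the relative-organ-mass method) converts rat organ uptake to a human estimate before the OLINDA/EXM dose calculation. The sketch below illustrates that approach only; the organ and body masses in the example are illustrative assumptions, not study data:

```python
def scale_rat_to_human(pct_id_organ_rat, organ_mass_rat_g, body_mass_rat_g,
                       organ_mass_human_g, body_mass_human_g):
    """Relative-organ-mass allometric scaling of %ID per organ.

    %ID_human = %ID_rat * (organ/body mass ratio)_human
                        / (organ/body mass ratio)_rat
    """
    ratio_rat = organ_mass_rat_g / body_mass_rat_g
    ratio_human = organ_mass_human_g / body_mass_human_g
    return pct_id_organ_rat * ratio_human / ratio_rat


# Illustrative numbers only: a rat kidney pair (~2 g in a 250 g rat)
# scaled to human kidneys (~310 g in a 73 kg reference adult).
human_pct_id = scale_rat_to_human(1.5, 2.0, 250.0, 310.0, 73000.0)
```

The scaled organ uptake values would then serve as input for residence-time and absorbed-dose calculations in OLINDA/EXM.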
Toxicity
The potential health hazards resulting from intravenous administration of [18F]SYN1 and [18F]SYN2 were investigated. An extended single-dose toxicity study was carried out after administration of 2 mg/kg b.w. of either test item (the non-radioactive reference standards SYN1 or SYN2) in a solution of 0.9% NaCl, or of 0.9% NaCl alone, to 30 male and 30 female Wistar rats. The rats were euthanized 24 h (n = 40) and 14 days (n = 20) post-administration. Throughout the experiment, the animals' behaviour was observed and their weight recorded. An overview of the animal groups used in the study is presented in Table 2.
Pharmacodynamics
The possibility of pharmacological interactions of SYN2 was assessed in vitro in radioligand binding assays on key pharmacological target classes including GPCRs, drug transporters, ion channels, nuclear receptors and enzymes (LeadProfilingScreen SafetyScreen, PP68, panel of 68 receptors, Eurofins Panlabs, Inc.) [25]. A single SYN2 concentration of 1 µM was used in the study. A significant response, confirming selective binding to a target, was defined as ≥ 50% inhibition or stimulation.
Metabolites
A metabolite profiling and characterization experiment on the SYN2 molecule (in vitro MetID, Admescope) was conducted to evaluate potential metabolic pathways. In order to characterize the metabolites of SYN2, the compound was incubated in cryopreserved rat, dog and human hepatocytes. Prior to the incubation, the hepatocytes were thawed, suspended in Williams' medium E and then centrifuged to reach a target density of about 1 million cells/mL. A 1 µM SYN2 solution was then added to the hepatocyte suspension and incubated at 37 °C for 0, 30 and 60 min. The incubations were stopped by the addition of acetonitrile, then frozen and centrifuged to remove precipitated proteins. Supernatants were then analysed with ultra-performance liquid chromatography (UPLC) and high-resolution mass spectrometry (HRMS).
Statistical analysis
The results of radiopharmaceutical uptake are presented as a percentage of the dose administered per gram of tissue (%ID/g) in the form of an average with standard deviation (%ID/g; mean ± standard deviation (SD)).
Where applicable, statistical analysis was performed with one-way analysis of variance (ANOVA) using GraphPad Prism software (GraphPad Software, La Jolla, CA, USA). The results of the toxicity studies conducted for SYN1 and SYN2 were analysed using Excel 2013 and STATISTICA 10.0 PL. Normality of distribution was tested with the Shapiro-Wilk test. For homogeneity of variance, the Brown-Forsythe test was used. For normally distributed results with homogeneous variances, Student's t-test was used. In the absence of normal distribution, the Mann-Whitney test was used. In the case of non-homogeneous variance, the Cochran-Cox test was used.
IC50 values from the pharmacodynamic assessment were determined by a non-linear, least-squares regression analysis using MathIQ™ (ID Business Solutions Ltd., UK). Where inhibition constants (Ki) are presented, the Ki values were calculated using the equation of Cheng and Prusoff [26] from the observed IC50 of the tested compound, the concentration of radioligand employed in the assay, and the historical values for the KD of the ligand (obtained experimentally at Eurofins Panlabs, Inc.). Where presented, the Hill coefficient (nH), defining the slope of the competitive binding curve, was calculated using MathIQ™.
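The Cheng-Prusoff relation itself is a one-line formula, Ki = IC50 / (1 + [L]/KD). A minimal sketch follows; the numbers in the example are illustrative, not values from this study:

```python
def cheng_prusoff_ki(ic50, radioligand_conc, kd):
    """Inhibition constant from a competition-binding IC50.

    Ki = IC50 / (1 + [L] / KD), where [L] is the radioligand
    concentration and KD its equilibrium dissociation constant.
    All three inputs must share the same concentration unit.
    """
    return ic50 / (1.0 + radioligand_conc / kd)


# Illustrative: IC50 = 100 nM measured with 1 nM radioligand of KD = 1 nM.
ki = cheng_prusoff_ki(100.0, 1.0, 1.0)  # -> 50.0 nM
```

Note that when the radioligand concentration is negligible relative to its KD, Ki approaches the observed IC50.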
Compound Discoverer (CD) was used for data mining with manual confirmation for metabolite profiling and characterization.
All of the obtained results were considered statistically significant for p < 0.05.
Results
Sterile cardiotracer solutions with radiochemical purity > 95% were achieved. The ethanol content was not higher than 10%, and residual solvent concentrations were within the Ph. Eur. limits. The stability of the formulation over 8 h was confirmed.
MicroPET
The dynamic microPET data (Fig. 3) showed higher accumulation of [18F]SYN2 compared to [18F]SYN1 in the organs of interest: heart, liver and lungs. The uptake of [18F]SYN2 in the myocardium was stable and high; the SUV was 4.0 at the beginning and slightly decreased in time, reaching a plateau at a value of 3.5 after 6 min. In comparison, [18F]SYN1 uptake was lower (SUV 3.0 at the beginning) and decreased in time (SUV 1.8 after 45 min). The accumulation in the liver was higher for [18F]SYN2, with slower elimination compared to [18F]SYN1: the SUVmax of 2.9 was reached around 4 min and decreased to 1.9 at 45 min for [18F]SYN2, whilst for [18F]SYN1 the SUVmax was 2.4 at 2 min, decreasing to 1.0 at 45 min. However, the distribution ratio between heart and liver was comparable for both radiotracers and was equal to 1.2-1.3 for the first 10 min. The radioactivity uptake and retention in the lungs were also higher for [18F]SYN2. The small intestine and bladder were also clearly visualized. Full data of tissue distribution are listed in Supplemental Tables 1-4.
The uptake of [18F]SYN2 in the myocardium was stable and higher than 2.5%ID/g for up to 6 h, compared to [18F]SYN1, for which it was between 2 and 1%ID/g up to 2 h and then dramatically decreased below 0.4%ID/g. Figure 6 shows comparative uptake values (%ID/g) in the heart as the target organ and in the lungs and the liver as critical organs during imaging.
Furthermore, we found that for [18F]SYN2, the ratio of radioactivity accumulation in the heart to that in the liver and lungs was more favourable from an imaging point of view (Fig. 7). The most favourable and statistically significant ratios were observed 2 h after radiotracer administration.
The blood pharmacokinetics of [18F]SYN1 and [18F]SYN2, fitted with a one-phase and a two-phase exponential decay model, respectively, are shown in Fig. 8. For [18F]SYN2, the initial fast decrease mainly represents the distribution phase. It is worth mentioning that the half-life parameters for [18F]SYN1 and [18F]SYN2 calculated from blood activity clearance differ from those presented in Table 3, which is most likely related to the administered dose. Additional pharmacokinetic (PK) parameters for both radiolabelled compounds are presented in Table 4.
The pharmacokinetics of [18F]SYN2 was also measured using its non-radioactive reference standard SYN2. The test item was quantifiable in plasma from the first sampling time point (4 min) up to 4 h. The plasma levels showed a rapid decay up to 1 h post injection. The elimination half-life was 0.26 h. The estimated plasma clearance was 2.59 L/h/kg. A high total plasma clearance and a low volume of distribution were obtained in the rats (Table 3; Fig. 9).
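These parameters are internally consistent through the standard relation Vz = CL · t1/2 / ln 2. A quick check, using only the clearance and half-life quoted above (the function name is ours):

```python
import math


def terminal_volume_of_distribution(clearance_l_per_h_kg, half_life_h):
    """Vz = CL * t1/2 / ln(2): terminal-phase volume of distribution (L/kg)."""
    return clearance_l_per_h_kg * half_life_h / math.log(2)


# CL = 2.59 L/h/kg and t1/2 = 0.26 h give Vz of roughly 0.97 L/kg,
# consistent with the "low volume of distribution" noted above.
vz = terminal_volume_of_distribution(2.59, 0.26)
```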
Safety Toxicity
The objective of the toxicity study was to obtain information on health hazards resulting from a single exposure due to the intravenous administration of each test item at a dose of 2 mg/kg b.w., which is 1000-fold higher than the predicted maximum dose in humans.
The effects of each test item on the animals' body weight, clinical parameters, haematological, biochemical and enzymatic parameters, as well as gross and histopathological lesions in tissues and internal organs, were examined. A summary of the data collected in the toxicity study of SYN1 and SYN2 is presented in Table 5, indicating whether statistically significant changes were observed between the control and treated groups.
Dosimetry
This work estimated the human absorbed radiation dose based on a detailed description of organ activity distribution data for normal rats. After administration of [18F]SYN1 and [18F]SYN2, the animals were placed in holding cages which guaranteed their freedom of movement. Table 6 compares the estimated absorbed doses (mSv/MBq) in human organs for [18F]SYN1 and [18F]SYN2. Additionally, the obtained theoretical values for the examined cardiotracers were compared with the [18F]flurpiridaz absorbed dose calculated in exercise-stress subjects in humans (Table 6) [14].
The data in Table 6 show that the [18F]SYN1 compound resulted in a 34% lower effective dose compared to [18F]SYN2 and 52% lower compared to [18F]flurpiridaz. However, the dose absorbed in the heart wall was also clearly lower for [18F]SYN1 (0.0139, 0.0415 and 0.0390 mSv/MBq for [18F]SYN1, [18F]SYN2 and [18F]flurpiridaz, respectively). The highest radiation-absorbed doses per unit administered activity (mSv/MBq) of [18F]SYN2 were calculated for the kidneys (0.0704), heart wall (0.0415) and pancreas (0.0285), in contrast to [18F]flurpiridaz, for which the highest values were for the heart wall (0.039), kidneys (0.027) and stomach wall (0.024). It is worth mentioning that the high value of the absorbed dose for [18F]SYN2 in the kidneys should not influence the imaging quality of the heart, given the positions of these organs in the human body. However, the absorbed dose obtained for [18F]SYN2 in the kidneys is twice that of [99mTc]Tc-MIBI (0.036) [15], but at that level it does not induce kidney toxicity.
The obtained preclinical data suggest that [18F]SYN2 is a well-tolerated PET radiopharmaceutical with a favourable radiation dosimetry profile that is suitable for clinical cardiac imaging.
Metabolites
Twelve metabolites were identified in the study by analysis of high-resolution LC-MS/MS data and characterized in vitro in cryopreserved rat, dog and human hepatocytes (Supplemental Table 5). Products of reactions such as N-dealkylation, oxidation, acetylation and glucuronidation were identified, and the fluorine atom was generally not detached. The metabolism of SYN2 in human hepatocytes was rather low in comparison to rat and dog (32% and 5% of the parent, respectively); unchanged SYN2 was predominant after 60 min and was equal to 70% (Fig. 10). The major metabolite in dog cells was metabolite M1 and in rat cells metabolite M2. These also appeared to be the major metabolites in human hepatocytes, with relative abundances of 19% and 8%, respectively, at 60 min. In dog hepatocytes, the N-dealkylated metabolites M1, M2 and M3 were predominant at 60 min, with 58%, 53% and 46% relative abundance, respectively. In rat cells the major metabolite was M2 (almost 7 times that of the parent at 60 min). The structures of M4, M5, M6, M7 and M8 were not fully elucidated due to very low contents; they generally may represent dealkylated/N-acetylated/oxidized forms of SYN2.
Discussion
Myocardium was clearly visible in microPET for both radiotracers; however, time-activity curves showed stable heart radioactivity uptake only for [18F]SYN2, with high SUV values of 4.0-3.5 throughout the scan time (about 45 min). The microPET images suggested that both cardiotracers were excreted via the biliary and urinary pathways. MicroPET also showed negligible skeletal activity, indicating that 18F is not hydrolysed or released in the form of [18F]fluoride ion in vivo for either compound (Figs. 4 and 5).
The biodistribution studies demonstrated results very similar to the microPET findings. They confirmed that myocardial uptake of [18F]SYN2 was higher than that of [18F]SYN1 and sustained, with values of about 3.0%ID/g within 1 h post-injection and then 2.6-2.8%ID/g from 2 to 6 h. The uptake in the liver was also higher for [18F]SYN2 and, similarly to the microPET results, the heart/liver ratio was also higher. The distribution profiles confirmed that both radiotracers were excreted via the biliary and urinary pathways (Supplemental Tables 1-4).
The approximate estimation of [18F]SYN2 dosimetry in humans showed that the organ receiving the highest estimated absorbed dose was the kidney at 0.070 mSv/MBq, followed by the heart wall at 0.042 mSv/MBq (Table 6). The value for the heart wall is comparable to that of [18F]flurpiridaz, the new cardiac perfusion agent now in phase III clinical trials (0.027 mSv/MBq and 0.039 mSv/MBq, respectively) [14]. The estimation for [18F]SYN1, the second tested compound, showed that the small intestine received the highest estimated absorbed dose at 0.020 mSv/MBq, followed by the kidney at 0.019 mSv/MBq and the heart wall, with a much lower value compared to [18F]SYN2 and [18F]flurpiridaz, at 0.014 mSv/MBq (Table 6). The effective dose of [18F]SYN1 was the lowest, followed by [18F]SYN2 and then [18F]flurpiridaz: 0.0072 mSv/MBq vs. 0.0109 mSv/MBq vs. 0.0150 mSv/MBq, respectively [14]. For comparison, the effective doses for 82Rb and [13N]ammonia, short-lived cardiotracers, are much lower: 0.0011 mSv/MBq and 0.002 mSv/MBq, respectively [15,16]. The in-house estimated dosimetry data for the 18F-labelled compounds presented in this paper are derived from allometrically scaled results extrapolated from rat studies to humans. This is an important limitation and may result in discrepancies from the dosimetry determined in direct human studies. Such studies are planned.
SYN1 and SYN2 were well-tolerated and only slight abnormalities were reported in the toxicological study. The minor biochemical abnormalities were within the normal limits for Wistar rats [29]. The minimally increased incidence of basophilic tubules in the kidneys of male rats (without other renal changes) might have been the result of spontaneously occurring chronic progressive nephropathy, one of the most common spontaneous diseases in rats, correlated with age [30]. Hepatocyte karyomegaly and foci of single-hepatocyte necrosis are normal incidental findings in rats and may occur spontaneously with age [31]. In conclusion, the histopathological changes found in all study groups were background findings, and although they might have been related to SYN1 or SYN2 administration, the dose level of 2 mg/kg b.w. is considered to be the "no observed adverse effect level" (NOAEL).
The high and stable uptake of [18F]SYN2 in the heart muscle, together with a good heart/liver distribution ratio and low toxicity, make this radiotracer suitable for heart muscle imaging. Therefore, further preclinical studies were conducted only with [18F]SYN2.
The possible pharmacological interactions of SYN2 were tested, and significant inhibition was observed for 3 out of 68 targets at a concentration of 1 µM: the muscarinic acetylcholine receptors M1 and M2 as well as hERG. The determined values indicate that SYN2 shows high affinity to hERG and moderate affinity to the muscarinic receptors [32]. The muscarinic acetylcholine receptors are multifunctional; they are responsible for regulating heart rate, smooth muscle contraction, glandular secretion and many fundamental functions of the central nervous system [33], while hERG plays an important role in the electrical activity of the heart [34]. The affinity of acridine derivatives to hERG was already confirmed in the range of 0.2-18 µM [35]. A structure that has a positively charged nitrogen atom and a hydrophobic aromatic ring system can be predicted to be a potential hERG blocker using a QSAR strategy. Analyses of different databases highlighted that hERG inhibitors tend to have a larger molecular weight, higher hydrophobicity, more cations and fewer basic substituents [35]. A similar conclusion can be applied to the muscarinic receptors; e.g. tacrine, an acridine derivative, is a known antagonist [36].
Despite the wide abundance of the mentioned targets in the human body (including the heart), any interactions of SYN2 are considered unlikely, as it reached low plasma concentrations [37][38][39][40]. We demonstrated in normal Wistar rats that SYN2 reached a Cmax of 0.19 µM after i.v. injection of 2 mg/kg, with a half-life of 0.26 h. After injection of a proposed maximum limit dose of 100 µg (2 µg/kg for a 50-kg patient), we expect a Cmax of 1.1 nM after body surface area correction, which is unlikely to lead to any measurable biological effects.
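One way to arrive at an estimate of this order is dose-proportional scaling of the rat Cmax combined with the standard body-surface-area conversion factors (Km ≈ 6 for rat, ≈ 37 for human, from the FDA guidance on starting-dose estimation). This is our reconstruction of the calculation, not necessarily the authors' exact method:

```python
KM_RAT, KM_HUMAN = 6.0, 37.0  # standard BSA (Km) conversion factors


def predicted_human_cmax_nM(rat_cmax_uM, rat_dose_mg_per_kg,
                            human_dose_mg_per_kg):
    """Dose-proportional Cmax scaling with a BSA (Km) correction.

    The human mg/kg dose is first converted to its rat-equivalent
    dose (multiplied by Km_human / Km_rat), then the rat Cmax is
    scaled linearly with dose. Returns nM (input Cmax is in uM).
    """
    rat_equivalent_dose = human_dose_mg_per_kg * KM_HUMAN / KM_RAT
    return rat_cmax_uM * 1000.0 * rat_equivalent_dose / rat_dose_mg_per_kg


# Rat: Cmax 0.19 uM at 2 mg/kg; proposed human dose 2 ug/kg = 0.002 mg/kg.
cmax_nM = predicted_human_cmax_nM(0.19, 2.0, 0.002)  # roughly 1.2 nM
```

Under these assumptions the predicted human Cmax lands close to the 1.1 nM quoted above, i.e. orders of magnitude below the micromolar affinities observed in the binding panel.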
The metabolic profiles in human, dog and rat hepatocytes were compared to evaluate potential metabolic pathways. Firstly, a drug product usually undergoes functionalization reactions, most frequently to become more polar. The metabolites found in dog, rat and human hepatocytes suggest that SYN2 is mainly oxidatively N-dealkylated in a sequential manner, forming the M1 and M2 metabolites. In addition, in dog and rat hepatocytes the M3 metabolite was observed as well. Only the structures of M1, M2 and M3 were confirmed by MS fragmentation spectra, owing to their high content. However, it seems that hydroxylation can occur as well, as metabolites M5 and M6 were detected [41], though the formation of an N-oxide cannot be excluded. Secondly, metabolites are usually coupled with an endogenous molecule, undergoing acetylation or (N,O)-glucuronidation. Metabolites M7 and M8 are N-acetylation products, similarly to proflavine [42]. The product of the last reaction is observed only in rat and dog hepatocytes, presumably due to the slower metabolism in human cells. The gathered data indicate that the main metabolism of SYN2 proceeded via oxidative N-dealkylation(s) followed by acetylation, yielding polar metabolites that can be excreted in urine and/or bile. The other pathway seen in dog and rat hepatocytes, via hydroxylation and O-glucuronidation, appears to be of lower significance. Similarly, dealkylation at the N-acridine atom, followed by cleavage of the fluorine atom, is of lower significance due to the fact that quaternary amines are more stable toward N-dealkylation than tertiary or secondary amines.
Conclusions
[18F]SYN2 showed a favourable pharmacodynamic and pharmacokinetic profile in normal Wistar rats. The tracer, with its rapid plasma clearance, high and stable uptake in the myocardium and favourable heart-to-lung ratio, allowed for a clear visualization of the heart in microPET. [18F]SYN2 was safe, well-tolerated and showed moderate radiation exposure, comparable with other MPI PET tracers. The promising results of the preclinical evaluation of [18F]SYN2 encourage its further exploration in clinical studies.
Fig. 3
Fig. 3 MicroPET time-activity curves (TACs) of heart, lungs and liver for [18F]SYN1 and [18F]SYN2. Results are presented as mean standardized uptake value ± standard error of measurement
Fig. 4
Fig. 4 Representative microPET images of [18F]SYN1: slice (top) in the coronal plane and maximum intensity projection (bottom). Images represent the summation of time frames
Fig. 5
Fig. 5 Representative microPET images of [18F]SYN2: slice (top) in the coronal plane and maximum intensity projection (bottom). Images represent the summation of time frames
Tmax - time to peak plasma concentration, Cmax - peak plasma concentration, Tlast - time to last quantifiable plasma concentration, Clast - last quantifiable plasma concentration, AUCt - area under the plasma concentration-time curve from time 0 to the time of Clast, calculated by the linear trapezoidal linear/log rule, Vz - volume of distribution at the terminal phase, Cl - plasma clearance, T1/2λz - terminal half-life
Fig. 8
Fig. 8 Blood activity clearance. The fit of one-phase and two-phase exponential models
Fig. 9
Fig. 9 Plasma concentration-time profile for the unlabelled reference standard of [18F]SYN2
Table 1
Comparison of available SPECT and PET tracers for myocardial perfusion
Table 2
Animal groups overview in the toxicity studies
Table 4
Pharmacokinetic parameters for [ 18 F]SYN1 and [ 18 F]SYN2 in the rat
Table 5
Toxicity study results * "+" -statistically significant changes were observed between control and treated group "-" -no statistically significant changes were observed
Table 6
Estimates of absorbed doses (mSv/MBq) for [18F]SYN1 and [18F]SYN2 in human organs after extrapolation of biodistribution data in rats, and absorbed doses for [18F]flurpiridaz derived from human studies [14]
Contribution of ERT on the Study of Ag-Pb-Zn, Fluorite, and Barite Deposits in Northeast Mexico
The results on the effectiveness of five 2D electrical resistivity tomography (ERT) survey profiles for Ag-Pb-Zn, fluorite, and barite exploration in Mississippi Valley Type (MVT) and magmatic deposits of northeast Mexico are presented. The profiles were made in areas with mining activities or mineralization outcrops. Schlumberger, dipole-dipole, and Wenner array configurations were used in the measurements. The results showed that electric resistivity can be used to distinguish mineralized zones. In magmatic-type Pb-Zn and MVT Pb-Zn deposits, resistivity values are low. In magmatic-type fluorite and MVT fluorite deposits, as well as in the MVT barite deposit, low resistivity values are related to Fe sulfides and clays. With these results it is possible to connect observed surface mineralization with underground mineralization. New mineralized zones are also found, and their geometries, extensions, and dips are reported. Therefore, lower resistivity values can be linked to mineral bodies with higher Ag-Pb-Zn contents, as well as to bodies enriched in Fe sulfides, Fe oxides, and clays in the fluorite and barite mineralizations. In most ERT models, fractures and faults are identified, indicating a structural control on mineralization. From the geoelectric patterns we can infer the magmatic and MVT origin of these mineral deposits.
Introduction
Mexico is an important producer of silver, celestine, fluorite, lead, zinc, barite, antimony, manganese, and gold [1]. Fluorite, barite, celestine, and Pb-Zn deposits occur in northeastern Mexico, within Mesozoic carbonates [2]. These deposits belong to a wide variety of ore deposit types, such as epithermal, skarn and MVT [3]. There are over 200 MVT Zn-Pb and associated celestine, fluorite, and barite deposits in the state of Coahuila and neighboring areas in northeastern Mexico. These deposits define a metallogenic province, named the MVT province of northeastern Mexico [2,4].
Some characteristics of the MVT and magmatic deposits are their location in carbonate rocks and their epigenetic sulfide, barite, and fluorite mineralization (formed at shallow depth) [12,13], which suggests that geophysical methods, particularly ERT, can provide good results in the location and characterization of these deposits.
Geological Setting
Mineral deposits in Coahuila are dominated by MVT-like fluorite deposits. This region also has significant magmatic-hydrothermal deposits associated with the East Mexican Alkaline Province, which correspond mostly to skarn type [14]. In this region, fluorite, barite, celestine, and MVT Pb-Zn deposits are found. These deposits are located inside the MVT metallogenic province of northeastern Mexico in different Mesozoic sedimentary formations of the Sabinas Basin, linked to the beginning of the Laramide Orogeny [2]. These ore deposits occur in Mesozoic carbonate rocks, partially deformed by Laramide tectonics [2,13].
The Ag-Pb-Zn deposits in northern Mexico are hosted in thick carbonate-dominated Jurassic-Cretaceous sedimentary sequences. These mineral deposits contain irregular ore bodies that commonly reveal strong structural controls. The deposits are generally controlled by a combination of folds, faults, fractures, fissures, and intrusive contacts that acted as structural conduits for channeling mineralizing fluids to favorable sites of deposition. These deposits show three principal morphological types (mantos, chimneys, and pods) and are composed of massive sulfides and/or calc-silicate skarn. Sulfide mantos and chimneys are enclosed within carbonate rocks [15].
Zn-Pb deposits are made up essentially of sulfides and generally exhibit supergene alteration with the consequent alteration of sulfides to oxides, carbonates, and sulfates [2]. Most of these deposits have been described in the Cupidito, Cupido, La Virgen, Aurora, and Acatita Formations [2,13], as well as in the Olvido and Santa Elena Formations [16,17].
In the Cupidito Formation (early Aptian) the smaller Pb-Zn bodies appear as stratabound mantos which are concordant with the stratigraphy, while the larger bodies are found filling paleokarst. When sulfides are oxidized, the host rocks adopt reddish and yellowish tones. At the base of the Georgetown Formation (late Albian), there is small sub-economic mineralization of Pb-Zn sulfides. The mineralization is filling in small karst structures and fractures with different thicknesses. Generally, the primary mineralization is oxidized [2].
Three types of clearly epigenetic fluorite deposits are reported in northeastern Mexico within carbonate rocks [13]: the first is MVT, in which the fluorite mantos are related to compressive structures and show no relationship with the intrusive rocks; the second type is related to magmatic processes; and the third type includes small fluorite bodies in skarn-type deposits developed around post-Laramide rhyolitic bodies.
Along the edge of the Burro-Peyotes Paleopeninsula (Figure 1), a high concentration of strata-bound fluorite bodies is found within the Washita Group [2]. In the La Encantada-Buena Vista fluorite district (located east of the Minerva mineralized zone; Figure 1), the fluorite bodies exhibit strong structural controls and contain clay minerals [13]. The Aguachile district (located west of the Rosita mineralized zone; Figure 1) and neighboring areas contain a variety of fluorite deposits. At the Aguachile deposit, the fluorite bodies have a magmatic relationship with Tertiary rhyolites. This deposit comprises a ring dyke hosted in Mesozoic limestone that developed fluorite-Be mineralization around the dyke. This fluorite mineralization also contains hematite, iron oxyhydroxides, and illite [3].
In the study region, fluorite deposits are reported in several carbonate geological formations, mainly the Georgetown, Salmon Peak, Santa Elena, and Del Río Formations [2]. The barite deposits have been reported in the Olvido, La Virgen, Cupidito, and Cupido Formations [2,13], as well as in the Zuloaga Formation [18]. The La Virgen Formation of Early Cretaceous (Barremian) age is made up of gypsum and limestone. The barite bodies are also stratiform, exhibiting strong fracturing, and were infilled with Pb-Zn sulfides during the supergene oxidation process [2].
Minerva Mineralized Zone
This zone mainly contains metallic Ag-Pb-Zn mineralization of magmatic origin, restricted to the peripheral part of Cerro Minerva. The mineralization consists of a lenticular manto whose origin is related to contact metasomatic processes (skarn) between the intrusive bodies and limestones of the Santa Elena Formation of Early Cretaceous age (Late Albian). This mineralization has preferential Ag-Pb-Zn values, in a mixture of sulfides, iron oxides, and clays [16].
At the Minerva mine (P1 in Figure 1), the Santa Elena Formation is made up of dark gray limestone with a mudstone and wackestone texture. This formation has a lenticular manto with values of Ag (99 g/t), Pb (2.18%), and Zn (11.51%), with a N 45° W direction and a dip from 35° to 65° NE. Its mineralogy consists of a mixture of sulfides and oxides, such as argentite, galena, sphalerite, chalcopyrite, and magnetite, within an iron oxide matrix with associated calcite, gypsum, and clays [16].
El Huiche-La Escondida Mine District
In this mining district, Ag-Pb-Zn mineralization occurs within breccias with iron oxides and clays. Such mineralizations have been reported in gypsum and limestone lenses of the Olvido Formation of Late Jurassic (Oxfordian-Kimmeridgian). In the Guadalupe mine (P2 in Figure 1), this mineralization is of MVT-type and has iron oxides, manganese, and disseminated pyrite, with traces of Ag and average values of 0.008 g/t Au, 0.095% Cu, 0.682% Pb, 1.81% Zn, and 0.28% Mn [17].
San Vicente-La Harina Mineralized Zone
In this zone, fluorite mineralization is mainly reported in the Santa Elena Formation of Early Cretaceous (Late Albian) age, which is composed of dark gray limestones with a mudstone to wackestone texture. The fluorite occurs in veins with a lenticular behavior, erratic in direction and depth, and their thickness varies from 0.10 to 1.20 m. All reported mining prospects have small mining works that reached up to 35 m in depth. Generally, there is 20% to 30% CaF2, although some mines have more than 60% CaF2. In this mineralized zone, the El Otomí mine (P3 in Figure 1) has MVT fluorite associated with iron oxides and clays [19].
La Rosita Mineralized Zone
This mineralized zone contains fluorite bodies within the Salmon Peak Formation of Early Cretaceous (Middle-Late Albian) age. This formation is composed of gray dolomitic wackestone and grainstone with abundant flint nodules in a dolomitized limestone, with dolomites occurring at its base. In the Juan Valencia mine (P4 in Figure 1), this formation contains fluorite bodies of magmatic origin. The mineralization appears in a vein 120 m in length, with a N 50° E direction and a 65° SE dip. Along with fluorite, iron oxides (such as hematite, goethite, and limonite) and clays are reported filling the fault areas that control mineralization. Fluorite sometimes fills fractures related to this fault. Generally, the CaF2 contents vary from 1.03% to 30.2%, although 74.45% CaF2 has been reported in one sample [20].
Palos Altos Mineralized Zone
In this zone, MVT barite mineralization is reported within the limestone of the Zuloaga Formation of Late Jurassic (Oxfordian-Kimmeridgian) age. The barite bodies are stratiform, in the form of irregular mantos with iron oxides and clays resulting from the oxidation of Pb-Zn sulfides. In this formation, values from 3.11% to 77.25% BaSO4 have been detected. At the El Tapón mine (P5 in Figure 1), a mineralized breccia has been observed, and the limestone layers have been recorded to dip to the south at a 19° angle [18].
All mineralized zones studied are located within the MVT Province of Northeast Mexico proposed by González-Sánchez et al. [4]. Within this province, mineral deposits of barite, fluorite, and Ag-Pb-Zn are reported, accompanied by iron oxides and clays originating from sulfide oxidation [2,3,20]. The iron sulfides and clays are good conductors and, therefore, must display resistivity values contrasting with those of the sedimentary host rocks [7]. This helps detect and delimit the mineral bodies. The Ag-Pb-Zn deposit in the Minerva zone [16] and the fluorite deposit in the La Rosita zone [20] are the only ones that show an evident relationship with igneous rock bodies. In the other three mineralized zones, the mineralization shows strata-bound characteristics.
Materials and Methods
Electrical resistivity tomography (ERT) data were acquired along five profiles (P1, P2, P3, P4, and P5 in Figure 1) using an AGI SuperSting R1/IP/SP (Austin, TX, USA) (Figure 2) [21]. IP and ERT methods are commonly used to locate mineral deposits; however, we decided to use only ERT because the research aimed at showing the preliminary contribution of ERT to mineral exploration in the region. Furthermore, the measured resistivity values reasonably delimited the mineral bodies observed on the surface and in the mining works. Schlumberger, dipole-dipole, and Wenner array configurations with 28 electrodes spaced every 20 m were used. This electrode spacing was chosen to cover most of each mineral deposit and to assess the preliminary contribution of electrical resistivity tomography to locating and characterizing mineral bodies. Since only preliminary results were sought, and because the obtained resistivity models reasonably delimited the mineral bodies observed on the surface and in the mining works, we discarded the use of smaller electrode spacings. However, smaller electrode spacings might be necessary for a more accurate characterization of the geometry of the imaged mineral bodies. The profiles reach an approximate length of 540 m. With this 540 m extension it was possible to reach depths of up to 130 m; however, our study areas were mainly composed of highly resistive limestone, so we discarded the deepest data because of their low signal-to-noise ratio. Only one profile was measured in each mineral deposit to preliminarily assess the contribution of ERT to the characterization of the mineral bodies.
This kind of equipment has automated switching that makes the work faster. We first grounded the electrodes.
Then we could easily measure Schlumberger, Wenner, and dipole-dipole arrays according to a previous deployment design. Some authors discuss the advantages of each array over the others with synthetic data, and their results may be contradictory [22,23]. Without entering into such a discussion, we merged the three data sets (dipole-dipole, Schlumberger, and Wenner arrays) into a single input file in order to get a single resistivity model from the inversion process. The root mean square (RMS) reported is the one obtained when using the merged three data sets. We assumed that the more information used in the inversion process, the better the model obtained, but we also assumed that every array had a different electrical current pattern in the ground and, therefore, every array imaged the ground in a slightly different way. The use of the three different current patterns from the three arrays gave us a more accurate image of the ground resistivity [24]. EarthImager 2D from AGI was the program used for the inversion process. This program is becoming a standard in the industry. It solves the resistivity inversion by means of finite element modeling. An L2 norm method was used for the inversion process. As an initial model, we assumed a homogeneous half-space with a resistivity equal to the average of all the resistivity measurements in each profile. A representative resistivity value is taken from the variance recorded during the acquisition of each measurement. The final resistivity model obtained was considered the best-fitted model, and the model residuals were determined as the RMS.
The 2D ERT survey was carried out in five mineralization zones, in which the mineralization was partly noticed on the ground surface or through mining activity. The first and second zones (profiles 1 and 2; P1 and P2 in Figure 1) contained Ag-Pb-Zn mineral deposits of magmatic and MVT type, respectively. In the third and fourth zones (profiles 3 and 4; P3 and P4 in Figure 1) there were fluorite mineral deposits of magmatic and MVT type, respectively. The fifth zone (profile 5; P5 in Figure 1) hosted a barite mineral deposit considered MVT-type. All profiles were oriented perpendicular to the direction of the mineralizations and the dip of the rock sequences.
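As background to the three array configurations named above, each field reading is converted to an apparent resistivity through the array's geometric factor. The sketch below is an illustrative Python summary of the standard geometric factors (not part of the survey workflow; the voltage and current values would come from the instrument), and it reproduces the 540 m spread implied by 28 electrodes at 20 m spacing:

```python
import math

SPACING = 20.0     # electrode spacing used in the survey (m)
N_ELECTRODES = 28  # electrodes per profile

def wenner_k(a):
    """Geometric factor for a Wenner array with electrode spacing a (m)."""
    return 2.0 * math.pi * a

def schlumberger_k(ab_half, mn_half):
    """Geometric factor for a Schlumberger array (AB/2 and MN/2 in m)."""
    return math.pi * (ab_half ** 2 - mn_half ** 2) / (2.0 * mn_half)

def dipole_dipole_k(a, n):
    """Geometric factor for a dipole-dipole array: dipole length a (m),
    dipole separation factor n."""
    return math.pi * n * (n + 1) * (n + 2) * a

def apparent_resistivity(k, delta_v, current):
    """Apparent resistivity (ohm-m) from geometric factor, voltage (V),
    and injected current (A)."""
    return k * delta_v / current

# Total spread: 27 intervals of 20 m between the 28 electrodes
profile_length = (N_ELECTRODES - 1) * SPACING
print(profile_length)  # 540.0 m, matching the profile length quoted above
```

Because the three arrays sample the subsurface with different current patterns, the same ground yields three complementary apparent-resistivity data sets, which is the rationale given below for merging them before inversion.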
The measurements acquired in the field were processed using the commercial software EarthImager 2D [25]. The inversion process was based on a finite element algorithm to compute the true resistivity from the apparent resistivity data. The software allowed us to invert each array individually or to use the three arrays together to obtain a single resistivity model. The number of iterations varied from five to eight, with the root mean square (RMS) ranging from 3.34% to 7.66%.
In each ERT, we showed the kind of rock that outcrops, as well as the outcrops of mineral bodies and old mining works, in order to facilitate interpretation. These outcrops of mineral bodies, together with the vertical wells and underground galleries of old mining works, were used to reduce ambiguity in the interpretation. We assumed that the mineralized zones were low-resistivity bodies delimited by the high resistivity of the carbonate sedimentary rocks. We therefore assumed that the decrease in resistivity values was the effect of higher mineralization concentrations and/or alteration minerals, given the direct relationship between the mineralization observed on the surface and in the mining works and the relatively low resistivity values. In the Ag-Pb-Zn deposits, mineralization occurs as a mixture of iron sulfides, iron oxides, and clays [16,17]; accordingly, the mineralization and clays cause the decrease in resistivity values. In the fluorite and barite deposits, the mineralization was associated with iron oxides and clays originated from iron sulfide alteration [18][19][20]. This mineral association indicated that the decrease in resistivity values is mainly related to these alteration minerals.
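The interpretation rule described above (mineralized zones appear as low-resistivity bodies enclosed by resistive limestone) can be sketched as a simple threshold screen over an inverted model grid. The grid values below are illustrative only, not the inverted sections of this study; the 510 Ωm cutoff matches the threshold applied later to the Ag-Pb-Zn profile:

```python
# Screen an inverted resistivity section with the interpretation rule:
# flag cells with resistivity below a cutoff, expecting mineralized zones
# to show up as low-resistivity bodies within resistive carbonate host rock.
CUTOFF = 510.0  # ohm-m, threshold used for the Ag-Pb-Zn profile

model = [  # rows = depth levels, columns = positions along the profile (ohm-m)
    [9000.0, 8500.0,  300.0, 7800.0],
    [8800.0,  420.0,  150.0, 9100.0],
    [9500.0, 8700.0, 8900.0, 9300.0],
]

anomalies = [(i, j) for i, row in enumerate(model)
             for j, rho in enumerate(row) if rho < CUTOFF]
print(anomalies)  # → [(0, 2), (1, 1), (1, 2)]
```

Flagged cells are only candidates; as in the interpretations below, they must still be tied to surface outcrops, wells, or galleries before being read as mineralized zones.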
Results
In the studied profiles, the outcropping sedimentary rocks were represented mainly by Jurassic and Cretaceous limestone. The profiles had N-S, NE-SW, and NW-SE directions and crossed mineralization outcrops associated with the observed mining activities. Electrical resistivity models were obtained in these mineralized zones. These were helpful not only for following the surface mineralization down to depth, but also for identifying new underground mineralized bodies with no surface connection.
The models showed resistivity values ranging between 1 and 100,000 Ωm, with mineralized zones located in areas of low resistivity delimited by high resistivity. In some ERT models, the sedimentary sequences and the dip of the layers were well delimited. Similarly, the low resistivity values helped identify fractures and faults, some of which controlled the characteristics of the mineralized zones. Below are the details of the interpretation of the inverted ERTs, or resistivity models.
Ag-Pb-Zn Deposits
Profiles 1 and 2 were located in Ag-Pb-Zn polymetallic deposits. Profile 1 occurred in an Ag-Pb-Zn deposit of magmatic origin belonging to the Minerva mineralized zone, in which the mine of the same name was studied (P1 in Figure 1). Ag-Pb-Zn mineralization in a mixture of iron sulfides, iron oxides, and clays could be found in this mine [16]. Iron sulfides and clays are good conductors and could therefore help detect the associated Ag-Pb-Zn mineral bodies in this deposit [7].
Limestone outcrops of the Santa Elena Formation of Early Cretaceous, Albian age were found in profile 1 (Figure 3a). There were also caliche outcrops in two small areas (X = 0-60 m and X = 80-160 m), as well as Ag-Pb-Zn mineralization outcrops (X = 60-80 m). The ERT model showed a resistivity range between 8.7 and 100,000 Ωm with an RMS of 5.87%. The mineralization shown on the surface and in the vertical well occurred in areas with low resistivity values (lower than 510 Ωm), delimited by high resistivity values, and the resistivity values of the Ag-Pb-Zn sulfide mineralization contrasted with those of the host limestone. The low resistivity was therefore used to identify and delineate this mineralization in the subsurface. The outcropping mineralization was identified as zone A (Figure 3b), in which the resistivity ranged between 8.7 and 510 Ωm. This zone had a length between 60 and 80 m, with a depth exceeding 90 m; in places it was very shallow (approximately 5 m depth) but did not outcrop. Using the low resistivity values (lower than 510 Ωm), other mineralized zones could be identified within the limestone (B, C, D, and E in Figure 3b), many of them related to fractures or faults. The mineralization around the vertical well appears to be related to a fracture or fault that extended more than 90 m in depth. The mineralized zone B had resistivity values between 90 and 510 Ωm, extended more than 120 m in length, and was laterally limited by fractures or faults. This zone lay at more than 20 m depth, except at its lateral limits, where fractures or faults allowed mineralization and the zone could be shallower. Zones C, D, and E were smaller than the two previously described zones (A and B). These were located at depths of more than 20 m, except for zone E, which lay at approximately 10 m depth. Zones C and D had resistivity values from 300 to 510 Ωm, while zone E had resistivity values from 80 to 510 Ωm.

Profile 2 was taken in an MVT Ag-Pb-Zn deposit of the El Huiche-La Escondida mining district. In this district, the Guadalupe mine (P2 in Figure 1) was studied, in which limestone from the Olvido Formation outcrops (Figure 4a). In this rock, the Ag-Pb-Zn mineralization appeared in iron oxides and clays originated from sulfide alteration [17]. Such minerals are good conductors and showed contrasting resistivity with the sedimentary host limestone [7].
In this profile, sediments and limestone outcrop (Figure 4a), where some old mining works (a vertical well with a depth of 10 m and underground galleries) were developed and high Ag-Pb-Zn contents were extracted (Figures 4b and 5). The vertical well reaches an underground gallery at a depth of 10 m. The ERT model showed a variation range between 56 and 100,000 Ωm with an RMS of 7.66% (Figure 4b). Low resistivity values (lower than 8000 Ωm) delimited by high resistivity were obtained in areas of metallic mineralization (according to the mine information in the vertical well and underground galleries). This zone extended northward, being more than 160 m in length, with an approximate 25 m thickness and a maximum depth of 55 m (A in Figure 4b). This zone also had variable dips, apparently related to faults.
The relationship between the mining works (vertical well and underground galleries) that cut the mineralization and the resistivity values allowed other mineralized zones to be located (B, C, D, E, and F in Figure 4b). In the mineralized zones A and F, the resistivity was lower. In zone A, the resistivity decreased to 1300 Ωm, and in zone F, it decreased to 200 Ωm. This last zone was shallower (less than 10 m deep) and was located at the north end of the profile. This zone reached more than 80 m in length, dipping southward, and reached 40 m depths. Mineralized zone B may be a continuation from the north end of mineralized zone A, delimited by a fault. Zone B dipped southward and lay at a depth between 10 and 50 m. Mineralized zone C could also be a continuation of zone A at a depth greater than 80 m. Zone C was sub-vertical and appeared to be delimited laterally by a fault. The mineralized zones D and E were related through faults. Zone D was located 60 m deep and zone E was very shallow.
Fluorite Deposits
Profiles 3 and 4 cross-cut fluorite deposits. Profile 3 was from an MVT fluorite deposit at the Otomí mine within the San Vicente-La Harina mineralized zone (P3 in Figure 1). In this mine, limestone from the Santa Elena Formation of Early Cretaceous (Albian) age outcrops (Figure 6a). At the center of the profile, there was also an area of unconsolidated materials and limestones, which were both highly oxidized and red in color (Figure 7). This zone had iron sulfides whose alteration produced iron oxides and clays. Both mineral groups are good conductors [7] and could help to detect the fluorite bodies in this deposit.
In profile 3, vertical wells showed that fluorite mineralization outcrops were located in old mining works (1, 2, and 3 in Figure 6b). In this deposit, the fluorite mineralization was related to areas enriched in iron oxides and clays. The fluorite mineralization detected in vertical well 2 occurred in a highly oxidized limestone sequence. In vertical wells 1 and 3, the mineralization occurred in less oxidized limestone, that is, limestone less enriched in iron oxides and clays. The extent of this area and of the unconsolidated materials in the subsurface is indicated in the interpreted ERT (Figure 6b).
In the ERT model of profile 3, the resistivity ranged between 10.4 and 100,000 Ωm, with an RMS of 3.34% (Figure 6b). In the fluorite outcrop, and around the vertical wells, resistivity values were less than 5000 Ωm, delimited by larger resistivity values. These low resistivity values allowed the delimitation of the mineralization within the limestone sequence (A, C, E, and F in Figure 6b). New mineralized zones were also revealed by these resistivity values (B, D, G, H, and I in Figure 6b). In general, the mineralization reached a depth greater than 85 m.

In the interpretation of the ERT model of profile 3, the mineralized zones dipped towards the SW in the first part of the profile. This dip was similar to the dip of the limestone layers described in the mine [19]. The second part of the profile was located in the zone with the lowest topographic height, towards the profile end. In this second part, the mineralized zones and, therefore, the limestone layers dipped towards the NE. The differences in dip direction in the two parts of the profile showed its location in a folded area (anticline at both ends and syncline at the profile center; Figure 6b).

Profile 4 occurred in a magmatic-type fluorite deposit. This deposit was hosted in the La Rosita mineralized zone, in which the Juan Valencia mine was studied (P4 in Figure 1). Limestones from the Salmon Peak Formation of Early Cretaceous, Middle-Upper Albian age outcrop in this mine (Figure 8a). Fluorite veins were observed in these rocks, generally controlled by faults with iron oxides and clays originated from iron sulfide alteration [20]. Such minerals are good conductors [7] and, therefore, could help detect the associated fluorite bodies in this deposit. An igneous rock dike (rhyolite porphyry) outcropped in the first 20 m of profile 4 (NW end; Figure 8a,b). Otherwise, only very compact limestones were seen. At the NW end, at a 20 m depth, a horizontal gallery of the old mining works was located (Figure 8b).

In the ERT model of profile 4, the resistivity ranged from 1.0 to 100,000 Ωm, with an RMS of 3.66% (Figure 8b). In this section, high resistivity was predominantly related to compact limestone. Resistivities lower than 180 Ωm occurred in the igneous rock dike. The zone that included the horizontal gallery was associated with resistivities between 180 and 1600 Ωm. Such resistivity values were related to iron oxides and clays associated with fluorite mineralization. This resistivity range allowed us to continue delimiting the fluorite mineralization within the limestone sequence (A in Figure 8b). Similarly, those resistivity values revealed other mineralized zones (B, C, and D in Figure 8b).
All mineralized zones had the same resistivity ranges, indicating similar iron oxide and clay contents in such zones. The mineralized zone A was hosted in the contact area between the igneous and sedimentary rocks and possibly extended SE through a fracture, fault, or stratigraphic contact. The NW dip of the structure related this zone to the shallower mineralized zone C (less than 10 m deep). The largest mineralized zone (zone B) started at a depth of 45 m and reached more than 90 m in depth, with an approximate diameter of 30 m. This zone should be related to a fracture or fault that reached the surface. To the SE of zone B, there was another smaller mineralized zone (zone D), located at a depth of 15 m. This last zone was also related to fractures or faults.
Barite Deposit
Profile 5 was located in the MVT-type barite deposit of the El Tapón mine, belonging to the Palos Altos mineralized zone (P5 in Figure 1). Along the profile, limestones from the Zuloaga Formation of Late Jurassic, Oxfordian-Kimmeridgian age outcrop (Figure 9a). In these rocks, barite mineralization was observed in one of the breccia outcrops, together with iron oxides and clays (Figure 10). A vertical well with a depth of 15 m had been made in this outcrop during past mining activities. The iron oxides and clays originated from the oxidation of Pb-Zn sulfides associated with the barite mineralization. Such minerals are good conductors [7] and could help detect the associated barite mineral bodies in the deposit.
In the ERT model of profile 5, the resistivity ranged from 61 to 10,000 Ωm, with an RMS of 5.57% (Figure 9b). In the barite outcrop and around the vertical well, relatively low resistivity values were obtained (lower than 650 Ωm), delimited by high resistivities. This relationship between resistivity values and barite mineralization allowed the estimation of the extension and location of this mineralization within the limestone sequence (A in Figure 9b). The resistivity data also showed the existence of other mineralized zones with a similar signature (B, C and D in Figure 9b). These zones were relatively shallow but could reach depths greater than 118 m (see zone D in Figure 9b). Most of these mineralized zones were southward dipping, following the general dip of the rock sequence in the mine [18]. Zone C presented a different overall dip (northward), apparently related to a fault. This structure dipped northward and cut the profile in a NW-SE direction, consistent with the tectonic characteristics of the region [18]. The highest resistivity values were related to compact limestone, that is, limestone with few fractures [7].
After the ERT interpretation, a mining working was carried out in zone D, and barite mineralization was found at a depth of 19 m, within a breccia zone with iron sulfides, iron oxides, and clays.
Discussion
In the magmatic-type Ag-Pb-Zn deposit, the electrical resistivity data reveal five mineralized zones (A, B, C, D, and E in Figure 3b), characterized by resistivity values below 510 Ωm. These mineralized zones have different geometries, sizes, and extensions. The resistivity values (mostly less than 90 Ωm) indicate that zone A has the highest enrichment in sulfides and iron oxides. Some areas of zones B and E also have higher mineralization concentrations, according to the decrease in the resistivity values in both zones [8]. The fractures and faults are identified by low-resistivity values [26][27][28] and reveal a structural control on the mineralization, which is characteristic of magmatic-type Ag-Pb-Zn deposits [15]. In the Minerva mineralized zone, such structures cut the profile in a NE-SW direction, consistent with the direction reported for faults near the mine [16].
The interpretation of the ERT model in the MVT Ag-Pb-Zn deposit reveals six mineralized zones (A, B, C, D, E, and F in Figure 4b), characterized by resistivity values between 200 and 8000 Ωm. These zones have different geometries, sizes, and extensions. This interpretation also shows a highly tectonized mineral deposit and an apparent relationship between the mineralization and tectonic deformation through faults. These structures conditioned the distribution and characteristics of the mineral bodies. The identified faults cut the profile in a NE-SW direction, coinciding with the direction of the principal faults reported in the mining district [17]. The mineralized zones A and particularly F may be more mineralized, because of their lower resistivities compared with the other four zones.
The resistivity values also indicate that the mineral enrichment is largest in zone A, where it occurs to the north of the mining workings.
In the MVT fluorite deposit, nine mineralized zones (A, B, C, D, E, F, G, H and I in Figure 6b) were delimited, with relatively moderate resistivity values (less than 5000 Ωm). These zones have different geometries, sizes, and extensions. Lower resistivity values in zones C, G, H, and I indicate a higher amount of iron sulfides and clays in the fluorite mineralized zones. Values higher than 10,000 Ωm must be related to very compact limestone and less mineralization.
In the magmatic-type fluorite deposit, four mineralized zones (A, B, C, and D in Figure 8b) were delimited with resistivity values ranging between 180 and 1600 Ωm. In this deposit, the interpreted ERT model shows the structural control of the mineralization, and that iron oxides and clay contents are filling some fractures, as reported by Bernales-Azúcar et al. [20] in previous research. The fractures have relatively low resistivity values [6,[26][27][28]. The interpreted ERT model also evidences the dipping direction of the limestone sequence. The faults or fractures suggested in the interpretation may be related to some faults with NE-SW directions, reported in the surroundings of the studied area [20].
In the MVT barite deposit, four mineralized zones (A, B, C, and D in Figure 9b) were delimited, with resistivity values less than 650 Ωm. These resistivity values confirm the resistivity ranges reported in other barite deposits, for example, the resistivity from 300 to 500 Ωm reported by Dakir et al. [9]. The lower-resistivity values indicate higher iron sulfides and clay contents in barite mineralization. Zones C and D have higher contents of these minerals (Figure 9b).
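Purely as an illustration (not part of the original study), the deposit-specific resistivity ranges discussed above can be condensed into a simple screening rule; the dictionary values are the thresholds quoted in this Discussion, while the function name and the one-value-per-label logic are our own simplifications:

```python
# Illustrative sketch: resistivity ranges (ohm-m) associated with mineralized
# zones in each deposit type, as quoted in the Discussion. Real interpretation
# also uses geometry, geology and known workings, never a value in isolation.
MINERALIZED_RANGES = {
    "magmatic Ag-Pb-Zn": (0.0, 510.0),     # profile 1, zones A-E
    "MVT Ag-Pb-Zn":      (200.0, 8000.0),  # profile 2, zones A-F
    "MVT fluorite":      (0.0, 5000.0),    # profile 3
    "magmatic fluorite": (180.0, 1600.0),  # profile 4, zones A-D
    "MVT barite":        (0.0, 650.0),     # profile 5, zones A-D
}

def screen_resistivity(deposit_type, rho_ohm_m):
    """Label a resistivity value as candidate mineralization or host rock."""
    low, high = MINERALIZED_RANGES[deposit_type]
    if low <= rho_ohm_m <= high:
        return "candidate mineralized zone"
    return "host rock (e.g., compact limestone)"
```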
The location, geometry, extension, and dip of the mineralized zones of fluorite (profile 3), barite (profile 5), and Ag-Pb-Zn (profile 2), evidence their stratabound origin and, therefore, the type of mineral deposit, classified as MVT. Similarly, the location and geometry of the mineralized zones in profiles 1 and 4 show the magmatic-type fluorite and Ag-Pb-Zn deposits, respectively. The detected fractures and faults in the ERT models, as well as the heterogeneous patterns and trends defined by the mineralized zones, suggest that the mineralization is structurally controlled.
Conclusions
Five ERT models have been obtained on MVT-type and magmatic mineral deposits of northeastern Mexico. The geoelectrical profiles allowed for the characterization of these mineral deposits and evidenced the magmatic origin of one Ag-Pb-Zn and one fluorite deposit. Similarly, the stratabound (MVT) origin of the other Ag-Pb-Zn, fluorite, and barite deposits was evidenced. The interpretation allowed us to estimate the location, geometry, extension, and dip of the mineralized zones. In all 2D models, the continuity in the subsurface of the mineralization observed at the surface or in mining workings was established. New mineralized zones were also revealed. The mineralized zones with moderate to low resistivity values are related to mineral bodies with the highest Ag-Pb-Zn contents, and the fluorite and barite mineralized zones are more enriched in iron sulfides and clays. In most of the studied profiles, the structural control of the mineralization is shown, according to the identified fractures and faults. The results of this research could be used as a guide for future exploration of mineral deposits in this region or elsewhere.
Robust peak detection for photoplethysmography signal analysis
Efficient and accurate evaluation of long-term photoplethysmography (PPG) recordings is essential for both clinical assessments and consumer products. In 2021, the top open-source peak detectors were benchmarked on the Multi-Ethnic Study of Atherosclerosis (MESA) database, consisting of polysomnography (PSG) recordings and continuous sleep PPG data, where the Automatic Beat Detector (Aboy) had the best accuracy. This work presents Aboy++, an improved version of the original Aboy beat detector. The algorithm was evaluated on 100 adult PPG recordings from the MESA database, which contain more than 4.25 million reference beats. Aboy++ achieved an F1-score of 85.5%, compared to 80.99% for the original Aboy peak detector. On average, Aboy++ processed a 1-hour-long recording in less than 2 seconds, compared to 115 seconds (i.e., over 57 times longer) for the open-source implementation of the original Aboy peak detector. This study demonstrates the importance of developing robust algorithms like Aboy++ to improve PPG data analysis and clinical outcomes. Overall, Aboy++ is a reliable tool for evaluating long-term wearable PPG measurements in clinical and consumer contexts. The open-source algorithm is available on the physiozoo.com website (on publication of this proceeding).
Introduction
Photoplethysmography (PPG) is an optical technique widely used in clinical and commercial settings to detect volumetric variations of blood circulation in the tissue microvascular bed. PPG is typically performed at the fingertip and offers a low-cost and convenient assessment of various physiological systems, including the cardiovascular, respiratory and autonomic nervous systems [1]. Its main application fields include heart rate (HR) measurement [2] and atrial fibrillation detection [3]. Potential applications of PPG include sleep staging [4], obstructive sleep apnea screening [5], and blood pressure estimation [6]. Real-time PPG assessment is particularly important in clinical settings, where it is extensively used for oxygen saturation and heart rate monitoring. In recent years, the popularity of PPG has grown significantly due to the development of consumer devices such as smartwatches [7], PPG rings [8], sports belts [9] and in-ear PPG devices [10].
Aboy peak detector
In the original Aboy algorithm, the PPG signal is heavily filtered to preserve frequencies near an initial heart rate estimate [12]. The PPG signal is segmented into non-overlapping 10-second windows, and each window is processed with three digital filters. The first filter removes baseline wander and high-frequency noise in preparation for spectral HR estimation. This HR estimate is then used to determine the upper cutoff frequency of the second and third filters. These filters help to detect peaks in the PPG signal and its first derivative (PPG'), respectively, and ultimately to determine the systolic peaks. The peak detection threshold is set at the 90th percentile of the amplitude of the PPG' signal and the 60th percentile of the PPG signal. The systolic peaks are identified as peaks in a weakly filtered PPG signal immediately following each peak identified in the differentiated signal [18].
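The percentile-based thresholding described above can be sketched with standard scientific-Python tools. This is only an illustrative outline: the function name, the toy signal and the use of SciPy's find_peaks are our assumptions, not the published implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_maxima(x, percentile):
    """Local maxima of x whose height exceeds the given amplitude percentile
    (cf. the 90th percentile on PPG' and the 60th percentile on PPG)."""
    return find_peaks(x, height=np.percentile(x, percentile))[0]

# Toy example: a clean 1.2 Hz "pulse" sampled at 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)
dppg = np.gradient(ppg, 1 / fs)    # first derivative, PPG'
d_peaks = detect_maxima(dppg, 90)  # candidate upstroke maxima
p_peaks = detect_maxima(ppg, 60)   # candidate systolic peaks
```

In the original algorithm, each maximum of the derivative is then paired with the following maximum of a weakly filtered PPG signal to obtain the systolic peak.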
Aboy+ peak detector
The Aboy+ algorithm is an accelerated version of the original Aboy algorithm [12]. Charlton's implementation of the Aboy algorithm in Matlab [18] uses high-order finite impulse response (FIR) filters for the three bandpass filters, with passbands of 0.45-10 Hz, 0.45-2.5*HR/60 Hz, and 0.45-10*HR/60 Hz. Although this provides accurate cutoff frequencies, the filters are computationally expensive, particularly in the case of long-term measurements (see Table 2). Therefore, in the Aboy+ algorithm, zero-phase 5th-order Chebyshev Type II infinite impulse response (IIR) filters with the same cutoff frequencies were used to reduce the computational complexity.
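The IIR replacement can be illustrated with SciPy. In this sketch, second-order sections are our implementation choice for numerical stability at the low 0.45 Hz cutoff, and the 40 dB stopband attenuation is an assumed parameter not specified in the text:

```python
import numpy as np
from scipy.signal import cheby2, sosfiltfilt

def bandpass_cheby2(x, fs, hr_bpm, order=5, rs=40.0):
    """Zero-phase Chebyshev Type II band-pass from 0.45 Hz to 2.5*HR/60 Hz.
    Forward-backward filtering (sosfiltfilt) gives the zero-phase response."""
    low, high = 0.45, 2.5 * hr_bpm / 60.0
    sos = cheby2(order, rs, [low, high], btype="bandpass", output="sos", fs=fs)
    return sosfiltfilt(sos, x)

fs = 256  # MESA PPG sampling rate
t = np.arange(0, 30, 1 / fs)
# 1.2 Hz pulse plus 0.1 Hz baseline wander and 20 Hz noise
raw = (np.sin(2 * np.pi * 1.2 * t)
       + 2.0 * np.sin(2 * np.pi * 0.1 * t)
       + 0.3 * np.sin(2 * np.pi * 20.0 * t))
filtered = bandpass_cheby2(raw, fs, hr_bpm=72)
```

Because the filter is applied forward and backward, the effective attenuation is doubled and the phase response is zero, which preserves the timing of the systolic peaks.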
Aboy++ peak detector
The Aboy++ peak detection algorithm adds adaptive HR estimation to improve on Aboy+. In each 10-second window, an HR estimate is made for the subsequent window, using constraints to reduce the likelihood of incorrect HR values. Based on the estimated HR, the upper cutoff frequency is adjusted to improve peak detection. The algorithm proceeds in six steps per window (F_s denotes the sampling frequency); the final two are: (5) Final peaks: using DetectMaxima, the peak-to-peak time must be at least 2*HR_win and the prominence at least 25% of the average systolic peak amplitude; and (6) Update HR window: the HR window is updated for the subsequent 10 s window.
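A schematic of this windowed, adaptive structure is sketched below. The simplified update rule (median inter-beat interval, accepted only within a 30-200 bpm plausibility band) and the HR-dependent minimum peak distance stand in for the HR-index rule and filter re-tuning of the released Aboy++; they are our assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

def windowed_peak_detection(ppg, fs, win_s=10, hr_init=75.0):
    """Schematic: detect peaks per 10 s window using the HR estimated from
    the previous window, then update the estimate from the detected
    inter-beat intervals (simplified stand-in for the Aboy++ rules)."""
    hr = hr_init
    n = int(win_s * fs)
    all_peaks = []
    for start in range(0, len(ppg) - n + 1, n):
        seg = ppg[start:start + n]
        min_dist = int(fs * 60.0 / (1.5 * hr))   # assumed plausibility bound
        height = np.percentile(seg, 60)          # cf. the 60th percentile rule
        peaks, _ = find_peaks(seg, distance=min_dist, height=height)
        if len(peaks) > 1:
            hr_new = 60.0 * fs / np.median(np.diff(peaks))
            if 30.0 <= hr_new <= 200.0:          # constraint against bad HR
                hr = hr_new                      # used in the next window
        all_peaks.extend(peaks + start)
    return np.asarray(all_peaks), hr
```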
The open-source algorithm is available on the physiozoo.com website.
Materials and Methods
MESA database
The MESA database is a well-known database widely used in cardiovascular disease-related research [19]. It consists of data from 2,054 adult individuals with sub-clinical cardiovascular disease and contains a total of 19,998 hours of PPG records. The male-to-female ratio is 1:1.2, and the patients' age range is 54-95 years. The database was downloaded from the National Sleep Research Resource (NSRR). The polysomnography (PSG) records were acquired at home from the fingertip using Nonin® 8000 series pulse oximeters (Nonin Medical Inc.). The PPG sampling rate was 256 Hz.
Evaluation of peaks
The electrocardiogram (ECG) peaks were extracted from the PSG records and used as a reference for the expected locations of the PPG systolic peaks [20]. The peak matching method of Kotzen et al. [20] was used to benchmark the Aboy, Aboy+, and Aboy++ algorithms on the MESA database. However, Kotzen [11] excluded noisy signals during evaluation, which resulted in higher F1-scores for the previously benchmarked peak detectors than in the present study.
The original and improved algorithms were evaluated on a standalone server using 100 records and 1025 hours of PPG data from the MESA database. The median length of the records was 10 hours, with an interquartile range (IQR) of 2 hours. The records were divided into 1-hour-long segments, and the F1-score was calculated as the mean value over these segments.
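The per-segment scoring can be sketched as follows; the benchmark itself used the Kotzen et al. peak-matching method, so the greedy matcher and the 150 ms tolerance below are illustrative assumptions only:

```python
import numpy as np

def f1_peak_match(detected, reference, fs, tol_s=0.15):
    """Greedy one-to-one matching of detected beats to reference beats within
    +/- tol_s seconds, then the F1-score from precision and recall.
    The 150 ms tolerance is an assumed value for illustration."""
    tol = tol_s * fs
    unmatched = list(reference)
    tp = 0
    for d in detected:
        if not unmatched:
            break
        # closest still-unmatched reference beat
        j = int(np.argmin([abs(r - d) for r in unmatched]))
        if abs(unmatched[j] - d) <= tol:
            tp += 1
            unmatched.pop(j)
    fp = len(detected) - tp
    fn = len(reference) - tp
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example at fs = 100 Hz: one reference beat is missed by the detector
score = f1_peak_match([102, 205, 390], [100, 200, 300, 400], fs=100)
```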
To present the F1-score results, the median (MED) and the quartiles (Q1, Q3) were used. MED: the value in the middle of the ordered data set. Q1, Q3: the values below which 25% and 75% of the data fall, respectively.
Results
Table 1 shows that Aboy++ outperformed the original Aboy by almost 5% in terms of F1-score. Our initial expectation was that Aboy and Aboy+ would achieve similar F1-scores; however, we found that Aboy+ significantly outperformed Aboy. Although both Aboy and Aboy+ used the same cutoff frequencies, the 4th-order IIR filter used in Aboy+ was more effective in processing low-quality signals than the FIR filter in the original Aboy implemented by Charlton [18]. The median length of each patient's recording was 10 hours, with an interquartile range of 2 hours. When evaluated on recordings of 100 adults included in the MESA database, containing more than 4.25 million reference beats, Aboy++ achieved an F1-score of 85.5%, compared to 80.99% for the Aboy+ peak detector. The open-source implementation of the original Aboy peak detector [18] was not efficient for long-term measurement analysis due to its high computational cost. The enhanced performance of the Aboy++ peak detector was not limited to its superior F1-score: Aboy++ also provided a remarkable reduction in computational time compared to the original Aboy algorithm, as summarized in Table 2. Peak detection with Aboy++ was over 57 times faster than with the original algorithm, without multiprocessing. This significant improvement in speed can be particularly advantageous for long-term PPG recordings, where the original Aboy algorithm was computationally expensive.
Conclusion and discussion
This research contributes a new PPG peak detector, denoted Aboy++, based on the original Aboy algorithm developed by Aboy et al. The adaptive feature of the Aboy++ peak detector provides higher accuracy compared to the original Aboy peak detector in cases where there is strong baseline wander (see Fig. 1.b), rapid amplitude fluctuations, and high heart rate variability. In the presence of a noisy signal (see Fig. 1.b), peak detection can also be challenging for the Aboy++ peak detector. In the short term, the estimated heart rate of the previous windows can provide a reasonable output (see Fig. 1.c); however, peak detection accuracy can be diminished in the presence of long-term noisy signals (see Fig. 1.d). Although excluding low-quality signals can have advantages, inaccurate assessment of signal quality can lead to significant changes in the overall signal evaluation.

Aboy++ has shown superior performance compared to other open-source algorithms benchmarked on the MESA database, making it a strong candidate for a standardized toolbox for comprehensive PPG analysis. These results demonstrate that Aboy++ is a highly accurate and efficient peak detection algorithm that can be reliably used for PPG signal analysis. Future work will involve evaluating the performance of Aboy++ on the complete MESA database, as well as on other databases of PPG signals from different patient groups. Additionally, algorithms will be implemented for detecting other PPG fiducial points, such as the pulse onset, dicrotic notch, diastolic peak, and fiducial points of higher-order derivatives, which rely heavily on the accurate identification of systolic peaks. By using Aboy++ as a foundation for PPG analysis, researchers can improve the accuracy and reliability of their findings and make significant progress in understanding the cardiovascular system's underlying mechanisms.

Steps (1)-(4) of the Aboy++ algorithm: (1) Windowing: 10 s, non-overlapping windows are used. (2) HR estimation: the HR estimation process uses the DetectMaxima function, enhanced through adaptive peak-to-peak time and a modified percentile of the peak amplitude. (3) Define HR index: the list of peak-to-peak times above the 30th percentile is defined as P_d. If median(P_d)*0.5 < mean(P_d) < mean(P_d)*1.5, then HR_i = std(P_d)/mean(P_d)*10; otherwise, HR_i is kept from the previous window. (4) Define HR window: HR_win = F_s/((1

Figure 1: Performance of the Aboy and Aboy++ peak detectors on PPG signals of varying quality. Panel (a) represents a good-quality PPG signal with slight baseline wander, while Panel (b) represents a signal with rapid amplitude fluctuations. Panel (c) displays a low-quality PPG signal, while Panel (d) demonstrates very low-quality PPG signals in the long term.
Table 2: Computational time of peak detectors
IMPROVING THE CATALYTIC FEATURES OF THE LIPASE FROM Rhizomucor miehei IMMOBILIZED ON CHITOSAN-BASED HYBRID MATRICES BY ALTERING THE CHEMICAL ACTIVATION CONDITIONS
Elizabete Araújo Carneiro (a), Ana Karine Pessoa Bastos (b), Ulisses Marcondes Freire de Oliveira (c), Leonardo José Brandão Lima de Matos (d), Wellington Sabino Adriano (e), Rodolpho Ramilton de Castro Monteiro (c), José Cleiton Sousa dos Santos (c,f,*), and Luciana Rocha Barros Gonçalves (c)

(a) Instituto Federal do Ceará, Campus Quixadá, 63900-000 Quixadá – CE, Brasil
(b) Eixo de Química e Meio Ambiente, Instituto Federal do Ceará, Campus Maracanaú, 61939-140 Maracanaú – CE, Brasil
(c) Departamento de Engenharia Química, Universidade Federal do Ceará, Campus do Pici, 60455-760 Fortaleza – CE, Brasil
(d) Instituto Federal do Maranhão, Campus Caxias, 65600-992 Caxias – MA, Brasil
(e) Universidade Federal de Campina Grande, Centro de Educação e Saúde, Campus Cuité, 58175-000 Cuité – PB, Brasil
(f) Instituto de Engenharias e Desenvolvimento Sustentável, Universidade da Integração Internacional da Lusofonia Afro-Brasileira, Campus das Auroras, 62790-970 Redenção – CE, Brasil
INTRODUCTION
Enzyme immobilization on solid supports, besides facilitating the recovery and further re-use of the catalyst, offers important additional advantages. 1,2 Indeed, immobilization avoids enzyme aggregation and autolysis, facilitates operational control, increases flexibility of reactor design and facilitates the removal of the catalyst from the reaction medium. 3 Further stabilization of the three-dimensional structure of the immobilized enzyme may be achieved by increasing the rigidity of the macromolecule, which can be accomplished when several bonds between enzyme and support are formed. 4 Lipases (triacylglycerol acyl hydrolases, E.C. 3.1.1.3) have been immobilized by several methods, namely adsorption, cross-linking, adsorption followed by cross-linking, covalent attachment and physical entrapment. 5-13 However, the selection of an immobilization strategy should be based on process specifications for the catalyst, including parameters such as overall enzymatic activity, effectiveness of lipase utilization, deactivation and regeneration characteristics, costs of the immobilization procedure, toxicity of immobilization reactants and the desired final properties of the immobilized derivative. 14,15 Moreover, applications on an industrial scale require the immobilization and re-usability of the enzyme. In this way, multipoint covalent immobilization requires the interaction of several residues of the same enzyme molecule with active groups of the support. 16,17 Enzyme stabilization is obtained by increasing the rigidity of a small part of its surface, which renders the overall three-dimensional structure more rigid. 18,19 Aldehyde groups in the support and amine groups in the enzyme are a good choice to make the multipoint attachment and, therefore, to obtain highly stable enzyme derivatives.
19 Amine groups (terminal and in lysine residues) are very reactive, abundant on the enzyme surface and form Schiff bases with the aldehyde groups of the support. 20 The number of covalent bonds between the support and the enzyme depends on the degree of activation of the support (concentration of aldehyde groups on the support surface) and on the concentration of amine groups in the enzyme molecule. 21 pH is an important variable in this immobilization approach, since lysine amine groups have a pK around 10.5 and will only be reactive at pH 10 or above. 22 Therefore, immobilization and stabilization of enzymes may make them still more attractive for industrial applications, facilitating their use under extreme conditions of temperature and pH, as well as in the presence of organic solvents or any other distorting agent. Improvement of stability, nonetheless, is still one of the main issues for the implementation of enzymes as industrial biocatalysts. 20 Due to the interesting properties of lipases as biocatalysts, several works report the immobilization of these enzymes using different protocols: adsorption on hydrophobic supports, entrapment in gels and covalent attachment to solid supports. 23,24 Among lipases, the lipase from Rhizomucor miehei (RML) is an enzyme available in both soluble and immobilized forms, presenting high activity and good stability under diverse conditions; therefore, it has been employed in applications ranging from the food industry to organic chemistry, and from biodiesel production to fine chemicals. 25,26 Thus, RML was chosen as the model lipase for this study.
Chitosan, an abundant raw material, has already been used as a support for lipase immobilization. 27 This material is easily available in Ceará State, Brazil, due to the long extent of its seacoast and the high activity of its seafood industry. It is a natural cationic polysaccharide derived from chitin and is known as a good support for enzyme immobilization because of its hydrophilicity, biocompatibility, and biodegradability; moreover, chitosan is obtained at a relatively low cost from shells of shellfish (mainly crab, shrimp, lobster, and krill), wastes of the seafood industry, and its utilization for enzyme immobilization constitutes an attractive option for the disposal of crustacean, shrimp and crab shell wastes. 28 Chitosan has reactive amino and hydroxyl groups which, after further chemical modifications, can form covalent bonds with reactive groups of enzymes. 1 Due to its amine groups, chitosan is a cationic polyelectrolyte (pKa = 6.5), insoluble in neutral aqueous solutions but soluble in acidic solutions below pH 6.5. The mechanical properties of this polymer can be improved by further crosslinking using bifunctional reactants like glutaraldehyde. 28 Chitosan amine groups can directly react with glutaraldehyde to generate aldehyde groups, which will in turn form Schiff bases with the enzyme. 29 Chitosan hydroxyl groups can also be activated using epoxide reactants such as glycidol and epichlorohydrin, followed by oxidation with sodium periodate to produce reactive aldehyde-glyoxyl groups. 30 The internal structure of the chitosan gel can be modified by interaction with other biopolymers, such as alginate and carrageenan, with which chitosan may form hybrid gels. 26,27 The biopolymers alginate and κ-carrageenan have groups that are negatively charged at neutral pH and can interact with the positively charged amine groups of chitosan, forming different internal networks.
Carrageenans are a family of linear sulfated polysaccharides extracted from red seaweeds. 31 They are large, highly flexible molecules that curl, forming helical structures that give them the ability to form a variety of different gels at room temperature in the presence of some cations, like potassium. κ-Carrageenan can form strong and rigid gels. 32 Alginate induces the formation of stable polyelectrolytes with chitosan that are broken under strict pH and temperature conditions. 33 This work aims to obtain highly active and thermally stable immobilized derivatives of the lipase from Rhizomucor miehei using hybrid matrices of chitosan and different copolymers, such as κ-carrageenan and sodium alginate, activated by glycidol, epichlorohydrin or glutaraldehyde, analyzing parameters such as immobilization yield, recovered activity, and thermal stability at 60 ºC.
Determination of enzyme activity and protein concentration
The hydrolysis of p-NPB was used to follow the hydrolytic activities of the soluble and immobilized enzyme. Assays were performed by measuring the increase in absorbance at 410 nm produced by the release of p-nitrophenol in the hydrolysis of p-NPB (15 mmol L-1 in 2-propanol) in 100 mmol L-1 sodium phosphate buffer at pH 8 and 25 °C. 34 To initiate the reaction, 2 mL of lipase solution or suspension was added to 1 mL of substrate solution. One unit (U) of p-NPB activity was defined as the amount of enzyme necessary to hydrolyze 1 μmol of p-NPB per minute under the conditions described above.
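The unit definition above translates directly into a rate calculation via the Beer-Lambert law. In this sketch, the molar extinction coefficient of p-nitrophenol at 410 nm (taken here as 15,000 L mol-1 cm-1) and the 1 cm path length are assumed values for illustration, not parameters reported in the text:

```python
def pnpb_activity(dA_dt_per_min, v_total_ml, v_enzyme_ml,
                  eps=15000.0, path_cm=1.0):
    """Hydrolytic activity in U per mL of enzyme solution.

    Beer-Lambert: d[p-NP]/dt (mol L^-1 min^-1) = (dA/dt) / (eps * path).
    One unit (U) = 1 umol of p-NPB hydrolyzed per minute.
    eps and path_cm are assumed values for illustration.
    """
    rate_mol_l_min = dA_dt_per_min / (eps * path_cm)
    umol_per_min = rate_mol_l_min * (v_total_ml / 1000.0) * 1e6  # in the cuvette
    return umol_per_min / v_enzyme_ml

# Assay of the text: 2 mL enzyme + 1 mL substrate; e.g. dA/dt = 0.30 min^-1
activity = pnpb_activity(0.30, v_total_ml=3.0, v_enzyme_ml=2.0)  # U mL^-1
```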
Protein concentration was determined according to the procedure described by Bradford (1976) using bovine serum albumin (BSA) as standard. 35
Preparation of chitosan beads
Chitosan beads were prepared by dissolving powdered chitosan in a 5% (v/v) acetic acid solution. The resulting 2.5-5.0% (m/v) solution was dropped into a gently stirred 0.1 mol L -1 NaOH solution, kept there for 24 h at room temperature, and washed with an excess of distilled water. 36 Higher polymer concentrations were not used because their high viscosity makes bead formation difficult.
Preparation of hybrid-chitosan beads
Hybrid-chitosan beads were prepared by dissolving powdered chitosan in a 5% (v/v) acetic acid solution. Afterward, sodium alginate or carrageenan was added to the solution, which was stirred for 10-30 min. The obtained solutions were sprayed into a gently stirred 0.1 mol L -1 NaOH solution, kept there for 24 h at room temperature, and washed with distilled water. 36 The obtained supports were chitosan 2.5%-alginate 2.5% and chitosan 2.5%-carrageenan 2.5%, with all concentrations expressed as % (m/v).
Chemical activation using glutaraldehyde
Activation was performed by contacting chitosan and hybrid-chitosan beads with sodium phosphate buffer (0.1 mol L -1 , pH 7.0) containing 5% (v/v) glutaraldehyde, using a Vbeads/Vtotal ratio of 1/10, for 1.0 h at 25 °C. 20 Afterwards, the beads were washed with distilled water to remove the excess of activating agent.
Chemical activation using glycidol and epichlorohydrin
Glyceryl supports were prepared by mixing the beads under stirring with an aqueous solution containing 1.7 mol L -1 NaOH and 0.75 mol L -1 NaBH 4 (glycidol) 37 or 2 mol L -1 NaOH and 0.12 mol L -1 NaBH 4 (epichlorohydrin) in an ice bath. 38 Then, 0.48 mL of glycidol or 2 mL of epichlorohydrin per gram of beads was added, and the mixture was kept under mechanical stirring for 18 h and washed until neutrality. Glyoxyl/oxirane supports were obtained by contacting the beads with 2 mL of 0.1 mol L -1 NaIO 4 solution per gram of gel for 2.0 h at room temperature. 39 Afterward, they were washed with an excess of distilled water until neutrality.
Immobilization procedure
Lipase from Rhizomucor miehei was immobilized on chitosan beads (200 U p-NPB of enzyme per gram of beads) after activation with glycidol, epichlorohydrin or glutaraldehyde. The immobilization was carried out in 100 mmol L -1 sodium bicarbonate buffer (m/v ratio of 1/10), pH 10.05, at 25 °C, with an incubation time of 5 h under mild stirring. A load of 2 mg of protein per gram of support was used, prepared from a crude extract containing 10.8 mg of protein per milliliter. The number of enzyme units per mL and the protein mass per mL of enzyme preparation were determined using the hydrolytic activity assay (described above) and the Bradford method. 35 The masses of enzyme and gel were weighed, so the offered enzyme load could be calculated (U g gel -1 and mg of enzyme per gram of gel).
Immobilization parameters
The immobilization yield (IY) was calculated from the difference between the enzyme activities in the blank solution and in the supernatant before (At 0 ) and after (At f ) immobilization, according to Eq. (1):

IY (%) = 100 × (At 0 - At f ) / At 0    (1)

Because the offered enzyme load was known, the number of enzyme units theoretically immobilized per gram of gel (At theoretically immobilized ) could be calculated. After the immobilization was finished, the apparent gel activity At app (enzyme units g gel -1 ) was measured and compared to the theoretically immobilized activity. The recovered activity was then calculated as At app (U g gel -1 ) / At theoretically immobilized (U g gel -1 ).
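The immobilization parameters defined above can be sketched as simple functions; the numeric values in the example lines are illustrative placeholders, not data from this work:

```python
def immobilization_yield(at0, atf):
    """Immobilization yield IY (%), Eq. (1): fraction of the supernatant
    activity that disappeared during immobilization."""
    return 100.0 * (at0 - atf) / at0

def recovered_activity(at_app, at_theoretical):
    """Recovered activity (%): apparent gel activity relative to the
    activity theoretically immobilized per gram of gel."""
    return 100.0 * at_app / at_theoretical

# Illustrative example: 200 U offered per g of gel, 44 U left in the supernatant
iy = immobilization_yield(200.0, 44.0)            # immobilization yield of 78%
theoretically_immobilized = 200.0 * iy / 100.0    # 156 U per g of gel
ra = recovered_activity(120.0, theoretically_immobilized)
```

Note that the recovered activity compares the measured gel activity to the theoretically immobilized units, so diffusional limitations or enzyme distortion in the gel show up as a recovered activity below 100%.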
Thermal stability assays
Soluble enzyme and immobilized derivatives were incubated in 0.1 mol L -1 sodium phosphate buffer, pH 7.0, at 60 °C. Periodically, samples were withdrawn and their residual hydrolytic activities were assayed as described above. The single-step non-first-order model proposed by Sadana and Henley 40 was fitted to the experimental data. This model considers that a single inactivation step leads to a final state that exhibits a very stable residual activity. The activity-time expression is

AR = α + (1 - α) exp(-k d t)    (2)

in which AR is the dimensionless residual activity, i.e., the ratio between the specific activity at time t, A t , and that of the initial state, At 0 ; k d is the first-order deactivation rate constant (time -1 ), which describes the unfolding or inactivation process; and the parameter α describes the long-term level of activity (Pedroche et al.). 21 The stabilization factor (S F ) was given as the ratio between the half-life (t 1/2 ) of the immobilized derivative and that of the soluble enzyme under the same conditions.
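Under the Sadana-Henley expression above, the half-life and stabilization factor follow in closed form. This is a minimal sketch with illustrative parameter values rather than a fit to the paper's data; kd_soluble is chosen only so that the half-life comes out near the 11 min reported for the soluble enzyme:

```python
import math

def residual_activity(t, kd, alpha):
    """Sadana-Henley single-step model, Eq. (2): AR(t) = alpha + (1 - alpha) exp(-kd t)."""
    return alpha + (1.0 - alpha) * math.exp(-kd * t)

def half_life(kd, alpha=0.0):
    """Time at which AR falls to 0.5 (defined only when alpha < 0.5)."""
    if alpha >= 0.5:
        raise ValueError("residual activity never drops below 50%")
    # Solve alpha + (1 - alpha) exp(-kd t) = 0.5 for t
    return math.log((1.0 - alpha) / (0.5 - alpha)) / kd

# Stabilization factor SF = t1/2(derivative) / t1/2(soluble enzyme)
kd_soluble, kd_derivative = 0.063, 0.0128   # illustrative constants (min^-1)
sf = half_life(kd_derivative) / half_life(kd_soluble)
```

With α = 0 the model reduces to simple first-order decay and the half-life is ln 2 / kd; a residual plateau (α greater than 0) lengthens the half-life for the same kd.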
Activation with glutaraldehyde: influence of the polymer composition on the multipoint covalent attachment of lipase
Pure and hybrid chitosan gels were prepared and then activated with glutaraldehyde and used for RML immobilization at pH 10.05 and 25 °C for 5 h. Table 1 presents the immobilization parameters for these RML derivatives.
The results in Table 1 show that the formation of hybrid gels by mixing chitosan with κ-carrageenan or alginate, at the same total polymer concentration (5.0%), may not have improved the internal structure of the matrix, since the obtained hybrid derivatives presented lower thermal stability than pure chitosan. The best derivative was obtained using chitosan 5.0%, with a recovered activity of 97% and a stability 154-fold higher than that of the soluble enzyme.
It can also be seen that the other significant variable was the polymer concentration. Increasing the chitosan concentration from 2.5 to 5.0% led to an increase in the immobilization yield and in the stabilization factor of the derivatives. These results may be explained by the increase in aldehyde groups available to link to the amine groups of the enzyme: the higher the polymer concentration, the higher the number of amine groups in the support available to react with glutaraldehyde. However, the high reactivity of this activating agent may also have led to excessive cross-linking in the matrix and the formation of small pores; as a consequence, the apparent activity of the immobilized enzyme did not increase.
The thermal stability of the produced derivatives was studied at 60 °C (pH 7.0). The Sadana-Henley two-parameter deactivation model was fitted to the data (enzyme residual activities for different incubation times at 60 °C). Figure 1 shows the results, and the calculated half-lives of the obtained biocatalysts are shown in Table 1.
Activation with glycidol: influence of the polymer composition
Figure 2 shows the immobilization parameters for RML on pure and hybrid chitosan activated with glycidol. Activation with glycidol led to a decrease in the stabilization factor compared to activation with glutaraldehyde.
Enzyme immobilization through the reaction between glyoxyl (aldehyde) groups of the support and amine groups of the enzyme requires the formation of at least two simultaneous bonds, which act in a synergistic way but are individually weak. This behavior may explain the observed decrease in the stabilization factor of the chitosan-glyoxyl derivatives: immobilization with glutaraldehyde was probably single-point, whereas with glyoxyl groups it occurred through at least two points. The chitosan-carrageenan derivative presented the best result, with a 78% immobilization yield and a stabilization factor of 80 at 60 °C.
Results in Table 3 and Figure 3 show that activation of pure and hybrid chitosan with epichlorohydrin led to a significant improvement in the thermal stability of the derivatives compared to the other two tested activating agents.
The most stable derivative obtained with epichlorohydrin was the hybrid chitosan-alginate. The presence of different reactive groups in each polymer and the different reactivities of the activating agents may have caused this nonmonotonic behavior. Besides hydroxyl groups, the polymers carry other reactive groups: amine groups in chitosan, acidic groups in alginate, and sulfate groups in κ-carrageenan. The reaction of the amine groups of chitosan with epichlorohydrin generates epoxide groups, which are able to link to the enzyme, as are the glyoxyl groups. Glycidol has epoxy and hydroxyl groups, while epichlorohydrin has epoxy and chloride groups, the latter being more reactive. On the other hand, chitosan also has two reactive groups, amine and hydroxyl, the former being more reactive. Therefore, after the reaction with the epoxide reactants, many amine groups probably also reacted with the activating agents.
Although it has been reported that only a few chitosan amine groups react with epichlorohydrin, the reaction conditions used in this work were stronger, and it was expected that more amine groups were transformed into amino-diol groups. 20 As epichlorohydrin is more reactive than glycidol, more aldehyde groups might be formed in the support with this activating agent, allowing the formation of more bonds between enzyme and support and thus explaining the increase in the stabilization factor. The best results were obtained for chitosan 2.5%-alginate 2.5%, which presented a stabilization factor of 93 (at 60 °C). The different reactivities of the involved groups and the helicoidal conformation of κ-carrageenan, which may lead to the formation of a better internal gel structure, may be responsible for this result. 33

Table 2. Influence of the polymer composition on the immobilization of lipase at pH 10.05, 25 °C, for 5 h. Supports activated with glycidol, enzyme load: 200 U p-NPB of enzyme g -1 of gel. Immobilization parameters: immobilization yield (IY), recovered activity (RA), apparent activity (App), half-life (t 1/2 ) and stabilization factor (S F ) at 60 °C.

Figure caption: Soluble enzyme and derivatives were incubated at 60 °C and pH 7.0: (■) soluble RML; (■) chitosan 2.5%; (■) chitosan 5.0%; (■) chitosan 2.5%-alginate 2.5%; (■) chitosan 2.5%-carrageenan 2.5%.

Table 3. Influence of the polymer composition on the immobilization of lipase at pH 10.05, 25 °C, for 5 h. Supports activated with epichlorohydrin, enzyme load: 200 U p-NPB of enzyme g -1 of gel. Immobilization parameters: immobilization yield (IY), recovered activity (RA), apparent activity (App), half-life (t 1/2 ) and stabilization factor (S F ) at 60 °C.
RML thermostability
Many lipases have been engineered to enhance thermostability, including RML. 41 Thermostability is one of the most important factors for the reaction rate, since high temperatures favor mass transfer. A lipase with better thermostability can bear higher reaction temperatures, benefiting the reaction rate and making industrial processes more efficient. [41][42][43] The optimum temperature of RML depends on several factors. As each immobilized system has its own characteristics, changes in kinetic parameters may or may not occur, influenced by several variables, such as enzyme source, kind of support, immobilization method, and enzyme-support interactions. 44 A study by Mohammadi et al. indicated that immobilized RML retained 90 and 85% of its initial activity after 24 h of incubation at 50 and 55 °C, respectively; when the temperature increased to 60 and 65 °C, approximately 50 and 20% of the initial activity was retained. 45 On the other hand, the study by Babaki et al. revealed that their immobilized RML retained about 89 and 69% of its initial activity after 24 h of incubation at 60 and 80 °C, respectively. 46 In a work by Skoronski et al., the immobilized lipase Lipozyme ® RM IM was applied in continuous bioreactors for ester synthesis. Deactivation of the biocatalyst was observed during the reaction as a function of temperature and the substrate/enzyme ratio, as well as of the water produced by the esterification reaction. In that study, higher conversions were obtained at 40 ºC, while a larger amount of ester was produced when the reaction was carried out at 30 ºC. 47 RML was chosen as the biocatalyst for this work in view of its successful utilization in various synthetic reactions. 26 This can be attributed to the strong specificity and catalytic versatility of RML, as well as its very high catalytic activity over the wide temperature range from 40 to 80 °C. 45,[48][49][50][51]

Performance of the hybrid matrices

The purpose of this work was to prepare chitosan-based hybrid polyelectrolyte matrices (with sodium alginate or carrageenan). Specifically, chitosan and carrageenan form a strong ionic bond with each other due to the electrostatic attraction between the oppositely charged amino groups of chitosan and sulfate groups of carrageenan. 52 In this work, we optimized the concentrations of chitosan and carrageenan in solution and their blend composition to enable the formation of the complexes required to fabricate the chitosan-based hybrid matrices. Furthermore, different structures were obtained using chitosan with another biopolymer, such as alginate, gelatin, or κ-carrageenan. Changing the structure of the gel can significantly improve the covalent multipoint immobilization of enzymes and considerably improve thermostability compared to pure chitosan hydrogels. 36,52 As hybrid hydrogels are highly hydrophilic, their use as enzyme supports requires chemical modification of the matrix using hydrophobic agents to improve the intragel microenvironment, favoring multipoint covalent immobilization of enzymes. 36,[53][54][55] Therefore, this work sought to prepare chitosan gels (with sodium alginate or carrageenan) modified with glycidol (GLY), epichlorohydrin (EPI) or glutaraldehyde (GLU) groups to obtain highly active and thermostable lipase derivatives by multipoint covalent immobilization of RML on chitosan polyelectrolyte complexes.
CONCLUSIONS
In this work, a very significant improvement in the thermal stability of RML was achieved after the covalent attachment of the enzyme to chitosan. The half-life of immobilized RML could be increased from 11 min to 54 min at 60 °C. The least stable derivative was obtained using pure chitosan 2.5% activated with glutaraldehyde. A very stable RML was immobilized on a hybrid gel, chitosan 2.5%-carrageenan 2.5%, with the support activated with epichlorohydrin, while the best derivative, 154-fold more stable than the soluble enzyme, was obtained with chitosan 5.0% beads using glutaraldehyde. When epichlorohydrin was used, a higher number of reactive groups was generated in the support; in consequence, more residues of the enzyme were involved in the multipoint attachment, which led to higher stability. An increase in stability according to the activation method can be noted: the different activation chemistries conferred very different properties on the derivatives, so the support characteristics can be tuned aiming at lipase stabilization.
ACKNOWLEDGMENTS
We gratefully acknowledge the financial support of the Brazilian agencies for scientific and technological development: Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (FUNCAP), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
"year": 2020,
"sha1": "b9432ae32c04b8866f90a5dc70a770f1260f2a17",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.21577/0100-4042.20170615",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ce42921c541c36f0eb49dba0c60a53a7aed792f3",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Chemistry"
]
} |
264124038 | pes2o/s2orc | v3-fos-license | Environmental Science and Sustainable Development
Public space is studied here with regard to its history, use, and the evolution of urban space. Through culture, architecture, and behavior, this paper investigates how Japanese people and foreigners use public space. Foreigners use open spaces such as neighborhood parks more frequently than Japanese people, who consider space sacred and private. In this study, urban space is analyzed from the perspective of Japanese culture and customs as well as foreign cultures in order to qualify the precise meaning of space, urban space, and cultural space within the context of diverse conditions and ethnicities. Residents who frequently use neighborhood parks recognize that foreigners are more welcome and that spatial accessibility contributes to the creation of a unifying space in their neighborhoods. Understanding cultural views and ethnic behavior is critical to the design and implementation of effective and creative urban spaces. © 2023 The Authors. Published by IEREK Press. This is an open-access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/). Peer-review under the responsibility of ESSD's International Scientific Committee of Reviewers.
Introduction
Traditionally, public space has played an important role in both the physical and psychological dimensions of people's day-to-day lives (Carr, Francis, Rivlin, & Stone, 1992, p. 3). In urban environments, the provision of public space as a place for physical and emotional regulation indirectly creates a platform for social community space (Chitrakar, 2016). Public space takes two different forms, formal and informal, determined by the size of the created spaces. Public spaces that focus on public life, activities, and events play an important role in the city center, whereas public spaces that function as places to rest or play offer a simpler look, are more interior, and sometimes challenge users' physical and social perceptions of new spaces (Cho et al., 2016). Several studies of public spaces support this assertion. Overall, public spaces play an important role in decreasing stress, enhancing health, and sustaining spatial and social ties in residential areas (Liu et al., 2017; Tan & Samsudin, 2017; Wan et al., 2020; Wang et al., 2021; Zanon et al., 2013). In this context, it is important to design public spaces well and to a high quality in order to provide enormous economic, social, and environmental benefits for the region and its people (CABE, 2004).
There is a broader definition of public space, which includes a variety of places where people can participate in social interaction, including streets, city parks, and buildings. Urban parks as public spaces are one of the most important components of urban planning, since they are accessible to all citizens and provide an opportunity for interaction between them and the city. However, parks, especially those in residential areas, have changed over time and have been influenced by various cultural traditions. With new environmental developments, including in the case study area, public spaces are being provided and used in different ways. It remains unclear how such changes affect residents' perceptions of contemporary public space and their sense of community. There has been an ongoing debate in recent years regarding whether residents need parks or whether parks are just urban green spaces without any activities. Specifically, this occurs due to the perception that parks are individual spaces closely associated with the place where the individual resides (Caballero, 2007). As small private spaces, parks create a fragmentation of separate space characteristics within the community. In turn, this creates a paradigm of content space for the community as a whole, with the result that parks become urban public spaces. Even though parks do not have a specific purpose as meeting places, they still represent an important part of people's daily lives for a variety of reasons (Carr, Francis, Rivlin & Stone, 1993, p. 3). Through their dynamics, parks act as communication nodes that facilitate human well-being (Hagenbjörk, 2011; Rupprecht, 2017). Since public space is not simply a physical setting, it has several subjective meanings for its users that accumulate over time (Cattell et al., 2008).
In reality, these dynamics are not well captured by some groups of people. This is due to several behaviors that are influenced by age, physical condition, gender, and ethnicity. Physical condition and ethnicity have a major impact on the use and management of public spaces, especially the neighborhood parks between Kitakyushu Science Research Park and Orio Station, Japan. This raises the question of whether Japanese people or foreigners are interested in using parks as part of their daily lives. Another question is whether the local community has the desire and time to manage parks as public spaces. The individuality and urban lifestyle of Japanese people have indirectly led to the loss of priority of parks as public spaces, where problems faced by local communities cannot be solved through neighborly cooperation and tend instead to be solved through the public administration system at the government level (Kurasawa, 1987). However, Japanese residents show a unique behavior: practically all people join neighborhood associations (chounaikai) in most urban areas (Nakata ed., 2003). According to Nakata (2003:17), for parks to serve as community public spaces and not be neglected, the participation of urban communities in the construction of common public spaces is needed to increase community social mobility.
The objective of this paper is to analyze the relationship between the existence of parks as part of a city and their usefulness to residents. Through this study, the researcher is able to observe how parks serve as community public spaces in which no park is unutilized or abandoned. During this research, the level of community involvement in the development and use of parks as shared public spaces is observed in order to improve the social mobility of the community. This research shows that community involvement in activities has a major impact on stakeholders and the public. Community perception is an important factor in the success of a design that inspires creativity and social equality (Barton et al., 2003; Njunge et al., 2020). The social interaction created plays an important role in ensuring that people's needs and desires are met in public spaces. By applying this principle, not only can a certain type of group make better use of the public space, but support for public life can also be gained.
Research Motivation
To understand the motivation behind the proposed study of Japanese and foreigners' perceptions of the design and use of public space, it is important to understand the historical and cultural background of each of these societal groups. Cultural knowledge of citizens can provide clues about people's needs and desires for public space. In addition, theories, developments, and current issues of public space can also map patterns of public space, particularly in Japan. The function of public space as an urban instrument carries a high responsibility to provide space for urban society to express itself. However, this perception is difficult to capture in several public spaces in Japan, which are considered standard and less flexible. Therefore, by studying the use and design of public space based on Japanese and foreign perceptions, the researcher explores the concept of public space, especially neighborhood parks, and then evaluates the design of public space and the intensity of its use. These two factors, design and intensity of use, are examined from the point of view of Japanese people and foreigners through a questionnaire survey. In addition, the design of public spaces is examined based on a review of Japanese government literature and standards. Finally, this research is completed based on the results of the public perception assessment and the review of the Japanese government's design standards.
Research Methodology
In this study, a behavior mapping approach was employed, utilizing both observation and interviews. Behavior maps serve as both an outcome of observational research and a valuable instrument for the analysis and design of designed places (Cheung et al., 2022; Han et al., 2022). This mapping technique was originally introduced by Ittelson et al. (1970) to document the behaviors that transpire within a designed space (Marušić and Damjan, 2012). The process of behavior mapping entails two distinct stages: data collection and subsequent analysis.
As the first step, the researcher conducted observations at 39 neighborhood parks located between Kitakyushu Science Research Park and Orio Station, Japan. These observations were carried out on a daily basis, encompassing both weekends (Saturday and Sunday) and weekdays (Monday to Friday), during the morning (08.00-11.30), afternoon (13.00-16.30), and evening (17.00-19.00) periods. The data collection period spanned six months, commencing in October 2022 and concluding in March 2023. The observational data are divided into three main characteristics, i.e.
• park condition (facilities, area, number of visitors, and history of park development)
• users' behavior (visitor activities, gender and age, user house location, and residents' sense of comfort)
• city block pattern (location, building typology, traffic pattern)

During the second phase of the mapping analysis, a series of interviews was conducted with a sample of 50 individuals, comprising students, workers, and residents of the case study area. The respondents were drawn from diverse ethnic backgrounds, including Japanese and foreigners of Malay, Chinese, Arab, Indian, European, and American descent. The primary objective of these interviews was to elicit participants' perceptions regarding the presence of small parks in urban areas, particularly in residential neighborhoods. Additionally, the interviews served to gauge the frequency and intensity of park usage, as well as to identify respondents' expectations for future park development.
The present study aims to investigate the impact of park design and the presence of parks in residential areas on the intensity of use. To achieve this objective, two stages have been identified as the basis for analysis. Furthermore, the study employs behavior mapping in conjunction with spatial analysis to generate a people-movement analysis. This approach enables park planning to align with the identity and desires of the community, thereby facilitating the creation of public spaces that are conducive to place-making.
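The two-stage mapping described above (observation records, then aggregation) could be tabulated as in the following sketch; the park IDs, time slots, and counts are illustrative placeholders, not the study's data:

```python
from collections import Counter

# Stage 1: each observation record = (park_id, day_type, time_slot, user_group)
observations = [
    ("P01", "weekend", "morning",   "japanese"),
    ("P01", "weekend", "morning",   "foreigner"),
    ("P01", "weekday", "evening",   "foreigner"),
    ("P02", "weekend", "afternoon", "japanese"),
    ("P02", "weekday", "morning",   "foreigner"),
]

# Stage 2: aggregate visit intensity per park and per user group
visits_per_park = Counter(rec[0] for rec in observations)
visits_per_group = Counter(rec[3] for rec in observations)

# Share of foreign visitors per park, a simple intensity-of-use indicator
foreign_share = {
    park: sum(1 for r in observations if r[0] == park and r[3] == "foreigner") / n
    for park, n in visits_per_park.items()
}
```

Keeping each observation as one record lets the same data be re-grouped by day type or time slot when the behavior maps are overlaid on the spatial analysis.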
Public Space Design Initiative
The debate over public space in cities is advancing quickly. According to social geographers and urban planners, public space is not a neutral, naturally occurring, and merely spatially bound container, but something produced by society. Ideally, public space serves as a platform for fostering a sense of community. Nevertheless, some researchers define public space as the hardscaped, vacant space that remains outside buildings and transportation hubs. This opinion is more widely held in Western nations, where public spaces are perceived as rigid and as having only a few uses. Public space is defined as accessible to all types of users and includes streets, parks, markets, libraries, museums, and so forth. Public spaces have been thought to need physical forms that can be easily distinguished from their surroundings. The numerous communities that use such an area collectively create the appearance of a "private" area. The sense of closure that is produced elicits various emotions, which change based on the interests of those invested in the public space. The opinion that public space can be used by a group to boost the economy and build a secure and sustainable urban environment is largely unchallenged. This is due to the increased activity in public spaces, which has a strong correlation with welfare and property values (Schmidt and Jeremy, 2020).
The way that public space is perceived in Western culture is not consistent with Japanese cultural history. Since the development of language literacy in Japan, which divides Hiragana ("vocabulary for Japanese") from Katakana ("separate vocabulary for foreign words"), the word "private" has not been historically recognized by Japanese society. According to Japanese culture, public areas are flexible and can be used for events, emergencies, or as open areas at any time. This idea emphasizes that shared things do not include things with fixed locations or physical boundaries. This is evident in the way the Japanese government categorizes public areas, such as parks and other public buildings. In terms of architecture and urban planning, parks and public facilities like schools serve a variety of purposes. According to Hidaka and Tanaka (2001), the idea of flexible public space fosters psychological closeness among users.
When American culture first arrived in Japan in 1964, it introduced the fundamental idea of privacy, which had an impact on how Japanese spaces and barriers were created. A separate social space has been created in each society because of the Western idea that "public" and "private" should be divided. Urban space planning has become rich, complex, and diverse due to shifting perceptions of the flexibility of public space. Nevertheless, this perception has a detrimental effect on the atmosphere of modest public areas, such as neighborhood parks. Neighborhood parks that serve as public spaces at the level of neighborhood homes can be replaced by other public structures that are in keeping with the design of urban space blocks. In line with the trend of combining urban spaces, park spaces in the city block become abandoned, useless, and disconnected. This pattern, however, offers opportunities for change.
Community Perception of Public Space
Public space use is a hot topic in urban spatial design, with the intensity of public space use being influenced by the concept of space shaped by community behaviors and spatial practices. The dynamics of urbanism are disclosed through stages of analysis of spatial conditions and of experiences of social and spatial change (Aelbrecht, 2010). In other words, the use of public space can be measured by public acceptance of the planning, design, and management of public space. The limitations of planning, design, and space management at the government level encourage a sense of indifference to the existence of public space. Public space can be called public when people use it in different ways, at different times, simultaneously or sequentially, without regard to a particular social group. The contribution of public space to the social life of the community spontaneously develops uses of open space in ways that could not be imagined or designed. Community initiatives for the inclusive use of public urban spaces can turn people into owners and designers of public spaces. However, this condition does not affect every public space equally (Massip-Bosch, 2017). Behavior and cultural background are also factors that decide whether this concept can be implemented correctly. Japanese people tend to be more individualistic and closed and to interact on a small scale, which leads them to use more private spaces as living and interacting spaces. This condition was also affected when the Covid pandemic hit, forcing people to stay at home and thus changing their social relationships with the community, including park-visiting activities. Parks are considered public spaces that are less attractive and require high maintenance costs, but their existence is difficult to separate from people's lives. Although the Covid pandemic has ended, the position of parks as public spaces in Japanese society remains tenuous. This is due to a shift in the perspective of Japan's younger generation, for whom streets and parking lots are seen as more attractive for use as art, commercial, and public spaces than parks. Projects and research on future urban design have shown that people in cities such as Tokyo have a strong desire to use parks as public spaces in which they can express themselves through festivals, exhibitions, and concerts (Nikken Sekkei LTD, 2021). There are limited parameters for the use of public spaces that may be more specialized, combining economic and social aspects to provide important points for the future development of the park. Parks can create spaces that draw the attention of Japanese society; there is a need for increased branding of parks as public spaces, so that the connectivity between Japanese lifestyles and parks becomes denser according to the characteristics of each public space user.
In contrast, foreigners generally have a more open background than Japanese society as a whole in terms of traditions and culture. Some ethnic groups' shared religious beliefs or historical experiences shape how similar their social customs are. Muslims in Japan, for instance, frequently greet and converse with one another in parks and other public areas even without being personally close to one another. Liberal Westerners also have a more open identity. The difference in habits between Japanese and foreigners is evident when both groups spend the same amount of time in public spaces. The reluctance of Japanese people, especially those in small towns, to interact with foreigners creates a large gap, so one of the groups reduces the intensity of its visits to public spaces. Therefore, to understand people's perception of and desire for parks as public spaces, it is important to examine the development of public spaces in Japan. The cultural uniqueness of Japanese society must be taken into account in order to recognize the characteristics of citizens, allowing full appreciation of the design of park development and ensuring the fusion of public spaces with their owners, the Japanese people themselves. Some researchers have interpreted this relationship as a dichotomy between humans and nature in Japanese literature, which regards parks as natural spaces rather than commercial and public spaces (Rupprecht et al., 2015). Small public spaces such as parks should be a priority for community-behavior design approaches in order to build designs consistent with the community's desires.
Residents as Stakeholders in the Sustainable Development of Public Space
The concept of public space refers to a physical environment that can facilitate social exchange and interaction among neighbors (Chitrakar, 2016). However, it is important to note that the mere presence of physical proximity does not necessarily guarantee social contact and interaction, as common ground is often required. Public space is therefore considered a significant social territory (Abu-Ghazzeh, 1996), and its role in social integration is shaped by the meanings attributed to it by individuals (Peters, 2011). As a key design feature of urban environments, public space has the potential to foster place attachment and encourage the development of place meaning through social interaction (Abu-Ghazzeh, 1996, 1999).
In general, the design of public parks has a significant impact on the frequency of visits. Nonetheless, the responsibility for this design does not fall solely on the designer or urban planner. Rather, community participation is a straightforward means of involving local residents in the conservation of parks as public spaces. Through this process, residents can act as leaders or partners in ongoing planning efforts, and the long-term success of such projects is contingent upon the community's comprehensive understanding of the park's condition and its role in their lives (Satherly, 2009). As proprietors and influencers, the inhabitants of a given locality play a crucial role in ensuring the long-term sustainability of a park as an accessible and functional public space. However, community participation is only effective if cultural considerations and community background are given due consideration.
The sustainable development of public space has historically overlooked cultural considerations (Fitnian, 2009), thereby diminishing the significance of public space as a component of community identity. The facets of identity, social cohesion, and creative development are heavily reliant on the preservation of cultural heritage (UNESCO-Culture Agenda 2030, 2018), which in turn promotes local energy security and sustainable activities within public spaces (Ragheb, 2022). The design of public spaces, such as parks, that fails to acknowledge the international identity of urban spaces risks erasing the unique identity of the locale and creating a sterile environment (Aly, 2011). As a fundamental element of the city, public spaces must cater to the needs of the community while also promoting sustainability. By respecting social and cultural traditions, a deeper understanding of the community's perspectives can be gained, leading to a revitalization of urban identity. The cultural value of local communities will indirectly contribute to balanced development, and this value should be considered in future public space development plans.
Case Study: Neighborhood Park as Public Space between Kitakyushu Science Research Park and Orio Station
A neighborhood park acts as a public space and serves as a place for physical activities at the neighborhood level. The Japanese government provides neighborhood parks every 200 m from residential areas. In general, the neighborhood park is an ideal public space for residents of residential areas and can serve as a common space for the community around the park. In reality, however, the public spaces provided by the Japanese government are not all used to a great extent, although neighborhood parks theoretically serve as emotional, play, reading, and gathering spaces. The government's view, which emphasizes the city's development and focuses on economic growth, creates a bustling society that uses private or public-private space more than housing areas (both formal and informal). As a result, the traditional functions of parks are often challenged by new trends in the provision and management of public spaces, and several important trends have emerged. The provision and management of public space are becoming increasingly privatized, with property developers, property managers, and local business associations taking the lead in park provision and maintenance. This condition then raises the question of how the neighborhood park as a public space relates to urban society. Answering this requires an initial identification of the design and use of neighborhood parks, taking into account the background of park planning and development, as well as the culture of the community.
For instance, between Kitakyushu Science Research Park and Orio Station there are a number of public areas (neighborhood parks). The population in the Kitakyushu Science Research Park area is more diverse than that of other parts of Kitakyushu. Because there are three universities in this area, the Japanese residents are very eager to interact with visitors. The park is the destination of choice for gathering and playing among the diverse foreigners who reside and commute there as families and students, particularly those with young children. An online questionnaire survey was administered to a number of students, or the families of students, from Japan, America, Indonesia, Malaysia, and India in order to determine how parks are perceived and how frequently they are used as public spaces. Because non-student Japanese people tend to be closed-off and reserved, the respondents were chosen based on the author's proximity to them.
Based on the identification results, it was found that 69.2% of Japanese people in this area never visit the park as a public space or make only 1 to 3 visits a year. The main cause of the low frequency of visits is the perception that Japanese people cannot use neighborhood parks in accordance with their needs and wants. Additionally, the majority of them do not consider parks to be lovely and enjoyable public areas. The mobility created in these areas is not as high as in large cities like Tokyo, despite the fact that some parks are situated near supermarkets. The performing arts event that took place in the park in front of the Tokyo Metropolitan Theater was organized by Tokyo (Baba, 2020); temporary structures, like cafes, were set up there for people to enjoy for a month after attending a performance. In Western nations like the United States, utilizing neighborhood parks for fun public events is a major priority. It is thought to be quite difficult to use public spaces such as neighborhood parks in Japan, including parks in the case study area, as picnic and festival areas or as shared spaces where there is the freedom to interact with strangers. The rules dictating how people should behave in public, cultural norms that have been passed down from generation to generation and have become ingrained in Japanese society, inadvertently cause issues with how people use public spaces, particularly parks. However, the loss of parks as a component of public space also raises concerns about the loss of a vital component of society, and people still say that they want parks close to their neighborhoods. In contrast to Japanese society, foreigners from America, Indonesia, Malaysia, China, and India have similar habits of social interaction, although they have different histories and cultures. The results show that Indonesians, Malaysians, and Indians who belong to the same religion, such as Islam, tend to share similar feelings of brotherhood. Additionally, Indonesian society is heterogeneous and fosters a sense of togetherness and mutual collaboration, making it easier for it to blend in with other communities. Although brotherhood and unity are not central to American culture, open-mindedness and a willingness to speak in front of large crowds help Americans fit easily into society. Parks and other public areas thus become a beneficial part of daily life as a result of the need for a strong social life. This finding contrasts with the negative perception that Japanese people have of public spaces, as well as with the urban gap in which parks are gradually replaced by empty spaces. Ninety percent of visitors from other countries visit parks two to four times per week, demonstrating the impact parks have on people's quality of life. As the survey data show, the existence of parks as public spaces in Japan does not benefit Japanese and foreigners equally in improving social relations. This is due to the distinct personality differences between Japanese and non-Japanese people. It follows that neighborhood parks are seen by the Japanese as a double-edged sword that simultaneously has negative and positive effects on society. Although views on stress relief differ between Japanese society and foreigners, both cultures agree that parks have a stress-relieving effect. Foreigners see the park as a place to relax, play, and engage in other activities that can strengthen ties between neighborhoods, while the Japanese see parks as places for physical activities, such as sports, to improve their health.
Conclusion
Despite the low intensity of use, it can be inferred from this study that public spaces, particularly neighborhood parks, are an essential part of Japanese society's life. Many strategies have been devised by planners and designers to revive park use, particularly in big cities like Tokyo. A different strategy is needed to increase the use of the park located between Kitakyushu Science Research Park and Orio Station. Japanese citizens who live in big cities can, to some extent, easily accept foreigners, and social and communication patterns form naturally. Small towns like Kitakyushu have a more closed social pattern, in which interaction with foreigners is rare. Additionally, public space in a residential area is used less frequently than public space located in commercial, educational, or main-street areas. In other words, public areas that offer a variety of attractions tend to pique Japanese citizens' interest more. Foreigners, including Indonesian citizens, perceive public space as an open space that is an integral part of life, a perception very different from that of the Japanese. Therefore, recommendations and plans for public space, particularly neighborhood parks, can be developed through the influence of citizen perception. Meanwhile, this paper did not provide a deeper analysis of redevelopment strategies for public space, especially the neighborhood parks between Kitakyushu Science Research Park and Orio Station, because most of the analysis aims at a general evaluation of citizens' (both Japanese and foreign) perceptions of neighborhood parks as public spaces.
Figure 1: Relationship between Space and People
Figure 2: Park Visit Frequency Rate
Figure 3: Usage Time
Figure 4: Advantages of Visiting a Park
Figure 5: Reason for Visiting the Park
Assortative mating by population of origin in a mechanistic model of admixture
Populations whose mating pairs have levels of similarity in phenotypes or genotypes that differ systematically from the level expected under random mating are described as experiencing assortative mating. Excess similarity in mating pairs is termed positive assortative mating, and excess dissimilarity is negative assortative mating. In humans, empirical studies suggest that mating pairs from various admixed populations—whose ancestry derives from two or more source populations—possess correlated ancestry components that indicate the occurrence of positive assortative mating on the basis of ancestry. Generalizing a two-sex mechanistic admixture model, we devise a model of one form of ancestry-assortative mating that occurs through preferential mating based on source population. Under the model, we study the moments of the admixture fraction distribution for different assumptions about mating preferences, including both positive and negative assortative mating by population. We consider the special cases of assortative mating by population that involve a single admixture event and that consider a model of constant contributions to the admixed population over time. We demonstrate that whereas the mean admixture under assortative mating is equivalent to that of a corresponding randomly mating population, the variance of admixture depends on the level and direction of assortative mating. In contrast to standard settings in which positive assortment increases variation within a population, certain assortative mating scenarios allow the variance of admixture to decrease relative to a corresponding randomly mating population: with the three populations we consider, the variance-increasing effect of positive assortative mating within a population might be overwhelmed by a variance-decreasing effect emerging from mating preferences involving other pairs of populations. 
The effect of assortative mating is smaller on the X chromosome than the autosomes because inheritance of the X in males depends only on the mother’s ancestry, not on the mating pair. Because the variance of admixture is informative about the timing of admixture and possibly about sex-biased admixture contributions, the effects of assortative mating are important to consider in inferring features of population history from distributions of admixture values. Our model provides a framework to quantitatively study assortative mating under flexible scenarios of admixture over time.
Figure: Sex-specific contributions from the populations are s^f_{1,g}, s^f_{2,g}, and h^f_g for females, and s^m_{1,g}, s^m_{2,g}, and h^m_g for males. H^γ_{α,g,δ} represents the fraction of admixture from source population α ∈ {1, 2} in generation g for a random individual of sex δ ∈ {f, m} in population H for chromosomal type γ ∈ {A, X}. Within the admixed population, at every generation, parents from generation g − 1 pair according to one of three mating models. Individuals from S_1 are represented by triangles, S_2 by pentagons, and H by squares. (A) Random mating: the probability of a pairing is given by the product of the proportional contributions of the two populations. (B) Positive assortative mating: individuals are more likely to mate with individuals from their own population. (C) Negative assortative mating: individuals are more likely to mate with individuals from a different population. In each panel, a mating pair is indicated by a pair of adjacent symbols. Each panel considers the same values for the contributions from the three populations to generation g + 1.

Additionally, the total contribution from a source population is the average of the sex-specific contributions from that source,

s_{1,g} = (s^f_{1,g} + s^m_{1,g})/2,    (2)
s_{2,g} = (s^f_{2,g} + s^m_{2,g})/2.    (3)

For a randomly mating population, the probability that an individual in the admixed population has a specific parental pairing is the product of the associated female and male contribution parameters. For example, the probability that an individual in the admixed population in generation g has a female parent from S_1 and a male parent from H is s^f_{1,g−1} h^m_{g−1}. To consider deviations from random mating, we define a new parameter c_{ij,g} as the difference between the probability that a mating pair in generation g contains a female from population i and a male from population j (Table 1) and the corresponding probability under random mating. The parameters c_{ij,g} govern the strength and direction of assortative mating.
We assume that the assortative mating preference is constant over time after the founding of the admixed population. Therefore, we have two sets of parameters: c_{ij,0} for the founding generation and c_{ij} for all further generations. The values of the c_{ij} are bounded such that the probability of each given parental pairing (Table 1) takes its values in the interval [0, 1], and such that each probability is no greater than the probability of one of its constituent components. For example, if c_{11} > 0, then c_{11} ≤ min(s^f_{1,g−1}, s^m_{1,g−1}) − s^f_{1,g−1} s^m_{1,g−1}.
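These constraints are easy to check numerically. Below is a minimal sketch assuming, as in Table 1, that each pairing probability is the random-mating product plus the corresponding c_{ij}; the contribution values and the c matrix are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative sex-specific contributions for (S1, S2, H) in generation g-1
sf = np.array([0.3, 0.2, 0.5])    # female contributions, sum to 1
sm = np.array([0.25, 0.25, 0.5])  # male contributions, sum to 1

# Assortment deviations c_ij; rows and columns sum to zero, so the
# marginal contribution of each sex is preserved.
c = np.array([[ 0.03, -0.01, -0.02],
              [-0.01,  0.02, -0.01],
              [-0.02, -0.01,  0.03]])

# Pairing probabilities: (female from source i, male from source j)
p = np.outer(sf, sm) + c

assert np.allclose(p.sum(), 1.0)
assert np.all(p >= 0) and np.all(p <= 1)
# Each probability is no greater than either constituent marginal
assert np.all(p <= sf[:, None]) and np.all(p <= sm[None, :])
# The bound quoted in the text for c_11 > 0:
assert c[0, 0] <= min(sf[0], sm[0]) - sf[0] * sm[0]
```

With these values, positive diagonal deviations raise the probability of same-source pairings (e.g., p[0, 0] = 0.3·0.25 + 0.03 = 0.105) while leaving the row and column marginals equal to the sex-specific contributions.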
Table 1: Recursions for autosomal and X-chromosomal admixture. The table shows the probability of parental pairings and the admixture fractions for each of the nine possible pairings for a randomly chosen female and a randomly chosen male from the admixed population at generation g.
for all i and j ≠ i. If individuals from a source population are most likely to mate with individuals from the other source, then c_{12} > 0 and c_{21} > 0.
We study the random variable H^γ_{α,g,δ} describing the admixture fraction from source population α at generation g for a chromosomal type γ in a random individual of sex δ in the admixed population. The chromosomal type γ can be either autosomal, A, or X-chromosomal, X. For the autosomes, H^A_{α,g,f} and H^A_{α,g,m} are identically distributed (Goldberg et al., 2014). For the X chromosome, the distribution of H^X_{α,g,f} depends on both the female and male admixture distributions in generation g − 1, as females inherit one X chromosome from each parent. For H^X_{α,g,m}, the distribution depends only on the female contributions, as males inherit a single X chromosome from their mothers. Based on these inheritance patterns, we can write the values for the autosomal and X-chromosomal admixture fractions of an individual randomly chosen from the admixed population given one of nine possible sets of parents, L, along with the probability that an individual has that set of parents (Table 1). The probability of a parental pairing is a function of the sex-specific contributions from the populations. Under the model, following Goldberg et al. (2014, eq. 12), we can use the law of total expectation to write a recursion for the expected value of the admixture fraction from source population 1 for a random individual of sex δ sampled from the admixed population in generation g, as a function of conditional expectations for all possible parental pairs L. As H^A_{1,g,f} and H^A_{1,g,m} are identically distributed, we write expressions for sex δ when considering autosomal admixture, understanding that δ takes on the same value, f or m, throughout Section 3. Using the values from Table 1 and eqs. (1)–(4), we obtain eq. (10) for the first generation, in which neither parent is from population H, and eq. (11) for all subsequent generations, g ≥ 2.
Higher moments
Next, we write a general recursion for the higher moments of the autosomal admixture fraction from population S_1: eq. (12) for g = 1 and eq. (13) for g ≥ 2.
For k = 1, eqs. (12) and (13) simplify to produce the recursion in Table 1. Using the law of total expectation, and following our calculation for the expected value of autosomal admixture, we can write an expression for the higher moments of autosomal admixture by summing the conditional values for autosomal admixture given the parental pairings over all possible sets of parental source populations: eq. (16) for the first generation, g = 1, and eq. (17) for g ≥ 2. For k = 1, eqs. (16) and (17) reduce to eqs. (10) and (11).
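The structure of these moment recursions can be illustrated numerically. The sketch below is not the paper's code: it assumes pairing probabilities of the form s^f_i s^m_j + c_{ij} as in Table 1, constant contributions after founding, and independence of the two parents' ancestries given the pairing; all parameter values are illustrative. It reproduces the qualitative claim of the abstract that the mean admixture is unaffected by the c_{ij} while the variance is not.

```python
import numpy as np

def autosomal_moments(G, sf, sm, c, sf0, sm0, c0):
    """Iterate E[H_g] and E[H_g^2] for autosomal admixture from S1.
    Sources ordered (S1, S2, H): an S1 parent contributes ancestry 1,
    an S2 parent 0, and an H parent's ancestry has the moments of the
    previous generation.  Offspring ancestry = (mother + father)/2."""
    def step(p, Eh, E2h):
        mu = np.array([1.0, 0.0, Eh])    # E[parent ancestry] by source
        m2 = np.array([1.0, 0.0, E2h])   # E[parent ancestry^2] by source
        E = sum(p[i, j] * (mu[i] + mu[j]) / 2
                for i in range(3) for j in range(3))
        E2 = sum(p[i, j] * (m2[i] + 2 * mu[i] * mu[j] + m2[j]) / 4
                 for i in range(3) for j in range(3))
        return E, E2

    E, E2 = step(np.outer(sf0, sm0) + c0, 0.0, 0.0)  # founding: no H parents
    p = np.outer(sf, sm) + c
    for _ in range(G - 1):
        E, E2 = step(p, E, E2)
    return E, E2 - E ** 2

sf0 = sm0 = np.array([0.5, 0.5, 0.0])        # founding only from S1 and S2
c0 = np.array([[0.1, -0.1, 0.0],
               [-0.1, 0.1, 0.0],
               [0.0, 0.0, 0.0]])             # assortment at founding
sf = sm = np.array([0.2, 0.2, 0.6])          # constant later contributions
c = np.array([[0.04, -0.02, -0.02],
              [-0.02, 0.04, -0.02],
              [-0.02, -0.02, 0.04]])         # positive assortment

mean_a, var_a = autosomal_moments(10, sf, sm, c, sf0, sm0, c0)
mean_r, var_r = autosomal_moments(10, sf, sm, np.zeros((3, 3)),
                                  sf0, sm0, np.zeros((3, 3)))
# mean_a equals mean_r, while var_a differs from var_r
```

Because the rows and columns of c sum to zero, the marginal contributions of each sex are unchanged, so the mean recursion is identical to random mating; the cross term in the second moment, however, picks up a contribution linear in the c_{ij}.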
The recursions for the higher moments of the autosomal admixture fraction distribution follow the corresponding expressions for a randomly mating population, but with additional terms that are linear in the assortative mating parameters c_{ij,0} and c_{ij}.
Variance
Using eqs. (16) and (17) for k = 2, we can write expressions for the second moment of autosomal admixture.
Using eqs. (1)–(4), for g = 1, we have eq. (18). For g ≥ 2, because H^A_{1,g−1,f} and H^A_{1,g−1,m} are identically distributed, we obtain a corresponding recursion. Using the definition of the variance, V[H^A_{1,g,δ}] = E[(H^A_{1,g,δ})^2] − (E[H^A_{1,g,δ}])^2, we can write the variance for g = 1 and for all subsequent generations g ≥ 2, recalling eqs. (6) and (7). For a randomly mating population, with c_{ij,0} = 0 and c_{ij,g} = 0 for all i and j, these expressions reduce to the random-mating variance. Following the same approach as in the corresponding derivation for the autosomal admixture fraction, we can use the law of total expectation to write recursions for the moments of X-chromosomal admixture.
Because the distribution of admixture differs for females and males on the X chromosome (Table 1), we treat the two sexes separately. As was the case for the autosomes, the expectation of X-chromosomal admixture follows from the law of total expectation. Following the derivation for the autosomes and using Table 1, we can write general coupled recursions for the higher moments of the X-chromosomal admixture fraction from population S_1 in a randomly chosen female and male from the admixed population. As was true for the expectation, the recursion for the X-chromosomal admixture fraction in a female from the admixed population for k ≥ 1 is the same as the recursion for autosomal admixture in eqs. (12) and (13), exchanging the superscript A for X. For X-chromosomal admixture in a male, for g ≥ 2, using the law of total expectation and the binomial theorem, and the conditional independence of H^X_{1,g−1,f} and H^X_{1,g−1,m} given H^X_{1,g−2,f} and H^X_{1,g−2,m}, we obtain the corresponding recursion. Unlike for the autosomes, the female and male admixture fractions are not identically distributed or conditionally independent, so the dependence on both H^X_{1,g−1,f} and H^X_{1,g−1,m} in eq. (27) cannot be further reduced. For k ≥ 2, moments of the X-chromosomal admixture fraction depend on the assortative mating parameters, c_{ij,0} and c_{ij}. However, conditional on H^X_{1,g−1,f}, moments of the X-chromosomal admixture fraction sampled in a male from the admixed population do not depend on the assortative mating parameters.
Because a single copy of the X chromosome is inherited from mother to son, the distribution of admixture for male X chromosomes in a given generation is affected only by the origin of the mother and not by the probabilities of parental pairings in Table 1. For g = 1, the second moment of X-chromosomal admixture in a female is the same as the expression for autosomal admixture in eq. (18), substituting superscript X in place of A; the second moment of X-chromosomal admixture in males in g = 1 follows similarly. Following our derivation of the variance of autosomal admixture in Section 3.3, we can write the variance of X-chromosomal admixture in a female and in a male from the admixed population using the expected values and second moments of X-chromosomal admixture. We denote the variance of X-chromosomal admixture under random mating, in a random female and male from the admixed population, by V_RM[H^X_{1,g,f}] and V_RM[H^X_{1,g,m}], respectively, obtained by setting all c_{ij,0} and c_{ij} parameters to zero. Using eqs. (33)–(36), we can rewrite the variances in eqs. (29)–(32) as functions of the variance under a similar randomly mating population. For g ≥ 2, conditional on H^X_{1,g−1,f} and H^X_{1,g−1,m}, and recalling eqs. (6) and (7), we find that, in contrast to the expectation, the variance of X-chromosomal admixture depends on the assortative mating parameters c_{11}, c_{h1}, c_{1h}, and c_{hh}. Conditional on the female and male variances of X-chromosomal admixture in generation g − 1, the variance of X-chromosomal admixture in a male sampled in generation g is equivalent to that in a corresponding randomly mating population. To analyze the behavior of the model, we study the moments of the distribution of autosomal and X-chromosomal admixture for two special cases of constant admixture processes over time.
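The mother-only inheritance of the male X can be made concrete with a small numerical check: because a son's X comes only from his mother, his admixture moments depend only on the female marginal contributions, which the c_{ij} leave unchanged whenever their rows and columns sum to zero. The sketch below uses illustrative values, not parameters from the paper, and assumes pairing probabilities of the form s^f_i s^m_j + c_{ij}.

```python
import numpy as np

rng = np.random.default_rng(0)

sf = np.array([0.3, 0.2, 0.5])    # female contributions (S1, S2, H)
sm = np.array([0.25, 0.25, 0.5])  # male contributions

# Illustrative moments of a parent's X-chromosomal ancestry by source:
# an S1 parent has ancestry 1, an S2 parent 0, an H parent something between.
mu = np.array([1.0, 0.0, 0.4])    # first moments
m2 = np.array([1.0, 0.0, 0.3])    # second moments

def male_x_moments(c):
    # A son inherits his single X from his mother, so conditional on the
    # pairing (i, j) his X-ancestry moments are the mother's.
    p = np.outer(sf, sm) + c
    E = sum(p[i, j] * mu[i] for i in range(3) for j in range(3))
    E2 = sum(p[i, j] * m2[i] for i in range(3) for j in range(3))
    return E, E2

# A random deviation matrix, double-centered so rows and columns sum to zero
# (a valid c must additionally keep every pairing probability in [0, 1]).
a = rng.normal(scale=0.005, size=(3, 3))
c = a - a.mean(axis=0, keepdims=True) - a.mean(axis=1, keepdims=True) + a.mean()

E_c, E2_c = male_x_moments(c)
E_0, E2_0 = male_x_moments(np.zeros((3, 3)))
# E_c == E_0 and E2_c == E2_0: the c_ij drop out of the male X moments.
```

Summing the pairing probabilities over the male's source collapses each row to the female marginal s^f_i, so every c_{ij} cancels, consistent with the statement that, conditional on the previous generation, male X moments match those under random mating.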
First, in Section 5, we consider a case with no contributions from the sources after the initial founding; that is, for g ≥ 2, s_{1,g} = s_{2,g} = 0 and h_g = 1. Recalling the variance of autosomal admixture for a randomly mating population produced by a single admixture event: as is true in a randomly mating population, the long-term limit of the variance over time is zero.
Next, for g ≥ 3, we can rewrite the variance of X-chromosomal admixture in a female as a two-generation recursion, following Goldberg and Rosenberg (2015, eq. 8), and we can take an analogous approach to solving the recursion.
The recursion for the variance includes a factor of 4 in each generation; therefore, the closed-form expression contains a factor of 4^g. We define y_g = 4^g V[H^X_{1,g,f}], and for g ≥ 3 we obtain a recursion with A_1 = 1, A_2 = 5, B_1 = 1, and B_2 = 1. Calculating further values of B_g, we note that B_g = A_{g−1}. The sequence A_g is reminiscent of a similar recursively defined sequence, a Lucas sequence (OEIS A006131), which can be written in closed form by using its generating function (eqs. (71) and (72)) as g → ∞. Recalling eqs. (37)–(40), we use eq. (73) for the definition of P_3.

Here, we show that for positive assortative mating, eq. (45) can be written as the sum of a positive quantity and the limit of the variance of admixture for autosomes under random mating. We recall that the quantity s_1/(1 − h) is the limit of the expectation of autosomal ancestry, and therefore takes its value in (0, 1) (Verdu and Rosenberg, 2011, eq. 31). We can rewrite the right-hand side of eq. (48) as a function f, where D_1 is a given constant in (0, 1). Under this assortative mating scenario, we can rewrite eq. (6) to give a range for c_{1h} of (−c_{11}/2, c_{11}), and, by our definition, we have c_{11}, c_{hh} > 0 for positive assortment. Because f is linear in c_{1h}, its minimum in terms of c_{1h} occurs at the boundary of the range of c_{1h}. Substituting for c_{1h} its lower bound, −c_{11}/2, we evaluate f at this point.
This quantity is positive because D_1 takes its values in (0, 1) and c_{11}, c_{hh} > 0. Therefore, we have demonstrated that this scenario of positive assortative mating increases the variance of autosomal admixture relative to a randomly mating population with the same contribution parameters. Similarly, eq. (46) can be written as the sum of a positive quantity and the limit of the variance of the female X-chromosomal admixture under random mating. We recall that the quantities P_3 and s^f_1 + h^f P_3 are the limits of the expectations of female X-chromosomal and male X-chromosomal ancestry, respectively, and therefore take their values in (0, 1)
(Goldberg and Rosenberg, 2015, Appendix). We can rewrite eq. (49) as an expression with D_2 ∈ (0, 2) and D_3 ∈ (0, 1). These upper bounds arise from the fact that 0 < P_3 < 1 and 0 < s^f_1 + h^f P_3 < 1 (Goldberg and Rosenberg, 2015, Appendix). As in the case of autosomal admixture above, f is linear in c_{1h}. Therefore, its minimum in terms of c_{1h} occurs at the boundary of the range of c_{1h}.
Substituting for c_{1h} its lower bound, −c_{11}/2, we see that for this scenario of positive assortative mating, f is always positive because c_{11}, c_{hh} > 0 by definition, and D_2 ∈ (0, 2) and D_3 ∈ (0, 1).
Setting the first derivative of eq. (52) with respect to s^f_{1,0} equal to 0, we find the critical point of the variance of X-chromosomal admixture. This critical point is a maximum because the second derivative of eq. (52) is negative, −2(A_g + A_{g−1}).
Following the same procedure, we can write the variance of X-chromosomal admixture in females as a function whose maximum lies in this interval when both of the following pairs of inequalities hold. Eq. (55) is always satisfied, as A_g > A_{g−1}, so that the left-hand side of the inequality is negative and the right-hand side exceeds 1. Therefore, the maximum always occurs when s^f_{1,0} takes the value in eq. (53).
Using eq. (44), we see that because A_g > A_{g−1}, the variance of the female X-chromosomal admixture fraction is smaller when (s^f_{1,0}, s^m_{1,0}) = (0, 2s_{1,0}) than when (s^f_{1,0}, s^m_{1,0}) = (2s_{1,0}, 0). Therefore, unlike for the autosomes, the minima are not symmetric for the X chromosome (Figure A1, for g > 1). In this appendix, we use the closed form of the variance of autosomal admixture for a randomly mating population, taking the limit as g → ∞ of the variance of autosomal admixture in eq. (57). Finally, assuming assortative mating is the same for the two sexes, c_{ij} = c_{ji}, we have eq. (45). Here, we find an expression for the limit of the variance of X-chromosomal admixture as g → ∞ in a randomly mating population. We follow the structure of Appendix 1 from Goldberg and Rosenberg (2015), using eq. (59), z_g = d_{1,g} + d_2 z_{g−1} + d_3 z_{g−2}.
Self-Duality Equations on S^6 from R^7 monopole
In this note we identify a correspondence between a seven-dimensional monopole configuration of the Yang-Mills-Higgs system and the generalized self-dual configuration of the Yang-Mills system on a six-dimensional sphere. In particular, the topological charge of the self-duality configurations belongs to the sixth homotopy group of the coset G/H associated with the symmetry breaking G → H induced by a non-trivial Higgs configuration in seven dimensions.
In this short note we make an observation about the self-duality equations on the six-dimensional sphere. We make use of the work of [1,2,3,4], the details of which we omit. It is well known [5] that a four-dimensional instanton configuration has a second Chern character, which is, in turn, related to the third homotopy group π_3(G) of the gauge group G. We show that there is a correspondence between seven-dimensional monopoles and self-duality equations on the six-dimensional sphere. There have been numerous efforts to generalize monopoles to higher dimensions, some of which have appeared in [1,6,7,8,9,10].
In analogy, in six dimensions, when G = SU(N), the third Chern character Tr F^3 is considered as a topological charge and takes values in π_5(G), with π_5(SU(N)) = Z for N ≥ 3. In particular, for SU(4) ≃ SO(6) pure Yang-Mills theory on S^6, one has a nontrivial gauge configuration [4], which satisfies the generalized self-duality relation. Here, c = 3/(qR_0^2) is a covariantly constant scalar given in terms of the gauge coupling q and the radius R_0 of S^6.
A few examples of other configurations for π_5(G) = 0 have appeared in the literature in [3]. In this note, our focus is non-trivial solutions of the self-duality equations on S^6 with a gauge group G satisfying π_5(G) = 0.
In one dimension higher, the above equation takes the form, where c̃ is a constant. The above equation can be obtained from the Bogomol'nyi equation [9]. Here F is a gauge field strength two-form and "∗_7" is the Hodge dual operator with respect to the Euclidean metric on R^7. The φ^a are scalar fields forming a fundamental multiplet of SO(7), φ := φ^a γ_a, and finally D is the covariant exterior derivative. The Hermitian matrices γ_a (a = 1, 2, …, 7) are Dirac matrices with respect to SO(7), with γ_{ab} := (1/2)[γ_a, γ_b] satisfying the commutation relations of SO(7). φ induces symmetry breaking when it acquires an expectation value φ^a = H_0.
To substantiate this connection, we suppose that the gauge configuration is concentrated around the origin of R^7. Solutions of Eq. (2) represent monopole configurations with corresponding topological charge, where B(R_0) = {x ∈ R^7 : |x| ≤ R_0}. This charge Q relates to the mapping class degree of S^6_{R_0} → SO(7)/SO(6) = S^6 for the case where R_0 ≫ 1. To see this, we suppose that the gauge field A and the scalar field φ have the following form, where q is again the gauge coupling, r = √(x^a x^a), and the functions U(r) and K(r) satisfy the following boundary conditions: U(0) = 1, K(0) = 1, U(∞) = 1 and K(∞) = 0. The corresponding F and Dφ follow accordingly. For this particular configuration, Eq. (2) reduces to a first-order nonlinear ordinary differential equation [9].
In the asymptotic region, F and Dφ become such that, as may be seen, F is aligned perpendicular to the radial direction and thus along the S^6. Hence F can be regarded as a differential form on S^6. In this asymptotic region, Eq. (2) is transformed into Eq. (1) with a suitable scalar.
However, the above discussion involves some degree of approximation: the self-duality is not exact. If we now relax the constraint of demanding a finite-energy configuration by considering the singular configuration where κ is a constant, the seven-dimensional equation reduces to Eq. (1).
Having constructed a concrete example, we now consider other embeddings. In general, we may consider a gauge group G with non-trivial π_6(G/H) under a symmetry breaking G → H, starting from a seven-dimensional monopole solution. For simplicity, suppose that G is a simple group and that the rank of G is greater than or equal to 3. From the long exact sequence of homotopy groups we obtain the following: if π_5(G) = 0 and H includes Spin(6) or SU(N) (N ≥ 3) as a factor group, then π_6(G/H) ≠ 0.
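To spell out the step behind this claim (our own sketch of the standard argument, not written out in the note), the relevant portion of the long exact homotopy sequence of the fibration H → G → G/H reads:

```latex
% Long exact homotopy sequence of the fibration H -> G -> G/H:
\cdots \longrightarrow \pi_6(G) \longrightarrow \pi_6(G/H)
       \longrightarrow \pi_5(H) \longrightarrow \pi_5(G) \longrightarrow \cdots
% If \pi_5(G) = 0, exactness at \pi_5(H) makes the connecting map
% \pi_6(G/H) \to \pi_5(H) surjective.  Since \pi_5(SU(N)) = \mathbb{Z}
% for N \ge 3 (and likewise for Spin(6) \simeq SU(4)), any such factor
% in H forces \pi_6(G/H) \neq 0.
```

The surjectivity argument needs only π_5(G) = 0, not the vanishing of π_6(G).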
In contrast to the earlier example, where the Higgs is in the fundamental 7 of SO(7), it is possible to embed it and the adjoint 21 of SO(7) in the adjoint 28 of SO(8). For these symmetry breakings π_6(G/H) ≠ 0. It may also be of interest to consider coupling this system to gravity in a similar fashion to studies appearing in [12,13,14], the latter of which addresses the possibility of cosmological models as a result of dynamical compactification on S^6. | 2009-06-30T04:45:38.000Z | 2009-06-25T00:00:00.000 | {
"year": 2009,
"sha1": "6a4d05d9b7288d76fd48d94e1292eff0e30b5f4c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6a4d05d9b7288d76fd48d94e1292eff0e30b5f4c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
142880526 | pes2o/s2orc | v3-fos-license | An Analysis of Comprehension Performance of Sudanese EFL Students
This study examines Sudanese EFL students’ comprehension performance and measures the differences in their performance according to gender. After one semester of participation in extensive reading, 300 secondary school students from 15 schools in the state of Khartoum were randomly selected for the study. The instruments were a comprehension test followed by a questionnaire on how the students think while completing the test. The study’s results reveal an average performance and statistically significant differences in the background, explicit, vocabulary and general understanding questions of the reading comprehension test. However, these significant differences are not found in the implicit and interpretation questions or in the use of lower and higher levels of processing skills. In addition, significant differences appear between male and female students in the background, explicit, implicit and interpretation, and vocabulary questions, but not in the general understanding questions. The findings are then discussed and recommendations are provided. Drawing on these findings, the study recommends, for example, that good application of techniques and procedures for teaching EFL reading might be a viable intervention for improving students’ performance in EFL reading.
Introduction
The situation of reading classes at Sudanese secondary schools and the complaints from students have obliged the researcher, as an ELT practitioner, to propose this study. In particular, no one can neglect the importance of reading and its necessity for success in learning. Much work has been done on EFL reading in general, and on the reading strategies and reading skills adopted by successful and unsuccessful learners in particular. Investigating lower and higher levels of processing skills has also occupied many efforts. To contribute to this field, one of the researcher's aims in this study is to measure the levels of processing skills employed by Sudanese secondary school students in EFL reading. However, his main aims are to examine the Sudanese EFL students' comprehension performance and measure the differences in their performance according to gender. The researcher randomly selected 300 secondary school students from 15 schools in the state of Khartoum. At the end of the study, he hopes to reach results that can help improve EFL reading in the Sudan and create better and more effective readers.
Theoretical Background
Reading is considered by many researchers to be a vital skill in learning, and reading ability is necessary for any learner searching for success. Due to the complexity of the reading process, readers have to draw on the background knowledge, linguistic knowledge, encyclopedic knowledge, contextual knowledge, etc. that they possess, and on all their reading skills and strategies, in order to comprehend the meanings of the text. Crystal (1987, p. 209) notes, "The field of reading research would not seem to be a particularly promising or attractive one. It is, however, an area that has attracted many investigators, partly by virtue of its very complexity, and partly because any solutions to the problem of how we read would have immediate application in areas of high social concern".
Reading has undergone many shifts and transitions over time, and many approaches and theories have been proposed, beginning with a focus on the printed text and moving to cognitive and metacognitive theories. Therefore, many researchers have considered the reading process and hold different views about it. In Goodman's (1967) view of reading, readers benefit from their background knowledge when they read. Rumelhart (1977), however, has emphasized the role of schemata in reading. Anderson and Pearson (1984) think reading requires readers to interpret new information and assimilate it with the information already existing in their memories. Bamford and Day (1998) refer to reading as constructing meaning from a written text. Mikulecky (1990) does not separate reading from comprehension. Nuttall (1983), on the other hand, is convinced that reading happens spontaneously, not by teaching. For Aebersold and Field (1997), reading occurs from looking at the text and assigning meaning to its written symbols. Eskey (1988) thinks the reader needs a complete interaction with the text for it to be comprehended. For Eskey, reading in L2 demands more cognitive effort.
Lower or Higher Levels of Processing Skills?
Many researchers in second language reading, such as Carrel (1988) and Eskey (1988), deal with the lower-level (bottom-up) and higher-level (top-down) processing skills of reading. However, other researchers, such as Gagne et al. (1993), have divided reading processes into more subgroups, including the lower-level (bottom-up) processing skills and the higher-level processing skills.
In addition, reading comprehension studies vary in evaluating which processing skill is better for students to adopt. After much research on skilled and less-skilled readers, some studies prefer engaging students in bottom-up processing skills, and others in top-down processing skills. However, good readers always use interactive reading, which integrates elements of both levels of processing. In fact, most readers begin by using top-down reading strategies until they face a problem, such as encountering an unknown word; then they shift to bottom-up reading strategies and slow down to decode the new word.
Competence and Performance in EFL Reading Comprehension
Fromkin and Rodman (1981) refer to reading performance as "how" the language is used by someone. In order to maximize the validity of performance, students and teachers need to be aware that reading in a foreign language requires certain techniques and strategies to be mastered. Nation (2009, p. 149) describes a number of them for improving readers' reading skills. He emphasizes that the techniques and strategies should not be seen as isolated activities but as means of bringing meaning into practice. To be effective and efficient readers, EFL students have to master all the following competences: phonological competence (knowledge of sounds and sound combinations); syntactic competence (knowledge of possible syntactic combinations); semantic competence (knowledge of the meanings of words, phrases and sentences); lexical competence (possessing an extensive stock of words); morphological competence (knowledge of word formation or word structure); communicative competence (knowledge of the social, pragmatic and contextual characteristics of a language); grammatical (linguistic) competence (knowledge of the different functioning rules of the language system); sociolinguistic competence (knowledge of producing sentences appropriate to communicative situations, i.e., knowing when, where and to whom to say things); discursive competence (knowledge of distinguishing different types of discourse); and strategic competence (knowledge of maintaining communication, i.e., the strategies language users employ to understand others).
Hypotheses
1) There is no statistically significant difference in the mean comprehension performance in EFL reading among Sudanese secondary school students.
2) There is no statistically significant difference between the mean comprehension performance in EFL reading of Sudanese secondary school male and female students.
Subjects and Setting
To identify the subjects for the study, data were collected from 15 secondary schools in different geographical areas of the state of Khartoum. Participants were 300 students in total, 160 male and 140 female (20 students from each school). They were Sudanese EFL learners at the secondary level, in their 3rd grade. At the time of testing, they had studied English for seven years and were about to finish the SPINE (Note 1) series. Their ages ranged from 16 to 18.
Measuring Instruments
The study used both quantitative and qualitative data analysis. The students took a 25-item reading comprehension test, followed by a questionnaire on how they thought while completing the test.
The Test
Despite the criticism it faces as an inappropriate measure of students' academic competence, the test used was a traditional standardized objective achievement test. The researcher selected this type of reading test to focus on the integration of lower-level and higher-level skills. The students were given a reading passage preceded by background questions and followed by several other kinds of questions (see Appendix A). The reading text was adapted from the online article "Two Types of Input Modification and EFL Reading Comprehension: Simplification versus Elaboration" by Sun-Young Oh (2001). It was the elaborated form, which was used in that study to investigate the relative effects of input modification and elaboration on Korean high school students' EFL reading comprehension. The text was intended to achieve the designed objective in that it was fairly long (about 250 words) and contained new information and vocabulary, clues, defining context and relative clauses. The researcher prepared a set of 25 written questions to be answered before and after reading the text. They comprised five different types of questions (A-E) measuring both bottom-up and top-down skills: background questions, literal (explicit) questions, implicit and interpretation questions, vocabulary questions and questions on general or overall understanding. They consisted of general, specific and inferential comprehension questions. Through answering the questions, the researcher could check and measure progress in these important elements of EFL reading. Without any preparation, the participants were asked to read the text silently and answer the questions.
The Questionnaire
For the purpose of the study, the questionnaire was designed to be answered by the students who took the test (see Appendix B). The information obtained covered the two main levels of processing skills: bottom-up and top-down. It was designed according to the options of the five-point Likert scale ("strongly agree", "agree", "somewhat", "disagree", and "strongly disagree"). Participating students were asked to give their personal information and then provide answers on how they thought while completing the test. The questionnaire had 22 statements.
Correlations & Reliability
To measure the reliability of the comprehension test, the researcher chose test-retest reliability to find out whether the questions were related to one another and measured the same thing. The students answered the same test twice, with an interval of two weeks between the two administrations. Forty-four students took the first test, forty-five took the second, and forty took both. The correlation of the scores of the two comprehension tests was 0.75 according to the Pearson correlation coefficient. The questionnaire, on the other hand, was given to a number of faculty members and classroom practitioners. Twenty teachers were expected to fill it in; however, only 15 handed in their copies. Therefore, reliability was calculated from these 15 copies using the split-half method. To apply this method, the questions were first divided into two similar parts. Since the items were homogeneous, all odd-numbered items constituted one half and the even-numbered items the other. Then, the scores of the subjects on the two halves of the test were correlated. The reported reliability of 0.79, computed using Guttman's prophecy formula, showed that this instrument was highly reliable.
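The two reliability checks described above (a test-retest correlation and a split-half coefficient stepped up to full-test length) can be sketched in a few lines of Python. This is an illustrative sketch, not the study's code; the step-up here uses the Spearman-Brown prophecy formula, a common choice for odd/even split halves.

```python
import statistics as st

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Split items into odd/even halves, correlate the half totals,
    then step the correlation up to full-test length with the
    Spearman-Brown prophecy formula.

    item_scores: one row of per-item scores per respondent."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)
```

For test-retest reliability, `pearson_r` would simply be applied to the 40 paired total scores from the two administrations.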
Procedures
Dealing with the target group, 20 students (randomly selected from each class) received instructions from the researcher in the remedial setting. The reading comprehension test was given to the students to be read silently, and they wrote their answers to the written questions. They finished the test in a single session that lasted around 45 minutes. Instructors provided assistance to students by answering their questions about the comprehension questions, but not by answering the comprehension questions themselves. As each student finished the test, the instructor handed him or her the questionnaire to be filled out.
Analysis, Results & Discussions
The focus of this part is to analyze and discuss the data and state the results and findings. The data analyzed and discussed were gathered by both instruments (the students' test and the questionnaire), and the analysis, discussion and results are treated accordingly. Different statistical procedures were then carried out, and statistical results in relation to the hypotheses were drawn and discussed. Each hypothesis is restated and followed by an examination of the statistical results relating to it.
1) Hypothesis One: There is no statistically significant difference in the mean comprehension performance in EFL reading among Sudanese secondary school students
To test this hypothesis, the scores of the students in the reading comprehension test were obtained. Their performances were evaluated in five areas: background questions; explicit questions; implicit and interpretation questions; vocabulary; and general understanding. Since the total mark of the test was 25, the passing mark was taken as 12.5. To assess the performances of the students in EFL reading and check whether this assumption was valid, the one-sample t-test was used. The mean scores, standard deviations and P-values were calculated, and the results are shown in Table 1. The results of the one-sample t-test showed significant differences in the background (M = 2.50, SD = 2.00, N = 300), explicit (M = 2.38, SD = 1.45, N = 300), vocabulary (M = 2.49, SD = 1.23, N = 300) and general understanding questions (M = 1.40, SD = 0.91, N = 300). Their t-values of 4.20, 7.45, 7.21 and 11.39, respectively, with a common P-value of 0.000, below the 0.05 level of significance, indicated that these groups were significantly different. When their mean scores were compared with the passing mark, the averages were found to be significantly below it. On the other hand, the results showed no significant difference in the implicit and interpretation questions (M = 3.19, SD = 1.37, N = 300); its t-value was 1.49, with a P-value larger than 0.05, the level of significance. When its mean score was compared with the passing mark, the average was found to be significantly above the passing mark. In addition, the coefficient of variation (C.V.) showed that there were variations of performance in all sections of the test. The highest variation was in the general understanding section (65%), followed by the explicit questions section (60.9%), and the least variation was obtained in the implicit and interpretation section (42.9%).
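The one-sample comparison against the passing mark and the coefficient of variation reported above can be reproduced with a short sketch (illustrative only; any scores fed to it would be hypothetical, not the study's data).

```python
import statistics as st

def one_sample_t(scores, popmean):
    """t statistic for H0: the population mean equals popmean."""
    n = len(scores)
    return (st.mean(scores) - popmean) / (st.stdev(scores) / n ** 0.5)

def coefficient_of_variation(scores):
    """C.V. as a percentage: 100 * sample SD / sample mean."""
    return 100 * st.stdev(scores) / st.mean(scores)
```

The resulting t statistic is compared with the critical value for N − 1 degrees of freedom; for N = 300 the two-tailed critical value at the 0.05 level is roughly 1.97, so |t| above that threshold marks a significant departure from the passing mark.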
These results indicated that the performances of students differed in some sections and not in others, and generally corresponded to an average level. Therefore, these results confirmed the hypothesis that there was no statistically significant difference in the mean comprehension performance in EFL reading of Sudanese secondary school students in some parts and rejected it in others.
Furthermore, although explicit questions are usually simpler than implicit and interpretation ones, the results of the comprehension test showed that the highest percentages of success were gained in the implicit and interpretation questions (69%) and the background questions (68%), whereas the learners failed to give correct responses to the explicit questions (41%). These results indicated that the students tended to possess abilities in top-down rather than bottom-up skills. Figure 1 shows the average performance of the students in all sections of the EFL reading test.
Figure 1. The average performance of the students in the EFL reading test

Concerning the questionnaire that was introduced to the students to express their thoughts while answering the reading comprehension questions, the t-test for independent samples was used. The mean scores and standard deviations were calculated in Tables 2 and 3. The results showed no significant differences between the mean scores of any of the variants. Looking at Tables 4 and 5 for the independent samples test, we can check the assumption of equal variances; in addition, Levene's test for equality of variances allows us to assess the scores of all variants of the groups. The results of the independent t-test showed no statistically significant differences between any variants of the groups. All P-values were larger than 0.05, indicating no significant differences between the scores of any of the variants of the questionnaire. This means that the assumption of equal variances was not violated and was tenable.
2) Hypothesis Two: There is no statistically significant difference between the mean comprehension performance in EFL reading of Sudanese secondary school male and female students
To test this hypothesis, the average scores of the students on the comprehension test and the questionnaire were obtained. For the comprehension test, the hypothesis was tested using a t-test for independent samples. Table 6 shows the performances of male and female students on the comprehension test, comparing their mean scores in the five areas and the overall mark. The results showed significant differences in the background (M = 4.07, SD = 2.01, N = 140), explicit (M = 2.81, SD = 1.44, N = 140), implicit and interpretation (M = 3.60, SD = 1.25, N = 140), and vocabulary questions (M = 2.82, SD = 1.18, N = 140). Their t-values of 4.96, 5.09, 4.97 and 4.52, respectively, with a common P-value of 0.000, below the 0.05 level of significance, indicated that these groups were significantly different. On the other hand, the results showed no significant difference in the general understanding questions (M = 1.33, SD = 0.91, N = 160); its t-value was 1.60, with a P-value of 0.110, larger than 0.05, the level of significance.
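A pooled (equal-variance) independent-samples t statistic of the kind used for the male/female comparison can be sketched as follows; this is our illustration, and any groups passed to it would be hypothetical data.

```python
import statistics as st

def pooled_t(x, y):
    """Equal-variance independent-samples t statistic for two groups."""
    nx, ny = len(x), len(y)
    # Pooled variance weights each group's sample variance by its df.
    sp2 = ((nx - 1) * st.variance(x) + (ny - 1) * st.variance(y)) / (nx + ny - 2)
    return (st.mean(x) - st.mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5
```

The pooled form is appropriate here because Levene's test (reported above) did not reject the equal-variances assumption; otherwise a Welch-type statistic would be the usual alternative.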
The results showed that the average performance of female students was better than that of male students in all areas of the EFL reading test except the general understanding section, where there were no statistically significant differences between the scores of the two sexes. In general, compared to the male students, the performance of the female students was not only better but also characterized by smaller coefficients of variation (C.V.), showing that the females' performance was more homogeneous than that of the male students. Figure 2 shows the average performance of male and female students in the EFL reading test.
Figure 2. The average performance of the male and female students in the EFL reading test

Regarding the questionnaire introduced to the students after the test, and in order to assess the association between gender and the performance of Sudanese secondary school students in EFL reading, a chi-square test was used to determine whether there were significant differences between the expected frequencies and the observed ones, as shown in Tables 7 and 8. Looking at the observed and expected counts in Tables 7 and 8 to see the association of gender with the other variables, it is obvious that the observed differences between them were not large enough to be significant. Turning to the chi-square tests in Tables 9 and 10, the percentages of cells with low expected counts in all bottom-up and top-down items were less than 20%; since these percentages were low, this assumption was not violated. In all the items except item no. 17, the Pearson chi-square P-values were greater than the alpha value of 0.05, revealing that the results were not statistically significant. Hence, the hypothesis that there is no significant association between gender and the performance of Sudanese secondary school students in EFL reading was accepted. In other words, the students' performances did not depend on gender.
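The chi-square test of independence used for the gender association can be sketched from first principles; the contingency table in the test below is invented for illustration, not taken from the study.

```python
def chi_square_independence(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table of observed counts. Each expected count
    is row_total * column_total / grand_total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df
```

For a 2 x 5 table (gender by Likert option) df = 4, and the statistic would be compared with the chi-square critical value at alpha = 0.05.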
Conclusion
Based on the results of the study and the data analysis, it seems reasonable to conclude that in teaching EFL in Sudanese secondary schools, it is necessary to adopt the interactive method of reading, which integrates elements of both levels of processing skills: bottom-up and top-down. It is important to provide students with explicit instruction in some lower-level (bottom-up) processing skills, such as strategies in phonemic awareness, word recognition and syntactic analysis, and in some higher-level (top-down) processing skills, such as strategies in guessing, inference and prediction. Good application of techniques and procedures for teaching EFL reading may prove to be a viable intervention for improving students' performance in EFL reading.
Recommendations
Bearing in mind the conclusions derived from the study, the following points are recommended to any teacher of EFL reading: 1) The objectives of teaching the English language in secondary schools should include an obvious plan to promote the teaching of EFL reading, since ambiguity leads to a weak output. In preparing and developing a reading syllabus, it is important to consider the balance between top-down and bottom-up levels of processing.
2) When teaching EFL reading, combining approaches to reading is recommended in order to train students to become efficient, effective and independent readers.
Appendix B
The Questionnaire

Dear student, with your genuine answers you will contribute much to promoting the teaching of EFL in the Sudan by answering this questionnaire. The answers will be analyzed confidentially, and I would like to reassure you that they will be used for scientific purposes only. The following statements are about strategies used by EFL students in determining unfamiliar words. Please indicate your level of agreement or disagreement with each statement by ticking the appropriate space: (1) indicates the strongest agreement, (5) indicates strong disagreement.

Table B1. The questionnaire (bottom-up statements)
Statement (Strongly Agree / Agree / Somewhat / Disagree / Strongly Disagree)
2. I attempt to understand the meanings of individual words.
3. I try to understand the meaning or structure of a clause or a sentence.
4. I restate the content by paraphrasing or rereading.
5. I am able to identify the grammatical categories of words.
6. I am able to identify the meaning of words and phrases.
7. I infer the meaning of an unknown word through retention.
Please provide some information about yourself. Please tick (✓) the appropriate space(s).
I have phonemic awareness of words and phrases in the text.
11. I scan and skim the passage for a general understanding.
12. I predict the likely content of the succeeding portions of the text.
13. I confirm, modify or reject the prediction I have made about the succeeding portions of the text.
14. I connect the new information with the previously stated content.
15. I benefit from the textual clues in the text to anticipate information.
16. I make sense of what I read.
I comment on the significance of the content.
21. I summarize the whole or some portion(s) of the text.
22. I make inferences or draw conclusions about the content of the text.
Table 1. Average performance in EFL reading test. * Difference is significant at or less than the 0.05 level of significance.
Table 2. The group statistics of the questionnaire (bottom-up statements)
Table 3. The group statistics of the questionnaire (top-down statements)
Table 6. The average performance of male and female students in the comprehension test. * Difference is significant at 5%.
Table 7. The expected and the observed frequencies of the questionnaire (bottom-up statements)
Table 8. The expected and the observed frequencies of the questionnaire (top-down statements)
Table B2. The questionnaire (top-down statements) | 2018-04-22T14:13:30.742Z | 2015-06-29T00:00:00.000 | {
"year": 2015,
"sha1": "97ae43ff6beaf77a486559510aa8f9258b77eb1d",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/elt/article/download/50561/27167",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "97ae43ff6beaf77a486559510aa8f9258b77eb1d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
26333298 | pes2o/s2orc | v3-fos-license | Evaluation of an automated colony counter.
An automated colony counter was found to readily detect surface and subsurface bacterial colonies of 0.3-mm size or greater with a high degree of precision. On a logarithmic scale, counting efficiency consistently ranged from 89 to 95% of corresponding manual count determinations for plates containing up to 1,000 colonies. In routine application, however, automated plate counts up to approximately 400 colonies were selected as a more practical range for operation. The automated counter was easily interfaced with an automated data acquisition system.
One of the most tedious laboratory procedures in practice today is the manual counting of bacterial colonies on petri dishes. Recently, several types of automated colony-counting instruments became available. We have evaluated the performance of one, namely, the bacterial colony counter (model 870) manufactured by Artek Systems Corp., Farmingdale, N. Y. Some of the key parameters which we elected to examine were (i) colony size discrimination (sensitivity), (ii) instrument precision (reproducibility), (iii) accuracy, and (iv) feasibility of interfacing the counter with automated data acquisition systems. The latter point was of prime importance for maximum utilization of the automated counter for our research needs.
The main element of the colony counter is an internal high-speed scanning television camera which can detect and record differences in optical density between a bacterial colony and the agar background. A television monitor (Fig. 1, arrow 1) is supplied which visually displays the total area being scanned; an illuminated dot on the video display is automatically superimposed over each surface and subsurface colony which has been counted. A button (Fig. 1, arrow 2) is pressed to record the total count on the digital display (Fig. 1, arrow 3). An adjustment dial (Fig. 1, arrow 4) is also provided to optimize instrument sensitivity for the culture medium in use. Also shown are the plate alignment stops and a culture plate in counting position (Fig. 1, arrow 5).
Capability for electronically monitoring the digital display is provided through an external connector located at the rear of the counter. When the count button is depressed, the contents of the display panel (BCD) are read and converted to serial print characters (ASCII) by the interface. This information is then transmitted either to the teletype or to a remote computer for further processing (switch selectable).
Soybean casein digest agar (Trypticase soy agar, BBL) was used for studies with Staphylococcus epidermidis (ATCC 17917) and Escherichia coli (ATCC 8739). Skin extract samples obtained by a method previously described (1) were plated in Trypticase glucose extract agar with added lecithin and polysorbate 80 (Letheen agar, BBL). The standard pour plate system used in all studies involved the plating of 1-or 2-ml samples of a bacterial suspension, followed immediately by the addition of melted, tempered agar. Except for preliminary studies, no attempt was made to add a measured volume of agar (estimated to be between 10 and 20 ml). All plates were incubated at 37 C for approximately 48 h except as noted. This incubation period corresponds to that which is routinely employed in this laboratory for the types of specimens and conditions described herein. Manual plate counts usually were performed by two technicians who also counted the same plates on the automated colony counter. All counts were obtained with plate covers removed. Where plates contained 300 to 1,000 colonies, an estimated total count was obtained by the standard method of manually counting a representative defined subarea of the plate and applying a mathematical correction for the total plate area.
Plastic disposable petri dishes with and without stacking rings were examined. Because the optical system does not view the peripheral 15% of the total plate area, stacking rings did not interfere with the counting process, provided that plates were properly centered. In addition, several types of culture media and various agar volumes were examined and found to have little, if any, detectable effect on counting efficiency. However, it was noted that occasional artifacts in the agar, such as bubbles or cracks, were sometimes detected by the scanner, producing erroneously high counts. Therefore, plates of questionable quality were routinely rejected on the basis of either visual examination of the plate or the appearance of obvious artifacts on the video display.
Because subsurface colonies which result from the use of the pour plate technique would be expected to be smaller than their surface counterparts, the exact capabilities of the optical system in this regard were of paramount importance to us. To determine the minimal detectable colony size, pour plates were prepared from two dilutions of an overnight broth culture of S. epidermidis to yield one set of plates with a density of approximately 50 colonies and another set with approximately 25 colonies. Ten randomly selected plates from each dilution set were counted both manually and automatically after either 18, 24, 48, 72, or 96 h of incubation. In addition, the minimal diameters for each of three representative subsurface colonies from each plate were measured with an ocular micrometer for each dilution at all incubation intervals.
The results of this study showed that automated colony count and subsurface colony size were correlative (Table 1). A plot of the ratios of automated to manual counts for each plate against the mean subsurface colony diameter for the corresponding plate showed that colonies less than 0.3 mm were not readily detected by the automated instrument (Fig. 2). For colony diameters of 0.3 mm, however, the detection capability of the automated counter increased abruptly and, thereafter, remained unchanged for colony sizes greater than 0.3 mm. Similar results were obtained in a related study with E. coli.
The counting precision (reproducibility) of the automated counter was found to be proportional to the number of colonies for pour plates of bacterial mixtures obtained from skin extracts. Repetitive automated counts (10 to 20) on single plates in fixed position varied by approximately 3, 2, and 1% for colony densities of 30 to 100, 200 to 300, and 600 to 1,000, respectively. When the same plates were repositioned in the instrument prior to each repetitive count, the automated counts varied by approximately 12, 4, and 3% for the above-cited colony density ranges. The adverse influence of plate orientation on instrument precision, particularly at lower colony densities, probably reflects the interplay of several inherent factors, such as variable colony distribution at the peripheral edge of the optically scanned area and the linear (directional) nature of the optical scan itself. The increased precision observed at higher colony densities for both fixed and repositioned plates is of little practical advantage; it is shown below that increased precision for higher automated counts is gained at the expense of accuracy. In a number of independent studies (Table 2), the automated count was found to be consistently predictive of the manual count on a logarithmic scale. In each study, counting efficiency on a log basis ranged from 89 to 95% of theoretical, even when an extremely broad range of counts was examined.
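The paper does not state exactly how these percent-variation figures were computed; one common definition consistent with them is the coefficient of variation over the repeated counts of a plate, sketched here (function name and data are illustrative):

```python
def percent_variation(counts):
    """Coefficient of variation in percent: 100 * sample standard
    deviation / mean, over repeated counts of the same plate."""
    n = len(counts)
    mean = sum(counts) / n
    variance = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return 100.0 * variance ** 0.5 / mean
```

Applied to 10 to 20 repeated counts of a single plate, this yields the kind of 1 to 12% figures quoted above.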
To this point, we had satisfied ourselves that the automated colony counter was an adequately precise instrument and could readily detect colonies as small as 0.3 mm. Our major concern, however, was instrument accuracy relative to manual count determinations. To examine this parameter, pour plates prepared from skin extract samples and containing mixed bacterial populations were counted manually and automatically. A graphic presentation of the relationship of the two counting systems clearly demonstrates a nonuniform bias in the automated estimate of counts (Fig. 3). It will be noted that the discrepancy became more apparent as the manual count increased much above 100 colonies. Secondly, the spread in automated counts also became greater with higher manual colony counts. Several inherent factors, such as increasing superposition and proximity of colonies, undoubtedly contributed to the increasing bias and variation observed in automated counts for plates with higher colony densities.
In contrast, when the plate count data were transformed to logarithms, a linear relationship was clearly evident between the automated and manual counts on a logarithmic scale (Fig. 4). Most importantly, the bias was uniform over the counting range examined. This bias was not unexpected because, as previously noted, the optical system only scans approximately 85% of the total plate area. It should be noted that manual counts for plates containing 300 to 1,000 colonies were estimates based on counts of representative subareas of such plates. Nevertheless, on a logarithmic basis, the automated counting efficiency for these high-density plates was similar to that observed for plates containing an actual manual count of fewer than 300 colonies. Because comparatively fewer plates were examined in the range above 300 colonies, and since those examined represent estimated rather than actual manual counts, we have elected in routine application to select automated count data up to approximately 400 colonies as a more practical range for operation. The latter range is readily attainable with standard decimal dilution of samples. [Table 2 footnote: Based on least-squares estimate of slope for log automated counts relative to log manual counts. Maximum theoretical slope is 1.0 (100% efficiency). SE, standard error.]
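The efficiency figures in Table 2 come from a least-squares slope of log automated versus log manual counts (maximum slope 1.0). A short sketch with synthetic data also shows why a uniform multiplicative bias, such as the 85% scanned area, shifts the intercept but leaves the slope at 1.0:

```python
import math

def log_log_slope(manual, automated):
    """Least-squares slope of log10(automated) against log10(manual).
    A slope of 1.0 means 100% counting efficiency on a log basis."""
    x = [math.log10(m) for m in manual]
    y = [math.log10(a) for a in automated]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# A uniform 10% undercount shifts the intercept, not the slope:
biased = log_log_slope([10, 100, 1000], [9, 90, 900])  # ~1.0
```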
In summary, with respect to sensitivity, precision, and accuracy, our experiments suggest that the automated colony counter is an extremely useful development in microbiological instrumentation. We currently have the counter interfaced to a data acquisition system which provides computer-compatible output. Interfacing was easily accomplished with commercially available hardware. The automated counting system has increased the overall efficiency in several of our research operations by significantly reducing both the time devoted to plate counting and the number of data manipulation tasks previously needed to render results in a form amenable to analysis and interpretation.
Although the automated counter system has potential application in a number of microbiological areas, it is recommended that its suitability for particular circumstances be evaluated with respect to types of samples and culture systems to be employed.
We express appreciation to H. Stander for his many suggestions during the course of these studies.
(Published 1974; https://doi.org/10.1128/aem.27.1.264-267.1974)
Evidence of high-temperature exciton condensation in 2D atomic double layers
A Bose-Einstein condensate is the ground state of a dilute gas of bosons, such as atoms cooled to temperatures close to absolute zero. With much smaller mass, excitons (bound electron-hole pairs) are expected to condense at significantly higher temperatures. Here we study electrically generated interlayer excitons in MoSe2/WSe2 atomic double layers with density up to 10^12 cm-2. The interlayer tunneling current depends only on exciton density, indicative of correlated electron-hole pair tunneling. Strong electroluminescence (EL) arises when a hole tunnels from WSe2 to recombine with electron in MoSe2. We observe a critical threshold dependence of the EL intensity on exciton density, accompanied by a super-Poissonian photon statistics near threshold, and a large EL enhancement peaked narrowly at equal electron-hole densities. The phenomenon persists above 100 K, which is consistent with the predicted critical condensation temperature. Our study provides compelling evidence for interlayer exciton condensation in two-dimensional atomic double layers and opens up exciting opportunities for exploring condensate-based optoelectronics and exciton-mediated high-temperature superconductivity.
Exciton condensation is a macroscopic quantum phenomenon that has attracted tremendous theoretical and experimental interest. Condensed phases of excitons that are generated by optical pumping or through the quantum Hall states under a magnetic field have been realized in coupled semiconductor quantum wells and graphene [2][3][4][5][14][15][16][17][18][19][20][21] . The weak exciton binding in these systems, however, limits the condensation temperature to ~ 1 K. Although a high-temperature exciton condensate has been observed in 1T-TiSe2 22 , this system, based on a three-dimensional semimetal, is limited from a future device and configurability perspective.
Two-dimensional (2D) transition metal dichalcogenide (TMD) semiconductors (MX2, M = Mo and W; X = S and Se) with large exciton binding energy (~ 0.5 eV) 23,24 and flexibility in forming van der Waals heterostructures provide an exciting platform for exploring high-temperature exciton condensation and condensate-based applications [9][10][11][12] . The maximum condensation temperature in TMD double layers, limited by exciton ionization in the high-density regime 25 , has been predicted to be a fraction (~ 10%) of the exciton binding energy 9-12 , i.e. comparable to room temperature! Condensation of intralayer excitons in TMDs is, however, hindered by the short exciton lifetimes 23 and the formation of competing exciton complexes at high densities, such as biexcitons 23 and electron-hole (e-h) droplets 26,27 . These difficulties can be overcome by separating the electrons and holes into two closely spaced layers in a double layer structure 3,5,25 . Long lifetimes and still substantial binding energies (> 0.1 eV) have been demonstrated for interlayer excitons 28 . They further act like oriented electric dipoles with repulsive interactions that prevent the formation of competing exciton complexes at high densities 3,5,25 . Nevertheless, exciton condensation has remained elusive.
Here we present experimental evidence of high-temperature interlayer exciton condensation in MoSe2/WSe2 double layers. The device (Fig. 1a) consists of two angle-aligned TMD monolayer crystals separated by a two- to three-layer hexagonal boron nitride (h-BN) tunnel barrier. The barrier suppresses interlayer e-h recombination to achieve high exciton density while maintaining strong binding [9][10][11][12] . The double layer is gated on both sides with symmetric gates that are made of few-layer graphene gate electrodes and 20-30 nm h-BN gate dielectrics. Figure 1b is an optical image of a typical device. The lateral size is about a few microns. By applying equal voltages (V_g) to the two gates and a bias voltage (V_b) to the WSe2 layer, one can tune the carrier density in each TMD layer independently, in contrast to optical pumping. In addition, the electrical method can create interlayer excitons in thermal equilibrium with the lattice, favoring condensation. Figure 1c is a contour plot of the optical reflection contrast spectra of the double layer as a function of V_g for V_b = 0 V. The two prominent features correspond to the neutral exciton resonances of monolayer MoSe2 and WSe2, which lose their oscillator strengths rapidly upon doping 24 . The sharpness of the falling edge provides an estimate of the disorder density in each monolayer of a few times 10^11 cm-2, which is consistent with the highest reported quality 29,30 . For a large range of V_g, both neutral exciton resonances are present; the feature disappears in WSe2, with hole doping, at large negative V_g's, and in MoSe2, with electron doping, at large positive V_g's. This is consistent with a type-II band alignment of the heterostructure 28 . In contrast, for V_b = 5.5 V, a p-n region (between the two horizontal dashed lines in Fig. 1d) opens up, in which MoSe2 is electron-doped (with density n > 0) and WSe2 is hole-doped (with density p > 0).
The bias voltage creates an interlayer electrochemical potential difference and a steady-state non-equilibrium e-h double layer. The problem is well described by an electrostatic model (see Methods and Supplementary Fig. 6) in which the total charge density (n + p) and the charge density imbalance (n − p) are set by V_b and (2V_g − V_b), respectively. The e-h pair density N (the smaller of n and p) is thus controlled by (V_b − V_g) (for n > p) or V_g (for n < p). Very high pair densities, up to 10^12 cm-2, have been achieved.
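This density control can be sketched numerically. The following is a minimal model assuming the proportionalities above, with the band-offset terms dropped and an illustrative gate capacitance; the function name and numbers are not from the paper:

```python
E = 1.602176634e-19  # elementary charge in coulombs

def densities(v_gate, v_bias, c_gate=1.0e-7):
    """Electron density n (MoSe2), hole density p (WSe2), and pair
    density N = min(n, p) in cm^-2 for symmetric dual gating.

    Simplified: band-offset terms set to zero; c_gate in F/cm^2.
    Reproduces n + p ~ (C/e)*V_b and n - p ~ (C/e)*(2*V_g - V_b).
    """
    n = max(c_gate * v_gate / E, 0.0)
    p = max(c_gate * (v_bias - v_gate) / E, 0.0)
    return n, p, min(n, p)
```

At fixed V_b, sweeping V_g trades n against p at constant n + p, which is how the total density and the imbalance are tuned independently.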
Radiative recombination from interlayer excitons is completely suppressed in devices with an h-BN barrier thicker than 1 layer due to negligible e-h wave function overlap 31 (Supplementary Fig. 7). We employ two other probes to study the e-h double layer. The large V_b creates a tunneling current I between the layers at the nA level. Enhanced tunneling is expected if the electrons and holes are bound 8,25 (i.e. form interlayer excitons). But the large bias that is needed to open a p-n region excludes the possibility of observing any zero-bias Josephson-like effects in the exciton condensate 3,14 . Electroluminescence (EL) is observed near 1.6 eV (the bright spot in Fig. 1d), which matches the energy of the charged exciton in MoSe2 (see Supplementary Fig. 1 for an EL image). The EL arises as a hole tunnels from WSe2 to MoSe2 and recombines radiatively with an electron in MoSe2 (see Methods). The maximum EL quantum efficiency is on the order of 10^-4. EL resulting from the recombination of an electron tunneled from MoSe2 with a hole in WSe2 is also observed. It is typically much weaker, presumably due to the presence of lower-energy dark exciton states in WSe2 32 , and is not monitored. The measured EL intensity is directly proportional to the radiative decay rate of a hole in n-doped MoSe2, which, unlike the tunneling current under large bias, is sensitive to the emergence of a condensate. Large EL enhancement has indeed been reported in a similar situation when a hole recombines radiatively with an electron in a superconductor on cooling below the critical temperature 33,34 . Unless otherwise specified, all measurements were performed at 3.5 K. The results of two devices are presented (Fig. 1, 5 and Supplementary Fig. 1, 2, 4, 5, 6 from device 1; the rest is from device 2). Figure 2 illustrates the tunneling characteristics of the double layer. For a fixed V_b (i.e.
constant n + p), the gate dependence of I shows a cusp-shaped peak centered at charge balance (n = p), and the current on the cusp increases with V_b (Fig. 2a). Similarly, for a fixed V_g (Fig. 2b), I increases with increasing V_b and approaches a cusp at charge balance. Beyond this point, I becomes a constant, the value of which increases with increasing V_g. Note that in this region (n < p), the e-h pair density is also a constant, hinting that I depends only on N. Indeed, the simulated tunneling characteristics based on I ∝ N^1.5 (Fig. 2c and 2d) are in excellent agreement with experiment (Fig. 2a and 2b). A negligible temperature dependence of I was observed over the entire temperature range studied in this work (3.5 K - 180 K) (Fig. 5a).
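The cusp follows directly from N = min(n, p) together with a superlinear current-density relation; a toy sketch in arbitrary units (the 1.5 exponent and the no-offset electrostatics are simplifying assumptions for illustration):

```python
def pair_density(v_gate, v_bias):
    """Toy electrostatics (offsets and C/e prefactor dropped):
    n ~ V_g, p ~ V_b - V_g, pair density N = min(n, p)."""
    n = max(v_gate, 0.0)
    p = max(v_bias - v_gate, 0.0)
    return min(n, p)

def tunneling_current(v_gate, v_bias, k=1.0):
    """I = k * N**1.5: a gate sweep at fixed bias then peaks in a
    cusp exactly at charge balance, V_g = V_b / 2."""
    return k * pair_density(v_gate, v_bias) ** 1.5

# Sweep V_g from 0 to 5 V at V_b = 5 V:
sweep = [tunneling_current(0.1 * i, 5.0) for i in range(51)]
```

Past charge balance in a bias sweep at fixed V_g, N (and hence I) stays constant, matching the plateau described above.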
The above observations for tunneling are not consistent with the independent-particle picture. In that picture, I is determined by the number of states available for tunneling 35 , which is proportional to (n + p) (Fig. 1d). The gate dependence of I for a given V_b would then be flat inside the p-n region instead of a cusp-shaped peak (Fig. 2a). Tunneling here is instead consistent with correlated e-h pair tunneling 8 that involves the creation and annihilation of bound e-h pairs. The tunneling current is thus only dependent on the pair density 8 . The absence of a temperature dependence for I is also consistent with the large exciton binding energy (> 0.1 eV, equivalent to 1200 K) predicted for interlayer excitons with ~ 1 nm h-BN separation 9, 36 .
Next we turn to the EL measurements. Figure 3a displays the EL spectra for different exciton densities (at charge balance). The spectra consist of a peak with a linewidth of 10-20 meV. No significant change in either the spectral width or the peak energy with N is noted. (The detailed analysis of the EL spectra is summarized in Supplementary Fig. 2, with more discussion on the linewidth in Methods.) In contrast, the spectrally integrated EL intensity shows a critical threshold dependence on N, with a threshold at N_c ≈ 0.26×10^12 cm-2 (Fig. 3c). In a narrow range of N around N_c the EL intensity increases by two orders of magnitude. In contrast, the tunneling current changes by only about a factor of two in the same density range.
The threshold behavior is accompanied by a change in photon statistics revealed by the intensity correlation measurement based on a Hanbury Brown-Twiss (HBT) type setup (see Methods) (Fig. 3d). The time resolution of the setup is about 40 ps, which far exceeds the EL coherence time (~ 100 fs) estimated from the spectral width. Figure 3b displays the EL intensity correlation function g^(2)(τ) for different N's (at charge balance), where τ is the arrival time difference between pairs of EL photons. Photon bunching (g^(2)(0) > 1) is observed near threshold with a decay time of about 1 ns. Above threshold, photon bunching is absent (g^(2)(τ) = 1) and the EL statistics is Poissonian.
The EL enhancement is very sensitive to charge imbalance in the double layer. Figure 4 shows the EL intensity as a function of (n − p) at several fixed values of (n + p) (= 2N at charge balance). The tunneling current is included as a reference. For N ≥ N_c, similar to the current, the EL shows a cusp-shaped peak centered at charge balance, but the EL enhancement occurs in a much narrower range than I. For N < N_c, the EL intensity does not follow the current strictly and does not have a cusp. Near threshold, EL is particularly sensitive to charge imbalance, both in terms of intensity and photon statistics (Supplementary Fig. 3). We employ the normalized 'density width' (the full-width-at-half-maximum of the peak divided by the total density) to quantify the sensitivity of the two processes to charge imbalance in Fig. 3e. Whereas the current 'density width' is nearly independent of N, the EL 'density width' decreases sharply at threshold and remains substantially smaller than the current 'density width'. Similar EL characteristics have been observed in other devices. Supplementary Fig. 4 and 5 show results from device 1, which has a higher N_c.
The observed threshold behavior on exciton density, photon bunching at threshold and absence of bunching above threshold, and high sensitivity to charge imbalance for EL are not compatible with conventional light-emitting diode action, which involves tunneling and recombination of independent charged particles. In such a picture, the EL intensity is proportional to the current before reaching saturation, and none of these observations can be explained. The EL-threshold behavior is analogous to lasing and polariton condensation, which are known non-equilibrium continuous phase transitions 6,7,[37][38][39] . In these processes, the cavity photons or the polaritons condense into a single electromagnetic mode above threshold. Photon bunching arises near threshold due to critical electromagnetic fluctuations, which disappear above threshold in the condensed phase. However, in the absence of an optical cavity that provides feedback, the EL in our devices is not lasing or polariton condensation.
Instead, the EL-threshold behavior is consistent with a continuous phase transition at the critical density N_c from an exciton gas to an exciton condensate, whose wave function consists of spatially coherent e-h pairs 3, 10, 14, 39 . When a 'normal' hole recombines radiatively with an electron in MoSe2 that is part of a condensate, the recombination rate increases with the number of bosons in the condensate 33 , i.e. a superradiant process 4, 5 (see the EL rate analysis in Methods). The situation is similar to the case when a hole is injected into a superconductor and recombines radiatively with its electron Cooper pair condensate 33,34 . The observed photon bunching with a correlation time much longer than the coherence time near threshold corresponds to critical fluctuations and slowing down at the critical point 37,38 . The absence of photon bunching above threshold corresponds to suppressed noise in the condensed phase. The observed strong sensitivity of the EL enhancement to charge imbalance above threshold is also consistent with exciton condensation, which requires nearly perfect e-h Fermi surface nesting [40][41][42][43] . In particular, a non-analytic cusp at charge balance, as in Fig. 4, was predicted for exciton coherence as a function of charge imbalance [40][41][42][43] . Such sensitivity to charge imbalance disappears below threshold, as expected.
Finally, we estimate the transition temperature for exciton condensation. Figure 5a displays the EL intensity and tunneling current as a function of charge imbalance at varying temperatures for a fixed pair density above threshold (0.74×10^12 cm-2). The tunneling current has a negligible dependence on charge imbalance or temperature, as discussed above (Fig. 2b). In contrast, both the EL intensity and the EL enhancement at charge balance decrease with increasing temperature. To exclude any potential trivial temperature effects, we use the relative EL enhancement at charge balance above the baseline to estimate the transition temperature. The top panel in Fig. 5b corresponds to the result for N = 0.74×10^12 cm-2. For temperatures above 100 K, the enhancement disappears (i.e. the EL behavior returns to that of a normal light-emitting diode). This value is very close to the predicted degeneracy temperature (onset of the macroscopic occupation of the ground state) for interlayer excitons in TMD double layers separated by two-layer h-BN 9 . Figure 5b also shows similar results for two other exciton densities. No clear N-dependence of the transition temperature can be concluded for the small range of densities investigated in this study. A systematic investigation of the N-dependence to test different theories (linear 9, 11 or more complicated dependences 12 ) is beyond the scope of this study and deserves further investigation. We also note that all samples studied in this work are a few microns in size, and further improvement in the sample size and quality is required to investigate spatial coherence and finite sample size effects on exciton condensation 15,19,21 .
In conclusion, we have electrically created a high-density e-h double layer based on 2D van der Waals heterostructures under zero magnetic field. By combining the tunneling and EL measurements, we have observed a threshold dependence on exciton density, a sensitive dependence on charge imbalance, and critical fluctuations for EL, which are consistent with exciton condensation. These observations persist up to ~ 100 K. Our results open up exciting opportunities for exploring exciton condensates at high energy scales 9-12 and exciton-mediated high-temperature superconductivity 13 .
Device fabrication
Dual-gated WSe2/h-BN/MoSe2 tunnel junctions were fabricated based on the dry-transfer method developed by Wang et al. 44 Each constituent layer of the junction and the gates was exfoliated from bulk crystals onto Si substrates with a 300-nm oxide layer. High-quality WSe2 and MoSe2 bulk crystals were synthesized based on the flux-growth technique 30 . Thin flakes of appropriate thickness and size were identified optically. In particular, the dark-field imaging mode was used to enhance the optical contrast for few-layer h-BN. The h-BN tunnel barrier is about 2-3 layers thick (0.6-1.0 nm), confirmed by atomic force microscopy. The identified flakes were picked up layer-by-layer by a stamp made of a thin layer of polycarbonate (PC) on polydimethylsiloxane (PDMS) supported by a glass slide. The finished stack was thermally released onto a Si substrate with pre-patterned electrodes for source, drain, top and bottom gates. The PC film was finally removed by chloroform. The device schematics and the optical image of a finished device are shown in Fig. 1a and 1b.
Reflection contrast and electroluminescence measurements
Reflection contrast measurements were conducted to calibrate the doping density of the WSe2 and MoSe2 monolayers. The output of a broadband super-continuum light source was focused by a high numerical aperture (NA = 0.8) objective to a spot of about 1 µm^2 on the device. The reflected light was collected by the same objective, dispersed by a grating spectrometer and detected by a charge-coupled-device (CCD) camera. The reflection contrast ΔR/R was obtained by normalizing the difference between the reflected light intensity from the tunnel junction and the substrate (ΔR) to that from the substrate (R). Electroluminescence (EL) from the tunnel junction was detected by the same optics. The typical integration time for EL was 0.1 s.
Hanbury Brown and Twiss setup
A Hanbury Brown and Twiss (HBT) setup was employed to measure the EL intensity correlation function g^(2)(τ). The collected EL from the tunnel junction was split by a 50:50 beam splitter and focused onto two identical single-photon detectors. The detector outputs were fed to a time-correlated single-photon counter (Picoharp 300) to record the arrival time difference between a pair of EL photons. A histogram of photon counts versus arrival time difference and the intensity correlation function can then be obtained. The instrument response time is about 40 ps.
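The histogram-to-g^(2) step can be sketched as below. This is a brute-force illustration, not the Picoharp's actual correlator; the normalization assumes stationary count rates so that g^(2) → 1 for uncorrelated photon streams:

```python
def g2_histogram(t1, t2, bin_width, n_bins, duration):
    """Normalized coincidence histogram g2(tau) from two lists of
    photon time stamps (same units throughout), tau = t2 - t1 >= 0.

    Each stamp pair falling inside the window contributes one count;
    dividing by rate1 * rate2 * bin_width * duration normalizes to
    the Poissonian (uncorrelated) expectation.
    """
    tau_max = bin_width * n_bins
    hist = [0] * n_bins
    for a in t1:
        for b in t2:
            tau = b - a
            if 0 <= tau < tau_max:
                hist[int(tau / bin_width)] += 1
    rate1 = len(t1) / duration
    rate2 = len(t2) / duration
    norm = rate1 * rate2 * bin_width * duration
    return [h / norm for h in hist]
```

The double loop is O(N1*N2) and only serves illustration; hardware correlators bin the time differences on the fly.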
Electrostatics of the TMD double layer structure
The equivalent circuit of the TMD tunnel junction is a tunnel resistor connecting the bottom and top gates. The junction capacitance can be ignored (compared with the gate capacitance C_g) under large bias and tunneling current. The energy band alignment is shown in Fig. 1. For symmetric gating as in our experiment, we express the electron density n (> 0) in the MoSe2 layer and the hole density p (> 0) in the WSe2 layer as n = (C_g/e)(V_g − φ_Mo^off) and p = (C_g/e)(V_b − V_g − φ_W^off), with each density set to zero when the corresponding expression is negative (Eqn. (1)). Here V_g and V_b are the gate and bias voltages, respectively, e is the elementary charge, and φ_Mo^off (φ_W^off) is the amount of potential that is required to move the in-gap Fermi level to the conduction band minimum of MoSe2 (the valence band maximum of WSe2). The gate capacitance C_g = (0.89 − 1.33)×10^-7 F/cm^2 is determined by the thickness of the h-BN dielectric (20-30 nm) and its dielectric constant (~ 3) 29, 30 . We derive from Eqn. (1) the e-h density imbalance (n − p), the total density (n + p), and the exciton density N as n − p = (C_g/e)(2V_g − V_b − φ_Mo^off + φ_W^off), n + p = (C_g/e)(V_b − φ_Mo^off − φ_W^off), and N = min(n, p). The density imbalance is controlled by (2V_g − V_b), and the total density, by V_b.
We compare the electrostatic model with experiment in Supplementary Fig. 6 for three special cases: n = p, n = 0, and p = 0. From Eqn. (1), these correspond to V_g = (V_b + φ_Mo^off − φ_W^off)/2, V_g = φ_Mo^off, and V_g = V_b − φ_W^off, respectively. The gate voltage for n = p can be unambiguously determined in experiment from the cusp of the tunneling current. The gate voltages for n = 0 and p = 0 can be determined from the reflection contrast of MoSe2 and WSe2. We assume that when the electrochemical potential touches the band edge, the reflection contrast of the corresponding neutral exciton resonance drops to 90% of its maximum value. We obtain φ_Mo^off ≈ 2.3 V and φ_W^off ≈ 2.0 V from the best fit in Supplementary Fig. 6b.
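Assuming the linear model above, the three calibration gate voltages follow from elementary algebra; a sketch using the fitted offsets (the offset values come from the text, the function itself is illustrative):

```python
def calibration_gate_voltages(v_bias, phi_mo=2.3, phi_w=2.0):
    """Gate voltages for the three special cases of the linear model
    n ~ (V_g - phi_mo), p ~ (V_b - V_g - phi_w):

      n = p  ->  V_g = (V_b + phi_mo - phi_w) / 2
      n = 0  ->  V_g = phi_mo
      p = 0  ->  V_g = V_b - phi_w
    """
    return {
        "n=p": (v_bias + phi_mo - phi_w) / 2,
        "n=0": phi_mo,
        "p=0": v_bias - phi_w,
    }
```

In this toy reading, a V_b = 5.5 V map puts charge balance near V_g ≈ 2.9 V, with the p-n region spanning roughly V_g ≈ 2.3 V to ≈ 3.5 V.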
Electroluminescence (EL) rate equation and spectral width
The rate equation for the minority hole density p_h in the n-doped MoSe2 layer can be expressed as dp_h/dt = I/(2eA) − Γ_tot p_h. The first term is the hole pumping rate, with I denoting the tunneling current, A the tunnel junction area, and e the elementary charge. Since tunneling here is a correlated pair-wise process (Fig. 2), the hole pumping rate is determined by half of the tunneling current, I/2. The second term is the total hole decay rate in MoSe2 with Γ_tot = Γ_r + Γ_nr + Γ_t, where Γ_r, Γ_nr and Γ_t represent contributions from the radiative recombination, non-radiative recombination in MoSe2, and tunneling out to the electrode, respectively. In the steady state, p_h = I/(2eAΓ_tot). The integrated EL power normalized by tunneling current is thus determined by the ratio of the radiative rate to the total decay rate: P_EL/I = (ħω_0/2e)(Γ_r/Γ_tot). Here ħω_0 is the EL photon energy (~ 1.6 eV). In the vicinity of exciton condensation, the large enhancement observed in the normalized EL power (Fig. 3c) is dominated by the enhancement in Γ_r since the EL quantum yield is typically very small (i.e. Γ_r ≪ Γ_tot).
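A minimal numerical check of these steady-state relations (symbols follow the text; the input numbers are purely illustrative):

```python
E = 1.602176634e-19  # elementary charge in coulombs

def steady_state_hole_density(current, area, gamma_tot):
    """p_h = I / (2 e A Gamma_tot): pair-wise pumping at rate I/(2e)
    per junction area balanced against the total decay rate."""
    return current / (2 * E * area * gamma_tot)

def el_power_per_current(photon_energy_ev, gamma_r, gamma_tot):
    """P_EL / I = (photon energy / 2e) * Gamma_r / Gamma_tot, in
    W per A when the photon energy is given in eV."""
    return photon_energy_ev / 2 * gamma_r / gamma_tot

# a 10^-4 radiative yield at 1.6 eV gives ~8e-5 W of EL per amp
efficiency = el_power_per_current(1.6, 1.0, 1.0e4)
```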
The radiative recombination rate Γ_r is determined by the transition dipole matrix element between the initial and final states, which are many-body quantum states in the exciton condensate phase 33 . The radiative rate is enhanced by the number of spatially coherent excitons in the condensate, similar to the phenomenon of super-radiance.
Unlike in lasing and polariton condensation, significant EL spectral line narrowing does not necessarily accompany the threshold intensity dependence on exciton density. In lasing or polariton condensation, while the carrier decay rate, thermal broadening and inhomogeneous broadening contribute to the spectral width below threshold, the spectral width above threshold is determined only by the photon decay rate in the cavity, which is typically much smaller, giving rise to significant spectral line narrowing. In the absence of an optical cavity, the EL spectral width below and above threshold in our devices is mainly determined by Γ_tot, which is not necessarily significantly suppressed above threshold. There is therefore enhanced spatial coherence but not necessarily much enhanced temporal coherence in the condensed phase. The situation is similar to the case of a hole recombining radiatively with an electron in a superconductor 33,34 . Significantly enhanced EL intensity due to the formation of a Cooper pair condensate was observed without significant line narrowing below the critical temperature.

Figure 4 | EL intensity (left axis) and tunneling current (right axis) as a function of electron-hole density imbalance (n − p) at varying total densities. Densities shown are the exciton density measured at charge balance. Above threshold, a cusp-shaped peak centered at charge balance is observed for both EL and tunneling. The EL 'density width' is significantly narrower than the tunneling 'density width'. An asymmetric dependence for the EL is observed, presumably due to the asymmetric disorder density in MoSe2 and WSe2. (Data from Device 2)

Figure 5 | Temperature dependence of EL. a, EL intensity (left axis) and tunneling current (right axis) as a function of charge imbalance (n − p) at varying temperatures for a fixed exciton density N above threshold (0.74×10^12 cm-2). The EL enhancement at charge balance decreases with increasing temperature and disappears slightly above 100 K.
b, The temperature dependence of the relative EL enhancement at charge balance above the baseline for N = 0.74×10¹² cm⁻² (top), 0.84×10¹² cm⁻² (middle), and 0.90×10¹² cm⁻² (bottom). The typical errors for each density were estimated from the fluctuations of the EL intensity. The transition temperature is estimated from the value at which the enhancement disappears (dashed lines). (Data from Device 1) | 2019-10-03T09:07:06.310Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "bbf9f4975e9a580c6bceaf13a10819a82b095e40",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2103.16407",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2ca43ca191b85e4a7dde53cb3afae2aa65c668b7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Materials Science"
]
} |
23098607 | pes2o/s2orc | v3-fos-license | In vivo multiplexed molecular imaging of esophageal cancer via spectral endoscopy of topically applied SERS nanoparticles
The biological investigation and detection of esophageal cancers could be facilitated with an endoscopic technology to screen for the molecular changes that precede and accompany the onset of cancer. Surface-enhanced Raman scattering (SERS) nanoparticles (NPs) have the potential to improve cancer detection and investigation through the sensitive and multiplexed detection of cell-surface biomarkers. Here, we demonstrate that the topical application and endoscopic imaging of a multiplexed cocktail of receptor-targeted SERS NPs enables the rapid detection of tumors in an orthotopic rat model of esophageal cancer. Antibody-conjugated SERS NPs were topically applied on the lumenal surface of the rat esophagus to target EGFR and HER2, and a miniature spectral endoscope featuring rotational scanning and axial pull-back was employed to comprehensively image the NPs bound on the lumen of the esophagus. Ratiometric analyses of specific vs. nonspecific binding enabled the visualization of tumor locations and the quantification of biomarker expression in agreement with immunohistochemistry and flow cytometry validation data.
Introduction
Esophageal cancer causes approximately one-sixth of all cancer-related deaths worldwide [1,2]. Esophageal cancer patients often present with advanced metastatic disease at the time of diagnosis [3], resulting in poor survival and cure rates [4,5]. One promising means of improving the early detection and biological investigation of esophageal cancer is to screen for the molecular changes that precede and accompany the onset of cancer [6,7]. Since esophageal cancer arises from the epithelial cells located at the lumenal surface of the esophagus, the cell-surface and tissue biomarkers at the lumenal surface could be imaged to monitor disease progression. However, because biomarker expression profiles vary greatly between patients and within individuals over time [8], accurate disease diagnosis, patient stratification, and biological investigation require the assessment of a large number of molecular targets.
Over the past few decades, various types of exogenous contrast agents have been developed for the molecular imaging of fresh tissues ex vivo and in vivo [9][10][11]. Among these contrast agents, surface-enhanced Raman-scattering (SERS) nanoparticles (NPs) have attracted interest for cancer imaging due to their excellent multiplexing capabilities [12]. The SERS NPs utilized in this study may be engineered as various "flavors," each of which generates a unique Raman spectral "fingerprint" when illuminated with a single laser at 785 nm. These diverse "barcode" or "fingerprint" spectra allow for the multiplexed detection of large panels of NPs when applied in live animals and human tissues [13][14][15][16][17][18]. Each flavor of SERS NPs can be conjugated to an antibody or small-molecule ligand to target a specific cell-surface or tissue biomarker [19][20][21][22][23][24]. After orally introducing a mixture of these biomarker-targeted NPs into the esophagus (e.g., by having the subject ingest a cocktail of NPs), physicians and tumor biologists may potentially be able to visualize the expression of a panel of molecular biomarkers during endoscopic procedures to accurately detect tumors and to study their progression over time. No toxicity effects have been observed, with negligible systemic uptake, when SERS NPs have been applied topically in the rectum of mice [25,26]. However, SERS NPs are not yet approved for administration in the human esophagus. Therefore, our current work utilizes a rat model to demonstrate the feasibility of this endoscopic imaging approach for preclinical investigations. Major challenges such as FDA approval and the improvement of imaging speeds must be addressed in order to translate this technique into clinical use.
We previously described a rotational spectral-imaging endoscope to image the rat esophagus in which preliminary validation was performed using untargeted SERS NPs and matrigel phantoms pre-loaded with SERS NPs [27]. In this article, we demonstrate (for the first time) the feasibility of in vivo multiplexed molecular endoscopy of biomarker-positive tumors in a rat esophagus using targeted SERS NPs that are topically applied into the lumen of the esophagus. Ratiometric imaging of targeted vs. untargeted SERS NPs provides a means of controlling for nonspecific effects such as uneven NP delivery, off-target binding, and variations in tissue permeability and retention [24,[28][29][30]. We show that this ratiometric quantification of specific vs. nonspecific binding is in agreement with both flow cytometry and immunohistochemistry validation data.
SERS NPs and functionalization
The SERS NPs used in this study were purchased from Cabot Security Materials Inc. These NPs have a sandwich structure: a 60-nm gold core, a unique layer of Raman reporter adsorbed onto the gold core surface, and an outer silica coating, totaling 120 nm in diameter [13,31]. Three "flavors" of NPs were used here, identified as S420, S421 and S440, each of which emits a unique fingerprint Raman spectrum when illuminated with a laser at 785 nm. These spectral differences are due to chemical differences in the Raman reporter layer. To enable the imaging of cell-surface biomarker targets, the SERS NPs were functionalized with monoclonal antibodies (mAb) according to a previously described conjugation protocol [24]. In brief, the NPs were first labeled with a fluorophore (Cyto 647-maleimide from Cytodiagnostics Inc, part No. NF647-3-01) for flow-cytometry characterization, and then conjugated with either an anti-EGFR mAb (Thermo Scientific, MS-378-PABX), an anti-HER2 mAb (Thermo Scientific, MS-229-PABX), or an isotype control mAb (Thermo Scientific, MA110407) at 500 molar equivalents per NP.
The conjugated NPs were tested with four cell lines (purchased from ATCC) that express various levels of EGFR and HER2, including A431 (a human epidermoid carcinoma cell line that highly overexpresses EGFR and moderately expresses HER2), U251 (a human glioblastoma cell line that moderately expresses EGFR and HER2), SkBr3 (a human breast adenocarcinoma cell line that highly overexpresses HER2 and moderately expresses EGFR), and 3T3 (a normal mouse fibroblast cell line that expresses negligible amounts of EGFR and HER2) [32][33][34][35][36][37]. Flow-cytometry analyses demonstrate a high binding affinity of these conjugated NPs to their cell-surface receptor targets: either EGFR or HER2 (Fig. 1). The different binding levels of various NP conjugates to different cell lines may be quantified by calculating the geometric mean of the fluorescence intensities (MFI). The MFIs of negative-control 3T3 cells stained with the three NP conjugates (two targeted and one control) are similar. For the EGFR-NPs, the binding level (MFI of EGFR-NPs vs. isotype-NPs) increases with the following order of cell lines: 3T3 < SkBr3 < U251 < A431. For the HER2-NPs, the binding level (MFI of HER2-NPs vs. isotype-NPs) increases with the following order of cell lines: 3T3 < U251 < A431 < SkBr3. These results are consistent with the known receptor expression levels of these cell lines [32][33][34][35][36][37].
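The geometric-mean MFI comparison described above can be sketched in a few lines. The per-cell intensity values below are made up for illustration; they are not measured flow-cytometry data:

```python
import math

def geometric_mfi(intensities):
    """Geometric mean fluorescence intensity (MFI) of a stained cell population."""
    logs = [math.log(i) for i in intensities]
    return math.exp(sum(logs) / len(logs))

# Hypothetical per-cell fluorescence readings (arbitrary units)
targeted_np = [220.0, 180.0, 260.0, 240.0]  # e.g. EGFR-NPs on a receptor-positive line
isotype_np = [21.0, 19.0, 23.0, 20.0]       # isotype-control NPs on the same line

# Binding level as defined in the text: targeted MFI vs. isotype MFI
binding_level = geometric_mfi(targeted_np) / geometric_mfi(isotype_np)
print(f"binding level (targeted MFI / isotype MFI): {binding_level:.1f}")
```

Ranking cell lines by this ratio, rather than by raw MFI, normalizes away staining and detection variations common to both conjugates.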
Rat model of esophageal tumors
In preparation for the development of an orthotopic esophageal cancer model, subcutaneous tumor xenografts were first implanted into female nude mice (6-8 weeks, Charles River Laboratories, model NU(NCr)-Foxn1 nu ). These tumor xenografts were later surgically implanted into the esophagus of male Fischer 344 inbred rats (7-9 weeks, Harlan Laboratories, Inc), which were used for esophageal-imaging studies. All animal work was performed in accordance with guidelines approved by the Institutional Animal Care and Use Committee (IACUC) at Stony Brook University or the University of Washington.
To develop tumor xenografts, three human cancer cell lines that differ in EGFR and HER2 expression levels (see Section 2.1) - A431, U251 and SkBr3 - were suspended in matrigel (BD Biosciences, 354234) in a 1:1 volume ratio to form a 100-200 μL mixture (1 × 10⁶ A431 cells, 3 × 10⁶ U251 cells or 5 × 10⁶ SkBr3 cells per mixture). At 7-9 weeks of age, nude mice were subcutaneously implanted with the cell mixture at different sites on their flanks. A maximum of three sites were implanted on each mouse with a distance of 1-2 cm between adjacent sites. When the tumors reached a size of 8 to 10 mm (about 3-5 weeks), the mice were euthanized by CO2 inhalation, followed by the surgical removal of the implanted tumors. An orthotopic rat esophageal model was developed for imaging studies. For ex vivo experiments, rats were euthanized via inhalation of CO2, followed by the surgical removal of the esophagus (~8 cm in length). Several 3-mm diameter holes were cut in the wall of the esophagus (Fig. 2(a)). Small tumor explants, obtained from subcutaneous tumor xenografts in mice, were positioned on top of the holes in the esophagus wall, and glued into place with Dermabond (a tissue adhesive). A 20-μL pipet tip was used to apply the Dermabond between the tumor explants and the rat esophagus in order to form a tight seal (Fig. 2(b)) such that SERS NPs applied within the lumen of the esophagus would not leak out of the esophagus during staining procedures (Fig. 2(c)). For in vivo experiments, rats were anesthetized via intraperitoneal injection of ketamine and xylazine. A small incision was made through the neck skin, and the cervical muscles were separated. The cervical esophagus was carefully exposed, and a few 3-mm diameter holes were cut in the wall of the esophagus. Small tumor explants, obtained from subcutaneous tumor xenografts in mice, were positioned on top of the holes in the esophagus wall, and glued into place with Dermabond.
The esophagus was stained via an oral gavage procedure and imaged immediately afterwards (in vivo). After the imaging studies were completed, the rats were euthanized via inhalation of CO 2 .
Spectral-imaging endoscopy of SERS NPs in the rat esophagus
For molecular endoscopy of the rat esophagus, a glass guide tube (OD 3.25 mm, ID 2.6 mm, Rayotek Scientific Inc) was first inserted into the rat esophagus, such that the lumen of the esophagus was wrapped tightly around the guide tube. A spectral-imaging probe (OD 2.5 mm, FiberTech Optica Inc.) was then inserted into the guide tube and was rotated and translated axially to comprehensively image the SERS NPs applied on the esophagus (Fig. 2(d)). This form of laser-scanned imaging is referred to here as "rotational pull-back imaging." A multimode fiber (100-μm core, 0.10 NA) at the center of the imaging probe was used to illuminate the esophagus using a 785-nm diode laser (~15 mW at the tissue) while 27 multimode fibers (200-μm core, 0.22 NA) surrounding the illumination fiber were used to collect optical signals (Fig. 2(d)). A 30° prism (Tower Optical Inc.) was adhered to the tip of the fiber-bundle probe in order to deflect the laser beam onto the lumen of the esophagus. Based on the working distance between the illumination fiber and the lumen of the esophagus (4 mm), and the NA of the illumination fiber (0.1), a 0.5-mm diameter spot (FWHM) was illuminated at the lumen of the esophagus (Fig. 2(d)), which defines the spatial resolution of our device. A customized spectrometer (Andor Holospec) was used to disperse the light collected from the 27 collection fibers onto a cooled deep-depletion spectroscopic CCD (Andor Newton, DU920P-BR-DD) with a spectral resolution of ~2 nm (~30 cm⁻¹). The rigid imaging probe was held and rotated by a hollow-shaft stepper motor (Nanotec ST5918) that was translated axially with a linear stage (Zaber Technologies, T-LSM200A). The imaging probe was alternately rotated in the positive and negative directions (±180°) to protect the fiber bundles from twisting.
A National Instruments data acquisition system programmed in LabVIEW was used to control the stepper motor and translation stage for laser-scanned imaging of the rat esophagus, as well as to demultiplex the acquired spectra and reconstruct images of the concentration and concentration ratio of various SERS NP flavors. Since the illumination spot size was 0.5 mm, in order to sample the esophagus at nearly the Nyquist sampling criterion, the imaging probe acquired 30 spectral acquisitions per rotation (a pixel pitch of 12° or 0.34 mm), and was translated by 0.3 mm in the axial direction during each rotation. For these studies, we utilized a spectral acquisition rate (pixel clock) of 10 spectra/sec (0.1 sec/pixel). Additional details about the system, the spectra demultiplexing algorithm, and the reproducibility and linearity of the spectral measurements, have been described previously [27,38].
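The reported scan parameters are mutually consistent: a quick back-of-the-envelope check (assuming the lumen circumference is set by the guide-tube outer diameter) reproduces the quoted 12° angular pitch, ~0.34 mm circumferential pixel spacing, and the 0.6 cm/min pull-back speed:

```python
import math

# Scan parameters taken from the text
guide_tube_od_mm = 3.25    # glass guide tube OD; the lumen wraps tightly around it
spectra_per_rotation = 30  # spectral acquisitions per probe rotation
axial_step_mm = 0.3        # axial translation per rotation
pixel_clock_hz = 10        # spectra acquired per second

# Circumferential pixel pitch at the lumen
circumference_mm = math.pi * guide_tube_od_mm
angular_pitch_deg = 360 / spectra_per_rotation
circumferential_pitch_mm = circumference_mm / spectra_per_rotation

# Axial imaging speed: one rotation takes 30 spectra / (10 spectra/s) = 3 s
rotation_time_s = spectra_per_rotation / pixel_clock_hz
speed_cm_per_min = axial_step_mm / rotation_time_s * 60 / 10  # mm/s -> cm/min

print(f"angular pitch: {angular_pitch_deg:.0f} deg")
print(f"circumferential pitch: {circumferential_pitch_mm:.2f} mm")
print(f"pull-back speed: {speed_cm_per_min:.1f} cm/min")
```

Both pitches sit just below the 0.5-mm spot size, consistent with the stated aim of sampling near the Nyquist criterion.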
For ex vivo experiments, the esophagus was rinsed with PBS and then irrigated with a mucolytic agent (N-Acetyl-L-cysteine, or NAC for short, Sigma-Aldrich, part No. A7250, 0.5 mL at 0.1 g/mL, 30-sec rinse) to allow the NPs to access and adhere to the esophagus tissue more effectively. The esophagus was first imaged prior to being stained in order to measure the tissue background. Multiple spectra were acquired by performing rotational pull-back imaging of both normal esophagus and tumor areas. Multiple tissue background reference components were used for spectral demultiplexing analysis in order to account for slight differences in the spectral background of normal esophagus and tumor tissues. After acquiring tissue background spectra, a 0.1-mL mixture of EGFR-NP and isotype-NP (150 pM/flavor with an addition of 1% BSA) was pipetted into the lumen of the esophagus and allowed to incubate for a duration of 10 min, followed by a rapid PBS rinse (0.2 mL). The esophagus was imaged again to measure the NP concentrations for biomarker detection.
For in vivo experiments, rats were anesthetized and the 3.25-mm diameter guide tube was inserted into the rat's esophagus until gently opposed by the esophageal sphincter at the entrance to the stomach. The imaging probe was then inserted within the guide tube for rotational pull-back imaging (similar to ex vivo experiments).
Upon completion of all imaging experiments, esophagus tissues and tumor xenografts were fixed in 10% formalin and submitted for histopathology (IHC and H&E staining).
Demonstration of endoscopic imaging ex vivo
Ex vivo experiments were first performed to test the ability to detect tumors (Figs. 3(a)-3(d)). The excised esophagus was implanted with 3 tumor xenografts ( Fig. 3(b)) -U251, SkBr3 and A431 -each of which exhibits a different level of EGFR expression (Fig. 1). Endoscopic imaging was performed by scanning a 2.5-cm esophagus section (2500 acquisitions) with an integration time of 0.1 sec per spectrum (0.1 sec per image pixel).
Images of the measured concentrations and concentration ratios are shown in Figs. 3(c) and 3(d). The image showing the distribution of EGFR-NPs fails to identify the three tumors due to the high variability in NP concentrations across the tissue (Fig. 3(c)). In comparison, the ratiometric image of targeted vs. untargeted SERS NPs (Fig. 3(d)) is insensitive to nonspecific variations in the absolute NP concentrations (due to uneven delivery and nonspecific accumulation) and accurately reveals the location of the tumors. In addition, as shown in Figs. 3(d) and 3(e), the ratiometric image of EGFR-NPs vs. isotype-NPs agrees qualitatively with the corresponding EGFR immunohistochemistry (IHC) images of the various tissues.
Demonstration of endoscopic imaging in vivo
In vivo experiments were performed to demonstrate the ability to simultaneously image the expression of multiple biomarkers through the oral administration of a mixture of three different flavors of SERS NPs (Fig. 4). An oral gavage catheter was placed within the rat esophagus to first deliver 0.2 mL of NAC (0.1 g/mL, 30 sec), which is a mucolytic agent, then 0.2 mL of PBS to rinse out the NAC (10 sec), and finally an equimolar mixture of EGFR-NPs, HER2-NPs, and isotype-NPs (150 pM/flavor, 0.12 mL). After 10 minutes, unbound NPs were rinsed away by irrigating the esophagus for 10 sec with 0.2 mL of PBS. Once the staining procedure was completed, we performed rotational pull-back spectral endoscopy of a 3-cm section of the esophagus (see methods) where the tumors were located. Figure 4(a) is a zoomed-in view of the surgically exposed cervical esophagus with 3 tumor implants. Figure 4(b) shows the reference spectrum of the SERS NPs, and Fig. 4(c) shows the background spectrum of esophagus tissues and raw (mixed) spectra acquired from NP-stained normal esophagus tissues and A431 tumor implants. Spectral demultiplexing reveals that the respective concentrations of the EGFR-NPs, HER2-NPs and isotype-NPs are 21.4 pM, 21.5 pM and 20.6 pM on the normal esophagus (blue spectrum) and 33.1 pM, 24.7 pM and 18.2 pM on the A431 tumor (red spectrum). The accuracy and linearity of the demultiplexing algorithms used in this study have been demonstrated previously [24,27].
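The actual demultiplexing algorithm is described in the cited prior work; its core idea, expressing each measured spectrum as a linear combination of known NP reference spectra plus tissue background and fitting the weights, can be sketched as follows. All peak positions, widths, and weights below are synthetic stand-ins for illustration, not the real reference spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
shift = np.linspace(800, 1800, 400)  # synthetic Raman-shift axis (cm^-1)

def peak(center, width=25.0):
    return np.exp(-0.5 * ((shift - center) / width) ** 2)

# Synthetic stand-ins for three NP "fingerprint" spectra and a tissue background
refs = np.column_stack([
    peak(950) + 0.6 * peak(1450),    # stand-in for the EGFR-NP flavor
    peak(1100) + 0.4 * peak(1600),   # stand-in for the HER2-NP flavor
    peak(1250) + 0.5 * peak(1700),   # stand-in for the isotype-NP flavor
    0.3 + 0.0004 * shift,            # smooth tissue-background component
])

true_weights = np.array([33.1, 24.7, 18.2, 50.0])  # illustrative pM-scale weights
measured = refs @ true_weights + rng.normal(0.0, 0.5, shift.size)  # noisy mixture

# Demultiplex by least squares: find w minimizing ||refs @ w - measured||
weights, *_ = np.linalg.lstsq(refs, measured, rcond=None)
egfr, her2, isotype = weights[:3]
print(f"EGFR-NP/isotype-NP ratio: {egfr / isotype:.2f}")
```

The recovered targeted-to-isotype weight ratio is the quantity plotted in the ratiometric images, which cancels out the common delivery and retention factors.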
Ratiometric images show the concentration ratios of EGFR-NPs vs. isotype-NPs (Fig. 4(d)) and HER2-NPs vs. isotype-NPs (Fig. 4(e)). These results demonstrate that multiplexed ratiometric imaging of targeted vs. untargeted SERS NPs not only reveals the location of the receptor-positive tumors, but also quantifies the EGFR and HER2 expression levels in agreement with flow-cytometry results (right-side plots in Figs. 4(d) and 4(e)). Note that while tumor cell suspensions yield specific-to-nonspecific binding ratios of up to two orders of magnitude in flow-cytometry experiments (Fig. 1), the molecular imaging of real tissues yields lower specific-to-nonspecific binding ratios due to less-ideal delivery of the NPs to cell-surface receptors as well as significantly higher nonspecific binding and retention effects, as discussed previously [24].
Fig. 4. In vivo endoscopic molecular imaging performed with multiplexed SERS NPs delivered via oral gavage. (a) Photograph of a surgically exposed rat esophagus implanted with three tumor xenografts. (b) Reference spectrum of the SERS NPs that were mixed together and topically applied into the rat esophagus in this study. (c) Background spectrum of esophagus tissues and raw (mixed) spectra acquired from NP-stained normal esophagus tissues and A431 tumor implants. (d and e) Images showing the concentration ratio of (d) EGFR-NPs vs. isotype-NPs and (e) HER2-NPs vs. isotype-NPs. The right-side plots show the correlation between the image-derived intensities from various tissue types (normal esophagus and three tumors) and the corresponding fluorescence ratio (targeted-NP vs. isotype-NP) from flow-cytometry experiments with the cell lines used to generate the various tumor xenografts (Fig. 1). All values in the figures are presented as mean ± standard deviation. R > 0.95.
Discussion
We have demonstrated an endoscopic imaging strategy for detecting and/or investigating gastrointestinal cancers that is based on the visualization of the molecular phenotype of tissues through the administration of molecularly targeted SERS NP contrast agents. Recently, SERS NPs have attracted much interest because of their high multiplexing capabilities, brightness, and minimal systemic uptake when applied to epithelial tissues including the gastrointestinal mucosa [25,39]. In this study, we have developed and validated a miniature spectral endoscope to image SERS NPs topically applied within the rat esophagus for rapid tumor detection. The endoscope is able to comprehensively scan the entire lumen of the esophagus, via rotational pull-back of a spectral probe, with an imaging speed of 0.6 cm/min (10 spectra/sec). Although it requires more time than a localized point-detection method, the strategy of comprehensively imaging the entire organ enables screening for molecular changes without prior knowledge about the location of potential lesions. Through imaging experiments with a rat esophageal tumor model, we have demonstrated that the miniature spectral endoscope can accurately locate and distinguish tumors by ratiometric quantification of targeted vs. untargeted SERS NPs, which enables the unambiguous visualization of biomarker-positive lesions. Furthermore, the ratiometric images agree with flow-cytometry and IHC validation data (Figs. 3 and 4).
SERS NPs are advantageous for multiplexed molecular imaging for a number of reasons. First, multiplexed SERS NPs may be excited at a single illumination wavelength (785 nm), ensuring that all NP reporters in a measurement are interrogated identically in terms of illumination intensity, area and depth. This ensures that ratiometric measurements are highly accurate and immune to wavelength-dependent tissue optical properties, which can plague fluorescence-based multiplexed imaging techniques in which disparate wavelength channels are often necessary. Second, the relatively large size of these NPs (~120 nm) allows them to remain at the surface of the esophagus lumen rather than diffusing into the tissue and being trapped, such that molecular image contrast between tumor and normal esophagus may be rapidly achieved (<15 min). Finally, the ratiometric quantification of targeted and control agents allows for the accurate identification of molecularly specific binding by eliminating nonspecific effects that are common in single-agent imaging, such as off-target binding, uneven agent delivery, and variations in tissue permeability and retention [24,[28][29][30].
Additional work is needed to further improve the spectral endoscopy technique. Brighter NPs would be of value to improve signal-to-noise ratios (SNR) and therefore, imaging speeds. While a few studies have shown the feasibility of detecting large multiplexed panels (5-10) of untargeted SERS NPs in animal or human tissues [14,40], further work is needed to demonstrate the ability to quantify a large panel of tumor biomarkers with targeted SERS NPs such that tumors with heterogeneous biomarker expression patterns can be accurately identified. In addition, it will be interesting to test our endoscopy strategy with larger animal models, such as swine, which more closely resemble the human esophagus in terms of size and geometry. Since the ratiometric imaging of SERS NP mixtures is relatively insensitive to variations in working distance and tissue geometry [24,29], the guide tube could be removed and a rotational Raman-imaging probe could be deployed through the instrument channel of a standard endoscope [17,41]. These future studies will leverage our topical-delivery protocols and ratiometric-detection strategy to unambiguously visualize cell-surface biomarkers in tissues and may potentially improve the ability to detect and investigate esophageal cancer at the molecular level. | 2018-04-03T04:02:42.750Z | 2015-10-01T00:00:00.000 | {
"year": 2015,
"sha1": "f8cc6d5997dccfadeb41f3b283540d687bd1eb7a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/boe.6.003714",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "efc3baecbb01e1fc38e19edf4321469cc712d186",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249348609 | pes2o/s2orc | v3-fos-license | A comparison of the UK and Italian national risk-based guidelines for assessing hydraulic actions on bridges
Abstract This paper compares the application of two recently published guidance documents for the risk-based assessment of hydraulic actions on bridges, namely the UK Design Manual for Roads and Bridges and the Italian Ministry of Infrastructure and Transport's Guidelines, to two case study bridges (Staverton Bridge, UK; Borgoforte Bridge, Italy). This work is one of the first to illustrate how to apply these guidelines. Both documents present risk-based methods for the assessment of hydraulic actions, while exhibiting fundamental differences. For example, the UK method prescribes calculations for local and constriction scour, water depth, and velocity at several cross-sections; by comparison, the Italian method does not prescribe calculations to assess the risk level. For the case studies in this paper, the hydraulic risk obtained for Staverton Bridge was 'High' using both methods. The scour score for Borgoforte Bridge was higher using the Italian method (Medium-High) than with the UK approach (Medium). This difference is due to how the guidelines assess the vulnerability associated with the minimum clearance. The comparison of these two risk-based approaches and the resulting discussion may serve as a useful resource for those wishing to develop new risk-based methods for assessing hydraulic actions on bridges.
Introduction
Hydraulic actions on bridges are a significant source of damage and pose risks to the safety and stability of new and existing structures. One particular action, namely scour erosion at piers and abutments, remains a major cause of damage and collapse of bridges worldwide (Blockley, 2011; Lamb, Garside, Pant, & Hall, 2019; Maddison, 2012; Sasidharan, Parlikad, & Schooling, 2021; Selvakumaran, Plank, Geiß, Rossi, & Middleton, 2018). Scour occurs where flowing water leads to the removal of soil from around bridge foundations (Hamill, 1999), which can compromise stability. Maddison (2012) provides a detailed discussion of the various types of bridge scour, namely 'general', 'contraction/constriction', and 'local' scour. General scour is caused by natural channel evolution, contraction (or constriction) scour is caused by water flowing through bridge openings (reduced flow area), and local scour is caused by the presence of obstacles (such as foundations) to the flow. The combination of scour types can lead to significant scour hole depths at affected bridges (Klinga & Alipour, 2015). Scour can be exacerbated by certain phenomena such as actions induced by debris accumulation (Panici, Kripakaran, Djordjević, & Dentith, 2020). Moreover, climate change is likely to increase the frequency and magnitude of flooding events (Nasr et al., 2020), which could further aggravate scour risk on certain structures. For these reasons, the identification of bridges at risk of scour is crucial for infrastructure operators, managers, and asset owners.
Asset owners and operators use various methods to rank and prioritise structures using available data. There is general consensus in the literature that risk-based methods are the most appropriate for this task because they allow consideration of multiple elements (hazard, structure, cost, etc.) that underpin decision-making for a bridge subjected to hazards (Adey, Hajdin, & Brühwiler, 2003). Various European projects on bridge management under natural hazards have proposed a range of risk-based approaches (e.g., Campos, Casas, & Fernandes, 2016; Khakzad & van Gelder, 2016). Decision-making related to bridge management can be enhanced by information from monitoring systems that can capture and record scour effects during flood events (Giordano, Prendergast, & Limongelli, 2022; Prendergast & Gavin, 2014; Vardanega, Gavriel, & Pregnolato, 2021). For large networks of bridges, however, certain structures will inevitably succumb to flood-related damage and scour (Whitbread, Benn, & Hailes, 2000), and real-time monitoring of every bridge with sensors remains unachievable due to the associated costs (Farreras-Alcover, Andersen, & McFadyen, 2016). It is therefore essential that asset owners and operators can prioritise the most at-risk bridges in their network (Arneson, Zevenbergen, Lagasse, & Clopper, 2012).
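The hazard-vulnerability-consequence framing behind such prioritisation can be sketched as a toy ranking exercise. The bridges, scores, and the multiplicative combination rule below are illustrative assumptions, not taken from either guideline:

```python
from dataclasses import dataclass

@dataclass
class Bridge:
    name: str
    hazard: float         # likelihood/severity of hydraulic loading (0-1)
    vulnerability: float  # susceptibility of the structure to scour damage (0-1)
    consequence: float    # impact of failure on users and the network (0-1)

    @property
    def risk(self) -> float:
        # One common convention: risk = hazard x vulnerability x consequence
        return self.hazard * self.vulnerability * self.consequence

# Hypothetical portfolio to be ranked for inspection/monitoring priority
portfolio = [
    Bridge("A", hazard=0.8, vulnerability=0.6, consequence=0.9),
    Bridge("B", hazard=0.4, vulnerability=0.9, consequence=0.3),
    Bridge("C", hazard=0.7, vulnerability=0.7, consequence=0.7),
]
ranked = sorted(portfolio, key=lambda b: b.risk, reverse=True)
for b in ranked:
    print(f"{b.name}: risk = {b.risk:.3f}")
```

The point of the multiplicative form is that a bridge scoring highly on vulnerability alone (bridge B) ranks below one with moderate scores on all three components (bridge C).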
Various guidelines for the management of bridges under flood hazards have been developed to date (e.g., Abe, Shimamura, & Fujino, 2014; FHWA, 2004; MnDOT, 2009; TDT, 1993). International references include the AASHTO manuals (2020), while the European regulations, the Eurocodes (BSI, 2002, 2005, 2006; CEN TC 250 N1148:2015; EN 16991:2018; ISO 13822:2010), currently offer principles for assessing hydraulic actions on bridges but offer little guidance on how to prioritise bridges for scour monitoring and maintenance interventions. Moreover, how to deal with climate change effects and the influence of debris has received little attention (Takano & Pooley, 2021). This paper focuses on two recent risk-based guidelines for assessing hydraulic actions on bridges, namely the UK Design Manual for Roads and Bridges (CS 469) (HA, 2012; Takano & Pooley, 2021) and the Italian Ministry of Infrastructure and Transport (MIT) guidelines (CSLP, 2020), and demonstrates their performance against two case study bridges. This paper has the following aims: (i) To apply the UK and Italian guidelines to two bridges from the UK and Italy to demonstrate how the guidelines operate. (ii) To compare the results from the two methods to demonstrate how risk-based approaches and concepts can be incorporated into scour assessment guidelines. (iii) To offer novel insights on how future codes of practice can be further enhanced to better assess hydraulic risk for bridge structures.
Management of scour and hydraulic actions in practice
Transportation agencies usually manage a portfolio of hundreds or even thousands of bridges; thus they must adopt specific procedures to rank the level of risk and prioritise interventions on bridges prone to scour. A risk-based approach to prioritising interventions combines information about the hazard (hydraulic actions), the vulnerability of the bridge, and the consequences of potential damage (Pregnolato, 2019). Using inspections and monitoring data (when available) as inputs, risk-based approaches usually rank bridges according to different 'risk' classes. Pregnolato et al. (2020) suggested that bridge agencies can have varying in-house definitions of the concept of risk, which is incorporated into 'risk-based' practice with varying degrees of refinement. The present work focuses on two risk-based national guidelines for managing hydraulic actions and scour available in the UK and Italy, which were released in 2021 and 2020 respectively. The UK guidelines (see Sec. 2.1) introduced innovation by considering debris and climate change in assessments. Moreover, they included an updated assessment of the impact of a bridge failure on communities (which was missing in previous work, e.g., Sasidharan et al., 2021). The Italian guidelines (see Sec. 2.2) are the first to be issued at the national (Italian) level for the management of bridges, and contain a comprehensive system perspective and a wider assessment of consequences, which also includes human and surrounding property losses.
Scour and hydraulic actions risk assessment in the UK
The risk assessment of scour and hydraulic actions on UK bridges is regulated by National Highways' CS 469 Management of scour and other hydraulic actions at highway structures (hereafter referred to as CS 469), which replaces BD97/12 Assessment of scour and other hydraulic actions at highway structures (Takano & Pooley, 2021). Informative case histories and technical guidance on scour at bridges are also contained in CIRIA (2017). The main aim of CS 469 is to evaluate the risk posed by scour and other hydraulic actions to bridges and other riverine structures (e.g., retaining walls), and to prioritise and manage risk mitigation measures when required. The guideline covers crucial aspects of the whole process of monitoring, evaluating, and managing risk, including characterising scour and hydraulic risks, inspections and monitoring of scour-prone structures, and mitigation measures to reduce negative consequences from scour and hydraulic actions for both affected populations and structures.
The estimation of risk in CS 469 is split into two levels. Level 1 is a preliminary screening that assesses scour impact based on a qualitative analysis without calculations or numerical analysis. Level 1 considers: (i) the history of scour at the bridge, (ii) visual inspections, and (iii) other factors based on the bridge location (e.g., flow angle of attack, history of debris accumulation). The outcome of a Level 1 assessment can be either low risk of scour or the requirement to conduct a Level 2 assessment. Level 2 assesses the risk level after a detailed set of surveys and calculations that include a topographic survey of the cross-sections upstream and downstream of the subject bridge, as well as hydraulic calculations for water depth and velocity upstream and at the structure.
Scour and hydraulic actions risk assessment in Italy
Risk assessment on Italian bridges is addressed by the 'Guidelines for risk classification and management, safety evaluation and monitoring of existing bridges', issued by the Italian Ministry for Public Works in 2020 (CSLP, 2020). These guidelines aim to provide operators with standard procedures for bridge safety management at a national level, ensuring safety by maintaining an acceptable risk level. The guidelines propose a multi-level and multi-risk procedure entailing the survey and the risk classification of existing bridges, the assessment of their safety, and the management of planned inspections and monitoring.
The procedure comprises six levels (0-5) of analysis. The complexity and the level of detail of the investigations increase with the level of the analysis, while the number of bridges examined, as well as the level of uncertainty of the results obtained, decreases. The first three levels relate to a preliminary analysis at a local scale, involving the collection of existing data (Level 0), in situ inspections (Level 1), and risk-based classification (Level 2) considering four types of risk, namely: (i) the structure-foundation risk, (ii) the seismic risk, (iii) the landslide risk, and (iv) the hydraulic risk. The structures that present criticalities are further investigated through simplified (Level 3) or accurate (Level 4) safety assessments. Level 5 is the final level and envisages a resilience analysis of the transportation network. This level of analysis is not directly detailed in the guidelines, which recommend adopting methods from the international literature. Currently, the implementation of these guidelines is at an experimental phase, which will lead to a preliminary review before their final adoption. Very few studies have applied this approach, and those that have focused mainly on seismic and structural risk (e.g., De Matteis, Bencivenga, & Zizi, 2021; Santarsiero, Masi, Picciano, & Digrisolo, 2021).
Methodology
This study applies the UK and Italian guidelines to two different case study bridges with the goal of demonstrating how the methods operate and how they compare, in order to draw lessons for further improvement.
Methodology proposed in the UK code
For the UK assessment, the Level 2 Scour Risk Assessment of CS 469 has been employed, in which risk and vulnerability are assessed and classified independently for scour, hydraulic actions (e.g., uplift), and channel stability. Figure 1 shows a schematic of the hydraulic-related phenomena considered in CS 469 and its assessment criteria; interested readers can refer to Takano and Pooley (2021) for further details.
The scour risk is based on two factors. The first is the priority factor P_F (Equation (1)), the product of seven heuristic coefficients (described in Table 1), each of them depending on the qualitative or quantitative information available (e.g., previous history of scour, river gradient, importance of the road), although each term is independently defined (Takano & Pooley, 2021). In terms of establishing the history of scour, this can be achieved by analysing previous visual inspection data if available. If not, methods such as Ground Penetrating Radar have shown success at detecting the depth of previous scour holes, subsequently filled in upon flood attenuation; see for example Anderson, Ismael, and Thitimakor (2007).
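Because P_F is defined as a product of independent coefficients, its evaluation reduces to simple bookkeeping. The sketch below uses placeholder factor names and values, which are assumptions for illustration only; the official coefficients and their admissible ranges are those of Table 1 in CS 469.

```python
from math import prod

def priority_factor(factors):
    """Priority factor P_F as the product of heuristic coefficients.

    `factors` maps coefficient name -> value; in CS 469 there are seven
    coefficients (Table 1). Names and values used below are illustrative
    placeholders, not the official ones.
    """
    return prod(factors.values())

# Hypothetical values, loosely inspired by the factors mentioned in the text.
example = {
    "foundation_type_F": 1.25,
    "history_of_scour_HS": 1.5,
    "road_importance": 1.0,
    "river_gradient": 1.0,
    "angle_of_attack": 1.0,
    "debris_history": 1.0,
    "inspection_findings": 0.9,
}
pf = priority_factor(example)  # 1.25 * 1.5 * 0.9 = 1.6875
```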
The second factor is the total scour depth D_T at each structural element, whereby the scour depth is given by the sum of the contraction (D_C) and local (D_L) scour depths. The approach employed in CS 469 makes use of formulae (CIRIA, 2017) for the estimation of each type of scour that are inclusive of several factors (e.g., angle of attack, pier or abutment shape, debris accumulations), according to Equation (2), where each term is described in Table 2. The computation of the individual components (e.g., D_C) in Equation (2) (not shown here for brevity) is dependent on an input flow, that is, the flood discharge for a return period of 200 years inclusive of a climate change allowance (varying between a 20% and 35% increase, depending on geographic location) for all types of rivers.
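The structure of Equation (2), contraction plus local scour with adjustment factors applied to the local component, can be sketched as below. The purely multiplicative form and the parameter names are simplifying assumptions; the actual CIRIA (2017) / CS 469 formulae compute these terms from hydraulic and geometric inputs and are more detailed.

```python
def local_scour_depth(base_depth, shape_factor=1.0, angle_factor=1.0,
                      debris_factor=1.0):
    """Local scour depth D_L with illustrative multiplicative adjustments.

    The real formulae derive the base depth and factors from hydraulic
    and geometric inputs; here they are plain parameters (an assumption).
    """
    return base_depth * shape_factor * angle_factor * debris_factor

def total_scour_depth(d_contraction, d_local):
    """Total scour depth D_T as the sum of contraction and local scour."""
    return d_contraction + d_local
```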
The risk assessment is evaluated by computing the relative scour depth D_R, i.e., the ratio between the total scour depth D_T and the foundation depth D_F (D_R = D_T/D_F), and then plotting it against the priority factor P_F. This operation needs to be repeated for all structural elements (e.g., piers, abutments), whilst the overall risk rating of the bridge will correspond to the highest among all assessed elements. Figure 2 depicts the graph used for this estimation, with an indication of the risk category and score. The region of the graph in which the point (P_F, D_R) falls determines the risk rating for a structure. Depending on where the output lies in the plot, the bridge will be assigned a score and a rating for scour risk: 100 (High, immediate risk of scour), 80 (High), 60 and 40 (Medium), and 10 (Low). Structures with a score of 100 will be considered at immediate risk of scour and treated as substandard, i.e., requiring urgent interventions to minimise the load carried by the structure, including closure to vehicular traffic and implementation of scour mitigation measures.
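This scoring step amounts to a lookup over (D_R, P_F). The thresholds in the sketch below are assumed stand-ins for the chart regions of Figure 2, chosen only so that the two behaviours reported later in the text hold (D_R between 1 and 1.4 scores 40 for any P_F; a very large D_R with P_F of at least 0.85 scores 100); the real boundaries must be read off the chart.

```python
def relative_scour_depth(d_total, d_foundation):
    """Relative scour depth D_R = D_T / D_F."""
    return d_total / d_foundation

def scour_risk_score(d_r, p_f):
    """Assign a CS 469-style score from (D_R, P_F).

    Boundary values here are illustrative assumptions, not the official
    Figure 2 regions.
    """
    if d_r < 1.0:
        return 10   # Low
    if d_r <= 1.4:
        return 40   # Medium, irrespective of P_F
    if d_r <= 3.0:  # assumed boundary
        return 60   # Medium
    return 100 if p_f >= 0.85 else 80  # High
```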
In parallel with scour risk, CS 469 requires carrying out an assessment of hydraulic actions on the bridge, such as soffit uplift, damage to the bridge deck and parapets, and impact forces from debris. The assessment is based on the hydraulic calculations already carried out for scour, and only requires assessing (i) whether the soffit of the bridge will be submerged by the flood flow, or (ii) whether the flow specific energy (i.e., the sum of the kinetic and potential energy head of the flow relative to the channel bottom) under the bridge will be higher than the soffit height. If the hydraulic analysis estimates bridge submergence, then the guidelines require a vulnerability analysis (exposure × hazard). Associated recommended actions might include bridge classification as substandard, if hydraulic actions are deemed to cause damage to the bridge. An intermediate situation arises when neither the flow nor the specific energy is estimated to reach the bridge soffit, but the latter (i.e., the flow specific energy) is within 0.60 m of it. In this case, CS 469 considers the bridge at potential risk of failure due to impact with debris and in need of further investigation (although this is outside the scope of CS 469).
Finally, a third assessment must be carried out on river channel stability upstream of the bridge. In this case, the assessment is based only on qualitative analysis, which includes observations on local river channel conditions or stream or riverbank protections. High or medium vulnerability will require actions from the overseeing organisation to reduce the impact of channel instability. Although CS 469 treats this as a 'risk rating', channel stability should rather be considered a vulnerability (as we refer to it in this paper), since the approach is limited to qualitative observations and does not consider possible consequences.
Methodology proposed in the Italian code
As introduced in Section 2.2, the risk classification of bridges at the local level is addressed within Level 2 of the multi-level procedure proposed by the Italian guidelines. The first step includes assigning an Attention Class (AC) for each risk type, i.e., the structure-foundation risk, the seismic risk, the landslide risk, and the hydraulic risk. Then, the individual ACs are combined to obtain a global AC used for risk classification purposes. There are five ACs in total, namely: Low, Medium-Low, Medium, Medium-High, and High. The global AC determines the actions to be carried out on each bridge of a given portfolio, i.e., the need for safety assessment, and/or the need for collecting detailed data by means of inspections or Structural Health Monitoring (SHM). In line with the purpose of this study, the Italian guidelines are applied up to Level 2 (risk classification), considering the hydraulic risk only.
The evaluation of the AC with respect to hydraulic actions considers three hydraulic-related phenomena, specifically: (i) insufficient minimum vertical clearance, (ii) general scour, and (iii) local scour. In the original manual, 'general scour' (erosione generalizzata, in Italian) relates to both contraction and general scour, as defined in Section 1; 'local scour' (erosione localizzata, in Italian) relates to both local and general scour. The AC for scour is determined by combining the ACs for local and general scour by means of combination tables. The global AC for hydraulic risk is obtained by selecting the most severe AC between the AC for insufficient vertical clearance and the AC for scour (see Figure 3). For each hydraulic-related phenomenon, a partial AC is assigned to hazard, exposure, and vulnerability, for a total of nine partial ACs (three hydraulic-related phenomena multiplied by three risk parameters, as shown in Figure 3). The partial ACs are combined by means of combination tables (see Figure 4). For instance, in case the partial ACs for hazard, vulnerability, and exposure are Low, High, and Low, respectively, the AC for that hydraulic phenomenon will be High. The guidelines provide specific rules to drive the assignment of the nine partial ACs based on the values assumed by several quantitative or qualitative variables, as briefly explained below.

Table 2. Factors used in Equation (2) and typical ranges.
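The combination logic described above lends itself to a table lookup. In the sketch below, only the (Low, High, Low) → High row is taken from the text; the other table rows are assumptions for illustration, and the rule for the global hydraulic AC (the most severe of clearance and scour) follows the description in the guidelines.

```python
# Attention Classes ordered from least to most severe.
AC_LEVELS = ["Low", "Medium-Low", "Medium", "Medium-High", "High"]

# Fragment of a combination table (hazard, vulnerability, exposure) -> AC.
# Only the first row is quoted in the text; the others are illustrative.
COMBINATION_TABLE = {
    ("Low", "High", "Low"): "High",
    ("Low", "Low", "Low"): "Low",
    ("Medium", "Medium", "Medium"): "Medium",
}

def phenomenon_ac(hazard, vulnerability, exposure):
    """Partial AC for one hydraulic phenomenon via table lookup."""
    return COMBINATION_TABLE[(hazard, vulnerability, exposure)]

def hydraulic_ac(ac_clearance, ac_scour):
    """Global AC for hydraulic risk: the most severe of the two inputs."""
    return max(ac_clearance, ac_scour, key=AC_LEVELS.index)
```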
Regarding the hazard, for the minimum clearance, the partial AC is estimated by considering the distance between the bridge soffit and the water level for the specified return period; different return periods are considered, but they do not depend on the type of bridge, although the importance of the bridge is considered in the consequences. With reference to general scour, the partial AC is evaluated considering: (i) the ratio between the width of the riverbed occupied by the bridge and the overall width of the riverbed, C_a, and (ii) the ratio between the width of the floodplains occupied by the bridge and the overall width of the floodplains, C_g. As for local scour, the partial AC is determined by the ratio between the scour depth and the foundation depth (taken as 2 m if the foundation depth is unknown); the method assumes a scour depth equal to two times the width of the piers (CSLP, 2020, p. 34).
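The quantity driving the hazard AC for local scour is a simple ratio. A minimal sketch, using the 2 m default foundation depth and the scour depth taken as twice the pier width as stated above; the mapping of this ratio onto an AC (which the guidelines tabulate) is not reproduced here.

```python
def local_scour_hazard_ratio(pier_width, foundation_depth=None):
    """Ratio of assumed scour depth to foundation depth.

    Per the Italian guidelines as summarised in the text: the scour depth
    is taken as twice the pier width, and the foundation depth defaults
    to 2 m when unknown.
    """
    if foundation_depth is None:
        foundation_depth = 2.0  # default when the real depth is unknown
    return (2.0 * pier_width) / foundation_depth
```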
The vulnerability depends on the type of foundation, the geometry of the riverbed, and the amounts of sediment/debris/floating materials. The definition of the partial ACs for the three hydraulic-related phenomena is mostly based on meeting certain conditions, i.e., the operator uses tables provided in the guidelines to verify whether certain conditions are respected. The following conditions are considered. For insufficient vertical clearance: (i) significant sediment deposition or riverbed erosion; (ii) significant transport of plant material of considerable size; (iii) dimension of the river basin higher or lower than defined thresholds; for example, a partial AC can score 'Medium' if at least one condition is met among the presence of (i) and (ii), and (iii) being smaller than 500 km². For general scour: (i) superficial foundations; (ii) generalised lowering of the riverbed; (iii) curvature of the riverbed; for example, a partial AC can score 'Medium' if at least one condition is met among the presence of (i), (ii), and (iii). For local scour: (i) superficial foundations; (ii) generalised lowering of the riverbed; (iii) accumulations of debris or floating material; (iv) planimetric migration of the riverbed; (v) existence of scour protection devices; for example, a partial AC can score 'Medium' if at least one condition is met among the presence of (i), (ii), (iii), and (iv).
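The 'at least one condition met' rules above amount to checking boolean conditions. The sketch below encodes only the single example rule quoted for general scour (any of the three conditions gives at least 'Medium'); the full tables in CSLP (2020) distinguish further levels, so this is a deliberately reduced reading.

```python
def general_scour_vulnerability_ac(superficial_foundations,
                                   riverbed_lowering,
                                   riverbed_curvature):
    """Partial vulnerability AC for general scour (simplified).

    Implements only the example rule quoted in the text: the AC scores
    'Medium' if at least one of the three conditions is met, 'Low'
    otherwise. The real guideline tables are richer than this.
    """
    if any((superficial_foundations, riverbed_lowering, riverbed_curvature)):
        return "Medium"
    return "Low"
```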
The exposure mainly accounts for indirect consequences related to the loss of functionality of the bridge and damage to the environment induced by the bridge collapse. A single partial AC is determined for all three hydraulic phenomena considering different parameters, namely: (i) the daily traffic level; (ii) the length of the bridge span, used as a proxy for the number of people on the bridge; (iii) the presence (under the bridge) of people and important assets from the naturalistic, economic, and social perspectives; (iv) the presence of alternative routes; (v) the criticality of the bridge during an emergency; and (vi) the transport of dangerous goods. Based on the values assumed by these parameters, tables and graphs guide the operator in the selection of the partial AC for the exposure, as illustrated in CSLP (2020). For instance, the partial AC for bridges with a low level of traffic (≤10,000 vehicles/day) and a short span length (≤20 m) is Medium-High in case: (i) alternative routes are not present, (ii) people or important assets are present under the bridge, and (iii) the bridge is a strategic asset. The presence of dangerous goods is used as a prioritisation factor between bridges belonging to the same AC.
Case study
This study applies the UK and Italian guidelines to two different case studies in Italy and in the UK (Figure 5), which present different characteristics and available data.
Staverton Bridge (Figure 5a) crosses the River Dart near Totnes in Devon, in the South-West of England (UK), and is located on road C46. The bridge consists of seven arch masonry spans of varying sizes. The bridge construction date is believed to be 1413, and it is a Grade I listed structure (i.e., the bridge is of exceptional architectural and historical interest). The bridge is located between two river bends, immediately downstream of a small island, whilst downstream of the bridge the River Dart flows around the large Dartington island. The bridge is owned by Devon County Council, and scour was observed at the piers and foundations during inspections in 2018 and 2019. The River Dart at Staverton Bridge has a catchment area of 268.1 km² after originating in Dartmoor, and the riverbed is formed mostly of coarse gravel, with a catchment bedrock of low-permeability rocks. The 200-year flood peak flow is estimated to be 611.1 m³ s⁻¹, whereby the increase due to the climate change allowance for peak river flow (according to CS 469) is 25% for the South-West of England. The bridge is also known for periodically accumulating large wood debris at its piers after seasonal floods.
Borgoforte Bridge in Italy (Figure 5b) is a concrete bridge built in 1961 which crosses the Po River on state road no. 62 between the regions of Lombardy and Emilia Romagna (Northern Italy). The overall length of the structure is approximately 1137 m. The bridge consists of a complex structure of three different parts, as follows. The left access viaduct (Lombardy side) is located in the floodplain and consists of 9 simply supported spans, each with a length of 18.35 m (total length 165 m). The bridge over the river Po, partly located in the floodplain and partly in the riverbed, consists of 7 pairs of piers that support cantilever beams of variable height, which are connected by means of a suspended section (total length 472 m); the total span between two piers is 63.50 m, and the piers are composed of 3 circular-section columns with a diameter of 1.5 m. The right access viaduct (Emilia Romagna side) is located beyond the embankment and consists of 28 simply supported spans, each of 18.40 m length (total length 500 m).
In normal flow conditions, the width of the river is approximately 300 m. Therefore, only four of the 44 piers of the bridge are permanently in water. In the year 2000, a 15 m deep scour hole was found in the riverbed after a flood. Subsequently, the piles affected by the scour were reinforced and the hole filled (Ballio, Ballio, Franzetti, …).
Results
The first outcome of this study is a methodological comparison between the application of the two guidelines (CSLP, 2020; Takano & Pooley, 2021). A critical analysis and interpretation of the two methods resulted in the list of similarities (S) and differences (D) shown in Table 4.
A more informative analysis occurs when both methods are applied to a single bridge, demonstrating how they work and comparing their performance. In the application, the results are mostly homogeneous, although a few differences are present. Table 6 summarises the results obtained, highlighting the actions triggered by the attained risk levels. Sections 4.1 and 4.2 illustrate the application and explain the results in detail.
Results of the application of the UK method
In this section, the UK method is applied to both the UK and Italian bridges to demonstrate its operation. The application of the UK method (CS 469) to Staverton Bridge and Borgoforte Bridge showed that the two bridges attained different levels of risk, depending on the type of hydraulic action. Figure 6 shows a schematic comparison between the two structures. First, the scour risk rating is High (score of 100) for Staverton Bridge, indicating that the bridge is at immediate risk of scour, according to the UK criteria. On the other hand, Borgoforte Bridge scored 40, indicating a risk rating of Medium. As a result, Staverton will have to be treated as a substandard structure (where actions have to be taken immediately) by the competent authority, whilst Borgoforte Bridge will only require a monitoring plan.
Since the priority factors of the two structures are comparable (1.638 for Staverton and 1.747 for Borgoforte Bridge), the different ratings are mostly due to the relative scour depth (D_R): for Borgoforte Bridge, D_R is relatively low (1.14), because the foundation depth is comparable to the scour depth. Conversely, for Staverton Bridge D_R is very high (9.71), although this was calculated on an assumed foundation depth of 1 m, as per the recommendation of the UK approach. Nevertheless, the type of structure (a masonry arch bridge from the 15th century) suggests that the real foundation depth will be of the order of 1-2 m, resulting in high values of D_R.
For the assessment of hydraulic actions (e.g., uplift), both bridges resulted in low vulnerability, meaning that the bridge deck and parapet are unlikely to be submerged in either case and thus to be at risk of hydraulic actions. Although in both cases the clearance under the bridge deck is less than 1 m, the UK method would not require any further assessment. It should be noted that the hydraulic risk is not explicitly specified in CS 469, since the analysis is treated as vulnerability. For the purposes of this work, the risk of hydraulic actions has been assumed to be low since no vulnerability was identified.
The hydraulic assessment also suggested that Staverton Bridge might be at risk from debris impact damaging or dislodging the bridge, whilst this is not the case for Borgoforte Bridge. This assessment is based only on energy considerations at the bridge opening (Section 3.1), i.e., the flow specific energy under the bridge at Staverton Bridge was ca. 0.60 m lower than the bridge soffit depth, although it should be considered that both bridges have a history of debris accumulation. Finally, the vulnerability to channel instability for this study resulted as low for Borgoforte Bridge and medium for Staverton. This difference is based on the qualitative assessment adopted in the UK method: Borgoforte Bridge met enough criteria to qualify for a low rating, whilst Staverton Bridge did not meet enough criteria for either a low or a high rating, hence it was taken as Medium.
Results of the application of the Italian method
The results relating to the application of the Italian guidelines are shown in Figure 7. Both Borgoforte and Staverton bridges are classified into the High global AC for hydraulic actions. Nevertheless, it is observed that, in general, the partial ACs assume different values for the two case studies. Two major differences exist, concerning: (i) the partial AC for exposure, and (ii) the partial ACs for the hazard relating to general scour and local scour.
As for the exposure, the Borgoforte and the Staverton bridges fall into the High and Low partial AC, respectively. Indeed, the Borgoforte bridge is one of the major bridges of the Mantua Province, and it serves both local and interregional traffic. Moreover, it has been estimated that the indirect costs related to the closure of the bridge (including the cost of time lost by users and the additional costs of running vehicles) were, in 2013, of the order of 88.7 million Euro/year (Eupolis Lombardia, 2013). Conversely, the Staverton bridge, although relevant from a historical point of view, does not play a significant role within the road network, and therefore its closure does not imply severe consequences.
Regarding the hazard for general and local scour, the Borgoforte and the Staverton bridges fall into the Low and High partial AC, respectively. These results are mainly due to the different structural properties of the two bridges (concrete vs arch masonry bridge). As for the general scour, the Borgoforte bridge presents very low values of the ratios C_a and C_g (see Sec. 3) due to the relatively small dimensions of the piers with respect to the dimensions of the river, corresponding to a Low AC. Conversely, the two ratios are quite high for Staverton bridge due to its massive structure with respect to the river dimensions. Regarding local scour, the AC is strongly influenced by the value of the foundation depth, which is 18 m for the Borgoforte bridge and unknown for the Staverton bridge (therefore the reference value of 2 m is employed in the calculations).
Sensitivity analysis
A sensitivity analysis is carried out in this section to identify the dominant factors of the methods proposed by the two national documents. Specifically, for the UK method, the sensitivity analysis has focused on varying the individual parameters that contribute to the value of the priority factor P_F according to Equation (1). For the Italian guidelines, the sensitivity analysis is performed to assess the relative importance of each partial AC involved in the computation of the global AC for hydraulic risk.
For the UK method, since the estimation of scour depth is deterministic, the analysis has been based on the heuristic priority factor P_F. For both Borgoforte and Staverton bridges, the individual factors that contribute to P_F have been varied across the whole range of available values and, consequently, the risk rating has been re-assessed for each new combination, based on the scour risk rating graph in Figure 2. Results are reported in Figure 8 for Staverton Bridge at the piers and abutments. On the vertical axis the individual elements of the priority factor P_F are reported, whilst on the horizontal axis the values that contribute to P_F for each individual factor (see Equation (1)) are displayed.
In the case of Borgoforte Bridge, since the relative scour depth is low (i.e., D_R = 1.14), any variation of P_F does not produce any difference. From Figure 2, any D_R value in the range 1-1.4 will always result in a rating of Medium and a score of 40, irrespective of the priority factor P_F. A similar situation is also observed for Staverton bridge at the piers, although for the opposite reason. In this case, the relative scour depth is high (D_R = 9.71), and the score would only decrease from 100 to 80 for priority factors below 0.85; therefore, any admissible variation of the priority factor would not affect the final risk rating, which will remain High (score 100).
Regarding the abutments of Staverton Bridge, D_R is lower (4.92), so the variation of P_F produces a change of one risk class for variations of the factors F (i.e., foundation type) and H_S (i.e., history of scour). When F is varied, values of 1 and 0.75 would still result in the High risk class, but with a lower score (80) than for its highest value (1.25). For the history of scour factor H_S, a value of 1 (instead of 1.5) would lower the score to 80 (although still being considered High risk). It should be noted that, according to CS 469, the overall bridge risk rating should be taken as the highest among all structural elements; therefore, even if the risk rating of the abutments were to be reduced, the bridge rating would be unchanged.
Overall, the analysis showed that the approach proposed by CS 469 tends to have a low sensitivity to variations of the priority factor P_F, although the two cases analysed here had relative scour depths lying at two opposite ends of the spectrum, whereby intermediate values might show a slightly higher sensitivity to changes in P_F. For variations of the relative scour depth D_R (and, consequently, of the total scour depth D_T), a change in risk score only occurs when the increase (or decrease) of the estimated maximum scour depth is significant: for Borgoforte bridge, a 46% increase of the total scour depth would be necessary to increase the scour risk rating to a score of 60. On the other hand, a reduction of 14% of the total scour depth would be required to reduce the risk rating to Low and a score of 10. This means that it is highly unlikely that the bridge will attain a higher risk level, even with a more conservative approach. For Staverton bridge, the situation is the opposite: a change in risk rating from a score of 100 to a score of 80 is only possible when the total scour depth is reduced by a factor of 3 for the piers (1.5 for the abutments). Thus, Staverton bridge is likely to be assessed at the highest risk level even with less conservative scour depth estimations.
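The one-at-a-time procedure used for the P_F factors can be sketched generically: vary each factor over its admissible values while holding the others fixed, and record the distinct scores produced. The scoring rule is passed in as a function, since the real one is read off the chart in Figure 2; everything here is an illustrative sketch, not the paper's computation.

```python
from math import prod

def pf_sensitivity(factors, ranges, d_r, score_fn):
    """One-at-a-time sensitivity of the risk score to the P_F factors.

    factors:  current factor values, name -> value
    ranges:   admissible values to try, name -> iterable of values
    d_r:      relative scour depth (held fixed during the analysis)
    score_fn: scoring rule, (d_r, p_f) -> score
    Returns name -> set of scores observed while varying that factor.
    """
    results = {}
    for name, values in ranges.items():
        scores = set()
        for value in values:
            trial = dict(factors, **{name: value})  # vary one factor only
            scores.add(score_fn(d_r, prod(trial.values())))
        results[name] = scores
    return results
```

A factor whose result set contains a single score (as for Borgoforte, where D_R = 1.14 pins the score at 40) has no influence on the rating.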
The method proposed by the Italian guidelines entails seven partial ACs in total (see Sec. 3.2): the hazard and vulnerability ACs for the three hydraulic-related phenomena (insufficient Minimum vertical Clearance (MC), General Scour (GS), and Local Scour (LS)), plus one AC for the exposure. In this sensitivity analysis, the ACs are modified one by one, keeping the other ACs constant and equal to the value obtained in the analysis.
The results of the sensitivity analysis for the Borgoforte Bridge are displayed in Figure 9, where the horizontal axis shows the values of the partial ACs for the factors (indicated on the vertical axis) that contribute to risk. The most important hydraulic-related phenomenon appears to be the minimum vertical clearance. This depends on the combination rule considered in the guidelines, where the partial ACs for general and local scour are further combined to obtain the global AC for scour (see Figure 6), whereas the partial AC for minimum clearance contributes directly to the evaluation of the global AC. As for the Staverton Bridge, any variation of a single AC does not modify the global AC. The main reason for this result is that the ACs for minimum clearance and scour are both High: even if one of them is lowered, the global AC remains High (note that the global AC is assigned as the most severe AC between the AC for minimum clearance and the AC for scour).

* The AC for hydraulic risk should be combined with the ACs relevant to the other three risk types. In the table, it is assumed that the AC for hydraulic risk and the general AC are the same.

Table 4. Summary of methodological similarities and differences between the Italian and UK methods (authors' critical analysis).

SIMILARITIES (S)
S1) to assess scour risk separately from the other hydraulic actions.
S2) to account for the history of scour problems, if recorded via past inspections.
S3) to consider indirect consequences, within a network-level perspective: the UK P_F includes a community severance or disruption factor, i.e., the relevance of the bridge for the served community, rerouting, or access to critical services, while the Italian method considers the daily traffic level, the presence of alternative routes, and the importance of the bridges during an emergency.
S4) to produce risk classes as a final outcome.
S5) to recommend similar actions in relation to the risk class, i.e., to gather more information. However, the UK guidelines include urgent actions for High Risk bridges scoring 100 in the assessment (see D4).
S6) to consider effects due to debris, although the UK guidelines include the effects of debris quantitatively, whilst the Italian guidelines only do so qualitatively.
S7) to overlook direct consequences: both approaches do not consider the cost of repair/replacement.

DIFFERENCES (D)
D1) the UK guidelines assess risk due to scour actions only; for hydraulic actions and channel stability, a vulnerability assessment is proposed. The Italian guidelines encompass risk assessment for four hazards (seismic, landslide, stability, and hydraulic); all the risk types are combined to obtain the final rating.
D2) regarding hydraulic actions, the UK guidelines assess various risks separately (i.e., scour actions, debris forces, hydraulic actions, and channel stability); scour is assessed independently from hydraulic actions, e.g., if the bridge results as submerged, more calculations are prescribed to measure hydraulic loads for deck uplift (no risk rating). For the Italian guidelines, no action can be taken considering only one type of risk (e.g., hydraulic).
D3) the UK guidelines prescribe calculations for local and constriction scour, water depth, and velocity at several cross-sections; with reference to Level 2, the Italian guidelines are based on records and visual inspections, and do not involve calculations. For example, the UK method calculates local scour via a multi-factor empirical equation accounting for pier shape, angle of attack, and debris accumulations, while the Italian method assumes a scour depth of two times the pier width in the absence of targeted inspections.
D4) the UK guidelines account for climate change by considering a 20-30% allowance for the 200-year return period flood peak flow (based on the geographical area; Takano & Pooley, 2021); the Italian guidelines do not explicitly account for climate change (the water levels used in the computation of the vertical clearance are retrieved from flood maps where climate change is accounted for according to the current technical legislation, D. Lgs. 49, 2010).
D5) the outcome of the UK method's risk assessment procedure could also include operative actions, e.g., bridge closure or traffic limitations when at high risk; the outcome of the risk assessment for the Italian method is limited to gathering more information or carrying out in-depth investigations in the case of a bridge at high risk (Levels 3-4) (see S4).
D6) the UK method does not include damage to humans and the environment (nor any direct consequences at all), while the Italian method assumes the bridge length as a proxy for human casualties (see S7) and includes as a risk factor the presence of dangerous materials, i.e., those substances (carried along the bridge) which, due to their particular nature, are capable of producing significant damage to people and the environment.
D7) as for the minimum clearance, the UK method considers whether the bridge is submerged or not during the 200-year (plus climate change correction) return period flood peak flow (see also D1). For the Italian method, the hazard AC for insufficient vertical clearance is determined considering the magnitude of the minimum clearance relating to floods of different return periods.
Discussion on compatibility of approaches
This work is one of the first to illustrate and show how to apply the latest UK (2021) and Italian (2020) guidelines to bridges subjected to scour and other hydraulic actions. Both methods are risk-based and produce compatible results when comparing the approaches directly against each bridge. The scour risk obtained for Staverton Bridge resulted as High using both methods; the score for Borgoforte Bridge resulted higher for the Italian method, as compared to the UK method. The UK method categorised Borgoforte Bridge as Medium (40), which means that only standard monitoring is required; the Italian method assessed Borgoforte Bridge as Medium-High risk, but it is necessary to finalize Level 2 of the analysis to understand which actions to take (and therefore combine the ACs relevant to all risk types). Additional national codes are available worldwide, and future research could advance additional case studies for comparison.
In general, it is the hydraulic action assessment that showed significant differences between both methods: the Italian approach assessed Staverton as Medium-High hazard for minimum clearance, whilst the UK approach considers this bridge unlikely to be submerged. The main difference can be identified in the way the two methods assess the vulnerability of this particular hydraulic phenomenon: the UK method will consider bridge submergence only if the upstream water depth is higher than the bridge soffit (or the specific energy of the flow through the bridge is higher than the bridge soffit height). On the other hand, for the Italian approach, even if the design flow will not reach the bridge soffit, minimum clearance is required before lowering the rating, making the Italian method more conservative than the UK approach.
It can be observed that the combination procedure for ACs proposed by the Italian guidelines adopts a conservative approach, which accounts for the uncertainties due to the qualitative level of its method. For instance, the highest value between the AC for insufficient vertical clearance and the AC for scour is selected as the global AC. However, a High AC for hydraulic actions does not necessarily imply that the global AC of the bridge would be High overall. Indeed, in the Italian guidelines, the global AC for hydraulic risk must be combined with the ACs for the structure-foundation risk, seismic risk, and landslide risk, and often it is the AC for structure-foundation risk which has the highest impact on the evaluation of the global AC, as highlighted in Santarsiero et al. (2021).
It should also be noted that the Italian method accounts for contraction and local scour in the hazard computation of the partial AC, whilst general scour is considered for the assessment of the vulnerability. The UK approach instead calculates the depth of scour due to each phenomenon and estimates the risk as a compound value. Whilst the UK method also considers channel stability around a bridge, this is not included in the Italian approach, making it a potential candidate for future implementation in further iterations. Another interesting observation relates to the sensitivity of the two approaches to the variations of the factors that affect the risk evaluation. The UK method seems to have a low sensitivity to the variation of the assessment factors, whereby the scour risk rating does not change for either bridge, despite varying all heuristic factors across the existing range. The same behaviour could not be observed for the Italian procedure, as the AC for hydraulic risk is affected by the partial ACs, as highlighted in Sec. 4.3. Thus, it can be inferred that the response from the UK method is less sensitive than the Italian one, as it is unlikely that the risk rating would change with changing values within the priority factor. It should be noted, though, that the two bridges reside at two opposite ends of the scour spectrum, i.e., Staverton Bridge with a very high relative scour depth and Borgoforte Bridge with a low one. Therefore, other structures with intermediate conditions might display a more marked sensitivity than witnessed in these two case studies.
This work focuses on two specific case studies; however, national agencies manage thousands of bridges in their portfolios. In this respect, the main drawback of both methods is that the final classification includes five risk classes without a ranking. The lack of a ranking means that a prioritisation process among structures is challenging (e.g., which bridges are at most risk in the high-risk category?). The Italian method was applied considering scour and hydraulic risk only, and future studies should verify the landslide, seismic, and structure-foundation risk as well, according to the complete multi-risk procedure. In the UK, the guideline is not multi-risk: despite the UK not being a seismic region, the landslide and structural risks would be relevant and could be considered. Both the UK and Italy are countries with relevant heritage (e.g., 40% of UK bridges are historical assets; Sasidharan et al., 2021); however, neither approach considers this aspect. This could be an additional element to integrate into the evaluation of the 'importance' of the bridge, by updating how exposure is evaluated. In recent years, increasing emphasis has been put on vertical contraction scour, i.e. the erosion depth caused by pressurised flow under bridges (Majid & Tripathi, 2021). Nevertheless, neither the Italian nor the UK method considers vertical contraction scour, which could be included for further estimation of the scour depth. Finally, the UK method could also be improved by reviewing the Priority Factor, e.g., including direct costs such as the cost of repairing/replacement or costs associated with affected people.
The availability of data (e.g., both the UK and Italian methods request a visual inspection at Level 1) could be a general barrier to the application of a methodology, especially in a context of fragmented bridge ownership and lack of national bridge databases. For example, the UK method requires a topographical survey of an affected structure, which could be difficult to achieve in large rivers where sonar (and other resources) might be required. Some automated data collection strategies, for example, automated ground penetrating radar using remote surveys, could supply additional information for the procedures, but this would require significant additional resources for each asset.
In line with the increasing role of monitoring for vulnerable structures (Giordano, Prendergast, & Limongelli, 2020), both methods recommend monitoring in their guidance. The last part of the Italian method (Sec. 7) is dedicated to inspection procedures and monitoring. Specifically, in addition to scheduled inspection, the installation of monitoring systems is recommended for structures classified in the Medium-High or High ACs in Level 2, or for strategic structures. The guidelines describe the general principles of SHM and refer to UNI TR 11634:2016 for technical details. For the UK, a new part about a 'monitoring plan' for all structures is present, requiring regular monitoring for all structures falling within the Medium or High scour risk categories. The monitoring plan is required to periodically review the condition of the bridge, any change to existing scour holes, debris accumulations, or changes in the river morphology next to the bridge. Also, neither of the analysed methods treats different bridge types (materials/static system) in the definition of failure modes; failure mode analysis would provide a more comprehensive risk-based outlook on potentially endangered bridges (e.g. to plan preventative maintenance or monitoring). However, neither method clearly relates monitoring to risk assessment; the integration of monitoring data, failure analysis, and risk assessment could be addressed by future studies.
Conclusions
This paper demonstrates the application of two recently published guidance documents for assessing hydraulic actions (including scour) on bridges, namely the UK Design Manual for Roads and Bridges and the Italian Ministry of Infrastructure and Transport Guidelines. These guidelines were explained and applied to two bridges, the Staverton Bridge in the UK and the Borgoforte Bridge in Italy. This paper is the first to compare both of these guidelines, and the comparison of the methodologies and the results provides novel insights into the benefits and drawbacks of each approach. For example, the methodologies could be further improved by considering the relevance of heritage, which is currently neglected in each method.
The application and comparison of the two guidelines may support practitioners or researchers wishing to develop their own risk-based methods, and this paper could serve as a reference for authorities wishing to incorporate risk into existing codes of practice and guidance documents for scour assessment. Future work will consider expanding the comparison to other national best practices or other relevant procedures.
Disclosure statement
No potential conflict of interest was reported by the authors.
Figure 1.
Figure 1. A schematic representation of the UK method and indication assessment process for the three aspects considered; details can be found in Takano and Pooley (2021).
Figure 2.
Figure 2. Scour risk assessment chart for CS 469, interpolating the Priority Factor PF on the horizontal axis and the relative scour depth DR on the vertical axis. Each score is identified by a category (i.e. High, Medium, Low) and a score (i.e. 10 to 100).
Figure 3.
Figure 3. A schematic representation of the hydraulic risk assessment for the Italian method. Details can be found in CSLP (2020).
Figure 4.
Figure 4. Combination tables to combine partial ACs.
Figure 5.
Figure 5. The selected structures for the case study: (a) Staverton Bridge in the UK; (b) Borgoforte Bridge in Italy.
colours indicate the value of the resulting global AC.The Exposure has greater importance with respect to vulnerability and hazard for the three hydraulic-related phenomena (MC, GS, LS): a decrease in the range M-H to M-L of the partial AC of Exposure entails a decrease from H to M-H of the global AC while a decrease of Exposure to L entails a decrease of the global AC from H to M. Changes of the partial ACs of Hazard and Vulnerability for GS and LS do not affect the global AC whereas the decrease of the partial AC for minimum clearance (MC) in the range M to L leads to a decrease of the global AC from H to M-H.
Figure 6.
Figure 6. Application of the UK method (CS 469) to the two case studies.
Figure 7.
Figure 7. Application of the Italian method to the two case studies.
Figure 8.
Figure 8. Sensitivity analysis for Staverton Bridge considering the UK method for the Priority Factor PF for piers and abutments. For Borgoforte Bridge no variation was observed.
Figure 9.
Figure 9. Sensitivity analysis for the Italian method applied to Borgoforte Bridge; for Staverton Bridge no variation was observed. MC = Minimum Clearance, GS = General Scour, LS = Local Scour.
Table 3.
Input data for the two case studies and the two methodologies; data were obtained from private reports.
Table 5.
Final risk rating for the two methods and the two bridges considered in this study (L = low, M = medium, H = high).
Maria Pregnolato was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) LWEC (Living With Environmental Change) Fellowship (EP/R00742X/2). Maria Pina Limongelli and Pier Francesco Giordano were partially funded by the Italian Civil Protection Department within the project Accordo CSLLPP e ReLUIS 2021-2022 "WP3: Analisi, revisione e aggiornamento delle Linee Guida". Diego Panici was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) Impact Acceleration Award (EP/R511699/1). All authors acknowledge Devon County Council and Italian authorities for sharing data and Dr Vardanega for discussing the topic and providing key insights.
"year": 2024,
"sha1": "b16a8ec112acabdae692793998c6cbac0347b120",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15732479.2022.2081709?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "cc7c01e346e51055924a19ade340b5f63f3eec40",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
Computational Analyses Reveal Fundamental Properties of the Hemophilia Literature in the Last 6 Decades
Hemophilia is an inherited blood coagulation disorder caused by mutations in the coagulation factor VIII or IX genes. Although it is a relatively rare disease, the research community is actively working on this topic, producing almost 6000 manuscripts in the last 5 years. Given that the scientific literature is increasing so rapidly, even the most avid reader will find it difficult to follow it closely. In this study, we used sophisticated computational techniques to map the hemophilia literature of the last 60 years. We created a network structure to represent authorship collaborations, where the nodes are the researchers and 2 nodes are connected if they co-authored a manuscript. We accurately identified author clusters, namely, researchers who have collaborated systematically for several years, and used text mining techniques to automatically synthesize their research specialties. Overall, this study serves as a historical appreciation of the effort of thousands of hemophilia researchers and demonstrates that a computational framework is able to automatically identify collaboration networks and their research specialties. Importantly, we made all datasets and source code available for the community, and we anticipate that the methods introduced here will pave the way for the development of systems that generate compelling hypotheses based on patterns that are imperceptible to human researchers.
Introduction
Hemophilia is an X-linked inherited blood coagulation disorder, affecting approximately 1 in 5000 to 25 000 live male births. 1 It is caused by the presence of a defective copy of the coagulation factor VIII (hemophilia A) or the coagulation factor IX (hemophilia B). In turn, these defective genes synthesize a partially functional or nonfunctional protein, resulting in a missing component in the precisely orchestrated coagulation cascade.
Although it is a relatively rare disorder, research in hemophilia is intense. Several groups are working to uncover the fundamental aspects of coagulation factor biology, 2,3 improve patient care, 4 develop physiotherapy programs, 5 improve therapeutics, 6 and advance gene therapies. 7 In all, hemophilia research encompasses all areas of biomedical research.
As in other fields, the main channels used by hemophilia researchers to communicate their findings are the peer-reviewed scientific journals in English. Due to the transition from printed to the electronic form, the number of scientific journals and articles increased dramatically in the last 2 to 3 decades. Thus, even considering only hemophilia research, it is already difficult for professionals to stay up-to-date with all the latest findings. Similar to other hematological disorders, this trend points to a near future where it will be unfeasible for humans to read all published studies.
In other areas, researchers started addressing this issue by creating automatic text summaries, 8 classifying studies according to its contents, 9 recommending articles based on reading records, 10 and notably, making automatic discoveries by connecting dispersed factual information. 11 For hemophilia research, in particular, these applications are still lacking.
In this study, we gave the first step in this direction by developing a computational framework that maps the knowledge accumulated in hemophilia research in the last 60 years. First, we created a network where the nodes represent the hemophilia researchers, and 2 nodes are connected if they coauthored a manuscript. In previous studies, coauthorship networks proved itself useful to reveal meaningful patterns of scientific collaboration, 12 as well as serve as a historical record for future generations. 13 We used this hemophilia coauthorship network to automatically find groups of authors (ie, clusters), who have collaborated systematically for many years. We used this information as input for text mining algorithms and found that even with minimum processing, it is already possible to automatically identify the topics representing the essence of the work performed by each group.
Thus, the contribution of this study in the short term is that it helps researchers to visualize and identify potential competitor and collaborator groups, and in the long term, the computational methodology introduced here paves the way for the development of automatic knowledge curation and discovery systems that are tailor-made for hemophilia research.
We downloaded all abstracts in the Medline format and processed them by in-house scripts to extract the abstract text and the authors.
Extracting author names to build a network
We extracted the author names from each abstract record using Python scripts and the Biopython package. 14 We considered only articles with more than one author. Next, we built an undirected graph where the nodes represent the authors and created an edge between 2 nodes if they co-authored a manuscript. The weight on each edge is the number of manuscripts co-authored by the 2 authors. Moreover, we considered symmetrical edges, meaning that A-B is the same as B-A. We pruned the complete network by leaving only authors with 2 or more publications related to hemophilia (Supplementary Table 1 has the complete network).
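The construction described above can be sketched in a few lines of dependency-free Python (the real pipeline used Biopython for Medline parsing; the toy author lists below are invented):

```python
from collections import Counter
from itertools import combinations

def build_coauthorship_network(papers):
    """papers: list of author-name lists. Returns (edge_weights, pub_counts).
    Edges are symmetric (stored with sorted endpoints) and weighted by the
    number of co-authored manuscripts; single-author papers are skipped."""
    edges, pubs = Counter(), Counter()
    for authors in papers:
        if len(authors) < 2:
            continue
        for a in set(authors):
            pubs[a] += 1
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] += 1
    return edges, pubs

def prune(edges, pubs, min_pubs=2):
    """Keep only authors with >= min_pubs papers, as done for Hem-AuthNet."""
    keep = {a for a, n in pubs.items() if n >= min_pubs}
    return {e: w for e, w in edges.items() if e[0] in keep and e[1] in keep}

papers = [["Smith", "Jones"], ["Smith", "Jones", "Lee"], ["Lee", "Wu"]]
edges, pubs = build_coauthorship_network(papers)
print(edges[("Jones", "Smith")])   # 2: they co-authored two papers
print(sorted(prune(edges, pubs)))  # Wu (a single paper) is pruned out
```

Storing each edge with sorted endpoints makes A-B and B-A the same edge, matching the symmetric-edge convention described in the text.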
Network processing and visualization
To calculate the centrality measures of the coauthorship network, we used the R statistical package (www.r-project.org) and the iGraph package. 15 We used its functions to calculate the degree, betweenness, closeness, Burt's constraint, authority score, PageRank-like, and Kcore, with their standard parameters.
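The paper computed these measures with iGraph in R; purely for illustration, here is a hypothetical dependency-free Python sketch of the two measures the analysis ultimately retained, degree and (brute-force) shortest-path betweenness, on a five-node toy graph. Real networks of this size require an efficient algorithm such as Brandes', which iGraph implements.

```python
def all_paths(adj, s, t, path=None):
    """All simple s-t paths (only feasible for toy graphs)."""
    path = (path or []) + [s]
    if s == t:
        return [path]
    return [p for v in adj[s] if v not in path
            for p in all_paths(adj, v, t, path)]

def betweenness(adj):
    """Unnormalised shortest-path betweenness, brute force."""
    nodes = sorted(adj)
    score = {v: 0.0 for v in nodes}
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            paths = all_paths(adj, s, t)
            if not paths:
                continue
            d = min(map(len, paths))
            sp = [p for p in paths if len(p) == d]
            for p in sp:
                for v in p[1:-1]:  # credit only interior nodes
                    score[v] += 1 / len(sp)
    return score

# toy graph: triangle A-B-C with a tail C-D-E; C bridges the tail
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
       "D": ["C", "E"], "E": ["D"]}
degree = {v: len(ns) for v, ns in adj.items()}
print(degree["C"], betweenness(adj)["C"])  # C: degree 3, betweenness 4.0
```

Node C illustrates the "bridge" role discussed in the text: it lies on every shortest path between the triangle and the tail, so its betweenness dominates even though its degree is only slightly higher than its neighbours'.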
We visualized the network and prepared the manuscript figures using Cytoscape 16 version 3.8.
SPICi for finding clusters and text processing
To find clusters of authors in the coauthorship network, we used SPICi, 17 with the parameters [-s 3 -d 0.1 -g 0.4]. All clusters are available in Supplementary Table 4.
For each cluster, we selected the manuscripts authored by at least 3 of the authors who are members of the given cluster. We considered only clusters that had at least 3 representative studies. Next, we concatenated the abstracts from the selected manuscripts and processed them to combine plurals (eg, inhibitors and inhibitor) and removed words and synonyms that are common in hemophilia (eg, "hemophilia," "hemophilia," "FVIII," "factor," "FIX").
Finally, we used an online server to process and depict the contents of the corpus containing the abstracts from each author-cluster (https://www.wordclouds.com/).
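The preprocessing and term-counting step can be illustrated with a small, dependency-free sketch (the stopword list is abbreviated and the abstracts are invented; the actual pipeline fed such frequency counts to the word-cloud generator):

```python
import re
from collections import Counter

# abbreviated domain stopword list, analogous to the one described above
STOP = {"hemophilia", "haemophilia", "factor", "fviii", "fix",
        "the", "of", "in", "and", "a", "to", "for", "with", "after"}

def topic_terms(abstracts, top_n=5):
    """Concatenate abstracts, fold naive plurals, drop stopwords,
    and return the most frequent terms (a crude word-cloud input)."""
    words = re.findall(r"[a-z]+", " ".join(abstracts).lower())
    # naive plural folding: 'inhibitors' -> 'inhibitor'
    words = [w[:-1] if w.endswith("s") and len(w) > 3 else w for w in words]
    counts = Counter(w for w in words if w not in STOP and len(w) > 2)
    return counts.most_common(top_n)

abstracts = ["Hemophilia inhibitors complicate therapy.",
             "Inhibitor development after gene therapy.",
             "Gene therapy reduces inhibitors."]
print(topic_terms(abstracts))  # 'inhibitor' and 'therapy' dominate
```

Even this crude counting separates the invented cluster's themes, which is consistent with the paper's observation that minimal processing already surfaces each group's specialty.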
Prediction of the number of manuscripts to be published in the future
The prediction of the number of papers published annually in this area was performed using an ARIMA model, available in the statsmodels package, 18 version 0.13.1, which describes the time series behavior by combining 3 different methods. We used the R statistical package version 3.4 (www.r-project. orgwww.r-project.org) and Python version 3.6.9 (https://www. python.org/).
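Purely as a shape-of-the-idea sketch: the paper fitted an ARIMA model from statsmodels; the dependency-free stand-in below deliberately substitutes a least-squares straight-line trend, and the annual counts are invented.

```python
def fit_trend(years, counts):
    """Ordinary least-squares line through annual publication counts.
    (Deliberately simpler than the ARIMA model used in the paper.)"""
    n = len(years)
    mx, my = sum(years) / n, sum(counts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, counts))
             / sum((x - mx) ** 2 for x in years))
    intercept = my - slope * mx
    return lambda year: intercept + slope * year

years = [2018, 2019, 2020, 2021, 2022]   # hypothetical
counts = [820, 860, 900, 940, 980]       # hypothetical annual totals
predict = fit_trend(years, counts)
print(round(predict(2025)))  # 1100: extrapolated annual count
```

An ARIMA model additionally captures autocorrelation and differencing, which matters for noisy, non-linear series; a trend line only conveys the extrapolation step.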
Code and data availability
The source code and the datasets used in the study are available at https://github.com/madlopes/Hem-AuthNet.
Properties of the hemophilia authorship network
The representation of information as a network is a convenient way to depict a relationship between entities. To build a coauthorship network of hemophilia studies, we queried the PubMed database using a carefully built search term with several synonyms and aimed to include abstracts genuinely related to hemophilia while excluding articles that only occasionally mentioned terms from this field (see Methods). We downloaded a set of more than 20 000 textual abstracts in English, covering the period of 1960-2022.
Next, we created an undirected graph, where the nodes are the manuscript authors, and 2 nodes are connected by an edge if they co-authored at least 1 manuscript. In this network, the weights of the edges are the number of studies that the 2 researchers co-authored.
This approach yielded a network with more than 54 000 nodes and 305 000 edges. Upon closer inspection, we noticed that this network was too large to be processed using current algorithms, and several authors had only a single publication in the field; therefore, we pruned the network by including only authors with 2 or more publications related to hemophilia. In the end, our coauthorship network had 14 767 nodes and 117 257 edges, and retained only the authors who made a continuous contribution to the field; we termed this network Hem-AuthNet (Supplementary Table 1).
In general, the Hem-AuthNet is a very dense and compact network, as evidenced by its more than 14 000 nodes connected and forming a very large central component (Figure 1A). Moreover, given its diameter, we found that the number of intermediates between any 2 researchers consistently working in this field is at most 15. Although Hem-AuthNet does not take into account the time component (ie, some studies were published decades apart), the presence of a large central component and the possibility of reaching all nodes with a small number of steps indicate that hemophilia is a highly collaborative field, likely due to the rarity of this disease and the small number of groups actively working on it.
Next, we investigated the connectivity properties of all authors in this network, namely, what kind of position they occupy within the hemophilia research landscape. For this purpose, we calculated several centrality measures of the Hem-AuthNet nodes; however, given the strong correlation that these measures displayed to each other, we found that only 2 measures sufficed (Figure 1B). Thus, for this analysis, we used the degree (how many connections a node has) and the betweenness (to what extent a node serves as a bridge to groups that otherwise would not be connected) (Figure 1C). We found that while most nodes make only a few connections, a few nodes have several dozen connections; for instance, among the most connected authors, ~100 co-authored manuscripts with more than 150 researchers. Moreover, the broad betweenness distribution displayed in Figure 1D indicates that while some authors co-authored studies only with their immediate contacts, other authors served as "bridges" between different groups and most likely participated in large multidisciplinary studies. Interestingly, the distribution of these centrality measures is analogous to the properties exhibited by networks of a completely different nature, like the population size of cities 19 and the magnitude of earthquakes. 20 Finally, we wondered which are the most central nodes in the whole Hem-AuthNet. To answer this question, we considered both the degree and the betweenness measures in conjunction (top 1% in both) and found that at least 108 authors fulfilled these criteria (Supplementary Table 3); with their publication records combined, these authors have published more than 1000 manuscripts, have collaborated with thousands of researchers, and have careers spanning several decades (Supplementary Tables 2 and 3).
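The joint "top 1% in both measures" criterion can be sketched as follows (the scores below are synthetic; the real analysis used the centrality values reported in Supplementary Tables 2 and 3):

```python
def hubs(degree, betweenness, top_frac=0.01):
    """Authors ranking in the top `top_frac` of BOTH centrality measures."""
    def top_set(scores):
        k = max(1, int(len(scores) * top_frac))
        return set(sorted(scores, key=scores.get, reverse=True)[:k])
    return top_set(degree) & top_set(betweenness)

# 100 synthetic authors; only 'a99' is extreme on both measures
degree = {f"a{i}": i for i in range(100)}
betweenness = {f"a{i}": (i % 50) + (50 if i == 99 else 0) for i in range(100)}
print(hubs(degree, betweenness))  # {'a99'}
```

Intersecting the two top sets, rather than taking either alone, selects authors who are both highly connected and serve as bridges, which is the joint criterion behind the 108 central authors above.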
Taken together, these results indicate that Hem-AuthNet automatically identifies emerging authorship patterns in the hemophilia scientific literature. This approach offers a method to quickly identify the most prolific authors, their position within their collaboration network, and encode these patterns digitally, in a format that can be used for further in silico analyses.
Characterizing clusters of collaborators and their work
Figure 1. In this network, high-degree nodes represent the authors who co-authored studies with several other researchers, whereas low-degree nodes represent those who collaborated with only a few others. High-betweenness nodes are those who served as a "bridge" between groups that would have no connection otherwise; on the contrary, low-betweenness nodes are the members of groups where most members are connected directly to each other. (D) While the vast majority of researchers in the Hem-AuthNet co-authored manuscripts with less than ~20 other researchers, a few of them served as the "hubs" of the network (ie, high-degree and high-betweenness). PR-like, PageRank-like.

After studying the network characteristics of individual authors, we wondered about the properties that can be derived from groups of authors. For this purpose, we used a network
processing algorithm to identify clusters in the Hem-AuthNet, namely, groups of authors who have collaborated and published numerous hemophilia studies together.
In total, we found 25 clusters with sizes ranging from 3 to 15 authors (Figure 2; Supplementary Table 4). Although hemophilia researchers sporadically participate in studies involving several groups, our cluster detection algorithm identified the main network of each researcher, namely, the group of collaborators with whom they produced most of their studies. Interestingly, the clusters we found were marked by the presence of 1 or 2 senior authors and by several junior members. As Figure 2 depicts, the senior authors are easily distinguished by the node sizes, reflecting their number of publications. Moreover, it is clear that while most senior authors have close, persistent collaborations with only a few other researchers, most of the collaborations are only transient (Figure 1; Supplementary Table 1), probably due to the structure of most modern academic institutions.
Next, we used text mining algorithms to automatically analyze and determine the research specialties in the body of scientific work produced by the members of each cluster. For this purpose, we processed the ~20 000 manuscript abstracts related to hemophilia and selected those that had at least 3 authors from each cluster. In these texts, we found that their terms and sentences could readily identify the research interests of each group. As shown in Figure 3, these algorithms accurately found the groups working on the development of emicizumab, 21 the bispecific antibody for hemophilia A prophylaxis (cluster 2), patient care (cluster 3), gene therapy (cluster 5), and inhibitor development (cluster 9), demonstrating that even with minimal processing and using only a handful of abstracts per group, the research topics in hemophilia are so specific that they are surprisingly suitable for algorithmic analysis.
Evidently, the information captured and represented by the Hem-AuthNet platform is a "snapshot" of the hemophilia literature, and this field is undergoing a permanent increase in the number and variety of topics (until 2025, we predict it will reach more than 1000 studies per year, or 1 study every ~8 hours; Supplementary Figure 1). In addition, we understand that the Hem-AuthNet layout and connectivity changes based on its input parameters, therefore, we took special care to make all datasets and source code available in a simple and intuitive format to enable the community to reproduce and extend our findings (see Data availability).
In summary, these results demonstrate the feasibility of representing the hemophilia research landscape as a network and show that this structure contains all information required for algorithms to reveal informative patterns and trends. Given the accelerating pace at which the hemophilia literature is growing, it is encouraging to verify that text mining techniques can promptly identify the research topics of each group based solely on the abstracts of their work.
Discussion
In this study, we created a comprehensive map spanning 6 decades of research in hemophilia (we named it the Hem-AuthNet). In this framework, we represented thousands of researchers, their collaborations, and the contents of their work. From a historical perspective, this work is an appreciation of thousands of careers dedicated to understanding the details of this bleeding disorder; from a practical point of view, the computational methods presented here enable researchers to make sense of the current vast hemophilia research landscape and to narrow down the scientific material that best aligns with their interests.
Even for a field with a scientific body of modest size (~20 000 articles), the complexity and number of authors composing the Hem-AuthNet largely surpass the human capacity to derive meaningful patterns from this structure. The representation of scientific collaborations as a network is a convenient way to create a structure that can be explored by algorithms. Our network analysis methods found that as in other research fields (eg, physics 22 ), the hemophilia research network also has "hubs"-namely, the authors who collaborated with hundreds of researchers and published dozens of articles ( Figure 1). Interestingly, some of these hub researchers also served as "connectors" between different research groups (Supplementary Table 3); given that academic groups are often highly specialized in a few techniques, these researchers probably played a pivotal role in facilitating the development of studies that would otherwise not be conducted. The importance of persons interfacing and connecting different groups is a recurrent topic in social science studies, 23 and the Hem-AuthNet framework was able to detect and quantify this phenomenon in hemophilia research as well.
Interestingly, using the coauthorship network as input, we used graph analysis algorithms to find parts of the network that were strongly connected (ie, clusters). As in other networks derived from a variety of human activities, 24 we observed that in the more than 20 clusters, some collaborations were persistent and spanned several years, and others were only transient and sporadic (Figure 2). This is likely an emergent property of scientific collaboration networks, given that there is a small number of senior researchers, and a large number of junior members who undergo scientific training for only a few years.
Although it is important to visualize the connections made by the researchers working in the hemophilia field, it is essential to develop algorithms that make sense of their work. We found that using text mining techniques, we could identify the research topics representing the essence of each cluster-ranging from clinical care to molecular biology and drug development ( Figure 3). This feature is particularly important because we predict that the literature related to hemophilia will increase dramatically (Supplementary Figure 1). Thus, it is important to create algorithms to automatically discover relevant content before researchers miss key studies due to the notorious information overload that already affects other fields. 25 In this sense, the research presented here opens interesting avenues for research. Perhaps the most exciting is the automatic discovery of patterns and connections between factual data that are not apparent to humans. These powerful techniques are already used to uncover the role of mutant genes in disease pathways, 26,27 and to help synthesize novel materials that display notable physical properties. 11 If applied to hemophilia research, we anticipate that these methods will foster even better clinical care, physiotherapy programs and help in the resolution of issues threatening hemophilia patients (eg, the development of inhibitory antibodies and events of intracranial hemorrhage).
Conclusions
In summary, the framework presented here accurately represents the work produced by a large collaboration network established by thousands of hemophilia researchers in the last 6 decades. We expect that this system will facilitate knowledge discovery and will accelerate the development of superior treatments for people living with hemophilia.
Author Contributions
TJSL conceptualized the study and designed the analysis. TL, TN, and RR performed the analyses, interpreted the results and wrote the manuscript.
Supplemental Material
Supplemental material for this article is available online.
CAT: A CTC-CRF based ASR Toolkit Bridging the Hybrid and the End-to-end Approaches towards Data Efficiency and Low Latency
In this paper, we present a new open source toolkit for speech recognition, named CAT (CTC-CRF based ASR Toolkit). CAT inherits the data-efficiency of the hybrid approach and the simplicity of the E2E approach, providing a full-fledged implementation of CTC-CRFs and complete training and testing scripts for a number of English and Chinese benchmarks. Experiments show CAT obtains state-of-the-art results, which are comparable to the fine-tuned hybrid models in Kaldi but with a much simpler training pipeline. Compared to existing non-modularized E2E models, CAT performs better on limited-scale datasets, demonstrating its data efficiency. Furthermore, we propose a new method called contextualized soft forgetting, which enables CAT to do streaming ASR without accuracy degradation. We hope CAT, especially the CTC-CRF based framework and software, will be of broad interest to the community, and can be further explored and improved.
Introduction
Deep neural networks (DNNs) of various architectures have become dominantly used in automatic speech recognition (ASR), and approaches can roughly be classified into two categories -the DNN-HMM hybrid and the end-to-end (E2E) approaches. Initially, the DNN-HMM hybrid approach was adopted [1], which is characterized by using the frame-level loss (cross-entropy) to train the DNN to estimate the posterior probabilities of HMM states. GMM-HMM training is first needed to obtain frame-level alignments, and then the DNN-HMM is trained. The hybrid approach usually consists of a DNN-HMM based acoustic model (AM), a state-tying decision tree for context-dependent phone modeling, a pronunciation lexicon and a language model (LM), which can be compactly combined into a weighted finite-state transducer (WFST) [2] for efficient decoding.
Recently, the E2E approach has emerged [3,4,5,6], which is characterized by eliminating the construction of GMM-HMMs and phonetic decision trees, training the DNN from scratch (in a single stage) and, more ambitiously, removing the need for a pronunciation lexicon and training the acoustic and language models jointly rather than separately. The key to achieving this is to define a differentiable sequence-level loss for mapping the acoustic sequence to the label sequence. Three widely used E2E losses are based on Connectionist Temporal Classification (CTC) [3], the RNN-transducer (RNN-T) [5], and attention based encoder-decoders [6] respectively.
When comparing the hybrid and E2E approaches (modularity versus a single neural network, separate optimization versus joint optimization), it is worthwhile to note the pros and cons of each approach. († Corresponding author. This work is supported by NSFC 61976122. Code released at https://github.com/thu-spmi/CAT.) The E2E approach aims to subsume the acoustic, pronunciation, and language models into a single neural network and perform joint optimization. This appealing feature comes at a cost, i.e. E2E ASR systems are data hungry, requiring thousands of hours of labeled speech to be competitive with hybrid systems [7,8,9]. In contrast, the modularity of the hybrid approach permits training the AM and LM independently and on different data sets. A decent acoustic model can be trained with around 100 hours of labeled speech, whereas the LM can be trained on text-only data, which is available in vast amounts for many languages. In this sense, modularity promotes data efficiency. Due to the lack of modularity, it is difficult for an E2E model to exploit text-only data, though there are recent efforts to alleviate this drawback [10,11]. In this paper, we are interested in bridging the hybrid and the E2E approaches, trying to inherit the data-efficiency of the hybrid approach and the simplicity of the E2E approach. A second motivation for such bridging is that low-latency ASR has been addressed more easily and effectively in the hybrid approach than in the E2E approach, as will be discussed later in Section 2.
Specifically, we build on the recently developed CTC-CRF approach [12]. Basically, CTC-CRF is a CRF with CTC topology, which eliminates the conditional independence assumption in CTC and performs significantly better than CTC. It has been shown [12] that CTC-CRF achieves state-of-the-art benchmarking performance with training data ranging from ∼100 to ∼1000 hours, while being end-to-end with a simplified pipeline (eliminating GMM-HMMs and phonetic decision trees, training the DNN-based AM in a single stage) and being data-efficient in the sense that cheaply available LMs can be leveraged effectively with or without a pronunciation lexicon.
In this paper we present CAT (CTC-CRF based ASR Toolkit) towards data-efficient and low-latency E2E ASR, which trains CTC-CRF based AMs in a single stage and uses separate LMs, with or without a pronunciation lexicon. On top of the previous work [12], the new contributions of this work are as follows.
1. CAT releases a full-fledged implementation of CTC-CRFs. A non-trivial issue in training CTC-CRFs is that the gradient is the difference between the empirical expectation and the model expectation. CAT contains efficient implementations of the forward-backward algorithm for calculating these expectations using the CUDA C/C++ interface. CAT adopts PyTorch [13] to build DNNs and do automatic gradient computation, and so inherits the power of PyTorch in handling DNNs. In CAT, we can readily use the PyTorch DistributedDataParallel module to support training over multi-node, multi-GPU hardware.
2. We add support for streaming ASR in the toolkit. To this end, we propose a new method called contextualized soft forgetting (CSF), which combines soft forgetting [14] and context-sensitive-chunk [15] in using bidirectional LSTMs (BLSTMs). Extensive experiments show that: (a) CTC-CRF with soft forgetting improves over CTC with soft forgetting significantly and consistently; (b) by using contextualized soft forgetting, the chunk-BLSTM based CTC-CRF with a latency of 300 ms outperforms the whole-utterance BLSTM based CTC-CRF.
3. CAT provides reproducible, complete training and testing scripts for a number of English and Chinese benchmarks, including but not limited to the WSJ, Switchboard, Fisher-Switchboard, and AISHELL datasets, which are presented in this paper. CAT achieves state-of-the-art ASR performance on these datasets, comparable to the LF-MMI [18] results in Kaldi (one of the strongest fine-tuned hybrid ASR toolkits) but with a much simpler training pipeline. Remarkably, compared to existing non-modularized E2E models, CAT performs better on limited-scale datasets (with ∼100 to ∼2000 hours of training data), demonstrating its data efficiency.
Related Work
ASR toolkits. Roughly speaking, there are two approaches to using DNNs in ASR -the DNN-HMM hybrid and the E2E approaches -and existing ASR toolkits can be classified accordingly. For the hybrid approach, Kaldi [19] may be the most widely used hybrid DNN-HMM based ASR toolkit. In Kaldi, lattice-free maximum-mutual-information (LF-MMI) training needs a multi-stage pipeline consisting of GMM-HMM training and phonetic decision tree construction. Several E2E ASR toolkits have emerged (e.g. ESPnet [20]/ESPRESSO [21], Wav2letter++ [22], and Lingvo [23]), mostly focusing on attention-based encoder-decoders or hybrid CTC/attention. EESEN [4] and E2E-LF-MMI [24,25] seem to bridge the hybrid and the E2E approaches, by using a sequence-level loss (CTC and LF-MMI respectively) to train single-stage AMs and employing WFST based decoding. EESEN is based on CTC, which, different from CTC-CRF, is limited by its conditional independence assumption and weak performance. E2E-LF-MMI [24,25] was developed in two versions using mono-phones or bi-phones, and bi-phone E2E-LF-MMI obtains comparable results to hybrid LF-MMI. It is shown in our experiments that mono-phone CTC-CRF performs comparably to bi-phone E2E-LF-MMI but with a simpler pipeline. Bi-phone CTC-CRF is found to slightly improve over mono-phone CTC-CRF but complicates the training pipeline. The differences between E2E-LF-MMI and CTC-CRF are detailed in [12].
Low latency ASR. An important feature of a practical ASR toolkit is its ability to do streaming ASR with low latency. In the hybrid approach, chunk-based schemes have been investigated for BLSTMs [15,26]. Time-delay neural networks (TDNNs) with interleaved LSTM layers (TDNN-LSTM) [27] have been developed in Kaldi to successfully limit the latency while keeping the recognition accuracy. In contrast, it is challenging and more complicated for attention-based encoder-decoders to do streaming ASR, which has recently received increasing study, such as monotonic chunkwise attention (MoChA) [28], triggered attention [29], or using limited future context in the encoder [17]. RNN-T has some advantages for streaming ASR but is data hungry, requiring large-scale training data to work. The RNN-T result over the Fisher-Switchboard data (2300 hours) [30] is worse than CAT's, as shown in Table 4.
CTC-CRF based ASR
CAT consists of a separable AM and LM, which meets our rationale of keeping the necessary modularity for data efficiency. In the following we mainly describe our CTC-CRF based AMs. CAT uses SRILM for LM training, and some code from EESEN for decoding graph compilation and WFST based decoding. More details are described at the toolkit URL.
Consider discriminative training of DNN-based AMs in a single stage based on the loss defined by conditional maximum likelihood [12]:

L(θ) = -log p_θ(l|x),    (1)

where x = x_1, · · · , x_T is the speech feature sequence, l = l_1, · · · , l_L is the label (phone, character, word-piece, etc.) sequence, and θ is the model parameter. Note that x and l are of different lengths and usually not aligned. To handle this, a hidden state sequence π = π_1, · · · , π_T is introduced; state topology refers to the state transition structure in π, which basically defines a mapping B : S_π* → S_l* that maps a state sequence π to a unique label sequence l. Here S_l* denotes the set of all sequences over the alphabet S_l of labels, and S_π* similarly for the alphabet S_π of states. It can be seen that HMM, CTC, and RNN-T implement different topologies. CTC topology defines a mapping that removes consecutive repetitive labels and blanks, with S_π defined by adding a special blank symbol <blk> to S_l. CTC topology is appealing, since it allows a minimum size of S_π and avoids the inclusion of a silence symbol, as discussed in [12].
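The CTC mapping B just described, which collapses consecutive repetitions and then removes blanks, can be sketched as follows. This is a toy illustration, not CAT's implementation:

```python
def ctc_collapse(states, blank="<blk>"):
    """CTC mapping B: collapse consecutive repeated states,
    then remove blanks, yielding the label sequence."""
    out, prev = [], None
    for s in states:
        if s != prev:  # drop consecutive repetitions
            if s != blank:
                out.append(s)
            prev = s
    return out

# "c c <blk> a a t" maps to the label sequence c a t
print(ctc_collapse(["c", "c", "<blk>", "a", "a", "t"]))  # ['c', 'a', 't']
# a blank separates genuine repetitions: "a <blk> a" maps to a a
print(ctc_collapse(["a", "<blk>", "a"]))  # ['a', 'a']
```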
Basically, CTC-CRF is a CRF with CTC topology. The posteriori of l is defined through the posteriori of π as follows:

p_θ(l|x) = Σ_{π ∈ B^{-1}(l)} p_θ(π|x).    (2)

And the posteriori of π is further defined by a CRF:

p_θ(π|x) = exp(φ_θ(π, x)) / Σ_{π'} exp(φ_θ(π', x)).    (3)

Here φ_θ(π, x) denotes the potential function of the CRF, defined as:

φ_θ(π, x) = log p(l) + Σ_{t=1}^{T} log p_θ(π_t|x),

where l = B(π).
Σ_{t=1}^{T} log p_θ(π_t|x) defines the node potential, calculated from the bottom DNN. log p(l) defines the edge potential, realized by an n-gram LM of labels and, for reasons to be clear in the following, referred to as the denominator n-gram LM. Remarkably, regular CTC suffers from the conditional independence between the states in π. In contrast, by incorporating log p(l) into the potential function in CTC-CRF, this drawback is naturally avoided. Combining Eq. (1)-(3) yields the sequence-level loss used in CTC-CRF:

L(θ) = -log [ Σ_{π ∈ B^{-1}(l)} exp(φ_θ(π, x)) / Σ_{π'} exp(φ_θ(π', x)) ].    (4)

The gradient of the above loss involves two gradients calculated from the numerator and denominator respectively, which essentially correspond to the two terms of empirical expectation and model expectation as commonly found in estimating CRFs. Similarly to LF-MMI, both terms can be obtained via the forward-backward algorithm. Specifically, the denominator calculation involves running the forward-backward algorithm over the denominator WFST T_den. T_den is a composition of the CTC topology WFST and the WFST representation of the n-gram LM of labels, which is called the denominator n-gram LM, to be differentiated from the word-level LM used in decoding.
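As a rough illustration of the forward recursion used in such expectation computations, the following pure-Python sketch sums path potentials over an allowed-transition graph (standing in for the graph role of T_den). It covers only the node potentials, omitting the n-gram edge potentials and all CUDA specifics of CAT's implementation:

```python
import math
from itertools import product

def logsumexp(vals):
    """Numerically stable log of a sum of exponentials."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def forward_logz(log_probs, allowed):
    """Forward algorithm: log of the summed potential over all state
    paths pi, where a path scores sum_t log p(pi_t | x) and
    allowed[i][j] marks permitted transitions between states."""
    S = len(log_probs[0])
    alpha = list(log_probs[0])
    for t in range(1, len(log_probs)):
        alpha = [
            logsumexp([alpha[i] for i in range(S) if allowed[i][j]])
            + log_probs[t][j]
            for j in range(S)
        ]
    return logsumexp(alpha)

# Toy check against brute-force path enumeration (2 states, 3 frames)
lp = [[-0.5, -1.0], [-0.3, -1.5], [-0.8, -0.2]]
allowed = [[True, True], [True, True]]
brute = logsumexp([sum(lp[t][s] for t, s in enumerate(path))
                   for path in product(range(2), repeat=3)])
print(abs(forward_logz(lp, allowed) - brute) < 1e-9)  # True
```

The recursion runs in O(T·S²) time, versus the exponential cost of enumerating paths, which is why the forward-backward algorithm is the workhorse for both the numerator and denominator expectations.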
Contextualized Soft Forgetting towards Streaming ASR
To enable streaming ASR in CAT, we draw inspiration from soft forgetting [14] and context-sensitive-chunk [15] in using BLSTMs. Based on the hypothesis that whole-utterance unrolling of the BLSTM leads to overfitting, soft forgetting, which was developed for CTC-based ASR, consists of three elements. First, the BLSTM network is unrolled over non-overlapping chunks. The hidden and cell states are hence forgotten at chunk boundaries in training. Second, the chunk duration is perturbed across training minibatches, which is called chunk size jitter. Third, the CTC loss is augmented with a twin regularization term, which is the mean-squared error between the hidden states of a pre-trained fixed whole-utterance BLSTM and the chunk-based BLSTM being trained. Since twin regularization promotes some remembering across chunks, this method is called soft forgetting. In streaming recognition, the hidden and cell states of the forward LSTM are copied over from one chunk to the next, and the backward LSTM hidden and cell states are reset to zero. The idea of context-sensitive-chunk (CSC) was proposed for BLSTM-HMM hybrid systems to reduce the latency from a whole utterance to a chunk. In CSC, a chunk is appended with a fixed number of left and right frames as left and right contexts.
In CAT, we propose to apply soft forgetting to context-sensitive chunks, which we call contextualized soft forgetting (CSF), as illustrated in Figure 1. First, we split an utterance into non-overlapping chunks. For each chunk, a fixed number of frames to the left and right of the chunk are appended as contextual frames, except for the first and last chunk, where we use zeros as the left and right contexts respectively. Thus we form context-sensitive chunks and run the BLSTM over each CSC. The hidden and cell states of the forward and backward LSTM networks are reset to zeros at the left and right boundaries of each CSC in both training and inference. When calculating the sequence-level loss in CTC-CRF, we splice the neural network outputs from the chunks into a sequence again, excluding the network outputs from contextual frames. A pre-trained fixed whole-utterance BLSTM is used to regularize the hidden states of the CSC-based BLSTM, and the overall training loss is the sum of the CTC-CRF loss and the twin regularization loss with a scaling factor λ. Note that once the CSC-based BLSTM is trained, we can discard the whole-utterance BLSTM and perform inference over testing utterances without it.
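The chunk-splitting scheme just described can be sketched as follows. This is a simplified illustration with scalar "frames"; the function name and defaults are ours, not CAT's API:

```python
def context_sensitive_chunks(frames, chunk=40, left=10, right=10, pad=0):
    """Split an utterance into non-overlapping chunks, each appended
    with `left`/`right` contextual frames; out-of-range contexts at
    the utterance boundaries are zero-padded, matching the treatment
    of the first and last chunk described above."""
    chunks = []
    for start in range(0, len(frames), chunk):
        lo, hi = start - left, start + chunk + right
        body = frames[max(lo, 0):min(hi, len(frames))]
        body = [pad] * max(-lo, 0) + body + [pad] * max(hi - len(frames), 0)
        chunks.append(body)
    return chunks

# 5 frames, chunk size 2, one frame of context on each side
print(context_sensitive_chunks([1, 2, 3, 4, 5], chunk=2, left=1, right=1))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 0, 0]]
```

After the network runs over each chunk, the outputs for the contextual frames would be discarded before splicing, as the loss computation above requires.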
Experiment Settings
The experiment consists of two parts. In the first part, we introduce the results on several representative benchmarks, including WSJ (80-h), AISHELL (170-h Chinese), Switchboard (260-h) and Fisher-Switchboard (2300-h) (the numbers in the parentheses are the size of training data in hours). The performances over these limited-scale datasets reveal the data efficiency of different ASR models. The second part presents the results for streaming ASR by the proposed contextualized soft forgetting method with ablation study.
It should be noted that the results shown in this paper should not be compared with results obtained with heavy data augmentation (e.g. specAugment [31]), much larger DNNs, and model combination. When compared to results reported from other papers, unless otherwise stated, we cite those results under comparable conditions to the best of our knowledge.
Setup for benchmarking experiment
We compare CAT with state-of-the-art ASR systems on several benchmarks, as stated above. We apply speed perturbation for 3-fold training data augmentation, except on Fisher-Switchboard. Unless otherwise stated, 40-dimensional filter bank features with delta and delta-delta are extracted. The features are normalized via mean subtraction and variance normalization per utterance, and subsampled by a factor of 3.
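The feature pipeline (delta computation, per-utterance mean/variance normalization, frame subsampling) can be illustrated on scalar per-frame values. This is a simplified sketch: it uses a two-point delta rather than the usual regression-window formula, and one value per frame instead of a 40-dimensional vector:

```python
def deltas(feats):
    """First-order delta: two-point slope with edge clamping
    (a simplified stand-in for the regression-window formula)."""
    n = len(feats)
    return [(feats[min(i + 1, n - 1)] - feats[max(i - 1, 0)]) / 2.0
            for i in range(n)]

def cmvn(feats):
    """Per-utterance mean subtraction and variance normalization."""
    m = sum(feats) / len(feats)
    var = sum((f - m) ** 2 for f in feats) / len(feats)
    sd = var ** 0.5 or 1.0  # guard against a constant utterance
    return [(f - m) / sd for f in feats]

def subsample(feats, factor=3):
    """Keep every `factor`-th frame, as in the setup above."""
    return feats[::factor]

x = [1.0, 2.0, 4.0, 8.0]
print(deltas(x))               # [0.5, 1.5, 3.0, 2.0]
print(abs(sum(cmvn(x))) < 1e-9)  # True: zero mean after CMVN
print(subsample(list(range(7))))  # [0, 3, 6]
```

Delta-delta features would simply be `deltas(deltas(x))`, applied per feature dimension in the real vector-valued case.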
The AM network, different from [12], is two blocks of VGG layers followed by a 6-layer BLSTM, similar to [32]. We apply 1D max-pooling to the feature maps produced by the VGG blocks on the frequency dimension only, since the input features have already been subsampled in the time domain and we find that max-pooling along the time dimension deteriorates performance. The first VGG block has 3 input channels corresponding to the spectral, delta, and delta-delta features. The BLSTM has 320 hidden units per direction for WSJ and AISHELL, and 512 for Switchboard and Fisher-Switchboard. The total numbers of parameters are 16M and 37M respectively, much smaller than most E2E models. In training, a dropout [33] probability of 50% is applied to the LSTM to prevent overfitting. Following [12], a CTC loss with a weight α is combined with the CRF loss to help convergence. We set α = 0.01 by default and find in practice that the smaller α is, the better the final result.
Setup for streaming ASR experiment
To evaluate the effectiveness of contextualized soft forgetting, we first implement soft forgetting with the CTC-CRF loss. For a fair comparison, we adopt the same neural network architecture as in [14], which is a 6-layer BLSTM with 512 hidden units per direction. 40-dimensional MFCCs with delta and delta-delta are extracted, and the chunk size is set to 40. The whole-utterance BLSTM pre-trained on the 260-hour Switchboard data obtains 14.3% WER on Eval2000. For twin regularization, the scaling factor λ is set to 0.005. In contextualized soft forgetting, the chunk size is also 40, with 10 left and 10 right frames appended.
The WER results on Switchboard are shown in Table 3. The Eval2000 test set consists of two subsets -Switchboard (SW) and Callhome (CH). It can be seen that compared to bi-phone hybrid LF-MMI and E2E-LF-MMI, mono-phone CTC-CRF performs comparably but with a simpler pipeline. Remarkably, mono-phone CTC-CRF performs significantly better than other E2E models.
The WER results on Fisher-Switchboard are shown in Table 4. The performance of CTC-CRF, with no data augmentation, is on par with state-of-the-art hybrid and E2E models. Summing up the above results, we can see that on limited-scale datasets (such as 80-h, 170-h, 260-h and 2300-h training data), the modularity of CTC-CRF clearly promotes data efficiency and achieves better results than other data-hungry E2E models.
Results for streaming ASR experiment
First, we introduce the different elements of soft forgetting [14] in steps to show their impact on WERs and also compare CTC and CTC-CRF. For this purpose, we follow [14] to report the non-streaming recognition results, as shown in Table 5. We start from training the basic chunk-based BLSTM networks with a fixed chunk size. It can be seen that CTC-CRF improves over CTC significantly under all experiment settings.
Then we examine streaming recognition. It can be seen from Table 6 that CTC-CRFs trained with CSF improve significantly over CTC-CRFs with SF, and obtain comparable results to the state-of-the-art TDNN-LSTM based hybrid model [27]. Remarkably, the CSF based streaming CTC-CRF (14.1%) even outperforms the whole-utterance CTC-CRF (14.3%), presumably because CSF alleviates overfitting in addition to enabling streaming ASR. This is in contrast to streaming ASR results from other E2E models, where streaming E2E models can hardly outperform their whole-utterance counterparts [16,17,41].
Conclusion
This paper introduces an open source ASR toolkit - CAT, with the main features of data efficiency, a simple pipeline, streaming ASR and superior results. We propose a new method called contextualized soft forgetting, which enables CAT to do streaming ASR without accuracy degradation. We hope CAT, especially the CTC-CRF based framework and software, will be of broad interest to the community, and can be further explored and improved, e.g. by exploring different DNN architectures, different topologies of CRFs, and applications in more ASR tasks.
Personality in the cockroach Diploptera punctata: Evidence for stability across developmental stages despite age effects on boldness
Despite a recent surge in the popularity of animal personality studies and their wide-ranging associations with various aspects of behavioural ecology, the development of personality over ontogeny remains poorly understood. Stability over time is a central tenet of personality; the ecological pressures experienced by an individual at different life stages may, however, vary considerably, which may have a significant effect on behavioural traits. Invertebrates often go through numerous discrete developmental stages and therefore provide a useful model for such research. Here we test for both differential consistency and age effects upon behavioural traits in the gregarious cockroach Diploptera punctata by testing the same behavioural traits in both juveniles and adults. In our sample, we find consistency in boldness, exploration and sociality within adults, whilst only boldness was consistent in juveniles. Both boldness and exploration measures, representative of risk-taking behaviour, show significant consistency across discrete juvenile and adult stages. Age effects are, however, apparent in our data; juveniles are significantly bolder than adults, most likely due to differences in the ecological requirements of these life stages. Size also affects risk-taking behaviour since smaller adults are both bolder and more highly explorative. Whilst a behavioural syndrome linking boldness and exploration is evident in nymphs, this disappears by the adult stage, where links between other behavioural traits become apparent. Our results therefore indicate that differential consistency in personality can be maintained across life stages despite age effects on its magnitude, with links between some personality traits changing over ontogeny, demonstrating plasticity in behavioural syndromes.
Introduction
The field of animal personality research has bloomed in recent years; inter-individual variation which was once considered background "noise" in behavioural ecological studies can now be formally attributed to among individual differences which persist through time [1]. Methods have recently been formalised and adapted [2] to show that personality (where individuals of the same species show consistent differences in their behaviour across time and contexts, [1,3]) can be detected across a wide spectrum of animal species, including mammals [4,5], birds [6,7], fish [8,9] and insects [10,11]. The field has expanded to explore personality within a range of contexts such as mate choice [12], colour morphs [13,14], collective movement [6,15], dispersal [16], social network positions [17,18], collective foraging [19] and leadership [20]. Suites of personality traits may also be correlated to form distinct behavioural syndromes [3]. However, whilst there is a wealth of published studies on animal personality to date, the development of behavioural consistency over ontogeny is an area which has often been neglected [21].
A central tenet of personality is its stability over time; however, significant physical and behavioural developmental changes are likely to occur over an individual's lifetime and an understanding of how these changes affect behavioural traits is key to appreciating the adaptive value of personality itself. Experiences in early life can have significant influences upon the development of stable personality [22]. Periods of major reorganisation, such as morphogenesis, metamorphosis and sexual maturation, may also be expected to influence the stability of behaviour [21]. Juveniles often experience a very different set of selection pressures to adults, particularly where they live in completely different environments; in this context, behavioural stability is particularly surprising, especially when it occurs over complete metamorphosis (e.g. in the lake frog Rana ridibunda, [23], and the damselfly Lestes congener, [24]). Even in species where both adults and juveniles occupy similar environments, these life stages often have very different ecological needs, which affect their optimal behaviour; for example, most juvenile insects mainly focus on the search for food, whilst adults require mating partners or prime locations for oviposition or parturition [25]. Since personality traits may only become stable at the adult stage in some insect species (e.g. in the mustard leaf beetle Phaedon cochleariae, [26]) whilst these persist across life stages in others (e.g. L. congener, [24]), further studies are required to determine whether taxon or life history differences can best explain such differences in behavioural plasticity over ontogeny [26].
There may also be specific age and size effects on personality traits that involve risk-taking behaviour. In most insect species, juveniles and larvae are less mobile than adults and, due to their smaller size, they are at risk from a much wider range of predators; they therefore experience higher predation pressures, influencing both boldness and predator escape performance [27]. Their smaller size also imposes restrictions on the time they can spend without foraging, which may in turn promote bolder behaviour [26]; their greater metabolic requirements may be linked to a greater propensity to take risks [28,29]. Life-history trade-offs may also occur [30]; the pace-of-life syndrome can explain links between behavioural traits and differences in either growth rates or physiology across life stages [31,32] and explains why trade-offs between, for example, growth and mortality may differ across life stages [31]. These are all potential explanations for the finding that juvenile insects in some species have been shown to be bolder than their adult counterparts (e.g. field crickets Gryllus integer, [33]). The elucidation of mean-level changes in risk-taking behaviour across a broader range of species is now required to better understand the differential selection pressures in operation across life stages, and how these affect the development of personality [34].
To investigate behavioural consistency across discrete ontogenetic stages, an insect that undergoes a number of moults to adulthood is a perfect model. Despite the advantages of relatively short generation times, simple husbandry requirements and a vast variety of life history strategies, relatively few studies have assessed the consistency of personality across life stages in insects [10]. Indeed animal personality in general is frequently assessed over short time periods, or within a certain life stage, which is inadequate for the assessment of the proximate mechanisms contributing to personality variation [35]. In this study, consistency in individual behaviour will therefore be tested both within life stages (juvenile and adult) and across these stages in the gregarious cockroach Diploptera punctata, a species in which personality has not previously been explored despite its frequent use in endocrinological research [36].
A number of studies have examined personality variation in cockroaches; differential consistency in personality (where rank order in behaviour in a given context correlates across individuals over time, [21]) has been demonstrated in terms of exploration, sociality and foraging activity in male Blattella germanica [37]; exploration, foraging, courtship, activity and boldness in Gromphadorhina portentosa [38][39][40]; and sheltering behaviour in Periplaneta americana [41]. Influences of social isolation [37] and developmental environment [38] upon personality, characterisation of behavioural syndromes [39,40] and collective personality at the group level [41] have also been explored. However, no studies have yet investigated changes in personality traits across life stages in cockroaches, and all have so far focused upon males.
In order to investigate the development of behavioural consistency in D. punctata, we tested both nymphs and adults to explore 1) differential consistency in behavioural traits (both within and between life stages), 2) age effects on individual personality traits, 3) structural consistency (i.e. the extent to which correlations among behaviour patterns are preserved when measured in the same context(s) at a different time, [21]), 4) context generality (where scores across contexts correlate across individuals, [21]) in boldness within each life stage and 5) the effects of individual sex and size on behavioural traits.
Based on previous research and our understanding of the behavioural ecology of D. punctata, a number of factors may affect behavioural consistency in this species. Both juvenile and adult D. punctata inhabit a similar ecological niche [42]; consistency in personality traits across life stages may therefore be predicted, as was found in field crickets [33]. Age, however, may affect the magnitude of risk-taking behaviour comprising boldness and exploration; juveniles showed higher levels of boldness than adults in other insect studies [26,33,43]. Sex may affect boldness levels as this species shows distinct sexual size dimorphism (and hence differential predation risks), with females being significantly larger [36]. Sex effects upon both boldness [43,44] and activity [26] have previously been demonstrated in other insects. Behavioural syndromes have also been identified in other cockroach species [39,40]; if a behavioural syndrome is identified here, its stability over ontogeny will be investigated.
Study population
Study individuals were taken from a mass colony of D. punctata maintained in laboratory conditions for over ten years. This colony was initially set up using individuals from three source populations and numbers have been maintained at a minimum population level of 200 individuals throughout this time, thus minimising the risk of inbreeding. These colonies were kept in an incubator at 24.5 °C with a 12:12 light:dark cycle in plastic tanks approximately 33 x 26 x 19 cm, with ventilation provided in the lid. These cockroaches were allowed to feed ad libitum on Lidl's "Orlando complete" dog biscuits and were given a constant supply of fresh water.
companion. Family and social environment were later considered in statistical analyses as factors potentially affecting behavioural consistency (see Statistical Analyses section). Housing consisted of transparent plastic containers of dimensions 11.5 x 11.5 x 6cm with air holes providing ventilation. Water was provided by Falcon tubes filled with water and plugged with soaked cotton wool. Water tubes were replaced as required. Nymphs were allowed to feed ad libitum on a 1:1 mixture of Aquarian fish flakes and Lidl's "Orlando complete" dog biscuits.
All moults were recorded until the focal individuals reached adulthood (when wings are present). Upon reaching adulthood, individuals were photographed and their head width and pronotum width measured to the nearest 0.01cm using ImageJ 1.48 [45], with sex being determined by examination of the sexually dimorphic subgenital plates. Photographs were taken under standardized conditions; adults were placed in a petri dish on a white paper background with consistent background lighting. A ruler was placed next to the dish and included in the photograph to allow scale to be determined using the software. The accuracy of measurements was ascertained by ten adults being measured three times each and a repeatability analysis carried out in JMP (SAS Institute Inc., Cary, North Carolina). This gave a repeatability of 94.2% for head measurements and 97.4% for pronotum measurements. Since head and pronotum width were significantly correlated (Pearson's correlation: r p = 0.670, N = 60, P < 0.001), pronotum width was selected as the single measure of size due to its higher level of repeatability.
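The correlation underlying the choice of a single size measure can be computed as Pearson's product-moment coefficient. A minimal sketch with hypothetical measurements, not the study's data or its JMP repeatability analysis:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two measurement lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical head vs pronotum widths (cm); perfectly linear here
head = [0.30, 0.32, 0.35, 0.40]
pronotum = [0.60, 0.64, 0.70, 0.80]
print(round(pearson_r(head, pronotum), 3))  # 1.0
```

With real measurements the coefficient would fall below 1 (the study reports r_p = 0.670), and a repeatability (intraclass correlation) analysis, as performed in JMP, would additionally partition within- and among-individual variance.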
Behavioural assays
Seventy-four individuals were tested in total; 24 were tested twice as third instar nymphs (10 males, 12 females, 2 unknown-died prior to reaching adulthood) and 65 as adults (29 males, 36 females) to explore differential consistency in behaviour within life stages. Since 19 individuals (7 males, 12 females) were tested both as juveniles and adults, these individuals alone were used to test for differential consistency of behavioural traits across life stages. The number of individuals tested at the third instar stage was limited by time constraints, mortality and intermoult intervals, as has occurred in similar studies (e.g. [26]). Since the number of moults to adulthood varies between sexes and among individuals in D. punctata [36], we chose to test juveniles at the third instar stage since a minimum of three moults occurs in both sexes [36]. Individuals were not tested as fourth or fifth instars as only a small proportion go through these stages (usually females [36]). A gap of 32.8 ± 10.8 days (mean ± SD, N = 19) occurred between the second trial as a third instar and the first trial as an adult (the lifespan of D. punctata is around 423 days [42]). For both life stages, each individual was tested twice with a gap of three to seven days between testing so that differential consistency within each life stage could be established.
Three potential personality traits, boldness, exploration and sociality, were explored across three behavioural assays: the exploration arena, social arena and startle test. The order of testing was randomly assigned to each individual and its effects later considered (see Statistical Analyses section).
i. Exploration arena. We used methods similar to those used for B. germanica [37]: an adaptation of the open field test (used to quantify exploration, [46]) with an emergence test component used as a measure of boldness [28]. The individual was removed from its normal housing and placed in an opaque Perspex tube approximately 4 cm in length and 3 cm in diameter with both ends temporarily sealed (using petri dish lids as barriers) and left for three minutes to acclimatise. This tube was placed in sector B of an "exploration arena" prior to this acclimatisation time, with its temporarily sealed ends facing sectors A and C ( Fig 1). The exploration arena was a 21 x 30 cm plastic tray with a depth of 8 cm. This contained a piece of A4 paper that divided the arena into 12 sectors (labelled A to L, Fig 1); these delimited distinct geographical areas of the arena e.g. corners, sides, and central portions. An empty oval-shaped plastic dish (dimensions approximately 4 x 3 cm with a depth of 2 cm) was placed in sector K. After the acclimatisation period, the barriers were removed and timing began.
We recorded when i) the head and ii) the entire body emerged from the tube and iii) when the focal individual crossed the centre line (separating sectors D-F and G-I). We also recorded the number of novel sectors explored within ten minutes. When either all sectors had been explored or ten minutes had elapsed, the experiment was terminated. If all twelve sectors were explored, the time at which the focal individual entered the last novel sector was recorded. The paper grid was replaced between individuals since aggregation of pheromones, present in faeces, may influence cockroach movement [47,48].
ii. Social arena. This arena was identical to the exploration arena (Fig 1) except the plastic dish was replaced by a cotton net bag containing three adults randomly selected from the colony population (but ensuring both sexes were represented). The bag measured approximately 10 x 8 cm when flat (with an expanded volume of approximately 100 cm³) and allowed individuals to move around the restricted space; antennal contact with the focal individual was also possible. Following three minutes of acclimatisation for the focal individual in the Perspex tube, timing began when the barriers sealing the tube were removed.
We recorded the time at which the focal individual first entered the sector containing conspecifics (latency to reach conspecifics), the latency to make antennal contact with conspecifics and subsequent times when the focal individual both left and entered this sector, allowing the total time spent in the sector containing conspecifics to be calculated. The experiment was terminated after ten minutes.
If, after five minutes, the individual had not left the tube (which occurred in 21% of trials), we moved it to sector H, on the border of sector K; we rotated the tube by 90 degrees to ensure the individual within the tube was now facing the conspecifics. This was carried out to allow less bold or explorative individuals the opportunity to show social behaviour; these individuals may otherwise not leave the tube at all during the experiment hence would be scored low in terms of sociality as an artefact of their reduced boldness levels. The initial five minutes when these individuals did not leave the tube were included in the latency to both reach conspecifics and make antennal contact.
iii. Startle test. Using a methodology similar to that used for G. portentosa [40], an alternative emergence test to assay boldness was used to quantify an individual's reaction to sudden exposure to light. The focal individual was placed in a 9cm diameter petri dish with an opaque lid and allowed to acclimatise for three minutes. Timing began when the lid was suddenly removed, exposing the focal individual to bright light. The times at which the individual first i) moved its antennae, ii) moved its head and iii) initiated locomotion were recorded.
Across all three assays, each behavioural measure recorded was assigned to a particular personality trait based upon results from previous personality work in cockroaches [37,40], with boldness being measured across two contexts (Table 1).
Statistical analyses
i. Differential consistency within and across life stages. In the exploration assay, 23 of 65 individuals never left their tubes in at least one trial; they were assigned a latency to leave the tube value of "601" if the event in question (head or body leaving tube, crossing centre line) never occurred. Since non-parametric correlations were later carried out, these individuals were therefore assigned the highest latency rank. In the social assay, again, 21 individuals never left their tubes at all in at least one trial, despite being moved after five minutes to within sight of the conspecifics. These were also assigned a value of 601 for each relevant latency (either to reach conspecifics or to touch antennae with conspecifics).
In order to test for differential consistency in each trait within each life stage, a composite measure was calculated for each separate personality trait to reduce the number of variables and hence enable a more powerful test. We collapsed the individual measures used for each personality trait into the first principal component (PC) score for each individual in the statistical package JMP, including scores from each trial for each individual. We then tested for a significant correlation between the two trials' ranked PC scores for each individual by carrying out Spearman's rank correlations in SPSS. To test for differential consistency in personality traits across life stages, we again calculated PC scores for each personality trait for both life stages separately in the 19 individuals where these data were available; we calculated the mean latency for the two trials for each individual at each life stage then entered these means into the PC analysis. We again implemented a Spearman's rank correlation to test for consistency within individuals. This combination of principal component analysis (PCA) and Spearman's rank correlation tests was also used to show consistency in personality across metamorphosis in an anuran [23].
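The analyses above were carried out in JMP and SPSS. As a rough, language-neutral sketch of the same pipeline (function names and the latency values are hypothetical, and this simplified Spearman implementation does not average tied ranks as a full implementation would):

```python
import numpy as np

def pc1_scores(X):
    """Collapse several behavioural measures into one composite score.

    X: (n_individuals, n_measures). Columns are standardised, then each
    individual is projected onto the first principal component.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[0]
    if v[0] < 0:  # fix the arbitrary SVD sign so loadings are comparable
        v = -v
    return Z @ v  # first principal component score per individual

def spearman_rho(a, b):
    """Spearman's rank correlation: Pearson correlation of the ranks
    (ties are not averaged in this sketch)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical data: two boldness latencies (s) per individual, two trials
trial1 = np.array([[30.0, 45.0], [120.0, 150.0], [600.0, 601.0], [10.0, 20.0]])
trial2 = np.array([[40.0, 50.0], [100.0, 160.0], [601.0, 601.0], [15.0, 25.0]])
rho = spearman_rho(pc1_scores(trial1), pc1_scores(trial2))
```

A high rho between the two trials' composite scores would indicate differential consistency: individuals keep their rank order in the trait from one trial to the next.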
Since D. punctata are sexually dimorphic, sex differences in behaviour may occur. Males and females were therefore initially analysed separately, following the procedure outlined above. Further analyses were then carried out on pooled data where the direction of correlations was consistent between the sexes (S1 Appendix).
We repeated the correlations for the sociality assay excluding all individuals that were moved after five minutes of not leaving the tube to exclude the possibility that this practice was affecting results. Additionally, since the order of testing could affect the likelihood that individuals might leave the tube during the social assay (for example, if they had already experienced an experimental arena, this might affect their likelihood to leave the tube) and hence the measures used to assess sociality, a chi-squared test was carried out to examine whether order of testing had an effect on whether or not an individual left the tube during the social trial.
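A chi-squared test of independence of this kind compares the observed counts against the counts expected if order of testing and leaving the tube were unrelated. A minimal sketch (the function and the counts below are invented for illustration, not the study's data):

```python
def chi_square_independence(table):
    """Pearson chi-squared statistic and df for an r x c contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = sum(
        (obs - r * c / n) ** 2 / (r * c / n)
        for row, r in zip(table, row_tot)
        for obs, c in zip(row, col_tot)
    )
    df = (len(table) - 1) * (len(col_tot) - 1)
    return stat, df

# Hypothetical counts: rows = six testing orders, columns = (left tube, stayed)
table = [[8, 3], [6, 5], [9, 2], [7, 4], [5, 6], [8, 3]]
stat, df = chi_square_independence(table)
# Compare stat with the 5% critical value for df = 5 (about 11.07):
# below it, there is no evidence of an order effect
```

With six possible testing orders and a binary outcome this gives df = 5, matching the χ² test with 5 degrees of freedom reported in the Results.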
ii. Age effects. To test for age effects on the magnitude of individual behavioural measures between nymphs and adults, the difference between mean values obtained in trials at each life stage was calculated for each of the 19 individuals tested at both stages and a Wilcoxon signed-rank test was applied to test whether these differences significantly differed from zero. This allowed a non-parametric comparison between these repeats within individuals across life stages. Principal component scores were not used as the aim was to test for changes in the raw behavioural scores measured for each individual between life stages. Data were initially plotted for each sex separately to ensure there was not a consistent difference in the magnitude of the response between the sexes before these were pooled (S1 Appendix). If a difference was observed, each sex was analysed separately. This applied to three measures: latency to reach conspecifics, total time spent with conspecifics and total time taken to explore all sectors.
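As a sketch of this paired comparison (the latencies below are hypothetical, not the study's data), the Wilcoxon signed-rank test is applied to the within-individual differences between life stages:

```python
from scipy.stats import wilcoxon

# Hypothetical paired data: one boldness latency (s) per individual,
# averaged over the two trials at each life stage
nymph = [30.0, 45.0, 12.0, 80.0, 25.0, 60.0, 18.0, 40.0]
adult = [55.0, 63.0, 72.0, 102.0, 60.0, 96.0, 39.0, 70.0]
diffs = [a - n for n, a in zip(nymph, adult)]

# Tests whether the paired differences are symmetric about zero;
# a small p-value indicates a systematic shift between life stages
stat, p = wilcoxon(diffs)
```

Here every hypothetical individual is slower (less bold) as an adult, so the negative rank sum is zero and the test rejects the null of no age effect.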
iii. Context generality & behavioural syndromes. Context generality in boldness was tested for by carrying out Spearman's rank correlations between pairs of boldness measures taken in independent behavioural assays for both juveniles and adults. The context differed between assays since in the exploration arena, boldness was measured in terms of the latency for an individual to choose to leave a shelter, whereas in the startle test, boldness was measured in terms of the latency to move following sudden exposure to bright light by removal of the shelter.
Spearman's correlations were carried out between thirteen pairs of individual measures quantifying different personality traits in different trials to test for the presence of behavioural syndromes in D. punctata. Pairings of measures quantifying different traits but collected in the same assay (e.g. exploration arena-latency for head to emerge from tube and latency to cross centre line) were excluded as they lacked independence. Since fewer measures were compared, the principal components approach was not necessary. Analyses were carried out for both nymphs and adults separately in order to determine whether any behavioural syndromes present persist across life stages (hence to test for structural consistency, [21]). Since the sociality measure "total time with conspecifics" could be dependent upon the latency to reach conspecifics (measured independently in the exploration assay), a lack of a correlation between these two measures could be used to justify these measures' independence.
iv. Effects of sex, size, social environment & order of testing.
To test for sex and size effects on adult personality, a mixed models approach was used to also incorporate potential effects of order of testing, social environment and family. A linear mixed-effects model was built for each adult personality dimension with its PC score as the response variable. A square root transformation was applied to normalise boldness PC scores prior to analysis, whilst exploration and sociality PCs were ranked for use in non-parametric analysis due to their nonconformity to any distribution. Standard data exploration procedures were carried out to ensure all data met the assumptions of the models [49]. Sex, order of testing, social environment and pronotum width were included as fixed factors, with family (brood) included as a random effect. Models were built in the R environment [50] using the nlme package [51]. Directionality of loading of each measure for each PC was used to interpret any significant effects.
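The models themselves were fitted in R with the nlme package. As a sketch of the same model structure in Python (all variable names and data below are simulated for illustration, not the study's dataset), the equivalent random-intercept model can be written with statsmodels: fixed effects for sex, order of testing, rearing environment and pronotum width, with brood as a random effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the adult dataset: PC score driven by size,
# a brood-level random intercept and residual noise
rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], n),
    "order": rng.choice([1, 2, 3], n),
    "environment": rng.choice(["isolated", "adult", "nymph"], n),
    "pronotum": rng.normal(0.6, 0.05, n),
    "brood": rng.choice([f"b{i}" for i in range(10)], n),
})
brood_effect = df["brood"].map(
    {f"b{i}": e for i, e in enumerate(rng.normal(0.0, 0.5, 10))}
)
df["pc_score"] = 3.0 * df["pronotum"] + brood_effect + rng.normal(0.0, 0.2, n)

# Fixed effects as in the study's model; brood as the grouping (random) factor
model = smf.mixedlm("pc_score ~ sex + C(order) + environment + pronotum",
                    data=df, groups=df["brood"])
fit = model.fit()
size_coef = fit.params["pronotum"]  # recovers the simulated size effect
```

With the size effect simulated at 3.0, the fitted coefficient on pronotum width should land near that value, while the brood variance absorbs the family-level clustering.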
All tests were two-tailed and the significance level was set at α = 0.05. P-values were corrected for multiple comparisons for each factor within each dataset by carrying out sequential Bonferroni corrections [52].
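Sequential Bonferroni (Holm's procedure) tests the smallest of m p-values against α/m, the next smallest against α/(m−1), and so on, stopping at the first non-significant result. A minimal sketch (function name and p-values are our own illustration):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Sequential Bonferroni (Holm): step through p-values from smallest
    to largest, testing the i-th smallest against alpha / (m - i).
    Returns a list of booleans (significant or not) in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    significant = [False] * m
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (m - step):
            significant[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return significant

# e.g. four comparisons for one factor within one dataset
flags = holm_bonferroni([0.010, 0.040, 0.030, 0.005])
```

This procedure controls the family-wise error rate like the classical Bonferroni correction but is uniformly more powerful, since only the smallest p-value faces the full α/m threshold.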
Ethical note
We did not observe any adverse effects from the behavioural experiments conducted. The minimal number of individuals necessary to test the hypotheses was used and all animals were returned to the mass colony following final behavioural testing. Environmental enrichment (cardboard "egg boxes" to provide shelter and a more stimulating environment) was used in mass colonies, with small opaque plastic tubes being provided for developing nymphs as shelter.
Differential consistency within and across life stages
For nymphs, the direction of most correlations between PC (first principal component) scores was consistent between the sexes (S1-S3 Figs) and so data were pooled for subsequent analyses. Since correlations appeared much weaker for females than males for nymph boldness and exploration (S1 Fig), and also for male boldness in adults (S2 Fig) and exploration between life stages (S3 Fig), these analyses were initially carried out separately for each sex. However, these analyses did not reveal significant sex differences (S1 Appendix), apart from the analysis of differential consistency in exploration between the life stages, which showed that this trend was likely to be driven by females (male: r s = 0.321, N = 7, P = 0.482; female: r s = 0.718, N = 12, P = 0.009). This result will be discussed further; however, pooling the sexes is still justified here due to the low power of this test to show a significant correlation for a sample of seven males.
Significant correlations between PC scores for both sexes pooled together provided evidence of differential consistency in boldness, exploration and sociality within adults, in boldness within third instars and in boldness and exploration across life stages (see Table 2 for PC loadings and the percentages of variance explained by this principal component, Table 3 for correlation test results, as well as S5-S7 Figs for individual correlation plots and S1 Table for means and SEs for all behavioural measures). These analyses could only be carried out using 63 of 65 adults due to missing data points for the social trial (see raw data in S2 Appendix).
After excluding all individuals that did not leave the tube during the sociality test in the first five minutes, results for a new correlation test between trials to examine differential consistency in sociality (Spearman's rank correlations, third instars: r s = -0.236, N = 10, P = 0.511; adults: r s = 0.392, N = 38, P = 0.015; across life stages: r s = -0.182, N = 11, P = 0.593) were relatively consistent with those obtained using all data; although the trend direction reversed for the between life stages analysis, the correlation remained extremely weak and non-significant. We can therefore conclude that the practice of moving these tubes did not significantly influence results. The order of testing (i.e. the order of behavioural assays carried out) did not affect the likelihood of individuals leaving the tube during the sociality test (chi-square test: χ² = 6.02, df = 5, P > 0.05), thus showing that previous experience of other trials did not affect the likelihood to leave the tube and hence the time spent with conspecifics in this social trial.
Age effects
Plotting the data for each sex separately (S4 Fig) showed that the direction of change in magnitude of behavioural measures differed between the sexes for two sociality measures (latency to reach conspecifics and total time spent with conspecifics) and for one exploration measure (total time taken to explore all sectors). These measures were therefore initially analysed for males and females separately. However, following Bonferroni corrections, none of these sex specific differences were significant (S1 Appendix) and so both sexes were pooled for all subsequent analysis of all measures.
Adults were significantly less bold than juveniles across three of the five measures of boldness tested (Fig 2). There was no apparent difference in levels of either exploration or sociality between the two life stages (Table 4).
Context generality & behavioural syndromes
Consistent levels of boldness were evident in both juveniles and adults across the exploration and startle contexts from all measures used (Table 5), thus demonstrating context generality for this trait.
Within nymphs, exploration and boldness were found to significantly correlate across three pairs of measures (Table 6). No significant correlations were found between other pairs of measures. Within adults, there were significant correlations between sociality and both exploration (two pairs of measures) and boldness (two pairs of measures, Table 6). However, there was no evidence for a behavioural syndrome linking boldness and exploration in adults. There was therefore no evidence of structural consistency across life stages.
Effects of sex, size, social environment & order of testing
Results from linear mixed effects models showed that neither social environment during rearing nor order of testing had significant effects on any of the three personality dimensions tested as adults (Table 7). Similarly, there was no apparent effect of sex on boldness or exploration. There was, however, a significant effect of adult size on both boldness and exploration, with larger individuals showing higher PC scores for both personality dimensions (Fig 3). Since a higher boldness PC score represented a greater latency to carry out all five behaviours measured for this trait, and a higher exploration PC score represented a greater latency to cross the centre line, fewer sections explored and a greater time taken to explore all sectors (Table 2), these results show that larger individuals are less bold and less explorative.
There was an apparent effect of sex on sociality, with males having a lower PC score than females (Fig 4). Since a higher PC score represented a greater latency to reach and touch antennae with conspecifics and a shorter overall time spent with conspecifics (Table 2), these results indicate that males showed more motivation to approach conspecifics than did females.
Discussion
Inter-individual variation in personality was evident in D. punctata cockroaches; whilst in third instars only boldness was shown to be consistent, in adults differential consistency was evident in boldness, exploration and sociality. Moreover, in the sample tested, we found boldness and exploration were stable across life stages despite age effects on population boldness levels. Therefore, boldness was the only personality trait to be consistent within and across all tested life stages, whereas exploration and sociality only emerged as stable in adults. Behavioural syndromes were found in both nymphs and adults but for different traits, indicating a lack of consistency across stages. We also found evidence of context generality in boldness within both juveniles and adults. There were clear effects of sex and size; larger individuals were less bold and less exploratory, whilst males showed higher levels of sociality than females.
Differential consistency in behaviour within life stages
Our demonstration of differential consistency in boldness in D. punctata nymphs, as well as in boldness, exploration and sociality in adults (Table 3), clearly indicates the existence of personality in cockroaches. Our results are consistent with other studies in which boldness was found to be consistent in nymphs of field crickets [53] and damselflies (Zygoptera) [24] but contrast with results found in mustard leaf beetles [26]. The two former studies were carried out on a particular instar stage, as was our study, whereas the latter study on the beetles was carried out across several instar stages. Interestingly, only the studies conducted within a particular instar report consistency in boldness. Further research is needed to determine whether consistency can also be found across instar stages. Consistency in boldness in adults has previously been documented in a variety of insects such as seed beetles Callosobruchus maculatus [54], mustard leaf beetles [26], firebugs Pyrrhocoris apterus [43] and hissing cockroaches [40]. Exploration was also shown to be consistent in adult hissing cockroaches [38] and firebugs [43], whilst sociality was shown to be consistent in B. germanica [37] and courtship display behaviour was shown to be consistent in hissing cockroaches [39]. Since D. punctata has been widely used for physiological research [36], an appreciation of its inter-individual differences now places it as a prime candidate for the exploration of physiological changes correlated with personality trait variation. The result that exploration and sociality were not found to be significantly consistent in third instar nymphs may indicate that these personality traits are not yet stable at this developmental stage. However, our demonstration of differential consistency in exploration across life stages is at odds with this explanation, at least for exploration behaviour, as it implies that exploration is already established at this life stage. Another potential explanation is that these personality traits may be more unstable during periods of rapid morphological change, such as the frequent moulting during juvenile development, which often requires major reorganisation [21]. Indeed, other studies on the squid Euprymna tasmanica [55] and red junglefowl Gallus gallus [56] have failed to find consistency in various personality traits (boldness, tonic immobility, exploration and predator responses) within early developmental stages. It seems that specific traits are selected for consistency depending on the taxon but that other traits are inconsistent at an early developmental stage.
Selection for greater behavioural plasticity may be beneficial at an early life stage, where future environments are less predictable [55,56]. Further work with a larger sample of nymphs is required to better understand this result since it may also be explained by relatively low statistical power.
(Table 7 caption: linear mixed-effects models found a significant effect of size, measured by adult pronotum width, on both boldness and exploration scores, and an effect of sex on sociality scores, in Diploptera punctata, with significant effects shown in bold in the table. The order of testing and social environment (isolated, with adult or with nymph companion) during development did not significantly affect these measures. Brood identity was included as a random factor. Degrees of freedom are presented for both numerator (Num DF) and denominator (Den DF). The 60 individuals for which all data were available were included in the model.)
Differential consistency in behaviour across life stages
Stability in personality across discrete life stages has previously been demonstrated in only a handful of species, for example in the damselfly L. congener [24], the firebug [43], the lake frog R. ridibunda [23] and the laboratory rat Rattus norvegicus [57]. In others, such as the mustard leaf beetle [26], personality levels were only stable at the adult stage, whilst individual consistencies were found to be generally low across the major developmental stages of becoming independent and sexually mature in the jungle fowl G. gallus [56].
Despite a generally lower likelihood of repeatability in behavioural experiments in ectotherms compared to endotherms [58], we found exploration and boldness remained consistent across discrete life stages in our sample of cockroaches (Table 3), in line with our prediction. Nymphs and adults share similar environments, lifestyles and possibly foraging strategies [25]. Consistent individual strategies to collect information (exploration) and to respond to risky situations (boldness) may therefore be adaptive across developmental stages. As a consequence, boldness and exploration seem to become "fixed" at the juvenile stage in D. punctata. However, it should be noted that consistency in exploration was mainly driven by females. Whether males do not show consistency in exploration across life stages or whether this result is due to low statistical power resulting from a small sample size requires further research. In contrast, sociality was not consistent across life stages and may be shaped by factors only arising once a discrete life stage such as sexual maturity has occurred. The stabilisation of hormonal profiles which occurs at this point [56] may have an effect here as well. Social factors, such as societal roles, may also affect the stability of certain personality traits [59,60].
Age effects
As expected, juveniles showed consistently higher levels of boldness than adults in three measures across two contexts in our sample (Fig 2), which is consistent with other studies on insects [26,33,43]. Whilst nymphs and adults may inhabit the same environment and may have the same lifestyle and foraging habits, allowing them to use the same individual strategies (i.e. showing relative consistency between individuals [21]), they may be exposed to environmental challenges to different degrees resulting in variation in the magnitude of behaviours expressed [21]; examples are differences in predation risk or life-history trade-offs. According to the asset protection hypothesis [30] adults may be more cautious in unfamiliar situations so as not to miss opportunities for reproduction, whereas juveniles may take greater risks to reach the reproductive stage as quickly as possible; this hypothesis is supported by studies on field crickets [33,44]. Another factor responsible for variation in environmental challenges is body size. Size effects on boldness have previously been demonstrated in mustard leaf beetles [26] and hissing cockroaches [38]. The higher boldness levels of smaller individuals were attributed to their greater metabolic requirements and therefore higher willingness to take risks. Whether life-history, body size or both factors are responsible for differences in boldness between nymphs and adults in our study species needs further investigation.
Surprisingly, exploration did not differ between age classes even though exploration has previously been found to be higher in juveniles than adults [14, 61-63] across different taxa. The formation of different behavioural syndromes within life stages in our study may play a role here. While exploration was positively correlated with boldness in nymphs, it was positively correlated with sociality in adults. Whether the correlation with other traits constrains exploration [64] or whether exploration is important in different contexts across life stages (e.g. finding food in nymphs versus finding food and mates as adults) is an exciting next step to investigate.
Context generality & behavioural syndromes
We found evidence for context generality in boldness in both juvenile and adult D. punctata (Table 5). Context generality is not always demonstrable in boldness [8,65], despite its inclusion in some definitions of personality [1]. Whilst boldness may be adaptive in certain situations, such as in intraspecific competition for resources, bold individuals may be at a disadvantage when confronted by a predator [66]. Personality may therefore affect individual fitness in context-dependent ways [67], which may explain why variation in personality persists [3]. In our study, the two contexts (latency to leave an opaque tube and latency to move following a sudden stimulus) may both be linked as common responses to an immediate threat from predation, which could explain their correlation. Consistency within contexts was also found for two boldness measures (time to leave the tube after disturbance and time to walk after being thrown into a novel arena) in firebugs [43]. In contrast, the two boldness measures (latency to leave cover after disturbance and latency to move after squeezing) tested in mustard leaf beetles were not correlated [26].
Within nymphs alone, we found significant relationships between exploration and boldness using three pairs of measurements across two independent pairs of trials. We therefore provide evidence of a behavioural syndrome in nymphs where boldness and exploration are linked ( Table 6). Since these same correlations were not apparent within adults (despite a much larger sample size), it is likely that this behavioural syndrome is specific to nymphs. Since obtaining food to shorten the latency to reach a reproductive stage is an essential driver for nymphs [25], boldness and exploration may combine to improve the efficiency of foraging, whilst for adults, these traits diverge for other purposes.
In adults, we found evidence of a behavioural syndrome linking sociality (total time spent with conspecifics) with exploration (two measures), as well as one linking sociality (total time spent with conspecifics) with boldness (two measures; Table 6). It could be argued that these correlations are an artefact of our experimental design and not true behavioural syndromes as the total time spent with conspecifics is likely to be dependent on an individual's boldness or exploration; if they are slow to leave their tube or cross the centre line, they will have less time available to spend with conspecifics. However, the lack of significant correlations between these pairs of measures in nymphs (despite boldness and exploration showing a significant correlation) provides evidence against the potential confounding effects of boldness or exploration on sociality in this assay. It is therefore likely that these behavioural syndromes are adaptive in adults, but not in nymphs, and this experimental design is therefore justified for exploring these three behavioural traits independently.
Since no behavioural syndrome was consistently found in both adults and nymphs by this experiment, we can provide no evidence of structural consistency [21] in personality in this species. There are currently very few studies that address the persistence of behavioural syndromes over ontogeny; however, those which do often fail to find evidence of structural consistency (e.g. [56,64]). This is perhaps explained by the differential selection pressures that adults and juveniles are often exposed to; it may therefore be adaptive for the organisation of behaviours into syndromes to change over development [21,68].
Effects of sex and size
We found adult size (but not sex) significantly affected both boldness and exploration; larger individuals were both less bold and less explorative (Fig 3). This result contrasts with a similar study on G. portentosa which found no effect of size on risk-acceptance (which includes behaviours associated with boldness, such as exploration and food acquisition, [28]). However, smaller size in individuals which were kept in a low nutrition environment during this study was associated with increased risk-taking behaviour (in terms of exploration, foraging and recovery after disturbance, [38]). Size, therefore, only had an effect under stronger competitive conditions. A relationship between body size and boldness has also been found in fish such as the poeciliid Brachyraphis episcopi and the guppy Poecilia reticulata [28,29]. Smaller individuals may have greater metabolic requirements and are therefore more willing to take risks [28,29]. This could also be the case in D. punctata. The higher exploration levels in smaller individuals may be explained by subordination: larger individuals may monopolise resources, requiring smaller individuals to invest more in exploration to find uncontested resources.
Sex did not significantly affect either boldness or exploration in adults, despite the high level of sexual dimorphism in this species [36], but males were found to be more motivated to approach conspecifics than females, as demonstrated by males' lower sociality scores (Fig 4). This result likely reflects sex-differential reproductive motivation in D. punctata, although little is known about mating behaviour and sexual selection in this species. Where sex differences in personality are apparent, they are likely to be explained by differential selection on male and female personality, driven perhaps by intrasexual selection, mate choice, differential reproductive roles, ecological demands or life histories [69]. In this case, males may be more motivated than females to approach conspecifics in order to obtain matings, whereas female behaviour is often adapted to minimise the costs of male coercion [70]. The lack of sex differences in both exploration and boldness is unexpected; perhaps sex-differential selection on personality is low in cockroaches because pressures such as predation act equally on males and females. Indeed, in species such as field crickets, where there are clear sex differences in predation pressure (due to male crickets' calls attracting the attention of predators), sex differences have been found in differential consistency across life stages [44].
Conclusions and future work
Here we show evidence of differential consistency in personality both within and across life stages in cockroaches, as well as age effects upon boldness and a lack of stability in a behavioural syndrome over development in the sample tested. We show that differential consistency can be maintained despite age effects on the magnitude of personality traits, as well as showing that there is flexibility in the linkage between behavioural traits at different life stages.
Further work could reveal whether consistent behavioural variation is adaptive in the group context; testing individuals in isolation may not be a true representation of their personality in a group, as behaviour may be modified by the influence of other group members [71] and isolated individuals may behave in a qualitatively different way to those in groups [72]. Personality sampling in wild populations may also provide crucial information on the many potential factors promoting personality variation [69], especially since this may have a significant impact upon survival in the natural habitat [53].
Supporting information
Bar charts for a. males and b. females showing the mean and standard error change in each behavioural measure from nymph to adult life stages. Behavioural measures quantify boldness (latency for head, B1, and body, B2, to emerge; latency to move antennae, B3, and head, B4; latency to initiate locomotion, B5), exploration (latency to cross centre line, E1; no. sectors explored, E2; total time taken, E3) and sociality (latency to reach, S1, and touch, S2, conspecifics; total time with conspecifics, S3). N = 19 (7 males, 12 females).
Table. Behavioural measure descriptive statistics. Mean and standard error for each behavioural measure for each life stage ("combined mean"), with means and standard errors also presented separately for males and females. All values are in seconds except for the number of sectors explored. Sample sizes are 22 3rd instars (10 male, 12 female) and 63 adults (28 male, 35 female). (DOCX)
S1 Appendix. Sex-specific differential consistency analyses and sex-specific age effects analyses. (DOCX)
S2 Appendix. Complete dataset containing all raw data used for personality analyses on Diploptera punctata. (XLSX) | 2018-04-03T01:02:55.506Z | 2017-05-10T00:00:00.000 | {
"year": 2017,
"sha1": "76d8b123a2a120537092578d41b1eb651530d2da",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0176564&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c2c64e2afa3b8a21d3356dcc8be722afe9df533",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
244112561 | pes2o/s2orc | v3-fos-license | AgAnt: A computational tool to assess Agonist/Antagonist mode of interaction
Activity modulation of proteins is an essential biochemical process in the cell. The interplay of the protein, as receptor, and its corresponding ligand dictates the functional effect. An agonist molecule bound to a receptor produces a response within the cell, while an antagonist blocks the binding site or produces the opposite effect to that of an agonist. Complexity grows in scenarios where some ligands act as an agonist under certain conditions and as an antagonist under others [1, 3]. It is imperative to decipher the receptor-ligand functional effect both for understanding native biochemical processes and for drug discovery. Experimental activity determination is a time-intensive process, and a computational solution for predicting activity specific to a receptor-ligand interaction would therefore be of wide interest.
Introduction
Activity modulation of proteins is an essential biochemical process in the cell. The interplay of the protein, as receptor, and its corresponding ligand dictates the functional effect. An agonist molecule bound to a receptor produces a response within the cell, while an antagonist blocks the binding site or produces the opposite effect to that of an agonist. Complexity grows in scenarios where some ligands act as an agonist under certain conditions and as an antagonist under others [1,3]. It is imperative to decipher the receptor-ligand functional effect both for understanding native biochemical processes and for drug discovery. Experimental activity determination is a time-intensive process, and a computational solution for predicting activity specific to a receptor-ligand interaction would therefore be of wide interest.
Studies to classify agonists and antagonists such as [12] and [8] use molecular descriptors to classify androgen receptor and SlitOR25 ligands, respectively. [6] also used molecular descriptors and fingerprints to classify androgen receptor ligands, and [2] followed a similar approach to classify 5-HT1A ligands. [10] implemented extended-connectivity fingerprints and descriptors extracted from ligands to classify TNBC and GPCR ligands, while [9] used images of 3D chemical structures to predict progesterone receptor antagonist activity. These studies achieved good results using random forest and SVM based models, with some implementations using deep neural networks, but all current methods consider only a single receptor when predicting its agonists or antagonists. We believe that the activity of a complex after ligand binding is determined by both the ligand and the receptor, and thus both should be featurized to make predictions.
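A minimal sketch of this pairwise idea (function names and feature sources are illustrative, not the paper's code): the receptor and ligand feature vectors are simply concatenated before being passed to a classifier, so the model sees the pair rather than either entity alone.

```python
def pairwise_features(receptor_vec, ligand_vec):
    """Concatenate receptor and ligand feature vectors into a single
    pairwise input for a downstream classifier."""
    return list(receptor_vec) + list(ligand_vec)
```

Any per-receptor and per-ligand featurization (sequence embedding, fingerprint, one-hot SMILES) can be combined this way.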
Machine learning helps to uncover underlying patterns between classes. When applying machine learning to proteins, the accuracy of the representations determines the final performance of the model. [5] used a feature map based on residue-level features. [7] use the raw protein sequence as well as the fingerprints of the drug targets to predict drug-protein interactions. [4] represent a protein as a graph in which residues act as nodes and edges encode the spatial relationships between them, and use graph convolutional networks for classification. Studies such as [11] have further surveyed the methods adopted to solve protein-related machine learning problems. We used three different approaches for this work:
- Sequence-based models
- Physiochemical-properties-based models
- Graph-based models
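As a hedged sketch of the sequence-based featurization (function names and the toy vectors below are ours, not the paper's code): ProtVec-style pipelines tokenize a protein sequence into overlapping 3-mers, look up a trained vector per 3-mer (e.g. from Word2Vec), and mean-pool them into one fixed-length sequence vector.

```python
def kmerize(seq, k=3):
    """Split a protein sequence into overlapping k-mers (ProtVec-style tokens)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def embed_sequence(seq, kmer_vectors, dim=100):
    """Mean-pool per-k-mer vectors into one fixed-length sequence vector.
    `kmer_vectors` maps each 3-mer to a trained embedding; 3-mers absent
    from the vocabulary are skipped."""
    vecs = [kmer_vectors[km] for km in kmerize(seq) if km in kmer_vectors]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
```

In practice the per-3-mer vectors would come from a Word2Vec model trained on a large corpus of tokenized protein sequences; the pooling step above is one common, simple choice.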
Results
We created our dataset by filtering through RCSB for entries which matched our criteria and further refined the results by performing additional filtering which is described in the supplementary section.
Models based on protein sequence performed best in our experiments. Using Word2Vec, the protein sequences were represented as vectors of length 100. We experimented with different vector sizes but achieved better performance with length 100, as has also been observed in other papers. Training on these vectors without the ligands achieved 81% accuracy under 10-fold stratified cross-validation. Using a one-hot encoded SMILES representation as an additional input, we raised the accuracy by 4.6%, with the best model performing at 86.5% accuracy (Fig. 1.1). Tree-based models performed better than SVMs, which suggests non-linearity in the ProtVec features. The feature importance plots of the AdaBoost and XGB classifiers give a further indication of why: out of 11500 features, only 47 were used by the AdaBoost model to make predictions. This suggests that by selecting these features the model gains an advantage over SVM; including one-hot encoded SMILES strings also makes such a large number of features difficult for SVM classifiers to handle. SMILES representations contribute 32% of the features used by AdaBoost, and the boost in performance is a confirmation of our pairwise approach to the problem.
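A minimal sketch of one-hot SMILES encoding as used for the ligand input (the character vocabulary and maximum length here are illustrative assumptions, not the study's exact settings):

```python
def one_hot_smiles(smiles, vocab, max_len):
    """Encode a SMILES string as a flattened one-hot matrix of shape
    (max_len, len(vocab)). Strings shorter than max_len are zero-padded;
    characters outside `vocab` raise a KeyError."""
    index = {ch: i for i, ch in enumerate(vocab)}
    mat = [[0] * len(vocab) for _ in range(max_len)]
    for pos, ch in enumerate(smiles[:max_len]):
        mat[pos][index[ch]] = 1
    return [bit for row in mat for bit in row]
```

The flattened vector can then be concatenated with the protein embedding to form the pairwise input.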
The XGB classifier performed best with latent-space embeddings, while AdaBoost slightly outperformed XGB when using Word2Vec SMILES embeddings, achieving an accuracy of 0.819 in both cases. The one-hot encoded representation of SMILES outperforms the more compact representations we tested, which is consistent with our earlier finding that only a few features matter: in the process of reducing dimensions for a more efficient representation, information important to our classification task is apparently lost. We also find that AdaBoost with one-hot SMILES uses 46 features (30 protein, 16 SMILES), whereas with Word2Vec-embedded SMILES it uses only 42 (23 protein, 19 SMILES), an unexpected decrease in the number of protein features used by the model. Some important information in SMILES is lost in these compact representations, which is why one-hot encoding, although less efficient, is more suitable for our classification task.
We created LSTMs and graph neural networks to utilize the physiochemical and structural properties of proteins. These models were not able to perform as well as our sequence-based models and are detailed in the supplementary section.
Further analysing our pairwise approach, we looked at the change in activity of a protein as the binding ligand changes. One such case is Glutamate Receptor 2: GLUR2 exhibits agonistic activity when ligand AMQ binds to it, but antagonistic activity when ligand 08W binds to it. When we consider all the ligands, their fingerprints do not show any separation between agonists and antagonists; the same holds for the receptor features, and we believe it is the combination of the two that allows our model to differentiate between agonists and antagonists. This is further indicated by Fig 1.5: the fingerprints of AMQ and 08W, considered individually, indicate distinct physical and chemical properties, and the properties of the two ligands in table X also indicate a difference between them. Similarly, ligand CNI exhibits agonistic activity when bound to GLUR2 but antagonistic activity when bound to GLUR3. This difference must arise from the difference in structure and properties of the two proteins, in this case and in all such cases. These distinctions support our pairwise hypothesis but do not by themselves explain the mechanisms that determine activity upon ligand binding. Our model is hosted at agant.raylab.iiitd.edu.in and can make predictions for any given PDB ID-Ligand ID pair or Protein Sequence-Ligand SMILES pair. Additional usage instructions are detailed on the web page, and the scripts for this project are available in the GitHub repository.
Discussion
Advances in natural language processing and the accomplishments of models like GPT-3 in producing text remarkably similar to human writing show that machine learning models can capture the underlying principles of a language. The creation of similar models specifically for proteins would enhance all kinds of protein-related machine learning tasks. To achieve this, we believe there is a need to better represent protein features and to efficiently integrate structural features with the protein sequence. We explored several possible approaches, which lays some groundwork for our future experiments. ProtVec, despite being a relatively simple approach, produces strong results, though it has problems with generalization and might not be suitable for larger datasets. ProtTrans assigns tokens to each residue, and extensions of this approach might lead to better learning techniques. Physiochemical features of residues performed well, though their performance on larger datasets and with more advanced models remains to be tested.
Graphs are still the most visually apt representations of proteins but require better features in order to solve such classification problems. Entire graphs, although they give an accurate representation of protein structures, are harder to extract features from given their size. Protein graphs show properties similar to images:
- Locality: nearby residues are alike, and dissimilarity increases with distance.
- Stationarity: features can appear anywhere in the graph.
- Compositionality: features follow a hierarchy.
This is why we believe that GCNs can achieve much better results, as CNNs have on image tasks. We believe these representations should be reserved for much larger datasets in order to effectively utilize the convolutional networks.
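A hedged sketch of the residue-graph construction discussed above (the 8 Å distance cutoff on e.g. C-alpha coordinates is a common but illustrative choice, not necessarily the one used in this work):

```python
import math

def contact_graph(coords, cutoff=8.0):
    """Build a residue contact graph: nodes are residue indices, and an
    edge joins two residues whose coordinates lie within `cutoff` of
    each other (Euclidean distance)."""
    edges = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= cutoff:
                edges.add((i, j))
    return edges
```

The resulting edge set (plus per-node residue features) is what a GCN would consume.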
Conclusion
In summary, we have demonstrated our model's ability to differentiate between agonist and antagonist protein-ligand pairs with high accuracy. We have examined various representations of proteins and different machine learning models that can be used for classification problems. The AdaBoost model trained on ProtVec and SMILES representations was the most precise and accurate model. Our results demonstrate that featurizing both the ligand and the protein is not only theoretically sound but also experimentally performs better than considering a single entity. In the future, we want to experiment with better representations of proteins using specialized language models; the success of models based only on the sequence of the protein shows potential for the application of specialized transformer models to proteins.
"year": 2021,
"sha1": "e5b360a7e8356ef2ab96e33657a4329d3b9db4ee",
"oa_license": "CCBYNC",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/11/13/2021.11.11.468208.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "ddd19a233b173209a9e1b5483db7dc87d5780bba",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
55796368 | pes2o/s2orc | v3-fos-license | Intermediate Maturing Soybean Produce Multiple Benefits at 1 : 2 Maize : Soybean Planting Density
A study was conducted to identify the most suitable intercropping arrangement in smallholder farms in Western Kenya. Biomass and N (nitrogen) accumulation, N2 fixation and grain yield of maize and soybeans grown as intercrops at three planting densities were assessed. The study was conducted over four seasons. Three soybean varieties, Namsoy 4m, SC Squire and TGx1987-18F, were used in the experiment. Maize:soybean planting densities 1:1 (D1), 1:2 (D2) and 1:3 (D3), as well as sole soybean (SS) and sole maize (SM), were tested. The highest biomass, N accumulation and N fixed, in the order of 3.8 Mg ha-1, 260 kg ha-1 and 161 kg ha-1 respectively, were recorded in D3 with the long-maturing variety TGx1987-18F. Conversely, a higher soybean grain yield, of up to 2.4 Mg ha-1, was achieved by the intermediate-maturing SC Squire in D3. The highest maize yield in the intercrop was obtained in D1. N balance calculations indicated that planting TGx1987-18F resulted in an addition of 6 to 67 kg N ha-1, while SC Squire and Namsoy 4m removed 3 to 89 kg N ha-1 when soybean grain was removed from the field. The differences in N balances between the intercrops depended on the N fixed and the amount of N in harvested soybean and maize grain. Greater land equivalent ratios, of up to 1.75, were obtained with SC Squire and Namsoy 4m in D2. We conclude that intermediate-maturing soybean has multiple benefits for farmers in Western Kenya at the 1:2 maize:soybean planting density, provided that the practice is accompanied by good soil and crop management practices.
Introduction
Intercropping is regarded as an important practice to stabilize yield and improve crop production and environmental quality in regions with production risk (Juma, Tabo, Wilson, & Conway, 2013; Vanlauwe et al., 2015). The stability in yield of intercrops is obtained in several ways, most frequently by compensation between the yields of the individual components (B. Rerkasem, K. Rerkasem, Peoples, Herridge, & Bergersen, 1988; Waddington, Mekuria, Siziba, & Karigwindi, 2007). The reasons for the increased production of intercrops are the different lengths of the vegetation period, different resource requirements and times of using those resources, and a suitable vertical arrangement of crops & Ogata, 1992); or (ii) through mycorrhizal uptake and translocation (Ofosu-Budu, Fujita, & Ogata, 1990); or (iii) a direct transfer through common mycorrhizal networks, which enable linkages to form between the root systems of both crop species (He, Xu, Qiu, & Zhou, 2009); as well as (iv) through rhizodeposition and the subsequent uptake of released root exudates (Fujita et al., 1992; Høgh-Jensen & Schjoerring, 2001; Mahieu et al., 2014). Despite claims for substantial N transfer from legume crops to the associated cereal crops, the benefits are insufficient to meet the requirements of the intercropped cereal (Giller, 2001). According to Ledgard, Giller, and Bacon (1995), the benefits of N contribution by legumes are more likely to accrue to subsequent crops, as the main transfer pathway is root and nodule senescence and fallen leaves.
Another advantage of intercropping cereals and legumes is the more efficient utilization of resources such as light, water and nutrients over time and space, leading to increased productivity compared with each sole crop of the mixture (Agegnehu, Ghizaw, & Sinebo, 2008; Mucheru-Muna et al., 2010; Willey, 1979; Zhang & Li, 2003). Increased productivity is attributed to factors such as (i) a light absorption rate maintained over a longer period (Stern, 1993); (ii) a reduced evapotranspiration rate due to the higher leaf area per ground area provided by the legume (Anglade, Billen, & Garnier, 2015); (iii) increased availability of water in the root zones because of the deeper-penetrating roots of legumes (Giller, 2001); and (iv) promoted N uptake, utilization and photosynthetic efficiency of cereals (Tsubo, Walker, & Mukhala, 2001; Zhang et al., 2015).
Despite the many benefits, intercropping of cereals and legumes may reduce the yield of the legume component because of adverse competitive effects (Willey et al., 1983). The cereal component often has a faster growth rate in the intercrop, a height advantage, and a more widespread rooting system, which gives it the upper hand in competition with the associated legume crop (Belel et al., 2014). Greater yield loss of the legume crop may therefore occur due to the reduced intensity and quality of solar radiation intercepted by the legume crop canopy during the reproductive period, an important environmental factor determining yield and yield components of the legume (Biabani, Hashemi, & Herbert, 2012; Jin, Liu, Wang, & Herbert, 2003).
Although intercropping of cereals and legumes is widespread among smallholder farmers in SSA (Odendo, Bationo, & Kimani, 2011), there is limited knowledge on how soybean can best be intercropped with maize to achieve higher nitrogen fixation and higher yields of both maize and soybean. This is because, in the past, many countries in SSA, including Kenya, gave low priority to soybean research as it was considered a minor crop (J. N. Chianu, Nkonya, Mairura, J. N. Chianu, & Akinnifesi, 2010). In the meantime, soybean is gaining strategic importance, firstly as a key protein source in the booming animal feed industry and secondly as a commodity for human nutrition and income (Chianu et al., 2010), demanding innovations to increase its productivity. This paper reports results of a study conducted in Western Kenya with the objectives to (i) assess biomass accumulation and nitrogen fixation of soybean varieties intercropped with maize at different planting densities, (ii) determine yields of maize and soybean intercropped at different planting densities, and (iii) assess the benefits of intercropping soybean and maize at different planting densities under smallholder farmers' conditions.
Description of Study Sites
The study was carried out in Western Kenya, at Lubino and Manyala villages, and was maintained on the same sites for four seasons, namely the short rainy seasons of 2012 and 2013 and the long rainy seasons of 2013 and 2014. The two sites are separated by a distance of about 50 km. Manyala is in Butere District, located at 0.971°N and 34.274°E, at 1363 m altitude. Lubino is in Mumias District, located at 0.312°N and 34.565°E, at 1372 m altitude. The rainfall pattern in Western Kenya is bimodal with two distinct rainy seasons: the long rainy season from March to June and the short rainy season from October to December. Annual average rainfall in Western Kenya ranges from 900 mm to 2200 mm. Temperatures range from a minimum of 14 °C to a maximum of 36 °C throughout the year (Jaetzold & Schmidt, 2005). According to the same authors, the soils at Lubino and Manyala are classified, respectively, as Humic nitisols and Orthic ferralsols. Based on soil analysis (Mehlich, 1984), the soils were characterised (Table 1). The soil at Manyala was slightly acidic with moderate levels of exchangeable Mg. Both soils had moderate levels of organic C (2.4-2.5 g kg-1), but were low in total N, exchangeable K and Ca. Available P in both soils was far below the critical value for maize and soybean of 15 mg kg-1 (Nandwa & Bekunda, 1998), but the micronutrients Zn, Cu, B and Mo were in the sufficient range (Landon, 1991). The sandy clay loam soil at Lubino had high levels of exchangeable Al and Fe when compared with the clayey-textured soil at Manyala.
Treatment and Experimental Design
To estimate the maize and soybean populations that give higher biomass and grain yield in maize-soybean intercrops, three maize:soybean planting densities, coded as 1:1 (D1), 1:2 (D2) and 1:3 (D3) (in which the first number represents lines of maize and the second number represents lines of soybean), were considered. Treatments with sole soybean (SS) and sole maize (SM) were also included as controls. The planting densities were evaluated using three soybean varieties, namely Namsoy 4m (supplied by Makerere University, Uganda), SC Squire (supplied by Seed Co., Zimbabwe) and TGx1987-18F (supplied by the International Institute of Tropical Agriculture (IITA), Malawi). The tested soybean varieties are well adapted to Western Kenyan conditions but differ in growth habit. SC Squire and Namsoy 4m are determinate in growth (they finish most of their vegetative growth when flowering begins) and have an intermediate growth duration (90-95 days). The variety Namsoy 4m tends to keep more leaves towards maturity when compared with SC Squire. The variety TGx1987-18F is indeterminate (it continues vegetative growth after flowering begins) and takes longer to mature (> 110 days).
To avoid the impact of Striga (a witch weed that is endemic in Western Kenya) on the experiments, an open pollinated (OP) Imidazolinone herbicide resistant maize (IR-maize) variety WS 003 was used.The experimental plots measured 6.5 × 3 m.The experiment comprised 13 treatments with different populations of maize and soybean (Table 2).Note.* 1:1 (D1) = One row of maize alternated with one row of soybean; 1:2 (D2) = One row of maize alternated with two rows of soybean; 1:3 (D3) = One row of maize alternated with three rows of soybean; The planted plot size was 6.5 × 3 m.
The experimental design was a factorial, arranged in a randomized complete block design (RCBD) with three replicates, where maize: soybean planting densities and the soybean varieties were the factors.
Establishment of Experiments and Management
The experiments were established on flat beds prepared using a hand hoe. Before sowing, all plots received a basal application of the legume fertilizer "SYMPAL" (NPK 0:25:15 + 10 CaO + 4 S + 1 Mg) at a rate of 200 kg ha-1 to provide P at 22 kg ha-1, K at 25 kg ha-1, Ca at 16 kg ha-1, S at 6.4 kg ha-1 and Mg at 1.6 kg ha-1. For soybean, SYMPAL fertilizer was applied in furrows dug 5 cm deep and slightly covered with soil to remain 2.5 cm deep for planting soybean. For maize, SYMPAL was applied in the planting holes (approximately 10 g per planting hole). Plots with SM and D1 treatments received urea fertilizer at a rate of 130 kg ha-1 targeted to maize, to supply N at a rate of 60 kg N ha-1, of which 20 kg was applied at planting and the remaining 40 kg top-dressed four weeks after crop emergence. Since the population of maize in D2 and D3 was 55% and 38% of that of SM, respectively, the required N dose in D2 and D3 was adjusted to 33 and 23 kg N ha-1, respectively. Soybean seeds were inoculated with the rhizobia inoculant BIOFIX (supplied by MEA Kenya Ltd), containing the Bradyrhizobium japonicum USDA 110 strain. The inoculant was applied at a rate of 10 g kg-1 seed, following a two-step method (Somasegaran & Hoben, 2012).
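The dose adjustment described above is simply the sole-maize N rate scaled by the intercrop's maize population relative to sole maize; a small sketch (function name ours):

```python
def scaled_n_dose(full_dose_kg_ha, maize_pop_frac):
    """Scale the sole-maize N dose by the fraction of the sole-maize
    population present in the intercrop, rounded to the nearest kg."""
    return round(full_dose_kg_ha * maize_pop_frac)
```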
In plots receiving the D1 treatment, one line of soybean was established between two lines of maize at a spacing of 0.37 m from the maize lines. In plots receiving the D2 treatment, two soybean lines spaced 0.45 m from each other were established between two maize lines, maintaining a distance of 0.37 m from the maize lines. In plots receiving the D3 treatment, three lines of soybean were established at the same spacing as in D2. In plots receiving the SS treatment, soybean was planted at a spacing of 0.45 × 0.05 m, in furrows opened at a depth of 2.5 cm, after application of SYMPAL fertilizer. Maize in SM plots was planted at a spacing of 0.75 × 0.25 m, with two seeds per planting hole, subsequently thinned to one plant per hill two weeks after emergence. In the short rainy seasons, sowing was carried out on 13 and 14 October 2012 and on 18 and 19 September 2013 at the Manyala and Lubino sites, respectively. Sowing in the long rainy seasons was done on 12 and 14 April 2013 and on 15 and 16 March 2014 at the Manyala and Lubino sites, respectively. The experimental fields were kept weed free by frequent weeding using a hand hoe.
Plant Harvesting, Sampling, Analysis and Calculations
Harvesting of soybeans to determine biomass and N accumulation as well as N2 fixation was done when each variety attained 50% flowering. Shoots of soybean plants were harvested in each plot from a randomly selected area of 0.5 m2 by cutting the plants at ground level. From each experimental site, additional samples of couch grass (Digitaria scalarum) that germinated and grew during the same period as the soybeans were collected around the experimental fields for use as the reference plant to estimate nitrogen fixation by soybean. Couch grass has slender, wiry, creeping rootstalks with a rooting depth (> 1 m) and a growth period of 100 days (Heuze, Tran, & Delagarde, 2014), similar to those of the soybean varieties tested. Samples of couch grass were collected from 10 positions around the experiment by cutting the grass at ground level, then bulked to make a composite sample per location. The shoot samples (soybean and reference plants) were oven-dried at 65 °C to constant weight and the soybean shoot samples weighed.
Shoot biomass yields of soybeans were calculated using the weights of samples taken from each plot and were expressed in Mg ha-1. The dry shoot samples of soybean and couch grass were ground to < 1 mm in a Cyclotec mill in preparation for N and 15N analysis at the Catholic University of Leuven in Belgium. Analysis of δ15N of couch grass was done in the first two seasons and was found to be consistent at each site. Hence, the average δ15N of the first two seasons was used to estimate N2 fixation by soybeans in the third and fourth seasons, with the assumption that the same signature was maintained. The measured values of shoot biomass and %N were used to estimate the total nitrogen in legume biomass (N-biomass), expressed in Mg ha-1. The %N derived from N2 fixation (Ndfa) was estimated using the 15N natural abundance method (Unkovich et al., 2008):
%Ndfa = 100 × (δ15N ref − δ15N leg) / (δ15N ref − B)
Where δ15N ref is the 15N natural abundance of shoots of the non-N-fixing reference plant, δ15N leg is the 15N natural abundance of legume (soybean) shoots, and the B value is the 15N natural abundance of a legume depending solely on N2 fixation for its N nutrition. A B value of -1.83 was used in calculating %Ndfa; this value represents the mean B value for soybean based on experiments conducted by six different laboratories (Unkovich et al., 2008). The amount of N-fixed was calculated according to Maskey, Bhattarai, Peoples, and Herridge (2001):
N-fixed = (%Ndfa / 100) × legume N
Where %Ndfa is the percentage of nitrogen derived from atmospheric fixation and legume N is the nitrogen content of soybean shoots.
In the present study, no attempt was made to estimate the N-fixed in soybean roots although studies have shown that roots of nodulating legumes contain substantial amount of fixed nitrogen (Unkovich et al., 2008).
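The two calculations above can be sketched as follows (the B value of -1.83 is the study's; the function names and example values are ours):

```python
def percent_ndfa(d15n_ref, d15n_leg, b_value=-1.83):
    """%N derived from atmospheric N2 fixation, 15N natural abundance method:
    %Ndfa = 100 * (d15N_ref - d15N_leg) / (d15N_ref - B)."""
    return 100.0 * (d15n_ref - d15n_leg) / (d15n_ref - b_value)

def n_fixed(pct_ndfa, legume_n):
    """Amount of N fixed = (%Ndfa / 100) * shoot N content
    (same units as legume_n, e.g. kg N ha-1)."""
    return pct_ndfa / 100.0 * legume_n
```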
At maturity, soybean and maize from respective treatments were harvested from the net area demarcated after leaving out two rows of maize/soybean on each side of the plot, the first two and the last two maize/soybean plants on each row to minimize possible edge effect.The cobs and pods were then, shelled, grains dried to 12% moisture content, and weighed.The weight of grains was used to calculate yields from each treatment and results extrapolated per hectare basis, expressed in Mg ha -1 .After harvest, soybean and maize residues in respective plots were ploughed into the soil to avoid removal by farmers or grazing by animals.
The N balance of the intercrops was calculated for the entire year by subtracting the total N contained in soybean grain and the N contained in maize grain from the total N-fixed (averages of the two short rainy seasons and the two long rainy seasons). The nitrogen content of soybean grain was taken as 6.08% (based on 38% protein) and that of maize grain as 1.52% (based on 9.5% protein) (Giller, 2001). The N balance was then calculated as follows:
N balance = N fixed − N soybean grain − N maize grain
Where N fixed is N from N2 fixation, N soybean grain is the N in harvested soybean grain and N maize grain is the N in harvested maize grain.
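A sketch of this partial N balance, using the grain N fractions stated in the text (function and variable names are ours):

```python
SOYBEAN_N_FRAC = 0.0608  # 38% protein (Giller, 2001)
MAIZE_N_FRAC = 0.0152    # 9.5% protein (Giller, 2001)

def n_balance(n_fixed_kg, soy_grain_kg, maize_grain_kg):
    """Partial N balance (kg N ha-1): N fixed minus N exported in
    harvested soybean and maize grain."""
    return (n_fixed_kg
            - SOYBEAN_N_FRAC * soy_grain_kg
            - MAIZE_N_FRAC * maize_grain_kg)
```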
The efficiency of the different planting densities (D1, D2 and D3) for the three soybean varieties tested was determined by calculating land equivalent ratios (LER) as described by Willey (1979). The LER of the intercrops, or total LER (LER Total), was obtained by summing the LERs for maize (LER Maize) and for soybean (LER Soybean):
LER Total = LER Maize + LER Soybean = YIM/YSM + YISB/YSSB
Where YIM and YISB are the grain yields per hectare of intercropped maize and soybean, respectively, and YSM and YSSB are the grain yields per hectare of sole-cropped maize and soybean, respectively.
An LER value greater than 1.0 indicates an intercrop advantage relative to sole crops. LER values less than 1.0 indicate an intercrop disadvantage, and LER values equal to 1.0 imply no difference between the intercrop and sole crops (Willey, 1979).
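The LER computation can be sketched directly from the definitions above (variable names follow the text):

```python
def land_equivalent_ratio(yim, yisb, ysm, yssb):
    """Total LER = YIM/YSM + YISB/YSSB (Willey, 1979); values > 1.0
    indicate an intercrop advantage over sole cropping."""
    return yim / ysm + yisb / yssb
```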
Statistical Analysis
Data on biomass accumulation, N accumulation, N fixed and grain yield of both maize and soybean were tested for normality and then subjected to analysis of variance (ANOVA) to determine whether there were significant differences between planting densities at the two locations in the different cropping seasons. The data were averaged over the two short rainy seasons (2012 and 2013) and the two long rainy seasons (2013 and 2014) per location (season was considered a fixed factor), and the analyses were carried out per location and season given the agroecological differences between the two locations (Manyala and Lubino) and the weather differences between the two rainy seasons. The statistical package GenStat version 13 was used. Where significant differences were obtained, means were separated using Fisher's least significant difference (LSD) at P ≤ 0.05.
Biomass Yield, N-Accumulation and N 2 Fixation by Soybean
Shoot biomass varied across planting densities, sites and seasons (Tables 3 and 4). Higher shoot biomass was recorded in SS and tended to decrease with decreasing soybean plant population, with a few exceptions. At the Manyala site, for example, in the short rainy season, biomass yield in D3 was 4% higher than in SS for the soybean variety Namsoy 4m, but 13 and 4% lower for SC Squire and TGx1987-18F, respectively. In D2, a decrease in shoot biomass in the range of 19 to 22% was recorded, while in D1 the reduction was in the range of 43 to 53% when compared to SS. The same trend was observed at the Lubino site, where shoot biomass was lower by 22 to 34% in D3, 31 to 45% in D2 and 50 to 61% in D1 relative to SS, across varieties.
In the long rainy season, shoot biomass at the Manyala site was reduced by 7 to 26% in D3, 20 to 37% in D2 and 2 to 53% in D1 when compared to SS. At the Lubino site, the decrease ranged between 13 and 30% in D2 and 28 to 35% in D1, but an increase in biomass yield of 8% was recorded in D3 with the varieties Namsoy 4m and SC Squire. Overall, the differences in shoot biomass between D2 and D1, D3 and D1, D3 and D2, and between D1, D2 and D3 relative to SS were smaller in the long rainy season than in the short rainy season. With the exception of the Lubino site in the short rainy season, soybean shoot biomass recorded in D3 was not statistically different (P < 0.01) from that recorded in SS. Soybean varieties exhibited differences in biomass accumulation under the different maize-soybean planting densities, with the variety TGx1987-18F accumulating the most biomass across planting densities, locations and seasons, followed by SC Squire, and Namsoy 4m accumulating the least.
In general, all soybean varieties accumulated more biomass in the short rainy season than in the long rainy season.
Shoot N accumulation followed the same trend as shoot biomass across seasons and varieties. For example, in the short rainy season at the Manyala site, a decrease on the order of 3% was recorded in D3 when compared to SS, except with the soybean variety Namsoy 4m, which recorded a 2% increase. N accumulation in D2 and D1 decreased by 19 to 22% and 43 to 53%, respectively, when compared to SS across varieties. At the Lubino site, decreases in N accumulation of 22 to 34% for D3, 30 to 45% for D2 and 50 to 63% for D1 relative to SS were recorded in the short rainy season. In the long rainy season at the Manyala site, the decrease in shoot N accumulation in D3 was on the order of 8% with TGx1987-18F and 17% with SC Squire, but an increase of 32% was recorded with Namsoy 4m. Furthermore, decreases in shoot N accumulation of 18 to 37% were observed in D2 and D1. At the Lubino site, across varieties, N accumulation decreased by 9 to 31% in D2 and 28 to 34% in D1 when compared to SS. In general, across varieties, seasons and sites, there were no significant differences between N accumulated in D3 and SS, except in the short rainy season at the Lubino site with the varieties Namsoy 4m and SC Squire. However, significant differences (P < 0.001) were found between N accumulated in D1 and D2 when compared to SS. Across varieties and seasons, the highest N accumulation was recorded in SS, with the soybean variety TGx1987-18F accumulating the most N.
The highest amount of N fixed was recorded in SS, with the soybean variety TGx1987-18F fixing 95 and 73 kg N ha-1 in the short rainy season and 55 and 36 kg N ha-1 in the long rainy season at the Manyala and Lubino sites, respectively. No significant differences were found in N fixed between SS and D3. In general, significant differences (P < 0.001) in N fixed were detected between D1 and D2, D1 and D3, D1 and SS, D2 and SS, and D2 and D3 in the short and long rainy seasons, across sites and varieties. However, in the long rainy season, these strong variations in N fixed could not be detected between the planting densities across sites. In both seasons, at all sites, the soybean varieties Namsoy 4m and SC Squire accumulated almost the same amount of biomass and nitrogen, and fixed almost the same amount of N, when established at any of the planting densities D1, D2, D3 or SS.
Soybean and Maize Grain Yields
Across sites and seasons, higher yields of soybean were obtained under SS, while higher yields of maize were obtained under D1 (Tables 5 and 6), with higher yields of both crops achieved in the long rainy season. In the short rainy season, soybean grain yield was reduced by 50% in D1, but by 15% in D2 and D3, across sites and soybean varieties. Clear differences in soybean yield under the different planting densities could be observed in the long rainy season. For example, at the Manyala site, yield reductions of 28, 45 and 58% were recorded in D3, D2 and D1, respectively. At the Lubino site, yield reductions of 16, 28 and 45% were recorded in D3, D2 and D1, respectively. Across sites and seasons, the planting density D3 with the soybean variety SC Squire gave the highest grain yields of 1.2 and 0.7 Mg ha-1 in the short rainy season and 1.8 and 1.5 Mg ha-1 in the long rainy season at the Manyala and Lubino sites, respectively. Across sites, seasons and varieties, no significant differences in soybean grain yield could be observed between D3 and SS.
Maize grain yield was affected by planting density (Tables 5 and 6). Compared to sole maize (SM), general yield reductions of 40 to 50% and 20 to 40% were observed in D3 and D2 for the Lubino and Manyala sites, respectively, and a similar trend was observed in both the short and long rainy seasons. For D1, an increase in maize grain yield of up to 7% was recorded. No significant differences in maize grain yield could be observed between D1 and SM, except at the Manyala site during the short rainy season and at the Lubino site in the intercrop with Namsoy 4m in the same season. Higher maize yields were recorded in the long rainy season across planting densities, where the intercrops of maize and the soybean variety TGx1987-18F gave the highest yield (Table 6).
Nitrogen Balances of Intercrops
Net N balances of intercrops varied from -37 to +47 kg N ha-1 when only harvested soybean grain was removed, and from -107 to -5 kg N ha-1 when both soybean and maize grain were removed (Table 7). With only soybean grain removed, positive N balances were recorded at the Manyala site in all intercrops with TGx1987-18F, and at the Lubino site in D2 with TGx1987-18F and in D3 with Namsoy 4m. Overall, the N balance of intercrops was less negative with increasing soybean plant population; intercrops with TGx1987-18F had less negative N balances, while intercrops with SC Squire had more negative N balances.
Land Equivalent Ratio of Intercropping Systems
All planting densities (D1, D2 and D3) for the three soybean varieties lay above the line joining sole maize and the corresponding soybean variety grown as a sole crop, indicating an advantage of intercropping (Figure 2). Across sites and seasons, LER values were highest in D2 with SC Squire, high in D2 with Namsoy 4m and lower in D3 with TGx1987-18F.
Organic inputs are in short supply. Although the current results are based on few sites, they support the need for tailoring soil and crop management practices to site-specific conditions to increase crop productivity on smallholder farms in SSA (Giller, Schilt, & Franke, 2013).
Soybeans accumulated more biomass and fixed more N in the short rainy season than in the long rainy season (Tables 3 and 4). This may be due to the favorable temperatures and soil moisture conditions that prevailed in the short rainy season while the crop was at the vegetative growth stage. According to Jaetzold and Schmidt (2005), temperatures in Western Kenya vary between 25 and 29 °C in the short rainy season, the optimal range for soybean growth (Sanginga, 2003). Poor soybean grain yield in the short rainy season could have been caused, in large part, by moisture stress from dry spells that occurred from mid-November through December (Figures 1a and 1b). At the beginning of the dry spells, Namsoy 4m and SC Squire were at the R4 (seed-filling) reproductive stage and TGx1987-18F at the R1 (flowering) stage. In soybean, soil water deficit at the reproductive stage results in increased flower abortion, reduced pod number, fewer grains per pod and smaller grain size, which negatively affect grain yield (Frederick, Camp, & Bauer, 2001; Purcell & King, 1996).
Performance of Soybean in the Intercrops
The effects of intercropping three varieties of soybean with maize were significant for biomass yield, N accumulation, amount of N fixed and grain yield of both crops. The observed higher biomass and N accumulation of soybean in D3 relative to D2 and D1, and in D2 relative to D1, may be associated with the higher soybean plant population. Intercropping of maize and soybean at D3 is equivalent to reducing the maize plant population to half the recommendation (53,000 to 24,240 plants per ha) and the soybean plant population to two-thirds the recommendation (444,440 to 363,630 plants per ha). Our results agree with those of Zhang et al. (2015), who reported higher biomass accumulation and N uptake at a maize-soybean intercrop ratio of 1:3 compared to a ratio of 1:1. The good performance of soybean at low maize population may be attributed to the wide space available between alternate maize lines in the intercrop, leading to increased light use efficiency and enhanced photosynthesis of soybean.
Across planting densities, sites and seasons, the soybean variety TGx1987-18F accumulated more biomass and nitrogen and fixed more N than the other varieties. These results agree with those of Vanlauwe, Mukalama, Abaidoo, and Sanginga (2011), who reported a range of soybean total biomass of 1.7 to 4.5 Mg ha-1 in Vihiga district, Western Kenya. In their study, long-maturing varieties were found to accumulate more biomass and fix more N than short-maturing varieties. Long-maturing cultivars are known to grow slowly and take advantage of this to absorb and utilize more nutrients and solar energy and to fix more N that is converted to plant tissues (Giller, 2001). Rusinamhodzi, Corbeels, Nyamangara, and Giller (2012) reported similar observations when pigeon pea (Cajanus cajan) varieties with different maturity periods were evaluated in intercrop with maize in northern Mozambique.
The N fixed in SS was not significantly different from that in D3, but it was 50% and 22% higher than the N fixed in D1 and D2, respectively. The amount of N fixed in SS was within the range of 18 to 95 kg N ha-1 observed by Osunde et al. (2003) on various farms in Nigeria. The use of universal B values and of a reference crop with a root structure different from that of the legume crop tested is reported to influence the calculation of %Ndfa, especially when the δ15N of the reference is lower than the δ15N of the legume plant (Giller, 2001). In our study, the δ15N of the reference plant was 4.67‰ at the Manyala site and 4.42‰ at the Lubino site, almost double the δ15N recorded in the soybean samples, suggesting that the B value and the reference crop used were appropriate (Unkovich et al., 2008). However, the quantities of N fixed reported in the present study could be underestimates because they did not account for belowground contributions, comprising N associated with roots, nodules and rhizodeposition via exudates and decaying root cells and hyphae, which is estimated to be 31% of N fixed at the pod-filling stage (Ofosu-Budu et al., 1990). Other researchers, accounting for N fixed in belowground plant parts, have reported higher amounts of N fixed. For instance, Eaglesham, Ayanaba, Rao, and Eskew (1982) reported N fixation by soybean of 114 to 188 kg N ha-1 per season, and Sanginga (2003) of 24 to 168 kg N ha-1 per season.
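As background for the B-value discussion above, the 15N natural abundance estimate of %Ndfa has the standard form sketched below. The reference δ15N of 4.67‰ (Manyala) is from the text, while the soybean δ15N and the B value in the example are assumed purely for illustration:

```python
def pct_ndfa(delta_ref, delta_legume, b_value):
    """%Ndfa = 100 * (d15N_ref - d15N_legume) / (d15N_ref - B).

    delta_ref and delta_legume are the d15N signatures (per mil) of the
    non-fixing reference plant and the legume; B is the d15N of the legume
    when fully dependent on N2 fixation.
    """
    return 100.0 * (delta_ref - delta_legume) / (delta_ref - b_value)

# Reference plant at 4.67 per mil (Manyala site, from the text); hypothetical
# soybean sample at 1.5 per mil and an assumed B value of -1.39 per mil
print(round(pct_ndfa(4.67, 1.5, -1.39), 1))  # 52.3
```

The amount of N fixed then follows by multiplying %Ndfa by the legume's total shoot N, which is why underestimating shoot N (by ignoring belowground parts) underestimates N fixed.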
Soybean and Maize Grain Yield
Soybean grain yield ranged between 0.1 and 1.7 Mg ha-1 in the intercrops and between 0.4 and 2.6 Mg ha-1 in SS. Kihara, Martius, Bationo, and Vlek (2011) reported similar yields when soybean was intercropped or rotated with maize. SC Squire produced more grain because of its high yield potential, associated with many pods per plant and large, heavy grains, when compared to TGx1987-18F and Namsoy 4m. High yields of SC Squire relative to other soybean varieties have also been reported in other locations (Woomer et al., 2014), where SC Squire was identified among the best varieties in terms of grain yield in different agro-ecological zones in Southern, Central and East Africa.
Although maize yield was lower than reported elsewhere in SSA, it was within the range (0.6 to 5.0 Mg ha-1) reported in western Kenya (Wambugu, Mathenge, Auma, & VanRheenen, 2012). The observed lower maize yields with increasing soybean population can be attributed to the reduction in maize plant population, as in most cases no significant differences were detected between D1 and SM, treatments which had the same maize plant population. The tall stature of maize relative to soybean and its more widespread rooting system might have favored maize in D1. Dolijanović, Kovačević, Oljača, and Simić (2009) showed that, in intercrops of soybean and maize, the maize component, with its often faster growth rate, height advantage and more widespread rooting system, has an advantage in the competition with the associated soybean. The observed consistent but slight increase in maize yield under D1, relative to SM, in the long rainy season may be due to N obtained through nitrogen fixation by the associated soybean. It has been reported that cereal crops can benefit from symbiotic N fixed by a legume intercrop through N transfer (Wilson, Giller, & Jefferson, 1991). Such performance of component crops in intercropping was also observed by Fujita et al. (1992) and Layek et al. (2014). N transfer is considered to occur through root excretion, N leached from leaves, and leaf fall (Fujita et al., 1992).
Nitrogen Balances of Intercrops
Nitrogen balances of crop fields that include grain legumes vary widely and are affected by site conditions, grain harvest and N inputs (Giller, 2001). Although in the present study the N obtained from N2 fixation ranged between 39 and 168 kg ha-1 yr-1 (Table 7), this could not turn the N balances positive. Similar soybean N balances have been reported in Argentina, Brazil, China, Canada and Thailand (Salvagiotti et al., 2008) and in Nigeria (Singh, Carsky, Lucas, & Dashiell, 2003). In their review of biological nitrogen fixation studies conducted between 1999 and 2006, Salvagiotti et al. (2008) observed that the amount of N fixed by soybean was, in most cases, insufficient to replace all the N removed in harvested grain. Overall, planting the late-maturing variety TGx1987-18F resulted in a net addition, or less removal, of N from the soil, while the intermediate-maturing variety SC Squire resulted in net N removal from the soil. Similar to the present study, Singh et al. (2003) also obtained more negative N balance values for early- and intermediate-maturing soybean varieties when compared to late-maturing varieties. Short- and intermediate-maturing varieties are known to efficiently translocate N to the grain, thus leaving behind only a small proportion of N in the stover (Sanginga, Abaidoo, Dashiell, Carsky, & Okogun, 1996). Assuming a 31% belowground N contribution, the N balances in D1, D2 and D3 become less negative when only soybean grain is removed from the field. However, after removal of maize grain as well, the net N balances of the intercrops became more negative, suggesting that in less productive soils a combination of legume intercropping and mineral N fertilizer application would be the best option.
Efficiency of Intercropping Systems
All the intercrops had LER greater than 1, suggesting that at all planting densities maize and soybean complemented each other in the utilization of resources. The observed decrease in LER values with increasing soybean density suggests decreased efficiency in land resource utilization as soybean density increases. Zhang et al. (2015) observed LER reductions in soybean intercropped with maize in Northern China and attributed the LER depression to increased interspecific competition at higher soybean populations. With the exception of the Lubino site in the short rainy season, the greatest advantage of intercropping maize with soybean was obtained with the variety SC Squire, closely followed by Namsoy 4m, in D2. This intercropping pattern could be an effective way of optimizing soybean and maize production in areas like Western Kenya, where farmers are experiencing a reduction in the amount of available land due to the rapid increase in human population. The high grain yields obtained at Manyala did not translate into higher LER values, suggesting that the LER was influenced not by the quantity of grain yield obtained at a location but by the ratios of grain yields of maize and soybean in the intercrops and in the sole crops. Although D3 performed almost equally to SS in terms of biomass accumulation, N accumulation, N fixed and soybean grain yield, it had a lower LER when both soybean and maize yields were considered (Section 3.6), making it difficult for smallholders to adopt if maize is the major crop.
Conclusions
Aboveground biomass, N accumulation and N fixed were higher in D3 with the long-maturing soybean variety TGx1987-18F, while higher grain yield was recorded in D3 with SC Squire, a variety of intermediate growth duration. An increase in the soybean population in the intercrops implied a reduction in maize grain yield but improved the N balances when soybean and maize grain were removed from the system. LER values were above unity under all intercrop combinations, but they decreased with increasing soybean density. The greatest advantage of intercropping maize with soybean was obtained with SC Squire, closely followed by Namsoy 4m, in D2. Small-scale farmers in Western Kenya, and those living in areas with similar conditions in the highlands of East and Central Africa, can take greater advantage of biological nitrogen fixation by adopting a 1:2 maize-soybean intercropping system using intermediate growth types of soybean. Because the N fixed in the intercrop cannot compensate for all the N harvested in the maize grain, this practice should be combined with (i) the application of mineral N targeted to maize, (ii) the application of fertilizers specially blended for legumes (e.g., SYMPAL) to supply secondary nutrients and micronutrients, (iii) returning soybean residues to the field after removal of grain, (iv) liming of the soil to increase pH and reduce Al toxicity, and (v) soil and crop management to reduce drought stress.
Figure 2

Soil data from the experimental fields showed wide variability in major properties, with the soil at Lubino being strongly acidic and low in total N and exchangeable Mg (Table 1).
Table 1 .
Physico-chemical properties of the surface soil (0-20 cm) at the experimental sites
Table 2 .
Treatments and their corresponding equivalent plant populations on one ha. The treatment arrangement is as it was in one replication
Table 3 .
Above ground biomass accumulation, N accumulation and N-fixed of tested soybean varieties under different maize-soybean planting densities as recorded at Manyala and Lubino experimental sites; data are means of yields of 2012 and 2013 short rainy seasons
Table 5 .
Soybean grain yield and Maize grain yield under different maize-soybean planting densities as recorded at Manyala and Lubino experimental sites; data are means of 2012 and 2013 short rainy seasons
Table 6 .
Soybean grain yield and maize grain yield under different maize-soybean planting densities as recorded at the Manyala and Lubino experimental sites; data are means of the 2013 and 2014 long rainy seasons. *Note. SGY: soybean grain yield; MGY: maize grain yield; LSD: least significant difference between means; ** P < 0.01; *** P < 0.001.
Table 7 .
Nitrogen balances (kg ha-1 year-1) in maize-soybean intercropping at different planting densities at the Manyala and Lubino sites, Western Kenya. Balances are averages of the 2012 and 2013 short rainy seasons, and the 2013 and 2014 long rainy seasons
Editorial: Advanced Strategies for the Recognition, Enrichment, and Detection of Low Abundance Target Bioanalytes
Biosensors and bioassays provide important analysis capabilities for biological, medical, and environmental applications. Owing to active and creative exploration of emerging materials, devices, and methods, there have been great advances in biosensors and bioassays, making detection more sensitive, rapid, accessible, and intelligent. For biodetection, especially of low abundance targets, strategies for target recognition, enrichment, and detection are of vital importance. The goal of this Research Topic is to provide a forum for sharing recent research and novel ideas on related techniques. We have collected eight papers focused on biosensors for target or event detection: three research articles, two opinion articles, and three reviews. The techniques include microfluidics, electrochemistry, CRISPR, emerging materials, and so on. In terms of research papers, Zhu et al. report a microfluidic device integrated with an electrical impedance spectroscopy (EIS) biosensor to perform in-situ impedance measurement of yeast proliferation at single-cell resolution. Hydraulic shear force is utilized to detach a daughter cell from its mother cell, and time-lapse impedance measurement is performed to monitor the cellular process. The main highlight of this work is a combined sensor achieving in-situ and real-time monitoring of a dynamic event in single-cell reproduction, which uses simple EIS instead of image observation, saving the hardware resources needed for image recording. Meanwhile, the capacity to monitor for longer than several hours still needs to be improved. Salvador et al. present the development of a fluorescent microfluidic device based on an antibody microarray for therapeutic drug monitoring of acenocoumarol (ACL). A fully integrated microfluidic system is realized using a specific antibody for ACL on the glass slide in a microarray chip, with a fluorescent label for target binding. A limit of detection (LOD) of 1.23 nM is achieved in human plasma.
As a proof-of-concept point-of-care device, this system is automatic and sensitive, although the exact turnaround time is not mentioned. The selectivity also needs to be demonstrated for situations such as the presence of other molecules similar to ACL in the blood. Yuan et al. have developed a one-step electropolymerized biomimetic polypyrrole membrane-based electrochemical sensor for selective detection of valproate (VPA). The critical point of this strategy is to fabricate a molecularly imprinted polymer (MIP) with simple one-step electropolymerization and employ the MIP gate effect to realize a sensitive concentration-electrical signal response. This sensor has a LOD of 17.48 μM and good selectivity against five drugs used in combination therapy with VPA.

Edited and reviewed by: Guozhen Liu, The Chinese University of Hong Kong, Shenzhen, China

This work is a step toward conveniently controlling VPA concentrations for patients; the main shortfall is the lack of confirmatory tests in practical blood or serum, which is a much more complex matrix than deionized water. For opinions, Jiang et al. introduce the working modes and applications of biosensors for point mutation detection, including biosensors for RNA mutation rate detection of SARS-CoV-2. As an alternative to traditional methods, biosensors for gene mutation detection are fast, low-cost, and suitable for large-scale applications. Using ssDNA probes that hybridize with the point mutations, gene sequencing and PCR-based methods become unnecessary. Meanwhile, versatile signal conversion approaches enable various sensitive sensors. As a result, the LOD for point mutation can be as low as 0.5 fM. ssDNA is commonly designed as an oligonucleotide for biomolecule recognition, and its basic function of hybridization enables the detection of point mutations.
Jiang et al. describe a possibility for the application of artificial intelligence-based triboelectric nanogenerators (AI-TENGs) in biosensor development. The mechanism of triboelectric nanogenerators and their combination with artificial intelligence are introduced. Representative research on AI-TENG for biomolecules sensing and other biotechnology is then highlighted. TENG as an energy collection and conversion technique is of great value for portable biosensors needing a power supply, while more investigations might be needed on the integration of AI and TENG for biosensor development.
In terms of the reviews, Li et al. review recent advances in metal-organic framework (MOF)-based electrochemical biosensing applications. The main content includes the synthesis and modification of MOFs for electroactive materials and emerging biosensing applications of these MOF materials. The emphatically introduced applications are small-biomolecule sensing, biomacromolecule sensing, and pathogenic cell sensing. The specific improvement to figures of merit by MOFs, such as LOD, is not presented in this paper, and oxidative degradation in certain solutions might be considered when MOFs are applied in electrochemical sensors.
Liu et al. discuss nanomaterial-based immunocapture platforms for the recognition, isolation, and detection of circulating tumor cells (CTCs). Three parts are outlined: design principles for immunoaffinity nanomaterials, nanomaterial-based platforms for CTC immunocapture and release, and recent advances in single-cell release and analysis of CTCs. Nanomaterials of different shapes, sizes, and structures are introduced, showing good performance for agent linking and CTC isolation and detection. In this article, capture efficiency is highlighted as an important figure of merit. Although the overall detection time of most of the mentioned techniques is not given, this remains a comprehensive review of CTC separation and detection.
Chen et al. provide an overview of the recent development of clustered regularly interspaced short palindromic repeats (CRISPR)-based biosensing techniques and their integration with microfluidic platforms. A representative microfluidic CRISPR sensor achieves a LOD of 20 pfu/ml of purified Ebola RNA within 5 min, and most similar sensors are accompanied by high sensitivity and fast responses.
Working with the two powerful techniques, these biosensors, being applied in nucleic acid-based diagnostics, protein tests, metal ion monitoring, and protein/small molecule interactions screening, have great advantages for point-of-care testing of various bio-targets with low abundance.
In summary, the articles collected in this Research Topic include various emerging materials, methods, strategies, and platforms promoting technological innovation for recognition, enrichment, and detection of low abundance target bioanalytes. In these articles, microfluidic devices are most utilized for target collection and enrichment, while electrochemical biosensors are widely adopted transducers for signal conversion. Advanced and interdisciplinary fields such as CRISPR, MOF, and AI-TENG are represented in these articles.
For the goal of trace bio-target analysis, the decisive techniques are target enrichment and the sensing mechanism, possibly including signal amplification. Efficient target enrichment and sensitive signal transformation remain critical challenges, and efforts are being made to seek advanced strategies beyond the methods mentioned in this Research Topic. As a promising strategy, AC electrokinetics has been demonstrated to be a simple and versatile approach for rapid bio-target enrichment (Salari and Thompson, 2018). Meanwhile, researchers have developed an ultrasensitive sensing mode by monitoring the solid-liquid interfacial capacitance of an electrode array. Combining the two mechanisms, LODs of 0.1 fM (Zhang et al., 2021), fg/mL levels (Qi et al., 2022), or 10 CFU/ml (Zhang et al., 2020) have been achieved for ions, proteins, or bacteria, with turnaround times of less than 1 min. With technological advances continuously on the horizon, more sensitive, convenient, rapid, and intelligent detection is expected to emerge in the future.
AUTHOR CONTRIBUTIONS
JZ and JJW wrote the editorial, which was proofread and accepted by all the authors.
Systematic review and meta-analysis of modified facelift incision versus modified Blair incision in parotidectomy
Surgical removal is the treatment of choice for many neoplasms of the parotid gland. This meta-analysis aimed to evaluate the differences between parotidectomy using a modified facelift incision (MFI) and parotidectomy using a modified Blair incision (MBI). A systematic search of the available literature in PubMed, Embase and the Cochrane Library was performed. Studies of adult patients who underwent open parotidectomy with presumed benign parotid neoplasms based on preoperative examinations were reviewed. The surgical outcomes of the MFI and MBI groups were collected. Intraoperative and postoperative parameters, including operative time, tumor size, cosmetic satisfaction, and incidences of facial palsy, Frey’s syndrome and salivary complications, were compared. Dichotomous data and continuous data were analyzed by calculating the risk difference (RD) and the mean difference (MD) with the 95% confidence interval (CI), respectively. Seven studies were included in the final analysis. The pooled analysis demonstrated that the cosmetic satisfaction score was significantly higher in the MFI group (MD = 1.66; 95% CI 0.87–2.46). The operative duration in the MFI group was significantly longer than that in the MBI group (MD = 0.07; 95% CI 0.00–0.14). The MFI group exhibited a smaller tumor size (MD = − 2.27; 95% CI − 4.25 to − 0.30) and a lower incidence of Frey’s syndrome (RD = − 0.18; 95% CI − 0.27 to − 0.10). The incidence of postoperative temporary facial palsy (RD = − 0.05; 95% CI − 0.12 to 0.03), permanent facial palsy (RD = − 0.01; 95% CI − 0.06 to 0.03) and salivary complications (RD = − 0.00; 95% CI − 0.05 to 0.05) was comparable between the two groups. Based on these results, MFI may be a feasible technique for improving the cosmetic results of patients who need parotidectomy when oncological safety can be ensured.
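The pooled dichotomous outcomes above are summarized as risk differences with 95% confidence intervals. For a single study, a Wald-type RD and CI can be computed as in the sketch below; the event counts used in the example are hypothetical and not taken from any included study:

```python
import math

def risk_difference(events1, n1, events2, n2, z=1.96):
    """Risk difference p1 - p2 with a Wald-type confidence interval.

    events1/n1 and events2/n2 are the event counts and group sizes;
    z = 1.96 gives an approximate 95% CI.
    """
    p1, p2 = events1 / n1, events2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, (rd - z * se, rd + z * se)

# Hypothetical single study: Frey's syndrome in 4/50 MFI patients vs 13/50 MBI patients
rd, (lo, hi) = risk_difference(4, 50, 13, 50)
print(round(rd, 2), round(lo, 2), round(hi, 2))  # -0.18 -0.32 -0.04
```

In a meta-analysis, per-study RDs computed this way are then combined with inverse-variance weighting; a CI excluding zero, as in this example, indicates a significant between-group difference.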
inferiorly along the hairline rather than horizontally. With the use of the MFI, the cervical incision needed for surgery is moved further back into the hairline; thus, a visible cervical scar is avoided. Cosmetic satisfaction after parotidectomy may be improved by using the MFI, and several authors have reported the aesthetic superiority of the MFI. However, to date, no meta-analysis has been published that evaluates the difference between these two techniques. In the present study, the authors conducted a systematic review of related articles and present the combined results of the postoperative and intraoperative parameters after the use of the MFI and the MBI in parotidectomy.
Materials and methods
Literature search. This meta-analysis was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement 7 . Two authors (LJH and YCL) extensively and independently searched PubMed, Embase, and the Cochrane Library for studies of interest published before December 2020. The keywords in our search process included "parotid", "parotidectomy", "facelift", "rhytidectomy", "cosmetic" and "esthetic". Moreover, the references of the included articles were also reviewed to identify other potential studies.
Study selection. The PICO (population/intervention/comparison/outcome) components were as follows: P (adult patients who underwent open parotidectomy with presumed benign parotid neoplasms based on preoperative examinations), I (use of MFI in parotidectomy), C (use of MBI in parotidectomy), O (intraoperative and postoperative parameters, including operative time, tumor size, cosmetic satisfaction, incidences of facial palsy, Frey's syndrome and salivary complications).
The inclusion criteria were as follows: (1) original research articles with either prospective or retrospective study design; (2) articles published in the English language; (3) articles that included adult patients who underwent open parotidectomy with presumed benign parotid neoplasms based on preoperative examinations; (4) studies comparing intraoperative and postoperative outcomes between the MFI and MBI groups; and (5) studies including follow-up of at least 3 months after surgery. The exclusion criteria were as follows: (1) studies that included patients with known parotid malignancies before surgery; (2) studies using endoscope-assisted or robot-assisted surgery; (3) studies using a fibrin sealant after parotid surgery; (4) studies with flap or fascia reconstruction after parotidectomy; (5) studies without a control group; (6) articles not published in English; and (7) review articles, short reports, letters to the editor and cadaveric studies.
The MFI described in the present study includes a preauricular incision that extends around the origin of the earlobe, following the retroauricular sulcus. The incision then curves toward the occipital direction and can be continued with a segment along the hairline as needed. The temporal scalp segment and horizontal segment over the occipital scalp in the traditional facelift incision are not included in the MFI. The MBI, on the other hand, starts in the preauricular skin crease, continues posteriorly around the lobule to the mastoid region, and extends inferiorly into a cervical skin crease. The segment running parallel to the zygomatic arch used in the original Blair incision is not included in the MBI used in the present study (eFigure 1).
Data extraction. Data were independently extracted by 2 researchers (LJH and YCL). The quality of the included articles was independently assessed by two researchers (LJH and YCL) using the Newcastle-Ottawa Scale 8 . Any discrepancies in the study bias classification were resolved by discussion among authors until a consensus was achieved.
Outcomes. The outcomes of this meta-analysis included cosmetic satisfaction, operative duration (hours), tumor size (centimeters), and incidence of postoperative facial palsy, Frey's syndrome and salivary complications.
Data analysis.
The results of interest were analyzed with Comprehensive Meta-Analysis software (version 3; Biostat, Englewood, NJ). The mean difference (MD) was used to compare the cosmetic satisfaction score, operative duration and tumor size between the MFI and MBI groups. The risk difference (RD) was used to compare the incidence of postoperative facial palsy, Frey's syndrome and salivary complications between the MFI and MBI groups. When necessary, the mean and standard deviation were estimated according to the methods described in previous studies 9, 10 . The overall effect was calculated using a random-effects model. Grading of Recommendations, Assessment, Development and Evaluations (GRADE) was used to assess the quality of the evidence for each outcome 11 . Heterogeneity among studies was analyzed using the I 2 statistic, which calculated the proportion of overall variation attributable to between-study heterogeneity. An I 2 value exceeding 50% suggested moderate heterogeneity, and an I 2 value exceeding 75% suggested high heterogeneity 12 . Potential publication bias was analyzed using the Egger intercept test and funnel plots when more than 10 studies were present per outcome 13 . A 2-tailed P value of less than 0.05 was considered statistically significant.
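As an illustration of the computations named above (random-effects pooling of mean differences and the I² heterogeneity statistic), the following sketch implements DerSimonian–Laird pooling. The study-level effects and variances are hypothetical numbers for demonstration only, not data from the included studies; the actual analysis was performed in Comprehensive Meta-Analysis.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study-level effects.

    Returns the pooled effect, its 95% CI, and the I^2 statistic (percent)."""
    w = [1.0 / v for v in variances]                     # inverse-variance (fixed-effect) weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights incorporate tau^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # share of variation beyond chance
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# hypothetical mean differences in cosmetic score (MFI - MBI) and their variances
md, ci, i2 = random_effects_pool([1.2, 2.0, 1.5, 2.4], [0.10, 0.25, 0.16, 0.30])
```

The pooled estimate always lies within the range of the study effects, and I² falls between 0% and 100%, with the >50% and >75% cutoffs interpreted as described above.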
Results
Study selection. The literature search initially yielded 809 articles. A total of 226 duplicate studies were excluded; 552 studies were excluded after reviewing the titles and abstracts. Careful review of the full text was performed for the remaining 31 potentially eligible articles. Among these articles, studies without a control group, review articles, studies including fascia or flap reconstruction, short reports, studies using incisions other than the MFI, studies including known parotid cancer patients before surgery, cadaveric studies and studies not published in the English language were excluded. Seven articles were included in the final analysis [14-20]. A flow diagram describing the process of study selection and inclusion/exclusion is shown in Fig. 1. The keywords used and the literature search are described in eTable 1 of the Supplementary material.
Demographics.
The basic demographics of the included study subjects are listed in Table 1. A total of 707 parotidectomies were included for analysis. The overall male/female ratio was significantly lower in the MFI group than in the MBI group (P < 0.00). The PRISMA checklist can be found in the eTable 2 of the Supplementary material. The quality assessment for the included studies is shown in eTable 3 of the Supplementary material.
Outcomes. Cosmetic satisfaction.
Five of the studies recorded cosmetic satisfaction with a numerical scale, on which 10 points indicated the highest score of cosmetic satisfaction 14,16,17,19,20 . Four articles provided sufficient data for analysis 16,17,19,20 . One study evaluated the cosmetic results 3-4 months after surgery 19 , two studies evaluated the cosmetic results at least 6 months after surgery 17,20 , and one study evaluated the cosmetic results. The pooled analysis showed a significantly higher cosmetic satisfaction score in the MFI group than in the MBI group (MD = 1.66; 95% CI 0.87-2.46) (Fig. 2). We did not perform subgroup analysis due to the number of eligible studies. However, the study reported by Zheng et al. was found to be the source of heterogeneity 20 . The heterogeneity was obviously reduced after this study was removed from the analysis (I 2 = 19.84%). This group attempted to shorten the hairline segment of the MFI and only extended this segment along the hairline when necessary. Other authors indicated that they regularly used the hairline limb of the MFI along the hairline, which might cause heterogeneity between studies. GRADE indicated evidence of moderate quality for this outcome (eTable 4).
Operative duration. The pooled analysis of two studies 14,19 showed that the operative duration was significantly shorter in the MBI group than in the MFI group (MD = 0.07; 95% CI 0.00-0.14; I 2 = 0.00%) (Fig. 3A). GRADE indicated evidence of moderate quality for this outcome (eTable 4).
Salivary complications (salivary fistula/seroma). The pooled results of five studies showed that the incidence of salivary complications was comparable between the two groups (RD = − 0.00; 95% CI − 0.05 to 0.05).
Publication bias. Funnel plots are underpowered when fewer than 10 studies are included in a meta-analysis, according to the recommendations of the Cochrane Handbook 13 . As only seven studies were included in this review, publication bias was not evaluated.
Discussion
This meta-analysis of the existing English literature was performed to compare the differences between the MFI and MBI techniques in parotidectomy. One study in 2013 published a systematic review regarding parotid surgeries using the MFI; however, the authors did not perform meta-analyses, probably due to the lack of sufficient data 21 . The present study, which included seven studies and 707 parotidectomies, is the first meta-analysis comparing parameters between the MFI and MBI. According to the pooled results, the MFI group reported a significantly higher cosmetic satisfaction score than the MBI group. Compared with those who underwent MBI parotidectomy, patients who underwent MFI parotidectomy had a longer operative duration, a smaller tumor size, and a decrease in the incidence of postoperative Frey's syndrome. The incidence of postoperative salivary complications, temporary and permanent facial palsy was comparable between the two groups. The fundamental function of an incision is to provide adequate surgical field exposure and lesion access. With the advancement and development of surgical techniques, physicians have begun to explore possible incisions with satisfactory cosmetic outcomes without jeopardizing oncologic safety. The conventional incision described by Blair in 1912, with its modifications, has been well studied and established as the most utilized approach in parotid surgeries 2 . However, an obvious scar on the face and neck after using the MBI is unavoidable, even with meticulous closure. According to previous studies regarding parotid surgeries, long-term evaluation by questionnaires seems to indicate high scar dissatisfaction 22,23 . Some authors have even reported that scars represent the most important long-term issue 22 . Several types of incisions have been proposed to improve aesthetic results after parotid surgery, and the MFI may be the most widely used technique.
With the use of the MFI in parotidectomy, the scar is hidden behind the tragus, retroauricular sulcus and natural hairline. The postoperative scar is inconspicuous or visible only under close inspection. Five of the included articles in the present meta-analysis recorded cosmetic results after surgery. Among these studies, the study by Wasson et al. was the only one to report a higher mean cosmetic satisfaction score in the MBI group than in the MFI group. However, the authors did not provide a statistical comparison between the two groups; thus, the study was not included in our pooled analysis 13 . The other four studies all showed a higher satisfaction score in the MFI group than in the MBI group. The pooled results indicated that the use of the MFI significantly improved the cosmetic satisfaction score after parotidectomy.
The results of the present study show that the operative duration for parotidectomy is approximately 4.2 min longer with the use of the MFI than with the use of the MBI. This result is intuitive because the use of MFI requires a greater extent of flap dissection to expose the surgical field. However, some authors have indicated that the time needed for flap dissection may decrease as the surgeon becomes more experienced. Other factors, such as the surgery type, tumor size or location, may also be associated with the operation duration 24 . The present study also showed that the parotid tumor size was smaller in the MFI group than in the MBI group. Although different medical facilities may have different protocols for choosing the incision type and comparable tumor sizes between the two incision groups have been reported 18,19 , the presented pooled data suggest that physicians seem to consider using the MFI for patients with smaller tumors. One of the included studies suggested that MFI is more suitable for tumors in the lower and posterior portions of the parotid gland 20 , while several authors indicated that MFI can be used in parotidectomy regardless of tumor location 17,19 . However, the information regarding tumor location is limited and not reported with a universal method, which makes pooled analysis difficult. Future studies focusing on the relationship between tumor location and incision type may be useful.
Facial palsy leads to both functional impairments and esthetic complaints, negatively affecting the quality of life of patients. Preservation of facial nerve function is therefore one of the most critical surgical steps in parotidectomy. In the present meta-analysis, both the incidence rates of temporary and permanent facial palsy were comparable between the two groups.
One cadaveric study from 2010, which compared the achieved surgical field with the MFI and MBI approaches, also revealed no significant difference in the extent of exposure 25 . The use of electromyography in intraoperative facial nerve monitoring was introduced in 1970, and it has become increasingly popular in recent years. One review article from 2020 suggested that the risk of temporary and permanent facial nerve weakness after primary parotid gland surgery may be decreased with the use of a nerve monitoring system 26 . However, only one of the articles included in this study described the use of a nerve monitoring system in the surgical process 18 . The proper use of a nerve monitoring system may not only help to protect facial nerve function but also potentially decrease the time needed for surgery during parotidectomy with the MFI approach.
Frey's syndrome is caused by anatomical communication between the sweat glands of the face and the severed postganglionic parasympathetic nerve fibers supplying the parotid gland. Three of the included studies recorded the incidence of Frey's syndrome after surgery. However, none of these studies described the use of objective methods such as Minor's test in the diagnosis of Frey's syndrome. The true incidence may therefore be higher than reported if objective examinations are used. The pooled results in our analysis demonstrated that the incidence rate of Frey's syndrome was lower in the MFI group than in the MBI group. Tumor size has been considered a significant predictor of Frey's syndrome after parotidectomy 27 . The tumor size in the present study was smaller in the MFI group than in the MBI group, which may partly explain the lower incidence of Frey's syndrome in the MFI group. Other factors, such as the extent of surgery and the histological type of the tumor, have also been reported to be associated with the occurrence of Frey's syndrome 28 .
A sialocele is defined as the accumulation of saliva in the parotid region after parotidectomy, and a salivary fistula can occur if these fluid collections drain onto the skin. Our results showed that there was no significant difference in the incidence rate of salivary complications between the MFI and MBI groups.
The authors acknowledge the limitations of this study. First, we included only 7 retrospective studies in this meta-analysis. More well-controlled studies may be required to further confirm these results. Second, the types of parotidectomy, the size of tumors, the time of follow-up and the time to assess cosmetic satisfaction may all have potential influences on the outcomes of interest, and these results need to be interpreted with caution. Third, we were not able to analyze the recurrence rate because only three of the included studies provided the data regarding tumor recurrence, and the follow-up period was 18 months at most. Parotid tumors are reported to recur between 4.7 and 9.1 years, which exceeds the follow-up periods of these studies [29][30][31] . Despite these limitations, the present meta-analysis still provides evidence for the use of different incisions in parotidectomy.
Conclusion
In conclusion, the patients who underwent parotidectomy with the MFI demonstrated a significantly higher cosmetic satisfaction score than those who underwent parotidectomy with the MBI. In our study, compared with the use of the MBI in parotidectomy, the use of the MFI in parotidectomy was associated with a longer operative duration, a smaller tumor size, and a decrease in the incidence of postoperative Frey's syndrome. In addition, we observed a similar rate of postoperative salivary complications, temporary and permanent facial palsy between the MFI and MBI groups. Optimal local disease control is still the primary aim of surgical intervention in parotid tumors. Physicians may consider using the MFI for patients with particular cosmetic concerns, such as younger patients, when oncological safety can be ensured.
Technology Forecasting Using a Diffusion Model Incorporating Replacement Purchases
Understanding the nature of the diffusion process is crucial for the sustainable development of a new technology and product. This study introduces a replacement diffusion model that leads to a better understanding of the growth dynamics of a technology. The model operates in an environment with multiple competitors and overcomes the limitations of existing models. The model (1) consists of a diffusion model and an additional time series model; (2) separately identifies the diffusion of first-time purchases and that of replacement purchases; (3) incorporates players' marketing-mix variables, affecting a new technology's diffusion; and (4) characterizes consumers' different replacement cycles. The proposed model is applied to South Korea's mobile handset market. The model performs well in terms of its fit and forecasting capability when compared with other diffusion models incorporating replacement and repeat purchases. The usefulness of the model stems from its ability to describe complicated environments and its flexibility in including multiple factors that drive diffusion in the regression analysis.
Introduction
The sustainability of competitive advantage has been an important issue in management research [1,2]. For example, Datar et al. [3] addressed the importance of sustainable market share gains, especially in a fast-cycle high-technology industry. To enjoy sustainable pricing and market share advantage, firms should understand the structural characteristics of demand in the industry [1]. Adner and Zemsky [2] also emphasized demand-side factors, such as consumer utility, in analyzing firms' sustainability of competitive advantage. In today's dynamic and constantly changing business environment, the life cycle of a new technology is shortened, and competition between players in the market is intensifying. To plan the sustainable development of new technologies, therefore, a novel model considering the changing market environment is needed for mid- to long-term forecasts.
In this perspective, innovation diffusion models are very useful for companies to develop a strategic plan feasible enough to secure a sustainable competitive advantage [4]. They are based on models from the fields of biology, epidemiology, and ecology [5]. Mansfield [6] suggested an alternative diffusion model that follows a new technology's simple logistic growth, representing word-of-mouth effects. Bass [7] offered analytical and empirical evidence for the existence of an S-shaped pattern of diffusion by suggesting a general model that reflects innovative and imitative factors. Simple though it may be, the Bass model has been used widely to analyze demand and diffusion in such fields as management, policy, economics, and marketing. The Bass model has been improved upon to reflect replacement and repeat purchasing [8-11], competition between different technologies [12], and diffusion at the brand level [13,14], among others.
As Parker [15] suggested, first-time adopters make their purchase only at the beginning of a technology's life cycle and, therefore, replacement and repeat purchases become important factors over the long run in the diffusion of most technologies. Islam and Meade [16] also demonstrated the importance of modeling replacement components for better total sales forecasting, especially for consumer durables. Given that many technologies' replacement cycles become ever shorter, and that consumer preferences could change rapidly in the current competitive market environment, consideration of replacement purchases by existing users becomes a fair assumption for analyzing the diffusion of both durable and nondurable technologies.
Diffusion models that take into account replacement and repeat purchases have a long history in the forecasting literature. Representative examples of early works are Dodson and Muller [17], Lilien et al. [18], Mahajan et al. [19], and Kamakura and Balasubramanian [20], which incorporated consumers' subsequent purchase behavior, as well as their first-time purchase behavior. Recently, a few studies used novel methods, such as choice models [21] and agent-based simulations [22], to analyze innovation diffusion of repeat-purchase technologies. The diffusion models incorporating replacement and repeat purchases are widely applied to analyze the market dynamics of several products, as well as technologies, such as ethical drugs [23], motor vehicles [24], televisions [25], PC processors [26], and mobile handset units [27]. Despite their usefulness, such models are hampered by some limitations. First, some of them limit the research focus to nondurable technologies, such as pharmaceuticals [10,18]. Second, although other studies, such as Bass and Bass [28] and Bayus et al. [9], propose models that apply to durable products, those models are limited in that they neglect to account for marketing efforts and firms' strategies that affect consumers' subsequent purchase behavior. Third, most studies taking replacement and repeat purchases into account were applied only to a single technology category. Therefore, they failed to incorporate the competitive environment into the model, even though technology diffusion at the brand level is an important issue in most industries. Another drawback of existing models is that they do not separately identify first-time purchases and replacement purchases and, instead, use only aggregate sales data. Finally, the existing models do not estimate the replacement purchase rate, which may vary over consumers' different replacement cycles.
The purpose of the current study is to present a new innovation diffusion model that accounts for replacement purchases, which overcomes the aforementioned limitations and enables a researcher to discuss an application involving a durable consumer technology. The proposed model is applied to South Korea's mobile handset market. This application is believed suitable for verifying our diffusion model because South Korea's mobile communication services market has become saturated, and demand for mobile handsets caused by the need for replacement has maintained the mobile handset market's growth. For the mobile handset application, we compare our proposed model's fit and forecasting performance with that of other diffusion models incorporating replacement and repeat purchases to verify its effectiveness. This study will provide firms with useful tools to plan an innovative and sustainable diffusion process of an alternative technology from a long-term perspective.
This research is organized as follows: The next section depicts the suggested model's conceptual and analytical underpinnings. Section 3 describes the data and the empirical analysis of the diffusion of South Korea's mobile handset market, followed by comparisons with other diffusion models that account for replacement and repeat purchases. The concluding section presents the study's theoretical implications and suggested extensions.
Model Specifications
The two major components of the model are sales to first-time purchasers, who try a technology initially, and sales to replacement purchasers, who have purchased a technology previously. This can be represented by the basic equation

n(t) = s(t) + r(t), (1)

where n(t) are the technology's total sales at time t, s(t) are sales to first-time purchasers, and r(t) are replacement sales. If we specify the number of players in the technology market as N, the total sales of player i consist of sales to first-time purchasers and replacement sales,

n_i(t) = s_i(t) + r_i(t), i = 1, ..., N. (2)

Note that Equation (2) adds up (across players) to Equation (1). Player i's sales to first-time purchasers are a share of the category-level first-time sales,

s_i(t) = w_i(t) s(t), (3)

and the category-level first-time sales are driven by the diffusion process

s(t) = m [F(t) − F(t − 1)], (4)

where F(t) is the cumulative distribution function of triers of the technology up to time t and parameter m indicates the number of potential triers in the technology market. To estimate the sales to first-time purchasers, Equation (4), we must specify the functional form for the underlying cumulative distribution function, F(t). Various diffusion models, such as the original Bass [7] model and a generalized Bass model with marketing-mix variables [29], are available for F(t). The number of triers of the technology at time t is estimated in Equation (4) using the diffusion model, and that estimate is applied in Equation (3). The restriction Σ_i w_i(t) = 1 is added to reflect that the summation of each player's market share equals 1 for first-time purchasers. We explore this in more detail subsequently.
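As a minimal numerical sketch of the first-time-purchase component (category-level sales from the increment of F(t), then a split across players whose shares sum to 1), the snippet below uses an illustrative logistic F(t) and made-up player shares; neither is the paper's estimated specification.

```python
import math

def first_time_sales(m, F, t):
    # Category-level first-time sales in period t: m * [F(t) - F(t-1)]  (Eq. 4)
    return m * (F(t) - F(t - 1))

def split_across_players(shares, s_t):
    # Player-level first-time sales; shares must sum to 1  (restriction in Eq. 3)
    assert abs(sum(shares) - 1.0) < 1e-9
    return [w * s_t for w in shares]

# illustrative logistic CDF standing in for the diffusion model's F(t)
F = lambda t: 1.0 / (1.0 + math.exp(-0.3 * (t - 10)))

s_t = first_time_sales(44e6, F, 5)                # category first-time sales in quarter 5
by_player = split_across_players([0.5, 0.3, 0.2], s_t)
```

Any cumulative trier distribution can be plugged in for F; the empirical section uses the Bass model instead of the logistic placeholder shown here.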
On the other hand, in diffusion by replacement purchasers at time t, the technology users' replacement cycle should be reflected because consumers' replacement cycles are not all alike. In the current study, we assume a replacement purchase rate γ_τj that varies over replacement cycle τ_j (j = 1, 2), provided that γ_τ3+ is the replacement purchase rate in the case of replacement cycles of more than τ_3 years. Then, r(t) is the sum over the existing user base of these cycle-specific replacement rates (Equation (5)). Replacement purchasers may adopt the new technologies of player i after taking into consideration multiple specific factors related to player i. In this study, we assume that consumers' replacement-purchase behavior is affected by the bandwagon effect and the marketing mix. Marketing-mix variables, such as price and advertising, affect replacement purchases of buyers. Therefore, it is important to know the impact of controllable marketing variables on the replacement cycle because resources can then be allocated more effectively to accelerate the purchase timing decision of consumers looking to replace [30]. Bandwagons, in turn, are diffusion processes whereby consumers adopt an innovation because of a bandwagon pressure caused by the sheer number of other consumers that have already adopted this innovation [31]. Bandwagon effects pervade high-technology industries, and economic analysis of them provides valuable insights into managerial implications [32]. As a proxy variable, we use market share data to reflect the bandwagon effect. We adopt price and product diversification as marketing-mix variables. As a proxy variable, we use the number of technology models of each player to reflect product diversification. Therefore, incorporating those determinants of consumers' replacement purchases, replacement sales of player i can be expressed as

r_i(t) = [ξ_i,band ms_i(t − 1) + Σ_j ξ_i,j x_i,j(t)] r(t), (6)

where ms_i(t − 1) is player i's market share at time t − 1 and x_i,j(t) are the marketing-mix variables of player i at time t. The subscript j denotes different types of marketing-mix elements such as price,
advertising, product diversification, etc. In Equation (6), the parameters ξ represent the effects and magnitudes with which each player's bandwagon effect and marketing mix affect replacement sales. For example, the parameter ξ_i,band represents the bandwagon effect on replacement purchases of previous users with regard to brand (player) i. The restriction is added to reflect that the summation of each player's market share equals 1 for replacement purchasers.
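One way to read Equation (6) is as an attraction-style allocation of the category replacement volume r(t): each player's share is driven by its lagged market share (bandwagon term) and its marketing-mix variables, with the shares normalized to sum to 1. The linear attraction form, the ξ values, and the player data below are assumptions for illustration, not the estimated specification.

```python
def attraction(ms_prev, x, xi_band, xi_mkt):
    # Bandwagon term (lagged market share) plus marketing-mix terms for one player
    return xi_band * ms_prev + sum(b * v for b, v in zip(xi_mkt, x))

def replacement_sales(r_t, attractions):
    # Split the category replacement volume r(t) across players; shares sum to 1
    total = sum(attractions)
    return [a / total * r_t for a in attractions]

# hypothetical players: (lagged market share, [price index, product-line breadth])
players = [(0.50, [0.9, 1.2]), (0.30, [1.0, 0.8]), (0.20, [1.1, 0.5])]
att = [attraction(ms, x, xi_band=1.0, xi_mkt=[0.2, 0.3]) for ms, x in players]
r_by_player = replacement_sales(1_000_000, att)
```

Normalizing the attractions enforces the restriction that replacement-purchase shares across players sum to 1, mirroring the first-time-purchase restriction.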
The model's final form, Equation (7), is derived by combining Equations (3) and (6). Equation (7) includes the form of a vector autoregressive (VAR) equation, in which player i's sales volume is expressed as a linear combination of its lagged value and the lagged values of the sales volumes of all other players in the category. This model is based on the assumption that each player's sales volume is affected by competing players' sales volumes, as well as its own previous sales volume. In addition, the above equation is expanded to include other exogenous variables, such as marketing-mix variables, as well as lagged endogenous values. A time series model, such as VAR, however, is applicable only for short-run forecasting because time series forecasts become ever more imprecise as the forecast horizon increases. To overcome the nonstationarity problem of the time series model, researchers have developed the error correction model (ECM), which uses co-integrated variables to transform variables' short-run equilibria into long-run equilibria. In other words, ECM is a methodology that matches economic variables' short-run and long-run behavior by including co-integrated variables with lagged terms as explanatory variables. See Hamilton ([33], Chapter 19) for details of this error correction model.
Our proposed model in Equation (7) can also be understood as a replacement diffusion model built on concepts such as ECM. Instead of ECM's co-integrated variables, our model introduces the variable represented by the diffusion model, which shows the supremacy of its long-run forecasts. The diffusion model provides medium- and long-run forecasts, such as an estimate of the ceiling point and estimates of the peak time and sales volume at the peak time [34]. Therefore, our model's structure consists of a diffusion model (the first term of Equation (7)), which provides long-run information generated by first-time purchasers, and a time series model (the second term of Equation (7)), which provides short-run information generated by previous users' technology replacement. In the next section, we will discuss the substantive findings on the mobile handset market and the results of the model's fit and forecasting performance.
Data Description
The study uses quarterly data for three players of mobile handsets in South Korea. One reason for applying the proposed model to data on mobile handsets is that a mobile handset is a typical product with a relatively short replacement cycle. For this reason, issues concerning the replacement and lifetimes of mobile handsets have been highlighted in several previous studies [21,27,35]. Moreover, South Korea is one of the markets with the shortest handset replacement cycles [36]. Thus, the mobile handset is an important target of empirical analysis from the perspective of practical managerial implications. Another reason is that the proposed model considers brand-level replacement purchases. The quarterly data for mobile handsets in the Korean market were one of the few brand-level sales data sets available at the time of our analysis. Although many players exist in the South Korean mobile handset market, we consider the two players that dominate the market, Samsung Electronics and LG Electronics (denoted as Player 1 and Player 2, respectively), plus an additional player that is defined by aggregating the sales of four other players and is denoted as Player 3. The remaining four companies are SMEs, and full data for estimation are not available. However, since aggregate handset data are available, we deduce the data for Player 3.
In addition, considering all players, the number of parameters to be estimated in the model increases, making it difficult to estimate significant parameters. The observation period used for our model's estimation stretches from the first quarter of 2000 to the first quarter of 2005 due to the limitation of available data at the player level. Even though sales data at the category level are available from the first quarter of 1995, sales data and marketing-mix data at the player level are available only from the first quarter of 2000. We chose to aggregate sales volume over the models of the products produced by each player because the focus of this study is to develop a replacement diffusion model at the brand level. Therefore, the primary goal of the model is to model consumers' behaviors, and the factors affecting them, when they purchase and repurchase a new mobile handset at the brand level. The subscriber data for mobile communication services and the aggregate sales data for mobile handsets are reported by South Korea's Ministry of Information and Communication [37] and by the National Statistical Office [38], respectively. The value of domestic sales revenue divided by the average price of mobile handsets is taken as the value of each player's sales volume data (i.e., the number of mobile handsets sold in each quarter) because of that figure's unavailability. To ensure the estimated data's reliability, we compared the estimated data to some actual data that we can observe in part. The South Korean service DART provides domestic sales revenue data and average price data for mobile handsets, and Cetizen [39], a mobile handset market research firm, provides data related to each player's number of technology models.
Model Estimation
Generally, consumers in the telecommunications market can use technologies such as mobile handsets and fax connections only after subscribing to a service. This means that users of a telecommunication technology always have to be subscribed to a telecommunication service. Therefore, we assume that new subscribers to a mobile communication service at time t are first-time purchasers of mobile handsets at time t. On the other hand, replacement purchasers of mobile handsets at time t are defined as consumers who have subscribed to a mobile communication service and tried new mobile handsets by t − 1. This assumption implies that replacement purchasers include both re-subscribers to a mobile communication service and subscribers switching to competing service providers, because they have already used mobile handsets.
We use the original Bass model as the diffusion model in Equation (4). Thus, the cumulative distribution function for the Bass model considered in our empirical analysis is

F(t) = (1 − e^(−(p+q)t)) / (1 + (q/p) e^(−(p+q)t)).   (8)

In Equation (8), parameters p and q are the innovation and imitation coefficients, respectively. Parameter p reflects the impact of activities such as advertising and promotion on adoption. Similarly, parameter q captures communications internal to the social system, such as the word-of-mouth effect. To estimate the Bass model, we use nonlinear least squares (NLS) estimation due to its nonlinear functional form. In addition, we include a seasonal dummy in the estimation process because the quarterly data contain seasonal fluctuations.
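As a concrete illustration of fitting the Bass model by NLS, the sketch below generates a synthetic quarterly adopter series from the standard Bass cumulative distribution and recovers p and q with m held fixed, as in the paper. The data, parameter values, and the omission of the seasonal dummy are simplifying assumptions, not the paper's actual estimation.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cdf(t, p, q):
    """Cumulative adoption fraction F(t) of the standard Bass model."""
    e = np.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

def bass_sales(t, p, q, m):
    """Per-period adopters: m * (F(t) - F(t-1))."""
    return m * (bass_cdf(t, p, q) - bass_cdf(t - 1, p, q))

# Synthetic quarterly adopter series generated with known parameters
m_true, p_true, q_true = 44e6, 0.02, 0.10
t = np.arange(1, 42)                      # 41 quarters (1995Q1 to 2005Q1)
sales = bass_sales(t, p_true, q_true, m_true)

# NLS fit with m fixed at 44 million, as in the paper
fit = lambda tt, p, q: bass_sales(tt, p, q, 44e6)
(p_hat, q_hat), _ = curve_fit(fit, t, sales, p0=(0.01, 0.1))
print(p_hat, q_hat)   # on noiseless data, recovers roughly 0.02 and 0.10
```

With real data the fit would include the seasonal dummy and noise, so the recovered coefficients would carry standard errors like the σ values reported below.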
The Bass model is applied to the diffusion of new subscribers in South Korea's mobile communication services market from 1995 to 2005. When m is estimated from the data, the parameter's value is smaller than the number of actual cumulative subscribers, which is unreasonable. Therefore, we set the number of potential subscribers to mobile communication services to about 44 million based on South Korea's population. Many existing studies of diffusion, such as Hahn et al. [10], estimate the diffusion model using fixed values of m. Given the number of potential subscribers, the estimated parameters p and q in the Bass model are 0.0216 (σ = 0.0123) and 0.0972 (σ = 0.0491), respectively. We performed a sensitivity analysis by increasing or decreasing m by 10%. Increasing m by 10% results in a 2% MAPE (mean absolute percentage error) in estimated new first-time purchasers, relative to the baseline of m = 44 million. On the other hand, if m is reduced by 10%, the MAPE is 8%. That is, the estimates are not significantly sensitive to the value of m.
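The sensitivity figures above rest on the mean absolute percentage error. A minimal sketch of the metric follows; note that the paper's 2% and 8% values come from re-estimating p and q under each perturbed m and comparing the implied first-time-purchaser series, which is not reproduced here.

```python
import numpy as np

def mape(actual, estimate):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return 100.0 * np.mean(np.abs((estimate - actual) / actual))

# Toy check: one 10%-high and one 10%-low estimate average to a 10% MAPE
print(mape([100.0, 200.0], [110.0, 180.0]))
```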
Next, our replacement diffusion model, Equation (7), is estimated using subscriber data estimated by the Bass model and other quarterly data. To simplify the model, we assume each player's market share for first-time purchasers, βi(t), is constant throughout the time period. To determine Equation (7)'s lag order, criteria such as the Akaike information criterion and the Schwarz information criterion, which choose the lag order to minimize some criterion function of the lag, can be used [40]. We determine the lag order, however, based on survey results reported by the market research firm Cetizen [39], indicating that the replacement cycle of South Korea's mobile handsets is between one and two years. Therefore, the parameters γτ1, γτ2, and γτ3+ in Equation (5) can be simply expressed as γ1, γ2, and γ3. The estimated parameters γ1, γ2, and γ3 provide information on the ratio of replacement purchasers with each replacement cycle, and those parameters are used to forecast the demand for the mobile handset.
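Equations (5) and (7) are not reproduced in this excerpt, so the following is only one plausible sketch of the lag structure described above: replacement demand at quarter t weights the cohorts that first purchased one quarter ago, two quarters ago, and three or more quarters ago by γ1, γ2 and γ3. The γ values and the series below are illustrative, not the paper's estimates.

```python
import numpy as np

def replacement_demand(first_time, g1, g2, g3):
    """Replacement purchases at each t as a weighted sum of the cohorts that
    first bought 1 quarter ago, 2 quarters ago, or 3+ quarters ago."""
    s = np.asarray(first_time, dtype=float)
    r = np.zeros_like(s)
    for t in range(len(s)):
        lag1 = s[t - 1] if t >= 1 else 0.0
        lag2 = s[t - 2] if t >= 2 else 0.0
        lag3plus = s[: t - 2].sum() if t >= 3 else 0.0  # everyone who bought at t-3 or earlier
        r[t] = g1 * lag1 + g2 * lag2 + g3 * lag3plus
    return r

s = [100, 120, 130, 110, 90]           # hypothetical first-time purchases per quarter
print(replacement_demand(s, 0.05, 0.176, 0.02))
```

In the paper the γ parameters are estimated jointly with the rest of Equation (7) rather than applied to a known series as here.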
In estimating our full model in Equation (7), the least squares estimators are inconsistent and biased because the endogenous variables are correlated with the disturbances (εi(t)). In simultaneous equations like Equation (7), the interaction of the variables causes this inconsistency and bias [41]. In this case, because an instrumental variable estimator can be used instead, we estimate our model's parameters using three-stage least squares, as proposed by Zellner and Theil [42] (for a discussion of the structure and estimation procedure of three-stage least squares, see Greene [41], Chapter 15). Only exogenous variables not included on the right side of Equation (7) can be used as instrumental variables. The instrumental variable used in this study is the square of sales to first-time purchasers, s(t)². The joint estimates of the nonlinear simultaneous equations' parameters were produced using TSP software. Table 1 shows our model's estimated coefficients. The estimates of parameters β1 and β2 are significant at 1%, and they are positive, as expected. We see that about 40.1% of new mobile communication service subscribers adopt Player 1's handset, and about 28% of new subscribers adopt Player 2's handset. Player 3's market share of new subscribers, 31.9%, is automatically derived from Equation (7)'s constraint condition. We compared each player's estimated market share to the actual market share in terms of actual aggregate sales to make several inferences about the difference between overall sales and estimated sales to first-time purchasers. The average market shares from 2000 to the first quarter of 2005 are reported as 44.6% for Player 1, 19.9% for Player 2, and 35.5% for Player 3. Players 1 and 3 occupy higher market shares in terms of actual overall sales than in terms of estimated sales to first-time purchasers, and vice versa for Player 2.
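The paper estimates the full nonlinear system by three-stage least squares in TSP. As a toy illustration of why an instrument is needed at all, the following two-stage least squares sketch on a simulated single linear equation shows OLS biased by an endogenous regressor while the instrumented estimate recovers the true coefficient. All numbers are synthetic; this is not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(1, 2, n)                         # instrument (think: s(t)^2)
u = rng.normal(0, 1, n)                          # structural disturbance
x = 2.0 * z + 0.8 * u + rng.normal(0, 0.1, n)    # endogenous regressor, correlated with u
y = 2.0 * x + u                                  # true coefficient on x is 2.0

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS is biased upward because cov(x, u) > 0
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2SLS: project X on the instrument set Z, then regress y on the fitted values
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
b_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print(b_ols[1], b_2sls[1])   # OLS drifts well above 2.0; 2SLS stays close to 2.0
```

Three-stage least squares extends this idea by additionally exploiting the cross-equation covariance of the disturbances in the full system.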
This indicates that the diffusion speed for replacement purchases of Player 2 is slow compared with that of the other players. This result thus implies that Player 2's market condition may include threat factors for its technology's diffusion, in that replacement purchasers have stimulated the demand for mobile handsets since the saturation of South Korea's mobile communication service market around 1999. The parameters γ1, γ2, and γ3, which describe the ratio of replacement purchasers with various replacement cycles, are not significant at a significance level of 5%, except γ2. From that, we can observe that about 17.6% of previous users who adopted mobile handsets at time t − 2 purchase new handsets at time t.
With two exceptions, all of the estimated parameters of variables related to replacement purchases are significant. First, the estimated values of ξ1band and ξ2band show that the bandwagon effect has a significant impact on the replacement-purchase behavior of previous users only with regard to Player 1. Therefore, we can infer that more users of Player 1 may further increase that player's market share. The results also suggest that Player 1 has been more successful in creating bandwagon effects than the others in the Korean mobile handset market. Firms in high-technology industries clearly understand the importance of bandwagon effects because a successful proprietor of a bandwagon technology generally has greater market power [35]. Second, both parameters of the price variables, ξ1price and ξ2price, are significant at a significance level of 5%. The signs of those parameters are positive, contrary to the general case showing a preference for lower-priced technologies. This phenomenon reveals that consumers prefer up-to-date technologies even though the price of mobile handsets increases as advanced technologies, such as MP3 and digital photography capability, are included in the handsets. In the mobile handset market, therefore, advanced functionality realized by new technology may have a greater effect on handset diffusion than price. The introduction of new telecommunication service content, in addition to advances in mobile terminal technology, can also contribute to strategies that seek more functionality. Consumers may exercise their discrimination by observing whether the technology is compatible with new telecommunication services. This indicates that the development of new functionality and the introduction of new mobile communication services need to proceed in parallel. Finally, the results for the estimated parameters ξ1diver and ξ2diver show that demand for mobile handsets is negatively affected by product diversification in the case of Player 1. These results are consistent with what has been reported by the market research firm iSuppli, namely that South Korea's mobile handset firms need to reduce the number of their product models and adopt a strategy, such as target marketing, to keep their dominant position in the world mobile handset market [43].
Sales to First-Time Purchasers and Replacement Purchasers in the Mobile Handset Market
Most diffusion models do not separately identify first-time purchases and replacement purchases because most data sources report only aggregate sales volume, that is, the summation of sales to both first-time purchasers and replacement purchasers. To overcome that shortcoming, the present diffusion model accounts for first-time purchases and replacement purchases separately.
Figures 1-3 show the estimated cumulative sales volume of mobile handsets in each quarter at the player level, and Figure 4 shows it at the category level, since 2000. The gap between sales to first-time purchasers and replacement sales has grown over time, although the two sales figures began at a similar level. In cumulative sales from 2000 to the first quarter of 2005, total sales of Player 1 consist of replacement sales of 85.9% and sales to first-time purchasers of 14.1%. In the case of Players 2 and 3, replacement sales cover 77.4% and 86.1% of each player's total sales, respectively. Overall, 84.3% of total sales in the mobile handset market are demanded by replacement purchasers. From those results, we can infer that growth of the mobile handset market is being driven mostly by replacement purchases, a phenomenon we expect will become even more pronounced given that the market for mobile communication service is becoming saturated. As of the first quarter of 2006, the penetration of mobile telecommunication service in South Korea had already reached 80%. We can surmise that the diffusion pattern of first-time purchases and replacement purchases in the world's mobile handset market is similar to that in South Korea's market, because the number of the world's mobile subscribers showed negative growth in 2001.
Therefore, to stimulate the mobile handset market, marketing to replacement purchasers should be given a higher priority than marketing to first-time purchasers. Specifically, the bandwagon effect, the incorporation of more sophisticated functions, and a smaller variety of product models may become more critical factors over time for market success in the mobile handset industry.
Comparisons with Other Diffusion Models Incorporating Replacement Purchases
Here we compare the fit and forecasting performance of the Lilien-Rao-Kalish (LRK) model [18], the Mahajan-Wind-Sharma (MWS) model [19], and the Hahn-Park-Krishnamurthi-Zoltners (HPKZ) model [10] to our study's proposed model through application to South Korea's mobile handset market. It seems meaningful to compare the proposed model with a variety of diffusion models because such a comparison can provide several implications from a modeling perspective. As noted in Section 1, however, the focus of our research is to develop a new diffusion model incorporating replacement purchases and to overcome the limitations of existing models. If we compared our model with the many other diffusion models that do not consider replacement purchases, our research would lose its focus. Therefore, here we compare the proposed model only with existing diffusion models incorporating replacement purchases. To ensure the comparison is fair, we exclude the variable describing product diversification from our proposed model. This makes the kinds of variables equal, as is the case in the other diffusion models incorporating replacement and repeat purchases.
The original LRK model is expressed in terms of adopters, not sales. After transforming the original LRK model into a sales model, as modified by Hahn et al. [10], the LRK model is expressed as where xi(t − 1) and xj≠i(t − 1) are marketing-mix variables of player i and competing players at time t − 1, respectively. In our study, player i's technology price and the competing players' average technology price are used as marketing-mix data. In Equation (9), mi represents the potential sales in the mobile handset market. Parameters β0 and β1 capture the effect of marketing on converting nontriers to triers, and β2 captures the effect of competition on converting previous triers to nonrepeaters. Finally, β3 captures word-of-mouth's effect on trial. Mahajan et al. [19] proposed a repeat-purchase diffusion model that uses only aggregate sales data. They base their model on the assumption that the word-of-mouth variable is nonuniform over time. The MWS model's basic framework is simple in that it ignores players' marketing-mix variables and consists only of purchasers and nonpurchasers. The MWS model expressed in our notation is where the parameters β0 and β1 reflect technology diffusion's innovative and imitative effects, respectively; the parameter φ captures the dynamic word-of-mouth effect; and β2 indicates the fraction of adopters in time t − 1 who continue to adopt in period t.
The HPKZ model has two forms, which emphasize the marketing effort's competitive aspect and its informative nature, respectively. In this study, we compare the competitive-aspect HPKZ model with our model, because that form is consistent with our formulation reflecting the players' competitive structure. The HPKZ model can be written as where xi(t − 1) is player i's marketing-mix variable at time t − 1, and qi(t − 1) is the number of cumulative adopters by time t − 1, which satisfies the relation qi(t) − qi(t − 1) = ni(t) − β3 qi(t − 1). Hahn et al. [10] estimate the model's parameters using an iterative procedure and ordinary least squares estimation because qi(t − 1) is not directly observed from the data. For the specific estimation, see Hahn et al. [10] (p. 229). However, we estimated the HPKZ model's parameters using nonlinear least squares (NLS) with mobile communication service subscriber data, which equals qi(t − 1).
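The identity qi(t) − qi(t − 1) = ni(t) − β3 qi(t − 1) rearranges to qi(t) = (1 − β3) qi(t − 1) + ni(t), so the unobserved cumulative-adopter series can be unrolled from per-period adopters once β3 is fixed. The sketch below uses hypothetical numbers to show the recursion; it is an illustration of the identity, not of the iterative estimation in Hahn et al. [10].

```python
def cumulative_adopters(n_series, beta3, q0=0.0):
    """Unroll q(t) = (1 - beta3) * q(t-1) + n(t) from the HPKZ identity
    q(t) - q(t-1) = n(t) - beta3 * q(t-1)."""
    q, out = q0, []
    for n_t in n_series:
        q = (1.0 - beta3) * q + n_t
        out.append(q)
    return out

# With a constant inflow the series converges toward n / beta3
print(cumulative_adopters([10, 10, 10], beta3=0.5))  # → [10.0, 15.0, 17.5]
```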
In each case, the estimates of the equation systems' parameters were produced using nonlinear least squares (NLS) in TSP software. To compare the models' fit and forecasting performance, each model's Bayesian information criterion (BIC) and mean absolute percentage error (MAPE) are computed. As the models have different numbers of parameters, the BIC is used as the fit criterion. Each model's BIC and MAPE are given in Table 2. The dataset's estimation period was 21 quarters. The result of the fitted BIC (Table 2a), which is measured as logs of fit over the estimation period, reveals that our model produces the lowest BIC (average fitted BIC = 11.34) for the most part, except for Player 3. The HPKZ model, which allows for marketing efforts' competitive effect and includes a repeat-purchase component, also indicates a good model fit (average fitted BIC = 11.37). On the other hand, the MWS model reports the worst model fit overall (average fitted BIC = 11.57). From those results, we can infer that consideration of marketing-mix variables, such as price, tends to have a significant impact on model fit, in that only the MWS model does not include marketing-mix variables.
To compare the models' forecasting performance, Table 2b reports the MAPEs of one-step-ahead forecasts, which measure MAPEs for out-of-sample forecasts from the first quarter of 1995 to the fourth quarter of 2004. Our model also produces the lowest MAPE values for Players 1 and 3. In summary, our study's proposed model shows good performance in terms of fit and forecasting, as it combines a diffusion model and a time series model that considers various realistic factors, such as marketing effort, replacement cycle, and bandwagon effect.
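The model comparison above penalizes extra parameters through the BIC. For a least-squares fit a common BIC form is n ln(SSE/n) + k ln(n), and the toy numbers below (21 observations, matching the estimation period, with hypothetical residuals) show that a model with one fewer parameter can win on BIC even with slightly larger residuals. This is a sketch of the criterion, not the paper's reported values.

```python
import numpy as np

def nls_bic(residuals, k):
    """BIC for a least-squares fit: n * ln(SSE / n) + k * ln(n)."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    sse = float(r @ r)
    return n * np.log(sse / n) + k * np.log(n)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((f - a) / a))

# Hypothetical residual vectors: the 5-parameter model fits slightly better,
# but not enough to overcome the ln(n) penalty for the extra parameter.
res_small = np.full(21, 2.0)   # 21 quarters, as in the estimation period
res_big = np.full(21, 2.1)
print(nls_bic(res_small, k=5), nls_bic(res_big, k=4))
```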
Discussion and Conclusions
As the marketplace for a technology evolves, consumer preferences also evolve. Replacement cycles for a new technology become shorter, and competition among players becomes fiercer. To describe and forecast such complicated markets, diffusion models need to address consumers' replacement patterns and the interrelationships among players. To that end, we have developed a new replacement diffusion model that can explain the diffusion of a technology at the player level and can forecast future demand for the technology, taking into consideration such factors as the bandwagon effect and the marketing mix. This model is especially appropriate for separating first-time purchases from replacement purchases.
We compared our model's fit and forecasting performance to that of other diffusion models that account for replacement and repeat purchases, including the models proposed by Lilien et al. [18], Mahajan et al. [19], and Hahn et al. [10]. As shown in the results of fitted BICs and forecast MAPEs, our model outperforms previously developed diffusion models despite the removal of a critical variable indicating product diversification. The results demonstrate that the combination of a diffusion model and a time series model incorporating realistic factors, such as marketing effort, replacement cycle, and bandwagon effect, contributes to the model's fit and forecasting performance. In particular, the results suggest that multiple players competing in a new technology market should not be treated as totally independent of each other, as their mixed behavior drives the diffusion process of each player's new technology.
As with any analysis, there are also limitations. Further research should explore diffusion with respect to several different technology categories, to see how well results derived from the model presented in this study can be applied to other datasets. Since our model is applied only to a single product category, its superiority in terms of fit and forecasting capability over other replacement diffusion models should not be generalized to all consumer durables. Future research must conduct detailed examinations of other product categories to test and verify the applicability of the proposed model. In addition, further research should investigate how the results drawn here change when other diffusion models or growth models, such as a multi-generation model or a logistic model, are used instead of the original Bass model. For example, if we can disaggregate the sales data of one player into different product models and assume that these form a series of technological generations, it is possible to apply a successive-generation model, such as the one by Norton and Bass [44]. A diffusion model incorporating the competitive relationship among players, such as [12,45], is also worth attempting as a more sophisticated modeling approach. Some parameters of our model may be biased because such competitive effects are ignored; the parameters of the price variables are the most likely candidates, because price is a key element both in marketing strategy [46] and in competitive diffusion models [47]. These variants of our model offer considerable promise for increasing the level of realism in diffusion modeling. Moreover, alternative specifications are available for the lag order of the time series model used in this research. It would be worthwhile to investigate the implications of such alternative specifications. Despite the limitations, our empirical results are promising. The replacement diffusion model described here should be useful in explaining the diffusion of new technology sales and in generating accurate long-term sales forecasts. Furthermore, the model has the flexibility to include marketing-mix variables in the regression process, the inclusion of which helps to explain differences across players and the mechanisms leading to replacement purchases. The proposed model can be applied to new technologies such as smart phones and energy storage systems, and it can be applied to new technologies in other countries where replacement purchases occur. We believe practitioners and academics alike have an interest in understanding new technology markets with competitive environments and in developing diffusion models.
Figure 1. Quarterly sales to first-time purchasers and replacement purchasers (cumulative since 2000): Player 1.
Figure 4. Quarterly sales to first-time purchasers and replacement purchasers (cumulative since 2000): the total of all players.
In Equation (2), first-time purchasers at time t decide to adopt the technology at time t, and then part of the triers adopt the technology produced by player i. If βi(t) is player i's market share in terms of first-time purchases, then si(t) is defined as
Table 1. Parameter estimates of the proposed model. ξprice and ξdiver represent the parameters of variables related to price and product diversification, respectively. All the parameters explaining Player 3's diffusion (β3, ξ3band, ξ3price and ξ3diver) are not reported in Table 1 due to the constraint conditions.
Table 2. Performance measures of diffusion models incorporating replacement and repeat purchases.
A Short Circular History of Vitamin D from its Discovery to its Effects

Copyright Royal Medical Society. All rights reserved. The copyright is retained by the author and the Royal Medical Society, except where explicitly otherwise stated. Scans have been produced by the Digital Imaging Unit at Edinburgh University Library. Res Medica is supported by the University of Edinburgh's Journal Hosting Service: http://journals.ed.ac.uk. ISSN: 2051-7580 (Online); ISSN: 0482-3206 (Print). Res Medica is published by the Royal Medical Society, 5/5 Bristo Square, Edinburgh, EH8 9AL. Res Medica, Volume 268, Issue 2, 2005: 57-58. doi:10.2218/resmedica.v268i2.1031
The discovery of vitamin D

It was as early as the mid-1600s that Whistler (1) and Glisson (2) independently published scientific descriptions (in Latin!) of rickets, caused, we now know, by a vitamin D deficiency. However, neither treatise recognised the crucial role of diet or exposure to sunlight in the prevention of this disease. Around 200 years later, in 1840, a Polish physician called Sniadecki realised that cases of rickets occurred in children living in the industrial centre of Warsaw but did not occur in children living in the country outside Warsaw. He surmised that lack of exposure to sunlight in the narrow, crowded streets of the city, where there was considerable pollution due to the burning of coal and wood, caused the disease. Such a view was poorly received at the time, as it seemed inconceivable that the sun could have any useful benefit on the skeleton. The prevalence of rickets increased as industrial processes and labour expanded and, by the end of the nineteenth century, this bone disorder was estimated to affect more than 90% of children living in such urban polluted environments in Europe.
Similarly, as Boston and New York City grew in the late 1800s, so did the number of cases until, in 1900, more than 80% of children in Boston were reported to suffer from rickets.
In 1918 Sir Edward Mellanby discovered that beagles, housed exclusively indoors and fed a diet of oatmeal, developed rickets, but that the addition of cod liver oil to the food treated the disease successfully (3). He wrote in 1921: "The action of fats in rickets is due to a vitamin or accessory food factor which they contain, probably identical with the fat-soluble vitamin." Various experiments by Hess, Steenbock and Black in the 1920s followed, in which excised pieces of rat skin were UV-irradiated or rat food was UV-irradiated. It must have been astonishing at the time to establish that both could be used as a dietary source to treat rats with rickets.
Concurrently, the first fat-soluble vitamin (A) and the water-soluble vitamins (B and C) were being discovered; the factor protecting against rickets was known to be fat-soluble and was given the next letter in the alphabet: D. It was classified as a vitamin, although it was recognised from the beginning that it was not necessarily required as a dietary constituent.
The chemical structures of the various forms of vitamin D were determined in the 1920s and 1930s by Windaus and colleagues (4) in Goettingen, Germany. Windaus was awarded the Nobel Prize in Chemistry in 1928 "for services rendered through his research into the constitution of the sterols and their connection with the vitamins". The biologically active form of vitamin D, found in the skin and called D3, was characterised in 1936 (see Figure 1), and was shown to result from the ultraviolet (UV) irradiation of 7-dehydrocholesterol. Thus vitamin D was established as a steroid. Very soon after this, the component in cod liver oil that prevented rickets was identified as vitamin D3.
Vitamin D in the diet is present as either vitamin D2, if the source is plant, or D3, if animal. Few foods naturally contain vitamin D. Most is found in oily fish such as salmon, meat and eggs. Fat spreads and breakfast cereals are fortified with vitamin D. In the States, orange juice, milk and some breads are also fortified. In the 1930s, vitamin D was added to many more American foodstuffs, including peanut butter and hot dogs, and even to a beer, marketed as having "sunny energy".
Too much vitamin D does you no good
It has been recognised for more than 50 years that too much vitamin D can result in intoxication, possibly due to the increased activity of 1,25(OH)2D. This is manifest by nausea, vomiting, poor appetite, weakness and weight loss. Calcium levels are raised in the blood, leading to a confused mental state and heart rhythm abnormalities. Calcinosis can also occur. There is no evidence that sun exposure, even at high levels, can cause vitamin D intoxication, and diet is also unlikely to do so, although this can happen on occasion. After the Second World War, excess amounts of vitamin D were added to some milk products, and this resulted in sporadic outbreaks in Britain of vitamin D intoxication in infants and young children (7). Such an outcome is not entirely past history, as vitamin D toxicity was reported as recently as last year in babies in Japan who had received prolonged feeding of a premature infant formula with a high vitamin D content.
With increasing interest by the general public in a "healthy" diet, it is possible that toxicity could occur nowadays from a high intake of vitamin D in supplements, such as multi-vitamin pills. The safe upper limit recommended for the ingestion of vitamin D is generally considered to be 25 μg/day for infants and 50 μg/day for all other ages, although some reports suggest that amounts considerably higher than these would still not represent a health hazard8.
Too little vitamin D does you no good
As vitamin D plays a major role in the growth, development and maintenance of bone health, any deficiency leads to mineralization defects with an increased risk of osteoporosis, osteomalacia and fractures in adults, and rickets in children with a decrease in their genetically programmed height. An exciting discovery was made in 1979 by Stumpf and colleagues9 that vitamin D receptors are present in many parts of the body, in addition to the obvious locations associated with calcium metabolism: the gastro-intestinal tract, bone and kidney. This work led to the idea that vitamin D deficiency may be important in various non-skeletal disorders. Subsequently, 1,25(OH)2D was demonstrated to inhibit the proliferation of several cell types, to stimulate them to differentiate and, most recently, to act as an anti-apoptotic factor. As a result of these various properties, many physiological functions have been attributed to vitamin D, including stimulation of insulin production, modulation of antigen-presenting cell and T lymphocyte activities, prevention of inflammatory bowel disease, photoprotection of skin and reduction in blood pressure (reviewed in 10).
In addition to this remarkable list, vitamin D has been proposed to lower the risk of several types of internal cancers and autoimmune diseases.
Evidence to support this hypothesis has been gathering over the past 20 years or so, with Cedric Garland and colleagues in the States being the first to note the association11. More recent work along similar lines has been carried out by William Grant12. The main indications have come from epidemiological studies at a population level in which a latitude gradient has been established for various tumours, such as colorectal, large bowel, breast and prostate. The results revealed that the lower the latitude, and hence the higher the ambient sun exposure, the lower the risk of developing or dying from these cancers. Similar studies reached the same conclusion when the autoimmune disease multiple sclerosis was considered13. For example, in Australia, where the genetic background of the population is similar throughout the whole country, the prevalence of multiple sclerosis is 12 per 100,000 people in N. Queensland at latitudes of 12-23°S and 76 per 100,000 people in Tasmania at latitude 45°S. It seems that high exposure to the sun during childhood and early adolescence is particularly related to a reduced risk of multiple sclerosis. Further reports have associated the consumption of vitamin D supplements with a lowered risk of cancer development.
So, how common is vitamin D deficiency? Many experts agree that babies who are entirely breast-fed (there is little vitamin D in human milk) and the elderly who seldom venture outdoors are frequently vitamin D deficient. For the ages in between, much controversy exists at present. Some argue that many working adults and children who do not spend much time outdoors, or who rarely expose their skin to sunlight, may be at high risk of vitamin D deficiency, especially during the winter period.
RES MEDICA
between the hours of 10 am and 3 pm in the spring, summer and autumn is crucial. As vitamin D3 is fat soluble, it can be stored in the body fat, thus providing a means of seeing us through the winter months when there is essentially no solar UVB irradiation.
Secondly, human behaviour with respect to sun exposure is very variable. In some cases, clothes are thrown off for lying in the full sun to develop a tan; in others, the increased risk of skin cancer induction due to excessive sun exposure is taken into account with the wearing of protective clothing, hats and sunglasses and the use of sunblocks. For example, a sunscreen with a sun protection factor of 8 (thus allowing 8x greater time in the sun without burning) reduces the capacity of the skin to produce vitamin D3 by more than 95%. What a dilemma: how to expose yourself to sufficient sun to ensure the production of vitamin D while, at the same time, not increasing your chances of developing skin cancer! Michael Holick, in particular, has put forward the view that the population at large in developed countries may be becoming vitamin D deficient. He published a book in 2004 called "The UV Advantage", in which he explained how we need solar irradiation on unprotected skin to create vitamin D. This point was considered contrary to government health warnings regarding the dangers of being out in the sun, and Holick was asked to resign late in 2004 from the Department of Dermatology at the Boston School of Medicine.
The consensus view at present is that we should expose ourselves to an "intelligent" amount of sunlight. The dose should certainly be less than that required to redden the skin. Indeed, exposure of the hands, arms and face two to three times weekly for 15 minutes on each occasion, when the weather allows, is probably sufficient14,15.
So the history of vitamin D is certainly not at an end. The story continues to unfold, even after 400 years of research, and more revelations will surely follow as further knowledge regarding this intriguing molecule emerges.
Medical Microbiology, School of Biomedical and Laboratory Sciences, University of Edinburgh, Teviot Place, EDINBURGH, EH8 9AG
For most people living "normal" lives, more than 90% of their vitamin D requirement is derived from exposure to the UV radiation in sunlight. The body has a huge capacity to produce vitamin D: for example, exposure of 6% of the skin surface to summer sunlight for approximately 30 minutes around noon on a clear day in the UK would be equal to ingesting about 10 μg vitamin D. A 25(OH)D blood level of between 50 and 125 nmol/L (20-50 ng/mL) is considered optimal, with levels below 25 nmol/L indicating severe deficiency. An interesting study published in 19956 involved the crew of an American submarine and revealed a steady decline in the 25(OH)D concentration from a starting level of 78 nmol/L to 48 nmol/L after 2 months under the sea. This was despite a Navy diet that included milk and breakfast cereals fortified with vitamin D. Since the 1960s, a daily dietary allowance for children of 10 μg vitamin D has been recommended; this was based on nothing more scientific than the vitamin D content of a teaspoon of cod-liver oil! In adults, 5 μg daily was recommended. Many experts today believe that these values are too low by several-fold.
First, the amount of UV radiation in sunlight varies markedly depending on factors such as the time of day, season, latitude, cloud cover and aerial pollution. Because of the zenith angle of the sun to the earth in the early morning and late afternoon and in winter, most UVB photons are efficiently absorbed by the ozone layer. As a result, little or no UVB reaches the skin and so the production of vitamin D3 does not occur. Therefore sun exposure
Figure 1: The metabolic pathway and functions of vitamin D.
Astonishing figures have been published recently, such as 40% of the US population, 48% of girls aged 9-11 years old and 80% of nursing home patients suffering from a vitamin D insufficiency. Even in areas of the world with intense insolation, such as Queensland, high rates of vitamin D insufficiency have been reported. Indeed, the lack of vitamin D has been called "an unrecognised epidemic" in adults over 50 years of age.
How do we ensure a "perfect" amount of vitamin D?
As almost all of our vitamin D normally comes from the action of sunlight on our skins, attempts have been made to calculate how much exposure is required to maintain adequate vitamin D levels for good health. For several reasons, such estimates are very difficult to establish.
Paleofluid Fingerprint as an Independent Paleogeographic Correlation Tool: An Example from Pennsylvanian Sandstones and Neighboring Crystalline Rocks (Tisia Composite Terrane,
In the basement areas of the southern Pannonian Basin, Central Europe (Tisia Composite Terrane, Hungary), Variscan blocks are essential components. The existing paleogeographic reconstructions, however, are often unclear and contradictory. This paper attempts to contribute to the paleogeographic correlation of the Tisia using paleohydrological features (e.g., vein mineralization types, inclusion fluid composition and origin) of the Pennsylvanian continental succession and neighboring crystalline complexes. Vein-type mineralization in the studied samples dominantly forms blocky morphological types with inclusion-rich quartz and carbonate crystals. The evolution of hydrothermal mineralization and host rock alteration in the study area comprises three major stages. The first one is characterized by chloritization, epidotization, and sericitization of metamorphic rocks together with subsequent formation of Ca-Al-silicate and quartz-sulfide veins (clinopyroxene-dominant and epidote-dominant mineralization). The related fluid inclusion record consists of high-temperature and low-salinity aqueous inclusions, corresponding to a reduced retrograde-metamorphic fluid phase during the Late Westphalian (~310 Ma). The next mineralization stage can be related to a generally oxidized alkaline fluid phase with a cross-formational character (hematite-rich alkali feldspar-dominant and quartz-dolomite veins). High-salinity primary aqueous inclusions probably originated from the Upper Permian playa fluids of the region. The parent fluid of the third event (ankerite-hosted inclusions) was derived from a more reductive and low-salinity environment and can represent a post-Variscan fluid system. Fluid evolution data presented in this paper support the view that the W Tisia (Mecsek–Villány area) belonged to the Central European Variscan belt close to the Bohemian Massif up to the Early Alpine orogenic phases.
Its original position in the Late Paleozoic was presumably to the northeast of the Bohemian Massif, north of the Moravo-Silesian Zone. The presented paleofluid evolution refines previous models of the paleogeographic position of the Tisia and puts constraints on the evolution of the Variscan Europe.
Introduction
In the basement areas of the Pannonian Basin, Central Europe (Hungary), pre-Variscan and Variscan blocks are essential components [1][2][3][4]. Late Variscan age granitoids are known in the Tisia Composite Terrane or Tisza Megaunit (e.g., ca. 330-360 Ma, Mórágy Granite Complex), where locally marine Silurian and terrestrial Permo-Carboniferous (meta)sediments are also preserved (Figure 1). Based on its Variscan and early Alpine tectonostratigraphic characteristics, the Tisia was located at the margin of the European Plate prior to a rifting period in the Middle Jurassic [5][6][7]. The existing paleogeographic reconstructions, based on the correlation of the Paleozoic and Mesozoic facies belts in the Alpine-Carpathian domain, are however contradictory (Figure 2). On the one hand, at the end of the Variscan cycle, the polymetamorphic complexes of the Tisia belonged to the southern part of the Moldanubian Zone, forming the European margin of the Paleo-Tethys [2,3,5]. In this view, the position of the Tisia can be placed east of the Bohemian Massif and of the Western Carpathians (Figure 2(a)).
On the other hand, the Bohemian Massif can project below the Eastern Alps and the Vienna Basin as a major basement promontory referred to as the Bohemian Spur [9,11]. According to this concept, the conjugate upper margin can be found at the northern edge of the Moesian Platform (Romania and Bulgaria), which is a large microplate between the Southern Carpathians and the Balkans [12]. Therefore, its pre-Middle Jurassic reconstruction between the Bohemian Spur and the Teisseyre-Tornquist Zone leaves no space for the Tisia in this segment of the European margin (Figures 1 and 2(b)). Consequently, the original paleogeographic position of the Tisia on the European southern margin had to be to the west of the Bohemian Spur, having the structural characteristics of syn-rift development during the latest Triassic and Early Jurassic (e.g., pronounced extensional half-grabens with the characteristic Gresten facies siliciclastic basin fill) [9,13]. This scenario was recently reinforced [14], suggesting a connection of the Tisia to the southern and/or southwestern part of the Bohemian Massif.
If the assumption that the Tisia was an integrated part of the Moldanubian Zone up to the latest Triassic is correct, at least some lithostratigraphic units from both areas should be characterized by similar Late Variscan and post-Variscan large-scale hydrothermal events and host rock alteration styles. Veins are common features in rocks and extremely useful structures to determine the pressure-temperature range, fluid composition, and fluid origin during their formation [15]. This paper attempts to contribute to the paleogeographic correlation of the Tisia Composite Terrane using paleofluid features of the Pennsylvanian (Upper Carboniferous) continental succession, the Téseny Formation, S Hungary. In addition, the results from previous vein petrological and geochemical studies of the Téseny Formation and neighboring crystalline complexes in the study area [16][17][18][19][20][21][22][23][24][25] have also been integrated into this paper.
Figure 1 (caption fragment): ... [1,8] and geologic framework of the Carpathian-Pannonian area [1,4]. Late Variscan age granitoids known in the Hungarian part of the Tisia are also indicated [2].
Figure 2 (caption fragment): (a) ... [5]. (b) Position of the Tisia for the Early Jurassic [9]. The present-day outline of the Tisia is the end-result of severe continental E-W extension during the Miocene and equally important Eo-Alpine N-S shortening during the Cretaceous [10]. Thus, a more likely pre-Albian map-view outline of the Tisia should have an isometric shape [9].
A multidisciplinary approach including vein petrography, cathodoluminescence (CL) and Raman microspectroscopy, X-ray fluorescence analyses, stable O and C isotope analyses, and fluid inclusion petrography and microthermometry was applied to unravel the tectono-sedimentary record of the Téseny siliciclastic deposits in the Slavonia-Drava Unit (Figure 1). The paleofluid evolution presented in this article refines previous models of the tectonic history of the Tisia and puts constraints on the evolution of the Variscan Europe.
Geological Setting
The Tisia Composite Terrane, corresponding to a structural mega-unit, forms the basement of the southern Pannonian Basin [4,6,7]. Within the crystalline basement, three terranes have been distinguished: (1) the Slavonia-Drava Unit, which can be subdivided into the Babócsa and Baksa Complexes (Subunits); (2) the Kunság Unit, including Variscan granitoids of the Mórágy Complex; and (3) the Békés Unit [4]. The study area includes the western Mecsek Mountains, which are part of the Kunság Unit (Mórágy Subunit), and the western flank of the Villány Mountains, which is part of the Slavonia-Drava Unit (Figures 1 and 3(a)). The crystalline complexes and the overlying Paleozoic and Mesozoic sequences show heterogeneous lithological and metamorphic characteristics, indicating various phases of geologic evolution [4,7]. The oldest Paleozoic rocks are the late Early Silurian Szalatnak Slate deposits (Szalatnak Unit), related to marine clastic sedimentation. In the Pennsylvanian, a molasse-type siliciclastic and, locally, coal-bearing sequence called the Téseny Sandstone Formation was deposited in a foreland basin. In the Permian, continental red beds together with volcano-sediments characterized the development of the study area [1,4,24,[26][27][28].
The nonmetamorphic (locally anchimetamorphic) Téseny Formation is interpreted as a fluvial system deposit. It unconformably overlies the Babócsa and Baksa metamorphic basement complexes and has a maximum thickness of 1500 m, occurring in the subsurface in the area between the Mecsek and Villány Mountains (Figure 3(a)). It is composed of conglomerate, sandstone, and siltstone; in addition, shale and coal seams also occur. These rocks contain a Namurian-Westphalian flora and Westphalian palynomorphs [24,25,[29][30][31].
Materials and Methods
For petrographic analysis of the Pennsylvanian Téseny rocks, the Mining and Geological Survey of Hungary provided a thin section collection from borehole Bm-1 (date of drilling: 1968) at depths of 848.9-1350.0 m below surface. Covered thin sections (n = 373) were studied by polarized light microscopy. The petrographic characterization of the observed veins (1 cm wide maximum), occurring in 27 thin sections, is based on papers by Bons [15,32]. Additionally, a thin section collection (n = 11) of the Eötvös Loránd University, Budapest, from the wells near the village of Téseny (date of drillings: 1967) was also used for petrographic analysis. After that, a total of 11 core samples from the boreholes Bm-1, T-3, T-5, T-6, and T-7 were newly collected for this study. Detailed vein petrographic descriptions using CL microscopy, X-ray fluorescence analysis, and fluid inclusion study were conducted on representative thin sections (quartz and quartz-carbonate vein types). Vein sample locations (n = 40) are shown in Figure 3(b). Conventional mineral abbreviations have been used throughout the manuscript [33].
Cathodoluminescence (CL) studies were carried out using a Reliotron cold cathode luminoscope operating at 8 kV and 0.7 mA at the host department. Photomicrographs were taken with an Olympus DP73 camera; exposure times varied between 1 and 3 min.
X-ray element maps of representative areas of selected samples and single point analyses were made in order to monitor compositional variations of the minerals using a Horiba Jobin Yvon XGT 5000 X-ray fluorescence spectrometer at the host department. In the case of carbonates, Ca, Mg, Fe, and Mn were measured, while Pb, Bi, Zn, Cu, Fe, As, Sb, Sn, Ag, Se, Te, Au, Ga, and Cd were analyzed for galena. Standardization was made against natural internal standards. During element mapping, the beam diameter was 100 μm and the accelerating voltage was 30 kV. Point analyses were run with a 100 μm beam diameter at 30 kV accelerating voltage for carbonates and at 50 kV for galena.
Fluid inclusions were studied in 75-150 μm double-polished thick sections prepared from the vein materials at the host department. A low speed sawing machine was used to cut the samples in order to avoid changes in the volumes of the fluid inclusions. Microthermometric measurements were carried out with a Linkam THMSG 600 heating-freezing stage operating over a temperature range from −190 to +600°C. Synthetic fluid inclusions were used for calibration at −56.6, 0.0, and +374.0°C. The accuracy of the data is approximately ±0.2°C under freezing and ±0.5°C under heating conditions. An LMPlanFI 100x objective lens (Olympus) was used to analyze the inclusions. The measurements of inclusions trapped in quartz began with freezing, while in the case of carbonate, the heating experiments were carried out first.
In the latter case, inclusions can suffer permanent deformation during the freezing experiment [34]. The cycling method [34] was used to determine the last ice melting temperature of fluids trapped in calcite. Determinations of the volume fractions of vapor bubbles (φvap) were obtained from area analysis in a two-dimensional projection of the fluid inclusions. Terms and symbols are used after Diamond [35]. A thermodynamic model to derive the behavior of high-salinity H2O-salt systems with complex composition does not exist yet [36]. In the absence of such a model, the salinities are calculated based on the equivalent mass% principle in the H2O-NaCl binary and H2O-NaCl-CaCl2 ternary model systems. Salinity calculations are based on Tm(ice) and Tm(Hh) data of fluid inclusions using the AqSo-Vir software by Bakker [37], corresponding to a numerical model [38]. Program module
BULK from the program package FLUIDS [36] was used to determine the main physical parameters, such as the homogenization pressure (ph), molar volume (Vm), and vapor volume fraction (φvap) of the fluid inclusions. This program uses the equations of state (EoS) of Zhang and Frantz [39] in the case of bulk fluid inclusions and the EoS of Krumgalz et al. [40] in the case of aqueous fluids.
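The principle of converting a measured final ice melting temperature into an equivalent salinity can be illustrated with a simpler approximation than the numerical model used in this study: the widely cited empirical Bodnar (1993) polynomial for the H2O-NaCl binary. The sketch below is an illustration of the method only, not the AqSo-Vir/FLUIDS software applied by the authors:

```python
def salinity_nacl_eq(tm_ice_c: float) -> float:
    """Mass% NaCl equivalent from the final ice melting temperature (deg C),
    using the Bodnar (1993) fit for the H2O-NaCl binary.
    Valid between 0 and -21.2 deg C (the NaCl-H2O eutectic)."""
    if not -21.2 <= tm_ice_c <= 0.0:
        raise ValueError("Tm(ice) outside the H2O-NaCl model range")
    theta = -tm_ice_c  # freezing-point depression, positive
    return 1.78 * theta - 0.0442 * theta**2 + 0.000557 * theta**3

# e.g., an inclusion with Tm(ice) = -5.0 deg C gives ~7.86 mass% NaCl eq.
print(round(salinity_nacl_eq(-5.0), 2))  # 7.86
```

Note that the H2O-NaCl-CaCl2 ternary system used for the hydrohalite-bearing inclusions requires both Tm(ice) and Tm(Hh) and is not covered by this binary approximation.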
Raman microscopic analysis of fluid inclusions was carried out at the Department of Mineralogy and Petrology of Montanuniversität Leoben using a Jobin Yvon LABRAM confocal Raman microspectroscope equipped with a frequency-doubled Nd-YAG laser of 100 mW capacity. Each sample was irradiated with laser light of 532.2 nm (green) wavelength. The instrument is characterized by a 4 cm−1 spectral resolution and a spatial resolution of a few μm3. The calibration of the spectrometer was made using a synthetic silicon chip, polyethylene, calcite, and a natural diamond crystal. Recording time was 150 sec for each spectrum, with 30 sec accumulation periods. Raman spectra of salt hydrates can be found in the range of 3000-3700 cm−1, with the most important peaks around 3400 cm−1 [41,42]. During Raman spectroscopy-assisted microthermometry, the Linkam THMSG 600 heating-freezing stage was mounted on the Raman spectrometer. In order to make accurate identification of ice and salt hydrates, spectra were recorded at temperatures lower than −170°C. This method is applicable with great efficiency to determine the last melting temperatures of salt hydrates and to distinguish the different types of salt hydrates from ice and from each other.
Stable isotope analyses of seven representative carbonate samples were made using a Finnigan delta plus XP mass spectrometer at the Research Centre for Astronomy and Earth Sciences of the Hungarian Academy of Sciences, Budapest. The sampling was done by microdrilling of the crack-filling carbonate, obtaining 0.1-0.3 mg powder. Each sample was reacted with purified phosphoric acid, producing CO2 gas which was analyzed by the mass spectrometer. The isotopic compositions δ13C (V-PDB) and δ18O (V-SMOW) of each sample are averages of replicate analyses. Precision of the measurements was ±0.1-0.2‰ for δ13C and δ18O. The 13C and 18O isotope results are reported in the standard δ notation as the difference in isotope ratio between the sample and a standard, expressed in per mil (‰), where δ = [(R_sample / R_standard) − 1] × 1000.
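The δ notation above is a one-line computation. The sketch below applies it; the 13C/12C reference ratio for V-PDB used here is a commonly quoted literature value, assumed for illustration and not taken from the paper:

```python
def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Standard delta notation: delta = (R_sample / R_standard - 1) * 1000,
    expressed in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Commonly quoted 13C/12C ratio of the V-PDB standard (assumed value):
R_VPDB = 0.011180

# A sample slightly depleted in 13C relative to V-PDB yields a negative delta13C:
print(round(delta_per_mil(0.011000, R_VPDB), 1))  # -16.1
```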
Vein Classification and Petrography.
Based on their geometric, textural, and mineralogical characteristics, the observed veins can be subdivided into four main groups: (1) blocky, (2) fibrous, (3) stretched, and (4) polytextured veins. Additionally, four subtypes of blocky veins can be distinguished, based on the dominant mineral phases. Hydrothermal alteration of the siliciclastic host rock is rare, although adjacent to some quartz-silicate-carbonate blocky veins, millimeter- to submillimeter-thick zones of feldspar alteration, carbonatization, and chloritization are present (Figure 4).
Blocky Veins
(1) Quartz Veins. This subtype contains almost exclusively quartz as a fracture filling mineral. Thickness of the veins is in a range of ~0.2-5 mm. They are closely parallel or at a 10-30° angle to the long axis of the drill core. Most of the quartz veins are filled almost entirely by euhedral and/or subhedral quartz crystals which are generally deposited directly on the vein walls. At some places, however, the vein walls are coated by microcrystalline iron oxide/hydroxide (goethite; <1 mm in thickness) preceding the quartz mineralization. Some of the quartz veins show elongate blocky texture where the c-axis of each crystal is close to perpendicular to the vein wall, and tiny open cavities are frequently observable among them. Most of the quartz crystals exhibit well-developed growth zonation patterns indicated by fluid inclusion bands. Additionally, faintly brown-luminescent adularia crystals subordinately occur either as replacement and/or overgrowth on blue-luminescent detrital K-feldspar grains of the sandstone wall rock or as a fracture filling mineral among the quartz crystals (Figure S1).
(2) Carbonate Veins. This subtype contains almost exclusively carbonate (mainly dolomite) as fracture filling minerals. The thickness of these veins is in a range of ~0.05-1 mm. Sparry dolomite crystals have curved crystal boundaries and undulose (sweeping) extinction (Figure S1).
(3) Quartz-Carbonate Veins. This subtype can be found only in the drill core samples from the borehole Bm-1, where these veins occur frequently. Their thickness is in a range of ~0.01-1 cm and they are at a 30-35° angle to the long axis of the core. The quartz-carbonate veins are composed of symmetrical fracture infilling. Within the veins, quartz is deposited directly on the walls as euhedral crystals, followed by a white colored sparry carbonate phase (mainly dolomite). This latter phase shows bulky habit and, in thin section, is characterized by a hypidiomorphic appearance with curved crystal boundaries (Figure S2). Quartz crystals show growth zonation that is marked by fluid inclusion assemblages, while CL microscopy reveals a more detailed internal growth zonation pattern. Cores of the carbonate crystals show significant turbidity owing to fluid inclusion clusters. Their rims are, however, clear and free of fluid inclusions. The majority of the carbonate crystals display undulose extinction but do not show any deformation twins (Figure S2). Raman measurements indicate that these crystals are dolomites (characteristic bands: ~1098, ~300, and ~176 cm−1; http://rruff.info/ [43,44]). Millimeter-sized blocky crystals of galena occur sporadically in the pore space among the dolomite crystals. The dolomite and galena assemblage is followed by a carbonate phase with a growth zonation pattern marked by fluid inclusion bands. This latter carbonate phase is ankerite (characteristic bands: ~1093, ~289, and ~172 cm−1; http://rruff.info/ [43,44]). Under CL microscopy, the rhombohedral dolomite crystals with curved faces are nonluminescent, while their very thin (<10 μm) clear rims have a characteristic red luminescence. The following ankerite shows fine-scale oscillatory zonation containing thin red- and nonluminescent bands. Tiny calcite crystals (~5-20 μm) with a bright orange luminescence can be found frequently as solid inclusions in this oscillatory zoned phase (Figure S2).
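Since dolomite and ankerite are distinguished here by closely spaced Raman bands (~1098 vs. ~1093 cm−1, etc.), the identification logic can be sketched as a nearest-match over the band triplets quoted above. This is a toy illustration; real phase identification relies on full spectra and reference databases such as rruff.info:

```python
# Main Raman band positions (cm^-1) quoted in the text for the two carbonates.
REFERENCE_BANDS = {
    "dolomite": (1098, 300, 176),
    "ankerite": (1093, 289, 172),
}

def match_carbonate(measured):
    """Return the reference phase whose band triplet minimizes the summed
    absolute deviation from the measured band positions."""
    def misfit(phase):
        return sum(abs(m - r) for m, r in zip(measured, REFERENCE_BANDS[phase]))
    return min(REFERENCE_BANDS, key=misfit)

print(match_carbonate((1097, 299, 175)))  # dolomite
print(match_carbonate((1094, 290, 171)))  # ankerite
```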
(4) Quartz-Silicate-Carbonate Veins. This subtype can be found in several investigated boreholes but occurs less frequently than the abovementioned blocky ones. It contains dominantly euhedral quartz crystals and sparry habit alkali feldspars, where the latter show a turbid internal texture caused by minute hematite inclusions. Variable amounts of chlorite ± kaolin minerals, botryoidal calcite, and sulfide minerals (mainly pyrite), with a minor amount of epidote and Ti-oxides, also occur in these veins (Figure S2). In a single sample from the lowermost part of the Pennsylvanian section (borehole Bm-1, 1062.1 m), chlorite and/or pyrite are accompanied by euhedral monazite and/or xenotime crystals (Figure S3).
Fibrous Veins.
Fibrous veins appear only in subordinate amounts and their thickness is less than 1 mm. They are emplaced parallel or at a very small angle (<20°) to the sedimentary bedding. The elongated fibrous crystals are parallel with each other and perpendicular to the vein plane. The observed internal microstructure indicates antitaxial growth morphology. Two subtypes can be distinguished: one with solid inclusion trails parallel to the vein walls, which occasionally contains quartz beyond the dominant carbonate fracture filling mineral, and another made of carbonate which frequently contains sinusoidal solid inclusion bands (Figure S4).
Stretched Veins.
Stretched veins occur in subordinate amounts and contain dominantly stretched crystals of quartz together with minor amounts of chlorite or carbonates (Figure S4). The thickness of these veins is in a range of 0.2-1 mm.
Polytextured Vein.
The polytextured vein is the rarest vein type in the investigated area (only a single studied sample). It contains fibrous carbonate on the vein wall, followed by blocky textured carbonate in the middle of the vein (Figure S4).
Mineral Chemistry.
In order to reveal the element distribution in the carbonate phases of the quartz-carbonate veins, X-ray element maps were acquired (Figure 5). The results indicate that the dolomite is chemically homogeneous. On the other hand, two chemically distinct parts can be observed in the ankerite. At the contact with dolomite, a thick band (50-200 μm) is enriched in Mn (AnkMn), while the other parts show no Mn enrichment, only the higher iron content typical for ankerite (AnkFe). Point analyses of dolomite indicate some iron substitution (Table S1). Their FeCO3 content varies between 8 and 10 mass%, while the MnCO3 content is <2 mass%. The composition of AnkMn shows 14-18 mass% FeCO3 and 5-8 mass% MnCO3, while the MnCO3 content of AnkFe is 2-5 mass% and its FeCO3 content varies in a wide range of 15-23 mass%.
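The compositional ranges above suggest a simple screening rule for assigning a point analysis to one of the three carbonate populations. The thresholds below are read directly off the reported ranges and are illustrative only, not a formal mineralogical classification:

```python
def classify_carbonate(feco3_masspct: float, mnco3_masspct: float) -> str:
    """Assign a carbonate point analysis to one of the three populations
    described in the text, using illustrative mass% thresholds:
    dolomite FeCO3 8-10% / MnCO3 <2%; AnkMn MnCO3 5-8%; AnkFe MnCO3 2-5%."""
    if feco3_masspct <= 10 and mnco3_masspct < 2:
        return "dolomite (Fe-bearing)"
    if mnco3_masspct >= 5:
        return "ankerite, Mn-rich band (AnkMn)"
    return "ankerite, Fe-rich (AnkFe)"

print(classify_carbonate(9, 1))    # dolomite (Fe-bearing)
print(classify_carbonate(16, 6))   # ankerite, Mn-rich band (AnkMn)
print(classify_carbonate(20, 3))   # ankerite, Fe-rich (AnkFe)
```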
Stable Isotopes.
A total of seven carbonate samples from quartz-carbonate veins were selected for stable isotope analyses (Table S3). The results are summarized in Table 1, and the raw data can be found in the Supplementary Materials (Table S4).
Quartz Veins.
In the well-grown quartz crystals, primary fluid inclusions can be distinguished due to their location along growth zones of the host crystal. In the euhedral quartz, three primary fluid inclusion assemblages (FIAs: QP1, QP2, and QP3) can be distinguished, corresponding to the temporal sequence of the assemblages (Figure 6). Along healed microcracks, tiny (<2 μm) pseudosecondary and secondary assemblages rarely occur. They contain one-phase liquid, but they are inappropriate for exact microthermometry.
In the QP1 FIA, two-phase (liquid and vapor, L+V) inclusions are predominant. In the QP2 and QP3 assemblages, one-phase pure liquid (L) and two-phase (L+V) inclusions can be found in nearly equal amounts (Figure S5). The two-phase inclusions are liquid-dominant and their φvap values are in a range of 0.10-0.17 for QP1, while values less than ~0.1 are typical for both QP2 and QP3. The shape of the two-phase inclusions is generally irregular. Their size varies in a broad range from 5 to 30 μm, while the one-phase inclusions are usually smaller (5-10 μm). Raman spectra acquired at room conditions (Tlab) from the vapor and liquid phases of the inclusions indicate that the inclusion fluid is an aqueous-electrolyte solution without any volatile content.
Regarding the QP1 to QP3 inclusions, ice nucleation occurred at low temperatures, and Tn(ice) values varied between −75 and −62°C (Table S4). During the heating procedure, the homogenization of the studied inclusions occurred into the liquid phase (L+V⟶L) without exception. Homogenization temperatures (Th) for the QP1 FIA lie in a broad interval from +88 to +145°C (n = 18); lower values are observed for the QP2 and QP3 FIAs (Th: from +63 to +89°C, n = 10, and from +50 to +77°C, n = 14, respectively; Tables 1 and S4).
Quartz-Carbonate Veins
(1) Quartz-Hosted Fluid Inclusions. Primary (P), pseudosecondary (PS), and secondary (S) FIAs occur in euhedral quartz crystals (Figure 7). Four primary FIAs are arranged along growth zones in the quartz phase (QCP1 to QCP4, respectively). Besides the primary ones, two pseudosecondary (QCPS1 and QCPS2) and one secondary (QCS1) FIAs are observable. The relative temporal succession of these assemblages is the following: QCP1⟶QCPS1⟶QCP2⟶QCPS2⟶QCP3⟶QCP4⟶QCS1. Two-phase liquid-vapor (L+V) and one-phase liquid (L) inclusions can be found in each primary FIA at room temperature (T_lab). In the QCP1, QCP2, and QCP3 assemblages, two-phase (L+V) and one-phase (L) inclusions occur in nearly equal amounts, but pure liquid (L) inclusions are in a higher proportion in the QCP4 assemblage. Most of the inclusions are irregularly shaped (Figure S6), and their longest dimension varies between 3 and … μm. During cryoscopic analyses, nucleation of a vapor bubble occurred in many one-phase (L) inclusions during cooling. This phenomenon was observable in each FIA, indicating a vapor nucleation metastability that may occur frequently in these inclusions. Ice nucleation temperatures are between −86 and −51°C. The initial melting of ice (T_i) occurred between −57 and −47°C in each FIA, and most of the measured data are around −52°C. Additionally, Raman microscopy revealed the presence of hydrohalite crystals besides ice (Figure S6). The final melting temperatures of both ice and hydrohalite were measured during the reheating procedure in each primary FIA. The last melting of ice occurred in the presence of a hydrohalite crystal in each case, and similar melting temperatures are observable in each primary assemblage (QCP1 to QCP4) (Tables 1 and S4). Two-phase fluid inclusions of the primary FIAs homogenized into the liquid phase (L+V⟶L) in each case.
The broadest range and the highest T_h values are observed for the QCP1 assemblage, where the data vary between +54 and +153°C (n = 18). From QCP2 to QCP4, the T_h values fall in a range of ~40-80°C (n = 39); moreover, in the QCP3 and QCP4 FIAs, an increasing proportion of values lie below +60°C (Tables 1 and S4).
(2) Carbonate-Hosted Fluid Inclusions. In the dolomite phase, fluid inclusions are not arranged along growth zones of the crystals, and inclusions along trails or healed microcracks are not observable. The inclusions form continuous clusters that fill the entire domain up to the clear overgrowth band (Figure 8). This latter phase is almost free of fluid inclusions except for a few minor (<3 μm) ones, which are unsuitable for microthermometry. The distribution of the studied inclusions follows the crystallographic directions, reflecting their primary origin; hence, the fluid inclusions of dolomite can be classified into one primary FIA (DP1). Inclusions of DP1 mainly show a two-phase (L+V) character (Figure S7), and one-phase (L) inclusions can be found in subordinate amounts. Regular, close to negative crystal-shaped inclusions are dominant, but inclusions with an almost irregular shape can also be found among them. Their size is very variable, and their longest dimension is in a range of 5-20 μm. Two-phase inclusions are liquid-dominant, and their φ_vap values are in a range of 0.15-0.2.
Owing to the fluorescence of dolomite, it was extremely difficult to obtain suitable Raman spectra from the inclusions both at T_lab and at lower temperatures. In the few cases where spectra were collected successfully, the shape of the curve indicates a high-salinity aqueous-electrolyte character of the inclusion fluid without any dissolved volatile components (Figure S7). Ice nucleation temperatures are around −80°C, and first melting occurs between −58 and −49°C. Last ice melting temperatures are in a range from −29 to −26°C (n = 10). Salt hydrate crystals could not be detected visually in the inclusions; however, the T_i values suggest the possible presence of this phase during the reheating procedure. Since salt hydrate phases were detectable neither visually nor by their Raman signal, the last hydrate melting temperatures could not be determined in this FIA. The inclusions homogenized into the liquid phase (L+V⟶L) during heating, and liquid to vapor homogenization was not observed. Homogenization temperatures are in a range of 127-167°C (n = 14; Tables 1 and S4).
In the Ank Fe phase, fluid inclusions are arranged along growth zones, suggesting their primary origin (Figure 8). They are not separated by inclusion-free growth bands; hence, they can be classified into one primary FIA, designated AP1. The AP1 assemblage contains irregular and negative crystal-shaped inclusions in nearly equal amounts, and their size varies in a range of 5-17 μm, similarly to the DP1 FIA. At room conditions (T_lab), two-phase (L+V) liquid-dominant (φ_vap ≈ 0.12) inclusions are predominant, and one-phase (L) inclusions can be found in subordinate amounts. In spite of the fluorescence of the host mineral, in some cases suitable Raman spectra were collected at T_lab. The appearance of these spectra indicates a low-salinity aqueous-electrolyte character of the inclusion fluid (Figure S7). During the cooling procedure, ice nucleation occurred around −44°C in each case. The first melting temperature could not be detected visually or by Raman spectroscopy. T_m(ice) values are in a very narrow range from −3 to −2°C in each inclusion (n = 10). Besides ice, other phases such as salt hydrate were not observed during reheating. Homogenization of the inclusions occurred into the liquid phase (L+V⟶L) between 60 and 110°C (n = 21), and the majority of the data are in a range from 80 to 100°C (Tables 1 and S4).
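To put the narrow T_m(ice) range of the AP1 inclusions in quantitative context, final ice melting temperatures in the H2O-NaCl system are conventionally converted to NaCl-equivalent salinities with the empirical fit of Bodnar (1993). The short sketch below is our illustrative addition, not code or a calculation from the original study, and it assumes the simple binary H2O-NaCl model:

```python
def salinity_wt_pct_nacl(t_m_ice_c):
    """NaCl-equivalent salinity (eq. mass% NaCl) from the final ice melting
    temperature, using the Bodnar (1993) empirical fit. Valid only for the
    H2O-NaCl system, i.e. T_m(ice) between 0 and -21.2 C (the eutectic)."""
    theta = -t_m_ice_c  # freezing-point depression in degrees C, must be >= 0
    if not (0.0 <= theta <= 21.2):
        raise ValueError("T_m(ice) outside the H2O-NaCl calibration range")
    return 1.78 * theta - 0.0442 * theta**2 + 0.000557 * theta**3

# AP1 range reported in the text: T_m(ice) from -3 to -2 C
low = salinity_wt_pct_nacl(-2.0)   # ~3.4 eq. mass% NaCl
high = salinity_wt_pct_nacl(-3.0)  # ~5.0 eq. mass% NaCl
print(f"AP1 salinity range: {low:.1f}-{high:.1f} eq. mass% NaCl")
```

For T_m(ice) of −3 to −2°C this yields roughly 3.4-5.0 eq. mass% NaCl, consistent with the low-salinity character inferred from the Raman spectra. Note that the fit is not applicable to the high-salinity DP1 inclusions, whose ice melting temperatures (−29 to −26°C) lie below the NaCl-H2O eutectic.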
Composition of Inclusion Fluids
The abovementioned narrow range of initial melting temperatures of ice in the QP1 to QP3 and QCP1 to QCP4 FIAs indicates that the analyzed fluids can be modeled by the NaCl-CaCl2-H2O model system (Figure 9, Table 1). In the studied DP1 fluid inclusions, the last melting temperatures of hydrohalite were not detectable; therefore, the composition of these inclusion fluids could not be modeled precisely. Altogether, two main fluid types can be distinguished: (1) quartz- and carbonate (dolomite)-hosted high-salinity aqueous fluids, and (2) carbonate (ankerite)-hosted low-salinity ones. The presented data suggest a paleohydrological communication between the crystalline basement and the overlying Téseny sedimentary rocks (see Figure 9) [17-19]. Therefore, the studied cross-formational fracture-vein systems can be interpreted as region-specific fluid mobilization events. As a novel approach, specific characteristics of the studied blocky veins (e.g., alteration and mineralization types, fluid compositions) in the study area are used for regional reconstructions (see the discussion parts below). Thin fibrous, stretched, and polytextured veinlets appear only in subordinate amounts in the study area; hence, their correlative merit is insignificant. Bedding-parallel veins of fibrous calcite and rare quartz, corresponding to the "beef" or "cone-structure" with sinusoidal inclusions [46,47], reflect the diagenetic environment. The Pennsylvanian coal-bearing Téseny Sandstone is a mature/overmature source rock with a vitrinite reflectance of ~3.35% [23]; therefore, horizontal fractures could be a result of high pore fluid pressures during burial alteration of organic matter (hydrocarbon generation) [47].
Possible Correlation within the Tisia: Pennsylvanian Continental Records
In the Békés Unit (Figure 1), within the Hungarian part of the Tisia, several deep wells with intermittent coring near the town of Szeged (Great Hungarian Plain, Algyő Basement High) penetrated greenish gray rocks described as Carboniferous breccia [30]. This breccia unit contains randomly oriented angular mica schist, gneiss, and quartzite fragments (up to 10 cm in diameter) in a mica-rich matrix. Tentatively, these fossil-free rocks were regarded as continental deposits tectonically covered by a Triassic sedimentary succession [30,48]. According to detailed investigations, a network of dark fine-grained veins was also observed, and these rocks were reconsidered as ultracataclasites. Therefore, the breccia was redefined as a tectonized part of the crystalline basement [48].
Both the fractured metamorphic basement rocks and the overlying Triassic sequences are hydrothermally altered, containing veinlets and dissolution vugs partially filled by saddle dolomite [49-51]. Under the cathodoluminescence microscope, saddle dolomite crystals exhibit a marked zonation, whereas fluorescence microscopy reveals that a significant part of the saddle dolomite crystals contains primary petroleum-bearing aqueous inclusion assemblages. Microthermometry performed on saddle dolomite-hosted FIAs suggests the presence of hot (135-235°C) and moderately saline brines (4-9 eq. mass% NaCl) during the precipitation [50]. This post-Middle Triassic vein generation differs from the dolomite-bearing veins in this paper; therefore, no extensible correlation can be given for the Téseny rocks eastwards.
The petrography and geochemistry of the Radlovac Complex, including the provenance of the Pennsylvanian metapsammites, were extensively characterized in the last few years [54,58,59]. According to these authors, the most common metasedimentary rocks of the Radlovac Complex are fine-grained metapelites (more than 70% matrix) and moderately sorted metapsammites (less than 40% matrix). Both groups have a similar mineral composition with dominant quartz, illite-muscovite, chlorite, and plagioclase, subordinate K-feldspar, paragonite, hematite, and rare carbonate minerals. Complex heavy mineral (monazite, xenotime) analyses suggested that one major source for the Radlovac Complex metasedimentary rocks was felsic igneous rocks of Variscan age. Additionally, the bulk chemistry of both metapelites and metapsammites pointed to felsic igneous rocks as protoliths, corresponding to a continental island arc geotectonic environment [54].
A dominant felsic protolith source is in good correlation with the provenance of the Téseny Sandstone, but this latter succession has a more immature framework composition with significant amounts of acidic volcanic and metamorphic rock fragments [21,24]. Comparing the whole-rock geochemical data of the Téseny Sandstone samples [21-24] with the published data from the Pennsylvanian metapsammites of the Slavonian Mts [54], a difference between their geochemical fingerprints is also obvious (Figure 11). The rock and mineral fragments in the Téseny sands and conglomerates were identified as coming from three sources: (1) a recycled Variscan orogenic area (collision suture and fold-thrust belt), indicated by the presence of metamorphic and sedimentary lithic fragments; (2) an uplifted plutonic (granite-gneiss) basement; and (3) an old (probably Variscan) magmatic arc, indicated by the lesser amounts of siliceous volcanic rocks [21,24]. These findings strongly suggest that the depositional basin of the Téseny rocks was spatially and/or temporally isolated from the Radlovac sedimentary basin in South Tisia.
Paleofluid Fingerprint of the Hungarian Part of W Tisia (Mecsek-Villány Area)
The distinct nature of the Pennsylvanian coal-bearing succession in the Slavonia-Drava Unit led to its paleohydrological comparison with the adjacent crystalline basement blocks (W Tisia; Figures 1 and 3). Within it, the Baksa Complex is the likely main source area for gneiss/metagranitoid and mica schist clasts from the Téseny sediments. Additionally, crystalline plutonic rocks together with microgranite dikes in the Mórágy Complex (Kunság Unit, Mórágy Subunit) represent the principal fine-grained plutonic source (Téseny aplite clasts) that fed the Pennsylvanian sedimentary basin during deposition of the Téseny Sandstone Formation [24]. In the polymetamorphic rock body of the Baksa Complex (Figure 3), a well-developed fracture-filling network filled by Ca-Al-silicate minerals and/or sulfides (Fe-Zn-Pb; dominantly pyrite and sphalerite) was documented by previous authors [61-63]. In the silicate-rich paragenesis, clinopyroxene-dominant, epidote-dominant, and feldspar-dominant vein types were distinguished [17-19], followed by quartz-carbonate (dolomite, calcite) veins in a later mineralization stage [16,17].
Additionally, the crystalline host rocks (gneiss, mica schist) of the Baksa Complex are strongly altered, exhibiting extensive epidotization, chloritization, albitization, and sericitization [18,61]. The pervasive hydrothermal leaching caused significant secondary porosity (cavities) in the altered domains, which was partially filled by albite and epidote; the total lack of the quartz phase is a characteristic feature of the metasomatized rocks. Fluid inclusions of the cavity-filling epidote show a character (T_h: 180-360°C; salinity: 0.2-1.6 eq. mass% NaCl) similar to that found in the Ca-Al-silicate veins [18].
In the Baksa Complex, the development of calcite was characterized by filling of the remaining pore space in the Ca-Al-silicate veins (first and second populations of calcite) and formation of crosscutting veins (second and third populations of calcite). Microthermometry of primary inclusions of the subsequent calcite1 and calcite2 generations (T_h: 75-124°C, 17.5-22.6 eq. mass% CaCl2 and T_h: 106-197°C, 2.9-6.3 eq. mass% NaCl, respectively) reflects a significant change in the evolution of the vein system [19]. Based on the high salinity and low T_h range of the earliest calcite, this carbonate phase was precipitated from downward-penetrating sedimentary brines or from descending meteoric water that infiltrated through evaporite bodies [17,19]. This interpretation was also supported by low values of the calculated fluid δ18O (−13 to −4‰, V-SMOW). Calcite in the Ca-Al-silicate veins, therefore, can represent the subsequent fluid circulation that led to precipitation of quartz, dolomite, and calcite in the quartz-carbonate veins [19].
Figure 11: Ternary discrimination diagrams [60] display the compositional trends among the Téseny and Radlovac Pennsylvanian samples and confirm that these sediments were dominantly derived from different provenance areas. Data come from [24] and [54], respectively.
In the postmetamorphic quartz-carbonate veins, corresponding to the relatively late mineralization stage mentioned above, a quartz⟶dolomite+calcite (cc1)⟶calcite (cc2) mineral sequence was observed, and traces of high-salinity fluids were detected [16,17]. The T_h values of the quartz-hosted primary FIAs are between 44 and 139°C. The dissolved salt content is very high in each fluid inclusion, exhibiting a NaCl-dominant composition (20.1-25.6 eq. mass% NaCl) with a minor amount of CaCl2 (1.5-6.0 eq. mass%). Additionally, in the vapor phases of each FIA, CH4 contents were detected by Raman microspectroscopy [16]. Regarding the carbonate-hosted (dolomite and cc1) inclusions, the T_h values fall in a range of 95-182°C, whereas the estimated salinity data are between 23.9 and 24.6 eq. mass% NaCl. These salt-rich paleofluids probably originated from the Permian and/or Triassic evaporites of the region, reflecting a significant fluid migration event between the crystalline basement and the overlying sediments [16,17].
In the westernmost part of the studied area (Figure 3), near the village of Dinnyeberki (Mórágy Complex, borehole 9017), fine quartz and uranium ore-bearing carbonate veinlets were documented in the weakly to moderately altered Mórágy-type granitoids [64,65]. Economically insignificant uranium ore veinlets developed along the fractures of the mylonitized granitoid rocks [65]. The earliest veins are thin quartz veinlets containing quartz and rare chlorite, whereas during a later substage, fractures were filled by carbonates, giving rise to several populations of calcite (and minor dolomite and siderite) veins. The mineralization in the carbonate veins comprises uraninite, coffinite, pyrite, and calcite; additionally, U-Ti-oxides (such as brannerite), monazite, Th-silicate minerals, rare xenotime, and zircon also occur. Aggregates of spherulitic or irregularly shaped accumulations of uraninite are usually rimmed by coffinite; this latter mineral commonly forms rims around pyrite grains. The uranium minerals show complex chemistry with elevated P, Y, and rare-earth elements (REEs). This vein-type uranium mineralization might represent ascendant hydrothermal (epi-telethermal) mineralizing fluids genetically related to an Alpine rejuvenation of an earlier pre-Alpine event [65].
It is important to note that pilot studies [17-19] suggested that there could have been a paleohydrological communication between the crystalline basement and the overlying Téseny sedimentary rocks during the cementation phase that resulted in the quartz-carbonate veins. Based on the results above, quartz- and dolomite-hosted high-salinity aqueous fluids can represent the same mineralization stage in both the Téseny and Baksa rocks (Figures 9 and 12). Furthermore, a probable genetic relationship was proposed between the Ca-Al-silicate vein networks of the neighboring Baksa Complex (i.e., the alkali feldspar-dominant vein type with an albite±K-feldspar⟶chlorite±adularia±prehnite⟶pyrite⟶calcite mineral assemblage) and similar veins (quartz-silicate-carbonate blocky veins) in the Téseny rocks [25].
Paleofluid Evolution of the W Tisia and Its Relation to Variscan Europe
As discussed above, the evolution of hydrothermal mineralization and host rock alteration in the Hungarian part of the W Tisia (Mecsek-Villány area) comprises several major stages. In this section, we compare the comprehensive set of fluid inclusion analyses from the studied area with published data from Variscan Europe, especially from the Moldanubian Zone of the Bohemian Massif, to test the presumed paleogeographic relationship between them.
In the Hungarian part of the W Tisia, the first stage is characterized by chloritization, epidotization, and sericitization of the metamorphic rocks together with the subsequent formation of Ca-Al-silicate and quartz-sulfide veins in the Baksa Complex (diopside⟶epidote±clinozoisite⟶sphalerite (pyrite) fracture-filling mineral succession) [17-19, 62, 63]. The corresponding fluid inclusion record consists of high-temperature and low-salinity aqueous inclusions. It was previously proposed [18] that this paragenesis could indicate a wall rock alteration related to the propylitic metasomatic family; however, the widespread chloritization, sericitization, and pyritization can also be connected, at least partially, with a generally reduced retrograde-metamorphic fluid phase [66]. Regarding the timing of the related hydrothermal events, 40Ar/39Ar and K-Ar ages for white mica from amphibolite facies rocks within the metamorphic basement of the W Tisia range from 307 to 312 Ma [69]. Formation of white mica in the study area coincides with the postorogenic extension of the Variscan crust and with the accompanying rapid exhumation of the orogenic root [70], which resulted in the retrograde-metamorphic equilibration of the high-grade metamorphic host rocks during the Late Westphalian (e.g., Massif Central and Bohemian Massif [66,71], respectively). It can therefore be suggested that the origin of the fluids giving rise to the clinopyroxene-dominant and epidote-dominant mineralization at Baksa was related to a greenschist-facies retrograde reequilibration of their amphibolite facies mineral paragenesis (Figure 13). This interpretation fits with the total lack of the clinopyroxene- and epidote-dominant vein types in the Westphalian Téseny rocks.
On the other hand, corresponding to the fracture-filling network of Ca-Al-silicate minerals at Baksa, a hematite-rich alkali feldspar-dominant vein type with chlorite (±epidote) and calcite also appears in the studied Téseny Sandstone (Figure S2). Additionally, mineralogical and petrographic features of albitization, chloritization (authigenic Mg-chlorite), and carbonatization were documented in the host sandstone (Figure 4 and [23,24]). Obviously, this mineralization stage can be related to a generally oxidized alkaline fluid phase and could be significantly separated in time from the previous one. The presence of hematite in the first-stage quartz-sulfide veins [62,63], conspicuously as a later-phase mineral, reflects the cross-formational character of the parent fluid, whereas the high salinity and low T_h range of the FIA for the latest-stage calcite in the Baksa Ca-Al-silicate veins could support the proposed effect of descending fluids [16,17].
In the Mecsek Mountains (Figure 3), the Permian sedimentary record contains playa lake and saline mudflat deposits (Boda Claystone Formation; Kungurian to Capitanian), including evaporites such as gypsum, anhydrite, and rare pseudomorphs after hopper halite [72-75]. Consequently, Na-rich playa fluids were available during the Late Permian in the study area. The crosscutting monomineralic quartz and quartz-carbonate veins with high-salinity aqueous inclusions both in the Baksa and Téseny rocks are more probably connected with the playa brines. Additionally, high-salinity fluids (T_h: 75-123°C, T_fm(ice): −24.6 to −16.9°C) were also identified in a metamorphic quartz lens from the Mecsekalja Zone metamorphic complex (Figure 3). They are possibly related to an active period of either of the fault generations crosscutting the Mórágy Complex. This high-salinity fluid generation can be related to a regional fluid flow event that predated the Mesozoic brittle deformation of the Mecsekalja Zone and is considered to be derived from evaporitic sequences of the overlying sedimentary pile [76,77]. This observation supports the timing of the high-salinity fluid flow event during the Late Permian. However, the appearance of galena and ankerite in the subsequent mineral paragenesis clearly indicates that their parent fluids derived from a more reductive environment, representing another mineralization stage during the post-Permian (Triassic?). Unfortunately, there is no clear evidence for the timing of these second (Late Variscan) and third (post-Variscan or Early Alpine) mineralization stages. However, in the Téseny quartz-silicate-carbonate blocky veins, chlorite is locally accompanied by euhedral monazite and/or xenotime together with pyrite and other opaque minerals (Figure S3).
Furthermore, the alteration of the Téseny wall rock is characterized by dissolution of quartz from the framework grains, followed by albitization, carbonatization, and chloritization (Figure 4). This strongly suggests that the second event may overlap with the Mórágy-type granite-hosted, vein- or fault-type uranium mineralization at Dinnyeberki [65], forming a peripheral part of the alteration halo in the lower part of the Téseny succession. As a consequence, the related alkaline fluids are considered to be of crucial importance for correlation with the Late Variscan large-scale hydrothermal events.
Hydrothermal mobilization of REEs and Y is strongly related to the mobilization of Zr and U and generally indicates an influx of oxidized alkaline fluids [78,79]. The relatively synchronous age of the U mineralizing events in the European Variscan belt (300-270 Ma) could suggest a similar mineralization condition, with long-term upper to middle crustal infiltration of oxidized fluids likely to have mobilized U from fertile crystalline rocks during the Pennsylvanian to Permian crustal extension events [80]. Alteration resulting in intensive leaching of quartz, pervasive albitization, and subsequent chloritization is characteristic of the formation of episyenites (aceites) described from numerous uranium deposits of the European Variscan belt [80-86], including hydrothermal vein-type, fault- and shear zone-hosted uranium deposits [66]. Within it, the Bohemian Massif is the most important uranium ore district in Europe [66, 83-85], where episyenitization and the associated uranium mineralization occur not only in granites (e.g., Schlema-Alberoda, Annaberg, Příbram, Okrouhlá Radouň, Lower Silesia) but also in high-grade metamorphic rocks (e.g., Vítkov II, Okrouhlá Radouň, Rožná, Olší).
In the Moldanubian Zone of the Bohemian Massif, based on previous papers [66,84,86], the origin of the uranium mineralization is associated with an infiltration of oxidized, alkaline, Na-rich playa fluids and/or basinal brines of the Upper Stephanian and Lower Permian basins into the crystalline basement along deep brittle structures that opened during the Late Variscan transcurrent tectonics and furrow formation (Figure 13). Infiltration of brines effectively leached uranium from U-bearing minerals in the crystalline rocks. During the subsequent mineralization event (280-260 Ma, the "uranium" or "ore" stage [66,86], respectively), newly formed albite is typically stained by hematite. Additionally, a very small amount of adularia (K-feldspar) was locally crystallized. In the chloritized and pyritized zones of cataclasites, uranium together with several Ti, Zr, Y, P, and REE phases was gradually precipitated due to the interaction of ore fluids with reducing host rock components. Post-uranium mineralization, corresponding to the Early Alpine transtension (240-220 Ma), resulted in quartz-carbonate-sulfide veins [66,83].
Fluids from the pre-uranium quartz-sulfide and carbonate-sulfide mineralization at the Rožná deposit have low salinities (0.7-2.1 eq. mass% NaCl), and primary inclusions were trapped at temperatures close to or lower than 300°C [66]. The calculated oxygen isotopic composition of the fluids (fluid δ18O, SMOW) was in a range from +7 to +8‰ for the siderite-sulfidic mineralization, whereas it was about +3‰ for the pre-uranium albitization [83]. For the uranium stage, T_h of the primary aqueous inclusions in quartz and carbonates ranges from 84°C to 206°C. The salinities are highly variable and range from 0.5 to 23.1 eq. mass% NaCl, resulting from the large-scale mixing of basinal brines with meteoric water [66,83]. Regarding fluids from the post-uranium mineralization, early quartz contains high-salinity (25.0-25.5 eq. mass% NaCl) primary aqueous inclusions with T_h of 165-178°C, whereas lower salinity and T_h values in paragenetically younger minerals indicate low temperatures (<100°C) for the latest stage of fluid activity [66]. The calculated fluid δ18O value (SMOW) was close to 0‰ [83].
A similar evolution of fluid types is characteristic of the Okrouhlá Radouň deposit (Figure 13). Albitization was caused by the circulation of alkaline, oxidized, Na-rich playa fluids, whereas basinal/shield brines and meteoric water were more important during the post-ore stage of alteration, corresponding to the post-uranium stage [66]. In this area, fluid δ18O values (V-SMOW) of post-ore calcite vary between −8.0 and +2.4‰ [87].
Very similar Late Variscan and post-Variscan hydrothermal events were published from the neighboring areas of the Bohemian Massif (Harz, Black Forest, Oberpfalz; Figure 1) [67,68]. These authors suggested that characteristic fluid systems with distinctive differences (e.g., fluids associated with metamorphism, Permian basinal brines) are regionally distributed in the Central European Variscan belt.
Paleogeographic Position of the W Tisia (Hungary).
The previously published correlations have revealed several similarities in the Variscan (Mississippian) evolution of the studied area (W Tisia) and the Moldanubian Zone of the Bohemian Massif [2,3,87]. For this reason, as a first step, we turned our attention to the Visean plutonic rocks (Figure 13). Significant and widespread durbachites seem to represent a distinct magnesio-potassic magmatic rock type derived from a strongly enriched lithospheric mantle source that is particularly characteristic of the European Variscan basement areas [88]. Within the Moldanubian sector of the Bohemian Massif, the emplacement of the (ultra-)potassic intrusions took place immediately after the tectonic ascent of the high-pressure (HP) rocks during a relatively narrow time interval between 335 and 338 Ma [89,90]. It has been suggested that these durbachite plutons can be subdivided into two parallel belts, corresponding to the western HP and Durbachite belt and the eastern one [89]. Apart from the widespread Barrovian-style regional metamorphism, the combined occurrence of a granitoid (durbachite) pluton and HP rocks was also reported in the W Tisia [2-4, 87, 91, 92]. High-K and high-Mg granitoids found near Mórágy, 50 km northeast of the study area, show great similarities to the intrusions in the eastern part of the South Bohemian Batholith (Rastenberg) [2]. Using U-Pb geochronology on zircon crystals from the Mórágy Complex, the genetic picture of the Mórágy granitoid pluton was refined [93], showing a bimodal age distribution (335.6 ± 0.74 Ma and 345.9 ± 0.95 Ma) and suggesting continuous crystallization over a long time interval (Figure 13).
On the other hand, HP metamorphism was proven by analyses of the eclogite and amphibolite in the Baksa Complex [87, 91, 92, 94]. In the SW Tisia, the eclogite shows great similarities to the eclogites of the Monotonous Series of the Moldanubian Zone. Additionally, K-Ar geochronological data on amphibole from garnetiferous amphibolite show cooling ages of 348 ± 13 Ma [87]. Furthermore, the mafic and ultramafic rocks of the Baksa Complex likely represent different segments of a juxtaposed consuming ocean. A subduction-related metamorphic evolution led to HP metamorphism of the lower amphibolite unit of the Baksa Complex, which was exhumed during a later phase, probably due to a reversal of the transport direction from subduction to uplift [92]. The relict evidence for a Variscan tectono-metamorphic event, possibly corresponding to the Moravo-Moldanubian Phase (345-330 Ma [89]), strongly suggests a genetic relationship between the crystalline rocks of the W Tisia and the eastern HP and Durbachite belt of the Bohemian Massif.
Spatially associated with the Variscan medium-grade metamorphic rocks of the Baksa Complex and with the Mórágy granitoids, serpentinites also occur (Gyód and Helesfa serpentinite bodies, respectively) in the W Tisia ( Figure 3). According to a recent publication [95], there exists a broad similarity in the composition and evolution between these serpentinites and similar rocks of some Sudetic ophiolites (Góry Sowie Massif in the West Sudetes). The aforementioned data seem to fix the original position of the W Tisia basement within the northern periphery of the eastern margin of the Bohemian Massif ( Figure 14).
As mentioned above, another important feature of the study area is the presence of uneconomic uranium ore-bearing veinlets in the weakly to moderately altered Mórágy-type granitoids (Dinnyeberki) [65]. Regarding the hydrothermal uranium deposits of the Bohemian Massif [66], a vein-type deposit associated with weakly altered granite was documented in its northern part (Lower Silesia), which could indirectly support the W Tisia paleogeographic position suggested above (Figure 14). Other scenarios suggesting a connection of the Tisia to the southeastern [5] or southern/southwestern [14] part of the Bohemian Massif can be excluded because their hydrothermal uranium deposits reflect no evident association with granite plutons [66]. Therefore, a strong relationship between the study area and the Moldanubian part of the Bohemian Massif is also impossible. In accordance with the proximal character of the Silurian in the study area [1], the original position of the composite segments of the W Tisia (Mecsek-Villány area, Figure 2)
Conclusions
Based on detailed petrographic and geochemical investigations as well as fluid inclusion analyses, the following conclusions can be drawn about the studied basement rocks. A fundamental conclusion that can be drawn from our data derived from the Hungarian part of the W Tisia is that the major vein mineralization stages and host rock alteration styles reflect the effects of the characteristic hydrothermal events of the Central European Variscan belt. Therefore, as an independent tool, paleofluid fingerprints can be used to interpret the paleogeographic position of the studied area.
Data Availability
The petrographic, geochemical, and fluid inclusion data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.

Supplementary Materials

Table S1: chemical composition of the studied carbonate phases of the selected quartz-carbonate veins. Table S2: semiquantitative chemical composition of the studied galena samples. Table S3: stable isotope composition (in per mil) of the studied carbonate phases of the selected quartz-carbonate veins. Table S4: petrographic characteristics and microthermometric data of the studied fluid inclusions (individual measurement data). Abbreviations: T_n: nucleation temperature of ice; T_i: initial melting temperature of ice; T_m(ice): temperature of final melting of ice; T_m(Hh): temperature of final melting of hydrohalite; T_h: temperature of homogenization; p_h: pressure of homogenization; QP: primary fluid inclusions trapped in quartz from quartz veins; QCP: primary fluid inclusions trapped in quartz from quartz-carbonate veins; DP: primary inclusions trapped in dolomite; AP: primary inclusions trapped in ankerite. Note: notation refers to the temporal sequence of the assemblages (from 1 to 4, respectively). (Supplementary Materials)
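The T_m(ice) values listed above are conventionally reduced to salinities: for aqueous H2O-NaCl inclusions, the final ice-melting temperature is converted to wt% NaCl equivalent with the Bodnar (1993) equation. A minimal sketch of that conversion (the function name is illustrative, not from the article):

```python
def salinity_wt_pct_nacl(t_m_ice_c: float) -> float:
    """Salinity (wt% NaCl equivalent) from the final ice-melting
    temperature T_m(ice) of an aqueous fluid inclusion, using the
    Bodnar (1993) equation for the H2O-NaCl system. Valid between
    the eutectic (-21.2 C) and 0 C."""
    if not -21.2 <= t_m_ice_c <= 0.0:
        raise ValueError("T_m(ice) must be between -21.2 and 0 degrees C")
    theta = -t_m_ice_c  # freezing-point depression in degrees C
    return 1.78 * theta - 0.0442 * theta**2 + 0.000557 * theta**3
```

For example, an inclusion with T_m(ice) = -5.0 C corresponds to about 7.9 wt% NaCl equivalent.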
Chiral information harvesting in helical poly(acetylene) derivatives using oligo(p-phenyleneethynylene)s as spacers
A chiral harvesting transmission mechanism is described in poly(acetylene)s bearing oligo(p-phenyleneethynylene)s (OPEs) used as rigid achiral spacers and derivatized with chiral pendant groups. The chiral moieties induce a positive or negative tilting degree in the stacking of OPE units along the polymer structure, which is further harvested by the polyene backbone adopting either a P or M helix.
During the last years, dynamic helical polymers have attracted the attention of the scientific community due to the possibility of tuning the helical sense and/or the elongation of the helical structure by using external stimuli. [1][2][3][4][5][6][7][8][9][10][11][12][13][14] In the case of a chiral dynamic helical polymer, modifications in its structure (helical sense enhancement or helix inversion) arise from conformational changes induced at its chiral pendants (usually with just one stereocenter) by stimuli such as variations in solvent polarity or temperature, the addition of certain ions, and so on (Fig. 1a). 15 On the other hand, if a helical polymer is achiral (i.e., bearing achiral pendants), chiral amplification phenomena can emerge from interactions between the polymer and external chiral molecules. 16 In both the above cases, the changes produced in the helical structures are related to the spatial dispositions adopted by the substituents or associated species at the pendant groups. [17][18][19] A step forward in the helical sense control of poly(phenylacetylene)s (PPAs) is to study different mechanisms of transmission of chiral information from the pendant to the polyene backbone by introducing achiral spacers. The goal is to demonstrate how far it is possible to place the chiral center and still have an effective chiral induction on the polyene backbone. Transmission of the chiral information from a remote position can occur through space, thus bridging the distance generated by the spacer (tele-induction) (Fig. 1b), [20][21][22][23][24][25][26][27][28] or through the achiral spacer itself, producing in it a preferred structure, such as a helical structure, whose orientation is further transmitted to the polyene backbone (conformational switch) (Fig. 1c). [29][30][31] For the first mechanism (chiral tele-induction), both flexible and rigid spacers have been designed.
[20][21][22][23][24][25][26][27][28] In all cases, supramolecular interactions, such as H bonding or π-π stacking, generate organized structures. As a result, the chiral center is located in a specific orientation, producing an effective helical induction. Additionally, those studies allow evaluating how distances and sizes affect this phenomenon.
In the second strategy, the helix induction is transmitted through conformational changes along an achiral spacer, which are harvested by the polyene: for instance, an achiral peptide or an achiral polymeric helix derivatized at one end with a chiral residue and linked to the polymer main chain at the other end. In such cases, changes in the absolute configuration, or even just a conformational change at the chiral center, can induce an opposite helical structure in the achiral spacer, which in turn will be harvested by the polymer main chain (Fig. 1c). [29][30][31] Herein we will demonstrate another remote chiral induction mechanism based on a different chiral harvesting process. In this case, the chiral center does not produce a conformational change at the achiral spacer, but affects its array within the helical scaffold. Thus, to perform these studies we decided to introduce oligo(p-phenyleneethynylene)s (m = 1, 2, 3) (OPEs) as rigid spacers to separate the distant chiral center from the polyene backbone. These OPE units have been used in the formation of benzene-1,3,5-tricarboxamide (BTA) based supramolecular helical polymers, demonstrating their ability to stack with a certain tilting degree commanded by the chiral center. [32][33][34] Hence, in our design, the chiral moiety will determine the supramolecular chiral orientation of the OPE groups used as spacers, which is further harvested by the polyene backbone. The overall process yields a helix with a preferred screw sense (Fig. 2).
X-ray structures of the monomers show a preferred antiperiplanar (ap) orientation between the carbonyl and methoxy groups (O=C-C-OMe) for m-(R)-2 and m-(S)-3, whereas in the case of m-(S)-1 a synperiplanar (sp) geometry is favoured (Fig. 4a). 35 In complementary studies, CD spectra of monomers m-(S)-[1-3] in CHCl3 show negative Cotton effects, indicative of major ap conformations in solution (Fig. 4b), 35 further corroborated by theoretical calculations (see Fig. S10†). Interestingly, the maxima of the Cotton effects in CD undergo a bathochromic shift (from 266 nm in m-1 to 327 nm in m-3) due to a larger conjugation of the π electrons (from the anilide to the alkyne group) when the length of the spacer increases (Fig. 4b).
CD studies of the polymer series bearing OPE spacers (poly-(R)- and poly-(S)-[2-3]) in different solvents show the formation of a PPA helical structure with a preferred helical sense, while the parent polymer, poly-1, devoid of the OPE unit, shows only a weak CD signal. This is a very interesting phenomenon, indicating that the OPE spacers work as transmitters of the chiral information from remote chiral centers to the polyene backbone, placed at 1.7 nm for poly-2 and at 2.4 nm for poly-3 (Fig. 4a). These large distances between the chiral center and the polymer main chain mean that other mechanisms of chiral induction, such as the chiral tele-induction effect, should be almost null in these cases.
In these two polymers (poly-2 and poly-3), the chiral information transmission mechanism must occur in different sequential steps. First, the chiral centers, possessing a major (ap) conformation, induce a certain tilting degree (θ) in the achiral spacer array. This step resembles the helical induction mechanism found in supramolecular helical polymers bearing OPE units. [32][33][34] Next, the chiral array induced in the OPE units is harvested by the polyene backbone, resulting in an effective P or M helix induction (Fig. 2). 34,47 Additional structural studies were carried out in poly-(S)-2 and poly-(S)-3 to obtain an approximate secondary structure of these polymers and determine their dynamic behaviour.
From the literature it is known that the conformational equilibrium of poly-1 can be altered in solution by the presence of metal ions. The addition of monovalent ions (e.g., Li+) stabilizes the ap conformer at the pendant group by cation-π interactions, while divalent ions (e.g., Ca2+) stabilize the sp conformation by chelation with the methoxy and carbonyl groups. 36,38,39,43 As a result, both the P and M helical senses can be selectively induced in poly-1 by the action of metal ions. Therefore, we decided to add different perchlorates of monovalent and divalent metal ions to solutions of poly-(S)-2 and poly-(S)-3 with the aim of determining the conformational composition at the pendant groups. Thus, when monovalent metal ions (Li+, Ag+ and Na+) are added to a chloroform solution of poly-(S)-2, a chiral enhancement is observed (Fig. 5d for Li+ and Fig. S16† for Na+ and Ag+). IR and 7Li NMR studies show that those ions stabilize the ap conformer at the pendant group in a similar fashion to poly-1, that is, by coordination to the carbonyl group of the MPA (Fig. 5g) and the presence of a cation-π interaction with the aryl ring of the chiral moiety (|Δδ(7Li)| ca. 3.75 ppm) (Fig. 5f and ESI†). Therefore, addition of Li+ produces a larger number of pendant groups with the ap conformation along poly-2, which triggers a chiral enhancement effect through a cooperative process.
On the contrary, the addition of perchlorates of divalent metal ions, such as Ca2+ and Zn2+, produced an inversion of the third Cotton band (310 nm), associated with the MPA moiety, and the disappearance of both the first and second Cotton effects (Fig. 5e for Ca2+ and Fig. S17† for Zn2+). This is a very interesting outcome because, although the conformational equilibrium at the MPA group changes from ap to sp after the addition of Ca2+, the number of pendant groups with the sp conformation does not reach the number needed to trigger the helix inversion process and, in fact, a mixture of P and M helices at the polyene backbone is obtained.
The helical structures adopted by both polymer systems, PPAs (poly-1) and poly[oligo(p-phenyleneethynylene)phenylacetylene]s (POPEPAs) (poly-2 and poly-3), are defined by two coaxial helices, one formed by the polyene backbone (internal helix, CD active) and the other constituted by the pendants (external helix, observed by AFM).
These two helices can rotate in either the same or the opposite sense, depending on the dihedral angle between conjugated double bonds. Thus, internal and external helices rotate in the same direction in cis-cisoidal polymers, while they rotate in opposite directions in cis-transoidal ones. 14,42,48,49 In order to find an approximate helical structure for poly-(S)-2, DSC studies were performed. The thermogram shows a compressed cis-cisoidal polyene skeleton (see Fig. S13a†), similar to the one obtained for poly-1. 42 Moreover, AFM studies on a 2D crystal of poly-(S)-2 did not produce high-resolution AFM images, although some parameters such as the helical pitch (ca. 2.8 nm) and the packing distance between helices (ca. 6 nm) could be extracted from the well-ordered monolayer analyzed (Fig. 5c).
Previous structural studies in PPAs found that it is possible to correlate the internal helical sense with the Cotton band associated with the polyene backbone: CD (+), P_int; CD (−), M_int. 50,51 Herein, the positive Cotton effect observed for the polyene backbone [CD at 365 nm = (+)] in poly-(S)-2 is indicative of a P orientation of the internal helix, which correlates with a P orientation of the external helix in a cis-cisoidal polyene scaffold. To summarize, DSC, AFM and CD studies agree that poly-(S)-2 is made up of a cis-cisoidal framework with P_int and P_ext helicities (Fig. 5a).
Computational studies [TD-DFT(CAM-B3LYP)/3-21G] were carried out on a P helix of an n = 9 oligomer of poly-(S)-2, possessing a cis-cisoidal polyene skeleton (ω1 = +50°, ω3 = −40°) and an antiperiplanar orientation of the carbonyl and methoxy groups at the pendants. The theoretical ECD spectrum obtained from these studies (Fig. 5b; see ESI† for additional information) is in good agreement with the experimental one, indicating that our model structure is a good approximation of the helical structure adopted by poly-(S)-2.
Next, a similar set of DSC and AFM studies was carried out for poly-(S)-3, which bears an OPE spacer with n = 2. The data showed that this polymer presents a compressed cis-cisoidal polyene skeleton, similar to those obtained for poly-1 and poly-2 (see Fig. S13b†), with a helical pitch of 3.8 nm and a P_ext helical sense (Fig. 6a and c).
UV studies indicate that, in poly-(S)-3, the polyene backbone absorbs at ca. 380 nm, coincident with the first Cotton effect, which is positive (see Fig. S15b†). This reveals that poly-(S)-3 adopts a P_int helicity (Fig. 6b). Thus, as expected for cis-cisoidal scaffolds, the orientations of the two coaxial helices are coincident.
Computational studies [TD-DFT(CAM-B3LYP)/3-21G] were carried out on a P helix of an n = 9 oligomer of poly-(S)-3, possessing a cis-cisoidal polyene skeleton (ω1 = +63°, ω3 = −40°) and an antiperiplanar orientation of the carbonyl and methoxy groups at the pendants. The theoretical results (Fig. 6b; see ESI† for additional information) match the experimental data, indicating that our model structure is a good approximation to the helical structure adopted by poly-(S)-3.
Finally, the stimuli-response properties of poly-(S)-3 were explored by CD. These experiments revealed that the addition of monovalent or divalent metal ions to a chloroform solution of poly-(S)-3 does not produce any significant effect on the structural equilibrium of this polymer (see Fig. S18†). This fact, in addition to the previous results obtained from the interaction of poly-(S)-2 with divalent metal ions, corroborates the decrease in the dynamic character of helical PPAs when large OPEs are used as spacers.
The poor dynamic behaviour was further demonstrated by polymerizing m-(S)-3 at a lower temperature (0 °C) (Fig. 6d). In this case, the region around 240-350 nm remains unaffected, indicating that the pendant is ordered in a similar manner in both batches of polymer, regardless of the temperature at which they were synthesized (20 °C and 0 °C). Interestingly, the magnitude of the first Cotton band is doubled when the polymer is obtained at low temperature, due to a stronger helical sense induction at the polyene backbone. This result indicates that a preorganization process may occur during polymerization, affecting the screw sense excess of the PPA.
In conclusion, a novel chiral harvesting transmission mechanism has been described in poly(acetylene)s bearing oligo(p-phenyleneethynylene)s as rigid spacers that place the chiral pendant group away from the polyene backbone, at a distance of ca. 1.7 nm for poly-2 and 2.4 nm for poly-3. Hence, the disposition of the chiral moiety affects the stacking of the OPE units within the helical structure, inducing a specific positive or negative tilting degree, which is further harvested by the polyene backbone, inducing either a P or M internal helix.
We believe that these results open new horizons in the development of novel helical structures by combining information from the fields of helical polymers and supramolecular helical polymers, leading to the formation of novel materials with applications in important areas such as asymmetric synthesis, chiral recognition or chiral stationary phases, among others.
Conflicts of interest
There are no conflicts to declare.
Future prospects in endodontic regeneration - A review article
“Regenerative endodontics” is a branch of regenerative medicine and has been defined as biological procedures designed to replace damaged, diseased, or missing dental structures, including dentine and root as well as cells of the pulp-dentin complex, with living, viable tissues, preferably of the same origin, that restore the normal physiological functions of the pulp-dentin complex.[1] Regenerative endodontic procedures use biologically based treatment modalities and pulpal cells. The information available in regenerative studies to date, however, indicates that more must be learned about the interactions that occur between all cells, growth factors, proliferation and differentiation of cells, and the ability to use materials that will result in a well-formed, functioning tooth.[2]
Introduction
"Regenerative endodontics" is a branch of regenerative medicine and has been defined as biological procedures designed to replace damaged, diseased, or missing dental structures, including dentine and root as well as cells of the pulp-dentin complex, with living, viable tissues, preferably of the same origin, that restore the normal physiological functions of the pulp-dentin complex. [1] Regenerative endodontic procedures use biologically based treatment modalities and pulpal cells. The information available in regenerative studies to date, however, indicates that more must be learned about the interactions that occur between all cells, growth factors, proliferation and differentiation of cells, and the ability to use materials that will result in a well-formed, functioning tooth. [2]

Stem cells

The future of repair and regeneration depends on answers to the following questions. What is the nature of the stem cells that should be used to regenerate pulp tissue? A primary question must be asked as to what type of pulp-like tissue should be the result of implantation. Is it possible to obtain a functional, nonmineralized pulp that is vascularized and innervated as the original tissue would be? Or is the aim to develop a pulp tissue that would induce an increased amount of mineralization that could serve as a substitute for root canal therapy? All stem cells in odontogenesis, with the exception of ameloblast progenitor cells, originate in the mesenchyme and are said to be of ectomesenchymal origin. When dental pulp stem cells (DPSCs) and stem cells from human exfoliated deciduous teeth (SHED) were compared, SHED cells showed a higher potential to migrate and mineralize. [2] Cell differentiation can lead to either an adult progenitor or an odontoblast-like/osteoblast-like cell, which is divergent from other results obtained. The question of using multipotent stem cells remains unsettled, especially when attempting to regenerate pulpal tissue.
The cells necessary are present in the pulp and can be associated with odontoblast and osteoblast cells, endothelial cells and the formation of neurons. Therefore, are multipotent progenitors or nonpotent cells the cells of choice? [3,4] In the future, it may be possible to minimally invade and isolate suitable stem cells, have them undergo differentiation in vitro, and combine and develop them into tooth structures. Pulp cells differentiate in vitro into odontoblast-like stem cells. The dentin formed, as previously mentioned, is tubular. Is there a possibility of dental pulp cells producing tubular dentin? [5,6] A recent study mixed pulp cells with hydroxyapatite (tricalcium phosphate powder) and generated a dentin-pulp-like tissue. [7] Batouli and coworkers transplanted tubular dentin on the surface of dentin-pulp slices and generated increased amounts of tubular dentin; however, the origin of the progenitor cells giving rise to new odontoblasts (tubular dentin) and the signaling pathways in cell differentiation have not been clearly identified and remain a matter of debate. [8] Because repair and regeneration have different targets, the expectations of a particular therapy must be clear. Is regeneration of a non-mineralizing pulp the proper goal, or is generation of a tissue that may become a completely mineralized root canal system the proper treatment option? Each aim uses specific tools that are valid for bioengineering treatment modalities. [2] Caries may be the most common and dangerous of all types of injury, provoking adverse stimuli to the dental pulp. Many of the processes involved are thought to be the same as the initial pulp developmental processes occurring embryonically. Because the onset of injury in the dental pulp may be a result of caries, markers of inflammation differ depending on the depth of the inflammatory process of the lesion.
[9][10][11] Still somewhat unclear is how inflammation may overwhelm and cause degeneration in the pulp, as opposed to its role in the regeneration of that tissue. To understand the treatment prognosis, it is necessary to understand the balance between infection and inflammation, together with the roles of proinflammatory and anti-inflammatory mediators and how they relate to the innate and adaptive immune systems. [12][13][14][15] Many studies have reported that several populations of stem cells in and around the tooth pulp can be used to repair or regenerate the pulp/dentin complex. To be able to use these cell lines clinically, translational research in the future will require both researchers and skilled clinicians who can develop new and novel therapies that can eventually be tested and used in clinical environments to answer these questions. [16][17][18][19][20]
Scaffolds
When stem cells are seeded on scaffolds, they are expected to attach, proliferate, and differentiate into new tissues that will eventually replace the scaffold. Scaffolds should have an inductive ability, with added growth factors and morphogens, for more rapid cell attachment, proliferation, migration, and differentiation into a specific tissue. [2,20-22] The choice of a scaffold is critical in tissue regeneration. Most scaffolds are organic in nature and are used to provide surfaces on which cells may adhere, grow, and organize. Scaffolds chosen for laboratory studies are diverse, including natural or synthetic polymers, extracellular matrices (ECMs), self-assembling systems, hydrogels, and bioactive ceramics. Recently, the synthetic polymer polycaprolactone was successfully used to grow increasing numbers of stem cells from the apical papilla, with apparent identification of NOTCH signaling expression. [23-26] Although the number of scaffolds has increased, questions remain that must be addressed. [2] For example, are scaffolds able to support various kinds of stem cells, or are they stem cell-specific? Are stem cells able to be seeded with like results on more than one scaffold? What are the limitations of the use of one or another natural or synthetic scaffold? The use of a self-assembling peptide system that allows a "bottom-up" approach of generating ECM materials, offering high control at the molecular level, will be a major step forward in constructing future scaffolds. The peptide system is referred to as a tunable matrix with several features that possibly allow scaffolds to be designed as different requirements are needed to regenerate a tissue. [10,25-28]
Growth Factors and Signaling Pathways
A variety of growth factors have been identified and grouped into several classes. They include the following: Transforming growth factor (TGF)-α and TGF-β, bone morphogenetic proteins, fibroblast growth factors, hedgehog proteins, and tumor necrosis factors. Growth factors are responsible for signaling many of the events in tooth morphogenesis and response of the dental pulp to caries, microorganisms, and other noxious stimuli. Although studies have been performed, the results have yet to be used in a manner that allows regeneration and repair while not decreasing the volume of pulp tissue. Because the formation of secondary dentin is thought to be physiologic and occurs throughout life, the growth factors must be used in a manner that allows normal processes to continue as would occur in a virgin tooth with no restoration or caries or other stimuli that would increase the chance of narrowing and limiting natural processes in the dental pulp. [2,27]
Vascularization
The mechanisms that underlie dental pulp angiogenic responses are still not completely understood. Revascularization is critical to the development of new therapies for the dental pulp. New therapeutic methodology could be used for the regulation and expression of angiogenic factors, such as vascular endothelial growth factor and fibroblast growth factor 2, to revascularize the pulp tissue of avulsed or otherwise traumatized teeth. [11]

Niches

Today, stem cell-based tissue engineering of teeth faces several differing conceptual issues. For example, what are the location and identity of the odontogenic precursor cells that participate in reparative dentin formation? [12,13] Stem cells appear to have the ability for tissue repair and regeneration throughout life. The signaling proteins functioning in these processes have been studied, but more research is needed to determine the mechanisms that allow stem cells from a particular niche to increase in number and migrate to the area of injury. [26,27] Questions arise as to the environment of the niche surrounding the stem cells. Does that environment maintain stem cell lineage specificity? Are postnatal stem cells capable of converting from one type of cell into another, as they may do naturally in the body? A stem cell niche is a group of cells in special tissue locations that maintain stem cells. Niches are variable, containing different cell types depending on the needs of their environment. The niche may be thought of as an anchor for a particular stem cell that generates extrinsic factors controlling stem cell numbers and their fate.
[14]

Notch Signaling Proteins

The question still unanswered is: although the niches contain only a few cells, what signaling molecules are responsible for the almost immediate increase in the number of cells that are activated, proliferate and differentiate, and migrate to aid the pulp in its ability to be repaired?
More studies are needed to answer the previously mentioned questions, which will lead to the exact growth factor, or combination of growth factors, that will mimic repair mechanisms and allow the tooth to develop normally. NOTCH signaling proteins regulate stem cell behavior for tooth repair. NOTCH receptors are absent in adult rat pulp tissue; their expression was found to occur after pulp tissue injury. These studies also suggest that NOTCH signaling may act as a negative regulator of stem cell differentiation. The full extent of NOTCH signaling abilities, and of other signaling proteins that may be present, is not yet known, which indicates that their role in repair processing and participation in healing is not fully understood. Finally, it has not been demonstrated that NOTCH-positive stem cells participate in the repair process by differentiating into odontoblast-like pulp cells. [15]
Inflammation-regeneration
Although much is known about the mechanisms of both inflammation and regeneration, most studies fail because there is a tendency to consider these entities separately rather than together. Future studies should concentrate on both at the same time, as they both occur, one step at a time, continuing until repair occurs. Inflammatory processes are often seen as antagonistic to the processes that indicate that regeneration is occurring. Direct data have now emerged indicating that there is a relationship between the two processes. [9] The first (inflammation) results in tissue breakdown, whereas the latter drives regenerative actions (new tissue formation). No doubt, increased inflammation may impede regeneration; however, if the inflammatory response is low grade, it may promote regenerative mechanisms that may include angiogenic stem cell processes. Therefore, it is necessary in the future not to separate the processes but to attempt to study both at the same time. Proper animal studies are necessary so that these processes are fully described before clinical studies are undertaken. The limiting factor in both processes is the location of the dental pulp. Dentin surrounds the dental pulp and, although an inflammatory response to incipient caries may either regenerate or become a scar, the pulp tissue will be reduced in volume and other forms of dentin will occur that narrow the pulp tissue space. Studies need to be performed to develop suitable materials that will be able to reach the dental pulp through dentin tubules to regenerate original tissue without limiting the root canal system space. [27,28] With respect to all of this, the innervation of the pulp plays a critical role in the homeostasis of the pulp-dentin complex. Invasion of immune and inflammatory cells into sites of injury in the pulp is stimulated by sensory nerves.
Sensory denervation results in rapid necrosis of the exposed pulp because of impaired blood flow and extravasation of immune cells. Reinnervation leads to recovery in the coronal dentin. [16]
Future Perspective of Endodontic Regeneration
For cell-based therapy, the source of cells is an issue. The dental stem cell supply is limited, especially from autologous sources. Not every individual who needs regeneration treatment has the cells readily available. Establishing dental stem cell banking may be a necessary step, and further progress on establishing individualized induced pluripotent stem cells for dental tissue regeneration is imminent. [5] The willingness of endodontists and other specialist dentists to accept training to deliver stem cell therapies and regenerative endodontic procedures (REPs) to their patients is unclear. The ethics of using stem cell therapies to accomplish dental treatment is controversial. Stem cells from exfoliated deciduous and extracted teeth can be saved and might provide a supply for therapeutic interventions. A dentist's office could become a stem cell bank for patients who might require new bone, teeth or other oral tissues. [17] A questionnaire regarding REPs threw up some interesting statistics. The results of the survey showed that most of the dentists (96.8%) agreed that regenerative therapy should be incorporated into dentistry, and the majority (51.6%) believed that it would take between 11 and 20 years for regenerative stem cell therapies to be used in dentistry for the development of a new tooth in the laboratory. Only a few of the dentists (19.4%) indicated that they or their family members had used umbilical cord banking or another type of stem cell banking. Nevertheless, most dentists (93.5%) held the opinion that dental stem cell banking would be useful in regenerating dental tissues. Most of the dentists (77.4%) believed that the biggest obstacle to patient acceptance of regenerative dental treatments would be the higher cost of the treatment.
Most dentists (96.8%) indicated that they would be willing to save teeth and dental tissue for regenerative dental treatment, and a majority (87.7%) thought that regenerative dental treatments would be a better option than tooth implant replacement. About 87.1% of dentists agreed that stem cell and regenerative treatments should be tested on animals before clinical testing. Many dentists (58.1%) were willing to deliver dental treatments involving embryonic stem cells sourced from a human fetus to their patients. In regard to the future of regenerative treatments, a majority of dentists (64.5%) believed that there is no risk in stem cell clinics delivering future dental treatments. A majority of the dentists (67.7%) were concerned about the health hazards associated with the use of stem cells as part of regenerative dentistry. The majority of dentists (87.1%) held the opinion that dental professional associations should regulate the use of REPs.
This survey concluded that dentists are supportive of using REPs in their dental practice and are willing to undergo extra training and to buy new technology to provide new procedures. However, dentists also need more evidence for the effectiveness and safety of regenerative treatments before REPs can be recommended. [18] Stem cells can be used for regenerative procedures in dentistry. Despite its scientific validity, cell transplantation has encountered major difficulties in translation into clinical therapy. The therapeutic use of stem cell products derived from nonhuman species will be limited because of the risk of immunorejection. Allogeneic cell transplantation has concerns of potential immunorejection and contamination.
The cell cryopreservation/banking system suffers from the potential loss of cells and additional costs. Potential contamination during cell manipulation and the costs of shipping and storage are additional barriers of cell transplantation. Few practitioners today know how to handle a vial of cells. In case a few cells, among thousands or millions of cells that are transplanted, acquire oncogenes during ex vivo cell processing, a practitioner, company, or hospital would likely be held liable. [19] Despite the initial promise, regenerative endodontics has encountered substantial barriers in clinical translation. DPSCs might seem to be a preferred choice for dental pulp regeneration. However, DPSCs may not be available in patients who are in need of pulp/dentin regeneration therapy. Even if DPSCs are available autologously or allogeneically, one must address a multitude of scientific, regulatory and commercialization barriers; unless these issues are resolved, the transplantation of DPSCs for dental pulp regeneration will remain a scientific exercise rather than a clinical reality. These barriers include cell isolation; ex vivo manipulation with the potential for changing cell phenotype; and safety issues, including immunorejection, potential contamination, pathogen transmission, and potential tumorigenesis. Excessive costs, in addition to shipping, storage, and handling issues, and regulatory difficulties, including an unclear pathway and the general inability to ensure batch-to-batch consistency in cell quality, raise multidimensional questions about the practicality of cell transplantation. Biomaterial scaffolds are another area of innovation in regenerative endodontics. Several natural and synthetic polymers have shown positive results in vivo. Preclinical animal models and randomized clinical trials that test novel therapies are indispensable for translating regenerative technologies into clinical therapies.
The outcome of the regenerative therapy is based on radiographic and histological evaluation in animals, whereas the case reports and case series emphasized the importance of follow-up, including recording symptoms, radiographs, and clinical tests. Due to the variability in recall, it is difficult to establish time frames in which practitioners may expect to see radiographic changes. This information is important because it can help in assessing if and when alternative treatments (i.e., apexification, traditional nonsurgical root canal therapy, or extraction) may be necessary. [19]
Conclusion
Future clinical research would likely focus on translating basic research findings into improved regenerative procedures, such as formation of cementum-like material on the dentinal walls, which might lead to studies evaluating benefits of revascularization procedures for overall tooth resistance to fracture. Controlled differentiation into odontoblasts is an important area of research and amenable to tissue engineering concepts. The development of delivery systems might permit structural reinforcement of the cervical area, which might provide clinical opportunities to regenerate lost tooth structure, thereby permitting natural teeth to be retained.
Loop Structures Optimization and Reordering for Efficient Parallel Processing
The problem of choosing an optimal sequence of transformations leading to the most efficient parallel version of a program remains an open question. Current compilers manage only to incorporate a set of heuristic decisions. This article treats program transformation, addressing and analyzing the range of loop structure transformations that we consider most appropriate today. We have tried to exemplify these transformations in the case of groups of companies.
INTRODUCTION
With a rather complicated organizational structure, characterized by behavioral flexibility and lack of bureaucracy present in all sectors of industry, commerce and services, groups of companies easily adapt to changing economic and social conditions. With a view to implementing new market information and communication technologies, this article proposes a prototype for decomposing existing operations in groups of companies using parallel computing.
During the development of compiler theory, several changes of source code have been proposed to optimize the execution of programs. Most optimizations for the sequential case intend to reduce the number of instructions executed by the program, using transformations based on a quantitative analysis of the values conveyed in the program and on data flow analysis. In addition, recent optimizations for parallel execution maximize parallelism and data locality in memory, using transformations based on the characterization of arrays and on the results of data dependency analysis. The stages which must be completed by a compiler to perform optimizations are the following: a) selection of the part of the source program to be optimized and of the appropriate transformation for a particular purpose; b) checking that the transformation ensures semantic consistency; c) transformation of the program. Techniques for data dependency analysis are used for steps a) and b). The selection stage is the most difficult and insufficiently treated topic in current compiler theory. Due to the high costs involved in a full analysis of optimization possibilities, compilers typically restrict their range of action to some transformations considered more efficient by their builders. On the other hand, there may be sequences of transformations that have the opposite effect. For example, an attempt to reduce the number of instructions executed may ultimately reduce performance because of improper use of caching. As architectures become more complex, the number of optimization directions increases significantly, and decision making related to the range of transformations becomes very complicated.
Operators Reduction
Reduction of operators aims to replace a loop expression with an equivalent expression that uses a less expensive operator ([4], [6]). Consider a loop structure which contains a multiplication. Even if, most of the time, reducing the operators is accompanied by the introduction of an additional variable, significant time savings are achieved across the loop's iterations. It is justified to put this transformation in the category of execution optimizations. The most common use of operator reduction is the reduction of expressions that contain induction variables ([3], [4], [6]). Table 1 presents various possibilities of reduction of operators. It is assumed that the operation in the first column occurs in a loop with index i from 1 to n, and the compiler initializes a temporary variable T with the expression in the second column. The operation inside the loop is replaced by the expression in the third column, and the value of T is updated every iteration with the value in the fourth column. The positive effects are obvious.
Expression | Initializing | Using | Updating
x / c      | T = 1/c      | x * T | —

September 27, 2015
The Elimination of Induction Variable
A variable whose value is derived from the number of iterations executed by a loop is called an induction variable. The control variable of a "for" loop is the most common type of induction variable, but other variables can have this property, as with the induction variable j in the example below. Induction variable elimination simplifies the analysis of index expressions in the data dependency tests: after removing the variable j, the analysis is based only on the values of the loop variable i and the constant n.
Factorization of Loop Invariants
When an operation occurs inside a loop but its result does not change between iterations (a loop invariant), the compiler can transfer that computation outside the loop ([3]). We give below a code sequence in which an expensive call to a transcendental function is transferred outside the loop. The test that appears in the transformed code ensures that if the loop is not executed at all, the transferred code is not run either, to prevent triggering an exception.
Externalization of Conditional Instructions
This method is applied to loops that contain a conditional instruction whose test is invariant in the loop. The loop is then replicated on each branch of the conditional instruction, thus avoiding the disadvantage of conditional branching within the loop, reducing the code size of the loop body, and making it possible to parallelize a branch, as Allen remarks [5].
Conditional instructions that are subject to outsourcing can be detected while analyzing the possibilities of factoring, a process that identifies loop invariants.
In the following example the variable X is loop invariant, allowing the loop to be subjected to the outsourcing operation and the true branch to be executed in parallel, as shown in the converted code. Notice that, as with the factorization of loop invariants, if evaluating the condition could trigger an exception, this should be prevented by a test of the possibility of execution. In a doubly nested cycle in which the inner loop has unknown limits, if the code is generated directly, there will be a test before the inner loop body to determine whether or not it will be executed. The test for the inner loop will be repeated each time the outer loop is executed. When the compiler uses an intermediate representation of the program, the test is explicit and outsourcing can be used to transfer it outside the outer loop [27].
Iteration Reordering Transformations
In this section we describe the transformations that alter the relative order of execution of the iterations of nested cycles. These changes are mainly used to expose opportunities for parallelization and for locating data in memory. Some compilers use reordering transformations only for perfectly nested cycles. To increase opportunities for optimization, compilers can sometimes use fission to extract perfectly nested cycles from imperfect nestings. The compiler determines whether a loop can be executed in parallel by examining the dependencies induced by the loop iterations. If all of the loop's dependency distances are 0, there is no dependency carried across iterations, and the loop can be executed in parallel.
We give below an example where the loop distance vector is (0,1), so that the outer loop may be parallelized (figure 2).
More generally, the p-th loop of a nested structure of cycles may be parallelized if every distance vector V = (v1, …, vp, …, vd) satisfies vp = 0 ∨ ∃q < p : vq > 0. In the case of (b) the distance vectors are {(1,0), (1,-1)}, so the inner loop may be parallelized. Both references from the right part of the expression access, in reading, items on line i-1 of the array a, elements updated in the previous iteration of the outer loop. The items of the i-th line may be calculated and stored in any order.
Loops Interchange
This transformation changes the position of two loops of a PNL (Perfectly Nested Loop), usually moving one of the outer loops to the innermost position ([7], [34]). Interchange is considered one of the most powerful transformations and can improve performance in many ways. It is used mainly to: allow vectorization, by exchanging an inner loop that carries dependencies with an independent outer loop; improve vectorization, by shifting the largest independent loop to the innermost position; improve the performance of parallel execution, by moving an independent loop outward in the nesting, thus increasing the granularity and reducing the number of iterations that require barrier synchronizations; reduce the looping step, preferably to 1; and increase the number of loop-invariant expressions in the innermost cycle.
We should consider that these benefits do not exclude each other. For example, an interchange which improves the reuse of registers can convert an access pattern with step 1 into an access pattern with step n, which may have a much lower overall performance due to a much larger number of misses in the cache memory. In the following example, the inner loop accesses array a with step n. We use the convention of storing the array elements in columns. With loop interchange, we convert the inner cycle into a cycle where the accessing step is 1. For a large array, for which more than one column fits in the cache memory, the optimization reduces the number of cache misses for a from n² to n² · de/dl, where de is the dimension of an element and dl the dimension of a cache line.
However, the original loop permits total[i] to be placed in a register, eliminating the load/store operations from the inner loop.
In this way, the optimized version increases the number of load/store operations for total from 2n to 2n². If the array a fits in the cache, the original loop proves more advantageous. For a vectorial architecture, the transformed loop allows vectorization by eliminating the dependency on total[i] in the inner loop.
Interchange of the cycles is legal when the changed dependencies remain legal and the looping limits can be interchanged. If two loops, p and q, from a PNL of d loops are interchanged, each distance vector must remain lexicographically positive after its p-th and q-th elements are swapped. The interchange of the looping limits is a simple operation when the iteration space is rectangular, as in the previous PNL example.
In this case the loop limits are independent of the indices of the enclosing loops, and the two can simply be interchanged. When the iteration space is not rectangular, the calculation of the new looping limits becomes more complex.
In programming, triangular and even trapezoidal iteration spaces are often used. Imperfectly nested cycles also occur frequently, and their management requires more complex techniques. Some of these issues are addressed in detail in ([36], [39]).
The Interior Cycle Translation
Translation of the interior cycle (loop skewing) is a transformation especially useful in combination with loop interchange ([22], [26], [39]). Translation was introduced to solve the so-called "wave front" computations, named this way because they update array elements as a wave propagating through the iteration space. Translation is performed by adding the outer loop index, multiplied by a translation factor f, to the limits of the inner iteration variable, and then subtracting the same value from each use of the inner iteration variable in the cycle. Since the looping limits change accordingly and the uses of the index variable compensate, translation does not change the semantics of the program and is always legal.
The cycle from figure 5 (a) may be subject to interchange, but the loop cannot be parallelized, because there is a dependency on both the inner loop, (0,1), and the outer one, (1,0). This is expressed in the graph by the existence of horizontal edges ((0,1)) and vertical ones ((1,0)).
The result of the translation by f = 1 is shown in figure 5 (c-d). The transformed code is equivalent to the original, but the effect on the iteration space is to align the "wave peaks" (diagonals) of the original nesting (that is, the right-to-left diagonals become vertical lines), so that for a given value of j all i iterations can be executed in parallel (because there are no vertical dependency arcs, the iterations for a fixed j do not depend on one another).
To highlight this parallelism, the translated loop structure must also be subjected to interchange. After translation and interchange, the nested cycle has the distance vectors {(1,0), (1,1)}. The first dependency allows the inner loop to be parallelized, because the corresponding inner dependency distance is 0. The second dependency also allows the inner loop to be parallelized, because it is a dependency with respect to previous iterations of the outer loop.
Translation can highlight parallelism for a nesting of two cycles with the set of distance vectors (Vk) if every vector with a non-positive second component has a strictly positive first component. When we translate with factor f, the original distance vector (v1, v2) becomes (v1, f·v1 + v2). For any dependency with v2 ≤ 0, the goal is to find f so that f·v1 + v2 ≥ 1. The correct translation factor f is calculated by taking the maximum of fi = ⌈(1 - vi2) / vi1⌉ over all the dependencies (Kennedy 1993). The interchange of translated loops is complicated, because their looping limits depend on the loop iteration variables.
An alternative method for treating wave-peak calculations is super-node partitioning.
Reversing the Looping Limits
This transformation reverses the direction in which the loop traverses its iteration space. It is often used in combination with other iteration-space reordering transformations, because it changes the dependency vectors. As an independent optimization, loop reversal can reduce loop overhead by eliminating the need for a comparison instruction on architectures without a compare-and-branch instruction, such as Alpha [32].
The cycle is reversed so that the iteration variable decreases to zero, allowing the loop to end with an instruction of type BNEZ (branch if not equal zero). If loop p of a nesting of d loops is inverted, then for each dependency vector V, the element vp is negated. The reversal is legal if each resulting vector V' is lexicographically positive, that is, if for every vector vp = 0 or ∃q < p : vq > 0.
Changing the Cycle Granularity
Changing the granularity of a cycle (strip mining) is a method for adjusting the granularity of an operation, especially a parallelized one ([2], [7], [24]). The original definition of this operation involves transforming a one-dimensional cycle into two-dimensional cycles. A dependency of distance (d) becomes (0, d) and (1, d - s - 1), where s is the strip size. The transformation is always legal, in the sense that it induces no lexicographically negative dependencies in the transformed loops, but it is justified only if s ≥ d; otherwise it has no positive effect. Changing the granularity is usually performed for execution on vectorial machines, to exploit the size of the machine's vector registers efficiently. We present an example below. The computation with changed granularity is expressed in matrix notation and is equivalent to a forall loop. If the iteration length is not divisible by the strip size, additional changes are needed; for this purpose a so-called cleanup code is used [7], as in the last loop from figure 7 (b).
One of the most common uses of granularity change is choosing the number of independent calculations in the inner cycle of a nested loop structure. For example, on a vectorial machine, the serial cycle can be converted to a series of operations on arrays, each operating on a strip the size of the granularity unit.
Changing the granularity is also used in some SIMD compilations [34], to combine communication operations in a loop on distributed-memory multiprocessors [13], and to limit the size of the temporaries generated by the compiler ([1], [39]).
Changing granularity often requires other changes. Cycle decomposition may reveal simple cycles nested within a cycle that is too complex to undergo the operation of changing granularity. Interchanging of cycles can be used to move a parallelizable loop into the innermost position of a nested cycle, to maximize the size of the granularity unit.
The above examples demonstrate that granularity changing can create a bigger processing unit from smaller ones.
The transformation can also be used in the opposite direction, reducing the initial granularity, when execution efficiency requires it.
Shrinkage of Loop
The contraction of a loop (cycle shrinking) is a special case of changing granularity. When a cycle displays dependencies and therefore cannot be executed fully in parallel (i.e., converted into a forall statement), the compiler can still detect a certain degree of parallelism, provided that the dependency distance is greater than 1.
In this case, the contraction will convert a serial cycle into an external serial cycle and an internal parallel cycle [28]. Cycle contraction is especially used to highlight fine-granularity parallelism.
For example, in figure 8 (a), a[i + k] is updated in iteration i and read in iteration i + k, giving a dependency of distance k. As a result, the first k iterations can be executed in parallel, on condition that none of the following iterations begins execution until the first k have finished. The same is then carried out with the following k iterations, as shown in figure 8 (b). The iteration-space dependencies are shown in figure 8 (c): each group of k iterations depends only on the previous group.
Fig. 8. Space iteration dependencies
The result is, potentially, a speedup by a factor of k, but this value of k is usually small (2 or 3). So this optimization is typically limited to highlighting parallelism that can be exploited at the instruction level, for example in pipelined processing of the cycles. Note that the value of k must be constant in the cycle, and the compiler must at least know that it is positive.
Dividing Iteration Space
Dividing (loop tiling) is a multidimensional generalization of the granularity-changing transformation. Dividing is primarily used to improve the reuse of the cache, splitting the iteration space into so-called divisions (tiles) and transforming the nested cycle to iterate over them ([2], [12], [21], [39]). The transformation can also be used to improve the locality of data with respect to the CPU registers or memory pages.
Fig. 9. The division of iteration space
The need for this transformation is illustrated by the loop from figure 9 (a), which assigns to a the transpose of b. With the j loop innermost, the access to b is made with step 1, while the access to a is with step n.
The interchange does not help, because it would then access b with step n. Iterating over divisions (tiles) of the iteration space, as shown in figure 9 (b), the cycle uses each line of the cache. The two inner cycles of matrix multiplication also have such a structure, division being necessary to obtain runtime efficiency in dense matrix multiplication.
A pair of adjacent cycles can be divided if it can be legally interchanged. After division, the outer pair of cycles can be shifted to improve data localization at the division level and inner cycles can be interchanged to exploit parallelism and data locality cycle at the registry level.
Dividing can be expressed as increasing the granularity of a single iteration to a collection of iterations (this collection is the division): the outer loops scroll over the divisions, while the inner ones are responsible for correctly completing the iterations within a division.
The Looping Decomposition
The decomposition (also called loop fission, loop distribution, or splitting) divides a loop structure into several loops. Each new loop has the same iteration space as the original, but contains only a subset of its instructions ([20], [26]).
The decomposition is used to:
• create perfectly nested cycles;
• create sub-cycles with fewer dependencies;
• improve instruction-cache allocation, due to the lower dimensions of the cycles;
• reduce memory requirements, by iterating over fewer arrays;
• increase the reuse of registers.
Figure 10 is an example in which decomposition removes dependencies and allows parts of a cycle to be executed at the same time. The decomposition can be applied to any cycle, but all the instructions which belong to a cycle of dependencies (called a block [20]) should be placed in the same loop, and if S1 precedes S2 in the original loop, the loop containing S1 must also precede the one that contains S2. If the loop contains control flow, applying if-conversion can reveal opportunities for decomposition. An alternative is to use a control-flow graph of dependencies [19].
A specialized version of this transformation is the so-called decomposition by name, first called horizontal decomposition or partitioning by name [2]. Instead of a comprehensive analysis of data dependencies, the loop instructions are partitioned into mutually exclusive sets by the variables they access; these instruction sets are guaranteed to be independent. When the arrays are large, decomposition by name may increase data locality in the cache memory. Note that the above loop cannot be decomposed using fission by name, because the same instruction accesses the array a.
The Fusion of Loops
The reverse transformation of decomposition is fusion, which can improve performance by:
• reducing delays due to loop overhead;
• increasing the level of instruction parallelism;
• improving data locality at the level of registers, cache, or memory pages [1];
• improving the load balance of parallel execution cycles.
In figure 10, decomposition allows partial parallelization of the cycle. Merging the two loops improves locality in registers and cache, because a[i] has to be loaded only once. The fusion also increases the degree of instruction-level parallelism, by increasing the ratio of floating-point to integer operations in the loop structure, and eliminates the loop overhead of the second cycle. If n is large, the split cycles would run faster on a vector machine, while the fused cycle should run faster on a superscalar machine.
In order to merge two cycles, they must have the same limits. If the limits are not identical, it is sometimes possible to make them so through adjustment (a technique suggestively called loop peeling) or by introducing conditional expressions in the loop body.
Two loops with the same limits can be merged if there are no two instructions, S1 in the first loop and S2 in the second, such that they would have a dependency S2 → S1 with direction < in the merged loop. The reason why this would be incorrect is that, before the merge, all instances of S1 are executed before any instance of S2. After the merge, the corresponding instances are executed together. If there is an instance of S1 that must be executed after an instance of S2 on which it depends, the merger changes the order of execution, as shown in figure 11.
Fig. 11. Two loops containing S1 and S2 cannot merge if S2 → S1 in the looping structure obtained after the merge.
Case Study
Let us consider the group of companies G, which has a mother-firm and n subsidiaries named S1, S2, …, Sn. The reduction of operators is applied in the stage of aggregation of accounts. The mother-firm accumulates all the accounts as in figure 12 (a multiplication operation and its transformation into an addition operation).
Fig. 12. The reduction of operations in mother-firm
In the same group we can eliminate the induction variable, as in the next figure. We start with two variables, i and j; we eliminate j and finally keep only i. In the stage of consolidation of the accounts of the group of companies, we can reduce the number of variables by eliminating the loop variable j. The operations of the group companies will then be reduced to the elimination of mutual accounts, mutual operations, and reciprocal results using only one loop variable.
In the stage of factorization of loop invariants, the incomes of subsidiaries which are reflected in the mother-firm can be expressed as in figure 14. In an article written by Chung in 1990 [10], the authors present a formal mathematical framework which unifies the existing loop transformations. This model gave us the idea of applying these transformations in accounting for groups of companies.
The article of Vivek describes a general framework for representing iteration-reordering transformations. These transformations are a special class of program transformations which change the execution order of loop iterations. Fernandez et al., in their 1995 article [11], present a method for code transformation using non-unimodular transformations; we have described a model synergetic with theirs. Jacobson et al., in their article, describe current dependency analysis tests that can be used to identify ways of transforming sequential C code into parallel C code. Quing writes: "To optimize complex loop structures both effectively and inexpensively, we present a loop transformation, dependence hoisting, for optimizing arbitrarily nested loops, and an efficient framework that applies the new techniques to aggressively optimize benchmarks for better localization". Jain et al. [17] state that, based on important theorems, algorithmic methods are developed for program transformation to improve cache performance. A remarkable article is that of Louis et al. [23], in which the authors bring together algebraic, algorithmic, and performance-analysis results to design a tractable optimization algorithm over a highly expressive space.
Our work aims to address a new concept: integrating groups of companies in parallel computing. This can be done easily by the transformation of looping structures, through optimization and reordering. In the literature this approach has not been found.
Conclusions
The selection of transformations of looping structures, such as optimization and reordering, is a complex problem. In this article we have presented and analyzed the most important and most commonly used loop-level transformations. They are useful in the context of automatic parallelization, although it is interesting to note that some were originally introduced as optimizations of sequential execution. A future article will cover further iteration-reordering transformations, which have proved particularly well suited to highlighting the inherent parallelism in sequential programs. We have tried to apply these transformations in the economics and management of groups of firms, whose complex activity most often requires parallelization and business transformation for better organizational management.
The Use of Technology in English Language Teaching
The application of modern technology represents a significant advance in contemporary English language teaching methods. Indeed, Mohammad Reza Ahmadi (2018) maintains that electronic teaching programmes have become the predominant preference of instructors, since they arguably boost positive student engagement with teachers and incentivize overall English language learning. Most contemporary English language teachers now actively incorporate a range of technological aids designed to facilitate optimum teaching delivery. The current research therefore addresses various elements of the technology used in English teaching: devising innovative curricula which harness recent scientific and technical developments, equipping instructors with the technological skills to ensure effective and quality subject delivery, providing technical media such as audio-visual materials and modern technical programmes, and creating student-teacher platforms which maximize positive language learning outcomes. For the purposes of this study, the relevant literature has been reviewed, technology defined linguistically and conventionally, and its correlation with modern teaching skills fully evaluated. In light of this, the researcher outlines the fundamental research problem, elucidates the significance of the research objectives and hypotheses, and presents the findings. The paper concludes by offering a number of recommendations which may further contribute to the improvement of teaching methods by advancing the widespread application of modern technology.
Introduction
The use of modern technology in teaching English is broadly understood to encompass an innovative application of methods, tools, materials, devices, systems, and strategies which are directly relevant to English language teaching and lead to the achievement of the desired goals. Thus, while technology is now generally accepted as an important educational and auxiliary tool across a range of teaching and learning contexts, this is particularly true of English language teaching since it affords a number of potential opportunities to enhance both the content and delivery of the pedagogies typically associated with traditional English language instruction. This is primarily achieved by enabling the student and/or teacher to revisit problematic content time after time until it is fully understood and assimilated.
Familiarity with the concept of using modern technology is not merely limited to the use of modern appliances and devices, but rather extends to the introduction of innovative systems and methods of teaching which facilitate faster and more comprehensive learning progression. According to prevailing pedagogical theories, in utilizing the learning potential of technology students are better able to acquire and hone their language knowledge and skills. The use of technology in teaching English consolidates an integrated view of modern teaching aids and their association with other components of instruction, benefiting students by helping them achieve the required results.
The use of modern technology in English language teaching has therefore become indispensable, especially in the wake of unprecedented developments across numerous fields and disciplines. It is essential that the education sector keep pace with the global technological revolution by adopting modern technological means such as computerization, multi-media devices, mobile phones, audio-visual applications, and social media, to optimize English language instruction and equip teachers to connect with classroom language learners in a systematic and advanced way. The Internet provides easy, immediate, and virtually unlimited access to software, applications, and a host of ancillary platforms and materials which can expedite English teaching and learning. While these affordances may be widely available to all, teachers often play a key role in operating the different tools and teaching methods. Moreover, many such programmes are specifically designed to promote effective English teaching whilst simultaneously increasing learner understanding and attainment of English language skills.
Previous Research
Stepp-Greany (2002, p. 165) used survey data from Spanish language classes which utilized a range of technological approaches and methods in order to determine the importance of the role of teachers, the relevance and availability of technology labs and individual components, and the effect of using technology on the learning process of a foreign language. The results confirmed student perceptions of the teacher as the primary learning facilitator, and stressed the importance of regularly scheduled language labs and the use of CD-ROMs. Stepp-Greany recommended a follow-up study to measure the effects of relevant technology on the learning process of foreign language acquisition. Warschauer (2000a) proposed two different ways to integrate technology into the class: a cognitive approach, which gives learners the opportunity to meaningfully increase their exposure to language and thus construct their own knowledge; and a social approach, which gives learners opportunities for authentic social interactions as a means to practice the real-life skills obtained through engagement in real activities.
Bordbar (2010) investigated the reasons and factors behind language teachers' use of computer technology in the classroom. The study further explored teacher attitudes towards computer and information technology and the various ways they applied practical computer-assisted language learning experience and knowledge to their own language instruction delivery. The results found that almost all the teachers held positive attitudes towards the use of computers in the class. The results also underscored the importance of teachers' overall perceptions of technology, technological experience, skill, and competence, and the cultural environment that surrounds the introduction of IT into schools and language institutes and shapes attitudes towards computer technology.
Shyamlee (2012, p. 155) analyzed the use of multi-media technology in language teaching. The study found that such technology enhances student learning motivation and attention since it involves students in the practical processes of language learning via communication with each other. Shyamlee recommended the use of multi-media technology in classrooms, particularly as its positive impact on the learning process complements the ongoing role of the teacher.
The findings of the research indicate the ineffectiveness of traditional English teaching methods, and confirm that learners are more enthusiastic and interactive when using modern technology to learn English. Statistical data reiterate that a high percentage of those learning English language skills do so via modern media such as smart boards, computers, and screens, as compared to traditional teaching methods. Moreover, the study reveals that interaction with teachers and the overall responsiveness of students in the classroom are significantly improved when using modern techniques in English teaching.
In fact, it is clear both that students are more likely to learn from electronic curricula and that English language teachers prefer to employ modern technology rather than traditional methods of instruction.
Purpose of the Study
The topic of English language teaching and learning has emerged as one of the central issues in contemporary education. The research problem stems from the following obstacles: 1) Traditional methods lead students through precise curriculum content and rely on outdated learning aids such as blackboards and textbooks. As such, the teacher merely relays the information without accounting for positive or negative results.
2) Traditional methods rely on simple strategies that do not meet the purpose of learning or basic needs in the process of teaching. Since such teacher-centred pedagogies situate the learner as a recipient, their overarching goal is the extent to which a student can replicate information without necessarily understanding it.
3) Students rely on received sounds and images as opposed to interaction and discussion with the teacher. 4) Student assessment by means of set texts tends to foster boredom and loss of motivation and attention, as opposed to modern technological teaching methods which offer numerous incentives that increase the likelihood of acquiring English language skills in a timely and positive way.
In light of the above obstacles, the present study was undertaken to isolate the causes at the heart of the problem and attempt to resolve the issue by introducing a range of modern technologies into the context of English language teaching.
Research Questions
Comprehensive investigation of the above issues and the bid to find logical solutions for these challenges rests on the following questions:
Significance of the Research
The study aims to advance knowledge in a number of significant areas. In the first instance it will identify traditional teaching practice challenges which retard or obstruct the process of effective language learning in order to formulate a range of solutions to update them with technological methods and aids. The research will also evaluate the scale of the difficulties confronted by English teachers who use modern technology and determine whether additional IT skill training is required. It is hoped that the ensuing data may be used as a reference guide for future researchers in the same field and context, along with a detailed analysis of the teaching and education sector as intrinsic to the infrastructure of any modern society.
Technology has become ubiquitous in all forms of contemporary life. Since the teaching process cannot be isolated from this global trend, this study further considers the impact of recent English teaching technology as compared to traditional practices which arguably render students passive and prone to boredom. Indeed, this study demonstrates that the introduction of modern technological assistance yields timely learning progress and improved student proficiency across all English language skills including writing, reading, and conversation. Ultimately, the research provides key educative stake-holders and authorities with practical solutions to tackle the problems related to the use of modern technology in English language teaching for teachers and students alike.
Objectives
The research aims to identify the following: 1) the extent of the technological contribution to the development of the English language teaching process.
2) a suite of solutions to enable both teachers and learners to overcome the challenges which currently hinder the use of modern technology in English teaching.
3) possible alternatives and/or substitutes for traditional instruction in order to boost the efficiency of teacher and student potential to acquire English language skills. 4) appropriate IT training for English language teachers to meet the growing need. 5) the pros and cons of using technology in teaching English.
6) technological teaching programmes and aids which enable students to learn via an electronic curriculum.
Hypotheses
This study tested the following hypotheses:
Methodology
The researcher followed each of the following methodologies: 1) The researcher applied the descriptive method and experimental monitoring in order to fully interrogate the study questions and devise appropriate solutions.
2) The researcher applied historical methodology, delimited by time period and setting, to analyse the elements and reasons which gave rise to the basic research problem and the attendant challenges, thereby assisting an evaluation of present and future developmental impacts. In addition, the collation, review, and comparison of secondary data sourced from relevant records, reports, and previous studies were intrinsic to the design and scope of effective solutions.
3) The researcher also applied experimental methodology, which studies the impact of changes to the research problem while holding one variable fixed. Two forms were used: a laboratory experimental methodology conducted under controlled conditions, such as studying the impact of technology on teaching English; and a non-laboratory experimental methodology applied to a group of volunteer students beyond the main study setting.
Technical Terminology
MODERN: designed and made using the most recent ideas and methods.
TECHNOLOGY: methods, systems, and devices which are the result of scientific knowledge being used for practical purposes.
TEACHING: the concerted sharing of professional knowledge and experience, usually organized within a discipline: more generally, the provision of stimulus to the psychological and intellectual growth of a person by another person or artifact.
Results
The research results indicate the ineffectiveness of traditional English language teaching methods. In the studies conducted, between 75% and 85% of students confirmed these results, and 60% to 80% of students were dissatisfied with the traditional methods. In contrast, more than 90% of students were more enthusiastic and interactive when using modern technology to absorb English. Statistical data confirm that a high percentage of those who learn English skills interact with modern technological means such as smart boards, computers, and display screens, compared to traditional teaching methods. Surveys were conducted on random samples of students, both in the classroom and among volunteers outside it, drawn from private schools that adopt the most modern technological means and from public schools that lack them. Analysis of student performance showed that 75% to 95% of those taught with modern means achieve high results in English attainment, whereas the achievement rates of those taught by traditional means are very low. In addition, the study revealed that interaction with teachers and the overall response of students in the classroom improved significantly when using modern techniques in teaching English: interaction with teachers using modern media reached more than 90%, while those taught by traditional means showed less than 50% interaction. It is therefore clear from these studies and surveys that students are more inclined to learn from an e-curriculum, and that English teachers prefer to use modern technology rather than traditional teaching methods owing to the students' fast response, interaction, and high rates of educational attainment.
Discussion
Despite the fact that modern technology is increasingly ubiquitous across all aspects of modern life, the scope and utilization of appropriate technology within the education sector in general, and within English language teaching in particular, has remained conspicuously limited. So much so, that recent educational studies have attributed poor levels of student achievement to an inadequate use of technology in education, compounded by the continued prevalence of traditional teaching methods (Tamimi, 2014; Salama, 1999). A further study of technology use in language teaching found it enabled students to be more pro-active and to learn in line with their particular interests and abilities (Roma, 2013).
Reasons for Using Technology in English Language Teaching
Jacqui Murray (2015) categorizes the rationale for using technology in English language teaching as follows: 1) Technology allows students to demonstrate independence.
2) Technology differentiates the needs of students.
3) Technology deepens learning by using resources that students are interested in. 4) Students actively want to use technology.
5) Technology gives students an equal voice.
6) Technology enables students to build strong content knowledge wherever they find it.
Merits of Using Technology in English Language Teaching
English-language students are highly engaged and motivated by the use of modern technology such as radio, TV, computers, the Internet, electronic dictionaries, email, blogs, audio-visual aids, videos, and DVDs or VCDs (Nasser, 2017). The merits are as follows: 1) The use of technology in teaching English is deemed interesting and motivating as the student interacts with the subject.
2) Technology plays an important role in the process of teaching English by enhancing timely understanding, thereby enabling students to learn more efficiently.
3) Teachers perform more effectively when using modern technology since they can communicate with the students through a variety of ways.
4) The use of modern technology enables both teachers and students to access a wealth of books, publications, and references which are directly relevant to the English language curriculum. 5) Modern technology encourages student self-sufficiency which better equips them for the future.
6) Unlike traditionally passive teaching methods, modern technology teaching and learning aids incentivize both teacher and student.
Drawbacks of Using Technology in English Language Teaching
1) Many teachers and students have limited access to modern technology.
2) The role of the teacher can be diminished in cases where students become over-reliant on modern technology.
3) It can reduce student social activities by consuming most of their free time.
Findings
The answers to the core research question are summarized as follows: 1) Studies confirm there are not enough English language instructors trained in the use of relevant technological teaching aids.
2) The survey found greater student response and interaction with the use of modern technology than traditional methods.
3) The study also showed that the language teaching process was hampered by the unequal availability of relevant technology across educational institutions.
4) Studies confirm that up-to-date sound and visual effects and tablet display devices are more effective in teaching English language skills due to their immediacy and user-friendly English content, which reflects real-life situations as opposed to the traditional means that students find contrived and boring. 5) As anticipated, the study confirms that the use of modern technology leads to enhanced learner outcomes including better student motivation, improved achievement levels, and increased interaction between student and teacher. Improved student self-learning, self-reliance, and positive self-talk were also observed, as was more efficient use of time and effort for both the teacher and student. Going forward, it is evident that the various modes and sources of modern classroom technology have proven their reliability and effectiveness in the comprehensive, relevant, and timely instruction of contemporary English language skills.
Hypotheses Findings
The following indications relating to the study hypotheses have been determined: There are statistical differences indicating variations between traditional methods and modern technology in teaching English.
No statistical differences were found to indicate significant variations between traditional methods and modern technology in teaching English.
There are statistical indicators demonstrating the level of student assimilation of English language skills.
No statistical indicators were found which demonstrated levels of student assimilation of English language skills.
There are statistically significant indications of the efficiency of teachers in using modern technology to teach English language curricula.
No statistically significant indications of the efficiency of teachers in using modern technology to teach English language curricula were identified.
Recommendations
In light of the findings, the researcher suggests the following: 1) Substitute modern technology for obsolete English language teaching methods.
2) Provide appropriate training for all teachers to use modern technology in English language teaching.
3) Adopt complete electronic curriculum projects in line with modern requirements.
Conclusion
In summary, it is clear that despite genuine efforts to modernize traditional methods of teaching English, residual obsolete practices should be phased out and replaced by the available technology on offer via computers, smart devices, displays, audio-visual materials, and electronic approaches. This study underscores the vital educative potential and numerous benefits of technology for positive learning outcomes in the language classroom and the wider world, while acknowledging the financial implications of setting up the infrastructure and the need to encourage teachers to overcome their anxieties around teaching technologies. Of course, the purpose of both traditional and modern technologies is to maximize students' English skills and provide a space where learning can be best facilitated. One of the ultimate goals of using modern technology is to actively engage students in language learning and motivate them to acquire English language skills in a practical and realistic way.
This can be achieved through an open learning context which fosters openness and access to subjects and information through modern technological means, wherein students are motivated and directed to communicate with each other. In terms of future development, it is clear that multimedia will be integral to the student-centred process of teaching English to modern standards. As such, the quality of teaching and students' engagement with modern educational foundations would benefit from an extensive survey of English language skills in order to improve overall communication proficiency.
In conclusion, we believe that this process can fully enrich student thinking and practical language skills and promote improved efficacy in overall teaching and learning. Indeed, it is evident that many routine learning issues can be overcome through the effective incorporation of technology and appropriately trained teachers, while funding ramifications can be addressed through ministerial planning and the establishment of an infrastructure which prioritizes the interests of effective learning. | 2019-09-17T02:45:59.351Z | 2019-08-30T00:00:00.000 | {
"year": 2019,
"sha1": "38794dc87b79d88de088d075a814fb2d183cc756",
"oa_license": "CCBY",
"oa_url": "http://www.scholink.org/ojs/index.php/fet/article/download/2270/2375",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "575dedfb8593b82a2b849b8cb75e658e139673d0",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
265032655 | pes2o/s2orc | v3-fos-license | Associations of bolus insulin injection frequency and smart pen engagement with glycaemic control in people living with type 1 diabetes
To evaluate whether both bolus insulin injection frequency and smart pen engagement were associated with changes in glycaemic control, using real‐world data from adults with type 1 diabetes (T1D).
| INTRODUCTION
To optimize treatment outcomes and minimize the risk of hypoglycaemia, current guidelines recommend considering personalized glycaemic targets based on an individual's characteristics, including diabetes duration, age, life expectancy, treatment type and comorbidities. 1,5 The use of technology, such as insulin pumps and glucose sensors, is also recommended. 2 Managing glycaemic control in T1D requires multiple daily injections (MDI) of basal and bolus insulin or continuous subcutaneous insulin infusion using an insulin pump. MDI can be burdensome for individuals owing to the requirement to track the time and dose of each insulin injection. 1,6 Furthermore, the risk and fear of hypoglycaemic episodes and insulin overdose, as well as the time-consuming nature of injections, represent other common challenges for individuals with T1D; consequently, this can reduce adherence to these regimens, resulting in suboptimal glycaemic control. 6 Recent technological advances in diabetes care 8-10 may result in increased treatment adherence and improved disease outcomes for individuals with T1D. Smart insulin pens, such as the NovoPen 6 device, are an example of these advances, providing additional support for individuals with T1D in tracking and managing their insulin injections. The NovoPen 6 collects and stores data on the date, timing and dosage of administered insulin injections. This information can be uploaded to devices or repositories, such as a centralized database on a computer-based data visualization programme, using near-field communication. When paired with CGM data, these smart insulin pen data enable users and healthcare professionals (HCPs) to access and visualize insulin injection patterns and glucose metrics over time, and they facilitate discussions regarding optimizing glycaemic control and diabetes treatment management. 7 Consequently, smart insulin pen injection data can provide unique insights into routine diabetes treatment.
Indeed, real-world data have demonstrated improvements in glycaemic control and dosing behaviour when the NovoPen 6 was used in the management of T1D. 7,11,12 The aim of this study was to evaluate whether bolus insulin injection frequency and smart insulin pen engagement were associated with glycaemic control, using real-world data from adults with T1D who use CGM and a smart insulin pen device (NovoPen 6) for bolus insulin delivery.
| Data collection
Throughout the study, smart pen injection (dosage timing and injection frequency) and CGM data were collected continuously through uploads to the Glooko/Diasend system, Glooko app and MySugr app.
Participants were followed from the date of their first injection using the NovoPen 6 to the date of their last upload prior to August 2022.
Glycaemic
Study data were also used to evaluate whether individuals reached the internationally recommended targets for percentage of TIR (>70%) or TBR (<4%) on a given day. 13,14 Bolus injection behaviour was summarized according to the number of bolus injections per day and total bolus dose per day, excluding air shot injections. Air shot injections were detected by the iPrime algorithm (Table S1) based on size of dose and time to next dose.
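As a minimal sketch (not the authors' code), the per-day target checks described above can be computed from a day's CGM readings, assuming the standard consensus glucose range of 3.9-10.0 mmol/L for TIR:

```python
def daily_glycaemic_metrics(glucose_mmol):
    """Compute TIR/TBR/TAR percentages for one day of CGM readings (mmol/L)."""
    n = len(glucose_mmol)
    tir = 100.0 * sum(3.9 <= g <= 10.0 for g in glucose_mmol) / n
    tbr = 100.0 * sum(g < 3.9 for g in glucose_mmol) / n
    tar = 100.0 * sum(g > 10.0 for g in glucose_mmol) / n
    return {
        "tir": tir, "tbr": tbr, "tar": tar,
        "tir_target_met": tir > 70.0,  # recommended target: TIR > 70%
        "tbr_target_met": tbr < 4.0,   # recommended target: TBR < 4%
    }
```

For example, a day with eight in-range, one low and one high reading yields TIR 80%, meeting the TIR target but not the TBR target.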
Participant engagement with the smart insulin pen was quantified using a time-varying engagement score. On any given day, the engagement of a participant was quantified by the number of days with data uploads over the previous 14 days. The number of days with data uploads served as a proxy for measuring smart insulin pen engagement. Duration of smart insulin pen usage was calculated as the time since the first smart insulin pen injection.
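The rolling engagement score can be sketched as follows (an illustrative implementation, assuming upload days are stored as calendar dates; the 14-day look-back window follows the description above):

```python
from datetime import date, timedelta

def engagement_score(upload_days, day, window=14):
    """Number of distinct days with data uploads in the `window` days before `day`."""
    start = day - timedelta(days=window)
    return sum(1 for d in set(upload_days) if start <= d < day)
```

A participant who uploaded on 1, 3 and 20 January would have a score of 2 on 10 January (both early uploads fall in the window) but only 1 on 21 January.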
| Data handling and statistical methods
All available days with data for at least one registered bolus insulin injection and ≥70% CGM coverage were considered in the analysis.
Data from the first day of smart pen use or from days with a total bolus insulin dose of ≥500 U were collected but were excluded from the analysis because these data are unlikely to be representative of real use (Figure S1).
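Taken together, the day-level inclusion and exclusion rules above amount to a simple filter; the sketch below uses hypothetical field names to illustrate them:

```python
def eligible_days(days):
    """Keep analysis days meeting the inclusion rules; `days` is a list of dicts
    with hypothetical keys n_bolus, cgm_coverage, first_pen_day, total_bolus_u."""
    return [
        d for d in days
        if d["n_bolus"] >= 1           # at least one registered bolus injection
        and d["cgm_coverage"] >= 0.70  # >=70% CGM coverage
        and not d["first_pen_day"]     # first day of smart pen use excluded
        and d["total_bolus_u"] < 500   # implausible daily totals excluded
    ]
```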
The outcomes were derived per day and the covariates aggregated both by day and per individual. For each outcome, data were analysed for the overall study population and stratified according to the type of bolus insulin.
Continuous responses were analysed using a linear mixed model with participant as random effect and an exponential covariance function to model correlation between days. Binary responses (ie, clinical target met) were analysed using a generalized linear mixed model based on a binomial distribution and a logistic-link function, with participant as random effect. Both analysis types included individual-level covariates (mean number of bolus insulin injections per day, mean total bolus insulin dose per day, average number of upload days per 14-day period, country, age and sex) and day-level covariates (number of bolus insulin injections on a given day, total bolus insulin dose on a given day, number of upload days in the past 14 days and time since baseline in years). Linear and quadratic effects of each covariate were included in the model except for time since baseline, which was only included as a linear term. Country was included as a fixed effect in the analysis. The interaction terms between age and sex and between number of doses and mean number of doses were also included.
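As an illustration of the fixed-effects structure described above (a hypothetical sketch with invented names, not the authors' code; the random effects and the exponential day-to-day covariance are not shown), one day's covariate row might be assembled as:

```python
def fixed_effects_row(n_bolus, mean_n_bolus, total_dose, uploads_14d, years_since_baseline):
    """Build one day's fixed-effects covariate row: linear and quadratic terms
    for each covariate, a linear-only term for time since baseline, and the
    doses x mean-doses interaction described in the methods."""
    return [
        1.0,                                # intercept
        n_bolus, n_bolus ** 2,
        mean_n_bolus, mean_n_bolus ** 2,
        total_dose, total_dose ** 2,
        uploads_14d, uploads_14d ** 2,
        years_since_baseline,               # linear term only
        n_bolus * mean_n_bolus,             # interaction term
    ]
```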
In general, results from the fitted model are presented as least-squares means, setting continuous covariates to the observed mean and categorical covariates (country and sex) to observed frequencies in the cohort.
The probability that the 14-day average TIR met the clinical target of >70% was calculated based on the estimated least-squares mean and variance (within and across participants) for TIR, assuming a normal distribution.
| Study population
Of the 2087 NovoPen 6 and CGM users who provided data-sharing consent, 1291 met all the inclusion criteria (Figure S1). Subsequently, 97 individuals met at least one of the exclusion criteria and were removed from the analysis. In total, 1194 individuals (85.6% from Sweden; mean age, 38.1 years; 46.6% female) with 110 264 days of data were included in the analysis. A summary of participant characteristics is provided in Table 1.
Across the study population, individuals performed a mean (standard deviation [SD]) of 4.8 (1.8) bolus insulin injections per day (Table 1). Faster aspart was used by 52.6% of study participants, whereas 47.4% used aspart. The majority (63.1%) of participants exclusively used one smart pen; 406 participants (34.0%) only used a smart pen for aspart injections, whereas 417 (34.9%) only used a smart pen for the administration of faster aspart. A quarter (25.0%) of participants (n = 300) also used a smart pen with basal insulin degludec, whereas 55 (4.1%) also used basal insulin detemir. Over a 14-day period, participants completed uploads from the smart pen into the app on a mean (SD) of 1.7 (2.3) days.
The majority (53.9%) of individuals had isCGM, with a CGM interval (timing between CGM measurements) of 15 minutes; 45.3% used rtCGM, with an interval of 5 minutes (Table 1). Participants using isCGM had a median scan rate of 6.7 manual scans per day. Of these individuals, 25% scanned, on average, >10 times per day, whereas 25% scanned <3.3 times. No strong correlation between isCGM scan rate and the number of bolus injections was found.
Study population characteristics were similar in the faster aspart and aspart users (Table 1).
| Association between daily bolus insulin injection frequency and glycaemic control
The empirical distribution of, and association between, the mean percentage of TIR and the mean number of daily bolus doses is illustrated in Figure 1. The mean number of boluses varied greatly among individuals, and, on average, individuals with a high number of mean bolus doses had a higher percentage of TIR. This is confirmed in the formal statistical modelling, including all the covariates listed in the methods, in which the mean number of daily bolus insulin injections was statistically significantly associated with percentage of TIR, TBR and TAR (Figure 2A-C). For TIR, there was a positive association; participants with six bolus insulin injections per day had a mean TIR of 64.4% (95% confidence interval [CI] 62.5%-66.3%), whereas participants with three daily bolus insulin injections had a lower (ie, worse) percentage of TIR of 46.7% (95% CI 44.4%-49.0%; Table 2). The total daily bolus dose was included in the statistical model, thus comparing individuals with the same daily dose but with a different number of injections. Similarly, there was a positive association for TBR; participants with six bolus insulin injections per day had a TBR of 4.0% (95% CI 3.6%-4.4%), whereas participants with three daily bolus insulin injections had a slightly lower percentage of TBR of 3.3% (95% CI 2.9%-3.8%); this trend was also seen for TBR L1 (3.0-<3.9 mmol/L [54-<70 mg/dL]) and TBR L2 (<3.0 mmol/L [<54 mg/dL]; Table S2). A higher number of daily bolus insulin injections was also associated with decreased TAR (Table 2), TAR L1 (>10.0-13.9 mmol/L [>180-250 mg/dL]) and TAR L2 (>13.9 mmol/L [>250 mg/dL]; Table S2).
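The paired mmol/L and mg/dL thresholds quoted above are related by the usual conversion factor of roughly 18.02 mg/dL per mmol/L (derived from the molar mass of glucose):

```python
def mmol_to_mgdl(mmol):
    """Convert glucose from mmol/L to mg/dL (glucose molar mass ~180.16 g/mol)."""
    return mmol * 18.016

# The paired thresholds used above: 3.0 -> 54, 3.9 -> 70, 10.0 -> 180, 13.9 -> 250 mg/dL
```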
Statistically significant associations were also shown between the mean number of daily bolus insulin injections and mean glucose levels (Figure 2D), GMI (Figure 2E) and glucose variability (Figure 2F). A higher number of daily bolus insulin injections corresponded to lower mean glucose levels, lower GMI and reduced glucose variability (%CV). These findings were comparable across the subgroups of participants using either faster aspart or aspart (data not shown).
On days when more bolus insulin injections than usual were administered, individuals had a significantly lower percentage of TIR and TBR (Figure S2). The reduction in percentage of TIR was larger for those individuals who usually administered fewer bolus injections (P < 0.0001).
The estimated probability of achieving >70% TIR over 14 days is presented in Figure 3A. Individuals with an average of three daily bolus insulin injections had an estimated 10.9% (95% CI 8.8%-13.4%) chance of reaching the recommended target of >70% TIR. This result is based on the estimate of a mean percentage of TIR of 46.7% (Figure 2A) and an estimated 14-day SD of 18.9%.
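Under the normality assumption stated in the methods, the 10.9% figure can be reproduced directly from the reported mean TIR (46.7%) and 14-day SD (18.9%) using the normal tail probability:

```python
import math

def prob_tir_above_target(mean_tir, sd_tir, target=70.0):
    """P(TIR > target) for a normally distributed 14-day TIR, via the error function."""
    z = (target - mean_tir) / sd_tir
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

p = prob_tir_above_target(46.7, 18.9)  # ~0.109, matching the reported 10.9%
```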
The study data showed that greater numbers of daily bolus insulin injections were associated with a higher probability of meeting the clinical target of TIR >70% and a lower probability of meeting the clinical target of TBR <4% (Figure 3B).
| Association between smart pen engagement and glycaemic control
The effects of number of daily uploads on glycaemic control parameters are shown in Figure S3 and Table S3. Across all study participants, an increased number of uploads was significantly associated with improved TIR (Figure S3A), with no effect on TBR (Figure S3B). The highest estimated mean was found for individuals with 7 upload days per 14 days (ie, one upload, on average, every other day), who had a mean TIR of 63.5%, compared with 55.1% in those individuals who almost never uploaded data (mean of 0 uploads per 14-day period).
However, the estimated TIR was seen to decrease with very high upload frequency (>10 days per 14-day period). During periods with a higher-than-average upload frequency for each participant, increased percentages of TIR and TBR were observed (data not shown).
| Association of glycaemic control with other factors
Glycaemic control (TIR and TBR) was shown to be associated with age and daily insulin dose when controlling for all other covariates (ie, number of bolus insulin injections, upload frequency, sex, time since baseline and country). Younger adults showed a lower percentage of TIR and a higher percentage of TBR than older adults (Table S4). Regarding bolus insulin dose, glycaemic control was poorer in individuals with a higher total daily insulin dose (data not shown). The study data also showed a decrease in TIR with time since pen initiation. Across the study population, no statistically significant differences in TIR among countries were observed.
| DISCUSSION
In this observational study, we sought to investigate whether bolus insulin injection frequency and the level of smart insulin pen engagement were associated with changes in glycaemic control. The real-world data presented here demonstrate that daily bolus insulin injection frequency was a strong predictor of glycaemic control in adults with T1D [13,14], as assessed by TIR, TBR, TAR, mean glucose levels, %CV and GMI (Figure 2). These data indicate that, even when comparing individuals with the same total daily insulin dose, an increased number of daily bolus insulin injections is associated with more stable glucose profiles and lower average glucose levels.
The mean TIR of the individuals with T1D using a smart insulin pen in this study was 59% (Table 1); this is higher than the TIR for individuals with T1D receiving insulin pump therapy, in conjunction with CGM, in both French (mean TIR after 3 months: 53.3% [rtCGM]) 15 and Canadian (mean TIR after 6-12 months: 58.3% [rtCGM] and 54.5% [isCGM]) 16 observational studies.
Data from the present study suggest that, on average, six to eight daily bolus insulin injections were needed to achieve a mean TIR of >70% (Figure 3). Nonetheless, we observed that participants administered a mean of 4.8 daily bolus insulin injections and that <25% of individuals administered six or more daily bolus insulin injections (Table 1). These data indicate that most participants in this study may not have been administering an optimal number of bolus insulin injections to reach the treatment target of >70% TIR. However, it should be noted that there is a trade-off between TIR and TBR, with an increased risk of hypoglycaemia with too many daily bolus insulin injections (Figure 2).
Notably, as the probability of meeting the TIR target increased, the probability of meeting the TBR target decreased, indicating that the recommended frequency for treatment administration may be a balance between TIR and TBR when using MDI to treat individuals with T1D.
This trade-off is illustrated in Figure 3B.Achieving an appropriate balance is important when considering the recommended number of daily injections and is likely to be an individual-dependent clinical choice.
Given that this is an observational study, we cannot conclude that the relationship between the number of bolus insulin injections and TIR is causal, and we do not claim that increasing bolus insulin injections per se improves TIR. In fact, our data indicate the opposite pattern, because we found that days with a higher-than-average number of bolus insulin injections for a given individual corresponded with a lower percentage of TIR (Figure S2). This pattern was observed for all individuals regardless of the number of usual injections; however, the effect was strongest for those with typically fewer mean daily injections. The higher-than-average number of administered daily bolus insulin injections was probably due to correction doses or when the participant judged that extra doses were needed. These data suggest that, irrespective of the typical bolus insulin injection pattern, it is more challenging to keep glucose levels within the target range on days with an increased number of bolus insulin injections (eg, on days with correction doses).
Missed bolus doses and late bolus doses are common among individuals with T1D treated with MDI and are associated with poor glucose control.[12] According to the 2021 American Diabetes Association and European Association for the Study of Diabetes consensus report, the most effective way of maintaining glucose levels within the normal range in T1D is with a hybrid closed-loop system.[2] The use of smart pens, paired with CGM, may reduce the disparity between modern insulin pumps and conventional MDI. Compared with insulin pumps, smart pens with CGM could also have an important cost benefit.

Table 2: Association between the mean number of daily bolus insulin injections and glycaemic parameters in the overall study population.

The optimal treatment regimen should be decided between the HCP and the individual and should be tailored to the individual's specific lifestyle and conditions. Our analysis suggests that, depending on the individual's lifestyle and dietary conditions, this may require a recommendation to administer more bolus insulin injections, on average, than a typical individual would usually perform (Figure 3). Smart insulin pens may be an excellent tool to achieve this because the dialogue around optimal bolus insulin dosing can be based on the individual's own data and their specific circumstances. Indeed, our results indicate that tracking the number of bolus insulin doses with a smart insulin pen is a useful and simple treatment index to use in clinical practice.
Consideration should also be given to an individual's engagement with the smart pen because this was also shown to correlate with the proportion of TIR. Individuals who uploaded every second day, on average, had an 8% higher percentage of TIR than those who rarely uploaded (Figure S3; Table S3). We found that, during periods when participants uploaded more frequently, they also had a higher percentage of TIR. We did find a negative association between TIR and upload frequency for participants who uploaded more than every second day; however, as indicated by the reported CIs, this result has high uncertainty and is based on very few individuals (<5% of participants uploaded this frequently).[18][19] Foster et al analysed real-world US data collected from the T1D Exchange registry, reporting that the majority of individuals (>50%) had never downloaded data from their blood glucose meter or CGM device outside of HCP visits.[20]

Figure 3: (A) Probability of participants achieving >70% time in range (TIR; 3.9-10.0 mmol/L [70-180 mg/dL]), and (B) association between the mean number of daily bolus insulin injections and the probability of meeting glycaemic targets in the overall study population. Data are estimated least-squares mean with 95% confidence interval (CI). A standard deviation of 18.9% of TIR is assumed. TBR, time below range (<3.9 mmol/L [<70 mg/dL]).
This differs from the present analysis, in which only data from individuals who engaged with their smart insulin pen were analysed. In the study by Foster et al, while most of the individuals did not routinely download data, a subset of individuals downloaded their CGM data a few times per year and 5% downloaded it two to three times per month.[20] Interestingly, this aligns with the average number of days with data uploads reported in this analysis: a mean of 1.7 days with data uploads over a 14-day period (or 3.4 times per month).

To conclude, daily bolus insulin injection frequency was shown to be a strong predictor of glycaemic control in adults with T1D. These data suggest that the administration of six to eight daily bolus injections was required to have a high likelihood of achieving recommended glycaemic targets; however, the mean number of daily bolus injections administered per participant in this study was 4.8 (median: 4.6 bolus injections), indicating that many individuals may not be administering an optimal number of daily bolus insulin injections. The frequency of daily bolus insulin injections should be balanced so that there is an optimal improvement in TIR without significantly increasing TBR. There was also a positive association between smart pen engagement and TIR, with an increased frequency of uploads leading to improved TIR.
A carefully planned treatment regimen involving an optimal bolus insulin injection strategy, and effective smart pen engagement may result in better glycaemic control among adults with T1D.
AUTHOR CONTRIBUTIONS
Authors who are employees of Novo Nordisk (the trial sponsor) participated in developing the study concept and design, and in collecting the data. All authors were involved in the analysis and interpretation of data, participated in preparing the manuscript and approved the final manuscript for submission.
This was a retrospective, observational, real-world study that analysed data collected from individuals with T1D treated in several different clinics across Europe (Austria, Denmark, Finland, Germany, Norway, Spain, Sweden, Switzerland and the United Kingdom) from March 2021 to August 2022 (data collection period). Participating countries were selected for inclusion based on the availability of the NovoPen 6 (Novo Nordisk A/S, Denmark) device at the time of the analysis. Data were collected across the participating countries from all adults (≥18 years of age) with T1D who were using a NovoPen 6 device to administer their bolus insulin (fast-acting insulin aspart [faster aspart] or insulin aspart [aspart]), who had available CGM data and who provided consent for their pseudonymized injection and CGM data to be shared with Novo Nordisk for research purposes (Figure S1). Individuals using both faster aspart and aspart during the data collection period (n = 23) and those of unknown sex (n = 23) were excluded.
Glycaemic control was assessed daily through the evaluation of the percentage of time spent in glycaemic target range (time in range [TIR]; 3.9-10.0 mmol/L [70-180 mg/dL]), the percentage of time spent above glycaemic target range (time above range [TAR]; >10.0 mmol/L [>180 mg/dL]), the percentage of time spent below glycaemic target range (time below range [TBR]; <3.9 mmol/L [<70 mg/dL]), mean glucose levels (mmol/L), glucose variability (within-day coefficient of variation [CV] as a percentage) and the glucose management indicator (GMI).
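These daily metrics are simple functions of the CGM trace; a minimal sketch follows (the glucose values are synthetic and for illustration only, and GMI is omitted):

```python
import statistics

# Synthetic one-day CGM trace in mmol/L (a real 5-minute trace has ~288 points;
# a short illustrative list is used here)
glucose = [5.2, 6.8, 9.5, 11.2, 10.4, 8.1, 3.6, 4.4, 7.0, 12.5, 9.9, 6.3]

def cgm_metrics(g, lo=3.9, hi=10.0):
    """Percentage of readings in/below/above the 3.9-10.0 mmol/L target range,
    plus the within-day coefficient of variation as a percentage."""
    n = len(g)
    tir = 100 * sum(lo <= x <= hi for x in g) / n
    tbr = 100 * sum(x < lo for x in g) / n
    tar = 100 * sum(x > hi for x in g) / n
    cv = 100 * statistics.stdev(g) / statistics.mean(g)
    return tir, tbr, tar, cv

tir, tbr, tar, cv = cgm_metrics(glucose)
print(f"TIR {tir:.1f}%  TBR {tbr:.1f}%  TAR {tar:.1f}%  CV {cv:.1f}%")
```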
Figure 1: Distribution of, and association between, the mean percentage of time in range (TIR; 3.9-10.0 mmol/L [70-180 mg/dL]) and the mean number of daily bolus doses. Data shown are empirical mean values per individual. The size of the circles indicates the number of days with continuous glucose monitoring. The smoothing line is based on a generalized additive model with spline basis and is for illustration only.

This study analysed a large body of observational data from a wide range of individuals owing to the broad inclusion criteria. The analysis of these real-world data provides unique and clinically meaningful insights into the behaviours of individuals with T1D and the impact of these behaviours on glycaemic control. Additionally, real-world data can highlight treatment effects which cannot be studied, or which are considered unethical to study, in a randomized controlled trial, such as the effect of suboptimal bolus dosing frequency on glycaemic control. The use of smart insulin pen data gives a quantitative and precise insight into the daily lives of individuals with T1D treated with MDI; this is not subject to recollection bias and missing data, which can occur with diary-based approaches. However, there are inherent limitations associated with observational studies that should be considered, such as the non-generalizability of results, selection bias, as well as bias due to unobserved confounders, such as treatment goals, HCP influence, food intake, exercise levels, overall treatment adherence, general disease awareness, and individuals' self-management skills. In this study, more than 85% of participants resided in Sweden; thus, the findings may not be universally applicable to a wider population. Furthermore, all the present study findings (for both the numbers of daily bolus insulin injections and smart insulin pen engagement) are associations; causality cannot be concluded based on these data alone. It should also be noted that, although participants consented to share de-identified injection and CGM data for use in this study, limited additional information about the clinical characteristics of the included individuals was available. Details of treatment regimens and specific treatment goals were not captured, and no information was collected regarding whether the actions or behaviours of HCPs might have influenced an individual's
engagement with their smart pen. Food intake, carbohydrate consumption, the number of correction bolus insulin injections, and exercise levels were also not recorded, all of which could have affected the numbers of required daily insulin injections, as well as glycaemic control. Details regarding the CGM devices, such as the brand and model, were not available. As the amount of data collected with NovoPen 6 devices increases, future studies will focus more on investigating individuals' behaviour, injection patterns (including the frequency of correction bolus insulin injections) and treatment outcomes.
| 2023-11-07T06:18:18.221Z | 2023-11-05T00:00:00.000 | {
"year": 2023,
"sha1": "3b697246151f97148744c536094cf8de90096bab",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/dom.15316",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "6ec7ac4507347717aece73f3521f35febf2514a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19009084 | pes2o/s2orc | v3-fos-license | Universal Behaviour of Metal-Insulator Transitions in the p-SiGe System
Magnetoresistance measurements are presented for a strained p-SiGe quantum well sample where the density is varied through the B=0 metal-insulator transition. The close relationship between this transition, the high field Hall insulator transition and the filling factor $\nu$=3/2 insulating state is demonstrated.
The strained p-type SiGe system exhibits, in addition to the normal integer quantum Hall effect (IQHE) transitions, an insulating phase near filling factor ν = 3/2 [1,2] and a B=0 metal-insulator transition of the kind observed in high mobility Si-MOSFETs [3]. Results are presented here that show the close relationship between these various transitions.
Samples were grown by a UHV chemical vapour deposition process. An intrinsic Si layer is followed by a 40 nm Si0.88Ge0.12 quantum well, a spacer layer of variable thickness and a boron doped silicon layer. The quantum well is sufficiently narrow that the lattice constant difference between the alloy and the pure Si is all taken up by strain, which means the heavy hole band, characterised by a |M_J| = 3/2 symmetry, is well separated from other bands. The g-factor is large (of order 6) and depends only on the perpendicular component of magnetic field [4,5]. At high fields the spin splitting is further enhanced by exchange, resulting in a fully polarised "ferromagnetic" spin system at ν = 2. The ratio of the transport and quantum lifetimes, determined from the Shubnikov-de Haas oscillations, is about one [6], showing that the disorder is dominated by a short-ranged scattering potential.
with a scattering parameter s (which can be identified with the Chern-Simons boson conductivity σ_xx^(b) [8]) given by eqn. 2. Here ν_c is the critical filling factor and the exponent κ is close to 3/7 at low temperatures. For N_L = 0, ie at the termination of the quantum Hall sequence, this gives eqn. 3, corresponding to a quantised Hall insulator. Near ν = 1.5 another insulating phase is observed in many samples. The presence of this phase depends on density, disorder and on magnetic field tilt [1,2]. An activation analysis [2] correlates insulating behaviour with the existence of degenerate spin states at the Fermi level: that is, it is suppressed when the ferromagnetic polarisation of the spins at ν = 2 persists through ν = 1.5 as the field is increased. This can be demonstrated to be a reentrant transition, growing out of the ν = 1 IQHE state, although when it is very strongly developed it appears to grow directly out of the ν = 2 or 3 states. Figure 1a shows the ν = 3/2 and high field Hall insulating transitions measured using a two terminal technique in a sample with a density of 1.4 × 10^11 cm^-2. A scaling plot of this data (fig. 1b) shows ρ_xx plotted against (ν_c − ν)/T^κ with κ = 3/7. There is good agreement with eqn. 2 in both cases with slightly different values of T_0. The scaling deteriorates in the ν = 3/2 insulating phase because of the close proximity of the two critical points.
For the B=0 metal-insulator transition the temperature dependence of ρ_xx in the insulating phase is well described, over several orders of magnitude, by ρ_c exp[(T_0/T)^n] with ρ_c ∼ 0.5 h/e^2 and n ∼ 0.4 [6]. In the metallic phase, at low T, it is of the general form given in eqn. 4. As is commonly observed in these systems [9,10], for densities near the critical value the resistance does not always increase monotonically and often has a maximum and a tilted separatrix between the metallic and insulating phases. This makes it difficult to independently determine the prefactor ρ_1 and the exponent p by fitting to eqn. 4. Choices of p = 1 (and ρ_1 small) or alternatively p ≈ 0.4 (with ρ_1 then of order 0.5 h/e^2) are equally successful.
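To illustrate how a stretched-exponential law of this kind can be extracted from resistivity data: for fixed n, ln ρ is linear in T^(−n), with slope T_0^n and intercept ln ρ_c. A sketch with synthetic data follows (all parameter values are assumptions for illustration, not the measured ones):

```python
import math
import random

N = 0.4       # assumed stretched-exponential exponent
RHO_C = 0.5   # assumed critical resistivity, in units of h/e^2
T0 = 1.2      # assumed characteristic temperature (K)

# Synthetic "measurements" of rho(T) = rho_c * exp[(T0/T)^n] with 1% noise
random.seed(0)
temps = [0.05 * k for k in range(1, 21)]  # 0.05 K .. 1.0 K
rho = [RHO_C * math.exp((T0 / t) ** N) * (1 + 0.01 * random.gauss(0, 1))
       for t in temps]

# Linear least squares of ln(rho) vs x = T^(-n): ln rho = ln rho_c + T0^n * x
xs = [t ** (-N) for t in temps]
ys = [math.log(r) for r in rho]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

rho_c_fit = math.exp(intercept)   # recovered critical resistivity
T0_fit = slope ** (1 / N)         # recovered characteristic temperature
print(f"rho_c ≈ {rho_c_fit:.3f} h/e^2, T0 ≈ {T0_fit:.2f} K")
```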
In each case, however, the parameter T_1 varies with density and there is a general similarity to eqn. 2.
Although the temperature dependence in the "metallic" phase is dominated by activated behaviour, there is also evidence of weak localisation [11]. Within experimental error, a ln(T) term cannot be detected directly, but the negative magnetoresistance around B = 0 and the positive behaviour at higher fields can be consistently interpreted in terms of a sum of weak localisation and Zeeman contributions. The value of F* [12] extracted in this way is large (2.45). Combined with the cancellation between the two terms, it leads to a coefficient for the ln(T) behaviour which is close to zero, consistent with the experimental data. Figure 2 shows magnetoresistance data for a sample where the density was varied, by exploiting a persistent photoconductivity effect, through the B=0 critical value. The zero field resistivities (figure 3) show the critical density is 7.8 × 10^10 cm^-2. For the highest density trace (figure 2a) there are three fixed points corresponding respectively to transitions into the ν = 3/2 insulating phase, into the ν = 1 QHE state and into the high field Hall insulating phase. As the density is reduced the first transition disappears (or moves to a much lower field); this is followed, in the next trace, by the simultaneous disappearance of the two higher field fixed points so that, at the lowest density, the temperature dependence over the whole field range is insulating. The low field Hall resistance is well defined through the whole range of densities. This indicates a Hall insulator with σ_xx and σ_xy both vanishing but with the ratio ρ_xy = σ_xy/σ_xx^2 taking the classical value (B/pe, where p is the density). At higher fields it retains this classical behaviour until it becomes quantised near ν = 1, and approximate quantisation then continues well into the high field insulating state, consistent with eqn. 3.
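For orientation, the classical Hall resistivity B/pe at the quoted critical density can be evaluated numerically, alongside 0.5 h/e², the approximate critical resistivity discussed in the text; the field value B = 1 T is an assumption for illustration:

```python
# Classical Hall resistivity rho_xy = B/(p*e) at the critical density,
# compared with the quoted critical resistivity ~0.5 h/e^2.
E = 1.602176634e-19   # elementary charge (C)
H = 6.62607015e-34    # Planck constant (J s)

p = 7.8e10 * 1e4      # critical density: 7.8e10 cm^-2 -> m^-2
B = 1.0               # assumed perpendicular field (T)

rho_xy = B / (p * E)          # ohms per square
rho_crit = 0.5 * H / E**2     # 0.5 h/e^2

print(f"rho_xy(1 T) ≈ {rho_xy:.0f} ohm")   # ≈ 8.0 kOhm
print(f"0.5 h/e^2  ≈ {rho_crit:.0f} ohm")  # ≈ 12.9 kOhm
```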
The critical resistivities for all three transitions: the B=0 transition, the ν = 3/2 transition, and the high field Hall insulator transition are approximately 0.5 h/e^2. In contrast to the situation in p-GaAs [10], the resistivity at the high field transition point (and also for the ν = 1.5 transition) is almost independent of density. Furthermore, the B = 0 transition is unchanged by magnetic field. Again, this is in contrast to the situation in p-GaAs where the B=0 transition transforms smoothly into the IQH effect transition. This difference in behaviour is probably a consequence of the strong spin-coupling in p-SiGe which quenches the independent degree of freedom of the spins.
The behaviour is summarised in a phase diagram (figure 4) which is to some extent schematic. At high densities a well defined Landau level structure is observed with the re-entrant ν = 3/2 insulating phase. At lower densities this is washed out in a region where Γ (the Landau level broadening) is larger than ħω_c (the Landau level spacing), but the high field and ν = 3/2 insulating phases persist. At the lowest density the behaviour is insulating over the whole field range, with no clear distinction between the three types of insulating behaviour. The well-known "floating-up" of the lowest Landau level is shown. In this case the condition Γ = ħω_c is the same as the criterion for the appearance of the insulating behaviour, k_F l = 1 (where l is the mean free path). For higher Landau levels, however, this is not the case and the disappearance of the Landau levels must be associated more with the dominance of the disorder than with "floating-up".
In all cases spin plays an important role. For the B = 0 transition, in the insulating phase, this is demonstrated by the insensitivity to magnetic field; in the metallic regime it is presumably the cause of the large value of F*. The high field quantised Hall insulating state is, by definition, spin polarised and spins also seem to play a role in the formation of the ν = 3/2 insulating state. | 2014-10-01T00:00:00.000Z | 1999-06-17T00:00:00.000 | {
"year": 1999,
"sha1": "0748c80bff7adf2c41db50d35f5a13b813d003c7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9906285",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e08ae34a5c22b45de5c1071bba2f54c8d4aaf780",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
256164099 | pes2o/s2orc | v3-fos-license | Non-zero $\theta_{13}$, CP-violation and Neutrinoless Double Beta Decay for Neutrino Mixing in the $A_4\times Z_2\times Z_3$ Flavor Symmetry Model
We study the modification of the Altarelli-Feruglio $A_4$ flavor symmetry model by adding three singlet flavons $\xi'$, $\xi''$ and $\rho$; the model is augmented with an extra $Z_2\times Z_2'$ symmetry to prevent the unwanted terms in our study. The addition of these three flavons leads to two higher order corrections in the form of two perturbation parameters $\epsilon$ and $\epsilon'$. These corrections yield the deviation from the exact tri-bimaximal (TBM) neutrino mixing pattern by producing a non-zero $\theta_{13}$ and other neutrino oscillation parameters which are consistent with the latest experimental data. In both corrections, the neutrino masses are generated via the Weinberg operator. The analysis of the perturbation parameters $\epsilon$ and $\epsilon'$ shows that the normal hierarchy (NH) and inverted hierarchy (IH) results for $\epsilon$ do not change much. However, as the value of $\epsilon'$ increases, $\theta_{23}$ occupies the lower octant for the NH case. We further investigate the neutrinoless double beta decay parameter $m_{\beta\beta}$ using the parameter space of the model for both normal and inverted hierarchies of neutrino masses.
I. INTRODUCTION
Though the particle physics experiments and observations have been successfully confirming the standard model (SM) of particle physics, the origin of the flavor structure, the strong CP problem, the matter-antimatter asymmetry of the universe, dark matter, dark energy and non-zero neutrino masses remain unexplained.

Neutrino physics is an experimentally driven field. It has made tremendous progress over the past few decades and attempts are underway to quantify the neutrino oscillation parameters more precisely. A few notable works in neutrino physics are placed in references [1][2][3][4][5][6][7][8][9][10].
Neutrino oscillation phenomenology is characterized by two large mixing angles, the solar angle θ12 and the atmospheric angle θ23, together with the relatively small reactor mixing angle θ13. In tri-bimaximal mixing (TBM), the reactor mixing angle θ13 is zero and the CP phase δCP is consequently undefined. However, in 2012 the Daya Bay reactor neutrino experiment (sin²2θ13 = 0.089 ± 0.010 ± 0.005) [11] and the RENO experiment (sin²2θ13 = 0.113 ± 0.013 ± 0.019) [12] showed that θ13 ≈ 9°. Several accelerator-based long baseline neutrino oscillation experiments like MINOS [13], Double Chooz [14] and T2K [15] also measured consistent non-zero values for θ13. Since TBM has been ruled out by the non-zero reactor mixing angle [12,14], one of the popular ways to achieve realistic mixing is through either its extensions or its modifications.
Our model is based on the Altarelli-Feruglio (A-F) A4 discrete flavor symmetry model [61][62][63]. We have extended the flavon sector of the A-F model by introducing the extra flavons ξ′, ξ″ and ρ, which transform as 1′, 1″ and 1 respectively under A4, to obtain the deviation from the exact TBM neutrino mixing pattern. We also introduce a Z2 × Z3 symmetry in our model to prevent unwanted terms, and this helps in constructing the specific structure of the coupling matrices. We calculate higher dimension perturbative parameters from the Lagrangian, modify the neutrino mass matrix M_ν and realize symmetry based studies. In a few similar works [64,65], arbitrary perturbative terms were simply added to the M_ν obtained from the A-F model, without calculation from the Lagrangian; in the papers [66][67][68][69], the perturbative term was calculated using the Type-I seesaw mechanism to depart from the tri-bimaximal mixing pattern.
Various efforts have also been made to deviate from the TBM structure by adding extra flavons in order to generate a non-zero θ13 and a non-trivial CP phase, thereby explaining the experimental data. For example, in Ref. [70] the author shows that the non-degeneracy of the neutrino Yukawa coupling constants is the origin of the deviations from TBM mixing, and that unremovable CP phases in the neutrino Yukawa matrix give rise to both low-energy CP violation, measurable in neutrino oscillations, and high-energy CP violation. In Ref. [71] the authors show that CP is spontaneously broken at high energies, after the breaking of the flavon symmetry by a complex vacuum expectation value of an A4 triplet and a gauge singlet scalar field. In Ref. [72], the Dirac CP violating phase is predicted using the experimental mixing parameters, and this model is consistent with the experimental data only for the normal hierarchy of neutrino masses.
Therefore, this gives us an opportunity to analyze in detail the M_ν obtained through the Weinberg operator and to study the effect of the two perturbative terms ε and ε′ on the neutrino oscillation parameters and the NDBD parameter m_ee.
The content of our paper is organised as follows: in Section 2, we give an overview of the framework of our model, specifying the fields involved and their transformation properties under the imposed symmetries; we then introduce two types of corrections and study the impact of these correction terms on the neutrino oscillation parameters. In Section 3, we perform the numerical analysis and discuss the results for the neutrino phenomenology. We finally conclude our work in Section 4.
II. FRAMEWORK OF THE MODEL
The non-Abelian discrete symmetry group A4 is the group of even permutations of four objects and it has 12 elements (12 = 4!/2). It can describe the orientation-preserving symmetry of a regular tetrahedron, so this group is also known as the tetrahedron group. It can be generated by two basic permutations S and T having the properties S² = T³ = (ST)³ = 1. Here the generator T has been chosen to be diagonal.
The multiplication rules corresponding to this specific basis of the two generators S and T are as follows: here, 1 is symmetric under the exchange of the second and third elements of a and b, 1′ is symmetric under the exchange of the first and second elements, while 1″ is symmetric under the exchange of the first and third elements.
Here 3 is symmetric and 3 A is anti-symmetric. For the symmetric case, we notice that the first element here has 2-3 exchange symmetry, the second element has 1-2 exchange symmetry and the third element has 1-3 exchange symmetry.
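The presentation relations above can be checked numerically in the T-diagonal basis used here, taking (as in the Altarelli-Feruglio papers) T = diag(1, ω, ω²) with ω = e^(2πi/3) and S = (1/3)[[−1, 2, 2], [2, −1, 2], [2, 2, −1]]; the sketch below verifies S² = T³ = (ST)³ = 1:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # cube root of unity

S = [[-1/3,  2/3,  2/3],
     [ 2/3, -1/3,  2/3],
     [ 2/3,  2/3, -1/3]]
T = [[1, 0, 0],
     [0, w, 0],
     [0, 0, w * w]]

def matmul(A, B):
    """3x3 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def is_identity(M, tol=1e-12):
    return all(abs(M[i][j] - (1 if i == j else 0)) < tol
               for i in range(3) for j in range(3))

ST = matmul(S, T)
assert is_identity(matmul(S, S))                 # S^2 = 1
assert is_identity(matmul(T, matmul(T, T)))      # T^3 = 1
assert is_identity(matmul(ST, matmul(ST, ST)))   # (ST)^3 = 1
print("A4 presentation relations verified")
```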
Our model is based on the Altarelli-Feruglio A4 model [61][62][63]. We have added the additional flavons ξ′, ξ″ and ρ to obtain the deviation from the exact TBM neutrino mixing pattern, and we impose an extra Z2 × Z3 symmetry to avoid unwanted terms. The particle content and the charge assignments are summarized in the model's table; the recoverable rows assign SU(2) representations 2, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1 and A4 representations 3, 1, 1, 1, 1, 1, 3, 3, 1, 1, 1, 1 to the twelve fields. Consequently, the invariant Yukawa Lagrangian is as follows: here we have used a compact notation, and Λ is the cut-off scale of the theory. The terms y_e, y_μ, y_τ, x_a, x_a′, x_a″ and x_b are coupling constants. We assume Φ_T does not couple to the Majorana mass matrix and Φ_S does not couple to the charged leptons.
After spontaneous symmetry breaking of flavor and electroweak symmetry, we obtain the mass matrices for the charged leptons and neutrinos. The vacuum expectation values (VEV) of the scalar fields are of the form [61][62][63]. For the sake of completeness, we present the explicit form of the scalar potentials and their corresponding VEVs in Appendix A.
The charged lepton mass matrix² is given by M_l = v_d (v_T/Λ) diag(y_e, y_μ, y_τ), where v_d and v_T are the VEVs of h_d and Φ_T respectively. Now, taking higher dimension terms into account in the neutrino sector, we consider two types of corrections, of the form x ξ′(ll)ρρ/Λ² and x′ ξ″(ll)ρρ/Λ², where x and x′ are coupling constants. These give rise to two cases, and we will study the impact of these correction terms on the neutrino oscillation parameters.
A. Case I
With the additional higher dimension term x ξ′(ll)ρρ/Λ² and using the VEVs ⟨Φ_S⟩ = (v_s, v_s, v_s), ⟨ξ⟩ = 0, ⟨ξ′⟩ = u′, ⟨ξ″⟩ = u″, ⟨h_u⟩ = v_u and ⟨ρ⟩ = v_ρ, we obtain the neutrino mass matrix of equation (8). We can assume c ≈ d; this is a reasonable assumption to make, since the phenomenology does not change drastically unless the VEVs of the singlet Higgs fields vary by a huge amount. Thus, the neutrino mass matrix in equation (8) reduces to equation (9).

² Charged fermion masses are given by [61] m_e = y_e v_d v_T/Λ, m_μ = y_μ v_d v_T/Λ and m_τ = y_τ v_d v_T/Λ. We can obtain a natural hierarchy among m_e, m_μ and m_τ by introducing an additional U(1)_F flavor symmetry under which only the right-handed lepton sector is charged. We write the F-charge values in this model as 0, q and 2q for τ^c, μ^c and e^c respectively. By assuming that a flavon θ, carrying a negative unit of F, acquires a VEV ⟨θ⟩/Λ ≡ λ < 1, the Yukawa couplings become field dependent quantities y_{e,μ,τ} = y_{e,μ,τ}(θ) and we have y_τ ≈ O(1), y_μ ≈ O(λ^q), y_e ≈ O(λ^{2q}).
B. Case II
Here, we take into consideration the correction term of the second type, x′ ξ″(ll)ρρ/Λ². The resulting neutrino mass matrix obtained in this case is given by equation (10), where ε′ = x′ u″ v_ρ²/Λ³ parameterizes the correction to the TBM neutrino mixing. Applying a similar condition on c and d as in case I, we obtain equation (11). In section 3, we give the detailed phenomenological analysis for both cases and discuss the effect of the perturbations (ε and ε′) on the various neutrino oscillation parameters. Further, we present a numerical study of neutrinoless double-beta decay considering the allowed parameter space of the model.
III. NUMERICAL ANALYSIS AND RESULTS
In the previous section, we have shown how the Altarelli-Feruglio A4 model can be modified by adding three extra singlet flavons and taking higher dimension terms into consideration.
In this section, we perform a numerical analysis to study the capability of the perturbation parameters ε and ε′ to produce the deviation of neutrino mixing from exact TBM. For each case, we will discuss the results for both normal as well as inverted hierarchies. Throughout, the mixing angles and δ_CP can be obtained from U, and δ may be given in terms of P = (m2² − m1²)(m3² − m2²)(m3² − m1²) sin 2θ12 sin 2θ23 sin 2θ13 cos θ13 (15). For the comparison of the theoretical neutrino mixing parameters with the latest experimental data [73], the modified A4 model is fitted to the experimental data by minimizing the χ² function χ² = Σ_i [(λ_i^model − λ_i^expt)/Δλ_i]², where λ_i^model is the i-th observable predicted by the model, λ_i^expt stands for the i-th experimental best-fit value and Δλ_i is the 1σ range of the i-th observable.
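A sketch of both ingredients of such an analysis follows: reading mixing angles off a given U via the standard relations sin θ13 = |U_e3|, tan θ12 = |U_e2/U_e1|, tan θ23 = |U_μ3/U_τ3|, and evaluating the χ² at a trial point. The TBM matrix below is the textbook form, reproducing sin²θ12 = 1/3, θ23 = 45° and θ13 = 0; the observables, best-fit values and 1σ ranges are illustrative placeholders, not the global-fit inputs used in the paper:

```python
import math

def mixing_angles(U):
    """(theta12, theta23, theta13) in degrees from a PMNS-like matrix, using
    sin(th13) = |U_e3|, tan(th12) = |U_e2/U_e1|, tan(th23) = |U_mu3/U_tau3|."""
    th13 = math.asin(abs(U[0][2]))
    th12 = math.atan2(abs(U[0][1]), abs(U[0][0]))
    th23 = math.atan2(abs(U[1][2]), abs(U[2][2]))
    return tuple(math.degrees(t) for t in (th12, th23, th13))

def chi2(model, expt, sigma):
    """chi^2 = sum_i ((model_i - expt_i)/sigma_i)^2, as in the fit described above."""
    return sum(((m - e) / s) ** 2 for m, e, s in zip(model, expt, sigma))

# Textbook tri-bimaximal matrix: sin^2(th12) = 1/3, th23 = 45 deg, th13 = 0
U_TBM = [[ math.sqrt(2/3), math.sqrt(1/3),  0.0],
         [-math.sqrt(1/6), math.sqrt(1/3), -math.sqrt(1/2)],
         [-math.sqrt(1/6), math.sqrt(1/3),  math.sqrt(1/2)]]
th12, th23, th13 = mixing_angles(U_TBM)
print(f"TBM angles: {th12:.2f}, {th23:.2f}, {th13:.2f} deg")

# chi^2 for a hypothetical model point (all numbers are placeholders)
expt  = [0.304, 0.570, 0.0222]   # illustrative best-fit (s12^2, s23^2, s13^2)
sigma = [0.012, 0.018, 0.0006]   # illustrative 1-sigma ranges
model = [0.318, 0.545, 0.0215]   # hypothetical model prediction
print(f"chi^2 = {chi2(model, expt, sigma):.2f}")
```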
First, we shall discuss case I, with its perturbation parameter . In Fig. 1, we show the parameter space of the model for case I, which is constrained using the 3σ bounds on neutrino oscillation data (Table I). For both normal and inverted hierarchies, one can see that there is a high correlation between the different parameters of the model. The resulting best fit of the various oscillation parameters is shown in Table III.
where U Li are the elements of the first row of the neutrino mixing matrix U PMNS (Eq. 1), which is dependent on the known parameters θ 12 , θ 13 and the unknown Majorana phases α and β. U PMNS is the diagonalizing matrix of the light neutrino mass matrix m ν . Using the constrained parameter space, we have evaluated the value of m ββ for case I and case II in both the NH and IH cases. The variation of m ββ with the lightest neutrino mass is shown in Figure 6 for both neutrino mass hierarchies. The sensitivity reach of NDBD experiments like KamLAND-Zen [74,75], GERDA [76][77][78] and LEGEND-1k [79] is also shown in Figure 6. m ββ is found to be well within the sensitivity reach of these NDBD experiments for both cases, (I) and (II). The superpotential of the model with the "driving fields" φ T 0 , φ S 0 and ξ 0 , which allows us to build the scalar potentials in the symmetry-breaking sector, reads as W = M (φ T 0 φ T ) + g(φ T 0 φ T φ T ) + g 1 (φ S 0 φ S φ S ) + g 2 ξ(φ S 0 φ S ) + g 3 ξ (ξ S 0 ξ S ) + g 4 ξ (ξ S 0 ξ S ) + g 5 ξ 0 (φ S φ S ) + g 6 ξ 0 ξ 2 + g 7 ξ 0 ξ ξ (A1) At this level there is no fundamental distinction among the singlets ξ, ξ and ξ , so we can consider that φ S 0 φ S couples with ξ only.
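The effective Majorana mass m ββ discussed above is conventionally written as m ββ = |Σ i U ei 2 m i |; the following is a hedged numerical sketch (phase conventions differ between references, and all inputs below are illustrative placeholders, not the model's fitted values):

```python
import cmath

def m_bb(m1, m2, m3, s12sq, s13sq, alpha, beta):
    """Effective Majorana mass |sum_i U_ei^2 m_i|, with U_ei the first row of
    U_PMNS; alpha and beta are the Majorana phases in one common convention."""
    c12sq = 1.0 - s12sq
    c13sq = 1.0 - s13sq
    term = (m1 * c12sq * c13sq
            + m2 * s12sq * c13sq * cmath.exp(1j * alpha)
            + m3 * s13sq * cmath.exp(1j * beta))
    return abs(term)

# Illustrative normal-hierarchy inputs (masses in eV); placeholders only.
print(f"{m_bb(0.001, 0.0087, 0.0502, 0.304, 0.0222, 0.0, 0.0):.4f} eV")  # → 0.0044 eV
```

Scanning the lightest mass and the two Majorana phases over their allowed ranges produces the familiar m ββ bands compared against experimental sensitivities.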
"year": 2022,
"sha1": "6184ac42a4173940bcd68d3909e22fc4a2ebcdbd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2747e57218251ef71519805d851ac45dc304e2bf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Menopausal symptoms and work: a narrative review of women’s experiences in casual, informal, or precarious jobs
Governments, employers, and trade unions are increasingly developing “menopause at work” policies for female staff. Many of the world’s most marginalised women work, however, in more informal or insecure jobs, beyond the scope of such employment protections. This narrative review focuses upon the health impact of such casual work upon menopausal women, and specifically upon the menopausal symptoms they experience. Casual work, even in less-than-ideal conditions, is not inherently detrimental to the wellbeing of menopausal women; for many, work helps manage the social and emotional challenges of the menopause transition. Whereas women in higher status work tend to regard vasomotor symptoms as their main physical symptom, women in casual work report musculoskeletal pain as more problematic. Menopausal women in casual work describe high levels of anxiety, though tend to attribute this not to their work as much as their broader life stresses of lifelong poverty and ill-health, increasing caring responsibilities, and the intersectionally gendered ageism of the social gaze. Health and wellbeing at menopause is determined less by current working conditions than by the early life experiences (adverse childhood experiences, poor educational opportunities) predisposing women to poverty and casual work in adulthood. Approaches to supporting menopausal women in casual work must therefore also address the lifelong structural and systemic inequalities such women will have faced. In the era of COVID-19, with its devastating economic, social and health effects upon women and vulnerable groups, menopausal women in casual work are likely to face increased marginalisation and stress. Further research is needed.
Introduction
Recent UK studies [1][2][3][4][5] and reviews of the global literature [6][7][8] have tended to regard employment and work as synonymous with one another. Across the world, however, many menopausal women 1 are not formally employed but nevertheless undertake 'informal', 'sessional', 'precarious' or 'casual' work in the so-called 'grey' economy, beyond the scope of taxation and employment protections [10]. For the most intersectionally marginalised menopausal women, work is not necessarily employment, and work that is not employment is often the most problematic form of work.
Whereas some casual workers operate relatively autonomously and can organise their workloads independently [11], others are closely managed or exploited by managers acting beyond the scope of employment legislation, and many are left unsure as to when they have work and how much and when they might be paid [12]. Whereas casual work may benefit the wellbeing of young people [13], it is generally regarded as detrimental to the health of adult workers with social and financial responsibilities, and as particularly detrimental to the health of adult female workers [14]. Therefore, this literature review focuses upon the health impact of casual work upon menopausal women, and specifically upon the nature and determinants of menopausal symptoms experienced by women in casual work.
Background
Historically, women from socioeconomically marginalised groups have often worked 'cash in hand' from home in the so-called 'grey' economies, doing for example piecework sewing, cake-decorating and network marketing [15]. Beyond the home, women have long worked in 'informal', 'sessional' or 'casual' roles, for example as babysitters, agricultural pickers, and home care workers. In recent years, however, the growth of zero-hours contracts and the so-called 'gig economy' operating beyond the conventional structures and safeguards of employment procedures, legislation and policy [10] has meant that casual work exists throughout the traditionally 'working-class' and 'female-dominated' cleaning, retail, catering and care industries [14].
Women have always had a complicated relationship with the concept of work [8,16], particularly when facing competing obligations and responsibilities from their personal lives. For many women, the menopause is a time when the pressure of these competing duties intensifies. A lingering social archetype of the menopausal woman as a calm, wise and dependable carer [9,17] often combines with the intersectionally gendered ageism that undermines the credibility and self-confidence of menopausal women at work [4,5,8]. Menopausal women are often expected to set aside their own career aspirations and financial wellbeing to care for adult children, grandchildren, elderly parents or grandparents, or other sick or disabled relatives and friends [7,8], even though they may still have their own children at home and may be dealing with their own health challenges [1,8]. For some menopausal women, casual work or the grey economies may provide their only means of earning money whilst working in or near the home, or of working hours which accommodate their caring or health needs. Often, however, the 'flexibility' of such roles transpires to be largely to the benefit of the company [18]; given that casual work falls beyond the reach of employment legislation and protections, it can often be insecure, underpaid, hazardous or exploitative. Menopausal women in casual jobs may therefore be working this way by necessity as well as choice, unable to secure formal employment due either to a lack of local opportunities, or to gendered ageism and other forms of disadvantage, or to their own lack of qualifications and personal work history [10].
Since early 2020, the economic impact of the COVID-19 pandemic has been particularly intense for women in casual or precarious work in the retail, travel, catering, and hospitality sectors, which have been subject to substantial job losses worldwide [19]. Together with rising levels of social inequality and gender-based violence, COVID-19 has proved hugely detrimental to the rights, wellbeing, and safety of women, undermining many years of progress towards gender equality [20]. It is likely that the most vulnerable menopausal women, who were already experiencing high levels of disadvantage and marginalisation at work, will have been particularly adversely affected.
Methods
We searched the eight academic databases found by previous reviews [6][7][8] to yield the most relevant results, adding Google Scholar as another widely-used resource. To establish findings of contemporary relevance, we restricted all searches to 1995 or later.
Our search protocol (see Figure 1) included the keyword terms used by previous reviews [6][7][8]. Like Jack et al. [6], we discovered that 'work' was a slightly ambiguous keyword, and therefore included their term 'employ*' as a synonym, even though our study sought research into casual and grey economy work rather than employment. Initial searches used the keywords 'the change' and 'menopause transition' as popular UK synonyms for the menopause [5,8,9]. However, we found the multiple meanings of 'change' and 'transition' yielded many irrelevant results. As our search protocol developed (see Figure 1), we found casual or exploitative working practices amongst menopausal women to be particularly associated with migration and poverty, so included these and their synonyms as keyword terms.
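The keyword logic described above (OR within a group of synonyms, AND between groups) can be sketched as a simple query builder; the groups below are an assumption reconstructed from the terms named in the text, not the full protocol of Figure 1:

```python
def build_query(groups):
    """Combine synonym groups into a boolean search string:
    OR within each group, AND between groups."""
    return " AND ".join("(" + " OR ".join(g) + ")" for g in groups)

# Assumed keyword groups based on the terms mentioned in the text.
groups = [
    ["menopaus*"],                                          # menopause terms
    ["work*", "employ*"],                                   # work/employment synonyms
    ["casual", "informal", "precarious", "poverty", "migra*"],  # added terms
]
print(build_query(groups))
```

The same string can then be adapted to each database's own field syntax.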
These searches produced 108 shortlisted primary research studies related to the menopause and work. We surveyed the reference lists of each in search of further potentially relevant literature, adding 3 studies as a result. We read and appraised each of the 111 shortlisted studies.
We included all qualitative studies which provided sufficient demographic information to suggest all or most menopausal women participants were in casual work (Group I, n=3 [21][22][23]).
Yoeli et al.

Amongst larger-scale population-based cohort studies of menopause and work, we included those which disaggregated their data to distinguish their findings on casual work from other forms of employment (Group II, n=5 [24][25][26][27][28]).
The smaller-scale workplace-based and community-based quantitative studies tended to offer little contextual detail about either their participants or the workplace structures or conventions. Therefore, we found it more difficult to ascertain whether we should regard these participants as in casual work or employed. Acknowledging that the boundary between casual work and employment is ultimately a socioeconomic construct, we included all surveys which disaggregated the experiences of menopausal women in insecure, manual, low-paid, unskilled, hazardous or exploitative work from those in other roles, even when we could not be certain that participants had not been employed in their roles (Group III, n=5 [29][30][31][32][33]).
Included studies are classified and detailed in Figure 2. These were analysed using the MOOSE protocol [34]. In presenting the findings we draw out how women reported and assessed symptoms of the menopause, and how they addressed them to elicit any distinctions between the experiences of women in formal and casual work.
Musculoskeletal symptoms
For women in casual work, musculoskeletal symptoms of joint and muscle stiffness, aches, and pains, particularly in the legs, back, shoulders and neck, were the commonest and worst symptom of the menopause [22,23]. Within cohorts of employed women by contrast, hot flushes were slightly [33] or significantly [31] more prevalent.
Menopausal women in casual work with a strong manual or menial component reported these musculoskeletal symptoms as having a markedly detrimental effect on their work performance [22], in many cases causing them to leave their jobs in order to seek less physically-demanding work elsewhere [23]. These women found musculoskeletal pain particularly difficult to manage, feeling as though nothing they could do would relieve their symptoms [21,23].
Psychological symptoms
Within the qualitative literature, lifelong poverty seemed to provide precariously employed women with experiences of and resilience to emotional stress and mental ill-health which predated their menopause [21,22]. These women described feeling more able to manage the psychological aspects of the menopause than their musculoskeletal symptoms [21,23]. Though nevertheless affected by anxiety [21] or feeling tense and being touchy or irritable [23], women asserted that they were able to manage emotions by working harder [23], especially when their income, however precarious, relieved some financial worries and provided some economic independence [22].
This finding concurs with the workplace-based surveys comparing 'working' and 'housewife' status; irrespective of the nature of their job, women in work found the psychological symptoms of the menopause less problematic than housewives did [29,31,33]. Beyond empowerment, work can provide women with opportunities to overcome taboos around menstruation, emotions and ageing to talk and learn about the menopause and to seek medical help for symptoms [31][32][33]. Nevertheless, menopausal women in lower-paid and more manual jobs experience significantly more psychological symptoms than women in higher status 'white collar' jobs [31,35].
Both qualitative and quantitative studies highlight the importance to emotional wellbeing of menopausal women's self-image in response to the social gaze, stereotypes and expectations associated with ageing [22,23,32]. For many, anxiety was not only a symptom of the menopause but a response to the mounting social pressures placed upon them [21,23].
Determinants of menopausal symptoms
Within the qualitative studies, menopausal women in casual roles described work as only one part of their daily lives, and as less significant in determining their wellbeing than their social, family, or personal circumstances [21][22][23]. As such, the women attributed their menopausal symptoms not to their work but instead to the life circumstances which had led them into casual work or the grey economy, described in terms of gendered disadvantage [21][22][23], poverty [21,22] and intersectional marginalisation [21,23].
Within the surveys investigating the socioeconomic determinants of menopausal experience, current working circumstances or conditions consistently showed little if any impact upon age at menopause [24][25][26][27], nor quality of life at menopause [28]. Instead, educational history and early childhood adversity were established as the main determinants of menopause experience by European, North American and East Asian studies [24,25,27,28,31]. In studies undertaken in settings as diverse as the UK [25], France [24], Canada [21], and Turkey [33], working conditions appeared significant only to the extent that they reflect or are determined by a woman's education or earlier life experience.
Casual work
This review has found that casual work, even in less-than-ideal conditions, is not unambiguously detrimental to the wellbeing of menopausal women. As wider studies concur, any work may be preferable to unemployment [36]. Nevertheless, women in casual work appear more frequently and more severely affected by the musculoskeletal symptoms of the menopause than women more securely employed in jobs with comparable physical demands. This review has found that while menopausal women in casual work may experience similar levels of anxiety and other psychological symptoms to women employed in similarly low-paid and low-status jobs, they seem largely to cope with their psychological symptoms more confidently and more effectively than with their musculoskeletal symptoms.
Symptoms
Few studies have focused upon musculoskeletal symptoms of the menopause at work [37,38]. Whereas all three previous reviews have discussed the prevalence of psychological symptoms, only one [7] makes mention of musculoskeletal symptoms, even though it found muscle and joint pains to be only incrementally less of a problem than hot flushes [1]. This underrepresentation of musculoskeletal symptoms illustrates the widely-acknowledged lack of research into the physical challenges facing menopausal women in manual work [3,8].
Care-home employees in physically-demanding yet secure employment reported that musculoskeletal problems impaired their working abilities less than the psychological symptoms of the menopause [35]. When compared to employees in more formalised or stable manual work, women in casual work may suffer from musculoskeletal symptoms in particularly severe and disabling ways. However, none of the studies reviewed sought to link specific participant symptoms to particular aspects of their workplaces or working tasks, and all of the studies were published before the 2020 onset of the COVID-19 pandemic led to a rapid increase in home-based working. It is therefore important to emphasise that, by undertaking a narrative review rather than a systematic review or realist synthesis, we make no attempt to posit any causal mechanisms claiming to explain how casual work might cause musculoskeletal symptoms. More clinically-focused empirical research would be needed to establish clearer understandings of menopause-specific work-related musculoskeletal difficulties.
Previous reviews have emphasised how employed menopausal women frequently struggle to cope with psychological symptoms at work [6][7][8], and menopausal women employed in professional or clerical positions within large organisations list their problems of concentration, memory and confidence as their foremost workplace challenges [1]. This review, by contrast, has found that menopausal women in casual work are apparently more psychologically resilient, implying that the seemingly most marginalised menopausal women might cope better than more advantaged and employed women. This challenges popular and arguably paternalistic assumptions around menopausal women as in need of care or help from policy and legislation [9,17]. Concepts and models of psychological resilience to menopausal difficulties are emerging as explanations for the epidemiology of symptoms [30]. A more asset-based approach to menopause and work research might inform which women cope best with which symptoms and why.
Within menopause and work research and policy, findings around vasomotor symptoms dominate many studies [3,39]. Certainly, menopausal women find explaining and managing hot flushes at work a uniquely awkward and embarrassing task, even when other symptoms can be more disruptive [2,4]. However, feminist perspectives critique this disproportionate consideration given to vasomotor symptoms as a manifestation of the intersectional stigmatisation of the older female body, which society has long sought to normalise or to control [16]. Irrespective, then, of how women manage their menopausal symptoms, hot flushes are those that male managers and colleagues find the most difficult to cope with [5]. Musculoskeletal symptoms, by contrast, can be experienced by men as well as women and may therefore be less embarrassing for managers and colleagues.
Determinants
Workplace surveys undertaken amongst menopausal women in 'white-collar' or professional roles found high levels of work stress and low levels of job control significantly to exacerbate menopausal symptoms [3,40]. From this, as well as from the more general employment wellbeing literature [12], it might have been anticipated that this review would find menopausal women in casual work to experience symptoms directly related to this stress and lack of control. Instead, this review found that women in casual work attributed their menopausal symptoms and their difficulties in managing them not to their inadequate or unfair working conditions, but to their broader life circumstances [21][22][23]. Amongst the multiple everyday challenges these women navigated, work was not necessarily a major part of life, and therefore not necessarily a major stress factor [21,22,30].
Across both population-wide and workplace-based surveys of the menopause and work, early childhood adversity is shown as the greatest predictor of menopausal symptoms [26,28,41] whereas education [31,42] is the strongest preventative factor. Similarly, adverse childhood experiences have been shown, independent of education, to determine patterns of employment and work throughout adulthood [43]. Therefore, the association between casual work and a difficult menopause appears to be mediated by the common factors of poor education and early childhood adversity. Menopausal women in casual work may therefore experience the symptoms they do for the same reasons that they are working in casual roles or the grey economy rather than secure employment: because they were raised in poverty and disadvantage, and because they have had few educational opportunities. Therefore, while the precarity, low pay, exploitation, and lack of workplace protections prevalent within the casual sector and grey economy are undoubtedly detrimental to the overall wellbeing of workers [14,18], casual work cannot be claimed directly to cause or singularly to worsen the symptoms of menopausal women. Health promotion initiatives seeking to improve the wellbeing of menopausal women in casual work [21] are dealing not only with the current working conditions and lifestyles of participants, but with the cumulative legacy of life-long and intersectional adversity and disadvantage.
Impacts of the COVID-19 Pandemic
Research has begun to explore the mental health and broader wellbeing consequences of the stress of care work during the COVID-19 pandemic [41]. However, none has yet considered the specific challenges facing menopausal women [22]. Women working in cleaning have carried immense responsibility for the wellbeing of others, as have women in care, who have often been confronted with dying and death to an unprecedented degree [40]. Women whose older age, ethnicity, and/or health renders them more vulnerable to COVID-19 have additional concerns, as have those who combine work with their caring responsibilities for clinically-vulnerable family members or friends. Emerging research increasingly suggests that decreasing levels of oestrogen at menopause cause menopausal women to be at particular risk from the COVID-19 virus [44,45]. Menopausal women caring for elderly or disabled family or friends have been particularly limited in the work they can undertake because most want to avoid not only contracting the virus themselves, but also transmitting it to those for whom they are caring [46].
Conclusion
Through this review, we have proposed some ways in which the menopausal experiences of women in casual work may be distinct from those of women more securely employed in similarly low-paid, low-status or manual jobs. Given the relative dearth of research focusing specifically upon the menopausal experiences of women in casual work [21][22][23], we acknowledge that our assertions are based upon limited evidence. Within the evidence we have reviewed, the terminologies, definitions, and understandings of what "menopause" or "menopausal women" mean are so heterogeneous and imprecise as to preclude any direct or quantifiable comparisons between specific datasets. We selected the methodology of narrative review, then, in order to foreground the broadest qualitative themes, as opposed to establishing the causal links of a realist synthesis, and as opposed to providing the replicable evidence of a systematic review. By foregrounding and describing the predominant themes within the literature it reviews, a narrative review seeks simply to inform the need for future research, and to stimulate debate.
One of our main findings was, however, that work was often not the main challenge which women face [21][22][23]. In light of this, we wish to caution against the appropriation of this review for any economic or political agenda. Certainly, we have found that casual work may not directly cause ill-health and may indeed provide psychological benefits to some menopausal women. However, we have also found that casual work itself is both a cause and a symptom of poverty, social exclusion and intersectionally gendered social injustice.
Instead, we hope that this review might assist in highlighting the limits of the workplace menopause policies upon which contemporary UK research is largely focused on informing [1][2][3][4]. As such, menopausal women in casual jobs will likely not benefit from the recommendations, innovations or protections of the 'menopause at work' policies introduced by organisations or trade unions [2,47]. Given, however, that this review has identified particular levels of psychological resilience amongst casual workers, it should not be assumed that casual workers have the same needs as the employees upon whose experiences existing policy research is based [1][2][3][4]. Further research specifically investigating the menopausal experiences of casual workers in cleaning and care settings during the COVID-19 pandemic is especially needed.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
Funding
The work for this article was supported by the Wellcome Trust (grant number 209513/Z/17/Z) and is subject to their Open Access Policy. In order to comply with this, and in the light of the Plan S Rights Retention Strategy, we maintain the right to apply a CC BY license to self-archive the author accepted version (AAV) on European PubMed Central.
"year": 2021,
"sha1": "63b61273aef7a46195843e5997be397312cd74f7",
"oa_license": "CCBY",
"oa_url": "http://www.maturitas.org/article/S0378512221000773/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "29f2c33ee39e017683866fe617da0134aaada05e",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Goat milk has been consumed by humans since ancient times and is highly nutritious. Its quality is mainly determined by its casein content. Milk protein synthesis is controlled by a complex network with many signal pathways. Therefore, the aim of our study is to clearly depict the signal pathways involved in milk protein synthesis in goat mammary epithelial cells (GMECs) using state-of-the-art microproteomic techniques and to identify the key genes involved in the signal pathway. The microproteomic analysis identified more than 2253 proteins, with 323 pathways annotated from the identified proteins. Knockdown of IRS1 expression significantly influenced goat casein composition (α, β, and κ); therefore, this study also examined the insulin receptor substrate 1 (IRS1) gene more closely. A total of 12 differential expression proteins (DEPs) were characterized as upregulated or downregulated in the IRS1-silenced sample compared to the negative control. The enrichment and signal pathways of these DEPs in GMECs were identified using GO annotation and KEGG, as well as KOG analysis. Our findings expand our understanding of the functional genes involved in milk protein synthesis in goats, paving the way for new approaches for modifying casein content for the dairy goat industry and milk product development.
Introduction
Goat milk has greater digestibility and alkalinity, as well as a higher buffering capacity, than cow's milk. Therefore, it is highly praised for its unique nutritional and functional properties. It also has better emulsifying and foaming properties and is favored by manufacturers in developing new food products. Goat milk proteins also contain higher levels of certain amino acids, such as tryptophan and cysteine, compared to cow milk proteins and are believed to possess immunomodulatory, allergy management, anti-inflammatory, and antioxidant effects, as well as antimicrobial and anticancer properties [1,2]. Furthermore, people who are allergic to cow milk may feel comfortable with goat milk because of its lower lactose content and the different forms of proteins found therein [3][4][5][6].
Initial information on milk secretion was obtained from goats' milk, and this has provided an insight into the processes occurring in mammary glands and cows' udders. Milk protein is secreted by mammary epithelial cells (MECs), and milk quality is strongly influenced by casein production [7]. Milk protein, consisting of approximately 80% casein and 20% whey, plays an important role in the production of cheese and other dairy products. Promoting milk production is a priority for food science in general, and the dairy goat sector is particularly in need of a way to increase casein content to ensure its development.
Due to the high kinase activity of insulin receptors, the mammary gland remains highly sensitive to insulin throughout pregnancy and the lactation period [8]. Milk protein synthesis requires the activity of insulin, amino acids, and amino acid transporters, as well as the mammalian target of rapamycin (mTOR) pathway [9][10][11].
To better understand the pathways of milk protein synthesis, proteomic techniques have been used to investigate the functional proteins in animal tissues [12][13][14]. Although standard (macro)proteomic application is suitable for large samples with protein losses measured in micrograms or milligrams, it is not sensitive enough for small numbers of cell samples. Moreover, sample preparation consists of several steps that can lead to protein loss, thus reducing the levels of low-abundance proteins and preventing their accurate identification. Fortunately, microproteomic (µP) approaches have been developed for the analysis of samples with attomolar protein concentrations, where even proteins present in sub-microgram levels can be analyzed while retrieving useful proteome data [15,16].
To date, no µP pipeline analyses have been performed on milk protein synthesis pathways in goat mammary epithelial cells (GMECs). Therefore, the present study examines the pathways of milk protein synthesis in GMECs using µP pipelines with the aid of state-of-the-art mass spectrometers and Orbitrap instruments. The results will shed greater light on the key genes taking part in milk protein synthesis networks and provide a novel insight into milk protein synthesis mechanisms in GMECs.
Microproteomic Analysis
A high-resolution mass spectrometer (MS) was used to analyze the microsample data. The microsample MS data were processed with MaxQuant's integrated Andromeda engine and the "match between runs" mode [15]. The analysis was based on peptide peak intensity, peak area, and LC retention time related to MS1, as well as other information. The data were subjected to statistical analysis and quality control before the GO, KOG, pathway, and other functional annotation analyses.
Microsample Protein Extraction and Enzymolysis
Protein extraction and enzymolysis were performed by BGI Genomics Co., Ltd. (Shenzhen, China). The cell sample was mixed with 10 µL of 50 mM ammonium bicarbonate, subjected to ultrasonic lysis for 10 min, and incubated with DL-dithiothreitol (DTT) at a final concentration of 10 mM in a water bath at 37 °C for 30 min. Following this, iodoacetamide (IAM) solution was added to a final concentration of 55 mM and left to react for 45 min in the dark. Finally, 1 µg of trypsin was added for enzymatic hydrolysis at 37 °C for two hours [15].
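As a small worked example of the concentration steps above (the 1 M stock concentration is an assumption for illustration; the protocol itself states only the final concentrations), the stock volume needed to reach a target final concentration, accounting for the volume the stock itself adds, follows from C_stock · V_add = C_final · (V_sample + V_add):

```python
def stock_volume_ul(sample_ul, stock_mM, final_mM):
    """Volume of stock (µL) to add so that the mixture reaches final_mM,
    accounting for the volume the added stock contributes."""
    return final_mM * sample_ul / (stock_mM - final_mM)

# Example: bring a 10 µL sample to 10 mM DTT from an assumed 1 M (1000 mM) stock.
v = stock_volume_ul(10.0, 1000.0, 10.0)
print(f"{v:.3f} µL")  # → 0.101 µL
```

For concentrated stocks the added volume is negligible, which is why the simpler C1V1 = C2V2 shortcut is often used at the bench.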
Microsample MS Analysis
Protein separation was performed using a Thermo UltiMate 3000 UHPLC through a trap column and a self-packed C18 column at a flow rate of 500 nL/min. Peptide separation in DDA (data-dependent acquisition) mode was performed using a combined nanoESI source and an Orbitrap Fusion™ Lumos™ Tribrid™ mass spectrometer (Thermo Fisher Scientific, San Jose, CA, USA). The identification data were selected at PSM-level FDR ≤ 1%, and significant identifications were collected at protein-level FDR ≤ 1% [15].
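The FDR thresholds above are typically enforced with a target–decoy strategy; the following is a minimal illustrative sketch of that idea, not the actual MaxQuant/Andromeda implementation:

```python
def fdr_filter(hits, threshold=0.01):
    """hits: list of (score, is_decoy) with higher score = better match.
    Walk down the score-sorted list and keep the largest prefix whose
    estimated FDR (#decoys / #targets) stays at or below the threshold;
    return the passing target hits only."""
    hits = sorted(hits, key=lambda h: h[0], reverse=True)
    n_decoy = n_target = 0
    prefix, best = [], []
    for score, is_decoy in hits:
        n_decoy += is_decoy
        n_target += not is_decoy
        prefix.append((score, is_decoy))
        if n_target and n_decoy / n_target <= threshold:
            best = list(prefix)
    return [h for h in best if not h[1]]

# Toy data: five target hits and one decoy; a loose 25% threshold for the demo.
targets = [(s, False) for s in (90, 80, 70, 60, 50)]
decoys = [(55, True)]
print(len(fdr_filter(targets + decoys, threshold=0.25)))  # → 5
```

At the 1% threshold used in the paper, far fewer decoys per accepted target are tolerated.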
Differential Quantification Analysis
The proteins identified in each sample were quantified using MaxQuant to determine their levels in each sample [18]. Welch's t-test was applied to each preset comparison group, and fold changes were calculated. Significant differences were indicated by a fold change > 1.5 and a p value < 0.05.
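The fold-change/p-value rule described above can be sketched as follows. The function name, the example intensities, and the use of SciPy's Welch t-test are illustrative assumptions, not the authors' MaxQuant pipeline:

```python
# Sketch of the differential-quantification rule (fold change > 1.5,
# Welch's t-test p < 0.05); inputs are hypothetical protein intensities.
import numpy as np
from scipy import stats

def classify_dep(nc_intensities, si_intensities, fc_cut=1.5, p_cut=0.05):
    """Label a protein as 'up', 'down', or 'ns' (not significant)."""
    nc = np.asarray(nc_intensities, dtype=float)
    si = np.asarray(si_intensities, dtype=float)
    fc = si.mean() / nc.mean()                       # fold change siRNA vs. NC
    _, p = stats.ttest_ind(si, nc, equal_var=False)  # Welch's t-test
    if p < p_cut and fc > fc_cut:
        return "up"
    if p < p_cut and fc < 1.0 / fc_cut:
        return "down"
    return "ns"

print(classify_dep([10.0, 11.0, 10.5], [20.0, 21.0, 19.5]))
```

Proteins passing both thresholds are the DEPs plotted in the volcano plots (fold change on the x-axis, −log10 p on the y-axis).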
Bioinformatics Analysis
In all samples, proteins were identified using Gene Ontology (GO) functional annotation analysis [19]. The GO analysis was based on three ontologies (cellular component, biological process, and molecular function); the IDs and numbers of all the corresponding proteins were listed. The identified proteins were classified into functional divisions using eukaryotic orthologous group (KOG) annotation according to the KOG database. The Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database was used to help further understand their biological functions.
RNAi
The siRNA used in this study was synthesized by Merck Life Science (Poznań, Poland) (Table 1). RNAi was performed with Lipofectamine™ RNAiMAX Transfection Reagent (13778075, Invitrogen, Thermo Fisher, Waltham, MA, USA) and Opti-MEM® I Reduced Serum Medium (31985070, Gibco, Thermo Fisher, Waltham, MA, USA) [20]. MISSION® siRNA Universal Negative Control #1 (SIC001, Sigma-Aldrich, Darmstadt, Germany) was used as a negative control at a concentration of 20 µM. The cells were combined with the transfection mixture at a concentration of 0.15 × 10⁶/mL and then incubated for 48 h at 37 °C and 5% CO2. RNA was isolated to determine silencing efficiency. Total RNA was isolated and purified with a NucleoSpin RNA Mini kit for RNA purification (Macherey-Nagel GmbH & Co. KG, Düren, Germany). RNA quantity and purity were determined using a Nanodrop 1000 (Thermo Scientific, Waltham, MA, USA) and a Bioanalyzer (Agilent 2100, Santa Clara, CA, USA). RNA was reverse transcribed with a Transcriptor First Strand cDNA Synthesis Kit (Roche, LifeScience Solutions, Basel, Switzerland) with random hexamer primers, according to the manufacturer's instructions [20]. The final concentration of total RNA was approximately 595 ng/µL in all samples for cDNA synthesis. The relative expression of genes was determined by RT-qPCR. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was adopted as a reference. The primers were copied from previous studies on goat cells (Table 2).
Table 2 lists the primer sequences used (gene, sequence, reference).

Expression analysis was performed using a LightCycler 480 SYBR Green I Master (Roche, Basel, Switzerland) with at least three technical replicates for each sample [20]. The amplification reactions contained 2× Master Mix, 10× each PCR primer (0.4 µM), and water, to a total volume of 20 µL. The following sequence was performed: pre-incubation at 95 °C for 10 min, followed by amplification for 45 cycles at 95 °C for 10 s, 60 °C for 10 s, and 72 °C for 10 s. The melting curve was recorded at 95 °C for 5 s, then from 65 °C to 97 °C over 1 min. The 2^−ΔΔCt method was adopted to calculate relative gene expression.
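The 2^−ΔΔCt calculation reduces to a few lines; the Ct values below are illustrative, not taken from the study:

```python
# Minimal sketch of the 2^-ΔΔCt method with GAPDH as the reference gene.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of the target gene relative to the calibrator sample
    (e.g. NC), normalised to the reference gene (GAPDH)."""
    d_ct_sample = ct_target - ct_ref              # ΔCt of the treated sample
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # ΔCt of the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator         # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: IRS1 amplifies 2 cycles later after knockdown,
# so relative expression drops to 0.25 (75% silencing).
print(relative_expression(26.0, 18.0, 24.0, 18.0))  # → 0.25
```

A ΔΔCt of +2 halves expression twice, which is why the exponent is negated.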
Milk Protein Secretion Determination
The protein content was determined by the goat casein alpha (CSN1) ELISA kit, goat beta-casein (Csn2) ELISA kit, and goat kappa-casein (κ-CN) ELISA kit. The absorbance was measured at OD450 nm with a microplate reader (TECAN F039300, Männedorf, Switzerland). All calculations were performed using CurveExpert Professional 2.6.5 software. All reagents were obtained from Wuhan Xinqidi Biological Technology Co., Ltd. (Wuhan, China).
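ELISA OD450 readings are converted to concentrations via a fitted standard curve; a four-parameter logistic (4PL) is a common model that curve-fitting software such as CurveExpert can fit. This sketch uses SciPy with synthetic, noise-free standards; the parameter values and concentration units are assumptions, not the kits' actual calibration:

```python
# Illustrative 4PL standard-curve fit and inversion for an ELISA readout.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a = response at zero dose, d = response at infinite dose,
    # c = inflection point (EC50), b = slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([1.0, 2.5, 5.0, 10.0, 25.0, 50.0])  # ng/mL standards (assumed)
od = four_pl(conc, 0.05, 1.2, 12.0, 2.0)            # synthetic OD450 values
params, _ = curve_fit(four_pl, conc, od, p0=[0.1, 1.0, 10.0, 2.0], maxfev=10000)

def od_to_conc(y, a, b, c, d):
    """Invert the 4PL to read a sample concentration from its OD450."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(od_to_conc(four_pl(8.0, *params), *params))  # recovers ≈ 8.0 ng/mL
```

Readings outside the standards' dynamic range should be diluted and re-measured rather than extrapolated.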
Statistical Analysis
All the experiments were repeated three times with three replicates. All results were analyzed using Duncan's multiple-range tests (p < 0.05) in SAS 9.0 software (Cary, NC, USA).
Pathway Annotation Analysis of GMEC Proteins
Protein function is associated with its biological behavior, which is related to many complex signal transduction pathways. The most important biochemical metabolic pathways and signal transduction pathways formed by a protein can be determined by pathway analysis. Our present findings indicate the presence of more than 2253 proteins and about 337 pathway annotations among the quantified key proteins in GMECs (Table S1). About 42 of the identified proteins have been recorded in the insulin signaling pathway, and 44 in the mTOR signaling pathway. Milk protein synthesis is known to involve many complex factors [22]. The insulin-mTOR signaling pathway merited particular attention because insulin has been reported to directly stimulate mTOR protein activity through phosphorylation [23].
Relative Quantitation of IRS1 Expression during Silencing
The proteins belonging to the Insulin Receptor Substrate (IRS) family, IRS1, IRS2, IRS3, and IRS4, play a vital role in insulin signal transduction [24,25]. All four are phosphorylated on multiple tyrosine residues following insulin receptor kinase activation [26]. Previous studies found IRS1 to remarkably affect insulin-like growth factor and stimulate growth [27]. IRS1-deficient mice have mild glucose intolerance and insulin resistance [28]. IRS1 has also been found to be downregulated and to play a key role in cell proliferation and survival in breast cancer [29,30]. The present study used four pairs of synthetic siRNAs to silence the IRS1 gene and then measured the relative expression of mRNA in all samples using RT-qPCR. mRNA expression was significantly reduced in all four siRNA samples compared to the negative control (NC), indicating successful blockage by the four synthetic siRNAs (Figure 1). The samples treated with the four siRNAs demonstrated similar mRNA expression, with no statistically significant difference between them.
Figure 1.
RT-qPCR analysis of IRS1 expression in silenced GMECs. Relative gene expression was determined after transfection with negative control (NC), Lipo (Lipofectamine™ RNAiMAX), siRNA1, siRNA2, siRNA3, and siRNA4 in GMECs for 48 h. The results are shown as mean ± SD, and statistical significance was calculated by Duncan's multiple-range tests. The asterisks indicate statistically significant differences, p < 0.05.
Casein Production Detection of GMECs
Goat milk protein consists of approximately 80% casein and 20% whey. The two have unique properties that can support the conversion of milk into yogurt and cheese. In turn, goat milk casein consists of four principal proteins: αs1-casein (αs1-CN), αs2-casein (αs2-CN), β-casein (β-CN), and κ-casein (κ-CN) [1]. Of these, β-casein is the most abundant in goat milk, and the allergen αs1-casein is the most abundant in cow milk. As shown in Figure 2, the content of κ-, β-, and α-casein differed significantly between IRS1-silenced cells and controls: κ- and β-casein contents were higher, while α-casein content was lower. As the samples treated with the four siRNAs demonstrated similar casein contents, siRNA1 was selected for further study.
Figure 2.
The casein content was determined after transfection with negative control (NC), Lipo (Lipofectamine™ RNAiMAX), and siRNA1 in GMECs for 48 h. The column represents the mean ± SD; statistically significant differences were calculated by Duncan's multiple-range tests; the asterisks indicate statistically significant differences, p < 0.05.
Identification of Differential Expression Proteins (DEPs) by Microproteomic Analysis
The DEPs in the test samples are depicted in volcano plots in Figure 3. Nine DEPs in the siRNA1 sample were found to be upregulated, and three were downregulated, compared to the NC samples in GMECs (Table S2). The upregulated DEPs were identified as Keratin, MAP7 domain, Syntaxin, KIAA1217 ortholog, Phosphodiesterase, Heme binding protein, Rhophilin Rho GTPase binding protein, and Myosin XVIIIA. The downregulated proteins were Protein arginine N-methyltransferase, Glutaredoxin, and Protein MAK16.
KOG Analysis of the DEPs in GMECs
KOG analysis was used to classify the DEPs in the NC vs. siRNA1 samples into three divisions: cellular process and signaling, information storage and processing, and poorly characterized. As can be seen in Figure 4, most DEPs belong to the cellular process and signaling division: post-translational modification, protein turnover, and chaperones. Others were classified as information storage and processing, with the most common function being transcription. Finally, in the poorly characterized division, only general function prediction was noted.
GO Analysis of DEPs
The DEPs in GMECs in NC vs. siRNA1 were classified into the cellular component, biological process, and molecular function groups according to GO annotation (Table S3). The GO function up and down chart of the DEPs is given in Figure 5. In the biological process division, the most upregulated proteins belong to biological regulation, cellular process, and regulation of biological processes, while the major downregulated proteins belong to cellular process and metabolic process. In the cellular component division, both the most upregulated and downregulated proteins belonged to the cell, cell part, and organelle groups. Finally, in molecular function, the most upregulated components belonged to binding, while the most downregulated proteins belonged to binding and catalytic activity. A GO term relationship network was established to describe DEP enrichment (Figure 6). In the diagram, a node indicates a GO term. Green indicates cellular components, red biological processes, and blue molecular functions. Biological processes included two positive regulations (cellular process and response to stimulus) and nine negative regulations (including RNA metabolic process, cellular metabolic process, macromolecule metabolic process, and nucleobase-containing compound metabolic process). No GO term regulation was observed in the cellular component or molecular function divisions.
KEGG Pathway Analysis of DEPs
The KEGG pathway analyses classified the DEPs into the cellular process, genetic information processing, and metabolism divisions (Figure 7). The main pathways involved were folding, sorting and degradation; translation; global and overview maps; and metabolism of cofactors and vitamins.
Subcellular Localization Prediction of DEPs
Subcellular localization prediction refers to the computational task of determining the specific location of a protein within a cell. Proteins perform their functions within specific compartments or organelles within the cell. Understanding the subcellular localization of molecules and organelles is essential for studying cellular processes, signaling pathways, and the mechanisms underlying health and disease. Subcellular localization prediction can be crucial for understanding the functions of DEPs within cells, as it can provide insights into their functions and roles within cellular processes. The DEPs were classified into six divisions: plasma membrane (plas), cytosol (cyto), mitochondrion (mito), nucleus (nucl), cytosol and nucleus (cyto_nucl), and endoplasmic reticulum (E.R.) (Table S4). The main subcellular locations of DEPs were cyto, nucl, and mito (Figure 8).
Discussion
Milk protein is secreted by mammary epithelial cells (MECs), and casein content is a key determinant of milk quality [31]. Recent years have seen a number of studies aimed at increasing milk protein secretion in MECs based on molecular mechanisms and signal pathways [32,33]. However, milk protein synthesis is a complex process. AMP-activated protein kinase (AMPK) and the tumor suppressor LKB1 are located upstream of mTOR [34]. AMPK activates ATP-generating pathways and inhibits ATP consumption. The inhibition of mTORC1 mediated by LKB1 relies on AMPK and TSC2. Milk protein synthesis also involves the insulin-mTOR pathway. All these signaling pathways have been confirmed by our µP approaches.
As shown in Table S1, about 66 of the proteins identified in GMECs belong to the PI3K-Akt signaling pathway, which has been shown to play an important role in camel milk protein networks [35]. Interference in the PI3K-Akt signal may lead to insulin resistance, resulting in the creation of a vicious circle [36]. Additionally, many of the identified proteins were found to be associated with more than 330 pathways, including MAPK signaling, insulin signaling, necroptosis, apoptosis, biosynthesis of amino acids, AMPK signaling, mTOR signaling, and TNF signaling; some of these are closely linked with milk protein synthesis (Table S1). Previous studies indicate that the insulin-mTOR pathway plays a role in regulating milk protein synthesis, with insulin directly stimulating the mTOR protein.
The IRS-family proteins are closely associated with the insulin signaling pathway [37,38]. Indeed, IRS1 can be found in the central part of a signaling pathway network diagram of camel milk proteins designed by Han (2022) [35]. Our study explored the role of IRS1 in goat milk synthesis and casein composition. Casein plays an important role in cheese making, as it dictates how well, and how rapidly, the milk clots and forms a curd. Any changes in the amount of α-CN or β-CN would alter the properties of the milk and the resulting cheese [39]. Our findings indicate that IRS1 silencing significantly influenced the content of κ-casein, β-casein, and α-casein in GMECs (Figure 2).
Previous studies found goat milk with altered αS1-CN contents to be allergenic in guinea pigs [40]. Goat milk lacking αS1-CN was less allergenic than other goat milks, probably due to its modified ratio of β-LG to αS-CN. In the present study, the IRS1-silenced sample demonstrated higher levels of β-casein and lower levels of α-casein. Unfortunately, as little is currently known about casein synthesis, it is hard to explain these changes. Nevertheless, these findings encourage further research in the area.
Syntaxin is involved in vesicle trafficking and membrane fusion events within cells, particularly during exocytosis [41], the cellular process in which substances are released from vesicles into the extracellular space. Although syntaxin itself is not directly implicated in the synthesis of milk proteins, proteins involved in vesicle trafficking, membrane fusion, and intracellular transport could indirectly impact the secretion of milk proteins. These processes are crucial for the proper packaging and release of proteins from cells, including the MECs responsible for milk production.
Phosphodiesterases play a crucial role in intracellular signaling by hydrolyzing cyclic nucleotides, particularly cyclic guanosine monophosphate (cGMP) and cyclic adenosine monophosphate (cAMP) [42]. These cyclic nucleotides are involved in signaling pathways that regulate various cellular processes. Although there may not be a direct link between phosphodiesterase and milk protein synthesis, alterations in cyclic nucleotide levels regulated by phosphodiesterases could potentially influence cellular processes and signaling pathways, indirectly impacting milk protein synthesis. Protein arginine N-methyltransferases belong to a family of enzymes involved in the methylation of arginine residues in proteins. They play various roles in cellular processes, including gene expression regulation, signal transduction, and RNA processing [43].
All these upregulated and downregulated proteins were associated with the modified casein composition in GMECs. Undoubtedly, changes in the levels of α-CN or β-CN would alter the properties of milk and the produced cheese, influencing their processing. Furthermore, increasing evidence indicates that β-casomorphin-7 derived from A1 β-casein contributes to milk intolerance syndrome. Our findings provide interesting information for the fields of milk processing and nutrition mechanisms.
The GO annotations found two GO terms to be positively regulated, associated with cellular process and response to stimulus, and nine to be negatively regulated, related to RNA metabolic process, cellular metabolic process, macromolecule metabolic process, and nucleobase-containing compound metabolic process (Figure 6).
Milk protein synthesis is a complex process that occurs in the mammary glands and is generally associated with hormonal signals, nutritional factors, and the specific needs of the developing offspring [32,44]. Therefore, to better understand the role of IRS1 in goat mammary glands and milk protein synthesis, further functional studies of the proteins influenced by IRS1 silencing in GMECs are needed.
Conclusions
Our findings confirm that the IRS1 gene influences the casein content of milk in goats and the milk protein synthesis pathways in GMECs. Modifying the expression of IRS1 could increase the amount of κ-casein and β-casein but decrease the content of α-casein. This study is the first to successfully use a microproteomic approach to analyze the proteins of GMECs with low sample-amount requirements. By identifying the proteins that were differentially expressed in response to IRS1 silencing, it was possible to gain a new insight into the goat milk protein synthesis network and related signaling pathways. Some DEPs were found to indirectly influence milk protein synthesis based on their GO annotation and their KEGG and KOG analyses. These findings may have positive implications for future studies on the milk synthesis system in goats.
Figure 3.
Figure 3. Volcano plot of screened DEPs. The x-axis indicates the protein difference multiple (fold change), while the y-axis is the −log10 (p value). A gray dot indicates a non-significantly altered protein (following silencing), a red dot indicates an upregulated protein, and a green dot indicates a downregulated protein.
Figure 4.
Figure 4. DEPs' KOG annotation. The x-axis represents protein count, and the y-axis represents KOG terms.
Figure 5.
Figure 5. DEP GO function classification up and down chart. The x-axis is GO annotation, and the y-axis is DEP number.
Figure 6.
Figure 6. GO term relationship network diagram. A node is a GO term. The colors indicate different functional categories. Red indicates biological processes, green cellular components, and blue molecular functions. Dark colors indicate significantly enriched GO terms, light colors indicate insignificant GO terms, and grays indicate no enriched GO terms. A solid arrow indicates an inclusion relationship between GO terms, while a dotted arrow indicates a regulation relationship. A red dotted line suggests positive regulation, and a green dotted line negative regulation.
Figure 7.
Figure 7. DEP pathway classification by KEGG enrichment. The x-axis represents protein number, and the y-axis represents KEGG pathway enrichment.
Figure 8.
Figure 8. Subcellular localization prediction. The x-axis represents subcellular structure, and the y-axis represents protein number.
Author Contributions: Conceptualization, L.C. and E.B.; methodology, L.C., E.B. and H.T.; writing-original draft preparation, L.C.; writing-review and editing, L.C. and E.B.; supervision, E.B. and H.T.; funding acquisition, L.C. and E.B. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Horizon 2020 Marie Skłodowska-Curie Action PASIFIC 847639 and the National Nature Science Foundation of China 32101908.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Cardiovascular risk factors burden in Saudi Arabia: The Africa Middle East Cardiovascular Epidemiological (ACE) study
Background Limited data exist on the epidemiology of cardiovascular risk factors in Saudi Arabia, particularly in relation to the differences between Saudi nationals and expatriates in Saudi Arabia. The aim of this analysis was to describe the current prevalence of cardiovascular risk factors among patients attending general practice clinics across Saudi Arabia. Methods In this cross-sectional epidemiological analysis of the Africa Middle East Cardiovascular Epidemiological (ACE) study, the prevalence of cardiovascular risk factors (hypertension, diabetes, dyslipidemia, obesity, smoking, abdominal obesity) was evaluated in adults attending primary care clinics in Saudi Arabia. Group comparisons were made between patients of Saudi ethnicity (SA nationals) and patients who were not of Saudi ethnicity (expatriates). Results A total of 550 participants were enrolled from different clinics across Saudi Arabia [aged (mean ± standard deviation) 43 ± 11 years; 71% male]. Nearly half of the study cohort (49.8%) had more than three cardiovascular risk factors. Dyslipidemia was the most prevalent risk factor (68.6%). The prevalence of hypertension (47.5%) and dyslipidemia (75.5%) was higher among expatriates than among SA nationals (hypertension: 31.4%, p = 0.0003; dyslipidemia: 55.1%, p < 0.0001). Conversely, obesity (52.6% vs. 41.0%; p = 0.008) and abdominal obesity (65.5% vs. 52.2%; p = 0.0028) were higher among SA nationals vs. expatriates. Conclusion Modifiable cardiovascular risk factors are highly prevalent in SA nationals and expatriates. Programmed community-based screening is needed for all cardiovascular risk factors in Saudi Arabia. Improving primary care services to focus on risk factor control may ultimately decrease the incidence of coronary artery disease and improve overall quality of life. The ACE trial is registered under NCT01243138.
Introduction
Coronary artery disease is the leading cause of morbidity and mortality worldwide, especially in developing countries [1][2][3]. Conventional risk factors for atherosclerotic heart disease have been highlighted as potential modifiable targets for lowering cardiovascular disease risk [4][5][6]. However, many studies have focused on patients in developed countries, such as the United States [7][8][9]. Therefore, systemic epidemiological data on the prevalence of cardiovascular risk factors in developing countries is somewhat lacking [10][11][12].
Saudi Arabia has undergone a major economic transition and experienced significant urbanization in recent years [3,13], and the proportion of individuals living in urban centers has doubled in the past decade [14][15][16]. This rapid urbanization has been associated with a rise in the burden of cardiovascular diseases; however, the national preventive health system and screening programs have trailed behind [17][18][19]. In particular, screening procedures targeted towards adults at risk of developing cardiovascular disease are very limited. In addition, data on the differences in the prevalence of cardiovascular risk factors between local citizens and expatriates are lacking. Thus, the aim of this analysis was to describe the current prevalence of cardiovascular risk factors among patients enrolled in the Africa Middle East Cardiovascular Epidemiological (ACE) study attending general practice clinics in Saudi Arabia. We also compared the prevalence of risk factors in the ethnic Saudi population (SA nationals) and expatriates.
ACE study design and objectives
The methods and primary results of the ACE study have been published previously [20]. In summary, the ACE study was a cross-sectional epidemiological study conducted in 98 clinics across 14 countries in the Africa and Middle East region between July 2011 and April 2012. In particular, the study was aimed at countries in the Africa and Middle East region where there was a paucity of systematic epidemiological data. Site selection was based upon the ability of a site to conduct clinical studies based on the availability of clinical research expertise, infrastructure, and ethical oversight. The primary objective of the ACE study was to estimate the prevalence of cardiovascular risk factors in outpatients attending general practice and other nonspecialist clinics in urban and rural communities [20]. All epidemiological data were saved and used under ethical approval. The ACE study was registered on clinicaltrials.gov (registration number NCT01243138).
Participant selection
Outpatients aged >18 years were enrolled; all patients provided written, informed consent. Pregnant women, lactating mothers, and outpatients with life-threatening conditions were excluded. In order to avoid selection bias, the study population was selected by a sampling technique based on enrolment of every fifth eligible outpatient seen by a physician or general practitioner on a particular day. The primary investigators of the study evaluated outpatients through history taking, physical examination, and laboratory investigations. Evaluations were typically undertaken over one clinic visit; however, for nonfasting outpatients during the first visit, a second visit was arranged to obtain fasting blood samples.
ACE Saudi Arabia
In Saudi Arabia, a total of 550 patients were enrolled, constituting about 13% of the entire ACE study cohort. This analysis was conducted in multiple clinics in urban and rural regions of the kingdom. In this post hoc analysis, patients were divided into two groups: SA nationals and expatriates.
Cardiovascular risk factor definitions
Dyslipidemia was recorded if patients were on treatment with a lipid-lowering drug or if a current fasting lipid profile measurement documented one or more of the following: high total cholesterol (≥240 mg/dL); high low-density lipoprotein (LDL)-cholesterol (based on associated risk factors: LDL-cholesterol ≥100-160 mg/dL); or low high-density lipoprotein (HDL)-cholesterol (<40 mg/dL) [21]. Outpatients on lipid-regulating treatments were considered to have controlled LDL-cholesterol if their values were at goal according to their risk category, based on the NCEP ATP III recommended LDL-cholesterol targets [7]. Lipids were also assessed according to American College of Cardiology (ACC)/American Heart Association (AHA) 2013 guidelines. Arterial blood pressure (BP) was recorded as the higher of two consecutive measurements, taken once from each arm with a standardized automated BP measuring instrument after the outpatients had been sitting quietly for at least five minutes. Hypertension was defined as being on current antihypertensive drugs, or having an abnormal BP reading (BP ≥140/90 mmHg, and for diabetic patients ≥130/80 mmHg), according to the European Society of Cardiology (ESC) Cardiovascular Prevention Guidelines [22]. Outpatients on antihypertensive drugs were considered to have controlled BP if they had values below the targets set by the ESC guidelines [22].
The following modifiable cardiovascular risk factors were also recorded: diabetes mellitus [fasting blood glucose ≥126 mg/dL (7 mmol/L)], defined as per the American Diabetes Association criteria [23]; smoking, defined as current or past consumption of cigarettes, pipe, or water pipe (shisha); obesity, defined as body mass index (BMI) ≥30 kg/m²; and abdominal obesity, defined in line with the International Diabetes Federation harmonized criteria as a waist circumference ≥94 cm in males and ≥80 cm in females [24]. Metabolic syndrome was defined as presence of three or more abnormal findings out of the following five: large waist circumference, elevated triglycerides, elevated BP, elevated fasting glucose, and reduced HDL [7].
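The threshold-based definitions above translate directly into code. The sketch below assumes each metabolic syndrome component has already been dichotomised; the function names are illustrative and not from the study protocol.

```python
def is_obese(weight_kg, height_m):
    """Obesity: body mass index (BMI) >= 30 kg/m^2."""
    return weight_kg / height_m ** 2 >= 30.0

def has_abdominal_obesity(waist_cm, is_male):
    """IDF harmonized criteria: waist >= 94 cm (males) or >= 80 cm (females)."""
    return waist_cm >= (94.0 if is_male else 80.0)

def has_metabolic_syndrome(large_waist, high_triglycerides, elevated_bp,
                           elevated_glucose, low_hdl):
    """Metabolic syndrome: three or more of the five abnormal findings."""
    return sum([large_waist, high_triglycerides, elevated_bp,
                elevated_glucose, low_hdl]) >= 3
```

For example, `has_metabolic_syndrome(True, True, True, False, False)` returns `True`, since exactly three of the five components are abnormal.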
Statistical methods
Categorical data are summarized using frequencies and percentages. Continuous data are reported as mean ± standard deviation or median (25th, 75th percentiles). Dichotomous data comparing the SA national versus expatriate subgroups were evaluated with a Chi-square test, a Fisher exact test was used to compare categorical data with small cells, and continuous variables were evaluated with an unpaired two-tailed Student's t test. A test was considered statistically significant if p ≤ 0.05. As this was an exploratory analysis, no multiplicity correction was applied.
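The three tests described above map onto standard SciPy routines. The counts and measurements below are illustrative placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Illustrative 2x2 table: rows = SA nationals / expatriates,
# columns = risk factor present / absent (counts are made up).
table = np.array([[120, 74],
                  [230, 126]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test for a table with small cell counts
odds_ratio, p_fisher = stats.fisher_exact([[3, 9], [10, 4]])

# Unpaired two-tailed t test for a continuous variable (e.g. BMI)
rng = np.random.default_rng(0)
bmi_nationals = rng.normal(29, 4, size=194)
bmi_expatriates = rng.normal(28, 4, size=356)
t_stat, p_t = stats.ttest_ind(bmi_nationals, bmi_expatriates)
```

Each routine returns a p-value that would then be compared against the 0.05 threshold.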
Results
The ACE study enrolled 550 participants from different clinics across Saudi Arabia (35.3% SA nationals; 64.7% expatriates). Distribution of expatriates by nationality is given in Table S1. The mean age of the overall cohort was 43 ± 11 years and just over half (55%) were younger than 45 years. Among outpatients, nearly three-quarters of the study cohort (72.4%) were male (Table 1). Nearly 50% of all participants had three or more modifiable cardiovascular risk factors (Fig. 1). Prevalence differed between cohorts for some risk factors, with differences of approximately 4% and 2% between SA nationals and expatriates, respectively (Table 2). Gender- and age-specific prevalence estimates were also compared among SA nationals and expatriates (Table S2).
Differences were noted in prevalence of abnormal lipid profiles between SA nationals and expatriates and, in general, expatriates showed a higher prevalence of abnormal lipid profiles (Table 3). Based on 2013 ACC/AHA guidelines for initiation of lipid-lowering therapy [25], expatriates were eligible for initiation of statin therapy to nearly the same extent as SA nationals (Fig. 2). The prevalence of newly diagnosed hypertension and diabetes mellitus, highlighted during screening, was similar in the two cohorts of outpatients, but prevalence of newly diagnosed dyslipidemia (based on NCEP ATP III criteria) varied (Fig. 3).
Metabolic syndrome
The prevalence of metabolic syndrome in the outpatient cohort was 41.8%; 72.2% of these outpatients were expatriates. Among patients with a diagnosis of metabolic syndrome, dyslipidemia was the most common component (87.4%), followed by abdominal obesity (63.9%), hypertension (61.3%), diabetes mellitus (49.6%), and smoking (24.3%). Fig. 4 shows the distribution of these risk factors between SA nationals and expatriates.
The medians (25th, 75th percentile) of measurable parameters of metabolic syndrome are shown in Fig. 5. Median lipid values, BP, and waist circumference were lower among expatriates compared with SA nationals. Initiation of lipid-lowering therapy for patients not previously on lipid-lowering therapy (according to 2013 ACC/AHA guidelines [25]) was more frequent for outpatients with metabolic syndrome in the SA national group compared with the expatriate group. Additionally, subgroup analysis for certain laboratory tests is shown in Table S3.
Discussion
Cardiovascular diseases represent a major health challenge for the contemporary Saudi population. The ACE Saudi Arabia study sheds light on the current status of risk factor management in Saudi Arabia among outpatients attending general practice clinics in this region. This analysis clearly demonstrates that there has been a significant increase in cardiovascular risk factor prevalence. In addition, a significant proportion of patients with modifiable risk factors have poor overall control, based on recommendations from international guidelines.
The urban population of Saudi Arabia has notably increased during the previous decade, with numbers expected to double in the next few years as a result of lifestyle changes and improved standards of healthcare [13]. This shift towards an increase in urban Saudi population, with an accompanying increase in the numbers of expatriates, will have profound implications for healthcare services, healthcare access, and resource utilization, as well as public health. Our analysis demonstrates that with the current risk factors, more resources are needed for risk factor detection and control to avoid an epidemic in atherosclerotic cardiovascular diseases. In our study, dyslipidemia and abdominal obesity were the most prevalent risk factors, affecting approximately three-quarters of screened outpatients, followed by high rates of hypertension, diabetes, and smoking. These findings were observed in both genders and across different age groups. In addition, approximately half of the participants did not use appropriate management, and half of the outpatients who were on therapy for dyslipidemia still had poor lipid profile control. This is despite lipid-modifying medications being available without cost. Thus, future studies are needed to address the true barriers to risk factor control in a society where healthcare and medications are free of charge, making them largely accessible.
Al-Nozha et al. [17] estimated the prevalence of diabetes in the community-based Coronary Artery Disease in Saudis Study (CADISS). More than a decade later, the present study demonstrates that there has been little change in the prevalence of diabetes in the Saudi population. Similar observations were noted regarding diabetes control. Thus, there is an urgent need for a national prevention program that targets high-risk groups in an attempt to achieve better risk factor and diabetes control. In contrast, there has been a significant increase in the rate of hypertension, from 26.1% in the CADISS study [17,26] to more than 40% in the present study. In addition, hypertension control is suboptimal in all outpatients. Thus, public health programs are needed to reduce salt intake and improve physical fitness, measures that have been shown to reduce hypertension incidence and improve BP control.
Several studies [18,27,28] have reported that patients with traditional cardiovascular risk factors have poor risk factor control. Numerous potential reasons could explain this, including limited assessment by healthcare workers, accessibility of healthcare facilities, and level of compliance. Furthermore, limited knowledge of some primary care physicians about cardiovascular risk factor management, advanced therapies, and new tools to help achieve better risk factor management may contribute to suboptimal risk factor management.
Public health approaches can be adopted by a number of organizations and associations and may play an important role in reducing the incidence of modifiable traditional cardiovascular risk factors. Even modest reductions in the prevalence of these risk factors may significantly reduce health expenditure and, as a consequence, support the development of a well-structured primary prevention program. Examples include expanding tobacco-free environments, imposing additional taxes on tobacco products, restricting the activities of tobacco manufacturers, and offering support and rehabilitation programs to those willing to quit. Furthermore, promoting a healthy diet and physical activity through community-based awareness initiatives, including social media interaction, will further contribute to risk reduction. Increased consumption of important food classes, accompanied by a reduced intake of harmful substitutes such as saturated fatty acids and high-calorie drinks, would also prove beneficial. Finally, adoption and implementation of national guidelines on physical activity to support and encourage physical activity across all age groups will be key to improving overall health outcomes [29].
Our analysis is a subgroup analysis of the main ACE study, including only patients recruited from Saudi sites. Several points addressed in this analysis are unique and were not studied in the entire cohort. First, our analysis provides detailed estimates of the prevalence of cardiovascular risk factors and metabolic syndrome specifically for the population of Saudi Arabia. Second, we compared the risk factor burden between expatriates and SA nationals, which is very important for resource allocation, especially with regard to healthcare transformation planning. Finally, our paper highlights patients' eligibility for lipid-lowering medications according to the recent 2013 ACC/AHA guidelines, which had not previously been studied in a Saudi population.
Our study is not without limitations. The ACE Saudi Arabia substudy enrolled outpatients that were randomly selected from different clinics in Saudi Arabia. Multiple attempts were made to obtain a representative sample; however, it is still possible that some selection bias limits the generalizability of the study findings. Further national surveys for traditional atherosclerotic heart disease, with larger sample sizes, a structured database, and periodic monitoring, are needed. Although our study describes the situation of patients attending healthcare, the distribution of cardiovascular risk factors among the study cohorts cannot represent the general population; only population-based surveys can do so. Additionally, the analysis was exploratory; thus, the p values published may not be indicative of statistical significance, but rather serve to identify variables for further exploration in a properly designed clinical trial.
Conclusion
This analysis clearly shows that there is a high prevalence of cardiovascular risk factors in the Saudi population. In addition, a significant proportion of patients with risk factors have poor overall control. Programmed community-based screening is needed for all cardiovascular risk factors in Saudi Arabia. Increased awareness and improved primary care services may decrease the incidence of coronary artery disease and improve overall quality of life.
"year": 2017,
"sha1": "148b4e3d0ab386dd8a07475ec0069bcf6d9a1249",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jsha.2017.03.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c76082c0bfe57d9ab0e5e25804f13765b69ac2ec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Geostatistical Framework for Combining Spatially Referenced Disease Prevalence Data from Multiple Diagnostics
Multiple diagnostic tests are often used due to limited resources or because they provide complementary information on the epidemiology of a disease under investigation. Existing statistical methods to combine prevalence data from multiple diagnostics ignore the potential over-dispersion induced by the spatial correlations in the data. To address this issue, we develop a geostatistical framework that allows for joint modelling of data from multiple diagnostics by considering two main classes of inferential problems: (1) to predict prevalence for a gold-standard diagnostic using low-cost and potentially biased alternative tests; (2) to carry out joint prediction of prevalence from multiple tests. We apply the proposed framework to two case studies: mapping Loa loa prevalence in Central and West Africa, using microscopy and a questionnaire-based test called RAPLOA; mapping Plasmodium falciparum malaria prevalence in the highlands of Western Kenya using polymerase chain reaction and a rapid diagnostic test. We also develop a Monte Carlo procedure based on the variogram in order to identify parsimonious geostatistical models that are compatible with the data. Our study highlights (i) the importance of accounting for diagnostic-specific residual spatial variation and (ii) the benefits accrued from joint geostatistical modelling so as to deliver more reliable and precise inferences on disease prevalence.
Introduction
Disease mapping denotes a class of problems in public health where the scientific goal is to predict the spatial variation in disease risk on a scale that can range from sub-national to global (Murray et al., 2014; Liu et al., 2012). Understanding the geographical distribution of a disease is particularly important in the decision-making process for the planning, implementation, monitoring and evaluation of control programmes (World Health Organization, 2017; Bhatt et al., 2015). In this context, model-based geostatistical methods (Diggle et al., 1998) have been especially useful in low-resource settings (Diggle and Giorgi, 2016; Gething et al., 2012; Zouré et al., 2014) where disease registries are non-existent or geographically incomplete, and monitoring of the disease burden is carried out through cross-sectional surveys and passive surveillance systems.
It is often the case that prevalence data from a geographical region of interest are obtained using different diagnostic tests for the same disease under investigation. The reasons for this are manifold.
For example, when the goal of geostatistical analysis is to map disease risk on a continental or global scale by combining data from multiple surveys, dealing with the use of different diagnostic tests may be unavoidable. In other cases, gold-standard diagnostic tests are often expensive and require advanced laboratory expertise and technology which may not always be available in constrained resource settings. This requires the use of more cost-effective alternatives for disease testing in order to attain a required sample size. Different diagnostics might also provide complementary information of intrinsic scientific interests into the spatial variation of disease risk and the distribution of hotspots.
In the absence of statistical methods that allow for the joint analysis of multiple diagnostics, most studies have reported separate analyses. A shortcoming of this approach is that it ignores, and therefore fails to explain, possible correlations between prevalence of different diagnostics. Statistical inference might benefit from a joint analysis, which can yield more efficient estimation of regression parameters (Song et al., 2009) and more precise predictions of prevalence.
However, different diagnostic tests can exhibit considerable disparities in the estimates of disease prevalence for the same population, or even the same individuals. Obvious sources of such variation include differences in sensitivity and specificity. Furthermore, different diagnostics may exhibit differences in their association with the same risk factors. In a geostatistical context, there may also be differences between the spatial covariance structures of different diagnostics.
These aspects highlight the potential challenges that joint modelling of multiple diagnostics needs to take into account. In this paper, we address such issues in order to develop a general framework for geostatistical analysis and describe the application of this framework to Loa loa and malaria mapping in Africa.
The structure of the paper is as follows. In Section 2, we describe the two motivating applications. In Section 3, we review existing methods for combining prevalence data from different diagnostics. In Section 4, we introduce a geostatistical framework for combining data from two diagnostics and distinguish two main classes of problems that arise in this context. In Sections 5 and 6, we apply this framework to the two case studies introduced in Section 2. In Section 7, we discuss methodological extensions to more than two diagnostics.
Loa loa mapping in Central and West Africa
Loiasis is a neglected tropical disease that has received increased attention due to its impact on the control of a more serious infectious disease, onchocerciasis, that is endemic in large swathes of sub-Saharan Africa. Mass administration of the drug ivermectin confers protection against onchocerciasis, but individuals who are highly co-infected with Loa loa, the loiasis parasite, can develop severe and occasionally fatal adverse reactions to the drug (Boussinesq et al., 1998). Boussinesq et al. (2001) have shown that high levels of Loa loa prevalence within a community are strongly associated with a high parasite density. For this reason, Zouré et al. (2011) have suggested that precautionary measures should be put in place before the roll-out of mass drug administration with ivermectin in areas where prevalence of infection with Loa loa is greater than 20%.
In order to carry out a rapid assessment of the Loa loa burden in endemic areas a questionnaire instrument, named RAPLOA, was developed as a more economically feasible alternative to the standard microscopy-based microfilariae (MF) detection in blood smears (Takougang et al., 2002).
To validate the RAPLOA methodology against microscopy, cross-sectional surveys using both diagnostics were carried out in four study sites in Cameroon, Nigeria, Republic of Congo and the Democratic Republic of Congo (see Wanji et al. (2012) and Additional Figure 1 in Web Appendix B).
In this study, the objective of statistical inference is to develop a calibration relationship between the two diagnostic procedures. This could then be applied to map microscopy-based MF prevalence in areas where the more economical RAPLOA questionnaire is the only feasible option.
Malaria mapping in the highlands of Western Kenya
Malaria continues to be a global public health challenge, especially in sub-Saharan Africa which, in 2016, accounted for about 90% of all the 445,000 estimated malaria deaths worldwide (World Health Organization, 2017). Polymerase chain reaction (PCR) and a rapid diagnostic test (RDT) are two of the most commonly used procedures for detecting Plasmodium falciparum, the deadliest species of the malaria parasites. PCR is highly sensitive and specific, but its use is constrained by high costs and the need for highly trained technicians. RDT is simpler to use, cost-effective and requires minimal training, but is less sensitive than PCR (Tangpukdee et al., 2009).
In order to investigate this issue, a malariometric survey was conducted using both RDT and PCR in two highland districts of Western Kenya (see Additional Figure 2 in Web Appendix B); see Stevenson et al. (2015) for a descriptive analysis of this study. In this scenario, a joint model for the reported malaria counts from the two diagnostics could allow to exploit their cross-correlation and identify malaria hotspots more accurately.
Literature review
We formally express the format of geostatistical data from multiple diagnostics as
$$\{(x_{ik}, y_{ijk}) : j = 1, \ldots, n_{ik};\; i = 1, \ldots, m_k;\; k = 1, \ldots, K\}, \qquad (3.1)$$
where $y_{ijk}$ is a binary outcome taking value 1 if the $j$-th individual at location $x_{ik}$ tests positive for the disease under investigation using the $k$-th diagnostic procedure, and 0 otherwise. We use $p_{ijk}$ to denote the probability that an individual has a positive test outcome from the $k$-th diagnostic.
When data are only available as aggregated counts, we replace $y_{ijk}$ in (3.1) with $y_{ik} = \sum_{j=1}^{n_{ik}} y_{ijk}$ and $p_{ijk}$ with $p_{ik}$. When all diagnostic tools are used at each location, we replace $x_{ik}$ with $x_i$, although this is not a requirement in the development of our methodology.
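The aggregation from individual-level outcomes to location-level counts amounts to a group-by sum. A minimal sketch with pandas, using illustrative column names and toy records:

```python
import pandas as pd

# Illustrative individual-level records: location i, diagnostic k, outcome y_ijk
records = pd.DataFrame({
    "location": [1, 1, 1, 2, 2, 2],
    "diagnostic": ["RDT", "RDT", "PCR", "RDT", "PCR", "PCR"],
    "y": [1, 0, 1, 0, 1, 1],
})

# y_ik = number of positives, n_ik = number examined at location i for test k
counts = (records.groupby(["location", "diagnostic"])["y"]
                 .agg(positives="sum", examined="count")
                 .reset_index())
```

The resulting `counts` table is the binomial (positives, examined) format assumed by the models below.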
In the remainder of this section, we review non-spatial methods for joint modelling of the $p_{ijk}$ across multiple diagnostics and a geostatistical modelling approach proposed by Crainiceanu et al. (2008).
Non-spatial approaches
Existing non-spatial methods for the analysis of data from multiple diagnostics fall within two main classes of statistical models: generalised linear models (GLMs) and their random-effects counterpart, generalised linear mixed models (GLMMs). Mappin et al. (2015) analysed data on P. falciparum prevalence from RDT and microscopy outcomes from sub-Saharan Africa, using a standard probit model
$$\Phi^{-1}(p_{i2}) = \alpha + \beta\,\Phi^{-1}(p_{i1}), \qquad (3.2)$$
thus assuming a linear relationship between the $p_{ik}$ on the probit scale. Wanji et al. (2012) used a similar approach for Loa loa in order to study the relationship between microscopy and RAPLOA prevalence by replacing the probit link in (3.2) with the logit. This model was also used by Wu et al. (2015) to estimate the relationship between RDT, microscopy and PCR, for each pair of diagnostics.
A major limitation of these approaches based on standard GLMs is that they do not account for any over-dispersion that might be induced by unmeasured risk factors. Coffeng et al. (2013) propose a bivariate GLMM for joint modelling of data on onchocerciasis nodule prevalence and skin MF prevalence in adult males sampled across 148 villages in 16 African countries. More specifically, the linear predictor of such a model can be expressed as
$$\log\left\{\frac{p_{ijk}}{1-p_{ijk}}\right\} = \mu_k + Z_i + V_{ij},$$
where the random-effects terms $Z_i$ and $V_{ij}$ are zero-mean Gaussian variables accounting for unexplained variation between villages and between individuals within villages, respectively. Using this approach, Coffeng et al. (2013) estimated a strong positive correlation between nodule and MF prevalence but also reported variation in the strength of this relationship across study sites. Crainiceanu et al. (2008) proposed a bivariate geostatistical model (henceforth CDRM) to analyse data on microscopy and RAPLOA Loa loa prevalence (see Section 2.1). To the best of our knowledge, this is the only existing approach that attempts to model the spatial correlation between two diagnostics.
The Crainiceanu, Diggle and Rowlingson model
Let $k = 1$ correspond to the RAPLOA questionnaire, and $k = 2$ to microscopy. To emphasize the spatial context, we now write $p_{ij} = p_j(x_i)$; CDRM can then be expressed as
$$\log\left\{\frac{p_1(x_i)}{1-p_1(x_i)}\right\} = d(x_i)^\top\beta + S(x_i) + Z_i,$$
$$\log\left\{\frac{p_2(x_i)}{1-p_2(x_i)}\right\} = \alpha + \nu\left[d(x_i)^\top\beta + S(x_i) + Z_i\right],$$
where $d(x_i)$ is a vector of spatially varying covariates, $S(x_i)$ is a zero-mean stationary and isotropic Gaussian process and the $Z_i$ are zero-mean independent and identically distributed Gaussian random variables. Crainiceanu et al. (2008) also provide empirical evidence to justify the assumption of a logit-linear relationship between the two diagnostics.
A limitation of the CDRM is that it assumes proportionality on the logit scale between the residual spatial fields associated with RAPLOA and microscopy. In our re-analysis in Section 5, we use a Monte Carlo procedure to test this hypothesis.
Two classes of bivariate geostatistical models
We now develop two modelling strategies that address the specific objectives of the two case studies introduced in Section 2. Our focus in this section will be restricted to the case of two diagnostics (hence K = 2). We discuss the extension to more than two in Section 7.
Case I: Predicting prevalence for a gold-standard diagnostic
Let $S_1(x)$ and $S_2(x)$ be two independent stationary and isotropic Gaussian processes; also, let $f_1\{\cdot\}$ and $f_2\{\cdot\}$ be two functions with domain on the unit interval $[0, 1]$ and image on the real line. We propose to model data from two diagnostics, with $k = 2$ denoting the gold-standard, as
$$f_1\{p_1(x_i)\} = d(x_i)^\top\beta_1 + S_1(x_i) + Z_{i1},$$
$$f_2\{p_2(x_i)\} = \alpha + \nu\left[d(x_i)^\top\beta_1 + S_1(x_i)\right] + S_2(x_i) + Z_{i2}. \qquad (4.1)$$
In our applications, we specify exponential correlation functions for $S_k(x)$, $k = 1, 2$, hence
$$\mathrm{cov}\{S_k(x), S_k(x')\} = \sigma_k^2 \exp\{-\|x - x'\|/\phi_k\},$$
where $\sigma_k^2$ is the variance of $S_k(x)$ and $\phi_k$ is a scale parameter regulating how fast the spatial correlation decays to zero for increasing distance. Finally, we use $\tau_k^2$ to denote the variance of the Gaussian noise $Z_{ik}$.
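A Gaussian process with this exponential covariance can be simulated at a finite set of locations by factorising its covariance matrix. The sketch below uses arbitrary parameter values and a Cholesky factor with a small jitter for numerical stability:

```python
import numpy as np

def exp_cov(coords, sigma2, phi):
    """Covariance matrix sigma2 * exp(-||x - x'|| / phi) between all pairs."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / phi)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(50, 2))   # 50 locations in a 100 x 100 region
C = exp_cov(coords, sigma2=1.5, phi=20.0)

# Simulate S(x) at the 50 locations via a Cholesky factor (with jitter)
L = np.linalg.cholesky(C + 1e-10 * np.eye(50))
S = L @ rng.standard_normal(50)
```

Larger `phi` yields smoother realisations, since the correlation decays more slowly with distance.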
Selection of suitable functions $f_1$ and $f_2$ can be carried out, for example, by exploring the association between the empirical prevalences of the two diagnostics in order to identify transformations that render their relationship approximately linear. Alternatively, subject matter knowledge could be used to constrain the admissible forms for $f_1$ and $f_2$; see, for example, Irvine et al. (2016) who derive a functional relationship between MF and an immuno-chromatographic test for prevalence of lymphatic filariasis by making explicit assumptions on the distribution of worms and their reproductive rate in the general population.
The proposed model in (4.1) is more flexible than the CDRM because (i) it allows for diagnostic-specific unstructured variation $Z_{ik}$ and, more importantly, (ii) relaxes the assumption of proportionality between the residual spatial fields of the two diagnostics through the introduction of $S_2(x)$.
Case II: Joint prediction of prevalence from two complementary diagnostics
Let $S_1(x)$ and $S_2(x)$ be two independent Gaussian processes, and $Z_{ik}$ Gaussian noise, each having the same properties as defined in the previous section. We now introduce a third stationary and isotropic Gaussian process $T(x)$ having unit variance and exponential correlation function with scale parameter $\phi_T$.
Our proposed approach for joint prediction of prevalence from two diagnostics, when both are of interest, is expressed by the following equation for the linear predictor
$$g\{p_k(x_i)\} = d(x_i)^\top\beta_k + S_k(x_i) + \nu_k T(x_i) + Z_{ik}, \quad k = 1, 2. \qquad (4.2)$$
The spatial processes $S_k(x)$ and $T(x)$ account for unmeasured risk factors that are specific to each and common to both diagnostics, respectively. The resulting variogram for the linear predictor is
$$\gamma_k(u) = \tau_k^2 + \sigma_k^2\{1 - \exp(-u/\phi_k)\} + \nu_k^2\{1 - \exp(-u/\phi_T)\},$$
and the cross-variogram between the linear predictors of the two diagnostics is
$$\gamma_{12}(u) = \nu_1\nu_2\{1 - \exp(-u/\phi_T)\}.$$
Given the relatively large number of parameters, fitting the model may require a pragmatic approach. In order to identify a parsimonious model for the data, we recommend an incremental modelling strategy, whereby a simpler model is used in a first analysis (e.g. by setting $S_k(x) = 0$ for all $x$) and more complexity is then added in response to an unsatisfactory validation check, as described below.
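As a sketch, assume the linear predictor for diagnostic k combines a nugget of variance tau_k^2, a diagnostic-specific exponential process with variance sigma_k^2 and scale phi_k, and the shared process T scaled by a coefficient nu_k (the nu_k notation is ours). The variogram and cross-variogram are then straightforward to evaluate:

```python
import numpy as np

def variogram_k(u, tau2, sigma2, phi, nu, phi_T):
    """Variogram of one diagnostic's linear predictor: nugget plus the
    diagnostic-specific and shared exponential terms (model assumptions
    as stated in the lead-in)."""
    return (tau2
            + sigma2 * (1.0 - np.exp(-u / phi))
            + nu ** 2 * (1.0 - np.exp(-u / phi_T)))

def cross_variogram(u, nu1, nu2, phi_T):
    """Cross-variogram: only the shared process T contributes, since the
    diagnostic-specific terms are mutually independent."""
    return nu1 * nu2 * (1.0 - np.exp(-u / phi_T))

u = np.linspace(0.0, 50.0, 101)           # distances, arbitrary units
g1 = variogram_k(u, tau2=0.1, sigma2=0.8, phi=10.0, nu=0.5, phi_T=25.0)
g12 = cross_variogram(u, nu1=0.5, nu2=0.7, phi_T=25.0)
```

At distance zero the variogram equals the nugget and the cross-variogram vanishes; both increase monotonically towards their sills.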
Inference and model validation
We carry out parameter estimation for both the asymmetric and symmetric models using Monte Carlo Maximum Likelihood (MCML) (Geyer, 1991;Christensen, 2004). To carry out spatial predictions at a set of unobserved locations, we plug the MCML estimates into a Markov Chain Monte Carlo algorithm for simulation from the distribution of the random effects conditional on the data.
We summarise our predictive inferences on prevalence using the mean, standard deviation, and exceedance probabilities, i.e. the probability that the predictive distribution of prevalence exceeds a predefined threshold. Details of the derivation and approximation of the log-likelihood function are given in Web Appendix A.
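Given a matrix of predictive samples of prevalence (MCMC draws by locations), these summaries reduce to one-liners. The Beta draws below are placeholders for actual model output:

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.beta(2, 5, size=(5000, 3))   # hypothetical predictive draws

pred_mean = samples.mean(axis=0)           # predictive mean per location
pred_sd = samples.std(axis=0)              # predictive standard deviation
exceed_20 = (samples > 0.20).mean(axis=0)  # P(prevalence > 20% | data)
```

The exceedance probability is simply the fraction of draws above the policy threshold, here the 20% cut-off used for Loa loa.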
For model validation we propose the following procedure. We first re-write both models in general form
$$g(p_{ijk}) = \mu_{ijk} + W_k(x_i),$$
where $\mu_{ijk}$ is the mean component expressed as a regression on the available covariates and $W_k(x_i)$ collects the residual spatial and unstructured variation. Setting
$$W_k(x_i) = S_k(x_i) + \nu_k T(x_i) + Z_{ik}, \qquad (4.5)$$
we recover the symmetric model (4.2).
We define the empirical variogram of $W_k(x)$ to be
$$\hat{\gamma}_k(u) = \frac{1}{2|N(u)|}\sum_{(i,j)\in N(u)}\{\hat{W}_k(x_i) - \hat{W}_k(x_j)\}^2,$$
where $N(u)$ is the set of pairs of locations separated by a distance falling in a bin centred on $u$, and the $\hat{W}_k(x_i)$ are predictions of $W_k(x_i)$ conditioned on the data. To test whether the adopted spatial structure for $W_k(x)$ is compatible with the data, we then proceed through the following steps.
Step 0. Obtain $\hat{W}_k(x_i)$ from two separate standard geostatistical models (i.e. $W_k(x) = S_k(x) + Z_{ik}$, where $S_k(x)$, $k = 1, 2$, are independent processes) and compute the empirical variogram $\hat{\gamma}_k$, for $k = 1, 2$.
Step 1. Simulate prevalence data as in (3.1) from the adopted model for $W_k(x)$ by plugging in the MCML estimates. Fit separate standard geostatistical models as in Step 0 and compute the empirical variogram for the simulated dataset.
Step 2. Repeat Step 1 a large enough number of times, say M .
Step 3. Use the resulting M empirical variograms to generate 95% confidence intervals at each of a set of pre-defined distance bins.
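The steps above can be sketched as follows, with a simplified binned empirical variogram and a pointwise Monte Carlo envelope (the equal-width binning is our choice, not prescribed by the procedure):

```python
import numpy as np

def empirical_variogram(coords, w, bins):
    """Binned empirical variogram of values w observed at coords."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    half_sq = 0.5 * (w[:, None] - w[None, :]) ** 2
    iu = np.triu_indices(len(w), k=1)          # each pair counted once
    dist, gamma = d[iu], half_sq[iu]
    which = np.digitize(dist, bins)
    return np.array([gamma[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])

def mc_envelope(sim_variograms, level=0.95):
    """Pointwise confidence band from M simulated variograms (Steps 1-3)."""
    lo = np.nanpercentile(sim_variograms, 100 * (1 - level) / 2, axis=0)
    hi = np.nanpercentile(sim_variograms, 100 * (1 + level) / 2, axis=0)
    return lo, hi
```

The observed variogram from Step 0 is then compared bin by bin against the `(lo, hi)` band.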
If the empirical variogram from Step 0 falls fully or partly outside the 95% confidence intervals, we conclude that the model is not able to capture the spatial structure of the data satisfactorily.

For the re-analysis of the Loa loa data introduced in Section 2.1, we consider the two following models.
• Model 1: a slightly modified, more flexible, version of the CDRM, given by
$$\log\left\{\frac{p_1(x_i)}{1-p_1(x_i)}\right\} = g_1\{e(x_i)\} + S_1(x_i) + Z_{i1},$$
$$\log\left\{\frac{p_2(x_i)}{1-p_2(x_i)}\right\} = g_2\{e(x_i)\} + \nu\,S_1(x_i) + Z_{i2},$$
with
$$g_k\{e(x)\} = \beta_{k,0} + \beta_{k,1}\,e(x)\,I(e(x) < e_1) + \beta_{k,2}\,e(x)\,I(e_1 \le e(x) \le e_2) + \beta_{k,3}\,e(x)\,I(e(x) > e_2),$$
where $e(x)$ denotes the elevation in meters at location $x$, $e_1 = 750$, $e_2 = 1015$ and $I(\mathcal{P})$ is an indicator function which takes value 1 if $\mathcal{P}$ is true and 0 otherwise. In this parameterisation, $\beta_{k,1}$ is the effect of elevation on prevalence below 750 meters, $\beta_{k,2}$ its effect between 750 and 1015 meters, and $\beta_{k,3}$ its effect above 1015 meters.
• Model 2: obtained by incorporating an additional spatial process $S_2(x)$, independent of $S_1(x)$, into the linear predictor for microscopy in Model 1.

Table 1 reports the MCML estimates obtained for Models 1 and 2. We observe that all parameters common to both Model 1 and Model 2 have comparable point and interval estimates, except for $\tau_1^2$, which has a substantially narrower 95% confidence interval under Model 1 than under Model 2. As expected, both models show a significant and positive logit-linear relationship between RAPLOA and microscopy. However, Model 2, which includes the additional spatial process $S_2(x)$, is also able to capture spatial variation in microscopy prevalence on a scale of about 24 meters.
Results
We use the validation procedure described in Section 4 to test which of the two models better fits the spatial structure of the data. The results (see Web Figure 5) show a satisfactory assessment of Model 2, whereas for Model 1 the empirical variogram for microscopy partly falls outside the 95% confidence band, questioning its validity.
We now compare the predictive inferences on microscopy prevalence between the two models in order to assess whether the introduction of S 2 (x) makes a tangible difference. Figure 5.1 shows the point estimates for microscopy prevalence and the exceedance probabilities for a 20% prevalence threshold under Model 1 (upper panels), under Model 2 (middle panels), and the difference between the two (lower panels). Overall, the predicted spatial pattern in prevalence from the two models is similar, but with substantial local differences. The difference between the point estimates for prevalence ranges from -0.12 to 0.13, while the difference between the two exceedance probabilities ranges from -0.44 to 0.59.
Simulation Study
We carry out a simulation study in order to quantify the effects on the predictive inferences for prevalence of ignoring microscopy-specific residual spatial variation. To this end, we compare the predictive performance of Model 1 and Model 2 at 20 unobserved locations corresponding to the centroids of 20 clusters (shown as red points in Web Figure 1) identified using the k-means algorithm (Hartigan and Wong, 1979), and proceed as follows. We simulate 10,000 Binomial data-sets under Model 2, setting its parameters to the estimates of Table 1, and fit both models to each simulated data-set.
We then carry out predictions for microscopy prevalence at the 20 unobserved locations. We summarise the results at each of the 20 prediction locations using the 95% coverage probability (CP), the root-mean-square error (RMSE) and the 95% predictive interval length (PIL). Table 2 shows the three metrics averaged over the 20 locations for Model 1 and Model 2. The CP of Model 1 (77.5%) is well below its nominal level of 95%. This is also reflected in a smaller PIL for Model 1, suggesting that it provides unreliably narrow 95% predictive intervals for prevalence. Finally, we note that Model 1 also has a larger RMSE than Model 2.
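The three summaries used in the simulation study can be computed directly from posterior predictive draws. A minimal sketch (the function and variable names are ours; the paper's own implementation is not shown):

```python
import numpy as np

def predictive_metrics(samples, truth, level=0.95):
    """Coverage probability, RMSE and predictive-interval length.

    samples: (n_draws, n_locations) posterior predictive draws of prevalence.
    truth:   (n_locations,) true prevalence values used in the simulation.
    """
    alpha = 1.0 - level
    lo, hi = np.quantile(samples, [alpha / 2, 1 - alpha / 2], axis=0)
    cp = np.mean((truth >= lo) & (truth <= hi))           # coverage probability
    rmse = np.sqrt(np.mean((samples.mean(axis=0) - truth) ** 2))
    pil = np.mean(hi - lo)                                # mean interval length
    return cp, rmse, pil
```

In the study above, these three numbers would then be averaged over the 20 prediction locations and the 10,000 simulated data-sets.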
Application II: Joint prediction of Plasmodium falciparum prevalence using RDT and PCR
The malaria data consist of 3,587 individuals sampled across 949 locations (see Web Figure 2). The outcomes from RDT (k = 1) and PCR (k = 2) were concordant in 92.4% of all the individuals tested for P. falciparum. This suggests that estimating components of residual spatial variation that are unique to each diagnostic may be difficult. For this reason, our model for the data takes the following form, where: d ij,1 is a binary variable taking value 1 if the j-th individual at x i is male and 0 otherwise; d ij,2 = min{a ij , 5} and d ij,3 = max{a ij − 5, 0}, i.e. the effect of age, a ij , is modelled as a linear spline with a knot at 5 years. Table 3 reports point estimates and 95% confidence intervals for the model parameters. Gender has a significant effect on PCR prevalence, but its effect on RDT prevalence is not significant at the conventional 5% level. The effect of age is comparable between the two diagnostics, with the probability of a positive test increasing with age up to 5 years and decreasing thereafter.
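The linear-spline construction for age, d ij,2 = min{a ij , 5} and d ij,3 = max{a ij − 5, 0}, can be sketched as follows (illustrative code following the stated definition; the function name is ours):

```python
def age_spline_basis(age):
    """Linear-spline basis for age with a single knot at 5 years.

    Returns (d2, d3) so that beta2 * d2 + beta3 * d3 gives a slope of
    beta2 below the knot and beta2 + beta3 above it.
    """
    d2 = min(age, 5.0)        # age effect up to 5 years
    d3 = max(age - 5.0, 0.0)  # additional effect beyond 5 years
    return d2, d3
```

With this basis, a positive coefficient on d2 together with a negative sum of the two coefficients reproduces the reported pattern of prevalence increasing up to 5 years of age and decreasing thereafter.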
Results
The estimated variance component associated with RDT, ν̂²₁ = 0.230, is about three times that for PCR, ν̂²₂ = 0.081. The spatial process T(x), common to both diagnostics, accounts for spatial variation in malaria prevalence up to a scale of about 11.6 kilometers, beyond which the correlation falls below 0.05. The variogram-based validation procedure of Section 6.1 does not show any strong evidence against the fitted model (see Web Figure 6 in Web Appendix B).
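If an exponential correlation function ρ(u) = exp(−u/φ) is assumed (a common default in model-based geostatistics; the excerpt does not state which family is used here), the scale at which correlation drops below a threshold such as 0.05 is u = −φ log 0.05 ≈ 3φ. A sketch:

```python
import math

def practical_range(phi, threshold=0.05):
    """Distance at which exp(-u/phi) drops to `threshold` (about 3*phi for 0.05)."""
    return -phi * math.log(threshold)

# Under this assumption, a reported practical range of 11.6 km would imply a
# scale parameter phi of roughly 11.6 / 3 ≈ 3.9 km.
```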
To quantify the benefit of carrying out a joint analysis for RDT and PCR, we compare the predictive inferences for prevalence that are obtained under two scenarios: (i) the fitted model in (6.1); (ii) separate fitted models that ignore the cross-correlation between the outcomes of the two diagnostic tests. Figure 6.1 shows the point predictions and standard errors for RDT and PCR prevalences for five-year-old male children under scenarios (i) (left panels) and (ii) (right panels).
We observe that the point predictions for prevalence under the two models are very similar, but the joint model in (6.1), as expected, yields smaller standard errors throughout the study area.
Having chosen (6.1) as the best model, we compare the exceedance probabilities (EPs) for a 10% threshold between RDT and PCR. Using each of the two diagnostics, we then identify malaria hotspots as the set of locations whose EP is at least 90%. Figure 7 of Web Appendix B shows that PCR identifies a considerably larger hotspot in the north east of the study area than does RDT, as well as a smaller hotspot in the south west that is undetected by RDT. These results are consistent with the main findings of Mogeni et al. (2017).
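The exceedance-probability and hotspot rules used here translate directly into operations on posterior prevalence samples; a minimal hypothetical sketch (names and thresholds are ours, matching the 10% prevalence and 90% EP values in the text):

```python
import numpy as np

def hotspot_mask(prev_samples, prev_threshold=0.10, ep_threshold=0.90):
    """Locations whose exceedance probability P(prevalence > prev_threshold)
    is at least ep_threshold.

    prev_samples: (n_draws, n_locations) posterior samples of prevalence.
    """
    ep = np.mean(prev_samples > prev_threshold, axis=0)  # EP at each location
    return ep >= ep_threshold
```

Applying this separately to the RDT and PCR posterior samples yields the two hotspot maps being compared.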
Conclusions and extensions
We have developed a flexible geostatistical framework to model reported disease counts from multiple diagnostics and have distinguished two main classes of problems: (1) prediction of prevalence as defined by a gold-standard diagnostic using data obtained from a more feasible low-cost, but potentially biased, alternative; (2) joint prediction of prevalences as defined by two diagnostic tests.
As the burden of disease declines in endemic regions, the use of multiple transmission metrics and diagnostics becomes necessary in order to better inform and adapt control strategies. It is thus important to develop suitable methods of inference that allow the borrowing of strength of information across multiple diagnostics. As our study has shown, the main benefit of this approach is a reduction in the uncertainty associated with the predictive inferences on disease risk.
Our application to Loiasis mapping has shown the importance of acknowledging the existence of residual spatial variation specific to each diagnostic test. Through a simulation study, we have also shown that ignoring this source of extra-Binomial variation can lead to unreliably narrow prediction intervals for prevalence, with actual coverages falling well below their nominal level.
The second application on malaria mapping has highlighted the benefits of a joint analysis of data from two diagnostic tests when both are of scientific interest. A joint model can yield estimates of prevalence with smaller standard errors than estimates obtained from two separate geostatistical models.
Although we have only considered the case of two diagnostic tests throughout the paper, our methodology can easily be extended to more than two. However, the nature of the extension will depend on the specific context and scientific goal. For example, a natural extension of the models of Section 4.1 would be to use multiple biased diagnostic tools (for k = 1, . . . , K − 1) to better predict a gold-standard (k = K). In this case, the cross-correlation between the outcomes of the biased diagnostic tests could be modelled using the symmetric structure of the model in Section 4.2, while preserving an asymmetric form for the linear predictor of the gold-standard, as expressed formally in (7.1). However, we would be wary of attempting to fit this, or other comparably complex, models without an initial exploratory analysis that might help to understand the extent of the cross-correlations between the outcomes of different diagnostics, with a view to reducing the dimensionality of the model.
Severe Upper Airway Obstruction in a Patient With Infectious Mononucleosis
Infectious mononucleosis (IM) is a clinical disease caused by the Epstein-Barr virus (EBV). Common presenting symptoms include sore throat, lymph node enlargement, fever, and malaise. Although severe upper airway obstruction is uncommon, it is a potentially fatal complication that requires immediate intervention. We describe the case of an 18-year-old Hispanic man who presented with a progressive sore throat and difficulty speaking, requiring endotracheal intubation for airway protection. CT images showed diffuse swelling of Waldeyer's tonsillar ring, multiple enlarged lymph nodes, and splenomegaly. Acute EBV infection was confirmed on the basis of the clinical presentation, the heterophile antibody test, antibodies against viral capsid and nuclear antigens, and quantitative PCR. The patient was managed with ventilatory support, empirical antibiotic therapy, and systemic corticosteroids, achieving a positive outcome. Our case illustrates the use of corticosteroids in managing severe upper airway obstruction complicating IM.
Introduction
Infectious mononucleosis (IM) is usually a self-limited, benign lymphoproliferative disease caused by the Epstein-Barr virus (EBV). EBV is a gamma herpesvirus with a double-stranded DNA genome of about 172 kb [1]. In seroprevalence surveys, over 90% of adults worldwide are infected with this virus [1,2]. It most commonly affects the adolescent population, with a peak incidence in persons between 15 and 19 years (3.2-3.7 cases per 1000 persons) [1,3]. EBV is transmitted predominantly through infected saliva during kissing and less commonly through sexual contact, blood transfusions, or organ transplantations [1,2,4]. EBV can also be spread via objects, such as a toothbrush or drinking glass, that an infected person recently used [4]. The most common clinical manifestations are sore throat, lymph node and tonsillar enlargement, fever, myalgia, and malaise [1,2,5]. Complications of IM include concurrent Streptococcus pyogenes pharyngitis (30%), significant airway compromise (<5%), splenic rupture, and peritonsillar abscess (<1%) [2,5,6]. We describe a case of IM complicated by severe upper airway obstruction requiring ventilatory support and the use of corticosteroid therapy.
Case Presentation
An 18-year-old Hispanic man with a presumptive history of IM three years prior was transferred to our institution for the management of acute respiratory failure secondary to upper airway obstruction. Two days before presentation, the patient complained of a progressive sore throat, tonsillar swelling, and difficulty speaking. A heterophile antibody test was checked and returned positive. Given the worsening of his condition and concerns for upper airway compromise, the patient was intubated for airway protection and administered a dose of intravenous dexamethasone before transfer to our hospital.
On admission, vital signs were notable for a weight of 129 kg (BMI 35), temperature of 36.8 °C, heart rate of 94 beats/minute, blood pressure of 147/70 mmHg, respiratory rate of 18 breaths/minute, and oxygen saturation of 100% while on ventilatory support. The physical exam was notable for an enlarged neck and bilateral prominent and firm cervical lymphadenopathies. The endotracheal tube limited the oral exam. There was no evidence of skin abnormalities, including rash and jaundice. The abdominal exam was negative for evident organomegaly.
As seen in Table 1, initial laboratory results were remarkable for mild elevation of liver enzymes, a positive heterophile test, negative EBV viral capsid antigen (VCA) and EBNA antibodies (IgG), a positive EBV VCA IgM with an elevated titer, and positive EBV qualitative and quantitative PCR. There was no leukocytosis or lymphocytosis. However, a few reactive lymphocytes were reported.
CT images showed diffuse swelling of Waldeyer's tonsillar ring associated with severe narrowing of the nasopharyngeal airway producing obstruction (Figure 1), surrounded by bilateral reactive level 2 cervical lymphadenopathies and a necrotic right level 2B cervical lymph node (Figure 2). Splenomegaly (16.5 cm long in the coronal view) was another finding reported in the CT images (Figure 3).
The patient was admitted to the medical ICU for ventilator management. After evaluation, the otolaryngology consultant recommended against surgical intervention, as no evident fluid collection was seen on CT images, and instead advised empiric antibiotic coverage for potential bacterial superinfection and corticosteroid treatment. The patient received intravenous piperacillin-tazobactam and dexamethasone: the former as empiric therapy for possible bacterial superinfection, and the latter to help with the extensive inflammatory process in the airway. On day six, the anesthesia team took the patient to the operating room for safe extubation, which occurred without complications. An elongated palate was observed on examination, and potential obstructive sleep apnea features were noted. On day eight, after clinical improvement, the patient was discharged home with a short course of oral amoxicillin and prednisone. Extensive counseling to avoid contact sports for at least a month, the need for an outpatient sleep study, and a follow-up in the otolaryngology clinic were discussed.
Discussion
While IM is generally a benign, self-limiting condition, it can cause severe otolaryngologic complications in up to 5% of patients [7]. Palatal and nasopharyngeal tonsil hypertrophy and edema of the surrounding soft tissue can lead to airway compromise and are among the most common indications for hospitalization. Younger children seem to be at greater risk than adults [8]. The likely risk factors contributing to airway obstruction in our patient included obesity (BMI 35), an elongated palate, and possible underlying obstructive sleep apnea. The cardinal symptoms of severe upper airway obstruction are stertor or stridor, dyspnea, intercostal and suprasternal retractions, tachypnea, and cyanosis, which demand emergent intervention. However, they might be absent until late in the disease process [9]. These symptoms were not reported in our case and were prevented by securing the airway.
IM should be suspected clinically in teenagers and young adults who present acutely ill with sore throat, cervical lymphadenopathy, fever, and fatigue [2]. Heterophile antibodies help support the diagnosis in suspected individuals. They are IgM-class antibodies directed against mammalian erythrocytes. They are not specific; therefore, false positive heterophile antibody results can occur in conditions including other acute infections, autoimmune diseases, and cancer [2]. False negative results can also occur, especially in younger children [2,4]. Specific serologic tests can be performed when the diagnosis is uncertain, such as antibodies against EBV VCA and nuclear antigen (EBNA). EBV VCA IgM can be detected as early as a week before the onset of clinical symptoms [2,4,5]. EBV VCA IgG, on the other hand, can be detected within a month of illness [2,4,5]. Antibodies against EBNA develop within three months of illness; therefore, if they are present during the acute disease, they rule out acute EBV infection [2,4,5]. Our patient had an acute EBV infection supported by a positive heterophile antibody test and VCA IgM, negative antibodies against EBNA, and the presence of EBV viremia.
The previous report of IM in our patient was likely a different acute illness, possibly cytomegalovirus infection, given the presence of a positive IgG and a negative IgM. Other conditions can have clinical manifestations similar to EBV infection, including cytomegalovirus, human herpesvirus 6, herpes simplex virus type 1, adenovirus, Toxoplasma gondii, Streptococcus pyogenes, and acute HIV infection [1,10]. Thus, they are considered heterophile-negative mononucleosis-like illnesses or mononucleosis syndromes [1,10].
Management of IM is primarily supportive; however, other strategies have been employed, such as corticosteroid and antiviral therapy [1,2]. The use of corticosteroids for managing IM remains controversial due to conflicting data and the possibility of impairing viral clearance or associated superinfections [1,2]. Their use is supported in cases of airway compromise, idiopathic thrombocytopenic purpura, or hemolytic anemia, but corticosteroids are not recommended in uncomplicated IM cases [1,2,11]. In a multicenter study conducted by Tynell et al., the combination of acyclovir and prednisolone did not affect the duration of symptoms or lead to an earlier return to school or work [12]. Wohl and Isaacson described their success in averting emergent surgical procedures using high-dose corticosteroids in children with compromised airways [13]. Some authors suggest acute tonsillectomy as a second-line treatment for upper airway obstruction that has failed to respond to corticosteroids. However, due to the high risk of perioperative bleeding (up to 13%), it is not strongly advocated [14,15].
Antivirals have also been used in the management of IM. Acyclovir and valacyclovir have in vivo antiviral activity but no demonstrated clinical benefit [1,2]. Ganciclovir and valganciclovir are antivirals commonly used to manage EBV infection in immunocompromised patients, but no trials support their clinical efficacy [2]. Antiviral therapy cannot be the standard of care in managing IM until robust data are available from randomized controlled trials. Antivirals were not used in our patient.
Conclusions
Upper airway obstruction is a rare but potentially lethal complication of IM, and the respiratory condition can deteriorate rapidly if management is delayed. This highlights the importance of maintaining a high index of suspicion for airway compromise in high-risk individuals. Systemic corticosteroids should not be used in uncomplicated cases but should be considered in non-resolving and severe cases, and airway patency should be closely monitored. Endotracheal intubation may be required, as in the case we described. Refractory and severe cases may require a more invasive intervention, such as tonsillectomy or tracheostomy.
4-Phenylbutyric Acid Treatment Reduces Low-Molecular-Weight Proteinuria in a Clcn5 Knock-in Mouse Model for Dent Disease-1
Dent disease-1 (DD-1) is a rare X-linked tubular disorder characterized by low-molecular-weight proteinuria (LMWP), hypercalciuria, nephrolithiasis and nephrocalcinosis. This disease is caused by inactivating mutations in the CLCN5 gene, which encodes the voltage-gated ClC-5 chloride/proton antiporter. Currently, the treatment of DD-1 is only supportive and focused on delaying the progression of the disease. Here, we generated and characterized a Clcn5 knock-in mouse model that carries a pathogenic CLCN5 variant, c.1566_1568delTGT; p.Val523del, which has been previously detected in several unrelated DD-1 patients, and that presents the main clinical manifestations of DD-1 such as high urinary levels of β2-microglobulin, phosphate and calcium. Mutation p.Val523del causes partial ClC-5 retention in the endoplasmic reticulum. Additionally, we assessed the ability of sodium 4-phenylbutyrate, a small chemical chaperone, to ameliorate DD-1 symptoms in this mouse model. The proposed model would be of significant value in the investigation of the fundamental pathological processes underlying DD-1 and in the development of effective therapeutic strategies for this rare condition.
Introduction
Dent disease is a monogenic proximal tubulopathy linked to the X chromosome that is characterized by an incomplete renal Fanconi syndrome, low-molecular-weight proteinuria (LMWP), hypercalciuria, nephrocalcinosis and nephrolithiasis. Between the third and fifth decade of life, more than two-thirds of patients reach end-stage renal disease (ESRD) [1]. This disease mainly affects male children; female carriers are usually asymptomatic, although a few cases have been described in girls and young women presenting a partial or even complete phenotype due to a skewed inactivation of the X chromosome [1][2][3][4][5]. Approximately 60% of cases are caused by mutations in the CLCN5 gene (Dent disease-1, DD-1), which encodes the ClC-5 chloride/proton exchanger, located mostly in early endosomes of the proximal tubule [3,[6][7][8].
The mechanism by which mutations in the CLCN5 gene induce DD-1 remains unclear. Traditionally, and given the nature of the ClC-5 protein, the most widespread hypothesis is a defect in endosome acidification, which is essential for ligand dissociation and correct sorting along the endocytic pathway, although some studies have shown the existence of pathogenic mutations that do not impair endosomal acidification [9,10]. In the year 2000, the first Clcn5 knock-out (KO) mice were generated: the Jentsch and Guggino models, which exhibited the phenotypic characteristics of DD-1 [11,12]. The studies carried out in both models showed that the inactivation of ClC-5 caused a loss of the endocytosis process, in both its fluid-phase and receptor-mediated forms, and a loss of the expression of megalin and cubilin in the cell membrane of proximal tubule cells, which would explain the presence of LMWP [11][12][13]. On the other hand, ClC-5 function is needed to preserve suitable endosomal pH and chloride concentration in the proximal tubule (PT). Although ClC-5 mainly co-localizes with vacuolar H+-ATPase in renal cells, it is not obvious how it modulates its activity [8,14]. In these KO mouse models, the expression of vacuolar H+-ATPase is preserved, but altered expression has been reported in some DD-1 patients [11,15]. Recently, the importance of functional coupling between ClC-5 and V-ATPase for proper endosomal acidification and maturation has been confirmed [16]. Additionally, there is increasing evidence that ClC-5 and other ion exchangers of the ClC family exchange one H+ for two Cl− anions, and that ClC-5 is required to support the constant import of protons by the V-ATPase [17].
In 2010, Novarino et al. generated a Clcn5 knock-in (KI) mouse [9]. Unlike KO mice, where gene expression is null, KI mice generally express a mutated protein. These KI models are used as a complementary or alternative strategy to the KO mouse and have several applications, including the study of the physiological role of a specific disease-causing mutation in humans. In this KI model, a mutation, p.Glu211Ala, not detected in any patient, was introduced. This variant uncouples Cl−/H+ transport, thereby converting the mutant protein into a Cl− channel. This KI mouse presented the same phenotype as the KO mouse. However, normal levels of endosomal acidification were observed, which implies that damage to this mechanism does not provide a sufficient explanation of what happens in DD-1 [9].
We cannot discard other mechanisms by which loss of ClC-5 might modify membrane traffic, such as effects on lipid composition or dynamics [18]. A study that compared gene expression in dissected PTs of ClC-5 KO and WT mice described that the majority of considerably affected pathways are related to lipid metabolism, and particularly to fatty acid and cholesterol metabolism [19]. Nevertheless, transcriptional changes in lipid metabolism genes have not been established in DD-1 patients.
KO mouse models have been very useful for understanding the pathogenesis of DD-1; however, as they do not express the ClC-5 protein, their use for the development of therapeutic approaches has been limited. More recently, a DD-1 KO mouse model was generated using CRISPR-Cas9 technology and used to test kidney delivery of human CLCN5 cDNA in a lentivirus-mediated gene therapy strategy. However, the therapeutic effects initially observed disappeared after the third month and were not recovered, probably due to the activation of the immune response against the transgenic product [20].
At the moment, there is no specific therapy for DD-1 patients, only supportive measures aimed at slowing the progression of kidney disease. Eventually, patients that progress to ESRD undergo hemodialysis followed by kidney transplantation [1,2]. No drug has been developed for this rare pathology, and there are currently no pharmacological clinical trials related to DD-1 (EU Clinical Trials Register https://www.clinicaltrialsregister.eu/, NIH Clinical Center Trials https://clinicaltrials.gov/, WHO Clinical Trials Search Portal https://www.who.int/clinical-trials-registry-platform/the-ictrp-search-portal, all accessed on 20 April 2024).
Here, we present the generation and characterization of a new Clcn5 KI mouse model carrying mutation p.Val523del, which has been detected in several unrelated patients from different countries [21][22][23][24][25] and hence may be a mutation hotspot of the CLCN5 gene. This mutation causes the deletion of valine 523, located in the P helix of the ClC-5 protein, which is involved in the conformation of the ClC-5 homodimer. Therefore, it is expected that this deletion may result in destabilization of the structure or the prevention of interaction between the two monomers [26,27]. Moreover, it has been shown that mutation p.Val523del reduces ClC-5 expression at both the mRNA and protein levels [28], and electrophysiological studies of this mutation in HEK293 cells demonstrated that it resulted in the complete elimination of currents [21]. Additionally, we studied the potential effect of sodium 4-phenylbutyrate (4-PBA) in this model in order to ameliorate the effects of the disease. Various studies have shown that the use of low-molecular-weight chemical chaperones can alleviate the symptoms of several renal, cardiovascular, neurological, endocrine and respiratory diseases [29][30][31][32][33][34]. Previous assays have shown that 4-PBA treatment significantly reduced 24 h urinary albumin excretion in mice with acute kidney injury [33]. Its therapeutic potential has been established in models of different diseases in vivo and in vitro, where it has been shown to attenuate endoplasmic reticulum stress and decrease apoptosis and pyroptosis in renal tubule cells, as well as renal interstitial fibrosis [35][36][37][38]. The lack of animal models for DD-1 that express a mutant ClC-5 protein and mimic the phenotype of this disease hinders the study of disease mechanisms and the evaluation of potential new therapies. Addressing this gap would be of great value in the investigation of DD-1. Our main objective was to develop an animal model for DD-1 that would be useful for the preclinical evaluation of molecules with therapeutic potential, such as low-molecular-weight chaperones. Our KI model complements the existing ones and provides a tool for the study of possible treatments aimed at recovering the phenotype of the ClC-5 protein.
Generation of Clcn5 Val523del Knock-in Mice
Clcn5 Val523del KI mice were generated by homologous recombination in ES cells to create a mutant allele (PolyGene Transgenetics, Rümlang, Switzerland) (Figure 1A). Southern blot analysis of SphI-digested DNA derived from six potentially positive ES clones resulted in a band of 12.3 kb that confirmed the correct homologous recombination, as opposed to the 15.4 kb band corresponding to the size of the normal allele (Supplementary Figure S1). The positive clones were injected into C57BL/6 blastocysts and implanted in pseudopregnant females, and chimeric mice were generated. Chimeras were bred to germ-line Flp-expressing mice for neomycin cassette removal. Quantification, obtained as the average of two independent Western blots, showed that, unlike the KO models, the KI model expressed the mutant ClC-5 protein, and in the same amount as the WT, as shown in Figure 1D.
Genotyping
Sequencing analysis of genomic DNA of the generated mice showed that the KI mice had a deletion of the TGT nucleotides corresponding to the loss of valine 523 in the ClC-5 protein (Figure 1C). These nucleotides are present in WT mice. In the case of females, the carriers presented a double reading from the deletion onwards, which corresponds to the overlap of the WT and mutated alleles (Figure 1B).
Phenotypic Characterization of Clcn5 Val523del Knock-in Mice

The Clcn5 Val523del KI mice (KI) were fertile, and the number of offspring per litter and the proportion of females and males were normal. No survival studies were performed, since the experiment ended at four months of age, although one litter was maintained for eight months and showed no differences in aging compared to WT mice. All mice selected for characterization were males. The evolution of size was measured by weight, in which no significant changes were observed (Figure 2A). In addition, as shown by the data on water and food intake, the KI mice did not show significant differences in feeding (Figure 2B,C). Since LMWP (measured as β-2-microglobulin), increased urinary calcium and phosphate excretion and increased diuresis are some of the indicators of DD-1, these parameters were measured and compared between KI and WT mice (Figure 2D-G). The values in Figure 2A-G correspond to the mean and standard deviation.

LMWP, expressed as β-2-microglobulin excretion (Figure 2D), was 18-fold higher in the KI model than in the WT group (p < 0.0001). Furthermore, β-2-microglobulin excretion in the KI model had doubled by day 31 after the beginning of the experiment (Supplementary Table S1). KI mice excreted 75% more calcium at T31 than the WT group (p < 0.05) (Supplementary Table S1), and their calcium excretion increased by 8% as the experiment progressed (Figure 2E). At T21, we observed a significant difference in phosphate excretion in the KI group (p < 0.01) (Figure 2F) compared to the WT group, as shown in Supplementary Table S1. This trend was maintained throughout the experiment, at the end of which a 15% increase was observed, unlike in the WT group, in which the data remained stable. This difference reached its maximum at T31 (p < 0.05), when the KI mice excreted twice as much phosphate as the WT group. A progressive increase in urine output was also observed as the experiment progressed (Figure 2G). At T31, urinary excretion in the KI group was significantly higher than in the WT group, approximately 59% greater (p < 0.01). The results divided by interquartile range are shown in Supplementary Figure S2, which shows that KI mice present polyuria, LMWP, hypercalciuria and hyperphosphaturia, especially in the medium ranges, compared to WT mice.
Effect of 4-PBA on the Phenotype of Knock-in Clcn5 Val523del Mice
We then studied the effect of 4-PBA on the β-2-microglobulin excretion, diuresis, calciuria, phosphaturia, weight, food intake and water intake of the KI mice. All mice selected for treatments and control experiments were males. As in the characterization results, the values represented in Figure 3 correspond to the mean and standard deviation of the results of the KI treated (KI T) and KI untreated (KI U) mice. The results of the treated and untreated WT groups are shown in Supplementary Figure S3, where a significant increase in phosphaturia was found in the treated group (p < 0.01). No differences were observed in the weight and food intake results between the treated (KI T) and untreated (KI U) groups (Figure 3A,B). A significant reduction in water intake was observed in KI T mice at T31 compared to KI T mice at T0 (p < 0.05) (Figure 3C). The LMWP results showed a significant difference between the KI T and KI U groups at T21 (p < 0.01) and T31 (p < 0.001) (Figure 3D). At T21, β-2-microglobulin excretion decreased from 799.7 ng/24 h in the KI U group to 181 ng/24 h in the treated mice, which represents a reduction of 77%. This reduction percentage was maintained at T31, where a decrease from 936.1 ng/24 h in the KI U group to 219.8 ng/24 h in the KI T group was observed (Figure 3D). The KI T group also showed a significant reduction in β-2-microglobulin excretion over time at T21 (p < 0.001) and T31 (p < 0.001) (Figure 3D): at the end of the experiment, we observed a decrease from 675.8 ng/24 h at T0 to 219.8 ng/24 h at T31, which represents a 67% reduction. Regarding the WT mice, at T31 the excretion of the KI T group was six times higher than that of the WT T group, which contrasts with the untreated groups, where the difference was almost 20-fold (Supplementary Tables S1 and S2). We performed a classification of the individuals of both KI groups based on their level of LMWP and observed that, as the experiment progressed, the number of KI U mice with medium and high excretion values increased from 76% at T0 to 95% at T31. In the KI T group, the percentage of mice with medium and high LMWP excretion at T0 was, as in the U group, 75%; however, at T31 the percentage of mice with medium to high excretion values was 24%, which is 71 percentage points lower than in the KI U group (Table 1). Although there was no significant reduction in calcium excretion in KI T mice compared to KI U mice, a reduction in the variability of the data was observed (Figure 3E). Furthermore, when comparing the data from KI T mice with WT T mice, we observed that at T21 and T31 the significant difference in calcium excretion with respect to WT mice disappeared (Supplementary Table S2). Regarding the urinary phosphate levels shown in Figure 3F, the KI T group presented an increase in urinary phosphate excretion at T31 compared to T0 (p < 0.05). No differences were observed between the T and U groups at T31, either in the KI model or in the WT groups (Figure S2 and Figure 3F). No differences were observed in the diuresis of the KI T mice with respect to the KI U mice (Figure 3G), although we did find, as in the calcium excretion data, that the polyuria present at T0 with respect to the WT T mice disappeared at T21 and T31 (Supplementary Table S2). Supplementary Figure S4 shows the results of 4-PBA treatment in the KI model separated by interquartile range. We observed that the reduction in LMWP occurred at all levels. Calcium excretion was also significantly reduced at medium levels, whether comparing T0 and T21 within the T group (p < 0.01) or comparing the T and U groups at T21 (p < 0.001).
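As a quick arithmetic check of the reduction percentages quoted above, the following short Python sketch (illustrative only, not part of the study's analysis) recomputes them from the group means reported in the text (in ng/24 h):

```python
# Recompute the percentage reductions in beta-2-microglobulin excretion
# from the group means reported in the text (ng/24 h).
def pct_reduction(reference, reduced):
    """Percent reduction of `reduced` relative to `reference`."""
    return 100.0 * (reference - reduced) / reference

t21_between = pct_reduction(799.7, 181.0)    # KI U vs. KI T at T21
t31_between = pct_reduction(936.1, 219.8)    # KI U vs. KI T at T31
ki_t_within = pct_reduction(675.8, 219.8)    # KI T, T0 vs. T31

print(round(t21_between), round(t31_between), round(ki_t_within))
```

Rounded to the nearest integer, this reproduces the reported ~77% between-group reductions at T21 and T31 and the 67% within-group reduction in the KI T mice.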
The MANOVA analysis of the characterization data showed a significant difference between the two groups (p = 0.017 *). The MANOVA analysis of the 4-PBA treatment results also showed a significant difference between the KI T and KI U groups (p = 0.01 *). This difference was greatest in the β-2-microglobulin data, for which the p-values at T21 and T31 were 0.005 ** and 0.008 **, respectively.
Discussion
DD-1 lacks therapeutic options beyond those aimed at palliating its symptoms, so it is important to develop new strategies aimed at correcting or reducing the molecular effects caused by CLCN5 mutations. As observed in the Western blot results presented in this manuscript and in the studies carried out by Durán et al. [28], the p.Val523del mutation affects the intracellular localization of the protein, but not its expression. The two existing KO mouse models for DD-1 do not express the ClC-5 protein [11,12]; therefore, they have not been used to test potential drug therapies. In the present study, we have generated a KI mouse of the C57BL/6 strain carrying the Clcn5 deletion p.Val523del, located in exon 10. This KI mouse expresses a mutated ClC-5 protein, so it could be useful for studying the molecular effects caused by the mutation and for testing pharmacological treatments. We first verified that this mouse presented the phenotypic characteristics of DD-1. LMWP, which affects 100% of DD-1 cases, as well as hypercalciuria and hyperphosphaturia, present in 80% and 30% of patients, respectively, were considered for this characterization [2,39]. Body weight, the amount of food and water ingested, and the amount of urine excreted in 24 h were also monitored.
LMWP was calculated by analyzing β-2-microglobulin in 24 h mouse urine. This low-molecular-weight protein is the most widely used for the calculation of LMWP in humans [40]. Throughout the experiment, an increase in the β-2-microglobulin excreted by the KI mice was observed, indicating the progression of the disease. At 12 weeks of age, 75% of the KI mice presented LMWP. DD-1 is a pediatric disease, so all mice would be expected to have debuted with LMWP before adulthood. This lower percentage may be due to the fact that C57BL/6 mice are often resistant to developing proteinuria [41]. This variability could be a limitation for the applicability of our findings to other mouse models or to DD-1 patients. At 16 weeks of age, the percentage of mice with LMWP increased to 95%; only one mouse presented a normal LMWP value, although in the upper range. On average, within 31 days, β-2-microglobulin excretion was twice as high as at the beginning of the experiment. Compared to WT mice, a 20-fold higher excretion of β-2-microglobulin was observed in the KI group at the end of the experiment. The great variability observed in the data, which was around 150%, should be noted as a limitation of this study. However, several studies have shown that this variability also occurs in humans [40,42]. Likewise, the quantitative data on clara cell protein (CC16) excretion in the Guggino KO model also showed high variability [12]. Additionally, the total proteinuria calculated in mice of the C57BL/6 strain showed a deviation of practically 100% [43]. Stechman et al. attributed this variability in murine models to random factors such as social hierarchy, which can lead to aggression and therefore a higher level of stress, or to changes in the behavior and physiology of the animals when placed in the metabolic cages, even with a previous period of acclimatization, a factor that has already been reported in other studies [44,45]. Furthermore, other stress factors, such as handling of the animals by the researcher, must also be considered [46][47][48]. The values of LMWP observed in our study are not comparable to others described in the literature, since in both the Clcn5 KO models and in other models on the C57BL/6 strain, LMWP was measured using other low-molecular-weight proteins such as CC16, non-quantitative measures of β-2-microglobulin, or simply as total proteinuria [12,41,43].
The KI mouse group presented polyuria, with no significant differences in the amount of water ingested. As in the β-2-microglobulin data, polyuria increased as the experiment progressed, unlike in the WT group, in which the values remained stable. An upward progression was also observed in the calciuria and phosphaturia data of the KI group, in addition to a significant difference in both cases compared to the WT group. An increase in tubular PTH can induce an increase in 1-α-hydroxylase and therefore an elevation in calcitriol, as observed in KO models and in patients with DD-1 [11,12]. At the intestinal level, calcitriol would increase calcium absorption, so it would be expected that this increase would produce hypercalciuria and nephrolithiasis. However, a decrease in 25(OH)D endocytosis and a very significant increase in its urinary excretion were also observed in KO mice, so it is believed that the balance between decreased 25(OH)D endocytosis and stimulation of 1-α-hydroxylase, both produced by a loss of ClC-5 function, would determine the presence or absence of hypercalciuria, and that this balance could depend on genetic or nutritional factors [3,11,12,49,50]. The NHE3 exchanger also appears to play an important role in receptor-mediated endocytosis [51]. In this case, a reduction in its expression was observed in both KO models, as well as in PTH-induced endocytosis in the Jentsch model, and could also explain the cases of metabolic alkalosis and hypokalemia in patients with DD-1 [3,52]. On the other hand, in the Jentsch KO model, hyperphosphaturia was associated with decreased NaPi2a expression. In KO models, PTH was found at normal levels in serum, but an approximately 1.7-fold increase in urine was observed compared to WT mice, owing to the lack of internalization of PTH into the cell caused by the loss of megalin, which led to the conclusion that phosphaturia in KO mice could possibly be caused by decreased endocytosis of filtered PTH [3,[11][12][13]49,52].
Compared to WT mice, our KI mice exhibited LMWP, hypercalciuria, hyperphosphaturia and polyuria. These results are comparable to the Guggino KO model [12], in which the same phenotypic characteristics were observed, and unlike the Jentsch model [11], which did not present hypercalciuria. These data confirm that our KI Clcn5 Val523del mouse is a good model for the study of DD-1, since it presents the main clinical characteristics of the disease, especially LMWP. In addition, our model could be useful for the study of possible therapeutic options that help mitigate the disease symptoms.
The deletion of valine 523 in ClC-5 causes, at least partially, an increase in ER stress due to the partial retention and accumulation of the mutated protein [28]. This mutation can also alter the endocytosis pathway and the ion transporter function of the protein. A broader focus on these pathways could provide a more comprehensive understanding of DD-1 pathology and treatment responses. Chemical chaperones are molecules capable of stabilizing a target protein, allowing it to exit the ER and thus reducing reticular stress. In this study, the therapeutic effect of one of these chaperones, 4-PBA, was evaluated in the KI Clcn5 Val523del model. 4-PBA has been approved by the European and American drug agencies for the treatment of metabolic diseases related to the urea cycle, since it reduces ammonia and glutamine concentrations in plasma [53]. On the other hand, it has also been observed in some studies that 4-PBA helps stabilize protein folding [31,33,38]. Its main mechanism of action is the interaction of its hydrophobic regions with the exposed hydrophobic segments of misfolded proteins, preventing protein aggregation, promoting protein folding and preventing ER stress [54]. Evidence shows that 4-PBA modulates the misfolded protein response and acts as a histone deacetylase inhibitor, modulating chromatin remodeling and transcription by increasing histone acetylation [55] and regulating the expression of anti-apoptotic genes [56]. Its therapeutic potential has been widely explored as an anti-cancer agent and in diseases caused by misfolded proteins, such as cystic fibrosis, thalassemia, spinal muscular atrophy, type 2 diabetes mellitus, amyotrophic lateral sclerosis, Huntington's disease, Alzheimer's disease and Parkinson's disease [54,[57][58][59][60][61][62]. In the kidney, it has been shown to attenuate ER stress, decrease apoptosis in renal tubule cells and renal interstitial fibrosis [35,36], and reduce albuminuria in mice with acute kidney injury [33]. Therefore, we hypothesized that 4-PBA could be useful to reduce LMWP in DD-1 patients with mutations that cause retention of the ClC-5 protein in the ER.
Drug repositioning offers several advantages over the search for new molecules to treat a disease. One of the biggest problems with the discovery of new molecules is their uncertain safety profile [63]. According to data from Orphanet, only 10% of trials with new molecules show a positive therapeutic effect on the disease, so the development of new drugs aimed at treating rare diseases, given their low prevalence, represents a very high economic investment. In this case, 4-PBA has already shown its safety in preclinical and human trials. In addition, the time required for its use as a possible treatment for DD-1 would be significantly reduced, and the large financial investment required by new drugs, both in their development stage and in phase I and II human trials, would not be necessary [63].
We observed a 77% reduction in the excretion of β-2-microglobulin in the KI T group, which was maintained until the end of the experiment. In addition, the data variability was reduced in these mice, going from 150% in the untreated group to 100% in the treated group. On the other hand, a division by degree of LMWP was performed using the interquartile range, in which the same behavior was observed in all groups, so there was no difference in the action of 4-PBA depending on the degree of LMWP. Comparing the KI and WT groups, LMWP is still observed in the KI T mice, but this difference goes from being twenty times higher in the untreated group to six times higher in the treated group. As a percentage, 24% of the KI T mice exhibited medium or high levels of β-2-microglobulin excretion at T31, in contrast to the 95% observed in the untreated group. This reduction in urinary β-2-microglobulin may be due to two factors: 4-PBA may be helping the correct folding of ClC-5 and promoting its arrival at its destination, or, as seen in previous studies, treatment with 4-PBA produces a cellular increase in megalin and cubilin levels, both in kidney lysate and in the plasma membrane [33]. It is possible that this increase in megalin and cubilin is due to the chaperone effect, although Mohammed-Ali et al. found not only an increase in these receptors in the cell membrane but also an increase in mRNA levels, so further studies would be necessary in this regard [33]. The increase in megalin and cubilin could compensate for the loss of these proteins that occurs in DD-1 due to the lack of recycling of endosomes towards the membrane caused by the malfunction of ClC-5 [9,[11][12][13]. In turn, this would produce an increase in the absorption of low-molecular-weight proteins, so treatment with 4-PBA could be useful for different cases of DD-1, not just those in which the protein is retained in the ER.
Regarding urine excretion, no significant differences were observed between the KI T and KI U groups, although the diuresis data of the KI T mice do not show the tendency to increase over time that is seen in the KI U group; the KI T data remained stable throughout the experiment. Furthermore, in the comparison of the KI T mice with the WT T group, we observed that the KI mice presented polyuria at the beginning of the experiment and that this significant difference was lost as the treatment with 4-PBA progressed. The results of calcium excretion were similar to those of diuresis. No significant differences were observed between the KI T and KI U groups, although the data did stabilize and the tendency to increase over time presented by the KI U mice was lost. We have no suitable explanation for this, but perhaps it has to do with the fact that hypercalciuria is not a constant feature of DD-1. In addition, although it was not significant, we observed a decrease of approximately 30% in the data at T21 that was maintained at T31. It is also worth highlighting the decrease in the dispersion of the data of the KI T group. On the other hand, as in the diuresis data, when comparing the KI T group with the WT T group at T0, the KI mice presented a significant increase in calcium excretion with respect to WT. This increase disappeared with treatment at T21 and T31. Also, the two treated groups, KI and WT, started with lower phosphaturia values than the untreated groups. In samples T21 and T31, an increase in excretion was observed, with values equaling those of the untreated groups. To date, no other study has described the effect of 4-PBA on these ions in urine.
In summary, our KI mouse model shows the main characteristics of DD-1 and is therefore a suitable model for testing therapeutic drugs. KI mice treated with 4-PBA exhibit significantly decreased β-2-microglobulin excretion. Further work will be needed to determine whether treatment with 4-PBA reduces ER retention of the mutated ClC-5 protein in the kidney of KI mice and to establish whether the reduction in LMWP is directly related to a more efficient endocytosis process. Although interspecies variations in drug metabolism should be considered and clinical trials should be conducted to confirm the efficacy of 4-PBA in DD-1 patients, our results suggest that 4-PBA could be useful in slowing the progression of DD-1 in patients with the CLCN5 variant p.Val523del. Furthermore, we propose that this chaperone could be suitable for DD-1 patients with other mutations that result in partial or total retention of the ClC-5 protein in the ER.
Preparation of the Targeting Vector and Generation of Clcn5 Val523del Knock-in Mice
C57BL/6 mice carrying the c.1566_1568delTGT; p.Val523del mutation in the Clcn5 gene were generated by PolyGene Transgenetics (Rümlang, Switzerland) using gene targeting in ES cells. The targeting vector contained two Clcn5 homologous regions with the mutation in exon 10 and an FRT-flanked neomycin resistance cassette (Neo) inserted in intron 9 (Figure 1A). For the construction of this vector, the Clcn5 homology regions were amplified by PCR from a BAC DNA template (RP23-440N3), and the mutation was introduced by overlap extension PCR. The construct was verified by DNA sequencing. The targeting vector was then linearized with XhoI and electroporated into C57BL/6 ES cells. G418 was used to select for stable transfection. PCR, Southern blotting and sequencing were used to screen and validate clones. Positive ES-cell clones were injected into C57BL/6 blastocysts to generate chimeric mice. Lastly, the FRT-flanked neomycin cassette was excised by Flp recombination upon breeding with germ-line Flp-expressing mice. For this work, the principle of the three Rs was followed. Males and females from each litter were separated, with no more than six animals per cage at the time of the experiment. All mice were housed in ventilated cages in rooms with controlled temperature and humidity and a 12 h light/12 h dark cycle. Water and food were supplied ad libitum. These procedures were performed in the laboratory animal facility of the University of La Laguna. The study protocol and animal procedures were approved by the Comité de Ética y Bienestar Animal (CEIBA) of the University of La Laguna and the Dirección General De Ganadería de la Consejería de Agricultura, Ganadería, Pesca y Aguas del Gobierno de Canarias.
Genotyping
Genotyping of mice was performed between 4 and 6 weeks of age by standard polymerase chain reaction (PCR) procedures on DNA derived from ear tagging. Both DNA extraction and amplification were performed using the AccuStart™ II Mouse Genotyping Kit (Quanta Biosciences, Beverly, MA, USA) following the manufacturer's instructions and the following PCR primers: E10genotF: 5′-ACTGACTCCCATGCTTTGCT-3′ and E10genotR: 5′-CATCACATCCATTGCCAGAG-3′. The amplicons obtained were purified with the NucleoSpin™ Gel and PCR Clean-up Kit (Macherey Nagel, Dueren, Germany) and sent to Macrogen Spain for sequencing. Sequences were compared in BLAST with the reference sequence for Clcn5 (AF134117.1).
Western Blot
The expression of the murine ClC-5 protein in the kidney was analyzed by Western blot to confirm its expression in the Clcn5 Val523del KI model. Total proteins were obtained from kidney lysates from WT and KI mice and separated by 10% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). Proteins were transferred to a 0.45 µm nitrocellulose membrane (Bio-Rad, Hercules, CA, USA). The membrane was blocked with 5% nonfat milk powder in TBS buffer supplemented with 0.1% Tween 20 (Sigma-Aldrich, St. Louis, MO, USA) for 1 h at room temperature under shaking. To detect the protein, a ClC-5 polyclonal antibody (ThermoFisher Scientific, 1:1000, Waltham, MA, USA) was used. Actin was used as a constitutive control protein and was detected using the β-Actin Mouse mAB antibody (Cell Signaling Technology, 1:1000, Danvers, MA, USA). The membrane was analyzed in a Fusion Solo S chemiluminescence chamber (Vilber Lourmat, Collégien, France) using EvolutionCapt software v18.02 (Vilber Lourmat, Eberhardzell, Germany). Quantification of the bands was performed with the Fiji software (https://imagej.net/, accessed on 3 November 2023) [64] using the ROI Manager tool. The ratios represented are between the WT mouse and the KI mouse for ClC-5 and for actin; the ratio between ClC-5 and actin was also obtained in both cases. The data reported are the average of the quantification of two independent Western blots. A t-test was performed to confirm whether significant differences existed.
Phenotypic Characterization
All mice selected for phenotypic characterization were males. At three months of age (T0), the urine excreted in 24 h by 13 WT and 21 Clcn5 Val523del KI mice was collected individually using metabolic cages, with 24 h of prior acclimatization; the amounts of water and food ingested in 24 h were also recorded. This collection was repeated 21 (T21) and 31 (T31) days later. Measurements related to the main phenotypic characteristics of DD-1 were performed on the urine samples: β-2-microglobulin excretion using the Mouse Beta-2-Microglobulin ELISA Kit (Abcam, Cambridge, UK); calcium and phosphate using the Calcium Colorimetric Assay Kit (Sigma-Aldrich, St. Louis, MO, USA) and Phosphate Assay Kit (Sigma-Aldrich, St. Louis, MO, USA), respectively; and creatinine in serum and urine using the Mouse Creatinine ELISA Kit (My BioSource, San Diego, CA, USA). All assays were performed following the manufacturers' instructions.
Treatment with 4-PBA
All mice selected for treatments and control experiments were males. Treatment with 4-PBA (Santa Cruz Biotechnology, Paso Robles, CA, USA) was administered in the drinking water for 31 days to 20 WT and 29 Clcn5 Val523del KI mice randomly assigned to four groups: WT and KI mice receiving treatment (treated groups) and WT and KI mice receiving only water (untreated groups). Due to the variability in the doses used in previous studies, an intermediate dose of 250 mg/kg/day was established [65,66]. Serum and 24 h urine samples were taken to re-determine the values of the characteristic clinical parameters of DD-1 mentioned above.
Statistical Analysis
Due to the nature of this experiment, we sought to obtain sufficient statistical power to ensure the results. The sample size was calculated using the G*Power application v3.1.9.7 (Heinrich-Heine-Universität, Düsseldorf, Germany) [67] (https://www.psychologie.hhu.de/arbeitsgruppen/allgemeine-psychologie-und-arbeitspsychologie/gpower, accessed on 3 November 2023). Due to the lack of data from previous studies in this regard, both the mean and the variability of the groups were fixed with respect to the results obtained in the analysis of a group of five mice from each group (WT and KI). Statistical analysis of the results was performed with GraphPad Prism 9 software v9.0.0 (GraphPad Software, San Diego, CA, USA) (https://www.graphpad.com/, accessed on 3 November 2023). Normality tests were performed for all data. Subsequently, based on the results of the normality tests, parametric (one-way ANOVA) or non-parametric (Kruskal-Wallis) tests were performed to compare groups. In addition, a multivariate analysis of variance (MANOVA) was performed using SPSS software v29.0 (IBM, Armonk, NY, USA) to analyze the difference between groups and the effect of 4-PBA on them. The variables β-2-microglobulin, phosphaturia and calciuria were included in this analysis. On the other hand, the β-2-microglobulin data from treated and untreated KI mice were divided into low, medium or high excretion groups established by the 25th and 75th percentiles. The statistical power of this analysis was 0.93.
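The quartile-based grouping described above can be sketched as follows. This is an illustrative implementation (not the authors' code): values below the 25th percentile are labeled "low", values above the 75th percentile "high", and everything in between "medium". The example excretion values are hypothetical, and note that different statistics packages compute percentiles with slightly different interpolation rules.

```python
from statistics import quantiles

def classify_by_quartiles(values):
    """Label each value as low (< Q1), high (> Q3) or medium (otherwise)."""
    q1, _, q3 = quantiles(values, n=4)  # Q1, median, Q3 (exclusive method)
    return ["low" if v < q1 else "high" if v > q3 else "medium"
            for v in values]

# Hypothetical beta-2-microglobulin excretion values (ng/24 h):
example = [50, 120, 300, 450, 800, 950, 1500, 2000]
print(classify_by_quartiles(example))
```

The group percentages reported in Table 1 would then follow from counting the labels within each treatment group at each time point.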
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijms25158110/s1. Funding: This study was supported by Asociacion de la Enfermedad de Dent (ASDENT) (D18-003, Fundación Instituto de Investigación Sanitaria de Canarias, FIISC) and by grants PI17/00153 and PI20/00652 co-financed by the Instituto de Salud Carlos III-Subdireccion General de Evaluacion y Fomento de la Investigacion and the European Regional Development Fund "Another way to build Europe". G.M.-E.'s participation was supported by the pre-doctoral training fellowship from the M-ULL 2019 program of the Universidad de La Laguna.
Institutional Review Board Statement: The animal study protocol was approved by the Comité de Ética y Bienestar Animal (CEIBA) of University of La Laguna and the Dirección General De Ganadería de la Consejería de Agricultura, Ganadería, Pesca y Aguas del Gobierno de Canarias (protocol codes CEIBA2017-0253 and CEIBA 2022-3190).
Informed Consent Statement: Not applicable.
Figure 1. Generation of the Clcn5 Val523del knock-in. (A) Schematic representation of the gene targeting strategy. The diagram shows the wild-type (WT) Clcn5 locus, the targeting vector and the targeted locus after homologous recombination. The targeted construct contained the Clcn5 homologous regions (exons 8 to 12 and flanking exons) with the TGT 1566-1568del mutation (*) in exon 10 and an FRT-flanked neomycin cassette (Neo) inserted in intron 9. After homologous recombination, this cassette was excised by Flp recombination. (B) Examples of electropherograms of direct DNA sequencing of PCR products containing the deletion site in exon 10 of a WT mouse, a KI mouse and a carrier female, respectively. The arrow indicates the region of overlap between WT and mutant sequences. (C) Deletion of nucleotides 1566-1568 (TGT) leads to the loss of valine 523 in the ClC-5 protein. Nucleotides in red are those affected by the deletion. (D) Western blot of ClC-5 expression in kidney and quantification.
Figure 3. Results of 4-PBA treatment of the Clcn5 Val523del mouse model. (A-G) Comparison of weight, food and water intake, LMWP, calciuria, phosphaturia and diuresis between KI treated (KI T) and untreated (KI U) mice. The bars correspond to the means and standard deviations. Striped and unstriped bars correspond to KI T and KI U mice, respectively. * and ** correspond to p < 0.05 and p < 0.01 vs. untreated, or KI T T0 vs. KI T T31.
Table 1. Percentage of KI treated and untreated mice classified based on their β-2-microglobulin excretion. The percentages at T0, T21 and T31 are shown. This classification was made by analysis of quartiles, with low excretion being below the first quartile, average excretion between the first and third quartiles, and high excretion above the third quartile.
Induction and mode of action of suppressor cells generated against human gamma globulin. II. Effects of colchicine.
The ability of colchicine (Col) to interfere with suppressor cells specific for the soluble protein antigen human gamma globulin (HGG) has been examined. This interference may be the mechanism of the adjuvanticity promoted by Col. When injected into A/J mice at the appropriate time and concentration, both Col and cyclophosphamide promoted an adjuvant increase in the plaque-forming cell response to 100 micrograms of immunogenic, aggregated HGG. Col abrogated both the induction of suppressor cells when injected within 3 h of tolerization with deaggregated HGG (DHGG) and the expression of previously induced suppressor cells when injected with the antigenic challenge. Interference with the generation and expression of antigen-specific suppressor cells had no detectable effects on the immunologic unresponsive state to HGG. Col did not interfere with the induction of tolerance at a dose (1 mg/kg) that abolished the generation of suppressor cells. Furthermore, the absence of colchicine-sensitive-suppressor cells during the establishment of tolerance had no observable effect on the duration of unresponsiveness in either helper T- or B-lymphocyte populations. Finally, Col was not able to terminate the unresponsive state established by DHGG even when responsive splenic B cells could be demonstrated in tolerant animals. These data indicate that suppressor cells are not required for the establishment and maintenance of the unresponsive state to this antigen.
It has become increasingly apparent that the regulation of immune responsiveness is accomplished by a variety of processes. Suppressor cells have recently emerged as a major element of this regulation. These cells may act either by directly suppressing responsive cells or by liberating soluble-suppressor factors. Not only have suppressor cells been found to be active in the regulation of ongoing immune responses, but they have also been found in association with the tolerant state to a number ofT-dependent and T-independent antigens, including synthetic antigens (1,2), polysaccharide antigens (3), and soluble protein antigens, human serum albumin (HSA) 1 (4) and human gamma globulin (HGG) (5)(6)(7). Although the role of suppressor cells in the establishment of tolerance has been convincingly demonstrated in some unresponsive states (8)(9)(10)(11), this relationship remains to be proven in many systems. Several workers have reported that unresponsiveness to protein antigens can be established in the absence of detectable suppressor ceils in normal animals (12)(13)(14)(15)(16)(17), neonatal mice (18), adult thymectomized, X-irradiated, bone-marrow reconstituted mice (4,13), and athymic nude mice (19)(20)(21). Although antigen-specific-suppressor cells may not be necessary for the establishment or maintenance of these tolerant states, their presence in antigen-primed animals (22,23) suggests that they may play a regulatory role in immune responsiveness to these protein antigens.
The presence of suppressor cells and the degree of their activity can be modulated by a number of factors. Suppressor cells acting directly upon helper T cells can be induced by other T cells (24). In addition, HGG-specific suppressor cells can be inhibited or inactivated by treatment with antisera specific for surface antigens present on suppressor cells, such as antigens coded for in the I-J region of H-2 (25), and with antisera directed against cell-surface antigens on subsets of T lymphocytes such as Lyt-2 (25, 26) and Thy-1 (6, 25) antigens. Pharmacological agents known to serve as adjuvants may increase immune responsiveness by inhibiting the generation or action of suppressor cells. Agents that have been demonstrated to interfere with suppressor cells affecting humoral or cellular immune responses include cyclophosphamide (27-31), corticosteroids (32-35), indomethacin (36), and colchicine (Col) (37). Sublethal doses of irradiation can also increase immune responsiveness (38-41), possibly by disturbing cellular division of radiosensitive suppressor cells.
Experimental abrogation of suppressor-cell activity allows investigation of the role of these cells in the regulation of the immune response and the establishment and maintenance of immunologic unresponsiveness in the absence of suppressor cells. The goals of this paper are to demonstrate pharmacological interference with both the induction and expression of antigen-specific-suppressor cells for the soluble protein antigen HGG by Col and to explore the resulting tolerant state to HGG. The effects of this agent are examined in hopes of elucidating the cellular and subcellular dynamics of the generation and action of suppressor cells and the induction and duration of immunologic unresponsiveness to a protein antigen in the absence of these cells.
Materials and Methods
Animals. Male A/J mice were purchased from The Jackson Laboratory (Bar Harbor, Maine) at 4 to 5 wk of age. They were maintained on Wayne Lab Blox F6 (Allied Mills, Inc., Chicago, Ill.) and acidified water ad lib. All mice were housed five to a cage. Mice ranging in age from 8 to 9 wk were utilized in these experiments. These mice had a mean body weight of 24.4 ± 0.4 g.
Antigens and Immunization. Cohn fraction II lot RC-104 of human plasma was obtained through the courtesy of the American Red Cross National Fractionation Center with the partial support of the National Institutes of Health grant HE 13881 HEM. Human gamma globulin was purified from this material by column chromatography on DEAE-cellulose in 0.01 M phosphate buffer, pH 8.0. Immunogenic, heat-aggregated HGG (AHGG) was prepared from the DEAE-purified HGG as described previously (42) by using a modification of Gamble's method (43). Mice received a primary injection of 400 µg of AHGG i.v. into the lateral caudal vein, followed in some cases by a secondary injection of the same amount of antigen intraperitoneally 10 d later. Sheep erythrocytes (SRBC) were purchased from Colorado Serum Co., Denver, Colo. A primary injection of 10⁸ SRBC was given 6 d before assessment of plaque-forming cells (PFC).
Chemicals. Phenol-extracted bacterial lipopolysaccharides (LPS) from
Induction of Tolerance and Suppressor Cells. Tolerogenic, deaggregated HGG (DHGG) was prepared by ultracentrifugation of DEAE-purified HGG for 150 min at 150,000 g to remove aggregated material as previously described (17). The upper quarter of the centrifuged solution was diluted and injected into mice i.p. Each mouse received 2.5 mg of DHGG. The spleens of mice tolerized 10 d previously served as the source of suppressor cells.
Irradiated Recipients. Recipients for adoptive transfer were irradiated 2-3 h before reconstitution. Mice placed in an aluminum chamber in a Gamma Cell 40 small animal irradiator (Atomic Energy of Canada, Ltd., Ottawa, Canada) received 900 roentgen (R) of whole body irradiation from a ¹³⁷Cs source emitting a central dose of 106 R per min. Reconstituted recipients received 100 µg of gentamicin (Schering Pharmaceutical Corp., Kenilworth, N. J.) i.p. diluted in 2.7% glucose in saline on the day of irradiation and again 3 d later, and were caged in groups of two to avoid the problem of early irradiation death, presumably due to bacterial infection.
Adoptive Transfer Assay for Suppression. The spleen cells of normal and tolerant mice that were adoptively transferred into lethally irradiated animals were obtained as described previously (20). Briefly, spleens were aseptically removed and sterilely grated through 350 µm mesh stainless steel screens into balanced salt solution (BSS) supplemented with 100 U of penicillin and 100 µg of streptomycin per ml. The cell suspensions were washed three times in BSS, and 70 × 10⁶ viable spleen cells were injected i.v. via a lateral caudal vein into irradiated recipients. Animals receiving more than one cell type received sequential injections. Primary antigenic challenge was delivered i.v. immediately after cellular reconstitution and was followed 10 d later by a secondary injection of AHGG i.p. Five days after the secondary challenge, the recipients' spleens were removed and individually assayed for PFC.
Statistical Methods. Results of the plaque assay are expressed as the mean number of PFC per 10⁶ spleen cells ± the standard error corrected for the background response to the indicator erythrocytes. Data were analyzed statistically with Student's t test. The percent suppression was determined as follows:
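The equation itself did not survive text extraction. A conventional form for this kind of adoptive-transfer suppression assay, consistent with the group comparisons described under Results (recipients of tolerant plus normal spleen cells versus recipients of normal spleen cells alone), would be the following; note that this reconstruction is an assumption, not the authors' verbatim formula:

```latex
% Percent suppression in the adoptive-transfer assay.
% Reconstruction (assumption): the original equation was lost in extraction;
% this is the standard form comparing recipients of tolerant + normal spleen
% cells with recipients of normal spleen cells alone.
\[
\%\ \text{suppression} =
\left( 1 -
\frac{\text{PFC}/10^{6}\ \text{cells (tolerant} + \text{normal cell recipients)}}
     {\text{PFC}/10^{6}\ \text{cells (normal cell recipients alone)}}
\right) \times 100
\]
```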
Results
Adjuvant Activity of Pharmacological Agents. To assess the effectiveness of two pharmacological agents previously reported to mediate adjuvant activity in other immunologic systems, mice were injected with 2.5 mg of cyclophosphamide 2 d before, or 25 µg of Col at the time of, immunization with either 100 or 400 µg of antigenic AHGG. Some mice also received 10⁸ SRBC. The results of the PFC responses of these animals 6 d after antigenic exposure are presented in Table I. Both of these agents stimulated a marked increase in the number of PFC in spleens of animals given the lower dose of HGG. However, the response of mice to 400 µg of AHGG was less susceptible to augmentation by Col than was the response to 100 µg of antigen. The primary response to SRBC was also increased by the preadministration of cyclophosphamide. Thus, both cyclophosphamide and Col promoted an adjuvant increase in the primary PFC response in vivo to the soluble protein antigen HGG under appropriate conditions.
Temporal Relationship between Col and Antigen Injection.
The possibility that the failure to detect an increase in PFC to HGG in animals given Col and the higher dose of 400 µg of AHGG was a result of the inappropriate timing of the Col injection was investigated. Mice immunized with either 100 or 400 µg of AHGG were injected with Col at various times ranging from 1 d before antigen through 5 d after antigen. As illustrated in Figs. 1 and 2, 400 µg of AHGG alone stimulated a greater PFC response at 6 d after immunization than did 100 µg of antigen. Furthermore, the injection of Col from 1 d before to 1 d after 100 µg of antigen stimulated a marked increase in the number of PFC detected to HGG (Fig. 1). On the contrary, Col was not able to exert an adjuvant effect in animals receiving the higher dose (400 µg) of antigen regardless of the temporal relationship between the injection of antigen and Col (Fig. 2). The data in Fig. 1 are in agreement with previous reports indicating that Col must be given in close proximity to antigenic challenge to serve as an effective adjuvant (46, 47). Additionally, these data suggest that the injection of Col within 1 d of the assessment of the immune response does not significantly decrease the number of antigen-specific PFC detected. Therefore, both the antigenic dose and the temporal relationship between the injection of antigen and Col appear to be crucial for the expression of colchicine-mediated adjuvanticity.
Abrogation of the Generation of Suppressor Cells to HGG by Col. To investigate the possibility that Col may act as an adjuvant for soluble protein antigens by interfering with the generation of antigen-specific suppressor cells, as previously demonstrated for the synthetic copolymer L-glutamic acid⁵⁰-L-tyrosine⁵⁰ (37), the effects of Col upon HGG-specific suppressor cells were assessed. The basic protocol for measuring suppressor-cell activity to HGG involved the adoptive transfer of 70 × 10⁶ viable spleen cells containing suppressor cells from mice tolerized 10 d previously with DHGG together with 70 × 10⁶ viable spleen cells from normal animals into lethally irradiated, syngeneic recipients (17, 48). Control groups received either normal or tolerant spleen cells alone. In the experiments assessing interference with the induction of suppressor cells, all recipients were challenged with 400 µg of AHGG on the day of cell transfer and again 10 d later, and the PFC response was assessed 5 d after the secondary challenge.
Fig. 2. Lack of adjuvanticity of colchicine with high antigen dose. The data were obtained as described in Fig. 1 except that 400 µg of AHGG was injected on day 0.
To investigate the ability of Col to abrogate the generation of suppressor cells during the induction of tolerance to HGG, mice were injected with 2.5 mg of tolerogen (DHGG) followed 3 h later by 25 µg of Col injected i.v. (Table II and Fig. 3). As demonstrated in Table II, the injection of Col had no effect on the induction of tolerance to HGG. Spleen cells from mice treated with Col and DHGG (group 1) were unresponsive on adoptive transfer, as were cells from animals treated with tolerogen alone (group 5). The injection of DHGG generated suppressor cells as evidenced by the depressed response of recipients receiving both tolerant (DHGG-treated) and normal spleen cells (group 4) when compared with the response of the recipients of normal spleen cells alone (group 3). However, mice injected with Col 3 h after the injection of DHGG possessed no detectable suppressor cells in their spleens as illustrated by the response of animals receiving spleen cells from both Col-treated tolerant and normal mice (group 2). On the contrary, the recipients of cells from both DHGG- and Col-treated mice and normal mice showed responses equal to (Table II) or exceeding (Fig. 3) those of recipients of normal cells alone. The demonstration in Fig. 3 of heightened responsiveness in the recipients of cells from normal and tolerant mice not containing detectable suppressor cells, compared to recipients of normal cells alone, is in agreement with previously published data on the responsiveness of mixtures of tolerant but nonsuppressive spleen cells and normal cells (17).
Interference with the Expression of Suppressor Cells by Col. The demonstration that the injection of Col 3 h after tolerization inhibits the generation of HGG-specific suppressor cells was extended by assessing the ability of Col to interfere with the expression of suppressor cells. To investigate the effects of Col on the activities of suppressor cells previously generated by the injection of DHGG, spleen cells from animals tolerized 10 d previously were transferred together with spleen cells from normal mice into irradiated recipients. These recipients were injected with 400 µg of AHGG i.v., and 3 h after this antigenic challenge one-half of the recipients were injected with 25 µg of Col. The secondary response of these mice was assessed by injecting 400 µg of AHGG alone 10 d after the primary injection and plaquing 5 d after this secondary challenge. Table III demonstrates that the addition of Col to the primary antigenic challenge abolishes the activity of the suppressor cells present in the tolerant spleen-cell population (group 2). However, Col injected at this time does not terminate the unresponsive state in tolerant spleen cells transferred 10 d after tolerization (group 1). The adjuvant effect of Col can be detected by comparing the responsiveness of transferred normal spleen cells challenged with antigen and Col (group 3) or challenged with antigen alone (group 6). This adjuvant increase is presumably due to the inhibition of the generation of suppressor cells in the normal spleen-cell population. However, the increased responsiveness of normal spleen cells challenged with antigen and Col does not account for the abolition of suppressor-cell activity found in colchicine-treated recipients of normal and tolerant spleen cells.
Inability of Col to Interfere with the Induction of Tolerance in T- or B-Spleen Cells.
The observations that Col can act as an adjuvant and can inhibit both the induction and the expression of suppressor cells raise the possibility that Col might interfere with the induction of immunologic unresponsiveness. However, the data presented above indicate that the injection of Col 3 h after DHGG does not interfere with the generation of unresponsiveness. In Fig. 3 and Table II, spleen cells from mice injected with DHGG and colchicine remained unresponsive when transferred into irradiated recipients and challenged twice with antigen. Similarly, the unresponsive state to HGG was not perturbed by the incorporation of Col into the antigenic challenge of tolerant mice (Table III). The effect of Col on the induction of tolerance was further examined in intact animals with particular emphasis on splenic B cells. Mice injected with DHGG and Col 10, 30, or 65 d before were challenged with 400 µg of AHGG i.v. followed 3 h later by 25 µg of LPS, and were plaqued 6 d after antigenic challenge. The injection of LPS with antigen into mice possessing tolerant T cells but responsive B cells leads to the generation of PFC to the injected antigen (49). Therefore, this protocol would allow the demonstration of responsive splenic B cells at a time when splenic T cells are unresponsive to this antigen, and would result in the stimulation of HGG-specific B cells if Col interferes with the induction of tolerance in these B cells. The responsiveness of the total spleen-cell population to HGG was also investigated by injecting these mice with 400 µg of AHGG twice 10 d apart and plaquing 4 d after the second antigenic challenge. Table IV illustrates the inability of Col to interfere with the induction of tolerance in splenic T cells as illustrated by the unresponsiveness observed when mice tolerized with DHGG and Col are challenged with antigen alone.
Furthermore, the addition of LPS to the antigenic challenge does not alter the reacquisition of HGG-responsive splenic B cells in mice tolerized with DHGG and Col as compared to mice tolerized with DHGG alone. By 65 d after tolerization, substantial numbers of HGG-responsive B cells are present in both animals tolerized with and without Col. Nevertheless, tolerance is still persistent in T cells as assayed by challenge with AHGG alone. Therefore, although Col interferes with both the generation and expression of suppressor cells specific for HGG, it does not appear to interfere with the induction or duration of unresponsiveness to this antigen in either the splenic T-or B-cell population assayed in situ (Table IV) or by adoptive transfer to irradiated recipients (Table II and Fig. 3).
Inability of Col to Terminate Tolerance to HGG. The unresponsive state to HGG is of much shorter duration in bone marrow (50) and splenic (12, 51) B cells than in T cells (Table IV). It has been proposed that unresponsiveness of the whole animal is maintained in the face of responsive B cells by the absence of responsive helper T cells and that these T cells are permanently inactivated or deleted by antigen (tolerogen) (50, 52). However, responsive helper T cells might be present at such times but prevented from cooperating with responsive B cells by suppressor cells. If responsive helper T cells are present but inhibited by suppressor cells in this latter stage of immunologic unresponsiveness to HGG, and if Col interferes with both the induction and expression of HGG-specific suppressor cells as suggested by Tables II and III and Fig. 3, then the injection of AHGG and Col into such mice would be expected to terminate the tolerant state. To test this possibility, mice were injected with AHGG and Col or LPS 10 or 66 d after tolerization and plaqued 6 d after antigenic challenge. As demonstrated in Table V, 66 d after tolerization HGG-responsive B cells are detected in tolerant mice by the injection of LPS and AHGG. However, the injection of Col and antigen does not terminate tolerance in these mice. Neither Col nor LPS has any effect upon the unresponsive state in the cells of day-10 tolerant animals. These data suggest that interference with the induction or expression of suppressor cells is not sufficient to allow the termination of tolerance, even in mice possessing responsive B cells, and indicate that unresponsiveness to HGG in helper T cells is not maintained by Col-sensitive suppressor cells.
Discussion
The data presented in this report demonstrate that Col can interfere with suppressor cells specific for the soluble protein antigen HGG. Col prevents the generation of the antigen-specific suppressor cells induced by the tolerogenic form of this antigen when the drug is injected within hours of tolerization. Furthermore, the expression of these suppressor cells is inhibited when Col is utilized during the assessment of the suppressor cells previously induced with tolerogen. This interference with the induction and expression of antigen-specific suppressor cells may be the mechanism of the adjuvant activity previously attributed to Col. As early as 1954 (46, 53), Col was reported to enhance serum antibody of rabbits injected with diphtheria toxoid. Since these initial observations, several workers have reported similar enhancement of humoral antibody responses mediated by Col in rabbits (54), hamsters (55, 56), guinea pigs (57), and mice (37, 47) to a variety of antigens. Recently, Shek et al. (37) have demonstrated that Col can act as an adjuvant for the PFC response to the random, synthetic copolymer L-glutamic acid⁵⁰-L-tyrosine⁵⁰ by interfering with the induction of suppressor cells to this synthetic peptide antigen. The data presented here confirm and extend these previous reports by demonstrating that the induction of suppressor cells specific for a naturally occurring protein antigen, HGG, can be abrogated with Col and that the expression of existing, mature suppressor cells can also be inhibited by this agent.
The ability of Col to block the activity of suppressor cells and thereby increase immune responsiveness may exemplify a class of immunopotentiating agents defined by their mechanism of adjuvant action. In addition to Col, irradiation (38-41) and other pharmacological agents including corticosteroids (32-35), cyclophosphamide (27-31), and indomethacin (36) have been reported to serve as adjuvants for humoral and cellular immune responses, possibly by interfering with suppressor cells. Of these agents that may selectively inhibit suppressor cells, Col may have the greatest potential for use as an adjuvant. With the exception of Col, these agents possess adjuvant activity over a narrow dose range and are more widely recognized as potent immunodepressants (58). In contrast, in the present studies, Col did not depress immune responsiveness when injected up to 24 h before assessment of the PFC response (Figs. 1 and 2), and it has been reported to be immunodepressive in vivo only when injected repeatedly (59, 60) or at nearly toxic doses (61, 62). In the present work, as in previous reports (46, 47), Col possessed adjuvant properties at concentrations approaching toxicity, and at concentrations that were toxic for individual animals, the surviving animals exhibited heightened immune responses.
The mechanism by which Col inhibits suppressor cells is unclear. However, it has been suggested that the selective inhibition of suppressor cells is due to the antimitotic activity of Col (37, 47). This postulate is applicable to the induction of suppressor cells, which has been demonstrated to require cell division (63) and to be sensitive to irradiation (64). However, the expression of suppressor cells appears to be relatively more resistant to irradiation (64-67) or mitomycin C (33). If the effects of Col upon the expression of mature suppressor cells are limited to the antimitotic activity of this drug, then the cells directly affected by Col may be cells which amplify the action of the mature suppressor cells. These amplifier cells might require cell division before becoming fully competent. T lymphocytes that may augment the functions of mature suppressor cells have recently been described (24, 31, 68, 69). Alternatively, Col may interfere with subcellular processes associated with cell motility or communication, thus preventing cell-to-cell interaction between mature suppressor cells and their targets or intermediary lymphocytes.
Interference with the induction of tolerance has been proposed as the most stringent assay for adjuvant activity (70). However, this view can no longer be supported in light of the demonstration that Col promotes an adjuvant increase in both PFC (Table I and Fig. 1) (47) and circulating antibody (47) to HGG even though it does not interfere with the induction of tolerance to this antigen (Table II and Fig. 3). This segregation of the ability to interfere with the induction of tolerance from other mechanisms of adjuvant activity has been documented previously (71) in athymic nude mice (19,20) and in the LPS-nonresponder C3H/HeJ mouse (72), with polymerized flagellin (19), LPS (20), and lipid A-associated protein (72).
A final area of interest arises from the inability of Col to interfere with the induction of tolerance although successfully interfering with the generation of suppressor cells. These data have direct implications on the putative role of suppressor cells in the tolerant state. Although the unresponsiveness established by the injection of DHGG and Col lacks demonstrable levels of suppressor cells, this tolerant state is: (a) as complete as in animals receiving tolerogen alone; (b) stable upon adoptive cell transfer; (c) persistent in splenic T lymphocytes for at least 66 d; (d) established in splenic B lymphocytes as assessed by challenge with antigen and LPS; and (e) maintained in B cells with the same kinetics as in animals receiving only tolerogen. Furthermore, interference with suppressor cells by Col was unable to terminate tolerance. This ability to induce and maintain immunologic unresponsiveness to HGG in the absence of antigen-specific suppressor cells is in agreement with previous reports (17, 73) and confirms the demonstration that suppressor cells are not necessary for this tolerant state. As suggested previously (52, 73), tolerance to HGG may exemplify a state of central or intrinsic unresponsiveness in which antigen-responsive B cells are irreversibly inactivated or permanently deleted. The data presented here support this postulate. Suppressor cells do not appear to be responsible for the induction of unresponsiveness in either T or B lymphocytes to HGG, nor do they appear to influence the duration of tolerance in these cells. Similar conclusions have been drawn from systems in which tolerance is established in B cells in vitro (74, 75).
The tenuous association of suppressor cells with the unresponsive state to soluble protein antigens has been suggested by others (73). Suppressor cells are only transiently associated with the unresponsive state established to HGG (6, 48) and do not appear to be required for the maintenance of tolerance in either T or B lymphocytes (17). Furthermore, new suppressor cells cannot be induced in unresponsive mice after the disappearance of the initial, transient suppressor cells (16), speaking against a role for suppressor cells in states of unresponsiveness to persistent self-antigens. Unresponsiveness can be established in the absence of transient suppressor-cell activity (a) in adult animals by tolerizing with a low dose of commercially acquired HGG (16) or with HGG purified from either individual volunteers or myeloma patients (17); or (b) in neonatal animals (18). Unresponsiveness lacking suppressor cells has also been reported in adult animals (12-14) with the conventional DHGG tolerization protocol. The demonstration that tolerance can be induced in congenitally athymic nude mice devoid of competent T cells with both immunogenic (21) and tolerogenic (19, 20) forms of heterologous gamma globulins, and in adult thymectomized, X-irradiated, bone marrow-reconstituted mice (4, 13), further supports the postulate that tolerance can be induced in the absence of suppressor T cells. These observations, coupled with the demonstration of the presence of antigen-specific suppressor cells to HGG in primed mice (23) and the more efficient suppression of HGG-primed cells compared to normal cells (76), suggest that these suppressor cells may represent a normal regulatory mechanism operative during antigenic stimulation, although their relevance to the unresponsive state to HGG must remain dubious.
The existence of antigen-specific suppressor cells in an animal immunologically unresponsive to that antigen does not in itself imply a causal relationship between the suppressor cells and the establishment and maintenance of the tolerant state. Any postulate that the suppressor cells present in a tolerant host represent the mechanism of unresponsiveness must be firmly established experimentally.
Quo vadis? Microbial profiling revealed strong effects of cleanroom maintenance and routes of contamination in indoor environments
Space agencies maintain highly controlled cleanrooms to ensure the demands of planetary protection. To study potential effects of microbiome control, we analyzed microbial communities in two particulate-controlled cleanrooms (ISO 5 and ISO 8) and two vicinal uncontrolled areas (office, changing room) by cultivation and 16S rRNA gene amplicon analysis (cloning, pyrotagsequencing, and PhyloChip G3 analysis). Maintenance procedures affected the microbiome on total abundance and microbial community structure concerning richness, diversity and relative abundance of certain taxa. Cleanroom areas were found to be mainly predominated by potentially human-associated bacteria; archaeal signatures were detected in every area. Results indicate that microorganisms were mainly spread from the changing room (68%) into the cleanrooms, potentially carried along with human activity. The numbers of colony forming units were reduced by up to ~400 fold from the uncontrolled areas towards the ISO 5 cleanroom, accompanied with a reduction of the living portion of microorganisms from 45% (changing area) to 1% of total 16S rRNA gene signatures as revealed via propidium monoazide treatment of the samples. Our results demonstrate the strong effects of cleanroom maintenance on microbial communities in indoor environments and can be used to improve the design and operation of biologically controlled cleanrooms.
The vast majority of microorganisms is known to play essential roles in natural ecosystem or eukaryote functioning 1. However, the indoor microbiome is only at the beginning of being explored and could have severe impact on human health, well-being or living comfort 2,3. Next generation sequencing and OMICS technologies have tremendously contributed to the census of microbial diversity and enabled global projects analyzing terrestrial, marine, and human microbiomes 1,4,5. These techniques also opened up new possibilities to study indoor microbiomes, which are an important component of everyday human health 6-8. In general, uncontrolled indoor microbial communities are characterized by a high prokaryotic diversity and are comprised of diverse bacterial and archaeal phyla 7-11. The microorganisms originate mainly from the human skin or from outside air and soil, and have even been known to include extremophiles 12. In addition, the plant microbiome was suggested as an important source for indoor microbiomes 13. Although numerous developments and improvements have been reported during the last decade, the proper monitoring and control of microbial contamination remains one of the biggest challenges in pharmaceutical quality control, food industry, agriculture or maintenance of health-care associated buildings, including intensive care units 14-16.
Another important research area dealing with indoor microbiomes is planetary protection, which aims to prevent biological contamination of both the target celestial body and the Earth 17. Space missions that are intended to land on extraterrestrial bodies of elevated interest (with respect to chemical and biological evolution and significant contamination risk) are subject to COSPAR (Committee on Space Research) regulations, which allow only extremely low levels of biological contamination. However, all space agencies involved in life-detection and sample-return missions should consider cataloguing the microbial inventory associated with spacecraft using state-of-the-art molecular techniques so as not to compromise the science of such missions. At present, all space agencies enumerate heat-shock resistant microorganisms as a proxy for the general biological cleanliness of spacecraft surfaces that are bound to Mars (COSPAR planetary protection policy; ECSS (European Cooperation for Space Standardization)-Q-ST-70-55C 18).
In order to avoid contaminants as much as possible, spacecraft are constructed in highly controlled cleanrooms that follow strict ISO and ECSS classifications (ISO 14644; ECSS-Q-ST-70-58C, http://esmat.esa.int/ecss-q-st-70-58c.pdf and https://www.iso.org/obp/ui/#iso:std:iso:14644:-1:ed-1:v1:en). Cleanrooms for spacecraft assembly were the first indoor environments to be extensively studied with respect to their microbiome 10,19-23 . As expected, the detected microbial diversity and abundance correlated strongly with the applied sampling and detection methods, and in recent years a vast variety of bacterial contaminants has been revealed 18,24 . The aforementioned studies indicated that cleanroom microbiomes are mainly composed of human-associated microbes as well as hardy survival specialists and spore-forming bacteria that can tolerate harsh cleanroom conditions 24 . However, cultivation assays, even those including media for specialized microbes such as anaerobic broths, need to be complemented with molecular assays because the vast majority of microorganisms remain uncultivated 24 . These molecular methods enabled the detection of archaea as a low but constant contamination in cleanrooms, and their presence was linked to human activity; the human body, and in particular the skin, was shown to function as a carrier of a variety of archaea and is therefore responsible for the transfer of these organisms into cleanrooms 25 . In contrast to offices or other general indoor areas, controlled indoor environments such as cleanrooms represent an extraordinary, extreme habitat for microorganisms: exchange with the outer environment is limited as much as possible, the air is constantly filtered, particles are vastly reduced, and surfaces are frequently cleaned and/or disinfected.
To date, none of the previous research activities has focused on the real effect of cleanroom maintenance procedures on the diversity and abundance of microorganisms, or compared the cleanroom microbiome to that of typical indoor environments, such as an office facility.
To overcome this gap of knowledge, we have analyzed a cleanroom complex operated by Airbus Defence and Space GmbH in Friedrichshafen, Germany. The controlled environments at this complex are not monitored for biological contamination but provide an excellent research object for determining the baseline contamination level and possible contamination routes. The Airbus Defence and Space complex harbors two uncontrolled rooms, an office (check-out room, CO) and a changing room (UR), as well as two controlled cleanrooms of different ISO certification in very close vicinity (CR8, CR5; Fig. 1). We used four different methods, namely cultivation, classical 16S rRNA gene cloning, 454 pyrotagsequencing and PhyloChip G3™ technology, to analyze the microbial diversity and abundance of these four separated modules of the cleanroom facility. In addition, we performed network analyses to visualize the microbial contamination tracks within the entire facility.
Results
Abundance of microorganisms decreased from uncontrolled to controlled areas. In general, the distribution of cultivable microbes on facility floors was very heterogeneous. Wipe samples taken in one room revealed highly variable colony counts of up to three orders of magnitude. However, technical replicates from one wipe were comparably low in variation with respect to the obtained colony counts (representatively, original data for oligotrophs and alkaliphiles are given in Table S1). As shown in Table 1, the changing room (UR) revealed the highest colony counts of cultivable oligotrophs (17.2 × 10³ colony forming units (CFU) per m²), alkaliphiles (1.9 × 10³ CFU per m²) and anaerobes (44.4 × 10³ CFU per m²), whereas the lowest numbers of cultivable microorganisms were detected in the CR5 cleanroom (0.4 × 10³, 0 and 0.1 × 10³ CFU per m², respectively). This corresponds to an at least 40-fold reduction of CFUs towards CR5. Bioburden determination according to ESA standard protocols revealed the highest number of CFU in CO samples (heat-shock-resistant microbes: 0.2 × 10³ CFU per m²) and UR (without heat shock).
These cultivation-based observations were confirmed by qPCR analyses of wipe samples, which revealed the highest contamination in the check-out as well as the changing room (both approx. 3 × 10⁷ gene copies per m²), corresponding to an estimated microbial contamination of about 7 × 10⁶ cells per m² (assuming on average 4.2 16S rRNA gene copies per bacterial genome 26 ). The two cleanrooms revealed gene copy numbers an order of magnitude lower (Table 1). When samples were pre-treated with PMA to mask free DNA (i.e. DNA not enclosed in an intact cell membrane), the detected copy numbers per m² were even lower: 5.5 × 10⁴ (CR5), 2.6 × 10⁵ (CR8), 2.0 × 10⁶ (CO) and 1.3 × 10⁷ (UR; Table 1). The changing room (UR) thus revealed the highest proportion of intact cells (45% of 16S rRNA gene copies).
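The conversion from 16S rRNA gene copy numbers to estimated cell counts used above is a simple division by the assumed average number of gene copies per genome; a minimal sketch of that calculation (the 4.2 copies-per-genome average is the value cited in the text, the function name is ours):

```python
def estimate_cells(gene_copies_per_m2, copies_per_genome=4.2):
    """Estimate bacterial cell numbers from qPCR 16S rRNA gene copy
    counts, assuming an average number of gene copies per genome."""
    return gene_copies_per_m2 / copies_per_genome

# qPCR signal reported for CO and UR: approx. 3e7 gene copies per m^2
cells_per_m2 = estimate_cells(3e7)  # roughly 7e6 cells per m^2
```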
The cultivation approach revealed the omnipresence of Staphylococcus, Bacillus and Micrococcus in all areas and a great diversity overlap of the changing room with the cleanroom areas. Cultivation on alternative media (for oligotrophic, anaerobic and alkalitolerant microorganisms) revealed the presence of facultatively oligotrophic and facultatively anaerobic microorganisms in all rooms. Alkalitolerant microorganisms were not detected in CR8 and only in very low abundance in CR5 (Table 1). The relative distribution of identified isolates across the facility rooms is depicted in Fig. 2. A complete list of all isolates is given in Table S2. The most prevalent microbes were staphylococci and Microbacterium, with Staphylococcus representatives retrieved from every location and Microbacterium from CR8 and UR. The overwhelming majority of the isolates obtained from CR5 were identified as representatives of the genus Staphylococcus (S. caprae, S. capitis, S. lugdunensis, S. pettenkoferi); most of these colonies were observed under nutrient-reduced (oligotrophic) conditions (Fig. 2). Erwinia and Cellulomonas were only retrieved from cleanroom samples (CR8). The changing room shared four genera (Acinetobacter, Propionibacterium, Rhodococcus, Microbacterium) with the cleanroom environment. Apart from the three omnipresent cultivated genera Staphylococcus, Bacillus and Micrococcus, no additional overlap was found between the check-out room and the cleanrooms (Fig. 2). Overall, most CFU were obtained from the phyla Firmicutes (98), Actinobacteria (49) and Gammaproteobacteria (10). Only two colonies of a Bacteroidetes representative were obtained (Chryseobacterium; UR only). The ISO 5 cleanroom (CR5) exclusively revealed the presence of Aerococcaceae, Nostocaceae and Deinococcus signatures.
PMA treatment of samples allowed the detection of signatures from intact cells of Corynebacterium (UR, CR5), Microbacterium, Propionibacterium, Streptococcus, Brevundimonas, Chroococcidiopsis, Ralstonia, Rickettsiella (UR), Propionicimonas, Paracoccus, Chitinophagaceae, Bacillus, Myxococcales, Tissierella (CO) and Staphylococcus (UR, CR8). With the exception of the omnipresent microorganisms (see above), no overlap occurred between the diversity obtained from the UR samples (changing room) and the check-out facility (Fig. 3).
Alpha diversity analysis of pyrotagsequencing data suggested an opposed distribution of Proteobacteria and Firmicutes signatures in controlled and uncontrolled areas. On average, 1,863 bacterial 16S rRNA gene sequences were obtained from each sample. Normalized data revealed the highest microbial diversity (see Table 1) in the check-out room (H′ = 6.0) and the lowest in the changing room (H′ = 4.76). Nine bacterial phyla were detected after setting a threshold of 1% relative sequence abundance, with Actinobacteria, Bacteroidetes, Cyanobacteria and, in particular, Firmicutes and Proteobacteria showing the highest sequence abundance (see Table S5 and Fig. 4). Bacterial 16S rRNA genes belonging to the phylum Actinobacteria were most abundant (in relative terms) in the check-out room and lower in all other samples (10%). Bacteroidetes sequences were detected in all rooms at a constant relative abundance, with Wautersiella falsenii signatures predominating among the changing room (UR) amplicons. Sequences of the genus Tessaracoccus (Actinobacteria) were exclusively found in the check-out room (CO). A detailed look at the phylum Proteobacteria revealed Rhodocyclaceae sequences as the most abundant in CO samples, sequences affiliated with the genera Stenotrophomonas and Comamonas as the most abundant in UR, and Paracoccus yeei as the most abundant proteobacterial signature in CR8. Within the Firmicutes, 16S rRNA gene signatures of Aerococcaceae were predominant in CR5. At genus level, Anaerococcus sequences dominated in CR5 and Paenibacillus sequences in CR8. Signatures of the species Finegoldia magna were detected in the changing room and both cleanrooms, with the highest abundance in CR5, whereas Lactobacillales sequences (including Lactobacillus and Lactococcus) predominated among Firmicutes signatures in UR and CO. Notably, the relative abundance of Firmicutes sequences increased towards the cleaner areas (CR8, CR5; rel.
abundance: 17-45%), whereas proteobacterial pyrosequenced operational taxonomic units (pOTUs) decreased (37-23%) compared to the CO and UR areas (see Fig. 4 and Table S5). Overall, the largest portion of Firmicutes sequences was obtained from cleanroom samples.
[Fig. 5 caption: Displayed are genera that showed at least a 25% increase or decrease in both cleanroom samples compared to non-cleanroom samples and had a minimum of 10 reads. Numbers in the cells give the number of reads. For the color gradient, read scores were normalized for each genus and are presented as Z-scores.]
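The Shannon diversity values (H′) reported for the alpha diversity analysis are computed from the relative read abundances of each sample's OTUs; a minimal pure-Python sketch with made-up counts (not the study's OTU table):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over OTU read counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical communities: an even one is more diverse than a skewed one,
# mirroring the low H' of the one-taxon-dominated changing room.
even_community = [100] * 10          # 10 OTUs, equal abundance: H' = ln(10)
skewed_community = [910] + [10] * 9  # dominated by one OTU: much lower H'
```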
In order to find microorganisms that increased or declined in cleanroom samples, read abundances were normalized to 5,000 across each sample and then aggregated at genus level. Genera exhibiting at least 10 reads in one sample and showing a decrease or increase of at least 25% in cleanroom samples relative to non-cleanroom samples (tested individually) were filtered from the entire dataset and are depicted in Fig. 5.
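The filtering procedure described above (normalize each sample to 5,000 reads, aggregate at genus level, then keep genera with at least 10 reads in some sample and a change of at least 25% in cleanroom versus non-cleanroom samples) can be sketched in a few lines; the genus names and counts below are illustrative, not the study's data:

```python
def normalize_sample(otu_counts, depth=5000):
    """Scale a sample's OTU read counts to a common depth of 5,000 reads."""
    total = sum(otu_counts.values())
    return {otu: count * depth / total for otu, count in otu_counts.items()}

def aggregate_by_genus(otu_counts, otu_to_genus):
    """Sum (normalized) OTU reads into genus-level totals."""
    genus_counts = {}
    for otu, count in otu_counts.items():
        genus = otu_to_genus[otu]
        genus_counts[genus] = genus_counts.get(genus, 0.0) + count
    return genus_counts

def changed_genera(cleanroom, non_cleanroom, min_reads=10, min_change=0.25):
    """Genera with at least `min_reads` in one sample and a change of at
    least 25% in the cleanroom sample relative to the non-cleanroom one."""
    hits = []
    for genus in set(cleanroom) | set(non_cleanroom):
        cr = cleanroom.get(genus, 0.0)
        ncr = non_cleanroom.get(genus, 0.0)
        if max(cr, ncr) < min_reads:
            continue  # too few reads to call a trend
        if abs(cr - ncr) / max(ncr, 1e-9) >= min_change:
            hits.append(genus)
    return sorted(hits)

# Illustrative genus-level counts after normalization and aggregation:
cr_sample = {"Paenibacillus": 40, "Staphylococcus": 30}
ncr_sample = {"Paenibacillus": 10, "Staphylococcus": 28}
print(changed_genera(cr_sample, ncr_sample))  # -> ['Paenibacillus']
```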
Altogether, 44 microbial genera were found to vary greatly between the two room categories; 17 of them decreased in cleanroom samples. These 17 included many Gram-negative bacteria, such as Proteobacteria-related taxa, but also Actinobacteria. Most of the microbial taxa enriched in cleanroom samples were Gram-positive, such as Firmicutes (clostridia, Paenibacillus) and, again, Actinobacteria.
PhyloChip G3™ DNA microarray analysis revealed variations in microbial richness and a great reduction of Staphylococcus and other genera in cleanroom areas when considering signatures from intact and non-intact cells. Presence/absence calling of reference-based operational taxonomic units (rOTUs) produced values ranging from 2 to 1,007 different microbial taxa per sample, with 2,059 different rOTUs in total. All areas revealed signatures of Streptococcus, Microbacterium, Corynebacterium and Staphylococcus, with up to 361 detected rOTUs belonging to Streptococcus (non-PMA-treated samples; Fig. 6). Considering the microbial diversity unique to each facility area, the PhyloChip analyses revealed compositions different from the pyrotagsequencing data, with the exception of Simplicispira and Helcococcus sequences, which both methods found to be solely present in CO and CR, respectively. A complete list of all detected phylotypes (PhyloChip) is given in Table S6.
In order to detect the intact (and thus probably living) portion of microbial contaminants, the PhyloChip analysis was combined with PMA treatment prior to DNA extraction of each sample 29 . Non-PMA samples generally exhibited more than 500 different rOTUs (511 to 1,007), whereas samples treated with PMA had a much lower microbial richness, ranging from 2 to 190 different rOTUs. A statistical comparison (paired Student's t-test) of PMA-treated to non-PMA samples resulted in a p-value of <0.005, indicating a highly significant reduction of microbial richness in PMA-treated samples. At abundance level, rOTUs were analyzed with regard to an increase after PMA treatment. Here, 14 different rOTUs produced a significant p-value (<0.05, paired Student's t-test), all of which belonged to the phylum Proteobacteria (classes Betaproteobacteria/Gammaproteobacteria). The 14 rOTUs were classified as Bradyrhizobiaceae, Phyllobacteriaceae, Erythrobacteraceae, Sphingomonadaceae and Pseudomonadaceae. Consequently, the abundance of these rOTUs was underestimated when non-PMA sample data were analyzed. Focusing on microorganisms selectively reduced under cleanroom conditions, rOTU abundances were first rank-normalized across each array and then aggregated at genus level. Genera whose relative rank decreased or increased by at least 25% in both cleanroom samples compared to both non-cleanroom samples were filtered from the entire genus dataset. These genera are displayed in Fig. 7 and belonged to various phyla. Since PMA and non-PMA samples were treated separately, some rOTUs showed an increase in PMA samples but a decrease in non-PMA samples. This effect can be attributed to the corresponding amount of DNA signatures from non-intact cells in the samples, which could have a masking effect. Fig. 7 depicts 48 different genera, which showed some congruence with the pyrotagsequencing-predicted changes (e.g. Paenibacillus).
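The rank normalization applied to the PhyloChip abundance scores above might look like the following outline; this is a sketch under the assumption that tied scores receive average ranks (the actual PhyloChip pipeline may handle ties differently), and the rOTU scores are hypothetical:

```python
def rank_normalize(scores):
    """Replace each rOTU abundance score by its rank within one array
    (1 = lowest score); tied scores receive their average rank."""
    items = sorted(scores.items(), key=lambda kv: kv[1])
    ranks = {}
    i = 0
    while i < len(items):
        j = i
        while j < len(items) and items[j][1] == items[i][1]:
            j += 1  # extend over the run of tied scores
        average_rank = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[items[k][0]] = average_rank
        i = j
    return ranks

# Hypothetical array: two tied rOTUs share the average of ranks 1 and 2
print(rank_normalize({"rOTU_a": 5.0, "rOTU_b": 5.0, "rOTU_c": 9.0}))
# -> {'rOTU_a': 1.5, 'rOTU_b': 1.5, 'rOTU_c': 3.0}
```

Genus-level aggregation and the 25% rank-change filter then proceed as for the pyrosequencing reads, but on these ranks instead of raw intensities.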
However, when considering the PMA-treated samples, information regarding the reduction of microbial signatures due to cleaning efforts can be gained. For instance, when considering only the intact fraction of cells, staphylococci were enriched in the less controlled environments of the changing room and the check-out room. In contrast, the non-PMA samples exhibited similar aggregated ranks of Staphylococcus signatures in cleanroom and changing room samples, while only the check-out room exhibited less prominent signatures. Consequently, Staphylococcus appeared to be reduced by the controlled environment of the cleanrooms.
The changing room revealed the lowest diversity but the highest abundance of microbial signatures. For a comparative analysis of 16S rRNA gene cloning, pyrotagsequencing and PhyloChip G3™ technology, representative sequences of OTUs were classified with the same taxonomic tool against the same database (see Methods for details). Measures of microbial diversity from pyrotagsequencing and PhyloChip G3™ showed that the changing room harbored the lowest microbial diversity (Table 1).
PMA pretreatment (detection of intact cells) was performed for the PhyloChip G3™ and 16S rRNA gene cloning experiments. PMA-treated samples analyzed by PhyloChip G3™ revealed a significant decrease in their diversity indices compared to the total microbial fraction (p-value 0.026, paired Student's t-test). Concerning the microbial richness measure, no correlation of the number of OTUs in non-PMA-treated samples was found when comparing the different methodologies (p-value > 0.05). However, when the OTUs were grouped at genus level, OTUs derived from PhyloChip G3™ experiments (rOTUs) and pOTUs (OTUs obtained from pyrotagsequencing) showed a significant correlation of the microbial richness measure (p = 0.003, Pearson's r = 0.997, Fig. 8). A paired Student's t-test for differences between the genus richness of PMA and non-PMA samples produced a significant result for cloning (p = 0.011) and a highly significant result for the PhyloChip G3™ data (p = 0.001). Thus, PMA-treated samples clearly show a different richness than non-PMA samples. With regard to the agreement of PhyloChip G3™ and pyrosequencing, 62% of all genera detected by PhyloChip G3™ technology were also detected via 454 pyrosequencing, as depicted in Fig. 8. 16S rRNA gene cloning revealed seven genera that were not detected by PhyloChip G3™ or 454 pyrosequencing. Fig. 9 displays the microbial richness of genera detected in each sample, grouped at phylum level (class level for Proteobacteria). The changing room (UR) generally showed the lowest number of different genera detected by all three methods employed. However, as found with both cultivation-dependent and -independent methods, the changing room (UR) revealed the highest contamination level with respect to colony forming units and detectable 16S rRNA genes after PMA treatment (Table 1).
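The correlation between PhyloChip- and pyrosequencing-derived genus richness reported above (Pearson's r = 0.997) is a standard computation; a pure-Python sketch with illustrative, not actual, richness values:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired value lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Perfectly linear genus-richness pairs give a coefficient very close to 1.0
r = pearson_r([10, 20, 30, 40], [21, 41, 61, 81])
```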
All methods revealed different microbiomes present in controlled and uncontrolled areas. Adonis testing 27,28 based on abundance metrics produced a significant p-value for PMA versus non-PMA samples (0.034 for cOTUs (cloning), 0.036 for rOTUs (PhyloChip G3); the experiment was not performed for pyrosequencing), indicating that PMA-treated samples harbored a different microbiome structure than non-PMA samples. Ordination analyses based on rank-normalized abundance scores of cOTUs and rOTUs (Fig. 10) showed a separation of the PMA-treated samples, in accordance with the significant Adonis p-value mentioned above. Moreover, ordination analysis showed for all three methods employed (16S rRNA gene cloning, pyrosequencing and PhyloChip) that samples taken from the cleanrooms (CR) group together, apart from the other samples (check-out room (CO), changing room (UR)), considering PMA-treated and non-PMA samples separately. Similar observations were made for the HC-AN analysis, with the exception of the clone library data, which were, however, based on only few counts in the PMA-treated samples.
The archaeal microbiome was predominated by Thaumarchaeota representatives. Archaeal 16S rRNA gene signatures were detected at each location, with the CR5 facility revealing slightly higher qPCR signals than CR8 (1.7 × 10⁵ and 0.9 × 10⁵, respectively 25 ). The archaeal diversity was investigated by pyrotagsequencing of 16S rRNA gene amplicons (Table S7 and included figure). OTU grouping revealed five (CR5) to 19 OTUs (CO), which were assigned to two archaeal phyla (Thaumarchaeota and Euryarchaeota; Table S7). The dominant lineage (Candidatus Nitrososphaera) accounted for 55-92% of all reads at each location. Signatures of halophilic archaea (Halobacteriaceae) were found in all sampled rooms, with Halococcus signatures appearing highly abundant in CR8 (43%). Signatures of Methanocella were detected in the check-out facility (CO). Cloning of 16S rRNA genes revealed the presence of signatures from unclassified (Eury)archaeota in CR8 as well as from Candidatus Nitrososphaera (both cleanrooms). Halophilic archaea were not detected by 16S rRNA gene cloning 25 .
Network analyses allowed tracking of the microbial routes and identified the changing room as the most critical contamination source for the cleanrooms. All sampled rooms shared certain OTUs, as presented in the network analyses (see supplementary Fig. S1 for pyrotagsequencing and supplementary Fig. S2 for the PhyloChip analysis). Network tables were generated in QIIME (see Materials and Methods and supplementary node and edge tables S9.1, S9.2, S10.1 and S10.2) and visualized in Cytoscape. Fewer pOTUs were shared outside the cleanrooms (CO and UR, 18 pOTUs) than inside the cleanrooms CR8 and CR5 (39 pOTUs). pOTUs detected in UR were spread at the highest relative proportion (68%) throughout the cleanroom facility. Although high in relative abundance and taxonomic resolution, only a few pOTUs were common to all four sample locations (68 pOTUs); many were grouped at two (204 pOTUs) or three locations (115 pOTUs). The network revealed a similar portion of exclusive pOTUs in both cleanrooms (208 pOTUs in CR8 and 180 pOTUs in CR5), in contrast to CO and UR, where CO showed the highest (411 pOTUs) and UR the lowest number (76 pOTUs) of exclusive pOTUs. Similar patterns were observed for rOTUs derived from the PhyloChip data, with the following exceptions: most rOTUs were common to two sample locations (654 rOTUs), and the portions of exclusive rOTUs were highest in CR8 (393 rOTUs), followed by CO (356 rOTUs), and lowest in CR5 (185 rOTUs) and UR (174 rOTUs). Besides UR, rOTUs were also spread at a high relative proportion from CR5 (~66% for both rooms). Additional patterns were detected through the PMA treatment of samples: rOTUs from UR spread widely, but only the smallest fraction (compared to all other samples) derived from uncompromised cells (14% relative proportion). In contrast, almost all rOTUs from CR8 were represented by intact cells (59% in CR8_PMA compared to 61% in CR8).
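The shared and exclusive OTU counts underlying the network analysis reduce to set operations over per-room OTU lists; a sketch with hypothetical OTU identifiers (not the study's OTU tables):

```python
from itertools import combinations

def shared_and_exclusive(room_otus):
    """room_otus maps a room name to its set of detected OTU ids.
    Returns (OTUs present in every room, per-pair shared counts,
    per-room exclusive OTUs)."""
    rooms = list(room_otus)
    in_all = set.intersection(*room_otus.values())
    pair_shared = {
        (a, b): len(room_otus[a] & room_otus[b])
        for a, b in combinations(rooms, 2)
    }
    exclusive = {
        room: room_otus[room]
        - set.union(*(room_otus[other] for other in rooms if other != room))
        for room in rooms
    }
    return in_all, pair_shared, exclusive

# Hypothetical per-room OTU sets:
rooms = {"CO": {1, 2, 3}, "UR": {2, 3, 4}, "CR8": {3, 4, 5}, "CR5": {3, 5, 6}}
in_all, pair_shared, exclusive = shared_and_exclusive(rooms)
print(sorted(in_all))           # -> [3]
print(sorted(exclusive["CO"]))  # -> [1]
```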
Discussion
HEPA air filtration, control of humidity and temperature, partial overpressure (ISO 5), frequent cleaning, a limited number of persons working at the same time in a cleanroom and strict changing protocols: all these cleanroom maintenance procedures have a strong impact on the abundance, viability and diversity of the microorganisms therein. Such countermeasures, performed in order to decrease particulate contamination, result in the development of clearly distinct microbial communities in controlled and uncontrolled facility areas.
Firstly, the abundance of molecular microbial signatures and colony forming units was tremendously reduced within the cleanrooms compared to the changing and office areas. This was confirmed by four different methods. The changing room revealed the highest CFU numbers in all cultivation assays (except heat-shock-resistant bioburden) and the highest number of 16S rRNA gene signatures per m² (PMA-qPCR), whereas the lowest numbers were detected in CR5 in these experiments (except for the cultivation of alkaliphiles, which also yielded zero in CR8). The microbial abundance with respect to CFU thus decreased from UR to CR5 by a factor of 43 (oligotrophs), 431 (alkaliphiles), 444 (anaerobes) and 10 (heat-shock-resistant bioburden), and the 16S rRNA gene numbers by a factor of 6 (qPCR) and 40 (PMA-qPCR; Table 1). Secondly, the portion of intact cells decreased immensely: only 10% (CR8) and 1% (CR5) of the qPCR signals obtained from the cleanroom samples were judged to derive from intact, and thus possibly living, cells. These values are in the range of previously reported numbers for cleanroom facilities 29 . However, the ratio of these probably living cells was tremendously higher for the changing room (UR; 45%), which is congruent with the cultivation-based experiments, which revealed an at least 10-fold decrease of the cultivable microbial portion towards the cleanrooms. Thirdly, the cleanroom areas are most likely highly influenced by the human microbiome. Although each investigated room harbored its indigenous microbiome, a low but general overlap of microbial diversity was found. In particular, Staphylococcus, Micrococcus, Corynebacterium, Propionibacterium, Clostridium and Streptococcus were detected by different methods in all facility areas, implying the major source of bacteria in these facilities: the human body. Fourthly, the cleanroom maintenance procedures clearly impacted the microbial diversity.
Cultivation experiments revealed several microbial genera that were exclusively found in the cleanrooms, including Staphylococcus (S. lugdunensis, S. pettenkoferi), Erwinia and Cellulomonas. Notably, S. lugdunensis, a typical human skin commensal 25,30 , did not appear in any area other than CR5. This finding indicates the presence of a potential "hot spot" for these microorganisms and an increased contamination risk via human activity in this area. Although staphylococci are clearly human-associated and thus might not pose a risk for planetary protection considerations, their presence could severely influence planetary protection bioburden measurements: cleanroom Staphylococcus species were shown to be able to survive the heat-shock procedures that are the basis for contamination level estimations 31 . However, when comparing microarray data from intact versus non-intact cells, a strong decrease of Staphylococcus signatures was found for cleanroom samples, although their diversity was even higher in these areas.
The changing room represents the area of highest human activity and agitation compared to the office area and the cleanrooms. In the changing area, particles and microorganisms attached to human skin or clothes (also brought in from the outer environment) are spread all over the place: into the air and onto the surfaces. Consequently, the highest abundance of 16S rRNA gene signatures from intact cells was detected in this area (1.3 × 10⁷ 16S rRNA gene copies per m²). Notably, this location also revealed the lowest microbial diversity when PhyloChip and pyrotagsequencing were applied. This finding, however, was not supported by the cultivation-based experiments, pointing to a methodological problem of molecular techniques with microbial communities predominated by one or a few species, which may arise from the various normalization procedures applied in these technologies. The central and important role of the changing area was confirmed by the network analyses, which revealed this location to be the major source of microbial contamination possibly leaking into the cleanrooms. The changing procedure follows strict rules and is thus a completely defined and effective process to reduce the microbial (and particulate) contamination of cleanrooms. However, microbial transport via this route, at least in our setting, could not be completely avoided. Interestingly, a high portion of the microbes transferred from the changing area into the cleanroom environment may be hampered from proliferating under these new, extreme conditions, as revealed by the network analyses of the PhyloChip data. However, after this selection process, almost all microbes detected in the cleanroom environment (CR8) comprise intact cells (or spores), which then have a high potential to colonize new environments and products (e.g. spacecraft).
As known from other studies, slight modifications in room architecture can have an enormous impact on the indoor microbiome and could help to further reduce microbial contamination 8 . Thus, a two-step changing-room system, as generally established for cleanrooms of higher cleanliness levels, is certainly more effective in reducing microbial contamination. Studies of such systems, however, have not been conducted thus far.
To understand the introduction of contaminants and to estimate the risk posed by the detected microorganisms for planetary protection or, under certain circumstances, even staff health, the natural origin as well as the potential pathogenic character of the contaminants is of general interest. The genera detected via pyrotagsequencing in samples from the uncontrolled environments were mostly assigned to natural environments. In particular, soil-related genera were detected in the changing area. Notably, 8% of the signatures detected in CO could be attributed to a food source. The cleanest area revealed sequences mostly from unknown sources (55%) and the lowest level of soil-associated microorganisms (11%). Most bacteria with pathogenic potential were detected in UR (31%), followed by genera from the cleanroom environment (18%). The check-out room (CO) microbial community revealed the lowest pathogenic potential (13%). The relative proportions of potentially beneficial microbes were higher in UR and the cleanroom CR8 (both 17%) than in the check-out room CO (9%) and CR5 (7%). Interestingly, some beneficials belonging to the order Lactobacillales, such as Lactobacillus and Lactococcus, increased towards the cleanroom and could also be associated with the human body 13,31 .
Members of Bacillus, Staphylococcus and Deinococcus (identified in the cleanroom area) are well known for their capability to resist environmental stresses 32,33 . With regard to clinical environments, the reduced diversity within such areas could lead to a proliferation of bacterial species with pathogenic potential and might increase the risk of acquiring diseases or allergic reactions 34 . This offers the possibility of using ecological knowledge to shape our buildings in a way that selects for an indoor microbiome that promotes our health and well-being. Biocontrol using beneficials such as lactobacilli, or the implementation of a highly diverse synthetic beneficial community, would be an option that should be evaluated for indoor areas beyond cleanrooms 35,36 . Every human activity is correlated with microbial diversity; therefore, sterility in cleanrooms is impossible. This requires new ways of thinking and is also important for cleanroom facilities for pharmaceutical and medical products as well as for hospitals, especially intensive care units.
In our comprehensive study, using cultivation-dependent and cultivation-independent methods, we obtained further insights into the microbiology of cleanrooms. We were able to show a strong effect of cleanroom maintenance procedures on the diversity, abundance and physiological status of microbial contaminants. All rooms belonging to the cleanroom facility (an office, a changing room and two cleanrooms of different ISO certification, ISO 5 and ISO 8) harbored different microbial communities, including non-intact and intact (thus possibly living) cells. Additionally, we revealed potential contamination sources and routes within the facility and identified the changing room as the area harboring the greatest risk for cleanroom contamination. The currently used countermeasures to avoid severe contamination with outside microorganisms seem to work properly, but the remaining risks could be greatly reduced by a different architecture of the changing area.
Methods
SCIENTIFIC REPORTS | 5:9156 | DOI: 10.1038/srep09156 | www.nature.com/scientificreports

Sampling sites and setting. Sampling took place in September 2011 in Friedrichshafen, Germany. Samples were taken at various places within a cleanroom facility (integration center) maintained by the Airbus Defence and Space Division (the former European Aeronautic Defence and Space Company, EADS). In this facility, different types of indoor environments are located in close vicinity, as depicted in Fig. 1: check-out room (office and control room, CO), changing room (room with lockers and bench, directly attached to the entrance (air lock) of the cleanrooms, UR), ISO 8 cleanroom (H-6048, CR8) and ISO 5 cleanroom (entered through the ISO 8 cleanroom, CR5). Both cleanrooms were maintained according to their classification (ISO 14644; HEPA air filtration, control of humidity and temperature) and were fully operating. Particulate counts in the ISO 8 cleanroom determined within three days before sampling did not exceed 10,000 particles (0.5 µm) and 100 particles (5.0 µm) per ft³ (~0.028 m³), respectively, and therefore exhibited contamination levels well within specifications. The ISO 5 cleanroom was maintained with overpressure. These indoor environments reflect different levels of human activity, presence of particles (CO, UR: uncontrolled; CR8: 3.5 × 10⁶ and CR5: 3.5 × 10³ particles ≥ 0.5 µm), clothing (CO: streetwear; UR: changing area; CR8: cleanroom garment; CR5: completely covering cleanroom garment), entrance restrictions (increasing from CO to CR5), cleaning regimes (CO and UR: household cleaning agents; CR8 and CR5: alkaline cleaning agents or alcohols) and environmental condition controls (CO and UR: uncontrolled conditions; CR8: 0.5 air changes per min, filter coverage 4-5%, filter efficiency 99.97%, vinyl composition tile on floors; CR5: 5-8 air changes per min, filter coverage 60-70%, filter efficiency 99.997%, vinyl or epoxy on floors).
As given above, sample abbreviations were as follows: CO (check-out room), UR (changing room), CR8 (ISO 8 cleanroom), CR5 (ISO 5 cleanroom).
Sampling and sample processing. All areas (CO, UR, CR8, CR5) were sampled individually and in parallel. Samples were collected from the floor (areas of 1 m² maximum (one sample) and 0.66 m² (all other samples)) using BiSKits (biological sampling kits; Quicksilver Analytics, Abingdon, MD, USA) for molecular analyses and wipes (TX3211 Sterile Wipe LP, polyester; Texwipe, Kernersville, NC, USA; 15 × 15 cm; wipes were premoistened with 4 ml of water before autoclaving) for cultivation-based assays. Overall, 74 samples were taken (see supplementary Fig. S3). BiSKit samples (four from each room) for molecular analyses were pooled according to the area sampled and immediately frozen on dry ice. Wipe samples (four per location for bioburden analysis, eight per room for alternative cultivation strategies) were stored on ice packs (4-8°C), and microbes were extracted immediately after return to the laboratory for the inoculation of cultivation media (within 24 h after sampling). In sum, 10 field blanks were taken as process negative controls.
Cultivation. Wipes were extracted in 40 ml PBS buffer (for sampling, extraction and cultivation procedures for anaerobes, please refer to Ref. 34). For the cultivation of oligotrophic microorganisms, 5 × 1 ml of the sample was plated on RAVAN agar (including 50 µg/ml nystatin; Ref. 34). Alkaliphilic or alkalitolerant microbes were grown on R2A medium, pH 10, as given earlier (Ref. 24). Facultatively or strictly anaerobic bacteria were cultivated on anoxic TSA plates 36; 4 × 1 ml was plated and plates were incubated under a nitrogen gas phase. Incubation was performed at 32 °C for 8 (alkaliphiles), 11 (anaerobes) and 12 days (oligotrophs), respectively. Additionally, the microbial bioburden was determined following the ESA standard ECSS-Q-ST-70-55C (wipe assay for bioburden (heat-shock resistant microbes) and vegetative microorganisms). Sampling and wipe-extraction details were also described earlier (Ref. 29). In brief, wipe samples (in 40 ml water) were split into two portions, of which one aliquot was subjected to heat-shock treatment (80 °C, 15 min). This sample was pour-plated in R2A medium (4 × 4 ml). Samples for vegetative microorganisms (not subjected to heat shock) were pour-plated similarly. Cultivation was performed at 32 °C for 72 hours (final count).
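The back-calculation from colony counts to surface bioburden implied by this wipe assay (extract volume, volume plated, area wiped) can be sketched as follows; the function name and the example numbers are illustrative assumptions, not values reported in the study.

```python
def bioburden_per_m2(colonies, plated_ml, extract_ml=40.0, area_m2=0.66):
    """Scale a plate count back to CFU per m^2 of sampled surface:
    total CFU in the extract = colonies * (extract volume / volume plated),
    then divide by the sampled area."""
    return colonies * (extract_ml / plated_ml) / area_m2

# e.g. 12 colonies counted on 4 x 4 ml pour plates from a 40 ml extract
print(round(bioburden_per_m2(12, plated_ml=16.0)))  # -> 45
```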
Isolate processing and taxonomic classification. Isolates were purified by two subsequent streak-outs and sent to the DSMZ (Leibniz Institute DSMZ, Deutsche Sammlung von Mikroorganismen und Zellkulturen, Braunschweig, Germany). At the DSMZ, strains were classified using MALDI-TOF MS (matrix-assisted laser desorption/ionization time-of-flight mass spectrometry) or 16S rRNA gene sequencing. MALDI-TOF mass spectrometry was conducted using a Microflex L20 mass spectrometer (Bruker Daltonics) equipped with an N2 laser. A mass range of 2,000-20,000 m/z was used for analysis. MALDI-TOF mass spectra were compared using the BioTyper software package (Bruker Daltonics) for identification of the isolates. Currently, the MALDI Biotyper reference library covers more than 2,300 microbial species. Strains which could not be identified by MALDI-TOF were identified by 16S rRNA gene sequence analysis.
DNA extraction for molecular assays. Due to the low-biomass nature of the samples and the recurrent observation of an inhomogeneous microbial distribution in cleanrooms (see also Ref. 31), the samples were pooled by facility room for molecular analyses (4 BiSKit samples per location) in order to allow a more accurate estimation of microbial diversity. Pooled BiSKit samples were thawed gently on ice overnight and concentrated. One fifth of each sample was treated with propidium monoazide (PMA; 20 µM) as described elsewhere (Ref. 2) to mask free DNA. Covalent linkage was induced by light (3 min, 500 W). In general, all samples were subjected to bead-beating for DNA extraction (PowerBiofilm RNA Kit Bead Tubes, MO BIO, Carlsbad, CA, USA; 10 min vortex). The supernatant was harvested after centrifugation (5,200 × g, 4 °C, 1 min) and bead-washing with 400 µl DNA-free water followed by an additional centrifugation (100 × g, 4 °C, 1 min). DNA was extracted from PMA-treated and untreated samples using the XS-buffer method as described earlier 37. The resulting pellet was dissolved in 15 µl DNA-free water.
Quantitative real-time PCR. qPCR was performed as described earlier 10. One microliter of extracted DNA was used as template and amplification was performed with Bacteria- and Archaea-targeted primers using the SYBR Green system. As a reference, 16S rRNA gene amplicons of Methanosarcina barkeri (archaeon) and Bacillus safensis (bacterium) were used for the generation of a standard curve. qPCR was performed in triplicate for each sample.
Cloning and sequencing of bacterial 16S rRNA gene amplicons. Cloning of archaeal and bacterial 16S rRNA genes was performed as described earlier (Ref. 31). For the analysis of bacterial 16S rRNA genes from PMA-untreated samples, 96 clones each were analyzed; an additional 48 and 72 clones were picked for the cleanroom samples (CR5 and CR8, respectively). 48 clones were analyzed from PMA-treated samples. Cloned 16S rRNA genes were analyzed by RFLP (HinfI, BsuRI); representative inserts were fully sequenced and chimera-checked (Bellerophon 3; Pintail 38). The sequences were submitted to GenBank (accession nos. JQ855509-635) and grouped into operational taxonomic units (OTUs; later referred to as cOTUs). Coverage was calculated according to Good, 1953 39.
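Good's coverage estimate is C = 1 − F1/N, where F1 is the number of OTUs represented by a single clone and N the total number of clones analysed. A minimal sketch with a toy clone library (the clone-to-OTU assignments are invented for illustration):

```python
from collections import Counter

def goods_coverage(otu_assignments):
    """Good's coverage estimate: C = 1 - F1/N, where F1 is the number of
    OTUs seen exactly once and N the total number of clones analysed."""
    counts = Counter(otu_assignments)
    singletons = sum(1 for c in counts.values() if c == 1)
    return 1.0 - singletons / len(otu_assignments)

# Toy clone library: 10 clones falling into 5 OTUs, two of them singletons.
clones = ["A", "A", "A", "B", "B", "C", "C", "C", "D", "E"]
print(goods_coverage(clones))  # -> 0.8
```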
454 pyrotagsequencing analysis of bacterial and archaeal 16S rRNA genes. For bacterial diversity analyses, DNA templates from all four rooms were amplified using the bacteria-directed 16S rRNA gene primers 27f and 1492r (5 µM each 40), followed by a second (nested) PCR with the tagged primer Unibac-II-515f_MID and the untagged primer Unibac-II-927r_454 (10 µM each 41). Polymerase chain reactions were accomplished with Taq&Go (MP Biomedicals) in 10 µl (1st PCR) and 30 µl (2nd PCR) reaction mixes as follows: 95 °C for 5 min; 30 cycles of 95 °C for 30 s, 57 °C for 30 s, 72 °C for 90 s; and 72 °C for 5 min after the last cycle. 32 cycles were applied for the 2nd PCR with the following parameters: 95 °C 20 s, 66 °C 15 s, 72 °C 10 min. Archaeal PCR products were obtained by nested PCR as described earlier 25. The first PCR was performed using the Archaea-directed primers 8aF and UA1406R 42,43. The second PCR included 5 µM primers (340F_454 and tagged 915R_MID 44,45), 6 µl Taq&Go [5×], 0.9 µl MgCl2 [50 mM] and 3 µl PCR product of the first archaeal PCR in a final volume of 30 µl. An optimized temperature program for primers with a 454 tag included the following steps: initial denaturation at 95 °C for 7 min; 28 cycles with denaturation at 95 °C for 30 s, annealing at 71 °C for 30 s and elongation at 72 °C for 30 s; cycling was concluded with a final elongation at 72 °C for 5 min. After PCR, amplified products were pooled and purified using the Wizard SV Gel and PCR Clean-Up System (Promega, Madison, USA) according to the manufacturer's instructions. Pyrotagsequencing of equimolar PCR products was executed by Eurofins MWG Operon (Ebersberg, Germany) on a Roche 454 GS-FLX+ Titanium sequencer. The resulting 454 reads (submitted to QIIME-DB, http://www.microbio.me/qiime/ as Study 2558) were analyzed using the QIIME 46 standard workflow as described in the 454 Overview Tutorial (http://qiime.org/tutorials/tutorial.html), briefly summarized in the following. Denoising of the pyrotagsequencing reads of the four samples (CO, UR, CR5 and CR8) resulted in 1003-5118 bacterial and 890-2725 archaeal sequences. OTUs (later referred to as pOTUs) were grouped at the 97% similarity level using uclust 47, picking the most abundant sequence as the OTU representative. Sequences were aligned using PyNAST 48. An OTU table was created after removing chimeric sequences (561) via ChimeraSlayer (reference: greengenes 12_10 alignment) and filtering the PyNAST alignment. All pOTUs detected in the extraction blank were removed as potential contaminants from the entire sample set, which resulted in 816-2982 sequences (267-855 pOTUs) for bacteria and 47-229 sequences (5-14 pOTUs) for archaea. pOTU networks were visualized using Cytoscape 2.8.3 (edge-weighted spring-embedded layout, eweights) 49.
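The 97% OTU-grouping step can be illustrated with a greedy clustering sketch. The `identity` function below is a crude positional stand-in for the alignment-based identity that uclust actually computes, and the reads are synthetic; this is an illustration of the idea, not the QIIME implementation.

```python
def identity(a, b):
    """Fraction of matching positions between two equal-length sequences
    (a crude stand-in for alignment-based identity)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def pick_otus(reads, threshold=0.97):
    """Greedy clustering: each read joins the first OTU whose representative
    it matches at >= threshold identity; otherwise it seeds a new OTU."""
    otus = []  # list of (representative, members) pairs
    for read in reads:
        for rep, members in otus:
            if identity(read, rep) >= threshold:
                members.append(read)
                break
        else:
            otus.append((read, [read]))
    return otus

rep = "ACGT" * 25                      # 100-bp synthetic read
variant = rep[:50] + "T" + rep[51:]    # one mismatch -> 99% identity
outlier = "GGCC" * 25                  # unrelated sequence
print(len(pick_otus([rep, variant, outlier])))  # -> 2
```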
PhyloChip G3™ DNA microarray analysis of bacterial 16S rRNA gene amplicons. The basics of PhyloChip G3 data acquisition and analysis can be found in the supplementary information of Hazen et al., 2010 50. In brief, bacterial amplicons were generated as described above for 16S rRNA gene cloning with primer pair 9bf and 1406ur 40. After quantification, amplicons were spiked with a defined amount of non-16S rRNA genes for standardization, fragmented and biotin-labeled as described in the abovementioned reference. After hybridization and washing, images were scanned. Raw data processing followed the principle of the stage 1 and stage 2 analysis described in Hazen et al., but with modified parameters. First, an updated Greengenes taxonomy was used for assigning rOTUs ("reference-based OTUs") to the probes 51. Second, only those probes were included in the analysis that corresponded to the targeted 16S rRNA gene region of the amplicons generated with the 9bf and 1406ur primers. Third, a minimum of seven probes was required for an OTU, and the positive fraction of scored versus counted probes was set to 0.92. The quartiles of the ranked r scores (response scores measuring the potential that a probe pair is responding to a target and not to background) were set to rQ1 ≥ 0.80, rQ2 ≥ 0.93, and rQ3 ≥ 0.89 for the stage 1 analysis. For stage 2, the rx values (cross-hybridization adjusted response scores) were set to rxQ1 ≥ 0.22, rxQ2 ≥ 0.40, and rxQ3 ≥ 0.42. These adjusted parameters are considered sufficiently stringent for cleanroom diversity measures. Calculated hybridization values for each OTU were log2 × 1000 transformed. As different amounts of PCR product were loaded onto the chips (PCR reactions performed differently for each sample, particularly for those that were PMA-treated), abundance values were rank-normalized across each array and are referred to as hybridization scores/abundances.
Taxonomic classification of 16S rRNA genes.
16S rRNA gene amplicons were classified using the Bayesian method implemented in mothur (cutoff 80%; Refs. 52, 53) against an updated Greengenes taxonomy 51, which was manually curated and in which OTUs were grouped at the 98% similarity level. For taxonomic comparison of rOTUs (obtained from the PhyloChip analysis) against amplicon-generated OTUs (cOTUs, pOTUs), representative sequences of rOTUs were also classified with this method.
Microbial diversity measures. Shannon-Wiener indices were computed for all samples using the R programming environment 54. PhyloChip G3 abundance data were multiplied with binary data, i.e. using abundance data of only those rOTUs that were called present in a sample. Abundance data of clone libraries, pyrotagsequencing libraries and microarray data were individually rarefied to the lowest number of OTU abundances in the sample set and the Shannon-Wiener index was calculated for each sample. To avoid statistical errors originating from rarefaction, the procedure was performed 1000 times and the average Shannon-Wiener index of each sample was calculated.
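The repeated-rarefaction averaging of the Shannon-Wiener index (H' = −Σ pᵢ ln pᵢ) can be sketched as follows. This uses Python instead of the R environment actually employed, and the abundance vector, depth and repeat count are illustrative.

```python
import math
import random

def shannon(counts):
    """Shannon-Wiener index H' = -sum(p_i * ln p_i) over observed OTUs."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def rarefied_shannon(otu_counts, depth, repeats=1000, seed=1):
    """Subsample the community to a fixed depth `repeats` times and
    average the Shannon index over the subsamples."""
    rng = random.Random(seed)
    pool = [otu for otu, c in enumerate(otu_counts) for _ in range(c)]
    values = []
    for _ in range(repeats):
        sample = rng.sample(pool, depth)
        values.append(shannon([sample.count(o) for o in set(sample)]))
    return sum(values) / repeats

community = [50, 30, 10, 5, 3, 1, 1]  # toy OTU abundance vector
print(round(rarefied_shannon(community, depth=40, repeats=200), 3))
```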
Cytoscape OTU networks. Node and edge tables (see supplementary Tables S9.1, S9.2, S10.1 and S10.2) for OTU networks were generated in QIIME and visualized in Cytoscape 2.8.3 49 . Shared OTUs were colored according to their presence in each sample (color mixtures were applied according to the color circle of Itten). OTUs as well as samples were displayed as nodes in a bipartite network. Both were connected via edges if their sequences were present in that sample. Edge weights (eweights) were calculated according to the sequence abundance in an OTU. For network clustering of OTUs and samples a stochastic spring-embedded algorithm was used with a spring constant and resting length. Nodes were organized as physical objects on which minimized force was applied to finalize the displayed networks.
Statistical analysis. Multivariate statistics were employed for microbial community analysis 54. Bray-Curtis distances were calculated from the clone library, pyrotagsequencing and PhyloChip G3 abundance data, which were all rank-normalized. Principal Coordinate Analysis (PCoA) and Hierarchical Clustering with Average Neighbour linkage (HC-AN) were performed to analyze the microbiome relatedness of the samples. Adonis testing was used to investigate whether PMA treatment of samples had a significant effect on the observed microbial community structure. A paired Student's t-test was performed to find significant differences between the qPCR data of PMA and non-PMA samples.
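For two abundance vectors, the Bray-Curtis dissimilarity used for the ordination is the sum of absolute differences divided by the total abundance. A minimal sketch with toy OTU vectors (the numbers are invented for illustration):

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors:
    sum |x_i - y_i| / sum (x_i + y_i); 0 = identical, 1 = no shared OTUs."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den

co = [6, 0, 1, 4]   # toy OTU abundances, one vector per room
cr5 = [1, 2, 0, 1]
print(round(bray_curtis(co, cr5), 3))  # -> 0.733
```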
Identification of enriched genera. HybScores of rank-normalized OTUs were aggregated at genus level for PhyloChip data. Considering pyrotagsequencing data, sum-normalized reads (5000 per sample) were summarized at genus level. In order to identify those genera that were enriched in cleanroom samples versus others, a 25% increase of aggregated scores was used as a threshold. In a similar manner, a 25% decrease of HybScores/sequencing reads was used as an indicator for genera that declined in cleanroom samples.
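The 25% increase/decrease threshold can be expressed as a simple decision rule; the function name and example scores below are illustrative, not taken from the dataset.

```python
def classify_genus(uncontrolled_score, cleanroom_score, threshold=0.25):
    """Flag a genus as 'enriched' in cleanrooms if its aggregated score rose
    by at least 25% relative to the uncontrolled areas, 'declined' if it
    dropped by at least 25%, and 'unchanged' otherwise."""
    if cleanroom_score >= uncontrolled_score * (1 + threshold):
        return "enriched"
    if cleanroom_score <= uncontrolled_score * (1 - threshold):
        return "declined"
    return "unchanged"

print(classify_genus(100.0, 140.0))  # -> enriched
print(classify_genus(100.0, 60.0))   # -> declined
print(classify_genus(100.0, 110.0))  # -> unchanged
```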
Controls and blanks for molecular analyses and cultivation. Control samples were included in each step of the extractions and analyses. Field blanks (for the procedure see Ref. 31), extraction blanks (for BiSKit samples, unopened PBS included in the kit was used for extraction), water blanks and no-template controls (for PCR), as well as media blanks, were processed. Unless stated otherwise, no signal or positive cultivation result was obtained from these. For the bacterial 16S rRNA analyses, OTUs detected therein (for cloning, pyrotagsequencing and PhyloChip) were removed from the entire analysis (Table S8). Bacterial copies detected in qPCR negative controls were subtracted from sample values.
Studies on the stereochemical assignment of 3-acylidene 2-oxindoles
The designation of E/Z-geometrical isomers in 3-acylidene 2-oxindoles by NMR spectroscopy can lead to erroneous assignment of alkene stereochemistry because of the narrow chemical shift range observed over a large series of analogues. In contrast, UV-Vis spectroscopy offers a convenient and more reliable method for alkene stereochemical assignment. A combination of X-ray crystallography and theoretical studies shows that the observed differences in UV-Vis spectroscopic behaviour relate to the twisted conformation of the Z-isomers that provides reduced conjugation and weaker hypsochromic (blue-shifted) absorbances relative to those of the E-isomers.
Introduction
3-Alkenyl-oxindoles are an attractive template for the discovery of new medicines, and there are several compounds containing this and related moieties that exhibit useful biological activity (Fig. 1).1 For example, Sunitinib 1 is a tyrosine kinase inhibitor that was approved in 2006 for the treatment of renal cell carcinoma and gastrointestinal stromal tumours.2 In addition, Woodard et al. identified a series of selective plasmodial CDK inhibitors (e.g. 2),3 while Khosla and co-workers described 3-acylidene-oxindoles such as 3 as inhibitors of human transglutaminase-2.4 Our own interest in these compounds stems from the use of these functionalised heterocycles as useful synthetic building blocks, where they have been employed as substrates in cycloaddition reactions for the synthesis of spiro-fused compounds.5
An important challenge associated with the characterisation of these compounds is the assignment of alkene stereochemistry around the acylidene moiety. In many cases, this assignment is ambiguous and there are few reports that offer any general guidance as to how this can be done in a routine way. However, Righetti and co-workers described the use of NMR spectroscopy for the assignment of alkene stereochemistry.6 This assignment is made on the basis of a downfield shift of the acylidene protons at the β-carbon and at C-4 in the E-isomer. Indeed, this method was employed by Khosla in assigning the E-stereochemistry of compounds derived from their study.4 This approach is straightforward when a mixture of isomers is prepared; however, 3-acylidene 2-oxindoles are generally formed with high stereocontrol, and so an assignment based on comparative shift values is not always straightforward. Indeed, a recent report by Jing and co-workers highlighted that an example substrate previously assigned as an E-acylidene oxindole was in fact the Z-isomer; moreover, the olefin geometry was found to have a significant impact on stereocontrol in Michael spirocyclisation reactions.7 In connection with our own interests in the chemistry of 3-acylidene 2-oxindoles, we therefore set out to address this issue by establishing a routine and consistent method for the assignment of alkene stereochemistry in this class of compounds, and report herein our findings.
Results and discussion
In order to cover a reasonably broad scope of acylidene oxindoles, we opted to prepare representative examples with variation at the acylidene unit with both N-unsubstituted and N-substituted heterocycles (4, Fig. 2). We also wanted to generate analogues containing substituents on the oxindole ring, especially those with C-4 substitution, as these have been shown to exhibit useful biological activity.4 Acylidene 2-oxindoles are easily prepared by a two-step aldol condensation sequence following the method of Braude and Lindwall.8 Condensation of isatin with a small series of acetophenones provided oxindoles 5-8 and 12-16 in good overall yield and as single isomers (as judged by 1H NMR spectroscopy). This method was readily extended to N-Me isatins 9 and delivered the heterocyclic compounds 9-11 and 17-18 with similarly high levels of stereocontrol. Using this approach, we were able to quickly generate 14 acylidene oxindoles from commercially available starting materials on preparatively useful scales (Fig. 3).
In order to begin the task of characterising the product stereochemistry, we attempted to grow crystals of representative oxindoles. We were able to characterise compounds 5, 6, 11, 14 and 18 by X-ray crystallography, and the structures are depicted in Fig. 4. Compounds 5, 6 and 11 were found to exhibit E-stereochemistry and adopted conformations that minimised interaction of the acyl-aromatic group with C4-H of the oxindole ring. In contrast, and contrary to the reports of Khosla and co-workers,4 4-chlorooxindoles 14 and 18 were found to adopt the Z-configuration. Interestingly, the solid state structures of these Z-acylidene oxindoles deviate significantly from a planar orientation, and this presumably reflects dipole-dipole alignment or allylic strain that arises in these arrangements.
The X-ray crystallography study provided an unambiguous method of assigning E/Z-stereochemistry, and provided a basis from which to make comparisons towards a general method of alkene stereochemical assignment. As discussed earlier, NMR spectroscopy has the potential to provide the most direct method for assignment, and so we decided to compare the CH shift values in the 1H NMR spectrum at the β-carbon of the acylidene group for the major isomer generated in each case. As highlighted in Table 1, the NMR shift values follow the trend highlighted by Righetti and co-workers 6 in that Z-acylidene products show a downfield shift in the NMR spectrum as compared to the corresponding E-isomers (cf. 5, 6, 11 versus 14, 18). Interestingly, N-Me derivatives also display a slight downfield shift relative to their N-H analogues (e.g. compare 5-8 with 9-11, and 14, 16 with 17-18). Overall, the relatively narrow shift range observed for the compounds analysed in this study (δ 7.69-8.03), together with the potential for N-substitution to affect peak position, suggests that the use of NMR spectroscopy alone for assignment of configuration could be unreliable, especially when only a single isomer is available.
The X-ray structures of compounds 5, 6, 11, 14 and 18 suggested that E- and Z-alkylidene oxindoles should exhibit different degrees of conjugation, and this in turn suggested that UV-Vis spectroscopy could offer an additional tool for assigning product stereochemistry. Indeed, it is notable that compounds 5-13 are an intense red or orange colour while compounds 14-18 are pale yellow. We therefore recorded UV-Vis spectra of compounds 5-18. Stock solutions were prepared in anhydrous MeOH (75 µM) and the spectrum of each compound was recorded over 200-700 nm. As shown in Fig. 5, the spectra of compounds 5-13 are very similar and show three distinct maxima (red bands). In contrast, compounds 14-18 also have UV-Vis spectra that are all similar to one another but are quite different from the spectra of 5-13, showing two distinct peaks and one shoulder, all at higher energy than the corresponding features in the spectra of 5-13. These are highlighted as green bands in Fig. 5. For the purposes of this study, we chose to focus on three main absorptions in regions I, II, and III indicated in Fig. 5. Although differences in absorption bands in region I were apparent, these differences were relatively small, and so we decided to concentrate on bands II and III; these data are compiled in Table 2. With regard to absorption II, compounds 5-13 were all found to exhibit a maximum in the range of 334-341 nm with an extinction coefficient between 8890 and 13 820 M−1 cm−1. In contrast, substrates 14-18 all exhibit a shorter-wavelength maximum in the range of 292-302 nm with a smaller extinction coefficient between 3980 and 6390 M−1 cm−1. With regard to absorption III, compounds 5-13 all exhibit a maximum in the range of 419-436 nm with an extinction coefficient between 1190 and 2890 M−1 cm−1, whereas the maxima for 14-18 are again blue-shifted (370-390 nm) with smaller extinction coefficients in the range 760-1190 M−1 cm−1. It is clear from these data that the absorption spectral behaviour of compounds 5-18 can be separated into two classes, with compounds 14-18 having blue-shifted absorption maxima and lower extinction coefficients for absorptions II and III than compounds 5-13. Given that we were able to confirm the Z-stereochemistry of compounds 14 and 18 by X-ray crystallography, we assign substrates 15, 16, and 17 also as Z-isomers on the basis of the similarities in their UV-Vis spectra. Similarly, we assign compounds 7, 8, 9, 10, 12, and 13 as E-stereoisomers.
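The extinction coefficients quoted in this discussion follow from the Beer-Lambert law, ε = A/(c·l), at the 75 µM working concentration; the sketch below assumes a 1 cm path length, and the absorbance value is illustrative rather than a measured datum.

```python
def extinction_coefficient(absorbance, conc_molar, path_cm=1.0):
    """Beer-Lambert law: epsilon = A / (c * l), returned in M^-1 cm^-1."""
    return absorbance / (conc_molar * path_cm)

# A 75 uM solution with A = 0.75 at its band-II maximum (illustrative numbers)
eps = extinction_coefficient(0.75, 75e-6)
print(round(eps))  # -> 10000
```

A value of 10 000 M−1 cm−1 would fall inside the 8890-13 820 M−1 cm−1 window reported here for the band-II maxima of the E-isomers.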
We envisaged that the potential of UV-Vis spectroscopy to aid the stereochemical assignment of acylidene oxindoles could be further demonstrated by conducting this analysis on the individual E/Z-isomers of one specific substrate. In this context, AlCl3 has been established as an effective reagent for the E-Z isomerisation of acylidene oxindoles,6 and so we opted to use this method to generate Z-5.
Accordingly, a sample of E-5 was treated with one equivalent of AlCl3 over three days at room temperature to give an inseparable mixture of the E- and Z-acylidene oxindoles. Before carrying out detailed spectroscopic studies on Z-5, we first investigated the configurational stability of this compound. Interestingly, a slow isomerisation of Z-5 back to the thermodynamically more stable E-isomer was observed under ambient conditions when the compound was stored in CDCl3 solution. In contrast, a much slower rate of isomerisation was noted when the compound was stored as a pure solid (Fig. 6).10 Given the relatively slow rate of isomerisation, we were confident that we could isolate and characterise a pure sample of Z-5, and were pleased to find that the isomers could be separated via preparative reverse-phase HPLC. The UV-Vis spectrum (75 µM solution in MeOH) of Z-5 was recorded and compared with that of the E-isomer (Fig. 7). The spectra of E/Z-5 closely match the two types of UV-Vis spectrum described earlier in Fig. 5, with the Z-compound showing a characteristic blue-shift and reduced absorbance intensity relative to the E-isomer. Additionally, the 1H NMR spectrum of Z-5 shows a resonance at δ 7.70 for the olefinic proton whereas E-5 has a signal at δ 7.72. Once again, the small variations in proton shift values do not provide a basis from which to confidently assign product stereochemistry, whereas the observed changes in the UV-Vis spectra are pronounced and consistent with the trends highlighted in Fig. 5.
In an effort to further validate our methodology for the assignment of alkene geometry through UV-Vis spectroscopy, 3-acylidene oxindole 19 was prepared. Compound 19 was isolated as a single isomer in a good overall yield of 50% via a slightly modified aldol condensation reaction. The UV-Vis spectrum of 19 (75 µM solution in MeOH) showed two distinctive maxima and a broad shoulder at 252 nm (I), 293 nm (II), and 370-390 nm (III), respectively (Fig. 8). The molar absorption coefficients calculated for peaks II and III were 4460 M−1 cm−1 and 900 M−1 cm−1 (at 380 nm), respectively. The pattern of peaks and intensities in the spectrum strongly indicated that this alkene has the Z-geometry, and further support for this assignment was gathered by nOe spectroscopy.
Theoretical studies
In order to better understand the origin of the observed spectroscopic differences between E- and Z-3-acylidene oxindoles, we computationally modelled the UV-Vis spectra of optimised structures of E/Z-5. The first 100 singlet-to-singlet electronic transitions for each isomer were calculated and our results are shown in Fig. 9. In addition, the wavelength of each significant transition, the corresponding energy, the oscillator strength and the major molecular orbital contribution (based on FMO analysis) for the UV-Vis spectrum of each isomer are shown in Tables 3 and 4, respectively. For full details, see the ESI;† we note that similar calculations have been used for structure determination previously.11 Our calculations show that Z-5 exhibits four absorptions beyond 220 nm. The absorption at 244 nm is the strongest, and is followed by medium and weak absorption bands at around 259 nm, 299 nm and 381 nm, respectively. In contrast, the calculated spectrum of E-5 is quite different. The absorption in the lowest-energy region is located at 485 nm, with a stronger absorbance at 353 nm. Similar to the experimental spectra shown in Fig. 5, the calculated spectra also exhibit diagnostic bands of type II and III. As in the experimental spectra, these are stronger and red-shifted for E-5 relative to Z-5. Inspection of the major FMO contributions to these bands depicted in Tables 3 and 4 showed that electronic transitions involving the HOMO−1, HOMO and LUMO orbitals were dominant. Their energies were therefore calculated and are summarised in Table 5.
From Table 5, it is clear that the energy levels of both the HOMO and HOMO−1 orbitals are similar for both alkene isomers. In contrast, the energy levels of the respective LUMO orbitals are significantly different, whereby the LUMO of E-5 is 0.81 eV lower in energy than the LUMO of Z-5. Thus, the observed relative shift of the transitions in regions II and III of the UV-Vis spectra of these compounds appears to be caused solely by the different LUMO energies of the two isomers: the lower-energy LUMO of the E-isomer results in a relative red-shift of the bands in this region. The apparent specificity of the LUMO-lowering effect for the E-stereochemistry compared to the Z-stereochemistry (as evidenced by the experimental data) is intriguing. Inspection of the HOMO and the HOMO−1 of E/Z-5 shows that these molecular orbitals are localised on the alkylidene oxindole moiety, as well as the acyl carbonyl lone pair, in both isomers. In contrast, the corresponding LUMO orbitals are more generally delocalised through the 3-acylidene 2-oxindole structure, and the conformational twist in the Z-isomer leads to less efficient overlap and less stabilisation than in the E-isomer (Fig. 10).
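The scale of this effect can be checked against the photon-energy relation λ = hc/E ≈ 1239.84 eV·nm / E. As a rough, illustrative estimate only: it assumes the full 0.81 eV LUMO stabilisation maps directly onto the transition energy, which neglects orbital relaxation and configuration mixing.

```python
PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def wavelength_nm(transition_ev):
    """Photon wavelength (nm) corresponding to a transition energy in eV."""
    return PLANCK_EV_NM / transition_ev

# If a band near 300 nm in the Z-isomer were red-shifted because the
# E-isomer's LUMO lies 0.81 eV lower (HOMO energies being similar):
e_z = PLANCK_EV_NM / 300.0          # ~4.13 eV
e_e = e_z - 0.81                    # ~3.32 eV
print(round(wavelength_nm(e_e)))    # -> 373
```

The resulting shift from ~300 nm to ~373 nm is of the same order as the observed band-II displacement between the two classes of spectra.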
Evidence of this is found in the differential density maps 12 for the S0-S1 transition of both isomers. Comparison of the two panels in Fig. 11 clearly shows that, while the decrease in electron density for both isomers is mainly located on the oxindole subunit, the concomitant increase for the Z-isomer is more localised than for the E-isomer, clearly showing the effect of the planarity of the E-isomer, which leads to a lower excitation energy for the S0-S1 transition in the E-isomer. Thus, our investigations of E/Z-5 show that a direct, semi-quantitative comparison between experimentally and theoretically derived UV-Vis spectra is possible for these systems. Therefore, the UV-Vis spectra for the 5- and 7-Cl oxindole isomers not studied experimentally, together with those of the 4- and 6-Cl isomers 12 and 14, were also calculated. The UV-Vis spectra for these E/Z-chloro-oxindoles are depicted in Fig. 12. With regard to the Z-isomers, a consistent pattern was observed for the calculated spectra that matched the experimentally derived data quite well (cf. Fig. 5). In the case of the E-isomers, however, only the 5-, 6- and 7-Cl compounds gave the expected UV-Vis spectra, whilst the E-4-Cl-oxindole provided a spectrum that was inconsistent with the expected pattern of absorbance bands. Inspection of the structure of this compound shows that in this case the steric clash between the chlorine atom and the phenyl ring ensures that the E-isomer, like the corresponding Z-isomer, is also not planar, which clearly results in a blue-shift of the absorptions and a corresponding increase in the intensity of the region II band.
The X-ray crystallography studies, together with the calculated optimised structures, allow us to formulate a rationale for the observed trends in these UV-Vis spectra. 3E-Acylidene 2-oxindoles can adopt a fully planar orientation A with the carbonyl groups opposed to minimise dipole-dipole interactions. The corresponding Z-configuration B cannot readily adopt a planar conformation because of attendant dipole-dipole interactions or allylic strain; these interactions force the acyl group to twist out of conjugation. Compound C (R ≠ H) cannot be generated synthetically, but we were able to produce a minimised structure for the compound theoretically. Unsurprisingly, this compound also showed a twist at the acyl group to minimise steric interactions, leading to a concomitant change in the absorption spectrum (Fig. 13).
Conclusions
In conclusion, we believe that the alkene geometry of 3-acylidene oxindoles can be assigned through UV-Vis spectroscopy based on the following general empirical observations (75 µM solutions in MeOH): E-isomers are expected to have two diagnostic peaks in the ranges of 330-345 nm and 415-440 nm, with extinction coefficients of 8890-13 820 M−1 cm−1 and 1200-2890 M−1 cm−1, respectively. In contrast, 3Z-acylidene oxindoles have a weaker band in the range of 290-305 nm with a shoulder around 370-390 nm, and smaller extinction coefficients of between 3980-6390 M−1 cm−1 and 890-1170 M−1 cm−1, respectively. Our studies also suggest that 3-acylidene oxindoles typically exist as E-isomers with a configurationally labile double bond, but that analogues bearing a substituent at the 4-position are exceptions that exist as Z-geometrical isomers. These findings have implications for structure-activity relationship data for this compound class (e.g. compounds 2 and 3 in Fig. 1), and could prove to be of use in the design of new bioactive molecules.
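The empirical assignment rules above can be collected into a simple decision function. The wavelength windows come directly from the text; the exact ε cut-offs (8000 and 7000 M−1 cm−1) are our own illustrative thresholds, chosen to lie between the two observed extinction-coefficient ranges.

```python
def assign_geometry(lam2, eps2, lam3, eps3):
    """Assign E/Z from the band-II and band-III maxima (nm) and extinction
    coefficients (M^-1 cm^-1), using the empirical windows from this study.
    The eps cut-offs (8000 / 7000) are illustrative assumptions."""
    if 330 <= lam2 <= 345 and 415 <= lam3 <= 440 and eps2 > 8000:
        return "E"
    if 290 <= lam2 <= 305 and 370 <= lam3 <= 390 and eps2 < 7000:
        return "Z"
    return "ambiguous"

print(assign_geometry(337, 11000, 425, 2000))  # -> E
print(assign_geometry(295, 4500, 380, 900))    # -> Z
```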
Fig. 5 UV-Vis spectra of compounds 5-18 (75 µM in MeOH). Spectra are grouped according to relatively strong (red) and weak (green) bands in regions II and III.
Fig. 11 Differential density maps for the S0-S1 transition of the Z-5 isomer and the E-5 isomer. Green denotes a decrease in electron density upon excitation, whereas purple denotes an increase.
Table 1 1H NMR shift values of the alkylidene CH in compounds 5-18. Columns: Entry; Compound a; Alkylidene CH b (ppm). a Compound configuration in parentheses assigned on the basis of X-ray crystallography. b NMR data for compounds dissolved in d6-DMSO.
Table 2
UV-Vis data for compounds 5-18 a
Table 4 Calculated electronic spectrum for Z-5. a H: HOMO and L: LUMO.
MAKING TEACHER EDUCATION RELEVANT FOR PRACTICE: THE PEDAGOGY OF REALISTIC TEACHER EDUCATION
The gap between theory and practice in teacher education has led to much criticism regarding the effectiveness of teacher education. In this article, the causes of this gap are discussed and related to a framework for teacher behaviour and teacher learning. Using this framework, the so-called 'realistic approach' to teacher education has been developed, which marks a new direction in the pedagogy of teacher education. This approach, developed at Utrecht University in the Netherlands, is described in this article, and its basic principles are discussed. Several evaluative studies into the realistic approach show its positive outcomes. Important conclusions are presented for (1) programme design, based on (2) a view of the intended process of student teacher learning, (3) the pedagogical interventions and arrangements used, and (4) the professional development of teacher educators.
Introduction
At many places in the world, including the Czech Republic, there is a growing emphasis on bridging theory and practice in teacher education. In many countries, school-based teacher education has been introduced in an attempt to overcome the criticism that teacher education is not sufficiently relevant to practices in schools (Ashton, 1996). However, without careful consideration of the pedagogy used in teacher education, there is a risk that this move towards schools is counterproductive, as will be explained below.
In this context, it is a positive development that the book entitled Linking practice and theory, the pedagogy of realistic teacher education has been translated into several languages and has recently been published in Czech.
In the present article, the main issues that are elaborated in this book will be discussed. First, we will focus on the gap between theory and practice, which has made teacher education a difficult enterprise. Next, the causes of this gap will be analysed.
Central to the article is the presentation of a three-level model of teacher behaviour and teacher learning. This model clarifies that professional learning is a bottom-up process taking place in the individual student teacher. Based on the model, the so-called 'realistic approach' to teacher education will be described. It aims at supporting the bottom-up process, starting from experiences and leading to fruitful knowledge about teaching which really influences teachers' practices. After presenting the central principles of realistic teacher education, the approach will be illustrated by looking at one typical programme element, the so-called one-to-one.
Evidence of the effectiveness of the realistic approach to teacher education will be presented through a brief description of a number of evaluative studies, which show that the approach really makes a difference. Finally, important conclusions will be presented regarding (1) programme design, based on (2) a view of the intended process of student teacher learning, (3) the pedagogical interventions and arrangements used, and (4) the professional development of teacher educators. This will also lead to some critical remarks about current professional habits in teacher education.
The Gap Between Theory and Practice
The gap between theory and practice has been a perennial issue. As early as the beginning of the 20th century, Dewey (1904) noted this gap and discussed possible approaches by which it might be bridged (see also Shulman, 1998). Nevertheless, in the course of the more than 100 years since, the relationship between theory and practice has remained the central problem of teacher education world-wide (Lanier & Little, 1986).
What has become clear is that the idea of simply transmitting important pedagogical knowledge to teachers, hoping that they will apply this knowledge in their practices, does not really work. Wideen, Mayer-Smith, and Moon (1998, p. 167) describe this traditional view as follows: The implicit theory underlying traditional teacher education was based on a training model in which the university provides the theory, methods and skills; the schools provide the setting in which that knowledge is practiced; and the beginning teacher provides the individual effort to apply such knowledge. In this model, propositional knowledge has formed the basis of university input.
Many other researchers, too, have critiqued this model. Clandinin (1995) calls it "the sacred theory-practice story", Schön (1983, p. 21) speaks about "the technical-rationality model", and Carlson (1999) names it the "theory-to-practice approach", and discusses its limitations. As Barone et al. (1996) argue, this approach often has led to a collection of isolated courses in which theory is presented with hardly any connection to practice, based on the following assumptions: 1. Theories help teachers to perform better in their profession; 2. These theories must be based on scientific research; 3. Teacher educators should make a choice concerning the theories to be included in teacher education programmes.
The traditional model has been dominant for many decades (Sprinthall, Reiman, & Thies-Sprinthall, 1996; Imig & Switzer, 1996, p. 223), although many studies have shown its failure in strongly influencing the practices of graduates of teacher education programmes. A thorough overview of these studies is presented by Wideen, Mayer-Smith, and Moon (1998), who conclude that the impact of traditional teacher education on their students' practices seems rather limited, a conclusion also drawn by the Research Panel on Teacher Education of the American Educational Research Association (Cochran-Smith & Zeichner, 2005). Several of the cited studies show that beginning teachers struggle for control, and experience feelings of frustration, anger, and bewilderment. The process they go through is more one of survival than of learning from experience.
Causes of the Gap
The causes of these problems are well-documented in the literature. A first, oft-mentioned cause of the theory-practice divide has to do with the learning process within teacher education itself, even before the stage in which theory can be applied to practice. Student teachers' prior knowledge plays a powerful role in their learning during a teacher education programme (e.g., Wubbels, 1992), and their preconceptions show a remarkable resistance to change (Joram & Gabriele, 1998). In the literature, this has been explained by the many years of experiences that student teachers have had as pupils within the educational system (Lortie, 1975; Brouwer & Korthagen, 2005).
A second, more fundamental cause has been named the feed-forward problem: "resistance from the student teacher at the time of exposure to given learnings and, later, protestations that the same learning had not been provided in stronger doses" (Katz et al., 1981, p. 21; see also Bullough, Knowles, & Crow, 1991, p. 79). This problem can also be stated as follows: in order to learn anything during teacher education, student teachers must have personal concerns about teaching or they must have encountered concrete problems. Otherwise, they do not perceive the usefulness of the theory.
A third cause has to do with the nature of teaching. Hoban (2005, p. 9) states that "what a teacher does in a classroom is influenced by the interaction of many elements such as the curriculum, the context, and how students respond to instruction at one particular time". Hoban continues by saying that this view of the nature of teaching necessitates 'holistic judgement' (cf. Day, 1999) about what, when and how to teach in relation to a particular class, and this is something for which it is hard to prepare teachers. Moreover, practice is generally ambiguous and value-laden (Schön, 1983), whereas teachers often have little time to think and thus need prompt and concrete answers to situations (Eraut, 1995). What they need is rather different from the more abstract, systematised and general expert knowledge that teacher educators often present to student teachers (Tom, 1997).
Finally, it is not only knowledge that is involved. Many studies on teacher development show that teaching is a profession in which feelings and emotions play an essential role (Day, 2004; Hargreaves, 1998), but "the more unpredictable passionate aspects of learning, teaching and leading (…) are usually left out of the change picture" (Hargreaves, 1998, p. 558). The problem of promoting fundamental professional change is first of all a problem of dealing with the natural emotional reactions of human beings to the threat of losing certainty, predictability or stability. This affective dimension is too much neglected in the technical-rationality approach, which seems to be another cause of the gap between theory and practice.
Although these causes of the gap between theory and practice are well-known, it is remarkable that many teacher education programmes still reflect the traditional 'application-of-theory model' described above. In his work as a trainer of teacher educators in various countries, the author of this article has had the opportunity to analyse the 'everyday pedagogy' of teacher education. It has clarified that basically the traditional view of teacher education has not changed, and even that many "new" approaches often take the form of sophisticated procedures to try and interest student teachers in a particular theory, for example by using video cases or having students create portfolios. This means that the fundamental idea that there exists theory that should be transferred to student teachers still represents a very dominant line of thought. The fundamental conception inherent to this line of thought is that there is a gap to be bridged. One often forgets that it was the a priori choice of the educator that created this gap in the first place. In line with this, Robinson (1998, p. 17) states: "[N]arrowing the research-practice gap is not just a matter of disseminating research more effectively or of using more powerful influence strategies."

The Essence of Teacher Behaviour and Teacher Learning

In order to further develop our understanding of the problems, but also to better realise the opportunities we have in teacher education, there is a need for a theory on teacher behaviour and teacher learning. For this purpose, a model was developed which contributes to a deeper insight into the phenomena described above (see Figure 1). The model distinguishes between three main levels, the first of which is the gestalt level, which is rooted in practical experiences and is often unconscious.
Through reflection on the gestalt level, teachers may develop a personal practical theory, and, at the next level, a logical and adequate ordering in such a theory concurring with research outcomes, called formal theory. The three levels will be explained below.
The gestalt level
Based on a general psychological perspective, Epstein (1990) argues that the manner in which humans deal with most situations is mediated by the so-called experiential body-mind system, which processes information in a rapid manner. According to Epstein, the experiential system functions through emotions and images in a holistic and often subconscious manner, which means that the world is experienced in the form of wholes, in which cognitive and emotional aspects are interconnected (Epstein, 1990, p. 168; Epstein, 1998; cf. Bargh, 1990). Epstein's analysis is highly relevant to the teaching domain, as many studies on teacher routines (e.g., Halkes & Olson, 1984) emphasise that automatic or mechanical behaviour is characteristic of much teaching. Dolk (1997) states that most teacher behaviour is immediate behaviour, i.e. behaviour occurring without reflection. A similar position is taken by Eraut (1995).
This view implies that much of a teacher's behaviour is grounded in unconsciously and instantaneously triggered images, feelings, notions, values, needs or behavioural inclinations, and often in combinations of these aspects. Precisely because they often remain unconscious, they are intertwined (Lazarus, 1991) and thus form a whole that we call a gestalt, a term based on Korb, Gorrell, and Van de Riet (1989). This implies a broadening of the gestalt concept, which was originally used just to describe the organisation of the visual field (Köhler, 1947). A gestalt is considered to be a dynamic and constantly changing entity encompassing the whole of a teacher's perception of the here-and-now situation, i.e. sensory perceptions of the environment as well as images, thoughts, feelings, needs, values, and behavioural tendencies triggered by the situation. This implies a holistic view, which concurs with the observation by brain researcher Damasio (1994, p. 83-84) that behaviour is grounded in many parallel bodily systems, and that emotion is strongly linked to the primary decision-making process (see Immordino-Yang & Damasio, 2007 for a more detailed elaboration and a model of the complex relations between cognition and emotion).
The notion of a gestalt can be illustrated with an example from a study by Hoekstra et al. (2007) into informal learning among 32 teachers. The aim of the research study was to find relationships between the teachers' behaviours and the accompanying internal processes, and their influence on their professional learning in the workplace. The 32 experienced teachers were monitored over a period of 14 months with the aid of questionnaires, digital reports on their learning experiences, and interviews. In an in-depth component of the study, four of the 32 teachers were observed more intensively, using video recordings of their teaching and post-lesson interviews. One of the teachers, Albert, was observed while teaching on the topic of potential energy. It seemed that the pupils were lost while he kept on talking. In the interview after the lesson, Albert said: I later noticed they did not have a clear idea of what that [potential energy] was. (…) And looking back, I am not quite satisfied with how I've done it. Some concepts were not clear enough to the pupils. To understand the whole story, you actually have to know more about the phenomenon 'potential energy'. I ignored that concept, because it had been talked about in the previous assignment. But in that very assignment, the question of 'what exactly is potential energy?' had not been dealt with either.
What we see here is quite a common didactical problem. The teacher went on, although, from the perspective of his objectives, something seemed to be going wrong. A sequence of actions unfolds, probably triggered by the (conscious or unconscious) need to get the concept of potential energy across, based on a (perhaps not completely conscious) notion that the concept had already been dealt with. After the lesson, Albert becomes aware of the fact that his teaching strategy was not very effective, and he also reflects on why he did what he did. This may have been triggered by the fact that he was being interviewed about the situation. In many cases, however, teachers are not really aware of the effects of their behaviour and its underlying causes, as several authors (e.g., Clark & Yinger, 1979) have found.
The level of personal practical knowledge

As noted, many of the sources of a teacher's behaviour may remain unconscious to the teacher. However, through reflection, he or she may become aware of at least some of these sources. In the example, Albert became aware of an underlying cause of his behaviour, namely his (wrong) idea about the previous assignment, and the effects of this idea on what happened in the situation. During such a reflection process, in this case a didactical reflection, notions or concepts become interrelated. Hence, when a teacher reflects, often a previously unconscious gestalt develops into a conscious network of concepts, characteristics, principles, and so on, which is helpful in describing practice. This cognitive network is called a personal practical theory. It is very much coloured by the desire to know how to act in particular situations, as opposed to having an abstract understanding of them.
The level of formal theory

If someone aims at developing a more theoretical understanding of a range of similar situations (as researchers often want and do), this may lead to the next level. This is the level at which a logical ordering is constructed in the personal practical theory formed before: the relationships within one's cognitive network are studied or several notions are connected into one coherent theory. One can only speak about reaching the third level if the resulting cognitive network concurs with formal scientific theory.
Interestingly, in the study by Hoekstra et al. (2007) mentioned above, no examples were found in which teachers demonstrated this level. Perhaps this is understandable. The third level is aimed at deep and generalised understanding of a variety of similar situations, whereas practitioners often focus on directions for taking action in a particular situation, and as a consequence, often do not reach the level of formal theory. This was also the conclusion reached in an earlier empirical study.
Level reduction
If a teacher does reach the theory level, knowledge at this level first has to become part of a personal practical theory if it is to start influencing behaviour; or, even better, it has to be integrated into a gestalt in order to become part of the teacher's routine. This is called level reduction (see Figure 1). Often, however, level reduction does not take place at all, for it requires much practising in authentic contexts, and even then friction may remain between pre-existing gestalts and the new theory. This is an important cause of the gap between theory and practice. Originally, the three-level model was developed by Van Hiele (1973, 1986) within the context of mathematics education, as an adaptation of Piaget's theory. It concurs with Epstein's (1990, 1998) distinction between an experiential and a rational system within the human organism, which reflects the distinction between the gestalt level on the one hand and the two other levels on the other. Other authors whose work shows similar lines of thinking are Johnson (1987) and Lakoff and Johnson (1999). They talk about the embodied mind, and emphasise the importance of image schematic structures, which are of a non-propositional and figurative nature, and mostly unconscious: "These are gestalt structures, consisting of parts standing in relations and organized into unified wholes, by means of which our experience manifests discernible order. When we seek to comprehend this order and to reason about it, such bodily based schemata play a central role. For although a given image schema may emerge first as a structure of bodily interactions, it can be figuratively developed and extended as a structure around which meaning is organized at more abstract levels of cognition." (Johnson, 1987, p. xix-xx).
The idea that a great deal of people's behaviour is grounded in unconscious gestalts concurs with findings from neuroscience showing that much of our decision-making is rooted in subconscious processes in our brain, and that decisions are made unconsciously, even before our conscious mind thinks we make such decisions deliberately (William, 2006). Brain researcher Gazzaniga (1999, p. 73) points towards the same phenomenon: "Major events associated with mental processing go on, measurably so, in our brain before we are aware of them." More empirical data supporting the three-level model are described in Korthagen and Kessels (1999), Korthagen and Lagerwerf (2001, pp. 185-190), and Korthagen (2010).
Realistic Teacher Education
The realistic approach is an approach to teacher education that takes into account the above analysis of the gap between theory and practice as well as the above framework regarding teacher learning and teacher behaviour. It was originally developed at Utrecht University in the Netherlands. Its five guiding principles are formulated as follows:
1. The approach starts from concrete practical problems and the concerns of student teachers in real contexts.
2. It aims at the promotion of systematic reflection by student teachers on their own and their pupils' wanting, feeling, thinking and acting, on the role of context, and on the relationships between those aspects.
3. It builds on the personal interaction between the teacher educator and the student teachers and on the interaction amongst the student teachers themselves.
4. It takes the three-level model of professional learning into account, as well as the consequences of the three-level model for the kind of theory that is offered.
5. A realistic programme has a strongly integrated character. Two types of integration are involved: integration of theory and practice and the integration of several academic disciplines.
Reflection
From the above it is clear that reflection plays an important role in the realistic approach, as it helps to promote level transitions. The approach to reflection used in realistic teacher education is based on an alternation between action and reflection. Korthagen (1985) distinguishes five phases in this process: (1) action, (2) looking back on the action, (3) awareness of essential aspects, (4) creating alternative methods of action, and (5) trial (see Figure 2). This five-phase model is called the ALACT model (named after the first letters of the five phases). The fifth phase is again the first phase of the next cycle, which means that we are dealing with a spiral model: the realistic approach aims at an ongoing process of professional development. Here is an example of a student teacher, Judith, going through the phases of the ALACT model under the supervision of a teacher educator: Judith is irritated by a pupil named Jim. She has the feeling that Jim always tries to avoid having to do any work. Today she noticed this again. In the preceding lesson the children received an assignment for three lessons to be worked on in pairs; they would hand in a written report at the end. Today, during the second lesson, Judith had expected everyone to work hard on the assignment and to use this second lesson as an opportunity to ask for her help. Jim, however, appeared to be busy with something completely different. In the lesson she reacted to this by saying: "Oh, so again you are not doing what you are supposed to.…I think the two of you will again end up with an unsatisfactory result!" (Phase 1: action) During the supervision, Judith becomes more aware of her irritation and how this influenced the way she acted. When the supervisor asks her how her reaction might have affected Jim, she realises that her irritation may, in turn, have caused irritation in Jim, probably causing him to be even more demotivated in his work on the assignment.
(Phase 2: looking back) By this analysis she becomes aware of the escalating negativity which is evolving between her and Jim and she starts to realise how this leads to a dead end (phase 3: awareness of essential aspects). However, she does not see a way out of the escalation. Her supervisor shows understanding of Judith's struggle. She also brings in some theoretical notions about escalating processes in the relationship between teachers and pupils, such as the often occurring pattern of 'more of the same' (for the underlying formal theory, see Watzlawick, Weakland, & Fisch, 1974) and the guidelines for how to de-escalate by changing this pattern by deliberately giving a positive reaction. This is the start of phase 4: creating alternative methods of action. She compares these guidelines with her impulse to be even stricter and put more constraints on Jim. Finally, she decides to try out (phase 5) a more positive, empathetic approach, which starts by asking Jim about his plans. This is first done in the supervision session: the supervisor asks Judith to practise such reactions and includes a mini-training exercise in the giving of empathetic reactions. If the results of this new approach are reflected on after the try-out in a real situation with Jim, phase 5 becomes the first phase of the next cycle of the ALACT model, thus creating a spiral of professional development.
As we see in the example, during phase 3 of the ALACT model, when the student teacher starts to become aware of the essence of the situation she is reflecting on, the teacher educator can bring in theoretical elements, but these need to be tailored to the specific needs of the student teacher and the situation at hand. As explained above, this changes the nature of relevant theory brought in during a supervisory session: it seldom takes the form of formal theory.
The idea of learning by reflection is in harmony with the three-level model introduced above and can also be applied to other components of teacher education, such as group seminars. The teacher educator may, for example, create an experience in class which is the basis for an ALACT process in the whole group. An example of this is the idea of organising ten-minute lessons given by student teachers to their peers.
The promotion of reflection is not only important for the supporting of level transitions. When teachers learn how to reflect during their preparation for the profession, by systematic use of the ALACT model, for example, they develop a growth competence, i.e. the ability to direct their own professional development during the rest of their careers. If they experience how this can be done in collaboration with their peers, this prepares them for peer-supported learning during the rest of their careers, which creates a counterbalance to the often somewhat individualistic culture of teaching that exists in many schools.
An Example: the One-to-One

This section describes an example of a programme element, namely the one-to-one, which has been developed in response to the problem that teaching a whole class on a regular basis appears to be a complex experience for novice teachers, and that this experience tends to foster gestalts and concerns related to 'survival'. This is why the first teaching-practice period has been simplified. Each prospective teacher gives a one-hour lesson to one high-school pupil once a week for eight weeks. Neither the university supervisor nor the mentor teacher is present during actual one-to-one lessons, but there are supervisory sessions and seminar meetings during the one-to-one period. The lessons are recorded on audio or video, and are subsequently the object of detailed reflection by the student teacher. This reflection is structured by means of the ALACT model.
During the one-to-one period, the student teachers form pairs. Of the eight one-to-one lessons, four are discussed by the student teachers within these pairs, and four lessons are discussed by the pair and the teacher educator. The teacher educator can suggest small theory-based ideas that fit the processes the student teachers are going through. These ideas can be derived from a variety of theoretical backgrounds. After both types of discussion, each student teacher writes a report that brings together the most important conclusions.
A general finding is that by use of audio and video recordings the student teachers rapidly discover that they failed to listen to what the pupil was saying, or started an explanation before the problem was even clear to the pupil. As one of our student teachers put it: "The one-to-one caused a shift in my thinking about teaching, from a teacher perspective to a pupil perspective." This quote is representative of the learning processes of most student teachers in the one-to-one. However, there also appear to be considerable differences between student teachers in terms of what is learnt during such a one-to-one arrangement. To give some examples, one student teacher focused on a lack of self-confidence in the pupil she was working with, and started a search for ways of improving the child's self-image, while another student teacher was confronted with her own tendency to explain things at a fairly abstract level. The latter developed the wish to include more concrete examples.
In sum, the one-to-one gives student teachers many opportunities to learn on the basis of their own experiences and the concerns they develop through these experiences. In this way the student teachers reflect on, and sometimes question, their initial gestalts and develop a personal practical theory that is meaningful to them. In this respect, the one-to-one is a good illustration of realistic teacher education.
Once student teachers have developed their own personal practical theory, it becomes important to offer them theoretical knowledge from professional articles and books in order to deepen, challenge and adapt their personal theories and help them reach the level of formal theory. For this reason, the final part of the Utrecht programme has curriculum elements in which experts in areas such as learning psychology or classroom interaction offer theoretical knowledge to students. It is important at this stage, too, that theory is built onto the experiences and insights the students themselves have already developed.
Empirical Support for the Realistic Approach

As Zeichner (1999) notes, what really happens in teacher education programmes often remains obscure. Processes and outcomes are seldom studied systematically. In contrast to this general picture, the realistic approach is well researched. Of interest are the following evaluative studies, described in more detail in the book mentioned above and in its Czech translation.
1. A national evaluation study of all Dutch secondary-teacher education programmes, carried out by an external research office, showed that 71% of a sample of graduates of the Utrecht programme (n=81) rated their professional preparation as good or very good (Luijten, Marinus, & Bal, 1995; Samson & Luijten, 1996). In the total sample of graduates from all Dutch secondary-teacher education programmes (n=5135) this percentage was only 41%, which shows a statistically significant difference (p<.001).
2. An evaluative overall study among all graduates of the Utrecht University programme carried out at the end of the 1990s, showed that 86% of the respondents considered their preparation programme as relevant or highly relevant to their present work as a teacher (Koetsier, Wubbels, & Korthagen, 1997).
3. An in-depth study by Hermans, Créton, and Korthagen (1993) in a cohort group of twelve student teachers, showed that all experienced a seamless connection between theory and practice. In the context of the above-cited research on the problematic relationship between theory and practice in teacher education, this is a remarkable result. Some quotes from student teachers' evaluations are: "To my mind, the integration theory/practice was perfect"; "Come to think of it, I have seen and/or used all of the theory in practice"; "The things dealt with in the course are always apparent in school practice." However, one may wonder here what these student teachers mean by 'theory'. Considering the processes and contents of the programme, probably they are not referring to purely formal theory but to a mixture of personal practical theory and more formal theory. Perhaps this is the essence of what a real integration of theory and practice might mean.
4. An extensive longitudinal study by Brouwer and Korthagen (2005) focused on the relationship between the programme design and outcomes of the realistic approach. At various moments during the programme, and during the first two years in which the graduates worked as teachers, quantitative and qualitative data were collected among 357 student teachers, 31 teacher educators and 128 mentor teachers.
Positive influences on these teachers' practices appeared to depend primarily on the degree to which theoretical elements in their preparation programme were perceived by the student teachers as being functional for practice during their student teaching, and on the degree of cyclical alternation between school-based and university-based periods in the programme. In addition, a gradual increase in the complexity of activities and demands placed on the student teachers appeared to be a crucial factor in the integrating of theory and practice.
5. In 1992 and 1997 external evaluations of the programme performed by official committees of experts on teacher education, researchers, and representatives of secondary schools led to highly positive outcomes. In 1997, 25 out of 34 evaluation criteria scored 'good' or 'excellent', including the criteria 'value of programme content' and 'professional quality of the graduates'. The school principals in the committees reported that they considered Utrecht graduates to be the best teachers in their schools. In the nine other criteria the programme received the qualification 'sufficient'. No other Dutch teacher-education programme received such high evaluations.
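As an aside, the statistical contrast reported in the first study above (71% of n=81 Utrecht graduates versus 41% of n=5135 graduates nationally, p<.001) can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the success counts are reconstructed by rounding the reported percentages, and the helper function `two_proportion_z` is our own, not taken from the cited study.

```python
from math import sqrt, erf

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2))) / 2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Approximate counts: 71% of 81 Utrecht graduates vs 41% of 5135 graduates overall
z, p = two_proportion_z(round(0.71 * 81), 81, round(0.41 * 5135), 5135)
print(f"z = {z:.2f}, p = {p:.2g}")  # p lands far below the reported .001 threshold
```

Even with the rounding of counts, the resulting z-statistic is above 5, consistent with the p<.001 reported in the study.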
Implications for Teacher Education
The realistic approach concurs with the model of teacher learning proposed by Clarke and Hollingsworth (2002), who also advocate "[the placing of] 'the pedagogy of teachers' (that is, the theories and practices developed by teachers) at the heart of our promotion of the professional growth of teachers" (p. 965). It should be emphasised that the development of a programme based on the principles of realistic teacher education may take much time and energy, especially as it requires that teacher educators assume a special and often unconventional role. To achieve the following, they often need to go through a deep process of professional change that affects their professional identity: 1. They must be able to create suitable learning experiences for student teachers, in which these student teachers can develop fruitful gestalts as the basis for the next step. 2. They must be competent in promoting further awareness in student teachers as the student teachers reflect on their gestalts and thus develop fruitful personal and formal theories. It is often helpful to take as a starting point for reflection one concrete, recently-experienced and relatively short teaching situation that still evokes some concern or question in the student teachers. It is our experience that for many teacher educators, this is not an easy role to take. 3. They must be able to offer theoretical notions based on empirical research in such a way that these notions fit the student teachers' reflections on their existing gestalts and support them as they develop helpful practices. Moreover, after the students have developed personal practical theories, they should reflect on the relation between more formal theories and their own thinking. Only then will a real integration of practice and theory take place.
The realistic approach to teacher education has consequences not only for the types of interventions teacher educators should make to promote the intended learning process in the student teachers but also at the organisational level of teacher-education curricula. First of all, linking theory and practice with the aid of the ALACT model requires frequent alternation of school teaching days and specific meetings aimed at the deepening of teaching experiences. Secondly, in order to harmonise the interventions of school-based mentor teachers and institute-based teacher educators, close cooperation between the schools and the teacher-education institute is necessary. Not every school may be suitable as a practicum site: the school must be able to offer a sound balance between safety and challenge and a balance between the goal of serving student teachers' learning and the interests of the school.
The approach advocated here implies that it is impossible to make a clear distinction between different subjects in the teacher-education programme. The realistic approach is not compatible with a programme structure showing separate modules such as 'subject matter methods', 'general education', 'psychology of learning', and so forth, meant to provide student teachers with knowledge they can later apply to their own practices. Relevant and realistic teacher learning is grounded in gestalts formed during experiences, and teaching experiences are not as fragmented as the structure of many teacher-education programmes would suggest.
All this implies the need for professional development of teacher-education staff and mentor teachers, an issue often overlooked. Most teacher educators do not receive any formal preparation for this profession, whereas several authors emphasise that being a good teacher does not automatically mean being a good teacher educator (Arizona group, 1995; Dinkelman, Margolis, & Sikkenga, 2006; Murray & Male, 2005). The team of teacher educators at Utrecht University has invested much time and energy in their own professional development, through training sessions, intensive staff meetings, all kinds of collegial support, and structured individual reflection. Without such an investment in the professional development of teacher educators the changing of traditional habits in teacher education would appear to be a difficult matter.
Conclusion
In conclusion, it is possible to bridge the gap between theory and practice in teacher education if we put the emphasis on student teachers' experiences, concerns, and existing gestalts, and work towards level transitions as described by the three-level model of teacher behaviour and teacher learning. Here the principles of realistic education provide a gateway. As we have seen, teacher education can make a difference, but this requires (1) careful programme design, based on (2) a clear view of the intended process of teacher learning, (3) specific pedagogical interventions, and (4) an investment in the education of teacher educators (Korthagen, Loughran, & Russell, 2006). In the development of a programme based on the principles of realistic teacher education, each of these components may take much time and energy, especially as they require from teacher educators a specific and often unconventional role.
A warning has to be given regarding an extreme elaboration of the realistic approach. In many programmes in the world at large, the traditional approach of 'theory first, practice later' has been replaced by the adage 'practice first, theory later'. Many alternative programme structures have been created in which novice teachers receive very little theoretical background and teacher education becomes more of a process of guided induction into the tricks of the trade. Often this trend is influenced by the need to solve the problem of teacher shortages. Although this development may satisfy those teachers, politicians and parents who criticise traditional practices in teacher education, there is a great risk involved. The balance seems to shift completely from an emphasis on theory to reliance on practical experiences. Such an approach to teacher education does not, however, guarantee success. Long ago, Dewey (1938, p. 25) stated that "the belief that all genuine education comes about through experience does not mean that all experiences are genuinely or equally educative" (cf. Loughran, 2006, p. 22). As discussed above, teaching experience can be a process of mere socialisation into established patterns of practice rather than an opportunity for sound professional development (cf. Wideen, Mayer-Smith, & Moon, 1998). There is a risk that in a 'practice first' approach the basic question, namely how to integrate theory and practice, will remain unsolved. This integration is the basic feature of the realistic approach, and this article may have clarified that this requires much more than a shift away from university-based teacher education towards a school-based alternative.
Moreover, as we have emphasised above, student teachers have to learn how to direct their own professional growth through the use of structured reflection as a means of integrating theory and practice. Hence too much emphasis on learning the 'tricks of teaching' is counterproductive to life-long professional learning.
Recent Developments
Currently there are new developments taking place in the theory of realistic teacher education. In particular, significant changes are taking place in the approach to reflection. The ALACT model is in itself only a process model and does not describe the content of the reflection. To fill this gap, a model has been developed which describes content levels of reflection. This so-called onion model appears to be helpful for deepening teacher reflection. It describes six such levels: (1) environment, (2) behaviour, (3) competencies, (4) beliefs, (5) professional identity, and (6) mission (Korthagen, 2004). This onion model can be applied to a variety of different contents of teachers' reflections, for example didactical or pedagogical reflections, or reflections about collaboration with colleagues. We talk about core reflection if the inner levels (5 and 6) are included in the reflection process and if the person considers the relations of these inner levels with the more outer levels of competencies, behaviour, and environment (Korthagen & Vasalos, 2005).
Moreover, under the influence of positive psychology (Seligman & Csikszentmihalyi, 2000), the importance has been discovered of reflection on positive experiences, successes and ideals instead of on problems and failures. Such a shift in focus makes it easier to include the inner levels of the onion model in the reflection process. This implies that concerns and ideals deeply ingrained in teachers' thinking are touched upon and used as starting points for deep reflection and enduring professional change. Recent research has shown the strong impact of this new view of reflection on the supervision of teachers (Meijer, Korthagen, & Vasalos, 2009; Hoekstra & Korthagen, 2011).
Within the limitations of the present article we cannot address this area in greater depth, but this brief sketch of recent developments illustrates that the realistic approach is not a static framework but rather a dynamic view of teacher education that is open to adaptation and cultural change. This view continues to evolve, and as a result of the translation of publications on the realistic approach into many different languages, this evolution is currently taking place in a variety of countries at the same time. It is to be hoped that this will have a beneficial effect on teachers and pupils all over the world.
"year": 2018,
"sha1": "9b931203ec534deb135efab209b14ae1f32d36e8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14712/23363177.2018.99",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b360aec9c4295ee56a0e4eb128d698d4c97b70a2",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
A tumor-microenvironment-responsive nanomaterial for cancer chemo-photothermal therapy
Taxol (TAX) is a typical anticancer drug that is widely used in clinical treatment of cancer, while gold nanorods (AuNRs) are a kind of well-known material applied for photothermal therapy (PTT). The therapeutic outcome of TAX in chemotherapy is however limited by drug resistance, while AuNRs often show poor accuracy in PTT. To optimize the functions of TAX and AuNRs, we developed a hydrogen peroxide (H2O2)-triggered nanomaterial (LV–TAX/Au@Ag) for combined chemo-photothermal therapy. In normal tissues, TAX is protected in the lipid bilayer and isolated from the surrounding normal cells, while AuNRs are coated with silver shells and show low photothermal capacity. However, after reaching the tumor tissues, the silver shells can be etched by endogenous H2O2 in the tumor microenvironment, and the photothermal properties of AuNRs are then recovered. Meanwhile, the generated oxygen destabilizes the LV, which makes the 100 nm sized nanosystems disassemble into the smaller sized TAX and AuNRs, leading to the deep penetration and direct interaction with tumor tissues. The related in vitro experiments proved the validity of this “turn off/on” effect. Extensive necrosis and apoptosis were observed in the tumor tissues and the proliferation of solid tumor was greatly suppressed due to this combined chemo-photothermal therapy. In addition, no significant damage was found in normal tissues after the treatment of LV–TAX/Au@Ag. Therefore, the strategy to achieve environmental response by modifying the photothermal agents enhanced the efficiency and safety of nanomedicine, which may help improve cancer treatment.
Introduction
Cancer has always been the focus of people's attention due to its horrific fatality rate. 1,2 Chemotherapy is the most typical and conventional way that is widely applied to cancer treatment in the clinic. 3 Nanocarriers like liposomes have then been developed for prolonged circulation life and thus better therapeutic effect. [4][5][6][7][8][9] The main components of liposomes are phospholipids and cholesterol, which are of high biocompatibility due to their similar structure with biological membranes. Hydrophobic drugs can be loaded into the lipid bilayers while hydrophilic drugs can be loaded into the liquid core of liposomes. Moreover, liposomes with a size of about one hundred nanometers are able to escape from arrest by the reticulo-endothelial system (RES) due to their hydrophilic surface and can be further targeted to the tumor tissues via the EPR effect. [10][11][12][13] However, the release of drugs after arrival at the tumor tissues is still a big challenge. The specific characteristics such as low pH and hypoxia in the tumor microenvironment are utilized by researchers with the aim of controlled release. 14,15 Liposomes composed of pH-sensitive lipids or fabricated with pH-sensitive polymers are stable in normal tissues and destabilize under acidic conditions. [16][17][18] Liposomes conjugated with hypoxia-sensitive molecules like nitroimidazole also have the tendency to disintegrate due to the change of water solubility under hypoxic conditions caused by the transformation from nitroimidazole to aminoimidazole. 19,20 Therefore, the pH-responsive or hypoxia-responsive liposomes could release drugs only after reaching the tumor sites. Besides, other drug release triggers such as light and temperature are also utilized in drug delivery systems with controlled release.
9,[21][22][23] Although drug delivery systems with controlled release can deliver anticancer drugs to the tumor tissues to a great extent, current chemotherapy still remains unsatisfying because of multiple drug resistance (MDR). The combination of chemotherapy with photothermal therapy (PTT) has emerged in recent years as a promising strategy to improve the therapeutic effect, as the hyperthermia induced by near-infrared light can directly kill cancer cells. 24,25 As previously reported, a single round of PTT combined with a sub-therapeutic dose of drugs could elicit robust anti-tumor immune responses and eliminate local as well as untreated, distant tumors in >85% of animals bearing CT26 colon carcinoma. 26 However, how to realize the combination therapy efficiently and securely is still a big challenge.
To achieve combined chemo-photothermal therapy, researchers have taken advantage of surface-modifiable photothermal materials to load anticancer drugs, such as loading doxorubicin (DOX) into nanographene oxide or conjugating curcumin on the surface of AuNRs. 27,28 Considering the protection of drugs, many researchers have also tried to wrap chemical drugs and photothermal agents together in nanocarriers. For example, Zheng et al. have used a single-step sonication method to load doxorubicin (DOX) and indocyanine green (ICG) in self-assembled polymer nanoparticles for combination therapy. The combined treatment not only synergistically induced the apoptosis and death of DOX-sensitive MCF-7 cells, but also showed great cytotoxicity to DOX-resistant MCF-7/ADR cells. 25 Similarly, Cao et al. have also integrated DOX, ICG, and gadolinium(III)-chelated silica nanospheres to form a theranostic platform combining chemotherapy, PTT, and magnetic resonance imaging (MRI). The complex could effectively improve the therapeutic efficacy compared to the single treatment and showed the ability to act as a T1-type MRI contrast agent. 29 However, these approaches have not fully developed the potential of photothermal agents. Moreover, simple physical encapsulation cannot affect the properties of the photothermal agents, suggesting that the normal tissues around the material may also be damaged under irradiation. The safety of the treatment is thus compromised greatly.
Herein, we developed a tumor-microenvironment-responsive nanomaterial (LV-TAX/Au@Ag) for combination therapy, whose chemotherapy and PTT are turned on by endogenous hydrogen peroxide (H2O2). As shown in Fig. 1, the photothermal agent gold nanorods (AuNRs) were first coated with silver (Au@Ag) and then encased in the liquid core of lipid vesicles (LV), while the anticancer drug taxol (TAX) was loaded in the hydrophobic bilayers of the LV. Due to the presence of the silver shells, the ability of the AuNRs to absorb near-infrared light was reduced significantly. Meanwhile, the toxicity of TAX was also suppressed because of the LV coating. Therefore, the functions of TAX and AuNRs were temporarily inhibited under normal physiological conditions, which meant that LV-TAX/Au@Ag could not cause damage to cells in normal tissues even under near-infrared light. However, when the developed LV-TAX/Au@Ag reached the tumor via the EPR effect, the endogenous H2O2 in the tumor microenvironment was utilized as the trigger of combination therapy through etching of the silver at the surface of the AuNRs. [30][31][32] The silver shells of Au@Ag were etched by H2O2, producing oxygen and silver ions and baring the AuNRs, which recovered the photothermal conversion capacity of the AuNRs. Thus, the accuracy of PTT was greatly improved and the heat-induced damage to normal cells was reduced significantly. Furthermore, the generated oxygen could then rupture the LV and result in the release of TAX and AuNRs. The released materials could then further penetrate into the solid tumor due to their smaller size. TAX could inhibit the mitosis of tumor cells, while the bare AuNRs could heat up and further destroy the tumor cells under 808 nm laser irradiation, leading to an enhanced therapeutic effect compared with the single treatment.
[33][34][35] Relevant measurements combined with in vitro and in vivo experiments were carried out to prove that LV-TAX/Au@Ag was able to diagnose and treat HeLa-tumor-bearing mice efficiently without damaging the normal tissues in the process of combination therapy. Therefore, we successfully combined chemotherapy and PTT through physical encapsulation and surface modification, and suppressed the lethality of the chemical drugs and photothermal materials to normal tissues during treatment, thus improving the efficiency and safety of nanomedicine.
Preparation of AuNRs and Au@Ag
AuNRs were synthesized through a seed-mediated growth method as previously reported. 36,37 Briefly, the whole process was divided into three steps. First, 100 mL of HAuCl4 (0.01 M) and 1.88 mL of CTAB (0.2 M) were added to 6.25 mL of deionized water. 0.5 mL of NaBH4 (0.01 M) was then added to the solution to form the gold seed solution. Next, the growth solution was prepared by adding 5.4 mL of AgNO3 (0.01 M) and 5.4 mL of ascorbic acid (0.1 M) to an aqueous solution containing 29.7 mL of HAuCl4 (0.01 M), 425.7 mL of CTAB (0.2 M) and 427.5 mL of deionized water. At last, 7.2 mL of gold seed solution was added to the mixture solution, which was kept at 38 °C overnight. The reaction solution was centrifuged at 12 000 rpm for 20 min and washed with deionized water three times to purify the prepared AuNRs. The purified AuNRs were lyophilized and stored at 4 °C for further usage. In order to obtain Au@Ag, a modified procedure was used as previously reported. 38,39 Herein, 0.1 mL of AgNO3 (0.01 M) and 50 mL of ascorbic acid (0.1 M) were added to 50 mL of CTAB (0.1 M) solution containing 1 mg of AuNRs. 0.5 mL of NaOH (0.1 M) was then added to the mixture to raise the pH above 10, and the mixture was vigorously stirred for 1 h to ensure a complete reduction reaction. Finally, the reactant was also centrifuged at 12 000 rpm for 20 min and washed with deionized water three times for purification.
Preparation of LV-TAX/Au@Ag
The surface of Au@Ag was first modified by adding 0.5 mg of prepared Au@Ag to 1 mL of DOPC (0.05 M in chloroform). 7,40 The mixture was then gently stirred for 2 h at room temperature and collected via centrifugation. After that, a mixture of DOPC : cholesterol : TAX (8 : 1 : 1) was dissolved in chloroform at a concentration of 2 mg mL⁻¹. Cholesterol was added in order to enhance the rigidity of the LV for better tumor penetration efficiency, as previously reported by researchers. 8 250 mL of the mixture in the vial was evaporated under a nitrogen flow to form the lipid film. The resulting lipid film was then kept in a vacuum oven at room temperature overnight to remove any residual chloroform. Afterwards, the lipid film was hydrated in 1 mL of PBS containing the surface-modified Au@Ag with occasional shaking for 2 h. At last, the hydrated solution was sonicated for 20 min and purified via centrifugation to acquire the developed LV-TAX/Au@Ag. The encapsulation rate of TAX was subsequently measured by high performance liquid chromatography (HPLC) (Waters, USA). 41 Moreover, LV-TAX were prepared according to the above steps without the addition of Au@Ag, while LV-Au@Ag were prepared in the absence of TAX. LV-AuNRs were obtained by replacing Au@Ag with AuNRs during the synthesis of LV-Au@Ag. The synthetic procedures of LV-TAX/AuNRs were the same as those of LV-TAX/Au@Ag, mixing AuNRs instead of Au@Ag with DOPC.
Characterization
The morphology was examined by a transmission electron microscope (TEM) (JEOL, Japan). The average size was measured by dynamic light scattering (DLS) (Malvern, UK). The surface element composition was investigated by X-ray photoelectron spectroscope (XPS) (PHI 5000 Versa Probe, Japan). The absorption spectra of nanomaterials were measured by ultraviolet-visible (UV-vis) spectrophotometer (YOKE, China).
In vitro drug release
The in vitro drug release profiles of LV-TAX/AuNRs and LV-TAX/Au@Ag were measured through a dialysis method as previously reported. 41 The solution of LV-TAX/AuNRs or LV-TAX/Au@Ag was first put in a dialysis bag. The dialysis bag was then incubated in 200 mL of PBS at 37 °C with magnetic stirring. 0.7 mL of incubation medium was taken out at determined time points and 0.7 mL of fresh PBS was supplied accordingly. The collected 0.7 mL of aqueous solution was subsequently mixed with 0.3 mL of acetonitrile. The concentration of TAX in the mixture was finally measured by HPLC. On the other hand, in order to simulate the tumor microenvironment, 200 mL of PBS containing H2O2 (10 mM) was used to replace the pure PBS at the beginning.
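Converting the HPLC concentrations into a cumulative release curve requires a small correction: each 0.7 mL sample withdrawn from the 200 mL bath removes some drug, which must be added back in the bookkeeping. A minimal sketch of that calculation (the function name and the total TAX dose are illustrative, not from the paper):

```python
def cumulative_release_percent(concs_mg_per_ml, dose_mg,
                               v_total_ml=200.0, v_sample_ml=0.7):
    """Cumulative % of TAX released at each sampling point,
    correcting for drug removed in earlier 0.7 mL samples."""
    released = []
    withdrawn_mg = 0.0  # drug mass already taken out of the bath
    for c in concs_mg_per_ml:
        total_mg = c * v_total_ml + withdrawn_mg  # in bath + sampled out
        released.append(100.0 * total_mg / dose_mg)
        withdrawn_mg += c * v_sample_ml           # this sample leaves the bath
    return released
```

With this correction the release percentage stays consistent even though the bath is diluted by fresh PBS at every time point.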
In vitro photothermal study
The heating profiles of LV-TAX/AuNRs, LV-TAX/Au@Ag and LV-TAX/Au@Ag etched with H2O2 were measured by a thermometer in vitro, with PBS as a contrast. These materials were first dispersed in PBS such that the concentration of Au was uniformly 1 mg mL⁻¹. The solutions were irradiated by an 808 nm laser (1 W cm⁻²) and the real-time temperature was measured every minute. The heating curves of LV-TAX/Au@Ag etched with H2O2 at different Au concentrations were also measured subsequently.
MTT assay
The safety of LV-TAX/Au@Ag towards human umbilical vein endothelial cells (HUVEC) was measured by MTT assay. HUVEC were seeded in 96-well plates at a density of 10⁴ cells per well in Dulbecco's modified eagle medium (DMEM) and incubated until the confluence of cells reached 80% in each well (37 °C, 5% CO2). Then the medium was replaced by 100 mL of fresh medium with different Au concentrations of LV-TAX/Au@Ag. After 4 h, the designated wells were irradiated by an 808 nm laser (1 W cm⁻²) for 6 min. Afterwards, the medium of all wells was removed and the cells were washed three times with PBS (pH = 7.4). Then 20 mL of MTT solution (2.5 mg mL⁻¹ in PBS) and 80 mL of culture medium were added into each well, followed by incubation for another 2 h. The medium was then aspirated and 200 mL of dimethylsulfoxide (DMSO) solution was added to each well. After 15 minutes, the absorbance at 490 nm was measured using an iMark Enzyme mark instrument (Bio-Rad Inc., USA). The cell viability was calculated according to a previous report. 42 Moreover, the in vitro cytotoxicity of LV-TAX/AuNRs against HUVEC was also measured in the same way as a contrast.
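Cell viability in an MTT assay is conventionally taken as the background-corrected absorbance of a treated well relative to the untreated control; the exact formula of the cited report is not reproduced here, so the following is only a generic sketch:

```python
def cell_viability_percent(a_sample, a_control, a_blank=0.0):
    """Viability (%) from 490 nm absorbances: treated well vs.
    untreated control, both corrected by a cell-free blank."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)
```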
In vitro apoptotic analysis
The in vitro cytotoxicity of different materials against HeLa cells was examined by fluorescence imaging and flow cytometry, respectively. HeLa cells were seeded in 6-well plates at a density of 10⁶ cells per well in DMEM and incubated until the confluence of cells reached 80% in each well (37 °C, 5% CO2). Different materials including PBS, TAX, LV-AuNRs, and LV-TAX/Au@Ag (etched with H2O2) were respectively added to the wells (the Au concentration of both LV-AuNRs and LV-TAX/Au@Ag was 1 mg mL⁻¹ while the concentration of TAX was 0.1 mg mL⁻¹). PBS was selected as the control group, while TAX, LV-AuNRs, and LV-TAX/Au@Ag represented chemotherapy, PTT, and combination therapy, respectively. After 12 h, the wells were irradiated by an 808 nm laser for 6 minutes. For fluorescence imaging, HeLa cells were simply stained with calcein-AM/PI and observed by a fluorescence microscope. The excitation and emission wavelengths of calcein-AM were 490 nm and 515 nm, while the excitation and emission wavelengths of PI were 535 nm and 617 nm. For flow cytometry, the whole medium and cells in each well were collected. The mixture was stained with annexin V-FITC/PI and observed by flow cytometer (FCM).
In vivo infrared thermal imaging
Three groups of nude mice bearing HeLa tumors were intravenously injected with 100 mL of PBS, AuNRs, and LV-TAX/Au@Ag (1 mg mL⁻¹), respectively. The mice were then irradiated by an 808 nm laser (1 W cm⁻²) 12 h after injection. Meanwhile, the whole body of each mouse was imaged using an infrared thermal camera to monitor the change of temperature during the irradiation.
Bio-distribution study
The HeLa-tumor-bearing mice were intravenously injected with 100 mL of LV-TAX/Au@Ag solution (1 mg mL⁻¹). The mice were sacrificed and the tumor and main organs (heart, liver, spleen, lung, and kidney) were extracted 2, 4, 12, 24, and 48 h post injection, respectively. Moreover, the blood was also collected subsequently. After being washed and weighed, the tumor and organs were dissolved in aqua regia solution. The concentration of gold was eventually determined by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES) (Optima 5300DV, USA).
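Bio-distribution data from ICP-OES are usually reported as the percentage of the injected dose per gram of tissue (%ID/g). Assuming the injected dose is the 100 µL of 1 mg mL⁻¹ suspension given above (i.e. 100 µg Au; this normalisation is an assumption, not stated in the paper), a sketch of the conversion:

```python
def percent_id_per_gram(au_in_organ_ug, organ_mass_g,
                        injected_dose_ug=100.0):
    """%ID/g: gold recovered in an organ as a fraction of the
    injected dose, normalised by the organ's wet weight."""
    return 100.0 * (au_in_organ_ug / injected_dose_ug) / organ_mass_g
```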
Histological examination
The HeLa-tumor-bearing mice were intravenously injected with 100 mL of LV-TAX/Au@Ag solution (1 mg mL⁻¹). The mice were irradiated by an 808 nm laser (1 W cm⁻²) for 6 min after the injection. The mice were sacrificed and the main organs (heart, liver, spleen, lung, and kidney) were extracted 1, 7, 14, 21, and 28 d post treatment, respectively. The organs were then put into 4% paraformaldehyde for at least 7 days. Afterwards, all the organs were made into sections, stained with H&E, and observed under the microscope.
In vivo antitumor effect
HeLa-tumor-bearing models were established by challenging nude mice with a subcutaneous injection of HeLa cells (10⁷ per mouse) at the right armpits. The mice were randomly divided into five groups (Control, LV-TAX, LV-Au@Ag, LV-TAX/AuNRs, and LV-TAX/Au@Ag) when the tumor volume reached about 500 mm³. Afterwards, the mice in the five groups were intravenously injected with 100 mL of the corresponding solutions (the mice in the control group were injected with PBS), respectively. Furthermore, the mice were irradiated for 6 min by an 808 nm laser (1 W cm⁻²) 12 h after the injection. The tumor volume and survival number of mice in each group were measured and recorded every two days. The relative tumor volume (V/V0) was then calculated, where V0 represented the initial volume of the tumor. After 15 days, the mice were sacrificed and the tumors were collected and stored in 10% neutral buffered formalin solution. At last, the tumors were made into slices and stained with hematoxylin and eosin (H&E), TdT-mediated dUTP nick end labeling (TUNEL) and Ki67, respectively. The stained slices were finally observed and imaged by a fluorescence microscope. All protocols for animal tests were reviewed and approved by the committee on animals at Nanjing University and carried out according to guidelines given by the National Institute of Animal Care.
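Caliper-based tumor volumes are commonly estimated with the ellipsoid formula V = L × W²/2 (an assumption for illustration; the paper does not state which formula was used), and the reported V/V0 is each volume normalised to the pre-treatment value:

```python
def tumor_volume_mm3(length_mm, width_mm):
    """Common ellipsoid approximation for caliper measurements:
    V = L * W^2 / 2, in mm^3."""
    return length_mm * width_mm ** 2 / 2.0

def relative_volumes(volumes_mm3):
    """V/V0 time series, normalised to the first (initial) volume."""
    v0 = volumes_mm3[0]
    return [v / v0 for v in volumes_mm3]
```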
Statistical analysis
All data were analyzed using SPSS 16.0 statistical software (SPSS Inc, Chicago, IL, USA) and are expressed as mean values. Differences between data from the experimental and control groups were analyzed using one-way analysis of variance (ANOVA) to determine statistical significance. P < 0.05 was considered statistically significant.
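The study used SPSS, but the F statistic underlying a one-way ANOVA is straightforward to reproduce; this pure-Python sketch shows the between-group versus within-group mean-square ratio on which the significance test rests:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over k groups:
    mean square between groups / mean square within groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The resulting F value is compared against the F(k−1, n−k) distribution; P < 0.05 marks significance, as stated above.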
Preparation and characterization of LV-TAX/Au@Ag
AuNRs were first synthesized through a seed-mediated growth method as previously reported. 36,37 The size and morphology of the AuNRs were measured by TEM (Fig. S1A†). The prepared AuNRs were found to be ∼30 nm in length and ∼10 nm in width. After the coating of silver, the size became a little larger and the obvious silver shell could be observed by TEM (Fig. 2A). Moreover, the LV and LV-TAX/Au@Ag were respectively prepared and measured by TEM and DLS (Fig. 2B, S1B and S2†). The modified gold nanorods, Au@Ag, could be found in the internal core of the LV. The average size of LV-TAX/Au@Ag was 148 ± 12 nm, which was slightly larger than that of the LV (140 ± 13 nm). XPS (Fig. 2C) and UV-vis (Fig. 2D) were employed to verify the successful synthesis of LV-TAX/Au@Ag. The peaks of the Au element and the Ag element were both found in the XPS spectrum, confirming the existence of Au@Ag in the core of LV-TAX/Au@Ag. The typical double absorption peaks of AuNRs could be found in the spectra of LV-AuNRs at 517 nm and 780 nm, indicating that the presence of the LV did not influence the photothermal conversion capability of the AuNRs. After being coated with silver, the double absorption peaks showed a blue shift to 364 nm and 604 nm. On the other hand, the typical absorption peak of TAX could be found at 227 nm in the spectra of LV-TAX. Furthermore, as the absorption peaks of Au@Ag and TAX were all shown in the spectrum of LV-TAX/Au@Ag, the successful coating of Au@Ag and loading of TAX were thereby proved. Meanwhile, the encapsulation rate of TAX was measured by HPLC and calculated to be 87.29%. Moreover, after being etched with H2O2, LV-TAX/Au@Ag had double absorption peaks close to those of AuNRs instead of Au@Ag, which indicated the dissolution of Ag and the restoration of photothermal capacity.
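The 87.29% encapsulation rate follows from comparing the TAX quantified by HPLC in the purified vesicles with the amount fed into the preparation; a sketch of that ratio (variable names are illustrative, and the exact quantification protocol of ref. 41 is not reproduced):

```python
def encapsulation_rate_percent(encapsulated_mg, fed_mg):
    """Encapsulation rate: drug recovered inside the purified
    LV divided by the drug initially added, as a percentage."""
    return 100.0 * encapsulated_mg / fed_mg
```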
The drug release of LV-TAX/Au@Ag was then examined (Fig. 3A). As we can see, LV-TAX/Au@Ag exhibited a poor release profile under the normal condition: only about 27% of TAX was released in 48 h, similar to LV-TAX/AuNRs, indicating that both of them could only release TAX via permeation in normal tissues. However, when LV-TAX/Au@Ag were immersed in a solution containing excess H2O2, mimicking the tumor microenvironment, they released TAX rapidly. LV-TAX/Au@Ag released about 80% of TAX in 12 h and over 90% at last, which was much larger than that of LV-TAX/AuNRs, as the latter only released about 33% of TAX in 48 h. This huge contrast suggested that TAX release could be triggered by H2O2 in the tumor microenvironment, as the silver shell on the outside of the AuNRs could react with H2O2 and generate oxygen, leading to the rupture of the LV and thus the large release of TAX. Afterwards, related heating curves were measured in order to examine the photothermal ability of LV-TAX/Au@Ag (Fig. 3B). The heating curve of LV-TAX/Au@Ag etched with H2O2 was similar to that of LV-TAX/AuNRs, as the temperature increased to about 45 °C rapidly within 5 min and persistently rose to over 50 °C under laser irradiation, which was much higher than that of PBS and was enough to kill tumor cells. In contrast, the heating rate of LV-TAX/Au@Ag without H2O2 was much slower due to the blue shift of the absorption peak, indicating that the photothermal efficiency of LV-TAX/Au@Ag was inhibited under normal conditions. The photothermal conversion experiment of LV-TAX/Au@Ag (etched with H2O2) at various concentrations was also carried out to measure the photothermal capacity of LV-TAX/Au@Ag (Fig. S3†). As we can see, LV-TAX/Au@Ag (etched with H2O2) had favourable photothermal conversion efficiency. Furthermore, the higher the concentration, the better the performance.
Finally, the biocompatibility of LV-TAX/Au@Ag was tested by incubation with HUVEC (Fig. 3C). Both LV-TAX/AuNRs and LV-TAX/Au@Ag showed low cytotoxicity to HUVEC: cell viability was generally high and remained above 90% even at a concentration of 2000 mg mL−1. However, after irradiation with an 808 nm laser, the two exhibited entirely different cytotoxicity against HUVEC (Fig. 3D). Cell viability in the LV-TAX/Au@Ag group still remained at a high level, while a large number of cells died in the LV-TAX/AuNRs group, whose cell viability decreased from 73% to 32% as the concentration increased. Therefore, we can conclude that LV-TAX/Au@Ag showed negligible cytotoxicity to normal cells even under laser irradiation and might be nondestructive to normal tissues during PTT.
In vitro antitumor effect
The in vitro antitumor effect of LV-TAX/Au@Ag was first evaluated by fluorescence micrography (Fig. 4A). As HeLa cells were double-stained with calcein-AM (green) and PI (red), green fluorescence represented live cells while red fluorescence represented dead cells. The control group showed very high cell viability because HeLa cells in this group were incubated with PBS only, without any treatment. There was a certain amount of red fluorescent signal in both the chemotherapy group and the PTT group, indicating that a part of the HeLa cells was killed by chemotherapy or PTT. However, the antitumor effect of a single treatment was not enough. In contrast, almost all of the HeLa cells showed red fluorescence in the combination therapy group, suggesting an excellent antitumor effect of the combination therapy. Moreover, flow cytometry was then used to quantify the antitumor effect of the different treatments (Fig. 4B). It showed results similar to those of fluorescence micrography: 93.5% of HeLa cells were alive in the control group. On the other hand, 30.4% of HeLa cells were alive in the PTT group, while only 21.3% of HeLa cells remained alive after chemotherapy due to the direct interaction with TAX. However, in the actual treatment process it is difficult for all of the TAX to contact tumor cells directly, as some of it may travel to other tissues and be metabolized soon after; the antitumor effect of chemotherapy may therefore be greatly reduced in in vivo treatment. In terms of combination therapy, the cell viability declined to 4.71% with irradiation, which meant that 95.29% of HeLa cells were eliminated by LV-TAX/Au@Ag, indicating the superior antitumor effect of the combination therapy induced by LV-TAX/Au@Ag.
Bio-distribution and histological study
In order to verify the targeting ability of LV-TAX/Au@Ag, the bio-distribution of LV-TAX/Au@Ag in HeLa-tumor-bearing mice after intravenous administration was measured by ICP-OES (Fig. 5A). There was a rapid decrease in blood after injection, whereas the concentration of LV-TAX/Au@Ag in liver and spleen rose continuously in the first 12 h post injection, indicating that a certain part of LV-TAX/Au@Ag was taken up by the liver and spleen from the blood circulation. However, the majority of LV-TAX/Au@Ag reached the tumor sites, as the concentration in the tumor was always higher than that in any other tissue. This result signified that LV-TAX/Au@Ag indeed had the capacity to target the tumor through the EPR effect.
H&E-stained sections of each organ were also taken to evaluate the histological safety of LV-TAX/Au@Ag (Fig. 5B). Sections of heart, liver, spleen, lung, and kidney of the mice treated with LV-TAX/Au@Ag were taken at 1, 7, 14, 21, and 28 d after the treatment and processed for H&E staining. Compared with the images obtained from normal mice, no apparent histological damage was observed in these organs of the treated mice, indicating that LV-TAX/Au@Ag had high biocompatibility and might not cause damage to normal tissues.
In vivo infrared thermal imaging
In vivo infrared thermal imaging was carried out to evaluate the photothermal conversion properties of LV-TAX/Au@Ag. The images were taken from HeLa-tumor-bearing mice that were intravenously injected with different materials and irradiated with an 808 nm laser for different exposure times (Fig. 6). The white circles in the photographs represent the tumor areas. The temperature of the tumor sites in the PBS group was nearly constant because PBS alone has no photothermal effect. For the AuNRs group, the temperature did not rise appreciably owing to the lack of tumor targeting ability: as the bare AuNRs had poor water solubility, most of them were directly metabolized instead of reaching the tumor area. However, a distinct increase in temperature with increasing irradiation time was exhibited in the LV-TAX/Au@Ag group, revealing that LV-TAX/Au@Ag had both photothermal conversion and tumor-targeting capacities. Moreover, Perrault et al. have demonstrated that smaller nanoparticles are able to rapidly diffuse throughout the tumor matrix. They observed distinct trends in tumor permeation over the entire size range 8 h post injection: the 20 nm particles were able to permeate deep into tumor tissues, the 60 nm particles less so, while most of the 100 nm particles were still localized in the perivascular region. 43 Liu et al. also took advantage of this characteristic to construct pH-responsive polymer-coated gold nanorod clusters; the nanoparticles could be triggered by the acidic environment, leading to a deep tumor penetration effect for enhanced theranostic performance. 44 Herein, for the LV-TAX/Au@Ag group, we could also find that the thermal signal was distributed almost throughout the solid tumor, suggesting that the released bare AuNRs were able to penetrate deep into the solid tumor owing to their smaller size.
Therefore, the tumor cells might be eliminated thoroughly owing to the sufficient photothermal effect produced by the bare AuNRs released from LV-TAX/Au@Ag.
In vivo antitumor effect
Considering the advantages of LV-TAX/Au@Ag demonstrated above, the in vivo antitumor effect of LV-TAX/Au@Ag was then assessed to demonstrate its therapeutic effect in cancer therapy. Tumor volume and the number of surviving mice in the different groups were recorded every two days. As displayed in Fig. 7A, tumors grew rapidly in the control group and the tumor volume finally reached about 8.86-fold that of the initial day. For the LV-TAX and LV-Au@Ag groups, the tumor growth rate was inhibited to some degree in the first few days owing to chemotherapy and PTT, respectively. However, separate chemotherapy or PTT was not enough to wipe out the tumor cells thoroughly; the final tumor volumes therefore increased by 4.51-fold and 3.46-fold, respectively, compared with the first day. LV-TAX/AuNRs and LV-TAX/Au@Ag exhibited better antitumor effects owing to the combination of chemotherapy and PTT. Nevertheless, the release of TAX and AuNRs from LV-TAX/AuNRs was inhibited, while LV-TAX/Au@Ag were able to release rapidly once reaching the tumor tissue, owing to the reaction of the silver shells with H2O2. The small-sized TAX and AuNRs were then able to penetrate deep into the solid tumor and fully interact with tumor cells. Finally, the tumor volume of the LV-TAX/Au@Ag group decreased to 0.42-fold of that on the first day, while that of the LV-TAX/AuNRs group reached 1.59-fold. Furthermore, unlike LV-TAX/Au@Ag, PTT induced by LV-TAX/AuNRs was non-specific, as LV-TAX/AuNRs in normal tissues could also kill cells under near-infrared exposure, which might result in damage to normal tissues. Therefore, we concluded that LV-TAX/Au@Ag showed the best therapeutic efficacy among all materials. Similarly, the survival numbers also reflected the outstanding antitumor effect of LV-TAX/Au@Ag, as most of the mice treated with LV-TAX/Au@Ag survived to the end of the experiment, in contrast to the other groups (Fig. 7C).
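Tumor growth in such experiments is typically tracked as relative tumor volume, V/V0, with V commonly estimated from caliper measurements by the ellipsoid approximation V = length × width² / 2. A minimal sketch of that bookkeeping; the formula choice and the caliper readings below are assumptions for illustration, not measurements from this study:

```python
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation commonly used with caliper measurements:
    V = length * width^2 / 2, in mm^3."""
    return length_mm * width_mm ** 2 / 2.0

def relative_volume(v_now_mm3: float, v_initial_mm3: float) -> float:
    """Fold change of tumor volume versus the first day of treatment."""
    return v_now_mm3 / v_initial_mm3

# Hypothetical caliper readings: first day vs. end of experiment
v0 = tumor_volume(8.0, 6.0)       # 144.0 mm^3
v_end = tumor_volume(16.0, 12.0)  # 1152.0 mm^3
print(f"{relative_volume(v_end, v0):.2f}-fold")  # 8.00-fold
```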
H&E, TUNEL, and Ki67 staining of tumor sections further validated the therapeutic effect on tumors (Fig. 7D). Tumor tissues in the LV-TAX/Au@Ag group showed a large area of cell deformation, indicating necrosis and apoptosis of tumor cells, while such phenomena were rarer in the other groups. TUNEL staining showed this contrast more clearly: a large degree of apoptosis (cyan area) could be found in the LV-TAX/Au@Ag group, while the majority of tumor cells stayed alive in the other groups. In addition, Ki67 staining was used to assess proliferation. Similarly, the LV-TAX/Au@Ag group showed the fewest positive signals (red fluorescence) compared with the other groups, suggesting that the proliferation of the tumors was suppressed significantly after the HeLa-tumor-bearing mice were subjected to the combination therapy induced by LV-TAX/Au@Ag.
Fig. 6 In vivo infrared thermal images of HeLa-tumor-bearing mice intravenously injected with PBS, AuNRs, and LV-TAX/Au@Ag, respectively, after irradiation (808 nm, 1 W cm−2).
This journal is © The Royal Society of Chemistry 2020. RSC Adv., 2020, 10, 22091-22101.
Conclusion
In summary, we successfully designed a novel, physiologically safe, H2O2-triggered LV-TAX/Au@Ag for combined chemo-photothermal therapy. LV-TAX/Au@Ag were able to target the tumor through the EPR effect and take advantage of their silver shells to react with the endogenous H2O2 in the tumor microenvironment. The generated oxygen could disassemble the large LV-coated nanosystem into small-sized TAX and AuNRs, which were then able to permeate deep into the solid tumor. The etched bare AuNRs showed an excellent photothermal effect and were able to eliminate tumor cells with the help of a near-infrared laser, while TAX could further suppress the proliferation of tumor cells. The related in vitro and in vivo experiments proved that the induced combination therapy exhibited outstanding antitumor efficiency and negligible histological damage. Therefore, we believe that this kind of H2O2-responsive nanomaterial has broad prospects in future cancer treatment.
Conflicts of interest
The author reports no conflicts of interest in this work.
Extracellular adenosine modulates host-pathogen interactions through regulation of systemic metabolism during immune response in Drosophila
Phagocytosis by hemocytes, the Drosophila macrophages, is essential for resistance to Streptococcus pneumoniae in adult flies. Activated macrophages require an increased supply of energy, and we show here that a systemic metabolic switch, involving the release of glucose from glycogen, is required for effective resistance to S. pneumoniae. This metabolic switch is mediated by extracellular adenosine, as evidenced by the fact that blocking adenosine signaling in the adoR mutant suppresses the systemic metabolic switch and decreases resistance to infection, while enhancing adenosine effects by lowering adenosine deaminase ADGF-A increases resistance to S. pneumoniae. Further, we show that ADGF-A is expressed by immune cells later during infection to regulate these effects of adenosine on systemic metabolism and the immune response. Such regulation proved to be important during chronic infection caused by Listeria monocytogenes. Lowering ADGF-A specifically in immune cells prolonged the systemic metabolic effects, leading to lower glycogen stores, and increased the intracellular load of L. monocytogenes, possibly by feeding the bacteria. An adenosine-mediated systemic metabolic switch is thus essential for effective resistance but must be regulated by ADGF-A expression from immune cells to prevent the loss of energy reserves and possibly to avoid the exploitation of energy by the pathogen.
Introduction
The pro-inflammatory state of various immune cells in mammals, such as neutrophils, dendritic cells, M1 macrophages, and some adaptive-immunity cells (e.g. Th17), is associated with a metabolic switch that includes increased glycolysis and glucose consumption [1]. The immune response is therefore an energy-demanding process [2], and poor nutrition often leads to reduced immune resistance [3]. However, the energy supply to the immune system must be tightly regulated to protect it from exploitation by the pathogen [4] and to properly allocate limited energy reserves, especially because the immune response is known to be associated with anorexia [5,6]. For example, mycobacterial infection may lead to wasting due to the consumption of energy reserves [7].
The mechanisms for the systemic regulation of the metabolism during an immune response have only recently begun to be intensively studied. Systemic insulin resistance, caused by proinflammatory cytokines (e.g. TNF-α or Il-6), is believed to be an important mechanism to ensure the energy supply to the immune system [8]. Although this is likely to be beneficial during an acute response, chronic inflammation may lead to various metabolic disturbances [9]. Experimental studies of the effects of metabolic regulations on host-pathogen interactions are practically challenging, especially in higher experimental models such as mice [6,10]. Simpler model organisms such as the fruit fly Drosophila melanogaster may provide a more streamlined platform for such studies.
We have recently shown that extracellular adenosine (e-Ado) mediates a systemic metabolic switch upon infection of Drosophila larva by a parasitoid wasp [11]. This switch redirected energy normally devoted to developmental processes towards the immune system. Such a systemic switch is crucial for an effective immune response because blocking adenosine signaling drastically reduces resistance. We have also shown that immune cells produce this signal, thus usurping energy from the rest of the organism. Such a privileged behavior of the immune system was recently proposed to be important for an effective immune response [8].
Adenosine (Ado) is an important intracellular metabolite of purine metabolism. It can also appear extracellularly under certain conditions, thus becoming an important signaling molecule. For example, when tissue is damaged, adenosine triphosphate leaks out and is converted by ecto-enzymes into e-Ado [12]. Alternatively, when intracellular adenosine triphosphate levels decrease due to metabolic stress, increased adenosine monophosphate is converted to Ado, which is subsequently released into the extracellular space where it binds to adenosine receptors informing other tissues of this metabolic stress. During extensive exercise, muscles release Ado to induce fatigue to allow the initiation of recovery to this type of stress [13]. In another example, hypoxia induces adenosine to be released from the affected tissue to increase nutrient-rich blood flow [14]. This putative stress signaling by adenosine is conserved in evolution from primitive unicellular organisms all the way up to humans. For example, adenosine is released upon starvation by the social bacteria Myxococcus xanthus to induce the formation of fruiting bodies that produce spores for surviving harsh conditions [15]. Adenosine therefore represents a very ancient and universal system for signaling metabolic stress.
Here we investigate the regulatory role of e-Ado during the immune response and the effects of this regulation on host-pathogen interactions. Taking two complementary approaches, we block adenosine signaling by elimination of the adenosine receptor AdoR and enhance adenosine signaling by down-regulation of adenosine deaminase ADGF-A. We analyze the effects of these manipulations on host response to two types of bacterial infections: acute caused by Streptococcus pneumoniae, and chronic by Listeria monocytogenes. We show that adenosine regulates energy supply during bacterial infections in adult flies and that adenosine signaling is crucial for host defense, extending our previous results from research on parasitoid wasp infection [11]. In addition, we also demonstrate an important regulatory role of adenosine deaminase ADGF-A during the immune response. Enhancing adenosine effects may be beneficial for host resistance during the acute phase of the infection, but negative feedback regulation becomes important to prevent exhausting energy reserves and to prevent host nutrients being exploited by the pathogen.
Phagocytosis is crucial for clearance of S. pneumoniae
When our control w flies were injected with 20 000 CFUs of S. pneumoniae, the bacteria grew to 200 000-300 000 CFUs within 24 hours (Fig 1A and 1B). This load persisted in the majority of flies for a few further days (the acute phase), during which many flies died (Fig 1C). The surviving flies cleared the infection by the 5th-6th day (Fig 1A), but some lethality still occurred after this, probably because they either did not clear the infection or did not recover (Fig 1C). It has previously been demonstrated, by injecting flies with latex beads, which jam their hemocytes and make them unable to phagocytose, that phagocytosis is crucial for the clearance of S. pneumoniae [16]. We confirmed this result here: the injection of latex beads 24 hours before infection drastically increased pathogen load (to a million bacteria per fly; Fig 1B) and led to the rapid death of the flies within 2 days (Fig 1C). The effect of blocking phagocytosis on resistance was independent of the genotypes used (Fig 1B and 1C; see below for a detailed description of the genotypes). Fig 1D shows that survival of the w, adoR, and adgf-a/+ genotypes is comparable when the flies are injected with mock (PBS) buffer.
Systemic metabolic switch is required for an effective immune response during S. pneumoniae infection
We had previously shown that a systemic metabolic switch, which redirected energy devoted to developmental processes to the immune system, was required for the resistance of Drosophila larvae to parasitoid wasp infection [11]. Therefore, we analyzed here the systemic metabolism during S. pneumoniae infection and found a similar systemic metabolic switch, in which circulating glucose in control flies rose upon infection, peaking at 12 hpi (Fig 2A), while glycogen levels were reduced to half within the first 24 hpi (Fig 2B). In agreement with this glycogen breakdown during infection, the expression of the glycogen phosphorylase rose during S. pneumoniae infection (Fig 2C).
To test the importance of energy release from glycogen for the immune response, we tested the effect of silencing glycogen phosphorylase (GlyP) during S. pneumoniae infection. We used Gal4/UAS induced RNA interference (RNAi) with thermosensitive Gal80 to induce RNAi just prior to infection to avoid any developmental effects of silencing GlyP; both control flies and flies with RNAi showed similar levels of glycogen at the beginning of infection (Fig 2B). Knocking down GlyP specifically in the fat body (ppl>GlyP[RNAi]; Fig 2C) significantly reduced glycogen breakdown and suppressed hyperglycemia upon infection (Fig 2A and 2B). This led to increased pathogen load ( Fig 2D) and decreased survival (Fig 2E). Similar effects on pathogen load and survival were achieved by systemically induced RNAi (Act>GlyP [RNAi]) (Fig 2D and 2F).
Adenosine signaling mediates the systemic metabolic switch
In our previous work, mentioned above [11], we showed that the systemic metabolic switch is mediated by adenosine signaling. Therefore, we used here three different genetic manipulations to investigate the role of adenosine in the immune response against bacterial infection of adult flies. First, we blocked systemic adenosine signaling with the adoR null mutation of the adenosine receptor AdoR. To enhance the effects of adenosine, we lowered the expression of adenosine deaminase ADGF-A, first by a heterozygous null mutation in the ADGF-A gene (adgf-a[kar]/+), since the homozygous mutation causes larval lethality [17]. Second, we lowered the expression of ADGF-A by targeted RNAi in hemocytes (Drosophila immune cells), using the Gal4/UAS system [18], in which the expression of double-stranded RNA against ADGF-A was driven by the hemocyte-specific Hemolectin-Gal4 driver (Hml>ADGF-A[RNAi]).

Fig 1 (legend, partial): Zero levels represent flies that cleared infection (majority at day 6). (B) Comparison of pathogen loads of S. pneumoniae at 18 and 24 hpi in flies with intact phagocytosis and in flies injected with latex beads (24 hours prior to infection) to block phagocytosis (labeled "+ latex BEADS"). The adoR mutation (red) significantly increases and adgf-a/+ (green) significantly decreases load compared to the w control (black) in flies with intact phagocytosis at both 18 and 24 hpi (marked by asterisks). Blocking phagocytosis significantly increases pathogen loads for all genotypes compared to intact phagocytosis (significance not marked) with no significant differences between genotypes (marked NS). Loads, shown in logarithmic scale (values follow lognormal distribution), were compared with unpaired t-tests. (C) Flies with phagocytosis blocked by latex beads displayed significantly reduced survival (dashed lines, marked by black asterisk) compared to flies with intact phagocytosis (solid lines), with equal reduction for all tested genotypes. With intact phagocytosis, adoR significantly decreases (red asterisk) and adgf-a/+ significantly increases (green asterisk) survival compared to the w control. (D) Survival of the w, adoR, and adgf-a/+ genotypes is comparable when injected with mock (PBS) buffer (adoR P = 0.1059 and adgf-a/+ P = 0.5980 compared to w). Survivals were analyzed by both Log-rank and Gehan-Breslow-Wilcoxon tests.

Fig 2 (legend): (A) Infection induces hyperglycemia, peaking at 12 hpi, in control flies (significant differences at all of 3-24 hpi when compared to the uninfected, PBS-injected control; not marked). ppl>GlyP[RNAi] significantly suppresses this infection-induced hyperglycemia when compared to the infected ppl x KK control (comparison of solid lines marked by asterisks; glucose level is not significantly different at any time point when infected and uninfected ppl>GlyP RNAi are compared). (B) Glycogen levels corresponding to the measurements in panel A show the decrease of glycogen upon infection (significant differences at 6-24 hpi for the ppl x KK control and 15-24 hpi for ppl>GlyP[RNAi]; not marked). The glycogen decrease is significantly suppressed by ppl>GlyP[RNAi] when compared to the infected control (comparison of solid lines marked by asterisks). Lines connect mean values with SEM as error bars; significant differences were tested by unpaired t-test (corrected for multiple comparisons using the Holm-Sidak method). (C) Expression of glycogen phosphorylase increases at 24 hpi with S. pneumoniae infection in ppl x KK control flies. ppl>GlyP RNAi (induced in the fat body) reduces the whole-fly expression to less than half, demonstrating efficient silencing of glycogen phosphorylase in both uninfected and infected flies. Bars show mean fold change relative to the expression of glycogen phosphorylase in uninfected ppl x KK control flies obtained by qRT-PCR, with SEM as error bars; values were compared with unpaired t-tests and significant differences are marked by asterisks. (D) Comparison of pathogen loads of S. pneumoniae at 24 and 48 hpi shows that knocking down glycogen phosphorylase both specifically in the fat body (ppl>GlyP[RNAi]; purple) and in whole flies (Act>GlyP[RNAi]; dark blue) significantly increases pathogen loads when compared to the appropriate controls (ppl x KK control in black and Act x KK control in light blue, respectively). Loads, shown in logarithmic scale (values follow lognormal distribution), were compared with unpaired t-tests. (E) and (F) Knocking down glycogen phosphorylase both specifically in the fat body (ppl>GlyP[RNAi]; purple in panel E) and in whole flies (Act>GlyP[RNAi]; dark blue in panel F) significantly reduces survival (P<0.0001) when infected with S. pneumoniae (solid lines); no differences in survival were detected between genotypes when injected with PBS buffer only (wound control; dashed lines). Survivals were analyzed by both Log-rank and Gehan-Breslow-Wilcoxon tests. https://doi.org/10.1371/journal.ppat.1007022.g002

The metabolic switch induced by S. pneumoniae infection, expressed as hyperglycemia associated with glycogen breakdown (Fig 2A and 2B), was clearly delayed in the adoR mutant, in which the glucose and glycogen profiles were delayed by 6-9 hours compared to the control (Fig 3A and 3B). In contrast, lowering ADGF-A in adgf-a/+ and Hml>ADGF-A[RNAi] flies caused the continuation of hyperglycemia beyond the 9-hpi control peak (Fig 3C and 3E), leading to a deeper depletion of glycogen stores (Fig 3D and 3F), thus causing the expected opposite effect to the adoR mutation. The role of adenosine signaling in glycogen breakdown is further supported by the expression of glycogen phosphorylase and glycogen synthase: expression of glycogen phosphorylase was lower and expression of glycogen synthase higher in the adoR mutant (Fig 3G and 3H). In contrast to the situation in control flies, where glycogen phosphorylase rose and glycogen synthase dropped during infection, their expression did not change in adoR (Fig 3G and 3H), supporting the role of adenosine signaling in the regulation of glycogen synthesis/breakdown.

Adenosine is required for resistance to S. pneumoniae

Since the immune response was dependent on the systemic metabolic switch and the results above showed that the switch is mediated by adenosine, we tested the resistance of flies with manipulated adenosine signaling. Injecting a sublethal dose (15 000 CFUs or less) allowed most of the control flies to survive the acute phase of infection (the first 7-8 days; Fig 4A), while a lethal dose (20 000 CFUs or more) killed 50% of control flies by day 8 and led to less than 25% survival overall (Fig 4B). The adoR mutation decreased survival compared to the w control when injected with both sublethal and lethal doses (Fig 4A and 4B). While the lethal dose killed most of the control flies, adgf-a/+ and Hml>ADGF-A[RNAi] flies showed improved survival during the acute phase (Fig 4B and 4C). The observed effects were not caused by a difference in vigor [19] among these genotypes, since the injection of PBS buffer led to comparable survival under the experimental conditions (Fig 1D). Fig 4D shows that the S. pneumoniae load increased mostly within the first 24 hpi, reaching up to three hundred thousand CFUs per fly, and was eventually cleared within 5-6 days. We monitored the pathogen load during this 6-day span and detected differences among the tested genotypes, especially during the first 24-48 hpi (Fig 4D), when most flies were still alive (Fig 4A-4C). At later time points, differences in pathogen load in the flies that were still alive and available for pathogen load measurements virtually disappeared between genotypes (Fig 4D and S1 Fig). Therefore, we monitored in detail the growth of S. pneumoniae during the first 24 hpi. In agreement with the relative survival rates, the number of CFUs was significantly higher in adoR and lower in both adgf-a/+ and Hml>ADGF-A[RNAi] compared to the controls (Fig 4E and 4F and S2 Fig). Although the adgf-a/+ and Hml>ADGF-A[RNAi] flies had lower pathogen loads and better survived the acute phase of S. pneumoniae infection, many of them died after the acute phase, and thus their overall survival was eventually comparable to the control (Fig 4B and 4C). These results suggest that adgf-a/+ and Hml>ADGF-A[RNAi] flies better controlled the pathogen load, leading to their better survival of the acute infection phase.

It is important to stress that the use of the heterozygous adgf-a knock-out mutant as a model led to a similar result to that obtained by depleting ADGF-A in hemocytes by RNAi (Hml>ADGF-A[RNAi]) and to the opposite result to that obtained with the adoR mutation, thus connecting the phenotypes specifically to genetic manipulations of extracellular adenosine. We were not able to detect clear and reproducible changes in extracellular adenosine levels, owing to problems with rapidly collecting a reasonable amount of hemolymph from adult flies and to the short half-life of this molecule upon sample collection. Therefore, to determine whether the observed effects of lowering ADGF-A expression are indeed due to adenosine signaling, we tested resistance to S. pneumoniae in a double mutant, heterozygous for adgf-a and homozygous for adoR (adgf-a adoR/adoR), lowering ADGF-A expression while at the same time removing adenosine signaling. Fig 4B and S3 Fig demonstrate that the effect of adgf-a/+ is completely suppressed by the adoR mutation in the adgf-a adoR/adoR double mutant: both survival and pathogen loads are similar to the adoR single mutant. This demonstrates that the observed effects in adgf-a/+ are dependent on adenosine signaling.
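As the figure legends note, pathogen loads follow a lognormal distribution, so group comparisons are made with unpaired t-tests on log-transformed CFU counts. A minimal stdlib sketch of that procedure, computing Welch's unpaired t statistic on log10-transformed loads; the CFU values below are hypothetical, for illustration only:

```python
import math
from statistics import mean, variance

def welch_t_on_logs(cfu_a, cfu_b):
    """Welch's unpaired t statistic on log10-transformed CFU counts.

    Returns (t, df): the t statistic and the Welch-Satterthwaite
    degrees of freedom; a p-value would then come from the t distribution.
    """
    la = [math.log10(x) for x in cfu_a]
    lb = [math.log10(x) for x in cfu_b]
    na, nb = len(la), len(lb)
    va, vb = variance(la), variance(lb)   # sample variances of the logs
    se2 = va / na + vb / nb               # squared standard error of the difference
    t = (mean(la) - mean(lb)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical CFU counts per fly for two genotypes at 24 hpi
control = [2.1e5, 3.0e5, 1.8e5, 2.6e5, 2.2e5]
mutant = [6.5e5, 8.1e5, 5.2e5, 7.7e5, 9.0e5]
t, df = welch_t_on_logs(mutant, control)
print(f"t = {t:.2f}, df = {df:.1f}")
```

Working on the logs simply makes the lognormal loads approximately normal, which is what justifies the t-test in the first place.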
As shown above, resistance to S. pneumoniae is dependent on phagocytosis: blocking phagocytosis with latex beads erased the differences between genotypes (Fig 1B and 1C), demonstrating that the differences are connected to phagocytosis. The effectiveness of the phagocytic response could be influenced by the number of hemocytes, which in turn depends on genetic background [20]. A comparable number of active phagocytes, and indeed of adult hemocytes in general, among the w, adoR, and adgf-a/+ genotypes (with unified genetic backgrounds) showed that it was not a different number of phagocytes (Fig 5A and 5B) causing the observed effects of these genotypes on resistance to S. pneumoniae. We detected a significantly lower number of cells labeled by pHrodo, a marker of active phagocytosis, in adoR mutants challenged with S. pneumoniae 6 hours prior to pHrodo injection, although the total number of Hml>GFP-labeled hemocytes was the same as in the control (Fig 5C). This lower phagocytosis in challenged adoR flies was significantly increased by a carbohydrate-rich, 10%-glucose diet (Fig 5C). So, although there are comparable numbers of phagocytes with comparable basal phagocytic capacity (without challenge), their activity (but not their numbers) is lower in the adoR mutant when the immune response is activated by the challenge.
Role of adenosine signaling during Listeria monocytogenes infection
Next, we tested a different type of infection, triggered by Listeria monocytogenes, which causes a chronic intracellular infection eventually leading to the death of the host [21]. Intracellular infection is established by phagocytosis followed by the escape of L. monocytogenes from the phagosome to the cytoplasm. Similarly to S. pneumoniae infection, L. monocytogenes infection also led to a systemic metabolic switch, characterized by hyperglycemia (Fig 6A); glycogen stores were accordingly reduced (Fig 6B). The adoR mutation reduced hyperglycemia (Fig 6A) and increased glycogen storage compared to the control (Fig 6B), the two effects persisting even during the chronic phase (Fig 6C). Since there was no peak of hyperglycemia during the acute phase of L. monocytogenes infection, we did not detect a difference in glucose level in the adgf-a/+ flies (S4 Fig); however, both the adgf-a/+ and Hml>ADGF-A[RNAi] flies had significantly lower glycogen stores during the chronic phase (Fig 6C).

Fig 3 (legend, partial): ...higher variability in measurement, a trend emerges in which the adoR mutant starts to break down glycogen later than the w control, while glycogen drops lower in both adgf-a/+ and Hml>ADGF-A[RNAi] flies compared to their respective controls. Lines connect mean values with SEM as error bars; significant differences (tested by unpaired t-test) are marked by asterisks (black asterisks for comparison of the S.p. solid lines and blue asterisks for comparison of the PBS dashed lines). (G and H) Expression of glycogen phosphorylase increases (panel G) and expression of glycogen synthase decreases (panel H) in w control flies during infection (grey and black lines; black asterisks mark significant differences). There is no such infection-induced change in adoR (pink and red lines), but expression of glycogen phosphorylase is significantly lower and expression of glycogen synthase significantly higher in both uninfected and infected adoR when compared to w controls (red asterisks). Lines connect mean fold-change values of expression relative to expression in uninfected w control at 6 hpi, with SEM as error bars; asterisks indicate significant changes (tested by unpaired t-test).
The length of host survival depends on the injected dose [19] and the genetic background, which determines, for example, the number of phagocytes [20]. We injected 1000 bacteria (S5A Fig), leading to a median time to death of 17 days in our control flies (Fig 7A). Both the adoR mutation and lowering ADGF-A by adgf-a/+ and Hml>ADGF-A[RNAi] shortened survival during L. monocytogenes infection to a median time to death of only 8 days (Fig 7A-7C). The observed shorter survival was associated with increased pathogen load (Fig 7D-7F), ostensibly suggesting a lower resistance to L. monocytogenes for all three genotypes. However, distinguishing the intracellular and extracellular L. monocytogenes populations revealed a more complicated picture. We used gentamicin, an antibiotic that is unable to cross cellular membranes, to determine the total number of bacteria (without gentamicin treatment) and the number of intracellular bacteria (after gentamicin treatment). This gentamicin-chase assay, described in [5], showed that the increased pathogen load in the adoR mutant was mainly due to an increased extracellular population (Fig 7D and S5 Fig). On the other hand, the increased pathogen load in adgf-a/+ and Hml>ADGF-A[RNAi] flies was almost solely caused by an increase in the intracellular L. monocytogenes population (Fig 7E and 7F and S5A Fig); the extracellular population disappeared faster in adgf-a/+ and Hml>ADGF-A[RNAi] flies compared to the control, suggesting more effective phagocytosis upon lowering ADGF-A.
Adenosine effects on host-pathogen interactions via metabolic regulations
(Figure legend) Black asterisks mark significant differences between intracellular bacteria (comparing dashed lines); colored asterisks mark significant differences between total bacteria (comparing solid lines). Results were compared by unpaired t-tests corrected for multiple comparisons using the Holm-Sidak method. Statistics and detailed dot plot graphs can be found in S5 Fig.
(D) The adoR mutant increases the total number of bacteria, mainly due to the extracellular population. Both adgf-a/+ (E) and Hml>ADGF-A[RNAi] (F) also increase the total pathogen load, which is mostly due to the intracellular bacteria because the numbers after gentamicin treatment (dashed lines) are markedly increased compared to controls.
The increased intracellular pathogen load in adgf-a/+ and Hml>ADGF-A[RNAi] flies persisted even during the chronic phase (day 7, Fig 8A), when phagocytosis likely no longer played a role because all bacteria were intracellular (i.e., no difference between total and intracellular load in the w control in Fig 8A). The adoR mutation also increased the total pathogen load at day 7, although this represented a surge in the extracellular population, as the intracellular population was similar to the control (Fig 8A). Melanization was previously shown to be important in controlling extracellular L. monocytogenes loads: lowering the melanization response led to an increased extracellular bacterial population [22]. The increased presence of extracellular bacteria in adoR flies most likely stimulated an increase in disseminated melanization [22] at day 7 (S6 Fig), when only 37% of w showed melanization (with extensive melanization in 12%
Sugar-enriched diet rescues the adoR mutant resistance defect
In our previous work, we had been able to partially rescue the effect of adoR on the metabolic switch and immune response by providing more glucose in the fly diet [11]. Here we show that a sugar-enriched, 10%-glucose diet significantly increased the survival of adoR flies ( Fig 7A and S7 Fig) and lowered the pathogen load to control levels during both S. pneumoniae and L. monocytogenes infections (Fig 8B and 8C). The sugar-enriched diet did not influence the survival or pathogen load of adgf-a/+ or Hml>ADGF-A[RNAi] flies (Figs 7B, 7C and 8C).
Expression of antimicrobial peptides during S. pneumoniae and L. monocytogenes infection
The results above demonstrated the effects of adenosine manipulation on immune defenses associated with phagocytosis and metabolism. Host immunity may also be mediated by the production of antimicrobial peptides (AMPs); we therefore analyzed the expression of four selected AMPs (S8 Fig).
ADGF-A expression
Adenosine is regulated by the adenosine deaminase ADGF-A [17]. Gene expression analysis showed that ADGF-A expression increased upon both L. monocytogenes and S. pneumoniae challenges (Fig 9A and 9B). Fig 9A also demonstrates that less ADGF-A mRNA is expressed in the adgf-a/+ heterozygous mutant; some of this mRNA may carry a premature stop codon (the adgf-a mutation), leading to aberrant protein production, which is not distinguishable by the qPCR used here (for a description of this mutation, see [23]). Hml-induced RNA interference of ADGF-A specifically in hemocytes effectively silenced ADGF-A expression, demonstrating the functionality of the Hml>ADGF-A[RNAi] knockdown (Fig 9A). It is important to note that expression was measured at the whole-organism level, suggesting that ADGF-A expression in hemocytes, where the Hml-Gal4 driver is expressed, represents the majority of the whole-organism expression of this gene. A hemocyte-specific expression analysis (Fig 9C) confirmed this: hemocytes showed an order-of-magnitude higher expression of ADGF-A than that measured in whole flies, and this expression rose four-fold during infection, demonstrating that hemocytes are the primary producers of ADGF-A. The time-course expression during S. pneumoniae infection (Fig 9B and 9C) also showed that ADGF-A expression rose after 9 hpi, coinciding with the down-regulation of hyperglycemia during S. pneumoniae infection (Fig 3A).
Discussion
In this work, we used two types of bacterial infection of adult D. melanogaster, one caused by the facultative intracellular bacterium L. monocytogenes and the other triggered by the extracellular bacterium S. pneumoniae. To block systemic adenosine signaling, we used a null adoR mutation in the adenosine receptor, and to enhance the effects of adenosine, we either removed one copy of the adenosine deaminase ADGF-A gene or lowered its expression by RNAi. We show that both infections are associated with a systemic metabolic switch manifested by hyperglycemia at the expense of the glycogen stores. Manipulating e-Ado signaling influences this metabolic switch and, at the same time, affects host resistance. The activated immune system requires an increased supply of energy/nutrients [24]. Therefore, we propose that the observed e-Ado-mediated systemic metabolic switch supplies the immune system with the required nutrition, and is thus important for the effectiveness of the immune response.
Resistance to S. pneumoniae was shown to be dependent on effective phagocytosis by hemocytes [16]. We confirm the crucial role of phagocytosis here by injecting latex beads, which jam hemocytes and make them unable to phagocytose. Blocking phagocytosis made flies extremely sensitive to S. pneumoniae infection and eliminated the differences in responses between the control and the mutants used in this work. The observed effects of Ado manipulation on immunity are not due to an altered phagocyte number [20], since all the strains used in this work have comparable numbers of hemocytes, including active phagocytes.
We show here that S. pneumoniae infection is associated with a systemic metabolic switch manifested by hyperglycemia (peaking at 9-12 hpi) at the expense of the glycogen stores, with glycogen dropping to less than one third within 24 hpi. This could merely be a pathological consequence of the infection; however, phagocytic cells are known to increase glycolysis and glucose consumption [25], and thus the systemic metabolic switch may also reflect an increased need for energy by the hemocytes phagocytosing S. pneumoniae. The latter possibility is strongly supported by our experiment knocking down GlyP, the enzyme responsible for glycogen breakdown. Knocking down GlyP in the fat body immediately prior to infection decreased resistance to S. pneumoniae, demonstrating that the glucose liberated from the glycogen stores is required for effective phagocytosis. Therefore, the observed systemic metabolic switch is most likely an active process ensuring adequate energy supply to the activated immune system. Our previous work [11] demonstrated that the immune cells of Drosophila larvae release Ado during parasitoid wasp infection to mediate a systemic metabolic switch and thus supply the immune system with the required nutrients. Here we show that blocking adenosine signaling by the adoR mutation prevents the metabolic switch during L. monocytogenes infection and postpones the hyperglycemic peak and glycogen use during S. pneumoniae infection. This notion is further supported by transcriptional data showing that glycogen synthesis/breakdown is under AdoR control. Furthermore, hyperglycemia during S. pneumoniae infection peaks at 9 hpi, which coincides with a rise in ADGF-A expression. When ADGF-A action is lowered by the heterozygous adgf-a mutation or RNAi, hyperglycemia continues at the increased expense of the glycogen stores.
All these results together demonstrate that the systemic carbohydrate metabolism is regulated by e-Ado in adult flies during infection. The AdoR signaling causes a release of energy, i.e. hyperglycemia associated with decreased glycogen stores and the e-Ado-mediated release of energy is regulated by ADGF-A.
The lower resistance of the adoR mutant to S. pneumoniae is then in agreement with the role of Ado signaling in the systemic metabolic switch and with the importance of this switch for an effective immune response. Therefore, we propose that adenosine signaling mediates the supply of energy to the activated immune system. This is supported by our experiments with a carbohydrate-rich diet compensating for the missing switch in systemic metabolism in the adoR mutant: this diet increases phagocytosis, normalizes the pathogen load, and ultimately increases survival of the adoR mutant compared to the carbohydrate-poor diet. These observations are similar to our previous work showing that a carbohydrate-rich diet rescued the effect of adoR on the production of immune cells during parasitoid wasp infection [11]. The role of e-Ado in supplying energy to immune cells is further supported by the opposite effect achieved by removing one copy of ADGF-A or lowering its expression by RNAi. These genetic manipulations have the opposite effect on carbohydrate metabolism compared to adoR and at the same time lower the pathogen load during S. pneumoniae infection, demonstrating more effective phagocytosis in these flies. The complete suppression of this increased resistance by simultaneously mutating adoR demonstrates that the effect of lowering ADGF-A is indeed due to adenosine signaling. We can then conclude that e-Ado mediates a systemic metabolic switch which is important for supplying energy to immune cells, and is thus crucial for effective phagocytosis and host resistance to S. pneumoniae.
S. pneumoniae infection provides a simpler model of host-pathogen interaction, in which clearance is crucially dependent on phagocytosis and the host either clears the bacteria or the pathogen outgrows and kills the host. The other pathogen, L. monocytogenes, is a facultative intracellular bacterium causing a chronic and ultimately lethal infection in flies. Phagocytosis is actually a way for this pathogen to colonize the host [21]; the intracellular population is eventually established following escape from the phagosome. Infection by this pathogen causes a systemic metabolic switch similar to the one caused by S. pneumoniae, i.e., hyperglycemia associated with the loss of glycogen stores. Here, as in the case of S. pneumoniae, adenosine mediates this switch, because adoR decreases the infection-induced hyperglycemia, associated with a lower loss of glycogen, while lowering ADGF-A leads to a greater loss of glycogen.
The adoR mutation increases the extracellular load of L. monocytogenes, suggesting that, similarly to the case of S. pneumoniae infection, this mutation decreases the effectiveness of phagocytosis via suppression of the adenosine-mediated metabolic switch. This is supported by the rescue achieved by increasing glucose in the diet, which normalizes pathogen load and increases the survival of the adoR mutant. In agreement with this, lowering ADGF-A has the opposite effect: this manipulation leads to a faster clearance of the extracellular L. monocytogenes population associated with an increased intracellular population, suggesting more effective phagocytosis. Thus far, the results obtained with L. monocytogenes, focused on metabolism and the early response associated with phagocytosis, are similar to those obtained with S. pneumoniae. Phagocytosis is not, of course, the only defense mechanism available to the flies; two other mechanisms, melanization and antimicrobial peptides, are discussed below.
Unlike S. pneumoniae infection, both blocked and enhanced Ado signaling decreased the survival of flies upon L. monocytogenes infection. This decreased survival is associated with increased total pathogen loads in all examined adoR, adgf-a/+, and Hml>ADGF-A[RNAi] flies. In the adoR mutant, the larger pathogen load is mainly due to an increased extracellular population, as mentioned above. In contrast, the increased pathogen load in adgf-a/+ and Hml>ADGF-A[RNAi] flies is almost completely caused by an increase in the intracellular L. monocytogenes population. The increased intracellular load can be due to less effective intracellular defense mechanisms, which remain mostly unexplored in flies [26], or due to an increased carrying capacity for the pathogen at the expense of the host energy reserves, as suggested by the greater loss of glycogen reserves in flies with lowered ADGF-A. An increased bacterial population obviously requires more nutrients. Among the important virulence factors of L. monocytogenes is a bacterial homolog of glucose-6-phosphate translocase, which allows the pathogen to exploit hexose phosphates from the host cell as a carbon source [27]. In addition, L. monocytogenes hijacks host cell actin polymerization for its propagation [28]. Since this process requires energy, an increased supply to an infected cell may further promote the propagation of this intracellular pathogen. The question is whether the nutrient supply to infected cells is the limiting factor for pathogen proliferation and propagation; however, there is evidence that the proliferation capacity of intracellular pathogens is strongly influenced by host metabolism [4]. If so, as a reaction to the increased release of energy from the host stores caused by lowering ADGF-A, the carrying capacity could increase, leading to the observed increase in the intracellular pathogen load.
However, we cannot exclude the possibility that lowering ADGF-A somehow decreases the host intracellular defense and the associated wasting is just a consequence of increased pathogen load.
A carbohydrate-rich diet normalizes the pathogen load in the adoR mutant, decreasing the extracellular population of L. monocytogenes that is present in this mutant on a carbohydrate-poor diet. This in turn leads to longer survival of this mutant, comparable to control flies. A carbohydrate-rich diet can rescue the glycogen loss in flies with lowered ADGF-A, but the pathogen load, being intracellular, is still increased and survival is as short as on a carbohydrate-poor diet. It seems that increased dietary glucose can compensate for the glycogen loss detected on a carbohydrate-poor diet, which is potentially associated with nutrient exploitation by the pathogen in these flies, as discussed above. However, the higher pathogen burden still kills the flies faster, suggesting that survival is primarily determined by the pathogen number.
Although we focus on phagocytosis in this work, there are other immune mechanisms which may influence host resistance and physiology. While melanization was not observed with S. pneumoniae infection, it was shown to play an important role in controlling the extracellular population of L. monocytogenes [22]. Therefore, the increased extracellular L. monocytogenes load detected in adoR could also be due to a lower induction of the melanization response. However, we detected rather stronger disseminated melanization [22] in the adoR mutant, suggesting that the induction of melanization is not lowered by the lack of adenosine signaling and may instead reflect a reaction provoked to a greater extent by the increased extracellular population of L. monocytogenes in this mutant. Nevertheless, the role of adenosine signaling in the melanization arm of immunity requires further work.
Host immunity is also mediated by the production of antimicrobial peptides (AMPs). We did not detect clear and consistent differences in the expression of AMPs in the mutants that would be in agreement with the observed effects on resistance, i.e., lower expression of AMPs in adoR and higher expression in adgf-a/+. The expression of AMPs was shown to be dependent on pathogen load, at least for L. monocytogenes [19]. We did not test the expression at different loads and therefore we can directly compare expression between mutants and control only at 6 hpi, when the pathogen loads are similar among the genotypes for both S. pneumoniae and L. monocytogenes. In the case of Defensin, Diptericin, and Drosocin, but not Metchnikowin, the adoR mutant showed lower expression during S. pneumoniae infection, which would be consistent with its lower resistance. However, no such difference was detected at 18 hpi, nor at either time point in the case of L. monocytogenes. In the case of adgf-a/+, the expression of AMPs was mostly lower compared to control, including cases when the pathogen loads were comparable, which contrasts with the observed higher resistance. While we cannot exclude a role for AMPs in the observed effects on resistance, this arm of immunity does not seem to play as important a role as phagocytosis.
Our work demonstrates the crucial role of e-Ado in mediating the systemic metabolic switch, which is required for an effective immune response. Although we did not directly measure e-Ado levels, the opposite effects of the adoR mutation, which blocks adenosine signaling, on one hand, and of lowering ADGF-A, which degrades adenosine, on the other, leave little doubt that the observed effects are indeed associated with e-Ado action. Being able to mount an adequate immune response is vital for the organism (as demonstrated by the lower resistance of adoR), but its regulation is as important as the response itself; reducing regulation by lowering ADGF-A may prolong the switch, associated with a greater loss of the host energy reserves, and may potentially lead to the exploitation of the released nutrients by the pathogen. The regulatory role of ADGF-A is demonstrated by the hyperglycemic peak and subsequent decrease after 9-12 hpi during S. pneumoniae infection, which coincides with a rise in ADGF-A expression; when one copy of the ADGF-A gene is removed or ADGF-A is knocked down by RNAi, hyperglycemia continues past this time point and glycogen continues to fall. It is important to note that ADGF-A was knocked down specifically in hemocytes by the Hml-Gal4 driver; also, our hemocyte-specific expression analysis demonstrated that hemocytes are the primary source of ADGF-A. Therefore, the immune cells are the regulators of e-Ado action during the immune response. This is in agreement with our previous results showing specific expression of ADGF-A in specialized larval immune cells, lamellocytes, which encapsulate parasitoid wasp eggs [29] and represent a later phase of the immune response.
Taken together with immune cells first releasing adenosine to usurp energy from the rest of the organism during a parasitoid wasp attack [11], and later producing ADGF-A to downregulate adenosine, we can say that the immune system is able to regulate its privileged access to nutrients by producing a regulator of the signal mediating the systemic metabolic switch.
In summary (Fig 10), we demonstrate here that bacterial infections of adult Drosophila are associated with a systemic metabolic switch, manifested by hyperglycemia at the expense of glycogen stores. This switch supplies the immune system with the energy required for an effective response and is mediated by e-Ado signaling. Blocking e-Ado signaling demonstrates its crucial role for an effective immune response, in this case phagocytosis. However, proper regulation of e-Ado by the adenosine deaminase ADGF-A is equally important. Although lowering such regulation may increase host resistance to some infections, it may also lead to an excessive loss of energy reserves during chronic infection. An increased release of energy allocated for the immune system may also be exploited by the pathogen, leading to decreased host survival. Here we build on our previous work showing that the privileged behavior of the immune system is crucial for host resistance by revealing a mechanism in which the same immune system limits its own privilege to ultimately protect the whole organism.
Fly stocks and culture
The fly strains used for manipulating adenosine were first backcrossed 10 times into the same w1118 strain with a genetic background based on CantonS, which was used as a control (referred to simply as w in the text). To block adenosine signaling, we used a homozygous adoR1 null mutation (FBal0191589) of the adenosine receptor AdoR (CG9753; FBgn0039747). To enhance adenosine effects, we lowered the fly adenosine deaminase ADGF-A (CG5992; FBgn0036752) by the heterozygous null adgf-akar/+ mutation [23], referred to as adgf-a/+ in the text, and by RNA interference for the ADGF-A gene P{GD17237} (VDRC-50426; FBtp0028959) using the HmlΔ-Gal4 driver (FBti0128549), specific to hemocytes, in combination with a thermosensitive GAL80 construct P{tubP-GAL80ts}2 (FBti0027797). To induce RNAi, we crossed w1118; HmlΔ-Gal4; P{tubP-GAL80ts} flies to w1118; ADGF-A RNAi P{GD17237} flies, and the resulting progeny (referred to as Hml>ADGF-A[RNAi]) lowered ADGF-A mRNA to 20% or less of control levels (Fig 9A). As a control for RNAi, we crossed w1118; ADGF-A RNAi P{GD17237} to w1118, with the resulting progeny referred to as ADGF-A[RNAi]/+. To induce RNAi for glycogen phosphorylase, we crossed the y, w1118; P{KK100434}VIE-260B line (VDRC-109596; FBtp0066083) to either P{ppl-GAL4.P} (FBti0163688) or Act-Gal4 in combination with the thermosensitive GAL80 construct P{tubP-GAL80ts} to obtain P{KK100434}VIE-260B/P{ppl-GAL4.P}; P{tubP-GAL80ts}/+ (labeled as ppl>GlyP[RNAi]) or P{KK100434}VIE-260B/P{tubP-GAL80ts}; Act-Gal4/+ (labeled as Act>GlyP[RNAi]) flies. The KK control line y, w1118; P{attP,y[+],w[3`]} (VDRC-60100) was crossed to the Gal4 lines to obtain control flies for the GlyP RNAi with the same genetic background (labeled as either ppl x KK control or Act x KK control).
(Figure legend) Extracellular adenosine (e-Ado) mediates a systemic metabolic switch via the adenosine receptor AdoR upon infection, leading to increased circulating glucose at the expense of glycogen storage. Glucose is required for an effective immune response, but energy resources are limited and the energy may also be exploited by the pathogen. Therefore it is important to regulate e-Ado action, which is achieved by the immune cells themselves expressing the e-Ado regulator ADGF-A. Lowering such regulation leads to an increased resistance to S. pneumoniae on one hand, but also to lower glycogen stores and to increased intracellular L. monocytogenes loads (possibly by feeding the pathogen) on the other.
For hemocyte counting, we used the HmlΔ-Gal4 UAS-EGFP marker on chromosome II in each of the w, adoR, and adgf-a/+ backgrounds. Flies were grown on cornmeal medium (8% cornmeal, 5% glucose, 4% yeast, 1% agar) in 6-oz square-bottom plastic bottles (20 females per bottle laid eggs for 24 h only, to prevent crowding). The w, adoR and adgf-a/+ flies were raised in bottles at 25˚C, 70% humidity, with a 12/12-hour light/dark cycle; the Hml>ADGF-A[RNAi], ppl>GlyP[RNAi], Act>GlyP[RNAi], ADGF-A[RNAi]/+ and KK control flies were raised at 18˚C to suppress RNAi during development and transferred to 25˚C 24 h before infection to induce RNAi. Two-day-old male progeny were anesthetized with carbon dioxide and collected in plastic vials (20 flies per vial) with either a carbohydrate-poor, 0%-glucose diet (8% cornmeal, 4% yeast, 1% agar, no additional sugar) or a carbohydrate-rich, 10%-glucose diet (8% cornmeal, 4% yeast, 10% glucose, 1% agar) and transferred every other day to a fresh meal.
Bacterial strains and culturing conditions
The Listeria monocytogenes strain 10403S was stored at -80˚C in brain heart infusion (BHI) broth containing 25% glycerol. For the experiments, bacteria were streaked onto Luria Bertani (LB) agar plates containing 100 μg/mL streptomycin and incubated at 37˚C overnight; plates were stored at 4˚C and used for inoculation for a period of two weeks. Single colonies were used to inoculate 3 mL of BHI and incubated overnight at 37˚C without shaking to obtain a morning 600-nm optical density (OD600) of approximately 0.4. L. monocytogenes cultures were then diluted to OD600 0.01 in phosphate buffered saline (PBS) and stored on ice prior to loading into an injection needle. The Streptococcus pneumoniae strain EJ1 (a D39 streptomycin-resistant derivative; [30]) was stored at -80˚C in Tryptic Soy Broth (TSB) medium containing 10% glycerol. For the experiments, bacteria were streaked onto blood-containing TSB agar plates with 100 μg/mL streptomycin and incubated at 37˚C overnight; a fresh plate was prepared for each experiment. Single colonies were used to inoculate 3 mL of TSB with 100,000 units of catalase (Sigma C40) and incubated overnight at 37˚C with 5% CO2 without shaking. Morning cultures were diluted 2x in TSB with fresh catalase and grown for an additional 4 hours, reaching approximately OD600 0.4. Final cultures were concentrated by centrifugation and re-suspended in PBS to a concentration corresponding to OD600 2.4, then stored on ice prior to injection needle loading. For sublethal doses, we used approximately 12,000-15,000 CFU; for lethal doses, 20,000 CFU; the EC50 was between 15,000 and 20,000 CFU.
Fly injection
Seven-day-old male flies were anaesthetized with carbon dioxide. An Eppendorf Femtojet microinjector and a drawn glass needle were used to inject precisely 50 nl of bacteria or mock buffer into the fly through the cuticle on the ventrolateral side of the abdomen. Infectious doses were determined for each experiment by plating a subset of flies at time zero. To block phagocytosis, 50 nl of 10% 0.5-μm carboxylate-modified polystyrene (latex) beads (Sigma L5530) in PBS, or pure PBS as a control, was injected into the adult fly body cavity 24 hours before infection.
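The injected dose follows directly from the culture density and the 50-nl injection volume. As a rough illustration, the arithmetic can be sketched as below; the CFU-per-OD600 conversion factors are hypothetical values back-derived from the doses reported in this paper (~1000 L. monocytogenes at OD600 0.01, ~20,000 S. pneumoniae at OD600 2.4), not measured constants.

```python
# Back-of-envelope estimate of injected bacterial dose from culture
# optical density and injection volume. Conversion factors below are
# assumptions back-derived from the doses reported in the text.

def injected_cfu(od600: float, cfu_per_ml_per_od: float, volume_nl: float = 50.0) -> float:
    """Estimate colony-forming units delivered in one injection."""
    cfu_per_ml = od600 * cfu_per_ml_per_od
    return cfu_per_ml * volume_nl * 1e-6  # 1 nl = 1e-6 ml

# Hypothetical, back-derived factors (for illustration only):
LM_CFU_PER_ML_PER_OD = 2.0e9  # gives ~1000 CFU at OD600 0.01 in 50 nl
SP_CFU_PER_ML_PER_OD = 1.7e8  # gives ~20,000 CFU at OD600 2.4 in 50 nl

print(injected_cfu(0.01, LM_CFU_PER_ML_PER_OD))  # ~1000 L. monocytogenes
print(injected_cfu(2.4, SP_CFU_PER_ML_PER_OD))   # ~20,400 S. pneumoniae
```

In practice the actual delivered dose is confirmed empirically, as described above, by plating a subset of flies at time zero.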
Pathogen load measurement
Single flies were homogenized in PBS using a motorized plastic pestle in 1.5-ml tubes. Homogenates were plated in spots, in serial dilutions, onto LB (L. monocytogenes) or TSB (S. pneumoniae) agar plates containing streptomycin and incubated overnight at 37˚C before manual counting. To determine intracellular L. monocytogenes loads, flies were injected with 50 nl of gentamicin solution (1 mg/ml in PBS) 3 h prior to homogenization. Pathogen loads of 16 flies were determined for each genotype and treatment in each experiment; at least three independent infection experiments were conducted and the results were combined into one graph (in all presented cases, individual experiments showed comparable results). Values were transformed to logarithms, since they followed a lognormal distribution, and compared using unpaired t-tests corrected for multiple comparisons using the Holm-Sidak method in the Graphpad Prism software.
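The load statistics described above (log-transformation of lognormally distributed CFU counts, then unpaired t-tests with Holm-Sidak correction) can be sketched as follows; the CFU values and genotype labels are invented for illustration, and the Holm-Sidak step-down adjustment is implemented by hand rather than taken from Prism.

```python
# Sketch of the pathogen-load statistics: log-transform CFU counts,
# run pairwise unpaired t-tests vs. control, Holm-Sidak adjust p-values.
import numpy as np
from scipy import stats

def holm_sidak(pvals):
    """Holm-Sidak step-down adjustment of a list of p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(np.argsort(p)):
        a = 1.0 - (1.0 - p[idx]) ** (m - rank)  # Sidak at step-down rank
        running_max = max(running_max, a)       # enforce monotonicity
        adj[idx] = min(running_max, 1.0)
    return adj

rng = np.random.default_rng(0)
control = rng.lognormal(mean=9, sigma=0.5, size=16)  # hypothetical CFU/fly
mutants = {"adoR": rng.lognormal(10, 0.5, 16),
           "adgf-a/+": rng.lognormal(9.1, 0.5, 16)}

raw = [stats.ttest_ind(np.log10(control), np.log10(cfu)).pvalue
       for cfu in mutants.values()]
for name, p_adj in zip(mutants, holm_sidak(raw)):
    print(f"{name}: adjusted p = {p_adj:.3g}")
```

The log-transform matters: t-tests assume roughly normal data, which lognormal CFU counts only satisfy after taking logarithms.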
Survival analysis
A total of 200 to 300 flies were injected for each genotype and treatment in one experiment; at least three independent infection experiments were repeated and combined into one survival curve (in all presented cases, individual experiments showed comparable results). Injected flies were placed into vials with 20 flies per vial, transferred to a fresh vial every other day and checked daily to determine mortality. Flies infected by L. monocytogenes were kept at 25˚C and flies infected with S. pneumoniae were kept at 29˚C. Survival curves were generated by Graphpad Prism software and analyzed by Log-rank and Gehan-Breslow-Wilcoxon (more weight to deaths at early time points) tests, as specified in the appropriate figure legends.
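A minimal sketch of the two-group log-rank comparison used for these survival curves, written from scratch with NumPy/SciPy rather than a dedicated survival package, and assuming uncensored death times (every fly's day of death is observed); a full analysis like the Prism one described above would also handle censoring.

```python
# Two-sample log-rank test on uncensored death times (days to death).
import numpy as np
from scipy.stats import chi2

def logrank(times_a, times_b):
    """Return (chi-square statistic, p-value) for two survival groups."""
    times_a, times_b = np.asarray(times_a), np.asarray(times_b)
    all_times = np.unique(np.concatenate([times_a, times_b]))
    obs_minus_exp, var = 0.0, 0.0
    for t in all_times:
        n_a = np.sum(times_a >= t)          # group A still at risk at t
        n_b = np.sum(times_b >= t)
        n = n_a + n_b
        d_a = np.sum(times_a == t)          # group A deaths at t
        d = d_a + np.sum(times_b == t)      # total deaths at t
        obs_minus_exp += d_a - d * n_a / n  # observed minus expected in A
        if n > 1:
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    stat = obs_minus_exp ** 2 / var
    return stat, chi2.sf(stat, df=1)

# Toy example: one group dies on days 1-3, the other on days 4-6.
stat, p = logrank([1, 2, 3], [4, 5, 6])
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```

The Gehan-Breslow-Wilcoxon variant differs only in weighting each death time by the number at risk, which gives more weight to early deaths.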
Hemocyte counting and phagocytosis analysis
The number of hemocytes in adult flies was determined by counting Hml>GFP-positive cells visualized by confocal microscopy. The number of phagocytic cells was determined by injection of the marker pHrodo Red S. aureus Bioparticles (ThermoFisher Scientific) 40 min prior to fixing the flies. Flies were fixed in 4% paraformaldehyde in PBS and imaged by confocal microscopy with maximal projection from five different layers; the same Z-stack range and laser intensities were used for all animals. The cells were observed in whole flies to detect possible gross changes, but no obvious differences were observed. The exact number of cells was counted within a selected thorax region, as depicted in Fig 2A, using Fiji software and compared by Student's t-tests using the Graphpad Prism software.
Hemocyte isolation by fluorescence-activated cell sorting
A fluorescence-activated cell sorter was used for the isolation of HmlΔ-Gal4 UAS-EGFP-labeled hemocytes from adult flies. Approximately 12,000 living cells were separated from the homogenate of 100 male flies. The males used for this analysis were anesthetized with CO2, washed several times with PBS and then homogenized with a sterile pestle in 800 μl of PBS. The cell homogenate was then filtered through a 70-μm cell strainer (Corning) and washed three times with ice-cold PBS, each wash followed by centrifugation at 5000 RPM for 3 min at 4˚C. Samples were filtered once more through a 40-μm cell strainer immediately before sorting. An S3E cell sorter (BioRad) was used for sorting. GFP-positive cells constituted approximately 1% of the total cell number. Sorted hemocytes were verified by fluorescence microscopy and DIC.
Gene expression
Gene expression was analyzed by quantitative real-time PCR. Whole flies or sorted HmlΔ-Gal4 UAS-EGFP hemocytes were homogenized and total RNA was isolated with Trizol reagent (Ambion) according to the manufacturer's protocol. DNA contamination was removed using a Turbo DNase-free kit (Ambion) according to the protocol (37˚C, 30 min), with subsequent inactivation of DNase by the DNase inactivation reagent (5 min at RT, spin at 13,000 RPM at RT). Reverse transcription was done with Superscript III reverse transcriptase (Invitrogen), and the amounts of mRNA of particular genes were quantified using the iQ SYBR Green Supermix (BioRad) on a CFX 1000 Touch real-time cycler (BioRad). Expression was analyzed by double delta Ct analysis, normalized to the expression of Ribosomal protein 49 (Rp49) in the same sample. Values relative to control (fold change; the control is specified in each graph) were compared and are shown in the graphs. Primer sequences can be found in S1 Table. Samples were collected from three independent infection experiments, with three technical replicates for each experiment, and compared by unpaired t-tests using Graphpad Prism software.
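The double delta Ct normalization described above can be sketched as a short calculation; the Ct values below are invented for illustration and the 2^-ΔΔCt formula assumes approximately equal primer efficiencies for the target gene and the Rp49 reference.

```python
# Sketch of the double delta Ct (2^-ddCt) fold-change calculation:
# normalize the target Ct to Rp49 in the same sample, then express
# relative to the (uninfected) control sample.

def fold_change(ct_target, ct_rp49, ct_target_ctrl, ct_rp49_ctrl):
    """2^-ddCt fold change of a sample relative to the control sample."""
    d_ct_sample = ct_target - ct_rp49            # dCt of the sample
    d_ct_control = ct_target_ctrl - ct_rp49_ctrl # dCt of the control
    return 2.0 ** -(d_ct_sample - d_ct_control)  # 2^-(ddCt)

# Example (hypothetical Ct values): the target crosses threshold two
# cycles earlier relative to Rp49 in the infected sample than in the
# uninfected control -> 4-fold induction.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```

Each halving of ΔΔCt corresponds to a doubling of relative expression, which is why qPCR fold changes are compared on this exponential scale rather than on raw Ct values.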
Metabolite measurement
Glucose and glycogen were measured by approaches published in [31]. Three flies were homogenized in 1x PBST (PBS with 0.3% Tween) and large tissue fragments were pelleted by centrifugation (800 x g, 5 min, 4˚C); half of the sample was used for protein quantification and the remainder was denatured by heating at 75˚C for 10 min and stored at -80˚C. Glucose was determined using a GAGO-20 kit (Sigma) according to the supplier's protocol, using spectrophotometric measurement at 540 nm. Glucose measurements probably also contained trehalose, since this carbohydrate is usually present in flies, but we were not able to distinguish between glucose and trehalose in our measurements, most likely due to endogenous trehalase activity in homogenized samples. Glycogen samples were first treated with amyloglucosidase enzyme (Sigma) for 30 min. Protein concentration was analyzed by Bradford measurement. Samples were homogenized and proteins were dissolved in 1x PBS. A 100-μl volume of protein sample was mixed with 10 μl of Bradford solution (10 mg of Brilliant blue, 5 ml of 96% ethanol, and 10 ml of 85% phosphoric acid in 100 ml of solution). The concentration of proteins was derived from the absorbance of the reaction solution at 595 nm. Values were compared by multiple unpaired t-tests using the GraphPad Prism software.

Survival upon S. pneumoniae infection was investigated for flies on carbohydrate-poor, 0%-glucose (marked as 0%; solid lines) and carbohydrate-rich, 10%-glucose (marked as 10%; dashed lines) diets and was analyzed by both Log-rank and Gehan-Breslow-Wilcoxon tests. The adoR mutation (red) significantly reduces survival (P<0.0001) on a 0%-glucose diet compared to the w control. The survival of adoR is significantly improved by a 10%-glucose diet (P<0.0001 when compared to adoR on 0% and 10% diets; marked by asterisks). (TIF)
S8 Fig. Gene expression analysis of antimicrobial peptides during infection.
Expression of the antimicrobial peptides Defensin, Diptericin, Drosocin and Metchnikowin in PBS-injected flies (PBS control, grey columns) and those infected either by S. pneumoniae (S.p., black columns; left panels) or L. monocytogenes (L.m., black columns; right panels), 6 and 18 hpi, in four genotypes: w (as a control), adoR (blocking adenosine signaling), heterozygous adgfa/+ mutant and Hml-driven ADGF-A RNAi (Hml>ADGF-A[RNAi]), both for enhancing e-Ado effects. Bars show mean fold change relative to the expression of uninfected w control flies at 6 hpi obtained by qRT-PCR, with SEM as error bars; stars mark significant changes in infected samples when compared to the infected w control at the given time, tested by two-way ANOVA. (TIF)

S1 Table. Primer sequences for gene expression analysis. (DOCX)

experiments that led to a significant improvement in our manuscript. We would like to thank the Vienna Drosophila Resource Center and the Bloomington Stock Center for fly lines. | 2018-05-03T02:09:31.996Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "52033e7721d3527295441a7223505f7c74c1b485",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1007022&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "52033e7721d3527295441a7223505f7c74c1b485",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
204039926 | pes2o/s2orc | v3-fos-license | Management of Hepatopulmonary Syndrome in a Child Due to a Large Congenital Intrahepatic Porto-Systemic Shunt
Background: Abernethy malformations are rare vascular anomalies of the portal system which present as extrahepatic congenital portosystemic venous shunts (CPSS). Sometimes they can be intra-hepatic anomalies. There is a scarcity of literature on the management of these rare anomalies, especially intra-hepatic shunts. Case: A five-year-old child presented with progressive breathlessness on exertion and effort intolerance for the past two years. There was no history suggestive of underlying cardiopulmonary illness. On examination, there was cyanosis and clubbing. On evaluation, imaging showed a large congenital intra-hepatic portosystemic shunt from the left portal vein draining directly into the intrahepatic inferior vena cava (IVC) and a hypoplastic right branch of the portal vein, leading to a clinical presentation of hepatopulmonary syndrome. Result: The shunt was occluded by placing a covered stent in the IVC across the shunt opening, making sure that the openings of the hepatic veins and renal vein were not covered. There was a significant improvement in oxygenation post procedure, with complete disappearance of cyanosis. Conclusion: Covered IVC stent placement is a novel technique for large fusiform dilated intra-hepatic CPSS, closing the shunt flow into the IVC and thereby restoring physiological flow in the liver.
Introduction
Abernethy malformations are very rare vascular anomalies of the portal system with an overall incidence of 1:30,000 births [1]. These anomalies can also be intra-hepatic. Due to abnormal shunting of portal venous blood into the systemic circulation, there is a passage of vasoactive mediators from the splanchnic circulation directly into the pulmonary system, bypassing the liver. This leads to intrapulmonary vascular dilatations and impaired oxygen exchange, which results in hepatopulmonary syndrome (HPS) [2]. Porto-systemic encephalopathy and porto-pulmonary hypertension are other complications frequently observed in children [3]. Characteristically, these patients do not develop features of portal hypertension.
History of Presenting Illness
A five-year-old boy presented with a two-year history of shortness of breath and fatigue on minimal exertion. There was also a lack of weight gain. There was no other history to suggest congenital heart disease, cystic fibrosis or childhood asthma. Born of a non-consanguineous marriage and delivered by cesarean section at 8 months of gestation, the child cried immediately after birth and was discharged from the hospital within a week. No similar history was noted in the siblings.
On Examination
The child was poorly built and nourished, with a body mass index of 11.3 kg/m2. There was bluish discoloration of the tongue and nails, suggestive of central and peripheral cyanosis. An undescended testis was noticed on the right side. Resting oxygen saturation was 75% on room air, which fell to 68% with exertion and improved to 95% with oxygen supplementation.
Laboratory Investigations
These showed normal blood counts and normal liver and kidney function. Viral serology was negative.
Radiological Investigations
(i) Chest X-ray was unremarkable.
(ii) Transthoracic echocardiography showed no structural anomaly, but bubble echo was positive for intrapulmonary shunting. (iii) Pulmonary angiography was confirmative of diffuse pulmonary arteriovenous malformations. With no obvious congenital cardiac anomaly, but with the echocardiogram and pulmonary angiogram being suggestive of hepatopulmonary syndrome, further evaluation was done to look for any sub-diaphragmatic anomalies.
(iv) Ultrasound doppler of the abdomen showed a normal-sized liver and normal hepatic veins. Interestingly, the portal system showed a dilated left portal vein draining directly into the intrahepatic IVC and a hypoplastic right branch of the portal vein. (v) Subsequently, contrast-enhanced computed tomography of the abdomen was done, which showed fusiform dilatation of the left portal vein, measuring 13.2 mm in diameter and 32.4 mm in length and communicating with the intra-hepatic portion of the IVC through an opening of 5.8 mm in diameter. The right portal vein was hypoplastic and the spleen was normal (Figure 1). Institutional review board approval was obtained.
Methods
The treatment approaches were discussed in detail in a multidisciplinary board meeting involving interventional radiologists, hepatologists, pediatrician, liver surgeon, and anesthesiologist.
Surgical Approach
Liver resection was initially contemplated but later ruled out, as it would be a major hepatectomy with the added risk of administering anesthesia to a hypoxemic child (due to HPS).
Interventional Radiology Approach
This was the next best option. As it was a large shunt, coil embolization would have been unsuccessful. Vascular plugs were also not considered, as it was a large, high-flow shunt through which the plug could easily have migrated into the cardiopulmonary circulation. Finally, endovascular occlusion of the shunt by placing a covered stent graft in the IVC across the shunt opening, with backup plans of using additional embolizing agents such as vascular plugs, detachable and pushable coils, and glue if required, was considered the most feasible therapeutic option.
Procedure
Under general anesthesia, shunt closure was done by an endovascular approach. Using ultrasound guidance, bilateral femoral vein and right internal jugular vein access were obtained and 5F sheaths were placed. Another 12F sheath was placed in the right femoral vein. The positions of the hepatic veins and left renal vein were noted, and 5F cobra catheters were placed, one each in the hepatic vein and left renal vein. The shunt between the IVC and portal vein was cannulated by passing a multipurpose catheter from the right jugular sheath and positioning it within the superior mesenteric vein (Figure 2). Through the right femoral vein sheath, a diagnostic catheter with a 035" Terumo wire was passed across the right atrium and positioned in the right subclavian vein. Subsequently, it was replaced with a 035" exchange-length Amplatz wire. Over this wire, a Saline 14 x 30 mm covered stent graft was passed and deployed across the shunt, avoiding covering the hepatic veins and renal vein. Angiographic runs taken after deployment of the graft revealed an absence of flow across the intrahepatic portosystemic shunt, with visible opacification of intrahepatic portal vein radicles (Figure 3). The portal pressure after stent deployment was measured at around 5 mm Hg, suggestive of no consequent portal hypertension.
Results
There were no immediate procedural complications. Postprocedure recovery was uneventful.
Follow up ultrasound doppler on 2nd and 4th day revealed no flow across the shunt with normal flow pattern in the portal vein and IVC with patent stent graft in the IVC.
The child was started on antiplatelets and discharged on the 5th postoperative day. At the time of discharge, the child was symptomatically better, with room-air oxygen saturation maintained around 82-85%. The bluish hue in the tongue and nails had also considerably decreased.
Discussion
CPSS can be either intrahepatic or extrahepatic. Usually, intrahepatic CPSS are classified into 4 types [4,5]. Rarely, a patent ductus venosus originating from the left portal vein can present as an intra-hepatic CPSS [6]. Patients with HPS present with dyspnea on exertion, cyanosis, and clubbing [7]. Due to the rarity of CPSS, a standard-of-care approach is not available. Interventional radiology procedures include coil embolization for small shunts [8] and the use of vascular plugs for large high-flow ones [9]. Surgical options [10,11] include shunt ligation for extrahepatic CPSS, liver resection for large multifocal intrahepatic CPSS, focal malignant liver lesions or failed embolization, and finally liver transplantation [12] for CPSS with complete agenesis of the portal vein. This child had a large intrahepatic CPSS which was not amenable to embolization or vascular plug insertion due to fear of migration. As CT angiography and the portal venogram showed patent but atretic intrahepatic portal vasculature, placement of an IVC covered stent was attempted to cover the shunt opening of the CPSS into the IVC, thereby blocking the shunt flow into the systemic circulation. There is some interesting data that, with successful closure of the shunt, the hypoplastic intra-hepatic portal branches can also open and gradually restore the normal physiological portal flow [13]. During the procedure, careful mapping was done to avoid inadvertent placement of the stent over the opening of the hepatic veins or the left renal vein opening into the IVC. After deployment of the stent, the portal venogram showed no flow across the shunt into the IVC, with visualization of some intrahepatic portal vasculature. There is a similar case report [14] of an IVC covered stent deployment in an adult with extrahepatic Type II CPSS presenting with HPS.
Another interesting case of extrahepatic Type II CPSS in a child with HPS was reported [15], where the authors utilized the concept of hepatic plasticity and performed a three-staged endovascular procedure, the first stage being IVC graft placement to block the portocaval fistula and transjugular intrahepatic portosystemic shunt (TIPS) placement to control the consequent portal hypertension. The subsequent two stages were the gradual reduction and closure of the TIPS stent until normal portal venous circulation was restored in the liver. In our case, it was a large intrahepatic shunt which was diverting blood into the IVC, bypassing first-pass metabolism in the liver. Hence, directly closing the shunt was the best option. Portal pressure was 4 mm Hg prior to stent deployment and 5 mm Hg after deployment.
Conclusions
To the best of our knowledge, this is the first time that placement of a covered IVC stent has been done for a large fusiform dilated intrahepatic portosystemic shunt malformation or patent ductus venosus in a child, leading to amelioration of HPS. At the six-month post-procedural follow-up, the child is doing well and free of any complications. Resting oxygen saturation is 96%, with a post-exercise saturation of 92%. He has also demonstrated significant improvement in his physical activity and has started going to school.
Recommendations
Abernethy malformation is an interesting but rare entity which, once identified, needs to be managed with a personalized approach that will benefit the patient clinically. Newer techniques should be employed to detect this rare entity in the antenatal period itself.
Author Contributions
CKK and JV identified the patient's clinical condition and made the treatment strategy with SV and RK. The procedure was done by SV and RK. Follow-up of the patient and drafting of the manuscript were done by CKK; critical revision of the manuscript for important intellectual content was done by JV; administrative and technical support by JV. | 2019-10-11T17:40:41.846Z | 2019-09-24T00:00:00.000 | {
"year": 2019,
"sha1": "0fd7a224713a58b14040f95475635d7fc6341f49",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijg.20190301.14.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "571357ecf96c38aa5017a6261afc8c1e121b9150",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246285518 | pes2o/s2orc | v3-fos-license | Pair-Level Supervised Contrastive Learning for Natural Language Inference
Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer the relationship between the sentence pair (premise and hypothesis). Many recent works have used contrastive learning by incorporating the relationship of the sentence pair from NLI datasets to learn sentence representation. However, these methods only focus on comparisons with sentence-level representations. In this paper, we propose a Pair-level Supervised Contrastive Learning approach (PairSCL). We adopt a cross attention module to learn the joint representations of the sentence pairs. A contrastive learning objective is designed to distinguish the varied classes of sentence pairs by pulling those in one class together and pushing apart the pairs in other classes. We evaluate PairSCL on two public datasets of NLI where the accuracy of PairSCL outperforms other methods by 2.1% on average. Furthermore, our method outperforms the previous state-of-the-art method on seven transfer tasks of text classification.
INTRODUCTION
Natural Language Inference (NLI) is a fundamental problem in the research field of natural language understanding [1,2], which can help tasks like question answering, reading comprehension, summarization and relation extraction [3,4,5,6]. In NLI settings, the model is presented with a pair of sentences, namely premise and hypothesis, and is asked to infer the relationship between them from a set of relationships, including entailment, contradiction and neutral. In the last several years, large annotated datasets have been made available, e.g., the SNLI [7] and MultiNLI [8] datasets, which made it feasible to train rather complicated neural network-based models [9,10].
However, these methods only use the features of the sentence pair itself to predict the class, without considering comparisons between sentence pairs in different classes. Many recent works have explored using contrastive learning to tackle this problem. Contrastive learning is a popular technique in the computer vision area [11,12,13]; the core idea is to learn a function that maps positive pairs closer together in the embedding space while pushing apart negative pairs. A contrastive objective was used by [14] to fine-tune pre-trained language models to obtain sentence embeddings from the relationships of sentences in NLI, achieving state-of-the-art performance on sentence similarity tasks. However, this approach cannot distinguish well between the representations of sentence pairs in different classes. For example, consider two sentence pairs that are both in the entailment class of an NLI dataset (P1: Two men on bicycles competing in a race. H1: People are riding bikes. P2: Two dogs are running. H2: There are animals outdoors). [14] simply considers H1 as the positive set and H2 as the negative set for P1, without taking into account that these two pairs are in the same class.
Given this scenario, we propose a pair-level supervised contrastive learning approach. The pair-level representation is obtained by cross attention module which can capture the relevance and well characterize the relationship between the sentence pair. Therefore, the pair-level representation can perceive the class information of sentence pairs. Then we use the pair-level representations for contrastive learning by capturing the similarity between pairs in one class and contrasting them with pairs in other classes. The model is trained with a combined objective of a supervised contrastive learning term and a cross-entropy term. We evaluate PairSCL on two public datasets of NLI where the accuracy of PairSCL outperforms other methods by 2.1% on average. Furthermore, our method outperforms the previous state-of-the-art method on seven transfer tasks of text classification.
APPROACH
In this section, we describe our approach PairSCL. Figure 1 shows a high-level general view of PairSCL. PairSCL comprises the following three major components: an encoder that computes sentence representations for input text, a cross attention module to capture the relationship between the sentence pair and a joint-training layer including a cross-entropy term and supervised contrastive learning term.
Fig. 1. The framework of PairSCL.
Text Encoder
Each instance in an NLI dataset consists of two sentences and a label indicating the relation between them. Formally, we denote the premise as X(p) = {x1(p), ..., xm(p)} and the hypothesis as X(h) = {x1(h), ..., xn(h)}, where m and n are the lengths of the two sentences, respectively. An instance in the batch I is denoted as (X(p), X(h), y)i∈I, where i = {1, . . . , K} indexes the samples and K is the batch size. The encoder (e.g., BERT, RoBERTa) takes X(p) and X(h) as inputs and computes the semantic representations, denoted as S(p) ∈ R^(m×k) and S(h) ∈ R^(n×k), where k is the dimension of the encoder's hidden state.
Cross Attention Module
Different from single-sentence classification, we need a proper interaction module to better characterize the relationship of the sentence pair for the NLI task. In practice, we need to compute token-level weights between words in the premise and hypothesis. Therefore, we introduce the cross attention module to calculate a token-level co-attention matrix C ∈ R^(m×n). Each element Ci,j ∈ R indicates the relevance between the i-th word of the premise and the j-th word of the hypothesis; here W ∈ R^(d×k) and P ∈ R^d are learnable parameters and ∘ denotes the element-wise product operation. The attentive matrix collects local semantic information, which we further enhance: the difference between the original representation of the premise and its hypothesis-information-enhanced representation, and their element-wise semantic similarity, are both designed to measure the degree of semantic relevance between the sentences of the pair. The smaller the difference and the larger the semantic similarity, the more likely the sentence pair is to be classified into the Entailment category. The difference and element-wise product are then concatenated with the original vectors ([·; ·; ·; ·] refers to the concatenation operation). We expect that such operations help enhance the pair-level information and capture the inference relationship between premise and hypothesis. We obtain a new representation Ŝ(p) containing hypothesis-guided inferential information for the premise, where LayerNorm(·) denotes layer normalization; Ŝ(p) is a 2D tensor with the same shape as S(p). The representation of the hypothesis, Ŝ(h), is calculated in the same way. We aggregate these representations to obtain the pair-level representation Z for the sentence pair. As described, the cross attention module can capture the relevance of the sentence pair and well characterize its relationship.
Therefore, the pair-level representation can perceive the class information of sentence pairs.
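As a rough illustration of the interaction step, the token-level co-attention and local-inference enhancement can be sketched in NumPy, with plain dot-product relevance standing in for the paper's parameterized form with W and P (the function name and all dimensions are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(S_p, S_h):
    """Co-attention between premise S_p (m x k) and hypothesis S_h (n x k).

    Simplified sketch: plain dot-product relevance replaces the paper's
    parameterized relevance with W and P."""
    C = S_p @ S_h.T                          # (m, n): relevance of word i to word j
    S_p_hat = softmax(C, axis=1) @ S_h       # hypothesis-aware premise, (m, k)
    # Local-inference enhancement: [orig; attended; difference; element-wise product]
    return np.concatenate([S_p, S_p_hat, S_p - S_p_hat, S_p * S_p_hat], axis=-1)

m, n, k = 5, 7, 16
fused = cross_attend(np.random.rand(m, k), np.random.rand(n, k))
print(fused.shape)  # (5, 64)
```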
Training Objective
Supervised contrastive loss A contrastive loss brings the latent representations of samples belonging to the same class closer together, by defining a set of positives (that should be closer) and negatives (that should be further apart). In [13], the authors extended this loss to a supervised contrastive loss by regarding the samples belonging to the same class as the positive set. Inspired by this, we adopt a supervised contrastive learning objective to align the pair-level representations obtained from the cross attention module and distinguish sentence pairs from different classes.
In the training stage, we randomly sample a batch I of K examples (X(p), X(h), y)i∈I={1,...,K} as denoted in Section 2.1. We denote the set of positives for pair i as P = {p : p ∈ I, yp = yi ∧ p ≠ i}, with size |P|. The supervised contrastive loss on the batch I is defined as

L_SCL = Σ_{i∈I} (−1/|P|) Σ_{p∈P} log [ exp(Zi · Zp / τ) / Σ_{a∈I, a≠i} exp(Zi · Za / τ) ],

where the inner fraction indicates the likelihood that pair i is most similar to pair p, and τ is the temperature hyper-parameter. Larger values of τ scale down the dot-products, creating more difficult comparisons. Zi is the pair-level representation of pair (X(p), X(h))i from the cross attention module. The supervised contrastive loss L_SCL is calculated for every sentence pair in the batch I. To minimize the contrastive loss L_SCL, the similarity of pairs in the same class should be as large as possible, and the similarity of negative examples should be as small as possible. In this way, we can map positive pairs closer together in the embedding space while pushing apart negative pairs.
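As an illustrative sketch of this objective (a plain NumPy implementation with toy inputs, not the authors' code), the batch loss can be computed as:

```python
import numpy as np

def sup_con_loss(Z, y, tau=0.05):
    """Supervised contrastive loss over a batch of pair-level representations.

    Z: (K, d) array of representations; y: (K,) integer class labels."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # L2-normalize
    sim = (Z @ Z.T) / tau                              # scaled cosine similarities
    K = len(y)
    not_self = ~np.eye(K, dtype=bool)                  # exclude a == i in the denominator
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    denom = (np.exp(logits) * not_self).sum(axis=1, keepdims=True)
    log_prob = logits - np.log(denom)
    pos = (y[:, None] == y[None, :]) & not_self        # positives: same class, not self
    loss_i = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return loss_i.mean()

# Two tight clusters: correct labels give a lower loss than mismatched ones.
Z = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
print(sup_con_loss(Z, np.array([0, 0, 1, 1])) < sup_con_loss(Z, np.array([0, 1, 0, 1])))  # True
```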
Cross-entropy loss The supervised contrastive loss mainly focuses on separating each pair from those of other classes, whereas there is no explicit force discriminating contradiction, neutral and entailment. Therefore, we adopt the softmax-based cross-entropy to form the classification objective:

L_CE = −(1/K) Σ_{i∈I} log softmax(W Zi + b)[yi],

where W and b are trainable parameters, Zi is the pair-level representation from the cross attention module and yi is the corresponding label of the pair.
Overall loss The overall loss is a weighted combination of the CE and SCL losses, denoted as

L = L_CE + α · L_SCL,

where α is a hyper-parameter to balance the two objectives.
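A minimal sketch of the combined objective (the classifier head's logits and a precomputed SCL scalar are taken as inputs; the function names are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, y):
    """Mean cross-entropy over the batch; logits come from the classifier head W Z + b."""
    p = softmax(logits)
    return -np.log(p[np.arange(len(y)), y]).mean()

def overall_loss(logits, y, l_scl, alpha=1.0):
    # L = L_CE + alpha * L_SCL; the paper sets alpha = 1 in its experiments.
    return cross_entropy(logits, y) + alpha * l_scl

# Uniform logits over 3 classes give CE = ln 3:
print(round(overall_loss(np.zeros((2, 3)), np.array([0, 1]), l_scl=0.0), 4))  # 1.0986
```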
Benchmark Datasets
We conduct our experiments on the NLI task and on 7 transfer learning tasks. Natural language inference task: We evaluate on two popular benchmarks, the Stanford Natural Language Inference (SNLI) [7] and Multi-Genre NLI (MultiNLI) [8] corpora, and compute classification accuracy as the evaluation metric. Detailed statistical information is shown in Table 1.
Transfer tasks: We also evaluate on the following transfer tasks: MR [15], CR [16], SUBJ [17], MPQA [18], SST-2 [19], TREC [20] and MRPC [21]. For single-sentence classification tasks, we train a logistic regression classifier on top of frozen BERT encoder representation S. In MRPC task, we use the pair-level representation Z obtained from the cross attention module for the sentence pair to map the semantic space. We follow default configurations from SentEval [22].
Implementation Details
We start from pre-trained checkpoints of BERT [23] (uncased) or RoBERTa [24] (cased). We implement PairSCL based on Huggingface's transformers package [25]. All experiments are conducted on 5 Nvidia GTX 3090 GPUs. We train our models for 10 epochs with a batch size of 512 and temperature τ = 0.05 using an Adam optimizer [26]. The hyper-parameter α is set as 1 for combining objectives. The learning rate is set as 5e-5 for base models and 1e-5 for large models. The maximum sequence length is set to 128.
For transfer tasks, we evaluate against SBERT, SRoBERTa [29] and SimCSE [14]. We directly report the results from [29], since our evaluation setting is the same as theirs.

Table 3. Transfer task results of different sentence embedding models (measured as accuracy). ♣: results from [29]; ♥: results from [14]. The best performance is in bold among models with the same pre-trained encoder.

For the results on the two datasets, we conduct Student's paired t-test; the p-values of the significance tests between the results of PairSCL and RoBERTa are less than 0.01 and 0.05, respectively. These performance gains are due to the stronger ability of PairSCL to learn pair-level representations with cross attention. PairSCL can capture pair-level semantics effectively through the specifically-designed contrastive signal: predicting whether two sentence pairs belong to the same class. Table 3 shows the evaluation results on transfer tasks. We can observe that PairSCL-BERTbase outperforms several supervised baselines like InferSent and Universal Sentence Encoder, and is comparable to the strong supervised method SBERTbase. When further performing representation transfer with the RoBERTabase architecture, our approach achieves even better performance. On average, our approach outperforms SimCSE-RoBERTabase with an improvement of 1.15% (from 88.08% to 89.23%).
Transfer Tasks Results
As we argued earlier, this benefit comes from our model's ability to distinguish sentence pairs of different classes by pulling pairs from the same class together and pushing those of different classes further apart.
Ablation Study
To better understand the contribution of each key component of PairSCL, we conduct an ablation study on SNLI based on BERT encoders. The results are shown in Table 4.
After removing the cross attention mechanism, the model simply concatenates the representations of the two sentences. The performance decreases by 1.6% on the test set, which shows that the joint representation obtained by cross attention can well characterize the relationship between the sentence pair. Removing the cross-entropy loss decreases the test accuracy by 0.7%. Without the supervised contrastive learning loss, the accuracy of our model falls to 90.7%. The reason is that the contrastive learning objective can learn the discrepancy between sentence pairs of different classes by pulling sentence pairs from the same class together and pushing pairs of different classes further apart.
CONCLUSION
In this paper, we propose a pair-level supervised contrastive learning approach. We adopt a cross attention module to learn the joint representations of the sentence pairs. A contrastive learning objective is designed to distinguish the varied classes of sentence pairs by pulling those in one class together and pushing apart the pairs in other classes. We evaluate PairSCL on two popular datasets: SNLI and MultiNLI. The experiment results show that PairSCL obtains new state-of-the-art performance compared with existing models. For the transfer tasks, PairSCL outperforms the previous state-of-the-art method with a 1.2% average improvement. We carefully study the components of PairSCL and show the effects of the different parts. | 2022-01-27T02:16:00.097Z | 2022-01-26T00:00:00.000 | {
"year": 2022,
"sha1": "ca9c25e964abd0b6634132836e54e58beb050428",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ca9c25e964abd0b6634132836e54e58beb050428",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
247159564 | pes2o/s2orc | v3-fos-license | Dynamics of COVID-19 in Amazonia: A history of government denialism and the risk of a third wave
The city of Manaus (the capital of Brazil's state of Amazonas) has become a key location for understanding the dynamics of the global pandemic of COVID-19. Different groups of scientists have foreseen different scenarios, such as the second wave or that Manaus could escape such a wave by having reached herd immunity. Here we test five hypotheses that explain the second wave of COVID-19 in Manaus: 1) The greater transmissibility of the Amazonian (gamma or P.1) variant is responsible for the second wave; 2) SARS-CoV-2 infection levels during the first wave were overestimated by those foreseeing herd immunity, and the population remained below this threshold when the second wave began at the beginning of December 2020; 3) Antibodies acquired from infection by one lineage do not confer immunity against other lineages; 4) Loss of immunity has generated a feedback phenomenon among infected people, which could generate future waves, and 5) A combination of the foregoing hypotheses. We also evaluated the possibility of a third wave in Manaus despite advances in vaccination, the new wave being due to the introduction of the delta variant in the region and the loss of immunity from natural contact with the virus. We developed a multi-strain SEIRS (Susceptible-Exposed-Infected-Removed-Susceptible) model and fed it with data for Manaus on mobility, COVID-19 hospitalizations, numbers of cases and deaths. Our model contemplated the current vaccination rates for all vaccines applied in Manaus and the individual protection rates already known for each vaccine. Our results indicate that the SARS-CoV-2 gamma (P.1) strain that originated in the Amazon region is not the cause of the second wave of COVID-19 in Manaus, but rather this strain originated during the second wave and became predominant in January 2021.
Our multi-strain SEIRS model indicates that neither the doubled transmission rate of the gamma variant nor the loss of immunity alone is sufficient to explain the sudden rise of hospitalizations in late December 2020. Our results also indicate that the most plausible explanation for the current second wave is a SARS-CoV-2 infection level at around 50% of the population in early December 2020, together with loss of population immunity and early relaxation of restrictive measures. The most-plausible model indicates that contact with one strain does not provide protection against other strains and that the gamma variant has a transmissibility rate twice that of the original SARS-CoV-2 strain. Our model also shows that, despite the advance of vaccination, and even if future vaccination advances at a steady pace, the introduction of the delta variant or other new variants could cause a new wave of COVID-19.
Introduction
The SARS-CoV-2 virus, which causes coronavirus disease 2019 (COVID-19), is responsible for the biggest pandemic of the 21st century. Decision makers have tried to identify the most effective strategies to overcome the pandemic and avoid as many deaths as possible, but Manaus, the capital of Brazil's state of Amazonas and the largest city in the Amazon region, appears to be an exception. In order to favor the economy in the short term, local politicians refused to take measures to stop the pandemic and prevent a second wave of COVID-19, even when official government data did not indicate a decline in the number of cases or deaths (Ferrante et al., 2020). In January 2021 Manaus experienced a second wave of cases, hospitalizations and deaths that surpassed the first peak of the pandemic, which occurred in April and May 2020 (FVS, 2021a). Hardly any other city in the world has undergone a natural experiment with the pandemic like the current one in Manaus.
The population's high rate of contact with SARS-CoV-2 raises the questions of which hypotheses are the most plausible to explain the second COVID-19 wave that Manaus is experiencing, and whether seroprevalence was overestimated (Sabino et al., 2021). Several studies have suggested that natural immunity is temporary and tends to fade within a few months (Brett and Rohani, 2020; Edridge et al., 2020; Yang and Ibarrondo, 2020; Ferrante et al., 2021a), and this is also a plausible explanation for the second wave in Manaus (Ferrante et al., 2021a; 2021b). On January 10, 2021, the Japanese government announced that it had identified a new variant of SARS-CoV-2 in people who had traveled to Japan from Manaus.
Genomic evidence indicates that the Amazonian (gamma or P.1) variant was not yet in circulation in Manaus in November 2020 and that it became almost predominant in January 2021. Although politicians have been quick to blame the advent of the second wave on the new variant (thus avoiding the conclusion that their own refusal to take recommended control measures was to blame), the data make clear that the hypothesis that the second wave was initiated by the gamma variant is less plausible than other hypotheses. However, the gamma-variant hypothesis also needs to be tested. Here we test five hypotheses to explain the second wave of COVID-19 in Manaus: 1) The greater transmissibility of the gamma variant is responsible for the second wave; 2) SARS-CoV-2 infection levels were overestimated during the first wave, and the population remained below the limit for herd immunity when the second wave began in December 2020; 3) Antibodies acquired from infection by one lineage do not confer immunity against other lineages; 4) Loss of immunity has generated a feedback phenomenon among infected people, which could generate future waves; and 5) A combination of the foregoing hypotheses and rapid lifting of social-distancing restrictions.
We also evaluated the possibility of a third wave in Manaus. Despite advances in vaccination, a new wave would be likely due to the introduction of the delta variant in the region and the loss of immunity from natural contact with the virus. The arrival of the omicron variant now makes a third wave virtually certain.
The SEIR, SEIRS, and multi-strain models
The SEIR (Susceptible-Exposed-Infected-Removed) model is the primary tool for analyzing the epidemiological curves of the COVID-19 pandemic (Adam, 2020; Bakker et al., 2020; Li et al., 2020; Prem et al., 2020). Individuals susceptible to infection in a population come into contact at random with the SARS-CoV-2 virus, becoming exposed. After the incubation period, they become infected and can transmit the virus randomly to other susceptible individuals. Infected individuals can be either asymptomatic (have few or no symptoms) or symptomatic. Over time, infected individuals are removed (they either recover or die and can no longer infect susceptible individuals).
The SEIRS (Susceptible-Exposed-Infected-Removed-Susceptible) model (Trawicki, 2017; Bjørnstad et al., 2020) is an extension of the SEIR model, allowing individuals who have been removed and are still surviving to become susceptible again after a given average period for loss of immunity. Allowing individuals to return to the susceptible pool drastically changes the epidemiological regime, creating the possibility of recurring infection waves and a persistent, non-vanishing flux of COVID-19 hospitalizations and deaths. The SEIRS model equations use the following compartments:
• S: Susceptible
• E: Exposed
• I: Infected
• H: Hospitalized (due to COVID-19 infection)
• R: Removed (recovered)
• D: Deceased
• V: Vaccinated
We draw attention to the fact that all variables (S, E, I, H, R, D, V) are 3-dimensional vectors, corresponding to the three age ranges of the population of Manaus (see below).
The model's parameters are:
• β_γ and β_δ are the transmission rates for the two SARS-CoV-2 strains considered
• α is the vaccination rate, obtained via linear regression
• γ_γ⁻¹ and γ_δ⁻¹ are the incubation periods for the two SARS-CoV-2 strains considered
• σ is the overall vaccine inefficacy, weighted for 1st and 2nd doses of the AstraZeneca, Pfizer, CoronaVac (Sinovac) and Janssen vaccines
• ξ⁻¹ is the infection time
• κ is the infection fatality rate
• λ⁻¹ is the recovery time
• ω is the rate of re-susceptibility (changeover between Recovered and Susceptible).
• C_s is the sum of the compartments S(t) + E_γ(t) + E_δ(t) + I_γ(t) + …
• M_t is a transition matrix between the three age groups considered: (a) below 18 yr, (b) 18-59 yr, and (c) 60+ yr.
The multi-strain SEIR model (Fudolig et al., 2020; Khyar and Allali, 2020) allows two or more strains of the SARS-CoV-2 virus to co-exist, with different outbreak dates and transmission rates. Finally, our multi-strain SEIRS model combines the features of the SEIRS and multi-strain SEIR models; this setup allows testing the hypotheses of the presence of a new SARS-CoV-2 variant with a higher-than-usual transmission rate, along with a potential loss of immunity. We used this model for two purposes: (1) to test the five hypotheses above regarding the second wave of COVID-19 that Manaus experienced in January 2021, and (2) to estimate the impact of the delta variant. To test our hypotheses regarding the second wave, we studied several scenarios generated by the multi-strain SEIRS model, with different values for social distancing, average immunity-loss period, new-strain outbreak date, and transmission rate. We argue that the December 2020 surge of severe acute respiratory illness (SARI) hospitalizations in Manaus cannot be fully explained by a multi-strain SEIR model alone. However, it can easily be fit by a multi-strain SEIRS model, assuming the emergence of a new SARS-CoV-2 strain that is twice as contagious as the previous one and an average period for loss of immunity of about eight months.
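Omitting the hospitalization, death, and vaccination compartments for brevity, the core of such a two-strain SEIRS system can be sketched as follows (a minimal illustration, not the full model used in this study; the function name and the parameter values in the test are hypothetical, but the symbols β, γ⁻¹, λ⁻¹, and ω follow the parameter list above):

```python
import numpy as np

def seirs_two_strain_rhs(t, y, beta, gamma_inv, lam_inv, omega, N):
    """Right-hand side of a minimal two-strain SEIRS system.

    y = [S, E_g, E_d, I_g, I_d, R]; beta, gamma_inv, and lam_inv are
    length-2 arrays for the gamma and delta strains.  The H, D, and V
    compartments of the full model are omitted in this sketch.
    """
    S, E_g, E_d, I_g, I_d, R = y
    I = np.array([I_g, I_d])
    new_exposed = beta * S * I / N                    # force of infection per strain
    new_infected = np.array([E_g, E_d]) / gamma_inv   # end of incubation period
    recoveries = I / lam_inv                          # removal after the recovery time
    dS = -new_exposed.sum() + omega * R               # omega: re-susceptibility rate
    dE = new_exposed - new_infected
    dI = new_infected - recoveries
    dR = recoveries.sum() - omega * R
    return np.array([dS, dE[0], dE[1], dI[0], dI[1], dR])
```

Because this sketch has no deaths, the derivatives sum to zero: the total population is conserved along any trajectory.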
As a second application, we included data on vaccination rates up to August 2021 and a projection of vaccination progress to the end of 2021 to simulate the effect of introducing the delta variant into the susceptible population in Manaus over this period (Fig. 1). We considered all vaccines applied, with information by age group based on bulletins released by the Health Secretariat of the State of Amazonas (FVS, 2021b; 2021c), and the levels of protection provided by the first and second vaccine doses according to the most recent literature (Kang et al., 2021).
Given the rapid dissemination of the delta strain, our analysis showed that the number of infections by the gamma strain after a previous infection by the delta strain must have been negligible. This possibility was thus disregarded in the model.
The multi-strain SEIRS algorithm described in this section was implemented in Python 3.8.10 and R version 3.6.3 programming languages, using the deSolve package, with GNU/Linux Mint 20.2.
Model parameters and data structure for Manaus
Amazonas, the largest state in Brazil's Amazon region, was the first to have its health system collapse due to the COVID-19 pandemic. Manaus (the capital of Amazonas) has a population of 2.2 million and is home to all of the state's intensive-care units (ICUs). Due to insufficient community testing for COVID-19 in Brazil, most of the tests are used for health and security personnel and hospitalized patients. This leads to a lack of periodic and randomized testing of the general population and makes it difficult to model the epidemic based on data on confirmed cases. Thus, in the present analysis of the Manaus epidemic, we base our model mostly on data on reported deaths and cases of SARI, rather than on the number of officially registered (confirmed) cases of COVID-19, which would require estimating the number of unreported cases. The underreporting of deaths due to COVID-19 in Brazil is well known and has been the subject of scientific publications (França et al., 2020). It is also important to note that our model assumes that individuals' contacts follow a uniformly random pattern of interaction. There is no spatial (geographic) restriction on social contact; this is embedded in the transmission parameter, which is determined empirically from the observed data at the beginning of the epidemic.
In our model, the population can be partitioned into different groups according to age, income level, or occupation, allowing calculations to be based on a social-contact matrix that specifies the intensity of contact between such groups (Duczmal et al., 2020; França et al., 2020; Li et al., 2020; Prem et al., 2020). In age-structured models, the population is stratified into two or more age groups, and the models reflect the different intensities of social contact among them. This is particularly important for the elderly, who are more vulnerable to COVID-19. In Brazil in general, and in Manaus in particular, elderly people usually live in the same households as their younger family members, if not in the same rooms, as is often the case in lower-income homes. Thus, in these neighborhoods, we assume high population mixing, in line with Bitar and Steinmetz (2020). In this situation, it is much more challenging to model social-contact networks based on age-structured models.
In the application of our model to test the hypotheses regarding the second wave of COVID-19 in Manaus, the population is partitioned into two sub-compartments with distinct social-contact rates. The compartment model employs a contact matrix C, where C_ij indicates the social-contact intensity of virus transmission from an individual in group i to an individual in group j, with 0 ≤ C_ij ≤ 1. If C_ij = 1, the contact is not restricted, and if C_ij = 0 no individual in group i can transmit the virus to any individual in group j (França et al., 2020; Prem et al., 2020).
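As an illustration of how the C_ij entries enter the transmission term, the per-group force of infection can be computed as below (a sketch assuming a simple frequency-dependent transmission form; the function name is ours, not from the paper):

```python
import numpy as np

def force_of_infection(beta, C, I, N):
    """Per-group force of infection: group j is exposed at a rate
    proportional to sum_i C[i, j] * I[i], since C[i, j] is the contact
    intensity for transmission from group i to group j."""
    I = np.asarray(I, dtype=float)
    return beta * (np.asarray(C, dtype=float).T @ I) / N
```

With C equal to the identity matrix each group only infects itself, while a row of ones lets one group transmit equally to every group.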
On the other hand, in the application of our model to estimate the impact of the delta variant, we employed a transition matrix (M_t) dividing the population into three sub-compartments according to age group, since the vaccination schedule is age-dependent, with older and more-vulnerable persons being vaccinated first. COVID-19 parameters differ for each age group.
In applying the model to test the hypotheses regarding the second wave, the multi-strain SEIRS simulation runs were conducted with a full ensemble of stochastic parameters, reflecting as precisely as possible the variation in the clinically observed parameters in accord with recent COVID-19 literature (Lauer et al., 2020; Prem et al., 2020; Sanche et al., 2020; Verity et al., 2020). Each full simulation consists of 1000 Monte Carlo replications of the epidemiological curves for susceptible, exposed, infected, and removed individuals. For each Monte Carlo replication, all of the COVID-19 epidemiological and scenario parameters are randomized following their statistical distributions, in accordance with the latest estimates from the literature. We used a fourth-order Runge-Kutta method to solve the system of differential equations numerically. The initial condition assumes the value of 0.001 for the compartment of exposed individuals at time (day) t = 1 (where day one is January 1, 2020).
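The replication scheme described above can be sketched as follows (a minimal illustration; the helper names and the one-day step size are our assumptions, and draw_params stands in for the literature-based parameter distributions):

```python
import numpy as np

def rk4_step(f, t, y, h, *args):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y, *args)
    k2 = f(t + h / 2, y + h * k1 / 2, *args)
    k3 = f(t + h / 2, y + h * k2 / 2, *args)
    k4 = f(t + h, y + h * k3, *args)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def monte_carlo_runs(rhs, y0, days, n_reps, draw_params, rng):
    """Integrate the model n_reps times, redrawing parameters each run."""
    curves = []
    for _ in range(n_reps):
        params = draw_params(rng)        # e.g. infectious period ~ N(10, 2)
        y = np.array(y0, dtype=float)
        traj = [y]
        for day in range(days):
            y = rk4_step(rhs, float(day), y, 1.0, *params)
            traj.append(y)
        curves.append(np.array(traj))
    return np.array(curves)              # shape: (n_reps, days + 1, len(y0))
```

The ensemble of returned trajectories can then be summarized by its pointwise median and percentile bands, which is how scenario uncertainty is usually reported.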
The parameters of interest in the model are described as follows:
• We assume the population of Manaus to be N = 2.2 million inhabitants and consider scenarios with a population-mixing value of 0.85 (Bitar and Steinmetz, 2020).
• COVID-19's infection fatality rate (IFR) for the population of Manaus is based on estimates of the infection fatality rate for each age group and on the age structure of the population of the city of Manaus according to the Brazilian census (IBGE, 2010; Ioannidis, 2020). We consider the IFR to be a normally distributed random variable (σ = 0.0005). We assumed an average IFR value of 0.30% until May 31, 2020, linearly decreasing to 0.20% by June 30 and remaining constant at that level from then onwards; this modeling follows empirically from fitting the COVID-19 hospitalization and death curves in Manaus and reflects the overall improvement of patient care following the first wave during March and April 2020. The initial March-April IFR value of 0.30% was estimated through a semi-Bayesian procedure to adjust the SEIRS-computed curve for infected individuals to the observed curve for hospitalizations. This value is close to the 0.27% median IFR value (Ioannidis, 2020) and the 0.26% IFR value estimated for Manaus by Buss et al. (2021).
• The average infectious period is normally distributed (μ = 10.0 days, σ = 2.0 days) (Verity et al., 2020).
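The piecewise IFR schedule above can be written as a small helper (a sketch; the function name is ours, while the dates and values are those stated in the parameter list):

```python
from datetime import date

def ifr(day: date) -> float:
    """Average infection fatality rate used in the model: 0.30% until
    May 31, 2020, falling linearly to 0.20% by June 30, 2020, and
    constant thereafter (values from the calibration described above)."""
    start, end = date(2020, 5, 31), date(2020, 6, 30)
    if day <= start:
        return 0.0030
    if day >= end:
        return 0.0020
    frac = (day - start).days / (end - start).days
    return 0.0030 - 0.0010 * frac
```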
In applying the model to simulate the impact of the delta variant, the above parameters were adjusted and calibrated to observed data on hospitalizations and deaths in the period from November 2020 until mid-August 2021 and to vaccination numbers from January 2021 until July 2021. Daily counts of COVID-19-related deaths in the municipality of Manaus were estimated (according to the date of occurrence) from data obtained from official records of deaths due to severe acute respiratory illness (SARI) by the Municipal Health Department (Open Data SUS, 2020; Prefeitura de Manaus, 2020) from April 2020 to January 2021. Due to underreporting of deaths from COVID-19 in Manaus (Felizardo, 2020; Silva et al., 2020), especially during the peak in mortality in the first epidemic phase in April 2020, we considered both data on unspecified SARI (cases in which the etiological agent is still being investigated) and confirmed deaths due to COVID-19. In our model we assume that all of these deaths are due to COVID-19. During the years 2016-2019, only 8, 13, 2, and 18 deaths due to SARI were recorded in the months of April to July, respectively, in the city of Manaus (Open Data SUS, 2020).
Immunity-loss test
The Amazonas Health Surveillance Foundation (FVS) bulletin shows a rapid increase in COVID-19 hospitalizations in Manaus in the last two weeks of December 2020 (FVS, 2021a), as seen in Fig. 1a. This probably reflects exposure to SARS-CoV-2 throughout October, November, and December, with the return of in-person classes, the loosening of commerce restrictions, a substantial increase in general circulation, and less adherence to mask use.
Using the multi-strain SEIRS model, we ran scenarios where the parameter ω (the inverse of the average period for loss of immunity) was set to zero (indicating no immunity loss), 1/720, 1/300, 1/240, 1/180, and 1/150, measured in day⁻¹ (Fig. 1d-e). The first graph shows the recorded daily COVID-19 deaths until December 2020, followed by the projected SEIRS death curves from January 2021 onwards. The second graph shows the observed raw (orange) and smoothed (red) daily SARI hospitalizations from March 2020 to January 9, 2021, superimposed on the curve for the number of infected individuals computed by the multi-strain SEIRS system. The red/orange curve is drawn out of scale, with a 6-day delay from infection to hospitalization (in order to fit the black curve). The six curves from December 2020 onwards project the number of infected individuals at each point in time for the different values of the average period for loss of immunity. During the last weeks of December 2020 and the first week of January 2021, the sharp increase in COVID-19 hospitalizations (orange/red curve) cannot be matched by the SEIRS curves for the number of infected individuals when ω is zero, 1/720, 1/300, 1/180, or 1/150. However, a tighter match is achieved when ω is set to 1/240, suggesting that the average period for loss of immunity is about 240 days. Scenarios that do not consider loss of immunity, and consequently COVID-19 reinfection, cannot fully explain the surge of hospitalizations in December 2020.
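The comparison across ω values can be organized as a simple scan (a sketch of the fitting idea, not the procedure actually used in the study; run_model, the least-squares scale factor, and the sum-of-squares score are our assumptions, while the 6-day infection-to-hospitalization lag is the one stated above):

```python
import numpy as np

def best_omega(run_model, hosp, omegas, lag=6):
    """Pick the immunity-loss rate whose infected curve best matches the
    hospitalization series shifted by a 6-day infection-to-hospitalization
    delay.  run_model(omega) -> daily infected counts; the least-squares
    scale factor absorbs the unknown infection-to-hospitalization ratio."""
    scores = {}
    for omega in omegas:
        infected = run_model(omega)[: len(hosp) - lag]
        target = np.asarray(hosp[lag:], dtype=float)
        scale = target @ infected / (infected @ infected)  # LS scaling
        scores[omega] = np.sum((target - scale * infected) ** 2)
    return min(scores, key=scores.get)
```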
The second wave and the gamma variant
In January 2021, Manaus experienced a second wave of cases, hospitalizations and deaths that surpassed the first peak of the pandemic that occurred in April and May 2020 (Fig. 2a). The gamma variant, which is of Amazonian origin, is believed to have arisen between November and December 2020, with an estimated 51% of the infections in Manaus being due to this variant between December 17 and 31, 2020, rising to 60% of infections by the end of January 2021. We simulated an exaggerated scenario to test the hypothesis that the second wave was solely caused by the gamma variant, in which we assumed that this variant had been in circulation since November 1, 2020 (i.e., before it is believed to have appeared) and had a transmission rate five times higher than the original strain that gave rise to the pandemic (i.e., much higher than the estimated doubling of the transmission rate). Our SEIRS model indicates that under this exaggerated scenario the number of cases and deaths during the second wave in Manaus would be only one-third of the number that has in fact been observed, thus refuting the hypothesis that the new variant caused the onset of the second wave (Fig. 2b). This hypothesis test is corroborated by the genomic analyses and by the estimated numbers of gamma-variant infections between November and January, ruling out the possibility that the gamma variant generated the second wave of COVID-19 in Manaus.
We assessed the pandemic scenario in Manaus using a SEIRS model that was fit to observed data on deaths and hospitalizations from the first case recorded in Manaus in March 2020 up to March 2021. The model showed that the second wave was potentially greater than the first wave, both in terms of the number of infections and the number of deaths. In this scenario, the best model indicates that the only plausible hypothesis is a combination of factors: a population with a much lower percentage of infection than is currently estimated for Manaus, together with a high rate of immunity loss and a high rate of reinfection, in addition to a smaller proportion infected by the gamma variant (see Fig. 2c). This scenario is corroborated by a census of infections in Manaus with genomic identification and by models that corrected the SARS-CoV-2 attack rates that were previously overestimated for the city (Ferrante et al., 2021b).
The most plausible model also indicates that contact with SARS-CoV-2 did not confer immunity to the gamma variant. Serological tests in patients in Manaus attest to the loss of immunity acquired through natural contact with the virus and demonstrate that immunity conferred by contact with the original strain of SARS-CoV-2 provides no protection against the gamma variant (Ferrante et al., 2021a). These findings confirm the results of our model. According to our model, the only plausible projected scenarios based on the observed data for Manaus have naturally acquired immunity to the original lineage of SARS-CoV-2 being lost after an average of 240 days (Fig. 2c-e). A longer period of natural immunity would not be consistent with the scale of the current wave of COVID-19 in the city, in line with evidence from a recent study (Hall et al., 2021). Based on the most-plausible model (Fig. 2c), the gamma variant is estimated to be two (2.0) times more transmissible than the SARS-CoV-2 strain that gave rise to the pandemic. This transmission rate is lower than a previously proposed rate of 2.6 times that of the original variant (Coutinho et al., 2021), because the higher rate was based on an overestimated SARS-CoV-2 attack rate during the second wave of COVID-19 in Manaus and an underestimated rate for the loss of population immunity (Ferrante et al., 2021b).
Our model estimates that the gamma variant became predominant in infections observed in Manaus only in early January 2021 (Fig. 2c). The initiation of the second wave cannot be attributed to the gamma variant, since this wave was already underway, with the beginning of the peak of cases, deaths, and hospitalizations in progress at the time that the gamma lineage is estimated to have appeared. The appearance of the gamma lineage coincides with the beginning of the period of reduced social isolation in Manaus, with the return of in-person classes that preceded the Christmas and New Year's celebrations (Fig. 3a, b).
Genomic data indicate that the gamma variant appeared in Manaus and then spread to other locations. According to our results, on November 15, 2020 there were already 2000 active cases of gamma-variant infection in Manaus, ruling out the possibility that the gamma variant emerged from the crowds during the November 2020 elections or the end-of-year festivities (Fig. 3a). In addition, viral transmission rates increased by more than 40% in Manaus due to the return of in-person classes at the end of September and beginning of October: exactly 21 days (the mean viral cycle length) after the resumption of classes, the number of hospitalizations increased, leading to the doubling that inaugurated the second wave (Fig. 3a, b). This increase in viral transmission in Manaus due to the return of face-to-face classes is indicated by the SEIRS model as the event that caused the emergence of the gamma variant and its proliferation (Fig. 2a, b). This conclusion is supported by phylogenetic analyses that trace the origin of the gamma variant. Thus, our results show that the second wave was not initiated by the gamma variant; rather, this wave was the cause of the appearance of the gamma variant. The same was observed for the alpha variant (B.1.1.7) that appeared in late summer 2020 in the United Kingdom, where early easing of social distancing was the cause of the explosion of cases that consequently gave rise to the alpha variant (Volz et al., 2021). Our results also indicate that the scenario in Manaus (Fig. 2c) was aggravated by the early easing of restrictions that occurred at the end of December 2020 (such as greater flexibility in shopping malls), which increased community transmission of SARS-CoV-2, boosting the transmission of the gamma variant in the population.
The return of in-person classes was a strategy of the governor of the state of Amazonas (Wilson Lima) and the president of Brazil (Jair Messias Bolsonaro) to make Manaus reach herd immunity, as was declared by the vice-governor of the state (Ferrante et al., 2021c). Wilson Lima went so far as to declare that the state of Amazonas no longer had any reported deaths and that official government data contained gross errors, as had been pointed out by researchers from the director's office of the Foundation for Health Vigilance (FVS) at a meeting organized by the Public Ministry of the State of Amazonas (Felizardo, 2020). Other official government communiqués declared that there was no risk of a second wave and that COVID-19 in Manaus had ended, with the return of in-person classes being safe (Felizardo, 2020).
Thus, both the beginning of the second wave of COVID-19 in Manaus and the emergence of the gamma variant can be attributed to the disastrous management by the governor, the president and the former minister of health (General Eduardo Pazuello), which included promoting the early opening of economic activities and the return of in-person classes that stimulated community transmission of SARS-CoV-2 (Ferrante et al., 2021c). Our results further indicate that the gamma variant caused two-thirds of the COVID-19 deaths in Manaus up to August 2021. This proportion of deaths caused by the gamma variant can be extrapolated to the total deaths in Brazil, which on August 5, 2021 were officially over 559,000 but could potentially reach double this number (Ferrante et al., 2021c). Based on the mortality rates observed for the city, only 51.6% of the population had had contact with SARS-CoV-2 as of December 15, 2020. In addition, the models that best fit the observed course of the pandemic in Manaus, both for the number of infected individuals and for the number of deaths, demonstrate that the numbers of cases and deaths would only be possible with levels of reinfection in the population of 1.3% for the original SARS-CoV-2 strain and 0% for the gamma variant on December 14; 3.9% for the original strain and 1.4% for the gamma variant on December 31; and 14.7% for the original strain and 17.4% for the gamma variant on January 31 (Fig. 2c). Thus, the conclusion that Manaus had achieved herd immunity, as suggested by Buss et al. (2021), is completely ruled out by our SEIRS model and by long-term case studies that have shown a natural decline of IgG levels in patients who had natural contact with the virus, in addition to the absence of an immune response to the gamma variant in cases of reinfection (Ferrante et al., 2021a).
Although politicians and media reports blame the second wave in Manaus on the new variant (Ferrante et al., 2021c), this politically convenient conclusion is not supported by our study. Modeled results with parameters for the "old" variant fit the observed cases and deaths in the second wave, which is explained by the negligence of government authorities over the course of the pandemic (see Ferrante et al., 2020; 2021c). The continued circulation of the virus in the population of Manaus may foment the emergence of still more new strains due to mutations (such as the K4174N mutation in the spike protein, which originated in the state of Amazonas) (Ferrante et al., 2021c). Because of this, we recommend that certain activities, such as face-to-face classes with schoolchildren, resume only when teachers and students are completely vaccinated. Warnings to contain the COVID-19 pandemic in the Amazon have been given in peer-reviewed journals since April 2020 (Ferrante and Fearnside, 2020a), including warning of the possibility of a second wave in Manaus (Ferrante et al., 2020). It was also warned that the strategies of Bolsonaro, Pazuello and Lima implied an early resumption of classes in Manaus as a way to achieve herd immunity (Ferrante et al., 2021c).
The third wave and the delta variant
Our results indicate that, with just over 63.3% of the total population of Manaus immunized (second doses + single doses) by December 20, 2021 (FVS, 2021b), a third wave of COVID-19 caused by the delta variant would be expected, given that the proportion of susceptible individuals is still high (Fig. 4a, b). The model also indicates that herd immunity via vaccination (control of community transmission of the delta variant) would be achieved only after 85 to 90% of the population of Manaus has been vaccinated with the second dose or a single dose. Almost 13 daily deaths would be expected in Manaus at the peak of the third wave under the most conservative scenario, with an estimated total of more than 911 new deaths from the delta variant alone before the population reaches herd immunity via vaccination and exhaustion of the pool of susceptible individuals (Fig. 4c). In a more drastic scenario, with a new health-system collapse and lack of oxygen, 26 daily deaths would be expected in Manaus at the peak of the third wave, with an estimated total of more than 1355 deaths from the delta variant alone (Fig. 4c). In either scenario there would be a greater impact on outpatient care in comparison with the number of hospitalizations.
It is unlikely that Manaus will shut down businesses and schools again; however, this must be considered because high community transmission has the potential to give rise to new variants (Ferrante et al., 2021b;2021c). According to the SEIRS model, the delta variant was estimated to account for 1% of active cases in Manaus in August 2021, which is supported by genomic sequencing of patients who tested positive for SARS-CoV-2 (N = 400 with 4 patients infected with the delta variant) (Amazonas Atual, 2021). For the months before the third wave, the SEIRS model presents an average for hospitalizations caused by the gamma variant that fits the observed data perfectly; however, the average indicated by the model for deaths from June to August 2021 is slightly higher than the official data, which suggests underreporting, probably due to classification of deaths from COVID-19 as being from other causes.
Although these rates were still low in November 2021, the model indicates explosive behavior of the third wave, meaning that a new collapse of the health system could occur, with a consequent increase in fatalities as was observed during the first two waves. Thus, the state of Amazonas and the city of Manaus must prepare for the same hospital admission rates as those observed during the second wave, considering both regular beds and intensive-care unit (ICU) beds. Although vaccination tended to reduce the mortality rate during the third wave in the model, the absence of beds and oxygen would increase lethality, requiring the state to prepare to avoid another catastrophe in Manaus.
The day after the Brazilian Health Regulatory Agency (ANVISA) approved the vaccination of children aged 5 to 11 years on December 15, 2021, President Bolsonaro intimidated the technical staff who participated in this approval by requesting their personal data from ANVISA and announcing this to his (often violent) followers in a "live" on social media (G1, 2021). Three days later a flood of threats led ANVISA to request police protection for its staff (Garcia, 2021). President Bolsonaro had already promoted the dismantling of ANVISA over the course of his first two years in office, and he has repeatedly tried to influence the body's actions on an ideological basis (Ferrante and Fearnside, 2019). Following a pre-dawn telephone call from the president, Health Minister Marcelo Queiroga announced that the vaccination of children aged 5 to 11 years would only be undertaken after a public consultation (Camarotto, 2021; G1, 2021). The president has formally requested the health minister to require a medical prescription for the vaccination of each child, in addition to written permission from the parents (Garcia, 2021). Current vaccination levels in Manaus are not sufficient to curb the third wave of COVID-19 that our SEIRS model projects (Fig. 4). The SEIRS model points to wide community circulation of SARS-CoV-2 in the absence of vaccination of this portion of the population, and the pandemic would continue due to the large portion of young people in the population of Manaus.
By December 15, 2021, Brazil had already recorded three cases of the omicron variant, and the spread of this variant has the potential to converge with the third wave of COVID-19 that is predicted by our model. Thus, a scenario might occur similar to the one observed in Manaus in January 2021, when the then-new gamma variant boosted the second wave, causing the health and cemetery systems to collapse in Manaus.
All of the intensive-care units in the state of Amazonas are located in Manaus, a fact that would add to mortality in the interior of the state with proliferation of the delta variant. The state of Amazonas, including the capital (Manaus), is home to a substantial part of Brazil's indigenous population (Ferrante et al., 2021a), which is a COVID-19 risk group (Ferrante and Fearnside, 2020a) and which has higher mortality than other ethnic groups (Ferrante et al., 2021c). The fact that elderly people are more vulnerable to COVID-19 represents a particularly serious risk to maintaining indigenous ethnic groups in the Amazon because traditional knowledge is transmitted orally by the elders (Ferrante et al., 2020). The federal government has failed to protect indigenous peoples from COVID-19, with provision of even basic resources such as drinking water being vetoed by the president himself (Ferrante et al., 2021c). Chloroquine and other medicines that are ineffective against COVID-19 have been distributed to indigenous peoples by the government (Ferrante and Fearnside, 2020b; Ferrante et al., 2021c). Given the increase in invasions of indigenous lands (Ferrante and Fearnside, 2020c), including those around Manaus (Ferrante and Fearnside, 2020b; Ferrante et al., 2021c), we recommend that measures be taken to ensure the basic rights of these peoples and that programs be created to protect them during and after the third wave that is projected in the region as a result of the delta and omicron variants. Decision makers in Brazil, especially in the Amazon region, should cease their denial of science and consequent insistence on early flexibilization of social-distancing measures, rejection of masking and opposition to a vaccine passport; these positions contribute to continuation of the COVID-19 pandemic (Diele-Viegas et al., 2021; Ferrante et al., 2021c; Ribeiro et al., 2021).
Conclusions
Our results indicate that, even if the new variant that originated in Amazonas (gamma or P.1) had appeared earlier than indicated, and even if its transmission rate were five times higher than that of the original strain, it would be implausible that this new lineage was responsible for initiating the second wave of COVID-19 in Manaus. The most plausible scenario, which best fits the second wave of COVID-19 in Manaus, corroborates a mixed hypothesis and indicates that the levels of infection by SARS-CoV-2 were overestimated by those foreseeing herd immunity. We estimate that 51.6% of the population had had contact with SARS-CoV-2 by mid-December 2020 and 60.0% by the end of January 2021. In addition, our model estimates that 32.1% of the cases were due to reinfection at the end of January 2021, with loss of immunity occurring after about 240 days and contact with one strain not conferring immunity to other strains. These results make herd immunity through natural infection implausible. Our results also indicate that the gamma variant originated after the beginning of the second wave and has a transmissibility twice that of the original SARS-CoV-2 strain, becoming predominant in the population in early January 2021. Under a conservative scenario in the absence of appropriate action to contain the delta variant, infections would culminate in a third wave with the potential to lead to an additional 911 deaths in Manaus before ample vaccination coverage is achieved. The model also indicates that herd immunity via vaccination, with control of community transmission of the delta variant, would be achieved only after 85 to 90% of the population of Manaus has been vaccinated with a second or single dose. The likely concurrent spread of the omicron variant during a third wave of COVID-19 in Manaus caused by the delta variant will only add further pressure on the health system, potentially leading to a new collapse of the system if appropriate action is not taken.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"year": 2022,
"sha1": "3b016a486257c182138851f8a3af4735427e49dc",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.pmedr.2022.101752",
"oa_status": "GOLD",
"pdf_src": "ElsevierCorona",
"pdf_hash": "3b016a486257c182138851f8a3af4735427e49dc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evaluation of the Effect of Photosynthesis on Biomass Production with Simultaneous Analysis of Growth and Continuous Monitoring of CO2 Exchange in the Whole Plants of Radish, cv Kosena under Ambient and Elevated CO2
Abstract The effects of elevated CO2 (an approximate doubling of the atmospheric CO2 concentration) on the rate of photosynthesis, estimated from continuous monitoring of CO2 exchange in whole plants, were investigated in radish cv. Kosena, together with simultaneous analysis of growth, for 6 days from 15 to 21 days after planting (DAP). Elevated CO2 increased the dry weight of hydroponically grown radish plants by 59% at 21 DAP. The increase in dry weight was due to the combined effect of increased leaf area and increased photosynthetic rate per unit leaf area: leaf area and the photosynthetic rate were increased by elevated CO2 by 18-43% and 9-20%, respectively, during 15 to 21 DAP. Thus, an increase in the rate of photosynthesis was accompanied by an increase in leaf area, both having a significant effect on biomass production.
A very basic and important question, the correlation of the rate of photosynthesis per unit leaf area (LA) with the productivity and/or biomass of plants, is still not settled and remains controversial. Positive and negative correlations, and even the absence of a correlation, have been reported. Reynolds et al. (1994) reported a positive relationship between the average rate of photosynthesis in flag leaves and average grain yield among 16 modern spring wheat varieties. Evans and Dunstone (1970) found a negative relationship between the rate of photosynthesis (per unit LA) in flag leaves and grain weight, and a positive relationship between the area of the largest leaf blade and grain weight, among wild and cultivated wheat. They concluded that the larger LA of cultivated wheat caused its high yield. On the other hand, Evans (1985) found no relationship between the rate of photosynthesis (per unit LA) in flag leaves and flag-leaf area among wild and cultivated wheat. Recently, Murchie et al. (2002) reported no association between the grain filling rate and the rate of flag leaf photosynthesis (per unit LA) among new plant-type varieties and an indica variety of rice. These studies attempted to relate the rate of photosynthesis in a certain leaf at a certain time to the biomass and/or productivity of the plants. The rate of photosynthesis, however, changes greatly during development (see, e.g., the accompanying paper, Usuda, 2004). It is therefore crucial to know the integrated rate of photosynthesis in the whole plant over a given period to evaluate the relationship between photosynthesis and biomass production.
Studies on growth under elevated CO2 should help to elucidate the effect of an enhanced rate of photosynthesis per unit LA on biomass production. Several decades have already been spent evaluating the effect of growth under elevated CO2 on photosynthesis and biomass. Many studies have been done to clarify the effect of growth under elevated CO2 on the rate of photosynthesis in a specific single leaf at a certain time (see, e.g., Bowes, 1991; Long and Drake, 1992), yet many questions remain to be answered (see, e.g., the accompanying paper, Usuda, 2004). The recent development of facilities for free-air CO2 enrichment has provided an opportunity to evaluate the effect of elevated CO2 on canopy photosynthesis and biomass (Kimball et al., 2002). In these studies the rates of canopy photosynthesis were usually expressed on a soil surface area basis and not on a unit LA basis. The rates of daily canopy photosynthesis under elevated CO2 (about 200 µmol mol-1 higher than the atmospheric concentration, measured every few hours) were determined 3 to 9 times during the whole growing season in cotton and wheat. There was a 21 to 40% increase in the rate in cotton (Hileman et al., 1994), and about a 16% season-long increase in wheat (Brooks et al., 2000). The effects of elevated CO2 on shoot biomass and peak LA index were also determined in these and similar studies. In cotton, shoot biomass and peak LA index increased by 34% and from 15.6 to 3.8%, respectively, under elevated CO2 (Mauney et al., 1992; Mauney et al., 1994). In wheat, shoot biomass and peak LA index were increased by elevated CO2 by 12% (Brooks et al., 2000) and 24% (Kimball et al., 2002), respectively. To my knowledge, only a few studies have examined the rate of photosynthesis per unit LA in the whole plant throughout the growth period under elevated CO2 in relation to biomass production.
Christ and Körner (1995) measured the rate of photosynthesis in whole plants of wheat during the initial growth stage under ambient (around 275 µmol mol-1) and elevated (around 500 µmol mol-1) CO2. They examined the carry-over effect of rather short-term exposure to elevated CO2 in the early vegetative growth period on grain filling, because they considered the increased tillering induced by elevated CO2 at a very early stage to be important for grain filling. The rate of photosynthesis per unit LA increased 2- to 3-fold, but the dry mass of the shoot increased by only 20-75%. Gifford (1995) also measured the rates of photosynthesis in whole plants of wheat for one day, and measured the total dry weight (DW) and LA after a certain period that was not specified in the report. He found that an approximate doubling of the atmospheric CO2 concentration resulted in a 25, 9 and 10% increase in DW, LA and the rate of photosynthesis per unit LA, respectively. Vivin et al. (1995) measured the rate of whole-plant photosynthesis and DW in oak seedlings twice in the first year under elevated CO2 (first at an early growth stage, 77 days after germination, and second at the end of the growing season). The rate of photosynthesis and LA in the whole plant at the early growth stage were increased by ca. 100% and 30 to 40%, respectively, by an approximate doubling of the atmospheric CO2 concentration. The enhancement by elevated CO2, however, had disappeared by the end of the growing season. The total DW of plants grown under elevated CO2 was 17% greater at the end of the growing season. These results do not show the quantitative relationship between the increase in the rate of photosynthesis in the whole plant and the increase in biomass production under elevated CO2. Elevated CO2 may accelerate plant ontogeny and result in an early decline in photosynthetic capacity during growth (see the accompanying paper, Usuda, 2004).
Considering the uncertainty of the effects of elevated CO2 on the rate of photosynthesis per unit LA and on LA in relation to biomass production, this research was undertaken.
In this study, radish cultivar Kosena was grown hydroponically. It grows rapidly and can be harvested at 21 days after planting (DAP). The shoot of Kosena is edible, and the DW of the shoot is more than 85% of total DW at 21 DAP (Usuda et al., 1999). Analysis of growth and continuous monitoring of CO2 exchange in the whole plant were done simultaneously for 6 days from 15 to 21 DAP. In addition, the amount of water transpired per day was estimated to evaluate the effect of elevated CO2 on water use efficiency. The dry matter accumulated during this period was more than 80% of the final total DW at 21 DAP (Usuda et al., 1999; see also Fig. 2 in this study). Therefore, the analyses of the rate of photosynthesis per unit LA in whole plants, estimated from whole-plant CO2 exchange, together with the increase in biomass should give a realistic and rational insight into the effect of elevated CO2 on the rate of photosynthesis and biomass production.
Plant material and growth conditions
Raphanus sativus L. cv Kosena was used. In the experiments to measure the rates of photosynthesis and transpiration in the first leaf of plants at various ages, plants were grown as described in the accompanying paper (Usuda, 2004). In the experiments to measure the rates of photosynthesis and transpiration in whole plants with growth analysis, plants were grown as described in the accompanying paper (Usuda, 2004) until 15 DAP, with the following exception. Six plants were grown in a container with ca. 3.5 L of aerated culture medium from 9 to 15 DAP. At 15 DAP, seven relatively uniformly growing plants were selected from 18 plants. Four of them were harvested and three were held in the container with ca. 3.5 L of aerated (100 mL min-1) culture solution. The plants (n = 3, decreasing to 1 as plants were harvested) were grown in a controlled growth chamber (Eyelatron FLI-301NH, Tokyo-Rikakiki, Tokyo, Japan) with a 14.5-h light/9.5-h dark cycle. Light intensities were 46, 130 and 570 µmol m-2 s-1 at plant height during the (0-1st h and 13.5-14.5th h), (1-2nd h and 12.5-13.5th h), and (2-12.5th h) of the light period, respectively. The temperatures were 20 ± 0.5, 23 ± 0.5, 24 ± 0.5 and 25 ± 0.5 °C during the dark period and the (0-1st h and 13.5-14.5th h), (1-2nd h and 12.5-13.5th h), and (2-12.5th h) of the light period, respectively. Humidity was kept at 60 ± 10%. Light was provided by fluorescent lamps (Neolumi-super FL40S·W, Mitsubishi Electric, Tokyo, Japan) during the whole light period, and a metal-halide lamp (MT400DL/BUD, Iwasaki Electric, Tokyo, Japan) was added during the 2-12.5th h of the light period.
Measurements of the rates of photosynthesis and transpiration using a single leaf
The rates of photosynthesis and transpiration in the first leaf of plants at various ages were measured as described in the accompanying paper (Usuda, 2004).
Continuous measurements of CO2 exchange in whole plants
The rates of CO2 exchange in whole plants (n = 1-3) grown in the above-mentioned growth chamber (chamber volume about 300 L) were measured continuously with an open system using an infrared gas analyzer from 15 to 21 DAP. Air was taken into a large container of 600 L from outside and supplied to the growth chamber after passing through another container of 300 L. Flow rates in the experiments with ambient and elevated CO2 were 70 and 90 L min-1, respectively. This kept the air pressure in the chamber always slightly above atmospheric pressure. A preliminary check, made by supplying pure nitrogen into the chamber, showed that a gas flow of 50 L min-1 guaranteed no entry of air from outside the chamber. An air pump (DA-241S, Ulvac-Kiko, Yokohama, Japan) and a mass flow controller (CMQ0200J/K, Yamatake, Tokyo, Japan) were used to provide a constant air flow, and the rate of air flow was logged in a data logger (K8DL-G16, Omron, Tokyo, Japan) every 30 s. Air circulation in the chamber was maintained with the equipped fan and an additional fan at a wind volume of 18 m3 min-1. The CO2 concentration in the inlet air and the difference in CO2 concentration between the inlet and the outlet air were monitored with an infrared gas analyzer having two channels, one for the absolute mode and another for the differential mode (MLT3.2, Emerson Process Management, Hasselroth, Germany), after passing through a cooling trap and a membrane-type air dryer (Sunsep SWG-A01-18/PP, Asahi Glass Engineering Co., Chiba, Japan). The data were logged in the data logger every 30 s. Zero adjustment in the differential mode was made every day. Span adjustment in the differential mode, and zero and span adjustments in the absolute mode, were made once a week.
Elevated CO2 was obtained by adding pure CO2 (ca. 375 µmol mol-1) to the inlet air in the first container using a mass flow controller (MC-2100NC, Lintec, Shiga, Japan). The actual CO2 concentration in the inlet air fluctuated considerably (see Fig. 4). The average concentrations of CO2 in the inlet air under ambient and elevated CO2 were 388 ± 16 and 752 ± 17 µmol mol-1 (mean ± SD, n>50,00), respectively. The concentrations of CO2 in the chamber during the light periods were 340-384 and 692-758 µmol mol-1 under ambient and elevated CO2, respectively. The following control experiment was conducted in each experiment to check the rate of CO2 exchange by microorganisms in the culture solution: after the final harvest at 21 DAP, the rates of CO2 exchange in aerated culture solution without plants were monitored for at least one day as described above. These rates were very small and within the noise level. Therefore, CO2 exchange by microorganisms in the culture solution was assumed to be negligible in this study.
Plant harvest and measurement of plant fresh weight during continuous monitoring of CO2 exchange
Every day from 15 to 21 DAP, the fresh weight (FW) of each plant was measured. From 16 to 20 DAP, the growth chamber was opened once a day, at the 6.5-7th h of the dark period, for about 10-15 min to measure the FW of the plants and the weight of the culture medium. At 17, 19 and 21 DAP, one plant was harvested; therefore, for the measurements of CO2 exchange, three, two and one plant(s) were used during 15-17, 17-19 and 19-21 DAP, respectively. After harvest the plants were separated into leaves without midrib, midribs plus stem, fibrous roots and storage root. The LA of each leaf without midrib was determined with a leaf area meter (AM100, Analytical Development Company, Hoddesdon, U.K.). The DW of each leaf without midrib, all midribs plus stem, fibrous roots and storage root was determined after drying at 80 °C for at least 2 weeks.
Determination of the amount of water transpired
The amount of water transpired from plants was determined as follows. The weights of culture medium and the FW of plants were determined every day from 15 to 21 DAP, around the 6.5-7th h of the dark period. Plants were taken from the container and kept in measuring cylinders to collect water withdrawn with the roots. The weights of the culture medium in the container and of the culture medium collected from the roots were determined. From these measurements, the weight of total water lost from the culture medium per day (referred to as Wlm) was obtained. The amount of water lost per day from aerated culture medium without plants was also determined separately after 21 DAP, as described above; this amount is referred to as Wlc. The amount of water absorbed by plants and retained within plants per day (Wp) was obtained from the difference between the increases in FW and DW of the plants per day. The amount of water transpired from plants per day (Wt) was then obtained from Wt = Wlm - Wp - Wlc. The actual amounts of Wlm, Wlc and Wp were 174-377, ca. 8-32 and around 15 g day-1, respectively. Therefore, the major portion of water lost from the culture medium was due to transpiration by the plants.
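The daily water-balance bookkeeping above can be sketched as follows; the function name and the numerical inputs are illustrative only (the text reports ranges of Wlm 174-377, Wlc ca. 8-32 and Wp around 15 g day-1), not measurements from the study.

```python
# Daily transpiration bookkeeping: Wt = Wlm - Wp - Wlc, where Wp is the
# water retained in the plants (FW gain minus DW gain). All values in
# g day^-1; the numbers below are illustrative, not data from the study.

def transpired_water(w_lm, w_lc, fw_gain, dw_gain):
    """Water transpired per day: Wt = Wlm - Wp - Wlc, with Wp = FW gain - DW gain."""
    w_p = fw_gain - dw_gain
    return w_lm - w_p - w_lc

# Illustrative values within the reported ranges
w_t = transpired_water(w_lm=300.0, w_lc=20.0, fw_gain=17.0, dw_gain=2.0)  # 265.0
```

As the reported magnitudes show, Wp and Wlc are small corrections, so Wt tracks Wlm closely.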
Determination of growth characters
Growth characters, relative growth rate (RGR) (g g-1 day-1), net assimilation rate (NAR) (g m-2 day-1), leaf area ratio (LAR) (LA / total DW) (m2 kg-1), leaf weight ratio (LWR) (leaf DW / total DW) (g g-1) and specific leaf area (SLA) (LA / leaf DW) (m2 kg-1), were obtained from the following equations (1) to (5), respectively, where W, A and WL denote total DW, total LA and leaf DW at harvest times t1 and t2:

RGR = (ln W2 - ln W1) / (t2 - t1)  (1)
NAR = [(W2 - W1) / (t2 - t1)] × [(ln A2 - ln A1) / (A2 - A1)]  (2)
LAR = A / W  (3)
LWR = WL / W  (4)
SLA = A / WL  (5)

[Fig. 2. Changes in total LA (circles) and whole-plant DW (triangles). Open and closed symbols represent plants grown under ambient and elevated CO2, respectively. Values were obtained from direct measurements and estimations using FW; see text for details. Each value is mean ± SD (n = 3-9). Differences were significant at p < 0.01 (t-test), except LA at 21 DAP (p < 0.05).]
Statistics
For data analysis, t-tests and analysis of variance comparing two regression lines were performed using Excel 2001 (Microsoft Corporation, Redmond, USA) and Multivariate Analysis Ver. 4.0 (Esumi, Tokyo, Japan), respectively. For regressions, KaleidaGraph Ver. 5.0 (Synergy Software, Reading, USA) was used.
Correlation of FW of the whole plant with total LA, DW of the whole plant and DW of total leaves
Every two days from 15 to 21 DAP, one plant was harvested under ambient and under elevated CO2, and FW and DW of the whole plant and LA and DW of each leaf were determined. FW of the whole plant was significantly correlated with total LA, DW of the whole plant and DW of total leaves (Fig. 1). These relationships were used to estimate these quantities for the plants that were not harvested but whose whole-plant FW was determined. The estimated values were used for the analysis of the rate of photosynthesis per unit LA and other quantities.

[Fig. 1. Regressions of whole-plant FW (x) against (a) total LA, (b) whole-plant DW and (c) total leaf DW. (a) elevated CO2: y = 0.00496x^2 + 11.8x + 100.9; (b) ambient CO2: y = 0.00012x^2 + 0.0885x - 0.183; elevated CO2: y = -0.00026x^2 + 0.132x - 0.818; (c) ambient CO2: y = 0.000049x^2 + 0.0625x - 0.169; elevated CO2: y = 0.000197x^2 + 0.0920x - 0.591.]

Elevated CO2 increased both total LA and DW of the whole plant during growth from 15 to 21 DAP (Fig. 2). Total LA of the plants grown under elevated CO2 was 18-43% larger than that under ambient CO2 during this period, and DW of the plants under elevated CO2 was 59-82% heavier than that under ambient CO2. At 21 DAP, LA and DW of the plants grown under elevated CO2 were 43% and 59% greater than those under ambient CO2. Fully-expanded leaves had nearly the same LA under ambient and elevated CO2, but leaf initiation and expansion were accelerated by elevated CO2 (Fig. 3 and data not shown). The increase in total LA was thus due to earlier initiation and expansion of leaves, not to a larger area of fully-expanded leaves. At 21 DAP, the shoot-to-root ratio in terms of DW under ambient and elevated CO2 was 6.89 ± 0.36 and 6.58 ± 0.80 (mean ± SD, n = 3), respectively; there was no significant effect of elevated CO2 on this ratio.
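As an aside on the estimation step, the quadratic fits quoted from Fig. 1 can be applied directly to a measured FW; the dictionary keys and function below are illustrative scaffolding, and the units of x and y follow the figure (they are not restated here).

```python
# Evaluating the Fig. 1 regressions to estimate unharvested-plant values
# from whole-plant FW (x). Coefficients are those quoted in the caption;
# panel/treatment labels are this sketch's own naming, not the paper's.

FIG1_COEFFS = {
    # (panel, treatment): (a, b, c) for y = a*x**2 + b*x + c
    ("LA", "elevated"): (0.00496, 11.8, 100.9),
    ("DW", "ambient"):  (0.00012, 0.0885, -0.183),
    ("DW", "elevated"): (-0.00026, 0.132, -0.818),
    ("leafDW", "ambient"):  (0.000049, 0.0625, -0.169),
    ("leafDW", "elevated"): (0.000197, 0.0920, -0.591),
}

def estimate(panel, treatment, fw):
    """Evaluate the fitted quadratic for the given panel and treatment."""
    a, b, c = FIG1_COEFFS[(panel, treatment)]
    return a * fw ** 2 + b * fw + c
```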
CO2 exchange in whole plants grown under ambient and elevated CO2
The exchange of CO2 in whole plants was monitored continuously for 6 days from 15 to 21 DAP, except for the 10 to 15 min once a day during the dark period when the chamber was opened (see above). The rate of CO2 exchange was calculated from the rate of the inlet air flow and the difference in CO2 concentration between the inlet and the outlet air. Fig. 4 shows the recordings of CO2 exchange under ambient and elevated CO2. Although the CO2 concentration in the inlet air fluctuated considerably, the rate of CO2 exchange in whole plants could be calculated. From this type of experiment, the rates of CO2 uptake (per plant and per unit LA) and dark respiration (per plant and per g FW) were obtained. The LA of the plants on each day (at the 6.5-7th h of the dark period) was estimated as described above (see Fig. 2). LA and FW of the plants at a given time were estimated from these values, assuming that LA and FW increased linearly between the two values obtained at the beginning and the end of each period. These values were used for the determination of the rates of CO2 fixation per unit LA and respiration per g FW.
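For illustration, the open-system conversion from flow rate and inlet-outlet CO2 difference to an uptake rate per unit LA can be sketched as follows; the molar volume, function name and numerical inputs are assumptions for the sketch, not values or methods taken from the study.

```python
# Open-system gas-exchange arithmetic: CO2 uptake (µmol s^-1) equals the
# molar air flow (mol s^-1) times the inlet-outlet CO2 mole-fraction
# difference (µmol mol^-1). Flow in L min^-1 is converted with the molar
# volume of air, ~24.45 L mol^-1 at 25 degC and 101.3 kPa (an assumption).

MOLAR_VOLUME_L = 24.45  # L mol^-1 of air at 25 degC, 101.3 kPa

def co2_uptake_per_area(flow_l_min, delta_co2_umol_mol, leaf_area_m2):
    """Apparent photosynthesis in µmol CO2 m^-2 s^-1."""
    flow_mol_s = (flow_l_min / 60.0) / MOLAR_VOLUME_L  # mol air s^-1
    uptake_umol_s = flow_mol_s * delta_co2_umol_mol    # µmol CO2 s^-1
    return uptake_umol_s / leaf_area_m2

# Illustrative: 70 L min^-1 flow, 20 µmol mol^-1 depletion, 0.05 m^2 of LA
aps = co2_uptake_per_area(70.0, 20.0, 0.05)
```

With the illustrative numbers the result is of the same order as the APS values reported in the paper, which is why a modest inlet-outlet depletion suffices at these flow rates.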
Effect of elevated CO 2 on growth characters
The increase in LA and DW during 15 to 21 DAP was accelerated by elevated CO2 (Fig. 2). Table 1 shows RGR, NAR, LAR, LWR and SLA of the plants grown under ambient and elevated CO2 during 15-17, 17-19 and 19-21 DAP. RGR decreased during 15-21 DAP but was not significantly affected by elevated CO2. When plotted against DW, however, RGR was higher in the plants grown under elevated CO2 than in those under ambient CO2 (Fig. 5). NAR stayed almost constant during 15 to 21 DAP but was slightly increased by elevated CO2 during this period (Table 1).

[Fig. 4. Continuous monitoring of CO2 exchange in whole plants grown under ambient (a) and elevated (b) CO2. The CO2 concentration in the inlet air and the CO2 absorbed by the plants (the difference in CO2 concentration between the inlet and the outlet air) are shown. See text for details.]

Effect of elevated CO2 on photosynthesis, transpiration, water use efficiency and respiration

Table 2 shows the amounts of carbon gained (C gained) (mg plant-1 2 days-1), the apparent rate of photosynthesis (APS) per unit LA during the 2-12.5th h of the light period, the ratio of C gained to DW increase (C/DW) during 2 days, the amounts of water transpired (W lost) (g plant-1 2 days-1), the water use efficiency (WUE) (µmol CO2 fixed per mmol H2O transpired) during 2 days, the amounts of carbon respired during the dark period (CDR) (mg C plant-1 2 days-1), the rate of dark respiration (DR) (nmol CO2 g FW-1 2 days-1) and the ratio of photosynthesis to respiration (PS/Res) under ambient and elevated CO2 from 15 to 21 DAP. The amount of C gained increased during this period and was increased by elevated CO2. APS was estimated from the amount of CO2 fixed during the 2-12.5th h of the light period, neglecting the amount of carbon respired in roots, stems and midribs. Assuming that the rate of respiration in roots, stems and midribs was the same as the rate of dark respiration and proportional to their DW, the rates of respiration in roots, stems and midribs during this period were less than 5% of the rate of CO2 fixation. Therefore, the estimated values of APS should be very close to the real values. Under ambient and elevated CO2, APS per unit LA was around 18 and 20-22 µmol CO2 m-2 s-1, respectively; thus, elevated CO2 increased APS by 9-20%. C/DW and DR stayed almost constant during 15-21 DAP and were not significantly influenced by elevated CO2. W lost and CDR increased during 15 to 21 DAP and were significantly increased by elevated CO2 only during 19-21 DAP, not during 15-19 DAP. WUE stayed almost constant during 15-21 DAP and was significantly increased by elevated CO2.

[Table 2. Effects of elevated CO2 on the amounts of C gained (C gained), the rate of apparent photosynthesis (APS), the ratios of C gained to DW increased (C/DW), the amounts of transpired water (W lost), water use efficiency (WUE), the amounts of respired carbon during the dark period (CDR) and the rates of dark respiration in whole plants (DR).]

The effect of elevated CO2 on the ratio of the amount of carbon fixed during the light period to that respired during the whole day (PS/Res) was assessed by assuming that the rate of respiration during the day was the same as that in the dark period. There was a significant positive effect of elevated CO2 on this ratio during 19 to 21 DAP (Table 2).
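The WUE bookkeeping (µmol CO2 fixed per mmol H2O transpired) can be sketched from the tabulated quantities; the function name and input values below are illustrative assumptions, with each mg of C gained counted as one fixed CO2 molecule.

```python
# WUE as defined in the text: µmol CO2 fixed per mmol H2O transpired over
# a 2-day interval. Inputs: C gained (mg C plant^-1 2 days^-1) and water
# transpired (g plant^-1 2 days^-1); one CO2 is counted per C atom fixed.

C_MOLAR_MASS = 12.011    # g mol^-1
H2O_MOLAR_MASS = 18.015  # g mol^-1

def wue(c_gained_mg, water_transpired_g):
    """Water use efficiency, µmol CO2 per mmol H2O."""
    co2_umol = c_gained_mg / 1000.0 / C_MOLAR_MASS * 1e6  # µmol CO2
    h2o_mmol = water_transpired_g / H2O_MOLAR_MASS * 1e3  # mmol H2O
    return co2_umol / h2o_mmol

# Illustrative inputs only, not data from Table 2
ratio = wue(500.0, 300.0)
```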
Effect of elevated CO2 on the rate of photosynthesis and water use efficiency in the first leaf of plants at various ages
The APS and WUE in the first leaf of plants at various ages (11 to 25 DAP) grown under ambient and elevated CO2 were determined under conditions similar to the growth conditions with the highest light intensity (Table 3). The rates of photosynthesis and transpiration in the first leaf decreased with aging (see Fig. 1 in the accompanying paper, Usuda, 2004). The APS in the first leaf of ambient-CO2-grown plants at various ages, measured under ambient CO2, ranged from 17.9 to 4.0 µmol CO2 m-2 s-1; these rates were increased by 46% on average by elevated CO2. On the other hand, the APS in the first leaf of elevated-CO2-grown plants at various ages, measured under elevated CO2, ranged from 26.6 to 6.6 µmol CO2 m-2 s-1, whereas the rates under ambient CO2 were 16.5 to 2.3 µmol CO2 m-2 s-1; the rates were therefore increased by 85% on average by elevated CO2. WUE of the first leaf of the plants grown under ambient and elevated CO2 was 7.19 and 12.2, respectively.
Effect of elevated CO2 on LA and DW
Elevated CO2, an approximate doubling of the atmospheric CO2 concentration, increased DW accumulation by 59% in hydroponically grown Kosena at 21 DAP, when it reached a reasonable size for harvest. Kimball (1983) analyzed the effect of elevated CO2 (elevated by approximately 300 µmol mol-1) on productivity in a number of species and found a 30% increase in productivity on average. In the present study, however, elevated CO2 increased DW by 59%. Such a difference in the magnitude of the increase in productivity or DW may be attributed to the following three factors: 1) the concentration of CO2 was elevated by about 350 µmol mol-1 in the present study; 2) Kosena was grown hydroponically with ample water and nutrients; 3) the plants in the present study were harvested at a relatively early stage of vegetative growth (21 DAP).
In this study, elevated CO2 increased LA at 21 DAP, but the effect of elevated CO2 on LA varies among studies. For example, a 35, 20 and -7% change in total LA was reported in hybrid poplar (Curtis et al., 1995), Desmodium paniculatum (Wulff and Strain, 1982) and rice (Ziska and Teramura, 1992), respectively. Pett (1986) reported that LA of cucumber was increased by elevated CO2 (1000 µmol mol-1) during the initial seedling growth stage but not thereafter. In rice, the area of the fully-expanded 12th leaf of plants grown under elevated CO2 of 1000 µmol mol-1 was about 25% smaller than that of plants grown under ambient CO2 (Makino et al., 2000). Recently, Kimball et al. (2002) summarized the results of analyses of the effect of free-air CO2 enrichment on agricultural crops. According to their analysis, the peak LA index was changed by -15.6 to 24% by elevated CO2 (~600 µmol mol-1) in C3 plants, again showing a large variation in the effect of elevated CO2 on LA. Such large variation might be due to differences in plant species or growth conditions. The positive effect of elevated CO2 on total LA found in this study was due to early initiation and expansion of the leaves, not to a larger size of fully-expanded leaves (Fig. 3). This is basically consistent with the previous findings mentioned above.

[Table 3. Effects of elevated CO2 on the rates of apparent photosynthesis and water use efficiency in the first leaf of plants at various ages grown under ambient or elevated CO2. The rates of photosynthesis and water use efficiencies were obtained under 500 µmol photon m-2 s-1 and 25 ± 1 °C, with CO2 concentrations in the inlet air of 350 (P 350) or 750 µmol mol-1 (P 750). When the rate of apparent photosynthesis was 27.6 µmol CO2 m-2 s-1, the CO2 concentration in the leaf chamber decreased by about 39 µmol mol-1. Each value is mean ± SD; for units, see Table 2. Values in parentheses are the highest and lowest values obtained with the first leaf at various ages. Footnotes: a, rates of apparent photosynthesis determined under an inlet CO2 concentration of 350 µmol mol-1; b, under 750 µmol mol-1; c, WUE determined under 350 µmol mol-1; d, WUE determined under 750 µmol mol-1.]
Effects of elevated CO2 on the growth characteristics
RGR of Kosena grown hydroponically under ambient and elevated CO2 was between 0.257 and 0.401 g g-1 day-1 (Table 1). According to Lambers et al. (1990), RGR of herbaceous plants grown in growth rooms with optimum nutrient supply is 0.1-0.3 g g-1 day-1, indicating that Kosena grew very rapidly under the conditions used in this study. RGR decreased with increasing DW of the plants, and RGR relative to DW was increased by elevated CO2 (Fig. 5). In other species, positive and negative effects, and even the absence of an effect, of elevated CO2 on RGR have been reported (Wulff and Strain, 1982; Poorter et al., 1988; Musgrave and Strain, 1988; Atkin et al., 1999; Makino et al., 2000). Generally speaking, elevated CO2 increased RGR during the initial growth stage, but thereafter had no effect or a rather suppressive effect on RGR (see, e.g., Poorter et al., 1988). NAR of 17-23 g m-2 day-1 obtained in this study was at or slightly above the upper end of the reported values of 7-20 g m-2 day-1 in C3 plants grown under optimum nitrogen and moderate to high light intensity (Lambers et al., 1990). Elevated CO2 enhanced NAR slightly (Table 1), which seemed to be due to the increased rate of photosynthesis per unit LA, 9 to 20% higher under elevated CO2 (Table 2). A positive effect and the absence of an effect of elevated CO2 on NAR have been reported previously (Wulff and Strain, 1982; Porter and Grodzinski, 1984; Patterson et al., 1988; Poorter et al., 1988; Musgrave and Strain, 1988; Atkin et al., 1999; Makino et al., 2000). NAR, however, was usually stimulated by elevated CO2 at the beginning of growth and not thereafter (see, e.g., Poorter et al., 1988). LAR of 13.2 to 20.4 m2 kg-1 was obtained in this study. The values of LAR vary widely with species and environmental conditions (Lambers et al., 1990), but LAR was decreased by elevated CO2 in this study.
This is similar to previous findings (Ford and Thorne, 1967; Jolliffe and Ehret, 1985; Makino et al., 2000). The LWR of ca. 0.68 g g -1 obtained in this study is within the range of 0.3 to 0.8 g g -1 observed in plants grown at optimum nitrogen supply and moderate to high light intensity (Lambers et al., 1990). LWR was not influenced by elevated CO 2 in this study, as reported for Acacia species by Atkin et al. (1999). LWR was, however, slightly increased by elevated CO 2 in bush bean (Jolliffe and Ehret, 1985), cotton (Patterson et al., 1988) and Plantago major (Poorter et al., 1988).
The SLA values of 20 to 30 m 2 kg -1 observed in this study were within the range of 10 to 50 m 2 kg -1 reported for plants grown under optimum nitrogen supply and moderate to high light intensity (Lambers et al., 1990). SLA was decreased by 13-21% by elevated CO 2 in this study. This is consistent with previous findings (Ford and Thorne, 1967; Porter and Grodzinski, 1984; Hrubec et al., 1985; Jolliffe and Ehret, 1985; Patterson et al., 1988; Ziska and Teramura, 1992; Rufty et al., 1994). The decrease in LAR under elevated CO 2 in radish was, therefore, caused by the decrease in SLA but not in LWR. Both increased leaf thickness and increased dry matter content resulted in lower SLA. The mechanisms responsible for the lower SLA under elevated CO 2 are not yet clear.
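The growth-analysis quantities discussed above are tied together by the classical identities RGR = NAR × LAR and LAR = SLA × LWR. A quick consistency check using the ranges reported in this study (function names are illustrative, not from the paper):

```python
# Consistency check of the growth-analysis identities with the ranges
# reported in this study: LWR ~ 0.68 g/g, SLA 20-30 m^2/kg,
# NAR 17-23 g m^-2 day^-1, reported LAR 13.2-20.4 m^2/kg.

def lar_from_sla_lwr(sla_m2_per_kg, lwr_g_per_g):
    """Leaf area ratio (m^2 kg^-1) = specific leaf area x leaf weight ratio."""
    return sla_m2_per_kg * lwr_g_per_g

def rgr_from_nar_lar(nar_g_per_m2_day, lar_m2_per_kg):
    """Relative growth rate (g g^-1 day^-1); /1000 converts kg DW to g DW."""
    return nar_g_per_m2_day * lar_m2_per_kg / 1000.0

lwr = 0.68
for sla in (20.0, 30.0):
    # yields 13.6 and ~20.4 m^2/kg, bracketing the reported 13.2-20.4
    print(round(lar_from_sla_lwr(sla, lwr), 1))
```

With NAR of 17-23 g m⁻² day⁻¹ and LAR of 13.2-20.4 m² kg⁻¹, the identity gives RGR of roughly 0.22-0.47 g g⁻¹ day⁻¹, consistent with the reported 0.257-0.401 g g⁻¹ day⁻¹.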
Effects of elevated CO 2 on the amount of carbon gained, the rates of photosynthesis and respiration, and water use efficiency
Gifford (1995) reported a 25% increase in carbon accumulation per plant under elevated CO 2 of 710 µmol mol -1 in wheat. In radish, elevated CO 2 enhanced the amount of carbon gained per plant by 45 to 64%. Therefore, the effect of elevated CO 2 on carbon accumulation on a plant basis seems to be high in radish compared with wheat. It may be due to the difference in the effect of elevated CO 2 on LA between wheat and radish (see above for details).
The rate of photosynthesis in the first leaf of plants at various ages varied substantially (Table 3, see also Fig. 1 in the accompanying paper, Usuda 2004). The rates of photosynthesis in these leaves were increased by 46 to 85% by increasing the CO 2 concentration in the inlet air from 350 to 750 µmol mol -1 (Table 3), but elevated CO 2 accelerated ontogeny (see accompanying paper, Usuda, 2004). Therefore, continuous measurement of the rate of photosynthesis in the whole plant is crucial to assess the effect of photosynthesis on biomass production under elevated CO 2 . The rate of photosynthesis per unit LA calculated from CO 2 exchange in the whole plants was increased by 9 to 20% by increasing the CO 2 concentration from 340-384 to 692-758 µmol mol -1 (Table 2). Elevated CO 2 , however, accelerates growth (Fig. 3, see also the accompanying paper, Usuda 2004), and therefore increases the number of senescent leaves and also mutual shading. This may be why the increase in photosynthetic rate caused by elevated CO 2 in whole plants (9 to 20%) was lower than that in single leaves (the first leaf) measured under a constant light intensity of ca. 500 µmol m -2 s -1 (46 to 85%).
The rate of dark respiration per plant (C DR ) was slightly higher under elevated CO 2 than under ambient CO 2 during 19 to 21 DAP, but the rate of dark respiration per g FW was not influenced by elevated CO 2 (Table 2). There was no effect of elevated CO 2 on the rate of dark respiration per unit LA in the first leaf of Kosena at various ages (Fig. 1b in the accompanying paper, Usuda 2004). The effect of elevated CO 2 on respiration is an important issue in evaluating the effect of the global increase in atmospheric CO 2 on growth and biomass production, and many studies have been conducted. Reduction or no effect of elevated CO 2 on respiration was often reported, but even a promotive effect has been reported (Amthor 1997). Recently, Jahnke and Krewitt (2002) pointed out the difficulty in accurate measurement of the rate of dark respiration under elevated CO 2 . They improved the method of measurement of dark respiration under elevated CO 2 and concluded that elevated CO 2 had no effect on leaf respiration. The results obtained in this study support this conclusion. Since elevated CO 2 increased the rate of photosynthesis (Table 2), this means that elevated CO 2 increased the ratio of the rate of photosynthesis to that of respiration per plant. Gifford (1995), however, reported that elevated CO 2 had no effect on the ratio of photosynthesis to respiration in wheat. The rates of dark respiration were low compared with that of photosynthesis, and fluctuations of the CO 2 concentration in the inlet air might hamper accurate measurement of the rate of dark respiration (Fig. 4). More precise measurement is necessary to draw a final conclusion on these subjects.
WUE of a single leaf at various ages under a light intensity of 500 µmol m -2 s -1 under ambient CO 2 was 7.2 mmol CO 2 mol -1 H 2 O and was increased to 12.2 mmol CO 2 mol -1 H 2 O by elevated CO 2 (Table 3). On the other hand, total WUE of the whole plants during the day and night was increased from 4.6-5.0 to 7.0-7.3 mmol CO 2 mol -1 H 2 O by elevated CO 2 (Table 2). The lower values in the whole plant compared with the single leaf seemed to be due to CO 2 evolution and transpiration during the dark period, which were included in the values measured with the whole plants but not in those measured with the single leaf. WUE was thus increased by 70% for the single leaf and by 38 to 57% for the whole plant. Many studies have revealed an increase in instantaneous transpiration efficiency for leaves exposed to atmospheric CO 2 enrichment, primarily attributed to reduced stomatal conductance, enhanced photosynthesis, or both factors in combination. Increases in the instantaneous transpiration efficiency due to elevated CO 2 ranged from 25 to 229% (Wullschleger et al., 2002). The increase in WUE of whole plants under elevated CO 2 ranged from no effect in well-watered serpentine grass to a 180% increase in droughted alfalfa, but the majority of the values ranged between 30 and 50% (Wullschleger et al., 2002). The values obtained in this study fall in this range, confirming that WUE is improved under elevated CO 2 .
Conclusion
Approximate doubling of atmospheric CO 2 concentration enhanced DW by 59% in hydroponically grown radish cv. Kosena after 21 DAP. LA was increased by 18-43% by elevated CO 2 during 15 to 21 DAP. The rate of photosynthesis in the first leaf of plants at various ages was increased by 46 to 85% by elevating CO 2 . But the rates of photosynthesis changed very much during leaf development, and elevated CO 2 accelerated ontogeny (see, e.g., the accompanying paper, Usuda, 2004). Therefore, integration of the rate of photosynthesis by continuous measurement of CO 2 exchange in the whole plants is essential to evaluate the effects of elevated CO 2 on photosynthesis. The rate of photosynthesis per unit LA determined from CO 2 exchange in whole plants was increased by 9 to 20% by elevated CO 2 . The difference between the effects of elevated CO 2 measured with the single leaf and the whole plants was attributed to the acceleration of senescence and mutual shading caused by elevated CO 2 . In this study, I quantitatively assessed the effect of elevated CO 2 on LA, photosynthetic rate per unit LA and dry-matter production. All results shown here indicate that elevated CO 2 increases both photosynthetic rate per unit LA and LA, resulting in higher biomass productivity.
"year": 2004,
"sha1": "ba302bf2e9ec8bc6fe7dc8cb78fe74ee6e00cc4f",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1626/pps.7.386?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "0c1330ead863ed7036a0ecc092d7b326a33afe88",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
In vivo evaluation of safety and performance of a tapered nitinol venous stent with inclined proximal end in an ovine iliac venous model
A tapered stent with an inclined proximal end is designed to fit the iliac vein anatomically. The aim of the present study was to evaluate the safety and performance of the new stent in ovine left iliac veins. The experiment was performed in 30 adult sheep, and one nitinol-based VENA-BT® iliac venous stent (KYD stent) was implanted into each animal's left common iliac vein. Follow-up in all sheep consisted of angiographic, macroscopic, and microscopic examinations at Day 0 (< 24 h), Day 30, Day 90, Day 180 and Day 360 post-stenting (six animals per time-point). 30 healthy ~ 50 kg sheep were included in this study and randomly divided into five groups according to the follow-up timepoint. All stents were implanted successfully into the left ovine common iliac vein. No significant migration occurred at follow-up. There was no statistically significant difference between the groups (p > 0.05), indicating that no serious lumen loss occurred during the follow-up period. Common iliac venous pressure was also measured, and the results further indicated lumen patency at follow-up. Histological examinations indicated that no vessel injury, wall rupture, stent damage, or luminal thrombus occurred. There was moderate inflammatory cell infiltration around the stent in the Day-0 and Day-30 groups, with average inflammation scores of 2.278 and 2.167, respectively. The inflammatory reaction was significantly reduced in the Day-90, Day-180 and Day-360 groups, with average inflammation scores of 0.9444 (p < 0.001, Day-90 vs Day-0), 1.167 (p < 0.001, Day-180 vs Day-0) and 0.667 (p < 0.001, Day-360 vs Day-0), respectively. The microscopic examinations found that the stents were well covered by endothelial cells at all follow-up time points. The results suggested that the KYD stent is feasible and safe in an animal model. Future clinical studies may be required to further evaluate its safety and efficacy.
Stent system
The KYD stent system, specifically designed for fitting the iliac vein anatomically, was developed by ShenZhen KYD Biomedical Technology Co. Ltd., China. This laser-cut and self-expandable stent is made of nickel-titanium alloy with a beveled proximal end; the proximal part of the stent is a closed-cell stent, while the distal part is an open-cell stent. The two parts are soldered together into a smoothly tapered stent. This specific design ensures the radial support of the stent as well as enough flexibility (Fig. 1A). The nickel-titanium alloy used in the stent is antimagnetic, thus not interfering with MRI. If the stents were made of magnetic material, displacement of the stents could occur during an MRI scan, which might cause serious injury to the human body. Thus, the antimagnetic properties of the stents are essential for safety. KYD stents with different diameters at the two ends are available (some variants are cylindrical stents), designed with high crush resistance and flexible interconnections specifically for use in the central venous system (Table 2). The stent with the same proximal and distal diameter, that is, the cylindrical stent, was prepared for some special cases, e.g., similar proximal and distal diameters of the iliac veins. The stent is pre-installed in the sheath of the delivery device. The delivery device consists of a handle assembly, a push spring tube assembly, a PI (polyimide) sheath core tube (with tip), a delivery sheath tube, an outer sheath tube, and a 3-way valve assembly (Fig. 1B). The specifications and dimensions of each stent delivery device are shown in Table 3.
Stenting procedure
After fasting for 12 h, the sheep was anaesthetized and fixed on the operating table in the supine position. After tracheal intubation, the sheep was ventilated with O 2 /N 2 O. Anesthesia was maintained under continuous intravenous infusion of anesthetics, and vital signs were monitored. After skin preparation and disinfection, the left femoral vein was punctured, a 6F catheter sheath was inserted, and 100 U/kg heparin was injected through the catheter
Postoperative management
4.8 million units of penicillin were injected intramuscularly within 24 h after the operation, and then 4.8 million units of penicillin were injected intramuscularly every day for 3 consecutive days. After the animals regained consciousness, they were fed and observed for 1, 3, and 6 months. Each sheep was given 100 mg of oral aspirin and 75 mg of clopidogrel per day for 3 months after surgery, and then 100 mg of oral aspirin per day until being euthanized. The efficacy of aspirin and clopidogrel in sheep seems controversial 10,11 . But antiplatelet treatment in sheep post stenting was still performed in some recent studies 9,12 . Thus, aspirin and clopidogrel were also given to the sheep in this study, with the same dosage as that used in stented patients. After the indicated follow-up time, the animals were euthanized by intravascular injection of heparin and an overdose of anesthetic.
Study outcome measures
Stenting success was defined as deployment of the stent within 5 mm of the intended location in the iliac system. Stent fracture was examined by digital subtraction angiography (DSA). The degree of stent oversizing relative to vein size was evaluated by calculating the compression ratio. For evaluating the acute and long-term lumen patency, the lumen loss rate was calculated and common iliac venous pressure (CIVP) was measured.
Compression ratio calculation
From the proximal and distal diameters of the iliac vein measured by angiography before stent implantation (Div) and the diameter of the implanted stent (Ds), the compression ratio of the stent was calculated with the following formula.
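The formula itself is not reproduced in this excerpt. A common definition of stent compression (oversizing) ratio uses the difference between stent and vein diameters relative to the stent diameter; the sketch below assumes that form and is illustrative only:

```python
def compression_ratio(stent_d_mm, vein_d_mm):
    """Percent compression of the stent when constrained to the vein diameter.
    Assumed form: (Ds - Div) / Ds * 100; the paper's exact formula is not
    reproduced in this excerpt, so treat this as a plausible reconstruction."""
    return (stent_d_mm - vein_d_mm) / stent_d_mm * 100.0

# A 14 mm stent deployed in a 12 mm vein (the recommended 2 mm oversizing)
print(round(compression_ratio(14.0, 12.0), 1))  # 14.3
```

Values of this order are consistent with the mean proximal compression ratio of 12.4% reported below.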
Lumen loss evaluation
Angiographic reviews were performed at Day 0 (< 24 h), Day 30, Day 90, Day 180, and Day 360 post-stenting. The minimum lumen diameter (MLD) of the iliac vein stent was measured from the venography images immediately after the procedure and at each follow-up endpoint, and the lumen loss (in mm) and lumen loss rate were calculated according to the following formulas respectively:
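The formulas are not reproduced in this excerpt. Following the definitions in the text (lumen loss as the decrease in MLD from immediately post-procedure to follow-up, and the rate as that loss relative to the initial MLD), a minimal sketch, with the exact normalization an assumption:

```python
def lumen_loss(mld_post_mm, mld_followup_mm):
    """Lumen loss in mm: immediate post-procedure MLD minus follow-up MLD."""
    return mld_post_mm - mld_followup_mm

def lumen_loss_rate(mld_post_mm, mld_followup_mm):
    """Lumen loss rate in percent of the immediate post-procedure MLD
    (assumed denominator; the paper's formula is not shown here)."""
    return lumen_loss(mld_post_mm, mld_followup_mm) / mld_post_mm * 100.0

# e.g. an MLD of 10 mm post-procedure narrowing to 8 mm at follow-up
print(lumen_loss(10.0, 8.0), lumen_loss_rate(10.0, 8.0))  # 2.0 20.0
```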
CIVP measurement
The BeneView T5 monitor was used to measure CIVP. The zero point was set when the water level in the calibration catheter was at the level of the sheep's right atrium. After successful puncture, the distal end of the pigtail catheter was placed at the confluence of the iliac veins. All air was removed from the lumen of the catheter, and then it was connected to the manometry catheter to measure the proximal CIVP. Subsequently, the pigtail catheter was removed, and the flushing catheter of the puncture sheath was connected to the manometry catheter after complete air removal to measure the pressure of the distal common iliac vein. The procedure was performed before and immediately after stenting, as well as at each follow-up timepoint.
Histologic examination
Three parts of the stented veins (distal, middle, and proximal/closest to the IVC) were examined histologically, with a total of 60 sections. Hematoxylin and eosin (H&E) staining of the stented vein sections was performed and examined under light microscopy. Morphometric computer-assisted methods and software were used for image capture and quantitative analysis.
Inflammation score
Parastrut inflammation was defined as the presence of macrophages and foreign body giant cells admixed with variable numbers of lymphocytes. The inflammation score was calculated according to the following criteria (Table 4). For each timepoint per animal, the leukocytes were counted in a total of 30 high-powered fields to calculate the inflammation score: 10 from the histological section of the distal end of the stent, 10 from the middle part of the stent, and 10 from the proximal end of the stent. Thus, for the six animals at each time point per group, the leukocytes were counted in a total of 180 HP fields and the inflammation score was calculated.
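The scoring scheme above can be sketched as an average of per-field scores. The per-field criteria (Table 4) are not reproduced in this excerpt, so the 0-3 grading here is an assumption for illustration:

```python
from statistics import mean

def inflammation_score(field_scores):
    """Mean of per-field inflammation grades for one animal at one timepoint.
    Assumes each of the 30 high-powered fields (10 distal + 10 middle +
    10 proximal) receives a grade per Table 4 (not shown here; 0-3 assumed)."""
    if len(field_scores) != 30:
        raise ValueError("expected 30 high-powered fields per animal")
    return mean(field_scores)

# A uniformly moderate-inflammation animal would score 2, comparable to the
# Day-0 group mean of 2.278 reported in the abstract.
print(inflammation_score([2] * 30))  # 2
```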
Endothelialization examination
The cross sections of the stented veins at the indicated follow-up timepoints were examined under high-powered microscopy after hematoxylin and eosin staining to assess the presence of a layer of endothelial cells covering the lumen.
Statistical analysis
The data were analyzed using SPSS software (version 20, SPSS Inc., USA). Statistical differences between groups were compared using ANOVA and t-tests. For the inflammation score, the Kruskal-Wallis test was used to analyze the statistical difference between the groups. p < 0.05 was considered statistically significant if not otherwise specified.
Statement
The study is reported in accordance with ARRIVE guidelines.
Stent insertion and migration
30 sheep were successfully implanted with iliac vein stents in the left common iliac vein via the left femoral vein under DSA guidance. Stent release was successful and stent positioning was accurate (Fig. 2). In all 30 sheep, no damage of the iliac vein or stent fracture was observed by angiography immediately after implantation of the iliac vein stent (Fig. 3).
During follow-up, no stent displacement, stent fracture or stent placement-related complications, such as bleeding, vein injury and thrombosis, were observed by DSA angiography in any group. The positioning of the proximal end of the stent was accurate; imaging re-examination showed that the proximal end of the stent remained in the same position as before, without displacement, and no shortening was observed (Fig. 3).
Stent compression ratio
The selection of stent diameter in this study was based on the compression ratio (10-30%) 12 . The minimum proximal compression ratio of the deployed stent was 3.3%, the maximum was 30.3%, and the mean was 12.4%. As for the distal compression ratio, the minimum was 14%, the maximum was 56.9%, and the mean was 35% (Fig. 4, Table 5). No significant difference was shown in either proximal or distal compression ratio among groups (Fig. 4). 60% (18/30) of the implanted stents had a distal compression ratio of more than 30%. No significant difference in lumen loss rate was observed among the four follow-up groups (Fig. 5A, Table 6). No lumen loss rate in individual sheep was higher than 50% at any follow-up time point (Fig. 5, Table 6).
To further evaluate the venous patency, CIVP was also measured at follow-up. Vein stenosis may cause a rise in venous pressure. Our results indicated that neither distal nor proximal CIVP showed significant alteration during the acute period of stenting (< 24 h) (Fig. 5B, C and Table 7). In contrast, both distal and proximal CIVP at day 30 post-stenting were slightly increased (Fig. 5B, C and Table 7), which might be related to intima hyperplasia (Fig. 6A). In the Day-90 and Day-360 groups, there was no significant change of the final CIVP at day 90 or day 360 compared to the pre-stenting CIVP. However, the distal but not proximal CIVP at day 180 after stenting was higher than the pre-stenting CIVP. Considering that no significant lumen loss was observed in the Day-180 group, the increase in distal CIVP may not be related to alteration of venous patency post stenting. Overall, the KYD stents demonstrated satisfactory performance with regard to long-term lumen patency.
Gross appearance
At each follow-up time point, the iliac vein segment at the stenting site was dissected and exposed. No vascular wall rupture, thrombosis, or stent damage was observed in any of the sheep. No adhesion between the iliac vein and the surrounding tissue occurred (Fig. 2).
Inflammatory response
Inflammatory response caused by the stent was mostly seen in the Day-0 and Day-30 groups. Immediately after stenting, punctate as well as focally infiltrated inflammatory cells with inflammatory exudate were seen around the stent, and more scattered inflammatory cell infiltration was present in the intima at day 30 post procedure (Fig. 6A). After 90 days of stenting, the inflammatory response was markedly alleviated (Fig. 6A). To better evaluate the inflammatory response, the inflammation score was calculated. A significant difference was observed between Day-0 and Day-90/Day-180/Day-360 (Fig. 6B).
Intima hyperplasia and endothelialization
With H&E staining of the pathological sections, the intima hyperplasia and endothelial cell coverage of the stent were observed. Both intima formation and endothelialization were seen in sections of the Day-30, Day-90, Day-180 and Day-360 groups (Figs. 6 and 7). In the sections of the Day-360 group, the iliac vein stent was completely covered and wrapped by hyperplastic intima (Fig. 7). The newly formed intima structure and endothelial cell coverage prevented luminal thrombus formation. The absence of thrombosis at all follow-up time-points indirectly indicated the presence of endothelial cell coverage of the stent.
Discussion
The overall performance of the KYD stent system was satisfactory. The average lumen loss rate at day 30 after the procedure was 27.3%; at day 90, the lumen loss rate was stabilized at about 20%, and it decreased to 17% at day 360. Thus, these results indicated the high radial force and crush resistance of KYD stents. The inflammation was mainly seen in the Day-0 and Day-30 groups and was significantly alleviated after day 30. In addition, neither stent damage nor venous injury was found, indicating the safety of the stents.
In recent years, more attention has been paid to IVCS as the cause of lower limb symptoms. At present, endovascular intervention technology has achieved favorable outcomes in relieving the symptoms of IVCS [13][14][15] . Stent treatment of iliocaval obstructions significantly decreases the clinical manifestations of chronic venous insufficiency, swelling and pain 14,16,17 .
A wide variety of dedicated venous stents have been developed, such as Zilver Vena (Cook, Bjaeverskov, Denmark), Sinus Venous, Sinus Obliquus and Sinus XL Flex (Optimed, Ettlingen, Germany), Vici (Veniti; St. Louis, USA), Abre (Medtronic, Minnesota, USA) and Venovo (Bard, Tempe, USA). It is widely believed that a single "perfect" venous stent for the deep veins does not currently exist, and the type of stent used should be tailored to the needs of the specific situation 18 . For patients with chronic iliocaval venous obstruction, especially at the iliocaval junction, one particular question for placing the currently available venous stents is whether these stents should extend into the inferior vena cava (IVC). One previous study reported that new thrombosis of the non-stented contralateral iliofemoral vein occurred in about 9% of patients if the stents extended into the IVC 19 . On the contrary, the stenosis may not be fully covered by the stents if the stents do not extend into the IVC. The KYD stents with the inclined proximal end may avoid this problem. Again, the 'ideal' venous stent is still not available, particularly one which performs well in the different regions of the deep venous system, as each area has its particular performance needs 20 . The caliber (absolute cross-sectional area) of the iliac venous outflow controls peripheral venous pressure. Thus, the basic principle of venous stenting should be to mirror normal venous anatomy so as to adequately decompress peripheral venous hypertension 21 . The tapered KYD stent was designed exactly for fully fitting the human iliac vein anatomically (Table 8). Although we believe this is a good attempt, further study is definitely required to evaluate the advantage of the tapered venous stent over the cylindric one in future.
Due to the significant difference between the distal and proximal diameters of iliac veins, the tapered stent has obvious advantages, as it conforms to the physiological structure of the human iliac vein. Compared with the cylindrical stent, conical stent implantation may make the changes in vascular hemodynamics much closer to the physiological condition, which can reduce the incidence of in-stent restenosis and thrombosis 24 . It would seem logical that in the relatively immobile common iliac vein at the May-Thurner point, a stent with high radial force and crush resistance would be favorable. Conversely, in the caudal external iliac vein and common femoral vein, one could make a case for a more flexible stent, as the venous segments here are more mobile 20 .
It is crucial to select a proper stent size to ensure good apposition to the vessel wall 25 . In current practice, slight oversizing of the stent relative to the target vessel is usually recommended 26 . Under-sizing of the stent is more harmful than slight oversizing, as it may cause a permanent iatrogenic stenosis that is not easily corrected, and may also cause stent migration, even to the cardiopulmonary system 27,28 . In clinical practice, the recommended stent diameter for the KYD iliac vein stent is 2 mm larger than the target vessel diameter. Hammer F. et al. indicated that oversizing (by 2-4 mm) of any type of stent in the venous system was important to ensure close stent-to-vessel wall contact and avoid stent migration 29 . Consistently, Raju et al. also reported that they oversized the stent by 2 mm beyond the recommended calibre but restricted post dilatation to the optimum outflow calibre for the segment 23 . Marston WA et al. suggested that venous stent diameter should achieve 10-30% oversizing 12 .
In the current study, the mean proximal and distal compression ratios of the stent were 12.4% (3.3-30.3%) and 35% (14-56.9%) (Fig. 4, Table 5), respectively. Apparently, the distal compression ratio of most KYD stents used is above the recommended value (30%). In future, redesigned KYD stents specifically for the ovine iliac vein may be applied in further investigations. In this study, each sheep was given 100 mg of oral aspirin and 75 mg of clopidogrel per day for 3 months after stenting, and then 100 mg of oral aspirin per day until being euthanized. However, the efficacy of aspirin and clopidogrel in sheep seems controversial. Spanos reported that aspirin fails to inhibit platelet aggregation in sheep because sheep platelets have an ASA-resistant cyclooxygenase and may be able to aggregate by a pathway that is independent of arachidonic acid 10 . Weigand and Boos et al. reported that high dosages of clopidogrel inhibited platelet aggregation in merely a low number of sheep despite sufficient absorption 11 . These studies suggested that ticagrelor and acetylsalicylic acid might not be suitable for platelet inhibition in sheep, which might provide indirect evidence to further support the performance of KYD stents with regard to venous patency. Future study is still required to evaluate the effect of ticagrelor and acetylsalicylic acid in sheep. In addition to antiplatelet therapy, there is a general consensus regarding anticoagulant therapy following venous stenting in
Figure 1 .
Figure 1. Keyidun stent system. (A) The structure of the Keyidun iliac vein stent. (B) Schematic diagram of the structure of the Keyidun iliac vein stent delivery device.
Figure 5 .
Figure 5. Evaluation of lumen patency. (A) Lumen loss rate of the stent at each follow-up time-point was calculated from the angiographic results. (B-C) Distal and proximal CIVP were measured at each follow-up timepoint using the BeneView T5 monitor. Data are shown as mean ± SD. ns: no statistical significance. *p < 0.05; **p < 0.01.
Figure 6 .
Figure 6. Inflammatory response around the stent. (A) H&E staining shows inflammatory cell infiltration (black arrow) in the intima at the indicated timepoints. (B) Inflammation score at each timepoint was calculated from the H&E staining images. Data are shown as mean ± SD.
Figure 7 .
Figure 7. Intima hyperplasia and endothelialization. Representative H&E staining images show the intimal hyperplasia and endothelial cell coverage at the indicated timepoints post stenting. Black arrow: endothelial cells.
Table 2 .
Specifications and parameters of KYD stent.
Table 3 .
Iliac vein stent delivery device specifications from KYD.
Table 4 .
Inflammation scoring criteria. HP: high-power field.
Table 5 .
Preoperative angiography and stent compression ratio.
"year": 2024,
"sha1": "e5e893fba8f05d6105cae563dc86864e16160358",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "00d2df46c01c3aebd5d92865109bd45237a31643",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Determinants of child labour in Crop Production (A Case Study of Anambra State of Nigeria). Determinantes del trabajo infantil en la producción de cultivos (Un estudio de caso del estado de Anambra de Nigeria)
Determinants of child labour use among rural household crop farmers in Anambra State of Nigeria were studied. A multistage random sampling technique was used to select one hundred (100) respondents for the detailed study. A structured questionnaire was used to elicit information from the respondents. Percentage responses were used to address objectives i and iii; objective ii was addressed using Probit model analysis. The results showed that the majority of the respondents were married, youthful, had moderate household sizes, were educated and were highly experienced in farming. The determinant factors of the use of child labour among rural households were the relationship between the child and the household head, access to credit and educational level. The major operations accomplished by the children in the study area were bird scaring, fertilizer application and planting. It was recommended that the child rights act be enforced by appropriate government agencies and offenders brought to book, that education be made free for all children, and that social mobilization be undertaken to change attitudes to the use of child labour.
Sustainability, Agri, Food and Environmental Research (ISSN: 0719-3726), 6(1), 2018: 45-57. http://dx.doi.org/10.7770/safer-V6N1-art1354
exposed to risks, which include exposure to chemicals, organic dusts and biological hazards (Yadav and Gowri-Sengupta, 2009). In addition, they are in contact with psycho-social hazards, abuse and sexually transmitted diseases, increased by isolation and poverty (Bhalotra and Heady, 2000; Beegle, Dehejia and Gatti, 2004), long periods of stooping and repetitive movements, carrying heavy loads over long distances, work in extreme temperatures and without access to safe water (Buchmann, 2000; Das and Mukherjee, 2007).
It is paramount to state that efforts of governments and donor organizations to curtail child labour have not yielded the desired dividends, as over 200 million children are still found in paid working places worldwide (Ali et al., 2004; ILO, 2010). To improve this situation, it is important to be fully equipped with knowledge of the factors that could influence the decisions of parents (or other caretakers) to engage their wards in paid employment, as well as the push or pull factors of children into the labour market. In this study, effort is geared towards the influence of the parents or other caretakers, as against other studies that dwelled on all levels (household, national and regional) (Khan et al., 2007; Fekadu et al., 2009). This is because the major decisions regarding a child's work or education are centred on the household head, although other family members may also contribute. The decision of the household head has four possible outcomes: the child can be in school, it can be engaged in paid work, it can be both in school and engaged in paid work, or it can be neither in school nor engaged in paid work (Khan et al., 2007; Busa et al., 2007).
Furthermore, when modeling the determinants of child labour supply, the household is taken as the unit of analysis. In addition, several studies (Ali et al., 2004; Bura, 2009; Sanusi and Akinniram, 2010) reported that household income is the major determinant in the household head's decision on child labour supply. Nevertheless, prohibiting child labour use generally among the institutions concerned will entail not just making laws but greater commitment by implementing agencies, as this will be met with stiff opposition considering the gains accruing from the use of the labour (International Labour Organisation (ILO), 2005; Manacorda and Rosati, 2007). However, eliminating child labour entails inculcating programmes that are capable of increasing awareness of the evils of child labour, making education affordable across all levels, and enforcing anti-child-labour laws. Furthermore, such programmes should be able to address the problems of the four pillars of decent work, which are provision of quality jobs that provide income to cover at least basic needs, minimum income security to reduce households' need, provision of an old age pension scheme and provision of basic health facilities (UNICEF, 2007; ILO, 2015). (2011), who reported that if household heads do not have access to credit markets, then they have to rely on internal assets by putting their children to work rather than investing in human capital that will make their children more productive in the future. The coefficient of level of education was positive and significant at the 1% probability level. Empirical evidence shows that education of the parents affects child labour decisions positively (Ray, 2006; Mendonca, 2007; Self and Grabowski, 2009). They opined that the educational status of the father and mother has significant impacts on the participation of sons and daughters, respectively, in the labour markets.
This finding is in consonance with Basu and Tzannatos (2003), who reported that education is a vehicle through which people are empowered to improve their quality of life and protected from all forms of exploitation, such as child labour. To eliminate child labour, it is imperative to establish free, compulsory, equal and quality education for all children, regardless of race, gender, religion and socioeconomic status. Table 3 shows the crop production activities engaged in by the children in the study area. The major crops considered in the study were cassava, rice, cocoyam and maize. Among the items considered, bird scaring in rice farms recorded the highest use of child labour, as reported by 83.3% of the respondents. Bird scaring in rice paddies is one of the least tedious crop production operations, but it requires long hours of work for a meagre wage, which only children would accept. This was followed by fertilizer application, reported by 66.7% of the sampled farmers. Fertilizer application, particularly of inorganic fertilizer, is less tedious, especially where the broadcasting method used in rice paddies is applied. This finding concurs with Ume and Okoye (2006), who opined that children are often used to accomplish light jobs in farming, although they are less efficient at them. The least common farming activity carried out by the children was tillage, reported by 30% of the respondents. Tillage is a tedious operation and as a result needs the services of able-bodied and energetic individuals (Ume, 2006). The determinants of child labour were access to credit, educational level and the relationship between the household head and the child. The major crop production activities in which children were used in the study area were bird scaring, fertilizer application and planting.
Based on the results, the following recommendations were proffered: (1) Government should put in place educational policies that could facilitate children attending school. These policies include free education that is not limited because of the need to purchase supplies and uniforms, unbiased education where the rights of girls and minorities are protected, and supplemental meal programs to encourage poor children to attend school and enhance their academic performance when they attend.
(2) Social mobilization through campaigns to provide information, raise awareness and change people's attitudes towards child labour by exposing the occupational hazards involved.
(3) Advocacy for the rights of the child, and enacting laws and policies aimed at eliminating all forms of child labour. Advocates should monitor the progress of the implementation and enforcement of these laws.
(4) Government should make policies to enhance household heads' access to credit facilities through commercial banks and microfinance banks in order to boost their income.
This credit could be used to hire labour instead of using the labour of the under-aged. | 2019-04-02T13:13:57.627Z | 2018-05-30T00:00:00.000 | {
"year": 2018,
"sha1": "7b1fe0ee1ddaab893472a0abf68c00b0cb37d869",
"oa_license": "CCBYNCSA",
"oa_url": "https://portalrevistas.uct.cl/index.php/safer/article/download/1354/1336",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2cc1a4a52a812295be73d7d7d3844f9a1342fa1b",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
53178407 | pes2o/s2orc | v3-fos-license | Impact of an education intervention on knowledge of high school students concerning substance use in Kurdistan Region-Iraq: A quasi-experimental study
Background and aims Substance use among adolescents, especially smoking, is becoming a public health problem in the Kurdistan region of Iraq. School-based health education is an appropriate approach for improving students' knowledge regarding substance use in an attempt to prevent or reduce such problems. The purpose of the study was to examine the effect of an educational intervention for high school students to improve their knowledge of substance use and its negative consequences, which will, in turn, motivate students to take protective measures against substance use. Methods This quasi-experimental (one group; pre- and post-test) design was carried out in Erbil city from January 2017 to June 2017. A random sampling technique was employed to collect a sample of 280 students from four high schools in Erbil city, the capital of the Kurdistan Region-Iraq. A self-administered questionnaire on knowledge assessment regarding substance use was developed and validated by experts regarding the relevance of the items. A structured teaching program for imparting knowledge on various aspects of substance use was developed based on an extensive review of the literature and experts' opinion. The intervention program consisted of a series of 4 education modules. These modules were mainly taught by "Rabers" over a period of four weeks (one session per week). SPSS version 21 was used for data entry and analysis. Data were analyzed through descriptive and inferential statistics (McNemar tests, paired t-test, and Chi-square test). Results Out of 280 students, a total of 270 students completed both pre- and post-intervention surveys. Of the 270 students, 124 (45.9%) were males and 146 (54.1%) were females. The mean age ± SD of the participants was 16.59 ± 0.784 years, ranging from 15–18 years.
The study reveals a statistically significant improvement in the mean knowledge score of students following the implementation of the health education program, from 15.959 ± 3.25 to 20.633 ± 3.26 (p < 0.001). Moreover, none of the students remained with poor knowledge, and more than half (50.2%) of the students upgraded to a good knowledge level. Conclusion Implementing a health education program about substance use for high school students in Erbil city improved the knowledge of students about this topic.
Introduction
Substance use (alcohol, tobacco, and illicit drugs) is one of the major concerns of today's world [1]. It impairs human will and clouds the mind in a way that can push people to commit crimes [2]. Substance use, from which both developed and developing communities suffer, is therefore considered one of the most complicated problems facing a community, and no less dangerous than terrorism [3].
Iraq is a low-income, conflict-affected country that faces a series of challenges in mental health because of its continuous exposure to large-scale traumatic events such as successive wars (from 1980 to the present), economic embargo, and continuous organised violence and terrorism [4]. These unsafe conditions reflected negatively on the psychosocial status of the Iraqi community, mainly the children and youth, who were hit hardest by these events, suffering disease, psychological shock and death [5]. In addition, the geographic location of Iraq is another factor that makes it vulnerable to substance use, because of the long and porous border between Iraq and Iran, a country facing an increasingly serious substance use problem [6]. According to the most recent report, the prevalence of lifetime use of alcohol and licit or illicit drugs in Iraq (including the Kurdistan region) was approximately 10.3% [7]. Substance use extends through different age levels, but it seems to be more common among adolescents, who are usually in high school or college [8], as adolescence is a critical life-course period during which patterns of health behaviour are formed before tracking into adulthood, and adolescents experience physical, mental, and social interactional changes during this period [9]. Moreover, adolescence is characterised by increased adventurous tendencies and peer influence. As a result, adolescents are drawn to new experiences, including the use of substances [10].
In Iraq (including the Kurdistan region), studies have shown that substance use by young people is on the rise. For example, according to the report of the Iraqi Community Epidemiology Work Group, there has been an increase in the use of alcohol, prescription drugs, and illicit drugs in Iraq, especially among young people [11]. The most recent study among high school students in Erbil city found that 41.7% of students are smokers; this is alarming, as smoking is considered a gateway that may, in turn, progress to the use of other illicit substances [12].
Several factors influence whether an adolescent tries substances, and knowledge is one of the factors that influence students' decision to use them [9]. With inadequate knowledge about substance use and its consequences, a student will be less likely to make a fact-based and informed decision [13,14]. Studies have shown that providing youth with accurate information about the negative consequences of substance use encourages abstinence [15][16][17]. School-based health education is an appropriate approach for improving students' knowledge regarding substance use [18]. Hence, providing substance use education in schools can encourage students to make positive decisions about their future lives, which in turn prevents or reduces the use of substances among this segment of the population. As far as we know, no previous research has investigated an educational program about substance use for adolescents in the Kurdistan region of Iraq, so this study attempts to fill that gap in the literature. Thus, the purpose of this study was to examine the effect of an educational intervention for high school students to improve their knowledge of substance use and its negative consequences, which will, in turn, motivate students to take protective measures against substance use.
Design and sample
This quasi-experimental (one group; pre- and post-test) study was conducted from January 2017 to June 2017 amongst high school students in Erbil city, the capital of the Kurdistan Region-Iraq. According to data obtained from the Erbil Directorate of Education, there were 78 high schools in Erbil city during the academic year 2016-2017. Of the 78 schools, four (two for males and two for females) were selected by simple random sampling using the Microsoft Excel program. Students of grades 10 and 11 from these four schools were the target population for the educational program. A single class from each of grades 10 and 11 was randomly selected from each school, and then 35 students were chosen randomly from each selected class, the total number of students in each class being more than 35. As a result, 280 students aged 15-18 were recruited to participate in the study. It is worth mentioning that high schools in the Kurdistan Region-Iraq consist of three grades (10, 11, and 12). In this study, grade 12 was excluded because students in this grade finish their classes and take an early break to prepare for the final exam, making them difficult to follow up.
Instrumentation
A self-administered questionnaire was developed by the researchers, based on an extensive review of the literature, as the tool for data collection. The questionnaire consisted of two sections (S1 File). The first section covered socio-demographic items such as age, gender, educational level of parents, occupational status of the father, and monthly family income, which was categorised into four groups: more than enough, enough, barely enough, and less than enough. The second section was designed to assess students' knowledge of substance use through 30 knowledge-testing items. The knowledge-testing questions were multiple choice, the majority taken from the education program and booklet. A score of 1 was given to correct answers and 0 to wrong answers. Knowledge scores therefore ranged from 0 to 30, with a higher score representing better knowledge about substance use. The level of knowledge was described as good for scores of 20-30, moderate for 10-19, and poor for 0-9. Prior to the actual study, a pilot test was done to determine the reliability of the questionnaire. Reliability was measured using Cronbach's alpha, with a value of 0.78, demonstrating that the questionnaire was internally consistent. The content validity of the questionnaire was confirmed by a panel of four experts: two consultant psychiatrists (substance use) and two community medicine specialists. Based on the experts' comments, only minor modifications to the wording of the content were required.
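The scoring rubric above maps a 0-30 total to one of three knowledge levels. A minimal sketch of that mapping (function names are illustrative, not taken from the study's analysis procedure):

```python
def knowledge_score(answers, key):
    """Score 30 multiple-choice items: 1 per correct answer, 0 otherwise."""
    return sum(1 for given, correct in zip(answers, key) if given == correct)

def knowledge_level(score):
    """Map a 0-30 total to the study's three bands."""
    if score >= 20:
        return "good"
    if score >= 10:
        return "moderate"
    return "poor"
```

Applying `knowledge_level` to each student's pre- and post-test totals yields the level categories compared later with the McNemar test.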
The educational program
A structured teaching program for imparting knowledge on various aspects of substance use, especially prevention, was developed by the authors. The content of the educational curriculum was designed based on an extensive review of the literature and experts' opinion. The intervention consisted of a series of 4 education modules, whose content was created and edited by the researchers. The first module included an introduction to substance use and its historical background, some definitions, side effects and negative consequences, and causes and risk factors. The second and third modules identified the types of substances used, and the final module described prevention and control of substance use. Before the curriculum was finalised, it was sent to three experts, who read and edited its content to ensure that the program fits local teaching and learning styles.
Procedure
After obtaining informed consent from the students' parents, the students in the four schools were given the pre-test questionnaire one week before administration of the educational program. The questionnaire was administered in the classrooms by the schools' social workers ("Rabers") in the presence of the primary author. Each student was given a serial number to be followed in the second assessment (post-test). After administration of the questionnaire, four 45-minute sessions were delivered using lectures, group discussion, and booklets. These sessions were mainly taught by the "Rabers" over four weeks (one session per week); the "Rabers" were trained by the primary author on the content of the educational program. As a reminder, each participating student was provided with a copy of the health education booklet prepared and designed by the primary author and reviewed by the other authors, entitled "Know the Facts about Substance Use" (in the Kurdish language). The content of the booklet was similar to that of the educational program and summarized its most important points. One month later, students were asked to complete the same questionnaire a second time (post-test).
Data analysis
The data were coded, entered into the Statistical Package for the Social Sciences (SPSS) software, version 21, and analyzed. The McNemar test was conducted as a univariate analysis to compare pre- to post-intervention changes in the percentage of students at each knowledge level (poor, moderate, and good). A paired t-test compared pre- to post-intervention scores, and a Chi-square test was used to examine group differences. A p-value of ≤0.05 was considered the level of significance for all analyses.
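The paired t-test statistic on pre/post scores, and the classic McNemar chi-square on the discordant pre-to-post level changes, reduce to short formulas. A pure-Python sketch with made-up numbers (illustrative only, not the study's data; a statistics package such as SPSS or SciPy would also return the corresponding p-values):

```python
import math

def paired_t_statistic(pre, post):
    """t = mean(d) / (sd(d) / sqrt(n)) for per-student differences d = post - pre."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

def mcnemar_statistic(b, c):
    """McNemar chi-square from the two discordant cell counts of a 2x2 table."""
    return (b - c) ** 2 / (b + c)

# Illustrative pre/post knowledge scores for five students.
pre = [14, 16, 15, 17, 13]
post = [19, 20, 21, 22, 17]
t = paired_t_statistic(pre, post)
```

The t statistic is then compared against the t-distribution with n-1 degrees of freedom, and the McNemar statistic against chi-square with 1 degree of freedom.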
Results
Out of 280 students, a total of 270 (96.4% response rate) completed both the pre- and post-intervention surveys: 124 (45.9%) males and 146 (54.1%) females. The mean age ± SD of the participants was 16.59 ± 0.784 years, ranging from 15-18 years.
The results reveal a statistically significant improvement in the mean knowledge score of students following the implementation of the health education program, from 15.959 ± 3.25 to 20.633 ± 3.26 (p < 0.001) ( Table 1). There was no significant association between the gain in knowledge scores (post- minus pre-test scores) and sociodemographic variables such as gender, age, education of parents, occupational status of fathers, house ownership, and monthly family income ( Table 2).
The pre-test results show that 82.6% of the students had moderate knowledge and about 4.1% had poor knowledge, whereas only 13.3% had good knowledge. An improvement in students' knowledge was observed immediately after program implementation, with significant differences between the levels of knowledge after the educational program and those before (Table 3). Of the students with poor knowledge, 63.6% upgraded to moderate knowledge and 36.4% to good knowledge. Of the students with moderate knowledge, half (50.2%) upgraded to good knowledge while 49.8% remained at the moderate level.
Discussion
Having knowledge and information is the first and necessary element in developing healthy behavior [14]. There is some evidence for the effectiveness of school-based programmes in raising students' knowledge regarding the harms of substance use, which in turn contributes to preventing or reducing it among students [9,14,19]. Numerous studies cited inadequate knowledge of substance use among students as a contributing factor for abusing it [9,13,20]. Students can construct their knowledge about substance use from different sources such as peers, media, family, and school [9]; however, the most appropriate source of information is a school-based programme [1], an approach adopted by many countries but not Iraq, and more specifically not the Kurdistan region of Iraq. Therefore, this study was conducted with the aim of assessing the effectiveness of a health education programme in improving students' knowledge regarding substance use. Prior to implementation of the education programme, the majority of high school students were at a moderate knowledge level, consistent with the levels reported by Geramian et al [13] and Rockville [21]. After implementation of the health education programme, the students' knowledge of substance use increased significantly, confirming the success of the programme in improving students' knowledge of substance use.
No significant association was found between the gain in knowledge scores and sociodemographic variables such as gender, age, education of parents, occupational status of fathers, house ownership, and monthly family income. This result suggests that the current education programme can be applied to all high school students, regardless of differences in sociodemographic variables.
The significant improvement in the students' knowledge is consistent with what has been found in previous studies. For example, Kavitha [22] reported that first- and second-year students of a Pre-University College in Indore demonstrated an increase in substance use knowledge after a health education programme. Similarly, Goswami et al [23] examined the effect of a structured teaching programme on the knowledge of nursing students regarding substance use and found that the intervention significantly improved students' knowledge at a one-week post-test. A similar pattern of results was obtained in a study conducted by Isensee et al in Germany [24], which showed that a school-based prevention program improved adolescents' smoking-related knowledge after its implementation. A further finding by Theou et al [25] also coincided with ours, as students' knowledge of substance use clearly increased after implementation of an education program, with a mean difference of 4.23. These results indicate the success of the current programme, which can be attributed to the content of the educational programme, based on the previous literature on substance use, the methods used in its presentation, and most importantly the participation of the social workers ("Rabers"), who played a vital role in educating students about the consequences of substance use [26]. This study has some limitations that should be considered. The first is the use of a pre-test/post-test design without a control group, which weakens the evaluation of the education intervention. The second is the inability to follow the students and assess their knowledge over a long period of time, owing to the timing of the study (at the end of the academic year).
In spite of these limitations, the present study provides a valuable contribution, because, to our knowledge, it is the first study examining an educational intervention to improve substance use knowledge of high school students in the Kurdistan Region of Iraq.
Conclusions
Implementing a health education program about substance use for high school students in Erbil city improved the knowledge of students about this topic, which is why such a program is feasible to implement in the high schools of Erbil. Proper implementation of this school-based prevention program is therefore critical to strengthening protection and reducing the prevalence of substance use amongst this segment of the population.
Ethical considerations
This study was conducted with approval from the research ethics committee at the College of Medicine of Hawler Medical University. Written permission was obtained from the Directorate of Education of Erbil to collect data. Prior to administering the surveys, written informed consent was obtained from the parents of all participating students. All students and their parents were assured that students' participation was voluntary and that the collected data would be used only for the purpose of the present study, as well as for their benefit. | 2018-11-15T08:55:05.813Z | 2018-10-31T00:00:00.000 | {
"year": 2018,
"sha1": "e2596d8a0ac3d6b04d668308458136e037b83b0a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0206063&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2596d8a0ac3d6b04d668308458136e037b83b0a",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
233396974 | pes2o/s2orc | v3-fos-license | Preclinical In Vivo Modeling of Pediatric Sarcoma—Promises and Limitations
Pediatric sarcomas are an extremely heterogeneous group of genetically distinct diseases. Despite increasing knowledge of their molecular makeup in recent years, true therapeutic advances are largely lacking, and prognosis often remains dim, particularly for relapsed and metastasized patients. Since this is largely due to the lack of suitable model systems as a prerequisite for developing and assessing novel therapeutics, we here review the available approaches to model sarcoma in vivo. We focus on genetically engineered and patient-derived mouse models, compare strengths and weaknesses, and finally explore possibilities and limitations of utilizing these models to advance both biological understanding and clinical diagnosis and therapy.
Introduction
Sarcomas are mesenchymal malignancies accounting for about 15% of cancers in children and adolescents, making them the third most common group of childhood cancers, following blood malignancies and brain tumors [1]. While the last decades have seen vast improvements in pediatric cancer care with overall improved prognosis, this does not hold true for sarcomas, which are often prone to metastasis and relapse, typically accompanied by dismal prognosis [2]. Research efforts to improve this situation are complicated by the extremely diverse intrinsic nature of pediatric sarcoma with more than 60 genetically distinct entities [3]. While some pediatric sarcoma types may show widespread genomic instability (e.g., osteosarcoma (OS)), many are genetically rather simple, characterized by pathognomonic fusion oncogenes (e.g., SSX-SS18 in Synovial Sarcoma (SySa)) [4][5][6]. Ongoing molecular profiling efforts will likely lead to further sub-classification as we learn more about specific genetic and epigenetic alterations and underlying biology [7]. Figure 1 provides a snapshot of the most common pediatric sarcoma types according to the recently (2020) updated World Health Organization (WHO) classification of soft tissue and bone tumors ( Figure 1a) [8,9], as well as an unbiased molecular clustering of the most common tumor entities based on DNA methylation data (Figure 1b) [10]. The development of novel therapeutic agents heavily relies on preclinical testing in disease specific models. Given the rarity of each individual pediatric sarcoma subtype, appropriate model systems are naturally scarce, and the selection of suitable models is challenging. This represents a major problem for pediatric cancer research and has significantly contributed to the lack of meaningful therapeutic improvements in pediatric sarcomas [11]. Therefore, in this review, we aim to outline the pros and cons of major in vivo modeling approaches applicable for pediatric sarcoma. 
We review existing models applied for specific sarcoma entities and
In Vivo Modeling Approaches Applicable for Pediatric Sarcoma
Analogous to other solid tumors, four main general approaches of in vivo cancer modeling can be distinguished for pediatric sarcoma ( Figure 2) [12,13]. Two of these entail the engraftment of human cancer tissue into immunocompromised mice-cell-line-derived xenograft models (CDXs) and patient-derived xenograft models (PDXs)-while the other two induce de novo tumorigenesis in immunocompetent wild type mice-environmentally-induced models (EIMMs) and genetically-engineered mouse models (GEMMs). Figure 3 highlights the major advantages and disadvantages of these approaches. Independent of the specific approach, one should consider that different mouse strains, much like humans, possess an inherent and strain-specific risk of spontaneously developing different cancer entities over their lifespans [14,15]. While these can, in some cases, also serve as useful models of human cancer, they should by no means be mistaken for specifically engrafted human or induced murine tumor tissue [16].
CDXs are the most commonly used, but least representative model when aiming to recapitulate the original disease. EIMMs are extremely powerful, but since pediatric sar-
In Vivo Modeling Approaches Applicable for Pediatric Sarcoma
Analogous to other solid tumors, four main general approaches of in vivo cancer modeling can be distinguished for pediatric sarcoma ( Figure 2) [12,13]. Two of these entail the engraftment of human cancer tissue into immunocompromised mice-cell-line-derived xenograft models (CDXs) and patient-derived xenograft models (PDXs)-while the other two induce de novo tumorigenesis in immunocompetent wild type mice-environmentallyinduced models (EIMMs) and genetically-engineered mouse models (GEMMs). Figure 3 highlights the major advantages and disadvantages of these approaches. Independent of the specific approach, one should consider that different mouse strains, much like humans, possess an inherent and strain-specific risk of spontaneously developing different cancer entities over their lifespans [14,15]. While these can, in some cases, also serve as useful models of human cancer, they should by no means be mistaken for specifically engrafted human or induced murine tumor tissue [16].
comas are usually not driven by environmental factors, they are not as relevant for childhood sarcoma. PDXs and GEMMs however, are highly representative of their human counterpart, therapeutically predictive and complementary to each other in nature.
Genetically-Engineered Mouse Models (GEMMs)
GEM models follow the principle of activating or inactivating specific cancer-associated genes in immunocompetent mice to study their impact on tumorigenesis in vivo. While early GEMM-approaches were restricted to costly embryonic stem cell alterations and extensive crossing of chimeric offspring to yield hetero-and homozygous animals for a given gene of interest [17], later work developed transgenic mouse lines expressing Cre under a variety of tissue-specific promoters, allowing conditional gene activation (typically LoxP-Stop-LoxP-Promoter-gene) or inactivation (typically LoxP-gene-LoxP) via crossing of mice [18,19]. However, germ-line models conveying constitutive gene regulation often exhibit embryonic lethality and developmental defects due to the important developmental role of many oncogenes and tumor suppressors genes [20][21][22][23][24]. This problem was overcome by the introduction of tamoxifen-inducible Cre lines, which express Cre in a Tamoxifen-inducible fashion from the endogenous Rosa26-or other more tissuespecific promoters, allowing time-and space-dependent gene regulation pre-and postnatally [25]. Profound advances in model technology entail the application of somatic gene editing approaches via lenti-and adenovirus-delivery or in vivo electroporation [26,27] and their constant optimization for in vivo use [28]. Importantly, Huang et al. could show that both CreLoxP-mediated recombination as well as somatic clustered regularly interspaced short palindromic (CRISPR)/Cas9-mediated gene editing do not lead to differences CDXs are the most commonly used, but least representative model when aiming to recapitulate the original disease. EIMMs are extremely powerful, but since pediatric sarcomas are usually not driven by environmental factors, they are not as relevant for childhood sarcoma. PDXs and GEMMs however, are highly representative of their human counterpart, therapeutically predictive and complementary to each other in nature.
Genetically-Engineered Mouse Models (GEMMs)
GEM models follow the principle of activating or inactivating specific cancer-associated genes in immunocompetent mice to study their impact on tumorigenesis in vivo. While early GEMM-approaches were restricted to costly embryonic stem cell alterations and extensive crossing of chimeric offspring to yield hetero-and homozygous animals for a given gene of interest [17], later work developed transgenic mouse lines expressing Cre under a variety of tissue-specific promoters, allowing conditional gene activation (typically LoxP-Stop-LoxP-Promoter-gene) or inactivation (typically LoxP-gene-LoxP) via crossing of mice [18,19]. However, germ-line models conveying constitutive gene regulation often exhibit embryonic lethality and developmental defects due to the important developmental role of many oncogenes and tumor suppressors genes [20][21][22][23][24]. This problem was overcome by the introduction of tamoxifen-inducible Cre lines, which express Cre in a Tamoxifen-inducible fashion from the endogenous Rosa26or other more tissue-specific promoters, allowing time-and space-dependent gene regulation pre-and postnatally [25]. Profound advances in model technology entail the application of somatic gene editing approaches via lenti-and adenovirus-delivery or in vivo electroporation [26,27] and their constant optimization for in vivo use [28]. Importantly, Huang et al. could show that both CreLoxP-mediated recombination as well as somatic clustered regularly interspaced Pros and cons of different in vivo modeling approaches. While cell-line-derived xenograft models (CDXs) and patient-derived xenograft models (PDXs) are both engraftment models, environmentally-induced models (EIMMs) and genetically-engineered mouse models (GEMMs) can be utilized to establish syngeneic engraftment models (SAMs), enabling scalability for these models, too. Pros and cons of different engraftment sites depicted at the bottom apply to all of the engraftment models, regardless of origin.
Patient-Derived Xenografts (PDXs)
In PDX models, primary patient tumor material is transplanted into immunocompromised mice. Although PDXs cannot recapitulate the early steps of tumorigenesis in an intact immune microenvironment like GEMMs, their standout strength is the recapitulation of the heterogeneity of their human counterparts, with each PDX reflecting an individual patient [31]. This makes them a particularly valuable resource, enabling the correlation of intensive molecular profiling with preclinical therapy response data to direct rational clinical trial design and to select the correct patient cohorts for targeted treatments [32][33][34]. While most PDXs are established by subcutaneous (s.c.) rather than orthotopic engraftment due to easier handling and tumor surveillance, the orthotopic microenvironment may be beneficial for tumor take rates and for preserving tumor biology, as comprehensively analyzed by Stewart et al. [35]. From a technical point of view, one has to consider that tumor take rates vary greatly, between about 20% and 60%, with most successfully engrafted tumors corresponding to aggressive/relapsed/metastasized cases. Further, stable PDX establishment typically requires four passages in vivo, making the entire process from first engraftment to final establishment lengthy and cost-intensive, but nonetheless worthwhile [34,[36][37][38]. Maybe most importantly, both GEMM and PDX models are regarded as highly predictive for clinical therapy response [30,33,38,39].
Established In Vivo Models of Pediatric Sarcoma
To find relevant PDX repositories containing pediatric sarcomas (Figure 4) and to identify established GEM-modeling approaches for 18 relevant pediatric sarcoma entities (Supplementary Figure S1), a systematic literature search was performed. Additionally, the most up-to-date previous sarcoma mouse model reviews, such as those by Dodd et al. (2010) [19] and Post (2012), were also consulted.
Existing Cell-Line-Derived Xenograft Models (CDXs) for Pediatric Sarcoma
As extensively reviewed by Gengenbacher and Singhal et al., the current mainstay of in vivo cancer modeling is the engraftment of long-established cell lines, often cultured over many passages using high amounts of fetal calf serum (FCS) in vitro [13]. This also holds true for sarcoma: over 70% of articles featuring sarcoma mouse models in high-ranking journals in 2016 applied CDX models for both basic and translational research, while utilization of PDXs and GEMMs was under 10% [13]. This is often due to broad availability and experience as well as ease of use. Common "work horses" of sarcoma cell lines also used for in vivo engraftment are Rh30 (alveolar rhabdomyosarcoma-aRMS), A204 (embryonal rhabdomyosarcoma-eRMS), RD (RT), HS-SY-II (SySa), TC71 (Ewing Sarcoma-EwS) and KHOS (osteosarcoma-OS), among many others. Many of these models, however, do not faithfully represent the original tumor and are highly adapted to two-dimensional (2D) culture conditions. TC71, for example, one of the most commonly used EwS cell lines, harbors a BRAF mutation, which is rarely found in EwS patients and can influence therapy response [46,47]. Hinson et al. provide a holistic review of commonly used RMS cell lines and necessary precautions to consider, which can also be applied to other sarcoma entities [48].
Existing Patient-Derived Xenograft Models (PDXs) for Pediatric Sarcoma
PDX models of various entities, including sarcoma, are undisputedly highly valuable tools towards a deeper understanding of cancer biology and treatment advances [33]. The current key challenge, however, is their availability, with many fragmented efforts to collect and transplant patient samples into mice scattered across the world [49]. For sarcoma, this is particularly challenging due to the rarity of specific subtypes and the logistic challenges of obtaining these samples, particularly in countries with decentralized pediatric cancer care [32,35,38,50]. Figure 4 provides a list of PDX repositories containing pediatric sarcoma models. The Memorial Sloan Kettering Cancer Center's Department of Pediatrics will also soon be releasing its collection of pediatric PDX models, including several sarcoma entities such as desmoplastic small round cell tumor (DSRCT) [51].
Some models can also be retrieved amidst larger repositories, mostly containing models for common adulthood cancers. While many of these repositories hold large numbers of models and accompanying data (e.g., https://www.europdx.eu/, last accessed 21 April 2021, with 1500+ models) and try to bridge the efforts of academic institutions and contract research organizations (e.g., https://repositive.io/, last accessed 21 April 2021, with 8000+ models), PDXs of pediatric malignancies remain largely underrepresented or not present at all. Furthermore, since many do not allow filtering available models by age group, they often appear not poised to encourage the deposition of pediatric PDXs.
Most repositories also supply comprehensive data sets on the molecular characterization of PDXs. This is particularly important to increase their value as preclinical testing tools. To this end, the recently proposed so-called "PDX models minimal information standard" (PDX-MI), defining a basic standard of PDX model description, could be of great value to help researchers to pick the right models for their respective question [52].
Existing Environmentally-Induced Mouse Models (EIMMs) for Pediatric Sarcoma
Environmentally-induced sarcoma models are mostly relevant for adulthood sarcoma, since sarcoma in children and adolescents is typically the result of distinct genetic events rather than an accumulation of genetic alterations caused by environmental factors. Zanola et al. and Kemp et al. provide reviews covering several EIMM systems relevant for adulthood cancers [44,53]. Nonetheless, intramuscular (i.m.) injection of cardiotoxin (CTX) or barium chloride to induce muscle damage and subsequent muscle regeneration, with a more activated state of satellite cells (the major stem cell pool for muscle regeneration), has been successfully used to create a regenerative environment with increased susceptibility towards undifferentiated pleomorphic sarcoma (UPS) and RMS in several GEM modeling approaches (see GEMMs section) [54][55][56][57]. Interestingly, genetic models of muscular dystrophy also seem to provide a micromilieu and cellular state that clearly facilitates sarcomagenesis, with a remarkable specificity towards eRMS that even correlates with the severity of muscle dystrophy (see GEMMs section) [54,[58][59][60].
Radiotherapy and chemotherapy can also be regarded as external mutagenic cues that children being treated for cancer frequently face [61]. Since close to 10% of pediatric cancers are likely based on predisposing germ-line variants, many of which can increase susceptibility to mutagenic cues (e.g., radiotherapy in neurofibromatosis), the relevance of this could be largely underestimated [62]. To this end, Lee et al. presented a very informative comparison of murine sarcoma induction by either radiation or local injection of the mutagen 3-methylcholanthrene (MCA) in a wild-type or Tp53-null background, as well as a genetic model of Kras overexpression and Tp53 knockout. These comparisons revealed distinct mutagenic patterns and different levels of genomic stability, depending on the causative event [63].
Existing Genetically-Engineered Mouse Models (GEMMs) of Pediatric Sarcoma
Both the cell of origin, which is often not entirely known for many sarcomas, and the mutational profile likely determine sarcoma biology and appearance [18]. While many sarcomas are defined by pathognomonic driver oncogenes, such as PAX3-FOXO1 in aRMS, UPS and eRMS are not as clearly genetically defined. Additionally, the same genetic alterations (e.g., oncogenic RAS mutation plus TP53 inactivation) can lead to UPS, pleomorphic RMS, and eRMS, which can be seen as a disease spectrum of varying divergence in cell of origin and mutational profile [18]. Details on existing GEMMs of the 18 focus entities researched for the scope of this review are provided in Figure S1.
Undifferentiated Pleomorphic Sarcoma (UPS)
UPS refers to aggressive undifferentiated soft tissue and bone sarcomas, which lack an identifiable line of differentiation. While UPS are more common in adults, they may also occur in children [64]. The early work of genetic cancer modeling in mice focused on the tumor spectrum of mice deficient for major tumor suppressors such as Tp53, including specific mutations mimicking Li-Fraumeni syndrome [65]. While this does not induce specific tumor entities, but rather a plethora of different cancers with increased penetrance compared to unaffected mice, the most commonly occurring neoplasms are lymphomas and sarcomas. Sarcomas typically possess UPS morphology, more rarely also OS, RMS, and angiosarcoma appearance [17]. Sarcoma penetrance varied from about 10 to 50% when mice were surveilled over their entire lifespan [66,67]. If additional tumor suppressors, such as Pten, are knocked out, the efficiency increases dramatically (100% penetrance/10 weeks median latency) [68]. Furthermore, oncogenic Ras could be identified as one of the strongest oncogenes, requiring co-occurring tumor suppressor silencing (e.g., Tp53 or Rb1) to avoid apoptosis and senescence [27,69,70]. While the introduction of TP53 hotspot mutations is even more efficient in tumorigenesis than Tp53 loss and leads to spontaneous metastasis in about 13% of cases [70], mutant Ras can also cooperate with Cdkn2a inactivation to induce UPS with similar efficiency [26]. Applying Myf7- and MyoD-CreER lines, Blum et al. identified Myf7-positive muscle progenitors as a cell of origin for both UPS (62%) and RMS (38%, 63% of which were graded eRMS), while MyoD-positive progenitors only led to UPS (70% of which showed myogenic features) [55]. This is believed to correlate with the activation status of satellite cells as the muscle regeneration stem cell pool, which could also be induced by i.m. injection of cardiotoxin to cause muscle damage and regeneration.
Embryonal/Fusion-Negative Rhabdomyosarcoma (eRMS) and Pleomorphic RMS
Embryonal rhabdomyosarcoma is the most common RMS subtype and is genetically more diverse than aRMS and other sarcomas. This is reflected by the plethora of different eRMS models induced by different oncogenes and tumor suppressors over the last 25 years [71]:
• Sonic Hedgehog signaling: interestingly, one of the first identified RMS GEMMs with embryonal morphology was incidentally found in a Ptch-inactivated mouse model of Gorlin syndrome, an autosomal dominant syndrome predisposing towards basal cell carcinoma, medulloblastoma, and RMS. This model developed eRMS with an incidence of 9% in CD-1 and 1% in C57BL/6 mice, respectively, also highlighting the relevance of mouse strain differences for studying tumorigenesis [72].
Alveolar/Fusion-Positive Rhabdomyosarcoma (aRMS)
aRMS constitutes the second most common RMS subtype and usually occurs in adolescents and young adults (peak incidence at 10-25 years of age) [84]. aRMS exhibits skeletal muscle differentiation, and specific molecular alterations (either a PAX3-FOXO1 or a PAX7-FOXO1 gene fusion) are detected in the majority of cases [85]. Despite this clear molecular definition, the cell of origin of aRMS is not entirely clear, complicating mouse model development [86]. Lagutina et al. and others found that constitutively expressing the Pax3-Foxo1 fusion from the endogenous Pax3 locus leads to developmental defects, but not tumorigenesis [87]. Heterozygous and chimeric mice showed developmental muscle defects and died perinatally from cardiac/respiratory failure. Interestingly, Pax3-Foxo1 expression from the exogenous PGK, MyoD, and rat beta-actin promoters did not yield any phenotype.
Keller et al. later validated the developmental role of Pax3-Foxo1 upon embryonic expression and further applied a conditional model using Pax7-Cre to knock in Pax3-Foxo1 into the endogenous Pax3 locus [88]. This expression, starting in terminally differentiating muscle cells, gave rise to aRMS, although with extremely low penetrance (1 of 228 mice, about a year after birth) [89]. Additionally, inducing haploinsufficiency for Pax3 by a second conditional allele did not accelerate tumorigenesis. Inactivation of Cdkn2a, and even more so of Tp53, however, increased efficiency markedly to about 30-40% when carried out on both alleles [89]. This model was further characterized molecularly by Nishijo et al., who found a preserved gene expression signature between human and murine aRMS and observed that spontaneous metastasis, albeit occurring at very low frequency, was selected for high expression of the Pax3-Foxo1 fusion [90].
A follow-up study by Abraham et al. from the Keller laboratory further utilized this conditional Pax3-Foxo1 model, using four different Cre lines [91]: MCre (pre- and postnatal hypaxial lineage of Pax3 that includes postnatal satellite cells), Myf5-Cre (pre- and postnatal lineage of Myf5 that includes quiescent and activated satellite cells and early myoblasts), Myf6-CreER (pre- and postnatal lineage of Myf6 that includes maturing myoblasts) and Pax7-CreER (postnatal lineage of Pax7 that includes quiescent and activated satellite cells) [18]. While tumors of MCre (40% penetrance/median 29 weeks) and Myf6-CreER (100% penetrance/median 15 weeks) mice showed typical aRMS morphology, Pax7-CreER tumors had a spindle/pleomorphic appearance reminiscent of fusion-negative RMS and prolonged tumor-free survival (65% penetrance/median 48 weeks) [91]. Strikingly, reporter gene and fusion gene expression was also lower in Pax7-CreER tumors, indicating lower transcription from the Pax3 locus in these mice and conveying divergent therapy response to the histone deacetylase (HDAC) inhibitor entinostat. Myf5-Cre mice were embryonically lethal, with only one mouse surviving and developing aRMS amidst increased anaplasia after 10.5 weeks. Unfortunately, so far, none of the described RMS models could shed light on the age discrepancy between eRMS and aRMS patients.
Spindle Cell/Sclerosing RMS with MYOD1 Hotspot Mutation
MYOD1-mutant RMS represents a distinct subtype of spindle cell and sclerosing RMS. While the recurrent hotspot mutation of this biologically distinct RMS is known and appears to result in a particularly aggressive clinical course, no GEM modeling attempts could be identified in the literature to date [93]. Only a single extensively characterized patient-derived cell line model was identified [94].
Osteosarcoma (OS)
Osteosarcoma, a bone sarcoma characterized by a complex karyotype, can occur in children/young adults or later in life (about 60 years of age). OS is associated with genetic predisposition, in particular to Li-Fraumeni and retinoblastoma syndromes, and most OS exhibit mutations/deletions of TP53 and/or RB [95][96][97]. Accordingly, several murine models relying on Tp53 inactivation have been applied to study osteosarcoma. Despite the fact that Tp53-null mice are prone to several malignancies, it has been reported that 4% develop osteosarcomas (OS showing longer latency than other malignancies), with a higher frequency of OS (25%) in Tp53-heterozygous mice [17,65]. Consistent with a key role for p53 in osteosarcoma, mice harboring the p53R172H gain-of-function mutant knock-in develop osteosarcomas able to metastasize to other organs [67]. On the other hand, mice heterozygous for Rb deletion are not predisposed to OS, while mice homozygously deleted for Rb die in utero [98,99]. The development of several cell-lineage-specific Cre-expressing lines has allowed the generation of many additional and improved GEMMs in which Tp53 and/or Rb are inactivated in the mesenchymal/osteogenic lineage, therefore more faithfully resembling the human disease. Inactivation of Tp53, alone or together with Rb, in Prx-1-positive cells (mesenchymal/skeletal progenitors) or in Osx-, Col1A1-, or Og2-positive cells (preosteoblasts and osteoblasts) generates OS with high penetrance, often leading to metastatic disease [100][101][102][103][104][105][106]. For a detailed summary of genetically engineered mouse models for OS, see Figure S1 and recent reviews provided by Guijarro et al. [107] and Uluçkan et al. [108].
Ewing Sarcoma (EwS)
EwS is a small round cell sarcoma most commonly arising in the bone of children and young adults (most cases < 20 years of age). Extraskeletal manifestation of EwS occurs in about 12-20% of affected patients [109]. Ewing sarcoma's pathognomonic driver oncogene EWSR1-ETS (typically EWSR1-FLI1) functions as a neo-transcription factor as well as an epigenetic regulator by inducing de novo enhancers at GGAA microsatellites [109]. Unfortunately, all 16 attempts in 6 independent laboratories to create a transgenic Ewing sarcoma mouse model have failed to date, as comprehensively presented in a joint manuscript by Minas et al., 2017 [22]. Most attempts did not lead to any tumorigenesis at all, despite using various tissue-specific promoters to target the potential cells of origin of EwS in different stages of development: mesenchymal stem cells (MSCs), neural crest stem cells, and embryonic osteochondrogenic progenitors [110]. Developmental EWSR1-FLI1 expression typically led to embryonic lethality, while conditional expression in later stages led to various developmental defects, such as muscle degeneration. One modeling approach with successful tumorigenesis applied EWSR1-FLI1 transduction into bone-marrow-derived MSCs followed by intravenous (i.v.) injection into sub-lethally irradiated mice; however, it did not result in EwS, but in fibrosarcoma located in the lung.
Possible explanations for these marked difficulties in model generation could lie in promoter leakiness and the lack of potential co-factors, but also in distinct biological differences between mice and humans, such as divergent splice acceptor sites, low CD99 homology, and unequal GGAA microsatellite architecture, which is not well conserved between species [111]. In particular, the important epigenetic regulator function of EWSR1-FLI1 depends on an appropriate number of chromatin-accessible GGAA microsatellites in proximity to relevant genes to allow transformation without inducing apoptosis, a configuration that might not exist in mice. The fact that in vitro transformation of murine cells, such as osteochondrogenic progenitors, is possible and to some extent resembles human EwS suggests that creating an endogenous EwS GEMM could be conceivable [112,113].
Synovial Sarcoma (SySa)
SySa represents a spindle cell sarcoma with variable epithelial differentiation, harboring a pathognomonic SS18-SSX1/2/4 gene fusion. Haldar et al. showed that when SS18-SSX2 expression is induced in Myf5-expressing myoblasts, 100% of mice develop synovial sarcoma-like tumors [114]. Importantly, the induction of SS18-SSX2 expression through Hprt-Cre, Pax3-Cre, or Pax7-Cre resulted in embryonic lethality, while SS18-SSX2 activation in Myf6-expressing myocytes or myofibers resulted in myopathy but no tumors. Therefore, this fusion is able to induce tumorigenesis in mice when expressed at the right time, in the right cell population (permissive cellular background) [114][115][116]. In the Myf5-Cre lineage, tumors formed with 100% penetrance, presented both biphasic and monophasic histology, and expressed a gene signature that partially overlapped with that of human synovial sarcoma [114]. Locally induced expression of SS18-SSX1 or SS18-SSX2 using TAT-Cre injection also yields tumors, however with longer latency than in Myf5-Cre mice. Exome sequencing identified no recurrent secondary mutations in tumors of either genotype (SS18-SSX1/2), further supporting the idea that the fusion alone is able to drive the disease in a specific permissive background [117]. Using this locally induced model, Barrott et al. showed that Pten silencing dramatically accelerates and enhances sarcomagenesis without compromising synovial sarcoma characteristics and additionally leads to spontaneous lung metastasis [118]. The same laboratory further showed that co-expression of a stabilized form of β-catenin greatly enhances synovial sarcomagenesis by enabling a stem-cell phenotype in synovial sarcoma cells, blocking epithelial differentiation and driving invasion [119].
Malignant Peripheral Nerve Sheath Tumor (MPNST)
Sarcomas rarely follow the benign-to-malignant multistep progression course prototypical for many of the "big killers" in adult oncology (e.g., colorectal carcinoma). MPNST, however, can partly be regarded as an exception to this rule, as it often develops from plexiform or dermal neurofibromas, benign lesions with homo- or heterozygous deletion of the tumor suppressor neurofibromin (NF1), which inhibits Ras signaling through its GTPase-activating protein (GAP) activity [120]. This can mostly be attributed to the fact that a large proportion of MPNSTs arises in neurofibromatosis patients, an autosomal dominant disease caused by inactivating NF1 mutations, but also to the fact that the unusually large NF1 gene is among the most frequently mutated genes of the human genome [121]. Additional genetic hits, such as TP53 or Cdkn2a inactivation, induce malignant progression of neurofibromas. In mice, while Nf1 plus Tp53 deletion in Schwann cells leads to MPNST, inactivation in more mesenchymal progenitors or muscle cells leads to Nf1-inactivated RMS or UPS [122]. NF1 and P53 are also exemplary of the developmental importance of many tumor suppressor genes and oncogenes. The first Nf1/Tp53 knockout models, described in 1999 by both Cichowski et al. and Vogel et al., showed embryonic lethality upon inactivation of these genes in embryonic development [20,21]. Later conditional knockout models using Cre lines driven by different tissue-specific promoters established the Schwann cell lineage as the cellular origin of MPNSTs (comprehensively reviewed by Brossier et al., 2012) [41]. Since then, Dodd et al. showed that local injection of an adenovirus delivering Cre into the sciatic nerve of Tp53-wild-type mice (Nf1 Flox/Flox; Ink4a/Arf Flox/Flox) also induces MPNST, while intramuscular injection leads to RMS/UPS [122]. Huang et al. validated that MPNST induction can also be obtained via lentiviral delivery of a CRISPR/Cas9 construct targeting Nf1 and Tp53, when injected into the sciatic nerve of wild-type mice [29].
Infantile Fibrosarcoma (IFS)
IFS represents a primitive sarcoma of fibroblastic differentiation, in many cases harboring a characteristic ETV6-NTRK3 fusion [123]. While no comprehensive GEM models have been described to date, both ETV6-NTRK3 and the non-canonical fusion gene EML4-NTRK3 have been shown to transform murine NIH3T3 fibroblasts, which, upon s.c. engraftment in severe combined immunodeficiency (SCID) and NOD SCID gamma (NSG) immunocompromised mice, gave rise to tumors with IFS-like histomorphology [124,125].
Malignant Rhabdoid Tumors (MRT)
Malignant rhabdoid tumors (MRT) are aggressive, poorly differentiated pediatric cancers, characterized by the presence of germline/somatic biallelic inactivating mutations or deletions of the SMARCB1 (INI1, SNF5, or BAF47) gene, which encodes a component of the SWItch/Sucrose Non-Fermentable (SWI/SNF or BAF) chromatin-remodeling complex. Tumors can arise in the soft tissue or the kidney and, less commonly, in the central nervous system (where they are referred to as atypical teratoid/rhabdoid tumors; AT/RT). In mice, heterozygous Smarcb1 loss predisposes to soft-tissue sarcomas consistent with human MRT, but with low penetrance (approximately 12%) [126][127][128]. However, homozygous or heterozygous deletion of Tp53 (but not of Cdkn2a or Rb1) in Smarcb1-heterozygous mice accelerates tumor onset and increases penetrance [129,130]. Conditional inactivation of Smarcb1 in mice (Smarcb1 Flox/Flox, Mx-Cre plus polyI/polyC treatment) results in rapid cancer susceptibility, with all animals developing tumors at a median age of 11 weeks [131]. These lesions exhibit many features of rhabdoid tumors, such as rhabdoid cells and complete absence of Smarcb1 expression.
Clear Cell Sarcoma of Soft Tissue (CCS)
CCS is an aggressive neoplasm that usually arises in the deep soft tissue of young adults. The genetic hallmark of CCS is t(12;22)(q13;q12) leading to an EWSR1-ATF1 gene fusion [132]. Straessler et al. published a model for conditional and Tamoxifen-inducible EWSR1-ATF1 expression under the endogenous Rosa26 promoter [133]. Temporal regulation of the fusion gene expression was required due to embryotoxicity of EWSR1-ATF1. Conditionally expressing human cDNA of EWSR1-ATF1 without any accompanying alterations led to highly efficient tumorigenesis of CCS-like tumors with 100% penetrance that resemble human CCS morphologically, immunohistochemically and by genome-wide expression profiling [133].
Alveolar Soft Part Sarcoma (ASPS)
ASPS is a deadly soft tissue malignancy, which consistently demonstrates a t(X;17)(p11.2; q25) translocation that produces the ASPSCR1-TFE3 fusion gene [134]. Expression of human cDNA in a temporal fashion (Rosa26-CreER) leads to ASPS-like tumors that resemble the human disease in terms of histology and expression patterns. Mouse tumors demonstrate angiogenic gene expression and are restricted to the tissue compartments highest in lactate, suggesting a role for lactate in alveolar soft part sarcomagenesis [135].
• Clear cell sarcoma of the kidney (CCSK): CCSK is a rare neoplasm that typically arises in the kidney of infants and young children. CCSK has a dismal prognosis, often showing late relapses [136]. Recently, an internal tandem duplication of exon 15 of BCL-6 corepressor (BCOR) was identified as the major oncogenic event in CCSK [137].
• Small round blue cell tumor with BCOR alteration (SRBCT-BCOR): SRBCTs represent a heterogeneous group of tumors, from which SRBCT-BCOR was only recently defined as a stand-alone entity. SRBCT-BCOR typically harbors BCOR-related gene fusions (e.g., BCOR-CCNB3) or an internal tandem duplication within exon 15 of BCOR [138,139]. SRBCT-BCOR are rare neoplasms, mostly arising in infants and young children, showing a striking male predominance [140].
• Small round blue cell tumor with CIC alteration (SRBCT-CIC): similarly to SRBCT-BCOR, SRBCT-CIC was recently identified as a distinct subtype of SRBCT [141]. In most cases a CIC-DUX4 gene fusion is identified [142]. SRBCT-CIC may arise in children and older adults; however, most cases are observed in young adults (25-35 years of age) [143].
• Desmoplastic small round cell tumor (DSRCT): DSRCT is a malignant mesenchymal neoplasm, most frequently arising in the abdominal cavity of children and young adults [144]. Typically, DSRCT harbors t(11;22)(p13;q12), leading to an EWSR1-WT1 gene fusion [145].
• Mesenchymal chondrosarcoma (MC): MC is a rare neoplasm, typically arising in craniofacial bone and adjacent soft tissues of young adults [146]. The genetic hallmark of MC is a HEY1-NCOA2 gene fusion [147].
• Inflammatory myofibroblastic tumor (IMT): IMT is a myofibroblastic neoplasm arising in various locations, which usually shows a benign clinical course. However, a few patients will present with local recurrence and/or distant metastasis [148]. In IMT, gene rearrangements affecting receptor tyrosine kinase genes (most often involving ALK) are typically identified [149].
The lack of models for many entities is a consequence of limited knowledge regarding the tumor cell of origin, the high heterogeneity of cellular backgrounds, the rapid emergence of new molecular subtypes, and the need for more flexible models that allow the testing of various genetic alterations in different cellular backgrounds.
Non-Murine Animal Models for Pediatric Sarcomas
Mice as modeling organisms offer a great trade-off between appropriate resemblance of their human counterparts, and thereby translatability, and decently short generation times and experimental practicality. However, all aforementioned modeling approaches in mice can, in principle, also be applied in other animal species. In particular, zebrafish represent a suitable modeling organism with largely untapped potential in pediatric solid cancer research. While zebrafish models might not be as translatable as mouse models due to their non-mammalian nature, shorter generation times, higher scalability, lower costs, extracorporeal embryonic development, and skin transparency (allowing live cell imaging) render them a powerful and complementary modeling tool for pediatric tumors [150]. Good examples of such genetically engineered zebrafish models for pediatric sarcoma can be found in the study by Parant et al. on MPNST [151] as well as in the recent review by Casey et al. on zebrafish sarcoma models for pediatric cancers [152]. For rhabdomyosarcoma, an eRMS model expressing KRAS G12D in muscle satellite cells via the Rag2 promoter by Langenau et al. [153,154] as well as an aRMS model systemically expressing the PAX3-FOXO1 fusion by Kendall et al. [155] have been developed to date. These models have already proven to be a valuable tool to deepen the understanding of RMS tumorigenesis [156]. Galindo et al. further showed that expression of PAX3-FOXO1 in syncytial muscle fibers of Drosophila can drive sarcomagenesis, which is further supported by constitutive RAS expression [157]. Apart from GEMMs, zebrafish can also serve as host organisms for engraftment models, such as PDXs.
While this was previously hampered by the typical 32 °C living conditions of zebrafish and restricted to the first four weeks of life, when the fish's immune system is not yet fully developed, the recently reported prkdc−/−; il2rga−/− line represents a 37 °C-adapted immunocompromised zebrafish line, allowing the engraftment of several cancer types, including eRMS, and drug-response monitoring via live cell imaging [158].
Canine models have also been important for sarcoma research especially for osteosarcoma. Spontaneous OS is quite common in large dogs and highly resembles human OS in terms of gene expression profiles and histological analysis [159,160]. From a genetic perspective, OS in dogs is also characterized by complex karyotypes with variable structural and numerical chromosomal aberrations and involves many of the genes important for human OS pathogenesis including TP53, RB, and PTEN [161][162][163]. Because osteosarcoma naturally occurs with high frequency in dogs and shares many biological and clinical similarities with osteosarcoma in humans, canine OS models have provided means to understand the disease at different levels. Most importantly, they provided an opportunity to evaluate new treatment options, and indeed the development of treatment strategies in dogs and humans has mutually benefited both species [164]. Although canines have been instrumental for OS research, it is worth noting that OS in dogs occurs exclusively in old age, not entirely mimicking the human disease that peaks in adolescence [107].
Applications of Pediatric Sarcoma Mouse Models
First and foremost, model generation is no end in itself. Both biological and translational advances require purposeful utilization of the right model system for the respective research question. While CDXs are still by far the most commonly used models due to their broad availability and ease of use, PDX or GEM models are typically the most suitable for both basic and translational research questions (Figure 5). In general, GEM models are ideal for deepening our understanding of basic sarcoma biology, while PDX models are of particular value for preclinical testing, adequately representing patient heterogeneity. While many cell-based immunotherapies can also be tested in conventional PDX models, immunotherapies requiring endogenous immune cells can be tested either in GEMMs or in humanized PDX models, the latter being very costly and technically challenging, thus largely limiting their use [165,166] (Figure 6b). Both PDX and GEM models are suitable for local therapy advancement and imaging studies, a rather underrepresented field of research given the importance of complete resection for clinical outcome [68,167].
GEMMs of different genetic makeup can also be used to assess the fraction of tumor cells with tumorigenic potential, following the notion that some tumors rely on a small fraction of cells to drive overall cell renewal and tumor growth [168]. Following this cancer stem cell idea, Buchstaller et al., for example, compared the tumorigenic potential of two engrafted MPNST GEMMs and found that transplanted MPNST cells from Nf1+/− Ink4a/Arf−/− tumors encompassed a 10-fold higher fraction of cells with cancer-initiating potential than MPNST cells from Nf1+/− and Tp53+/− tumors [169].
Despite these advantages of PDXs and GEMMs, CDXs possess one prime advantage that GEMMs and PDXs often lack: they entail a corresponding in vitro system, allowing versatile functional characterization, and they are often very well characterized. This strength, paired with the high practicality of their use, makes them a highly valuable tool for present and future sarcoma research.
Considerations for Preclinical Testing
A major consideration for model utilization is how to design meaningful and translatable preclinical therapy trials. This is particularly important for pediatric sarcoma, since the rarity of individual subtypes combined with the incredibly diverse array of subsets complicates rational clinical trial design. To this end, Langenau et al. provided a comprehensive and contemporary review highlighting 10 key points to consider when designing preclinical treatment trials [170]. The key concept is to apply the same principles and guidelines used in clinical phase I, II, and III trials to the preclinical setting in a similar stepwise approach, by conducting preclinical phase I, II, and III trials alike [170]. This systematic approach is equally feasible for combinatorial agent testing in vivo and has elucidated that some synergistic effects are mediated by the in vivo environment and are not picked up in vitro [171]. A prerequisite of paramount importance for successful in vivo trials is the appropriate selection of promising treatment agents, based on comprehensive molecular and drug-screening in vitro data [49]. Equally important is the selection of a set of appropriate and well-characterized model systems adequately representing patient heterogeneity, including relevant patient subsets based on molecular characteristics serving as biomarkers, possibly informing about treatment response [172,173]. Connecting molecular model characterization and drug-response data is an essential avenue in moving towards precision oncology in pediatrics [172]. Recent reports applying this concept by conducting single-mouse-design studies highlight the feasibility and translational relevance of this approach [33,174,175]. Approaches to use PDX models as avatars for individual patients being treated in parallel in the clinic are possible in principle, but typically hampered by time-consuming model establishment and variable engraftment rates [176]. Nimmervoll et al.
further introduced the concept of a mouse clinic, aiming to more closely resemble multimodal clinical therapy, including chemotherapy, radiotherapy, and local resection, for testing the application of new targeted treatments [177]. While this elaborate approach will likely be too complex and resource-intensive for general use, one should carefully consider combining new targeted treatments and immunotherapies with the mainstays of current therapy, including local resection and radiation. A more feasible approach to deepen the insights gained from therapy trials is the use of molecular barcoding of engrafted cells to reveal therapy-induced clonal selection processes [178][179][180] (Figure 6c). Useful examples of how to present preclinical therapy-response data can be found in the review by Gengenbacher et al. [13].
Future Directions
Ideally, the toolbox of preclinical sarcoma models should sufficiently represent the immense intertumoral heterogeneity of the different sarcoma subtypes. The major limitation to establishing enough such highly predictive sarcoma models remains the scarcity of patient material available for research on these rare diseases. As recently outlined by Painter et al. in "The Angiosarcoma Project", part of the "Count Me In" initiative by the Broad Institute, a more patient-centered research approach, empowering patients to directly share their experiences, samples, and data, proved very successful in overcoming the logistical difficulties of non-centralized treatment of rare cancers, and it could serve as a blueprint for pediatric solid tumors, helping affected children and their families engage in meaningful research to improve future therapies [181]. More information on the patient-centered "Count Me In" initiative, which has recently expanded to OS, can be found here: https://joincountmein.org/, last accessed 21 April 2021.
For existing models, a key challenge will be to make the scarcely used but particularly representative and predictive GEMM and PDX models more available to both academic and industrial research, in order to increase the relevance and translational validity of obtained results [13]. This is often complicated by the legal frameworks accompanying the exchange of models, particularly for PDXs, since they entail genomic patient information. Increased model availability would make a broadly accepted standard for model description even more important (e.g., PDX-MI, the PDX models minimal information standard) [52].
For GEMMs, more effort should be directed towards establishing syngeneic transplantation models, allowing biobanking as in PDX models, but with an immunocompetent background. Especially immunotherapeutic approaches for children with sarcoma would benefit immensely from this strategy. Importantly, once a GEMM-derived syngeneic transplantation model is established, GEM tissue becomes expandable, and the now scalable models can be made available beyond the host institution, while costs drop from high to moderate [12] (Figure 6a). Ideally, syngeneic engraftment models (SAMs) should be accompanied by matched FCS-free 3D spheroid/tumoroid cultures for in vitro screening and functional studies [182,183]. This is particularly important since vast potential for future treatment advances might lie in rational combination therapies, which all the more require high-throughput in vitro screening before validation in vivo [45,61]. Additional, largely unexplored potential lies in revisiting previously developed GEMMs alongside human samples with up-to-date molecular profiling techniques, as exemplified by Mito et al. [184]. To this end, the recently released 285k methylation array for broad classification, or single-cell methylation analysis of mouse samples, might help deepen the knowledge on the cellular origins and epigenetic determinants of various sarcoma subtypes [110,185]. To make the best use of existing models, open-source platforms for accessing generated molecular data, and microarrays for simultaneously staining specific targets across many existing models, can be of great value for selecting appropriate models for particular studies [49]. Figure 6 highlights a selection of different modeling approaches that could be of additional or complementary value to various established models. For so-far elusive GEMMs, such as EwS, which seem to require a human-specific molecular background, human-mouse chimera models could provide a solution (Figure 6d). For instance, Cohen et al.
successfully developed such a model for neuroblastoma (NB) recently, by introducing NB-specific alterations on a doxycycline (Dox)-inducible vector into human stem cells (in this case hNCCs, human stem-cell-derived neural crest cells) before injecting them into early mouse embryos [186]. NB-specific gene expression could later be regulated by Dox administration in the chimeric offspring. Another already commonly used approach to regulate gene expression in vivo via alimentary Dox is the use of engraftment models transduced with vectors carrying a Dox-inducible RNA interference (e.g., TRE-shRNA) or inducible nuclease (e.g., Cas9-sgRNA) cassette to inducibly knock down or knock out a gene of interest [187] (Figure 6e). Such approaches are also feasible on a systemic level to investigate the systemic toxicities of genetically targeting a specific gene [188,189] (Figure 6f). Genetic dropout screens via RNAi or CRISPR are also no longer limited to the in vitro setting, but can be carried out in vivo as well [190]. This is likewise possible in models of metastasis and progression [191] (Figure 6g), a phenomenon of utmost importance for sarcoma patients but understudied via mouse modeling [62]. Many mouse models do not metastasize spontaneously before mice have to be sacrificed due to primary tumor burden [192,193]. Thus, the most translationally relevant approach for metastasis modeling is local tumor resection, typically via limb resection, which holds much promise for further elucidating mechanisms of metastasis for the various pediatric sarcoma entities, as well as possible therapeutic vulnerabilities [194,195]. At the same time, it enables the utilization of PDX and GEM models for the improvement of local resection, e.g., via molecular imaging or photodynamic therapy, which, despite promising ongoing efforts, is still a rather unexplored field of pediatric sarcoma research with much untapped potential to improve clinical outcomes [195][196][197][198][199][200].
Conclusions
Mouse models of pediatric sarcoma are an indispensable tool for deepening our understanding of this incredibly diverse group of diseases. Among them, genetically engineered and patient-derived xenograft models are the most representative and predictive model types for meaningful basic and translationally relevant research. Recent technological advances, such as somatic GEM modeling, inducible in vivo gene regulation, and cellular barcoding of engraftment models, and first and foremost strong collaborative and international efforts to establish representative model repositories, have the power to adequately represent at least the major high-risk entities of the diverse landscape of pediatric sarcoma. If utilized correctly for preclinical testing, these models have the potential to transform the future clinical treatment of sarcomas of childhood and adolescence.
Data Availability Statement:
No new data were created in this study. Data sharing is not applicable to this article.
Acknowledgments:
We thank Nicolas Gengenbacher, Mahak Singhal, Hellmut Augustin, et al. for sharing details on sarcoma modeling from 2016 [13]. We further thank Christian Kölsche et al. for allowing us to utilize DNA methylation data from the sarcoma classifier [10] and Katharina Hartwig for help with the graphic design. Illustrations and figures were made with BioRender and Affinity Designer. We apologize to all authors whose related valuable work was not discussed due to space constraints.
Conflicts of Interest:
The authors declare no conflict of interest.
Proteomic Analysis of Rapeseed Root Response to Waterlogging Stress
The overall health of a plant is constantly affected by its changing and hostile environment. Due to climate change and the farming pattern of rice (Oryza sativa) and rapeseed (Brassica napus L.), waterlogging stress poses a serious threat to productivity assurance and the yield of rapeseed in China's Yangtze River basin. In order to improve our understanding of the complex mechanisms behind waterlogging stress and to identify waterlogging-responsive proteins, we first conducted an iTRAQ (isobaric tags for relative and absolute quantification)-based quantitative proteomic analysis of rapeseed roots under waterlogging treatment, for both the tolerant cultivar ZS9 and the sensitive cultivar GH01. A total of 7736 proteins were identified by iTRAQ, of which several hundred showed different expression levels, including 233, 365, and 326 after waterlogging stress for 4H, 8H, and 12H in ZS9, respectively, and 143, 175, and 374 after waterlogging stress for 4H, 8H, and 12H in GH01, respectively. For proteins repeatedly identified at different time points, gene ontology (GO) cluster analysis suggested that the responsive proteins of both cultivars were enriched in the biological processes of DNA-dependent transcription and the oxidation-reduction process, and in responses to various stress and hormone stimuli, while different distribution frequencies in the two cultivars were observed. Moreover, overlapping proteins with similar or opposite tendencies of fold change between ZS9 and GH01 were observed and clustered based on their differential expression ratios, suggesting the two genotypes exhibit diverse molecular mechanisms or regulation pathways in their waterlogging stress response. Subsequent qRT-PCR (quantitative real-time polymerase chain reaction) results verified the candidate proteins at the transcription level, in preparation for further research.
In conclusion, proteins detected in this study might perform different functions in waterlogging responses and would provide information conducive to better understanding adaptive mechanisms under environmental stresses.
Introduction
Rapeseed is an important oil crop due to the high oil content of its seed, and it accounts for one-third of edible oil production around the world [1]. However, rapeseed is sensitive to various environmental stresses; abiotic stress conditions such as drought, salinity, flooding, and cold severely limit crop growth and production, resulting in agricultural economic losses and production risks [2][3][4][5]. Waterlogging, also known variously as flooding, submergence, soil saturation, anoxia, and hypoxia, is a considerable agricultural problem around the world, and it usually manifests as one of two types in the field: (1) "waterlogging" refers to the root and part of the shoot being submerged underwater, while (2) "complete submergence" refers to the entire plant being submerged underwater [6]. Since oxygen is necessary for respiration in roots, oxygen deficiency causes the main damage, as waterlogging always results in anoxic soils and hypoxia within roots, which limits root growth and consequently reduces shoot growth and yield [7,8]. In addition, waterlogging stress raises the probability of pathogen infection [9]. As a result, improving waterlogging tolerance has become an urgent priority for crop breeding programs [6,10].
Crops have evolved various acclimation strategies at the morphological, cellular, and metabolic levels, such as the formation of adventitious roots and aerenchyma, increased soluble sugar content, enhancement of the glycolytic pathway, activation of antioxidant defense, and triggered innate immunity [6,11,12]. Under oxygen-deficient conditions, the formation of adventitious roots at the soil surface is universally recognized as an important means of waterlogging stress adaptation in different species [13]. Previous studies have shown that when the oxygen supply is difficult to maintain, plants initiate organogenesis, with adventitious roots emerging from stem nodes and thus improving gas diffusivity around the root [14,15]. In various plant species, the significance of ROS or hormonal regulation in adventitious roots has been reported extensively [15][16][17][18], and adventitious root formation has been successfully applied as a root-system-architecture trait in genetic resource development and identification [19][20][21][22][23]. However, this process remains largely unknown in rapeseed. To date, there have been many achievements in the discovery of waterlogging mechanisms. Ethylene is the primary signal for most adaptations under waterlogging stress. When rice becomes submerged, ethylene promotes root emergence from the nodes by inducing ROS formation and cell death in the epidermal cells [24]. Similarly, in Solanum dulcamara, ethylene co-opts the ABA and auxin signaling cascades to regulate root development in flooded soil [25]. In wheat, it has also been reported that ethylene can induce the expression of genes correlated with aerenchyma formation, glycolysis, and the fermentation pathway [7]. Gene functional analysis demonstrates that the group VII ethylene response factors (ERFs) play a crucial role in the regulation of the waterlogging response.
Submergence-1 (Sub1) is a major quantitative trait locus (QTL) affecting submergence tolerance in rice. In the tolerant line, this QTL encodes SUB1A, a member of the group VII ERFs in rice, which limits the GA responsiveness promoted by ethylene. The QTL functions via a quiescence strategy that economizes carbohydrate reserves, thus helping the plant stay alive under submergence and regrow afterwards [26][27][28]. Meanwhile, two other ERFs, SNORKEL1 (SK1) and SNORKEL2 (SK2), trigger internode elongation by promoting the gibberellin pathway under deepwater conditions [29]. Furthermore, five members of the group VII ERFs in Arabidopsis (HRE1, HRE2, RAP2.12, RAP2.2, and RAP2.3) have been proven to participate redundantly in the anaerobic response [30][31][32]. However, the mechanisms in rapeseed still need to be established, and the gene or genes responsible for waterlogging tolerance have not been identified.
In recent years, research based on transcriptome techniques has been applied to gain a preliminary understanding of the adaptive mechanisms adopted by plants in combating waterlogging stress. Studies have demonstrated that waterlogging stress in the root leads to drastic expression changes of genes related to oxidation reduction, secondary metabolism, transcription regulation, and translation regulation [33,34]. In addition, waterlogged rapeseed leaves respond to hypoxia by regulating genes related to the scavenging of reactive oxygen species, degradation (of proteins, starch, and lipids), and premature senescence [35]. Nevertheless, transcriptome profiling has limitations, because mRNA levels are not always correlated with the corresponding proteins, mainly because of post-transcriptional regulation [36]. More recently, proteomic analyses were conducted on soybean and maize seedlings under different waterlogging conditions [37,38]. iTRAQ combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS) is a high-throughput and stable strategy that explores dynamic changes in protein abundance with highly accurate quantitation of different proteins, especially lowly expressed proteins [39][40][41]. Thus far, considerable work focusing on various stress traits has been carried out: for example, waterlogging stress in cucumber [42] and maize [38], salt stress in rapeseed [43,44], maize [45], and wheat [46], heat stress in Arabidopsis [47], and artificial aging in rapeseed [45]. However, to date, limited information is available about waterlogging-responsive proteins in rapeseed. This has limited our understanding of the molecular mechanisms adopted by this important crop in response to waterlogging stress.
Waterlogging stress has significant effects on rapeseed (Brassica napus L.) at all stages of development, and our previous study outlined the waterlogging tolerance coefficient (WTC) for evaluating waterlogging tolerance [34]. The transcriptome under waterlogging stress was then assayed in the tolerant variety ZS9, with 4432 differentially expressed genes identified [33]. In order to explore more responsive genes under waterlogging conditions, the tolerant cultivar ZS9 and the sensitive cultivar GH01 were used in this study, and an iTRAQ-based quantitative proteomic approach was applied for the first time to rapeseed roots at the germination stage. The main objective was to identify whether differentially expressed proteins associated with waterlogging stress depend on genetic background, which would be beneficial for resolving the molecular mechanisms of the response to waterlogging stress. Thus, this analysis provides deeper insights into the effects of waterlogging stress on rapeseed roots at the germination stage.
Effects of Waterlogging Stress on Root Growth
Under waterlogging conditions, roots are the first organs to be affected by the stress; thus, effects on roots were studied in detail. Rapeseed seeds of the tolerant cultivar ZS9 and the sensitive cultivar GH01 were germinated for 36 h until their radicles grew out, and then treated with or without waterlogging stress. Phenotypes show that the root growth of GH01 was significantly repressed compared with ZS9 after a 12-hour treatment (Figure 1A); in addition, under constant treatment for three days, GH01 had a lower seedling rate than ZS9 (Figure 1B). Cytological observation indicates that ZS9 and GH01 had a similar cell structure in the radicle, but different morphological changes within 12 h of waterlogging; as the red arrows show, more parenchyma cells withered in GH01 (Figure 1C). In addition, the root length, shoot length, and fresh weights of the rapeseeds were measured under normal conditions and under waterlogging stress for three days (Figure 1D). The present results clearly demonstrate that the growth of both cultivars was significantly suppressed by waterlogging stress. Moreover, GH01 showed a shorter root length and shoot length, and a lower fresh weight, than ZS9. These results suggest that ZS9 has stronger adaptability than GH01 under waterlogging stress.
Protein Identification and Quantification
The proteomes of ZS9 and GH01 roots were collected before waterlogging stress (ZS9-CK and GH01-CK, respectively) and at 4 h (ZS9-4H, GH01-4H), 8 h (ZS9-8H, GH01-8H), and 12 h (ZS9-12H, GH01-12H) after waterlogging stress. The samples were then used for iTRAQ analysis. The peptides were searched against proteins derived from the Brassica napus L. genome database [48]. In total, 35,410 unique peptides corresponding to 7306 proteins were identified (Table S1). The distribution of the number of peptides defining each protein is shown in Figure 2A, and over 62% of the total proteins (4529) matched at least two peptides. In addition, wide coverage was obtained for the protein molecular weight (MW) distribution (Figure 2B), with the molecular weights of the identified proteins ranging from 3.3 to 570.6 kDa. Among them, 4774 weighed between 20 and 70 kDa, 914 between 0 and 20 kDa (low-molecular-weight proteins), 912 between 70 and 100 kDa, and 706 over 100 kDa (high-molecular-weight proteins), which shows an advantage in identifying proteins of low or high molecular weight compared to the traditional two-dimensional (2D) gel technique. The isoelectric point distribution indicates that most of the identified proteins fell between isoelectric points 5 and 10 (Figure 2C). Besides the above, the distribution of peptide sequence coverage is displayed in Figure 2D.

Figure 1. (C) Transverse view of the ZS9 and GH01 radicles before and after waterlogging stress for 12 h. The sections were excised from roots 0.5 cm above the root tips, stained with toluidine blue, and photographed under a stereoscopic microscope (scale bar = 0.25 mm); (D) root length, shoot length, and fresh weights of ZS9 and GH01 under normal conditions and waterlogging stress for three days (asterisks indicate statistically significant differences between ZS9 and GH01 at ** p < 0.01). Error bars represent standard deviations based on three biological repetitions.

Figure 2. The distribution of peptide numbers, protein mass, isoelectric points, and sequence coverage of the identified proteins, based on isobaric tags for relative and absolute quantification (iTRAQ) proteomics analysis.
The Venn diagram shows a comparative analysis of the differentially expressed proteins mentioned above. As the dynamic iTRAQ proteomics analysis covered three time points (4 h, 8 h, and 12 h), we focused on the differentially expressed proteins identified at two or more time points, which are underlined in Figure 3.
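The time-point intersection described above can be sketched in code. The following is a minimal Python sketch assuming hypothetical treated/control ratios and the 1.2/0.8 fold-change cutoffs used later for the pathway analysis; the protein IDs and values are illustrative only, not the study's data:

```python
# Cutoffs for calling a protein differential at one time point:
# accumulated if ratio > 1.2, decreased if ratio < 0.8.
UP, DOWN = 1.2, 0.8

def diff_timepoints(ratios):
    """Return the time points at which a protein is differentially expressed."""
    return [t for t, r in ratios.items() if r > UP or r < DOWN]

# Hypothetical protein -> {time point: treated/control ratio} table.
proteins = {
    "BnaA09g29780D": {"4H": 1.35, "8H": 1.50, "12H": 1.10},
    "BnaC08g02330D": {"4H": 0.75, "8H": 0.70, "12H": 0.65},
    "BnaX00g00001D": {"4H": 1.05, "8H": 0.95, "12H": 1.30},
}

# Keep only proteins differential at two or more of the three time points.
robust = {p for p, r in proteins.items() if len(diff_timepoints(r)) >= 2}
print(sorted(robust))   # → ['BnaA09g29780D', 'BnaC08g02330D']
```

The same intersection logic underlies a three-set Venn diagram: each time point contributes one set of differential proteins, and the repeatedly identified proteins are those falling in at least a pairwise overlap.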
Gene Ontology Annotation of the Differentially Accumulated Proteins
To further identify and annotate these proteins, we selected proteins belonging to the ZS9-up, ZS9-down, GH01-up, and GH01-down groups (Figure 4) for gene ontology analysis by searching the NCBI protein database for homologous sequences using the BLASTp program; 73-85% of the differentially accumulated proteins had GO annotations. The protein sequences were >40% identical to those of their homologs, suggesting that the proteins might have similar functions. These differentially accumulated proteins, chosen from Figure 3, were classified into three groups (molecular function (GO-MF), cellular component (GO-CC), and biological process (GO-BP)) on the basis of GO enrichment analysis (Figure 4). The results show that the enrichment of the main categories is similar between ZS9 and GH01. The top two GO-CCs were "cell" (about 32%) and "organelle" (about 35%), although GH01 had a higher ratio and a larger number of proteins in the "nucleus" category. This is consistent with the "transcription regulator" category in molecular functions, suggesting that the nucleus-localized regulatory factors changing in GH01 might be negatively related to waterlogging stress. The top two GO-MFs were "binding activity" (about 47%) and "catalytic activity" (about 35%). For biological process, the enriched proteins mainly belonged to the following categories: cellular process (18.8-20.0%), metabolic process (18.4-19.4%), single-organism process (17.1-17.3%), response to stimulus (8.7%), and biological regulation (7.8%).
Therefore, waterlogging stress in rapeseed affects sets of responsive proteins with different subcellular localizations and molecular functions, as well as different biological processes, indicating a diversity of response mechanisms that remains to be elucidated.
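GO-term over-representation of the kind reported above is commonly assessed with a hypergeometric test. The sketch below is illustrative: the background size matches the 7306 identified proteins, but the per-term and per-group counts are invented for demonstration, not taken from the study:

```python
from math import comb

def go_enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric p-value P(X >= k): the probability of
    drawing k or more term-annotated proteins when n proteins are sampled
    from a background of N, of which K carry the GO term."""
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / comb(N, n)

# e.g. 7306 identified proteins as background, 300 annotated with
# "oxidation-reduction process", and 233 differentially accumulated
# proteins of which 25 carry the term (counts hypothetical).
p = go_enrichment_p(N=7306, K=300, n=233, k=25)
print(f"enrichment p-value: {p:.2e}")
```

Since the expected number of hits by chance is roughly 233 × 300 / 7306 ≈ 9.6, observing 25 lies far in the upper tail and yields a very small p-value; in practice such p-values are corrected for multiple testing across all GO terms.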
To further understand the specific biological processes of the differentially accumulated proteins in the ZS9-up, ZS9-down, GH01-up, and GH01-down groups, hierarchical cluster analysis was applied to the detailed biological processes, based on the numbers of proteins belonging to the four groups (Figure S1). The clustering heat map shows that the enrichment of the four groups exhibited both similarities and distinctions. The GO-BPs with the highest enrichment were the regulation of DNA-dependent transcription and the oxidation-reduction process, indicating that the predominant responsive proteins after waterlogging stress were transcription factors and proteins participating in the oxidation-reduction process. We also observed that both ZS9 and GH01 contained proteins responsive to various stress and hormone stimuli, such as defense, water deprivation, wounding, oxidative, cold, and salt stress, as well as auxin, ethylene, jasmonic acid, and abscisic acid stimuli. These act together with other reported stress-response mechanisms, such as cellular iron ion homeostasis, the sucrose metabolic process, positive regulation of programmed cell death, the lignin biosynthetic process, and the peroxidase reaction, among others. However, proteins belonging to photorespiration, sodium ion transport, and the riboflavin metabolic process were specific to the sensitive cultivar GH01, while proteins annotated to the defense response, hyperosmotic response, response to heat, response to cytokinin stimulus, and calcium ion transport were particular to the tolerant cultivar ZS9. The results mentioned above suggest that cultivars with different levels of waterlogging resistance exhibit both shared and specific response mechanisms.
Pathway Analysis
To obtain insight into the pathways active in rapeseed roots during the course of waterlogging stress, we conducted a KEGG orthology based annotation system (KOBAS) analysis [49]. The significantly accumulated or decreased proteins in ZS9 and GH01 (ratio >1.2 or <0.8) (Tables S2-S5) identified in our iTRAQ data were subjected to KOBAS analysis and then mapped onto different Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. A detailed view of the KEGG pathways is shown in Figure 5; the differentially accumulated proteins were enriched in ribosome, plant-pathogen interaction, plant hormone signal transduction, carbon metabolism, biosynthesis of amino acids, oxidative phosphorylation, starch and sucrose metabolism, glycolysis/gluconeogenesis, and other pathways ( Figure 5). A significant difference was seen in the distribution frequency between the two cultivars. More proteins were enriched in carbon metabolism, especially fructose and mannose metabolism, in GH01. Moreover, it has been reported that in the initial flooding response, fructose functions through the regulation of hexokinase and phosphofructokinase [50]. Meanwhile, for ZS9, specific enrichment was observed in plant-pathogen interaction, as well as arginine and proline metabolism; these two pathways have also been reported in abiotic stress [51,52]. Therefore, we propose that the two cultivars developed different pathways to resist waterlogging stress, including metabolism-related functional proteins or regulators for fine-tuning the plant stress response.
Differential Protein Overlapped with ZS9 and GH01
In order to find differentially expressed proteins across the two genotypes, we began by comparing the proteins with opposite fold-change tendencies between the tolerant cultivar ZS9 and the sensitive cultivar GH01 (ZS9-up overlapped with GH01-down; ZS9-down overlapped with GH01-up) (Table S6). Seven proteins increased in ZS9 and decreased in GH01, suggesting they might be positively related to waterlogging stress, while three proteins decreased in ZS9 and increased in GH01 and might therefore be negatively correlated with waterlogging stress ( Figure 6A). Some of these have homologous proteins in Arabidopsis that have been reported: BnaA09g29780D (GSBRNA2T00090709001) has 82% identity to AT1G24110, which encodes a peroxidase [53], and scavenging of reactive oxygen species is one of the mechanisms of various abiotic stress responses. GO analysis also predicted that this gene is related to peroxidase activity and the oxidation-reduction process. Meanwhile, BnaC08g02330D (GSBRNA2T00052894001) is negatively associated with waterlogging stress, showing 92% identity to its Arabidopsis homolog, which responds to osmotic, salt, cold, and drought stress [54]. It is also important for ethylene signaling [55], suggesting it might be a novel negative regulator of the waterlogging stress response.
Noticeably, more overlapping proteins shared the same tendency between ZS9 and GH01 (ZS9-up overlapped with GH01-up; ZS9-down overlapped with GH01-down) (Table S7), and their fold-change ratios were compared between the waterlogging-tolerant cultivar ZS9 and the waterlogging-sensitive cultivar GH01. Nearly half of these proteins displayed differentially up-regulated (21 of 50) or down-regulated (13 of 55) levels ( Figure 6B). Of these proteins, BnaA09g07120D (GSBRNA2T00005761001) accumulated after waterlogging stress in both ZS9 and GH01, showing 86% identity to TGA1 in Arabidopsis, which encodes a bZIP transcription factor and responds to ethylene stimulation [56]; there might also be mechanisms beyond the ethylene pathway. BnaC02g24210D (GSBRNA2T00012422001) decreased in the two cultivars, and its homologous protein in Arabidopsis was shown to respond to jasmonic acid (JA) and to be produced during leaf senescence [57]. The proteins mentioned above suggest various mechanisms of the waterlogging stress response, and we will continue to focus on these candidate proteins in future research. To observe the aforementioned candidate proteins as a whole, a clustering method was applied to the threshold ratios of proteins overlapping in the ZS9-up/GH01-down, ZS9-down/GH01-up, ZS9-up/GH01-up, and ZS9-down/GH01-down groups, which corresponded to 7, 3, 21, and 13 proteins, respectively (Figure 7). The results show that in the waterlogging-tolerant cultivar ZS9 and the sensitive cultivar GH01, proteins functioning in different pathways changed in opposite or similar directions at different levels. In order to further study the functions of candidate genes in rapeseed, we are now preparing transgenic plants with FLAG tags or antibodies to verify the expression patterns at the protein level.
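The group comparisons described above reduce to simple set operations on the classified protein lists. A minimal sketch, assuming the paper's stated cutoffs (ratio > 1.2 for up, < 0.8 for down); the protein IDs and ratios below are hypothetical illustrations, not data from this study:

```python
# Classify proteins as up/down per cultivar, then intersect the sets to
# recover the four overlap groups discussed in the text.

def classify(ratios, up=1.2, down=0.8):
    """Split a {protein: fold-change ratio} map into up- and down-regulated sets."""
    ups = {p for p, r in ratios.items() if r > up}
    downs = {p for p, r in ratios.items() if r < down}
    return ups, downs

# Hypothetical fold-change ratios for each cultivar
zs9 = {"P1": 1.5, "P2": 0.6, "P3": 1.3, "P4": 1.0}
gh01 = {"P1": 0.7, "P2": 1.4, "P3": 1.6, "P4": 0.9}

zs9_up, zs9_down = classify(zs9)
gh01_up, gh01_down = classify(gh01)

opposite_pos = zs9_up & gh01_down   # candidate positive regulators (ZS9-up / GH01-down)
opposite_neg = zs9_down & gh01_up   # candidate negative regulators (ZS9-down / GH01-up)
same_up = zs9_up & gh01_up          # shared up-regulated proteins

print(opposite_pos, opposite_neg, same_up)
```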
Meanwhile, we performed qRT-PCR to confirm some of the members at the transcriptional level, and the results were generally consistent with the iTRAQ data (Figure 8); the corresponding sequence names are listed in Table S8. These results suggest diverse molecular mechanisms or regulatory pathways in the waterlogging stress response, which need to be dissected. BnaACT7 and BnaUBC21 were amplified as internal references. Asterisks indicate statistically significant differences between 0H and 4H, 8H, or 12H at ** p < 0.01 and * p < 0.05. Error bars indicate the standard errors based on three replicates.
Contributions of Proteome Changes to Adapt to Waterlogging Stress
Waterlogging is a common environmental problem in rapeseed production, especially in the middle and lower reaches of the Yangtze River in China, where rice is usually the previous crop. Due to extensive flooding, waterlogging exposure leads to a decline in root development, nutrient transfer, seed setting rate, and rapeseed yield. For the two cultivars ZS9 and GH01, our previous results showed that ZS9 displays a stronger ability to keep growing and to recover under waterlogging stress at the germination, seedling, and maturity stages in the field [34]. To match our rapid screening method for waterlogging resistance, which focuses on the germination stage [58], we continued exploring response proteins at the germination stage, which are much less studied. In the present study, inhibition was observed for biomass accumulation in root and leaf tissue and for seedling emergence (Figure 2), with the sensitive cultivar GH01 exhibiting a greater decline after undergoing waterlogging treatment.
Thus far, the application of the iTRAQ technique to exploring stress-response proteins has been an active research area. We noticed a higher number of waterlogging-responsive proteins in the GH01 cultivar, with a lower percentage of proteins changed in the early stage (4 or 8 h after waterlogging stress), while prolonging the treatment to 12 h caused more proteins to start responding. Nonetheless, response proteins at the later stage could not rescue the plant from damage, and thus GH01 showed lower growth capability under stress conditions (Figure 2). On the contrary, no specific response proteins emerged at 12 h after stress in the tolerant cultivar ZS9 (Figure 4), suggesting a quicker active proteome response in this tolerant cultivar. This phenomenon is possibly due to their different genetic backgrounds.
Waterlogging Stress Affects Proteins Involved in Ethylene Signaling
Many studies have shown that ethylene is a key mediator of waterlogging stress responses in plants [59], for example through its important role in adventitious root or aerenchyma formation [18,60], and we did detect some ethylene response factors in the ZS9-up and GH01-down groups. The accumulated protein BnaA02g27630D is homologous to EDF3, which promotes flower senescence or abscission and activates senescence-associated genes when ectopically expressed in Arabidopsis [61]; premature senescence is known as a pathway by which root-waterlogged rapeseed resists hypoxia [35]. Meanwhile, BnaA09g18210D (GSBRNA2T00072354001), which decreased in GH01 (GH01-down), displays 63% identity to MACD1, which has been reported to positively regulate factors affecting programmed cell death [62]. Figure 8 shows that these two members respond significantly to waterlogging stress with distinct patterns, and the cluster analysis in Figure S1 also demonstrates that the rapeseed plant can cope with waterlogging through managing and regulating programmed cell death. Unexpectedly, few proteins out of the total were involved in ethylene signaling, indicating that the two genotypes adopted various adaptive mechanisms to combat waterlogging stress. On the other hand, adventitious roots promote gas exchange, as well as uptake of water and oxygen, thus helping plants survive under low oxygen conditions; however, adventitious root or aerenchyma formation always emerges many days after waterlogging stress [15,60]. According to previous standards for identification of waterlogging resistance, we highlight the early stage within 12 h [58,63], so we presume that proteins related to adventitious root or aerenchyma formation had not yet started to respond.
Waterlogging Stress Affects Proteins Involved in Protein Phosphorylation
Based on the GO cluster, the identified proteins include well-known classical pathways, such as ethylene response factors, as well as pathways including protein phosphorylation, the oxidation-reduction process, and regulation of transcription factors ( Figure S1), which overlap well with previous findings [33,35]. Beyond these, other aspects, such as protein phosphorylation and carbon metabolism, remain poorly understood in rapeseed. Thus, it is certainly worthwhile to explore the responsible genes and resolve the response pathways.
As previously reported, protein post-translational modifications (PTMs) play a unifying and coordinating role in the plant. For instance, phosphorylation-mediated signaling mechanisms are associated with plant growth, development, and abiotic stress [64][65][66][67][68]. Studies also indicate that reversible protein phosphorylation constitutes a major event in the perception of and response to environmental and hormonal stimuli in plants [69,70]. The specific molecular mechanisms of protein phosphorylation in stress responses have become a major research topic [71,72], yet little is known about rapeseed. One study showed that calcium-dependent protein kinase (CDPK) members in rapeseed can interact with protein phosphatase 2C (PP2C) members, but the mechanisms need to be further studied [73]. In this study, several proteins (GSBRNA2T00084051001, GSBRNA2T00109342001, GSBRNA2T00062102001, GSBRNA2T00062600001, etc.) containing serine/threonine protein kinase domains were predicted to show protein phosphorylation activity (Tables S2-S5). In addition, GSBRNA2T00014963001 (BnaCnng27940D), which decreased in GH01, was predicted to encode a PP2C member with 78% identity to PP2C5 in Arabidopsis, which was reported to function in the early ABA signaling pathway [74,75]. The qRT-PCR results of our current work preliminarily confirmed the multifarious induction of these members under waterlogging stress conditions (Figure 8). Whether the candidate proteins function through PTMs in rapeseed or not requires further investigation, including phosphorylation assays.
Waterlogging Stress Affects Proteins Involved in Metabolism
Previous papers indicate that plants respond to stress by modulating carbon metabolism in cell wall composition, with two main highlighted mechanisms: cell wall extensibility, and polysaccharide and lignin synthesis, both of which help cells alleviate abiotic stress effects [76,77]. Xyloglucan endotransglucosylase/hydrolases (XTHs) are important cell wall enzymes that function in the modification of cell wall components by grafting xyloglucan chains onto oligosaccharides or onto other xyloglucan chains, and by hydrolysing xyloglucan chains [78,79]. Xyloglucan endotransglycosylase-1 (xet-1), a putative loosening enzyme in maize, was previously implicated in root growth and the response to oxygen deprivation [78,80]. In our study, xyloglucan endotransglucosylase hydrolase proteins (GSBRNA2T00081505001, GSBRNA2T00067403001, GSBRNA2T00131400001) (Tables S2-S5) were found to be differentially expressed under waterlogging stress conditions. Additionally, notable increases in proteins related to lignin biosynthesis were also observed in ZS9 and GH01 (GSBRNA2T00112348001, GSBRNA2T00116917001, GSBRNA2T00042846001, GSBRNA2T00121624001, etc.) ( Figure S1). Lignin is one of the cell-wall components already reported in defensive responses against clubroot in canola [81]; low water potential in maize [82]; flooding in cotton, wheat, and soybean [83][84][85]; and cold in plants [86]. Accordingly, large-scale analysis is clearly needed to unravel the consequences of waterlogging stress on the cell wall in rapeseed.
Plant Material and Stress Treatments
According to our previous standard for waterlogging tolerance, the waterlogging-tolerant cultivar ZS9 and the sensitive cultivar GH01 were chosen for this study. Seeds of the two rapeseed cultivars were surface-sterilized with sterile distilled water for 10 min. The seeds were then germinated on wet filter paper in the dark for 36 h at 25 °C, until the root or radicle grew to 5 mm in length. Waterlogging stress was applied by adding distilled water until the germinated seeds were completely submerged. For iTRAQ sample preparation, 5 mm-long roots were collected at 0, 4, 8, and 12 h during the course of waterlogging treatment, then immediately frozen in liquid nitrogen before analysis. Three biological replicates were mixed equally for each iTRAQ sample and qRT-PCR template.
After 12 h of waterlogging treatment, the germinated seeds were transferred to normal conditions for two days to observe the seedling emergence rate and to measure root length, shoot length, and fresh weight. Under normal conditions, rapeseeds were grown in a growth chamber illuminated with white fluorescent light (130 µmol m−2 s−1, 16 h light period/day) at 25 °C and 70% relative humidity.
Root Morphology and Optical Microscopic Observation
The experiment was performed as described previously [87]. Before and after 12 h of waterlogging treatment, the roots of germinated seeds from the two cultivars were fixed in 5% glutaraldehyde in 50 mM phosphate buffer (pH 7.4), then dehydrated in ethanol and embedded in white resin. The white resin sections were stained with toluidine blue and then observed using an optical microscope.
Protein Extraction
Protein extraction was conducted based on the trichloroacetic acid (TCA)/acetone method, with some modifications [88]. Roots were powdered in liquid nitrogen and suspended in 35 mL chilled (−20 °C) acetone containing 10% (w/v) TCA. The homogenate was incubated at −20 °C for 2 h and centrifuged at 7830 rpm for 30 min at 4 °C. The supernatant was carefully removed, and the precipitate was washed three times with chilled acetone. Proteins were air-dried at room temperature and dissolved in 600 µL SDT buffer (4% SDS, 100 mM Tris-HCl, 1 mM DTT, pH 7.6), incubated in boiling water for 5 min with 100 W sonication, then boiled for 5 min and centrifuged at 13,400 rpm for 30 min at 4 °C. The supernatant was collected as the soluble protein fraction. The concentration of each extract was determined using a Bradford protein assay kit (Bio-Rad, Hercules, CA, USA) with bovine serum albumin (BSA) as a standard. The quality of each protein sample was tested using SDS-PAGE. Furthermore, tricine-sodium dodecyl sulfate polyacrylamide gel electrophoresis (Tricine-SDS-PAGE) was used to verify the quantitative results from the Bradford assay and to determine the quality of the extract.
Protein Digestion and iTRAQ Labeling
The following experiments were carried out according to the technology roadmap in Figure 9. About 400 µg of mixed protein sample was mixed with 100 mM DTT, incubated in boiling water for 5 min, cooled to room temperature, diluted with 200 µL UA buffer (8 M urea, 150 mM Tris-HCl, pH 8.0), and subjected to 10 kDa ultrafiltration. The samples were centrifuged at 14,000× g for 15 min; 200 µL UA buffer was added and the samples were centrifuged for another 15 min. After adding 100 µL IAA buffer (0.05 M IAA), the samples were vibrated at 6000 rpm for 1 min, incubated for 30 min in darkness, and then centrifuged at 14,000× g for 10 min. After washing the filters three times with 100 µL UA buffer, 100 µL dissolution buffer was added and the samples were centrifuged for 10 min. This step was repeated twice, and then 5 µg trypsin (Promega) in 40 µL dissolution buffer was added to each filter; the samples were then vibrated at 6000 rpm for 1 min and incubated overnight at 37 °C. The resulting peptides were collected by centrifugation. The filters were rinsed with 40 µL dissolution buffer and centrifuged again. Finally, the peptide content was determined by spectral density with UV light at 280 nm [89]. Figure 9. Experimental design of this proteomic study. After undergoing waterlogging stress for 0 h, 4 h, 8 h, and 12 h, the roots of the germinated seeds were sampled for protein extraction, trypsin digestion, and iTRAQ labeling. The iTRAQ-labeled samples were identified and quantified using the Easy-nLC Nanoflow HPLC system, and the identified proteins were then taken for GO and KEGG analysis. Three biological replicates were mixed equally for each iTRAQ sample. About 80 µg of peptides from each sample was labeled with iTRAQ reagents following the manufacturer's instructions (Applied Biosystems, Thermo Fisher Scientific Corporation, Waltham, MA, USA), using iTRAQ 8-plex kits (AB Sciex Inc., Framingham, MA, USA). After labeling, the samples were combined and lyophilized.
The iTRAQ-labeled peptides were fractionated by strong cation exchange (SCX) chromatography on an AKTA Purifier 100 (GE Healthcare, Chicago, IL, USA) system equipped with a Polysulfethyl (PolyLC Inc., Columbia, MD, USA) column. The peptides were eluted at a flow rate of 1 mL/min. Buffer A consisted of 10 mM KH2PO4 and 25% v/v ACN, pH 3.0, and Buffer B consisted of 10 mM KH2PO4, 25% v/v ACN, and 500 mM KCl, pH 3.0. Both buffers were filter-sterilized. The following gradient was applied to perform the separation: 100% Buffer A for 25 min, 0-10% Buffer B for 7 min, 10-20% Buffer B for 10 min, 20-45% Buffer B for 5 min, 45-100% Buffer B for 5 min, 100% Buffer B for 8 min, and finally 100% Buffer A for 15 min. The elution process was monitored by measuring absorbance at 214 nm, and fractions were collected every 1 min. The collected fractions were combined into eight pools and desalted on C18 cartridges (Empore TM SPE Cartridges C18 (standard density), Sigma-Aldrich, St. Louis, MO, USA). Each fraction was concentrated via vacuum centrifugation and reconstituted in 40 µL of 0.1% v/v trifluoroacetic acid. All samples were stored at −80 °C until LC-MS/MS analysis.
Liquid Chromatography-Tandem Mass Spectrometry
The iTRAQ-labeled samples were analyzed using an Easy-nLC nanoflow HPLC system connected to a Q-Exactive mass spectrometer (Thermo Fisher Scientific, San Jose, CA, USA). A total of 5 µg of each sample was loaded onto a Thermo Scientific EASY column (2 cm × 100 µm, 5 µm C18) using an auto-sampler at a flow rate of 0.3 mL/min. The sequential separation of peptides on a Thermo Scientific EASY column (100 mm × 75 µm, 3 µm C18) was accomplished using a segmented 1-h gradient from Solvent A (0.1% formic acid in water) to 35% Solvent B (84% ACN in 0.1% formic acid) over 50 min, followed by 35-100% Solvent B for 4 min, and then 100% Solvent B for 6 min. The column was re-equilibrated to its initial highly aqueous solvent composition before each analysis. The mass spectrometer was operated in positive ion mode, and mass spectrometry (MS) spectra were acquired over a range of 300-1800 m/z. The resolving powers of the MS scan and MS/MS scan at 200 m/z for the Q-Exactive were set at 70,000 and 17,500, respectively. The top ten most intense signals in the acquired MS spectra were selected for further analysis. The isolation window was 2 m/z, and ions were fragmented through higher-energy collisional dissociation with a normalized collision energy of 30. The maximum ion injection times were set at 10 ms for the survey scan and 60 ms for the MS/MS scan, and the automatic gain control target values for both scan modes were set at 3.0 × 10^6. The dynamic exclusion duration was 25 s. The underfill ratio was defined as 0.1% on the Q-Exactive.
iTRAQ Analysis
The raw files were analyzed using the Proteome Discoverer 1.4 software (Thermo Fisher Scientific). A search for the fragmentation spectra was performed using the MASCOT search engine embedded in Proteome Discoverer against the Brassica napus genoscope database, which was downloaded from a Brassica napus L. filtered database (www.genoscope.cns.fr/brassicanapus). The following search parameters were used: mono-isotopic mass; trypsin as the cleavage enzyme; two missed cleavages; iTRAQ labeling and carbamidomethylation of cysteine as fixed modifications; peptide charges of 2+, 3+, and 4+; and the oxidation of methionine specified as variable modifications. The mass tolerance was set to 20 ppm for precursor ions and to 0.1 Da for fragment ions. The results were filtered based on a false discovery rate (FDR) of no more than 1% [90]. The protein identification was supported by at least two unique peptides.
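The identification criteria above (FDR of no more than 1%, support from at least two unique peptides) amount to a simple post-search filter. A minimal sketch; the accession names, FDR values, and peptide counts below are invented for illustration and are not from the actual search results:

```python
# Hypothetical post-search filter mirroring the stated acceptance criteria:
# keep proteins with FDR <= 1% and at least two unique peptides.
hits = [
    {"protein": "GSB_A", "fdr": 0.004, "unique_peptides": 3},
    {"protein": "GSB_B", "fdr": 0.020, "unique_peptides": 5},  # fails the FDR cut
    {"protein": "GSB_C", "fdr": 0.009, "unique_peptides": 1},  # fails the peptide cut
]

confident = [h["protein"] for h in hits
             if h["fdr"] <= 0.01 and h["unique_peptides"] >= 2]
print(confident)  # only GSB_A passes both criteria
```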
The relative quantitative analysis of the proteins based on the ratios of iTRAQ reporter ions from all unique peptides representing each protein was performed using Proteome Discoverer (version 1.4). The relative peak intensities of the iTRAQ reporter ions released in each of the MS/MS spectra were used, and the reference (REF) sample was employed for calculating the iTRAQ ratios of all reporter ions. Thereafter, the final ratios obtained from the relative protein quantifications were normalized based on the median average protein quantification ratio. Protein ratios represent the median of the unique peptides of the protein. For quantitative changes, a 1.2-fold cutoff was set to determine upward-accumulated and downward-accumulated proteins, with a p-value < 0.05.
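The normalization and thresholding steps above can be sketched in a few lines: scale each protein's ratio by the median ratio across all proteins, then apply the fold-change cutoffs used in this study (1.2 up, 0.8 down, as stated earlier). The ratios below are hypothetical, and the p-value filter is omitted for brevity:

```python
import statistics

# Hypothetical raw iTRAQ ratios (treatment / reference) for five proteins
raw = {"P1": 1.8, "P2": 1.2, "P3": 0.9, "P4": 1.2, "P5": 0.6}

# Normalize by the median ratio, as described for the final ratios
median = statistics.median(raw.values())
norm = {p: r / median for p, r in raw.items()}

# Apply the fold-change cutoffs to call up/down accumulated proteins
up = sorted(p for p, r in norm.items() if r > 1.2)
down = sorted(p for p, r in norm.items() if r < 0.8)
print(norm, up, down)
```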
Bioinformatics Analysis
Functional analysis of the identified proteins was conducted using GO annotation (http://www.geneontology.org/), and proteins were categorized according to their biological process, molecular function, and cellular localization [91,92]. The differentially accumulated proteins were further assigned to the KEGG database (http://www.genome.jp/kegg/pathway.html) [93,94]. This study implemented FDR correction to control the overall Type I error rate of multiple testing, using GeneTS (2.8.0) in the R (2.2.0) statistics software package. Pathways with FDR-corrected p-values < 0.05 were considered statistically significant.
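The multiple-testing adjustment described above belongs to the Benjamini-Hochberg family of FDR corrections. The paper used the GeneTS package in R; the sketch below is an illustrative Python equivalent of the standard BH procedure, with invented p-values:

```python
# Minimal Benjamini-Hochberg FDR adjustment: scale each sorted p-value by
# n/rank, then enforce monotonicity from the largest p-value downward.
def bh_adjust(pvals):
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    prev = 1.0
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    return adjusted

# Hypothetical raw pathway p-values
p = [0.001, 0.02, 0.03, 0.5]
q = bh_adjust(p)
significant = [i for i, v in enumerate(q) if v < 0.05]
print(q, significant)
```

Pathways whose adjusted p-value stays below 0.05 would be reported as significantly enriched, matching the threshold stated above.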
RNA Extraction and qRT-PCR Analysis
The experiments were conducted as previously described [95]. Briefly, RNA samples were collected from roots before waterlogging treatment and at 4, 8, and 12 h after treatment at the germination stage; these were exactly the same samples used for the iTRAQ analysis. Total RNA was extracted from the samples using TRIzol reagent (transgene) and then reverse transcribed to cDNA following the kit protocol (Thermo Scientific RevertAid First Strand cDNA Synthesis Kit). The cDNA was then diluted 10-fold for qRT-PCR using PowerSYBR Green PCR Master Mix (Applied Biosystems) on the StepOnePlus Real-Time PCR system, with each reaction containing 0.4 µL specific primers, 2.0 µL cDNA, and 5.2 µL SYBR mixture. Three technical replicates were performed for each sample, and the program was set as follows: 95 °C for 10 min; 42 cycles of 15 s at 95 °C and 30 s at 60 °C. The melt curve, used to assess primer specificity, was run from 65 °C to 95 °C with temperature increments of 0.5 °C every 5 s. BnaACT7 and BnaUBC21 were used to normalize the RNA samples according to previous studies [96][97][98]. The gene-specific primers used for qRT-PCR were designed with NCBI Primer-BLAST and checked for specificity by BLASTn search against the NCBI database; all primers targeted mainly the 3′ UTR, and the amplification product sizes were between 70 and 250 bp. All primers related to this experiment are listed in Table S8. Statistical significance was determined by t-test, with ** p ≤ 0.01 and * p ≤ 0.05.
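Relative expression in this kind of qRT-PCR setup is conventionally computed with the 2^(-ΔΔCt) method. A minimal sketch, assuming normalization against the mean Ct of the two reference genes (BnaACT7, BnaUBC21) and the 0 h sample as calibrator; the paper does not spell out its exact calculation, and all Ct values below are hypothetical:

```python
# 2^(-ddCt) relative expression: normalize the target Ct against the mean
# reference-gene Ct, then compare the treated time point to the 0 h calibrator.
def rel_expression(ct_target, ct_refs, ct_target_0h, ct_refs_0h):
    dct = ct_target - sum(ct_refs) / len(ct_refs)          # treated sample
    dct0 = ct_target_0h - sum(ct_refs_0h) / len(ct_refs_0h)  # 0 h calibrator
    return 2 ** (-(dct - dct0))

# Hypothetical Ct values for one candidate gene at 0 h vs. 12 h of waterlogging
fold = rel_expression(ct_target=24.0, ct_refs=[20.0, 21.0],
                      ct_target_0h=26.0, ct_refs_0h=[20.0, 21.0])
print(fold)  # Ct dropped by 2 cycles -> 4-fold induction
```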
Conclusions
Our results show that iTRAQ is a powerful technique to perform quantitative proteome analysis of rapeseed roots under waterlogging stress. A large number of differentially expressed proteins responding to waterlogging stress were identified, and functional categorization of GO and KEGG analysis were applied, showing that the differentially changed proteins were enriched in the oxidation-reduction process, signal transduction, carbohydrate metabolism, and other processes. Quite a number of proteins constantly accumulated or decreased after undergoing waterlogging stress, with a quicker active proteome response in the tolerant cultivar ZS9. Cluster analysis was also applied to the differentially expressed proteins overlapped with the two cultivars, displaying that both similarity and distinction exist in waterlogging response mechanisms in the two genotypes. Taken together, our results show comprehensive proteome coverage of rapeseed roots in response to waterlogging treatments, and provide new insight into the molecular basis of waterlogging-stress response in rapeseed.
Supplementary Materials: The following are available online at http://www.mdpi.com/2223-7747/7/3/71/s1. Figure S1. Heat map of differentially accumulated proteins clustered according to GO-BP terms based on the numbers. Table S1. All proteins identified through iTRAQ analysis. Table S2. The up-regulated proteins (ZS9-up) identified based on the overlap in ZS9. Table S3. The down-regulated proteins (ZS9-down) identified based on the overlap in ZS9. Table S4. The up-regulated proteins (GH01-up) identified based on the overlap in GH01. Table S5.
The down-regulated proteins (GH01-down) identified based on the overlap in GH01. Table S6. Proteins with the opposite tendency of fold change between ZS9 and GH01. Table S7. Proteins with the same tendency of fold change between ZS9 and GH01. Table S8. Primers of candidate genes used for qRT-PCR.
Author Contributions: X.Q. and Y.L. designed the experiments; X.Q. prepared the iTRAQ samples; Z.T. performed the bioinformatic analysis; X.Z., X.Z., and Y.C. contributed to the iTRAQ data analysis; J.X. performed the qRT-PCR; G.L. provided plant material; L.Z. performed the qRT-PCR assays; X.Q. and Y.L. wrote the paper, and J.X. revised the paper. All authors have read and approved of the final version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Paternal age impairs in vitro embryo and in vivo fetal development in murine
The association between advanced paternal age and impaired reproductive outcomes is still controversial. Several studies have related decreased semen quality, impaired embryo/fetal development, and offspring health problems to increased paternal age. However, some retrospective studies observed no alterations in either seminal status or reproductive outcomes in older men. Such inconsistency may be due to the influence of intrinsic and external factors, such as genetics, race, diet, social class, and lifestyle, together with obvious ethical issues that may bias the assessment of reproductive status in humans. The use of the murine model enables prospective study and allows the establishment of homogeneous and controlled groups. This study aimed to evaluate the effect of paternal age on in vitro embryo development at 4.5 days post-conception and on in vivo fetal development at 16 days of gestation. Murine females (2-4 months of age) were mated with young (4-6 months of age) or senile (18-24 months of age) males. We observed decreased in vitro cleavage, blastocyst, and embryo development rates, as well as lighter and shorter fetuses, in the senile group compared to the young group. This study indicates that advanced paternal age negatively impacts subsequent embryo and fetal development.
No differences were observed between groups for placental weight (0.094 ± 0.012 g and 0.092 ± 0.003 g; p = 0.90), length (0.769 ± 0.050 cm and 0.771 ± 0.021 cm; p = 0.96) and area (0.458 ± 0.072 cm² and

Table 1. Summary of advantages of the murine model in scientific research.

Observations | References
Anatomical and physiological similarities with humans | Bryda 25, Uhl 26, Vandamme 27
Easy handling in a laboratory animal facility | Bryda 25, Uhl 26, Vandamme 27
Short generational interval and larger litter size | Bryda 25, Uhl 26, Vandamme 27
Use of inbred strains promotes homogeneous control groups in a controlled environment | Casellas 28 and Barré-Sinoussi et al. 29
Full genome sequenced | Waterston et al. 30
Model for male infertility | O'Bryan 31, Jamsai et al. 32

Sperm evaluation. There was no difference between the young and senile groups for the analyses performed by flow cytometry (mitochondrial membrane potential, plasma and acrosome membrane integrity, and oxidative stress; see Table 2). Considering the CASA analysis, we observed lower average pathway velocity (VAP), straight-line velocity (VSL), curvilinear velocity (VCL), total sperm motility (TM), percentage of sperm with progressive motility (PROG), and percentage of rapid sperm (RAPID) in the senile group compared to the young group, as shown in Table 2. Moreover, a higher percentage of static sperm (STATIC) was observed in the senile group compared to the young group (Table 2). No differences were observed between the groups in the other variables evaluated by CASA (Table 2).
Correlation. We observed positive correlations between high mitochondrial membrane potential and cleavage (rho 0.88), blastocyst (rho 0.74), and embryo development rates (rho 0.77) in this study (Supplementary Table S1). When the correlation was performed only in the senile group, we observed a high positive correlation between high mitochondrial membrane potential and both blastocyst (rho 0.89) and embryo development rates (rho 0.89; Supplementary Table S2).
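The "rho" values above are Spearman rank correlations. A minimal implementation for the tie-free case, with hypothetical per-male measurements standing in for the actual data:

```python
# Spearman rank correlation (no ties): Pearson correlation of ranks,
# computed here via the classic 1 - 6*sum(d^2)/(n*(n^2-1)) formula.
def spearman_rho(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical values: high mitochondrial potential (%) vs. blastocyst rate (%)
mito = [85, 70, 90, 60, 75]
blast = [40, 30, 45, 20, 35]
print(spearman_rho(mito, blast))  # perfectly concordant ranks -> 1.0
```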
Discussion
In our study, we tested the hypothesis that the in vitro embryo and in vivo fetal development was negatively affected when senile mice are mated with reproductive-aged females. In fact, our results demonstrated that females mated with senile males presented lower cleavage, blastocyst, and embryo development rates compared to the ones mated with young males.
In the present study, we observed that senile male mice had lower cleavage and blastocyst rates than young male mice. These data corroborate the study performed by Katz-Jaffe et al. 37: those authors evaluated in vitro embryo development from superovulated female mice and found that males aged 12-15 months showed a decrease in blastocyst formation (71% vs. 81%) and in the morphological quality (Gardner and Schoolcraft grading system) of expanded blastocysts (63% vs. 71%) compared to male mice less than 12 months of age, showing that advanced paternal age has negative effects on embryonic development. However, the authors did not report differences in cleavage rate or embryo quality at this stage.
In humans, Klonoff-Cohen et al. conducted a prospective study with 221 couples undergoing in vitro fertilization protocols and observed that each additional year of male age was associated with an additional 11% chance of not getting pregnant and a 12% chance of an unsuccessful birth 38. Nevertheless, Wu et al. did not observe any differences in fertilization and cleavage rates, embryo quality, or miscarriage rate when analyzing 9991 cycles of in vitro fertilization, regardless of the maternal (30-38 years) or paternal (30-42 years) ages considered in that study 39.

Table 2. Sperm evaluation performed by flow cytometry and computer-assisted sperm analysis (CASA). VAP, average pathway velocity; VSL, straight-line velocity; VCL, curvilinear velocity; ALH, amplitude of lateral head displacement; BCF, beat cross frequency; STR, straightness; LIN, linearity; TM, total motility; PROG, progressive motility; RAPID, rapid velocity; MEDIUM, medium velocity; SLOW, slow velocity; STATIC, non-moving sperm. Different superscript letters in each bar represent p < 0.05, as indicated by Student's t-test.
In the present experiment, we observed low embryo production rates probably due to the culture medium used in the in vitro manipulations. However, we believe that this factor did not influence the statistical results regarding the lower embryo production rates found in the senile when compared to the young males since this condition influenced both groups equally.
In this study, we observed that fetuses from females mated with senile males were lighter and smaller. It has been described that men over 50 years old father children with low birth weight in 90% of cases, in addition to premature births 40,41 . In agreement, Katz-Jaffe et al. observed that female CF1 mice superovulated at 6-8 weeks (1-2 months) of age and mated with males of more than 12 months of age presented smaller and lighter fetuses, as well as a decrease in placental weight 37 . Similarly, Denomme et al. verified the same changes in their study, namely a decrease in weight and length of fetuses, lighter placentas and a decrease in successful mating frequency in male mice aged 11-15 months 42 .
Paternal age can affect placental development, since changes in sperm DNA and epigenetic dysregulation can be frequent in older men 43 . Surprisingly, the diameter, area and weight of the placenta showed no statistical difference, unlike previous studies, which demonstrated that advanced paternal age affects placental development in mice 37,42 and humans 8,44 . In humans, the placental weight from pregnancies with older men (over 50 years old) was increased compared to the group between 20 and 24 years old 44 . The consequence of a decrease in human placental weight would be an inadequate exchange of nutrients and gases due to the smaller surface area. On the other hand, an increase in placental weight could indicate edema of the placental villi, which would reduce the transfer of nutrients and gases 45 .
Recent studies indicate that the ratio of human newborn to placental weight may be related to perinatal changes; a higher ratio indicates insufficient oxygen supply to the fetus and a lower ratio suggests a suboptimal fetal condition 45,46 . In the study by Denomme et al., senile male mice were found to have a higher fetal:placental weight ratio compared to young ones 42 . Conversely, we observed in the present work that senile male mice had a lower fetal:placental weight ratio in relation to the group of young mice, which may be a consequence of the lower fetal weight observed in the senile group.
Despite the differences that we observed in embryo and fetal development, there were no differences in mitochondrial function, plasma and acrosomal membrane integrity, or oxidative stress between the young and senile groups. However, we observed lower values for motility, kinetic variables and percentage of rapid sperm, and a higher percentage of static sperm, in the senile compared to the young group. This result indicates that the sperm of senile males are slower, which probably interferes with sperm fertilization capacity. The positive correlations between the percentages of motile, progressive, and rapid sperm and the cleavage, blastocyst, and embryo development rates reinforce this hypothesis.
Mitochondrial membrane potential (MMP) correlates positively with sperm parameters such as motility, sperm capacitation, and fertilization 47 . Several studies support a link between aging, mitochondrial dysfunction, and decreased male fertility, as well as changes in fatty acid composition that can alter the fluidity of the inner mitochondrial membrane. In this study, we observed a positive correlation between high mitochondrial membrane potential and embryo development rates in the senile group, indicating that an increase in the number of sperm with high mitochondrial membrane potential may improve in vitro embryo development rates.
Conversely, Katz-Jaffe et al. 37 observed no differences in sperm motility between male mice aged 15 months and 8-10 weeks (1-2 months) of age. This result could be explained by the age of the animals used, which were 3 months younger than the senile male mice used in the present work. According to Dutta and Sengupta, every 9.125 days of a mouse's life represents 1 year for humans 34 . Therefore, several changes can occur within a relatively short time, such as 3 months. Moreover, Katz-Jaffe et al. used conventional microscopy to assess sperm motility 37 , whereas we assessed sperm motility through the CASA system, obtaining greater accuracy and detail on sperm movement patterns compared to routine evaluations by light microscopy 48 .
Results of the present study indicate that senile males present decreased reproductive performance. Accordingly, in humans, paternal age is associated with an increased prevalence of comorbid urological conditions (decreased sperm motility, percentage of normal sperm morphology and sperm concentration, and increased sperm DNA fragmentation and ejaculatory dysfunction) [49][50][51] , which affect reproductive potential and fertility 46 , from low conception rates to poor offspring health 52 .
The increase in paternal age compromises the motility, velocity, and coordination of sperm, and negatively influences in vitro embryo development rates and the size of fetuses in mice. Therefore, more studies are necessary to indicate when male murine reproductive senility occurs and to clarify the biological mechanisms involved in the influence of paternal age on embryo and fetal development.
Methods
The present study was conducted following ethical directives for animal experiments and complied with ARRIVE guidelines 53 . Animals were maintained in mini-isolators (ALESCO ® , Sao Paulo, Brazil) at 22-24 °C controlled air temperature, on a 12 h light/12 h dark cycle, and offered industrial pellet food and filtered water ad libitum. Male animals were divided into 2 groups according to their age. The young group was composed of 4-6-month-old mice, corresponding to men of approximately 20 years old 34 . The senile group was composed of 18-24-month-old animals, corresponding to men of approximately 60-79 years old 34 . As an inclusion criterion, we used only senile murine males that did not present neurological, locomotor, ophthalmological or dermatological alterations or visible increases in body volume (tumor or edema), which may be related to metabolic disorders. Before mating, the males remained in groups of up to 4 animals in each mini-isolator, and after mating the males were isolated for the continuation of the other experiments.
We used sexually mature female mice at 2-4 months of age. The females were distributed randomly among the experimental groups, and the total number of experimental units is described for each experiment. During the evaluation of embryo and fetal development, researchers were blinded to the experimental group.
Murine female management. Female estrous cycles were synchronized with an intraperitoneal injection of 5-2.5 IU of eCG (Equine Chorionic Gonadotropin, Novormon, Zoetis, Brazil) followed by 5-2.5 IU of hCG after 48 h, approximately 1 h before the start of the Laboratory Animal Facility dark cycle (12 h).
For the embryo development experiment, we used monogamous mating (1 male:1 female). In the fetal development experiment, monogamous and polygamous (1 male:2 females) mating was performed. For all experiments, after hCG administration, females were transferred to the male's cage overnight, and mating was evaluated on the morning of the following day by visualization of the vaginal plug. Regardless of plug visualization, all females were euthanized on the 1st and 16th day of gestation for in vitro embryo and fetal development, respectively.
Euthanasia procedure. Cervical dislocation was performed in pregnant females after anesthesia with Isoflurane (BioChimico, Itatiaia, Rio de Janeiro and Cristália ® , Itapira, Sao Paulo). In the males, cervical dislocation 48 was performed without anesthesia to minimize possible seminal alterations: in a parallel experiment (data not shown), we verified sperm aggregation and agglutination when isoflurane was used for the euthanasia procedure.
Reproductive performance. We calculated the mating rate (number of females with vaginal plug/total number of females that copulated × 100) 55 per experimental group.
In vitro embryo development. Experimental groups included 7 males in the senile group and 6 males in the young group; these males were mated monogamously with 13 hormonally synchronized females (2-4 months old). A vaginal plug was not observed in two females of the senile group, with the consequent absence of embryos and further embryo development. Therefore, we used 6 hormonally synchronized females for the young group and 5 females for the senile group, providing 9 degrees of freedom for the residue in statistical analyses. The females were euthanized on the 1st day of gestation, and the reproductive system was accessed according to Nagy et al. 54 . The presumptive zygotes were released from the oviduct after washing with HH and exposed to a 0.1% hyaluronidase solution in phosphate buffered saline (PBS) to remove cumulus cells, washed two times in HH plus 5% fetal calf serum (FCS) and then in KSOM medium (MR 020P-5D, Millipore, Massachusetts, USA). Zygotes were cultured in vitro (IVC) in 30 µl drops of KSOM medium, covered with 1-2 ml mineral oil, at 38.5 °C, 5% CO 2 , 5% O 2 and 90% N 2 , under high humidity, for 4.5 days.
On day 1.5 of IVC, we assessed the cleavage rate (number of cleaved embryos/number of total structures × 100). On day 4.5, the blastocyst rate, including early, expanding, expanded, hatching, and hatched blastocysts (number of blastocysts/number of total structures × 100), and the embryo development rate (number of blastocysts/number of cleaved embryos × 100) were assessed. Embryo evaluations were performed under a stereomicroscope (Olympus SZ61, Olympus ® , Tokyo, Japan) at 60× magnification.
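The rates defined above (and the mating rate from the reproductive-performance section) are simple percentages; a minimal sketch of how they can be computed (the function names are ours for illustration, not from the paper):

```python
def rate(numerator, denominator):
    """Generic percentage rate used throughout the paper: numerator / denominator x 100."""
    return 100.0 * numerator / denominator

def mating_rate(plugged_females, copulated_females):
    # number of females with vaginal plug / total number of females that copulated x 100
    return rate(plugged_females, copulated_females)

def cleavage_rate(cleaved, total_structures):
    # number of cleaved embryos / number of total structures x 100
    return rate(cleaved, total_structures)

def blastocyst_rate(blastocysts, total_structures):
    # number of blastocysts / number of total structures x 100
    return rate(blastocysts, total_structures)

def embryo_development_rate(blastocysts, cleaved):
    # number of blastocysts / number of cleaved embryos x 100
    return rate(blastocysts, cleaved)
```

For example, with hypothetical counts of 10 total structures, 8 cleaved embryos and 4 blastocysts, the cleavage, blastocyst and embryo development rates would be 80%, 40% and 50% respectively.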
Fetal development. For fetal development assessment, experimental groups included 20 senile and 18 young males, which were monogamously and polygamously mated with 43 hormonally synchronized females (2-4 months old): 22 females for the senile group and 21 females for the young group. A vaginal plug was observed in 10 females mated with senile males and 17 mated with young males. On the 16th day of gestation, females mated with senile and young males were euthanized to evaluate the number of total structures, viable fetuses and resorption sites, the length and weight of the fetuses, and the area, length and weight of the placenta per male. The litter average of each dependent variable per male was used to perform the statistical analysis. We considered only 5 litters from senile males and 9 litters from young males; therefore, 15 senile males and 9 young males did not generate litters. The experiments were conducted with 14 experimental units.
Females were euthanized 16 days after vaginal plug detection. The female reproductive tract was accessed as described previously. Hysterectomy (technique adapted from Olson and Bruce 56 ) was performed and the uterus was placed in a 35 mm Petri dish. Uterine horns were sectioned longitudinally, and the content examined. The numbers of total structures (viable fetuses plus resorption sites), viable fetuses (compatible with gestational age, E16.5) 57 and resorption sites were recorded. Fetuses and their extraembryonic tissues were removed and weighed on a digital analytical balance (model AG245, Marshall Scientific ® , Hampton, USA), and the fetal:placental weight ratio was calculated.
Photos of fetuses and placentas were taken using a megapixel digital color camera (Olympus LC30, Olympus ® , Munster, Germany) attached to a stereomicroscope (Olympus SZ61, Olympus ® , Tokyo, Japan). Measurements of fetal length (crown to rump) and of placental length and area were performed with ImageJ software (Image Processing and Analysis in Java version 1.52j, public domain, National Institutes of Health, USA) and CellSens ® software (Olympus Live Science ® , Olympus ® , Tokyo, Japan).
Sperm evaluation.
Five animals of the senile group and four animals of the young group were randomly selected for semen evaluation, providing 7 degrees of freedom for the residue used in statistical analysis. We performed a power analysis (PROC POWER, SAS System for Windows 9.3) on the sperm variables (CASA and cytometry) to decide the minimal number of experimental units needed to provide a power higher than 80%.
Sperm was collected from the epididymis cauda and vas deferens according to Yamashiro et al. 58 and Kishikawa et al. 59 . Mitochondrial membrane potential was evaluated with the fluorescent probe JC-1 according to Castro et al. 60 . Because JC-1 is a metachromatic probe, mitochondria with low and medium potential fluoresce green and those with high potential fluoresce red. In a dark room, 0.5 µL of JC-1 (1 µM final concentration) was added to 7.5 µL of sample containing 175,000 sperm and 30 µL of CZB-Hepes medium. Samples were analyzed by flow cytometry after 5 min of incubation. Sperm plasma membrane and acrosome integrity were evaluated by flow cytometry using propidium iodide (PI; 0.5 mg/mL, 0.9% NaCl v/v) and fluorescein isothiocyanate conjugated with Pisum sativum agglutinin (FITC-PSA; 100 µg/mL, sodium azide solution at 10%) according to Hamilton et al. 61 . Propidium iodide fluoresces when it binds to DNA; however, it only penetrates the cell when the membrane is damaged, thus indirectly revealing plasma membrane damage by emitting red fluorescence. PSA has specificity for acrosome membrane glycoproteins; when conjugated to FITC, it marks the damaged acrosome with yellowish-green fluorescence. In a dark room, the FITC-PSA solution was prepared by adding 190 µL of sodium azide solution at 1% to 10 µL of FITC-PSA (final concentration 24.3 µg/mL), and then 11.3 µL of PI (final concentration 6.87 µg/mL) was added to the solution. About 175,000 sperm were incubated with 13 µL of this solution and 30 µL of CZB-Hepes. Samples were analyzed by flow cytometry after 5 min of incubation. This association of probes separates four sperm populations: intact membrane and intact acrosome (IMIA), intact membrane and damaged acrosome (IMDA), damaged membrane and intact acrosome (DMIA), and damaged membrane and damaged acrosome (DMDA).
The fluorescent probe CellROX Green ® (Molecular Probes, Eugene, OR, USA) was used to evaluate oxidative stress. CellROX Green ® quantifies intracellular reactive oxygen species (ROS): upon oxidation and subsequent binding to DNA, it emits a more intense green fluorescence. According to de Castro et al., in a dark room, 0.6 µL of CellROX Green ® (final concentration 5 µM) was added to 60 µL of CZB-Hepes. About 175,000 sperm were incubated with 1.85 µL of this solution, and after 20 min of incubation 0.7 µL of PI (final concentration 6.87 µg/mL) was added 60 . Samples were analyzed by flow cytometry after 10 min of incubation.
The mean of these scans, obtained with the Hamilton Thorne IVOS Ultimate 12 (CASA) system, was used for statistical analysis.

Statistical analysis. Statistical analysis was performed using the Statistical Analysis System 9.3 software (SAS Institute, Cary, NC, USA). The samples were tested for normality of residues and homogeneity of variances. We performed the t-test procedure for independent variables. A correlation test (Spearman) was performed between sperm traits, embryo development in vivo, and fetal development, considering or not the experimental groups. The probability (p) values are presented along with the results for each variable, with significance set at p < 0.05 to reject the null hypothesis. The data are presented as mean ± SEM (standard error of the mean).
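The Spearman correlation above was computed in SAS; purely as an illustration of what the statistic does (this is our own sketch, not the authors' code), one can rank both variables (using average ranks for ties) and take the Pearson correlation of the ranks:

```python
def ranks(xs):
    """Return ranks of xs (1-based), assigning average ranks to tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        # extend j to cover the whole group of tied values
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A perfectly monotone increasing relationship gives 1.0 and a monotone decreasing one gives -1.0, regardless of whether the relationship is linear — which is why Spearman is a natural choice for rate data such as those analysed here.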
"year": 2022,
"sha1": "13dfa75de00033b677878da35c30efcadfc3fc9a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0717d574824382cfb9e1696de26963947df14f18",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Aspilota-group (Hymenoptera: Braconidae: Alysiinae) diversity in Mediterranean Natural Parks of Spain
Abstract This work analyses the biodiversity of the Aspilota-group (Hymenoptera: Braconidae: Alysiinae) in three Mediterranean Natural Parks: the Natural Park of La Font Roja, the Natural Park of Las Lagunas de la Mata-Torrevieja and the Natural Park of La Tinença de Benifassà. Sampling was carried out from April 2004 to December 2007. In total, 822 specimens, belonging to 52 species, were collected. Alpha, beta and gamma diversities were analysed, and the Tinença park was shown to have higher diversity than Font Roja and Torrevieja. The structure of the Aspilota-group community was also analysed.
Introduction
Mediterranean ecosystems are very important in terms of biodiversity, and are thus considered hotspot areas (Myers et al. 2000). Landscapes and habitats grow in complexity over time, as a consequence of ecological processes. For example, Mediterranean forest landscapes rich in evergreen species frequently intersect with brushwood, pasture, farming and ranching areas. In close proximity to these areas, however, it is often possible to identify zones that have been reclaimed by highly diverse natural communities after the cessation of human intervention. Despite the huge resistance displayed by Mediterranean biotopes to human pressure, isolation and fragmentation are unavoidable (Pungetti 2003), resulting in the emergence of isolated patches in the landscape.
In land environments, the information provided by arthropods can be very valuable for the adoption of measures aimed at guaranteeing the diversity and welfare of protected forests (Pyle et al. 1981, Pearson and Cassola 1992, Kremen et al. 1993), especially insects with a high sensitivity to alterations in environmental resources and conditions. Parasitoid Hymenoptera of the family Braconidae, with around 40,000 catalogued species, are especially pertinent in this respect due to their particular biology (Wharton et al. 1997).
Braconidae are the second largest family within Hymenoptera; the majority of species are primary parasitoids of immature stages of Lepidoptera, Coleoptera and Diptera (Sharkey 1993). These wasps are of enormous ecological interest because of their role in controlling the populations of phytophagous insects, causing direct effects on the host species' population size and indirect effects on the diversity and survival of host plants (LaSalle and Gauld 1992). Additionally, they can indicate the presence or absence of said populations (Matthews 1974, LaSalle and Gauld 1992). Finally, some species can also be relevant from an economic point of view, because of their potential for pest control (González and Ruíz 2000).
Because of the type of relationship established between Braconidae populations and host species, and the effect that climatic factors and human activity have upon this relationship, we can consider that Braconidae (especially those adopting koinobiont strategies) are a valid parameter for the determination of human effects on these communities and the assessment of specific diversity within a region (González and Ruíz 2000).
Owing to the average temperatures throughout the year (15-20 °C) and the low average rainfall, the park is classified as dry and thermo-Mediterranean.
The Natural Park of Las Lagunas de la Mata-Torrevieja is located to the south of Alicante province, and extends over 3,700 ha, 2,100 of which are covered by water. The park is notable for its saline soils, extensive wild orchid population (Orchis collina Banks and Sol. ex Russell), differentiated areas of Senecio auricula Bourgeau ex Coss and salt marsh plants of the genus Limonium, reed and bulrush areas with abundant grass plants such as Arthrocnemum sp. and Juncus sp., and Mediterranean areas populated by Quercus coccifera L., Pinus halepensis Mill. and Thymus sp. The climate is arid with an annual rainfall below 300 mm and high temperatures.
The Natural Park of La Tinença de Benifassà is located to the north of Castellón province, and extends over approximately 25,814 ha. The park covers an extensive and well-preserved mountainous area, encompassing numerous and widely varied landscapes associated with medium and high-altitude Mediterranean regimes and hosting a high biodiversity of fauna and flora. It is possible to differentiate forests of Pinus sylvestris L., Pinus uncinata Mill. and Fagus sylvatica L., Juniperus communis L., and Quercus ilex L., etc., alternating with crops of Prunus sp., Corylus sp., etc. Climate conditions are continental humid, with annual average temperatures below 12 °C: freezing conditions are possible throughout most of the year. Rainfall varies in different zones according to topographical features, and the annual precipitation ranges from 600 to 1,000 l/m 2 . The park is contained within the supramediterranean bioclimate.
Sampling Design and Data Collection
The sampling stage covered the period between April 2004 and December 2007. During this period, a Malaise trap was placed in each natural park to collect specimens. Each area was visited weekly to replace the collecting bottle. Specimens captured were preserved in 70% ethanol until final preparation.
Once separated, the specimens were sorted using the subfamily keys of van Achterberg (1990) in order to work only with Alysiinae specimens. Subsequently, identification to genus was carried out using keys. Finally, species identification was carried out using the keys of Fischer (1993b), Fischer (2003), Fischer et al. (2008a), Fischer et al. (2008b) and Tobias (Tobias et al. 1986). The studied specimens are deposited with barcode labels in the Entomological Collection at the University of Valencia (Valencia, Spain; ENV). General distribution data were provided by Yu et al. (2012).
Data analysis
Once the specimens of Aspilota-group had been identified, alpha, beta and gamma biodiversity indexes for each trap and habitat were calculated to gain insight into the richness, abundance, dominance and complementarity values of each area.
Alpha diversity reflects the richness in species of a homogeneous community. This sort of diversity was measured by taxa richness, abundance and dominance.
• Taxa richness: used for valuing the richness of sampling areas. It was measured using the Margalef index, a measure of specific richness that transforms the number of species per sample into the proportion to which species are added by expansion of the sample, establishing a functional relationship between the number of species and the total number of specimens (Moreno 2001).
• Species richness estimators: measured using Chao 2, to estimate what percentage of the possible total species richness was recorded (Moreno 2001).
• Abundance: used for valuing the faunal composition of a given area (Magurran 1988). This was undertaken using the Shannon-Weaver index because it measures equity, indicating the degree of uniformity in species representation (in order of abundance) while considering all samples. This index measures the average degree of uncertainty in predicting which species an individual randomly picked from a sample belongs to (Magurran 1988, Moreno 2001, Villarreal et al. 2004).
• Dominance: the occurrence of genera or dominance value was calculated with the Simpson index, often used to measure species dominance in a given community, with negative values thus representing equity. It measures the representativity of the most important species without considering the other species present, expressing the probability that two individuals randomly picked from a sample will belong to the same species (Magurran 1988).
• Community structure: in order to complement the diversity analyses and enquire into community structure, log-series, log-normal and broken-stick models were also applied (Magurran 1988). The log-series model represents a community composed of a few abundant species and a high number of rare species. The broken-stick model refers to maximum occupation of an environment with equitable sharing of resources between species. Finally, the log-normal model reflects an intermediate situation between the two (Soares et al. 2010).
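As an illustration of the alpha-diversity indices listed above (our own sketch — the paper used the PAST package), the Margalef, Shannon and Simpson indices can all be computed from a vector of species abundances. With 39 species and 383 specimens, the Margalef formula reproduces the Tinença value D_Mg ≈ 6.389 reported in the Results:

```python
import math

def margalef(abundances):
    """D_Mg = (S - 1) / ln(N): species richness corrected for sample size."""
    s = sum(1 for a in abundances if a > 0)  # number of species S
    n = sum(abundances)                      # total specimens N
    return (s - 1) / math.log(n)

def shannon(abundances):
    """H' = -sum(p_i ln p_i): equity / uniformity of species representation."""
    n = sum(abundances)
    return -sum((a / n) * math.log(a / n) for a in abundances if a > 0)

def simpson(abundances):
    """lambda = sum(p_i^2): probability that two randomly picked
    individuals belong to the same species (dominance)."""
    n = sum(abundances)
    return sum((a / n) ** 2 for a in abundances)
```

For a single-species community Simpson dominance is 1, while for a perfectly even community Shannon equals ln S; the 39-species, 383-specimen example gives D_Mg = 38 / ln 383 ≈ 6.389.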
Using the data obtained from the parks, each of these models was applied to calculate the expected number of species, grouping species into logarithmic abundance classes (Magurran 1988, Tokeshi 1993, Krebs 1999). To test the significance of the models, the expected species values were compared with those of the observed species through chi-square analysis (Zar 1999).
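As a sketch of this procedure (an illustration, not the authors' implementation): the broken-stick model has a closed-form expectation for the abundance of the i-th ranked species, and a chi-square statistic then compares observed with expected values:

```python
def broken_stick(S, N):
    """Expected abundance of each of S ranked species for N individuals:
    E(n_i) = (N / S) * sum_{k=i}^{S} 1/k.  The expectations sum to N."""
    return [(N / S) * sum(1.0 / k for k in range(i, S + 1))
            for i in range(1, S + 1)]

def chi_square(observed, expected):
    """Goodness-of-fit statistic sum((O - E)^2 / E) between observed and
    expected frequencies, to be compared against a chi-square table."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

A low chi-square value (high p-value) means the observed abundance distribution is compatible with the model, which is how the log-series and log-normal fits reported below for the three parks are judged.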
Beta diversity is the degree of change or substitution in species composition between different communities within the same landscape. In order to measure beta diversity, Jaccard and Complementarity indexes were used and cluster analyses were also performed.
• Jaccard index: relates the total number of shared species to the total number of exclusive species. It is a qualitative coefficient whose interval goes from 0, when no species are shared between both sites, to 1, when both sites have an identical composition (Moreno 2001, Villarreal et al. 2004).
• Complementarity index: indicates the degree of similarity in species composition and abundance between two or more communities (Moreno 2001, Villarreal et al. 2004).
• Cluster analysis: employed to calculate the degree of correlation based on similarity/dissimilarity. For the calculation of these values, the statistics-processing software PAST was used (Hammer et al. 2001).
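A minimal sketch of the Jaccard index on species sets (illustrative only — the paper used PAST). With hypothetical species lists matching the counts reported in the Results (Font Roja 23 species, Tinença 39, 17 shared) it gives I ≈ 0.378, matching the reported 0.377 up to rounding; the complementarity values in Table 3 are consistent with taking C = 1 − I:

```python
def jaccard(a, b):
    """I_J = |A ∩ B| / |A ∪ B|: 0 = no shared species, 1 = identical composition."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def complementarity(a, b):
    """Degree of distinctness between two communities, here taken as 1 - I_J
    (which reproduces the Table 3 values up to rounding)."""
    return 1.0 - jaccard(a, b)

# Hypothetical species labels with the same counts as Font Roja (23 spp.)
# and Tinença (39 spp.), sharing 17 species:
shared = {f"sp{i}" for i in range(17)}
font_roja = shared | {f"fr{i}" for i in range(6)}    # 17 + 6 = 23 species
tinenca = shared | {f"ti{i}" for i in range(22)}     # 17 + 22 = 39 species
```

Here jaccard(font_roja, tinenca) = 17/45 ≈ 0.378 and complementarity ≈ 0.622, in line with the Tinença–Font Roja figures reported below.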
Finally, the gamma diversity measurement indicates the diversity value of all environments under study, as expressed in the richness indexes for each area (alpha diversity) and the difference between them (beta diversity) (Schluter and Ricklefs 1993, Villarreal et al. 2004).
However, the species were not evenly distributed when the different Natural Parks were considered separately. Thus, 39 species were identified in the Natural Park of La Tinença de Benifassà (Tinença), 23 in the Natural Park of Carrascal de La Font Roja (Font Roja) and 21 in the Natural Park of Las Lagunas de la Mata-Torrevieja (Torrevieja).
The genus Dinotrema was the most abundant with 343 specimens, followed by the genera Synaldis (271) and Aspilota (108). On the other hand, when analysing the number of captures, it was observed that 383 individuals were collected in Tinença, 257 in Torrevieja and 182 in Font Roja. In Tinença the most frequently captured genus was Dinotrema with 202 specimens, followed by Synaldis with 95. However, in Torrevieja and Font Roja the most frequently captured genus was Synaldis, with 105 and 71 specimens respectively, followed by Dinotrema with 93 and 48.
The Margalef index (D Mg ) shows that the Natural Park of Tinença hosted a higher species richness, with D Mg = 6.389, while Font Roja reached a value of 4.228 and Torrevieja 3.604. These values might be so discordant as a consequence of the number of identified species differing widely between Tinença (39 species) and the other habitats. Font Roja and Torrevieja have similar D Mg values because their species numbers are very close (23 and 21 species respectively).
On the other hand, with the species richness estimator (Chao 2), it is possible to conclude that the Natural Park where our sampling effort has enabled the closest approximation to the estimated maximum richness is Font Roja, with a value of 94.62%, followed by Tinença and Torrevieja with values of 82.97% and 79.75% respectively.
When analysing the structure of the community, it is necessary to distinguish between two types of analysis: proportional abundance indices and parametric models.
First, the community structure was studied by proportional abundance indices, among which we differentiate dominance indices, such as Simpson or Berger-Parker, and equity indices, such as Shannon-Wiener.
The results obtained with the Simpson and Berger-Parker indices (Table 1) show a dominance of the community structure by one or more species with high population abundance. The Shannon index suggested a similar trend in the distribution of dominant genera; discrepancies were merely due to different numbers of rare genera (those represented by few specimens). Finally, applying parametric models (Table 2), the analysis of the Aspilota-group community structure showed that Font Roja and Torrevieja comply with the log-series model, indicating that these communities have an unstable structure, composed of a few abundant species and a large number of rare species. These results show that habitat does not determine community structure, because each sampling area presents very specific botanical and faunal composition and climatic conditions.
Table 1.
Table 2. Expected frequency of species (exp f) according to abundance models (log-series, log-normal and broken-stick) for the Aspilota-group community.

Tinença, however, shows compliance with both the log-series and log-normal models, with more or less the same p-value (0.501 and 0.513 respectively). This fact could indicate two types of behaviour. On the one hand, this community could be unstable and composed of a few abundant species and a large number of rare species. On the other hand, it could indicate that the number of specimens in this community is conditioned by a large number of factors associated with the high temperatures and low rainfall that occur in this area, forcing species to adapt to very strict conditions.
In order to obtain beta diversity (similarity/dissimilarity) values between the different areas under consideration, the Jaccard index was calculated. The resulting values indicated a certain degree of dissimilarity between the Natural Parks, although Font Roja and Tinença are the closest parks (I = 0.377) while Font Roja and Torrevieja are the farthest (I = 0.189). These results were also observed in the Jaccard cluster obtained through cluster analysis, for which the level of correlation was r = 0.8863 (Fig. 1).
However, applying Principal Component Analysis (PCA) (Fig. 2) shows that there are many species unique to each Natural Park (16 for Tinença, 7 for Torrevieja and 6 for Font Roja), while the remaining species are shared (17 for Font Roja-Tinença, 12 for Tinença-Torrevieja and 7 for Font Roja-Torrevieja). This could be due to the fact that Tinença and Font Roja are Mediterranean forests while Torrevieja is a lagoon.
The species replacement values given by the Whittaker index (Table 3) show that the Natural Park of La Tinença de Benifassà has little species replacement with the other Natural Parks, while Torrevieja and Font Roja show some replacement. This relationship may be possible because these two parks are close to each other, while Tinença is far away. The Complementarity index (C) suggested that Font Roja and Torrevieja have the highest complementarity (0.810), followed by Tinença and Torrevieja with 0.723 and Tinença and Font Roja with 0.622. These results showed a fair degree of complementarity, but also indicated the presence of different species in each habitat (Table 3). This, again, could be explained by the fact that Font Roja and Torrevieja are close to each other while Tinença is farther apart.
Finally, gamma diversity reached a value of 52.954, which is practically identical to the total species richness recorded in the three Natural Parks (species number = 53).
Discussion
Regarding the faunistic study, four captured species are new records for Spain: Aspilota delicata, Aspilota procreata, Dinotrema costulatum and Dinotrema crassicostum. Regarding the biodiversity study, the Natural Park of La Tinença de Benifassà presents the greatest abundance and species diversity, followed by the Font Roja and Torrevieja parks. On the other hand, the analysis of the community structure showed that the Font Roja and Torrevieja Natural Parks fit the log-series model. This indicates that these communities are unstable and are composed of a few abundant species and a large number of rare species, while the Aspilota-group community present in Tinença fits both the log-series and log-normal models. This demonstrates that the structure of the community is not determined by the habitat, but conditioned by a large number of factors associated with the high temperatures and low rates of precipitation, which may force the species to adapt to strict environmental conditions. Furthermore, when comparing parks, it can be seen that La Tinença and La Font Roja show the most similarities with each other, whilst the Font Roja and Torrevieja parks show a larger group of species that complement each other.

Table 3. Whittaker index and Complementarity index values for the Aspilota-group between Natural Parks.
On the other hand, comparison with studies carried out in other areas of Spain, such as Artikutza, shows that the Aspilota-group was the most abundant group captured there, representing approximately 75.77% of captures (Peris-Felipo and Jiménez-Peydró 2011). The information about abundance is very interesting due to the relationships that these parasitic wasps have with their hosts, and could be used to estimate the biodiversity present in each area.
Finally, we conclude that, although this study was conducted to determine the diversity and community structure of the Aspilota-group, further studies of Braconidae in different areas, together with DNA-barcoding studies, are recommended to increase the knowledge of this large group, which still remains largely unknown.
3D dose computation algorithms
The calculation of absorbed dose within patients during external photon beam radiotherapy is reviewed. This includes the modelling of the radiation source, i.e. in most cases a linear accelerator (beam modelling), and examples of dose calculation algorithms applied within the patient, i.e. the dose engine. For the first part, the beam modelling, the different sources in the treatment head, such as the target, filters and collimators, are discussed, as well as their importance for the photon and electron fluence reaching the patient. The consequences of removing the flattening filter, which several vendors have now made commercially available, are also shown. The pros and cons of different dose engines' ability to account for density changes within the patient are covered (type a and b models). Engines covered are, for example, pencil-beam models, collapsed cone superposition/convolution models and combinations of these, as well as a glimpse of Monte Carlo methods for radiotherapy. The different models' ability to calculate dose to medium (tissue) and/or water is also discussed. Finally, the role of commissioning data, especially measurements, in today's model-based dose calculation is presented.
Introduction
Clinically used dose planning systems have for many years used calculation algorithms for X-rays and γ-beams that make use of empirically determined inhomogeneity corrections. These corrections have been applied either to measured dose distributions (using e.g. film) or to simple analytical models describing the distribution in homogeneous water. Modelling of the dose distribution has not relied on basic photon and electron interactions (first principles). The methods in use today are often model-based algorithms, where one separates the modelling of the radiation source from the in-patient dose calculation, the so-called dose engine [1].
Historical algorithms
A short description of older algorithms is in order, and we divide them depending on their ability to model scattered radiation. These models can be summarised as correction models applied to the dose distribution in water.
Algorithms without Scatter Modelling
In this group of algorithms, e.g. effective attenuation, isodose shift, tissue air ratios (TAR) and effective source skin distance (SSD) [2], the only influence of the patient geometry is through a scaling of the radiological depth for the calculation of absorbed dose. More or less sophisticated versions exist: approximations of homogeneous media, regional corrections or pixel-by-pixel corrections. These methods only account for inhomogeneities with respect to their density along the fan line. A better method was introduced with the algorithm by Batho, where the position of the inhomogeneity is also considered [3]. These methods were all, in principle, developed prior to the use of CT. The resulting dose distributions from these models can in some clinical cases differ by up to 20% from measurements, especially in low-density media irradiated with narrow (5 × 5 cm²) high-energy photon beams [4]. Deviations of the same magnitude between measurements and calculations were found for a tangential beam geometry using the effective attenuation method. Of these methods, the Batho method estimates the absorbed dose with the highest accuracy. Lulu and Bjärngard have extended the Batho algorithm to account also for the lateral extension of inhomogeneities [5,6].
Algorithms with Scatter Modelling
The next group of methods all use the radiological depth to determine the primary dose component but also account for scatter produced in the irradiated volume.
The Equivalent Tissue Air Ratio (ETAR) algorithm relies on the calculation of an effective homogeneous density, which is assigned to each point using weighting factors applied to the surrounding points (voxels) [7]. Geometric dimensions (e.g. depth, field size) are scaled from unit density to the effective density using O'Connor's theorem [8,9]. The weighting factors used are calculated using inelastic scattering cross sections for water, i.e. basic principles are applied. The agreement with measurements is better for this method than for the simple algorithms. An accuracy of ±3% can be expected in most cases [7]; however, in some situations, such as tangential beams, the deviation for the ETAR method can increase to up to 5% [10].
An extension of the ETAR method using TARs calculated by Monte Carlo has been proposed [10]. The new TARs are more accurate, especially for small fields, since they fall to zero for zero field size. The absence of lateral electronic equilibrium for narrow beams in low-density media is therefore modelled better. The final equation used is a superposition integral similar to the methods discussed by Mackie et al [11] and Mohan et al [12].
In general, none of the methods discussed deals with the lack of electronic equilibrium for narrow beams of high-energy X-rays in low-density media. All these models assume that the energy transferred to electrons is absorbed locally, i.e. that the collision kerma is equal to the absorbed dose.
Multi source models
The limited accuracy of the methods described above has initiated much work on methods based on physical principles. The approach mainly used today is fluence or energy fluence multi-source models, which describe the radiation source, usually the treatment head of the accelerator, at different levels of detail. Examples of such sources are the target generating X-rays from the impinging electrons, which is considered the primary radiation source, and scattered radiation from the primary collimator, beam flattening filter (if present), monitor chamber, multi-leaf collimator and conventional jaw system. A common simplification is to reduce all scatter sources to a single one, usually placed at the position of the flattening filter. This position is chosen since the flattening filter accounts for the majority of the scattered radiation from the treatment head.
Once the model of the radiation source is set up, one calculates the output from it as an energy fluence distribution. When Monte Carlo is used, this can be a phase space of the energy fluence, i.e. a description of the type of particles, their position, direction and energy. The energy fluence distribution is modulated by the field-shaping parts of the treatment head. The sources' lateral distribution and relative magnitude are usually adjusted based on conventional measurements, e.g. lateral profiles, depth doses, and output factors in air and water.
In-patient or dose engine
Dose calculation is based on the energy fluence described above impinging on and propagating into the patient. The patient is represented by a 3D voxel matrix, where each voxel represents the density (relative physical or electron density depending on the system). For some models, a medium may also be given for each voxel.
Point kernels
The energy fluence is ray traced through the patient matrix and attenuated considering the density of each voxel and, when appropriate, also the medium of the voxel. For each voxel, the TERMA (Total Energy Released per unit MAss) is calculated as the product of the energy fluence Ψ and the mass attenuation coefficient μ/ρ. The TERMA T differential in energy E at an arbitrary point in 3D space given by the vector r is thus

T_E(r) = (μ/ρ)(E, r) Ψ_E(r)

From the TERMA distribution one gets the absorbed dose by a convolution with a point kernel P describing the transport of energy by the primary electrons released (collision kerma) and by scattered photons:

D(r) = ∫ T(r′) P(r − r′) d³r′

(The differential in energy has been excluded for clarity; the integral has to be performed over all energies.)
It is, however, quite common to perform the integral over energy before the convolution, so that only one TERMA matrix and one single point kernel are required. The time-consuming convolution integral can be solved using Fourier transforms, where it is replaced by the inverse Fourier transform of the product between the transforms of T and P. This approach is fundamental in the work by Boyer, who used the Fast Fourier Transform (FFT) to solve the convolution of dose distribution kernels with photon fluence distributions to give the final dose in 3D [13,14]. However, if the kernel function P varies with position due to changes in density, an analytical superposition integral has to be solved instead.
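As a sketch of the FFT route, the following toy example convolves a TERMA grid with an energy-conserving point kernel via the convolution theorem, D = F⁻¹[F(T)·F(P)]. The arrays are illustrative, and the isotropic exponential "kernel" is a crude stand-in for a Monte Carlo point kernel, not a physical one.

```python
import numpy as np

def fft_convolve_dose(terma, kernel):
    """Circular 3D convolution of a TERMA grid with a point kernel via the
    convolution theorem. Both arrays share the same shape; the kernel is
    stored with its origin at voxel (0, 0, 0), as np.fft expects."""
    return np.real(np.fft.ifftn(np.fft.fftn(terma) * np.fft.fftn(kernel)))

shape = (16, 16, 16)
terma = np.zeros(shape)
terma[4:12, 4:12, 4:12] = 1.0  # uniform TERMA in an "irradiated" block

# wrap-around voxel coordinates so the kernel is centred on the origin
coords = [np.fft.fftfreq(n, d=1.0 / n) for n in shape]
z, y, x = np.meshgrid(*coords, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
kernel = np.exp(-r)     # crude stand-in for a point dose-deposition kernel
kernel /= kernel.sum()  # normalise so the kernel conserves energy

dose = fft_convolve_dose(terma, kernel)
```

Because the kernel is normalised, the convolved dose integrates to the same total as the TERMA; a position-dependent kernel would break this product form and require a superposition sum instead, as noted above.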
The point dose kernels used, which include the electrons released in the first interaction as well as single and multiple scattered photons, are generally calculated using Monte Carlo simulations, but analytical methods are also applicable.
Convolution in inhomogeneous medium
Inhomogeneous media can be included in the convolution integral either using a large number of dose distribution kernels, one per density, or a single kernel combined with the following scaling theorems:

"In a medium of a given composition exposed to a uniform flux of primary radiation (such as X-rays or neutrons) the flux of secondary radiation is also uniform and independent on the density of the medium as well as of the density variations from point to point." [15]

"In a given irradiation system the ratio of scattered photons to primary photons remains unchanged when the density of the irradiated material is changed if all the linear dimensions of the system are altered in inverse proportions to the change in density." [8,9]

Density in this context should be interpreted as interaction sites per unit volume, e.g. electron density for incoherent scattering of photons. It is also assumed that the amount of secondary radiation produced at a point is proportional to the local number of interaction sites.
Mackie et al used superposition kernels calculated for several densities, with interpolation and scaling between them [11]. A similar approach using kernels for only a single density combined with the scaling theorems has been applied [12,16-18]. They all use ray tracing of scatter between the interaction and absorption points and employ the scaling theorem for inhomogeneities. Transport of energy by electrons is included in the kernels; therefore, electronic disequilibrium is modelled.
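The single-kernel density scaling can be sketched as follows, using a simplified radial model with an invented exponential kernel (real kernels are tabulated from Monte Carlo): in a uniform medium of relative density ρ, distances shrink by 1/ρ and the dose kernel value scales by ρ², which leaves the total deposited energy unchanged.

```python
import numpy as np

def scaled_kernel(kernel_water, rho_rel):
    """Density-scale a radial water dose kernel A_w(r):
    A_rho(r) = rho**2 * A_w(rho * r).
    The rho**2 factor compensates for the 1/rho stretching of distances so
    that energy = integral of dose * density over volume is preserved."""
    return lambda r: rho_rel**2 * kernel_water(rho_rel * r)

kernel_water = lambda r: np.exp(-r)              # illustrative radial kernel
kernel_lung = scaled_kernel(kernel_water, 0.25)  # low-density medium
```

A numerical check of the energy integral ρ∫A(r)·4πr²dr for the scaled and unscaled kernels confirms that the ρ² prefactor is exactly what conservation requires.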
Pencil beam kernels
A method to increase the calculation speed and make it clinically feasible was the introduction of pencil beam kernels, where the dose calculation problem is reduced from a 3D to a 2D convolution. By integrating the point kernel along the depth direction, we get a 2D description of the dose distribution. Pencil beams have been created in various ways, for example from measurements by radial differentiation of the relative dose on the central axis derived from broad-beam dosimetric quantities [19]. Another method was the differentiation of radial beam data [20]. Monte Carlo methods have also been used to calculate pencil beam kernels in water [16,21,22]. Usually the pencil beams are parametrized, and the actual parameter values are found by fitting to measured data [22].
Applying the pencil beam convolution is described in principle by the following equation:

D(x, y, z) = ∫∫ Ψ(x′, y′) P(x − x′, y − y′; z) dx′ dy′

Here the energy fluence distribution Ψ at a certain specified plane is convolved with the pencil beam kernel P. In heterogeneous media, the pencil beam kernel is scaled along the propagation direction by replacing z with the radiological depth z_radiol, considering the density of the voxels along the ray. In the initial implementations, no scaling is performed perpendicular to the propagation direction; thus, the pencil beam convolution is a so-called type a algorithm [23]. There is, however, today one pencil beam model where a lateral scaling of the kernel has been added [24]; its commercial implementation is the analytical anisotropic algorithm (AAA) [25,26]. The performance of pencil beam algorithms has been reported in several publications [27-32].
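The radiological depth used for this scaling is simply the density-weighted path length along the ray. A minimal sketch, assuming the relative densities (water = 1) have already been sampled voxel by voxel along the ray:

```python
import numpy as np

def radiological_depth(rel_density, voxel_size):
    """Cumulative radiological depth along a ray: running sum of
    (relative density * path length) over the voxels traversed so far."""
    return np.cumsum(rel_density) * voxel_size

# example ray: 2 cm of tissue, 10 cm of lung-like medium, 2 cm of tissue
rho = np.concatenate([np.ones(4), 0.25 * np.ones(20), np.ones(4)])
z_radiol = radiological_depth(rho, voxel_size=0.5)  # cm
```

In this example the 14 cm geometric path corresponds to only 6.5 cm of water-equivalent depth, which is why the kernel is compressed far less inside the low-density region.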
Collapsed cone convolution
Another approximation to increase speed (which maybe is not a problem today) as well as to manage inhomogeneities is to discretize the point kernel. The kernel is discretized in about 100 directions, each direction representing a cone in which all energy is collapsed onto the central axis of the cone [33,34]. When collapsing the energy in the cone onto the central ray, the inverse square dependence is removed. The distribution of cones is not isotropic; instead, a higher density is used in the forward direction, because the majority of the energy in high-energy beams is transported in this direction. A few cones take care of the backscatter. The density of cones should be high enough, where a significant amount of energy is deposited, that each voxel is passed by one or more collapsed cones.
The convolution integral is commutative; changing the order of the functions does not change the result. This property can be interpreted as shown in the following figure: in the dose deposition view, energy or TERMA from the surrounding voxels is summed up in the deposition voxel, and when all contributions are summed, the total dose in the voxel is determined. In the interaction view, the TERMA in a voxel is spread out to the surrounding voxels, and all contributions from interaction points have to be covered. In principle, the first approach can be used to calculate the absorbed dose in a single point or voxel only. Implementations based on this approach utilize the possibility of having a dose grid with different spacings to speed up calculations; for example, in areas with small gradients in TERMA or patient density a coarse grid can be used, and in high-gradient areas a finer one.
Another approach used to efficiently perform the convolution is to align the point kernels such that the cones in the same direction overlaps. This makes it possible to ray trace through the TERMA matrix along each cone direction and recursively pick up energy from each traversed voxel, transport, attenuate and then deposit energy along the axis of the cone [33].
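A one-dimensional toy version of this recursion (illustrative only; the published algorithm uses a parametrized polyenergetic kernel per cone, which is not reproduced here) shows the pick-up / deposit / attenuate pattern along one cone axis:

```python
import numpy as np

def collapsed_cone_1d(terma, step, a):
    """Recursive transport along one collapsed-cone direction.

    At each voxel the energy in flight picks up the locally released energy
    (TERMA * step), deposits the fraction given by exponential attenuation
    over the step, and carries the remainder to the next voxel, so energy is
    conserved exactly: deposited + residual = released."""
    att = np.exp(-a * step)                # kernel attenuation over one step
    dose = np.zeros_like(terma, dtype=float)
    in_flight = 0.0
    for i, t in enumerate(terma):
        in_flight += t * step              # pick up energy released here
        dose[i] = in_flight * (1.0 - att)  # deposit over this step
        in_flight *= att                   # remainder continues along cone
    return dose, in_flight

terma = np.ones(32)
dose, residual = collapsed_cone_1d(terma, step=0.5, a=0.3)
```

Aligning the cones of neighbouring kernels makes exactly this kind of sweep possible through the whole TERMA matrix, one pass per discretized direction.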
Linear Boltzmann transport equation
A more direct way of performing in-patient dose calculations is to start from the linear Boltzmann transport equation (LBTE). For many years, Monte Carlo (MC) techniques have been used to solve the problem statistically [35-38]. MC methods use stochastic distributions describing the physics to simulate the transport of individual particles (photons and charged particles) through the density and medium matrix describing the patient geometry. The limitation, at least up to today, is the large number of particles required to reach a sufficiently low statistical uncertainty of the deposited dose in each voxel of interest. MC can be seen as a microscopic solution of the LBTE.
Recently, analytical solutions of the LBTE have been applied to radiotherapy planning problems in a macroscopic approach, i.e. instead of individual particles, energy fluences are created, followed and attenuated in a matrix describing the patient. An exact analytical solution is not possible, mainly due to its complexity, so a numerical solution has to be used, e.g. the discrete ordinates method or grid-based solvers [39-42]. Evaluations of a commercial implementation have been presented by e.g. Fogliata et al [43,44] and Hoffman et al [45].
In short, the grid-based method can be described as follows: the properties of the radiation source and the absorbing medium (patient) are described; the multi-source model (e.g. via a phase space) is used to get the energy fluence to the patient, which is then ray traced through the irradiated volume (note that this is very similar to the point kernel methods discussed above). In the next step, the discrete ordinates method is applied to solve the LBTE, giving the photon energy fluence and the electron sources in each voxel. Finally, the electrons are transported to obtain the electron fluence distribution in each voxel, and by applying the mass stopping power the absorbed dose is determined. The energies of photons and electrons are discretized into 25 and 49 levels, respectively. Transport directions are discretized, depending on the energy involved, into between 32 and 512 directions (these data are from the Acuros implementation [46]).
It is interesting to note that both solutions to the LBTE show inherent errors: for MC we have stochastic uncertainties, while for discrete ordinates solvers the discretization may produce uncertainties. As always, this can be improved by prolonging the computation, by increasing the number of particles in the MC case and by using a finer discretization for the solver.
Dose to medium or dose to water
For many years, there has been a discussion on how to report absorbed dose during radiotherapy procedures [47]. No real consensus exists on whether the dose should be reported as absorbed in water or in the actual medium. With newer algorithms such as MC and grid-based LBTE solvers becoming more common, this problem is growing, since both these approaches can report either.
The issue can be divided into two parts: a) whether the energy/TERMA ray trace considers the physical properties of the traversed voxels or treats all media as water, and b) how the deposition of energy is accomplished. We introduce the formalism D_transport,deposit, where transport and deposit denote the medium used in each step; each can take the value water (w) or medium (m). The kernels in the pencil beam and point kernel approaches are all determined in water, whether analytically, by MC or from experiments. Looking in detail at pencil beam algorithms, scaling according to the traversed density (radiological depth scaling) assumes all media are water, and no corrections are performed during dose deposition. Thus pencil beam models transport and deposit dose in water, Dw,w.
For the point kernel models available, no consistent handling exists. In principle, the TERMA or equivalent can be ray traced considering the medium's physical properties or not. The kernels are determined in water, but when dose deposition occurs, a correction using the mass stopping power ratio between medium and water may be applied. No single conclusion on what is reported can be given; however, we probably have one of the following situations: Dw,w, Dm,m or Dm,w.
MC models have the possibility to transport in water or medium, and the same is valid for dose deposition. Most common is to transport in medium and deposit in medium. If the user wants dose to water, a mass stopping power ratio between water and medium is applied to each voxel [48]. Thus we have Dm,m or Dm,w. When applying the mass stopping power ratio, one usually has to use a macroscopic value for the energy deposited in the voxel, since the electron energy fluence differentiated in energy in the voxel is not known. Alternatively, this could be done microscopically if applied to each single energy deposition.
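The macroscopic voxel-wise conversion is a simple multiplication. In the sketch below, both the material labels and the water/medium stopping power ratio values are purely illustrative placeholders, not reference data:

```python
import numpy as np

# illustrative water/medium mass stopping power ratios per material label
SPR_WATER_MEDIUM = {
    "water": 1.00,
    "lung": 1.00,   # lung tissue is close to water in composition
    "bone": 1.11,   # placeholder value for illustration only
}

def dose_medium_to_water(dose_m, materials):
    """Convert a dose-to-medium array to dose-to-water voxel by voxel:
    D_w = D_m * (S/rho)_water/medium, using one macroscopic ratio per voxel."""
    spr = np.array([SPR_WATER_MEDIUM[m] for m in materials.ravel()])
    return dose_m * spr.reshape(dose_m.shape)

dose_m = np.array([2.0, 2.0, 2.0])
materials = np.array(["water", "bone", "lung"])
dose_w = dose_medium_to_water(dose_m, materials)  # only the bone voxel changes
```

The same multiplication applied with the inverse ratio recovers dose to medium, which is why the choice can be left to the user at reporting time.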
The discrete ordinates solvers for the LBTE determine the electron energy fluence differentiated in energy for each voxel, and either the mass stopping power for the medium or for water is used to determine the absorbed dose. All transport is accomplished in the medium; thus we have Dm,m or Dm,w. The choice is user selectable in the only existing commercial implementation [49,50].
Summary
Today we have very accurate tools available for estimating the absorbed dose within the patient during radiotherapy [51-57]. We still have, however, several problems to solve. The models discussed here are in principle for static patients, i.e. without intrafractional and/or interfractional movements. The latter can probably be handled by daily imaging and adaptation of the plan of the day, accomplished by e.g. a library of plans or on-line re-optimisation and recalculation. For intrafractional movements, we have used PTV, ITV etc. and margin recipes to assure that the target gets the correct dose. This will in many cases lead to an over-irradiation of the healthy tissues surrounding the tumour. Techniques synchronising the patient with the treatment delivery, e.g. gating and tracking, are close to clinical routine at many departments.
Another problem that has been noticed when introducing more accurate algorithms is a tendency of optimizers to put too much dose in the vicinity of the tumour to assure an adequate dose in the whole tumour. This is especially noted when working with the PTV concept and tumours in low-density regions such as the lung. In these cases, one should probably use a combination of type a and type b algorithms, where the optimisation is performed using type a and then followed by a recalculation with an appropriate type b model. More physically, one can describe this as an optimisation based on energy fluence (pencil beam model) followed by a dose calculation with full lateral electron modelling.
‘You feel you've been bad, not ill': Sick doctors' experiences of interactions with the General Medical Council
Objective To explore the views of sick doctors on their experiences with the General Medical Council (GMC) and their perception of the impact of GMC involvement on return to work. Design Qualitative study. Setting UK. Participants Doctors who had been away from work for at least 6 months with physical or mental health problems, drug or alcohol problems, GMC involvement or any combination of these, were eligible for inclusion into the study. Eligible doctors were recruited in conjunction with the Royal Medical Benevolent Fund, the GMC and the Practitioner Health Programme. These organisations approached 77 doctors; 19 participated. Each doctor completed an in-depth semistructured interview. We used a constant comparison method to identify and agree on the coding of data and the identification of central themes. Results 18 of the 19 participants had a mental health, addiction or substance misuse problem. 14 of the 19 had interacted with the GMC. 4 main themes were identified: perceptions of the GMC as a whole; perceptions of GMC processes; perceived health impacts and suggested improvements. Participants described the GMC processes they experienced as necessary, and some elements as supportive. However, many described contact with the GMC as daunting, confusing and anxiety provoking. Some were unclear about the role of the GMC and felt that GMC communication was unhelpful, particularly the language used in correspondence. Improvements suggested by participants included having separate pathways for doctors with purely health issues, less use of legalistic language, and a more personal approach with for example individualised undertakings or conditions. Conclusions While participants recognised the need for a regulator, the processes employed by the GMC and the communication style used were often distressing, confusing and perceived to have impacted negatively on their mental health and ability to return to work.
INTRODUCTION
Many occupational surveys and reports indicate a high prevalence of mental ill health and addiction in doctors, [1][2][3][4] with suicide rates being considerably higher than population averages. 5 This is a problem not only for doctors but also for their patients. Several studies have highlighted the difficulties faced by doctors in taking sick leave, and how this can impact on their subsequent return to work. [6][7][8][9] Recent research has identified that the changing role of medical regulators appears to have become a barrier for successful return to work for doctors with complex health problems. 10 The General Medical Council (GMC) is the regulatory body for doctors in the UK. It has a number of roles aimed at protecting, promoting and maintaining the health of the public. It maintains a register of medical practitioners; sets standards of professional and ethical conduct; and oversees the process of revalidation of doctors.
Doctors can be referred to the GMC by anyone concerned that their fitness to practise (FTP) may be impaired. Doctors can also Strengths and limitations of this study ▪ We obtained detailed rich personal accounts from 19 doctors from across the UK who were or had been away from work for more than 6 months, 15 of whom contributed data on their views of the General Medical Council (GMC). ▪ We identified four discrete themes; our study generated detailed quotations on the feelings generated by the GMC, including clear examples of suggestions for improvements. ▪ Our methodology meant that we have no way of knowing anything about the doctors who were approached by our partner organisations but decided not to take part. Further, regarding our participants we have only the doctors' own accounts and no independent way of understanding for example the relationship between their initial reason for stopping work and their current problems, nor the precise reason where applicable for their involvement with the regulator.
self-refer. The GMC's policy document, Good Medical Practice 11 outlines the standards expected of doctors. The GMC website hosts an explanation of what the GMC means by FTP and the reasons a doctor's FTP may be brought into question. 12 Both Good Medical Practice and the FTP document discuss the possibility that ill health might impair a doctor's FTP if 'the doctor does not appear to be following appropriate medical advice about modifying his or her practice as necessary in order to minimise the risk to patients'. The GMC adopts the same investigation procedures whether or not the doctor has been referred for health problems or for misconduct. GMC data suggest that mental health problems are the most common category of health issues leading to FTP investigations. 13 For doctors with mental disorders the GMC may request that two independent psychiatrists assess the doctor and prepare a report, including recommendations regarding FTP and management of the doctor's health problems. The outcomes of GMC FTP investigations (and the instructions to specialist examiners) are summarised as (1) fit to practise generally; (2) fit to practise with limitations and (c) unfit to practise. Where a doctor with health problems is considered fit to practise only with limitations, he or she is invited to agree to 'Undertakings', which usually include following the recommendations of his or her general practitioner and treating specialists, and consenting to communications between the GMC and those treating the doctor. In some instances, where a doctor's FTP has been found to be impaired they are required to have a GMC supervisor that is, an appropriate specialist who liaises with those treating the doctor and reports regularly to the GMC regarding the doctor's adherence with their restriction on practice, and makes recommendations whether such restrictions should continue. In cases where a doctor is considered currently unfit to practise, he or she is suspended. 
Suspension may be for a finite time, or indefinite, but this is subject to review.
The Shipman Inquiry heavily criticised the GMC for allegedly acting to protect doctors rather than protecting patients. 14 15 The GMC responded by implementing a number of reforms around its FTP procedures. Recently the Medical Practitioners' Tribunal Service has been established. This has separated the GMC's role in investigating doctors, and its role in holding hearings into such cases.
These reforms have not been universally welcomed. A qualitative study of randomly selected GPs, psychiatrists and others involved in medical regulation, 16 designed to explore views and experiences of transparent forms of medical regulation in practice, described three key emerging themes regarding current medical regulation. The doctors they interviewed described feeling 'guilty until proven innocent', highlighted the excessive transparency of the system which can be distorting 17 and associated this with a 'blame culture'.
Despite the negative responses to the reforms highlighted in previous studies, it is important to keep in mind that the GMC exists chiefly to protect the public. The GMC therefore has the difficult task of protecting the public in a manner that is humane, fair and transparent for the doctors they seek to regulate. This present study aims to explore doctors' views and experiences of how the GMC deals with these issues, and how doctors perceive an effect of GMC proceedings on their mental health or return to work.
METHOD
This paper forms part of a wider set of analyses designed to explore doctors' perceptions of obstacles to returning to work after at least 6 months away from work. Our methods and initial findings have previously been reported. 10 Study design and participants Doctors either currently off work for at least 6 months or who had experienced a period of at least 6 months off work that ended within the previous year were eligible for inclusion in the study. Doctors were eligible to participate if absent from work for one or more of the following reasons-psychiatric illness, physical illness, addiction, substance misuse problem or suspension by employer or GMC.
Participants were recruited from the following sources: the Royal Medical Benevolent Fund (a charity which provides financial support and advice to doctors), the Practitioner Health Programme (a service providing confidential care to doctors and dentists with physical or mental health needs) and the GMC (the regulator). We requested that these organisations identified potentially eligible doctors, and sent them an information letter explaining the purpose and design of the study. Potential participants were invited by these partner organisations to make contact with the researcher directly if they were interested in taking part. If still interested after this telephone or email discussion, the doctor was invited for interview.
Semistructured interviews lasting approximately 2 h were conducted. 18 A topic guide, consisting of questions on health and illness experiences, work and professional relationships, financial situations, regulatory issues and the possibility of return to work, was developed. Interviews were digitally recorded and transcribed. Thematic analysis 19 was used to identify patterns and themes by manual coding by two researchers (LdB and SKB) working independently using NVivo (V.8, QSR International). The researchers compared codes and reached consensus on the emerging themes by discussion, leading to a final agreed master list of themes and subthemes. Emerging themes were discussed regularly by the research team. This type of thematic analysis is inductive; that is, the themes emerged from the data itself and were not imposed by the researchers.
Both researchers engaged in a process of reflexivity. They each recorded details of the interviewing interaction, and reflected on their own experience which may have had an impact on the interpretation of data. A clinician with extensive experience of caring for doctors with mental health problems (Henderson) was available should either the participant or the researcher become distressed, although in practice this was not needed. Support was also available from the wider research team which comprised a balanced mixture of non-clinicians and clinically trained researchers.
Ethics
In line with the British Psychological Society's (2006) ethical guidelines, participants were informed of their right to withdraw from the interview at any time and assured of their right to confidentiality and anonymity.
RESULTS
Nineteen participants of the 77 approached took part in the study. Demographic and health information is shown in table 1. Of the 19, 4 were suspended by their employers and 3 were suspended by the GMC.
Fourteen of the participants (73.7%) had experience of dealing with the GMC. Of these, 7 had something positive to say about the GMC. Thematic analysis resulted in the identification of four main themes: perceptions of the GMC; GMC processes; impact on health; and suggested improvements.
Perceptions of the GMC
Participants discussed their perceptions of what the GMC is and what it does, and this led us to three important subthemes: the importance of the GMC, support (or lack of) and understanding (or lack of) of doctors' needs, particularly in the context of mental health.
Importance of the GMC
Participants acknowledged GMC processes as necessary, particularly in terms of protecting patients. One participant who did not have any GMC involvement had 'always wanted the GMC to be involved' (P18, female, 20s) as she felt she needed 'someone in authority' (P18) to declare if she was fit for work or a danger to her patients. Several other participants suggested it was useful to have this assessment and were grateful for the 'breathing space' they were given if they were declared not fit to work (P19, female, 50s). Participants agreed that the GMC 'needs to exist' (P8, female, 30s).
I would now feel as if I have the weight of the GMC with me, on my behalf and they did deal with me very professionally and at no stage, you know you look back over it, at no stage were patients allowed to be put at risk which is good. (P16, male, 40s)

Support
Doctors tended to view their GMC experiences as positive when individual GMC supervisors were supportive. Empathy and support were important, with participants more likely to view their experience as positive if they found their supervisors 'easy to talk to and discuss things with' (P2, female, 40s). Other qualities that were appreciated included 'supportive', 'helpful', 'kind' and 'fair'. Some participants felt they would find the process a lot easier if they were able to choose a supportive supervising consultant. However, perceived lack of support from the GMC could be stressful. The GMC was frequently referred to as uncaring, unfriendly and impersonal. It could also be perceived as unsupportive about returning to work, and several participants felt this was not encouraged.
But as, as regards the GMC sending somebody round saying, "Look why don't we sit down and talk about this? How can we get you back to work?" (...) Zilch. Absolutely zilch. Quite, quite the opposite. The impression I got every year for nine years was; we don't want you working. (P4, male, 60s) I think that somewhere like the GMC is so big that there may not even be a person who's appropriate to reply and so it just gets lost and gives you the impression that no-one cares because there's nothing set up to help people like me. (…) there is no personal contact, it's all very generic and it's-it was the same with the Foundation School that, everything is so fixed, that any suggestion or any difficulty you have, you get the same generic answer. (P6, female, 20s)
Understanding
It also appeared important for participants to feel understood by the GMC. However, participants often felt the GMC did not understand mental health problems. I don't think that the panel have sufficient understanding of mental health issues to draw their own conclusions, so they would go on the report and they would see it as black and white. You're either ill or you're not ill, and you can't be somewhere in between. (P7, female, 50s) The one disappointment that I have and where I think the GMC didn't help is that they dealt with me as if I were well, and I wasn't, and they don't have any… Yes, they've got their health committee but they punished me for things that I'd done when I wasn't well and it became very punitive. (...) I think because the people on the panel aren't aware of mental illness. Often there's somebody there to give advice, but the actual committee don't have any mental health training. So I don't think that they take it into account. They look at this erratic behaviour and, 'We can't have a doctor behaving like that.' (P7, female, 50s) The perceived lack of understanding from the GMC reinforced low self-esteem, with participants feeling that they were being judged as 'bad' rather than 'ill'. The 'judgmental' tone perceived by participants negatively affected their confidence.
GMC processes
Participants discussed their experiences of GMC processes, which were described as stressful and confusing. Participants emphasised the 'accusatory' tone and legal jargon in GMC correspondence as being particularly uncomfortable. The duration of the process was also considered stressful. Some participants were left confused about their ability to work during the process.
Like a court case
The GMC was often seen as punishing, with GMC processes being perceived as 'like a court case' (P15, male, 40s) where doctors who reported to us a difficulty with health rather than misconduct described being made to feel they had done something wrong. Several participants commented that they felt like criminals throughout the process, and were being punished rather than helped. Several other participants described communication from the GMC as overly negative, accusatory and judgemental; they felt that the GMC implied that they were a 'bad' doctor rather than an 'ill' doctor who might need treatment and support.
I have to say that I've been extremely unimpressed with the amount of pressure that they put doctors under. I mean (...) at the end of the day it comes back to what I was saying before. We're still people; we can get ill just like anybody else can. (...) They make you feel like you're a really bad person (...) and you know send you countless letters and it's all... I mean they legally term them so you know which I suppose they have to do but you know it kind of doesn't help when you're already struggling with a whole load of other issues. (P3, female, 40s) I also feel that they [the GMC] could look at the format of their letters and perhaps (...) make them sound less judgemental and less punishing and a bit more supportive and treat you like a person with an illness rather than somebody who has done something horribly wrong. (P3, female, 40s)
Correspondence
Participants generally described communication from the GMC as poor, and the formal letters sent to everyone which mention 'allegations' can be stressful for a doctor who has done nothing wrong. I know there must be situations where people's health has led to negligence, but equally there's negligence and maltreatment that have got nothing to do with people's health. Like myself, whilst I was ill at work, no patient ever came to harm, I didn't do anything to anybody that was wrong. It was always about my health. And I think there should be a separate way of dealing with it. They do, in that the hearings are private and things, but I don't see why you get the same mail merge letter about allegations. (P8, female, 30s) It's just been very stressful and because it's this whole one procedure and because they obviously have standard forms and things, I keep getting these letters about this allegation against me and it frustrates me so much. There is no allegation. (…) it puts you as if you've done something wrong but actually I've done nothing wrong. All I've done is been ill and made a statement to that effect in accordance with good medical practice so what have I done wrong there? (P14, female, 30s) The perceived accusatory wording and legal terminology used in GMC letters was described as daunting and added to the feeling that a doctor was being judged or had done something wrong.
The whole process is very stressful because it's…I can say it's all very legal…essentially it's a court case the actual Fitness to Practice Hearing... with prosecuting barristers and then defence barristers and the panel themselves... and all the paperwork is in legalese if that's the right word. (P15, male, 40s) I think because it has to be in legalese it's actually very frightening, it's a language and I didn't know the language and facing that is very daunting when you suddenly realise that these white envelopes post marked at Manchester arrive for which you have to sign, so it's all very, very formal and that is very daunting in many respects and I can see that a lot of people would get very upset with that indeed. (P17, male, 40s) Participants acknowledged that, as an official regulatory body, the GMC needs to be formal in its communication; however, they found it daunting, and this contributed to feelings of anxiety at an already stressful time.
The GMC's correspondence was also criticised in terms of showing unawareness of each individual's case: "Some of the letters I got, it was almost like people were sending the letter and weren't aware of my case" (P8, female, 30s). Participants suggested they would like better communication from the GMC, explaining to them what the process will involve, and taking a more individual approach rather than sending formal letters to everyone.
Lengthy process and inability to work
Participants referred to GMC proceedings as long and drawn-out. This lengthy, time-consuming process was frustrating and stressful as participants said they often had little or no explanation of why the process was taking so long, and claimed it was hindering their return to work.
They did have an effect on my teaching post. I always left it to my lawyer, but when he was phoning the GMC and was saying we were still waiting on your decision as to the way it is affecting my client's work, there was a sort of a 'Oh that doesn't matter, we can't do it any quicker than we're doing it.' (P8, female, 30s) Participants were unclear about why the process was so time-consuming. You know I sat on what was effectively a waiting list for, in effect, eighteen months unable to work, unable to do anything, alone in the wilderness courtesy of GMC procedures. If they could get it on and get sorted, they would save themselves a lot of money, they'd save the NHS a lot of money and they'd save the doctors a lot of anguish and a lot of suffering. (P17, male, 40s) Two participants reported that the GMC would not allow them to return to clinical practice without supervision, but they could not find anybody prepared to supervise them. Several had had to return to work in non-clinical posts.
The GMC was saying that they couldn't allow me to go back to any type of clinical practice unless I had supervision. (…) My colleague weren't particularly happy about supervising me so that became a problem and there was no way back in through that. The employing Trust wouldn't allow me back in without the GMC saying yes and the GMC wouldn't say yes until I had supervision so I was falling between the two and it took a lot of leverage to try and get that resolved. (P16, male, 40s)

Impact on health
For some participants, being suspended came as a relief and allowed them time and space to recover without the pressures of work. Several other participants described the GMC process as worsening their mental health. Some found that sudden suspension was difficult to cope with: 'it's not like you're choosing to leave a job. You're suddenly just adrift and I don't know what to do' (P13, female, 50s). My wife will tell you, she'd say it every year, she'd say "Oh God", she said, "I know when there's a GMC meeting coming up 'cause for about six weeks before you're getting wound up, as soon as the first letter arrives". (P4, male, 60s) I certainly think that somebody should be having this…a serious look at how the GMC deal with people. (...) because that actually on top of the stress of losing my license (...) was almost unbearable really at times. (...) and certainly it didn't help the fact that I mean I think that cost me a relapse back into drinking, not into depression. So in fact the problem that I had actually overcome, they actually started it off again. I mean I can't blame them for that but they certainly contributed to it.
(P3, female, 40s) Another participant highlighted their response to an interim order panel as a normal response to a stressful experience rather than as a part of their illness: I was worried that I would get upset, because at times I did get upset when I was speaking to the solicitor and I was worried that I would get upset and they would take that as a sign that I wasn't mentally well, whereas it was just really a sign that it was an overwhelming process-it wasn't anything to do with me being mentally well or not mentally well. (P14, female, 30s)
Suggested improvements
Several participants made suggestions about what could be done to improve the experience of doctors going through GMC processes. These suggestions included being able to talk to other people in the same situation; transparency-clearer and less impersonal explanations from the GMC; the GMC being more flexible regarding undertakings; and the GMC supporting doctors as well as protecting patients. Participants suggested that undertakings need to be more individualised.
The best way I would have learnt about the process was talking to other people who'd been through it, because to be honest the lawyers weren't particularly helpful (...) to them it's like everyday, it's their job and they do it every day. They lose sight of that and for me it's the first time I've been through any of this. A bit more explanation would help, as I say, speaking to other people who've been through the process particularly helpful. (P15, male, 40s) I think it's lack of any clarity and any transparency and the fact they have undertakings and conditions which are identical. Undertakings are agreed to, conditions get imposed. But they are identical whether you've actually been stealing class A drugs or whatever so it makes no difference, there's no flexibility and it's a one size fits all which I think is a problem. (P19, female, 50s) One participant implied that improved cooperation between employing Trusts and the GMC, or Trusts understanding how the timing of their decisions impacts this process, would be helpful: They couldn't let me go back to work until they knew there was an offer from the Trust. Really we were waiting for the Trust to come up with an offer and the Trust was waiting for some type of relaxation of undertakings from GMC so the GMC could have maybe relaxed things a little bit sooner because there was really a year wasted and I could have been back to work. (P16, male, 40s) Participants also suggested that 'snapshot' assessments are unhelpful, and that the GMC should have a good understanding of each individual doctor.
If you look at my assessments they were very snapshot, one particularly so and I only talked to him for about twenty five, thirty minutes (…) You will not ever get someone, in the snapshot. You know, it's never works like that. (P17, male, 40s) This highlights the importance of individualised contact with the GMC. Some participants also suggested that the GMC should consider the 'positives' for each doctor and not merely focus on the negatives. Participants suggested that the GMC should be more encouraging and assure doctors that they are not being judged.
It was also suggested that proceedings could be initially handled at a local level, rather than Interim Orders Panels always being held in Manchester or London, which was a long journey for several participants to make. However, they acknowledged that there might be practical reasons for this. I can understand there are ways of doing it and the GMC need a panel with doctors and lawyers and patients' representatives or whatever. So, but from a sick doctor's point of view: that's a horrendously scary trip to make. (P9, male, 40s) I think as far as the GMC goes they need to be far more transparent and give us some idea as to what they're trying to achieve basically. (P19, female, 50s) It is important to note that there were also specific suggestions contained in the themes previously discussed: for example, less legalistic language, more empathy from supervisors, more clarity about the process as a whole and more obviously separate pathways for pure health complaints.
Overall, participants acknowledged the necessity of the GMC process, but many suggested that it is a process which could benefit from improvement. I think the GMC is good, I think it needs to exist, but I think it needs to come into the modern age a bit. (P8, female, 30s)
DISCUSSION
We carried out detailed semistructured interviews with nineteen doctors who had been away from work for a variety of reasons for at least 6 months. Fourteen of these had personal experience of GMC procedures, including three who were currently suspended by the GMC.
Key findings
Our analysis demonstrated that while doctors' experiences with the GMC can be positive, especially with supportive supervisors and caseworkers, GMC processes were often anxiety-provoking and distressing. Our participants described a sense of dealing with what they perceived to be an unaccountable bureaucracy. They described a lack of clear information as well as a lack of consideration of the impact of the tone of correspondence and procedures, particularly regarding referrals for health reasons.
Participants likened the GMC process to a 'court case' where they felt accused, rather than 'ill', echoing the findings of McGivern and Fischer. 16 This perception was not helped by the reported legal language and impersonal tone of GMC letters. It was seen to be a time-consuming and anxiety-provoking process, with little support regarding getting back to work. This was felt to be distressing and even detrimental to health.
The majority of participants interviewed had experience of the GMC and often had strong opinions. While criticisms of the GMC were often firmly worded, participants recognised that the privileges of medicine require a regulatory body, and many accepted that this regulator would have a valid interest in them and their difficulties.
Study strengths and limitations
Given the nature of the research, we do not know the background to any of the cases discussed here. As with all qualitative research, the aim is to collect participants' perspectives of their experiences. There may be an element of social desirability bias in these interviews, and participants' accounts may have omitted or incorrectly recalled information. We appreciate that, while these analyses emerge from a wider study of doctors' perceptions of obstacles to their return to work, doctors who volunteered may have held stronger views, either positive or negative, about the regulatory process.
All interpretations are our own, and therefore may reflect any biases or interests that we may have. However, we employed various strategies to ensure that the research was reliable and valid. Reflexivity, a methodological tool to ensure fair and ethical representations, was used, with the researchers continually scrutinising the process and reviewing the research throughout, being constantly aware of the researchers' own theoretical position; and inter-rater reliability was ensured by having two researchers code the data separately. 20 21 However, we acknowledge that meanings are not absolute and that others may have different interpretations of the data. A larger study would be useful in exploring how widespread the experiences and attitudes displayed in this study really are.
Doctors can be referred to the GMC for a wide range of reasons of which health may be one or a part. We do not know the reasons our participants were initially referred to the GMC, though by the time of the study all but one had some form of mental health problem. What we heard though was the perception that GMC communications made even sick doctors feel they had 'done something wrong'.
Conclusions
The GMC's duty is to protect the public, and it is possible for sick doctors to be a risk to patients. Doctors are no more immune to ill health or its consequences than the people they care for; doctors can be patients too. We identified concerns about the extent to which the GMC understands the specific difficulties posed by mental ill health, and drug and alcohol dependence. The conflation of ill health with misconduct seems at best inappropriate and at worst counter-productive. This discourages self-referral and creates an adversarial system where doctors report being made to feel that by becoming ill they have somehow done wrong. A more supportive, less judgemental, approach would both encourage engagement and might lead to better outcomes. We propose that the GMC should consider the possibility that there may be a health component whenever doctors are referred, and if evidence of ill health is found that doctors are diverted through a separate set of proceedings. If, when the episode of ill health has concluded, issues of conduct remain these can be addressed separately. Our interviews indicate that many participants felt that the GMC lacked understanding of mental disorders and it may be that proceedings should be more sensitive to the needs of doctors with mental disorders.
There were a number of comments from participants about the real workplace impact of GMC sanctions below the level of being 'struck off'. Is it possible that having certain conditions imposed could make it impossible for some doctors ever to return to work, thus making them practically indistinguishable from erasure? This perception of at least one of our participants reflects our own experience of working with sick doctors, and may be much more widespread. A longitudinal study of the outcomes for doctors who have been through GMC processes would provide valuable data. For example, comparing actual outcomes with those intended by FTP panels would be instructive both for the regulator and for doctors.
It may be that the nature of the GMC as a regulator means communications with doctors undergoing FTP proceedings will always be anxiety provoking, and a degree of formality is necessary and appropriate. The question our data raises is whether the GMC could pursue some of its regulatory responsibilities in relation to sick doctors without generating the level of fear reported here. Doctors involved in GMC proceedings may feel unable to raise concerns or criticisms about the manner in which the regulator acts for fear of worsening their own situation. This would appear to be an unhealthy position for both doctors and the GMC and we hope that this study can help generate a wider debate about these issues.
Fast viral dynamics revealed by microsecond time-resolved cryo-EM
Observing proteins as they perform their tasks has largely remained elusive, which has left our understanding of protein function fundamentally incomplete. To enable such observations, we have recently proposed a technique that improves the time resolution of cryo-electron microscopy (cryo-EM) to microseconds. Here, we demonstrate that microsecond time-resolved cryo-EM enables observations of fast protein dynamics. We use our approach to elucidate the mechanics of the capsid of cowpea chlorotic mottle virus (CCMV), whose large-amplitude motions play a crucial role in the viral life cycle. We observe that a pH jump causes the extended configuration of the capsid to contract on the microsecond timescale. While this is a concerted process, the motions of the capsid proteins involve different timescales, leading to a curved reaction path. It is difficult to conceive how such a detailed picture of the dynamics could have been obtained with any other method, which highlights the potential of our technique. Crucially, our experiments pave the way for microsecond time-resolved cryo-EM to be applied to a broad range of protein dynamics that previously could not have been observed. This promises to fundamentally advance our understanding of protein function.
Cowpea chlorotic mottle virus is an icosahedrally symmetric plant virus in the Bromoviridae family that infects cowpea plants (Vigna unguiculata) [1]. As with most viruses, CCMV faces the challenge of safely packaging its genetic material for transport, but then releasing it at the appropriate time to infect the host. As illustrated in Fig. 1, CCMV is thought to achieve this by detecting a change in its chemical environment upon entering the host cell, which causes its capsid to swell, increasing in diameter by about 10% [2]. This extended state, which is unstable and disassembles, releases the viral RNA, thus infecting the host [3]. The capsid swelling is triggered by a decrease in the concentration of divalent ions and a simultaneous increase in pH upon entry into the host cell. This causes calcium ions to vacate their binding sites on the capsid interior, where they are complexed by several negatively charged sidechains [3,4]. Once the calcium ions are no longer present to compensate these negative charges, electrostatic repulsion causes the capsid to expand [3-5]. In the absence of divalent ions, the virus can also be artificially contracted by lowering the pH below 5, which protonates the negatively charged residues and thus removes their repulsion [6].
Our understanding of how the CCMV capsid functions has remained incomplete due to a lack of direct observations of the fast motions of this intricate nanoscale machine, or even methods that would enable such observations. A comparison of the contracted and expanded virus structures [6] suggests that the mechanics of the capsid motion must involve several large-scale translations and rotations of the capsid proteins. However, it is unclear whether they occur in a concerted or asynchronous fashion [3,5]. Moreover, it is unknown how fast these motions are. The pH-induced contraction of another icosahedral virus, Nudaurelia capensis ω, was observed to be complete after 10 ms, but is thought to be significantly faster [7]. This suggests that the capsid motions of CCMV would be too fast to be captured by traditional time-resolved cryo-EM, which affords only millisecond time resolution [8]. Ultrafast x-ray crystallography [9,10], while sufficiently fast, is likely not suitable either to study these dynamics, since the crystal environment would hinder the large-amplitude motions involved [10,11]. The difficulty of observing the fast motions of the CCMV capsid exemplifies the broader challenge of observing proteins as they perform their tasks. This has largely remained elusive, which has left our understanding of protein function fundamentally incomplete [12]. In order to enable such observations, we have recently proposed microsecond time-resolved cryo-EM [13-17], which affords a time resolution of about 5 µs or better [13] and enables near-atomic resolution reconstructions [17]. Here, we demonstrate that microsecond time-resolved cryo-EM enables observations of fast protein dynamics.
Results
Here, we employ microsecond time-resolved cryo-EM to observe the pH-jump-induced contraction of CCMV and elucidate its capsid mechanics. The experimental approach is illustrated in Fig. 2a-d. Cryo samples of extended CCMV at pH 7.6 (containing no divalent ions) are prepared in the presence of a photoacid (NPE-caged proton). The pH of the cryo sample is then lowered to 4.5 by releasing the photoacid through UV irradiation (266 nm). Even though the fully contracted state of CCMV is the most stable at this low pH [18] (Fig. 2e), the matrix of vitreous ice surrounding the particles prevents their contraction, locking them in their extended configuration. However, when we rapidly melt the sample with a laser beam (532 nm, Fig. 2b), the particles begin to contract as soon as the sample is liquid (Fig. 2c). After 30 µs, we then switch off the heating laser, and the sample cools and revitrifies within microseconds, trapping the particles in their partially contracted configurations (Fig. 2d), which we subsequently image.
Figure 3a shows a single-particle reconstruction of CCMV in its extended state (3.9 Å resolution). The sample was plunge-frozen at pH 7.6, after which the pH was lowered to 4.5 by releasing the photoacid. The capsid has a diameter of 32 nm (Materials and Methods), with the disordered RNA in its interior not resolved. The reconstruction is indistinguishable from one obtained without first lowering the pH (Supplementary Fig. 2a). This confirms that the vitreous ice matrix has prevented the particles from contracting, even though the negatively charged residues whose repulsion keeps the capsid inflated are likely protonated due to the high proton conductivity of vitreous ice [19]. In contrast, a sample prepared at pH 5.0 yields the structure of the fully contracted state with a diameter of 28 nm (Fig. 3c). The resolution of 1.6 Å is considerably higher than that of the extended state, which is more flexible and prone to partial disassembly.
When we prepare CCMV in its extended state and lower the pH to 4.5, melting and revitrification of the sample allows the particles to partially contract. A reconstruction from a revitrified sample (Fig. 3b, 8.0 Å resolution) features a particle diameter of 31 nm, which lies in between that of the extended and the contracted configurations. This is also evident in Fig. 3d, which shows a cross section of the three reconstructions overlaid. Since the particles do not contract in samples that are UV irradiated, but do not contain any photoacid (Supplementary Fig. 2b), we conclude that the contraction is induced by the pH jump.
The partially contracted CCMV particles obtained after revitrification feature substantial conformational heterogeneity, which limits the resolution of the reconstruction in Fig. 3b to 8.0 Å. This is confirmed by a variability analysis (cryoSPARC 4.0.1 20 ). Figure 3e displays the distribution of the particles in the extended, intermediate, and contracted configurations (21,675 randomly selected particles of each) as a function of the first two variability components. The first component predominantly corresponds to a change in particle diameter, while the second is associated with motions of the capsid proteins. The three configurations appear as distinct clusters, with the extended and partially contracted ensembles closer to each other and partly overlapping. Interestingly, the variability analysis in Fig. 3e suggests that the reaction path may be curved, indicating that different motions involved in the contraction process occur on different timescales.
An analysis of the translations and rotations of the capsid proteins confirms that contraction involves different timescales. We divide the particle distribution in Fig. 3e into 30 equally spaced slices along the first variability component and perform a reconstruction for each (Supplementary Fig. 4). We then dock atomic models of the extended configuration into slices 1-12 (extended and partially contracted particles) and of the contracted configuration into slices 25-30 (fully contracted particles), from which we determine the motions of the capsid proteins (Supplementary Methods). Figure 4a illustrates these motions, with the icosahedral capsid shown in its extended form. The 180 identical capsid proteins are arranged in 12 pentamers and 20 hexamers. The asymmetric unit (highlighted) contains three subunits A, B, and C (magenta, green, and blue, respectively). Figure 4b displays the particle diameter as a function of the slice number, with the diameter measured along the five-fold symmetry axis (indicated in Fig. 4a). In slices 1-5, which contain extended particles, as well as in slices 25-30, which contain fully contracted particles, the diameter is constant. In contrast, a continuous distribution of diameters is found for the partially contracted particles in slices 8-12, with the particle diameter in slice 12 about halfway between the fully extended and contracted configurations. This wide distribution highlights the conformational heterogeneity of the ensemble obtained after melting and revitrification. Evidently, the particles contract at different speeds.
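The per-slice analysis described above — binning particles along the first variability component into 30 equal slices and measuring a property per slice — can be sketched as follows. This is a minimal illustration with synthetic data; the array names and the toy diameter model are assumptions, not the authors' actual cryoSPARC pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-particle data: a coordinate along the first variability
# component and a particle diameter (nm). In the real analysis these would
# come from the variability output and from docked atomic models.
component1 = rng.uniform(-1.0, 1.0, size=65025)
diameter = 32.0 - (component1 + 1.0)  # toy model: 32 nm down to 30 nm

# Divide the range of component 1 into 30 equally spaced slices and
# average the diameter within each slice (cf. the diameter-vs-slice curve).
edges = np.linspace(component1.min(), component1.max(), 31)
slice_idx = np.clip(np.digitize(component1, edges) - 1, 0, 29)
mean_diameter = np.array([diameter[slice_idx == s].mean() for s in range(30)])
```

Each slice then gets its own reconstruction and docking; the per-slice averages trace out how the diameter evolves along the reaction coordinate.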
The contraction of CCMV is accompanied by a simultaneous anticlockwise rotation of the pentamers and hexamers, which are both rotated by about 5 degrees in the fully contracted state. Figure 4c displays the rotation angles as a function of slice number. It reveals that the pentamers rotate about twice as fast as the hexamers, adopting a rotation angle of over 3 degrees in slice 12, while the hexamers reach only about 1.5 degrees. This rotation of the capsomeres is accompanied by a superimposed rotation of the capsid subunits around the axes indicated with black arrows in Fig. 4a. This rotation causes the capsomeres to adopt a domed structure in the contracted state. Figure 4d reveals that subunits A-C rotate with similar speeds.
However, whereas subunit A has already reached its final rotation angle of ~7 degrees in slice 12, subunits B and C have completed only about half their rotations of ~11 and ~13 degrees, respectively. Clearly, while the different motions of the capsid proteins occur simultaneously, they are associated with different timescales.
Our experiments elucidate the capsid mechanics of the pH jump induced contraction of CCMV. Given the large amplitude of the motions involved, the contraction is surprisingly fast, with some particles completing half the contraction within the time window imposed by the 30 µs laser pulse. The process thus resembles a collapse that is triggered when the electrostatic repulsion that keeps the capsid inflated is removed. While this collapse is a concerted process, the associated translations and rotations of the capsid subunits occur on slightly different timescales, which results in a curved reaction path. Our analysis also reveals a large spread in the speed with which the particles contract. This is an expected result, since the contraction occurs in a dissipative medium. For the same reason, conformational heterogeneity will likely be a feature of most protein dynamics that can be observed with microsecond time-resolved cryo-EM. It is therefore advantageous to design experiments such that they start from a homogeneous ensemble. As we have shown here, conformational sorting 21,22 will be crucial to obtain detailed structural information and elucidate reaction paths. We note that CCMV exists as three virions, which each package a different RNA strand 23,24 and will therefore likely contract at slightly different speeds. A further contribution to the observed spread in contraction speeds will likely arise from small variations in the temperature evolution of the revitrified area 13 .
While we have previously characterized our technique with the help of proof-of-principle experiments, we here show that microsecond time-resolved cryo-EM can be successfully employed to study fast protein dynamics that occur in vivo. We demonstrate a general approach for triggering such dynamics with the help of photorelease compounds 14 . Instead of uncaging the compound while the sample is liquid, we already do so with the sample still in its vitreous state. This offers the advantage that much larger changes in the chemical environment of the embedded particles can be induced, in particular for caged compounds with small quantum yields. Our experiments confirm that while the particles remain trapped in the matrix of vitreous ice, they cannot react to this stimulus, but will only begin to undergo conformational dynamics once the sample is melted with the laser beam. It should be possible to extend this principle to a wide range of other stimuli that can be applied with photorelease compounds, including caged small molecules, ATP, ions, amino acids, or peptides 25,26 . This suggests that microsecond time-resolved cryo-EM will be broadly applicable and that it has the potential to elucidate the dynamics of a wide variety of proteins that previously were too fast to be observed.
It is difficult to conceive how a similarly detailed view of the fast structural dynamics of the CCMV contraction could have been obtained with any other technique. This is particularly true for methods that do not offer a sufficiently high time resolution and are therefore limited to observing proteins at equilibrium. For example, an ordinary conformational analysis of a cryo sample prepared under equilibrium conditions would be unable to access the partially contracted transient configurations that we observe. This highlights the need for fast observations of protein dynamics under out-of-equilibrium conditions. In fact, it is a defining feature of life that it occurs far from equilibrium 27 . By enabling fast observations of nonequilibrium dynamics, microsecond time-resolved cryo-EM thus promises to fundamentally advance our understanding of living systems.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Author contributions

EM sample preparation and data acquisition. S.V.B. and O.F.H. performed the cryo-EM data processing. S.V.B. performed the structure modeling and refinement. S.V.B., M.D., and U.J.L. performed the data analysis. Acquiring funding, project administration, and supervision were performed by U.J.L. The writing of the original draft and data visualization were performed by S.V.B., O.F.H., and U.J.L. The reviewing and editing of the manuscript were performed by S.V.B., M.D., and U.J.L.
Fig. 1 | Illustration of the entry of CCMV into the plant cell. The virus in its contracted state (green) enters the plant cell through damaged sites in the cell wall. Once inside the cytoplasm, the virus experiences a decrease in the concentration of divalent ions and an increase in pH, which causes its capsid to swell. The extended state, which is unstable and disassembles, then releases the viral RNA, thus infecting the host.
Fig. 3 | Single-particle reconstructions of different stages of the contraction of CCMV, and variability analysis. a-c Comparison of the expanded state (plunge frozen at pH 7.6), the partially contracted configuration obtained with a 30 µs laser pulse, and the fully contracted state (prepared at pH 5.0). Before acquiring micrographs of the expanded state, the pH of the cryo sample was lowered to 4.5 by releasing a photoacid through UV irradiation. The particles nevertheless remain expanded because the surrounding matrix of vitreous ice prevents their contraction. d Cross section of an overlay of the three reconstructions (filtered to 8 Å), highlighting that the structure obtained after 30 µs of laser irradiation (blue) corresponds to a partially contracted configuration. e Variability analysis (cryoSPARC 4.0.1 20 ) of the extended, partially contracted, and fully contracted configurations (21,675 randomly selected particles of each). The particle distribution is shown as a function of the first two components, which are predominantly associated with the diameter change of the particle (component 1) and with motions of the capsid proteins (component 2). The particle distribution is divided into 30 slices along the first component, and a reconstruction is obtained for each.
Fig. 4 | Analysis of the capsid motions involved in the contraction of CCMV. a Geometry of the icosahedral capsid (extended form). The asymmetric unit (highlighted) contains three protein subunits A, B, and C (magenta, green, and blue, respectively). Five-, three-, and two-fold symmetry axes are indicated with a star, triangle, and ellipse, respectively. Arrows indicate the rotation axes of the capsid proteins. b-d Analysis of the motions of the capsid proteins. The particle distribution in Fig. 3e is divided into 30 slices along the first component, and reconstructions are obtained for each. Atomic models are then docked into the density, from which the motions of the capsid proteins are extracted for each slice. b Particle diameter as a function of slice number. c Rotation angles of the capsid pentamers and hexamers. d Angles of the superimposed rotation of the A, B, and C subunits around the rotation axes indicated with arrows in a.
"year": 2023,
"sha1": "20b2e5c84b5f2877f1b1d9a03951887e676677c3",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-023-41444-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b114a156a991437c85902b217468cac08b56252",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
A Regulatory Pathway, Ecdysone-Transcription Factor Relish-Cathepsin L, Is Involved in Insect Fat Body Dissociation
Insect fat body is the organ for intermediary metabolism, comparable to vertebrate liver and adipose tissue. Larval fat body is disintegrated to individual fat body cells and then adult fat body is remodeled at the pupal stage. However, little is known about the dissociation mechanism. We find that the moth Helicoverpa armigera cathepsin L (Har-CL) is expressed heavily in the fat body and is released from fat body cells into the extracellular matrix. The inhibitor and RNAi experiments demonstrate that Har-CL functions in the fat body dissociation in H. armigera. Further, a nuclear protein is identified to be transcription factor Har-Relish, which was found in insect immune response and specifically binds to the promoter of Har-CL gene to regulate its activity. Har-Relish also responds to the steroid hormone ecdysone. Thus, the dissociation of the larval fat body is involved in the hormone (ecdysone)-transcription factor (Relish)-target gene (cathepsin L) regulatory pathway.
Introduction
In holometabolous insects, the larva undergoes a complete transformation during metamorphosis to form the adult. This transformation is accomplished by the destruction of larval tissues and organogenesis of the adult tissues, and is called tissue remodeling. The extracellular matrix (ECM), which functions in cell adhesion, cell signaling, and the structural maintenance of tissues, must be degraded during tissue remodeling. ECM alteration is important for embryogenesis, metamorphosis, and cell migration, and the ECM is also degraded during the course of many diseases, for example, cancer growth and metastasis [1,2]. Two protein families, matrix metalloproteinases and cysteine proteases, are involved in degradation of the ECM and intercellular proteins from bacteria to mammals [1][2][3]; among the cysteine proteases, cathepsins are especially prominent in cancer.
Previous studies demonstrated that metamorphosis in insects is developmentally regulated by the steroid hormone 20-hydroxyecdysone (20E, or ecdysone): ecdysone binds to its receptors EcR and USP and mediates a cascade of gene expression that promotes the metamorphic process, including tissue remodeling [4]. The insect fat body is an important organ, comparable to vertebrate liver and adipose tissue, which performs a myriad of metabolic activities including intermediary metabolism and the homeostatic maintenance of hemolymph proteins, lipids, and carbohydrates [5,6]. Moreover, the fat body also contributes to developmentally specific metabolic activities that produce, store, or release components central to the prevailing nutritional requirements or metamorphic events of the insect [6]. Recent work on the molecular regulatory mechanism showed that the fat body can regulate growth and development by mediating release of the brain hormone [7,8]. Therefore, understanding fat body remodeling is crucial for insect development and metamorphosis, and fat body dissociation is the first step in understanding the remodeling of the fat body.
The fat body is made up of a single layer of cells that are encased by a thin basement membrane and forms sheets of tissue. The dissociation of the larval fat body involves extensive proteolysis, in which proteases degrade the basement membrane and the ECM between fat body cells, causing the release of individual fat body cells into the hemolymph. An insect cysteine protease, hemocyte cathepsin B, has been suggested to participate in the dissociation of the larval fat body in a Dipteran species, Sarcophaga peregrina [9,10]. This 29 kD cathepsin was excreted from pupal hemocytes, bound to the basement membrane of the larval fat body, and led to fat body dissociation. Rabossi et al. observed hemocyte binding to the fat body of another Dipteran, Ceratitis capitata, and an aspartyl protease was purified from C. capitata fat body [11]. The temporal activity profile of the enzyme during metamorphosis correlated well with fat body dissociation, but it is unclear whether the aspartyl protease was derived from the fat body or the hemocytes. Hori et al. [12] and Kobayashi et al. [13] suggested that a 200 kD hemocyte-specific recognition protein could interact with the fat body to trigger the release of cathepsin B through an unknown mechanism. However, the 200 kD recognition protein was demonstrated to be myosin heavy chain derived from degraded larval muscle, not from pupal hemocytes [14]. These results implied that fat body dissociation is driven by some internal factor.
In Drosophila melanogaster, matrix metalloproteinase 2 (MMP2) was related to histolysis of the larval tissues proventriculus and gastric caeca, but not the fat body [15]. Recently, Nelliot et al. elegantly demonstrated that fat body remodeling in D. melanogaster is a hemocyte-independent process, based on a strategy of ablating the hemocytes by ectopically expressing the cell death gene head involution defective [16]. Bond et al. [17] proved that bFTZ-F1 is involved in Drosophila fat body remodeling through regulation of MMP2 expression. Evidently, fat body dissociation is caused by an internal factor, not by hemocytes. However, little is known about the mechanism of fat body dissociation outside Drosophila.
In a previous study, we showed an important role for the cysteine protease cathepsin L in larval moulting of the cotton bollworm Helicoverpa armigera [18]. In whole-body larvae and larval hemolymph, the activity and expression of H. armigera cathepsin L (Har-CL) were low after larval ecdysis (4th-5th instar and 5th-6th instar) and increased significantly before the next moulting, which suggests that Har-CL is strictly regulated in larval development, degrading the ECM for larval moulting. However, a major difference in the expression and activity of Har-CL between whole body and hemolymph was found in day 0 pupae. In hemolymph, Har-CL expression and activity in day 0 pupae were much lower than in day 5 sixth instar larvae. In contrast, Har-CL expression in day 0 whole-body pupae was comparable to that of day 5 sixth instar larvae. The difference may be the result of high Har-CL expression in a certain tissue other than the hemolymph, such as the fat body, during early pupal development. If so, Har-CL may be crucial in the dissociation of the larval fat body.
Developmental arrest, called diapause in insects, is a good model to study individual or tissue development [19]. In the pupal-diapause moth H. armigera, the larval fat body remains intact in diapause-type pupae for months, whereas dissociation of the larval fat body starts on day 0 after pupation in nondiapause-type individuals. H. armigera, therefore, is a well-suited animal in which to study fat body dissociation. In the present paper, we study the activity and expression of Har-CL in nondiapause- and diapause-destined H. armigera individuals. The results show that Har-CL in day 0 nondiapause-type pupae is released into the extracellular matrix of the fat body for tissue dissociation, but not in diapause-type pupae. The inhibitor and RNAi experiments demonstrate clearly that Har-CL functions in fat body dissociation. Further, the transcription factor Har-Relish, a member of the NF-κB family that functions in the insect immune response by regulating antimicrobial peptide gene expression, specifically binds to the promoter of Har-CL to regulate its activity. Har-Relish transcription responds to ecdysone in vivo. Thus, a new regulatory mechanism, the ecdysone-Relish-cathepsin L signaling pathway, is involved in larval fat body dissociation.
Changes of proteolytic activity in fat body
Using a synthetic substrate of cathepsins B and L, Z-Phe-Arg-methyl-coumarylamide (Z-F-R-MCA), we examined the pattern of proteolytic activity in the fat body from the fifth instar larvae to new pupae. In nondiapause-destined fifth instar larvae, activity was low on day 0 and then increased on day 1, followed by a gradual decline (Figure 1A). During the early development (days 0-2) of the sixth instar larvae, proteolytic activity decreased continually, reaching its lowest point on day 2. Activity then increased progressively until pupation and reached a peak in day 0 pupae. In diapause-destined individuals, the trend of activity was similar to the nondiapause type (Figure 1B). However, activities in the fat body of diapause-type individuals were much higher than those of nondiapause-type ones.
To determine the relative contributions of cathepsin B and L activity in the fat body of day 0 pupae, we added the broad-range cysteine protease inhibitor E-64, the cathepsin B-selective inhibitor CA074, or the cathepsin L-selective inhibitor CLIK148 to the Z-F-R-MCA reaction mixture. In nondiapause- or diapause-destined pupae, 10 mM E-64 or CLIK148 inhibited more than 90% of the Z-F-R-MCA proteolytic activity from the fat body, while the cathepsin B-selective inhibitor CA074 was much less potent than E-64 or CLIK148 (Figure 1C). These observations suggest that Har-CL is the major cysteine protease in the fat body.
Har-CL expression in fat body
Using a combination of competitive RT-PCR with Southern blot analysis and Western blotting, the temporal patterns of Har-CL mRNA and protein from the fifth instar larvae to new pupae were analyzed in nondiapause-destined individuals. Har-CL mRNA was barely detected on days 0 and 1 of the fifth instar, but the mRNA levels increased sharply to a peak on day 2 (Figure 1D). The expression then declined to low levels during days 0-2 of the sixth instar larvae, and increased gradually again to a second peak at the end of the sixth larval instar and in day 0 pupae. The Western blot showed a pattern similar to the changes of Har-CL mRNA; a 47 kDa propeptide and a 39 kDa mature peptide were detected in the fat body, with especially high protein expression at the late stage of the sixth instar larvae and in new pupae (Figure 1E). In the diapause-destined individuals, similar patterns of Har-CL mRNA (Figure 1D) and protein (Figure 1E) expression were observed. These results clearly showed high expression of Har-CL in the day 0 pupal fat body, implying that Har-CL may exert an effect on fat body dissociation.
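The semi-quantitative readout used here — normalizing the target band to a co-amplified internal control — amounts to a simple ratio. A minimal sketch follows; the band intensities are invented for illustration and are not measured values from the paper.

```python
# Hypothetical densitometry readings (arbitrary units): (Har-CL band,
# co-amplified truncated Har-CL internal-control band) per sample.
samples = {
    "6th instar, day 2": (220.0, 1000.0),
    "6th instar, day 5": (860.0, 990.0),
    "pupa, day 0": (900.0, 1010.0),
}

# Relative expression = target band / internal-control band; the internal
# control corrects for differences in input RNA and amplification.
relative = {name: target / control for name, (target, control) in samples.items()}
```

The normalized values can then be compared across developmental stages, as in the mRNA time course of Figure 1D.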
Author Summary
Insect fat body is the intermediary metabolism organ and the main source of hemolymph components, and it is crucial for insect development and metamorphosis. However, the molecular mechanism of fat body remodeling is almost unknown outside Drosophila melanogaster. The cotton bollworm, Helicoverpa armigera (Har), a pupal-diapause species, is a useful model to study individual or tissue remodeling, because the larval fat body remains intact in diapause-type pupae for months, whereas dissociation of the larval fat body starts on day 0 after pupation in nondiapause-type individuals. Here, we find that H. armigera cathepsin L (Har-CL) is released from fat body cells into the extracellular matrix for tissue dissociation. A nuclear protein is identified as the transcription factor Har-Relish, which regulates the promoter activity of the Har-CL gene. Har-Relish also responds to the steroid hormone ecdysone. Thus, a new regulatory mechanism, the ecdysone-Relish-cathepsin L signaling pathway, is involved in larval fat body dissociation.
20E-Relish-CatL Regulates Fat Body Dissociation
(Figure 1 legend, continued) The results were presented as per cent (%) of proteolytic activity, defined in (A), relative to untreated controls. Each column represents the mean ± SD of three separate experiments in A, B, and C. Developmental changes of Har-CL mRNA (D) and protein (E) from the fifth or sixth instar larvae to new pupae in nondiapause type (N) and diapause type (D). Har-CL mRNA was detected by semi-quantitative PCR: truncated Har-CL (T-HarCL) mRNA (1 ng) and total RNA (1 µg) were reverse-transcribed, PCR-amplified with 24 cycles, and subjected to Southern blot analysis. The truncated Har-CL serves as an internal control for the semi-quantitative PCR. The numbers on the X axis represent days of the 5th and 6th instar larvae. Har-CL protein was detected by Western blot: fat body protein (20 µg) from each stage was separated, transferred to a membrane, and hybridized using Har-CL polyclonal antibodies. doi:10.1371/journal.pgen.1003273.g001

Localization of Har-CL in the fat body

Using immunocytochemical methods, we investigated the distribution of Har-CL in the fat bodies of diapause- and nondiapause-destined individuals. To compare accurately the difference in Har-CL expression between diapause- and nondiapause-destined individuals, both types of larvae were reared at the same temperature (20°C) under a long day (nondiapause type) or short day (diapause type) to synchronize developmental time. In feeding 6th instar larvae (day 5) and pre-pupae (day 9), Har-CL-positive signals were similar in the fat bodies of diapause- and nondiapause-destined individuals (Figure 2A and 2B), but a significant increase was found on day 9 in both types of fat body cells. These observations are consistent with the high levels of Har-CL expression and activity at the late stage of the sixth instar larvae (Figure 1). However, a significant difference in Har-CL expression in the fat body was seen in day 0 pupae.
The amounts of Har-CL-positive signals in the fat body of nondiapause-destined pupae were similar to those of diapause-destined ones at 0 h after pupation, but the Har-CL signals were more clearly localized in nondiapause-destined ones (Figure 2C). This result implied that Har-CL had moved to the plasma membrane of the fat body cells in nondiapause-destined individuals, while Har-CL in diapause-destined ones still remained in the middle of the fat body cells. At 24 h after pupation, Har-CL from lysosomes of the fat body cells was released in nondiapause-type individuals (Figure 2D), whereas Har-CL still remained in the lysosomes of the fat body and was randomly distributed in the cytoplasm in diapause-type ones, the same as at 0 h after pupation. We also examined the fat body of diapausing individuals (day 20 after pupation, Figure S1), which was similar to that of day 0 pupae.
To demonstrate whether Har-CL from the lysosomes of fat body cells is released into the hemolymph in nondiapause-type pupae on day 0, we detected Har-CL protein and activity in the hemolymph and fat body. The results showed that both Har-CL protein and activity in the hemolymph of day 0 pupae were very low compared with the fat body (Figure S2), suggesting that Har-CL protein from the fat body is not released into the hemolymph. Therefore, Har-CL protein from the fat body of nondiapause-destined pupae is likely released into the ECM of fat body cells, and is closely correlated with fat body dissociation.
Har-CL function in the fat body dissociation
We first investigated the numbers of fat body cells in the hemolymph from 6 h to 24 h after pupation: the numbers of dissociated fat body cells were low at 6 h and 12 h, increased significantly at 18 h, and dissociation was heavy at 24 h (Figure S3). We then injected the Har-CL-specific inhibitor CLIK148 (dissolved in 1% DMSO) into new pupae 2 h after pupation. Compared with a control injected with 1% DMSO, significantly fewer dissociated fat body cells were found in the hemolymph of inhibitor-treated pupae at 24 h after pupation (Figure 3A), indicating that Har-CL plays a key role in the dissociation of the larval fat body. Further, we injected dsRNA directed against Har-CL or GFP into nondiapause-type larvae on the last day of the sixth instar, approximately 18 h before pupation, and the dissociated fat body cells in the hemolymph and Har-CL expression in the fat body at the mRNA and protein levels were investigated at 24 h after pupation. The programmed fat body dissociation was significantly suppressed by injection of dsRNA against Har-CL, compared with a control injected with GFP dsRNA (Figure 3B). Har-CL mRNA (Figure 3C) and protein (Figure 3D) in the fat body were detected by real-time PCR and Western blot. Both Har-CL mRNA and protein showed a significant decrease upon RNAi, compared with the control injected with GFP dsRNA. This result showed that the dsRNA causes a decrease in dissociated fat body cells in the hemolymph through knock-down of Har-CL expression. Taken together, Har-CL indeed functions in the dissociation of the larval fat body.
Characterization of the 5′-upstream regulatory region of the Har-CL gene
To characterize the regulatory mechanism of Har-CL gene expression at the transcription level, a 1920-bp fragment from the 5′-upstream region of the Har-CL gene was cloned using the genome walking technique described in the Materials and Methods section, and the cloned region of the Har-CL gene was subsequently sequenced. The potential consensus sequences for regulatory elements in the promoter were analyzed using the TFSEARCH website (http://www.cbrc.jp/research/db/TFSEARCH.html) [20], and several possible transcription factor-binding sites are shown in Figure S4. These sites include POU, CdxA, MyoD, BR-C Z, NF-κB, E-box, and GATA-1.
We first cloned eight truncated promoter sequences of the Har-CL gene into the pGL3-basic luciferase reporter vector, after which we measured promoter activity to confirm that the potential cis-elements were involved in the regulation of Har-CL transcription (Figure 4A). These constructs were co-transfected into HzAM1 cells, which originate from the ovaries of Helicoverpa zea [21], using the pRL-TK plasmid as an internal control to determine the transfection efficiency. Promoter activity was measured using a dual-luciferase reporter system, and two strong luciferase activity signals were detected when the HzAM1 cells were transfected with the +30 to −345 bp and +30 to −1911 bp segments, respectively.
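Promoter strength in a dual-luciferase assay is typically reported as the firefly/Renilla ratio relative to the empty vector. A minimal sketch of that normalization follows; the luminescence values are invented, and the construct names merely echo the paper's HLP nomenclature.

```python
# Hypothetical (firefly, Renilla) luminescence counts. The firefly signal
# reports promoter activity; Renilla (from co-transfected pRL-TK) controls
# for transfection efficiency.
readings = {
    "pGL3-basic": (1200.0, 9800.0),
    "HLP(+30/-345)": (98000.0, 10100.0),
    "HLP(+30/-1911)": (76000.0, 9500.0),
}

# Fold activation = (firefly/Renilla) of each construct, divided by the
# same ratio for the promoterless pGL3-basic vector.
basal = readings["pGL3-basic"][0] / readings["pGL3-basic"][1]
fold = {name: (f / r) / basal for name, (f, r) in readings.items()}
```

Normalizing to Renilla before comparing constructs is what makes activities from independently transfected wells comparable.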
To identify cis-elements for the promoter activity located between −345 and −256 bp, several deletion constructs were generated as follows: HLP1Δ (deleting −345 to −308), HLP2Δ (−308 to −287), and HLP3Δ (−287 to −256) (Figure 4B). The promoter HLP (+30 to −345) had the highest activity level, whereas the promoter activity associated with the HLP1Δ and HLP2Δ constructs was dramatically reduced. This finding suggests that the −345 to −308 and/or −308 to −287 regions are important for activating the transcription of the Har-CL gene.
Electrophoresis mobility shift assay (EMSA) was performed with nuclear extracts from H. armigera fat bodies to further characterize the potential transcription factor activity that mediated the activator domain of the Har-CL gene, and the probe LP was designed according to the −345 to −256 region, which spans the activation region of the Har-CL promoter (Figure 4C). The labeled probe (LP) bound four different complexes, which were termed Har-CL modulating binding proteins 1-4 (HCLMBP-1-4) (Figure 4D). These DNA-protein interactions were specific; the specific probe sL (unlabeled LP) competes for binding sites with the HCLMBPs, whereas a non-specific probe nL does not. HCLMBP-2 was also able to bind the probe LP1 (−345 to −308), which can be inferred from the observation that the HCLMBP-2 band could not be detected when the unlabeled probe LP1 was used as a competitor. As a control, a non-specific probe nS of the same length could not compete with probe LP1. Thus, LP1 interacts with a specific transcription factor.
HCLMBP-2 may be a member of the NF-κB family
After investigating the LP1 sequence, we deduced the presence and locations of two cis-elements, an E-box and an NF-κB-binding site, in the segment. We designed two mutations, one located in the E-box (Em) and one located at the NF-κB-binding site (Nm), as shown in Figure 4E. HCLMBP-2 could be detected using probe LP1, and unlabeled probe Em could effectively bind to HCLMBP-2 in a competitive manner. However, the probe Nm failed to bind competitively to HCLMBP-2 in the presence of probe LP1 (Figure 4F). This result indicates that HCLMBP-2 may bind to the NF-κB-binding site in LP1. To further confirm the location of the HCLMBP-2-binding site, we prepared a probe from a Bombyx mori NF-κB-binding sequence called ATTkB [22,23]. That probe could effectively and competitively inhibit the binding of HCLMBP-2 to LP1.
Furthermore, a competition EMSA was performed using probes Em and Nm. The presence of HCLMBP-2 was still detected when probe Em was used, and unlabeled Em could bind competitively with HCLMBP-2. However, no evidence that HCLMBP-2 bound to Nm was found (Figure 4G). Thus, it appears that HCLMBP-2 binds to the NF-κB site but not to the E-box, which suggests that HCLMBP-2 may belong to the NF-κB transcription factor family.
Cloning and characterization of the NF-kB transcription factors Relish and Dorsal
To obtain the cDNA sequences of the NF-kB transcription factors, we designed several pairs of degenerate primers for Relish and Dorsal based on the Relish and Dorsal sequences of D. melanogaster and B. mori. A 356-bp fragment for Relish and a 332-bp fragment for Dorsal were amplified via RT-PCR. The deduced amino acid sequences showed high degrees of homology with known Relish and Dorsal sequences, respectively. Therefore, the specific primers RR1R, RR2R, RR1F, and RR2F (for Relish) and DR1R, DR2R, DR1F, and DR2F (for Dorsal) were used for 5′- and 3′-rapid amplification of cDNA ends (RACE) based on the two cDNA sequences obtained from the partial amplification. The two entire cDNAs encoding Relish (GenBank No. JN315690) and Dorsal (GenBank No. JN315687) in H. armigera were obtained using this procedure.
Har-Relish cDNA contains an open reading frame (ORF) that encodes a protein of 945 amino acids with identity levels of 52%, 44%, and 33% to the reported Relish ORFs of B. mori (GenBank No. NP_001095935), D. melanogaster (GenBank No. AAF20133) and A. aegypti (GenBank No. AAM97895), respectively. The RHD and ANK domains of Har-Relish are conserved with those of the other three species ( Figure S5A-a and S5B-a). A caspase cleavage site located between Asp and Ser at amino acid residues 509-510 is found in the Har-Relish protein ( Figure 5A). This site can be cut by Dredd, a caspase-8 homolog. In D. melanogaster, cleavage of Relish by Dredd yields an active form, Rel-D, which enters the nucleus to regulate gene transcription [24]. Thus, mature Har-Rel-D might have a similar post-translational regulatory role.
Har-Dorsal cDNA encodes a protein that is composed of 564 amino acids and that has 51%, 53% and 56% identities with the Dorsal ORFs of B. mori
Expression patterns of Har-Relish and Har-CL gene during larval-pupal development
We first performed Northern blots to estimate the size and the alternative splicing of the endogenous Relish transcript. The total RNA was extracted from six tissues in the sixth instar larvae, and Har-Relish was detected solely in the fat body, not in the remaining five tissues. The size of the hybridized transcript was approximately 3.4 kb, which was consistent with the predicted size ( Figure S6A), and thus, the cloned cDNA most likely represents the full-length Har-Relish mRNA.
The RNA and protein extracts obtained from the fat bodies of sixth instar larvae and new pupae were used for a temporal expression analysis. According to competitive RT-PCR, the Har-Relish mRNA expression level was lowest on day 1 and gradually increased until the animal reached the new pupa stage ( Figure S6B). This expression pattern is similar to that of Relish in D. melanogaster [25]. A 58-kDa protein band was found using an antibody specific for Har-Relish, and this band was identical in size to that predicted for Har-Rel-D ( Figure S6C). Har-Rel-D, the active form of Har-Relish, was detected on day 5 of the sixth instar and gradually increased as the larva progressed towards the new pupa stage. Thus, the level of protein expression may correspond to the level of mRNA expression during larval-pupal development.
Overexpression of Har-Relish can activate Har-CL promoter
To confirm the role(s) of Har-Relish and/or Har-Dorsal in the regulation of Har-CL gene expression, three recombinant plasmids (Har-Relish, Har-Rel-D, and Har-Dorsal) were constructed. We co-transfected HzAM1 cells with the Har-CL promoter and the three recombinant plasmids. As shown in Figure 5B, forced expression of Har-Relish and Har-Rel-D significantly activated the Har-CL promoter, with Har-Rel-D being particularly potent. In contrast, forced expression of Har-Dorsal from its recombinant plasmid did not activate the Har-CL promoter.
Har-Relish had a regulatory effect on the Har-CL gene promoter, so we further examined its binding characteristics to determine whether they matched those of HCLMBP-2. We performed an EMSA in which the labeled probe LP1 was incubated with either nuclear extracts or in vitro-translated Har-Rel-D. In vitro-translated Har-Rel-D bound the promoter efficiently, and its binding activity could be competitively reduced by incubation with the unlabeled specific competitor LP1; in contrast, the unlabeled non-specific competitor nS was not able to compete for binding to LP1. Figure 5C shows that HCLMBP-2 and Har-Relish have similar DNA sequence-binding specificities. Finally, we constructed a mutation in the NF-kB-binding site of the Har-CL promoter (MutkBHCLp); the mutated promoter was introduced into the reporter plasmid and co-transfected with Har-Rel-D ( Figure 5D). The activity of the mutant promoter MutkBHCLp, which could not bind Har-Rel-D, was significantly lower than that of the wild-type Har-CL promoter ( Figure 5E). When Har-Relish expression in HzAM1 cells was knocked down using dsRNA, the luciferase activity of the Har-CL promoter was lower than that in control cells treated with GFP dsRNA ( Figure 5F). We used a ChIP assay to measure Har-Relish binding to the Har-CL promoter and thereby verify whether Har-Relish binds the Har-CL promoter in vivo. A positive band corresponding to the Har-CL promoter was detected by PCR when we used the anti-Har-Relish antibody, whereas negative controls yielded no band ( Figure 5G). All these results support the notion that HCLMBP-2 is most likely Har-Relish and that this protein binds to the Har-CL promoter.
Relish and cathepsin L can be regulated by ecdysone
It is well known that ecdysone is one of the most important hormones for metamorphosis and development in insects. Previous studies have demonstrated that ecdysone regulates the expression of the Har-CL gene [18], but the mechanism by which it does so is still unknown. The results described above indicate that Har-Relish can also regulate Har-CL gene expression, and we therefore speculated that ecdysone regulates Har-CL gene expression by activating the transcription factor Har-Relish. To test this hypothesis, ecdysone was added to the culture medium of HzAM1 cells that had been co-transfected with the Har-CL promoter and Har-Relish or Har-Rel-D recombinant plasmids. The luciferase activities of cells transfected with either Har-Relish or Har-Rel-D were significantly higher than those of the controls, especially after 1 μM ecdysone was added to the culture medium ( Figure 6A-a). The luciferase activities of cells co-transfected with both the Har-CL promoter and Har-Relish or Har-Rel-D gradually increased from 6 h to 24 h after exposure to ecdysone ( Figure 6A-b). Different doses of ecdysone were injected into sixth instar larvae on day 6, and the amount of Har-Relish protein in the fat body increased in a dose-dependent manner from 1 to 4 μg ( Figure 6B-a).

Figure 4 (legend, continued). The radiolabeled probe LP1 was used in the initial EMSA, and 100-fold excesses of unlabeled LP1, nS, Em, Nm, and ATTkB probes were used in competition EMSAs (lanes 3-7). ATTkB, B. mori NF-kB-binding sequence. (G) Em (left) and Nm (right) were used as probes in EMSAs. A 100-fold excess of either a specific probe (Em) or a non-specific probe (nS) was used for competition analysis. HCLMBP-2 is indicated with an arrow. doi:10.1371/journal.pgen.1003273.g004
We then injected 2 μg of ecdysone into sixth instar larvae on day 6 and found that the amount of Har-Relish protein gradually increased from 12 h to 36 h after injection ( Figure 6B-b). The results of these in vivo experiments clearly show that Har-Relish responds to ecdysone.
Finally, we also investigated the expression and activity levels of Har-CL protein when ecdysone was injected into sixth instar larvae on day 6. As with Har-Relish above, Har-CL protein in the sixth instar larvae increased significantly after injection of 1 to 4 μg of ecdysone, and when 2 μg of ecdysone was injected into the larvae, the amount of Har-CL protein in the fat body gradually increased from 24 to 36 h after the injection ( Figure 6B-a and b). Similarly, the proteolytic activity in the larvae gradually increased in response to injections of various doses of ecdysone ( Figure 6C-a), whereas this activity was clearly repressed when the inhibitor CLIK148 was added. The highest level of activity was detected 36 h after a 2-μg injection of ecdysone ( Figure 6C-b). From these results, it can be inferred that both the expression and the activity of Har-CL protein can be up-regulated by ecdysone.
Ecdysone and Relish are required for regulating Har-CL expression
To determine whether ecdysone up-regulates Har-CL expression and activity via the activation of Har-Relish, we performed RNAi experiments to suppress the expression of Har-Relish in HzAM1 cells or in sixth instar larvae. In the HzAM1 cell line, exposure to ecdysone up-regulated the expression of both Har-Relish and Har-CL mRNAs when the cells were transfected with control GFP dsRNA. However, the expression levels of both Har-Relish and Har-CL mRNAs clearly decreased when Har-Relish was knocked down by transfection with Relish dsRNA ( Figure S7A). Moreover, down-regulation of Har-Relish protein in the fat body ( Figure S7B-a) resulted in a decrease in Har-CL protein ( Figure 7A-a), again showing that Relish regulates Har-CL expression. Two micrograms of 20E were then injected into sixth instar larvae that had been treated with GFP dsRNA for 48 h. After injection of 20E, the level of Har-CL protein expression in the fat body was elevated between 12 and 36 h after injection ( Figure 7A-b). However, in larvae in which Relish expression had been down-regulated by a 48-h treatment with Har-Relish dsRNA, Har-CL protein expression in the fat body did not increase significantly from 12 to 36 h after the injection of 20E.
Finally, we also performed RNAi experiments to suppress Har-EcR expression in sixth instar larvae to block the ecdysone pathway, because EcR is the ecdysone receptor that binds ecdysone to mediate signal transduction. Har-EcR mRNA decreased significantly from 48 to 72 h after the injection of Har-EcR dsRNA into sixth instar larvae ( Figure S7B-b). After injection of Har-EcR dsRNA, both Har-CL and Har-Relish proteins in the fat body decreased, especially at 72 h ( Figure 7B-a), implying that both Har-CL and Har-Relish are regulated by the ecdysone signaling pathway. Two micrograms of 20E were then injected into larvae that had been treated with GFP dsRNA for 48 h, and both Har-Relish and Har-CL proteins in the fat body were elevated from 12 to 36 h after the 20E injection ( Figure 7B-b). However, in larvae in which EcR expression had been down-regulated by a 48-h treatment with Har-EcR dsRNA, the levels of Har-Relish and Har-CL proteins in the fat body did not increase significantly from 12 to 36 h after the 20E injection. These results clearly show that both Har-Relish and Har-CL respond to the hormone ecdysone.
Discussion
The insect fat body is the organ of intermediary metabolism and the main source of hemolymph components associated with growth and development. During natural development (nondiapause-type pupae), the fat body must be dissociated into individual fat body cells, which are later removed by cell death in Drosophila [6]. In the lepidopteran H. zea, a species closely related to H. armigera that also diapauses at the pupal stage, most larval fat body cells from the peripheral fat body completely disappear during pupal development, and fewer fat body cells from the perivisceral fat body reaggregate into the adult fat body [26]. The larval fat body cells are a likely source of metabolic reserves for pupal-adult development, and inhibition of fat body dissociation is associated with pharate adult lethality [16]. Thus, fat body dissociation is an essential developmental event. In contrast, the larval fat body remains intact in diapause-type pupae, resulting in developmental arrest (diapause) before adult development. When diapause is broken, the fat body starts to dissociate and pupal-adult development restarts. Therefore, H. armigera pupal diapause is a useful animal model for studying the molecular mechanism of tissue remodeling in the fat body.
Har-CL functions in the dissociation of the larval fat body
In mammals, lysosomal cysteine proteases such as cathepsins L, B and K are synthesized and targeted to the acidic compartments, lysosomes and endosomes, by either the mannose-6-phosphate receptor-dependent pathway [27,28] or the mannose-6-phosphate receptor-independent pathway [29]. Cysteine proteases are also released into the ECM and are involved in ECM degradation by ''focal contact'' [30,31]. The regulatory process may include movement of the proteases to the plasma membrane and their release into the ECM by exocytosis, the formation of a localized acidic environment in a zone of contact that excludes the surrounding extracellular milieu, and increased expression of vacuolar-type H+-ATPase [30,31]. However, it is unknown whether this regulatory mechanism is conserved in insects. Our previous study demonstrated that the Har-CL protein sequence, including the pro-region and mature enzyme region, possesses considerable sequence homology with human cathepsin (54%) [18], implying that a similar mechanism for ECM degradation by cathepsin may exist in insects, although the detailed regulatory and molecular factors require further investigation.

Figure 6. Har-Relish and Har-CL can respond to ecdysone. (A) Har-CL promoter activity in response to ecdysone exposure. (a) Dose-related response to ecdysone administration. For the luciferase activity assay, recombinant Har-Relish or Har-Rel-D plasmid was transfected into HzAM1 cells that contained the Har-CL promoter (Har-CLp), and the cells were cultured in the presence of 0, 1, 2, or 4 μM ecdysone for 12 h. (b) Time-related response to ecdysone administration. HzAM1 cells containing Har-CLp were transfected with Har-Relish or Har-Rel-D plasmids and then cultured in medium containing 1 μM ecdysone for 0, 6, 12 or 24 h. The activity of Har-CLp without ecdysone was used as the control. (B) Har-Relish and Har-CL proteins in response to ecdysone administration. (a) Dose-related response to ecdysone. Various doses of ecdysone (0, 1, 2, and 4 μg) were injected into day 6 sixth instar larvae, and protein was extracted from the fat body 12 h after injection. (b) Time-related response to ecdysone exposure. Two μg of ecdysone was injected into day 6 sixth instar larvae, and protein was extracted from the fat body at 12-h intervals. Har-actin was used as a control. (C) Har-CL proteolytic activity in response to ecdysone. (a) Dose-related response to ecdysone. Various quantities of ecdysone (0, 1, 2, and 4 μg) were injected into day 6 larvae of the sixth instar, and protein was extracted from the larval fat body 12 h after injection. (b) Time-related response to ecdysone administration. Two μg of ecdysone was injected into day 6 larvae of the sixth instar, and protein was subsequently extracted from the larval fat body at 12-h intervals. Error bars represent S.D.s obtained from three independent experiments; * indicates p < 0.05, and ** indicates p < 0.01; p values were determined using paired one-sample t-tests.
In a previous investigation of Har-CL involvement in larval moulting, we found a clear difference in cathepsin activity and expression between the whole body and the hemolymph of day 0 pupae [18]. We concluded that the high cathepsin activity in the whole body may originate from the fat body. To test this hypothesis, we investigated the changes in cathepsin activities in the fat body from the 5th instar larvae to day 0 pupae of diapause- and nondiapause-destined animals. The results demonstrated that the high cathepsin expression in the whole body of day 0 pupae is derived from the fat body, and that more than 90% of the proteolytic activity in the fat body of day 0 pupae came from cathepsin L. However, the functional significance of Har-CL in the pupal fat body remained uncertain.
The larval fat body must be degraded at the early pupal stage, after which the new adult fat body forms during pupal-adult development, but the mechanism of fat body dissociation is poorly understood. Previous studies suggested that dissociation of the fat body is mediated by hemocytes that associate with and degrade the basement membrane of the fat body through the release of cathepsin B or an aspartyl protease [9][10][11]. However, fat body dissociation and remodeling in D. melanogaster was recently demonstrated to be a hemocyte-independent process [16]. More recently, Bond et al. [17] showed that bFTZ-F1 is involved in Drosophila fat body remodeling through the regulation of MMP-2 expression. These results indicate that dissociation of the fat body may be caused by an internal factor rather than by hemocytes. In this study, we demonstrated that cathepsin L, an internal factor, participates in the degradation of the larval fat body at the early pupal stage.
First, both Har-CL mRNA and protein are heavily expressed in the fat body of day 0 pupae, and Har-CL activity is also high, coincident with the dissociation of the fat body on day 0. Second, immunostaining demonstrated that Har-CL-positive signals in nondiapause pupae moved from the middle of the fat body cells to the cell surface, where Har-CL was released from the lysosomes, whereas Har-CL in diapause-type pupae remained in the lysosomes of the fat body cells, randomly distributed in the cytoplasm. Western blot and activity analyses demonstrated that fat body Har-CL was not released into the hemolymph of day 0 nondiapause pupae. These results suggest that Har-CL in the fat bodies of day 0 nondiapause-type pupae is associated with degradation of the ECM and basement membrane for remodeling the new adult fat body. Finally, experiments with the Har-CL inhibitor and with dsRNA against Har-CL clearly demonstrated that fat body dissociation can be significantly repressed by inhibiting cathepsin L activity or by knocking down Har-CL expression. In treated individuals, fewer dissociated fat body cells were detected in the hemolymph, implying that cathepsin L is the most important enzyme degrading the extracellular matrix for fat body dissociation in H. armigera, since the cathepsin L-selective inhibitor CLIK148 inhibited approximately 90% of the activity in the fat body ( Figure 1C); other cathepsin family members may also contribute to fat body dissociation. Thus, Har-CL, an internal factor from the fat body, plays a crucial role in the dissociation of the larval fat body in H. armigera.
Relish is required for the regulation of Har-CL gene expression
In insects, the NF-kB homolog Dorsal was first identified as having a role in regulating the specification of embryonic dorsal-ventral polarity in D. melanogaster [32]. Later, Relish and Dorsal/Dif were identified in D. melanogaster, B. mori, and other species, where these genes generally regulate the transcription of antimicrobial peptide genes that support innate immunity [33][34][35][36]. In the present study, we identified a region of the Har-CL promoter that contains an NF-kB-binding site, and the nuclear extract protein HCLMBP-2 could bind to this site, strongly implying that HCLMBP-2 may be a member of the NF-kB protein family. The Dorsal and Relish cDNAs from H. armigera were then cloned, and we found that Har-Relish was able to bind to the Har-CL promoter and regulate its activity, whereas Dorsal was unable to modulate Har-CL promoter activity. Our results suggest that the nuclear protein HCLMBP-2 is Har-Relish, an NF-kB homolog, based on the following four observations: (i) the interaction between HCLMBP-2 and its specific probe LP1 could be competitively disrupted by the unlabeled ATTkB probe, a short probe previously reported to bind NF-kB in B. mori [22,23]; (ii) complexes of LP1 and HCLMBP-2 had the same mobility as complexes of LP1 and in vitro-translated Har-Rel-D; (iii) both Har-Relish and Har-Rel-D effectively activated the Har-CL promoter in co-transfection assays, whereas mutation of the NF-kB-binding site significantly reduced Har-CL promoter activity; (iv) Relish can bind to the Har-CL promoter in vivo.
Members of the NF-kB family play key roles in inflammatory and immune responses by inducing the expression of cytokines, chemokines, and their receptors [37]. NF-kB also has roles beyond transcriptional regulation, such as reducing the expression of the transcription factor MyoD by disturbing mRNA stability, and promoting or suppressing tumors by regulating the oncogenes cyclin D1, c-Myc, and p53 [38]. In insects, all Relish proteins described to date are related to the innate immune response [39]. Interestingly, Relish in H. armigera can bind to the promoter of the Har-CL gene and activate Har-CL expression to regulate the dissociation of the insect fat body.
Northern blotting showed that Har-Relish is highly expressed in the fat body. In Drosophila, the fat body has important functions in growth and metamorphosis through the TSC/TOR and insulin signaling pathways [8,40,41]. The fat body thus plays an essential role in insect development and metamorphosis by activating the expression of certain genes, such as Har-CL and Har-Relish. In this study, both Har-CL and Har-Relish were highly expressed in the fat body with the same expression pattern during larval-pupal development, implying a close relationship between them. When Har-Relish expression was down-regulated by RNAi, both Har-CL mRNA and protein decreased significantly. Thus, Har-Relish expressed in the fat body contributes to fat body dissociation at the early pupal stage by regulating Har-CL expression.
Har-Relish can respond to ecdysone signaling

Ecdysone is an important hormone that regulates development and metamorphosis in insects. Likewise, ecdysone can regulate immune response genes in D. melanogaster and B. mori, implying that ecdysone may regulate the expression of NF-kB [42,43]. Previous studies have demonstrated that Har-CL gene expression in H. armigera larvae responds to ecdysone, but the underlying mechanism of Har-CL gene regulation was unknown [18,44]. In the present study, the expression of Har-Relish in 6th instar larvae increased significantly during the 12 h following an injection of ecdysone; this response was more rapid than that of Har-CL, whose expression only increased significantly 24-36 h after ecdysone injection. Har-Relish did not increase significantly during the 12-36 h following the injection of distilled water, which was used as a control (data not shown). This contrasts with reports that in Drosophila, injection alone could temporarily increase Relish expression, although the change was no longer significant after 6 h [45,46]. When Har-Relish expression was knocked down by RNAi, Har-CL expression remained significantly decreased up to 36 h after treatment with ecdysone. These results reveal a cascade of gene expression in which ecdysone regulates the expression of Har-Relish, after which Har-Relish activates Har-CL gene expression both in vivo and in vitro. Thus, one mechanism for the action of ecdysone is that it is released into the hemolymph from the prothoracic glands as an upstream signal, binds to its receptors EcR and USP in the fat body, and finally regulates the expression of the Har-CL gene by activating the transcription factor Har-Relish.
This hormone (ecdysone)-transcription factor (Relish)-target gene (cathepsin L) regulatory pathway for the dissociation of the larval fat body is shown in Figure 7C.
Recently, we measured ecdysone titers in diapause- and nondiapause-destined pupae and showed that the ecdysone level in diapause-type pupae is much lower than that in nondiapause-type pupae [47]. It is well known that injecting ecdysone into diapausing pupae to elevate the ecdysone level causes the fat body to begin dissociation and restarts pupal-adult development. These results show that a high level of ecdysone promotes pupal-adult development with Har-CL release from the fat body, whereas low ecdysone induces pupal diapause without Har-CL release, implying that ecdysone not only regulates Har-CL expression during larval development but also mediates Har-CL release after pupation.
In addition, we found evidence that Har-Dorsal, another NF-kB homolog that we cloned in a previous experiment, is a transcription factor of the NF-kB family. Har-Dorsal contains an RHD, a proline-rich region, and a leucine zipper, and, as we found in a preliminary experiment, it is also expressed in the fat body (data not shown). However, Har-Dorsal was unable to bind to the Har-CL promoter to regulate its transcriptional activity. In D. melanogaster and B. mori, Relish regulates some genes that cannot be regulated by Dorsal, and Dorsal regulates some genes that cannot be regulated by Relish [22,23,34]. Additional experiments are needed to clarify the precise function of Dorsal in H. armigera.
The activity assay of the Har-CL promoter showed that one segment (from −308 to −287) conferred strong transcriptional activity, whereas two segments (from −522 to −345 and from −1554 to −1292) had suppressive activities ( Figure 4A). Therefore, we assume that a specific transcription factor may bind to the activating region of the promoter. A homology search identified an E-box-binding site to which the transcription factor Myc is able to bind. It will be interesting to investigate further whether Myc regulates the Har-CL gene, because Myc is an important regulator of cell growth and proliferation. Some inhibitors may bind to sites in the suppressive regions of the promoter and down-regulate Har-CL transcription. However, a homology search did not reveal a candidate element capable of binding to the suppressive regions. Identifying such inhibitory binding elements will require EMSA experiments in which the two suppressive segments are incubated with nuclear protein extracts.
In the promoter activity analysis, the HLP2D promoter, in which the −308 to −287 region was deleted but the −345 to −308 region was retained, still showed low activity ( Figure 4B). We speculate that loss of the −308 to −287 region may affect transcription factor binding to the −345 to −308 region and result in low luciferase activity, because the binding (or absence) of one transcription factor can affect the binding of adjacent factors, as reported previously [48]. This implies that interactions among transcription factors at the promoter are important for activating the transcription of the Har-CL gene.
Insects
H. armigera larvae were reared on an artificial diet at 25 ± 1°C with a light-dark cycle of L14:D10 (nondiapause type) or at 18 ± 1°C with a photoperiod of L10:D14 (diapause type) [49]. In the nondiapause type, development of the sixth instar larvae takes approximately 6 days, versus 13 days in the diapause type. To compare gene expression at the same developmental time points, H. armigera larvae were also reared at 20°C with L14:D10 (nondiapause type) or L10:D14 (diapause type) to synchronize developmental timing.
Proteolytic activity assay
Proteolytic activity was assayed using Z-Phe-Arg-MCA (Z-F-R-MCA, Sigma), a specific substrate of cathepsins B and L. Protein extracts (10 μg) or hemolymph (2 μl) were preincubated at 37°C for 10 min in 80 μl of Na2HPO4-citrate buffer (pH 4.4) containing 1.25 mM EDTA and 10 mM cysteine to activate the cysteine proteases, and then incubated for another 10 min after 10 μl of 1 mM Z-F-R-MCA was added. After incubation, 100 μl of 10% SDS and 2 ml of Tris-HCl buffer (pH 9.0) were added to terminate the reaction. The fluorescence of liberated aminomethylcoumarin (AMC) was measured with a fluorometer at an excitation of 370 nm and an emission of 460 nm. To inhibit proteolytic activity, 1 and 10 μM of the cysteine protease inhibitor E-64 (Sigma), the cathepsin B-selective inhibitor CA074 (Sigma) or the cathepsin L-selective inhibitor CLIK148 [50] were used. The inhibitors were first dissolved in DMSO to make 20 mM stock solutions, stored at −20°C, and diluted with ultra-pure water prior to assay. Cathepsin L activity was thus deduced by subtracting cathepsin B activity from the total activity.
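The deduction in the last step is simple arithmetic: Z-F-R-MCA reports the combined activity of cathepsins B and L, so the residual fluorescence measured in the presence of the B-selective inhibitor CA074 is attributed to cathepsin L, and the CA074-sensitive fraction to cathepsin B. A minimal sketch of this partitioning (the function name and RFU values are hypothetical, not data from this study):

```python
def deduce_cathepsin_activities(total_rfu, rfu_with_ca074):
    """Partition total Z-F-R-MCA cleavage (relative fluorescence units)
    into cathepsin B and cathepsin L contributions.

    Assumes CA074 fully inhibits cathepsin B, so activity remaining in
    its presence is attributable to cathepsin L.
    """
    cat_l = rfu_with_ca074              # CA074-resistant activity ~ cathepsin L
    cat_b = total_rfu - rfu_with_ca074  # CA074-sensitive activity ~ cathepsin B
    return cat_b, cat_l

# Hypothetical example: 1000 RFU total, 900 RFU remaining with CA074,
# i.e. cathepsin L accounts for ~90% of the total activity
b, l = deduce_cathepsin_activities(1000.0, 900.0)
```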
Western blot analysis
For Western blot analysis, protein extracts (80 μg for Har-Relish, 20 μg for Har-CL fat body samples, and 30 μg for Har-CL hemolymph samples) were separated on 12% SDS-PAGE gels and transferred onto a PVDF membrane (Hybond-P, Amersham). Non-specific binding was blocked with a 5% non-fat milk solution, and Har-Relish protein was detected with the Novex Chemiluminescent Substrate (Invitrogen, Carlsbad, USA) in three independent experiments.
Cathepsin L is synthesized as a precursor and is activated by proteolytic removal of the N-terminal propeptide [51]. Both the proenzyme and the mature enzyme represent the cathepsin L protein level, as reported by Liu et al. [18] and Jean et al. [52]; we therefore quantified both bands (proenzyme and mature peptide) as described in Tang et al. [53] and normalized them to the level of Har-Actin.
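The quantification described above can be sketched as follows: the proenzyme and mature-enzyme band intensities are summed and divided by the Har-Actin loading control. The band-intensity values and the function name here are hypothetical, for illustration only:

```python
def cathepsin_l_level(proenzyme, mature, actin):
    """Relative cathepsin L protein level: sum of the proenzyme and
    mature-enzyme band intensities, normalized to Har-Actin."""
    if actin <= 0:
        raise ValueError("actin intensity must be positive")
    return (proenzyme + mature) / actin

# Hypothetical densitometry readings (arbitrary units)
level = cathepsin_l_level(proenzyme=120.0, mature=300.0, actin=200.0)
print(level)  # 2.1
```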
Whole-mount immunocytochemistry
H. armigera larvae were reared at 20°C with L14:D10 (nondiapause type) or L10:D14 (diapause type) to synchronize developmental timing. The fat body was dissected in 0.75% NaCl, fixed in PBT (PBS + 0.3% Triton X-100) containing 3.7% formaldehyde for 2 h at room temperature, and extensively washed in PBT. Tissues were then blocked for 30 min in PBT containing 10% normal goat serum. Anti-Har-CL antibody (1:500) was incubated overnight at 4°C, followed by secondary antibody (goat anti-rabbit Alexa Fluor 488, 1:1000) for 1 h at room temperature. Nuclei were stained with 5 μg/ml Hoechst 33342. Tissues were mounted in mounting medium, and fluorescence images were acquired using a confocal laser scanning microscope (TCS-SP5, Leica). In control experiments, the primary antibody was replaced with pre-immune rabbit serum.
Transient transfection and luciferase assay
HzAM1 cells were seeded in 96-well cell culture plates and cultured overnight at 27°C in Grace's Insect Cell Culture Medium (GIBCO) supplemented with 10% fetal bovine serum to allow the cells to grow to log phase; the cells were then cultured without antibiotics or serum. Transfection was performed with Cellfectin II Reagent (Invitrogen) as described in the user manual. The recombinant plasmid (200 ng) and 0.6 μl of Cellfectin II were mixed in 50 μl of Grace's Insect Medium without antibiotics or serum. After incubation at room temperature for 20 min, the DNA-lipid mixture was added gently to the cell culture medium. After 5-7 h, the transfection mix was replaced with complete medium containing 10% fetal bovine serum. Each transfection was repeated three times. After incubation for 48 h, the cells were washed with cold PBS and harvested in Passive Lysis Buffer (Promega). Luciferase activity was assayed with the Dual-Luciferase Reporter Assay System (Promega) according to the manual using a MikroWin2000 microplate luminometer (Mikrotex) in three independent experiments.
In vitro translation and EMSA assay
The pET28-Har-Rel-D plasmid was used as a template for in vitro translation in the TNT Quick Coupled Transcription/Translation System (Promega). The reaction contained pET28-Har-Rel-D plasmid (1 μg), TNT T7 Quick Master Mix and 1 mM methionine, and was incubated at 30 °C for 1 h. The translation product was used for the EMSA assay.
Nuclear protein extracts were prepared from H. armigera fat body according to the procedures described in Zhang et al. [48]. The probes used in the experiment were prepared by PCR followed by digestion with EcoR I, or by annealing two overlapping oligonucleotides. The gaps in the probes produced by annealing partially overlapping oligonucleotides were filled using a Klenow fragment (TaKaRa) at 37 °C for 30 min in the presence of [α-32P]-dATP, dCTP, dGTP and dTTP for probe labeling. Competitor probes were produced by the same method, but with unlabeled dATP instead of [α-32P]-dATP. DNA-binding reactions were performed by incubating nuclear extracts (5–10 μg) at 27 °C for 30 min with 32P-labelled double-stranded DNA (10 000 c.p.m.) in binding reaction buffer [10 mM Hepes-K+ (pH 7.9), 10% glycerol, 50 mM KCl, 4 mM MgCl2, 1 mM DTT, 0.5 mg/ml BSA, 0.1 mM PMSF and 1 μg of poly(dI/dC) (Sigma)]. Samples were resolved on a 5% (w/v) non-denaturing polyacrylamide gel in 1× TBE at 150 V. After electrophoresis, the gel was dried and subjected to autoradiography with an intensifying screen at −80 °C for 16 h. For competition experiments, a 100-fold excess of unlabeled probe was pre-incubated with the nuclear protein extracts at 27 °C for 10 min before the procedures described above.
RNA interference
Using a micro-injector (Hamilton Company), Har-CL dsRNA (15 μg) was injected into larvae (on the last day of the sixth instar), approximately 18 h before pupation, with GFP dsRNA injected as a control. In our preliminary experiments, when 15 μg of Har-CL dsRNA was injected into larvae, over 80% of the larvae could molt normally into pupae. DsRNAs of Har-EcR (25 μg) or Har-Relish (15 μg) were injected into day 4 sixth-instar larvae, with GFP dsRNA as a control. Forty-eight hours after injection, 2 μg of 20E was injected into the larvae again.
Insect treatments, protein preparation, polyclonal antibody generation, Northern blot, competitive RT-PCR and Southern blot analysis, genome walking, cloning of Relish and Dorsal cDNAs, construction of reporter gene and deletion mutagenesis, construction of the overexpression system, chromatin immunoprecipitation (ChIP) assay, and DsRNA generation Details of these methods can be found in the Supporting Materials and Methods section of Text S1 and Table S1.
Statistical analysis
All data were statistically analyzed by independent-sample t-test. Asterisks indicate significant differences (*, p < 0.05; **, p < 0.01).

Figure S1 Immunohistochemical analysis of Har-CL in fat body. Har-CL-positive signals (green) are localized in the fat body of nondiapause- and diapause-destined individuals (bar = 50 μm), and the nuclei were stained blue. The mock was treated with pre-immunized rabbit serum as the control. Fat body from day 5 (6L5, feeding stage) (A) and day 9 (6L9, pre-pupal stage) (B) of the 6th instar larvae; fat body from 0 h (P0h) (C) and 24 h
Forceful mastication activates osteocytes and builds a stout jawbone
Bone undergoes a constant reconstruction process of resorption and formation called bone remodeling, so that it can endure mechanical loading. During food ingestion, masticatory muscles generate the required masticatory force. The magnitude of applied masticatory force has long been believed to be closely correlated with the shape of the jawbone. However, both the mechanism underlying this correlation and evidence of causation remain largely to be determined. Here, we established a novel mouse model of increased mastication in which mice were fed with a hard diet (HD) to elicit greater masticatory force. A novel in silico computer simulation indicated that the masticatory load onto the jawbone leads to the typical bone profile seen in individuals with strong masticatory force, which was confirmed by in vivo micro-computed tomography (micro-CT) analyses. Mechanistically, increased mastication induced insulin-like growth factor (IGF)-1 and suppressed sclerostin in osteocytes. IGF-1 enhanced osteoblastogenesis of cells derived from the tendon. Together, these findings indicate that osteocytes balance cytokine expression upon the mechanical loading of increased mastication, in order to enhance bone formation. This bone formation leads to morphological change in the jawbone, so that the bone adapts to the mechanical environment to which it is exposed.
During the growth period, the jawbone is subjected to certain forces imposed by the surrounding tissues, including the teeth and oro-facial muscles (masticatory, lingual and mimetic muscles). Therefore, the resulting adult facial profile is believed to be closely associated with these forces, especially those generated by the masticatory muscles, which play a central role in food ingestion: a brachyfacial pattern (short face) in individuals with strong masticatory force and a dolichofacial pattern (long face) in individuals with weak masticatory force 15 . Animal models of reduced mastication, induced by liquid or powder diets, recapitulate the phenotypes seen in humans 16,17 . However, it is unclear whether masticatory force by itself is a sufficient cause of such skeletal phenotypes. The underlying cellular and molecular mechanisms remain elusive as well.
Here, we established a novel increased mastication mouse model by feeding mice with a HD, thus burdening maxillofacial tissues with a higher mechanical load. HD increased mastication frequency and enlarged the masseter muscles. Using a computer simulation, mechanical loading onto the lower jawbone (mandibular bone) by the masseter muscle, the masticatory muscle with the largest muscular force, was indicated to induce extrusion of the masseteric ridge and a shortening of the mandibular ramus. Micro-CT analysis showed that the extrusion and the shortening were more prominent in mice fed with the HD. In the extruded masseteric ridge of these mice, there was an increase of IGF-1 and a decrease of sclerostin in osteocytes, both of which are implicated in the promotion of bone formation. IGF-1 was shown to enhance osteoblastogenesis of the cells in the tendon.
These findings taken together indicate that the osteocytes in the mandibular bone increase IGF-1 and decrease sclerostin expression as the result of mechanical load. The alteration in the local cytokine milieu adjacent to the enthesis leads to bone formation, resulting in morphological change in the mandibular bone so the mechanical stress in the bone can be alleviated.
Results
Establishment of a novel increased mastication model. In order to impose a higher mechanical load on the musculoskeletal tissue of the maxillofacial region, we developed a HD in which nutrition component is changed so that the hardness is greater than in a normal diet (ND) (Fig. 1a). The compression strength of the HD was approximately three times higher than that of a ND (Fig. 1b). The change did not result in any significant difference in food intake or body weight between the ND-and HD-fed mice (Supplementary Fig. 1a and b). Analyses on the bone and muscle of the limbs indicated that the difference between the diets did not cause either hypertrophy or atrophy in these tissues ( Supplementary Fig. 2a-c).
During the ingestion of the two diets, mastication frequency and time were significantly higher in the mice fed with the HD than the ND (Fig. 1c). The fiber width of the masseter muscle, which is crucial for occlusion and mastication, was larger in the mice fed with the HD than the ND (Fig. 1d-f). The neuronal activity of the primary motor cortex (M1) region of the cerebrum, which controls the masticatory muscles, was assessed according to the expression of cFos, as its expression is related to masticatory stimulation 17,18 . The M1 activity was higher in the HD-fed mice (Fig. 1g,h). These findings indicate that the newly developed HD increased mastication and the mechanical load imposed on the maxillofacial region.
Increased mastication results in bone formation at the enthesis of the masseter muscle. Masticatory force is closely related to the shape of the bones in the maxillofacial region, especially the lower jawbone, i.e. the mandibular bone 15 . However, it is unclear if it is masticatory force itself that changes mandibular morphology. Thus, we predicted the morphological change of the mandibular bone under a mechanical loading condition, using a remodeling simulation. We adopted a computer model in which bone formation occurs at a point where the mechanical stress is higher than in the surrounding area 19,20 , so that the non-uniformity of the mechanical stress is reduced (Fig. 2a). A voxel finite element (FE) model of the mandibular bone of a mouse was constructed and the muscular force of the masseter muscle was set to tract the masseteric ridge antero-superio-laterally (Fig. 2b). At the initial state, stress distribution exhibited a non-uniformity, which was reduced after the bone remodeling (Fig. 2c). In the mandibular bone after loading, there was a prominent extrusion of the masseteric ridge and a shortening of the mandibular ramus (Fig. 2d).
We then investigated if such morphological changes of the mandibular bone actually occur in vivo using micro-CT images of the mandibular bone. As predicted by the computer simulation (Fig. 2d), the masseteric ridge was more extruded and the mandibular ramus was shorter in the HD-fed than ND-fed mice (Fig. 3a,b). Angular and linear analyses further revealed the extrusion of the masseteric ridge to be related to the thickening of the cortical bone at the site (Fig. 3c,d). In addition, the lingual inclination of the molar was larger and the mandibular height was lower in the HD-fed mice than ND-fed mice (Fig. 3c,d). With the parameters obtained (Fig. 3d), principal component analysis (PCA) indicated that the increased mastication model mice exhibit a distinct mandibular morphology (Fig. 3e).
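The principal component analysis of the mandibular morphometric parameters can be sketched as follows. This is an illustration only: the group sizes, parameter values and variable names below are invented, not the paper's data, and the PCA is implemented with a plain SVD rather than any particular statistics package.

```python
import numpy as np

def pca_scores(X, n_components=2):
    # Center the samples-by-parameters matrix, then project onto the
    # top principal components obtained from the SVD of the centered data.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy data: 4 "ND-fed" and 4 "HD-fed" mice, 3 morphometric parameters each
rng = np.random.default_rng(0)
nd = rng.normal(loc=[10.0, 5.0, 2.0], scale=0.1, size=(4, 3))
hd = rng.normal(loc=[11.0, 4.5, 2.5], scale=0.1, size=(4, 3))
scores = pca_scores(np.vstack([nd, hd]))
```

With well-separated group means, the two diet groups fall apart along the first principal component, which is the kind of pattern a PCA score plot like the one described above makes visible.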
The extrusion of the masseteric ridge in the HD-fed mice prompted us to analyze the bone formation and resorption around the ridge, a site where the tendon of the masseter muscle inserts into the mandibular bone. Histological analyses showed that there was an increase in the number of osteocalcin-positive cells, osteoblasts, in the enthesis, while osteoclasts were undetectable (Fig. 4a-c). Although large masticatory force may be harmful on the condyle and result in the shortening of the ramus, there was no erosion on the surface of the condyle of mice fed with HD (Fig. 4d,e). No obvious differences in the cartilage and subchondral bone were observed (Fig. 4e), indicating that the increased mastication by the HD affects the mandibular bone morphology without damaging the cartilage in the condyle. Thus, increased mastication was revealed to remodel the mandibular bone by enhancing bone formation in the enthesis of the masseter muscle.
Osteocytes produce IGF-1 in response to increased mastication, promoting osteoblastogenesis of tendon cells. Next, we examined the mechanism by which increased mastication enhances bone formation. Because osteocytes are known to sense mechanical stress imposed on the bone and release cytokines 8,9 , we hypothesized that the osteocytes in the masseteric ridge would exhibit this activity. In search of such cytokines, we found a significantly higher expression of Igf1 in the masseteric ridge of the HD-fed than ND-fed mice (Fig. 5a). There was no significant difference in Igf1 expression in the long bone ( Supplementary Fig. 3). Immunohistological analysis also showed that the protein level of IGF-1 was higher in the HD-fed mice, especially in the subsurface area of the masseteric ridge but not on the lingual side of the mandibular bone (Fig. 5b,c). The expression of the IGF-1 receptor, Igf1r, in the masseteric ridge was comparable between the mice fed with the HD and ND ( Supplementary Fig. 4). Interestingly, in the masseter muscle, Igf1r expression was higher in the mice fed with the HD ( Supplementary Fig. 4), suggesting that osteocyte IGF-1 can induce muscle enlargement by increased mastication (Fig. 1d-f).
Based on the findings described above, cells were collected from the tendon and cultured in an osteogenic medium in the presence of IGF-1. Osteoblastogenesis, as indicated by the activity of alkaline phosphatase, was enhanced by IGF-1 (Fig. 6a,b). The expression of osteoblastic genes, including Sp7, Alpl and Bglap1 (encoding Osterix, alkaline phosphatase and osteocalcin, respectively), was upregulated by IGF-1 (Fig. 6c). In order to examine if mechanically-stimulated osteocytes function as a supplier of IGF-1 for tendon cells in the process of bone formation, another in vitro experimental system was established. Cells of the osteocyte cell line IDG-SW3 were cultured in a stretch chamber and underwent cyclic stretching and compression stimulation (see Materials and Methods). This mechanical stimulation induced an upregulation of Igf1 expression (Fig. 6d). Supernatant of these cells was collected and added to the tendon cell culture, either in the presence or absence of an anti-IGF-1 antibody. The addition of supernatant of mechanically-stimulated osteocytes enhanced the osteoblastogenesis of the tendon cells, which was cancelled by the anti-IGF-1 antibody (Fig. 6e,f). These results indicate that osteocytes produce IGF-1 in response to increased mastication to induce osteoblastogenesis of tendon cells.
Increased mastication reduces sclerostin expression in osteocytes.
Osteocytes are known to produce the inhibitor of Wnt signaling sclerostin (encoded by the Sost gene), the expression of which is decreased by mechanical loading 21,22 . Thus, we analyzed its expression in the mandibular bone. The expression of Sost was significantly lower in HD-fed mice than in ND-fed mice (Fig. 7a). There was no significant difference in the expression of Tnfsf11 or Tnfrsf11b (encoding RANKL and OPG, respectively) (Fig. 7a). There were no significant differences in the long bone ( Supplementary Fig. 3). In immunohistological analysis, sclerostin-positive osteocytes were detected in the inner compartment of the mandibular bone of both the HD-fed and ND-fed mice (Fig. 7b,c). The ratio of sclerostin-expressing cells in the masseteric ridge was significantly lower in the HD-fed mice (Fig. 7d). The frequency of osteocytes that highly expressed sclerostin (sclerostin hi osteocytes), as indicated by high fluorescence intensity, was lower in the HD-fed mice (Fig. 7e,f). On the lingual side, where no morphological difference was detected (Fig. 3d), sclerostin expression was almost comparable between the mice fed with ND and HD (Fig. 7b,c). These findings indicate that increased mastication reduces the expression of sclerostin in the osteocytes, which can result in the promotion of bone formation.
Discussion
"Adaptation to the environment" is a universal dogma in biological systems. In bone homeostasis, it is well known that bones under mechanical loading remodel themselves so that they become able to endure the loaded force. Variation in the facial profile is believed to be in part the result of differences in the force of mastication and occlusion 15 . However, the evidence for the causation was unclear, despite the apparent correlation. Although this correlation was well observed in experimental animal models 16,17,[23][24][25] , the precise mechanism linking mechanical stimulation and mandibular morphology was unclear. The absence of a proper increased mastication animal model might have stagnated investigations on such mechanisms. In this study, we indicated that mechanical loading onto the jawbone itself can induce bone remodeling, using a novel hard diet. Furthermore, we found that the loading induces IGF-1 and suppresses sclerostin in osteocytes so as to enhance bone formation in the enthesis, leading to morphological change of the bone ( Supplementary Fig. 5).
Because bone remodeling is affected not only by mechanical loading but also many other stimuli inside and outside the body, it is extremely difficult to isolate the effect of mechanical load from those of other factors. Computer simulation is one of the means by which the effects of a single factor can be isolated: Our in silico remodeling simulation showed that a mechanical load imposed by the masseter muscle results in a bone phenotype similar to that of a human exhibiting strong occlusal force (Fig. 2c,d), indicating masticatory force is a causative factor in different facial patterns. Subsequent in vivo micro-CT analysis revealed that increased mastication leads to a mandibular bone phenotype that highly resembles the one that was simulated (Fig. 3a-d). With the theoretical rationale acquired by the simulation, our experimental results have become more reliable than results obtained by statistical analyses alone.
The jawbone is exposed to mechanical loading by mastication, occlusion and orthodontic forces. Because osteocytes are considered to act as a mechanosensor in bone, it has been speculated that these cells are active in jawbone remodeling. This study has underscored the significance of osteocytes by demonstrating that they upregulate the expression of IGF-1, which enhances bone formation in the enthesis so as to alleviate the disparity of the stress in the mandibular bone during mastication (Figs 2c, 5a-c).
Increased mastication suppressed sclerostin expression in the mandibular bone at the same time as an increase in IGF-1 expression (Figs 5-7). It is reported in humans that the serum levels of these factors are negatively correlated 26 . It is also reported that osteocyte-specific Igf1 deficiency relieves the suppression of sclerostin expression induced by mechanical loading, suggesting the regulation of sclerostin by IGF-1 27 , although the underlying mechanism has yet to be identified. The IGF-1 induced in osteocytes by increased mastication may regulate bone formation not only by enhancing osteoblastogenesis, but also by suppressing a bone formation inhibitor.
Clinicians have expected that masticatory force might serve a promising target in the treatment of skeletal anomalies such as skeletal open bite. However, there have been very few reports on clinical trials for such therapies 28,29 , possibly because of the lack of insight into their molecular basis. This study has indicated that masticatory force results in morphological change in the facial profile by modifying the function of osteocytes. Concerns about the adverse effects on the condyle might have distanced clinicians from such therapies. Our data suggests that increased mastication, within a certain extent, can alter the shape of the jawbone without damaging the condyle. Thus, there would be an opportunity for targeting masticatory force in the treatment and prevention of skeletal anomalies in the oro-maxillofacial region.
Materials and Methods
Experimental animals. C57BL/6J mice 3 weeks of age were obtained from Clea Japan, Inc. These mice were maintained at Tokyo Medical and Dental University under specific-pathogen-free conditions and fed with a normal diet (AIN93M; ND) or a hard diet (AIN93M; HD) containing the same nutritional components except for a modified cornstarch ratio. The body weight of the mice and their food intake were measured once a week. At 14 weeks of age, mice were sacrificed and the extracted bone, muscle and brain tissues were used for the subsequent analyses. The weight of the soleus muscle was measured after extraction. The number of mice used in each experiment is described in the corresponding figure legend. All of the animal experiments were approved by the Institutional Animal Care and Use Committee and the Genetically Modified Organisms Safety Committee of Tokyo Medical and Dental University (approval No. A2018-024A and 2015-007C, respectively) and conducted in accordance with the guidelines concerning the management and handling of experimental animals.
Measurement of compression strength.
The compression strength of the ND and HD was measured using a universal tester (Autograph AG-IS 5 kN, Shimadzu). In the testing process, a pellet of ND or HD was set between 100 mm metal disks and underwent the compression process. The test was performed at a temperature of 23 ± 2 °C and a humidity of 50 ± 5%. The maximum strength during compression was measured.
Determination of mastication frequency. After fasting for 24 hours, a subject mouse was placed in a transparent box in which a ND or HD pellet was fixed on the wall. The mastication activity of the mouse was monitored and recorded for 5 minutes. The weight of the ingested pellet was expressed as the difference between the weights before and after the experiment. The number of mastication cycles and biting time were obtained from the recorded data, and the mastication frequency required for the ingestion of 1 mg pellet was calculated.
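The calculation described above reduces to simple arithmetic; a minimal sketch (the function and variable names are ours, and the example numbers are illustrative, not measured values from the study):

```python
def mastication_frequency_per_mg(cycles, pellet_mg_before, pellet_mg_after):
    # Ingested mass is the pellet weight difference over the recording
    # period; frequency is mastication cycles per mg of pellet ingested.
    ingested_mg = pellet_mg_before - pellet_mg_after
    if ingested_mg <= 0:
        raise ValueError("no measurable pellet intake")
    return cycles / ingested_mg

# e.g. 600 recorded cycles while the pellet lost 20 mg of weight
freq = mastication_frequency_per_mg(600, pellet_mg_before=120.0,
                                    pellet_mg_after=100.0)
```

Normalizing by ingested mass makes the frequency comparable between mice that ate different amounts during the 5-minute recording.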
Histological analyses of paraffin-embedded sections
Histological analyses of paraffin-embedded sections were performed as previously described 11,30 . Excised tissues were fixed by immersion in 4% paraformaldehyde (PFA) at 4 °C overnight. Tissues containing bone further underwent decalcification in OSTEOSOFT (Merck Millipore) at 4 °C for 3 weeks. Fixed (and decalcified) tissues were dehydrated and embedded in paraffin, and 6-μm-thick sections were cut.
Hematoxylin and eosin staining was carried out by staining deparaffinized sections with hematoxylin (Muto Pure Chemicals) for 3 minutes followed by 2 minutes of staining with eosin (Wako). The muscle fiber width (minor axis) was measured using measurement software (BZ-X analyzer, Keyence). The sections of the condyle were sequentially stained with Weigert's iron hematoxylin solution (SIGMA) for 10 minutes and 0.05% Fast Green

Mathematical model for bone remodeling
The mathematical model for the bone remodeling simulation used in this study is based on the hypothesis that remodeling progresses so as to achieve a locally uniform mechanical state. This is a generic model representing bone adaptation to mechanical loading, the validity of which has been shown previously 19,20 . In this model, bone remodeling was assumed to be driven by the local stress non-uniformity Γ, defined as the natural logarithm of the ratio of the stress σc at an arbitrary point on the bone surface to the weighted average surface stress σd within the sensing distance lL, i.e. Γ = ln(σc/σd).
Incorporation into voxel-based modeling
The mechanical stress (von Mises equivalent stress) in bone was numerically computed by means of a voxel finite element method (FEM). A voxel FE model of the mandibular bone was reconstructed from serial micro-CT images. The whole bone was discretized using cubic voxel finite elements with an edge size of 37.2 μm. This voxel size is sufficiently small compared to the scale of the mandibular bone that the dependency of the simulated stress distribution on the voxel size is negligible. Because a heterogeneous distribution of material properties does not drastically change the stress distribution, and therefore will not essentially influence bone morphological changes by remodeling, the bone was assumed to be a homogeneous and isotropic elastic material, with Young's modulus E = 20 GPa and Poisson's ratio ν = 0.3.
In the framework of voxel modeling, changes of bone morphology can be expressed by the removal/addition of voxel elements from/to the bone surface. Here, we considered only bone formation, i.e. the addition of voxel elements. Although bone formation is expressed discretely, the rate of bone formation on the bone surface M can be described as a continuous function of Γ by taking M as the probability of bone formation. A continuous function was introduced in order to describe the relationship between M and Γ, in which Γu indicates the threshold value for bone formation. The parameters for the remodeling simulation were set as follows: lL = 1.12 mm, Γu = 0.6 and Mmax = 1 (element/simulation step).
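A minimal numerical sketch of this remodeling rule, using the parameter values quoted above. Since the exact shape of the continuous function M(Γ) is not reproduced here, a piecewise-linear ramp that saturates at the threshold Γu is assumed purely for illustration:

```python
import math

def stress_nonuniformity(sigma_c, sigma_d):
    # Gamma = ln(sigma_c / sigma_d): positive where the local surface
    # stress exceeds the weighted average of its neighborhood.
    return math.log(sigma_c / sigma_d)

def formation_rate(gamma, gamma_u=0.6, m_max=1.0):
    # Assumed piecewise-linear form of M(Gamma): no formation where the
    # local stress is not higher than its surroundings (Gamma <= 0),
    # saturation at m_max above the threshold Gamma_u.
    if gamma <= 0.0:
        return 0.0
    if gamma >= gamma_u:
        return m_max
    return m_max * gamma / gamma_u
```

Applied at every surface voxel each simulation step, a rule of this kind adds elements preferentially where the stress is locally elevated, which is what drives the non-uniformity of the stress distribution down over the course of the simulation.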
In vitro osteoblastogenesis of tendon cells. The calcaneal, tail and masseteric tendons were dissected and cut into small pieces. The tissue fragments were digested with 0.1% collagenase (Wako) and 0.2% dispase II (GODO SHUSEI) at 37 °C with shaking at 150 rpm for 30 minutes. After filtration through a nylon filter, the cells were cultured and expanded for one to two days. The cells were plated on a 48-well plate (3.0 × 10⁴ cells per well) and, 24 hours after plating, stimulated with an osteogenic medium (50 μg ml−1 ascorbic acid, 10 nM dexamethasone and 10 mM β-glycerophosphate in Dulbecco's modified Eagle medium (DMEM)) supplemented with recombinant mouse IGF-1 (1 ng ml−1). Supernatant from mechanically-stimulated IDG-SW3 cells (see below) was added to the tendon cell culture at 25% of the total volume; in this case the cells were plated on a 96-well plate (1.0 × 10⁴ cells in 100 μl per well). Either goat anti-IGF-1 antibody (1 μg ml−1, R&D Systems) or control goat IgG (1 μg ml−1, R&D Systems) was added to the culture. Alkaline phosphatase staining was performed as previously described 30 . Seven days after the induction of osteoblastogenesis, cultured cells were fixed with 4% PFA for 15 minutes on ice. After rinsing with PBS, alkaline phosphatase staining solution (0.06 mg ml−1 Naphthol AS-MX phosphate, 1% N,N-dimethylformamide and 1 mg ml−1 Fast Blue BB salt in 0.1 M Tris-HCl, pH 8.0) was added and the cells were stained for 15 minutes at room temperature. The staining solution was washed away, and the stained cells were air dried. The color intensity was analyzed using measurement software (BZ-X analyzer, Keyence).
Mechanical stimulation of IDG-SW3 cells.
Cells of the osteocyte cell line IDG-SW3 were maintained as previously described 31 . Cells were incubated in minimum essential medium α (MEMα) containing 10% FCS, 100 units ml−1 penicillin and 50 µg ml−1 streptomycin on type I collagen-coated (Corning) polydimethylsiloxane chambers (SC4Ha, Menicon Life Science) for 2 days at 33 °C. Osteogenic differentiation was induced in MEMα with 10% FCS supplemented with 50 μg ml−1 ascorbic acid and 4 mM β-glycerophosphate at 37 °C. On day 21 after differentiation, cells were subjected to mechanical loading using a cyclic unidirectional stretching device (ShellPa Pro, Menicon Life Science) as follows: stretch ratio, 5%; stretch frequency, 1 cycle per second; stretch time, 4 hours. Total RNA and culture supernatant were collected after mechanical stimulation.
Total esophagogastric dissociation (TEGD) in neurologically impaired children: the floor to parents
Total esophagogastric dissociation (TEGD) was proposed to treat gastroesophageal reflux disease (GERD) both as a rescue procedure in case of fundoplication failure and as first-line surgery in neurologically impaired children (NIC). The aim of the study is to evaluate the impact of TEGD on the quality of life (QoL) of both NIC and their caregivers, focusing on the parents' point of view. A retrospective observational study was conducted on all NIC who underwent TEGD in our center between 2012 and 2022. A questionnaire centered on the parents' point of view and investigating the QoL of NIC and their caregivers was administered to all patients' parents. Data were compared using Fisher's exact test and the Mann–Whitney test; a p-value < 0.05 was considered statistically significant. Twelve patients were enrolled in the study. Parents reported improvements in weight gain (p = 0.03), sleep disorders, apnea, regurgitation and vomiting (p < 0.01). Caregivers also reported a decrease in the number of hospitalizations, particularly those related to severe respiratory infections and ab ingestis pneumonia (p = 0.01). We also documented a reduction of caregivers' worries during food administration (p < 0.01). Fifty percent of parents whose children underwent both fundoplication and TEGD would suggest TEGD as the first-line surgical treatment instead of fundoplication. According to the parents' point of view, TEGD significantly improves NIC QoL, and 50% of them would enthusiastically suggest TEGD as the first-line surgical approach to GERD in NIC.
Introduction
Gastroesophageal reflux disease (GERD) is a common condition affecting up to 70% of neurologically impaired children (NIC), causing serious malnourishment, vomiting, aspiration pneumonia and sleep disorders [1]. A fundamental difference between healthy children and NIC is that, in the latter, the natural history of this disease rarely evolves toward resolution, due to the persistence or even worsening of factors such as esophageal dysmotility, delayed gastric emptying, poor posture, repeated seizures, scoliosis and drugs that may favor gastroesophageal reflux. Treatment of GERD in these patients represents a challenge, since conservative management is often not sufficient. Although surgical treatments such as Nissen fundoplication must be considered, they have poor results in NIC, with an even higher probability of failure in case of redo surgery [2][3][4].
In 1997, Adrian Bianchi described for the first time a new surgical approach named total esophagogastric dissociation (TEGD).
Although initially conceived as a last resort, some authors have suggested that it may already have gained momentum as the future of GERD surgery [5]. In the last 20 years, major technical progress has been made, and TEGD is now considered feasible even with minimally invasive approaches.
Having established the feasibility and efficacy of TEGD, surgeons should deal with its impact on QoL not only of NIC themselves but also of their caregivers [6]. NIC's parents must spend a lot of time and energy to cope with the needs of this fragile population. Feeding NIC is time-consuming and requires long-lasting meals to avoid GER, as well as being a threat considering the high rates of aspiration and apnea. Therefore, TEGD not only improves NIC's clinical conditions and consequently their QoL, but can also reduce caregivers' stress and anxiety, with positive repercussions on the entire family [7].

Sara Maria Cravano and Marco Di Mitri have contributed equally to the work.
The aim of the study is to investigate the mid- and long-term clinical outcome of TEGD according to the parents' point of view and to collect the latter's impressions of the improvements in the families' everyday life.
Study design and population
A retrospective observational study was conducted in our Department of Pediatric Surgery, IRCCS Sant'Orsola-Malpighi University Hospital of Bologna, following Ethical Committee approval (CHPED-21-02-DEG). Clinical records were retrospectively analyzed to identify all neurologically impaired children who underwent TEGD for GERD in our department between January 2011 and January 2021. Patients with less than one year of follow-up were excluded from the study.
A questionnaire (Fig. 1) was administered to all patients enrolled in the study, investigating:

• Pre-operative data, including weight gain, sleep disorders, apnea, vomiting, cough, episodes of hospitalization due to airway infections, and time required for feeding in terms of number and duration of meals.
• Post-operative data: the same parameters collected for the pre-operative period were investigated to compare results.
• Parents' point of view on clinical outcome, improvement in feeding management, anxiety during food administration, QoL, and satisfaction after TEGD.
For 6 questions in the parents' point of view section, caregivers could answer "yes", "no" or "I don't know".
Surgical procedure
The indication for TEGD is failure of fundoplication with recurrence of GER and/or recurrent aspiration due to significant swallowing discoordination.
TEGD, in our center, is performed with an open approach. This surgical technique features a Roux-en-Y esophagojejunal anastomosis and a jejunal end-to-side anastomosis with a gastrostomy opening. Access to the peritoneal cavity is obtained through a longitudinal median xipho-pubic incision. The abdominal esophagus is isolated at the esophageal hiatus and detached from the stomach with a linear stapler (Fig. 2A). The jejunum is then examined and sectioned 20-30 cm distally to the ligament of Treitz (Fig. 2B). The distal stump is brought up to perform an end-to-side anastomosis with the esophageal stump (Fig. 2C, D), whereas the proximal jejunal stump is mobilized for a Roux-en-Y jejuno-jejunal end-to-side anastomosis (Fig. 2E). Since the exclusion of the stomach is pivotal to achieving a permanent resolution of gastroesophageal reflux, it is necessary to fashion a gastrostomy for feeding purposes. On the other hand, the esophagojejunal anastomosis ensures the swallowing of saliva and small meals. The last step of the procedure entails a pyloroplasty to prevent gastroparesis, since the vagus nerves are often severed during the esophagogastric dissociation (Fig. 2F) [8].
Statistical analysis
Continuous data are presented as mean ± standard deviation (SD) and median. Data were analyzed with Fisher's exact test and the Mann-Whitney test. A p-value < 0.05 was considered statistically significant.
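For illustration, the two statistical tests described above can be run with SciPy; the counts and feeding times below are hypothetical placeholders, not the study's data:

```python
from scipy import stats

# Hypothetical 2x2 counts: apnea present/absent before vs. after TEGD
# (illustrative numbers only, NOT the study's data)
table = [[9, 3],   # before surgery: apnea present, absent
         [2, 10]]  # after surgery:  apnea present, absent
odds_ratio, p_fisher = stats.fisher_exact(table)

# Hypothetical per-meal feeding times (hours) before vs. after surgery
before = [4.0, 6.5, 3.0, 17.0, 0.25, 5.0]
after = [2.0, 3.0, 1.0, 12.0, 0.8, 1.5]
u_stat, p_mw = stats.mannwhitneyu(before, after, alternative="two-sided")

print(f"Fisher p = {p_fisher:.3f}, Mann-Whitney p = {p_mw:.3f}")
```

A p-value below the pre-specified 0.05 threshold would be reported as statistically significant.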
Results
Three of the 15 patients (20%) passed away many years after surgery due to respiratory failure caused by the worsening of their underlying disease; therefore, 12 (80%) were enrolled in the study.
Parents reported a statistically significant improvement in terms of weight gain during post-operative follow-up (p = 0.02), reduction of sleep disorders and apnea episodes (p < 0.01).
Vomiting and regurgitation decreased (p < 0.01), as did respiratory symptoms such as cough and airway infections (p < 0.01) and hospitalizations for pneumonia (p = 0.01). Data are shown in Fig. 3. Caregivers seemed more confident and less scared during food administration after TEGD (p < 0.01), and the time required to feed the patients drastically decreased from a mean of 4.9 ± 5.8 h (range: 0.25-17 h) to 2.4 ± 3.1 h (range: 0.8-12 h) per meal (Fig. 4).
Among the 4 patients who underwent both Nissen fundoplication and TEGD, caregivers stated in 2 cases (50%) that they would have preferred TEGD as the first procedure.
Discussion
Gastroesophageal reflux disease is particularly common in NIC, with an incidence of up to 70% [9,10]. GERD can present with a variety of symptoms such as regurgitation, irritability, emesis, feeding refusal, growth retardation, epigastric pain, chronic cough, hoarseness, halitosis, dental erosions, apnea and ab ingestis pneumonia [11][12][13]. These conditions make the care of NIC complex, especially for parents, who often spend most of their time and energy coping with their children's needs [14].
To demonstrate how GERD can heavily affect the everyday life of children, Marlais et al., in a retrospective observational study, investigated QoL in 40 children aged between 5 and 18 years who presented with gastrointestinal symptoms at their center from February to May 2009. They showed that children with GERD had a significantly lower QoL score than children with IBD (74.0 vs. 81.8, p < 0.01) and healthy children (74.0 vs. 84.4, p < 0.01) [15].
Various tools are available to diagnose gastroesophageal reflux (GER). Commonly, 24-h pH-impedance monitoring is used to evaluate GER, because it measures both the pH value and the retrograde or anterograde bolus transport in the esophagus, thereby allowing the detection of all episodes of reflux over a 24-h period [16].
An upper gastrointestinal barium contrast study is mainly helpful to detect anatomical reasons of GER and to assess its severity.
GERD unresponsive to conservative treatment is a challenge for pediatricians and pediatric surgeons. There are several surgical techniques to increase pressure on the lower esophageal sphincter (LES) to avoid reflux in pediatric age such as Nissen, Dor, Toupet and Thal fundoplication. Nowadays, the gold standard technique to treat reflux in NIC is considered laparoscopic or robotic Nissen fundoplication [17]. When minimally invasive approaches are not advisable, open technique is necessary [18].
If reflux persists, a redo surgery is reasonable but the feasibility of a laparoscopic approach decreases from 89 to 68% in case of a second revision [19,20].
In 1997, Adrian Bianchi suggested TEGD for the first time in NIC in whom fundoplication did not produce clinical improvement, or as a first-line alternative approach to GERD [5,21].
Lall et al. analyzed their experience with 50 patients who underwent TEGD: 34 as a primary approach and 16 as a rescue after failure of fundoplication. They state that TEGD is safe and feasible, with low post-operative mortality and morbidity, and suggest it as a first-line surgical treatment in severe neurologic impairment with GERD coupled with significant oropharyngeal incoordination [22].
Similarly, Coletta et al., analyzing 66 NIC who underwent TEGD for GERD, showed that it seems a reliable surgical option as either a primary or a rescue procedure [23]. Buratti et al., comparing Nissen fundoplication with TEGD, concluded that the latter guarantees better results in terms of improvement of all anthropometric and nearly all biochemical parameters examined, and of the decrease in episodes of respiratory infections, hospitalizations, and feeding time [6,24,25].
More than 20 years after the first proposal, and considering the experiences reported in the literature and discussed above, we can assume that, when performed by experienced surgeons, TEGD is a safe, feasible and effective procedure. Thus, further discussion of the surgical details or outcomes of this procedure may now be redundant [26,27].
What we think should be better investigated is the point of view of caregivers, not only in terms of the efficacy of TEGD but also of how it can impact the everyday life of patients' families.
In our study, parents reported improvements in weight gain, a decrease in episodes of post-prandial regurgitation and vomit and apnea, better sleep quality, less need of hospitalization especially for ab ingestis pneumonia [28].
In line with the aim of the study, the data that interested us most were the following. Parents declared that they feel less frustrated and scared when feeding their children. This extremely important result is a consequence of the improvement in GER symptoms, which made nutrition easier and safer.
Moreover, parents reported a decrease in the time required to administer a single meal to their children.
If we couple the two previous findings with the reduction in hospitalizations, it is clear how this surgical procedure positively affected the QoL of caregivers and their families: parents are less frustrated and can spend and enjoy more time with their children. Moreover, happier parents, better general conditions of NIC, less time spent in hospital, and more free time improve the whole family's daily quality of life.
In our cohort, 4 patients underwent a Nissen fundoplication, which proved ineffective in preventing GER, prior to undergoing TEGD. Upon completion of the surgical course, the parents of two of these patients (50%) admitted they would have preferred TEGD in the first instance. Such a decision is entirely understandable and deserves full support, considering that the quality of life of NIC is profoundly hampered by the underlying conditions and GER. Therefore, parents are often exhausted and request a single definitive solution. Moreover, as experience with this procedure increases, a growing number of authors propose TEGD as a primary surgical treatment in NIC, rather than as a rescue procedure.
On the other hand, we fully understand the reasons of the remaining two sets of parents (50%), who declared that they agreed with a first attempt at fundoplication before proceeding to TEGD. TEGD is a major surgical procedure that implies a total disruption of the child's anatomy, making it difficult to accept initially.
The present study is, to our knowledge, the only one in the literature entirely focused on parents' perception of TEGD's outcome. However, it has some limitations.
First, a study that uses a non-validated questionnaire may be subject to measurement error, and its conclusions cannot be drawn with total confidence. On the other hand, there is no validated survey for this specific population, and the common PedsQL questionnaires cannot be applied to NIC. Moreover, the small sample size prevents us from generalizing our conclusions. Nevertheless, we are convinced that our results, of great interest for pediatric surgeons, could encourage other centers to investigate the topics addressed in this paper with further multicentric studies.
Conclusion
Parents of NIC who underwent TEGD are enthusiastic about the outcome obtained, reporting improvements in almost all the areas investigated (symptoms, quality of sleep, respiratory infections and ab ingestis pneumonia, weight gain, general conditions).
Moreover, the reduction of GER symptoms during feeding made this activity safer and easier, and parents feel less scared and stressed while performing it.
Feeding duration after TEGD decreased and that gives caregivers the possibility to have more time that can be spent in other activities with their child and the rest of the family. This is even more true considering the reduction of episodes of hospitalization reported after TEGD.
50% of caregivers of NIC who underwent Nissen fundoplication and, due to failure, subsequent TEGD, would choose TEGD as first-line treatment for GERD.
Further studies are needed to confirm our results and to discuss the opportunity of TEGD as first-line treatment for GERD in NIC.
Funding Open access funding provided by Alma Mater Studiorum -Università di Bologna within the CRUI-CARE Agreement.
Conflict of interest
The authors declare no conflict of interest.
Ethical approval
The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of IRCCS Sant'Orsola Malpighi University Hospital (CHPED-21-02-DEG).
Informed Consent
Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Behavioral Cloning via Search in Video PreTraining Latent Space
Our aim is to build autonomous agents that can solve tasks in environments like Minecraft. To do so, we used an imitation learning-based approach. We formulate our control problem as a search problem over a dataset of expert demonstrations, where the agent copies actions from a similar demonstration trajectory of image-action pairs. We perform a proximity search over the BASALT MineRL dataset in the latent representation of a Video PreTraining model. The agent copies the actions from the expert trajectory as long as the distance between the state representations of the agent and the selected expert trajectory from the dataset does not diverge. Then the proximity search is repeated. Our approach can effectively recover meaningful demonstration trajectories and produces human-like behavior of an agent in the Minecraft environment.
Introduction
This study was motivated by the MineRL BASALT 2022 challenge [1]. In the challenge, an agent must solve the following tasks: find a cave, catch a pet, build a village house, and make a waterfall [1]. The provided dataset of experts' demonstrations contains trajectories of image-action pairs. Additionally, both the MineRL BASALT dataset and environments do not contain reward information. Therefore, our primary focus was on Behavioural Cloning (BC) and Planning [2] [3] methods to address the tasks, rather than deep reinforcement learning (DRL) [4] [5].
Methods
A dataset of expert demonstrations solving the following tasks was provided [1]: find a cave, catch a pet, build a village house, and make a waterfall. Each episode is a trajectory of image-action pairs. No reward information is provided.
In our approach, we use experts' demonstrations to reshape the control problem as a search problem over a latent space of partial trajectories (called situations). Our work assumes that:

• Similar situations require similar solutions or actions.
• A situation can be represented in a latent space.
• The situations latent space is a metric space; therefore, we can assess the numerical similarity between any two situations.

Video PreTraining (VPT) model
Our approach uses a provided VPT model [6] for encoding a situation in a latent space (see Figure 2). The model uses the IMPALA [7] convolutional neural network (CNN) as a backbone for the encoding of individual images. The CNN encodes each image into a 1024-dimensional vector. The stack of 129 CNN outputs passes through four transformer blocks (see Figure 2). In addition to the current frame, a memory stack stores the last 128 embeddings for each transformer block. The output of the last transformer block is 129 embedding vectors, each 1024-dimensional. The architecture discards 128 of these output embedding vectors and processes further only the current frame's embedding vector. Two MLP output heads take the current frame's embedding vector as input to predict actions. The first output head predicts a discrete action (one out of 8641 possible combinations of compound keyboard actions). The second output head predicts computer mouse control as a discrete cluster of one of 121 = 11 × 11 possible mouse displacement regions (±5 regions for X times ±5 regions for Y). The architecture is shown in Figure 2.
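The sliding 129-frame situation window described above can be illustrated with a toy encoder; the per-frame "CNN" here is a hash-based placeholder and no transformer attention is implemented, so this is only a structural sketch, not the VPT model:

```python
from collections import deque
import numpy as np

class SituationEncoder:
    """Toy stand-in for the VPT encoding pipeline: a per-frame encoder
    followed by a sliding window of the most recent frame embeddings.
    The real model uses an IMPALA CNN and four transformer blocks."""

    def __init__(self, dim=1024, window=129):
        self.dim = dim
        self.window = deque(maxlen=window)  # memory of recent embeddings

    def encode_frame(self, frame):
        # Placeholder for the CNN: hash the frame into a fixed vector.
        seed = abs(hash(frame.tobytes())) % (2 ** 32)
        return np.random.default_rng(seed).normal(size=self.dim)

    def push(self, frame):
        """Add a new frame and return the current situation embedding."""
        self.window.append(self.encode_frame(frame))
        # Placeholder for the transformer stack: the real model attends
        # over the whole window; here we just return the newest embedding.
        return self.window[-1]
```

The `deque(maxlen=…)` mirrors the fixed-size memory stack: once full, pushing a new frame silently drops the oldest embedding.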
Search-based BC
Search-based behavioral cloning (BC) aims to reproduce an expert's behavior with high fidelity by copying its solutions from past experience. We define a situation as a set {(o_τ, a_τ) : τ = t, …, t + Δt} of consecutive observation-action pairs coming from a set of provided expert trajectories, where Δt is less than or equal to the number of input slots of a transformer block that processes embedding vectors of input images.
We encode the expert's past situations through a provided VPT model [6]. Thus, we obtain a latent space populated by N-dimensional situation points. Due to the expert's optimality assumption, we can assume that each situation has been addressed and solved in an optimal way. We encode each sampled situation with the same network. Then, we search for the nearest embedding point in the dataset of situation points. Once the reference situation has been selected, we copy its corresponding actions. After each time-step we update the current and reference situations, by updating the queue of embedding vectors of images for the current situation, while shifting to the next time-step in the recorded trajectory from the dataset for the reference situation. To assess similarity, we compute the L1 distance between the current situation and the reference situation. In most cases, the reference and current situations will evolve differently over time, and thus their L1 distance will diverge. Therefore, at each time-step we recompute the similarity of the current and reference situations. A new search is performed whenever either of two conditions is met:

• The L1 distance between the current and reference situations exceeds a threshold (see red lines in Figure 1);
• The trajectory from the dataset has been followed for more than 128 time-steps (see blue lines in Figure 1).
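The search-and-follow loop described above can be sketched as follows; the flat list of (embedding, action) pairs and the numeric threshold are illustrative assumptions, since the paper does not specify its threshold value:

```python
import numpy as np

def l1(a, b):
    """L1 distance between two situation embeddings."""
    return np.abs(a - b).sum()

def choose_reference(current_embed, dataset_embeds):
    """Index of the dataset situation nearest (L1) to the current one."""
    dists = np.abs(dataset_embeds - current_embed).sum(axis=1)
    return int(np.argmin(dists))

def next_action(current_embed, dataset, state, threshold=40.0, max_follow=128):
    """dataset: list of (embedding, action) pairs forming one flat expert
    trajectory. state: dict with the index being followed ('idx') and the
    number of steps since the last search ('steps')."""
    ref_embed = dataset[state["idx"]][0]
    # Re-search when the features diverge or the follow budget is spent.
    if state["steps"] >= max_follow or l1(current_embed, ref_embed) > threshold:
        embeds = np.stack([e for e, _ in dataset])
        state["idx"] = choose_reference(current_embed, embeds)
        state["steps"] = 0
    action = dataset[state["idx"]][1]
    # Shift the reference one step forward along the recorded trajectory.
    state["idx"] = min(state["idx"] + 1, len(dataset) - 1)
    state["steps"] += 1
    return action
```

In a real agent the `current_embed` would come from the VPT encoder at every environment step, and the dataset would hold many trajectories rather than one flat list.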
Choosing feature divergence as a criterion for controlling search comes with a major advantage: whenever the copied actions cannot be performed (e.g. there are physical constraints that limit the agent's movement space), the features will diverge even faster. Thus, our agent will quickly perform a new search and address the faulty situation.
Our approach is illustrated in Figure 3. We refer to the generation of the latent space as the "training" procedure of our agent, although it is rather a preprocessing step needed to provide prior knowledge to our agent.
Experiments and Results
We applied our method to the MineRL BASALT Challenge 2022 [1], where it ranked at the top of the leaderboard at the end of Round 1. The agent had to demonstrate human-like behavior while completing the tasks. Our agent produces visually human-resembling behaviour in the tasks.
In Table 1 we report quantitative measurements of the L1 distance before and after a new search for the best-matching trajectory from the dataset. In all four tasks, we found that the average L1 distance after a search is much lower than before it. A situation encapsulates both current and past information. Therefore, at the very beginning of an episode, the situation embedding may not be informative. To mitigate this, we allow the agent to warm up by keeping it still for the first second of a new episode. This way, the agent can gather some images and produce a more informative representation of the current situation. The warm-up phase can be vital whenever the agent faces a dangerous situation at the beginning of an episode, e.g. when spawning close to a lava pit.
Discussion & Conclusion
Here we presented our approach, which represents the control problem as a search problem over a latent space of partial trajectories (called situations) from a dataset of experts' demonstrations. Our approach can effectively recover meaningful demonstration trajectories and shows human-like behavior of an agent in the Minecraft environment. Possible directions for improving the approach are methods of self-supervised segmentation of important objects in first-person views [8], multi-modal fusion of segmented representations [9], modularization of control [10] [11] and involvement of working memory [12].
Molecular identification and morphological variations of Amblyomma lepidum imported to Egypt, with notes about its potential distribution under climate change
The tick Amblyomma lepidum is an ectoparasite of veterinary importance due to its role in transmitting livestock diseases in Africa, including heartwater. This study was conducted in 2023 to monitor Amblyomma spp. infestation in dromedary camels imported from Somalia, Ethiopia, and Sudan to Egypt. This study inspected 600 camels for tick infestation at the Giza governorate's camel market: 200 imported from Somalia, 200 from Ethiopia, and 200 from Sudan. Specimens were identified using morphological characteristics and phylogenetic analyses of the 12S and 16S rRNA genes. Clusters were calculated using an unweighted pair-group method with arithmetic averages (UPGMA) dendrogram to group the specimens according to their morphometric characteristics. The morphometric analysis compared the body shape of ticks collected from different countries by analyzing dorsal features. Principal component analysis (PCA) and canonical variate analysis (CVA) were performed to assess body shape variation among specimens from different countries. Results indicated that camels were infested by 57 male Amblyomma lepidum; no female specimens were observed, and one of these specimens may have a morphological abnormality. The results suggest that A. lepidum specimens collected from camels imported to Egypt from African countries exhibit locally adapted morphology with variations among specimens, particularly in body size. This adaptation suggests minimal potential for genetic divergence. Ecological niche modeling was used to predict the areas in Africa with suitable climates for A. lepidum. The study confirmed that East African countries might have the most favorable climatic conditions for A. lepidum to thrive. Interestingly, the amount of rain during the wettest quarter (Bio16) had the strongest influence on the tick's potential distribution, with suitability decreasing sharply as rainfall increased.
Future predictions indicate that the climatic habitat suitability for A. lepidum will decrease under changing climate conditions. However, historical, current, and future predictions indicate no suitable climatic habitats for A. lepidum in Egypt. These findings demand continuous surveillance of A. lepidum in camel populations and the development of targeted strategies to manage tick infestations and prevent the spread of heartwater disease. Supplementary Information The online version contains supplementary material available at 10.1007/s00436-024-08284-0.
Introduction
Ticks have been considered important medical and veterinary ectoparasites of livestock worldwide (Jongejan and Uilenberg 2004). The Amblyomma genus, which is represented by approximately 137 species distributed across the Neotropical, Afrotropical, and Australasian faunal regions (Guglielmone et al. 2015; Soares et al. 2023), has been widely associated with the spread of several pathogens such as Rickettsia, Ehrlichia, and Theileria (Eberhardt et al. 2020; Mnisi et al. 2022; Smit et al. 2023). In Africa, Ehrlichia ruminantium, the causative agent of fatal heartwater disease, is transmitted by several Amblyomma species (Faburay et al. 2008; Esemu et al. 2013; Getange et al. 2021). Among these species, Amblyomma lepidum transmits E. ruminantium, which causes heartwater disease in goats, cattle, and sheep (Walker et al. 2003).
Amblyomma lepidum is a three-host tick that is distributed in East Africa in the more arid savannah countries, especially in central and eastern Sudan, Ethiopia, southern Somalia, eastern Uganda, Kenya, and the northern region of central Tanzania (Hoogstraal 1956; Walker et al. 2003). This species was introduced from Sudan and East Africa with imported cattle into Egypt but has not become established there yet (Hoogstraal 1952; Liebish et al. 1989; Okely et al. 2022b). Recently, several studies collected A. lepidum from Egypt, but only male specimens were recorded (Youssef et al. 2015; Hassan et al. 2017; Okely et al. 2021; Abouelhassan et al. 2023).
Morphological and genetic variations within the same tick species from different geographic areas can occur due to adaptation to environmental conditions (Dantas-Torres et al. 2013). Variations in scutal ornamentation among Amblyomma species have been observed in African countries, such as in A. variegatum and A. tholloni (Hoogstraal 1956). Morphometric and morphological analyses have been conducted to examine shape variations in other Amblyomma species such as A. mixtum, A. gemma, A. variegatum, and A. hebraeum (Pretorius and Clarke 2001; Aguilar-Domínguez et al. 2021b). However, no study has yet investigated variations within the A. lepidum population.
The ecological niche modeling technique has been used to understand the distribution patterns of disease vectors (Peterson et al. 2004). In recent years, several studies have anticipated the current and future potential distribution of tick vectors of medical and veterinary importance belonging to different genera (Boorgula et al. 2020; Aguilar-Domínguez et al. 2021a; Polo et al. 2021; Gillingham et al. 2023; Noll et al. 2023). However, no study has predicted the potential distribution of A. lepidum in its geographical range.
Here, we report the occurrence of A. lepidum in Egypt, imported from three African countries (Ethiopia, Somalia, and Sudan), based on morphological and molecular characterization of ticks. We also describe shape variations in the imported ticks and employ climatic niche models to estimate historically suitable climatic habitats for A. lepidum across Africa. This approach allows us to identify areas where the vector may have thrived in the past and to assess potential shifts in its climatic suitability under current and future climate change scenarios. By evaluating the climatic suitability for A. lepidum in Egypt, we can determine whether the newly discovered samples are capable of establishing a population in Egyptian climates. This information is crucial for developing proactive measures to manage the vector and mitigate its impact.
Specimen collection and morphological identification
As a continuation of the collection trips to monitor Amblyomma ticks in Egypt (Abouelhassan et al. 2023), we examined dromedary camels monthly in 2023 in Giza governorate (Supplementary file 1), imported from three African countries: Somalia, Ethiopia, and Sudan. Six hundred (600) dromedary camels were inspected: 200 camels imported from Somalia, 200 from Ethiopia, and 200 from Sudan at the camel market in Giza governorate. All Amblyomma tick specimens were removed from camels using fine forceps and stored in vials containing 70% alcohol and 20% glycerol for transportation to the Okely's Tick Collection (Department of Entomology, Ain Shams University, Cairo, Egypt) for morphological identification. Specimens were individually examined under a Labomed CZM4 Stereo Microscope (Labomed, Fremont, CA, USA) to identify them to species level based on morphological characteristics and taxonomic keys (Hoogstraal 1956; Walker et al. 2003; Okely et al. 2021; Abouelhassan et al. 2023).
Imaging and morphological feature digitization
An AmScope MU1000 10 MP microscope camera (AmScope, Irvine, CA, USA) linked to a Labomed CZM4 Stereo Microscope (Labomed, Fremont, CA, USA) was used to photograph specimens at multiple focal planes. Selected morphological keys were manually digitized (Figs. 1a and 2a). Procrustes analysis of variance (ANOVA) in MorphoJ v.1.07 software (Klingenberg 2011) was conducted to reduce errors in morphological key imaging due to differences in scale, position, and orientation from key coordinates.
Morphometric analysis
To evaluate shape variation among specimens from the three different countries, PCA was performed to obtain shape changes; CVA was also conducted to assess differences in body shape according to geographical variation. The tps files were imported into MorphoJ software (Klingenberg 2011). We used the Procrustes fit function and edited classifiers, and then PCA was used to observe the total shape variation in transformation grids. Boxplots were constructed to illustrate the variation in the size of body parts of specimens from each country.
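As a rough illustration of this workflow (Procrustes-style normalisation of digitized landmarks followed by PCA), a simplified NumPy sketch is shown below; it is not MorphoJ's full generalised Procrustes superimposition, which also removes rotation:

```python
import numpy as np

def align(shape):
    """Crude Procrustes-style normalisation of one landmark
    configuration (n_landmarks x 2): centre it and scale to unit size.
    (A full Procrustes fit would also rotate shapes into alignment.)"""
    shape = shape - shape.mean(axis=0)
    return shape / np.linalg.norm(shape)

def shape_pca(shapes):
    """shapes: array (n_specimens, n_landmarks, 2) of digitized
    landmarks. Returns principal-component scores per specimen."""
    X = np.stack([align(s).ravel() for s in shapes])
    X = X - X.mean(axis=0)                      # centre the shape space
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt.T                             # PC scores
```

The leading columns of the returned score matrix would be the axes plotted when visualising shape variation among specimens from different countries.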
Phenetic relationships among specimens
To examine phenetic relationships among specimens based on morphological measurements, a UPGMA dendrogram was constructed based on different similarity indices by PCA using PAST software v 4.03 (Hammer et al. 2001).
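An equivalent UPGMA clustering can be reproduced outside PAST with SciPy, since UPGMA is average-linkage agglomerative clustering on a distance matrix; the measurement matrix below is synthetic, for illustration only:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Synthetic morphometric matrix (8 specimens x 3 measurements):
# two groups of four specimens centred at 0.0 and 1.0
rng = np.random.default_rng(1)
measurements = np.vstack([rng.normal(0.0, 0.1, (4, 3)),
                          rng.normal(1.0, 0.1, (4, 3))])

# UPGMA == average-linkage agglomerative clustering on Euclidean distances
tree = linkage(pdist(measurements, metric="euclidean"), method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)
```

The `tree` linkage matrix is what a dendrogram plot would be drawn from (e.g. with `scipy.cluster.hierarchy.dendrogram`).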
Genetic analysis
A total of 18 A. lepidum samples from the current study were analyzed. DNA was extracted from individual tick samples using the QIAamp DNA Mini Kit (Qiagen) according to the manufacturer's instructions and then stored at − 20 °C until use. PCR was performed to amplify the 16S rRNA and 12S rRNA genes. The primers used for the 16S rRNA gene were (5′-TTG GGC AAG AAG ACC CTA TGAA-3′ and 5′-CCG GTC TGA ACT CAG ATC AAGT-3′), while those for the 12S rRNA gene were (5′-GAG GAA TTT GCT CTG TAA TGG-3′ and 5′-AAG AGT GAC GGG CGA TAT GT-3′), according to Norris et al. (1999).
The amplified PCR products were checked using a 1.6% agarose gel containing 0.4 µg/ml of ethidium bromide. Sanger sequencing was accomplished by Solgent (Daejeon, South Korea). The generated sequences of the 16S rRNA and 12S rRNA genes were analyzed using BLAST (Johnson et al. 2008). The nucleotide sequences were submitted to GenBank and then aligned and compared with closely related reference sequences retrieved from the GenBank database. The phylogenetic tree of 12S rRNA was constructed using a total of 42 sequences, including nine Amblyomma lepidum from the current study, 31 Amblyomma spp. from the GenBank database, and two Ixodes spp. sequences as an outgroup for rooting the phylogenetic tree. The phylogenetic tree of 16S rRNA was constructed using a total of 40 sequences, including nine Amblyomma lepidum from the current study, 29 Amblyomma spp. from the GenBank database, and two Ixodes spp. sequences as an outgroup for rooting the phylogenetic tree.
The phylogenetic trees were computed in IQ-TREE version 1.6.12 (Nguyen et al. 2015) using the maximum likelihood method, with ModelFinder (Kalyaanamoorthy et al. 2017) to select the best model(s) and 2000 bootstrap replications. In this analysis, the best-fit model was K3Pu + F + G4, selected according to the Bayesian information criterion (BIC). The phylogenetic trees were visualized in MEGA11 software (Tamura et al. 2021).
Ecological niche modeling
Occurrence records for A. lepidum were compiled from VectorMap (www.vectormap.org; 530 records), the Global Biodiversity Information Facility (GBIF; www.gbif.org; 7 records), and data from Egyptian tick surveillance programs conducted by Mohammed Okely (M.O.) during the years 2019 to 2023 and documented in previous literature (Okely et al. 2021; Abouelhassan et al. 2023), including one record from Egypt. These 538 records were merged and underwent rigorous cleaning to minimize biases and overpredictions in current and future model estimations (Okely et al. 2020). Only records with precise coordinates and metadata indicating their curation source were retained. Duplicates were removed using Microsoft Excel and spatially rarefied using SDMtoolbox 2.4 within ArcGIS 10.3 to eliminate redundant data occurring within ≤ 2.5′ (≈ 5 km²) (Brown et al. 2017; Okely and Al-Khalaf 2022). This yielded 517 unique, spatially rarefied records, randomly divided into calibration (259) and testing (258) subsets.
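A greedy spatial thinning of occurrence records, in the spirit of the ≈5 km rarefaction performed with SDMtoolbox, might be sketched as follows; the equirectangular distance approximation and the (lon, lat) tuple format are simplifying assumptions, not SDMtoolbox's actual algorithm:

```python
import numpy as np

def rarefy(coords, min_km=5.0):
    """Greedy spatial thinning: keep a record only if it lies at least
    min_km from every record already kept. coords: list of (lon, lat)
    pairs in decimal degrees; distances use a rough equirectangular
    approximation (1 degree of latitude ~ 111.32 km)."""
    kept = []
    for lon, lat in coords:
        far_enough = True
        for klon, klat in kept:
            dx = (lon - klon) * 111.32 * np.cos(np.radians((lat + klat) / 2))
            dy = (lat - klat) * 111.32
            if np.hypot(dx, dy) < min_km:
                far_enough = False
                break
        if far_enough:
            kept.append((lon, lat))
    return kept
```

Exact duplicates would normally be removed first (e.g. by passing the coordinates through a `set`), after which thinning collapses near-duplicate clusters to a single representative point.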
Environmental data for historical climatic conditions were sourced from WorldClim version 2.1 (www.worldclim.org) at a 2.5 min (≈ 5 km²) spatial resolution. Parallel data were obtained for two climate change scenarios: BCC-CSM2-MR (Beijing Climate Center Climate System) and IPSL-CM6A-LR (Institut Pierre-Simon Laplace Climate Model), representing climatic responses to ongoing climate change in the current period (2021-2040) and three future periods (2041-2060, 2061-2080, and 2081-2100). The two most pessimistic socioeconomic pathways (SSP.370 and SSP.585) were selected for these scenarios. Sixteen future scenario combinations (2 SSPs × 4 time periods × 2 scenarios) were used to describe current and future climatic conditions under changing climate. These datasets (historical and climate change scenarios) encompassed 19 bioclimatic variables; the historical dataset was derived from monthly temperature and precipitation records from 1970 to 2000 (Escobar et al. 2014), with parallel projections of these variables available for the two climate change scenarios in each period and SSP. Notably, the Mean Temperature of the Wettest Quarter (Bio.8), Mean Temperature of the Driest Quarter (Bio.9), Precipitation of the Warmest Quarter (Bio.18), and Precipitation of the Coldest Quarter (Bio.19) were excluded due to spatial artifacts (Datta et al. 2020; Okely et al. 2023). Highly correlated variables were omitted using Pearson correlation (r > |0.7|; Dormann et al. 2013). The final variables for predicting A. lepidum climatic habitat suitability were Annual Mean Temperature (Bio.1), Isothermality (Bio.3), Temperature Seasonality (Bio.4), Min Temperature of the Coldest Month (Bio.6), and Precipitation of the Wettest Quarter (Bio.16). Each bioclimatic layer was clipped to the study area (Africa) using the "extract by mask" tool implemented in ArcGIS 10.3 (Nasser et al. 2019) for the historical and climate change scenarios. It is worth noting that the historical data may not fully capture recent climate change trends; however, the model calibrated with these data was projected onto current (2021-2040) and future (2041-2100) conditions under the Shared Socioeconomic Pathways (SSPs), representing anticipated climatic conditions under various climate change scenarios. A comprehensive list of the 19 bioclimatic variables is available in supplementary file 2.

The historical climatic niche model for A. lepidum was developed using the maximum entropy algorithm (MaxEnt v3.3.3e; Phillips et al. 2006), calibrated with 259 occurrences and the five bioclimatic variables (Bio.1, 3, 4, 6, and 16) retained after Pearson correlation thresholding and clipped to the African study area. The median across 100 bootstrap model replicates represented the species' climatic habitat suitability under historical conditions. Model accuracy was evaluated using three approaches. The first was the area under the curve (AUC; 0-1; Swets 1988; Nasser et al. 2021), where AUC = 0.5 indicates the model has no predictive ability and performs no better than random guessing, AUC = 1 represents a model that perfectly discriminates between positive and negative cases, and AUC > 0.5 shows the model has some predictive ability, with higher values indicating better performance. The second was partial receiver operating characteristic (pROC) statistics, with 500 bootstrapped iterations (Osorio-Olvera et al. 2018) in NicheToolbox, and the third was the True Skill Statistic (TSS; -1 to 1; Allouche et al. 2006). Positive TSS values indicate strong agreement between predicted models and the actual species distribution.

To assess shifts in climatic habitat suitability due to current and future climate change, the historical MaxEnt model was projected onto the anticipated climatic conditions. Medians across the two models of each scenario at each period and SSP were calculated using ArcGIS 10.3, mitigating the uncertainty of a single climate model (Shao et al. 2022). These medians represented the species' climatic habitat suitability under climate change. Future models were estimated for 2041-2060, 2061-2080, and 2081-2100 for the two SSPs (370 and 585), while 2021-2040 represented ongoing climate change conditions for the same SSPs.
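The AUC and TSS evaluation metrics mentioned above can be illustrated with a minimal Python sketch (the scores and confusion-matrix counts below are invented for illustration; this is not the paper's actual evaluation code):

```python
def auc(pos_scores, neg_scores):
    # probability that a random presence scores above a random absence
    # (the Mann-Whitney formulation of ROC AUC; ties count as half a win)
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def tss(tp, fn, tn, fp):
    # True Skill Statistic = sensitivity + specificity - 1, range [-1, 1]
    return tp / (tp + fn) + tn / (tn + fp) - 1

auc_val = auc([0.9, 0.8, 0.7], [0.6, 0.4])  # perfect separation -> AUC = 1.0
tss_val = tss(tp=45, fn=5, tn=40, fp=10)    # 0.9 + 0.8 - 1 = 0.7
```

AUC is threshold-independent, whereas TSS requires a binarization threshold first, which is why the two are commonly reported together in species distribution modeling.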
Results
In total, 57 male Amblyomma ticks were recorded, without female specimens being observed. All specimens were identified as A. lepidum based on the mesial area of enamel ornamentation with dense, coarse punctations, lateral median areas of enamel ornamentation, and festoons with enamel on 6-8 of 11 festoons (Figs. 1 and 2). Morphological variations in body shape among specimens were observed (Figs. 1 and 2). Interestingly, one specimen may have a morphological abnormality represented by an indistinct posteromedian stripe, atrophy of the second left leg, an indistinct lateral median area of enamel ornamentation on the conscutum on the left side, and an abnormality in the enamel ornamentation on the conscutum (Fig. 3a-c). There was also an abnormality in festoon enameling, where the central festoon and the outer festoon on the right side have ornamentation, although usually there is no enamel on the central and two outermost festoons (Fig. 3d).
The measurements for all fifteen morphological variables for thirty A. lepidum specimens (Supplementary file 3) were subjected to principal component analysis (PCA) because of correlations among some of these variables; the first 3 principal components (PCs) were used for the cluster and correlation analyses. The first 3 PCs summarized about 90% of the overall variance in the morphological data. Based on the morphometric measurements, the A. lepidum males were grouped into different groups with different similarity indices (Fig. 4). Pearson correlation coefficients between the fifteen characters were calculated (Supplementary file 4). There were strong negative correlations among some morphological traits and positive correlations among others. The highest positive correlation was detected between basis capituli ventral width and basis capituli dorsal width.
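The share of variance captured by the first three PCs can be checked with a short numpy sketch. The 30×15 trait matrix below is simulated from three latent factors as a stand-in for the real morphometric data, so the high explained-variance share is built into the example:

```python
import numpy as np

def leading_variance_share(X, n_components=3):
    # centre the trait matrix, then use SVD: squared singular values
    # are proportional to the variance explained by each PC
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    share = s**2 / np.sum(s**2)
    return share[:n_components].sum()

rng = np.random.default_rng(1)
latent = rng.normal(size=(30, 3))        # 3 underlying size/shape factors
mixing = rng.normal(size=(3, 15))        # mixed into 15 correlated traits
X = latent @ mixing + 0.05 * rng.normal(size=(30, 15))
share = leading_variance_share(X, 3)     # close to 1 for this simulation
```

With real data, traits are usually standardized to unit variance first so that large length measurements do not dominate the leading components.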
Principal component analysis (PCA) and canonical variate analysis (CVA) supported significant morphological differentiation (P < 0.05) between A. lepidum from Sudan, Somalia, and Ethiopia. The PCA showed shape variations, and the three groups separated distinctly, especially Somalia vs. Sudan/Ethiopia along PC1 (Fig. 5a). The CVA showed that the most significant variation was between Somalia vs. Sudan/Ethiopia along CV1 (Fig. 5b). The deformation grid of the first principal component for specimens with mouthparts revealed significant differences in feature numbers 1, 2, 14, 21, 3, 4, and 11 (Fig. 6a), whereas the deformation grid of the first principal component for specimens without mouthparts revealed significant differences in feature numbers 7, 2, 4, 10, and 12 (Fig. 6b).
There were significant differences between measurements of the morphological characters of A. lepidum ticks imported from the three countries (P < 0.05). The main differences among the fifteen morphological characters seem to be related to length measurements (body, capitulum, and hypostome), with Somalian ticks generally larger than Ethiopian and Sudanese ticks (Fig. 7).
The identification of the tick species was confirmed through phylogenetic analysis of the 12S rRNA and 16S rRNA genes. The sequences for 12S rRNA and 16S rRNA were submitted to GenBank. Assigned accession numbers for 16S rRNA are OQ947777-85, and those for 12S rRNA are OQ955297-305. The lengths of the amplified 12S rRNA and 16S rRNA sequences in all examined specimens in Egypt were similar (approx. 300 bp); the alignment length was ca. 250 bp after trimming the low-quality ends of each sequence.
Suitable climatic habitats for A. lepidum in the historical period were predicted to be high and medium in Kenya, Ethiopia, Tanzania, Uganda, parts of Northern Eritrea, parts of Southern Somalia, some areas of Southern Sudan, Southwestern Angola, and narrow zones in West Africa, especially in Senegal (Fig. 9a). Lower suitable climatic habitats were predicted in zones of Central and Western Africa (Fig. 9a). Precipitation of the wettest quarter (Bio16) showed higher effects on the potential distribution of A. lepidum relative to the other predictive bioclimatic variables (Fig. 9b). The climatic habitat suitability of A. lepidum decreased sharply with increasing precipitation of the wettest quarter (Bio16) (Fig. 9c). The median of future predictions for the two climatic scenarios under SSPs 370 and 585 from 2021 to 2100 showed differences between the diverse SSPs over this period (Fig. 10). For the period from 2021 to 2040, the predictions showed high agreement in suitable climatic habitats compared with the predicted climatic habitats under historical conditions, especially in East Africa. However, the number of highly suitable pixels increased in Southern Somalia in the period from 2021 to 2040, and the number of low suitable pixels in Central Africa decreased in the same period (Fig. 10a, b). For the future climatic conditions influenced by climate change from 2041 to 2100, the suitable climatic habitat is expected to decrease, especially from 2061 to 2100 (Fig. 10c-h).

Fig. 6 Transformation grids for visualizing shape change for the first principal component. a Among Amblyomma lepidum males with mouthparts. b Among Amblyomma lepidum males without mouthparts

Fig. 7 Box plots of variation in the size of body parts of Amblyomma lepidum males imported to Egypt from Sudan, Ethiopia, and Somalia. a Body length. b Mesial area length. c Hypostome length. d Basis capituli ventral length
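The pixel-wise median used in the Methods to combine the two climate models for each period and SSP can be sketched with numpy (the two small suitability rasters are invented; real layers would be read from the projected MaxEnt outputs):

```python
import numpy as np

# hypothetical 2x2 suitability rasters, one per climate model (same period/SSP)
bcc  = np.array([[0.2, 0.8],
                 [0.5, 0.1]])
ipsl = np.array([[0.4, 0.6],
                 [0.5, 0.3]])

# pixel-wise median across the model axis mitigates reliance on a single
# climate model; with only two models it coincides with the pixel-wise mean
ensemble = np.median(np.stack([bcc, ipsl]), axis=0)
```

With larger model ensembles the median is preferred over the mean precisely because it is robust to a single outlying model.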
Discussion
This study focused on monitoring Amblyomma tick species imported from three African countries (Somalia, Ethiopia, and Sudan) to Egypt. All the collected specimens were identified as A. lepidum using morphological characteristics and sequence data for the 12S and 16S rRNA genes, which are reliable genetic markers for species identification (Abouelhassan et al. 2019). Although these ticks have been reported in several studies from Egypt (Adham et al. 2009; Hassan et al. 2017; Okely et al. 2021; Abouelhassan et al. 2023), this is the first report of morphological variations and abnormalities for this species. Morphological abnormalities were observed in only one specimen out of a total of 57 A. lepidum specimens recorded during this study. Although this is the first report of local anomalies in A. lepidum from Egypt, a previous study from Uganda (Balinandi et al. 2019) reported local and general anomalies for the same species. Morphological abnormalities and anomalies in Hyalomma dromedarii and H. rufipes ticks have previously been observed in Egypt (Okely et al. 2022a). The observation of anomalous ticks in Egypt in this study and the previous one (Okely et al. 2022a) may be due to antiparasitic treatments used in Egypt: mutations resulting from treatments such as insecticides and acaricides to which hard ticks are exposed may give rise to morphologically anomalous ticks (Luz et al. 2023). The distribution of A. lepidum is strongly affected by rainfall (Walker et al. 2003), and it prefers arid areas with 250-750 mm of rainfall. The same result was obtained from the response curves (Fig. 9c), indicating this species' peak distribution in habitats within the range documented by Walker et al. (2003).
The current study also revealed variations in the body shape of male A. lepidum using geometric morphometric analysis. Canonical variate analysis (CVA) and principal component analysis (PCA) were employed to assess these variations. CVA effectively separates geographic populations and is sensitive to small statistical trends, while PCA is less sensitive but less prone to bias. The combined results of CVA and PCA indicate statistically significant biological differences among tick populations. The main differences seem to be related to length measurements (body, capitulum, and hypostome), with the larger ticks tending to come from Somalia. Variations in body size among ticks can be attributed to factors such as the quality and quantity of the host's blood consumed during feeding (Sonenshine 1993; Brunner et al. 2011). Additionally, the habitat of the host may also influence tick size. For instance, tick species that parasitize semi-aquatic animals, like Amblyomma dubitatum and Amblyomma romitii, exhibit larger spiracular plates (Luz et al. 2020). Environmental effects might also cause morphological differences between different populations of the same species (Hutcheson et al. 1995).
Although the trade of livestock across borders can be financially beneficial, it poses significant health risks. In Africa, transhumance and livestock trading can transport ticks carrying pathogens to new regions, potentially leading to the establishment of new tick populations and the spread of tick-borne pathogens (Silatsa et al. 2019; Perveen et al. 2021). For instance, recent studies showed how livestock movement contributes to the geographical expansion of the invasive tick Rhipicephalus microplus into new zones in Africa and, subsequently, of the pathogens it carries (Madder et al. 2012; Kamani et al. 2017; Ouedraogo et al. 2021; Addo et al. 2023). Cross-border animal trade and unrestricted movements of live animals have led to the widespread distribution of R. microplus in Africa (Silatsa et al. 2019; Kanduma et al. 2020; Muhanguzi et al. 2020). The movement of livestock also plays a pivotal role in the spread of tick-borne diseases (Parola and Raoult 2001). Animal movements are believed to be responsible for the dissemination of diseases such as Rift Valley fever in Africa (Chevalier et al. 2004). Regulations governing the movement of livestock and the adoption of biosafety strategies constitute an established approach to controlling infectious diseases, with international standards provided by the World Organisation for Animal Health (WOAH) (Fèvre et al. 2006). In Egypt, it is crucial to monitor tick species and their associated pathogens on imported wild and domestic animals entering the country (Abouelhassan et al. 2023).
In conclusion, this study provides morphological and molecular analyses of A. lepidum ticks collected from imported camels in Egypt. The results suggest that A. lepidum shows morphological variation among specimens from the three countries of origin, with minimal potential for genetic divergence. The study highlights the importance of monitoring and controlling the movement of livestock to prevent the introduction and spread of ticks and tick-borne diseases.
Fig. 3 Abnormalities in male Amblyomma lepidum collected from imported camels to Egypt. a Abnormality in the enamel ornamentation on the conscutum with indistinct posteromedian stripe. b Indistinct enamel ornamentation of the lateral median area on the left side. c Atrophy of the second left leg. d Abnormality in festoon enameling
Fig. 4 Cluster of the morphometry for Amblyomma lepidum males in four similarity indices
Mapping Urban Disaster Adaptation Typology of Cebolok Community of Semarang City
The concept of urban resilience is related to disaster risk management. A resilient city can be recognized by the adaptive capacity of its community to stress and shock, its preparedness when a disaster occurs, and its quick recovery after a disaster. This article explores strategies for increasing urban resilience as community adaptation measures for reducing the risk of flood disasters in the urban village of Cebolok, Semarang. A quantitative method was used, distributing questionnaires and in-depth surveys to 40 households. The results show two findings. First, the Cebolok community modified their housing as a form of physical adaptation to floods. Second, the community adaptation strategy is related to the necessity of maintaining livelihood assets.
Introduction
Current climate change, such as increased heat waves and heavier rainfall, is expected to increase flood hazard intensity [1]. Semarang City is one of the cities on the north coast of Java facing a serious threat of flooding and inundation. Floods in urban areas create a threat to urban development, urban infrastructure, and community settlements [2]. In urban areas, apart from climate change, floods are also caused by human factors. A drainage system that does not function optimally due to the deposition of waste is one cause of flooding. In addition, the phenomenon of urbanization also contributes to increased flood risk [3]. The threats arising from disasters lead to increased community vulnerability.
A disaster risk management process in vulnerable areas would increase resilience. Resilience is the ability of a community exposed to a hazard to resist and recover from the effects of risk in a punctual and efficient manner [4]. Resilience can be viewed as an adaptation process to changes and focuses on disaster experiences to increase learning ability and self-organization capacity [5, 6]. Promoting disaster resilience requires careful attention to disaster risk management. Disaster management is a reactive approach starting from the preparedness, response, and recovery stages, which involve the community in an adaptation process [7]. Hence, a community that is resilient to disasters is one that has an effective adaptive capacity, and resilience can be measured via adaptation actions in the disaster risk management process. Carter et al. [4] presented several reasons underlying the importance of carrying out disaster adaptation measures in an urban area. The trend of urbanization has led to population growth, which has resulted in increased dependency on urban infrastructure, reduction of water catchment areas, and an increase in the number of poor and elderly people, so that vulnerability to disasters has increased [8]. Improving resilience emphasizes building community capacity to adapt to changes, especially adaptation to disasters [5]. The authors argue that adaptation is an important action for urban communities, with the goal of protecting against and reducing the impact of disaster risk at the household level. The types of adaptation measures are structural and non-structural measures as additional options to reduce losses [9].
Semarang City is one of the big cities in the northern part of Java that often experiences floods. This research focuses on the Cebolok area on Jalan Gajah Raya, located in Sambirejo Village, one of the urban villages regularly affected by floods. The area is in a strategic location, so it has the potential to experience rapid settlement growth because it is near the city center. It is important to pay attention to the problem of flooding in this area, not only because of the settlement's density but also because this area is part of Semarang City's social-cultural strategic area, especially the area around the Central Java Great Mosque. Its designation as a social-cultural strategic area requires that it be able to provide optimal services. Many studies have shown that floods cause losses of time and material, so communities take action to adapt [9-11]. Communities that are vulnerable to floods will increase their adaptation to adjust to the conditions in which they live. According to Marfai et al. [12], many local communities have developed small-scale household adaptation strategies based on experiences from previous flood disasters. The threat of flooding in the Cebolok area has stimulated households to adapt to the flood-prone living conditions.
Previous research has discussed various types of community adaptation. In this research, adaptation is also identified in a spatial context, concluding with an adaptation typology. Strategies for increasing resilience and reducing vulnerability to disasters require the serious involvement of various stakeholders. Many impacts of disasters are seen as a failure of urban management by governments and institutions that pay little attention to disaster issues [13]. The city government has the responsibility to provide adequate services and infrastructure as an effort to reduce vulnerability to disasters. However, Dodman & Satterthwaite [13] state that the implementation of city government responsibilities in developing countries still falls short of the goal of increasing disaster resilience. Therefore, the objective of this study is to analyze the community adaptation strategy in the flood-prone area of Cebolok. A brief review of disaster risk management at the city scale in the study area is also necessary to identify existing gaps in the disaster management process. It is important to map urban resilience to disasters so that decision makers understand the specific context of resilience when formulating climate change resilience-sensitive policy [14].
Method
This study used a quantitative method with descriptive statistical analysis techniques applied to data from questionnaires and interviews. Respondents were selected from households in the Cebolok settlement that had lived there for more than 5 years and had been affected by floods. The processed data are presented using a Geographical Information System (GIS), diagrams, and percentages.
The sample comprised 40 household units taken from each neighborhood unit (Rukun Tetangga) in the Cebolok area. Samples were taken using the snowball sampling technique, which starts with trusted respondents. For example, it began with the village unit head (Rukun Warga), who then provided references or recommendations for other respondents considered able to provide information. In this article, 2 types of analysis are carried out, as seen in Figure 1. The Cebolok area is an urban village in Semarang located in Gayamsari District. This area has been experiencing floods every year, so the community was expected to have adopted adaptation strategies against the floods. The outputs of this analysis are a map of flood levels, the trend of physical adaptation measures, and a typology map of physical adaptation in the Cebolok area.
Characteristics of Floods in the Cebolok Area
Related to this research, the road network and drainage system are the infrastructure that need attention. The local road network in the Cebolok area is 2-5 meters wide. A 10-meter stretch of road in Cebolok 2 Alley is in damaged condition due to floods in this area. The drainage system in the research area consists of primary, secondary, and tertiary drainage (Figure 2). The secondary drainage on Gajah Raya Road and Soekarno Hatta Road in pictures (d) and (e) of Figure 2 is 1-2 meters wide but filled with rubbish. On normal days, the water level in the drainage is almost as high as the road. This condition is one of the causes of floods in Cebolok when it rains continuously. The main problem of this research is the flood incidents that threaten the Cebolok area, Sambirejo Village. Floods in the area have become a routine problem. The flood incidents in the Cebolok area have not resulted in loss of life or property. Floods occur with heights ranging from 30 cm to a maximum of 80 cm. Although this is not large-scale flooding, it remains a serious problem because the area is not only a place for people to live but also accommodates the economic activities of the local community. These conditions hamper various activities in the Cebolok area. Floods also occur regularly every year, so people need to anticipate and adapt to reduce losses and risks due to flood disasters.
Floods in the Cebolok area are caused by both natural and human factors. Climate change has resulted in unpredictable rainfall intensity, and high rainfall is one of the causes of flooding in this area. Vulnerability to flooding also arises from the inadequate condition of urban infrastructure for anticipating floods. The drainage system on Jalan Gajah Raya, including in the Cebolok area, does not function optimally due to its insufficient capacity. Indeed, Zhou et al. [15] have shown that the provision of urban infrastructure in the form of a drainage system is very important in reducing the risk of flood disasters. In addition to the physical condition of the drainage canals, which is inadequate to accommodate the water, the deposition of waste and sedimentation have resulted in decreased drainage capacity. Frequently, the water level in the drainage system is almost the same as the road. Under these conditions, when rainfall intensity is high, water overflows from the drainage, causing flooding in the Cebolok area.
Based on interviews with the local community, settlement growth is also one of the factors that has worsened flooding conditions in the Cebolok area. The construction of new housing in the western part of the Cebolok area is considered to be the cause of the reduced water catchment area. The community in Cebolok perceives that the presence of this housing has exacerbated the flood conditions in the area where they live. Many studies have found that among the risk sources of flood-prone areas are poor settlement planning and rapid population growth [11, 16, 17]. Handayani et al. [3] also stated that the urbanization occurring in the northern region of Java has resulted in an increase in land use changes to built-up land, followed by an increase in disasters as indicated by the continuous occurrence of floods.
The floods that occur in the Cebolok area can be identified as two types by height. Small floods of 15-20 cm often appear when rain falls continuously over a long period; their intensity cannot be predicted, but they often appear during the rainy season. Meanwhile, floods with a height of 50-80 cm generally occur twice a year. Figure 3 shows several photos taken when the floods came. Picture (a) is a photo in Cebolok 1 Alley with a flood height of 60 cm, pictures (b) and (c) are photos of the flood incident in Cebolok 2 Alley with a water level of 70 cm, while the water level in pictures (d) and (e) is 65 cm. The problem of flooding in the Cebolok area disrupts residents' activities because the overflowing water enters their houses, and the high density of the settlement leads to higher losses due to flooding. Reports from several online news portals indicate that floods in the Cebolok area occur every year. One incident covered in the news on February 20, 2020 mentioned flood heights reaching 30-60 cm (tribunnews.com). In 2021, floods occurred twice, on February 8, 2021 (halosemarang.id) and on September 28, 2021 (tribunnews.com), with flood heights of around 50-60 cm. The last flood occurred on December 31, 2022. The people of the Cebolok area stated that the water started to rise at 06.00 a.m. on December 31, 2022. Based on interview results, the flood height at that time ranged from 50-80 cm, and the flood lasted 2 days. Generally, the flood level in the central part of the Cebolok area is higher than in the western area because the road level is slightly lower. The Rukun Tetangga 4 area was more severely affected by flooding because it is located on the edge of Gajah Raya Road; the higher position of Gajah Raya Road causes more water to enter the houses. Based on information from online news media, interviews with local community members, and documentation, the flood level in the Cebolok area can be mapped from the flood heights of December 31, 2022, as follows. Figure 4 shows that the flood in the Cebolok area reached heights of 50-80 cm on December 31, 2022. The highest floods are in the area beside Jalan Gajah Raya.
Community Efforts to Build Resilience Through Adaptation
Adaptation is a long-term process of responding to climate vulnerability to create adaptive capacity and build resilience. It refers to the capacity of a system, including an individual or household, to increase capacity based on experiences of disasters [18]. From this understanding, resilience at the household scale in this study is assessed from the actions of each household that aim to reduce risk and increase protection against the floods occurring in their neighborhood. The study focuses on actions in the form of physical processes, or physical adaptations, which can then be mapped spatially to determine their typology.
The study found several physical adaptation strategies in the form of modifications of the houses carried out by the inhabitants. These actions were conducted independently by each household. The following diagram explains the percentage of households taking action to modify their housing. Figure 5 shows that 32.5% of people have not modified their houses. The majority of the community, 37.5%, modified their houses by raising the floor in several rooms. They tend to prefer raising certain rooms: most people choose to elevate the living room, while some choose to elevate their front rooms, which are usually used as stores and trade warehouses. These rooms are considered important because they support income generation. These facts are consistent with findings from studies on increasing resilience and household-level disaster risk prevention [2, 12]. People in the Cebolok settlement raised the floors of their houses by 50-150 cm depending on the needs and abilities of each household. Examples of modified housing can be seen in Figure 6. Picture (b) of Figure 6 shows a household that raised its stall to protect its source of income, while pictures (c) and (d) of Figure 6 show two households that built an additional floor to protect their goods.
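The household shares can be tallied with a short Python sketch. Note that only the 37.5% partial-floor-raising and 32.5% no-modification shares come from the survey; the remaining split of the 40 households is invented so that the example totals correctly:

```python
from collections import Counter

# hypothetical survey responses for the 40 households; only the counts for
# "raise floor in some rooms" (15/40) and "none" (13/40) match the reported
# 37.5% and 32.5% shares -- the other two categories are assumed
responses = (["raise floor in some rooms"] * 15
             + ["raise whole floor"] * 7
             + ["additional floor"] * 5
             + ["none"] * 13)
shares = {k: 100 * v / len(responses) for k, v in Counter(responses).items()}
```

Joining such per-household categories to house coordinates is then all a GIS needs to render the adaptation typology map described in the methods.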
Another physical adaptation was building an additional floor onto the house, an action taken by only a few households. According to Abass [9], building additional storeys is one way to escape the menace of constant flooding; the function of an additional floor is to safeguard the family's valuable assets. However, building an additional floor is not necessary for households in this part of the neighborhood because the flood levels are relatively low, approximately 50-80 cm.
Figure 6 shows 2 types of scattered locations: households that undertook physical adaptation and those that did not. Households in the western part of the Cebolok area chose to postpone building additional floors because the topography there is higher than in the eastern part, so the flood water level is not as high. The choice to undertake physical adaptation was not driven by spatial considerations alone; the strategy also depends on household preferences and on social and financial conditions, as has been shown in several studies [19, 20]. Amadi [21] also noted that level of education, duration of residence, age, and household income are variables that affect the willingness to undertake adaptation actions. In the Cebolok area, people with low financial capacity try to reduce adaptation costs by raising the floor of their existing living room without modifying the house roof. In several flood incidents, raising only the living room floor triggered more losses when other rooms inside the house were flooded. Another physical adaptation taken is constructing sand embankments to prevent water from entering houses. Most Cebolok residents have lived in the area since birth and have experienced flooding at least once. It is evident that the community has taken steps to increase resilience in order to protect families from the dangers and risks of flood disasters, and that these adaptation strategies are based on previous experience of flooding. Aside from physical modification of houses, the Cebolok community has also taken anticipatory action through joint participation activities, such as voluntary work to clean up trash blocking drainage canals. Participation from the community shows an increase in individual capacity. Research experience in South Africa [22] shows the importance of building individual capacity so that disaster response is created through long-term planning. This condition represents the emergence of a sense of belonging to the region's quality of life.
Gaps in Disaster Risk Management
The adaptation strategies in Cebolok are a form of social learning from experiences of previous flood disasters. Haque et al. [20] similarly noted that direct encounters with flooding are transformed into flood-related knowledge, and people come to understand or find solutions for dealing with floods. These strategies operate at the household scale and depend on the social and economic characteristics of the households. However, to achieve city-level resilience to disaster threats, especially floods in Semarang City, disaster risk management on a broader scale is needed. Handayani et al. [7] identified three important milestones in disaster risk management in Indonesia: the formation of the National Board for Disaster Management (INBDM), the Disaster Management Boards (DMB) at the provincial and district/city levels, and the existence of Local Preparedness Groups (LPGs). In Semarang City, there were already 35 LPG units in 2018 under the Semarang DMB [7]. The interviews revealed that in Sambirejo Village there is no officially formed LPG from the Semarang City DMB. Post-disaster response actions are coordinated by the head of the Rukun Warga, and there is no formal community group dedicated to helping coordinate flood management. Under conditions of limited resources, it has been shown that the adaptation strategies adopted are related to how households ensure the sustainability of their livelihoods. It can be concluded that the actual strategy adopted by Cebolok residents does not depend on how they measure the magnitude of the disaster risk they will face, but rather on how much the impact of the disaster affects their livelihoods [23, 24].
As noted, the flooding in the Cebolok area was caused not only by climate change, which has made rainfall unpredictable, but also by a drainage system that cannot function optimally. This indicates the unpreparedness of urban infrastructure for tackling flood disasters. Dodman & Satterthwaite [13] state that, to increase resilience and minimize the impact of disasters, it is necessary to involve various stakeholders in urban governance, including infrastructure. Given the issue of climate change, the government has a responsibility to minimize hazards and reduce the vulnerability of the population. This is also in line with Setyowati et al. [25], who argue that the government's role is to take fast and appropriate action as a form of protecting the community from disaster hazards. Based on the interviews, the Department of Public Works and Spatial Planning of Semarang City repaired and raised roads affected by floods, including Gajah Raya Road, to improve road quality so the roads are not easily damaged. However, according to local inhabitants, these activities had a negative consequence: the raised road caused rainwater runoff to flow into the residential area located along the edge of Gajah Raya Road, creating more flood incidents.
In the case of the Cebolok area, flooding was also caused by the existing drainage system, which does not function optimally because it cannot accommodate all the water that comes during high rainfall. The drainage system receives a lot of waste, not only from settlements but also from service and trading activities. According to interviews with the village head, the government has taken action to clean up the drainage canals along Gajah Raya Road, but the results have not been able to reduce flooding. This evidence confirms that the government is still taking short-term actions in the disaster risk management process. In the case of Durban, South Africa, the government has taken an active role in the adaptation process: it has highlighted the current issues and has included climate change in long-term urban planning, so that vulnerability to disasters is expected to be dealt with effectively [26]. In sum, all stakeholders need to participate in the adaptation process to achieve resilience and help increase preparedness for future disasters.
Conclusion
This paper described the problems of the flood disaster in the Cebolok area. It aimed to analyze the disaster management process of community adaptation actions in the flood-affected areas of Cebolok as a form of disaster resilience. Flood disasters occur every year in this area and almost all houses are affected. Cebolok communities took adaptation measures based on their experiences with previous flood disasters. Households physically modified their housing by raising floors and building additional stories to increase resilience to disasters. This indicates that there is awareness among community groups of disaster management at the household scale. However, from the perspective of Semarang City governance, there are still gaps in the disaster risk management process in terms of managing urban infrastructure, especially problems with urban drainage systems. Urban infrastructure has not been able to minimize the risk of flooding and its impact on the community. This research evidences that resilience to disasters is a form of community capacity to recover from and fight against the effects and risks of disasters. Hence, the authors recommend coordination and involvement among various stakeholders to achieve disaster resilience.
Figure 3. Photos of the Flood Disaster in Cebolok Area (Personal Documentation, 2022)
Figure 4. Map of Flood Level in Cebolok Area (Analysis of Author, 2023)
Figure 5. Percentage of Household doing Physical Modification in Cebolok Area (Analysis of Author, 2023)
Self-perceived health in institutionalized elderly
This study aimed to verify health self-perception, its prevalence and associated factors in institutionalized elderly. A cross-sectional study is presented herein, conducted in 10 Long-Term Care Institutions for the Elderly (LTIE) in the city of Natal (Northeast Brazil), between October and December 2013. Sociodemographic variables were analyzed, along with institution-related and health state variables. Descriptive and bivariate analyses were carried out (Chi-squared test, Fisher's exact test or linear trend Chi-squared test), as well as multivariate analysis (logistic regression). The final sample consisted of 127 elderly. The prevalence of negative self-perceived health was 63.19% (CI 95%: 55.07-70.63), and was associated with weight loss (PR: 1.54; CI 95%: 1.19-1.99), rheumatic disease (PR: 1.46; CI 95%: 1.05-2.01) and not-for-profit LTIE (PR: 1.37; CI 95%: 1.03-1.83), adjusted by sex. More than half of the elderly reported negative self-perceived health, which was associated with weight loss, rheumatic disease and type of institution. Actions must be developed to promote better health conditions in LTIE, such as nutrition consulting and physical therapy, to improve quality of life.
Introduction
Advances in medicine and technology have extended human life expectancy, and along with decreased fecundity and mortality rates, have caused an increase in the Brazilian elderly population. Although this phenomenon is a benefit to society, it is also an important challenge if these additional years of life are not lived in adequate health conditions 1 .
Health is primordial to guarantee independence, autonomy, and continuity in the contribution of the elderly to society. Especially as the aging process progresses, health issues become more evident, and therefore the self-perception of health becomes mostly negative, interfering in the wellness levels reported by the elderly 2 .
The search for satisfactory self-perceived health is connected to sociodemographic, economic, cultural, and psychological aspects, as well as to physical capacity conditions. However, there is discrepancy when measuring the latter, due to the different contexts in which populations are inserted. One of the mechanisms applied to verify these aspects is self-perceived health, which can be measured by assessments carried out by the individuals themselves and/or referred morbidity.
There is strong evidence that self-perceived health is an excellent predictor of objective health, i.e., of the number of chronic diseases, degree of functional disability and depression, resulting in a conjecture of mortality in elderly populations 3 . Also, sociodemographic aspects such as age, sex, education level and income are some of the factors associated with self-perceived health found in the literature 2 . Men present a higher capacity of transforming physical disease into emotional suffering, when compared to women; women, in turn, report their health more frequently as "bad" when compared to men 4,5 . Individuals that consider their health to be "bad" present a higher risk of hospitalization, institutionalization and mortality, when compared to those that consider their health "excellent" 2,6,7 .
There are few studies on self-perceived health in Latin America, especially in institutionalized elderly 8 . It is important to deepen knowledge on the aspects involved in self-perceived health, enabling the identification of more vulnerable areas and/or elderly subgroups, and contributing to the elaboration of health promotion programs. The objective of the work presented herein is to verify self-perceived health in institutionalized elderly and the factors associated with "bad" self-perceived health.
Methods
A cross-sectional study is presented herein, carried out in ten (71.4%) of the 14 long-term care institutions for the elderly (LTIE) registered in the Sanitary Vigilance of the municipality of Natal (Northeast Brazil). Five institutions were private and five were not-for-profit (there were no public LTIE). The other four (28.6%) LTIE refused to participate in the study. Data were collected from updated medical records of the elderly at each institution, including in the study the individuals at least 60 years old that were present at the institutions throughout the research period. The elderly that were not physically present at the LTIE due to hospitalization, those in terminal state or without sufficient cognitive capacity to answer questionnaires were excluded from the study.
Data were gathered in the period between October and December, 2013, after a pilot study was carried out with 25 elderly at the first LTIE studied. The questionnaires were applied by previously trained researchers, who attended meetings with team members. Self-perceived health was assessed by the question "how do you consider your current health state?". The dependent variable of the study was dichotomized: good perception (categories "excellent" and "good") and bad (categories "regular", "bad" and "very bad") 4,9 .
For each elderly, information was collected on sociodemographic conditions (age, sex, race, education level, marital status, number of children, type of LTIE, time and reason for institutionalization, free time occupations, retirement, money administration, health plan and number of elderly per caregiver) and health state (chronic diseases, daily use of medicine, consumption of tobacco and alcohol, practice and level of physical activities, exhaustion, body mass index, weight loss, presence of urinary and fecal incontinence, mobility state, functional and cognitive capacities). The diseases analyzed included: arterial hypertension, diabetes, cancer, pulmonary disease, cerebrovascular accident, dementia (including Alzheimer's), Parkinson's disease, osteoporosis, kidney failure, cardiovascular disease, rheumatic disease, mental illness, depression, dyslipidemia, and other unspecified illnesses. Information was obtained from medical records or provided by personnel at institutions (social assistants, nursing technicians or caregivers).
Cognitive capacity was evaluated by Pfeiffer's Test, which assesses short- and long-term memory, orientation, information on daily events and mathematical capacity. This instrument enables the classification of the elderly into intact mental function, and slight, moderate or severe cognitive decline, taking into consideration the education level of the individual 10 . Presence or absence of functional disability was considered when the individual presented (or not) dependence in one or more Basic Activities of Daily Life (BADL) of the Katz Index 11 .
Body Mass Index (BMI) was calculated from the relationship between the weight (in kg) and the squared height (in m). Weight and height measurements were taken according to the techniques recommended by the World Health Organization (1995). An electronic Tanita® scale was utilized, with 150 kg capacity and 100 g precision. Total height was obtained as the average of two measurements, with an exact-height portable stadiometer (1 mm precision). Classification of the BMI values followed what was established by the Food and Nutrition Surveillance System (SISVAN) (2008) for the elderly: underweight (< 22 kg/m²), eutrophic (≥ 22 and < 27 kg/m²) and overweight (≥ 27 kg/m²) 12 . Involuntary weight loss was evaluated by the question "throughout the last year, have you lost more than 4.5 kg or 5% of your body weight unintentionally (without dieting)?" 13 . If the elderly were unsure or did not know, the nutritionists at the institution were consulted.
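As a concrete illustration of the formula and the SISVAN cutoffs above, a minimal sketch (the function names are our own, not from the study):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by squared height (m)."""
    return weight_kg / height_m ** 2


def sisvan_category(bmi_value: float) -> str:
    """SISVAN (2008) classification for the elderly used in the study."""
    if bmi_value < 22:
        return "underweight"
    if bmi_value < 27:
        return "eutrophic"
    return "overweight"


# Example: 58 kg at 1.60 m gives BMI ~ 22.66, i.e. eutrophic.
print(sisvan_category(bmi(58.0, 1.60)))  # -> eutrophic
```

Note that the cutoffs for the elderly differ from the adult WHO ranges, which is why the study cites SISVAN explicitly.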
The short version of the International Physical Activity Questionnaire (IPAQ) was applied to evaluate the level of physical activity of the individuals. IPAQ is a transculturally adapted instrument, validated for the Brazilian elderly population, which takes into consideration the time dedicated within the previous week, with minimum duration of ten continuous minutes, to three activities: walking, and moderate- or vigorous-intensity activities. The overall energy expended (metabolic equivalent of task: MET-minutes/week) was multiplied by the weight of the elderly and divided by 60 kg. The lowest quintiles of these results, stratified by gender, were identified and utilized as cutoff points to classify a low level of physical activity 14 .
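The scoring arithmetic described above can be sketched as follows. The MET values per activity type are the standard constants of the short-form IPAQ scoring protocol; the study does not state its exact constants, so these are an assumption:

```python
# MET values per the short-form IPAQ scoring protocol (assumed standard
# values; the study's exact constants are not stated in the text).
MET = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}


def weekly_met_minutes(minutes_per_day: dict, days_per_week: dict) -> float:
    """MET-minutes/week summed over the three IPAQ activity types."""
    return sum(MET[a] * minutes_per_day[a] * days_per_week[a] for a in MET)


def weight_adjusted_score(met_minutes: float, weight_kg: float) -> float:
    """The study scales energy expenditure by the ratio weight / 60 kg."""
    return met_minutes * weight_kg / 60.0


# Example: 30 min walking on 5 days plus 20 min moderate activity on 2 days
# gives 655 MET-min/week; a 72 kg person is then scaled by 72/60.
mets = weekly_met_minutes({"walking": 30, "moderate": 20, "vigorous": 0},
                          {"walking": 5, "moderate": 2, "vigorous": 0})
print(weight_adjusted_score(mets, 72.0))  # ~ 786 MET-min/week after scaling
```

In the study, the gender-stratified lowest quintile of such scores marked the "low physical activity" group.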
The work presented herein is part of a wider research project, entitled "Human aging and health: the reality of institutionalized elderly in the city of Natal/RN", approved by the Research Ethics Committee of the Federal University of Rio Grande do Norte. The university approved an amendment to the original project to add other variables and carry out the present study. Signature of free informed consent forms was mandatory, either by the elderly or legal tutor, and also by the direct caregiver.
Initially, a comparative analysis was carried out between the individuals included in and excluded from the study, through chi-squared and Student's t tests. Descriptive statistics of each group were also presented. Inferential statistics were applied to carry out bivariate analysis by chi-squared (or Fisher's exact) and linear trend chi-squared tests. Magnitude of association was verified by the prevalence ratio for each independent variable regarding the dependent variable, at a significance level of 95%. The variables with p values under 0.225 were analyzed by logistic regression to build the multivariate model, using the Stepwise Forward method. Permanence of the variable in the multiple analysis depended on the Likelihood Ratio Test, absence of multicollinearity, as well as its capacity of improving the model by the Hosmer and Lemeshow test. Odds ratios were converted into prevalence ratios according to Miettinen and Cook 15 .
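As an illustration of the last step, one commonly used closed-form conversion from an odds ratio to a prevalence ratio uses the outcome prevalence in the unexposed (reference) group. Whether this formula coincides exactly with the Miettinen and Cook procedure the authors cite is our assumption, so treat it as a sketch:

```python
def or_to_pr(odds_ratio: float, p0: float) -> float:
    """Convert an odds ratio to a prevalence ratio.

    p0 is the outcome prevalence in the unexposed (reference) group.
    This is one standard conversion; the exact procedure of Miettinen
    and Cook cited in the text may differ.
    """
    return odds_ratio / (1 - p0 + p0 * odds_ratio)


# Example: OR = 2.0 with a 50% baseline prevalence maps to PR ~ 1.33,
# illustrating that the OR overstates the PR for common outcomes.
print(round(or_to_pr(2.0, 0.5), 2))  # -> 1.33
```

The divergence between OR and PR grows with the baseline prevalence, which matters here because the outcome (negative self-perceived health) is very common.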
Results
Of a total of 350 residents, 11 (3.1%) individuals refused to answer the questionnaire, 1 (0.3%) was under the age of 60, 189 (54.0%) were excluded due to incapacity to answer the questionnaire, 4 (1.1%) were hospitalized and 1 (0.3%) was in terminal state.
The final sample was constituted of 144 elderly, mostly of the female sex (79.2%), with average age 79.4 (SD: 8.2). The majority of residents belonged to not-for-profit institutions (64.6%), were retired (91.7%) and did not have a private health plan (58.3%). It was verified that 80 (55.6%) individuals had children, with an average of 1.9 children (SD: 2.4). Average residence time was 57.3 months (SD: 62.8) and there were 7.1 elderly per caregiver (SD: 4.3) at the institutions. The majority depended on medication (97.2%) and the average number of medicines per elderly was 5.5 (SD: 3.1).
Table 1 shows that 36.8% of the individuals presented mobility restrictions, and 53.5% presented functional disability in one or more BADL. The frequency of cognitive disability, according to Pfeiffer's scale, was 79.9%.
Finally, Table 3 shows the final model, with a Hosmer-Lemeshow test value = 0.920. After multivariate analysis, it was verified that negative self-perceived health in institutionalized elderly was associated with involuntary weight loss during the previous year (p = 0.001), rheumatic disease (p = 0.023) and not-for-profit LTIE (p = 0.033), controlled by sex (p = 0.216).
Discussion
The descriptive analysis of the work presented herein showed that approximately 63% of individuals considered their health as "bad". As expected, this proportion was higher than in non-institutionalized Brazilian elderly, whose "bad" self-perceived health rates vary between 11 and 40% 3,9,16,17 . The rates obtained herein were also superior to those identified in other studies with institutionalized elderly 7,18,19 . The only Brazilian study on the subject verified that approximately 51% of the LTIE residents in the city of Pelotas (South Brazil) evaluated their health as "bad" 7 . International studies have reported lower rates: research carried out in China reported a 53% prevalence 18 , while a representative sample of LTIE residents in Madrid (Spain) presented 45% 19 .
Regarding the methodological criteria applied to assess health perception, several studies have dichotomized this variable, considering the categories "regular/reasonable" and "bad" 2,4,7-9,17 or "regular/bad/very bad" 18 , as applied herein. Other authors have included the "regular" category within "good" and "very good/excellent" 3 . Herein it was decided to include "regular" within the negative consideration of health, the most common option found in the literature, especially because data distribution facilitated inferential analysis.
Concerning the causes for the elevated prevalence of negative self-perceived health identified herein, the two components of health self-assessment should be highlighted. On one hand, this is a partial consequence of more subjective aspects, representations of the social and emotional dimensions of health and well-being, which in turn could have been influenced by the elevated frequency of depressive symptoms in the sample 18,19 . On the other hand, negative health evaluation could have a direct relationship with objective indicators, which has been consolidated in scientific literature 18 . Among these indicators, the sociodemographic health determinants must be taken into consideration. In this direction, two studies have identified higher proportions of negative health perception in the Northeast population, which is attributed to worse health assistance and health conditions 9,20 . Also, the institutionalized elderly in the city of Natal are characterized, generally, by a high degree of disability and weak health, although the individuals presenting higher cognitive decline were excluded from the study presented herein 21 .
Regarding the factors associated with bad health perception, herein sociodemographic factors, such as age and sex, were not associated with the outcome, as observed in a study carried out in Spain 19 . Multiple analysis showed that negative health evaluation was independently associated with specific health-related variables, such as weight loss in the last year and presence of rheumatic disease, as well as with a variable related to the institution (not-for-profit LTIE).
Among these factors, involuntary weight loss was the most strongly associated with negative health self-assessment. It is a variable that can indicate a decline in health, and represents, classically, one of the specific indicators of the frailty phenotype 13 . In a study carried out with institutionalized elderly in the USA, a statistically significant relationship was established between frailty and lower life quality levels. Due to the strong relationship between self-perceived health and life quality, it has been suggested that actions against the frailty process could improve the life quality of this increasing population group 22 .
The elderly that suffered from chronic diseases presented a higher proportion of negative health self-reports. Alves and Rodrigues 2 carried out a cross-sectional study with more than 2,000 elderly in the municipality of Sao Paulo (Southeast Brazil), and established a strong association between health perception and the number of morbidities. Other research corroborated the same finding in institutionalized elderly, along with no association with age or sex 19 . Herein, however, no statistically significant association was found between negative self-perceived health and the presence or number of chronic diseases, and the only chronic pathology that remained in the final model was rheumatic disease.
The patients that present musculoskeletal pathologies frequently suffer pain, mobility restrictions and functional limitations, and these factors could lead to worse quality of life and bad self-perceived health 23 . The additional negative effect of rheumatic disease on the morbidity load has been verified, which in turn affects functionality and life quality 24 . Herein, bad self-perceived health was not associated with functional dependency for BADL, in opposition to a study carried out in Spain 19 .
Another associated factor identified herein was the type of institution: in not-for-profit LTIE the proportion of residents that considered their health as "bad" was higher than in for-profit LTIE, and this association was significant in the final model. Other authors have not established statistically significant differences in health perception when comparing not-for-profit and for-profit LTIE. In Spain, the profile of residents does not differ much depending on the type of LTIE; however, this is not the case for Brazil 19 (and it must be highlighted that no philanthropic LTIE were available herein).
In Brazil, most LTIE are philanthropic in nature. Public LTIE represent less than 7% of the total number; in fact, in Natal there are no public LTIE. Despite the hybrid socio-sanitary function of these collective residences, the frequently deteriorated health state of residents causes medical services to prevail over the offer of leisure and social activities, especially in not-for-profit LTIE 25 . Low stimulus to social integration, along with the lack of professionals and the reality of social abandonment (more characteristic of philanthropic institutions), could explain the higher proportion of elderly that perceived their health as "bad" in this type of LTIE.
At this point, some limitations must be recognized herein. Chronic diseases could have been under-registered, due to the lack of health professionals and subsequent diagnostics. However, the maximum amount of available information was gathered, consulting medical records and interviewing the personnel at institutions and the elderly. Type-II error could have occurred due to the size of the sample, which was affected by the elevated proportion of cases excluded due to cognitive disability to answer questionnaires.
The study presented herein contributes with the representativeness of the sample, obtained thanks to the participation of the majority of LTIE in the city of Natal and also to the low number of refusals from residents. After a systematic and exhaustive literature review, this is the first study that analyzed the factors associated with self-perceived health in Brazilian institutionalized elderly.
Conclusion
More than 60% of the institutionalized elderly in the city of Natal (Northeast Brazil) considered their health as "bad", which is a high prevalence when compared to other national and international studies. "Bad" self-perceived health was associated with weight loss, rheumatic disease and not-for-profit LTIE, indicating the importance of these variables related to health state and institutionalization characteristics. It is important to develop control actions towards chronic diseases, oriented to the improvement of health in this population group. Considering the knowledge gap that currently exists in scientific literature, more studies are necessary, focused on self-perceived health in institutionalized elderly.
Collaborations
All authors contributed to the conception and design or analysis and interpretation of data; writing of manuscript or critical review; and approval of final version to be published.
Table 1. Frequency distribution for sociodemographic and health-related variables of institutionalized elderly. Natal, RN, Brazil, 2015.
NA: not applicable.
Table 2. Bivariate analysis between negative self-perceived health and independent variables in institutionalized elderly in the city of Natal/RN. Natal, RN, Brazil, 2015. Note: contains the variables "age" and those with p values under 0.225 that were not included in the final model. * Chi-squared test; † Fisher's Exact Test; ‡ Linear trend chi-squared test. Source: elaborated by the authors.
Table 3. Bivariate analysis by chi-squared test and multivariate analysis of the variables included in the final model for negative self-perceived health in institutionalized elderly. Natal, RN, Brazil, 2015.
Tractability frontiers in probabilistic team semantics and existential second-order logic over the reals
Probabilistic team semantics is a framework for logical analysis of probabilistic dependencies. Our focus is on the axiomatizability, complexity, and expressivity of probabilistic inclusion logic and its extensions. We identify a natural fragment of existential second-order logic with additive real arithmetic that captures exactly the expressivity of probabilistic inclusion logic. We furthermore relate these formalisms to linear programming, and doing so obtain PTIME data complexity for the logics. Moreover, on finite structures, we show that the full existential second-order logic with additive real arithmetic can only express NP properties. Lastly, we present a sound and complete axiomatization for probabilistic inclusion logic at the atomic level.
Introduction
Metafinite model theory, introduced by Grädel and Gurevich [20], generalizes the approach of finite model theory by shifting to two-sorted structures that extend finite structures with another (often infinite) domain with some arithmetic (such as the reals with multiplication and addition), and weight functions bridging the two sorts. A simple example of a metafinite structure is a graph involving numerical labels; e.g., a railway network where an edge between two adjacent stations is labeled by the distance between them. Metafinite structures are, in general, suited for modeling problems that make reference to some numerical domain, be it reals, rationals, or complex numbers.
A particularly important subclass of metafinite structures are the R-structures, which extend finite structures with the real arithmetic on the second sort. The computational properties of R-structures can be studied with Blum-Shub-Smale machines [6] (BSS machines for short) which are essentially register machines with registers that can store arbitrary real numbers and which can compute rational functions over reals in a single time step.
A particularly important related problem is the existential theory of the reals (ETR), which contains all Boolean combinations of equalities and inequalities of polynomials that have real solutions. Instances of ETR are closely related to the question whether a given finite structure can be extended to an R-structure satisfying certain constraints. Moreover, as we will elaborate more shortly, ETR is also closely related to polynomial time BSS-computations.
Descriptive complexity theory for BSS machines and logics on metafinite structures was initiated by Grädel and Meer who showed that NP R (i.e., non-deterministic polynomial time on BSS machines) is captured by a variant of existential second-order logic (ESO R ) over R-structures [22]. Since the work by Grädel and Meer, others (see, e.g., [11,26,28,38]) have shed more light upon the descriptive complexity over the reals mirroring the development of classical descriptive complexity.
Complexity over the reals can be related to classical complexity by restricting attention to Boolean inputs. The so-called Boolean part of NP R , written BP(NP R ), consists of all those Boolean languages that can be recognized by a BSS machine in non-deterministic polynomial time. In contrast to NP, which is concerned with discrete problems that have discrete solutions, this class captures discrete problems with numerical solutions. A well-studied visibility problem in computational geometry related to deciding the existence of numerical solutions is the so-called art gallery problem. Here one asks whether a given polygon can be guarded by a given number of guards whose positions can be determined with arbitrary precision. Another typical problem is the recognition of unit distance graphs, that is, to determine whether a given graph can be embedded in the Euclidean plane in such a way that two points are adjacent whenever the distance between them is one. These problems [1,40], and an increasing number of others, have been recognized as complete for the complexity class ∃R, defined as the closure of ETR with polynomial-time reductions [39]. The exact complexity of ∃R is a major open question; currently it is only known that NP ≤ ∃R ≤ PSPACE [8]. Interestingly, ∃R can also be characterized as the Boolean part of NP 0 R , written BP(NP 0 R ), where NP 0 R is nondeterministic polynomial time over BSS machines that allow only machine constants 0 and 1 [7,41]. It follows that ∃R captures exactly those properties of finite structures that are definable in ESO R (with constants 0 and 1). That ∃R can be formulated in purely descriptive terms has, to the best of our knowledge, never been made explicit in the literature. Indeed, one of the aims of this paper is to promote a descriptive approach to ∃R. In particular, our results show that certain additive fragments of ESO R , which correspond to subclasses of ∃R, collapse to NP and P.
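To make the connection to ETR concrete, consider one standard variant of the unit distance graph problem, in which every pair of adjacent vertices must be placed at exactly unit distance. A graph G = (V, E) with V = {1, ..., n} admits such an embedding if and only if the following existential sentence over the reals is true (this encoding is a textbook-style illustration, not taken verbatim from the cited works):

```latex
\exists x_1 \, y_1 \cdots \exists x_n \, y_n \;
  \bigwedge_{\{i,j\} \in E} \bigl( (x_i - x_j)^2 + (y_i - y_j)^2 = 1 \bigr)
```

Deciding the truth of such sentences is exactly an ETR instance, which is why recognition problems of this kind sit naturally inside ∃R.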
In addition to metafinite structures, the connection between logical definability encompassing numerical structures and computational complexity has received attention in constraint databases [4,21,37]. A constraint database models (e.g., geometric data) by combining a numerical context structure (such as the real arithmetic) with a finite set of quantifier-free formulae defining infinite database relations [32].
Renewed interest in logics on frameworks analogous to metafinite structures, and in the related descriptive complexity theory, is motivated by the need to model inferences utilizing numerical data values in the fields of machine learning and artificial intelligence. See e.g. [24,44] for declarative frameworks for machine learning utilizing logic, [10,42] for very recent works on logical query languages with arithmetic, and [31] for applications of descriptive complexity in machine learning.
In this paper, we focus on the descriptive complexity of logics with so-called probabilistic team semantics as well as additive ESO R . Team semantics is the semantical framework of modern logics of dependence and independence. Introduced by Hodges [29] and adapted to dependence logic by Väänänen [43], team semantics defines truth in reference to collections of assignments, called teams. Team semantics is particularly suitable for a formal analysis of properties, such as the functional dependence between variables, which only arise in the presence of multiple assignments. In the past decade numerous research articles have, via re-adaptations of team semantics, shed more light on the interplay between logic and dependence. A common feature, and limitation, in all these endeavors has been their preoccupation with notions of dependence that are qualitative in nature. That is, notions of dependence and independence that make use of quantities, such as conditional independence in statistics, have usually fallen outside the scope of these studies.
The shift to quantitative dependencies in team semantics setting is relatively recent. While the ideas of probabilistic teams trace back to the works of Galliani [16] and Hyttinen et al. [30], a systematic study on the topic can be traced to [14,15]. In probabilistic team semantics the basic semantic units are probability distributions (i.e., probabilistic teams). This shift from set based semantics to distribution based semantics enables probabilistic notions of dependence to be embedded to the framework. In [15] probabilistic team semantics was studied in relation to the dependence concept that is most central in statistics: conditional independence. Mirroring [17,22,36] the expressiveness of probabilistic independence logic (FO(⊥ ⊥ c )), obtained by extending first-order logic with conditional independence, was in [15,26] characterised in terms of arithmetic variants of existential second-order logic. In [26] the data complexity of FO(⊥ ⊥ c ) was also identified in the context of BSS machines and the existential theory of the reals. In [25] the focus was shifted to the expressivity hierarchies between probabilistic logics defined in terms of different quantitative dependencies. Recently, the relationship between the settings of probabilistic and relational team semantics has raised interest in the context of quantum information theory [2,3].
Another vantage point to quantitative dependence comes from the notion of multiteam semantics, defined in terms of multisets of variable assignments called multiteams. A multiteam can be viewed as a database relation that not only allows duplicate rows (cf. SQL data tables), but also keeps track of the number of times each row is repeated. Multiteam semantics and probabilistic team semantics are close parallels, and they often exhibit similar behavior with respect to their key logics (cf. [14,23,45]). There are also differences, chiefly because the two frameworks are designed to model different situations. For instance, a probability of a random variable can be halved, but it makes no sense to consider a data row that is repeated two and a half times in a data table. For this reason, the so-called split disjunction is allowed to cut an assignment weight into two halves in one framework but not (always) in the other.
Of all the dependence concepts thus far investigated in team semantics, that of inclusion has arguably turned out to be the most intriguing and fruitful. One reason is that inclusion logic, which arises from this concept, can only define properties of teams that are decidable in polynomial time [18]. In contrast, other natural team-based logics, such as dependence and independence logic, capture non-deterministic polynomial time [17,36,43], and many variants, such as team logic, have an even higher complexity [35]. Thus it should come as no surprise if quantitative variants of many team-based logics turn out to be more complex; in principle, adding arithmetical operations and/or counting cannot be a mitigating factor when it comes to complexity.
In this paper, we study probabilistic inclusion logic, which is the extension of first-order logic with so-called marginal identity atoms x ≈ y which state that x and y are identically distributed. Our particular focus is on the complexity and expressivity of sentences. It is important, at this point, to note the distinction between formulae and sentences in team-based logics: Formulae describe properties of teams (i.e., relations), while sentences describe properties of structures. This distinction is even more pointed in probabilistic team semantics, where formulae describe properties of probabilistic teams (i.e., real-valued probability distributions). On the other hand, sentences of logics with probabilistic team semantics can express variants of important problems that are conjectured not to be expressible in the relational analogues of the logics. Decision problems related to ETR (i.e., the likes of the art gallery problem) are, in particular, problems of this kind. Another motivation to focus on sentences is our desire to compare relational and quantitative team logics. As discussed above, the move from relational to quantitative dependence should not in principle make the associated logics weaker. There is, however, no direct mechanism to examine this hypothesis at the formula level, because the team properties of relational and quantitative team logics are essentially incommensurable. Fortunately, this becomes possible at the sentence level. The reason is that sentences describe only properties of (finite) structures in both logical approaches.
The main takeaway of this paper is that there is no drastic difference between a relational team logic and its quantitative variant, as long as the latter makes only reference to additive arithmetic. While inclusion logic translates to fixed point logic, its quantitative variant, probabilistic inclusion logic, seems to require linear programming. Yet, the complexity upper bounds (NP/P) of first-order logic extended with dependence and/or inclusion atoms are preserved upon moving to quantitative variants. In contrast, earlier results indicate that this is not necessarily the case with respect to dependencies whose quantitative expression involves multiplication (such as conditional independence [26]).
Our contribution. We use strong results from linear programming to obtain the following complexity results over finite structures. We identify a natural fragment of additive ESO R (that is, almost conjunctive (∃ * ∀ * ) R [≤, +, SUM, 0, 1]) which captures P on ordered structures (see page 4 for a definition). The full additive ESO R is in turn shown to capture NP. Additionally, we establish that the so-called loose fragments, almost conjunctive L-(∃ * ∀ * ) [0,1] [=, SUM, 0, 1] and L-ESO [0,1] [=, +, 0, 1], of the aforementioned logics have the same expressivity as probabilistic inclusion logic and its extension with dependence atoms, respectively. The characterizations of P and NP hold also for these fragments. Over open formulae, probabilistic inclusion logic extended with dependence atoms is shown to be strictly weaker than probabilistic independence logic. Moreover, we expand on a recent analogous result by Grädel and Wilke on multiteam semantics [23] and show that probabilistic independence cannot be expressed in any logic that has access only to atoms that are relational or closed under so-called scaled unions. In contrast, independence logic and inclusion logic with dependence atoms are equally expressive in team semantics [17]. We also show that inclusion logic can be conservatively embedded into its probabilistic variant, when restricted to probabilistic teams that are uniformly distributed. From this we obtain an alternative proof through linear systems (that is entirely different from the original proof of Galliani and Hella [18]) for the fact that inclusion logic can express only polynomial time properties. Finally, we present a sound and complete axiomatization for marginal identity atoms. This is achieved by appending the axiom system of inclusion dependencies with a symmetricity rule.
This paper is an extended version of [27]. Here we include all the proofs that were previously omitted. In addition, the results in Sections 6 and 7 are new.
Existential second-order logics on R-structures
In addition to finite relational structures, we consider their numerical extensions by adding real numbers (R) as a second domain sort and functions that map tuples over the finite domain to R. Throughout the paper structures are assumed to have at least two elements. In the sequel, τ and σ will always denote a finite relational and a finite functional vocabulary, respectively. The arities of function variables f and relation variables R are denoted by ar( f ) and ar(R), resp. If f is a function with domain Dom( f ) and A a set, we define f ↾ A to be the function with domain Dom( f ) ∩ A that agrees with f for each element in its domain. Given a finite set S , a function f : S → [0, 1] that maps elements of S to elements of the closed interval [0, 1] of real numbers such that Σ s∈S f (s) = 1 is called a (probability) distribution, and the support of f is defined as supp( f ) := {s ∈ S | f (s) > 0}. A tuple A = (A, (R A ) R∈τ , (g A ) g∈σ ), where the reduct of A to τ is a finite relational structure, and each g A is a function from A ar(g) to R, is called an R-structure of vocabulary τ ∪ σ. Additionally, A is also called (i) an S -structure, for S ⊆ R, if each g A is a function from A ar(g) to S , and (ii) a d[0, 1]-structure if each g A is a distribution. We call A a finite structure, if σ = ∅.
Our focus is on a variant of functional existential second-order logic with numerical terms (ESO R ) that is designed to describe properties of R-structures. As first-order terms we have only first-order variables. For a set σ of function symbols, the set of numerical σ-terms i is generated by the following grammar:

i ::= c | g( y) | i + i | i × i | SUM y i,

where g ∈ σ and y can be any tuple of variables and include variables that do not occur in i. The interpretations of +, ×, SUM are the standard addition, multiplication, and summation of real numbers, respectively, and c ∈ R is a real constant denoting itself. In particular, the interpretation [SUM y i] A s of the term SUM y i is defined as follows:

[SUM y i] A s := Σ a∈A | y| [i] A s[ a/ y] ,

where [i] A s[ a/ y] is the interpretation of the term i under the assignment s[ a/ y]. We write i( y) to mean that the free variables of the term i are exactly the variables in y. The free variables of a term are defined as usual. In particular, the variables in x are not free in SUM x i( y).
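The semantics of SUM above can be illustrated concretely. The sketch below is ours, not from the paper: the tuple encoding of terms and the names `evaluate`, `domain`, and `funcs` are invented for illustration. The SUM case sums the subterm over all extensions s[ a/ y] of the current assignment.

```python
from itertools import product

def evaluate(term, structure, s):
    """Evaluate a numerical term under the assignment s (a dict var -> element)."""
    kind = term[0]
    if kind == "const":                     # c in R, denoting itself
        return term[1]
    if kind == "func":                      # g(x1, ..., xk)
        _, g, xs = term
        return structure["funcs"][g][tuple(s[x] for x in xs)]
    if kind == "add":                       # i + j
        return evaluate(term[1], structure, s) + evaluate(term[2], structure, s)
    if kind == "mul":                       # i × j
        return evaluate(term[1], structure, s) * evaluate(term[2], structure, s)
    if kind == "sum":                       # SUM_y i: sum over all a in A^|y|
        _, ys, sub = term
        A = structure["domain"]
        return sum(
            evaluate(sub, structure, {**s, **dict(zip(ys, a))})  # s[a/y]
            for a in product(A, repeat=len(ys)))
    raise ValueError(kind)

# g is a distribution on A = {0, 1}, so SUM_x g(x) = 1.
A = {"domain": [0, 1], "funcs": {"g": {(0,): 0.25, (1,): 0.75}}}
assert evaluate(("sum", ("x",), ("func", "g", ("x",))), A, {}) == 1.0
# A variable z not occurring in the subterm multiplies the sum by |A|.
assert evaluate(("sum", ("x", "z"), ("func", "g", ("x",))), A, {}) == 2.0
```

The second assertion previews the effect of summing over a variable that does not occur in the subterm, which reappears later under the name dummy-sum instance.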
where i and j are numerical σ-terms constructed using operations from O and constants from C; e ∈ E; R ∈ τ is a relation symbol; f is a function variable; x, y, and x are (tuples of) first-order variables; and ψ is a formula in prefix form whose quantifier prefix is in the language defined by L, where ∃̈ denotes existential function quantification, and ∃ and ∀ first-order quantification.
Expressivity comparisons. Let L and L ′ be some logics defined above, and let X ⊆ R. For φ ∈ L, define Struc X (φ) to be the class of pairs (A, s) where A is an X-structure and s an assignment such that A | = s φ. Define Struc fin (φ) (Struc ord (φ), resp.) analogously in terms of finite (finite ordered, resp.) structures. Additionally, Struc d[0,1] (φ) is the class of (A, s) ∈ Struc [0,1] (φ) such that each f A is a distribution. If X is a set of reals or from {"d[0, 1]","fin", "ord"}, we write L ≤ X L ′ if for all formulae φ ∈ L there is a formula ψ ∈ L ′ such that Struc X (φ) = Struc X (ψ). For formulae without free first-order variables, we omit s from the pairs (A, s) above. As usual, the shorthand ≡ X stands for ≤ X in both directions. For X = R, we write simply ≤ and ≡.
Data complexity of additive ESO R
On finite structures ESO R [≤, +, ×, 0, 1] is known to capture the complexity class ∃R [7,22,41], which lies somewhere between NP and PSPACE. Here we focus on the additive fragment of the logic. It turns out that the data complexity of the additive fragment is NP and thus no harder than that of ESO. Furthermore, we obtain a tractable fragment of the logic, which captures P on finite ordered structures.
A tractable fragment
Next we show P data complexity for almost conjunctive (∃ * ∃ * ∀ * ) R [≤, +, SUM, 0, 1].

Proof. Fix φ. We assume, w.l.o.g., that variables quantified in φ are quantified exactly once, the sets of free and bound variables of φ are disjoint, and that the domain of s is the set of free variables of φ. Moreover, we assume that φ is of the form ∃ y∃ f ∀ xθ, where f is a tuple of function variables and θ is quantifier-free. We use X and Y to denote the sets of variables in x and y, respectively, and g to denote the free function variables of φ.
We describe a polynomial-time process of constructing a family of systems of linear inequations S A,s from a given τ ∪ σ-structure A and an assignment s. We introduce
• a fresh variable z a, f , for each k-ary function symbol f in f and k-tuple a ∈ A k .
In the sequel, the variables z a, f will range over real numbers.
Let A be a τ ∪ σ-structure and s an assignment for the free variables in φ. In the sequel, each interpretation for the variables in y yields a system of linear inequations. Given an interpretation v : Y → A, we will denote by S v the related system of linear inequations to be defined below. We then set S A,s := {S v | v : Y → A}. The system S v is defined as S v := ⋃ u : X→A S u v , where S u v is defined as follows. Let s u v denote the extension of s that agrees with u and v. We let θ u v denote the formula obtained from θ by the following simultaneous substitution: If (ψ 1 ∨ ψ 2 ) is a subformula of θ such that no function variable occurs in ψ i , then (ψ 1 ∨ ψ 2 ) is substituted with ⊤, if A | = s u v ψ i , and with ψ 3−i otherwise. The set S u v is now generated from θ u v together with u and v. Note that θ u v is a conjunction of first-order or numerical atoms θ i , i ∈ I, for some index set I. For each conjunct θ i in which some f ∈ f occurs, add to S u v the linear inequation obtained from θ i by replacing each term of the form f ( a), f ∈ f , with the variable z a, f and evaluating the remaining terms under s u v . Let θ * be the conjunction of those conjuncts of θ u v in which no f ∈ f occurs. If A ̸| = s u v θ * , remove S v from S A,s .
Since φ is fixed, it is clear that S A,s can be constructed in polynomial time with respect to |A|. Moreover, it is straightforward to show that there exists a solution for some S ∈ S A,s exactly when A | = s φ.
Assume first that there exists an S ∈ S A,s that has a solution. Let w : Z → R, where Z := {z a, f | f ∈ f and a ∈ A ar( f ) }, be the function given by a solution for S . By construction, (2), and the related substitutions, we obtain that A | = s φ. For the converse, assume that A | = s φ. Hence there exists an extension s v of s and an expansion of A with interpretations for the functions in f under which ∀ xθ is satisfied; these interpretations yield a solution for the corresponding system S v ∈ S A,s .

The above proposition could be strengthened by relaxing the almost conjunctive requirement in any way such that (2) can be still decided (i.e., it suffices that the satisfaction of the ψ i s does not depend on the interpretations of the functions in f ).

Proof. Fix an almost conjunctive ESO R [≤, +, SUM, 0, 1]-formula φ of relational vocabulary τ of the required form. Given a τ ∪ ∅-structure A and an assignment s for the free variables of φ, let S be the related polynomial size family of polynomial size systems of linear inequations with integer coefficients given by Proposition 3. Deciding whether a system of linear inequalities with integer coefficients has solutions can be done in polynomial time [33]. Thus checking whether there exists a system of linear inequalities S ∈ S that has a solution can be done in P as well, from which the claim follows.
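The feasibility test invoked above is done with polynomial-time linear programming [33]. As a self-contained illustration of deciding feasibility of a system of linear inequalities, and explicitly not the polynomial-time method the proof relies on, the following sketch (ours; the encoding and the name `feasible` are invented) uses Fourier-Motzkin elimination with exact rational arithmetic.

```python
from fractions import Fraction

def feasible(constraints, nvars):
    """Decide whether a system of non-strict linear inequalities
    sum_j a[j]*y[j] <= b has a real solution, via Fourier-Motzkin
    elimination.  Each constraint is a pair (a, b) with a a
    coefficient list of length nvars."""
    cons = [([Fraction(x) for x in a], Fraction(b)) for a, b in constraints]
    for j in range(nvars):
        lower, upper, rest = [], [], []
        for a, b in cons:
            if a[j] > 0:        # normalise to  y_j + ... <= b'
                upper.append(([x / a[j] for x in a], b / a[j]))
            elif a[j] < 0:      # normalise to -y_j + ... <= b'
                lower.append(([x / -a[j] for x in a], b / -a[j]))
            else:
                rest.append((a, b))
        # Pair every lower bound on y_j with every upper bound on y_j;
        # the coefficient of y_j cancels to 0 in each combined constraint.
        for al, bl in lower:
            for au, bu in upper:
                rest.append(([al[k] + au[k] for k in range(nvars)], bl + bu))
        cons = rest
    # All variables eliminated: only constraints of the form 0 <= b remain.
    return all(b >= 0 for _, b in cons)

# y1, y2 >= 0 with y1 + y2 <= 1 is feasible; y >= 2 with y <= 1 is not.
assert feasible([([-1, 0], 0), ([0, -1], 0), ([1, 1], 1)], 2)
assert not feasible([([-1], -2), ([1], 1)], 1)
```

Fourier-Motzkin elimination is exponential in the worst case; the complexity claims in the text depend on polynomial-time linear programming instead, so this sketch only demonstrates what "has a solution" means for such a system.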
We will later show that probabilistic inclusion logic captures P on finite ordered structures (Corollary 24) and can be translated to almost conjunctive L-(∃ * ∀ * ) [0,1] [=, SUM, 0, 1].
Full additive ESO R
The goal of this subsection is to prove the following theorem:

Theorem 6. On finite structures, ESO R [≤, +, 0, 1] captures NP.

First observe that SUM is definable in ESO R [≤, +, 0, 1]: Already ESO R [=] subsumes ESO, and thus we may assume a built-in successor function S and its associated minimal and maximal elements min and max on k-tuples over the finite part of the R-structure. Then, for a k-ary tuple of variables x, SUM x i agrees with f (max), for any existentially quantified function f satisfying f (min) = i(min) and f (S( x)) = f ( x) + i(S( x)). As ESO R [≤, +, 0, 1] subsumes ESO, by Fagin's theorem, it can express all NP properties. Thus we only need to prove that any ESO R [≤, +, 0, 1]-definable property of finite structures is recognizable in NP. The proof relies on (descriptive) complexity theory over the reals. The fundamental result in this area is that existential second-order logic over the reals (ESO R [≤, +, ×, (r) r∈R ]) corresponds to non-deterministic polynomial time over the reals (NP R ) for BSS machines [22,Theorem 4.2]. To continue from this, some additional terminology is needed. We refer the reader to Appendix A (or to the textbook [5]) for more details about BSS machines. Let C R be a complexity class over the reals.
• C add is C R restricted to additive BSS machines (i.e., without multiplication).
• C 0 R is C R restricted to BSS machines with machine constants 0 and 1 only.
• BP(C R ) is C R restricted to languages of strings that contain only 0 and 1.

A straightforward adaptation of [22,Theorem 4.2] yields the following theorem: ESO R [≤, +, 0, 1] captures NP 0 add on R-structures. If we can establish that BP(NP 0 add ), the so-called Boolean part of NP 0 add , collapses to NP, we have completed the proof of Theorem 6. Observe that another variant of this theorem readily holds; ESO R [=, +, (r) r∈R ]-definable properties of R-structures are recognizable in NP add branching on equality, which in turn, over Boolean inputs, collapses to NP [34, Theorem 3]. Here, restricting branching to equality is crucial. With no restrictions in place (the BSS machine by default branches on inequality and can use arbitrary reals as machine constants) NP add equals NP/poly over Boolean inputs [34,Theorem 11]. Adapting arguments from [34], we show next that disallowing machine constants other than 0 and 1, but allowing branching on inequality, is a mixture that leads to a collapse to NP.
Proof. Clearly NP ≤ BP(NP 0 add ); a Boolean guess for an input x can be constructed by comparing to zero each component of a real guess y, and a polynomial-time Turing computation can be simulated by a polynomial-time BSS computation.
For the converse, let L ⊆ {0, 1} * be a Boolean language that belongs to BP(NP 0 add ); we need to show that L belongs also to NP. Let M be a BSS machine such that its running time is bounded by some polynomial p, and for all Boolean strings x, x ∈ L if and only if M accepts ( x, y) for some real guess y. We describe a non-deterministic algorithm that decides L and runs in polynomial time. Given a Boolean input x of length n, first guess the outcome of each comparison in the BSS computation; this guess is a Boolean string z of length p(n). Note that each configuration of a polynomial time BSS computation can be encoded by a real string of polynomial length. During the BSS computation the value of each coordinate of its configuration is a linear function on the constants 0 and 1, the input x, and the real guess y of length p(n). Thus it is possible to construct in polynomial time a system S of linear inequations on y of the form

Σ j a i j y j + a i0 ≥ 0 (1 ≤ i ≤ m),  Σ j a i j y j + a i0 > 0 (m < i ≤ m + l),   (3)

where a i j ∈ Z, such that y is a (real-valued) solution to S if and only if M accepts ( x, y) with respect to the outcomes z. In (3), the variables y j stand for elements of the real guess y, and m + l is the total number of comparisons. Each comparison generates either a strict or a non-strict inequality, depending on the outcome encoded by z.
Without loss of generality we may assume additional constraints of the form y j ≥ 0 (1 ≤ j ≤ p(n)) (cf. [12, p. 86]). Transform then S to another system of inequalities S ′ obtained from S by replacing the strict inequalities in (3) by non-strict ones of the form Σ j a i j y j + a i0 ≥ ǫ, where ǫ is a fresh variable. Then determine the solution of the linear program: maximize ( 0, 1)( y, ǫ) T subject to S ′ and ( y, ǫ) ≥ 0. If there is no solution or the solution is zero, then reject; otherwise accept. Since S ′ is of polynomial size and linear programming is in polynomial time [33], the algorithm runs in polynomial time. Clearly, the algorithm accepts x for some guess z if and only if x ∈ L.
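The transformation from S to S ′ is purely syntactic and can be sketched as follows. This is our own illustration, not the paper's code: the function name `add_epsilon` and the list-based constraint encoding are invented. It keeps non-strict inequalities as they are, rewrites each strict one with a fresh slack variable ǫ, and emits the objective vector for maximizing ǫ.

```python
def add_epsilon(nonstrict, strict, nvars):
    """Build LP data from a mixed system over y[0..nvars-1]:
    keep each non-strict inequality  sum_j a[j]*y[j] >= b  as is,
    replace each strict inequality   sum_j a[j]*y[j] >  b
    by                               sum_j a[j]*y[j] - eps >= b,
    and maximise eps.  Constraints are pairs (a, b); eps becomes
    variable number nvars.  The strict system is solvable exactly
    when the resulting LP admits a feasible point with eps > 0."""
    rows, rhs = [], []
    for a, b in nonstrict:
        rows.append(list(a) + [0])      # eps does not occur
        rhs.append(b)
    for a, b in strict:
        rows.append(list(a) + [-1])     # "... - eps >= b"
        rhs.append(b)
    objective = [0] * nvars + [1]       # maximise (0, 1)(y, eps)^T
    return objective, rows, rhs

# One non-strict and one strict constraint over (y1, y2).
obj, rows, rhs = add_epsilon([([1, 0], 0)], [([1, 1], 1)], 2)
assert obj == [0, 0, 1]
assert rows == [[1, 0, 0], [1, 1, -1]]
assert rhs == [0, 1]
```

The emitted data can then be handed to any polynomial-time LP solver, matching the "maximize, then compare the optimum to zero" step of the proof.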
Probabilistic team semantics
Let D be a finite set of first-order variables and A a finite set. A team X is a set of assignments from D to A. A probabilistic team is a distribution X : X → [0, 1], where X is a finite team. Also the empty function is considered a probabilistic team. We call D the variable domain of both X and X, written Dom(X) and Dom(X). A is called the value domain of X and X.
Let X : X → [0, 1] be a probabilistic team, x a variable, V ⊆ Dom(X) a set of variables, and A a set. The projection of X on V is defined as Pr V (X) : X↾V → [0, 1], where X↾V := {s↾V | s ∈ X} and

Pr V (X)(s) := Σ t∈X, t↾V=s X(t).

Let us also define some function arithmetic. Let α be a real number, and f and g be functions from a shared domain into real numbers. The scalar multiplication α f is the function defined by (α f )(x) := α · f (x), and the sum f + g is the function defined by ( f + g)(x) := f (x) + g(x). In particular, if f and g are probabilistic teams and α + β = 1, then α f + βg is a probabilistic team.
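These operations on probabilistic teams can be sketched concretely. The dict-based encoding below is our own illustration, not the paper's notation; the helper names `assignment`, `project`, `scale`, and `combine` are invented.

```python
def assignment(**kwargs):
    # Encode an assignment as a hashable, canonically ordered tuple of pairs.
    return tuple(sorted(kwargs.items()))

def project(X, V):
    """Pr_V(X): push the weight of each assignment onto its restriction to V."""
    Y = {}
    for s, w in X.items():
        t = tuple((x, v) for x, v in s if x in V)
        Y[t] = Y.get(t, 0.0) + w
    return Y

def scale(alpha, X):
    """Scalar multiplication: (alpha X)(s) = alpha * X(s)."""
    return {s: alpha * w for s, w in X.items()}

def combine(alpha, X, beta, Y):
    """alpha X + beta Y; a probabilistic team again whenever alpha + beta = 1."""
    Z = scale(alpha, X)
    for s, w in scale(beta, Y).items():
        Z[s] = Z.get(s, 0.0) + w
    return Z

X = {assignment(x=0, y=0): 0.5, assignment(x=0, y=1): 0.25,
     assignment(x=1, y=1): 0.25}
print(project(X, {"x"}))  # {(('x', 0),): 0.75, (('x', 1),): 0.25}
```

Note that `combine` with α + β = 1 is exactly the convex combination used by the split disjunction in the semantics below, while adding two weighted teams without scaling corresponds to the weighted variant.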
We define first probabilistic team semantics for first-order formulae. As is customary in the team semantics context, we restrict attention to formulae in negation normal form. If φ is a first-order formula, we write φ ⊥ for the equivalent formula obtained from ¬φ by pushing the negation in front of atomic formulae. If furthermore ψ is some (not necessarily first-order) formula, we then use a shorthand φ → ψ for the formula φ ⊥ ∨ (φ ∧ ψ).
Definition 9 (Probabilistic team semantics). Let A be a τ-structure over a finite domain A, and X : X → [0, 1] a probabilistic team. The satisfaction relation | = X for first-order logic is defined as follows:

A | = X l iff A | = s l for all s ∈ supp(X), when l is a literal,
A | = X ψ ∧ θ iff A | = X ψ and A | = X θ,
A | = X ψ ∨ θ iff A | = Y ψ and A | = Z θ for some probabilistic teams Y and Z and some α, β ∈ [0, 1] such that α + β = 1 and X = αY + βZ,
A | = X ∀xψ iff A | = X[A/x] ψ,
A | = X ∃xψ iff A | = X[F/x] ψ for some function F mapping each assignment of X to a distribution over A,

where X[A/x] distributes the weight of each assignment s ∈ X uniformly between its extensions s[a/x], a ∈ A, and X[F/x] splits the weight of each s between its extensions s[a/x] according to F(s). The satisfaction relation | = s denotes the Tarski semantics of first-order logic. If φ is a sentence (i.e., without free variables), then A satisfies φ, written A | = φ, if A | = X ∅ φ, where X ∅ is the distribution that maps the empty assignment to 1.
We make use of a generalization of probabilistic team semantics where the requirement of being a distribution is dropped. A weighted team is any non-negative weight function X : X → R ≥0 . Given a first-order formula α, we write X α for the restriction of the weighted team X to the assignments of X satisfying α (with respect to the underlying structure). Moreover, the total weight of a weighted team X is |X| := Σ s∈X X(s).
Definition 10 (Weighted semantics). Let A be a τ-structure over a finite domain A, and X : X → R ≥0 a weighted team. The satisfaction relation | = w X for first-order logic is defined exactly as in Definition 9, except that for ∨ we define instead:

A | = w X ψ ∨ θ iff A | = w Y ψ and A | = w Z θ for some weighted teams Y and Z such that X = Y + Z.

We consider logics with the following atomic dependencies:

Definition 11 (Dependencies). Let A be a finite structure with universe A, X a weighted team, and X a team.
• Marginal identity and inclusion atoms. If x, y are variable sequences of length k, then x ≈ y is a marginal identity atom and x ⊆ y is an inclusion atom with satisfactions defined as:

A | = w X x ≈ y iff |X x= a | = |X y= a | for all a ∈ A k ,
A | = X x ⊆ y iff for all s ∈ X there exists t ∈ X such that s( x) = t( y).

• Probabilistic independence atom. If x, y, z are variable sequences, then y ⊥ ⊥ x z is a probabilistic (conditional) independence atom with satisfaction defined as:

A | = w X y ⊥ ⊥ x z iff |X x y z= a b c | · |X x= a | = |X x y= a b | · |X x z= a c | for all tuples a, b, c of values.

We also write x ⊥ ⊥ y for the probabilistic marginal independence atom, defined as x ⊥ ⊥ ∅ y.
• Dependence atom. For a sequence of variables x and a variable y, =( x, y) is a dependence atom with satisfaction defined as:

A | = w X =( x, y) iff for all s, t ∈ supp(X), s( x) = t( x) implies s(y) = t(y).

For probabilistic teams X, the satisfaction relation is written without the superscript w.
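Two of these atoms can be sketched as concrete checks on a weighted team. The encoding and the helper names (`assignment`, `weight`, `marginal_identity`, `dependence`) below are our own illustration, not the paper's notation; they cover the marginal identity atom x ≈ y and the dependence atom =( x, y).

```python
from itertools import product

def assignment(**kwargs):
    # Encode an assignment as a hashable, canonically ordered tuple of pairs.
    return tuple(sorted(kwargs.items()))

def weight(X, cond):
    """|X_alpha|: total weight of the assignments satisfying the predicate."""
    return sum(w for s, w in X.items() if cond(dict(s)))

def marginal_identity(X, xs, ys, domain):
    """x ≈ y: the tuples x and y are identically distributed in X."""
    return all(
        weight(X, lambda s, a=a: tuple(s[x] for x in xs) == a)
        == weight(X, lambda s, a=a: tuple(s[y] for y in ys) == a)
        for a in product(domain, repeat=len(xs)))

def dependence(X, xs, y):
    """=(x, y): on the support of X, the value of x determines the value of y."""
    seen = {}
    for s, w in X.items():
        if w > 0:
            t = dict(s)
            key = tuple(t[x] for x in xs)
            if key in seen and seen[key] != t[y]:
                return False
            seen[key] = t[y]
    return True

# x and y are identically distributed, and y is a function of x.
X = {assignment(x=0, y=0): 0.5, assignment(x=1, y=1): 0.5}
assert marginal_identity(X, ("x",), ("y",), [0, 1])
assert dependence(X, ("x",), "y")

# Here x = 0 carries weight 0.75 but y = 0 only 0.25, so x ≈ y fails.
Y = {assignment(x=0, y=1): 0.75, assignment(x=1, y=0): 0.25}
assert not marginal_identity(Y, ("x",), ("y",), [0, 1])
```

The second example also shows that =( x, y) is insensitive to the weights themselves (it only inspects the support), while x ≈ y is genuinely quantitative.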
Observe that any dependency α over team semantics can also be interpreted in probabilistic team semantics: A | = X α iff supp(X) satisfies α in team semantics. For a list C of dependencies, we write FO(C) for the extension of first-order logic with the dependencies in C. The logics FO(≈) and FO(⊆), in particular, are called probabilistic inclusion logic and inclusion logic, respectively. Furthermore, probabilistic independence logic is denoted by FO(⊥ ⊥ c ), and its restriction to probabilistic marginal independence atoms by FO(⊥ ⊥). We write Fr(φ) for the set of free variables of φ ∈ FO(C), defined as usual. We conclude this section with a list of useful equivalences. We omit the proofs, which are straightforward structural inductions ((ii) was also proven in [25] and (v) follows from (i) and the flatness property of team semantics).
Proposition 12. Let φ ∈ FO(C), ψ ∈ FO(≈, C), and θ ∈ FO, where C is a list of dependencies over team semantics. Let A be a structure, X a weighted team, and r any positive real. The following equivalences hold:
Expressivity of probabilistic inclusion logic
We turn to the expressivity of probabilistic inclusion logic and its extension with dependence atoms. In particular, we relate these logics to existential second-order logic over the reals. We show that probabilistic inclusion logic extended with dependence atoms captures a fragment in which arithmetic is restricted to summing. Furthermore, we show that leaving out dependence atoms is tantamount to restricting to sentences in almost conjunctive form with an ∃ * ∀ * quantifier prefix.

Expressivity comparisons. Fix a list of atoms C over probabilistic team semantics. For a probabilistic team X with variable domain {x 1 , . . . , x n } and value domain A, the function f X : A n → [0, 1] is defined as the probability distribution such that f X (s( x)) = X(s) for all s ∈ X. For a formula φ ∈ FO(C) of vocabulary τ and with free variables {x 1 , . . . , x n }, the class Struc d[0,1] (φ) consists of those d[0, 1]-structures A of vocabulary τ ∪ { f }, for an n-ary function symbol f , for which A ↾ τ | = X φ, where f X = f A and A ↾ τ is the finite τ-structure underlying A. Let L and L ′ be two logics of which one is defined over (probabilistic) team semantics. We write L ≤ L ′ if for every formula φ ∈ L there is φ ′ ∈ L ′ such that Struc d[0,1] (φ) = Struc d[0,1] (φ ′ ).

Theorem 13. The following equivalences hold: (i) FO(≈, =(· · · )) ≡ L-ESO [0,1] [=, +, 0, 1], and (ii) FO(≈) ≡ almost conjunctive L-(∃ * ∀ * ) [0,1] [=, SUM, 0, 1].

We divide the proof of Theorem 13 into two parts. In Section 4.3 we consider the direction from probabilistic team semantics to existential second-order logic over the reals, and in Section 4.4 we shift attention to the converse direction. In order to simplify the presentation in the forthcoming subsections, we start by showing how to replace existential function quantification by distribution quantification. The following lemma in its original form includes multiplication (see [26,Lemma 6.4]) but works also without it.

Proof. Any formula θ involving 0 or 1 can be equivalently expressed using a nullary distribution variable n, which is necessarily interpreted as the constant 1. The total sum of the weights of any interpretation of a function occurring in φ on a given structure, whose finite domain is of size n, is at most n k .
We now show how to obtain from φ an equivalent formula in L-ESO d[0,1] [SUM, =, 0, 1]; the idea is to scale all function weights by 1/n k . Note first that the value 1/n k can be expressed via a k-ary distribution variable g that is required to take a constant value: if g( x) = g( y) for all x, y, then g( x) = 1/n k . Assume that φ is of the form ∃̈ f ∀ xθ, where θ is quantifier free, and let g 1 , . . . , g t be the list of (non-quantified) function symbols of φ. Define φ ′ := ∃̈ f ′ ∀ x x ′ (ψ ∧ θ ′ ), where each f ′ j (g ′ j , resp.) is an (ar( f j ) + 1)-ary ((ar(g j ) + k + 1)-ary, resp.) distribution variable and ψ and θ ′ are as defined below. The universally quantified variables x ′ list all of the newly introduced variables of the construction below. The formula ψ is used to express that each f ′ j (g ′ j , resp.) is a 1/n k -scaled copy of f j (g j , resp.). That is, ψ is defined as the formula expressing this scaling, where y l and z l (here and below) denote the last elements of the tuples y and z, respectively. 3 Finally θ ′ is obtained from θ by replacing expressions of the form f j ( y) and g j ( y) by f ′ j ( y, y l ) and g ′ j ( y, z, z l ), resp., and the real constant 1 by 1/n k . A straightforward inductive argument on the structure of formulae yields that, over [0, 1]-structures, φ and φ ′ are equivalent. Note that φ ′ is an almost conjunctive formula of the prefix class ∃ * ∀ * , if φ is.
From probabilistic team semantics to existential second-order logic
Let c and d be two distinct constants. Let φ( x) ∈ FO(≈, =(· · · )) be a formula whose free variables are from the sequence x = (x 1 , . . . , x n ). We now construct recursively an L-ESO [0,1] [=, SUM, 0, 1]-formula φ * ( f ) that contains one free n-ary function variable f . In this formula, a probabilistic team X is represented as a function f X such that X(s) = f X (s(x 1 ), . . . , s(x n )).
, where g i is of the same arity as f and defined as g i ( x) := g( x, i).
This translation leads to the following lemma.

Proof. By item (ii) of Proposition 12, we may use weighted semantics (Definition 10). Then, a straightforward induction shows that for all structures A and non-empty weighted teams X: A | = w X φ iff (A, f X ) | = φ * ( f ). Furthermore, the extra constants c and d can be discarded. Define ψ( f ) as in (5), where φ * * ( f ′ ) is obtained from φ * ( f ) by replacing function terms f (t 1 , . . . , t n ) with f ′ (t 1 , . . . , t n , c, d). There are only existential function and universal first-order quantifiers in (5). By pushing these quantifiers in front, and by swapping the ordering of existential and universal quantifiers (by increasing the arity of function variables and associated function terms), we obtain a sentence ψ * ( f ) ∈ L-(∃ * ∀ * ) d[0,1] [=, SUM] equivalent to ψ( f ). We conclude that φ * ( f ) interprets the dependence atom in the correct way and it preserves the almost conjunctive form and the required prefix form.

Recall from Proposition 3 that almost conjunctive (∃ * ∃ * ∀ * ) R [≤, +, SUM, 0, 1] is in PTIME in terms of data complexity. Since dependence logic captures NP [43], the previous lemma indicates that we have found, in some regard, a maximal tractable fragment of additive existential second-order logic. That is, dropping either the requirement of being almost conjunctive, or that of having the prefix form ∃ * ∃ * ∀ * , leads to a fragment that captures NP; that NP is also an upper bound for these fragments follows by Theorem 6.
From existential second-order logic to probabilistic team semantics
Due to Lemma 14 and Proposition 16, our aim is to translate L-ESO d[0,1] [=, SUM] and almost conjunctive L-ESO d[0,1] [=, SUM] to FO(≈, = (· · · )) and FO(≈), respectively. The following lemmas imply that we may restrict attention to formulae in Skolem normal form. 4 We first need to get rid of all numerical terms whose interpretation does not belong to the unit interval. The only source of such terms are summation terms of the form SUM x i( y), where x is a sequence of variables that contains a variable z not belonging to y; we call such instances of z dummy-sum instances. For example, the summation term SUM x n, where n is the nullary distribution and x a dummy-sum instance, is always interpreted as the cardinality of the model's domain.

Proof. Let k be the number of dummy-sum instances in φ. Without loss of generality, we may assume that each dummy-sum instance is manifested using a distinct variable in v = (v 1 , . . . , v k ), whose only instance in φ is the related dummy-sum instance. It is straightforward to check that for any structure A with cardinality n, the interpretation t A of any term t appearing in φ is at most n k .
We start the translation φ → φ + by scaling each function f occurring in φ by 1/n k as follows. Define f ( x) → f * ( x, v). For Boolean connectives, =, SUM, and first-order quantification the translation is homomorphic. In the case of existential function quantification, the functions are scaled by increasing their arity by k and stipulating that their weights are distributed evenly over the arity extension. Let f 1 , . . . , f t be the list of free function variables of φ with arities | x 1 |, . . . , | x t |, respectively, and define φ + accordingly. It is now straightforward to check that φ + and φ are equivalent, and that there are no dummy-sum instances in φ + .

Proof. By the previous lemma, we may assume without loss of generality that φ does not contain any dummy-sum instances. That is, any summation term occurring in φ is of the form SUM v i( u v), where it is to be noted that the variables of v occur free in the term i. This, in particular, implies that the terms of φ can be captured by using distributions. First we define for each second sort term i( x) a special formula θ i defined recursively using fresh function symbols f i as follows: if i is of the form g( u), where g is a function symbol, then θ i is defined as f i ( u) = g( u). (We may interpret g( u) as SUM ∅ g( u).)
The translation φ → φ * then proceeds recursively on the structure of φ. By Lemma 15 we may use the real constant 0 in the translation.
where f lists the function symbols f k for each subterm k of i or j.
(ii) If φ is an atom or negated atom of the first sort, then φ * := φ.
It is straightforward to check that φ * is of the correct form and equivalent to φ. What happens in (v) is that instead of guessing for all y some distribution f y with arity ar( f ), we guess a single distribution f * with arity ar( f ) + 1 such that f * (y, u) = (1/|A|) · f y ( u), where A is the underlying domain of the structure. Similarly, we guess a distribution g * for each free distribution variable g such that g * (y, u) = (1/|A|) · g( u). Observe that case (iv) does not occur if φ is in L-(∃ * ∀ * ) d[0,1] [SUM, =]; in such a case, a straightforward structural induction shows that φ * is almost conjunctive if φ is.
Using the obtained normal form for existential second-order logic over the reals we now proceed to the translation. This translation is similar to one found in [15], with the exception that probabilistic independence atoms cannot be used here.
Proof. By item (ii) of Proposition 12, we can use weighted semantics in this proof. Without loss of generality each structure is enriched with two distinct constants c and d; such constants are definable in FO(≈, =(· · · )) by ∃cd(=(c) ∧ =(d) ∧ c ≠ d), and for almost conjunctive formulae they are not needed. We establish the following claim, where f i := X ′ ↾ y i . Observe that the claim implies that A | = w X Φ iff A | = φ( f ). Next, we show the claim by structural induction on the construction of Θ: (1) If θ is a literal of the first sort, we let Θ := θ, and the claim readily holds.
(2) If θ is of the form f i ( x i ) = SUM x j0 f j ( x j0 x j1 ), let Θ := ∃αβψ for ψ given in (7), where x is any variable from x, and the first-order variable sequence y j that corresponds to function variable f j is thought of as a concatenation of two sequences y j0 and y j1 whose respective lengths are | x j0 | and | x j1 |. Assume first that for all a ∈ M, we have (A, f, f 1 , . . . , f n ) | = θ( a), i.e., f i ( a i ) = SUM x j0 f j ( x j0 a j1 ). To show that Y satisfies Θ, let Z be an extension of Y to variables α and β such that it satisfies the first two conjuncts of (7). Observe that Z satisfies xα ≈ xβ if for all a ∈ M, Z x= a satisfies α ≈ β. For a probabilistic team X and a first-order formula α, we write |X α | rel for the relative weight |X α |/|X|. Now, the following chain of equalities holds: Note that the absolute weights |Y| and |Z| are equal. The third equality then follows since x and y i are independent by the construction of Y. It is also here that we need relative instead of absolute weights. Thus α and β agree with x in Z x= a with the same weight. Moreover, x is some constant a in Z x= a , and whenever α or β disagrees with x, it can be mapped to another constant b that is distinct from a. It follows that Z x= a satisfies α ≈ β, and thus we conclude that Y satisfies Θ.
For the converse direction, assume that Y satisfies Θ, and let Z be an extension of Y to α and β satisfying (7). Then for all a ∈ M, Z x= a satisfies α ≈ β and thereby for all a ∈ M, For the second equality, recall that x is a constant in Z x= a . Thus (A, f, f 1 , . . . , f n ) | = θ( a) for all a ∈ M, which concludes the induction step. (3) If θ is θ 0 ∧ θ 1 , let Θ := Θ 0 ∧ Θ 1 . The claim follows by the induction hypothesis.
Alternatively, if θ 0 contains no numerical terms, let Θ := θ 0 ∨ (θ ¬ 0 ∧ Θ 1 ), where θ ¬ 0 is obtained from ¬θ 0 by pushing ¬ in front of atomic formulae. Assume first that (A, f, f 1 , . . . , f n ) | = θ 0 ∨ θ 1 for all a ∈ M. Then M can be partitioned into disjoint M 0 and M 1 such that We have two cases: • Suppose φ( f ) is not almost conjunctive. Let Z be the extension of Y to z such that s(z) = c if s( x) is in M 0 , and otherwise s(z) = d, where s is any assignment in the support of Z. Consequently, Z satisfies =( x, z). The converse direction is shown analogously in both cases. This concludes the proof.
Further, the induction hypothesis implies that
The "≥" direction of item (i) in Theorem 13 follows by Lemmata 14, 20, and 21; that of item (ii) follows similarly, except that Proposition 16 is used instead of Lemma 14. This concludes the proof of Theorem 13.
Interpreting inclusion logic in probabilistic team semantics
Next we turn to the relationship between inclusion and probabilistic inclusion logics. The logics are comparable for, as shown in Proposition 12, team semantics embeds into probabilistic team semantics conservatively. The seminal result by Galliani and Hella shows that inclusion logic captures PTIME over ordered structures [18]. We show that, restricting to finite structures or to uniformly distributed probabilistic teams, inclusion logic is in turn subsumed by probabilistic inclusion logic. This has two immediate consequences. First, the result by Galliani and Hella readily extends to probabilistic inclusion logic. Second, their result obtains an alternative, entirely different proof through linear systems.
We utilize another result of Galliani stating that inclusion logic is equiexpressive with equiextension logic [17], defined as the extension of first-order logic with equiextension atoms x 1 ⊲⊳ x 2 := x 1 ⊆ x 2 ∧ x 2 ⊆ x 1 . In the sequel, we relate equiextension atoms to probabilistic inclusion atoms.
For a natural number k ∈ N and an equiextension atom x 1 ⊲⊳ x 2 , where x 1 and x 2 are variable tuples of length m, where z and z 0 are variable tuples of length k, and y is obtained by concatenating k times some variable y in u. Intuitively (9) expresses that a probabilistic team X, extended with universally quantified u, decomposes to Y + Z, where Y(s) = f s X(s) for some variable coefficient f s ∈ [1/n^k, 1], and |Y x 1 = u | = |Y x 2 = u |, for any u. Thus (9) implies that x 1 ⊲⊳ x 2 . On the other hand, x 1 ⊲⊳ x 2 implies (9) if each assignment weight X(s) equals g s |X| for some g s ∈ [1/n^k, 1]. In this case, one finds the decomposition Y + Z by balancing the weight differences between values of x 1 and x 2 . More details are provided in the proof of the next lemma.
Lemma 22. Let k be a positive integer, A a finite structure with universe A of size n, and X : X → R ≥0 a weighted team.
n m X * , where X * is defined as the sum X[ a 1 / u] + . . . + X[ a l / u], and a 1 , . . . , a l lists all elements in A m . By Proposition 12(iii) it suffices to show that X * satisfies the formula obtained by removing the outermost universal quantification of ψ k . By Proposition 29 it suffices to show that each X[ a i / u] individually satisfies the same formula. Hence fix a tuple of values b ∈ A m and define Y := X[ b/ u]. We show that Y satisfies Observe that we have here fixed u → b and y → c, where c is some value in b. We have also removed u from the marginal identity atom in (9), for it has a fixed value in Y.
Fix some d ∈ A that is distinct from c, and denote by Y the support of Y. For existential quantification over v i , extend s ∈ Y by v i → c if s( x i ) = b, and otherwise by v i → d, so as to satisfy the first two conjuncts. Denote by Y ′ : Y ′ → R ≥0 the weighted team, where Y ′ consists of these extensions, and the weights are inherited from Y.
Observe that Y ′ (s) ≥ |X|/n^k for all s ∈ Supp(Y ′ ). Fix i ∈ {1, 2}, and assume that |X x i = b | > 0. Then |X x i = b | ≥ |X|/n^k , and thus using |X x 1 = x 2 | = 0 and |X| = |Y ′ | we obtain Since X | = x 1 ⊲⊳ x 2 , we obtain that w 1 and w 2 are either both zero or both at least |Y ′ |/n^k . Next, let us describe the existential quantification of z 1 (later we show how the universal quantification of z 0 can be fitted in). The purpose of this step is to balance the possible weight difference between |Y ′ (i) if s ′ (v 1 ) = c and s ′ (v 2 ) = d, allocate respectively w 2 /|Y ′ | and 1 − w 2 /|Y ′ | of the weight of s ′ to s ′ ( c/ z 1 ) and s ′ ( d/ z 1 ); (ii) if s ′ (v 1 ) = d and s ′ (v 2 ) = c, allocate respectively w 1 /|Y ′ | and 1 − w 1 /|Y ′ | of the weight of s ′ to s ′ ( c/ z 1 ) and s ′ ( d/ z 1 ); or (iii) otherwise, allocate the full weight of s ′ to s ′ ( c/ z 1 ).
Denote by Z the probabilistic team obtained this way, and define Z ′ := Z z 1 = c . We observe that Finally, let us return to the universal quantification of z 0 , which precedes the existential quantification of z in (10). The purpose of this step is to enforce that for each s ∈ Supp(Y ′ ), the extension s( c/ z 1 ) takes a positive weight. Observe that w i /|Y ′ | is either zero or at least 1/n^k , for w i is either zero or at least |Y ′ |/n^k . Furthermore, note that universal quantification distributes 1/n^k of the weight of s ′ to s ′ ( c/ z 0 ). Thus the weight of s ′ can be distributed in such a way that both the conditions (i)–(iii) and the formula z 0 = c → z 1 = c simultaneously hold. This concludes the proof of case (i).
(ii) Suppose that the assignments in X mapping x 1 to b have a positive total weight in X. By symmetry, it suffices to show that the assignments in X mapping x 2 to b also have a positive total weight in X. By assumption there is an extension Z of X[ b/ u] satisfying the quantifier-free part of (10). It follows that the total weight of assignments in Z that map v 1 to c is positive. Consequently, by z 0 = c → z 1 = c where z 0 is universally quantified, a positive fraction of these assignments maps also z 1 to c. This part of Z is allocated to v 1 ≈ v 2 , and thus the weights of assignments mapping v 2 to c is positive as well. But then, going backwards, we conclude that the total weight of assignments mapping x 2 to b is positive, which concludes the proof.
We next establish that inclusion logic is subsumed by probabilistic inclusion logic at the level of sentences. ( x 1 , x 2 ). We can make four simplifying assumptions without loss of generality. First, we may restrict attention to weighted semantics by item (ii) of Proposition 12. Thus, we assume that A | = w X φ for some weighted team X and a finite structure A with universe of size n. Second, we may assume that the support of X consists of the empty assignment by item (iv) of Proposition 12. Third, since FO(⊲⊳) is insensitive to assignment weights, we may assume that the satisfaction of φ by X is witnessed by uniform semantic operations. That is, existential and universal quantification split an assignment to at most n equally weighted extensions, and disjunction can only split an assignment to two equally weighted parts. Fourth, we may assume that any equiextension atom x 1 ⊲⊳ x 2 appears in φ in an equivalent form ∃uv(u ≠ v ∧ x 1 u ⊲⊳ x 2 v), to guarantee that the condition |X x 1= x 2 | = 0 holds for all appropriate subteams X. We then obtain by the previous lemma and a simple inductive argument that A | = w X φ * . The converse direction follows similarly by the previous lemma.
Consequently, probabilistic inclusion logic captures P, for this holds already for inclusion logic [18]. Another consequence is an alternative proof, through probabilistic inclusion logic (Theorem 23) and linear programs (Theorems 13 and 4), for the PTIME upper bound of the data complexity of inclusion logic. For this, note also that quantification of functions, whose range is the unit interval, is clearly expressible in ESO R [≤, SUM, 0, 1]. Proof. Recall that FO(⊆) ≡ FO(⊲⊳). Let φ be an FO(⊲⊳) formula, A a finite structure, and X a uniform probabilistic team. Let * denote the translation of Theorem 23. Now where X is the support of X and Dom(X) = {x 1 , . . . , x n }.
Definability over open formulae
We now turn to definability over open formulae. In team semantics, inclusion logic extended with dependence atoms is expressively equivalent to independence logic at the level of formulae. This relationship however does not extend to probabilistic team semantics. As we will prove next, probabilistic inclusion logic extended with dependence atoms is strictly less expressive than probabilistic independence logic. The reason, in short, is that logics with marginal identity and dependence can only describe additive distribution properties, whereas the concept of independence involves multiplication.
We begin with a proposition illustrating that probabilistic independence logic has access to irrational weights. 5 Let A be a finite structure with domain A of size n, and let X be a probabilistic team. Then Proof. Suppose A | = X φ(x), and let Y be an extension of X, in accord with the quantifier prefix of φ, that satisfies (11). Then in Y, c is constant and z is uniformly distributed over all domain values. Hence z equals c for weight 1/n, and consequently x and y simultaneously equal c for the same weight. Since x and y are independent and identically distributed, in isolation they equal c for weight 1/√n. Since X and Y agree on the weights of x, the claim follows.
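As a quick numeric check of this argument (our own illustration, not part of the proof), the following sketch verifies that if x and y are independent and identically distributed, forcing the joint event x = y = c to carry weight 1/n forces each marginal to carry weight 1/√n, which is already irrational for n = 2:

```python
import math
from fractions import Fraction

# If x and y are independent and identically distributed with
# P(x = c) = p, then P(x = c and y = c) = p * p.  Forcing the joint
# weight to equal 1/n therefore forces p = 1/sqrt(n).
n = 2
p = math.sqrt(1 / n)
assert abs(p * p - 1 / n) < 1e-12

# 1/sqrt(2) is irrational: (a/b)^2 = 1/2 would require 2*a^2 = b^2,
# which a brute-force search over small fractions also fails to find.
found_rational = any(
    Fraction(a, b) ** 2 == Fraction(1, 2)
    for b in range(1, 200)
    for a in range(1, b + 1)
)
assert not found_rational
```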
It follows, then, that independence atoms are not definable in additive existential second-order logic.
It is easy to see that A | = Ψ( f ) if and only if Ψ * ( f (0), f (1)) holds in the real arithmetic. Consequently, Ψ * has only irrational solutions. On the other hand, Ψ * can be transformed to the form ∃x 1 . . . ∃x n ⋁ i ⋀ j C i j , where each C i j is a (strict or non-strict) linear inequation with integer coefficients and constants. Since Ψ * is satisfiable, some system of linear inequations ⋀ j C i j has solutions, and thus also rational solutions. 6 Thus Ψ * has rational solutions, which leads to a contradiction. We conclude that φ(x) does not translate into ESO R [≤, +, 0, 1].
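The footnoted step, that a satisfiable system of linear inequations with integer coefficients also has rational solutions, can be illustrated in the one-variable case; the n-variable case reduces to it by Fourier–Motzkin elimination, which preserves rational coefficients. The code below is a sketch of our own (names and encoding are not from the paper):

```python
from fractions import Fraction

def rational_solution_1d(system):
    """Rational solution of a one-variable system of linear inequations
    a*x REL b (REL in {'<', '<='}) with integer a, b, or None if the
    system is unsatisfiable.  All bounds are rational, so a non-empty
    feasible interval always contains a rational point.
    """
    lo, lo_strict = None, False   # greatest lower bound on x
    hi, hi_strict = None, False   # least upper bound on x
    for a, rel, b in system:
        if a == 0:                # constraint does not mention x
            if not (0 < b if rel == '<' else 0 <= b):
                return None
            continue
        bound, strict = Fraction(b, a), (rel == '<')
        if a > 0:                 # a*x < b  gives an upper bound
            if hi is None or bound < hi or (bound == hi and strict):
                hi, hi_strict = bound, strict
        else:                     # dividing by negative a flips the bound
            if lo is None or bound > lo or (bound == lo and strict):
                lo, lo_strict = bound, strict
    if lo is None and hi is None:
        return Fraction(0)
    if lo is None:
        return hi - 1
    if hi is None:
        return lo + 1
    if lo < hi:
        return (lo + hi) / 2      # rational midpoint of the interval
    if lo == hi and not (lo_strict or hi_strict):
        return lo
    return None

# 2x > 1 and 3x < 2: the feasible interval (1/2, 2/3) contains the
# rational point 7/12.
sol = rational_solution_1d([(-2, '<', -1), (3, '<', 2)])
assert sol is not None and 2 * sol > 1 and 3 * sol < 2
```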
The following result is now immediate. There is, in fact, more than one way to prove that FO(⊥ ⊥) FO(=(· · · ), ≈). Above, we use the fact that probabilistic independence cannot be defined in terms of additive existential second-order logic, which in turn encompasses both dependence and marginal identity atoms. Another strategy is to apply the closure properties of these atoms.
Let φ be a formula over probabilistic team semantics. We say that φ is closed under scaled unions if for all parameters α ∈ [0, 1], finite structures A, and probabilistic teams X and Y: In the weighted semantics, we say that φ is closed under unions if for all finite structures A and weighted teams X and Y: We say that φ is relational if for all finite structures A, and probabilistic teams X and Y such that Supp(Y) = Supp(X): A | = X φ if and only if A | = Y φ. We say that φ is downwards closed if for all finite structures A, and probabilistic teams X and Y such that Supp(Y) ⊆ Supp(X): A | = X φ implies A | = Y φ. Furthermore, a logic L is called relational (downwards closed, closed under scaled unions, resp.) if each formula φ in L is relational (downwards closed, closed under scaled unions, resp.).
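The scaled-union closure of marginal identity atoms can be checked mechanically, since marginals are linear in the assignment weights. The following Python sketch (our own encoding of weighted teams as dictionaries, not from the paper) verifies this on a small example:

```python
from collections import Counter

def marginal(team, var):
    """Total weight of each value of `var` in a weighted team.
    A team maps assignments (tuples of (variable, value) pairs)
    to non-negative weights."""
    m = Counter()
    for assignment, w in team.items():
        m[dict(assignment)[var]] += w
    return m

def satisfies_identity(team, x, y):
    # The marginal identity atom x ≈ y: both variables induce the
    # same weight on every value.
    return marginal(team, x) == marginal(team, y)

def scaled_union(team_x, team_y, alpha):
    out = Counter()
    for s, w in team_x.items():
        out[s] += alpha * w
    for s, w in team_y.items():
        out[s] += (1 - alpha) * w
    return out

# Two probabilistic teams over variables x, y that both satisfy x ≈ y.
X = {(("x", 0), ("y", 1)): 0.5, (("x", 1), ("y", 0)): 0.5}
Y = {(("x", 0), ("y", 0)): 1.0}
assert satisfies_identity(X, "x", "y")
assert satisfies_identity(Y, "x", "y")

# Every scaled union alpha*X + (1-alpha)*Y satisfies x ≈ y as well,
# because the marginals are linear in the weights.
for alpha in (0.0, 0.25, 0.5, 1.0):
    assert satisfies_identity(scaled_union(X, Y, alpha), "x", "y")
```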
Proposition 29. The following properties hold:
• FO(=(· · · )) is relational. [Self-evident]
• FO(≈) is closed under scaled unions. [25]

In the context of multiteam semantics, Grädel and Wilke have shown that probabilistic independence is not definable by any logic that extends first-order logic with a collection of atoms that are downwards closed or union closed [23, Theorem 17]. In fact, their proof works also when downwards closed atoms are replaced with relational atoms (which, in their framework as well as in the probabilistic framework, is a strictly more general notion). While their proof technique does not directly generalise to probabilistic team semantics, it can readily be adapted to weighted semantics (Definition 10).
Theorem 30 (cf. [23]). Let C be a collection of relational atoms, and let D be a collection of atoms that are closed under unions. Then under weighted semantics FO(⊥ ⊥) FO(C, D).
This theorem can be then transferred to probabilistic semantics by using the following observations: For any probabilistic n-ary atom D, we can define an n-ary atom D * in the weighted semantics as follows: The final piece of the puzzle is the following generalisation of [25,Proposition 8]. The original proposition was formulated for concrete atomic dependency statements satisfying the proposition as an atomic case for induction. The inductive argument of the original proof works with any collection of atoms that satisfy the proposition as an atomic case.
Axiomatization of marginal identity atoms
Next we turn to axioms of the marginal identity atom, restricting attention to atoms of the form x 1 . . . x n ≈ y 1 . . . y n , where both x 1 . . . x n and y 1 . . . y n are sequences of distinct variables. It turns out that the axioms of inclusion dependencies over relational databases [9] are sound and almost complete for marginal identity; we only need one additional rule for symmetricity. Consider the following axiomatization: For a set of marginal identity atoms Σ ∪ {σ}, a proof of σ from Σ is a finite sequence of marginal identity atoms such that (i) each element of the sequence is either from Σ, or follows from previous atoms in the sequence by an application of a rule, and (ii) the last element in the sequence is σ. We write Σ ⊢ σ if there is a proof of σ from Σ. For a probabilistic team X and a formula φ over the empty vocabulary τ ∅ , we write X | = φ as a shorthand for A | = X φ, where A is the structure over τ ∅ whose domain consists of the values in the support of X. We use a shorthand X | = φ, for a team X, analogously. We write Σ | = σ if every probabilistic team Y that satisfies Σ satisfies also σ. The proof of the following theorem is an adaptation of a similar result for inclusion dependencies [9]. Proof. It is clear that the axiomatization is sound; we show that it is also complete.
Assume that Σ | = σ, where σ is of the form x 1 . . . x n ≈ y 1 . . . y n . Let V consist of the variables appearing in Σ∪{σ}. For each subset V ⊆ V, let i V be an auxiliary variable, called an index. Denote the set of all indices over subsets of V by I. Define Σ * as the set of all inclusion atoms u 1 . . .
To show that Σ ⊢ x 1 . . . x n ≈ y 1 . . . y n , we will first apply the chase algorithm of database theory to obtain a finite team Y that satisfies Σ * , where the codomain of Y consists of natural numbers. The indices i V in Y, in particular, act as multiplicity measures for values of V, making sure that both sides of any marginal identity atom in Σ appear in Y with equal frequency. This way, the probabilistic team Y, defined as the uniform distribution over Y, will in turn satisfy Σ. Finally, we show that the chase algorithm yields a proof of σ, utilizing the fact that Y satisfies σ by assumption.
Next, we define a team X 0 that serves as the starting point of the chase algorithm. We also describe how assignments over V that are introduced during the chase are extended to V ∪ I.
Let X 0 = {s * }, where s * is an assignment defined as follows. Let s * (x i ) = i, for 1 ≤ i ≤ n, and s * (x) = 0, for x ∈ (V ∪ I) \ {x 1 , . . . , x n }. For a team Y with variable domain V ∪ I and an assignment s with variable domain V, define s Y : V ∪ I → N as the extension of s such that In what follows, we describe a chase rule to expand a team X. We say that an assignment s ′ witnesses an inclusion atom x ⊆ y for another assignment s, if s( x) = s ′ ( y). Consider the following chase rule: Chase rule. Let X be a team with variable domain V ∪ I, s ∈ X, and σ := u 1 . . . u l i U ⊆ v 1 . . . v l i V ∈ Σ * . Suppose no assignment in X witnesses σ for s. Now let s ′ be the assignment with variable domain V that is defined as Then we say that s and σ generate the assignment s ′ X . Next, let S = (X 0 , X 1 , X 2 , . . .) be a maximal sequence, where X i+1 = X i ∪ {s ′ X i } for an assignment s ′ X i generated non-deterministically by some s ∈ X j and τ ∈ Σ * according to the chase rule, where j ≤ i is minimal. Define Y as the union of all elements in S. Note that Y is finite if S is. In particular, if Y is finite, then it equals X i , where i is the least integer such that the chase rule is not anymore applicable to X i . Below, we will show that Y is finite, which follows if the chase algorithm terminates.
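A stripped-down version of this chase, keeping only the witness-generation step of (13) and ignoring the index bookkeeping, can be sketched as follows. Since every value in a generated assignment comes from the finite starting team (all other variables are set to 0), the assignment space is finite and the naive procedure terminates. Names and encoding below are our own:

```python
def chase(variables, atoms, start):
    """Naive chase for inclusion atoms over a team of assignments.

    `atoms` is a list of pairs (xs, ys) of variable tuples, read as the
    inclusion dependency xs ⊆ ys.  Whenever some assignment s has no
    witness t with t(ys) = s(xs), a new assignment is added that maps
    ys to s(xs) and every other variable to 0.  All values come from
    the finite starting assignment, so the loop terminates.
    """
    idx = {v: i for i, v in enumerate(variables)}
    team = {tuple(start)}
    changed = True
    while changed:
        changed = False
        for xs, ys in atoms:
            for s in list(team):
                value = tuple(s[idx[x]] for x in xs)
                witnessed = any(
                    tuple(t[idx[y]] for y in ys) == value for t in team
                )
                if not witnessed:
                    new = [0] * len(variables)
                    for y, val in zip(ys, value):
                        new[idx[y]] = val
                    team.add(tuple(new))
                    changed = True
    return team

variables = ("x1", "x2", "y1", "y2")
atoms = [(("x1", "x2"), ("y1", "y2")), (("y1", "y2"), ("x1", "x2"))]
final = chase(variables, atoms, (1, 2, 0, 0))
# The chased team satisfies every inclusion atom.
for xs, ys in atoms:
    for s in final:
        v = tuple(s[variables.index(x)] for x in xs)
        assert any(tuple(t[variables.index(y)] for y in ys) == v
                   for t in final)
```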
It is easy to verify that the following holds, for each i ∈ N: For any U = {u 1 , . . . , u n } and s ∈ X i , if the team That is, the values of i U in X s form an initial segment of N of size |X s |. Therefore, if s ∈ X i has no witness for u 1 . . . u l i U ⊆ v 1 . . . v l i V in X i , then for any t ∈ X i such that s(u 1 . . . u l ) = t(v 1 . . . v l ), we have s(i U ) > t(i V ). It follows that We will next show how Σ ⊢ x 1 . . . x n ≈ y 1 . . . y n follows from the following two claims. We will then prove the claims, which concludes the proof of the theorem.
Claim 1. Y is finite.
Claim 2. If Y contains an assignment s that maps some sequence of variables z j , for 1 ≤ j ≤ k, to values i j ≥ 1, then there is a proof of x i 1 . . . x i k ≈ z 1 . . . z k from Σ. It follows by construction that Y | = Σ * . Since Y is finite by Claim 1, we may define a probabilistic team Y as the uniform distribution over Y. By the construction of Y and Σ * , it follows that Y | = Σ, and hence Y | = x 1 . . . x n ≈ y 1 . . . y n follows from the assumption that Σ | = σ. Consequently, Y contains an assignment s which maps y i to i, for 1 ≤ i ≤ n. We conclude that by Claim 2 there is a proof of x 1 . . . x n ≈ y 1 . . . y n from Σ. 7 To complete the proof, we prove Claims 1 and 2.
Proof of Claim 1. Assume towards contradiction that Y is infinite, which entails that the sequence S = (X 0 , X 1 , X 2 , . . .) is infinite. W.l.o.g. the chase rule is always applied to s that belongs to the intersection X i ∩ X j , for minimal j ≤ i.
Define S ′ = (X ′ 0 , X ′ 1 , . . .) as the sequence, where X ′ 0 = X 0 , and X ′ i+1 is defined as X j where j is the least integer such that all s ∈ X ′ i and σ ∈ Σ * have a witness in X j . Due to the application order of the chase rule, it follows that assuming X ′ −1 = ∅. Moreover, S ′ is a subsequence of S which is finite iff S is. We first define some auxiliary concepts. For an assignment s in X, we use a shorthand Base(s) for s ↾ V, called the base of s. We also define Base(X) := {Base(s) | s ∈ X}. The multiplicity in X of an assignment s is defined as |{s ′ ∈ X | Base(s) = Base(s ′ )}|. Note that Base(Y) is finite, for Base(s) is a mapping from V into {0, . . . , n} for all s ∈ Y. Thus, since Y is infinite, it contains assignments with infinite multiplicity in Y. Next, we associate each assignment s with the set of its positive variables Pos(s) := {x ∈ V | s(x) > 0}, the size of which is called the degree of s.
Let k be some integer such that X ′ k contains every assignment in Y that has finite multiplicity in Y, and denote X ′ k by Z. Let M ∈ {1, . . . , n} be the maximal degree of any assignment in Y with infinite multiplicity in Y, that is, the maximal degree of any assignment in Y \ Z. Then, take any s L ∈ X ′ L \ X ′ L−1 of degree M, where L > k + S for S := |Base(Y)|. By property (15), we find a sequence of assignments (s 0 , . . . , s L ), where s i+1 ∈ X ′ i+1 \ X ′ i , for i < L, was generated by s i ∈ X ′ i \ X ′ i−1 with the chase rule. Since S is sufficiently large, this sequence has a suffix (s l , . . . , s m , . . . , s L ) in which each assignment belongs to Y \ Z, has degree M, and where l < m and Base(s l ) = Base(s m ).
It now suffices to show the following subclaim: Subclaim. If t, t ′ ∈ Y \ Z are two assignments with degree M such that t ′ was generated by t by the chase rule, then t(i Pos(t) ) ≥ t ′ (i Pos(t ′ ) ).
The subclaim implies that s l (i Pos(s l ) ) ≥ s m (i Pos(s m ) ), which leads to a contradiction. For this, observe that the assignment construction in (13), together with Base(s l ) = Base(s m ), implies that s l (i) < s m (i) for all indices i. In particular, we have s l (i Pos(s l ) ) < s m (i Pos(s m ) ) since Pos(s l ) = Pos(s m ). Hence, the assumption that Y is infinite must be false.
Proof of the subclaim. Suppose t ′ is generated by t and u 1 . . . u l i U ⊆ v 1 . . . v l i V ∈ Σ * . Without loss of generality Pos(t) = {u 1 , . . . , u M }, in which case Pos(t ′ ) = {v 1 , . . . , v M }. We need to show that t(i Pos(t) ) ≥ t ′ (i Pos(t ′ ) ). Now, (t(u 1 ), . . . , t(u l )) is a sequence of the form (i 1 , . . . , i M , 0 . . . , 0), where i j are positive integers. By the assumption that t ∈ Y \ Z, there is an integer m such that t ∈ X m+1 \ X m and Z ⊆ X m . We obtain that Here, the assignment construction in (13) entails (16), and it is also used in (17). For the summation term appearing in (17), we note that each assignment whose degree is strictly greater than M must belong to Z. It remains to consider (18); the last equality is symmetrical to the composition of the first four equalities.

Table 1: The known expressivity hierarchy of logics with probabilistic team semantics and corresponding ESO variants on metafinite structures. The results of this paper are marked with an asterisk (*).

[26] FO(≈) < [25] FO(≈, =(· · · )) < * FO(⊥ ⊥) ≡ [25] FO(⊥ ⊥ c )
Proof of Claim 2. Note that, if s ∈ Y, then there exists a minimal i such that s ∈ X i \ X i−1 . We prove the claim by induction on i. For the initial team X 0 = {s * }, we have s * (x i ) = i, for 1 ≤ i ≤ n. By reflexivity we obtain x i 1 . . . x i k ≈ x i 1 . . . x i k , and thus the claim holds for the base step.
For the inductive step, suppose s ∈ X i+1 \ X i is generated by some s ′ ∈ X j \ X j−1 , j ≤ i, and some u 1 . . . u l i U ⊆ v 1 . . . v l i V in Σ * . For a variable v i from v 1 , . . . , v l we say the variable u i from u 1 , . . . , u l is its corresponding variable. Let z 1 , . . . , z k be variables as in the claim, i.e., s(z j ) = i j ≥ 1, for 1 ≤ j ≤ k. Now from the construction of s (i.e., (13)) it follows that z 1 , . . . , z k are variables from v 1 , . . . , v l . Let z ′ 1 , . . . , z ′ k from u 1 , . . . , u l denote the corresponding variables of z 1 , . . . , z k . Since s was constructed by s ′ and u 1 . . . u l i U ⊆ v 1 . . . v l i V , it follows that s(z 1 , . . . , z k ) = s ′ (z ′ 1 , . . . , z ′ k ). By applying the induction hypothesis to s ′ , we obtain that Σ yields a proof of x i 1 . . . x i k ≈ z ′ 1 . . . z ′ k . Since u 1 . . . u l ≈ v 1 . . . v l or its inverse is in Σ, using projection and permutation (and possibly symmetricity) we can deduce z ′ 1 . . . z ′ k ≈ z 1 . . . z k . Thus by transitivity we obtain a proof of x i 1 . . . x i k ≈ z 1 . . . z k . This concludes the proof of the claim.
Conclusion
Our investigations gave rise to the expressiveness hierarchy in Table 1. Furthermore, we established that FO(≈) captures P on finite ordered structures, and that FO(≈, =(· · · )) captures NP on finite structures. It is worth noting that almost conjunctive (∃ * ∃ * ∀ * ) R [≤, +, SUM, 0, 1] is in some regard a maximal tractable fragment of additive existential second-order logic, as dropping either the requirement of being almost conjunctive, or that of having the prefix form ∃ * ∃ * ∀ * , leads to a fragment that captures NP. We also showed that the full additive existential second-order logic (with inequality and constants 0 and 1) collapses to NP, a result which, as far as we know, has not been stated previously.
Lastly, extending the axiom system of inclusion dependencies with a symmetry rule, we presented a sound and complete axiomatization for marginal identity atoms. Besides this result, it is well known that also marginal independence has a sound and complete axiomatization [19]. These two notions play a central role in statistics, as it is a common assumption in hypothesis testing that samples drawn from a population are independent and identically distributed (i.i.d.). It is an interesting open question whether marginal independence and marginal identity, now known to be axiomatizable in isolation, can also be axiomatized together.
Acknowledgements
We would like to thank the anonymous referee for a number of useful suggestions. We also thank Joni Puljujärvi and Richard Wilke for pointing out errors in the previous manuscripts.
In this paper we present results illustrating the power and flexibility of one-bit teleportations in quantum bus computation. We first show a scheme to perform a universal set of gates on continuous variable modes, which we call a quantum bus or qubus, using controlled phase-space rotations, homodyne detection, ancilla qubits and single qubit measurement. The resource usage for this scheme is lower than any previous scheme to date. We then illustrate how one-bit teleportations into a qubus can be used to encode qubit states into a quantum repetition code, which in turn can be used as an efficient method for producing GHZ states that can be used to create large cluster states. Each of these schemes can be modified so that teleportation measurements are post-selected to yield outputs with higher fidelity, without changing the physical parameters of the system.
INTRODUCTION
Bennett et al. [1] showed that an unknown quantum state (a qubit) could be teleported via two classical bits with the use of a maximally entangled Bell state shared between the sender and receiver. The significance of teleportation as a tool for quantum information was extended when Gottesman and Chuang [2] showed that unitary gates could be performed using modified teleportation protocols, known as gate teleportation, where the task of applying a certain gate was effectively translated to the task of preparing a certain state. Since then teleportation has been an invaluable tool for the quantum information community, as gate teleportation was the basis for showing that linear optics with single photons and photo-detectors was sufficient for a scalable quantum computer [3]. Moreover, Zhou et al. [4] demonstrated that all previously known fault-tolerant gate constructions were equivalent to one-bit teleportations of gates.
Recently, the use of one-bit teleportations between a qubit and a continuous variable quantum bus (or qubus) has been shown to be important for fault-tolerance [5]. Using one-bit teleportations to transfer between two different forms of quantum logic, a fault tolerant method to measure the syndromes for any stabiliser code with the qubus architecture was shown, allowing for a linear saving in resources compared to a general CNOT construction. In terms of optics, the two different types of quantum logic used were polarisation {|0 = |H , |1 = |V } and logical states corresponding to rotated coherent states {|α , |e ±iθ α }, although in general any two-level system (qubit) which can interact with a continuous variable mode (qubus) would suffice. The relative ease with which single qubit operations can be generally performed prompted the question of whether a universal set of gates can be constructed with this rotated coherent state logic. In this paper we describe one such construction, which we call qubus logic.
The fault-tolerant error-correction scheme using a qubus [5] exploits the fact that entanglement is easy to create with coherent cat states of the qubus, such as |α + |αe iθ , and single qubit operations are easily performed on a two-level system. In this paper we describe how these cat states can be used as a resource to construct other large entangled states, such as cluster states [6,7,8], using one-bit teleportations between a qubit and a qubus.
Although the average fidelities of qubus logic and cluster state preparation are dependent on how strong the interaction between the qubit and the qubus can be made, and how large the amplitude α is, these fidelities can be increased arbitrarily close to 1 through the use of postselection during the one-bit teleportations, demonstrating the power and flexibility of teleportation in qubus computation for state preparation.
The paper is organised as follows. First, in Section II we revisit one-bit teleportations for the qubus scheme. Next, in Section III we present a technique to perform quantum computation using coherent states of the qubus as basis states. To do this we make use of controlled (phase-space) rotations and ancilla qubits. This coherent state computation scheme is the most efficient to date. In Section IV we show how we can efficiently prepare repetition encoded states using one-bit teleportations, and how such encoders can be used to prepare large cluster states.
II. ONE-BIT TELEPORTATIONS
In the original quantum teleportation protocol an arbitrary quantum state can be transferred between two parties that share a maximally entangled state by using only measurements and communication of measurement outcomes [1]. Modifications of the resource state allow for the application of unitaries to an arbitrary state in a similar manner, in what is known as gate teleportation [2]. The main advantage of gate teleportation is the fact that it allows for the application of the unitary to be delegated to the state preparation stage. In some physical realisations of quantum devices, it may only be possible to prepare these states with some probability of success. In that case, the successful preparations can still be used for scalable quantum computation [2]. When dealing with noisy quantum devices, it is important to encode the quantum state across multiple subsystems, at the cost of requiring more complex operations to implement encoded unitaries. In order to avoid the uncontrolled propagation of errors during these operations, one can also employ gate teleportation with the extra step of verifying the integrity of the resource state before use [2,3,4,9,10]. In the cases where the teleportation protocol is used only to separate the preparation of complex resource states from the rest of the computation, simpler protocols can be devised. These protocols are known as one-bit teleportations [4]. Unitaries implemented through one-bit gate teleportation can also be used for fault-tolerant quantum computation [4] as well as measurement-based quantum computation [6]. The main difference between one-bit teleportation and the standard teleportation protocol is the lack of a maximally entangled state. Instead, in order to perform a one-bit teleportation it is necessary that the two parties interact directly in a specified manner, and that the qubit which will receive the teleported state be prepared in a special state initially.
Some unitary operations on coherent states can be difficult to implement deterministically, while the creation of entangled multimode coherent states is relatively easy. Single qubits, on the other hand, are usually relatively easy to manipulate, while interactions between them can be challenging. For this reason, we consider one-bit teleportation between states of a qubit and states of a field in a quantum bus, or qubus. The two types of one-bit teleportations for qubus computation are shown in Fig. (1), based on similar constructions proposed for qubits by Zhou et al. [4]. The one-bit teleportation of the qubit state a|0 + b|1 into the state of the qubus, in the coherent state basis {|α , |αe iθ }, is depicted in Fig. (1a). The qubit itself can be encoded, for example, in the polarisation of a photon, i.e. |0 = |H and |1 = |V . The initial state, before any operation, is (a|0 + b|1 )|α . The controlled phase-space rotation corresponds to the unitary which applies a phase shift of θ to the bus if the qubit state is |1 , and does nothing otherwise 1 . After the controlled rotation by θ the state becomes a|0 |α + b|1 |e iθ α . Representing the qubit state in the Pauli X eigenbasis, this is |+ (a|α + b|e iθ α )/√2 + |− (a|α − b|e iθ α )/√2. When we detect |+ we have successfully teleported our qubit into |α , |e iθ α logic. When we detect |− we have the state a|α − b|e iθ α . The relative phase discrepancy can be corrected by the operation Z̄, which approximates the Pauli Z operation in the {|α , |αe iθ } basis. This correction can be delayed until the state is teleported back to a qubit, where it is more easily implemented. The one-bit teleportation of the state a|α + b|αe iθ of the qubus to the state of the qubit can be performed by the circuit depicted in Fig. (1b). That is, we start with the state (a|α + b|αe iθ )(|0 + |1 )/√2.
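As a sketch of the arithmetic behind this measurement step (our own illustration; the function and variable names are not from the paper), the snippet below represents the bus states analytically through the standard coherent-state overlap ⟨β|γ⟩ = exp(−|β|²/2 − |γ|²/2 + β*γ) and computes the probabilities of the two X-basis outcomes:

```python
import cmath
import math

def overlap(beta, gamma):
    # Standard coherent-state overlap <beta|gamma>.
    return cmath.exp(-abs(beta) ** 2 / 2 - abs(gamma) ** 2 / 2
                     + beta.conjugate() * gamma)

def teleport_into_qubus(a, b, alpha, theta):
    """Probabilities of the X-basis outcomes when teleporting
    a|0> + b|1> into the {|alpha>, |e^{i theta} alpha>} basis.

    After the controlled rotation the joint state is
    a|0>|alpha> + b|1>|e^{i theta} alpha>; projecting the qubit onto
    |+> or |-> leaves (a|alpha> +/- b|e^{i theta} alpha>)/sqrt(2)
    on the bus, whose squared norm is the outcome probability.
    """
    rot = alpha * cmath.exp(1j * theta)
    ov = overlap(complex(alpha), rot)
    cross = (a.conjugate() * b * ov).real
    p_plus = (abs(a) ** 2 + abs(b) ** 2 + 2 * cross) / 2
    p_minus = (abs(a) ** 2 + abs(b) ** 2 - 2 * cross) / 2
    return p_plus, p_minus

a = b = complex(1 / math.sqrt(2))
p_plus, p_minus = teleport_into_qubus(a, b, alpha=20.0, theta=0.3)
assert abs(p_plus + p_minus - 1) < 1e-12
# For well-separated basis states the two outcomes become equally likely.
assert abs(p_plus - 0.5) < 1e-6
```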
A. Average fidelities
In order to quantify the performance of the protocols just described, consider the process fidelity [18,19,20].
The process fidelity between two quantum operations is obtained by computing the fidelity between the states isomorphic to the processes under the Choi-Jamiołkowski isomorphism. For example, in order to compare a quantum process E acting on a D dimensional system to another quantum process F acting on the same system, we compute the fidelity between the states obtained by applying each process to one half of a maximally entangled state. In the case of single qubit processes, we just need to consider the action of the process on one of the qubits of the state (|00⟩ ± |11⟩)/√2. The operational meaning of the process fidelity is given by considering the projection of the first qubit into a particular state a|0⟩ + b|1⟩. In this case the second qubit collapses into the state corresponding to the output of the process acting on the state a|0⟩ + b|1⟩. Thus a high fidelity between |E⟩ and |F⟩ implies a high fidelity between the outputs of E and F.
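As a small illustration (not from the paper): for two single-qubit unitaries the Choi states are pure and the process fidelity reduces to a state overlap. Comparing the identity with a Z rotation diag(1, e^{iφ}) gives cos²(φ/2):

```python
import cmath
import math

def process_fidelity_identity_vs_zrot(phi):
    """|<E|F>|^2 for E = identity and F = diag(1, e^{i phi}), each acting
    on one half of (|00> + |11>)/sqrt(2).  Analytically: cos^2(phi/2)."""
    # Choi vectors in the {|00>, |01>, |10>, |11>} basis; |E> is real.
    e = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]
    f = [1 / math.sqrt(2), 0, 0, cmath.exp(1j * phi) / math.sqrt(2)]
    inner = sum(ei * fi for ei, fi in zip(e, f))  # <E|F>, e is real
    return abs(inner)**2

for phi in (0.0, math.pi / 2, math.pi):
    print(phi, process_fidelity_identity_vs_zrot(phi))
```

At φ = 0 the processes agree perfectly (fidelity 1), at φ = π they are perfectly distinguishable on the Choi state (fidelity 0), matching the operational reading given in the text.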
Consider the state produced by the circuit in Fig. (1a) which depends on the qubit measurement outcome. As the relative phase is known, and the correction can be performed after the state is teleported back to a qubit, for each of the outcomes we can compare this state with the ideal state expected from the definition of the basis states for the qubus. This results in the process fidelity of 1 for one-bit teleportation into the qubus.
For the case where we teleport the state from the qubus back into the qubit, using the circuit in Fig. (1b), we consider the action of the process on the second mode of the state |ψ⁺⟩ from Eq. (6). This is not, strictly speaking, the Choi-Jamiołkowski isomorphism, but it gives the same operational meaning for the process fidelity as a precursor to the fidelity between the outputs of the different processes being compared, as any superposition of {|α⟩, |αe^{iθ}⟩} can be prepared from |ψ⁺⟩ by projecting the qubit into some desired state. We expect the output state to be (|00⟩ + |11⟩)/√2 from the definition of the basis states, but we instead obtain unnormalised states that depend on the homodyne outcome x. Normalising the output state and averaging over all x outcomes, Eq. (9), yields the average process fidelity for one-bit teleportation into a qubit, Eq. (10), where x_d = 2α(1 − cos θ) ≈ αθ² for small θ. Teleportation from the qubus into the qubit is not perfect, even in the ideal setting we consider, because the states |α⟩ and |e^{iθ}α⟩ cannot be distinguished perfectly. However, F_p can be made arbitrarily close to one by letting x_d → ∞, or αθ² → ∞ if θ ≪ 1, as seen in Fig. (2). This corresponds to increasing the distinguishability of the coherent states |α⟩ and |e^{iθ}α⟩.
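The distinguishability parameter x_d = 2α(1 − cos θ) and its small-θ approximation αθ² can be checked numerically (the parameter values below are illustrative only):

```python
import math

def x_d(alpha, theta):
    """Mean x-quadrature separation between |alpha> and |e^{i theta} alpha>:
    x_d = 2 * alpha * (1 - cos(theta)), approx alpha * theta**2 for small theta."""
    return 2 * alpha * (1 - math.cos(theta))

alpha, theta = 100.0, 0.05
exact = x_d(alpha, theta)
approx = alpha * theta**2
print(exact, approx)  # the two agree to leading order in theta
```

The agreement improves as θ shrinks, which is why the text freely interchanges x_d and αθ² in the small-θ regime.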
B. Post-selected teleportation
In order to improve the average fidelity of the teleportations without changing the physical parameters α and θ of the basis states, one can post-select on the outcomes of the x-quadrature measurements when teleporting states from the qubus mode to a qubit, as these outcomes essentially herald the fidelity of the output state with the desired state. Discarding the states with fidelity below a certain threshold allows the average fidelity to be boosted, even in the case where αθ² ≲ 1, at the cost of a certain probability of failure. This is particularly useful for the preparation of quantum states which are used as resources for some quantum information processing tasks.
Instead of accepting the states corresponding to all x outcomes of the homodyne measurement which implements Z̃, we only accept states corresponding to outcomes which are far enough away from the midpoint x_0, since the state at x_0 has the lowest fidelity with the desired state. More explicitly, we only accept states corresponding to measurement outcomes which are smaller than x_0 − y or larger than x_0 + y. This post-selection can only be performed for one-bit teleportation from the qubus to the qubit, yielding a probability of success given by Eq. (11) and a process fidelity conditioned on the successful outcome given by Eq. (12). The effect of discarding some of the states depending on the measurement outcome for the teleportation in Fig. (1b) is depicted in Fig. (3). In particular, we see that the process fidelity can be made arbitrarily close to 1 at the cost of lower probability of success, while α and θ are unchanged, since lim_{y→∞} F_{p,y} = 1.
As the probability mass is highly concentrated due to the Gaussian shape of the wave packets, the probability of success drops super-exponentially fast as a function of y. This is because for large z the Gaussian tail probability behaves as erfc(z) ∼ e^{−z²}/(z√π) [21]. This fast decay corresponds to the contour lines for decreasing probability of success getting closer and closer in Fig. (3). Thus, while the fidelity can be increased arbitrarily via post-selection (by increasing y), this leads to a drop in the probability of obtaining the successful outcome for post-selection. Note that, despite this scaling, significant gains in fidelity can be obtained by post-selection while keeping the physical resources such as α and θ fixed, and while maintaining a reasonable probability of success. In particular, if x_d = 2.5, increasing y from 0 to 1.25 takes the fidelity from 0.9 to 0.99 while the probability of success only drops from 1 to 0.5. If the probability of success is to be maintained constant, a linear increase in x_d can bring the fidelity exponentially closer to unity, as is evident in Fig. (3). As x_d is proportional to the amplitude α of the coherent state, this can be achieved while maintaining θ constant. Since θ is usually the parameter which is hard to increase in an experimental setting, this is highly advantageous. Instead of discarding the outputs with unacceptable fidelity, one can also use the information that the failure is heralded to recover and continue the computation. In the case of the one-bit teleportations described here, such an approach would require active quantum error correction or quantum erasure codes - the type of codes necessary for heralded errors - which have much higher thresholds than general quantum error correcting codes [9]. We will not discuss such a possibility further in this paper, and will focus instead on post-selection for quantum gate construction and state preparation.
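The super-exponential drop of the success probability comes from Gaussian tail integrals. A minimal sketch, assuming unit-variance Gaussian wave packets for illustration (the paper's quadrature convention may differ), together with a check of the large-z erfc asymptotics:

```python
import math

def tail_probability(y, sigma=1.0):
    """Mass of a Gaussian outcome distribution lying further than y from
    its mean: erfc(y / (sqrt(2) * sigma))."""
    return math.erfc(y / (math.sqrt(2) * sigma))

# The post-selection window (x0 - y, x0 + y) cuts into Gaussian wave
# packets, so accepted/rejected masses are tail integrals of this form,
# which decay super-exponentially in y.
for y in (0.0, 1.0, 2.0, 3.0):
    print(y, tail_probability(y))

# Asymptotically erfc(z) ~ exp(-z**2) / (z * sqrt(pi)) for large z:
z = 3.0
ratio = math.erfc(z) * z * math.sqrt(math.pi) * math.exp(z * z)
print(ratio)  # close to 1
```

Successive unit steps in y suppress the tail mass by ever larger factors, which is the "contour lines getting closer" behaviour described in the text.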
III. UNIVERSAL COMPUTATION WITH QUBUS LOGIC
Previous work by Ralph et al. [12,13] and Gilchrist et al. [14] illustrated the construction of a universal quantum computer using what we call coherent state logic. In these schemes a universal set of gates is applied to qubit basis states defined as |0⟩_L = |−α⟩ and |1⟩_L = |α⟩, using partial Bell state measurements and cat states of the form (|−α⟩ + |α⟩)/√2 as resources.
To perform a universal set of gates a total of sixteen ancilla cat states are necessary [13]. For α ≥ 2 the qubits |−α⟩ and |α⟩ are approximately orthogonal, since |⟨−α|α⟩|² = e^{−4|α|²} ≲ 10^{−7}. Using the one-bit teleportations in Fig. (1) we can also perform a universal set of gates on a Hilbert space spanned by the states |0⟩_L = |α⟩ and |1⟩_L = |e^{±iθ}α⟩, which we call qubus logic. As mentioned in the previous section, the two states defined for the logical |1⟩_L are indistinguishable when we homodyne detect along the x-quadrature, a fact that will become important later. The overlap between these basis states, |⟨α|e^{±iθ}α⟩|² = e^{−2|α|²(1−cos θ)} ≈ e^{−|α|²θ²} (for small θ), is close to 0 provided αθ ≫ 1, so that we may consider them orthogonal - e.g. for αθ > 3.4, we have |⟨α|e^{iθ}α⟩|² ≲ 10^{−5}. It can be seen that our basis states are equivalent to the basis states of coherent state logic given a displacement and a phase shifter. That is, if we displace the arbitrary state a|α⟩ + b|αe^{iθ}⟩ by D(−α cos(θ/2) e^{iθ/2}) and apply the phase shifter e^{i(π−θ)n̂/2} we have a|α′⟩ + b e^{iα² sin(θ)/2}|−α′⟩. If we now set α′ = α sin(θ/2) ≈ αθ/2, for small θ, we see that our arbitrary qubus logical state is equivalent to an arbitrary coherent state qubit. The e^{iα² sin(θ)/2} phase factor can be corrected once we use a single-bit teleportation. If α′ ≥ 2 then αθ ≥ 4, which is already satisfied by the approximate orthogonality condition αθ ≫ 1. It is important to note that, although the basis states are equivalent, the gate constructions we describe for qubus logic are very different from the gate constructions for coherent state logic.
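The (near-)orthogonality of the qubus basis states can be checked directly from the overlap formula just quoted; the values below are chosen for illustration only:

```python
import math

def overlap_sq(alpha, theta):
    """|<alpha|e^{i theta} alpha>|^2 = exp(-2 alpha^2 (1 - cos theta))."""
    return math.exp(-2 * alpha**2 * (1 - math.cos(theta)))

def overlap_sq_approx(alpha, theta):
    """Small-theta approximation: exp(-(alpha * theta)**2)."""
    return math.exp(-(alpha * theta)**2)

alpha, theta = 34.0, 0.1          # alpha * theta = 3.4
print(overlap_sq(alpha, theta))   # ~1e-5: effectively orthogonal
print(overlap_sq_approx(alpha, theta))
```

Both the exact expression and the small-θ approximation agree to within a fraction of a percent in the exponent at these parameters, so either can be used to set an orthogonality target for αθ.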
We compare qubus logic and coherent state logic based on resource usage, i.e. the number of ancilla states and controlled rotations necessary to perform each operation. Since the cat state ancillas needed in coherent state logic, (|−α⟩ + |α⟩)/√2, can be made using the circuit in Fig. (1a) with an incident photon in an appropriate superposition state, we consider the sixteen ancilla cat states required in [13] for a universal set of gates to be equivalent to sixteen controlled rotations.
In the next two sections, we describe how to construct arbitrarily good approximations to any single qubit unitary rotation as well as the unitary CSIGN = diag(1, 1, 1, −1) in qubus logic, as this is sufficient for universal quantum computation [22].
A. Single Qubit Gates
An arbitrary single qubit unitary gate U can be applied to the state c_0|α⟩ + c_1|e^{iθ}α⟩ by the circuit shown in Fig. (4). We first teleport this state to the qubit using the circuit in Fig. (1b) and then perform the desired unitary U on the qubit, giving U(c_0|0⟩ + c_1|1⟩). We can teleport this state back to the qubus mode with Fig. (1a), while the Z̃ correction can be delayed until the next single qubit gate, where it can be implemented by applying a Z in addition to the desired unitary. If it happens that this single qubit rotation is the last step of an algorithm, we know that this Z̃ error will not affect the outcome of a homodyne measurement (which is equivalent to a measurement in the Pauli Z eigenbasis), so that this correction may be ignored. In total this process requires two controlled rotations.
Since arbitrary single qubit gates are implemented directly in the two level system, the only degradation in the performance comes from the teleportation of the state from the qubus to the qubit, resulting in the fidelity given in Eq. (10). In the case that we wish to perform a bit flip on the qubit c_0|α⟩ + c_1|e^{iθ}α⟩ we can simply apply the phase shifter e^{−iθn̂} to obtain c_0|e^{−iθ}α⟩ + c_1|α⟩, similarly to the bit flip gate in [13].
Post-selected implementation of single qubit gates
The fidelity of single qubit gates in qubus logic can be improved simply by using post-selected teleportations. For simplicity, if we disregard the second one-bit teleportation which transfers the state back to qubus logic, we obtain the probability of success given in Eq. (11) and the conditional process fidelity given in Eq. (12).
B. Two Qubit Gates
To implement the entangling CSIGN gate we teleport our qubus logical state onto the polarisation entangled state (|00⟩ + |01⟩ + |10⟩ − |11⟩)/2. This state, which corresponds to a Hadamard gate H applied locally to a maximally entangled pair, can be produced offline by any method that generates a maximally entangled pair of qubits. As described previously in the context of error correction, such a state can be produced with controlled rotations [5]. If we start with the qubus coherent state |√2 α⟩ and an eigenstate of the Pauli X operator, (|0⟩ + |1⟩)/√2, incident on Fig. (1a), we obtain (|√2 α⟩ + |√2 e^{iθ}α⟩)/√2. Next we put this through a symmetric beam splitter to obtain (|α, α⟩ + |e^{iθ}α, e^{iθ}α⟩)/√2 [14]. If we now teleport this state to polarisation logic with Fig. (1b) we have, to a good approximation, the Bell state (|00⟩ + |11⟩)/√2, and with a local Hadamard gate we finally obtain (|00⟩ + |01⟩ + |10⟩ − |11⟩)/2. To make this state we have used three controlled rotations and one ancilla photon. Since we are only concerned with preparing a resource state which in principle can be stored, we can perform post-selection at the teleportations to ensure the state preparation is of high fidelity, as described in Section II B.
After this gate teleportation onto qubits, we teleport back to the qubus modes after a possible X correction operation. The overall circuit is shown in Fig. (5). This CSIGN gate requires four controlled rotations. As with the single qubit gates, Z̃ corrections may be necessary after the final teleportations of Fig. (5), but these corrections can also be delayed until the next single qubit gate. We can see what effect the condition αθ² ≲ 1 has on the function of the gate in Fig. (5) by looking at the process fidelity. As this gate operates on two qubits, the input state to the process we want to compare consists of two maximally entangled pairs, with the gate acting on one qubit of each pair. From the basis states we have defined, we expect the corresponding maximally entangled output, but the state output from Fig. (5) is unnormalised and depends on the outcomes x and x′ of the two homodyne measurements (top and bottom in Fig. (5), respectively). For simplicity, we disregard the final teleportations back to qubus modes, as we have already discussed how they affect the average fidelity of the state in Section II. Since we have two homodyne measurements to consider, we need to look at the four cases: (i) x greater than x_0 and x′ greater than x_0; (ii) x greater than x_0 and x′ less than x_0; (iii) x less than x_0 and x′ greater than x_0; (iv) x less than x_0 and x′ less than x_0. The necessary correction for case (i) is the identity, with the appropriate Pauli corrections applied in the remaining cases. Integrating over x and x′ for these four different regions, one finds the process fidelity, Eq. (17), which just corresponds to the square of the process fidelity for a one-bit teleportation into qubits, as the only source of failure is the indistinguishability of the basis states for qubus logic. A plot showing how this fidelity scales as a function of x_d is shown in Fig. (6).
Post-selected implementation of the entangling gate
We can counteract the reduction in fidelity shown in Fig. (6) in a similar way to the single qubit gate case, by only accepting measurement outcomes less than x_0 − y and greater than x_0 + y. We find the corresponding success probability and conditional fidelity. As before, we see that the process fidelity can be made arbitrarily close to 1 at the cost of lower probability of success. It should also be immediately clear that as y → 0, we have P_CSIGN → 1 and F_CSIGN,y → F_CSIGN. We see the effect of ignoring some of the homodyne measurements in Fig. (7). Even though performance is degraded because of the use of two one-bit teleportations, the general scalings of the fidelity and probability of success with respect to y and x_d are similar to the one-bit teleportation. In particular, we see that the fidelity can be increased significantly by increasing x_d (or equivalently, α).
C. Comparison between Qubus Logic and Coherent State Logic
The total number of controlled rotations necessary to construct our universal set of quantum gates on qubus logic, consisting of an arbitrary single qubit rotation and a CSIGN gate, is nine - the construction of an arbitrary single qubit gate required two controlled rotations and the construction of a CSIGN gate required seven, three for the entanglement production and four for the gate operation. This is in contrast to the sixteen controlled rotations (where we assume each controlled rotation is equivalent to a cat state ancilla) necessary for a universal set of gates in coherent state logic [12,13,14], where an arbitrary single qubit rotation is constructed via exp(−i(ϑ/2)Z) exp(−i(π/4)X) exp(−i(ϕ/2)Z) exp(i(π/4)X), with each rotation requiring two cat state ancilla, and a CNOT gate requiring eight cat state ancilla.
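The controlled-rotation counts quoted in this comparison can be tallied directly (the numbers are exactly those given in the text):

```python
# Controlled rotations per gate construction in qubus logic (from the text):
qubus = {
    "single-qubit unitary": 2,
    "CSIGN entanglement preparation": 3,
    "CSIGN gate operation": 4,
}
qubus_total = sum(qubus.values())

# Coherent state logic [12,13,14]: sixteen cat-state ancillas for a
# universal gate set, with each cat state costed as one controlled rotation.
coherent_total = 16

print(qubus_total, coherent_total)  # 9 vs 16
```

The tally makes the claimed saving explicit: nine controlled rotations for the qubus-logic universal set versus sixteen for the coherent-state-logic one.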
As a further comparison we compare the resource consumption of the qubus logic scheme with the recent extension to the coherent state logic scheme by Lund et al. [23] that considers small amplitude coherent states. In this scheme gate construction is via unambiguous gate teleportation, where the failure rate for each teleportation is dependent on the size of the amplitude of the coherent state logical states. Each gate teleportation requires offline probabilistic entanglement generation. On average, an arbitrary rotation about the Z axis would require three cat state ancilla and both the Hadamard and CSIGN gate would each require 27 cat state ancilla.
The scheme proposed here yields significant savings compared to previous schemes in terms of the number of controlled rotations necessary to apply a universal set of gates on coherent states.
IV. CONSTRUCTION OF CLUSTER STATES
As we have pointed out in the previous section, the GHZ preparation scheme used for fault-tolerant error correction with strong coherent beams [5] can be used to perform CSIGN gate teleportation. This approach can be generalised to aid in the construction of cluster states [6], as GHZ states are locally equivalent to star graph states [24,25]. Once we have GHZ states we can either use CNOT gates built with the aid of a qubus [11,17] to deterministically join them to make a large cluster state, or use fusion gates [8] to join them probabilistically.
Recent work by Jin et al. [26] showed a scheme to produce arbitrarily large cluster states with a single coherent probe beam. In this scheme, N copies of the state (|H⟩ + |V⟩)/√2 can be converted into the GHZ state (|H⟩^{⊗N} + |V⟩^{⊗N})/√2 with the use of N controlled rotations and a single homodyne detection. However, the size of the controlled rotations necessary scales exponentially with the size of the desired GHZ state - the Nth controlled rotation would need to be 2^{N−1} − 1 times larger than the first controlled rotation applied to the probe beam. For example, if we consider an optimistic controlled rotation θ of order 0.1, once N reaches 10 we would require a controlled rotation on the order of π, which is unfeasible for most physical implementations. In the next section we describe how to prepare GHZ states that only require large amplitude coherent states, while using the same fixed controlled rotations θ and −θ.
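The exponential growth of the required rotation in the single-probe scheme of Jin et al. [26] is easy to tabulate; here θ = 0.1 as in the text's example:

```python
import math

theta0 = 0.1  # size of the first (physically feasible) controlled rotation

# The N-th controlled rotation must be 2^(N-1) - 1 times larger than the first.
for n in range(1, 11):
    required = (2**(n - 1) - 1) * theta0
    print(n, required)
# By N = 10 the required rotation is about 51 rad, far beyond pi.
```

This is the unfeasibility the text points to: well before N = 10 the demanded phase-space rotation exceeds π, whereas the GHZ construction of the next section keeps every rotation fixed at ±θ.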
A. GHZ State Preparation and Repetition Encoding
We mentioned a scheme in the previous section to construct the Bell state (|00⟩ + |11⟩)/√2, but this can be generalised to prepare GHZ states of any number of subsystems. We first start with the state (|0⟩ + |1⟩)/√2 and teleport it to a qubus initially in the larger amplitude state |√N α⟩. This will give (|√N α⟩ + |√N αe^{iθ}⟩)/√2. Sending this state through an N port beam splitter with N − 1 vacuum states in the other ports gives (|α⟩^{⊗N} + |αe^{iθ}⟩^{⊗N})/√2.
Each of these modes can then be teleported back to qubits, yielding (|0⟩^{⊗N} + |1⟩^{⊗N})/√2. The resources that we use to make a GHZ state of size N are N + 1 controlled rotations, N + 1 single qubit ancillas, a single qubit measurement and N homodyne detections. This circuit can also function as an encoder for a quantum repetition code, in which case we can allow any input qubit state a|0⟩ + b|1⟩ and obtain an approximation to a|0⟩^{⊗N} + b|1⟩^{⊗N}. In order to evaluate the performance of this process, we once again calculate the process fidelity by using the input state (|00⟩ + |11⟩)/√2 and acting on the second subsystem. Using a generalisation of Eqn. (17) we calculate the effect of αθ² ≲ 1 on the production of a GHZ state of size N, given in Eq. (21). Again, this corresponds to the process fidelity of a single one-bit teleportation into a qubit raised to the Nth power, F_REP = F_p^N. The fidelity of preparing repetition encoded states therefore drops exponentially in N. In Fig. (8) we show the fidelity as a function of x_d for N = 3 and for N = 9.
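Because the encoding fidelity is the single-teleportation fidelity raised to the N-th power, even a small per-teleportation infidelity compounds. A quick illustration with a hypothetical per-teleportation fidelity (not a value computed in the paper):

```python
f_p = 0.99  # hypothetical process fidelity of one one-bit teleportation

for n in (3, 9):
    f_rep = f_p ** n  # F_REP = F_p ** N
    print(n, f_rep)
# A 1% per-teleportation infidelity compounds noticeably as N grows.
```

This is the exponential decay in N shown in Fig. (8), and it is exactly the loss that the post-selected teleportations of the next subsection are designed to counteract.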
B. Post-selected Implementation of GHZ State Preparation and Repetition Encoding
The reduction in fidelity due to αθ² ≲ 1 in Eq. (21) can be counteracted, as before, by simply performing post-selection during the one-bit teleportations into the qubits.
We find the corresponding success probability and conditional fidelity. As y → 0 we see that P_REP → 1 and F_REP,y → F_REP. The effect of discarding some of the states corresponding to undesired homodyne measurement outcomes can be seen in Figs. (9) and (10). Thus, as discussed in Section II B, one can prepare a state encoded in the repetition code with an arbitrarily high process fidelity, regardless of what θ and α are. The expected degradation in performance due to the additional teleportations is also evident in the faster decay of the probability of success with larger y.
V. DISCUSSION
We have described in detail various uses for one-bit teleportations between a qubit and a qubus. Using these teleportations, we proposed a scheme for universal quantum computation, called qubus logic, which is a significant improvement over other proposals for quantum computation using coherent states. This scheme uses fewer interactions to perform the gates, and also allows for the use of post-selection to arbitrarily increase the fidelity of the gates given any interaction strength at the cost of lower success probabilities. The one-bit teleportations also allow for the preparation of highly entangled N party states known as GHZ states, which can be used in the preparation of cluster states. Moreover, the same circuitry can be used to encode states in the repetition code which is a building block for Shor's 9 qubit code. In this case, where we are interested in preparing resource states, the power and flexibility of post-selected teleportations can be fully exploited, as the achievable fidelity of the state preparation is independent of the interaction strength available.
The main property of the qubus which is exploited in the schemes described here is the fact that entanglement can be easily created in the qubus through the use of a beam splitter. Local operations, on the other hand, are easier to perform on a qubit. The controlled rotations allow for information to be transferred from one system to the other, allowing for the advantages of each physical system to be exploited to maximal advantage.
The fidelity suffers as the operations become more complex, as can be seen in Figs. (11) and (12). This is because multiple uses of the imperfect one-bit teleportation from qubus to qubit are needed. As the process fidelity is less than perfect, error correction would have to be used for scalable computation. However, as we have discussed, since the homodyne measurements essentially herald the fidelity of the operations, it is possible to use post-selection in conjunction with error heralding to optimise the use of physical resources.
While the scheme presented has been abstracted from particular physical implementations, any physical realisations of a qubit and a continuous variable mode would suffice. The only requirements are controlled rotations, along with fast single qubit gates and homodyne detection, which are necessary to enable feed-forward of results for the implementation of the relevant corrections.
SP16. Are Drains Necessary In Gender Affirming Top Surgery? Outcomes From A Consecutive Series Of 40 Mastectomies
Purpose: Angiosarcoma (AS) is a rare, highly aggressive malignant tumor of the vascular endothelium, constituting only 1-2% of all soft tissue sarcomas. It predominantly manifests as cutaneous AS, with the head and neck (HN) region being the most frequently affected site. AS has a high rate of recurrence and metastasis, with overall survival ranging from 6 to 16 months. Given its infrequency, there is also a lack of consensus regarding the factors influencing a worse prognosis and the potential effects of different treatments on prognosis and disease progression. This study aims to gain insight into the factors contributing to the observed differences in tumor spread patterns and overall survival (OS) among different racial groups and explore racial disparities in head and neck AS. Methods: We conducted a retrospective analysis of HN AS patients treated at Cleveland Clinic from 1997 to 2017. We extracted patient, tumor, and treatment data from the electronic medical record and used a Cox proportional hazards model to identify factors affecting OS among racial groups. Results: 46 patients were diagnosed with cutaneous HN AS during the study period. Most cases (95.5%) involved the face or scalp. Patients presented at a median age of 78 and with a median tumor size of 4.05 cm. Significant differences were observed between White (n=15) and Black (n=19) patients. Although recurrence rates were comparable between groups, Black patients experienced recurrences at a rate twice as fast (810 days vs 451 days).
Conclusion: Significant differences were observed in recurrence patterns and overall survival between White and Black patients. These findings suggest the possibility of a more aggressive subtype among the Black population, necessitating customized treatment approaches. Comprehensive investigations into the genetic factors and underlying mechanisms are required to clarify the root cause of this disparity. Purpose: Gender-affirming mastectomy (top surgery) treats gender dysphoria and provides a gender congruent chest for transgender male and non-binary patients. Closed-suction drains are commonly utilized to mitigate seromas. However, drains may increase scarring, pain, and number of clinic visits. This study aims to evaluate clinical outcomes and adverse events in patients undergoing drainless top surgery.
Methods: An IRB-approved retrospective chart review was performed on a series of 20 patients who underwent bilateral drainless top surgery between 2021 and 2022. In the surgical approach, quilting sutures were used to anchor the mastectomy flaps and eliminate the dead space. Age, BMI, smoking status, diabetes, mastectomy specimen weights and postoperative pain measurements were collected. Two-sample t-tests and Fisher's exact test were used to assess significance in risk factors and adverse events.
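For reference, the two-sided Fisher's exact test used here can be computed from the hypergeometric distribution. The sketch below uses only the Python standard library and a classic illustrative 2x2 table (not data from this study):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]]: sum the probabilities of all hypergeometric
    outcomes no more likely than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def pmf(k):
        # P(first row contributes k of the col1 "successes")
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = pmf(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(pmf(k) for k in range(lo, hi + 1)
               if pmf(k) <= p_obs * (1 + 1e-12))

# Classic "lady tasting tea" table: p = 34/70 ~ 0.486
print(fisher_exact_two_sided(3, 1, 1, 3))
```

In practice one would use a vetted implementation such as `scipy.stats.fisher_exact`; the point here is only to show what the reported p-values measure for small-count complication tables like those in this study.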
Results:
The follow-up period ranged from 3 to 12 months. Among this cohort there was one major complication (pulmonary embolism requiring anticoagulation therapy) and two minor complications (surgical site cellulitis resolved with antibiotics, and a small self-resolving hematoma). There were no seromas or reoperations. The average BMI and age of these patients were 31.59 ± 5.30 kg/m² and 25.45 ± 5.74 years, respectively. There was no significant difference in demographic data, medical comorbidities or mastectomy weight in patients who had major or minor complications versus those who did not (p>0.05).
Conclusion:
Gender-affirming top surgery is not without risk. However, our results showed no increased risk of adverse events with the drainless technique in this cohort. Background: Gender affirmation surgeries (GAS) have increased in recent years due to expanded sociocultural acceptance and improvements in insurance coverage. However, some surgeons employ body mass index (BMI) criteria (usually ≥35 kg/m²) for surgical candidacy, thereby limiting access among patients with morbid obesity. This study aims to characterize the effect of morbid obesity (i.e., BMI ≥35 kg/m²) on postoperative complications following a variety of vaginoplasty techniques.
Methods: A single-center retrospective review of all transgender and non-binary patients undergoing vaginoplasty (penile inversion vaginoplasty (PIV), peritoneal flap vaginoplasty (PFV), or intestinal segment vaginoplasty (ISV)) from December 2018 to April 2023 was conducted. Per the World Health Organization, Class II/III obesity was defined as BMI ≥35 kg/m². Patient characteristics, perioperative details, and postoperative complications were collected. Postoperative complications were categorized into short-term (<30 days) and long-term (≥30 days).
Conclusion:
Patients with a BMI ≥35 kg/m² may be at high risk of developing complications following vaginoplasty, which may ultimately necessitate operative revision. While patients with a BMI ≥35 kg/m² should remain candidates for vaginoplasty, proper preoperative counseling is critical to setting expectations and optimizing outcomes.
Decentralized control using selectors for optimal steady-state operation with changing active constraints
We study the optimal steady-state operation of processes where the active constraints change. The aim of this work is to eliminate or reduce the need for a real-time optimization layer, moving the optimization into the control layer by switching between appropriately selected controlled variables (CVs) in a simple way. The challenge is that the best CVs, or more precisely the reduced cost gradients associated with the unconstrained degrees of freedom, change with the active constraints. This work proposes a framework based on decentralized control that operates optimally in all active constraint regions, with region switching mediated by selectors. A key point is that the nullspace associated with the unconstrained cost gradient needs to be selected in accordance with the constraint directions so that selectors can be used. A main benefit is that the number of SISO controllers that need to be designed is only equal to the number of process inputs plus constraints. The main assumptions are that the unconstrained cost gradient is available online and that the number of constraints does not exceed the number of process inputs. The optimality and ease of implementation are illustrated in a simulated toy example with linear constraints and a quadratic cost function. In addition, the proposed framework is successfully applied to the nonlinear Williams–Otto reactor case study.
Introduction
The integration of optimization and control is very important when designing the control system for a process. The main objective of the control system is to keep the process stable and operating at the economically optimal operating point. Although these two objectives can be assessed simultaneously, for example using economic model predictive control (EMPC) [1], a simpler, and in most cases equally optimal, approach is to decompose the system hierarchically into an optimization and a control layer as shown in Fig. 1, where setpoints are used to connect the two layers. The setpoints may need to be updated due to disturbances that affect the process economics. In the standard implementation in Fig. 1, the real-time optimization (RTO) and setpoint update is performed on a slow time scale based on a detailed nonlinear process model and the estimated states of the process. In most cases, the RTO layer is static.
Based on the concept of Morari et al. [2] of feedback optimizing control, the aim of the current paper is to move the real-time optimization, or at least parts of it, into the control layer. A recent review on this topic is given in Krishnamoorthy and Skogestad [3], where the authors state some of the challenges with RTO implementation, including the cost of developing the model, the uncertainty related to the model and its parameters (or disturbances), and human aspects related to the maintenance of an optimization layer in addition to the already existing digital control system (DCS). The importance of feedback optimizing control lies in being able to reject disturbances that affect economic performance in a simple manner, without relying on an upper optimization layer that may sometimes not even exist. To that end, an appropriate selection of the controlled variables (CVs) for the control layer is important. This is the main idea of self-optimizing control [4]. It is particularly important to include the active constraints as CVs, that is, the constraints that are optimally at their limiting value [2,5]. If information about the cost gradient is available, the optimal CVs are the active constraints plus the reduced cost gradients, and by controlling these at a constant setpoint of zero we may eliminate the optimization layer [6]. This choice of CVs is valid if the set of active constraints does not change in the considered operating region.
https://doi.org/10.1016/j.jprocont.2024.103194 Received 5 October 2023; Received in revised form 24 February 2024; Accepted 3 March 2024
Fig. 1. Standard optimizing control implementation with separate layers for real-time optimization (RTO) and control (which can be, e.g., MPC or PID). J denotes the (economic) cost function to be minimized, f the process model, g the process constraints, x the model states, d the disturbances, and u the process inputs (MVs).
Dealing with changes in active constraints has been a concern in previous works. For example, Cao [7] implemented a cascade control structure with selectors to avoid constraint violation by the lower self-optimizing layer, and Graciano et al. [8] applied MPC with zone control to the same end. A global self-optimizing control method for changing active constraints has been proposed by Ye et al. [9], where the goal is to minimize the average loss obtained with a single set of CVs. However, in a new active constraint region, not only do the active constraints change, but the directions related to the reduced cost gradient change accordingly. This means that to eliminate the RTO layer one needs to change the control layer in Fig. 1 during operation, both in terms of the selected CVs and the corresponding feedback controller C. With this perspective, Manum and Skogestad [10] have considered a centralized, steady-state analysis of switching control structures, with different CVs for each region.
However, the implementation of such a region-based control strategy quickly becomes impractical. This is because the number of active constraint regions grows exponentially with the number of constraints. Let n_u denote the number of process inputs or manipulated variables (MVs) and n_g the number of independent constraints. The upper bound on the number of active constraint regions is 2^{n_g}, which is reached when all constraint combinations are feasible [11]. In each region, we ideally need a new controller C, and if we want to use decentralized control then we need to design n_u single-input single-output (SISO) controllers in each region. For example, with n_g = 4 and n_u = 5, there could be up to 2^4 = 16 constraint regions, which may require the tuning of 2^{n_g} ⋅ n_u = 16 ⋅ 5 = 80 SISO controllers. Even though some CVs are reused between regions, the number of necessary SISO loops will be high.
The key contribution of this paper is to propose a simple and generic region-based control structure with only n_g + n_u SISO controllers, as represented in Fig. 2, with the same set of unconstrained variables (c_0 and c_{0,i}) in all operating regions. Considering the previous example, this structure would have only 4 + 5 = 9 SISO controllers. In the paper, we show that the unconstrained variables are obtained from projections of the full cost gradient with respect to the inputs, ∇_u J. This leads to gradient controllers (C_0 and C_{0,i}) and constraint controllers (C_{c,i}). However, at any given time, only a subset with n_u of the n_g + n_u controller outputs is implemented as process inputs, with the switching logic choosing between the controller outputs u_{0,i} and u_{c,i}.

Fig. 2. Proposed optimizing control implementation, assuming n_u ≥ n_g. The controllers C_0, C_{0,i} and C_{c,i} are usually single-variable PID controllers. The projection (nullspace) matrices N_0 and N_c are defined in Eq. (7) and Eq. (8), respectively. There is no CV c_0, controller C_0, or input u_0 if n_u = n_g. Note that the optimization layer in Fig. 1 is eliminated, and an estimate ∇_u Ĵ of the cost gradient is needed. The switching logic takes care of the change between active constraint regions. In this paper, this logic is decentralized to individual blocks, see Fig. 3, which can be implemented as min or max selectors according to Theorem 3.
The second key contribution of this paper is to show that the switching logic in Fig. 2 can be effectively implemented using min or max selectors, which are well-known advanced control elements and commonly used in practical control applications. An important decision is to pair each constraint to an MV, but this pairing problem is not addressed in this paper (the interested reader is referred to Skogestad and Postlethwaite [12]). The main assumptions in this work are that we have at least as many MVs as constraints (n_u ≥ n_g), and that an estimator for the unconstrained cost gradient ∇_u J is available. In terms of cost gradient estimation, there are several methods available (see Krishnamoorthy and Skogestad [3]), and in this work, we use the simple model-based approach of dynamic state estimation and model linearization proposed by Krishnamoorthy et al. [13].
Selectors have been used in industry to switch between CVs since the 1940s [14]. Selectors are also used in academic case studies on optimal operation [11,15]. In these case studies, a control structure is proposed for the nominal operating region, with added logic elements and control loops to deal with the neighboring regions. However, the treatment of the unconstrained degrees of freedom is not clear. Krishnamoorthy and Skogestad [16] propose a framework for constraint handling using min and max selectors, focusing on systems with a single MV, and therefore not considering the changes of reduced gradients for the unconstrained variables. To the best of the authors' knowledge, even though a general scheme for the paradigm of region-based control is proposed in the review paper by Krishnamoorthy and Skogestad [3], a systematic procedure for designing a decentralized control structure for optimal operation of generic multivariable systems has not yet been explored, nor has it been established whether there are any fundamental limitations in the design of such systems. In this work, we explore these topics, and we describe a class of multivariable systems for which a decentralized control structure is always possible.
Decentralized control framework for optimal operation
We consider a generic, steady-state optimization problem given by:

min_u J(u, d)  subject to  g(u, d) ≤ 0   (1)

where J is the (economic) cost function, g is the function that returns the vector of n_g inequality constraints, u ∈ R^{n_u} is the vector of decision variables (MVs), and d ∈ R^{n_d} is the vector of disturbances. Note that the states x (see Fig. 1) have been formally eliminated from the equations, such that J and g are functions only of the independent variables u and d. Introduce the Lagrange function L(u, λ, d) = J(u, d) + λᵀ g(u, d). Then, for a given value of d, define u* as the solution of Eq. (1), which satisfies the Karush-Kuhn-Tucker (KKT) conditions [17]:

∇_u L(u*, λ*, d) = ∇_u J(u*, d) + ∇_u g(u*, d)ᵀ λ* = 0
g(u*, d) ≤ 0,  λ* ≥ 0,  (λ*)ᵀ g(u*, d) = 0   (2)

Here, λ is the vector of Lagrange multipliers associated with the inequality constraints, and λ* is its optimal value. We remark that the KKT conditions only imply that the solution is a stationary point, and they are also satisfied by local maxima and saddle points. We do not address these issues in this work, and we consider that the optimization problem in Eq. (1) is convex. While these optimization problems can be efficiently solved using numerical methods, we here focus on how to solve these problems with feedback control. For this, we rewrite the KKT conditions as control objectives, which allows us to embed the optimization into the control layer design.
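As an illustration of these conditions, the following sketch solves a toy convex problem numerically and then checks the KKT conditions; the cost, constraint, and all numbers are our own illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex problem (illustrative): J(u, d) = 0.5*||u - d||^2,
# with a single linear constraint g(u) = u1 + u2 - 1 <= 0.
d = np.array([1.0, 1.0])
J = lambda u: 0.5 * np.sum((u - d) ** 2)
g = lambda u: u[0] + u[1] - 1.0              # g(u) <= 0

res = minimize(J, x0=np.zeros(2),
               constraints=[{"type": "ineq", "fun": lambda u: -g(u)}])
u_star = res.x                               # optimum: u* = [0.5, 0.5]

# Verify the KKT conditions (2) numerically:
grad_J = u_star - d                          # ∇_u J at u*
grad_g = np.array([1.0, 1.0])                # ∇_u g (constant here)
lam = -grad_J @ grad_g / (grad_g @ grad_g)   # least-squares multiplier
assert np.allclose(grad_J + lam * grad_g, 0, atol=1e-5)  # stationarity
assert g(u_star) <= 1e-5                     # primal feasibility
assert lam >= -1e-5                          # dual feasibility
assert abs(lam * g(u_star)) <= 1e-5          # complementary slackness
```

The feedback structures developed below aim to enforce exactly these conditions without calling a numerical solver online.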
The set of active constraints A is defined as the set of constraints that satisfy g_i(u*, d) = 0 for i ∈ A. For convenience, define g_A as the function that returns the active constraints. Define the matrix:

G = ∇_u g(u, d)ᵀ   (3)

as the gradient of the constraints with respect to the MVs, and the matrix G_A = ∇_u g_A(u, d)ᵀ as the gradient of the active constraints with respect to the MVs. If the set of active constraints is known, Jäschke and Skogestad [6] prove that optimality can be attained by controlling to zero the active constraints and the associated reduced cost gradient. Their result is given by the following theorem:

Theorem 1 (Optimal Controlled Variables). Consider the optimization problem in Eq. (1), where we assume that the linear independence constraint qualification (LICQ) holds. We assume that the set of optimally active constraints is known. Let N ∈ R^{n_u × (n_u − n_A)} be a basis for the nullspace of G_A such that:

G_A N = 0   (4)

Further, define the reduced cost gradient as:

∇J_r(u, d) = Nᵀ ∇_u J(u, d)   (5)

Then controlling g_A(u, d) = 0 and ∇J_r(u, d) = 0 results in optimal steady-state operation.
Proof ([6]). If the active constraints are known, the necessary optimality conditions (2) are equivalent to:

∇_u J(u*, d) + G_Aᵀ λ_A* = 0,  g_A(u*, d) = 0   (6)

where λ_A* > 0 is the optimal vector of Lagrange multipliers for the active constraints. Premultiplying ∇_u L(u*, λ_A*, d) by Nᵀ leads to:

Nᵀ ∇_u J(u*, d) + Nᵀ G_Aᵀ λ_A* = 0

Since by definition G_A N = 0, the optimality conditions are equivalent to g_A(u*, d) = 0 and Nᵀ ∇_u J(u*, d) = 0, which are equations that fully determine u* because the combined matrix [G_Aᵀ N] has full rank, and the associated optimal Lagrange multiplier can always be found as λ_A* = −(G_A G_Aᵀ)^{−1} G_A ∇_u J(u*, d). Therefore, enforcing g_A(u*, d) = 0 and Nᵀ ∇_u J(u*, d) = 0 leads to satisfying (6), which is equivalent to satisfying (2). □

In terms of feedback control, Theorem 1 says that g_A and ∇J_r (both with setpoints 0) are the steady-state optimal CVs for a given operating region where the active constraints do not change. Here, the reduced cost gradient ∇J_r = Nᵀ ∇_u J is defined as the gradient in the unconstrained directions as given by the nullspace of the active constraints [6]. If the system is to operate at another active constraint region, however, the CVs need to change, and if shifts in operating regions happen in real time, the control system needs to automatically detect these region switches. The main idea of this work is to design a decentralized control structure, see Fig. 2, for all possible active constraint regions of the optimization problem in Eq. (1). The main assumption for guaranteeing the existence of this decentralized control structure is as follows:

Assumption 1. The matrix G is always full row rank, and the number of constraints is not greater than the number of MVs, that is, rank(G) = n_g and n_u ≥ n_g.
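A minimal numerical sketch of the CVs of Theorem 1, using an illustrative active-constraint Jacobian and cost gradient (the values are assumptions for the example, not from the paper):

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative active-constraint Jacobian G_A (n_A = 1, n_u = 3):
G_A = np.array([[1.0, 2.0, 0.0]])

N = null_space(G_A)                  # basis for the nullspace: G_A @ N = 0
grad_J = np.array([2.0, -1.0, 0.5])  # hypothetical ∇_u J at the current point

reduced_grad = N.T @ grad_J          # ∇J_r = N^T ∇_u J, Eq. (5)
assert np.allclose(G_A @ N, 0)       # N spans the unconstrained directions
print(reduced_grad.shape)            # one entry per unconstrained direction
```

In the feedback implementation, each entry of `reduced_grad` is a CV that a controller drives to its setpoint of zero.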
This not only guarantees LICQ for any set of constraints that may be optimally active, but it also guarantees the existence of decoupled CVs for optimal operation, as shown in the next theorem. For use in the next theorem, define N_0 as an orthonormal basis of the nullspace of G, that is:

G N_0 = 0,  N_0ᵀ N_0 = I   (7)

The matrix N_0 represents the unconstrained directions that are never in conflict with constraint control. Note here that N_0 is an empty matrix (nonexistent) if we have as many constraints as inputs (n_g = n_u). Further define G_{−i} as the matrix containing all but the ith row of G, and define:

N_c = [n_{c,1} … n_{c,n_g}]   (8)

as a matrix of n_g columns, where each column n_{c,i} is a unitary vector such that:

G_{−i} n_{c,i} = 0,  N_0ᵀ n_{c,i} = 0   (9)

Each vector n_{c,i} represents the direction that may conflict with the corresponding constraint g_i, as shown next.
Theorem 2 (Optimal Switching Between CVs). Given that Assumption 1 holds and that the active constraint index set is A, the following control strategy allows for optimal operation: control c_0 = N_0ᵀ ∇_u J(u, d) = 0, control g_i(u, d) = 0 for i ∈ A, and control c_{0,i} = n_{c,i}ᵀ ∇_u J(u, d) = 0 for i ∉ A.

Proof. To prove Theorem 2, it is sufficient to prove that the controlled variables are equivalent to the necessary first-order optimality conditions. Firstly, it is useful to note that, due to its construction, G n_{c,i} = (g_{u,i}ᵀ n_{c,i}) ê_i, with g_{u,i}ᵀ being the ith row of G, and ê_i being the ith unit vector from the standard basis. Additionally, if the active constraint set is A, and the inactive constraint set is Ā = {1, …, n_g} − A, the optimality conditions can be written as:

∇_u J(u*, d) + G_Aᵀ λ_A* = 0,  g_A(u*, d) = 0,  λ_i* = 0 for i ∈ Ā

Let N_{c,Ā} be the matrix with columns equal to n_{c,i} for i ∈ Ā. Then, premultiplying ∇_u L by [N_0 N_{c,Ā}]ᵀ leads to:

N_0ᵀ ∇_u J(u*, d) = 0,  n_{c,i}ᵀ ∇_u J(u*, d) = 0 for i ∈ Ā

Here, (G n_{c,i})ᵀ λ_A* = 0, because G n_{c,i} = (g_{u,i}ᵀ n_{c,i}) ê_i, and, from the optimality conditions, ê_iᵀ λ* = λ_i* = 0 for i ∈ Ā. Therefore, the optimality conditions become the CVs proposed above, in addition to g_i(u, d) = 0 for i ∈ A. Similarly to Theorem 1, this fully defines the operational degrees of freedom, and a suitable vector of Lagrange multipliers can be found. □

Note that the matrix [N_0 N_{c,Ā}] used in the proof of Theorem 2 is a particular parametrization of the nullspace matrix N from Theorem 1, and therefore both results are equivalent for a given active constraint region. In Theorem 2, however, we specify an ideal association between CVs such that the handling of region switching may be done in a decentralized fashion, avoiding changes in the rest of the control structure. For instance, if the ith constraint changes from inactive to active, only the corresponding unconstrained degree of freedom c_{0,i} = n_{c,i}ᵀ ∇_u J will become uncontrolled, and the remaining CVs are kept unaltered. In addition, the resulting matrix is by design full row rank, and therefore all operational degrees of freedom are filled for any active set A. The choice of building the vectors n_{c,i} unitary and orthogonal to N_0 is purely for the uniqueness of the solution, as one could propose another projection n'_{c,i} = α n_{c,i} + N_0 w for any nonzero scaling factor α and any vector w, and optimal operation would still be attained, as N_0ᵀ ∇_u J is always optimally zero.
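The construction of N_0 and the columns n_{c,i} can be sketched as follows, using a hypothetical 2×3 constraint gain matrix (the function and matrix are our own illustration, not the paper's):

```python
import numpy as np
from scipy.linalg import null_space

def projection_matrices(G):
    """Build N_0 (always-unconstrained directions, Eq. (7)) and the columns
    n_{c,i} (direction conflicting only with constraint i, Eqs. (8)-(9))."""
    n_g, n_u = G.shape
    N0 = null_space(G)                       # G @ N0 = 0, orthonormal columns
    Nc = np.zeros((n_u, n_g))
    for i in range(n_g):
        G_minus_i = np.delete(G, i, axis=0)  # all rows of G except row i
        # n_{c,i} must be orthogonal to the other rows AND to N0:
        A = np.vstack([G_minus_i, N0.T])
        v = null_space(A)[:, 0]
        Nc[:, i] = v / np.linalg.norm(v)     # make it a unit vector
    return N0, Nc

# Hypothetical 2x3 constraint gain matrix:
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
N0, Nc = projection_matrices(G)
assert np.allclose(G @ N0, 0)
for i in range(2):
    assert np.allclose(np.delete(G, i, axis=0) @ Nc[:, i], 0)
    assert np.allclose(N0.T @ Nc[:, i], 0)
```

Note that each n_{c,i} has, by construction, a nonzero inner product only with the ith row of G, which is what makes decentralized switching possible.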
Theorem 2 states a general set of feedback control objectives to attain optimal operation. It does not specify the type of controller to be used, and one may apply these results to obtain optimal operation with conventional tracking MPC with switching objectives to eliminate the RTO layer. This would be useful for cases where decentralized control performs poorly but one still wishes to keep a simple control layer. In this work, however, we choose to explore the implications of this result for decentralized control, which is often more easily implemented in practice.
Pairing of MVs and CVs. It should be noted that Theorem 2 makes no distinction about the pairing between MVs and CVs, and it is left for the practitioner to make this pairing taking into account controllability and performance aspects. However, the theorem states the optimal association between CVs for region switching, which means that the control of c_i = g_i and c_{0,i} = n_{c,i}ᵀ ∇_u J must be performed by the same MV in the case of a decentralized framework. From now on, it is considered that the MVs are ordered such that u_i is used to control the pair c_i = g_i and c_{0,i} for i ≤ n_g.

These considerations lead to the control structure presented in Fig. 3. Here, ∇_u Ĵ represents the estimate of the cost gradient ∇_u J, C_{c,i} represent the individual constraint controllers, C_{0,i} represent the individual gradient controllers that are conditionally active (i.e., only one of C_{c,i} and C_{0,i} is active at any given time), and C_0 represent the individual gradient controllers that are always active. It is important that the controllers C_{c,i} and C_{0,i} include anti-windup action so that the integral modes in the inactive controllers do not grow indefinitely. We finally focus on the applicability of min/max selectors as the logic elements to switch between active constraint regions, which were left undetermined in Fig. 3 as ''select'' blocks. These selectors are applied on the controller outputs u_{c,i} and u_{0,i} associated with the controlled variables c_i and c_{0,i}, respectively, resulting in the process input (MV) u_i to be applied to the system. This methodology was adopted in Krishnamoorthy and Skogestad [16] for optimal operation in the scalar case, i.e., with a single MV, where it was concluded that a constraint with a positive gain (G > 0) requires a min selector, whereas a negative gain (G < 0) requires a max selector. In the next theorem, we present similar results for the multivariable case. For this, consider the control structure in Fig. 3 and Theorem 2, and assume that every possible closed-loop subsystem is stable.

Fig. 3. Decentralized control structure for optimal operation according to Theorem 2. The ''select'' blocks are usually max or min selectors (see Theorem 3).
Theorem 3 (Decentralized Control: Applicability of Min/Max Selectors). In addition to Assumption 1, assume that the Hessian of the cost function with respect to the inputs is constant and positive definite, that is, H = ∇²_{uu} J > 0, and that the constraint gain matrix G is constant.

Let u_{0,i} denote the value of u_i that controls c_{0,i} = n_{c,i}ᵀ ∇_u J(u, d) = 0, and let u_{c,i} denote the value of u_i that controls g_i(u, d) = 0. For a given active set A, let N_A denote the associated nullspace of the active constraint gain matrix G_A, and define the scaled projection matrix P_A and the transformed constraint gain matrix G̃_A as in Eq. (12). The optimal input is given by u_i* = min(u_{0,i}, u_{c,i}) if the ith diagonal element of the transformed gain matrix is positive ((G̃_A)_{ii} > 0) for any active set A that does not include i. Conversely, the optimal input is given by u_i* = max(u_{0,i}, u_{c,i}) if (G̃_A)_{ii} < 0 for any active set A that does not include i.

Proof. See Appendix A. □

If (G̃_A)_{ii} changes sign for different active sets, a single type of selector would not account for all theoretical regions. The single-input case of Theorem 3 can be easily verified by writing u_c − u_0 = −H^{−1} G λ. As H > 0 for a convex optimization problem, G > 0 leads to u* = min(u_0, u_c), and G < 0 leads to u* = max(u_0, u_c), which is equivalent to the result in Krishnamoorthy and Skogestad [16].
Can some of the assumptions in Theorem 3 be removed? According to Theorem 3, the use of max- (or min-) selectors in Fig. 3 assumes that (G̃_A)_{ii} remains positive (or negative) for any active set A that does not include i. This is to rule out cases where the steady-state gain for control of the constraint changes sign, as this would lead to instability with integral action in the controller. In other words, this is to rule out interacting processes where u_i* = min(u_{0,i}, u_{c,i}) for a given active set, and u_i* = max(u_{0,i}, u_{c,i}) for another. However, it is not clear whether this is a restriction in practice. Thus, it is possible that the assumption about no sign change for the diagonal elements (G̃_A)_{ii} is not needed. This is left as an open research issue.
Cascade implementation. It is anyway possible to avoid this restriction by using the cascade switching implementation in Fig. 4. That is, for this implementation the simple selector logic is always optimal without the assumption about the sign of (G̃_A)_{ii} in Theorem 3. In the cascade implementation in Fig. 4, the constraints are always controlled in the lower layer, and the optimal constraint setpoint will either be the value g_{s,i,0} that controls c_{0,i} = 0 or the constraint's limit value of zero itself, such that g_{s,i} = min(g_{s,i,0}, 0) leads to optimal operation. This result can also be obtained by rewriting Theorem 3 in terms of the constraint setpoints, where it can be verified that the sign condition on the transformed gain is always satisfied. This result is presented in Appendix B.
The idea of using cascade control for self-optimizing control and constraint satisfaction was previously proposed in Cao [7]. There, the cost gradient is controlled in the unconstrained case, while the lower layer keeps the system feasible by saturating the setpoint from the upper layer. This approach ensures feasibility and self-optimizing behavior in the unconstrained region, but optimality in all active constraint regions is only ensured by carefully selecting the CVs in the upper layer, which is the main idea of the present work. In addition, even though the cascade structure will always operate optimally at steady state, it requires that the outer controllers C_{0,i} are sufficiently slower than the inner controllers C_{c,i}, and therefore the generic structure in Fig. 3 offers more flexibility in terms of loop tuning and implementation of further control overrides. The simulations presented in this paper are for the implementation in Fig. 3.
Case study 1 - Toy example
In this section, to illustrate the implementation of the proposed control structure, we consider a linear process with a quadratic cost function and 2 linear constraints. The process has 2 dynamic states x, 3 inputs (MVs) u, and 2 disturbances d. The linear state-space model is given in Eq. (13), with τ_1 = 1 and τ_2 = 2. It is assumed that both states are measured, that is, y = C x + D u with C = I and D = 0. The steady-state optimization problem in terms of the states is given in Eq. (14). At steady state, the states can be eliminated to give the static optimization problem in Eq. (15). For given disturbances d, we can solve the problem in Eq. (15) to find the optimal steady-state inputs u* and the active set A. From this, we can graphically represent the active constraint regions as a function of the two disturbances as shown in Fig. 5. Note that this is done for visualization purposes only and is not a part of the proposed method. In fact, for the proposed method we do not need to know what the disturbances are; what is needed is measured or estimated values for the constraints and the unconstrained cost gradient ∇_u Ĵ. We see in Fig. 5 that all 2^{n_g} = 2^2 = 4 combinations of constraints are possible. Each region has a specific set of CVs for optimal operation, namely the active constraints and the corresponding reduced gradients, as given in Theorem 2.
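A region map like Fig. 5 can be produced by solving the static problem over a disturbance grid and recording the active set; the sketch below does this for a stand-in QP with made-up matrices (not the matrices of Eq. (15)):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for Eq. (15): min_u 0.5*||u - d||^2  s.t.  u1 <= 1, u2 <= 1
def active_set(d):
    J = lambda u: 0.5 * np.sum((u - d) ** 2)
    cons = [{"type": "ineq", "fun": lambda u: 1.0 - u[0]},
            {"type": "ineq", "fun": lambda u: 1.0 - u[1]}]
    u = minimize(J, np.zeros(2), constraints=cons).x
    g = [u[0] - 1.0, u[1] - 1.0]
    return tuple(i for i, gi in enumerate(g) if abs(gi) < 1e-5)

# Sweeping the two disturbances classifies each grid point into a region:
regions = {active_set(np.array([d1, d2]))
           for d1 in (-2.0, 2.0) for d2 in (-2.0, 2.0)}
print(sorted(regions))   # all 2^2 = 4 active-set combinations appear
```

As the text notes, this enumeration is only for visualization; the online control structure never needs to know d or the region boundaries explicitly.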
We have n_u = 3 and n_g = 2, so with the proposed method we need to design n_u + n_g = 5 SISO controllers with n_g = 2 selectors to obtain optimal steady-state operation. Since n_u > n_g, we always have n_u − n_g = 1 unconstrained degree of freedom corresponding to the controlled variable c_0 = N_0ᵀ ∇_u Ĵ. This direction N_0 is found from the nullspace of the full G matrix. In addition, there are two unconstrained directions related to the two constraints. We have that c_{0,1} = n_{c,1}ᵀ ∇_u Ĵ should be controlled when g_1 is not active, and c_{0,2} = n_{c,2}ᵀ ∇_u Ĵ should be controlled when g_2 is not active; these directions are obtained from Eq. (8). For designing a decentralized control structure, a pairing between the constraints and the MVs must be performed. From the steady-state gain matrix G we see that u_3 should not be used to control g_1 (because of zero gain). Otherwise, there are no clear restrictions, and g_1 is arbitrarily paired to u_1, and g_2 is paired to u_2. We must require that the corresponding unconstrained optimal CVs are paired accordingly, meaning that c_{0,1} is paired to u_1, c_{0,2} is paired to u_2, and c_0 is paired to u_3.
For selector design, Table 1 shows the transformed constraint gains calculated using Eq. (12) for all active constraint sets, and we verify that the gains are always positive for both constraints. This means that selectors are possible for both control loops and that both selectors should be ''min'' selectors. The resulting control structure is shown in Fig. 6.
The cost gradient is estimated through a relinearization of the dynamic model at the current estimated state, which gives the local linear model in Eq. (16), with state matrix A, input matrix B, and local cost gradients J_x (with respect to the states) and J_u (with respect to the inputs). Setting ẋ = 0, the estimated steady-state cost gradient becomes [13]

∇_u Ĵ = −(J_x A^{−1} B)ᵀ + J_uᵀ   (17)

Fig. 6. Decentralized control structure for case study 1.
For state estimation, the model is augmented to include the disturbances as integrating states, according to:

d/dt [x; d] = [A B_d; 0 0] [x; d] + [B; 0] u   (18)

To estimate the states, a continuous-time Kalman filter is implemented with this augmented model, and the estimated state x̂ and current input u are used to evaluate the matrices in Eq. (16) at all times, leading to the estimated cost gradient ∇_u Ĵ in (17). The state-space matrices are as defined in Eq. (13), and the cost gradients J_x and J_u are calculated from Eq. (14). We emphasize that analytical expressions for these derivatives are available due to the simplicity of this case study, and we encourage the use of automatic differentiation tools to obtain these matrices in more realistic case studies.
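A minimal sketch of the steady-state gradient estimate in Eq. (17), with a made-up linearization (all matrices and gradient values below are illustrative assumptions, not the case-study model):

```python
import numpy as np

# Hypothetical linearization x_dot = A x + B u (+ disturbance terms),
# with local cost gradients Jx (w.r.t. states) and Ju (w.r.t. inputs):
A = np.array([[-1.0, 0.0],
              [0.0, -0.5]])
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Jx = np.array([1.0, 2.0])        # dJ/dx at the current point
Ju = np.array([0.1, 0.2, 0.0])   # dJ/du at the current point

# Setting x_dot = 0 gives x = -A^{-1} B u, so the total steady-state
# sensitivity of J to u (Eq. (17)) is:
grad_J_hat = Ju - Jx @ np.linalg.solve(A, B)
print(grad_J_hat)                # one entry per MV
```

In the closed loop, this evaluation is repeated at every sample using the Kalman filter estimate x̂, so the gradient controllers always act on a fresh estimate.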
The constraint controllers C_{c,1} and C_{c,2} were designed according to the SIMC rules [18] with the choices τ_{c,1} = 0.1 s and τ_{c,2} = 0.01 s. In terms of gradient control, we assume that the effect of the inputs on the estimated ∇_u Ĵ is that of a pure gain process, neglecting any dynamics associated with the gradient estimation, and therefore the gradient controllers C_{0,1}, C_{0,2}, and C_0 become pure integral controllers. These were tuned according to the SIMC rules with τ_c = 0.5 s. All controllers linked to selectors are implemented with anti-windup action based on the back-calculation strategy [14,19], with a tracking time of τ_T = 0.01 s. The resulting controller tunings are summarized in Table 2.
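For reference, the SIMC PI rules used here can be sketched as follows for a first-order-plus-delay process (the numerical example is our own, not a tuning from Table 2):

```python
def simc_pi(k, tau1, theta, tau_c):
    """SIMC PI tuning for a process k*exp(-theta*s)/(tau1*s + 1),
    following ref. [18]: Kc = tau1 / (k*(tau_c + theta)) and
    tau_I = min(tau1, 4*(tau_c + theta))."""
    Kc = tau1 / (k * (tau_c + theta))
    tau_I = min(tau1, 4.0 * (tau_c + theta))
    return Kc, tau_I

# Example with hypothetical process parameters:
Kc, tau_I = simc_pi(k=2.0, tau1=1.0, theta=0.0, tau_c=0.1)
print(Kc, tau_I)   # Kc = 5.0, tau_I = min(1.0, 0.4) = 0.4
```

For the pure-gain gradient loops (tau1 → 0) the proportional part vanishes and only integral action remains, which is why the gradient controllers above reduce to pure integrators.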
The closed-loop simulations are shown in Fig. 7. To validate the optimality of the control structure, the disturbances were changed stepwise every 15 s (see lower left plots) to make the system operate in all four active constraint regions (see lower right plot). It can be seen that constraint changes are effectively handled, with the structure giving up the corresponding gradient projection when a constraint becomes active, and that operation is driven to the optimal steady state for all disturbances.
Case study 2 - Williams-Otto reactor
The control structure proposed in Section 2 depends on using projection matrices. These are constant only when the constraints are linear in the MVs. We now consider a nonlinear case study where this assumption is not satisfied and one may expect economic losses in some regions. The case study is based on the process described by Williams and Otto [20] and studied in [16], see Fig. 8. It consists of a continuously stirred tank reactor with perfect level control, in which A and B are mixed, generating the main product P, the less interesting product E, and the undesired byproduct G, through the reactions A + B → C, C + B → P + E, and P + C → G, with Arrhenius-type rate constants. The component mass balances for the six components give the corresponding set of ODEs. The disturbances include the relative change in the price of the main product, as defined in Eq. (20).
The active constraint regions as a function of the two disturbances are shown in Fig. 9. In contrast to the previous case study, the lines delimiting each region are not straight. This alone would not affect the optimality of the proposed framework, as optimality only requires that the constraints are linear in the MVs. However, since the latter does not hold for this case study, the use of constant projection matrices will lead to some economic loss.
We have n_u = n_g = 2, so Assumption 1 is satisfied. With the proposed method we need to design n_u + n_g = 4 SISO controllers with n_g = 2 selectors. Since n_u = n_g, the system has no completely unconstrained degrees of freedom, so there are no variables c_0 that are always controlled. The gradient projections n_{c,1} and n_{c,2} for the two potentially unconstrained degrees of freedom are computed from Eq. (8). For MV-CV pairing, we choose u_1 to control g_1 and c_{0,1} = n_{c,1}ᵀ ∇_u Ĵ, and u_2 to control g_2 and c_{0,2} = n_{c,2}ᵀ ∇_u Ĵ. This pairing choice was made based on the steady-state RGA for constraint control, which gives λ_{11} = 0.638 for the chosen pairing. For designing the selectors according to Theorem 3, a local analysis of the transformed constraint gains given in Eq. (12) was made at the nominal point and is summarized in Table 5. The projected gains are negative for both constraints regardless of the active set, which means that both selectors should be ''max'' selectors. The resulting control structure is presented in Fig. 10.
To tune the controllers, we obtained first-order transfer functions from the MVs to the constraints (with time in hours), for example g_22(s) = −0.00241/(0.072 s + 1). Based on these, the PI controllers for the constraints were tuned using the SIMC rules [18] with τ_{c,1} = 0.005 h and τ_{c,2} = 0.01 h. As in the previous case study, the gradient estimation is considered to be instantaneous with respect to the inputs, meaning that the gradient controllers become pure integral controllers, tuned using the SIMC rules with τ_c = 0.05 h. All four controllers were implemented with anti-windup action based on back-calculation with a tracking time of τ_T = 0.01 h. The controller tunings are summarized in Table 6.
Closed-loop dynamic simulations are presented in Fig. 11. The disturbances were changed so that all four active constraint regions were explored. Since we consider that the cost gradient ∇_u Ĵ is an available measurement (from an estimator with a perfect model), operation in the fully unconstrained region (from t = 21 h to t = 27 h) is optimal at steady state, which can be seen from the input values converging to the exact steady-state optimal values. Since we assume that the constraints are directly measured (which is a mild assumption), the same logic applies to the fully constrained region from t = 9 h to t = 15 h. In addition, operation is optimal at the nominal point by design. In the two remaining partly constrained regions, the system does not converge exactly to the steady-state optimum, but the constraints are always satisfied (except for short dynamic transients, which may be avoided by introducing a back-off for the constraints). It is interesting to note that for the third set of disturbances (d = [1.0, −0.2] from t = 6 h to t = 9 h), the second constraint (g_2 = 0) is not controlled, even though it should optimally be controlled together with g_1 = 0. Instead, the selector logic results in the control of c_{0,2} = 0, which can be done without violation of g_2; that is, constraint 2 is ''over-satisfied''. The reason for this non-optimal operation is that the selected value for the projection vector n_{c,2} is not optimal in this operating region.
The steady-state economic loss is better visualized as a function of the disturbances in Fig. 12. The highest losses are observed around where we ideally should switch between the partly constrained and the fully constrained regions. The optimal switch between these regions (black lines) does not coincide with the actual switch obtained with the selectors (blue lines). Economic loss is observed before the optimal switch due to the inaccuracy of the projection matrices. For the same reason, and because this further leads to suboptimal performance of the selectors, economic loss is also seen between the optimal and actual switch of CVs. However, the optimal switch between the fully unconstrained and the partly constrained regions coincides with the actual switch between the corresponding CVs. This happens because, before the switch, the full cost gradient ∇_u Ĵ is controlled to zero, leading to zero economic loss, and the constraint becomes active immediately at the switch. Therefore, at this switch, the economic loss is zero, and it continuously grows as the system moves further into the partly constrained region.
Steady-state cost gradient estimation
The results in this paper assume the availability of the steady-state cost gradient ∇_u J during operation. This can be fulfilled through model-based estimation, model-free estimation, or a combination of both [3]. Model-free methods usually depend on perturbation of the inputs, and when the constraints are being controlled the perturbation can be done in their setpoints instead. In the presented case studies, we used a model-based approach, where a Kalman filter was used to estimate the current dynamic state and disturbance with the augmented model (18), and then setting ẋ = 0 in the linearized model (16) leads to the gradient estimate (17).
Because the cost gradient ∇_u J is, by definition, a steady-state variable, it is not well-defined during a dynamic transition, and any gradient estimator must make some steady-state assumption or prediction. The necessity of estimating the cost gradient is related to ensuring exact optimality. In practice, one may wish to use an approximation of the cost gradient that is more easily implementable, even if that means accepting some economic loss. In that sense, data-driven approaches for this estimation are appealing, as are self-optimizing control methods that provide an approximation of the cost gradient through a static combination of measurements [6].
However, a simpler approach is to use a static estimation of ∇_u J directly based on the measurements y. In another paper [21], we prove the optimality of a simple linear steady-state gradient estimate of the form

∇_u Ĵ = b + M y

where y are the measurements, and the constant vector b and the constant matrix M are obtained using the ''exact local method'' of self-optimizing control. In addition, a correction of ∇_u Ĵ from a more accurate gradient estimator may be applied on a slower time scale, for example, using a model-based approach like RTO or a data-based perturbation method like extremum-seeking control.
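A sketch of such a static linear gradient estimate (the symbols b and M follow the reconstruction above; the numbers are purely illustrative):

```python
import numpy as np

# Static gradient estimate grad_J_hat = b + M y; in [21], b and M would be
# obtained offline from the exact local method. Here they are made up.
b = np.array([0.5, -1.0])
M = np.array([[1.0, 0.0, -0.5],
              [0.0, 2.0, 1.0]])

def grad_estimate(y):
    """Cheap online estimate of the steady-state cost gradient."""
    return b + M @ y

y = np.array([0.2, 0.1, 0.0])   # current measurement vector
print(grad_estimate(y))          # estimate of ∇_u J, one entry per MV
```

Because the online computation is a single affine map, this estimate runs at the controller sampling rate, with any model-based correction of b applied on the slower RTO time scale.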
Handling of constraints
Constraints in process systems are usually measured or estimated, and our approach is optimal for such cases, as control loops are implemented to handle these constraints. In MPC applications, where the problem is formulated as a dynamic trajectory optimization, it is common for process constraints to be posed as constraints on the dynamic states, but this is not how our approach handles this issue. Rather, our method is focused on process constraints that may become active at steady state and influence process economics.
Updating of projection matrices
In the simulations presented in this paper, we assumed that a linearization of the constraints at a nominal operating point would be sufficiently accurate for capturing the transitions between active constraint regions. This simplification was primarily made to ensure a control structure that can be easily implemented, and it led to acceptable results even for a nonlinear case study (see Fig. 12). However, it is possible to enhance economic performance by updating the projection matrices N_c and N_0 during operation. To accomplish this, an accurate estimator for the complete constraint gradient matrix G is required, and typically such an estimator is only available on a time scale similar to that of RTO. However, our primary objective is to achieve acceptable economic losses on fast time scales, and this can be accomplished by using constant projection matrices.
Controller tuning
Even though the proposed control structure only has + SISO controllers to be designed, the tuning of these controllers may prove to be challenging. This is because these controllers must work in many different regions (up to 2 theoretical regions), and the interaction between loops will change depending on which controllers are active. The pairing between MVs and CVs should consider this, and the tuning for the loops should be robust in the sense that acceptable performance is attained for every operation mode. This issue was not noticed in the case studies in this work, but it is easy to see that it may arise in practice.
Limitations for systems with many constraints
In this paper, we consider a class of problems with ≥ , so it is possible to devise a simple, decentralized control structure. There is a particular case of systems with more constraints than inputs that can fit into the framework proposed in this work. That would be the case where the constraints can be arranged into groups, where each group is comprised of constraints that have parallel gain vectors with respect to the inputs, i.e. the constraints and would belong to the same group if ∇ = ∇ for some nonzero . In practice, this would represent a process variable with lower and upper bounds, or constraints of similar nature caused by different factors, e.g. a maximum processing rate due to upstream or downstream conditions. Each of these groups has a unique characteristic direction in terms of the rows of , which can be used to calculate the gradient projections with the methodology described in Theorem 2. Each of these groups should then be organized internally following the single-variable methodology described by Krishnamoorthy and Skogestad [16]. As the methodology devised in this paper mitigates the correlation between each group of constraints, the gradient projections that serve as unconstrained CVs remain constant with respect to changes in the remaining loops, and therefore no additional logic is required in the implementation of max/min selectors for constraint handling.
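The grouping of constraints by parallel gain vectors can be sketched as a simple rank test on pairs of rows of the constraint gain matrix. The matrix `G` below is a made-up example, and the rank-based parallelism check is one possible implementation, not the paper's own code.

```python
import numpy as np

def are_parallel(g_i, g_j, tol=1e-9):
    """True if g_i = k * g_j for some nonzero scalar k,
    i.e. the stacked 2-row matrix has rank 1."""
    return np.linalg.matrix_rank(np.vstack([g_i, g_j]), tol=tol) == 1

def group_constraints(G):
    """Partition the rows of the constraint gain matrix G into groups
    of mutually parallel gain vectors (e.g. lower/upper bounds on the
    same process variable)."""
    groups = []
    for i in range(len(G)):
        for grp in groups:
            if are_parallel(G[grp[0]], G[i]):
                grp.append(i)
                break
        else:
            groups.append([i])
    return groups

G = np.array([[1.0, 2.0],    # constraint 0
              [2.0, 4.0],    # constraint 1, parallel to constraint 0
              [1.0, -1.0]])  # constraint 2, its own group
```

Each resulting group then contributes a single characteristic direction when computing the gradient projections of Theorem 2.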
The main case not covered by the present methodology is when there are > independent constraints that may become active, expressly violating Assumption 1. In this case, considering that some pairing between MVs and constraints is done, the first problem that arises is the possibility of constraints paired with the same MV becoming active at the same time, requiring that one or several constraints become controlled by other MVs. A heat exchanger case with = 3 > = 2 was studied in Bernardino et al. [22], where it was shown that a region-based approach similar to the one studied in the present paper fails to achieve optimality for some disturbance scenarios, whereas the primal-dual approach always reaches the optimal steady state. To achieve optimality with a region-based approach, an adaptive pairing strategy may be used, as described for this case study in Bernardino et al. [23]. This gives optimal operation for all disturbances, but the adaptive pairing becomes quite complicated (see Figure 2 in [23]).
In this sense, general strategies for switching pairings require more complex logic, and currently, there is no systematic arranging of classical control logic blocks that can account for that. On top of that, even if conflicting constraints are not an issue for the considered operating window, i.e. constraints paired to the same MV do not become active at the same time for the considered disturbances, the design of controllers for the unconstrained degrees of freedom becomes more complicated. As the constraints are assumed to be independent, the gradient projections optimally controlled to zero will be different when each of them is active. This entails that the remaining control loops have to change depending on which controller related to this MV is active. Therefore, proposing decentralized control structures for the optimal operation of systems with more constraints than MVs inevitably leads to complex and interacting control loops, and centralized strategies such as the primal-dual feedback optimizing control presented in Krishnamoorthy [24] or MPC become more appealing.
For the same reason, the proposed framework has limitations in optimally dealing with input saturation. In real systems, every MV has physical bounds ≤ ≤ in addition to the process constraints . Therefore, every physical system in a way has more constraints than MVs, and one must identify the constraints that are more likely to become active if one follows Theorem 2 for designing a control structure. The choice of not pairing an MV that may saturate with an important CV, in this case, an economic constraint, agrees with the rule of thumb "pair an MV that may saturate with a CV that may be given up" [25], as the gradient projections paired to that MV should by design be given up in case of MV saturation.
The proposed framework attains optimal operation in a wide operating range, by enforcing optimality conditions at steady state for all possible active set combinations for a maximum of independent constraints. It should also be emphasized that less frequent constraints can still be dealt with in the current framework by the implementation of more selectors, even if > , bearing in mind that steady-state optimal operation will not be guaranteed when those become active due to changes in the unconstrained CVs. However, violation of such infrequent constraints would be prevented, which is the main goal of such additional control loops.
Stability and optimal convergence of selectors
The control strategy proposed in this work relies on switching blocks to perform optimal operation in different operating regions. Analyzing the stability of switched systems is more complex, as the stability of a switched system may not necessarily match that of its corresponding continuous subsystems [26]. In Theorem 3, we assume that each subsystem within the switching system is stable, which is a condition already present and well described when using decentralized control in multivariable systems. By ensuring that every subsystem is stable, the overall stability of the switching system can be guaranteed. This can be achieved by enforcing a sufficiently large average dwell time [27], which is a practical and easily implementable solution.
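One practical way to respect a dwell-time condition is a small guard that rejects switch requests arriving too soon after the previous one. This is only an illustrative sketch (the mode indices and caller-supplied clock are assumptions), not a stability proof.

```python
class DwellTimeSwitch:
    """Allow a switch between control modes only if at least tau_d
    time units have elapsed since the previous switch."""

    def __init__(self, tau_d, initial_mode=0):
        self.tau_d = tau_d
        self.active = initial_mode
        self.last_switch = -float("inf")  # no switch has happened yet

    def request_switch(self, new_mode, t):
        """Switch and return True if the dwell-time condition allows it;
        otherwise keep the current mode and return False."""
        if new_mode != self.active and t - self.last_switch >= self.tau_d:
            self.active = new_mode
            self.last_switch = t
            return True
        return False
```

Such a guard is easy to add on top of the min/max selector blocks without altering the controllers themselves.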
The conditions for implementing min/max selectors to detect switches in active constraints optimally are outlined in Theorem 3. This theorem is based on a local analysis of the optimization problem and is rigorously applicable to problems with a constant positive definite Hessian and constant constraint gain . A relevant case in practice is that of linear economic objectives, for which is positive semidefinite, but this case is always solved by active constraint control, as there are no unconstrained degrees of freedom to be determined. While the presented proof does not address generic nonlinear optimization problems, it provides a useful local test that can eliminate certain impossible configurations resulting from the chosen MV-CV pairings or the formulation of the optimal operation problem itself. If the conditions specified in Theorem 3 are not satisfied, we recommend utilizing the cascade framework presented in Fig. 4.
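The selector rule of Theorem 3 can be sketched as follows; the direction of the selector (min or max) is assumed to have been fixed offline from the sign of the transformed constraint gain, and the variable names are illustrative only.

```python
def select_mv(u_constraint, u_gradient, gain_sign_positive):
    """Min/max selector between the MV value requested by the constraint
    controller and the one requested by the controller of the associated
    unconstrained gradient projection. With a positive transformed
    constraint gain the min selector is used; with a negative gain,
    the max selector."""
    if gain_sign_positive:
        return min(u_constraint, u_gradient)
    return max(u_constraint, u_gradient)

# While the constraint is far from active, the gradient controller's
# request is selected; as the constraint approaches activity, the
# constraint controller's request crosses over and takes control.
```

No supervisory logic is needed beyond this comparison, which is why the switching can be realized with standard min/max blocks.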
The condition derived in Theorem 3 for applicability of selectors would only be violated by highly interacting systems, where the sign of the transformed constraint gain ( ) would change depending on the active loops. This condition is conjectured to be associated with the decentralized integral controllability (DIC) of each potential subsystem [28]. In our study, we could not find an example of a linear system with a convex objective function and without DIC that does not satisfy the conditions stated in the selector theorem. This further suggests a connection between these concepts and that the DIC conditions possibly imply the applicability of selectors. The link between the conditions of Theorem 3 and controllability aspects remains an open challenge that requires further investigation.
Other switching approaches
In this work, we have proposed the use of selectors in the controller outputs for detecting switches in active constraints. However, other strategies for adaptively controlling constraints in the context of optimal operation have been proposed. Manum and Skogestad [10] studied the problem of active constraint switching in self-optimizing control by tracking the self-optimizing CVs in neighboring regions, where the switching happens when there is a change of sign in the monitored variable. In the notation herein presented, this would be equivalent to the following switching logic: The problem with implementing such logic lies in the resulting dynamics of the control system. As this logic implies that the reference variable is perfectly controlled for accurate detection, the logic should operate in a slower time scale than that of the closed-loop system, which would in turn result in undesired behavior, especially constraint violation. Operating the switching logic in fast time scales could in turn lead to the appearance of limit cycles, due to self-sustained switching between control loops.
We have also presented the cascade control structure in Fig. 4 as an alternative switching strategy. A similar idea has been proposed by Cao [7] to promote self-optimizing operation at the nominal region while sub-optimally coping with constraint satisfaction. There are however some disadvantages to this approach related to the limitations that the cascade structure imposes. If constraint control is slow, controlling the corresponding gradient projection becomes unnecessarily slow. Moreover, even though constraint control may help with decoupling the system, it may also cause the opposite problem, and the interaction between loops may impose limitations on the performance of the upper layer. Therefore, the use of a cascade framework for optimal operation may be beneficial, but the improvement that it may bring must be assessed for each particular case study.
Recently, the work of Ye et al. [29] has tackled the problem of changing active constraints by embedding the switching constraints into the CV design, generating a single nonlinear CV. Because the resulting CV design problem was deemed intractable in most cases, a neural network was used to approximate the behavior of this theoretical CV. It is interesting to note that the switching behavior still happens in the designed CV, with the exact ideal CV being in general non-smooth. This is expected because of the nature of the problem, and although neural networks can approximate these variables, the interpretability of the resulting CV is lost, and constraint control must be explicitly performed elsewhere. In the present work, we deal with the switching explicitly, controlling the constraints directly when it is optimal.
Conclusion
We propose a simple framework for decentralized optimizing control with changing active constraints. The starting point is that at steady state, optimal economic operation in a given active constraint region is achieved by keeping the controlled variables = [ ; ∇ ( * , ) ] at constant setpoints = 0 (Theorem 1 [6]). Here denotes the set of active steady-state constraints, and ∇ ( * , ) is the reduced steady-state cost gradient for the remaining unconstrained degrees of freedom.
There are some degrees of freedom in the choice of the directions in the unconstrained nullspace, and to implement constraint switching in a simple manner, these should be chosen in accordance with the constraint directions. The main contribution of this paper is to prove in Theorem 2, for the case with ≥ , that we should control the unconstrained variables 0 = 0 ∇ (which are not affected by the constraints), and in addition, depending on whether the constraint is active or not, either control the constraint = or the associated unconstrained variable 0 = ∇ , where 0 and are calculated according to (7)-(9). This can be implemented with the simple control structure in Fig. 2. Furthermore, Theorem 3 shows that the switching can be performed with min/max selectors, which leads to the simple control structure in Fig. 3. Here, no centralized supervisor is needed to determine the active constraints, as the switching logic uses local feedback controllers.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
L.F. Bernardino and S. Skogestad

Substituting this equation into the second optimality condition gives the following expression for * : The variables can be partitioned as follows: ) is equivalent to a scaled projection to the nullspace of , () [17], according to the identity = () ( () () , and is therefore positive semidefinite. The effect of inclusion of an arbitrary constraint is therefore given by: We can see that the th component of the vector ( ) dictates the steady-state behavior of the th MV when the th constraint becomes active and the system is operating at the active set . Following the notation introduced in the statement of Theorem 3 and in Fig. 3, we have = ( * ) and 0 = ( ) , since these are the MV values such that = 0 and 0 = ∇ = 0, respectively. This means that, for ( ( ) ) > 0, − 0 < 0 when > 0 and consequently should be selected, and − 0 > 0 when < 0 and consequently 0 should be selected, meaning that we must use * = min ( 0 , ) for guaranteeing optimality. Similar analysis can be performed for ( ( ) ) < 0, leading to * = max ( 0 , ). Since this analysis is performed for arbitrary and ∌ , guaranteeing that ( ( ) ) = ( ) has the same sign for any possible is sufficient for guaranteeing the theorem statement, which completes the proof. □
Appendix B. Optimality of min selectors in cascade structure
In this section, we restate Theorem 3 for the cascade case illustrated in Fig. 4, proving its optimality. In this control structure, the manipulated variables as seen from the higher layer can be represented at steady state as: Note that we must require that , the Jacobian for the change of variables, is full rank, such that the optimality conditions for Eq. (B.1) and Eq. (1) are equivalent. This is a mild assumption related to the steady-state controllability of the constraints in the lower layer with the chosen pairing, and it results in a transformed Hessian = that is also positive definite. Also, for the transformed problem, we have the gain matrix , from the transformed inputs to the constraints: This allows us to write the optimal CVs in terms of the projection matrices and 0 for the transformed problem as: The procedure for obtaining the difference in the optimal solution for neighboring regions presented in Appendix A is also valid for the transformed problem, and we must therefore analyze the sign of the diagonal of the matrix product , . Recall that = () ( () () ) −1 () , where the matrix () () is positive definite, being here a principal submatrix of which selects the rows and columns with indexes not in . Therefore, becomes a positive semidefinite × matrix, where the diagonal elements are zero for ∈ and positive for ∉ .
Finally, we can see that ( , ) > 0 for ∉ , since the first elements of the diagonal of , are the same as those of . It follows that, according to Eq. (A.1) and the results here obtained, the optimal solution is given by * = min ( 0 , 0) for = 1, … , , which completes the proof.
Fig. 4.
Fig. 4. Decentralized control structure for optimal operation, using an alternative cascade implementation.
Fig. 5.
Fig. 5. Active constraint regions for case study 1 as a function of disturbances.
Fig. 9.
Fig. 9. Active constraint regions for case study 2 as a function of disturbances.
Fig. 11.
Closed-loop simulation results for case study 2.
Fig. 12.
Steady-state closed-loop economic loss for case study 2.
Table 1
Diagonal elements of
Table 2
PI controller tunings for case study 1. Note that = ∕ is the integral gain. The first four controllers have anti-windup with tracking time = 0.01 s.
Table 3.
The economic cost includes the cost of reactants and and the selling price of products and , and the operational constraints are related to maximum allowed values for and . The steady-state optimization problem becomes min + − ( + ) [ (1 + ) + ] =
Table 4
Nominal operating point for case study 2.
steady-state model of the constraints. In the following simulations, we use the linearization performed at the nominal operating point presented in Table 4, leading to fixed CVs for operation in all regions.
Table 5
Diagonal of
Table 6
PI controller tunings for case study 2.
is being controlled to zero, a change of sign in means that the th constraint became active, as this sign change corresponds to constraint violation;
• Conversely, if is being controlled to zero, a change of sign in 0 = ∇ means that the th constraint became inactive, as this sign change corresponds to a change in the objective function slope.
Nutritional interventions for the treatment of frailty in older adults: a systematic review protocol
Supplemental Digital Content is available in the text
Introduction
Frailty has been defined as a clinical syndrome of multicausal origin characterized by a reduction of physiologic reserves that increases the vulnerability of an individual to adverse outcomes such as the development of functional dependence and death. [1] Considered one of the most important geriatric syndromes, frailty's prevention and management represent important goals for gerontology and geriatrics. [2] The concept of frailty has greatly contributed to the development of this field by highlighting a multiplicity of subclinical factors (i.e., going beyond the presence of functional dependence and comorbidities) and contributing to the reduction of the capacity of older adults to maintain their homeostasis when exposed to stressor events. [3] In fact, studies using different operational definitions of frailty have shown that it represents an important risk factor for a variety of negative outcomes. For example, frail older adults were found to be at an increased risk of falling by 84%, when compared to those who are nonfrail. [4] The frailty syndrome has also been associated with 70% greater chance of fractures, [5] 30% increase in the risk of developing dementia, [6] and 90% increase in the risk of hospitalization. [7] An inverse association between frailty and quality of life of older adults living in the community has also been observed. [8] These data are especially relevant when one considers the results of studies reporting the prevalence of this syndrome among older adults and the perspectives of population aging worldwide. [9] A systematic review on the prevalence of frailty among community-dwelling elderly identified that prevalence ranged from 4% to 59%, with a weighted average of 11%. [10] A significant increase in the prevalence of this syndrome is also noted among individuals of a more advanced age, reaching an average of about 27% among adults older than 85 years of age.
[11] Among institutionalized older adults, the prevalence of frailty ranged from 19% to 76%, with a weighted average of 52%. [11] An important meeting of experts, leading to the 1st successful international consensus on the definition of frailty, considered that there was some evidence suggesting possible benefits of 4 types of interventions for managing this condition: physical exercise, caloric and protein support, vitamin D supplementation, and reduction of polypharmacy. [1] Loss of muscle mass is one of the consequences of weight loss in older adults, along with reduction of strength, mobility, and immune dysfunction, which represent typical characteristics of frailty. In addition, malnutrition in older adults increases the risk of hospitalization, functional dependence, and death in this population. [12] The association between nutritional factors and the occurrence of frailty was also observed in the systematic review of Lorenzo-López et al that analyzed data from 19 observational studies. [13] The nutritional factors examined by this review were micronutrients, macronutrients, diet quality, antioxidants, and score in the Mini Nutritional Assessment. [13] Due to the global phenomenon of population ageing, [2] the increased prevalence of frailty at more advanced ages and the negative consequences of this syndrome, studies about the efficacy and effectiveness of interventions to manage this syndrome have great importance, particularly aiming at the prevention of such adverse events. In view of the relevance of the topic and the arguments presented above, we propose the present systematic review with the aim to assess the effectiveness of nutritional interventions for the treatment of frailty in older adults living in the community or in long-term care facilities.
Study registration
This systematic review protocol has been registered on PROSPERO under the number of CRD42018111510, and was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analysis Protocol. [14] This is a literature-based study, so ethical approval is unnecessary.
2.2. Selection criteria

2.2.1. Types of studies. We will include only parallel-group randomized clinical trials published since 2001 in English, Portuguese, or Spanish. We will accept trials whereby the unit of randomization consisted of individuals or clusters of individuals.
Types of participants.
We will include studies that recruited older adults (aged 60 years or older) with a diagnosis of frailty or prefrailty and living in the community or in long-term care facilities. We will accept any criteria used by original studies to diagnose that syndrome. Studies that have been performed during hospitalization episodes will not be included.
Types of interventions.
We will include studies that have implemented at least one of the following nutritional interventions: nutritional education/dietary prescription, the use of hypercaloric or hyperproteic dietary oral supplements and the delivery of specific diets. Additionally, we will also include studies that adopted any of the above interventions concomitantly with another single or multifactorial intervention provided that the comparator was the same set of interventions without the nutritional intervention component. We will accept as comparators standard treatment, placebo, other nutritional interventions, and multifactorial interventions without a nutritional component.
Types of outcomes.
We will include studies if they report at least one of the following outcome measures.
Search methods for study identification
Two independent researchers will examine the lists of references identified through electronic search. We will also hand-search reference lists of relevant publications including review articles on frailty and of original studies considered eligible for the review. Additionally, we will contact experts in the field of nutrition and frailty to ask for references to published and unpublished data. We also intend to contact researchers to request relevant unpublished data whenever possible. For all studies identified, 2 authors will independently screen and review the titles and abstracts. Full versions of potentially relevant studies will be obtained. Where applicable, we will contact the authors of selected studies to ask for additional data. Disputes regarding the inclusion of a study will be resolved through discussion with a 3rd reviewer.
Data extraction and management
Two reviewers will extract data independently using a standardized prepiloted form including the following data: complete reference; time period when the study was conducted; geographical location; presence of divergences between the study protocol and published results; study design; types of interventions and comparators; duration of the intervention and of follow-up; inclusion/exclusion criteria; sample size; characteristics of the population; balance between groups at the baseline; funding source; method of randomization; presence of simultaneous interventions; diagnostic criteria of frailty; nutritional interventions; details of the intervention, including type, dose, frequency, and duration; control treatment; outcome measures; blinding (patients, field professionals and outcome assessors); duration of follow-up; loss of follow-up; results; intention-to-treat analysis; conclusions reported by the study authors; and research limitations. In addition, there will be a field for the registration of other information deemed relevant by the reviewers. Disagreements about extracted data will be resolved by consensus, and an independent reviewer will be consulted if disagreement persists.
Assessment of bias risk.
To assess the risk of bias in the included studies, 2 review authors will independently use The Cochrane Collaboration's Risk of Bias tool for randomized clinical trials. [15] Accordingly, the following domains will be assessed: random sequence generation, allocation concealment, blinding, incomplete outcome data, selective reporting, and other bias. Each of these criteria will be assigned one of the following categories: low risk of bias; high risk of bias; or unclear risk of bias, where unclear relates to the lack of precise information or uncertainty over the potential for bias.
Where applicable, the investigators of selected trials will be contacted to provide additional relevant information. Disagreements between the authors regarding the assessment of risk of bias will be resolved by consensus, and a 3rd reviewer will be consulted when needed.
2.5.2.
Rating quality of evidence. We will analyze the overall strength of the evidence for each outcome using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) tool. This system represents a method that evaluates the quality of evidence in systematic reviews explicitly, comprehensively, transparently, and pragmatically. [16] The GRADE system evaluates the following dimensions regarding the quality of evidence: study limitations/risk of bias, inconsistency, indirect effects, inaccuracy, publication bias, and factors that may increase the quality of evidence. According to GRADE, the quality of the evidence regarding each outcome analyzed is classified into 1 of 4 levels: high, moderate, low, and very low. [16] We will use the GRADE profiler software (GRADEPRO) to create "summary of findings" tables with outcome specific information concerning the overall quality of evidence and the magnitude of effect of the interventions examined by the examined body of evidence.
Measures of treatment effects.
Dichotomous data: the results will be presented as the risk ratios with 95% confidence intervals. Continuous data: the results will be presented as the mean difference, if outcomes are measured using similar scales between trials. We will use the standardized mean difference to combine trials that measure the same outcome using different scales or instruments.
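For dichotomous outcomes, the risk ratio and its 95% confidence interval are conventionally computed on the log scale. The sketch below uses the standard large-sample (Wald) formula with invented event counts; it is not taken from the protocol itself.

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.959963984540054):
    """Risk ratio of treatment (events_t/n_t) vs control (events_c/n_c)
    with a Wald confidence interval on the log scale; the default z
    corresponds to a 95% interval."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Invented example: 10/100 events under treatment vs 20/100 under control.
rr, lo, hi = risk_ratio_ci(10, 100, 20, 100)
```

Working on the log scale keeps the interval asymmetric around the ratio, which is the behavior meta-analysis software such as RevMan also exhibits.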
2.5.4. Unit of analysis issues. The appropriate unit of analysis will be the individual patient, rather than hospitals or health centers. In studies with multiple intervention groups, we will include only the comparisons between groups that meet our eligibility criteria. If more than 1 pair of intervention comparisons are eligible for a given meta-analysis and those pairs of comparisons have at least 1 intervention group in common, we will proceed using one of the methods recommended by the Cochrane Collaboration in the following order of preference according to the feasibility of each approach: we will attempt to merge the intervention groups to yield a single pairwise comparison; we will attempt to account for the correlation between correlated comparisons by calculating a weighted average of the different pairwise comparisons; and we will perform a network meta-analysis.
2.5.5. Missing data. Where applicable, we will contact the chief investigators of clinical trials with missing data or unclear information (e.g., unclear risk of bias). Whenever possible we will include in meta-analyses data from intention-to-treat analyses. We will not perform imputation procedures for missing data.
2.5.6. Assessment of reporting biases. If there are sufficient numbers of trials (at least 10), we will construct a funnel plot and we will apply the Egger test and the trim-and-fill method in the evaluation of publication bias.
2.5.7. Data synthesis. We will organize the synthesis of data according to the types of nutritional interventions studied, the types of comparators, and populations studied (i.e., older adults living in the community or in long-term care facilities).
If the included studies are sufficiently similar in terms of population, inclusion criteria, interventions, and results, we will perform quantitative synthesis using the random effects models.
2.5.8. Assessment of heterogeneity. If the available data allow the performance of meta-analyses, we will assess statistical heterogeneity by means of the I² statistic, which will be interpreted according to the current Cochrane Collaboration guidance as follows: 0% to 40% might not be important; 30% to 60% may represent moderate heterogeneity; 50% to 90% may represent substantial heterogeneity; 75% to 100% considerable heterogeneity. [15] If we find substantial heterogeneity, we will attempt to perform subgroup analyses as described in the following sections.
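The I² statistic is derived from Cochran's Q; a minimal inverse-variance sketch is below. The study effects and variances are invented numbers, and the fixed-effect weights shown here are only for the heterogeneity test, whereas the pooling planned in the protocol would use random-effects weights.

```python
def cochran_q_i2(effects, variances):
    """Cochran's Q and I^2 (in percent) for a set of study effect
    estimates with known within-study variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Invented example: two studies with effects 0.0 and 1.0, variance 0.1 each.
q, i2 = cochran_q_i2([0.0, 1.0], [0.1, 0.1])
```

The `max(0.0, ...)` guard truncates negative values to zero, matching the usual convention that I² cannot be below 0%.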
2.5.9. Subgroup analyses. If sufficient data are available, we will perform the following subgroup analyses: concerning specific details of nutritional interventions (e.g., components and duration), research scenario (i.e., community or long-term care facilities), risk of bias, and criteria used to diagnose frailty.
2.5.10. Sensitivity analysis. We have not planned any sensitivity analyses.
Discussion
Nutrition plays an important role within the multifactorial susceptibility of this syndrome; however, up to the present no systematic review has addressed the effectiveness of nutritional interventions for the treatment of frailty. The systematic reviews specifically identified in the literature on this topic emphasize interventions related to physical activity without any particular focus on nutritional interventions, which were generally analyzed briefly and in a secondary manner. [
Online Laboratory Sessions in System Design with DSPs using the R-DSP Lab
Abstract—This paper presents the online conduction of the System Design with DSPs laboratory sessions through the R-DSP Lab (Remote Digital Signal Processor Laboratory). This interactive RL (Remote Laboratory) supports the verification of DSP applications which are written offline by the students in C and/or assembly programming language, through an internet accessible and user friendly control environment. The latest and most important feature of the R-DSP Lab, which is also proposed in this paper, is the remote control of students' DSP applications through GUIs (Graphical User Interfaces) developed by them. In order to demonstrate the above feature, the implementation and verification processes of one laboratory session are presented. The assessment and evaluation results of both the System Design with DSPs laboratory sessions and the R-DSP Lab are also discussed.
INTRODUCTION
Recently there has been growing research interest in the area of embedded systems. This originates from the fact that these systems are widely used in consumer-oriented devices. Nowadays, embedded systems are realized using appropriately configured DSPs (Digital Signal Processors), FPGAs (Field Programmable Gate Arrays), and ASICs (Application Specific Integrated Circuits).
In most cases the embedded systems undertake the implementation of complex real-time digital signal processing algorithms which cover the needs of a wide range of applications, from aerospace to multimedia and digital communications. These algorithms are quite demanding and their implementation requires a relatively high computational load. As a result, in most cases embedded systems are equipped with powerful DSPs.
Many universities worldwide have already organized courses on embedded systems which are based on DSPs or FPGAs [1]-[6]. These courses are usually divided into two parts, the lectures and the laboratory sessions. The laboratory sessions are very important for the students' education because they allow students to verify their theoretical knowledge in practice and they improve students' practical skills. This experience is also very important for their future professional careers.
Traditionally, the laboratory sessions take place in hands-on laboratories which allow students to perform experiments in real conditions through interaction with instruments. However, the cost per workstation is quite high and depends on the corresponding laboratory equipment. Consequently, the number of available workstations is usually limited. In addition, the conduct of any laboratory experiment in a hands-on laboratory requires the physical presence of one or more instructors. As a result, student access is limited to specific working hours per week.
In order to overcome the disadvantages of the hands-on laboratories, RLs (Remote Laboratories) were introduced by many universities and research groups. Nowadays, RLs cover many scientific and engineering fields and allow students to access the laboratory equipment from a distance in order to perform their experiments [7]-[17]. These RLs are available twenty-four hours per day and do not require the physical presence of the students and the instructors. The total cost of the RLs is usually lower than that of the corresponding hands-on laboratories because of the better exploitation of the laboratory equipment.
The initial purpose of this paper is to present the online conduct of the System Design with DSPs laboratory sessions utilizing the R-DSP Lab (Remote Digital Signal Processor Laboratory), which is an interactive RL. In addition, a totally new feature of the R-DSP Lab which increases the sustainability of the presented interactive RL is proposed. According to this feature, the R-DSP Lab users (students and instructors) are able to design and perform their own remote experiments by developing their own GUIs (Graphical User Interfaces). These GUIs undertake the remote control of the users' DSP applications, which have been uploaded to the R-DSP Lab by the users and are executed by the DSP development board of an available R-DSP Lab Workstation. This paper is organized as follows: the organization and the laboratory sessions of the System Design with DSPs postgraduate course are briefly discussed in the second section, while the structure and the features of the R-DSP Lab are presented in the third section. The development stages of a real-time DSP application and its verification process utilizing the R-DSP Lab, from the student's point of view, are demonstrated through an example in the fourth section. The assessment and evaluation results of both the System Design with DSPs laboratory sessions and the R-DSP Lab are discussed in the fifth section.
II. LABORATORY SESSIONS IN SYSTEM DESIGN WITH DSPS
The System Design with DSPs is a postgraduate course which is delivered in the second semester curriculum of two Master degree programs in the Physics Department of the University of Patras, entitled "Electronics and Communications" and "Electronics and Information Processing". An average of twenty students attends this course in every academic year and most of them are expected to be familiar at least with the basic theoretical concepts of digital signal processing. They also have the required background in microcomputer architecture and the C programming language, obtained through the appropriate courses within their undergraduate program curriculum.
The main objective of the System Design with DSPs course is to provide knowledge to the students according to the requirements of the DSP industry. To this end, the students are trained in the design and implementation of real-time digital signal processing algorithms and applications on DSP-based hardware. In order to serve the above targets, the System Design with DSPs course, which is a three credit-hour course, is divided into three modules: the lectures, the laboratory sessions and the student projects.
The lectures of this course introduce students to concepts associated with the design and implementation of real-time DSP systems. They also provide students with the necessary theoretical background on a wide range of real-time DSP applications [18], [19]. The organization and the time schedule of the lectures are presented in Fig.1.
In parallel with the lectures, the students carry out a set of six laboratory sessions which are based on the TMS320C6713 DSP of TI (Texas Instruments). The initial purpose of these laboratory sessions is to familiarize the students with the TMS320C6713 DSP and the CCS (Code Composer Studio) IDE (Integrated Development Environment) of TI. In addition, the available set of laboratory sessions includes the design and implementation of real-time digital signal processing applications using the DSK (DSP Starter Kit) C6713 development platform of Spectrum Digital. The learning outcomes of the supported laboratory sessions are presented in detail in Table I.
Following the successful completion of these laboratory sessions the students have acquired significant experience in the development of real-time DSP applications in C and/or assembly programming language. They have also become familiar with the development of GUIs based on Matlab and LabVIEW for the control and verification of their DSP applications. In order to successfully accomplish the requirements of this course, groups of two or three students undertake projects. These projects, which are different for each student group, include, but are not limited to, the implementation of audio effects, adaptive filters, signal modulation/demodulation, image processing etc. This process has a great impact on the students' confidence because they realize how to approach and solve a practical problem.
Traditionally, the laboratory sessions and student projects are performed in the hands-on laboratory which is equipped with ten workstations. This hands-on laboratory is available to students for up to three hours per week and two instructors undertake to support the students. In the past, many students, in order to complete their laboratory sessions and projects in time, requested access to the hands-on laboratory for more hours per week. In order to respond to this major request, an interactive RL, called R-DSP Lab, was designed and developed. Through this RL, the students are able to perform their laboratory sessions and projects without place and time barriers. The architecture and the operation modes of the R-DSP Lab are described in the following section.
III. THE R-DSP LAB
The R-DSP Lab is an interactive RL, designed and developed at the Physics Department of the University of Patras, which can be accessed at http://rdsplab.physics.upatras.gr. It has a flexible and upgradable structure which is based on two basic structural elements, the Main Web Server and the Workstations [20]. The Main Web Server communicates with one or more Workstations over Internet or Ethernet through a MySQL database. The number of Workstations in use depends on the occasional needs and can be modified without influencing the operation of the R-DSP Lab. The architecture of the R-DSP Lab is presented in Fig. 2.
The Main Web Server [20] undertakes the reception and the authentication of visitors. The home page of the Main Web Server provides information to the visitors about the R-DSP Lab, the equipment and the laboratory sessions of the System Design with DSPs course. After a successful login, the authorized user who wishes to perform a remote experiment is redirected to the experiments web page of the Main Web Server in order to select the type and the parameters of the remote experiment to be performed. Subsequently, the Main Web Server checks the availability of the R-DSP Lab Workstations by retrieving the corresponding data from the MySQL database and reserves the Workstation which serves the fewest users. The user's selections are stored in the database by the Main Web Server and the authorized user is redirected to the reserved Workstation. In addition, the Main Web Server stores statistical information about the usage of the R-DSP Lab in the MySQL database. The above functions of the Main Web Server were developed with PHP. The HTML (HyperText Markup Language) web pages of the Main Web Server are hosted by the Apache web server.
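The reservation rule described above — pick the Workstation currently serving the fewest users — reduces to a simple minimum search. The C sketch below illustrates the idea; the array name, its contents and the calling convention are our assumptions, since the actual implementation runs in PHP against the MySQL usage table.

```c
/* Return the index of the least-loaded Workstation.
   load[i] holds the number of users currently served by Workstation i
   (in the real system this would be read from the MySQL database). */
int least_loaded(const int *load, int n_workstations)
{
    int best = 0;
    for (int i = 1; i < n_workstations; i++)
        if (load[i] < load[best])
            best = i;
    return best;
}
```

Ties go to the lowest-numbered Workstation, which is one reasonable policy; the paper does not specify how the real server breaks ties.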
The Workstations are the most critical part of the above architecture because they allow the interaction of the authorized users with the laboratory equipment through an internet connection. At this time, in order to reduce the cost per R-DSP Lab Workstation, the equipment of the corresponding hands-on laboratory, which includes a Windows based PC (Personal Computer), the TG 2000 function generator of TTi (Thurlby Thandar Instruments) and the TDS1012 oscilloscope of Tektronix, was exploited. Each Workstation is also equipped with a web camera, which increases the sense of reality for the R-DSP Lab users because the current status of the available laboratory instruments is displayed.
Every R-DSP Lab Workstation executes three identical applications which were developed with LabVIEW and are called WS_Applications [20]. Each of these applications was built as an executable file (.exe) and includes an embedded web server which is provided by LabVIEW. Due to this, the installation of LabVIEW on the Workstation's PC is not required. The main purpose of each WS_Application is to allow only one authorized user to control the above laboratory instruments through a user-friendly control environment which is accessible through a common web browser such as Internet Explorer, Mozilla Firefox and Google Chrome, utilizing the remote panels technology of NI (National Instruments). Due to this, the run-time engine of LabVIEW, which is freely provided by NI, must be installed on the user's PC.
Consequently, every R-DSP Lab Workstation is able to serve up to three simultaneous authorized users who perform different remote experiments, according to its mode of operation. In the case that the R-DSP Lab Workstation serves more than one authorized user, the activated WS_Applications access the available laboratory equipment according to a queue priority logic.
The great advantage of the R-DSP Lab control environment is the accurate representation of the real instruments (oscilloscope and function generator). This environment was designed to support most of the real instruments' features and to allow the users to gain experience with the operation and the handling of the corresponding instruments. Furthermore, there is no risk for the hardware due to the fact that all the necessary hardware connections between the real instruments and the DSP based development platform were implemented by the R-DSP Lab administrators. This fact does not limit the flexibility of the R-DSP Lab because the desired remote experiment is determined by the corresponding executable file which is loaded into the DSP by the user.
On the other hand, the supported remote experiments are not time consuming, therefore the users are able to repeat any remote experiment in a few seconds. As a result, it is not necessary to save the experimental data in the database.
The high security level of the R-DSP Lab is ensured by several unique procedures which are built into both the Main Web Server and the WS_Applications. Initially, the user is identified by the Main Web Server using the user's credentials (username and password) which are stored in the database of the R-DSP Lab. At this time, the above credentials are used only within the R-DSP Lab. When an authorized user requests the execution of a remote experiment, the Main Web Server records the current IP (Internet Protocol) address of the user's PC and redirects the user to a WS_Application of an available Workstation. During the redirection procedure, the corresponding WS_Application retrieves from the database the user's IP address and activates its embedded web server only for the retrieved IP address. This procedure ensures that only the authenticated user has access to the reserved WS_Application. In addition, the WS_Application checks periodically whether the authenticated user is still connected. It also monitors whether a non-authenticated user attempts to access the WS_Application from the same IP address; in this case it prohibits access to the non-authorized user. If the authorized user is disconnected, the embedded web server of the corresponding WS_Application is deactivated. Finally, the WS_Application updates the database in order to inform the Main Web Server that it is available to serve a new authorized user.
The communication between the Main Web Server and the Workstations is achieved through the database which is installed on the Main Web Server. Furthermore, the security of the database is very important. Using a firewall, access to the database is permitted only for the IP addresses of the R-DSP Lab Workstations. In addition, as a second security level, access to the database is secured by a password.
The R-DSP Lab, from the user's point of view, operates in two different modes, the verification of a DSP executable code and the control of a DSP application through GUIs developed by the students (GUI mode), as presented in Fig. 3. In both cases the user should log in to the R-DSP Lab through the Main Web Server home page. The mode of operation is selected by the user through the experiment selection web page.
A. Verification of a DSP executable code
In this mode of operation, the authorized student uploads to the database of the Main Web Server the executable code of her/his DSP application. This code, written in C and/or assembly programming language, was produced offline by the user using the CCS IDE. Subsequently, the user is redirected to the web page of an available WS_Application. Through the control environment web page the user is able to control the laboratory equipment in order to verify the operation of her/his DSP application. When the user requests control of the laboratory instruments, the WS_Application checks if the laboratory equipment is reserved by another WS_Application. If the instruments are available, the WS_Application configures the instruments according to the user's selections and the executable code of the user's DSP application is downloaded to the DSK C6713 of the corresponding Workstation. Subsequently, the WS_Application retrieves the instrument data and updates the control environment web page. If the user exits, the WS_Application updates the database in order to inform the Main Web Server that it is available to serve a new authorized user.
This mode of operation was designed to serve the first four laboratory sessions of the System Design with DSPs course according to Table I. The most important advantage of this mode of operation is that each Workstation can simultaneously serve up to three users who are able to perform different experiments using the same laboratory equipment. This feature is based on a queue priority logic which allows only one WS_Application at a time to access the available laboratory equipment. Therefore, the cost per Workstation is significantly reduced without any influence on its operation.
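The queue priority logic can be pictured as a small FIFO of waiting WS_Applications in which only the head of the queue may touch the instruments. The C sketch below is purely illustrative — the actual implementation is in LabVIEW, and the function names and the fixed capacity of three are our assumptions:

```c
#include <stdbool.h>

#define MAX_WAITERS 3  /* each Workstation serves up to three users */

static int queue[MAX_WAITERS];   /* FIFO of waiting WS_Application ids */
static int q_head = 0, q_count = 0;

/* Join the queue; fails if all three slots are taken. */
bool queue_join(int app_id)
{
    if (q_count == MAX_WAITERS)
        return false;
    queue[(q_head + q_count++) % MAX_WAITERS] = app_id;
    return true;
}

/* Which WS_Application currently holds the instruments (-1 if none). */
int queue_turn(void)
{
    return q_count ? queue[q_head] : -1;
}

/* The head releases the instruments; the next waiter gets its turn. */
void queue_leave(void)
{
    if (q_count) {
        q_head = (q_head + 1) % MAX_WAITERS;
        q_count--;
    }
}
```

The point of the FIFO is fairness: each WS_Application eventually gets exclusive access to the shared generator and oscilloscope in arrival order.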
B. Remote control of a DSP application through GUIs
In the second mode of operation of the R-DSP Lab only one WS_Application of the corresponding Workstation is activated. After the redirection of the authorized user to the available Workstation, the activated WS_Application enables a dedicated server which is called R-DSP Server and is installed on every Workstation. The R-DSP Server allows the control of the DSK C6713 and the user's DSP application through a GUI developed by the user. This GUI, which communicates with the R-DSP Server through TCP/IP (Transmission Control Protocol/Internet Protocol) messages, runs on the user's PC. When the communication between the R-DSP Server and the user's GUI is established, the user's GUI sends the executable code of the user's DSP application, which is written in C and/or assembly language. This code is automatically downloaded to the DSK C6713 of the corresponding Workstation and the execution of the DSP application is started. At this point the user is able to control her/his DSP application through her/his GUI. In addition, the user, utilizing the R-DSP Lab control environment of the corresponding WS_Application, is able to control and observe the laboratory instruments (oscilloscope and function generator) in order to take the necessary measurements for the verification of her/his DSP application.
In this scenario the R-DSP Server acts as a slave and waits to receive commands from the user's GUI. When the R-DSP Server receives a command, it executes the corresponding procedure. The supported procedures include the loading of the executable code onto the DSK C6713 development platform and the control of both the DSP and the CCS. The R-DSP Server also supports communication with the DSP either by direct access to the DSP memory or by utilizing the RTDX (Real-Time Data eXchange) technology of TI DSPs. When each of the above procedures is completed, the R-DSP Server sends to the GUI a reply TCP/IP message including data and information about the execution of the corresponding procedure.
The above TCP/IP messages are formatted according to the R-DSP Protocol which is based on the MODBUS master/slave protocol. Each of these TCP/IP messages is divided into different fields of various lengths which include binary data or information in ASCII (American Standard Code for Information Interchange) format. The encoding and decoding procedures for these TCP/IP messages can be developed using any programming language. Therefore, the users are able to develop their GUIs with any programming language. More detailed descriptions of both the R-DSP Server and the R-DSP Protocol have been presented in [21].
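As a rough illustration of such a MODBUS-style framed message, the C sketch below packs a command byte and a length-prefixed payload into a byte buffer ready for transmission. The field names, sizes and byte order here are assumptions made for illustration only; the actual R-DSP Protocol field layout is the one specified in [21].

```c
#include <stdint.h>
#include <string.h>

/* Illustrative message frame: one command byte, a 16-bit payload length,
   and a payload carrying binary or ASCII data. These fields are assumed,
   not the real R-DSP Protocol definition. */
typedef struct {
    uint8_t  command;      /* e.g. load code, read memory, RTDX read */
    uint16_t length;       /* payload length in bytes                */
    char     payload[256]; /* binary or ASCII data                   */
} rdsp_msg_t;

/* Serialize the message into buf; returns the number of bytes written.
   The length is stored big-endian, as in MODBUS framing. */
int rdsp_pack(const rdsp_msg_t *m, uint8_t *buf)
{
    buf[0] = m->command;
    buf[1] = (uint8_t)(m->length >> 8);
    buf[2] = (uint8_t)(m->length & 0xFF);
    memcpy(buf + 3, m->payload, m->length);
    return 3 + m->length;
}
```

A matching unpack routine on the R-DSP Server side would read the same three header bytes and then consume exactly `length` payload bytes, which is what makes the stream self-delimiting over TCP.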
This mode of operation was designed to support the conduct of the last two laboratory sessions (Table I) and the student projects of the presented course. In addition, this totally new feature increases the sustainability of the R-DSP Lab because it allows the development of new remote experiments by instructors and students with little effort. The disadvantage of the proposed approach is the increased development complexity of such GUIs. This is due to the fact that the users should write from scratch the appropriate encoding and decoding procedures. For educational purposes, the reduction of the above development complexity is quite critical. In order to support this target, two sets of functions, for Matlab and LabVIEW users respectively, were developed and proposed in [21]. These sets of functions are called the R-DSP Matlab and R-DSP LabVIEW Toolkits and undertake the communication between the user's GUI and the R-DSP Server through TCP/IP messages. Using the functions of the above toolkits, the development of GUIs either with Matlab or LabVIEW is simplified and accelerated, because the end users are not required to know in depth the details of either the R-DSP Protocol or the R-DSP Server.
The operation of the R-DSP Lab in this mode is analytically described in the following section through the verification process of the presented example.
IV. DESIGN, IMPLEMENTATION AND VERIFICATION OF A REAL-TIME DSP APPLICATION
The design and implementation of a real-time DSP application requires several development and verification stages. In order to demonstrate this process from the student's point of view, the online conduct of Lab. 6 is described. According to this laboratory session, the students are asked to design and implement a real-time DTMF (Dual Tone Multi Frequency) encoder/decoder which is controlled through a simple LabVIEW-based GUI developed by them. This laboratory exercise covers the basic principles of the encoding and decoding of DTMF signals, which are used for signaling in telecommunication systems. It also allows the familiarization of students with the RTDX technology of TI DSPs and the development of GUIs with LabVIEW.
The design process begins with understanding the basic principles of the DTMF encoder/decoder in the classroom using the appropriate textbooks [18], [19]. According to the demands of this laboratory session, the specifications of the DTMF encoder/decoder as defined by the ITU (International Telecommunication Union) in [22] should be fulfilled. The selected implementation of the DTMF decoder consists of a filter bank based on the modified Goertzel algorithm [18]. This algorithm is one of the most common and efficient methods for the detection of DTMF signals.
The implementation of the real-time DTMF encoder/decoder is divided into two parts, the DSP application and the GUI. The interaction of these parts and their operation are presented in the flow chart of Fig. 4. During the conduct of this laboratory session the students design and develop the above parts separately. The students start with the design and offline simulation of the DTMF encoder/decoder using Matlab. Through this process, they are able to understand the synthesis of two sinusoidal signals for the production of the DTMF signals and the operation of the Goertzel algorithm. The next development stage requires the implementation of the DTMF encoder/decoder in the C programming language using the CCS IDE. The C code should also contain the necessary procedures for the communication between the DSP application and the GUI which will be developed by the students in the following development stage.
According to this laboratory session, the C code receives the dialed number from the student's GUI and returns the DTMF signal and the decoded number using the RTDX technology. The C code should also utilize the audio codec (coder/decoder) of the DSK C6713 for the input and output of the signals. In this case, the output signal which is the produced DTMF signal becomes the input signal of the decoding procedure using a loopback technique. The operation of the C code which will be loaded to the DSP is also presented in Fig. 4.
The development of such applications by the students is quite demanding, and students with low programming skills have difficulty writing their own code from scratch. In order to facilitate them, guidelines and supplementary material, including example codes in C and/or assembly language, are offered through the available laboratory handouts.
In the following development stage, the students have to design and implement a quite simple GUI which will control and communicate with the above real-time DTMF application. This GUI (Fig. 5) represents a 4x4 telephone keypad which allows the students to dial the desired number and to observe the results of their DSP code. The communication with and control of the DSP application through the GUI are achieved utilizing the features of both the R-DSP Server and the R-DSP Lab. Due to this, each student should integrate into her/his GUI the appropriate functions of the R-DSP LabVIEW Toolkit, which are explained in the available laboratory materials. The operation of the GUI is described in the flow chart of Fig. 4.
In order to verify the DTMF encoder/decoder using the R-DSP Lab, the student should log in and select the appropriate mode of operation. Through this process the reservation of the corresponding Workstation is achieved and the student is able to start the execution of the selected experiment.
Subsequently, she/he activates the implemented GUI, which runs locally on the student's personal computer. When the connection between the GUI and the R-DSP Server is established, the executable code is downloaded to the DSK C6713 development platform of the reserved Workstation and the student remotely controls the DSP application through her/his GUI. In this phase the initialization procedures of both the GUI and the DSP application are completed.
The student using the GUI dials the desired number and observes both the produced DTMF signal and the decoded number. In order to take measurements, the student is able to control the oscilloscope using the R-DSP Lab control environment. This environment is presented in Fig. 6 and is accessible through a common web browser. The real oscilloscope is connected to the line-out of the DSK C6713 and displays the produced DTMF signal (Fig. 7).
V. ASSESSMENT AND EVALUATION
In earlier evaluations the students asked for more access to the hands-on laboratory as a result of the amount of work required for the conduct of the laboratory sessions. In response to this major concern, the authors designed and developed the R-DSP Lab. According to the presented evaluation, 78% of students answered St. A. or M.A. in question 6. These results are significantly improved compared to previous evaluations, in which the corresponding figure was approximately 35%. This fact indicates that the R-DSP Lab allows a better exploitation of students' time. In addition, an essential improvement of students' performance has been recorded by the authors over the last three academic years. Based on discussions with the students, the authors are deeply convinced that the time management of students has a great impact on their performance.
The evaluation of the R-DSP Lab is achieved through questions 7, 8 and 9. As indicated by the results of question 7, most of the students utilized the R-DSP Lab for the conduct of their laboratory sessions. The students' response to question 8 confirms that the R-DSP Lab has a user-friendly graphical environment and, consequently, the acceptance of the R-DSP Lab by the students is attributed to its graphical environment.
Question 1
The course's organization into lectures, laboratory sessions and projects, is satisfactory.
Question 2
The laboratory sessions are well organized and the objectives are well explained.
Question 3
Handouts and reading assignments cover the requirements of the laboratory sessions.
Question 4
The laboratory sessions provided me with a better understanding of DSP concepts learned in the class.
Question 5
The gained experience is very valuable both for my education and future professional carrier.
Question 6
The time schedule for the conduction of the laboratory sessions was satisfying.
Question 7
The R-DSP Lab contributed to the conduction of the laboratory exercises.
Question 8
The R-DSP Lab is user friendly
Question 9
I did not experience any problems using the R-DSP Lab.
Therefore, the graphical environment of an RL is very important because it renders the corresponding RL attractive to students. According to the students' responses to question 9, most of them did not mention any problems during the use of the R-DSP Lab. Only a few problems, mostly associated with the version of the operating system of the students' PCs, were reported, and these were successfully treated by the administrators. In addition, some students mentioned that the time response of the R-DSP Lab is quite long, but it is satisfactory for a web application. This could be improved through a replacement of the laboratory instruments, but in this case the total cost per Workstation would be significantly increased.
From the above evaluation and assessment it can be concluded that the students are strongly satisfied with both the laboratory sessions and the R-DSP Lab. Beyond the results of the above evaluation, the authors noticed a significant improvement in the students' performance compared to previous semesters. In the above period, 85% of students successfully completed the requirements of the System Design with DSPs course. In addition, the average final mark of the students who passed this course approached 8.2 out of 10. According to the records of the R-DSP Lab database, during this period almost 2400 remote experiments were performed by 44 students. The average time spent on the conduct of each remote experiment was about 15 minutes. This time period is quite short due to the fact that the most time consuming procedures, which include the implementation of the DSP applications and GUIs, were performed by the students offline. Consequently, the authors' decision to integrate the R-DSP Lab in the conduct of the laboratory sessions turned out to be a great educational benefit for the students.
VI. CONCLUSIONS
The System Design with DSPs postgraduate course is totally focused on the design and development of real-time signal processing applications which are executed on DSP-based hardware. The laboratory sessions of this course, which are presented in this paper, are based on the DSK C6713 development platform. As opposed to the traditional laboratory sessions which are carried out in hands-on laboratories, the students of this course are able to perform the corresponding experiments remotely using the R-DSP Lab.
This interactive RL allows the students to access the laboratory equipment from a distance twenty-four hours per day without the physical presence of an instructor. The students are able to verify their DSP applications, which are written by them in C and/or assembly programming language, through the control environment of the R-DSP Lab. This environment supports the remote control of both the function generator and the oscilloscope.
A totally new feature of the R-DSP Lab which is proposed in this paper provides the students with the ability to successfully carry out not only the laboratory sessions Lab. 5 and Lab. 6 but also their own projects. According to this feature, the R-DSP Lab users are able to design and develop their own GUIs in order to remotely control their DSP applications, which are executed by the DSK C6713 of an available R-DSP Lab Workstation. This feature is demonstrated in this paper by presenting the implementation and verification processes of the DTMF encoder/decoder (Lab. 6) from the student's point of view.
The assessment and evaluation of both the System Design with DSPs laboratory sessions and the R-DSP Lab are also thoroughly discussed in this paper. According to the evaluation results, these laboratory sessions have a great impact on students' education because they provide students with a better understanding of the theoretical concepts. In addition, the contribution of the R-DSP Lab to the conduct of the laboratory sessions and projects allows a more efficient exploitation of students' time. As a result, an essential improvement of the students' total performance has been recorded.
Future plans include the integration of new laboratory sessions on digital image and video processing, which will be supported by the R-DSP Lab. To this end, the features of the R-DSP Lab will be expanded to support DSP development platforms such as the DSK C6416 and the TMDSVDP6437, which are designed for digital image and video processing applications respectively. In addition, the design and development of new RLs in different areas, such as digital electronics and computer architecture, seems to be a challenge.
The Effects of Coronavirus on Global Relations and Strategy Making in International Arena
This research explores the effects of coronavirus on global relations and strategy making in the international arena. Coronavirus, or Covid-19, is a newly discovered virus whose behaviour is still being examined dynamically in the present world. Morbidity from the virus may keep rising given the rate at which the contagion spreads and the absence of a suitable vaccine (Lipsitch, Swerdlow, & Finelli, 2020). Covid-19 was initially reported from Wuhan, China, and subsequently spread rapidly to become a worldwide pandemic, creating a consequential international situation that has affected many sectors globally, including worldwide policies, scientific progress, the finances of several nations, and relations between countries. Thus, this paper tries to determine the potential effects and consequences of the Covid-19 pandemic at the global level and a few conceivable ways in which it can be controlled efficiently.
Introduction
At present, coronavirus outbreaks are spreading, and high morbidity is threatening the human race (Rothan & Byrareddy, 2020). Once the pandemic is finished, the typical expectation is that there will be extensive universal changes shaping the post-Covid-19 world. Scientists attempt to anticipate some outcomes that might alter the international position after the COVID-19 pandemic. New procedures might also persist at the global level because of the pandemic and disturb worldwide domains (budgets, global relations, world administrations). Moreover, the foremost international authorities should play an accountable role so that the world endures and recovers after Covid-19. Looking at human history, one can find many pandemics since ancient times; pandemics have been documented many times since the beginning of human history. To characterize a pandemic, we could say that a pandemic is a virus outbreak that spreads across nations and continents and whose effects are on an extensive scale. Almost all pandemics start from a small outbreak, escalate into an epidemic, and finally become a pandemic: outbreaks or epidemics that are not well controlled later turn into pandemics. Looking into the statistics of pandemic outbreaks in history, there have been several; some of the outbreaks that affected the most individuals are considered and noted here. Spanish Flu: the episode started in the spring/summer of 1918 and lasted until summer 1919, in almost four waves; the recorded morbidity was 500 million and the mortality was 50+ million (Amanda, O., 2010). Influenza Pandemic: an outbreak in 1957 that lasted until 1958; the recorded mortality was 1 million (Viboud, et al., 2016). SARS: an outbreak between November 2002 and July 2003; morbidity was 8,096 and mortality was 774. Swine Flu Pandemic: an outbreak in January 2009 that lasted until August 2010; laboratory-confirmed cases numbered 491,382, and the
suspected cases were more than 1 billion; the recorded mortality was 18,449 (Chan, 2009). Middle East Respiratory Syndrome Coronavirus: the outbreak started in 2012; recorded morbidity cases were 2,494 and the mortality was 912 (ECDE, 2012). Western Africa Ebola Virus Epidemic: an outbreak in 2013 that lasted until 2016; the recorded morbidity was 28,646 and the mortality was 11,323 (CDC, 2019). These are the six most well-known pandemic outbreaks of the last 100 years of human history. The most casualties occurred in the Spanish Flu, in which 50 million individuals died and 500 million were infected. The main reasons for this toll were the lack of appropriate health care services and the negligence of the colonial powers ruling numerous nations (the pandemic started precisely after World War I), which caused many deaths worldwide. If the pandemics after the Spanish Flu are considered, one can notice a general decline in casualties, largely because of progressive research and improved techniques for handling pandemics. Moreover, one can also note that the last identified pandemic, the Swine Flu Pandemic of the previous decade, was reasonably well controlled even though it developed into a worldwide pandemic.
Theoretical Context

Fighting pandemics
It is often noted that, due to the outbreak of an epidemic disease in a particular country, the government and authorities take different steps so that the outbreak does not turn into a widespread epidemic. If it nevertheless spreads within a country, the global community and administrations consider how it should be tackled if it turns into a pandemic. Some steps that can be taken in such a situation are:
1. Care and cleanliness procedures are taken into account by governments.
2. Creating awareness of the specific pandemic virus.
3. Making sure that local people follow the awareness guidance.
4. Researching treatment choices for the people who become victims of the pandemic virus.
5. Establishing sufficient health care facilities for people.
6. Collecting and maintaining enough assets and health care facilities for small and emerging nations.
7. Arranging for the development of a vaccine immediately.
8. Publicising the vaccine.
9. Trying to reduce the ailment and death rates by using the vaccine.
It was observed that most epidemics were brought under control by the steps mentioned earlier. However, much mismanagement and mishandling were noticed in China, and in the international arena, as soon as the coronavirus broke out in Wuhan in November 2019. Because of mishandling by local people and by the governments of Europe (as soon as COVID-19 reached Europe), the coronavirus spread worldwide. One reason for the increase in China's Covid-19 cases was that its government did not take sufficient precautionary steps to control the outbreak. It also tried to hide critical information related to the coronavirus from people, which became another reason for the virus's spread. Italy was considered the epicentre of, and the reason for, the coronavirus outbreak all over Europe; it was reported that Italy's people did not take the same precautionary steps as the people of China, which became the cause of Italy's high number of cases. A comparable pattern was observed in the USA and the UK, where the public authorities did not take appropriate measures or issue rules; as a result, cases drastically expanded in both countries. By contrast, nations like Slovenia and Israel were seen to handle the pandemic well, taking suitable measures to ensure that people were not mass-infected by the coronavirus. At present, Slovenia is perceived as a Covid-19-free country in Europe. Israel has consistently prepared itself for a pandemic because of the Israeli government's advancement of its biological warfare division and its research in the life sciences.
However, India began a lockdown early, and its governments did a decent job at the state and central levels given the enormous population. Even though it is working hard to control the pandemic, there have been a few instances of mismanagement in some areas of India; in general, India's performance in the coronavirus pandemic has been acceptable. What can be said from this is that there was worldwide cooperation among the nations of the West in battling Covid-19. Nevertheless, there was mismanagement on the Chinese side, which allowed this infection to turn into a pandemic. Non-collaboration on sharing scientific data has been hazardous for managing the pandemic and has contributed to this catastrophe.
Impact of Covid-19 on Different Aspects
Coronavirus affects human disease, death, and morbidity rates. Many different perspectives on the Covid-19 outbreak must also be considered, affecting both the short term and the long run. Several analysts suggest a new world order, which is certainly conceivable; however, these impacts will not be seen immediately but will emerge over the long haul. In the short term, people will notice visible results such as an increase in poverty, financial effects, and new scientific developments; over a long period, worldwide policy changes may be realised post-pandemic. It has been noticed that the people who had the toughest time and suffered the most during the lockdown phase were the poor of established, emerging, and small countries alike, and the same holds at the state level. The three countries with the largest numbers of people driven into poverty by the pandemic are India with 12 million people, Nigeria with around five million, and the Democratic Republic of Congo with almost two million (Daniel & Christophr, 2020). Nations such as Indonesia, South Africa, and China are likewise estimated to have more than 1,000,000 individuals driven into extreme poverty due to the coronavirus. When considering the pandemic's effect on higher poverty lines, for instance the number of individuals living under $3.20 or $5.50 every day, more than 100 million individuals will be driven into poverty. Latin America, East Asia and the Pacific, the Caribbean, and the Middle East and North Africa are each anticipated to have at least 10 million more individuals living under $5.50 every day (BBC, 2020). It is assessed that the finances of the USA and the EU will take around 5-6 years to recover (Irfan, 2020). Another apparent prospect is that the People's Republic of China will attempt to exert monetary compulsion, which several nations have not welcomed. However, the reality cannot be overlooked that China has tremendous trade reserves and a large share of world trade oriented towards itself. Entering into trade restrictions with the PRC may prompt a massive downturn in the EU and US, which might worsen the situation. A few steps for restricting Chinese influence on the worldwide economy are to limit Chinese investments in worldwide business sectors and to move production units from China to other countries (India being a recent example of such a shift). Such measures can restore nations' monetary health and halt the monopolisation of world finances by the People's Republic of China.
According to a United Nations report on food crises (2020), nations are urged to act with urgency to avoid a starvation emergency amid shortages of equipment, medications, and skilled workers. The World Food Programme (WFP) is giving funds and assets to almost 100 million individuals. The report assesses that around 0.3 million individuals could starve every day for more than a quarter of a year without help. It states that some 265 million people are near starvation, double the 135 million reported earlier; almost 130 million joined this list because of the pandemic. The people of several countries are at a higher risk of famine. WFP funding increased steadily between 2016 and 2018: the revenue in 2016 was $5.3 billion, which rose to $6.5 billion in 2018. However, this year WFP has asked the world's leaders for an extra two billion dollars to provide assets to the nations vulnerable to the Covid-19 pandemic. If these reports are given enough attention and acted upon appropriately, starvation post-Covid-19 can be averted in many countries (France24, 2020).
Vaccine Developments and Treatments
When the coronavirus outbreak occurred, there were earnest endeavours to create effective treatments to battle Covid-19 and, above all, to make a vaccine, which is exceptionally important in containing and monitoring the pandemic. Work on producing a successful vaccine has been ongoing, and there is a race between nations to make a vaccine and deploy it as quickly as time permits. Hydroxychloroquine has been used as a treatment for Covid-19 at a significant scale; however, no effective vaccine has been available. There have been endeavours to make a vaccine in the USA, the United Kingdom, and Israel, alongside critical research efforts from India, Australia, and other nations, including France. The American-based organisation Moderna has succeeded in entering phase II, as the preliminary outcomes of its vaccine candidate, mRNA-1273, have been found very promising, and a vaccine may arrive before the end of 2020 or by the start of 2021 (WHO, 2021). International collaboration in Covid-19 research on vaccines and treatment is essential for enabling nations to tackle this pandemic in the minimum time possible.
Research Methodology
This research used a qualitative method of analysis to explore the effects of Covid-19 on global relations and strategy making in the international arena, and to draw conclusions about the global role in fighting pandemics. The method examines the problem and often requires a review of official documents, mainly by the World Health Organization, official releases related to vaccine developments and treatments, and articles related to Covid-19. Perhaps the most notable characteristic of the qualitative method and its sub-parts is that it assumes the possibility of multiple interpretations, each arising from a unique point of view. Secondary data have been used to achieve the research purpose; the secondary data were obtained from official reports and academic sites.
The current research results are considered reliable because we explored official documents, mainly by the World Health Organization. The reliability of results concerns whether they would be consistent if the research were re-examined with the same data and method; thus, reliability is of undeniable importance in the analyses, as it is more evident when results are consistent. In this specific research, we explored the effects of Covid-19 on global relations and strategy making in the international arena, along with the worldwide role in combating pandemics; secondary data of this type are often very reliable.
Analyzing the Effects of Covid-19 on Global Relations
In this research, the Covid-19 pandemic is viewed as a crucial turning point for global relations. Many individuals foresee that Covid-19 will be marked as the starting point of a new world order (Anuraag, 2020). Notably, massive changes do not happen overnight; they are created gradually over time, and relations are established accordingly. Some critical perspectives that have affected global relations are discussed below.

The part of China, its relations with the USA, India, Australia and the European Union, and its global role

Notably, Covid-19 started in China as a small outbreak and later, because of its mishandling, turned into a worldwide pandemic. China made some massive mistakes by not sharing legitimate data about the virus, which prompted an immense emergency. China is also taking the opportunity to make investments in worldwide business sectors in the EU, straining relations with the EU and the USA. Moreover, it is spreading propaganda through its mass media towards the worldwide press. This has drawn criticism from the international community of China's opportunistic plans to gain economic influence in the pandemic situation. In any case, a significant factor cannot be overlooked: the EU and USA host gigantic Chinese investments, and straining relations with China could be a huge loss for these nations. The steps that could be taken to counter this involve moving towards, and concentrating on, other nations for financial exchange; this would create an equilibrium for the world economy and reduce dependence on China. Likewise, there have been hostile moves by the Chinese military on its eastern borders and towards its western neighbour (India), and China has angered some of its Central Asian neighbours, as a Chinese site suggested that Central Asian nations like Kyrgyzstan and Kazakhstan had been an integral part of China, with Kazakhstan considered "anxious to return to China" (EagleSpeak,
2020). These Chinese moves have largely provoked the fury of the international community over such acts.
Role of World Health Organization
The role of the WHO in handling this pandemic has been poor, because it did not take vital steps to monitor the pandemic in suitable time. In this manner, the failure can partly be blamed on the WHO as well. Therefore, changes ought to be made in the WHO so that other pandemics can be avoided in the future. Furthermore, the coronavirus exposed loopholes even in established nations with better medical care and infrastructure; hence, countries ought to prepare themselves better for controlling the pandemics to come.
Conclusion
This research explored the effects of coronavirus on global relations and strategy making in the international arena. In this regard, the researcher attempted to cover all perspectives of the ongoing pandemic's impact on global issues. Considering the present impression of the situation created by Covid-19, the previously mentioned observations may help in understanding foreign relations post-pandemic. We have both certainty and uncertainty concerning the world in the coming year(s). In this research, we recognise that the pandemic will remain with us until 2021 and perhaps beyond. We additionally recognise that the pandemic does not fundamentally alter several trends that preceded it, including a strained worldwide climate, the weaponization of trade, a fear for globalization and democracy, and an increase in disinformation. Be that as it may, we have some uncertainty regarding the choices leaders will make in this unique situation, both within Europe and elsewhere.
Therapeutic biliary and pancreatic endoscopy in Qatar - a five year retrospective audit
Endoscopic Retrograde Cholangiopancreatography (ERCP) is an advanced endoscopic procedure in which a specialized side viewing duodenoscope is passed into the duodenum, allowing accessories to be pushed via biliary or pancreatic ducts for diagnostic and therapeutic intervention. It is one of the most complex endoscopic procedures, requiring specialized equipment and proficient and skilled operators and assistants. Today therapeutic ERCP is the intervention of choice for many pancreaticobiliary disorders.
The Division of Gastroenterology & Endoscopy under the Department of Medicine has been performing ERCPs since the inception of the Endoscopy Unit in Hamad Medical Corporation (HMC), and it is the only endoscopy unit in Qatar performing ERCPs. We performed a retrospective audit of ERCPs performed over the last 5 year period from January 2006 to December 2010 in our Endoscopy unit.
OBJECTIVES
Our aim was to audit the indications, findings, therapeutic interventions carried out, safety profile and technical success of endoscopic retrograde cholangiography (ERCP) carried out, during a 5 year period from January 2006 to December 2010.
METHODS
Our data base of electronic and paper based ERCP reports between January 2006 and December 2010 was searched. Additional information if required was obtained from patient chart review from medical records. All ERCPs were performed by two experienced operators, during the entire study period. An ethical approval was obtained from the Research Committee of Hamad Medical Corporation.
RESULTS
A total of 621 ERCP procedures were carried out on 456 patients over the 5 year study period. The mean age of the patients undergoing ERCP was 51.67 years, with a male:female ratio of 384:237 (1.6:1) (Table 1). The pre-procedure indication for ERCP was predominantly biliary (82.4%), the vast majority being calculous biliary disease (Table 2). Pancreatic indications were few (4.6%). The miscellaneous indications included biliary complications in post liver transplant patients. The major reason for all biliary interventions was calculous biliary disease, requiring clearance of common bile duct stones, post laparoscopic cholecystectomy biliary leak, biliary strictures and palliation of biliary obstruction due to malignancy.
Biliary interventions formed the majority (73.1%) of all therapeutic interventions, comprising papillotomy, common bile duct stone clearance with balloon or basket, and biliary stenting for various indications (see Table 3). Pancreatic therapeutic interventions were few. Forty-eight patients had incomplete procedures for diverse reasons, which included failed cannulation. Our initial cannulation success was 91.62%. In 52 patients we failed to achieve deep biliary cannulation at the initial attempt; however, on reattempts 24-72 hours after the initial endoscopy, we succeeded in 27 of these patients, giving an overall cannulation success rate of 95.97%. The major complications encountered are listed in Table 4.
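The reported rates can be checked against the raw counts. The following sketch (figures taken from the text above, assuming all 621 procedures form the denominator, as the text implies) confirms the arithmetic:

```python
# Consistency check of the cannulation rates reported above, assuming
# all 621 procedures form the denominator (as the text implies).
total_procedures = 621
initial_failures = 52      # failed deep biliary cannulation at first attempt
reattempt_successes = 27   # succeeded on reattempt 24-72 h later

initial_success = (total_procedures - initial_failures) / total_procedures
overall_success = (total_procedures - initial_failures + reattempt_successes) / total_procedures

print(f"initial: {initial_success:.2%}")   # ~91.6%, matching the reported 91.62% up to rounding
print(f"overall: {overall_success:.2%}")   # 95.97%, matching the reported figure
```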
DISCUSSION
Our centre, the only endoscopy unit performing ERCPs in the State of Qatar, has moderate volumes of activity in therapeutic biliary and pancreatic endoscopy, with around 100 to 120 ERCPs per year. 1 Adequate volumes are needed to maintain proficiency and skills, particularly for individual endoscopists performing more than 40 endoscopic sphincterotomies per year. 2 Fatal outcomes have been reported, as mentioned in a study 8 which looked at claims for compensation from ERCP-related complications; among the nine fatal cases, the procedure was diagnostic in six, which were potentially avoidable ERCPs. Clearance of common bile duct stones 4 and management of post-operative bile leaks formed the major indications for our biliary intervention. 9 We have undertaken ERCP as the first-line therapy for the management of postoperative bile leaks, as this is the accepted standard of care today. 9 ERCP was performed in the work-up of idiopathic recurrent acute pancreatitis in 8 patients. 10 Palliation of malignant biliary obstruction formed another important indication for biliary drainage. 11 Placement of a bridging or transpapillary pancreatic stent for pancreatic leak due to pancreatic duct disruption from trauma has been yet another indication for pancreatic endotherapy. 12 ERCP today is accepted as a tool for drainage of symptomatic pancreatic pseudocysts, 13 and we have performed successful pseudocyst drainage in a few patients. Biliary complications in our post liver transplant patients constituted another indication for biliary endotherapy; the commonest indication for ERCP in post-transplant patients was biliary strictures. 14 Though hyperamylasemia is common after ERCP, occurring in up to 75% of patients, acute clinical pancreatitis, defined as a clinical syndrome of abdominal pain and hyperamylasemia requiring hospitalization, is much less common. In our audit there were 5 patients (1.2%) who developed severe clinical pancreatitis, of whom one died.
It has been our practice to avoid repeated pancreatic duct instrumentation or guide wire passage and to perform limited pancreatic duct injections, which are well-known risk factors for post-ERCP pancreatitis. 15 Post-ERCP acute pancreatitis can be graded as mild, moderate or severe based on the consensus definition. 16 Mild pancreatitis is defined as serum amylase at least 3 times the normal level 24 hours after the procedure, requiring admission or prolongation of a planned admission by 2 to 3 days; moderate pancreatitis is severe enough to require hospitalization of 4 to 10 days; and severe pancreatitis requires hospitalization for more than 10 days, or involves a phlegmon or pseudocyst requiring percutaneous intervention or surgery. The incidence of acute pancreatitis has been estimated in several large clinical trials, and most studies demonstrate a rate of 4 to 5%. However, our incidence of severe clinical pancreatitis of 1% may be an underestimate since, this being a retrospective audit, only severe cases of pancreatitis requiring prolonged in-hospital stay were documented. Low complication rates could also be attributed to the fact that all procedures were performed by two experienced operators during the entire study period; operator experience and volumes are major factors determining outcomes and complications in ERCP. 17 Bleeding was the most dreaded complication when therapeutic biliary interventions were first introduced. 18 Because of advances in equipment and greater experience, it has become a relatively uncommon complication of ERCP and is mostly reported only after sphincterotomy. Post-ERCP bleeding can be graded as mild, moderate or severe based on the consensus definition. 16 Mild bleeding is clinical evidence of bleeding (not just endoscopic) with a hemoglobin (Hb) drop of less than 3 gram % and without the need for transfusion.
Moderate bleeding is defined as bleeding requiring transfusion of 4 units or less, but with no angiographic intervention or surgery. Severe bleeding is deemed to have occurred when the transfusion requirement is 5 units or more, or when intervention by angiography or surgery is required to control bleeding. We had 7 cases of bleeding (1.5%). Two patients with a clinical bleed did not require blood transfusion, had an Hb drop of less than 3 gram %, and were managed with local epinephrine injection alone; these were deemed mild bleeding. Another 5 patients required blood transfusion with local injection and heater probe therapy, of whom 3 achieved haemostasis and were considered to have had a moderate post-ERCP bleed. The other 2 patients failed to achieve haemostasis with local measures, remained transfusion dependent, and eventually required surgery or angioembolisation; they were categorized as severe post-ERCP bleeders. Minor episodes of bleeding with an Hb drop of less than 1 gram % and good haemostasis with local therapy were not considered a significant complication in our analysis. Our practice ensured a platelet count above 80,000/cc and an International Normalized Ratio (INR) below 1.2 as a requirement for all ERCPs, which reduced complications from bleeding. 19 We had 1 incident of suspected perforation, which was managed conservatively.
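The consensus severity grading described in the two preceding paragraphs can be encoded as simple decision rules. The sketch below is illustrative only; the function names and input fields are assumptions for this example, not part of the audit's own tooling:

```python
# Hedged sketch: the consensus severity grading for post-ERCP pancreatitis
# and bleeding, as described in the text. Names and inputs are illustrative.

def grade_pancreatitis(extra_hospital_days: int, needs_intervention: bool) -> str:
    """Mild = admission prolonged 2-3 days; moderate = 4-10 days;
    severe = >10 days or phlegmon/pseudocyst requiring percutaneous
    intervention or surgery."""
    if extra_hospital_days > 10 or needs_intervention:
        return "severe"
    if extra_hospital_days >= 4:
        return "moderate"
    return "mild"

def grade_bleeding(units_transfused: int, needs_surgery_or_angio: bool) -> str:
    """Mild = clinical bleed, no transfusion; moderate = transfusion of
    <=4 units without angiography/surgery; severe = >=5 units or
    angiographic/surgical intervention."""
    if units_transfused >= 5 or needs_surgery_or_angio:
        return "severe"
    if units_transfused >= 1:
        return "moderate"
    return "mild"
```

By this encoding, the audit's two patients who ultimately required surgery or angioembolisation grade as severe, while the three transfusion-dependent patients controlled with local measures grade as moderate.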
We had an overall selective cannulation success rate of over 95%. We practice wire-guided cannulation of the bile duct, which has improved success rates for selective biliary cannulation and reduced the incidence of post-ERCP pancreatitis. 20,21 Limiting the privileges for therapeutic ERCP to two expert operators has probably kept the major complication rates low, despite our moderate volumes. In conclusion, our audit of ERCPs indicates judicious volumes of mostly biliary therapeutic interventions for diverse indications, with technical success and low complication rates at par with accepted international standards.
A publicly accessible database for Clostridioides difficile genome sequences supports tracing of transmission chains and epidemics
Clostridioides difficile is the primary infectious cause of antibiotic-associated diarrhea. Local transmissions and international outbreaks of this pathogen have been previously elucidated by bacterial whole-genome sequencing, but comparative genomic analyses at the global scale were hampered by the lack of specific bioinformatic tools. Here we introduce a publicly accessible database within EnteroBase (http://enterobase.warwick.ac.uk) that automatically retrieves and assembles C. difficile short-reads from the public domain, and calls alleles for core-genome multilocus sequence typing (cgMLST). We demonstrate that comparable levels of resolution and precision are attained by EnteroBase cgMLST and single-nucleotide polymorphism analysis. EnteroBase currently contains 18 254 quality-controlled C. difficile genomes, which have been assigned to hierarchical sets of single-linkage clusters by cgMLST distances. This hierarchical clustering is used to identify and name populations of C. difficile at all epidemiological levels, from recent transmission chains through to epidemic and endemic strains. Moreover, it puts newly collected isolates into phylogenetic and epidemiological context by identifying related strains among all previously published genome data. For example, HC2 clusters (i.e. chains of genomes with pairwise distances of up to two cgMLST alleles) were statistically associated with specific hospitals (P<10−4) or single wards (P=0.01) within hospitals, indicating they represented local transmission clusters. We also detected several HC2 clusters spanning more than one hospital that by retrospective epidemiological analysis were confirmed to be associated with inter-hospital patient transfers. In contrast, clustering at level HC150 correlated with k-mer-based classification and was largely compatible with PCR ribotyping, thus enabling comparisons to earlier surveillance data. 
EnteroBase enables contextual interpretation of a growing collection of assembled, quality-controlled C. difficile genome sequences and their associated metadata. Hierarchical clustering rapidly identifies database entries that are related at multiple levels of genetic distance, facilitating communication among researchers, clinicians and public-health officials who are combatting disease caused by C. difficile .
INTRODUCTION
The anaerobic gut bacterium Clostridioides difficile (formerly Clostridium difficile) [1] is the primary cause of antibiotic-associated diarrhea in Europe and North America [2]. Molecular genotyping of C. difficile isolates has demonstrated international dissemination of diverse strains through healthcare systems [3][4][5], the community [6] and livestock production facilities [7,8]. Previously, genotyping was commonly performed by PCR ribotyping or DNA macrorestriction. More recent publications have documented that genome-wide single-nucleotide polymorphisms (SNPs) from whole-genome sequences provide improved discrimination, and such analyses have enabled dramatic progress in our understanding of the emergence and spread of epidemic strains [9][10][11][12] and the epidemiology of local transmission [13,14]. Eyre and colleagues have argued that transmission of C. difficile isolates within a hospital environment can be recognized with high probability as chains of genomes that differ by up to two SNPs, whereas genomes that differ by at least ten genomic SNPs represent unrelated bacteria [13,15]. However, SNP analyses require sophisticated bioinformatic tools and are difficult to standardize [16,17]. A convenient alternative to SNP-based genotyping is offered by the commercial software SeqSphere, which implements a core-genome multilocus sequence typing scheme (cgMLST) for the analysis of genomic diversity in C. difficile [18] and other organisms. Indeed, cgMLST [18] confirmed the prior conclusion from genomic SNP analyses [19] that a common clone of C. difficile had been isolated over two successive years at a hospital in China [18]. However, a recent quantitative comparison of the two methods showed that SeqSphere's cgMLST achieved a low predictive value (41%) for identifying isolate pairs that were closely related by the ≤2 SNPs criterion [20].
cgMLST of genomic sequences of a variety of bacterial pathogens can also be performed with EnteroBase (http://enterobase.warwick.ac.uk/), which has been developed over the last few years with the goal of facilitating genomic analyses by microbiologists [21]. EnteroBase automatically retrieves Illumina short-read sequences from public short-read archives. It uses a consistent assembly pipeline to automatically assemble these short-reads into draft genomes consisting of multiple contigs, and presents the assembled genomes together with their metadata for public access [22]. It also performs the same procedures on sequencing data uploaded by its registered users. Assembled genomes that pass quality control are genotyped by MLST at the levels of seven-gene MLST, ribosomal MLST (rMLST), cgMLST and whole-genome MLST (wgMLST) [21,22]. EnteroBase supports subsequent analyses based on either SNPs or cgMLST alleles using the GrapeTree or Dendrogram visualization tools [23]. EnteroBase also assigns these genotypes to populations by hierarchical clustering (HierCC), which supports the identification of close relatives at the global level [22]. Originally, EnteroBase was restricted to the bacterial genera Salmonella, Escherichia, Yersinia and Moraxella but since January 2018, EnteroBase has included a database for genomes and their metadata for the genus Clostridioides. In June 2020, EnteroBase contained 18 254 draft genomes of C. difficile plus one genome of C. mangenotii. These included over 900 unpublished draft genomes that were sequenced at the Leibniz Institute DSMZ, as well as 80 complete genome sequences based on Pacific Biosciences plus Illumina sequencing technologies. It also included 862 unpublished draft genomes that were sequenced at the Wellcome Sanger Institute.
Here we show that comparable levels of resolution and precision are attained by EnteroBase cgMLST as by SNP analyses. We also summarize the genomic diversity that accumulated during recurring infections within single patients as well as transmission chains within individual hospitals and between neighbouring hospitals in Germany, and show that it can be detected by HierCC. We also demonstrate that HierCC can be used to identify bacterial populations at various epidemiological levels ranging from recent transmission chains through to epidemic and endemic spread, and relate these HierCC clusters to genotypes that were identified by PCR ribotyping and k-mer-based diversity analysis. These observations indicate that cgMLST and HierCC within EnteroBase can provide a common language for communications and interactions by the global community that is combatting disease caused by C. difficile.
Impact Statement
Clostridioides difficile is a major cause of healthcare-associated diarrhea and causes large infection outbreaks. Whole-genome sequencing is increasingly applied for genotyping C. difficile, with the objectives to monitor and curb the pathogen's spread. We present a publicly accessible database for quality-controlled genome sequences from C. difficile that enables contextual interpretation of newly collected isolates by identifying related strains among published data. It also provides a nomenclature for genomic types to facilitate communication about transmission chains, epidemics and phylogenetic lineages. Finally, we demonstrate that genome-based hierarchical clustering is largely compatible with previously used molecular typing techniques, thus enabling comparisons to earlier surveillance data.
Implementation of MLST schemes in EnteroBase
cgMLST in EnteroBase consists of a defined subset of genes within a whole-genome MLST scheme that represents all single-copy orthologues within the pan-genome of a representative set of bacterial isolates. To this end, we assembled the draft genomes of 5232 isolates of C. difficile from public short-read archives, and assigned them to ribosomal sequence types (rSTs) according to rMLST, which indexes diversity at 53 loci encoding ribosomal protein subunits on the basis of existing exemplar alleles at PubMLST [24]. We then created a reference set of 442 genomes consisting of one genome of C. mangenotii [1], 18 complete genomes from GenBank, 81 closed genomes from our work and the draft genome with the smallest number of contigs from each of the 343 rSTs (https://tinyurl.com/Cdiff-ref). The Clostridioides pan-genome was calculated with PEPPA [25] and used to define a wgMLST scheme consisting of 13 763 genetic loci (http://enterobase.warwick.ac.uk/species/clostridium/download_data). EnteroBase uses the wgMLST scheme to call loci and alleles from each assembly, and extracts the allelic assignments for the subsets corresponding to cgMLST, rMLST and seven-gene MLST from those allelic calls [22]. The cgMLST subset consists of 2556 core genes, which were present in ≥98% of the reference set, intact in ≥94% and were not excessively divergent (Fig. 1).
Comparison of cgMLST and SNPs for analyses of transmission chains
We compared the numbers of cgMLST allelic differences and the numbers of non-recombinant SNPs in isolates from multiple epidemiological chains. These included 176 isolates from four patients with recurring CDI (C. difficile infection), 63 isolates from four transmission chains in multiple hospitals [14,19,26], and a comprehensive sample of 1158 isolates collected over several years in four hospitals in Oxfordshire, UK [13]. A strong linear relationship (R², 0.71-0.93) was found in all three analyses between the pairwise differences in cgMLST alleles and non-recombinant SNPs (Fig. S1, available in the online version of this article). The slope of the regression lines was close to 1.0, indicating a 1:1 increase in cgMLST allelic differences with numbers of SNPs. The same data were also investigated with cgMLST calculated with the commercial program SeqSphere [18], with similar correlation coefficients but a lower slope due to lesser discriminatory power of the SeqSphere cgMLST scheme (lower panels in Fig. S1).
[Fig. 1 caption: The genetic diversity was calculated using the GaussianProcessRegressor function in the sklearn module in Python. This function calculates the Gaussian process regression of the frequency of genetic variants on gene sizes, using a linear combination of a radial basis function kernel (RBF) and a white kernel [57]. The shadowed region shows a single-tailed 99.9% confidence interval (≤3 sigma) of the prediction. Altogether, 2556 loci fell within this area and were retained for the cgMLST scheme, while four were excluded due to excessive numbers of alleles.]
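The core-gene selection for the cgMLST scheme (Fig. 1) regresses the frequency of genetic variants on gene size by Gaussian process regression (RBF plus white-noise kernel) and retains loci below a single-tailed 3-sigma bound. A minimal sketch of that filtering idea on synthetic data (all numbers here are invented for illustration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
gene_len = rng.uniform(200.0, 3000.0, 400)                  # gene sizes in bp (synthetic)
var_freq = 1e-5 * gene_len + rng.normal(0.0, 0.004, 400)    # variant frequency per locus

# Regress variant frequency on gene size with an RBF + white-noise kernel,
# as described in the Fig. 1 legend.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1000.0) + WhiteKernel(),
                               normalize_y=True)
gpr.fit(gene_len.reshape(-1, 1), var_freq)

mean, sd = gpr.predict(gene_len.reshape(-1, 1), return_std=True)
keep = var_freq <= mean + 3.0 * sd   # single-tailed ~99.9% bound (<=3 sigma)
print(f"retained {keep.sum()} of {keep.size} loci")
```

Loci whose variant frequency lies above the upper prediction bound are flagged as excessively divergent and dropped, mirroring the four loci excluded from the 2556-gene scheme.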
Eyre et al. [13] concluded that direct transmission between two hospital patients can be detected because their bacterial genomes differ by two SNPs or less. Our analysis indicated that these transmission chains in the Oxfordshire dataset would also have been recognized by cgMLST in EnteroBase. Genomes that differed by two cgMLST alleles usually also differed by ≤2 SNPs according to a binary logistic regression model (probability=89%; 95% confidence interval, 88-89%) (Fig. 2). Of 3807 pairs of genomes with ≤2 allelic differences, 3474 also differed by ≤2 SNPs, yielding a positive predictive value of 91% for identifying isolate pairs with ≤2 SNPs by EnteroBase cgMLST and a sensitivity of 62% (≤2 cgMLST allelic differences were found in 3474 of 5707 pairs with ≤2 SNPs). The comparable values for SeqSphere were 78% positive predictive value and 99% sensitivity.
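The probability estimate and the predictive values above follow from a binary logistic regression of "pair differs by ≤2 SNPs" on the cgMLST allelic distance. A sketch of that calculation on simulated genome pairs (the data are invented; only the procedure mirrors the paper's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Simulated genome pairs: allelic distance, and whether the pair is <=2 SNPs apart.
allele_diff = rng.integers(0, 40, 3000)
snp_diff = np.maximum(0, allele_diff + rng.normal(0, 2, 3000)).round()
close_by_snps = (snp_diff <= 2).astype(int)   # binary response, as in Fig. 2

model = LogisticRegression().fit(allele_diff.reshape(-1, 1), close_by_snps)
p_at_2 = model.predict_proba([[2]])[0, 1]     # P(<=2 SNPs | 2 allelic differences)

# Positive predictive value and sensitivity of the "<=2 alleles" rule.
pred = allele_diff <= 2
ppv = (pred & (close_by_snps == 1)).sum() / pred.sum()
sens = (pred & (close_by_snps == 1)).sum() / close_by_snps.sum()
print(f"P(<=2 SNPs | 2 alleles) = {p_at_2:.2f}, PPV = {ppv:.2f}, sensitivity = {sens:.2f}")
```

With the real Oxfordshire data this yields the 89% probability and the 91%/62% PPV/sensitivity quoted above; the synthetic values here are arbitrary.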
We also compared the genetic distances between 242 genomes from Oxfordshire, which had been isolated during the initial 6 months, and 916 genomes from the actual testing period (April 2008 to March 2011) [13]. Overall, 35% (318/916) of the latter genomes matched at least one genome collected earlier by two or fewer EnteroBase cgMLST alleles and 34% (316/916) matched an earlier genome by ≤2 SNPs. The two sets of genomes were 89% concordant. Thus, cgMLST is equivalent to SNP analysis for detecting inter-patient transmission chains.
Hierarchical clustering for tracing local and regional spread
SNP analyses are computer intensive, and are only feasible with limited numbers of genomes [27]. cgMLST-based relationships can be analysed for up to 100 000 genomes with GrapeTree, but analyses involving more than 10 000 genomes remain computer intensive [23]. EnteroBase implements single-linkage hierarchical clustering (HierCC V1) of cgMLST data in pairwise comparisons at multiple levels of relationship after excluding missing data [22]. These are designated as HC0 for hierarchical clusters of indistinguishable core-genome sequence types (cgSTs), HC2 for clusters with pairwise distances of up to two cgMLST alleles, etc. EnteroBase presents cluster assignments for C. difficile at the levels of HC0, HC2, HC5, HC10, HC20, HC50, HC100, HC150, HC200, HC500, HC950, HC2000 and HC2500. Here we address the nature of the genetic relationships that are associated with these multiple levels of HierCC among 13 515 publicly available C. difficile genomes, and examine which levels of pairwise allelic distances correspond to epidemic outbreaks and to endemic populations.
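Single-linkage clustering at fixed allelic-distance cut-offs, with missing data excluded pairwise, can be sketched as follows (toy profiles; the real HierCC implementation in EnteroBase is described in [22]):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Toy cgMLST allelic profiles: rows = genomes, columns = loci,
# allele numbers as integers, with -1 marking a missing allele call.
profiles = np.array([
    [ 1, 1, 2, 3, 1],
    [ 1, 1, 2, 3, 2],   # one allele away from genome 0
    [ 1, 1, 2, 3, 2],   # indistinguishable from genome 1
    [ 5, 4, 9, 8, 7],   # unrelated
    [-1, 1, 2, 3, 2],   # missing call at locus 0, otherwise like genome 1
])

def allelic_distance(a, b):
    shared = (a != -1) & (b != -1)   # missing data excluded pairwise
    return np.sum(a[shared] != b[shared])

Z = linkage(pdist(profiles, metric=allelic_distance), method="single")
hc0 = fcluster(Z, t=0, criterion="distance")   # indistinguishable cgSTs
hc2 = fcluster(Z, t=2, criterion="distance")   # <=2 allelic differences
print("HC0:", hc0, "HC2:", hc2)
```

In this toy example genomes 1, 2 and 4 form one HC0 cluster (the missing locus is ignored), genomes 0-2 and 4 merge at HC2, and the unrelated genome stays apart at both levels.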
In our analyses of 176 C. difficile isolates from four patients with two recurrent episodes of CDI, multiple genomes were assigned to patient-specific HC2 clusters, some of which were isolated from the initial episode as well as the recurrence 80-153 days later (Fig. 3, patients D, F and G; 4 to 36 isolates had been collected per episode; Table S1). For these patients, relapsing disease likely reflected continued colonization after initially successful therapy. However, some isolates from patient F differed by 12-21 cgMLST alleles from the bulk population (Fig. 3), which indicates that the patient was co-infected simultaneously with multiple related strains. In patient E, the two genomes from the two CDI episodes differed by >2000 allelic differences (Fig. 3), which indicates that the second incident of CDI represented an independent infection with an unrelated strain. Hence, discrimination between relapse and reinfection based on cgMLST appears to be straightforward, except that two episodes of CDI might arise by reinfection with identical strains from an environment that is heavily contaminated with C. difficile spores [28]. We note that the time intervals (16-22 weeks) investigated here exceeded the currently recommended threshold of 8 weeks for surveillance-based detection of CDI relapses [29,30] but still yielded almost identical strains in three of four patients.
[Fig. 2 caption: Binary logistic regression model to determine the probability that two genomes are related at ≤2 SNPs, given a certain difference in their cgMLST allelic profiles, based on the Oxfordshire dataset [13]. The number of SNPs was encoded as a binary dependent variable (1 if ≤2 SNPs, 0 if otherwise) and the number of allelic differences was used as a predictor variable.]
Our examinations of multiple local outbreaks have revealed individual, outbreak-specific HC2 clusters. However, it is also conceivable that multiple HC2 clusters might be isolated from a single epidemiological outbreak due to the accumulation of genetic diversity over time. Alternatively, multiple HC2 clusters within a single outbreak may represent the absence of crucial links due to incomplete sampling. Incomplete sampling of outbreaks is not unlikely because asymptomatic patients are only rarely examined for colonization with C. difficile [31][32][33] even though they may constitute an important reservoir for transmission. Indeed, some of the outbreaks investigated here did consist of more than one HC2 cluster (Fig. 4). For example, nine isolates from a recently reported ribotype 018 (RT018) outbreak in Germany [26] encompassed four related HC2 clusters, and outbreaks with RT027 and RT106 in a hospital in Spain [14] were each affiliated with two or three HC2 clusters (Fig. 4).
We identified 23 HC2 clusters encompassing 133 genome sequences in a dataset of 309 C. difficile genome sequences collected from CDI patients in six neighbouring hospitals in Germany. These HC2 clusters were associated with individual hospitals (χ², P=8.6×10⁻⁵; Shannon entropy, P=4.2×10⁻⁵) and even with single wards in these hospitals (χ², P=0.01; Shannon entropy, P=6.2×10⁻³). We investigated whether these HC2 clusters reflected the local spread of C. difficile within institutions by retrospective analyses of patient location data. Sixty-six patients (50%) were found to have had ward contacts with another patient with the same HC2 cluster (median time interval between ward occupancy: 63 days; range, 0 to 521). These results are consistent with the direct transmission on wards of C. difficile isolates of the same HC2 cluster (Fig. 5). For patients such as P1 and P2, where the shared ward contacts were separated in time (Fig. 5), transmission may have occurred indirectly through asymptomatically colonized patients or from a common reservoir, such as environmental spore contamination [14,31,32]. We also detected 15 HC2 clusters that included isolates from two or more hospitals in the region. Subsequent analyses of patient location data confirmed that some of these HC2 clusters were associated with patient transfers between the hospitals (Fig. 5). Hence, hierarchical clustering of C. difficile genome sequences in conjunction with retrospective analysis of patient movements revealed multiple likely nosocomial transmission events, none of which had been detected previously by routine surveillance.
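The association tests above compare an observed χ² statistic against values from randomly permuted isolate-to-hospital assignments. A simplified sketch of such a permutation test on synthetic labels (the paper additionally compared Shannon entropy values and used a Mann-Whitney U test):

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
# Synthetic isolates: an HC2 cluster and a hospital label for each isolate;
# the labels are constructed so that clusters genuinely track hospitals.
clusters = rng.integers(0, 5, 120)
hospitals = clusters % 3

def chi2_of(a, b):
    """Chi-squared statistic of the a-by-b contingency table."""
    table = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        table[i, j] += 1
    return chi2_contingency(table)[0]

observed = chi2_of(clusters, hospitals)
permuted = [chi2_of(clusters, rng.permutation(hospitals)) for _ in range(200)]
p = (np.sum(np.array(permuted) >= observed) + 1) / (len(permuted) + 1)
print(f"observed chi2 = {observed:.1f}, permutation P = {p:.3f}")
```

Because the synthetic clusters determine the hospitals exactly, the observed χ² far exceeds every permuted value and the permutation P is small, analogous to the hospital-level associations reported above.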
Hierarchical clustering for identification of epidemic strains and endemic populations
International epidemic spread of C. difficile over up to 25 years has been inferred previously on the basis of molecular epidemiology with lower resolution techniques [34]. EnteroBase contains multiple representatives of those epidemic strains, and the majority of these epidemic groups corresponded to HC10 clusters, including epidemic RT017 [11] (HC10_17), the two fluoroquinolone-resistant lineages of RT027 [9] (HC10_4, HC10_9), and livestock-associated RT078/126 [35] (HC10_1) (Fig. 6).
Endemic populations have also been described by ribotyping and phylogenetic analyses, some of which have acted as sources for the emergence of epidemic strains [2,9]. Many endemic populations seem to be represented by HC150 clusters. Clustering at HC150 was well supported statistically (Fig. S2), and the frequency distribution of pairwise genomic distances indicated that multiple database entries clustered at <150 cgMLST allelic differences (Fig. S3). HC150 clusters also correlated well with k-mer-based classification [36]. When applied to the dataset of 309 C. difficile genomes from six hospitals in Germany, the two methods implemented in EnteroBase and PopPUNK found 51 and 48 clusters, respectively, the majority of which coincided (adjusted Rand coefficient, 0.97).
A cgMLST-based phylogenetic tree of 13 515 C. difficile genomes showed 201 well-separated HC150 clusters, each encompassing a set of related isolates, plus 209 singletons (Fig. 7). Because these HC150 clusters are based on cgMLST genetic distances, we refer to them as 'cgST complexes', abbreviated as CCs. Genomes from each of the major CCs have been collected over many years in multiple countries, indicating their long-term persistence over wide geographic ranges (Table 1).
We compared HC150 clustering with PCR ribotyping for 2263 genomes spanning 84 PCR ribotypes for which PCR ribotyping data were available in EnteroBase. These included 905 genomes, which we ribotyped (Table S2), as well as several hundred other genomes for which ribotype information was manually retrieved from published data. The correlation between HC150 clustering and ribotyping was high (adjusted Rand coefficient, 0.92; 95% confidence interval, 0.90-0.93). However, our analysis also revealed that PCR ribotypes did not always correspond to phylogenetically coherent groupings. PCR ribotypes 002, 015 and 018 were each distributed across multiple phylogenetic branches (Fig. 8). Furthermore, some genomes with indistinguishable cgMLST alleles were assigned to multiple ribotypes, including RT001/RT241, RT106/RT500 and RT126/RT078 (Fig. 8, Table 1). In these cases, both ribotypes occurred in several, closely related clades (Fig. 8), indicating that similar ribotype banding patterns had evolved multiple times. In contrast, HC150 clusters corresponded to clear-cut phylogenetic groupings within a phylogenetic tree of core genes (Fig. 8b).
Higher population levels
HierCC can also identify clusters at still higher taxonomic levels, up to the levels of species and sub-species [22]. In C. difficile, HC950 clusters seem to correspond to deep evolutionary branches (Fig. S4) and HC2000 clusters were congruent with the major clades reported previously [37], except that cluster HC2000_2 encompassed clade 1 plus clade 2 (Fig. S5). Finally, HC2500 may correspond to the subspecies level, because it distinguished between C. difficile and distantly related 'cryptic clades' (Fig. S6).
DISCUSSION
Infectious disease epidemiologists frequently seek to know if new isolates of bacterial pathogens are closely related to others from different geographical origins, i.e. if they are part of a widespread outbreak. Unlike a previous cgMLST implementation [18], EnteroBase supports this goal by taking full advantage of rapidly growing, public repositories of short-read genome sequences [22]. In contrast to short-read archives, however, where stored sequence data are not readily interpretable without specialized bioinformatic tools [38], EnteroBase enables contextual interpretation of a growing collection (18 254 entries as of June 2020) of assembled, quality-controlled C. difficile genome sequences and their associated metadata. At least the collection date (year), the geographic origin (country) and the source (host species) are available for the majority of database entries. Importantly, phylogenetic trees based on cgMLST allelic profiles from many thousand bacterial genomes can be reconstructed within a few minutes, whereas such calculations are currently prohibitively slow based on SNP alignments [22]. Genome-sequencing reads from newly sampled C. difficile isolates can be uploaded to EnteroBase and compared to all publicly available genome data within hours, without requiring any command-line skills.
We demonstrate that the application of cgMLST to investigations of local C. difficile epidemiology yields results that are quantitatively equivalent to those from SNP analyses. This is a major advance because SNP analyses require specific bioinformatic skills and infrastructure, are time consuming and are not easily standardized [16]. A web platform for centralized, automated SNP analyses of bacterial genomes is currently limited to food pathogens, and does not offer any analyses of C. difficile genomes [39]. Even though a cgMLST scheme for C. difficile had been published recently [18], its ability to identify closely related isolates and to infer genomic distances was shown to be inferior to SNP analyses due to an excess of errors introduced by the de novo assembly of sequencing reads and a lack of per-base quality control [20].
In EnteroBase, cgMLST is also based on de novo assembly, but EnteroBase uses Pilon [40] to polish the assembled scaffolds and evaluate the reliability of consensus bases of the scaffolds, thereby achieving comparable accuracy to mapping-based SNP analyses. When applied to a large dataset of C. difficile genomes from hospital patients in the Oxfordshire region (UK), cgMLST and SNP analysis were largely consistent (89% match) at discriminating between isolates that were sufficiently closely related to have arisen during transmission chains and others that were epidemiologically unrelated.
After assembly, draft genomes contain missing data and many cgSTs have unique cgST numbers but are identical to other cgSTs except for missing data. Hence, individual cgST numbers are only rarely informative. However, indistinguishable cgSTs are clustered in common HierCC HC0 clusters, which ignore missing data. In June 2020, the Clostridioides database contained >12 000 HC0 clusters, indicating that the majority of genomes were unique. Similarly, EnteroBase provides cluster designations at multiple levels of HierCC, enabling rapid identification of all cgSTs that are related at multiple levels of genetic distance. The data presented here show that HierCC designations can facilitate communications between researchers, clinicians and public-health officials about transmission chains, epidemic outbreaks, endemic populations and higher phylogenetic lineages up to the level of subspecies.
EnteroBase cgMLST identified numerous HC2 clusters of strains in C. difficile isolates that seem to have arisen during transmission chains in six neighbouring hospitals in Germany. These assignments were in part consistent with retrospective investigation of patient location data, although none of the nosocomial outbreaks (defined by German law as two or more infections with likely epidemiological connections [http://www. gesetze-im-internet. de/ ifsg/]) had been detected previously by standard epidemiological surveillance by skilled clinical microbiologists. Recent publications propose that prospective genome sequencing of nosocomial pathogens should be applied routinely at the hospital level to guide epidemiological surveillance [41]. Our data indicates that the combination of genome sequencing with cgMLST and HierCC may identify nosocomial transmission routes of C. difficile more effectively than presently common practice, and hence could help to reduce pathogen spread and the burden of disease. Reliable identification of transmission chains requires interpretation of pathogen genome sequence data in its epidemiological context, however [42].
HierCC will also enable comparisons to previously published data because we have provided a correspondence table between HC150 clusters and PCR ribotypes (Table 1). Rarefaction analysis indicated that the currently available genome sequences represent about two-thirds of extant HC150 (CC) diversity, which extrapolated to about 600 CCs (Fig. S7). At least some of this enormous diversity may be due to the occupation of multiple, distinct ecological niches, as exemplified by differential propensities for colonizing non-human host species (Table 1) [43,44]. Individual CCs may also differ in their aptitudes for epidemic spread, as indicated by drastically different proportions of genomes assigned to HC2 chains: only 7% of CC141 were assigned to HC2 clusters versus 35% of CC34 and 77% of CC4 (Table 1). A full understanding of the population structure of C. difficile and its relationship to epidemiological patterns will require additional study because many of the clusters described here have not yet been studied or described. However, this task can be addressed by the global community due to the free public access to such an unprecedented amount of genomic data from this important pathogen.
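Richness extrapolations like the rarefaction analysis mentioned above can be illustrated with a Chao1-type estimator. Note that Chao1 is a stand-in here, not necessarily the estimator used for Fig. S7, and the sampling below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic sampling: 1200 genomes drawn from 600 equally likely HC150 clusters.
sample = rng.integers(0, 600, 1200)

counts = np.bincount(sample, minlength=600)
observed = int(np.sum(counts > 0))                       # clusters seen at least once
f1, f2 = int(np.sum(counts == 1)), int(np.sum(counts == 2))
chao1 = observed + f1 * f1 / (2 * max(f2, 1))            # Chao1 richness estimator
print(f"observed {observed} clusters, Chao1 estimate {chao1:.0f}")
```

The estimator uses the counts of singleton and doubleton clusters to infer how many clusters remain unsampled, recovering an estimate close to the true 600 even though sampling is incomplete.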
METHODS
Sampling
In total, 309 C. difficile isolates were collected at a diagnostic laboratory providing clinical microbiology services to several hospitals in central Germany. To assemble a representative sample, we included the first 20 isolates from each of six hospitals from each of three consecutive calendar years (Table S2). For investigation of recurrent CDI, a set of 176 C. difficile isolates was collected in a diagnostic laboratory in Saarland, Germany. Here, primary stool culture agar plates were stored at 4 °C for up to 5 months, so that plates from individual patients who subsequently developed recurrent disease could be selected retrospectively for analysis. We attempted to pick and cultivate as many bacterial colonies as possible from each selected plate, resulting in 6 to 36 isolates per CDI episode (Table S1). In addition, we sequenced the genomes from 383 isolates that had been characterized by PCR ribotyping previously, including 184 isolates sampled from piglets [8], 71 isolates from various hospitals in Germany [3], and 108 isolates from stool samples collected from nursery home residents (unpublished; Table S2).
Whole-genome sequencing
For Illumina sequencing, genomic DNA was extracted from bacterial isolates by using the DNeasy Blood and Tissue kit (Qiagen), and libraries were prepared as described previously [46] and sequenced on an Illumina NextSeq 500 machine using a Mid-Output kit (Illumina) with 300 cycles. For generating complete genome sequences, we applied SMRT long-read sequencing on an RSII instrument (Pacific Biosciences) in combination with Illumina sequencing as reported previously [46]. All genome sequencing data were submitted to the European Nucleotide Archive (www.ebi.ac.uk/ena) under study numbers PRJEB33768, PRJEB33779 and PRJEB33780.
SNP detection and phylogenetic analysis
Sequencing reads were mapped to the reference genome sequence from C. difficile strain R20291 (sequence accession number FN545816) by using BWA-MEM and sequence variation was detected by applying VarScan2 as reported previously [46]. Sequence variation likely generated by recombination was detected through analysis with ClonalFrameML [47] and removed prior to determination of pairwise sequence distances [15] and to construction of maximum-likelihood phylogenetic trees with RAxML (version 8.2.9) [48].
Statistical analyses
To determine the probability that two genomes are related at ≤2 SNPs, given a certain difference in their cgMLST allelic profiles, we inferred a logistic regression model using R ([53], pp. 593-609). Genomic relatedness was encoded as a binary response variable (1 if ≤2 SNPs, 0 if otherwise) and the number of core-genome allelic differences was used as a predictor variable. We applied this model to a dataset of 1158 genome sequences from a previous study, representing almost all symptomatic CDI patients in Oxfordshire, UK, from 2007 through 2011 [13]. While that original study had encompassed a slightly larger number of sequences, we restricted our analysis to the data (95%) that had passed quality control as implemented in EnteroBase [21]. We used the SNP data from Eyre's report [13].
The hierarchical single-linkage clustering of cgMLST sequence types was carried out as described [22] for all levels of allelic distances between 0 and 2556. We searched for stable levels of differentiation by HierCC according to the Silhouette index [54], a measure of uniformity of the divergence within clusters. The Silhouette index was calculated based on d′, a normalized genetic distance between pairs of STs, which was calculated from their allelic distance d as d′ = 1 − (1 − d)^(1/l), where l is the average length (937 bp) of the genes in the cgMLST scheme.
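Assuming d is expressed as the fraction of cgMLST loci that differ (an assumption; the paper gives only the formula), the normalization converts the per-locus distance into an approximate per-base distance:

```python
def normalized_distance(d, l=937):
    """d' = 1 - (1 - d)**(1 / l); d is taken here as the fraction of differing
    cgMLST loci (an assumption), l the average cgMLST gene length in bp."""
    return 1.0 - (1.0 - d) ** (1.0 / l)

print(normalized_distance(0.0))            # identical profiles -> 0.0
print(f"{normalized_distance(0.5):.2e}")   # half the loci differ
```

The transform maps identity to 0 and compresses large allelic distances onto a per-base scale, which is what makes the Silhouette comparison across HC levels meaningful.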
We further evaluated the 'stability' of hierarchical clustering using two other criteria. The Shannon index is a measure of diversity in a given population. The Shannon index drops from nearly 1 in HC0, because most cgSTs are assigned to a unique HC0 cluster, to 0 in HC2500, which assigns all sequence types to one cluster. The gradient of the Shannon index between the two extremes reflects the frequencies of coalescence of multiple clusters at a lower HC level. Thus, the plateaus in the curve correspond to stable hierarchical levels, where the Shannon index does not change dramatically with HC level. We also evaluated the stability of hierarchical clustering by pairwise comparison of the results from different levels based on the normalized mutual information score [55] (Fig. S3).
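The normalized Shannon index described above can be computed directly from cluster assignments; a minimal sketch:

```python
import numpy as np

def shannon_index(labels):
    """Normalized Shannon diversity of cluster assignments, scaled to 0..1."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    h = -(p * np.log(p)).sum()
    return h / np.log(len(labels)) if len(labels) > 1 else 0.0

print(shannon_index(range(100)))   # every genome its own cluster (like HC0)
print(shannon_index([0] * 100))    # all genomes in one cluster (like HC2500)
```

All-unique assignments give an index near 1 and a single all-encompassing cluster gives 0, matching the behaviour of HC0 and HC2500 described above; plateaus of this index across HC levels mark stable levels of clustering.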
For clustering C. difficile diversity with PopPUNK [36], we used a sketch size of 10⁵ and a K value (maximum number of mixture components) of 15. Of note, the resulting number of clusters for the tested dataset was identical for all K between 15 and 30.
To estimate concordance between cgMLST-based hierarchical clustering and PCR ribotyping or PopPUNK clustering, respectively, we calculated the adjusted Rand coefficient [56] by using the online tool available at http://www.comparingpartitions.info/. To test statistical associations of HC2 clusters with specific hospitals and hospital wards, respectively, we compared χ² values and normalized Shannon entropy values (R package 'entropy' v.1.2.1) from contingency tables containing real isolate distributions (Table S3) and randomly permuted distributions (n=1000), by using the non-parametric, two-sided Mann-Whitney U test (R package 'stats' v.3.5.0).
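Concordance between two partitions of the same isolates (e.g. HC150 clusters versus PCR ribotypes) can also be computed with scikit-learn's adjusted Rand score; a toy example with invented labels:

```python
from sklearn.metrics import adjusted_rand_score

# Ten isolates, labelled by a hypothetical HC150 cluster and a PCR ribotype.
hc150 = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3]
ribotype = ["027", "027", "027", "017", "017", "017", "078", "126", "001", "001"]

ari = adjusted_rand_score(hc150, ribotype)
print(f"adjusted Rand coefficient: {ari:.2f}")
```

The coefficient is close to 1 because the partitions mostly agree; it is penalized by the one HC150 cluster that spans the 078/126 ribotype pair, mirroring the RT126/RT078 overlap noted above.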
Funding information
This work was partially funded by the German Center for Infection Research (DZIF), by the Federal State of Lower Saxony (Niedersächsisches Vorab VWZN2889/3215/3266), by the EU Horizon 2020 programme (grant agreement number 643476), the Wellcome Trust (098051), and the UK Medical Research Council (PF451). EnteroBase development was funded by the BBSRC (BB/L020319/1) and the Wellcome Trust (202792/Z/16/Z), and the salary of Z.Z. was also provided by The Wellcome Trust. The funders had no role in the study design, preparation of the article or decision to publish.