Stacking Characteristics of Close Packed Materials

It is shown that the enthalpy of any close packed structure for a given element can be characterised as a linear expansion in a set of continuous variables $\alpha_n$ which describe the stacking configuration. This enables us to represent the infinite, discrete set of stacking sequences within a finite, continuous space of the expansion parameters $H_n$. These $H_n$ determine the stable structure and vary continuously in the thermodynamic space of pressure, temperature or composition. The continuity of both spaces means that only transformations between stable structures adjacent in the $H_n$ space are possible, giving the model predictive as well as descriptive ability. We calculate the $H_n$ using density functional theory and interatomic potentials for a range of materials. Some striking results are found: e.g. the Lennard-Jones potential model has 11 possible stable structures and over 50 phase transitions as a function of cutoff range. The very different phase diagrams of Sc, Tl, Y and the lanthanides are understood within a single theory. We find that the widely-reported 9R-fcc transition is not allowed in equilibrium thermodynamics, and in cases where it has been reported in experiments (Li, Na), we show that DFT is also unable to predict it.

In 1611, Kepler suggested that stackings of triangular layers were the most efficient way to pack hard spheres [1]. This conjecture was only recently proved [2]. Many elements crystallise in close-packed crystal structures, but the concept of "close-packed" is not part of crystallographic categorization. This is because there are an infinite number of stacking arrangements with equal packing density, spanning a wide range of space group symmetries. Most observed structures have short repeat sequences, such as face-centered cubic (fcc) or hexagonal close packed (hcp), but there is no general theory to explain why these should have the lowest energy.

Predicting the stable crystal structure of a material is a longstanding challenge in condensed matter physics. One underlying reason is that crystal structures are defined by discrete symmetry groups and integer numbers of atoms per unit cell. Aside from the atomic positions themselves, there are no continuous variables which cover the entire space of possibilities, so we are searching for a minimum in a discontinuous space.

Among close-packed structures, only fcc has close packing enforced by symmetry. For all other stackings, there is an "ideal" ratio between interlayer spacing and interatomic separation ($c/a = \sqrt{2/3}$) which gives close packing. Generally, materials adopting structures within a few percent of "ideal" are regarded as close-packed.

Stacking sequences are typically defined as a series of layers labelled A, B, and C, with atoms positioned at $0\mathbf{a}+0\mathbf{b}$, $\frac{1}{3}\mathbf{a}+\frac{1}{3}\mathbf{b}$, and $\frac{2}{3}\mathbf{a}+\frac{2}{3}\mathbf{b}$ respectively, where $\mathbf{a}$ and $\mathbf{b}$ are the in-plane lattice vectors. This ABC notation is not unique: a more compact notation [5] uses h for layers whose two neighbouring layers are identical (as in ABA) and f for layers whose neighbours differ (as in ABC). For examples, see Table I.
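As an illustration of the two notations (ours, not part of the original paper), the following minimal Python sketch converts a periodic ABC string to the hf notation: a layer is h if its two cyclic neighbours carry the same letter, and f otherwise.

import numpy as np  # not needed here, but used in later sketches

def abc_to_hf(stacking: str) -> str:
    """Convert a periodic ABC stacking sequence to hf notation.

    A layer is 'h' (hexagonal-like) if the layers above and below it
    carry the same letter, and 'f' (fcc-like) otherwise. The sequence
    is treated as periodic, so neighbours wrap around.
    """
    m = len(stacking)
    labels = []
    for i in range(m):
        below = stacking[(i - 1) % m]
        above = stacking[(i + 1) % m]
        labels.append("h" if below == above else "f")
    return "".join(labels)

# Example stackings named in the text:
print(abc_to_hf("AB"))         # hcp  -> "hh"
print(abc_to_hf("ABC"))        # fcc  -> "fff"
print(abc_to_hf("ABAC"))       # dhcp -> "fhfh" (the hf motif repeated)
print(abc_to_hf("ABACACBCB"))  # 9R (Sm-type) -> "hhfhhfhhf", i.e. hhf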
The most widely-used model for atomistic modelling is the Lennard-Jones potential, which describes the van der Waals bonding of inert gases. It has hcp as the most stable structure at low temperature, transforming to fcc at high temperature [3]. More sophisticated modelling of electronic structure using density functional theory can be applied across the periodic table, and gives quantitative agreement with experiment [4], although it is impossible to check all possible stacking sequences.

In this paper we show that the energies of the infinity of stacking sequences can be represented by a convergent series, and that phase boundaries between some pairs of crystal structures cannot occur. We demonstrate the extraordinary complexity of the Lennard-Jones phase diagram. We show that deviations from "ideal" c/a ratios are correlated with stability. We also investigate the role of pressure and uncover some deep-seated inadequacies in interatomic potentials.

To define the stacking sequence with periodicity M, we introduce a set of parameters

$$\alpha_n = \frac{1}{M}\sum_{i=1}^{M} \delta_{i,i+n}, \qquad (1)$$

where $\delta_{i,i+n}$ is 1 when the $i$ and $i+n$ layers have the same ABC symbol, and 0 otherwise. Physically, $\alpha_n$ can be thought of as "the fraction of the atomic positions $R_i$ for which there is another atom at $R_i + n\mathbf{c}$", where $\mathbf{c}$ is the interlayer separation. As $M \to \infty$, or for an arbitrary density of stacking faults, the $\alpha$s become continuous variables. The set of $\alpha$s up to $\alpha_M$ uniquely describes any possible stacking with an $M$-fold or shorter periodicity. All translationally, rotationally or reflectionally equivalent stackings have the same unique set of $\alpha_n$, unlike the ABC and hf notations, which have considerable redundancy. Trivially, $\alpha_0 = 1$ and $\alpha_1 = 0$ for all close-packed structures. Only certain ranges of $\alpha_n$ correspond to physically-realizable structures (see Fig. 1).

Utilizing the CASTEP simulation package [6], well-converged energies for various stackings were determined in the framework of density functional theory using the PBE exchange-correlation functional [7] over a range of pressures. In addition to the DFT calculations, we calculate energies of the same structure set using a number of interatomic potentials, both pairwise and many-body, which were fitted to represent the same materials. Our structure set consists of all 43 possible stacking sequences with repeats of up to 10 atomic layers in the ABC notation (cf. Table I), excluding redundant strings (i.e. those with identical $\alpha_n$). Calculations are performed starting from hexagonal-style unit cells with cell angles 90°, 90°, 60°. Internal coordinates and lattice parameters were fully relaxed, and double-checked to ensure that each structure remained in its initial metastable state, with each atom retaining 12-fold coordination and undergoing only small distortion from close-packing. Each material is characterized by parameters $H_n$, obtained by a least-squares fit to the 43 calculated enthalpies assuming a linear dependence on the $\alpha_n$,

$$H = H_0 + \sum_{n \ge 2} H_n \alpha_n. \qquad (2)$$

Every material is therefore represented as a point in an N-dimensional $H_n$-space, and every point in the $H_n$-space has an associated most-stable stacking structure, calculated by minimizing Eq. (2) with respect to the $\alpha_n$. For example, if the summation in Eq. (2) is taken up to only $n = 3$, the enthalpy varies linearly with $\alpha_2$ and $\alpha_3$, and it follows that the most stable structure must be located at a corner of the triangle of physically-possible states shown in Fig. 1(a), allowing only fcc, hcp, or dhcp. More complex structures may be stable if $H_4$ and higher terms are considered.
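The $\alpha_n$ and the least-squares fit of Eq. (2) are simple to implement. The following is an illustrative sketch, not the authors' code; the enthalpy values in the usage example are invented placeholders, and n_max and the structure list are arbitrary choices.

import numpy as np

def alphas(stacking: str, n_max: int) -> np.ndarray:
    """alpha_n = fraction of layers i whose layer i+n (periodic)
    carries the same ABC letter; Eq. (1) of the text. Returns
    (alpha_2, ..., alpha_{n_max})."""
    m = len(stacking)
    return np.array([
        sum(stacking[i] == stacking[(i + n) % m] for i in range(m)) / m
        for n in range(2, n_max + 1)
    ])

def fit_Hn(stackings, enthalpies, n_max=9):
    """Least-squares fit of H = H0 + sum_n Hn * alpha_n (Eq. 2).

    `stackings` is a list of ABC strings, `enthalpies` the corresponding
    calculated enthalpies per atom (e.g. from DFT). Returns (H0, Hn array).
    """
    A = np.array([np.concatenate(([1.0], alphas(s, n_max))) for s in stackings])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(enthalpies), rcond=None)
    return coeffs[0], coeffs[1:]

# Hypothetical usage with made-up enthalpy values (eV/atom), for illustration:
structs = ["ABC", "AB", "ABAC", "ABACACBCB"]   # fcc, hcp, dhcp, 9R
H0, Hn = fit_Hn(structs, [-3.601, -3.603, -3.602, -3.6025], n_max=4)
print(H0, Hn)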
The $H_2$ and $H_3$ values for a range of materials and pressures are shown in Fig. 2(a). The residuals in the fit to DFT data are of order tenths of meV per atom, about 1% of the enthalpy differences between structures. For Eq. (2) to be useful it must be rapidly convergent, and in Fig. 1(b) we show that the terms do indeed decay rapidly with n. Typically, the $H_2$ and $H_3$ contributions are dominant.

The key to the usefulness of this result is that we have transformed the discrete representation (ABC or hf) of the crystal structure into a continuous-space one ($\alpha_n$). This enables us to anticipate phase transitions arising from continuously changing thermodynamic variables such as temperature, pressure or composition. To do this, consider the N-dimensional $H_n$ space. Any stacking will have some region of stability if N is large enough [9]. Geometrically, these regions are hyperpyramids which meet at the origin, where the enthalpy is independent of stacking. If we change the pressure continuously, the $H_i$ also change continuously, tracing a path through the $H_n$-space which can be evaluated from DFT calculations at different pressures for a given material. When this path crosses from the stability region of one phase to another, this corresponds to a phase transition. A dramatic physical consequence is that transformations between phases whose stability regions are non-adjacent in $H_n$ space (Fig. 3), such as fcc and 9R, are not thermodynamically possible in any system for which the $H_n$ representation converges. If the $H_n$ are fitted to free energy calculations, temperature-driven transitions can also be inferred.

There are similarities with the long-ranged 1D Ising model [9-11], in which possible stackings (here h and f) are represented by spins [12-16]. In that case $H_2$ maps to the field, while the Ising interaction terms are linear combinations of our $H_i$. The Ising representation turns out to be less useful because it converges slowly. To understand why, consider the strings ABACB and ABABC, which give .hff. and .hhf. in the Ising representation. In the first case the next-neighbour hf interaction is between unlike (BC) layers, in the second between like (BB) layers. In the physical system, the set of separations between atoms in B-C is different from B-B, and the associated enthalpy differences are well represented by the $H_i$. In the Ising picture, this difference emerges from correlations between longer-range interactions, which have an unintuitive mathematical origin.

For a given material, the $H_n$ vary continuously with pressure, temperature or, for alloys, with composition. Fig. 2(a) shows trajectories projected into $(H_2, H_3)$ space for pressures up to 20 GPa. The clustering of elements' $H_2$ and $H_3$ values and the similarities of their pressure dependence correspond to periodic table groupings, indicating an electronic origin of the observed properties.

Many further inferences can be drawn from the $H_n$ space. For example, Group 11 metals lie close to the origin, and low values of $H_n$ mean that changes in $\alpha$ are not energetically costly. As a consequence, stacking faults (incremental changes in $\alpha_n$) have low energy, meaning that dislocations can glide easily and Group 11 materials are soft and malleable.

The set of $\alpha_n$ describes the relationship between close-packed layers, so non-close-packed phases such as bcc or the ω phase of titanium are not accounted for.
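The statement that every point in $H_n$-space has an associated most-stable stacking can be demonstrated by brute force. The sketch below is ours, not the paper's (coefficients in arbitrary units): it enumerates stackings up to a chosen period, deduplicates them by their $\alpha$ vector, and minimizes Eq. (2) over the set. It repeats the $\alpha_n$ computation so as to be self-contained.

import itertools
import numpy as np

def canonical_alphas(seq, n_max):
    """(alpha_2, ..., alpha_{n_max}) for a periodic ABC sequence."""
    m = len(seq)
    return tuple(
        round(sum(seq[i] == seq[(i + n) % m] for i in range(m)) / m, 6)
        for n in range(2, n_max + 1)
    )

def enumerate_stackings(max_period, n_max):
    """Enumerate close-packed stackings up to a given period, keeping one
    representative per distinct alpha vector (redundant strings dropped)."""
    seen = {}
    for m in range(2, max_period + 1):
        for seq in itertools.product("ABC", repeat=m):
            # adjacent layers (cyclically) must differ for close packing
            if any(seq[i] == seq[(i + 1) % m] for i in range(m)):
                continue
            a = canonical_alphas(seq, n_max)
            seen.setdefault(a, "".join(seq))
    return seen  # {alpha vector: representative ABC string}

def most_stable(Hn, max_period=10):
    """Given expansion coefficients (H2, H3, ...), return the stacking
    minimizing H = sum_n Hn * alpha_n over the enumerated set."""
    n_max = len(Hn) + 1
    cands = enumerate_stackings(max_period, n_max)
    return min(cands.items(), key=lambda kv: np.dot(Hn, kv[0]))

# Hypothetical coefficients (arbitrary units): H2 < 0 favours alpha2 = 1, i.e. hcp.
alpha, rep = most_stable(Hn=(-1.0, 0.1))
print(rep, alpha)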
Yttrium is a particularly interesting case. Projection of its pressure trajectory onto the $(H_2, H_3)$ plane moves it from hcp stability into the dhcp phase (see Fig. 2). Experimentally [17,18], yttrium does this via an intermediate Sm-type phase, also called 9R, which consists of 9 layers, ABACACBCB, and can be described in the hf notation as hhf (Table I). However, hcp, 9R, and dhcp all have $\alpha_3$ values of zero, and are hence degenerate in situations where $H_2 = 0$. Consequently, 9R lies on the boundary between the hcp and dhcp phases in Fig. 1. Once n = 4 terms are included in Eq. (2), there is a wedge of 9R stability for $H_4 > 0$. This must be traversed as an intermediate phase between hcp and dhcp, as observed. Qualitatively, we find that yttrium transforms from hcp to 9R at 4 GPa, then to dhcp at around 10 GPa (Fig. 3). These numbers agree with other DFT calculations [19,20] but are lower than the observed experimental pressures, which might be due to hysteresis, since the experiments were done with increasing pressure only.

Scandium and thallium appear to behave similarly to yttrium (see Supplemental Materials), but Sc is known to transform to a complex non-close-packed structure at a lower pressure than where its trajectory would cross the hcp-dhcp boundary in Fig. 2(a). The trajectory for thallium moves towards the transition line with pressure, but $H_4 < 0$, so it passes below the origin and hcp-fcc is the only observed transition.

The 9R and fcc structures are not adjacent in Fig. 1. Therefore, no thermodynamic phase boundary can exist between 9R and fcc. This prohibition of pressure-driven transitions in any system is curious, because such transitions have been reported in lithium and sodium. However, Li 9R was very recently proved not to be stable [21], and we find both Li and Na to be more stable in fcc than 9R at all pressures. By contrast, the 9R phase is adjacent to hcp and dhcp (Fig. 1), so its presence in the samarium phase diagram is expected. Interestingly, the lanthanide sequence of structures dhcp/9R/hcp/fcc [22,23] is also consistent with the model.

Figure 4 shows that the c/a ratio is strongly correlated with a material's preference for the hcp or fcc phase ($H_2$). Typically, hcp materials have $c/a < \sqrt{2/3}$, whereas metastable structures of fcc materials have larger-than-ideal c/a. Curiously, the primary effect of pressure is to move c/a towards ideal, irrespective of the change in $H_2$ (Sc being an exception).

The $H_2$ and $H_3$ values for a selection of interatomic potentials are displayed alongside the first-principles data (Fig. 2). We used the Lennard-Jones potential, a set of embedded-atom and Finnis-Sinclair potentials [8,24-30], the Empirical Oscillating Potential [31], and Pettifor's three-term oscillating potential for Al, Na, and Mg [32,33], as implemented in the LAMMPS code [34]. Remarkably, these potentials almost all fall into a narrow region of Fig. 2(a), shown expanded in Fig. 2(b), with the spread in $H_3$ some two orders of magnitude smaller than for the DFT calculations.
This weak dependence of enthalpy on stacking sequence implies low basal-plane stacking-fault energies, which lead to systematically and erroneously low barriers to basal slip. Furthermore, the phase stability is highly sensitive to pressure and to the details of the empirical potentials.

We find truly remarkable results for the Lennard-Jones 6-12 forcefield (Fig. 5). This most widely-used of potentials is in practice invariably applied with truncation [34],

$$V(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]\mathcal{H}(r_{\rm cut} - r),$$

with $\mathcal{H}$ the Heaviside function and $\epsilon$ and $\sigma$ defining the energy and length units. As $r_{\rm cut} \to \infty$, $H_2$ converges to a value of around $-0.0009\,\epsilon$, which accounts for most of the difference in energy between the fcc and hcp phases, while $H_3$ converges to a value two orders of magnitude smaller, indicating a stable hcp ground state. The dependence of the $H_n$ values on $r_{\rm cut}$ is erratic; discontinuities occur as new coordination shells come within range, with even $H_2$ changing sign five times. This means that a large number of minimum-enthalpy phases are observed as a function of the cutoff, as indicated in Fig. 5. Calculation using an alternative truncation, with the energy and force shifted to remove the discontinuities at the cutoff distance, is better behaved, but still undergoes five transformations with increasing cutoff, with regions of fcc, hcp and dhcp phases (see Supplemental Materials).

The interatomic potentials exhibit more pressure-induced phase transitions than the DFT calculations. We propose that this is because they have a fixed characteristic lengthscale associated with the zero-pressure fitting data. In reality, the characteristic length for metallic interactions might be the Fermi wavelength, which reduces with pressure. The long-ranged oscillations of Pettifor potentials scale with the Fermi vector, meaning that the positions of shells of neighbouring atoms are unchanged relative to the maxima and minima of the potential [33]. Consequently, Pettifor potentials show fewer pressure-induced transitions than other models.

In summary, we showed that different stackings of monatomic close-packed metals can be uniquely described by a set of structure-specific continuous variables $\alpha_n$, and that an enthalpy expansion in these quantities leads to a multidimensional $H_n$ space containing regions of stability for all stackings. The material-specific fitted expansion coefficients $H_n$ converge quickly with n, and allow the most stable structure to be determined. Changes in $H_n$ with pressure allow us to identify phase transformations. Using the model, we predict that a boundary between fcc and 9R (α-Sm-type) phases cannot exist in any phase diagram, requiring a reassessment of the stability of the reported 9R in Na and Li, but not in the Sm prototype. We reproduce and interpret the phase transformation sequences in Y, Sc, and Tl. We identify excess polytypism as problematic for simple interatomic potentials in general, and demonstrate an unprecedented amount of polytypism in the Lennard-Jones system.
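The cutoff dependence described above for the truncated Lennard-Jones potential can be explored with a direct lattice sum. The sketch below is illustrative only, not the paper's calculation: it fixes the ideal close-packed geometry at the pair-potential minimum spacing $a = 2^{1/6}\sigma$ instead of relaxing the structure as the paper does, so the numbers are not the paper's $H_n$, but the sign changes of E(hcp) − E(fcc) with $r_{\rm cut}$ can be observed.

import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def energy_per_atom(stacking, r_cut, a=2 ** (1 / 6)):
    """Lattice energy per atom of an ideal close-packed stacking under a
    sharply truncated Lennard-Jones potential (Heaviside cutoff).

    `a` is the nearest-neighbour distance, fixed here at the pair-potential
    minimum rather than relaxed (a simplification for illustration).
    """
    offsets = {"A": np.zeros(2),
               "B": np.array([0.5, np.sqrt(3) / 6]) * a,
               "C": np.array([1.0, np.sqrt(3) / 3]) * a}
    a1 = np.array([1.0, 0.0]) * a
    a2 = np.array([0.5, np.sqrt(3) / 2]) * a
    d = np.sqrt(2.0 / 3.0) * a                 # ideal interlayer spacing
    m = len(stacking)
    n_pl = int(np.ceil(2 * r_cut / a)) + 2     # generous in-plane search range
    n_z = int(np.ceil(r_cut / d)) + 1          # layers to search above/below
    e_tot = 0.0
    for i in range(m):                         # average over central layers
        r0 = np.array([*offsets[stacking[i]], 0.0])
        e = 0.0
        for k in range(-n_z, n_z + 1):
            layer = stacking[(i + k) % m]
            for p in range(-n_pl, n_pl + 1):
                for q in range(-n_pl, n_pl + 1):
                    pos = np.array([*(offsets[layer] + p * a1 + q * a2), k * d])
                    r = np.linalg.norm(pos - r0)
                    if 1e-9 < r <= r_cut:      # exclude the central atom itself
                        e += lj(r)
        e_tot += 0.5 * e                       # half to avoid double counting
    return e_tot / m

for r_cut in (2.0, 3.0, 5.0):
    de = energy_per_atom("AB", r_cut) - energy_per_atom("ABC", r_cut)
    print(f"r_cut={r_cut}: E(hcp)-E(fcc) = {de:+.6f} eps")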
FIG. 1: (left) Physically realizable stackings projected onto the $\alpha_2$-$\alpha_3$ plane. Configurations for up to 25 atomic-layer repeats are shown in red. Blue points indicate the 43 structures used in our calculations. (right) Box plots of normalised enthalpy $\bar{H}_n$ vs n, showing the rapid convergence of Eq. (2). Data are taken from DFT calculations across all elements and pressures. The structure-independent $H_0$ is omitted. Specifically, $\bar{H}_n = |H_n| / \sum_{i=2}^{9} |H_i|$.

FIG. 2: (a) Close-packed materials plotted against their $(H_2, H_3)$. Lines show the movement under pressure according to DFT calculations. Blue dots show the positions of interatomic potentials at equilibrium volume. The outlying interatomic potential is Fortini's Ru EAM potential [8]. The regions of fcc, hcp and dhcp stability are shown, with boundaries calculated for the slice where $H_4$ and higher terms are zero. (b) Expanded view of the positions of interatomic potentials in the region of $H_2$-$H_3$ space bounded by the rectangle in (a). The lines again show the effects of compression.

FIG. 3: (top) DFT-calculated enthalpies for phases of yttrium with pressure. (bottom) Fitted $H_n$ values with pressure. The insets are colored to show the stable phase for given $(H_2, H_3)$ using the same color scheme; when $H_4$ is positive (left), all six phases appear; for negative $H_4$ (right), only fcc, hcp and dhcp are possible. The line shows the changing values of $(H_2, H_3)$ with pressure. Because $H_4$ for Y is also pressure-dependent, this is a projection onto the plane of constant $H_4$ which it intersects: the line is colored green when $H_4 > 0$ and yellow when $H_4 < 0$, to show that it passes through the wedge of hhf stability, but not hff. Small dots indicate 10 GPa intervals.

FIG. 4: Correlation between the stability of hcp over fcc ($H_2$) and the divergence from the ideal close-packed ratio $(c/a)_0 = \sqrt{2/3}$. The effect of pressure up to 20 GPa is again shown as paths coloured to correspond to the relative volume.

FIG. 5: Zero-pressure $H_2$, $H_3$, and $H_4$ for the Lennard-Jones potential as a function of the interaction range. The diagonal dotted line demonstrates the regular introduction of new series at intervals of the interplanar spacing. The upper of the two ribbons at the top of the graph shows the minimum-enthalpy structure at each value of the cutoff; the lower shows the minimum-enthalpy structure predicted by Eq. (2) using the $H_n$ values up to n = 4. The different colors represent different structures described in the hf notation as follows; Red: f, Blue: h, Green: hf, Purple: hhf, Yellow: hhhf, Pink: hhff, White: hhfff, Olive: hhhhhf, Lime: hhhhff, Cyan: hhhhhhf, Brown: hhhhfff, Black: hhffhhf.

TABLE I: Representation of various structures in terms of basal stacking in the different notations. Note that ABC and ACB represent the same structure, fcc, and that structures are not uniquely defined by $\alpha_2$, $\alpha_3$.
Some physical factors in toxicological assessment tests.

Many thousands of organic compounds are in common use, and new ones are introduced daily. With many of these materials, little is known about their toxic hazard. For years scientists have been investigating the relation of structure and properties to biological activity. Among the factors relating to toxicity are bioaccumulation and persistence in the organism. In this study, the relation of partition coefficient and solubility to the bioaccumulation of some organochlorine compounds was investigated, as was the reactivity of several organophosphates. The work adds confirmation to the relation of molecular parameters to penetration, accumulation, and persistence in toxic action.

Introduction
Some 30,000 or more chemicals of varying types are in common use. They range over the spectrum of simple inorganic compounds through metallorganic to very complex organic molecules and polymers (1). Just as the chemicals vary widely, so do the uses. One substance may be used as a solvent, another as a pesticide, and a third as a plastic. Some chemicals are used individually, others in complex formulations. The use of many of these chemicals results in their introduction to the environment, with the consequent exposure of man to the substance.

It is an axiom of toxicology that any substance in sufficient concentration can be injurious. Just so, there is reason to believe that many of the chemicals in use may afford a greater or lesser hazard to man upon prolonged exposure. Whether or not the hazard is significant can be determined only on the basis of some knowledge of the toxicity of the compound, the levels to which exposure occurs, and the interaction of environmental factors (2). This requires information about the bioavailability, rate of uptake, accumulation, persistence, metabolism, and excretion, as well as the basic mechanism by which the effect is produced. Such information is available for only a relatively few of the some 30,000 compounds in use. If the toxicity and hazard of these chemicals are to be evaluated in time to protect man, there is a need for (a) a rapid method of measuring toxicity; and (b) a method of predicting the probable toxic hazard of new or as yet untested compounds (3,4). In view of the magnitude of the problem, both are needed: the first to give empirical data that can be extrapolated to man and allow the setting of standards or quality criteria that would avoid toxic consequences. The second method, namely that of predicting probable hazard, would provide early warning with new or as yet untested compounds, point to methods of handling such compounds, and, further, provide guidance in the development of rapid assessment of toxicity by empirical means. It is to this latter approach to evaluating toxicity that this paper is addressed.

Background
Since the 19th century, investigators have been intrigued by the problem of relating biological activity to the structure and properties of chemicals (5,6). The interest was stimulated by the new and novel compounds that the young science of organic chemistry produced for pharmaceutical testing. Shortly, of course, wider interest in biological activity and toxicity developed as these compounds were put to other purposes, e.g., pesticides, or as a result of effects on man from industrial exposure.
It was felt that relating biological activity to molecular parameters offered the advantage of being able to predict biological activity and toxicity on the basis of relatively rapidly obtainable chemical data, and that it would enable the "tailor-making" of desirable compounds or early recognition of possibly hazardous substances.

Many scientists over the years have devoted attention to various aspects of the problem of predicting biological activity from the structure and properties of compounds. Some of the earliest studies involved examination of the structure of organic compounds in relation to their activity. In these early studies, particular attention was given to the composition and substituent groups of compounds, including such things as halogen, nitro, alkyl, amino, thio, and mercapto substitution. This enabled investigators to identify groupings within molecules that conferred a measure of biological activity, e.g., chlorine substitution on aromatic compounds. Paralleling these studies were investigations attempting to relate various other properties to biological activity, including such things as boiling point, vapor pressure, solubilities, partition coefficients, molar volumes, and polarizability.

Limited successes were achieved in these early studies, as evidenced by the information that was developed enabling prediction of greater or lesser biological activity based on organic structure, positional isomerization, and the nature of the substituent group. For the most part, however, this information was applicable only within specific homologous series and rarely could be extended to another class of organics. More and more successes have been recorded in relating structure and properties to biological activity with the application of newer and more sophisticated knowledge and techniques to the problem. Moreover, the wealth of information developed over the years relating a specific type of activity to a given group or region of composition in a molecule has been of value.

The application of quantum mechanical concepts in relating structure to activity has been fruitful on certain types of problems (6-9). Thus, application of the results of molecular orbital calculations has revealed certain characteristics of the molecules that correlate well with activity. Similarly, determination of electron density in regions of molecules has been shown to correlate well with oncogenic activity for certain types of chemicals (10). Other factors that have been studied and shown to correlate to a greater or lesser degree with biological activity include symmetry of the molecule, electromagnetic absorption (particularly infrared), and the constitutive property of partitioning.

It becomes increasingly apparent that the biological activity (or toxicity) of a chemical is the sum of a variety of molecular characteristics interacting to varying degrees in the several events leading up to the basic reaction. These characteristics can be shown to include the composition and configuration of the molecule, isomerism, spatial geometry, and the various thermodynamic properties that determine the constitutive and colligative characteristics. Analyses of the processes and events leading to the expression of toxicity provide some insight into the various factors, molecular and otherwise, and the role they play in biological activity.
The "critical path" of a chemical to an interaction with the target site is seen as involving four components or events, namely, (1) movement through and interaction of the chemical with the environment; (2) interaction of the chemical with the boundary between the organism and the environment; (3) passage of the drug through the boundary, i.e., absorption and diffusion; and (4) intracellular action of the drug. In each of these steps, several reactions may occur. Certain of these reactions may favor increased intracellular concentration, others tending to limit it. Having reached the boundary of the organism, interaction with the boundary and passage through it ensues. This is followed by transport and distribution within the tissues themselves until a biologically significant concentration is reached at the sensitive site. Obviously, the flux of the chemical to the target site is influenced by both the character of the system as well as the molecular properties of the chemical. This is true whether the processes are enzymatically mediated or purely physical. Since the interest this instance is on the chemical, the analyses, then, should focus on its characteristics. Table 1 is an attempt at just such analyses. In the course of the work of this cooperative study between the US and USSR health scientists, it became of interest to investigate some of the physicochemical properties in relation to their role in biological activity. The purpose was to further the understanding of molecular properties, both chemical and physical, as a basis for making predictions regarding toxicity. Various factors were examined as they related to the potential for accumulation, persistence, and ultimately the toxic action. Findings, it was felt, apply not only to predicting toxicity of new or untested compounds but are also of value in developing appropriate, rapid toxicity assay methods and in setting standards. Materials The chemicals utilized for the most part were analytical standard grade of greater than 95% pur- Size, geometry, orientation, dimension between bonding sites, exclusion volume ity. The other chemicals, such as solvents and salts, were reagent grade or purified before use. All water utilized was distilled and run through a 2 x 10 cm XAD-2 macroreticular resin column. This effectively removed trace organics from the water. The preliminary studies indicated the reagent grade octanol was unsuitable for partitioning work, possibly due to trace impurities which serve to stabilize the emulsion formed at the octanol/water interface. It was found that a number of these impurities could be removed by distillations. Analyses All samples were analyzed by gas/liquid chromatography, using an 63Ni electron capture detector operating in the pulse mode. Glass columns act with the appropriately coated solid phase for each type of chemical used. Oven temperatures and gas flow rates found suitable in the prior work for the compounds were used as the operating conditions. Hydrolysis The rates of hydrolysis were determined using a modification of the method described by Ruzicka et al. (ll). An amount of chemical approximately equal to half the aqueous solubility limit in 100 ml deposited on the walls of the flask by evaporating off an ether solution. After the residual ether was removed by a nitrogen stream, flasks were filled with an aqueous buffer solution. The phosphate buffer was 0.008695M KH2PO4 and 0.03043M Na2HPO, having a pH of 7.4. 
The flasks were shaken vigorously for 5 min and several aliquots immediately removed for zero-time analyses. Samples of the chemical in the buffer solution were maintained at 37.5 ± 1°C and at 20 ± 1°C, and the hydrolysis rate was followed. The concentration at any serial time was determined by analysis for the remaining amount of parent compound. The half-life was then determined from a first-order rate plot, and the enthalpy of activation for hydrolysis was calculated from the difference in rate between the two temperatures.

Partition Coefficient
Stock solutions of appropriate concentration, usually about 1 mg/ml in octanol, were prepared. A 2-ml portion of the stock solution was added to 20 ml of organic-free distilled water in a screw-top (Teflon-lined) 25 ml Corex centrifuge tube. Tubes were shaken in a horizontal position for 24 hr at 20°C, and then 1 ml of the octanol solution was removed for analyses. The remaining octanol was withdrawn along with the top few milliliters of the aqueous phase and discarded. The remaining aqueous phase was centrifuged for 20 min (17,500 rpm, approximately 39,000g) in a Servall refrigerated centrifuge at 20°C. An additional few milliliters was again discarded to remove octanol separated by centrifugation, and a 10 ml sample was withdrawn for analyses. The samples were then diluted (octanol) or extracted with hexane to the correct volume for analyses by gas-liquid chromatography (GLC).

Solubility
Sufficient chemical to be approximately five times the estimated water solubility was evaporated onto the walls of a 1 liter Erlenmeyer flask from an ether solution (12). The flasks were filled with organic-free distilled water and fitted with an inverted fritted gas dispersion tube. The tube was attached to a Teflon stopcock to facilitate removal of aliquots of the solution. A second tube, not extending below the liquid level, allowed the use of air pressure to remove the sample through the dispersion tube for analyses. The flasks were magnetically stirred and samples removed for analyses at regular intervals. The sampling continued until five consecutive samples with less than 5% variation in concentration were obtained. The solubility given is the average of these five samples. A visible excess of all compounds remained when the determinations were terminated.

Results and Discussion
The solubilities of the organochlorine and organophosphate compounds used in this study are shown in Tables 2 and 3. Also included in these tables is the logarithm of the partition coefficient. The solubilities presented in these tables are the best values either obtained experimentally or taken from the literature. The solubility behavior is for the most part about what would be expected, with more complex and less polar compounds showing a reduced water solubility. In the case of the organophosphates, the larger the alkyl group (ethyl versus methyl), the lower the solubility. Conversely, the partition coefficient is larger. It would be expected, therefore, that these compounds might more readily partition through lipophilic membranes and thus gain more ready access to the interior of the cell and to the target site (13,14). Figure 1 is a plot of the log of the water solubility versus the logarithm of the partition coefficient. Despite some differences in the temperatures at which solubilities were determined, there is an excellent correlation between the partition coefficient and water solubility for the wide range of compounds studied.
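The kinetic analysis described above is straightforward to reproduce. The sketch below uses made-up concentration-time data at the study's two temperatures: the rate constant is the slope of a first-order (ln C versus t) plot, the half-life follows from t1/2 = ln 2 / k, and an Arrhenius activation energy is obtained from the two rate constants (the activation enthalpy differs from it only by the small RT correction).

import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def first_order_k(times_h, concs):
    """Rate constant from a first-order plot: ln C = ln C0 - k t."""
    slope, _ = np.polyfit(times_h, np.log(concs), 1)
    return -slope                      # units: 1/h

def half_life(k):
    return np.log(2) / k

def activation_energy(k1, T1, k2, T2):
    """Arrhenius activation energy from rate constants at two temperatures."""
    return R * np.log(k2 / k1) / (1 / T1 - 1 / T2)  # J/mol

# Hypothetical concentration-time data (arbitrary units) at the two
# temperatures used in the study, 20 C and 37.5 C:
t = np.array([0, 24, 48, 96])                 # hours
c20 = np.array([1.00, 0.92, 0.85, 0.72])
c37 = np.array([1.00, 0.78, 0.61, 0.37])
k20, k37 = first_order_k(t, c20), first_order_k(t, c37)
print(f"t1/2(20C) = {half_life(k20):.0f} h, t1/2(37.5C) = {half_life(k37):.0f} h")
print(f"Ea = {activation_energy(k20, 293.15, k37, 310.65) / 1e3:.0f} kJ/mol")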
An inverse linear relationship in the log-log plot is noted, and this relationship obeys the regression equation:

log K = -0.670 log S + 5.00

The correlation coefficient of this relationship is 0.985. As a further assessment of the validity of the relation between partition coefficient and accumulation, these two factors were compared (13,14). Table 4 presents a limited number of data on partition coefficient and bioaccumulation. Though the data are limited, the correlation is obvious. A plot of these data shows a linear relationship with a correlation coefficient of 0.983. The regression equation relating the dependence of bioconcentration on aqueous solubility is:

log B = -0.624 log S + 3.72

Though the persistence of a compound in the mammalian body is influenced by deposition, excretion, and enzyme-mediated metabolism, it was felt that the rate of hydrolysis in buffers at physiological pH would give an indication of possible persistence. This persistence would relate to the organism's ability to accumulate and store the material, and hence to acquire a sufficiently high concentration for prolonged exposure. Table 5 gives the results of studies of the hydrolysis of several of the organophosphates. The organochlorines were not studied in this connection because of their known stability and resistance to hydrolysis. Further evidence of the relation of partition coefficient to accumulation and potential toxicity was found with dichlofenthion and leptophos. Here the individual had a single oral exposure, but symptoms of intoxication and high residue levels upon biopsy were found for 30+ days following the exposure (15). Leptophos, of course, has a relatively low acute toxicity in short-term tests but has been demonstrated to be neurotoxic as a result of its accumulation and persistence.

Summary and Conclusions
It is shown that a variety of molecular parameters are involved in the penetration, accumulation, persistence, and toxic action of a chemical. Solubility-partitioning is an important factor in penetration and accumulation. It also appears to have a significant relationship in terms of indicating the intrinsic toxicity. Applying this information further, these properties can be shown to be related to biological activity through a log-linear relationship between biological activity and solubility. It is concluded on the basis of experiment and observation that persistence is a significant factor in toxic action. If the basic molecular moiety has particular stability (except for activation reactions), it is highly probable that such a compound will exhibit chronic effects. This is illustrated both in the case of the organochlorines and in that of the more stable organophosphates. From these data and observations it is concluded that toxicity is the algebraic sum of the interaction of a number of molecular parameters.
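For concreteness, the two reported regressions can be applied and re-derived in a few lines. The function names and sample values below are ours; the coefficients are those quoted above, and the solubility units are whatever units the original fits used (the excerpt does not state them).

import numpy as np

# Regression equations reported in the text (log base 10):
#   log K = -0.670 * log S + 5.00   (octanol/water partition coefficient)
#   log B = -0.624 * log S + 3.72   (bioconcentration)
def log_partition_coefficient(S):
    return -0.670 * np.log10(S) + 5.00

def log_bioconcentration(S):
    return -0.624 * np.log10(S) + 3.72

# Hypothetical solubility value, same units as the original fit:
S = 0.0012
print(f"log K = {log_partition_coefficient(S):.2f}")
print(f"log B = {log_bioconcentration(S):.2f}")

# How such coefficients are obtained from paired measurements (made-up data):
rng = np.random.default_rng(0)
log_S = np.log10([1e-3, 5e-3, 2e-2, 1e-1, 1.0])
log_K = -0.670 * log_S + 5.00 + rng.normal(0, 0.05, 5)
slope, intercept = np.polyfit(log_S, log_K, 1)
r = np.corrcoef(log_S, log_K)[0, 1]
print(f"fit: log K = {slope:.3f} log S + {intercept:.2f}, r = {r:.3f}")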
Stem Cell Transplantation in Glanzmann's Thrombasthenia: A Report of Two Adult Patients

Glanzmann's thrombasthenia (GT) is an autosomal recessive bleeding disorder characterised by mucocutaneous bleeding. At the molecular level, a defect in the platelet receptor glycoprotein (GP) IIb/IIIa leads to defective platelet aggregation. Anti-fibrinolytic agents, platelet transfusions, and recombinant factor VIIa are used for prophylaxis before invasive procedures and for treatment of bleeding events. Allogeneic stem cell transplant is the only curative option. Here, we report the cases of two adult male patients who underwent matched sibling donor stem cell transplantation for GT with recurrent bleeding requiring platelet and red cell transfusions. Both showed marked improvement in quality of life. To conclude, stem cell transplant is a viable treatment option for severe, difficult-to-control cases of GT.

Since platelet count and coagulation profile were normal, he was investigated for platelet function defects and was found to have GT. When taken to transplant, he had received more than 40 units of red cell concentrates and more than 250 units of platelets. He was fully HLA-matched with his younger sister (age 16 years). Both patient and donor had the same ABO blood group, but Rh incompatibility was present (the recipient was Rh positive and the donor was Rh negative).

During the pretransplant workup, he was incidentally found to have situs inversus and dextrocardia. His general condition was good, with a Karnofsky Performance score of 90% and a haematopoietic stem cell transplant comorbidity index (HCT-CI) score of zero (0). Both donor and recipient tested positive for CMV and EBV IgG antibodies.

After conditioning chemotherapy with busulfan 14 mg/m², cyclophosphamide 120 mg/m², and ATG 10 mg/kg (BU14 CY120 ATG10), he received stem cells collected via bone marrow harvest and peripheral blood apheresis (PBSC), with a CD34 dose of 5.6×10⁶/kg. He received graft-versus-host disease (GvHD) prophylaxis with twice-daily dosing of cyclosporine (CSA) starting from day -1, and IV methotrexate (MTX) 10 mg/m² on day +1 and 8 mg/m² on days +3 and +6. Standard prophylaxis for herpes zoster and Pneumocystis jirovecii was also given. The posttransplant period was complicated by febrile neutropenia starting on day +6, which responded to broad-spectrum antibiotics. Grade 2 mucositis occurred after the second dose of MTX. However, he managed his oral feeds and did not require parenteral nutrition.

CASE 2: A 23-year-old male presented to the outpatient department in 2015 with a history of epistaxis and recurrent gum bleeding from childhood. Regular red cell transfusions were required (~2 red cell concentrates per month) to maintain Hb of around 7-8 g/dl. The diagnosis of GT was confirmed on platelet aggregation studies and, given the severity of his disease, the option of allogeneic stem cell transplant was explored. Fortunately, he had an HLA-matched sister available, with no ABO mismatch. Myeloablative conditioning chemotherapy with busulfan 12.8 mg/m², cyclophosphamide 120 mg/m² and ATG 15 mg/kg (BU12.8 CY120 ATG15) was given, followed by infusion of bone marrow harvested stem cells (CD34+ dose 6.9×10⁶ cells/kg). Posttransplant GvHD prophylaxis was with cyclosporine and methotrexate. Neutropenic fever occurred in the immediate posttransplant period, which settled with broad-spectrum antibiotics and anti-fungal agents. Neutrophil engraftment occurred on day +14 and platelet engraftment on day +19.
A month posttransplant (day +28), his renal function deteriorated, with creatinine rising to 886 μmol/l and urea to 27.2 mmol/l. Cyclosporine was withheld and replaced with methylprednisolone. Renal function improved gradually on cessation of the nephrotoxic medication. This was followed by CMV reactivation, which was managed with IV ganciclovir initially and oral valganciclovir later on.

On day +81, he developed poor graft function, with donor chimerism dropping to 60%. A stem cell boost was planned, but the patient and family declined. Immune suppression was optimised with low-dose cyclosporine and mycophenolate mofetil (MMF). The patient was followed closely for graft function, which improved over time. At 1 year posttransplant, whole blood donor chimerism had improved to 85%. Lineage-specific chimerism was 75% for T-cells and 85% for CD15 (myeloid) cells at 18 months.

Currently, he is 23 months posttransplant, off immunosuppression and without GvHD. Short Tandem Repeats (STRs) for donor chimerism show stable mixed chimerism (85%). He is living a good quality of life with normal blood counts and no bleeding or transfusion requirement.

DISCUSSION
Data reported to date regarding haematopoietic stem cell transplant in GT are limited to case reports and series, and include transplants carried out in children and young adults with serious bleeding symptoms, both with and without antiplatelet antibodies, using bone marrow, umbilical cord, or peripheral blood stem cells [5]. To our knowledge, the patients presented here are the first reported cases from Pakistan.

GT is an autosomal recessive bleeding disorder, with a prevalence of 1 per million worldwide. In certain ethnic groups with an increased incidence of consanguinity, a prevalence rate of 1 in 200,000 has been reported [1]. The true burden of the disease is not known in Pakistan; however, a study from Karachi found GT in 9.6% (n=27) of patients who presented with a bleeding history, making it the third most common disorder after von Willebrand disease and fibrinogen deficiency among autosomal recessive bleeding disorders (ARBDs) [6]. In another study, from Lahore, the incidence was reported to be 20.4% [2]. The male-to-female ratio was 1.2:1. Mean age at diagnosis was 7 ± 2.5 years, ranging from 3 months to 35 years. Consanguinity was observed in 65% of patients [2].

Treatment options in response to surgical, traumatic or spontaneous bleeding include anti-fibrinolytic therapy and platelet transfusion. Repeated platelet transfusions carry the risk of platelet alloimmunisation, in addition to the risks of transfusion-transmitted infections, transfusion-associated lung injury and volume overload common to all blood products. Antibody formation has been reported to occur in 25-70% of patients [1,3]. Apart from anti-HLA antibodies, antibodies against the missing platelet glycoproteins are also formed, which is of particular concern in women of child-bearing age, as these antibodies may cross the placenta and cause severe foetal thrombocytopenia [1]. Transfusing HLA-matched and leukocyte-depleted platelets reduces but does not eliminate the risk of antibody formation. These processes, while recommended, may not be feasible in many clinical settings. Recombinant factor VIIa is approved both for prophylaxis before invasive procedures and for treatment of bleeding episodes, especially in platelet-refractory cases [3].
Despite tremendous advances in understanding the molecular nature of the disease, satisfactory treatment of GT remains a challenge. Recurrent spontaneous bleeding and a persistently high haemorrhagic risk impair quality of life significantly. Apart from allogeneic stem cell transplant, the other treatment options mentioned above are non-curative. Indications for stem cell transplant in GT are not clearly defined but include recurrent, severe bleeding episodes, platelet refractoriness and red cell transfusion dependency due to recurrent blood loss [7]. So far, 19 cases of stem cell transplant in GT have been reported [4]. The median age of the patients was 5 years. At a median follow-up of 25 months, all patients were alive. Busulfan plus cyclophosphamide was the most common conditioning regimen used [7-9].

The outcome of stem cell transplant is compromised by various complications, including conditioning toxicity, infectious complications and GvHD. While data in GT are limited, fully matched sibling donors, reduced-intensity conditioning regimens and well-optimised GvHD prophylaxis strategies improve transplant outcomes in haematologic disorders in general. In Pakistan, the large average family size and high prevalence of consanguinity make it possible to find matched sibling donors; in 70% of cases, a matched family donor is identified [10]. In our cases, the recurring requirement for red cell concentrates and platelet transfusions in the face of inadequate transfusion facilities was the main indication for stem cell transplantation. Both of our patients had fully matched sibling donors available and received ATG in addition to busulfan and cyclophosphamide in the conditioning regimen. Both are doing well with adequate graft function, have not experienced bleeding symptoms posttransplant, and are transfusion-independent.

With gene therapy still in the experimental phase, haematopoietic stem cell transplant remains the only curative option [5]. It is indicated in cases with recurrent life-threatening bleeding complications, particularly if patients are refractory to platelet transfusions. Transplant is usually considered in the younger population, in whom the risks of associated complications, mainly GvHD and platelet refractoriness, are probably lower.

In adults, haematopoietic stem cell transplant should be assessed on an individual basis, and the risk of transplantation complications should be balanced against the risk of the bleeding problems of GT and the ability to control bleeding with the available therapy.
Anti-neurofascin 155 Antibody-positive Neuropathy in a Human Immunodeficiency Virus-infected Patient

Human immunodeficiency virus (HIV)-associated neuropathy is a common complication of HIV infection and has several clinical subtypes. HIV-associated chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) is a demyelinating neuropathy whose clinical features are known to differ from those of CIDP in the HIV-uninfected population. We herein report a case of CIDP in an HIV-infected patient who was finally diagnosed with anti-neurofascin 155 (NF155) antibody-positive neuropathy. The clinical features, including the clinical findings and therapeutic responses, were typical of paranodal antibody-mediated neuropathy. To our knowledge, this is the first case of anti-NF155 antibody-associated neuropathy in an HIV-infected patient.

Introduction
Human immunodeficiency virus (HIV)-associated neuropathy is a common complication of HIV infection. While HIV itself causes macrophage dysregulation through proinflammatory cytokine release or direct neurotoxicity through its envelope protein, it also evokes autoimmune processes, such as chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) (1).

HIV-associated CIDP is a chronic demyelinating neuropathy occurring in patients with HIV and has distinct clinical features compared with CIDP in the HIV-uninfected population, including a younger age of onset, a monophasic progressive course, slightly higher cerebrospinal fluid (CSF) protein levels, and a significantly better response to corticosteroid treatment (2). Although these distinct clinical features and therapeutic responses to immune modification therapy suggest that some specific autoimmune conditions might exist in this disease, no definite evidence supports this implication (2).

CIDP is a chronic inflammatory neuropathy with diverse subtypes. Recent progress in neuroimmunology has revealed a growing number of autoantibodies targeting the peripheral nervous system. There is increasing evidence that neuropathy associated with antibodies targeting the node and paranode of myelinated peripheral nerves, the autoimmune "nodo-paranodopathies", should be recognized as a pathologically distinct disease entity. In particular, anti-neurofascin 155 (NF155) antibody, a representative nodal and paranodal antibody, is associated with a specific phenotype versus seronegative CIDP. Characteristics of anti-NF155 antibody-positive neuropathy include predominantly distal weakness, marked elevation of CSF protein levels, and nerve root enlargement and enhancement on magnetic resonance imaging (MRI). Patients with this antibody tend to respond well to corticosteroids or rituximab, whereas immunoglobulins provide only transient or partial relief (3). We herein report a case of neuropathy in an HIV-infected patient who was ultimately diagnosed with anti-NF155 antibody-positive neuropathy. To our knowledge, this is the first case of anti-NF155 antibody-positive neuropathy in an HIV-infected patient. HIV evokes a wide variety of autoantibody-mediated diseases, such as Graves' disease, antiphospholipid syndrome, and immune thrombocytopenic purpura (4,5), and the anti-NF155 antibody in our case might be one such HIV-associated autoantibody. This case has clinical features of both anti-NF155 antibody-positive neuropathy and HIV-associated CIDP and suggests an autoantibody-mediated feature of HIV-associated CIDP.
Case Report
A 50-year-old man with HIV infection visited our institution with a 7-month history of dysesthesia and weakness of the extremities. At 36 years old, the patient had been identified as infected with HIV. While combination antiretroviral therapy (cART) initiated at 40 years old had maintained HIV-RNA at an undetectable level, 6 months prior to the visit HIV-RNA had become detectable (37 copies/mL) for the first time in 10 years. Paresthesia in the soles of both feet gradually expanded proximally over 7 months, accompanied by weakness. The patient's antiretroviral therapy regimen did not include any agents known to cause neuropathy. A neurological examination revealed mild symmetric weakness in the toe extensors, with diminished tendon reflexes in all four extremities. Symmetric dysesthesia and hypoesthesia in the distal extremities were noted. Deep sensation, including vibratory sensation and proprioception, was preserved. A cranial nerve examination was normal. Ataxia and tremor were absent. These findings suggested progressive polyneuropathy.

An examination of the CSF revealed prominent elevation of protein (164 mg/dL) but was otherwise normal (1 white cell/μL, IgG index 0.69). MRI revealed enlarged nerve roots in both the brachial and lumbosacral plexuses, suggesting nerve root involvement (Figure B). Nerve conduction studies (NCSs) showed prolonged distal latencies and slow conduction velocities (Figure C). The F-waves were delayed and reduced in occurrence. Sensory nerve action potential amplitudes were decreased (Table 1). These findings met the electrodiagnostic criteria for CIDP (6), and a preliminary diagnosis of HIV-associated CIDP was made.

High-dose intravenous immunoglobulin (IVIg, 2 g/kg) and subsequent weekly subcutaneous immunoglobulin (SCIg, 0.2 g/kg) administration mildly relieved the patient's symptoms; however, the sensory disturbance continued to progress. During gradual neurological deterioration under SCIg, the patient was found to be positive for anti-NF155 IgG4 antibodies in a flow cytometric assay (7). Considering the more favorable response to corticosteroids in anti-NF155 antibody-positive neuropathy (7), intravenous corticosteroid administration followed by initiation of 0.4 mg/kg/day oral prednisolone (equivalent to 0.5 mg/kg/day in consideration of drug interactions with antiretroviral agents) was administered.

The sensory disturbance resolved almost completely within three months, and amplitudes and conduction velocities improved in both the motor and sensory NCSs (Figure A). Notably, sensory nerve action potentials became detectable in the ulnar nerve for the first time, providing electrophysiological evidence of the therapeutic response. Although anti-NF155 antibody titers are reported to be a marker of disease activity (8), we unfortunately did not have the opportunity to test posttreatment titers.

Discussion
In our case, the patient initially presented with HIV-associated neuropathy, but the clinical features, including distal dominant weakness, enlarged nerve roots on MRI, and the therapeutic response to corticosteroids, were consistent with anti-NF155 antibody-positive CIDP (3). To our knowledge, this is the first case of anti-NF155 antibody-positive neuropathy in an HIV-infected patient.
HIV-associated neuropathy has several subtypes with a complex pathophysiology (1). The most common type of HIV-associated neuropathy is distal sensory neuropathy, due to HIV-associated macrophage dysregulation (9) or direct neurotoxicity of the HIV envelope protein gp120 (10). Another type of HIV-associated neuropathy, inflammatory demyelinating polyradiculoneuropathy, has an autoimmune aspect and definite characteristics in its clinical course and therapeutic response, as presented in our case. Interestingly, HIV evokes a wide variety of autoantibody-mediated diseases, such as Graves' disease, antiphospholipid syndrome, and immune thrombocytopenic purpura (4). Neurological diseases, such as myasthenia gravis and Guillain-Barré syndrome, have also been reported (5), and the anti-NF155 antibody in our case might have been one such HIV-associated autoantibody. Autoimmune conditions are more likely to be present in the early phase of HIV infection or after the initiation of cART, when CD4 cell counts are relatively preserved and immunodeficiency is not severe (11). In our case, HIV infection was relatively well controlled, which might support the existence of HIV-associated autoimmune conditions.

NF155 is an essential adhesion molecule located in the paranodal septate-like junctions of peripheral and central myelinated axons (12). Recent studies have reported that anti-NF155 antibody evokes a specific type of inflammatory neuropathy, characterized by a relatively young age of onset, predominantly distal weakness, frequent tremor and ataxia, marked elevation of CSF protein levels, and nerve root enlargement and enhancement on MRI (3,7). However, CIDP in the HIV-infected population has distinct features compared with CIDP in the HIV-uninfected population, including a relatively young age of onset, a monophasic progressive course, a significantly better response to corticosteroid treatment, and slightly higher CSF protein levels (2). Although ataxia and tremor were absent and the CSF protein levels were lower than in typical cases of anti-NF155 antibody-associated neuropathy, the distal dominant phenotype and therapeutic response were consistent with anti-NF155 antibody-associated neuropathy.

Our case showed findings consistent with anti-NF155 antibody-associated neuropathy and its clinically relevant autoantibody in an HIV-infected patient for the first time. Furthermore, our case has clinical features of both anti-NF155 antibody-positive neuropathy and HIV-associated CIDP (Table 2) (2,7), suggesting a possible link between these two disease entities.

Conclusion
Neuropathy in an HIV-infected patient may present as anti-NF155 antibody-positive neuropathy. HIV-induced immune dysregulation causes multiple autoantibody-mediated diseases, and anti-NF155 antibody-positive neuropathy should be considered an important clinical presentation of HIV-associated neuropathy. Further studies should be conducted to determine the frequency of positive anti-nodal and paranodal antibodies in HIV-infected patients with CIDP and their therapeutic response, to delineate the differences in the clinical features of CIDP with or without HIV infection.
Figure. Clinical findings in the patient. (A) The disease course of the patient. Following the administration of corticosteroids, sensory disturbance improved prominently. (B) Magnetic resonance imaging revealed nerve root hypertrophy with gadolinium enhancement in both the brachial plexus (a, b) and lumbosacral plexus (c, d) [a and c, magnetic resonance neurography (3D-NerveVIEW); b and d, gadolinium-enhanced T1-weighted imaging]. Dashed lines on neurography indicate the level of the axial image shown. Arrowheads indicate nerve root hypertrophy. (C) A nerve conduction study showed prominent demyelination, which improved after corticosteroid therapy. Distal latencies, conduction velocities, and compound muscle action potential (CMAP) amplitudes improved in motor conduction studies. The sensory nerve action potential (SNAP) of the ulnar nerve was initially undetectable; three months after initiating corticosteroids, SNAP was detected. IVIg: intravenous immunoglobulin, SCIg: subcutaneous immunoglobulin, IVMP: intravenous methylprednisolone, PSL: prednisolone, MCS: motor conduction studies, SCS: sensory conduction studies, DL: distal latency, dCMAP: distal compound muscle action potential amplitude, MCV: motor conduction velocity, SCV: sensory conduction velocity, SNAP: sensory nerve action potential
Bifidobacterium dentium-derived γ-glutamylcysteine suppresses ER-mediated goblet cell stress and reduces TNBS-driven colonic inflammation

ABSTRACT Endoplasmic reticulum (ER) stress compromises the secretion of MUC2 from goblet cells and has been linked with inflammatory bowel disease (IBD). Although Bifidobacterium can beneficially modulate mucin production, little work has been done investigating the effects of Bifidobacterium on goblet cell ER stress. We hypothesized that secreted factors from Bifidobacterium dentium downregulate ER stress genes and modulate the unfolded protein response (UPR) to promote MUC2 secretion. We identified by mass spectrometry that B. dentium secretes the antioxidant γ-glutamylcysteine, which we speculate dampens ER stress-mediated ROS and minimizes ER stress phenotypes. B. dentium cell-free supernatant and γ-glutamylcysteine were taken up by human colonic T84 cells, increased glutathione levels, and reduced ROS generated by the ER stressors thapsigargin and tunicamycin. Moreover, B. dentium supernatant and γ-glutamylcysteine were able to suppress NF-kB activation and IL-8 secretion. We found that B. dentium supernatant, γ-glutamylcysteine, and the positive control IL-10 attenuated the induction of the UPR genes GRP78, CHOP, and sXBP1. To examine ER stress in vivo, we first examined mono-association of B. dentium in germ-free mice, which increased MUC2 and IL-10 levels compared to germ-free controls. However, no changes were observed in ER stress-related genes, indicating that B. dentium can promote mucus secretion without inducing ER stress. In a TNBS-mediated ER stress model, we observed increased levels of UPR genes and pro-inflammatory cytokines in TNBS-treated mice, which were reduced with the addition of live B. dentium or γ-glutamylcysteine. We also observed increased colonic and serum levels of IL-10 in B. dentium- and γ-glutamylcysteine-treated mice compared to vehicle control. Immunostaining revealed retention of goblet cells and mucus secretion in both B. dentium- and γ-glutamylcysteine-treated animals. Collectively, these data demonstrate positive modulation of the UPR and MUC2 production by B. dentium-secreted compounds.

Introduction

The gastrointestinal epithelium functions as a barrier to prevent undesirable luminal antigens or irritants from entering the body. 1 The intestinal barrier is maintained both by intact epithelial cells and by the protective mucus layer that overlays the epithelium. Intestinal mucus is synthesized and secreted from goblet cells. 2,3 Mucus synthesis starts with dimerization of mucin MUC2 proteins in the endoplasmic reticulum (ER), followed by O-glycosylation in the Golgi. After further oligomerization, mature mucins are stored as granules until they are released from intestinal goblet cells. 4 Since mucin synthesis requires precise, continuous folding in the ER, goblet cells are particularly sensitive to ER stress. 4,5 ER stress occurs when misfolded proteins accumulate, and this stress induces signaling pathways that initiate the unfolded protein response (UPR). 6,7 The UPR is initiated by the heat shock protein family chaperone GRP78, which then activates distinct signal transducers. 5 ER stress can also generate reactive oxygen species (ROS) and activate NF-kB. 8,9 The balance of ROS levels in the ER is critical for homeostasis, as excessive accumulation of ROS leads to further accumulation of misfolded proteins, thereby creating a cycle of ER stress. 10,11
Excessive or chronic ER stress and oxidative stress in goblet cells reduce MUC2 production, deplete the mucus barrier, and induce cell injury and inflammation.

Bacterial Culture

Bifidobacterium dentium ATCC 27678 (ATCC, American Type Culture Collection), a human fecal isolate, was grown in an anaerobic workstation (Anaerobe Systems AS-580) with a mixture of 5% CO2, 5% H2, and 90% N2. B. dentium was grown in de Man, Rogosa, and Sharpe (MRS) medium (Difco) from single colonies at 37°C overnight anaerobically. B. dentium was subcultured into a fully defined medium, termed LDM4, at an optical density (OD600nm) of 0.1 as previously described. 67 LDM4 cultures were grown anaerobically for 24 hr at 37°C. After incubation, cultures were centrifuged at 5,000 x g for 5 min. The supernatant was adjusted to a pH of 7 and sterile-filtered through a 0.2 µm-pore PVDF (polyvinylidene fluoride) membrane (Millipore). This supernatant is termed "conditioned media." For animal experiments, B. dentium was grown overnight anaerobically in MRS and centrifuged at 5,000 x g for 5 min. Bacteria were then washed 2x with sterile anaerobic PBS and adjusted to 10^9 CFU/mL. These bacteria were used for oral gavage. Bacterial viability was confirmed for each gavage session by serially plating B. dentium on MRS agar to calculate CFUs (see the worked example below).

Mass Spectrometric Analysis of γ-glutamylcysteine

The liquid chromatography-tandem mass spectrometry (LC-MS/MS) system was comprised of a Shimadzu Nexera X2 MP Ultrahigh-Performance Liquid Chromatography (UHPLC) system (Kyoto, Japan) coupled to a Sciex 6500 QTrap hybrid triple-quadrupole/linear ion trap MS system from Danaher (Washington, DC, USA). Operational control of the LC-MS/MS was performed with Analyst® (Ver. 1.6.2), and quantitative analysis was performed using MultiQuant™ (Ver. 3.0.1). The targeted LC-MS/MS-based metabolomics methods used for the quantitative analysis of the γ-Glu-Cys content of the LDM4 medium are described in their entirety in the Supplemental Materials Section.

Culturing conditions

Human colon T84 cells (ATCC CCL-248) were obtained from ATCC and grown in Gibco Dulbecco's Modified Eagle Medium (ThermoFisher) supplemented with 10% fetal bovine serum (FBS) in a humidified atmosphere at 37°C, 5% CO2 (see supplemental methods for additional details). T84 cells were grown to confluence on 24-well tissue-culture-treated plates and treated with 1 µg/mL of the ER stressor thapsigargin (Tocris #1138), 10 µg/mL of the ER stressor tunicamycin (Sigma #T7765-1 MG), or 0.1 µg/mL IL-1β in the presence or absence of various concentrations of B. dentium LDM4 conditioned media, 2 mM γ-glutamylcysteine (Bachem #4028244.025), or 50 ng/mL IL-10 (Peprotech #200-10) in DMEM without glucose and without FBS for 6 hr. Following incubation, cells were incubated with TRIZOL for RNA extraction. For western blot analysis, cells were seeded at 5 × 10^4 cells/cm2 in 12-well tissue-culture-treated plates (Corning) until the cells reached confluence. Once cells reached confluency, T84 cells were serum-starved by incubation overnight in DMEM without glucose and without FBS at 37°C, 5% CO2. Cells were then treated with 1 µg/mL thapsigargin with or without 50% B. dentium LDM4 conditioned media or γ-glutamylcysteine in DMEM without glucose and without FBS for 8 hr. After incubation, cells were lysed in lysis buffer and stored at −80°C until processing. Cell viability was examined by propidium iodide staining (see supplemental methods).
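As a concrete illustration of the viable-count arithmetic behind the serial-plating step described above, the short sketch below computes CFU/mL from a plate count. The colony count, dilution factor, and plated volume are hypothetical, not data from this study.

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Viable count from a spread plate: CFU/mL = colonies / (dilution x volume)."""
    return colonies / (dilution_factor * plated_volume_ml)

# Hypothetical example: 152 colonies from 0.1 mL of a 10^-6 dilution.
print(cfu_per_ml(152, dilution_factor=1e-6, plated_volume_ml=0.1))  # 1.52e9 CFU/mL
```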
ROS and Glutathione Analysis

To examine ROS, T84 cells were pretreated with 5 μM 2′,7′-dichlorofluorescin diacetate (H2DCFDA; Sigma Aldrich Cat# D6883) for 1 hr at 37°C, 5% CO2. Cells were then washed gently 2x with PBS and treated with 10 µg/mL of the ER stressor tunicamycin or 2 mM H2O2 in the presence or absence of various concentrations of B. dentium LDM4 conditioned media, 2 mM γ-glutamylcysteine, or IL-10 in DMEM. Cells were incubated with treatment conditions for 3 hr, washed 3x with PBS, and then H2DCFDA fluorescence was examined in cells in PBS on a Synergy H1 plate reader at excitation 485 nm/emission 520 nm. ThiolTracker Violet (ThermoFisher #T10095), an intracellular thiol probe, was used to detect glutathione levels. T84 cells were incubated with B. dentium LDM4 conditioned media, 2 mM γ-glutamylcysteine, or IL-10 in DMEM for 3 hr at 37°C, 5% CO2. Cells were then washed and incubated with 20 μM ThiolTracker Violet in PBS for 30 min at 37°C, 5% CO2. After incubation, cells were washed and fluorescence was examined on a Synergy H1 plate reader at excitation 404 nm/emission 526 nm. γ-Glutamylcysteine uptake was examined using fluorescein (see supplemental methods).

NF-kB Activation and IL-8 Analysis

To examine NF-kB activation, T84 cells at 80% confluence were transiently transduced with an NF-kB secreted luciferase reporter (Clontech) in Opti-MEM (ThermoFisher) using the XtremeGene HP DNA transfection reagent (Roche). 68 The final concentration was 0.6 μL XtremeGene HP per 0.3 μg DNA per well. Cells were then incubated for 48 hours at 37°C, 5% CO2. Following transfection, cells were treated with 1 µg/mL thapsigargin with or without 50% LDM4 un-inoculated media, 50% B. dentium LDM4 conditioned media, or γ-glutamylcysteine in DMEM without glucose and without FBS overnight. Supernatant was examined for luciferase activity using a Lonza Lucetta tube luminometer with a 2-second delay and a 10-second measurement time. To examine IL-8 production, T84 cells were seeded into 96-well plates (10,000 cells/well) overnight, and the following day, cells were serum-starved for 3 hr in DMEM without glucose and FBS. Then cells were treated with 1 µg/mL thapsigargin or 0.1 µg/mL IL-1β in the presence or absence of various concentrations of B. dentium LDM4 conditioned media or γ-glutamylcysteine. Cells were incubated overnight (16 hr) and supernatants were examined for IL-8 production by IL-8/CXCL8 DuoSet ELISA (R&D, #DY208-05).

Mouse Bone Marrow-Derived Dendritic Cell Culture

Mouse bone marrow dendritic cells were isolated as previously described. 69 Briefly, bone marrow was flushed from the femur and tibia of 8-week-old male Swiss Webster mice, treated with red blood cell lysis buffer, and 10^5 bone marrow cells per mL were seeded into 10 cm Petri dishes in 10 mL RPMI-1640 with 10% (v/v) heat-inactivated FBS and 100 ng/mL murine GM-CSF (Peprotech #315-03) and IL-4 (Peprotech #214-14). Cells were incubated for 7 days at 37°C, 5% CO2, and media were changed on day 3. On day 6, cells were trypsinized, seeded in new dishes at 2 × 10^5 cells/mL, and incubated overnight. On day 7, dendritic cells were treated with 100 ng/mL LPS, un-inoculated LDM4, or B. dentium LDM4 conditioned medium and incubated overnight. The following day, the supernatant was removed and examined by IL-10 ELISA (ThermoFisher #88-7105-22).

Mouse Colonic Organoid Culture

Mouse colonic organoids were generated as previously described. 70
Briefly, the colon was excised from 8-week-old male Swiss Webster mice and washed thoroughly in ice-cold Ca2+/Mg2+-free DPBS. Tissue was incubated in 3 mM EDTA, DTT, and sucrose for 30 min at 4°C. Crypts were collected in chelation buffer, centrifuged at 300 x g for 10 min, and embedded in Matrigel (BD Biosciences). After Matrigel polymerization, Matrigel domes were covered with complete media with growth factors (CMGF+) containing 10 µM Y-27632 rock inhibitor. 71 Colonic organoids were used in experiments after two passages to ensure cellular debris was removed. For differentiation, colonic organoids were grown for 48 hr in CMGF+, then the medium was changed to differentiation media. Delivery of bacterial conditioned media to the luminal membrane of colonic organoids was achieved by microinjection of 17.6 nL of solution (media control, un-inoculated LDM4, LPS, or B. dentium LDM4 conditioned media) using a Nanoject microinjector (Drummond Scientific Company) as previously described. 72 Colonic organoids were incubated overnight and supernatant was analyzed using an IL-10 ELISA (ThermoFisher #88-7105-22).

Animal Models

All animal experimental procedures were approved by the Institutional Animal Care and Use Committee (IACUC) at Baylor College of Medicine, Houston, TX. For gnotobiotic experiments, animals were housed in filter-top cages in sterile isolators at the Baylor College of Medicine germ-free facility. Swiss Webster germ-free mice were gavaged with sterile MRS media (germ-free controls) or with 3.2 × 10^8 CFU/mL B. dentium ATCC 27678 grown in MRS (B. dentium mono-associated). Both groups contained equal numbers of male and female mice to exclude gender bias (n = 5 males/5 females per treatment group). To ensure colonization, mice received oral gavage treatments once every other day for one week and a final gavage a week later, as previously described. 67 Colonization was confirmed by plating fecal samples on MRS and Blood Agar (Hardy Diagnostics). To confirm the absence of other bacteria, agar plates were incubated anaerobically and aerobically at 37°C for 48 hr. For TNBS experiments, BALB/c mice (8-12 weeks old) were purchased from Taconic and housed in the Baylor College of Medicine animal facility (Feigin Tower). Mice were pretreated by oral gavage with B. dentium (10^9 CFU/mL) or 1 mg/kg γ-glutamylcysteine (Bachem). After 1 week of pretreatment, mice were anesthetized by isoflurane inhalation and 5% (wt/vol) 2,4,6-trinitrobenzenesulfonic acid (TNBS) in ethanol was rectally administered. To ensure TNBS retention, mice were maintained in a vertical position for 2 min. Following TNBS administration, mice received daily oral gavage of either microbial or γ-glutamylcysteine treatment until euthanasia (3-5 days). Histological scores of colitis were assessed by a Texas Children's Hospital pathologist. Staining was performed on paraffin-embedded colon sections (see supplemental methods). Colon tissue was also collected in TRIZOL and used to isolate RNA (see supplemental methods). Serum cytokines were analyzed using a Cytokine Magnetic Bead Panel (Millipore, cat. #MCYTOMAG) with a MagPix instrument (see supplemental methods).

Statistics

Data are presented as mean ± standard deviation. Comparisons between groups were made with Student's t-test or one-way or two-way analysis of variance (ANOVA), using the Holm-Sidak post hoc test to determine significance between pairwise comparisons. Graphs and statistics were generated using GraphPad (GraphPad Software, Inc.,
La Jolla, CA). A p < .05 was considered significant, and n is the number of experiments performed.

B. dentium secretes γ-glutamylcysteine, which promotes epithelial glutathione production and diminishes ROS and NF-kB activation

γ-Glutamylcysteine, the precursor to glutathione, is a modulator of both oxidative and ER stress. To determine the ability of B. dentium to produce γ-glutamylcysteine, we grew the bacteria in a fully defined medium termed LDM4 for 16 hr and assessed the concentration of γ-glutamylcysteine in the supernatant by mass spectrometry (MS/MS). B. dentium secreted high levels of γ-glutamylcysteine (2.2 ± 0.7 µg/mL) in LDM4. No microbial glutathione was detected. In the intestine, γ-glutamylcysteine can be taken up by PEPT1 and PEPT2 transporters, where it can feed into the host glutathione pathway. To model the colon, we selected the mucin-producing colonic cell line T84, which expresses the γ-glutamylcysteine transporter PEPT1, secretes mucus, and has been previously used to examine ER stress. 73-76 To assess whether B. dentium-secreted γ-glutamylcysteine could be incorporated by the host, we fluorescently labeled all cysteine-containing compounds, including γ-glutamylcysteine, in B. dentium conditioned LDM4 with fluorescein-5-maleimide and examined intracellular localization in T84 cells by flow cytometry and microscopy (Figure 1a,b). As a control, we also labeled purified γ-glutamylcysteine with fluorescein-5-maleimide. Consistent with the high levels of γ-glutamylcysteine observed in B. dentium-conditioned LDM4, we found high levels of cysteine-labeled compounds in T84 cells. In contrast to unstained T84 cells (3.6 ± 0.03%) and 50% inoculated fluorescently labeled LDM4 controls (9.2 ± 1.9%), B. dentium fluorescently labeled LDM4-conditioned media was present in 89.1 ± 1.91% of cells by flow cytometry (Figure 1a). Moreover, purified γ-glutamylcysteine was found in 76.3 ± 1.63% of cells. Fluorescence microscopy confirmed the presence of fluorescently labeled B. dentium supernatant in T84 cells (Figure 1b). These data indicate that microbial γ-glutamylcysteine can enter the intestinal epithelium. To confirm that microbial γ-glutamylcysteine could regulate host glutathione, we added purified γ-glutamylcysteine and B. dentium conditioned LDM4 containing γ-glutamylcysteine to T84 cells and measured glutathione production using a fluorescent thiol tracker (Figure 1c). We observed elevated levels of glutathione in response to B. dentium and γ-glutamylcysteine treatment, indicating that microbial-derived γ-glutamylcysteine is capable of elevating host glutathione levels. As a control, we also included IL-10, which has been shown to suppress goblet cell ER stress and ROS. 4,9,77,78 Interestingly, we did not observe any change in glutathione levels in IL-10-treated cells compared to their respective media controls. Glutathione is known to minimize ROS, a byproduct of ER stress, and thereby suppress NF-kB activation. 8,9,45 To address the role of microbial γ-glutamylcysteine in suppressing ROS, we fluorescently labeled T84 cells with H2DCFDA and examined ROS fluorescence after treatment (Figure 1d). As expected, γ-glutamylcysteine and IL-10 suppressed ROS generated by ER stress (thapsigargin and tunicamycin) as well as oxidative stress (H2O2). B. dentium conditioned LDM4 and γ-glutamylcysteine suppressed all forms of ROS, indicating that microbial compounds can promote host glutathione and suppress ROS.
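The group comparisons in these assays rely on ANOVA with Holm-Sidak-corrected pairwise tests, as described in the Statistics section above. The authors used GraphPad; a minimal open-source equivalent might look like the sketch below, where the group labels and fluorescence values are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical normalized fluorescence readings for three treatment groups.
groups = {
    "media":  np.array([1.00, 1.10, 0.95, 1.05]),
    "Bd_sup": np.array([0.60, 0.55, 0.70, 0.65]),
    "yGC":    np.array([0.58, 0.62, 0.66, 0.60]),
}

# One-way ANOVA across groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise t-tests with Holm-Sidak correction, as in the post hoc step.
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4g}{' *' if sig else ''}")
```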
Finally, we examined NF-kB activation using T84 cells transiently transfected with an NF-kB secreted luciferase reporter (Figure 1e). In this assay, we observed that γ-glutamylcysteine, B. dentium cell-free supernatant, and IL-10 decreased NF-kB levels induced by ER stress (thapsigargin) and by the pro-inflammatory cytokine IL-1β. NF-kB activation by IL-1β was consistent with levels of IL-8, a downstream target (Figure 1f). We found that B. dentium conditioned LDM4, γ-glutamylcysteine, and IL-10 diminished IL-1β-induced IL-8 production. These data provide strong evidence that B. dentium-secreted products, such as γ-glutamylcysteine, could suppress the ER stress phenotype.

Figure 1. B. dentium γ-glutamylcysteine enters host cells, upregulates glutathione, and reduces ROS, NF-kB, and cytokine synthesis. a. Fluorescein-5-maleimide was used to fluorescently tag cysteine residues in γ-glutamylcysteine, B. dentium conditioned LDM4 media, or un-inoculated LDM4 media. Representative histograms from flow cytometry analysis of T84 cells after exposure to cysteine-tagged γ-glutamylcysteine, B. dentium-conditioned LDM4 media, or un-inoculated LDM4 media (control) (n = 3/experiment). b. Representative images of T84 cells following incubation with fluorescein-5-maleimide-tagged B. dentium conditioned LDM4 (which fluorescently labels cysteine residues), counterstained with the nuclear dye Hoechst (scale bar = 50 µm). c. Measurement of glutathione levels in T84 cells after 3 hr using a thiol tracker, as measured on a fluorescence plate reader (ex/em: 405/528) (n = 3/experiment). d. Measurement of ROS levels in T84 cells after 3 hr in cells stained with H2DCFDA, as measured on a fluorescent plate reader (ex/em: 485/528) (n = 3/experiment). e. Secreted NF-kB luciferase quantified from T84 monolayers treated for 16 hr (n = 4/experiment). f. IL-8 levels of T84 cells after 16 hr incubation with treatment, as measured by ELISA (n = 3/experiment). All data are expressed as mean ± st dev and all experiments were repeated 3-4 independent times. *p < .05, Multi-Way ANOVA.

B. dentium and γ-glutamylcysteine suppress thapsigargin- and tunicamycin-induced ER stress in mucin-producing cell lines

Next, we sought to determine if B. dentium conditioned LDM4 could dampen ER stress signaling components. GRP-78 is the major regulator of ER stress, and its activation contributes to the initiation and regulation of inflammatory processes and apoptosis. 79,80 We first examined the ER stress signals GRP-78, CHOP, and sXBP1 by qPCR in T84 cells (Figure 2a,b). We observed elevated levels of GRP-78, CHOP, and sXBP1 in response to the ER stressor thapsigargin and, to a lesser degree, tunicamycin. However, treatment with 50% B. dentium LDM4 conditioned medium, γ-glutamylcysteine, and IL-10 significantly suppressed the expression of all ER stress proteins in the presence of both thapsigargin and tunicamycin. Chronic ER stress promotes apoptosis, so we also examined cell death using propidium iodide after 48 hr of incubation (Figure 2c). Significant propidium iodide staining, and thus cell death, was observed in thapsigargin-, tunicamycin-, and H2O2-treated cells. Similar propidium iodide staining was observed in un-inoculated LDM4 bacterial media controls. In contrast, significantly less cell death occurred in B. dentium-conditioned LDM4, γ-glutamylcysteine, and IL-10 treated wells. These data indicate that B. dentium-secreted products, including γ-glutamylcysteine, can suppress ER stress and apoptosis in mucin-producing cells.
Mono-association of mice with B. dentium stimulates IL-10 and MUC2 production

Given the dramatic suppression of ER stress by IL-10 and B. dentium-conditioned LDM4, we next sought to determine if B. dentium colonization promoted mucus production and IL-10 secretion in vivo. We mono-associated germ-free mice by oral gavage with live B. dentium and examined the colonic architecture and mucus layer by H&E and PAS-AB staining (Figure 3a). We observed normal crypt architecture by H&E in B. dentium mono-associated mice, with increased numbers of goblet cells compared to germ-free controls. Periodic Acid Schiff-Alcian Blue (PAS-AB) mucus staining confirmed that B. dentium colonization increased mucin-positive goblet cells. This observation was consistent with increased MUC2 mRNA levels in B. dentium mono-associated mice compared to germ-free counterparts (Figure 3b). We also examined whole-colon IL-10 production by qPCR (Figure 3c). We observed elevated IL-10 mRNA and serum levels in B. dentium mono-associated mice compared to germ-free controls (Figure 3c,d). Importantly, we did not observe any changes in the expression of ER stress genes (GRP-78, CHOP, or sXBP1) in B. dentium-colonized mice, suggesting that B. dentium colonization promotes mucus production without stimulating goblet cell ER stress. IL-10 is commonly produced by dendritic cells and can be produced in response to bacterial stimulation. 81,82 To determine if B. dentium-secreted factors could stimulate IL-10 from immune cells, we generated bone marrow-derived mouse dendritic cells. Addition of un-inoculated LDM4 had no effect on IL-10 levels as measured by ELISA (Figure 4a). However, addition of B. dentium LDM4 conditioned media and γ-glutamylcysteine both stimulated IL-10 production. To confirm that the epithelium was not responsible for IL-10 synthesis, colonic organoids were generated from germ-free mice and treated with un-inoculated LDM4, B. dentium-conditioned LDM4, or γ-glutamylcysteine (Figure 4b). The epithelial cells in the organoids were unable to produce IL-10, indicating that B. dentium-secreted products promote IL-10 from immune cells such as dendritic cells.

B. dentium and γ-glutamylcysteine elevate IL-10 and protect against TNBS colitis

Colitis-inducing compounds, including TNBS, are known to activate ER stress. 33-36 We therefore investigated whether B. dentium and γ-glutamylcysteine could downregulate the molecular features of ER stress and minimize experimental colitis. We induced colitis in mice by rectal administration of TNBS in ethanol, which causes severe colitis as assessed by histological scoring. As anticipated, we observed extensive microscopic damage to colonic architecture in PBS-vehicle control mice with TNBS compared with untreated mice (Figure 5a).

Figure 2. [ER stress] can be suppressed by B. dentium, γ-glutamylcysteine, and IL-10. a. qPCR analysis of T84 monolayers after 6 hr incubation with or without the ER stressor thapsigargin. Cells were treated with either media, 50% un-inoculated LDM4 (LDM4), 50% B. dentium LDM4 (Bd), 2 mM γ-glutamylcysteine (yGC), or 100 ng/mL IL-10 (IL-10) (n = 6/experiment). b. qPCR analysis of T84 monolayers after 6 hr incubation with or without the ER stressor tunicamycin (n = 6/experiment). c. Propidium iodide staining of T84 cells after 48 hr incubation with ER stressors (thapsigargin or tunicamycin) or the oxidative stressor hydrogen peroxide (H2O2) (n = 6/experiment). *p < .05, Multi-Way ANOVA.
We observed immune infiltration, transmural inflammation with thickening of the muscularis, and loss of crypts and goblet cells in the colons of PBS-treated TNBS mice, all hallmarks of disease activity. In contrast, B. dentium- and γ-glutamylcysteine-treated TNBS mice exhibited significant improvement in colonic histopathology compared with PBS-treated TNBS mice, which is reflected in the histological scores (Figure 5b). Serum analysis by MagPix revealed elevated anti-inflammatory IL-10 in B. dentium- and γ-glutamylcysteine-treated mice with TNBS compared with PBS-vehicle-treated TNBS and untreated mice (Figure 5c). Additionally, pro-inflammatory cytokines (IFNγ, IL-1α, IL-1β, IL-12, IL-6, KC, and TNF) were increased in PBS-treated TNBS mice and were reduced in TNBS mice treated with B. dentium and γ-glutamylcysteine. We also observed decreases in the ER stress-related genes GRP-78, CHOP, and sXBP1 in B. dentium-treated TNBS mice compared with PBS-treated TNBS mice (Figure 5d-f). Furthermore, we noted decreased levels of GRP-78 and CHOP in γ-glutamylcysteine-treated TNBS mice compared with PBS-treated TNBS mice. Since we observed dramatically enhanced goblet cell numbers in B. dentium- and γ-glutamylcysteine-treated TNBS mice, we also assessed goblet cells by PAS-AB and immunostaining (Figure 6a,b). While the mucus layer was disrupted in PBS-treated TNBS mice, B. dentium and γ-glutamylcysteine administration promoted retention of the mucus layer and preservation of MUC2-positive goblet cells. Analysis of colonic tissue by qPCR confirmed that B. dentium and γ-glutamylcysteine elevated MUC2 and IL-10 levels compared to PBS-treated TNBS mice (Figure 6c,d). Collectively, these data support the role of B. dentium-secreted compounds, such as γ-glutamylcysteine, in promoting IL-10 and suppressing oxidative and ER stress in vitro and in vivo. These findings point to the potential for B. dentium to be used as a targeted therapeutic for goblet cell-related diseases.

Discussion

Our data indicate a beneficial role for B. dentium in reducing activation of the ER stress proteins GRP-78, CHOP, and sXBP1, proteins that are key mediators of ER stress in goblet cells. Our work also suggests that B. dentium-secreted products can suppress ER stress-driven ROS, elevate glutathione levels, suppress NF-kB, and diminish pro-inflammatory cytokines. We have identified that B. dentium secretes γ-glutamylcysteine, which mirrors the activity of B. dentium-conditioned LDM4 in our studies. Using bone marrow-derived dendritic cells, we found that B. dentium-conditioned LDM4 can stimulate IL-10 production, an effect we also observed in vivo in gnotobiotic and conventionalized mice. We believe that these two systems, γ-glutamylcysteine synthesis and IL-10 elevation, work in synergy to decrease ER stress and ROS, promote goblet cell homeostasis, and maintain the intestinal mucus layer. This study is among the first to link a commensal microbe and its secreted products to modulation of ER stress. Bifidobacteria are known to beneficially modulate the host. 66,82-91 Although multiple mechanisms are likely involved, modulation of intestinal mucin production and reduction of inflammation are likely key pathways Bifidobacteria employ to promote intestinal homeostasis. Bifidobacteria can upregulate MUC2 production 67 and alleviate ER stress. 66
Goblet cells are particularly sensitive to ER stress 16,24 and thus modulation of goblet cell ER stress by Bifidobacteria may represent a significant pathway for promoting intestinal health. Although no microbial metabolites have been previously identified which suppress ER stress, we reasoned that γ-glutamylcysteine may ameliorate goblet cell ER stress. γ-Glutamylcysteine is known to feed into the glutathione pathway and reduce oxidative stress. 33,35,41,44,49,55 In this study, we found that B. dentium secretes γ-glutamylcysteine, which can be converted into the powerful antioxidant glutathione and suppress oxidative stress. Our work indicates that bacterial secreted products harboring γ-glutamylcysteine, as well as purified γ-glutamylcysteine, enter cells and upregulate glutathione levels. Since ER stress activates ROS, we speculate that bacterial γ-glutamylcysteine can suppress the negative consequences of ER stress by acting on ROS. Recent work has suggested that γ-glutamylcysteine alone may likewise serve as an antioxidant. 92 Thus, it is possible that γ-glutamylcysteine could also act directly by suppressing ROS. By suppressing ROS, we speculate that γ-glutamylcysteine inhibits activation of NF-kB and its initiation of the ER stress regulator GRP-78. This is consistent with the literature, which suggests that activation of GRP-78 requires ROS. 93 In this way, we reason that our B. dentium-secreted γ-glutamylcysteine may be modulating ER stress. Commensal lactobacilli also harbor the gshA gene needed to produce γ-glutamylcysteine. Using the Integrated Microbial Genomes (IMG) database (http://img.jgi.doe.gov), we found that L. plantarum, L. salivarius, L. antri, and L. reuteri genomes contained the gshA gene (glutamate-cysteine ligase). Interestingly, we found an equal number of Bifidobacteria genomes, B. adolescentis, B. bifidum, B. pseudocatenulatum, and B. dentium, harboring the gshA gene. Using LC-MS/MS, we confirmed that lactobacilli could generate γ-glutamylcysteine (data not shown). However, B. dentium produced ~4x higher concentrations of γ-glutamylcysteine than our representative lactobacilli. Moreover, B. dentium can bind to MUC2, 67 potentially increasing the access of B. dentium-secreted metabolites such as γ-glutamylcysteine to the host epithelium. Not all lactobacilli species can adhere to intestinal mucus, 94,95 which may limit the exposure of the epithelium to this beneficial compound. We have previously demonstrated that B. dentium lacks the glycosyl hydrolases necessary to degrade mucin 67 and secretes compounds, including acetate, that increase MUC2 expression. 67 This makes B. dentium ideal for treatment in mucin-depleted states such as that observed in IBD patients.

Figure 3 (legend, fragment). All analyses were performed in germ-free (n = 10) and B. dentium mono-associated mice (n = 10). *p < .05, Student's t-test.

Figure 4 (legend, fragment). a. [...] were incubated with either media, 25% un-inoculated LDM4 media, 25% B. dentium conditioned LDM4 media, or 2 mM γ-glutamylcysteine for 16 hr. b. Representative phase-contrast image of colonic organoids generated from germ-free mice and IL-10 measurements of organoid supernatant by ELISA. Colonic organoids (400x, scale bar = 50 μm) were incubated with either media, 25% un-inoculated LDM4 media, 25% B. dentium-conditioned LDM4 media, or 2 mM γ-glutamylcysteine for 16 hr. n = 3/experiment, repeated 2 independent times. *p < .05, One-Way ANOVA.
Another potential benefit of using Bifidobacteria is that these microbes can be increased in density by common prebiotics such as inulin, plant-based β-glucans, or oligofructose. 96 As a result, we propose that Bifidobacteria-generated γ-glutamylcysteine may provide a suitable strategy for elevating epithelial glutathione and suppressing goblet cell ROS and inflammation. In addition to production of γ-glutamylcysteine, we observed that B. dentium conditioned LDM4 stimulated IL-10 production in immune cells. We speculate that B. dentium-conditioned media harbors other compounds that promote IL-10. Bifidobacteria are decorated with exopolysaccharides (EPS), a cell wall component that can be released into the milieu. Purified EPS from B. longum W11 stimulates IL-10 from human peripheral blood mononuclear cells (PBMCs). 97 EPS from B. longum BCRC 14634 also stimulated IL-10 production from J774A.1 macrophages. 98 Therefore, it is possible that EPS from B. dentium could be contributing to IL-10 production by dendritic cells and other immune cells. In addition to secreted compounds such as EPS, B. dentium metabolites may also contribute to IL-10 production and ER stress reduction. At present these compounds remain unidentified, but we believe future studies should focus on identifying these molecules. Previous work has shown that IL-10 alleviates ER stress by regulating recruitment of GRP-78 and promotes secretion of mucins from goblet cells. 4,5,9 In vivo, IL-10 administration in Winnie mice reduced MUC2 misfolding and inflammation. Additionally, IL-10 was able to reduce tunicamycin-induced ER stress in LS174T mucin-producing cells. 4 Consistent with these findings, we observed that recombinant IL-10 alleviated tunicamycin- and thapsigargin-driven ER stress in mucin-producing T84 cells. We believe that B. dentium stimulation of dendritic cells to produce IL-10 may also alleviate goblet cell ER stress in vivo to promote colonic mucus secretion. Although it is difficult to delineate which route is more important for suppressing ER stress (IL-10 vs. γ-glutamylcysteine) in our model, we speculate that both work together during TNBS colitis to suppress inflammation and preserve goblet cell numbers and epithelial barrier integrity.

Figure 6. B. dentium and γ-glutamylcysteine promote the retention of colonic goblet cells and mucus. a. Representative images of PAS-AB stains of untreated control animals and TNBS-treated animals receiving PBS vehicle, live B. dentium, or γ-glutamylcysteine, and B. dentium mono-associated colon (20x, scale bar = 50 µm). b. Representative images of MUC2 and γ-actin staining of untreated control animals and TNBS-treated animals receiving PBS vehicle, live B. dentium, or γ-glutamylcysteine, and B. dentium mono-associated colon (scale bar = 50 µm). c. Colonic mRNA expression of Muc2 in untreated control animals or TNBS-treated animals receiving PBS vehicle, B. dentium (Bd), or γ-glutamylcysteine (yGC). d. Colonic mRNA expression of IL-10 in untreated control animals or TNBS-treated animals receiving PBS vehicle, B. dentium (Bd), or γ-glutamylcysteine (yGC). n = 5 mice/group. *p < .05, One-Way ANOVA.

We selected the T84 human colonic adenocarcinoma cell line for our experiments as it is well characterized for anion 99-103 and mucin secretion. 73,104 In T84 cells, approximately 10% of the cell population consists of mucin-secreting cells. 73,104 This mirrors the approximately 16% goblet cell population in the human distal colon. 105
Similar to native goblet cells, T84 cells can be stimulated to secrete mucin by a number of secretagogues, including ATP, the calcium ionophore A23187, diacylglycerol (DAG), the phorbol ester PMA, forskolin, vasoactive intestinal peptide (VIP), γ-aminobutyric acid (GABA), and prostaglandin E1. 67,73,104,106,107 Moreover, inhibition of calcium-activated potassium channels with barium chloride (BaCl2), tetraethylammonium (TEA), and quinine, as well as inhibition of calcium mobilization by BAPTA and of autophagy by 3-methyladenine (3-MA), reduces mucin output significantly. 67,104,106 In addition to shared pathways, electron microscopy analysis of T84 goblet-like cells reveals structural similarities to colonic goblet cells 73,107 and T84 cells can respond to bacterial stimuli to synthesize and secrete MUC2. 67 T84 cells have also been previously used to examine ER stress, 73-76 making this model ideal for our analysis. We speculate that our findings with T84 cells likely have many parallels with native tissue. However, additional studies using human colonic tissue or human colonoids (or organoids) would be beneficial in the future. The intestinal mucus layer is essential to maintain the proper distance between the luminal contents and the host immune system. The importance of this barrier is highlighted by the fact that disruption of the intestinal mucus layer increases inflammation. This has been elegantly demonstrated in several mouse models (Winnie, MUC2−/−, AGR2−/−, glycan-deficiency, etc.). 4,108-112 Moreover, these animal model phenotypes appear to mirror findings in IBD patients. 31,113-116 Ulcerative colitis patients in particular have abnormal goblet cell numbers, altered mucin glycosylation, decreased mucus layer thickness, and reduced mucus integrity. 22,114-119 Loss of both the thickness and integrity of the mucus layer is thought to promote bacterial-epithelial interactions and drive inflammation. 120,121 Chronic inflammation leads to ER stress and activation of NF-kB. 18 The cycle of inflammation and ER stress responses is speculated to worsen IBD. 9,32 Although antioxidants protect cells from damage induced by ROS, long-term use of antioxidants can increase the risk of some forms of cancer. For example, N-acetylcysteine (NAC), another compound which feeds into the glutathione pathway, increases the risk of and accelerates lung cancer progression in mice. 122,123 These findings suggest that long-term administration of ROS-suppressing compounds, such as γ-glutamylcysteine, should be approached with caution. B. dentium has a relative abundance of 3.8% according to the Human Microbiome Project consortium and other studies. 67,124-128 Since B. dentium does not make up a large portion of the microbiome under normal conditions, we predict that oral administration of B. dentium in patients would likely only elevate B. dentium concentrations short term and that B. dentium levels would return to baseline after administration had ceased. Further studies are necessary to identify the ability of B. dentium to colonize the colon long term in adults. Given the link between ER stress, mucus production, and inflammation, many researchers and clinicians have begun looking into reducing ER stress as a potential therapeutic target for IBD. Our work points to the novel role of B. dentium in alleviating ER stress, promoting mucus production, and minimizing inflammation.
B. dentium is already a member of the healthy human gut microbiome and could be employed to promote a healthy gut. Based on these findings, we believe that B. dentium could serve as a next-generation probiotic for intestinal diseases associated with ER stress and disrupted mucus, such as IBD.

Disclosures

JV receives unrestricted research support from BioGaia AB, a Swedish probiotics company. JV serves on the scientific advisory board of Seed, a U.S.-based probiotics/prebiotics company. JV also serves on the scientific advisory board of Biomica, an Israeli informatics enterprise, and on the scientific advisory board of Plexus Worldwide, a U.S.-based nutrition company. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Neurons in the pigeon caudolateral nidopallium differentiate Pavlovian conditioned stimuli but not their associated reward value in a sign-tracking paradigm

ABSTRACT Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects' strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells' firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations, e.g. those requiring value-based choices.

[...] delayed alternation performance 11, response selection 12, and reversal learning 13, while sparing sensorimotor functions 14. The NCL can to some extent be subdivided based on hodological and histochemical evidence 10,15, but it is unknown whether those subdivisions support different types of functions, as has been suggested for the PFC 16. Several single-neuron recording studies have demonstrated that NCL neurons' firing rates are strongly modulated during the presentation of reward-predicting visual stimuli 17-21, during a post-stimulus delay phase preceding reward delivery 22,23, and during the consumption of food and water rewards 17-20,24. Possibly, activity in the delay (i.e. post-choice) phase represents a mixture of different processes, like information about the relevant visual stimulus 25, task-related rules 26, reward expectancy 22, and information about the upcoming behavioral choice 23,27,28. The situation is less explored regarding initial (i.e. pre-choice) stimulus presentation. In principle, modulation during this phase could result from three different sources: visual characteristics of the stimuli (low-level properties such as color or luminance), sensorimotor correlates of stimulus-directed behavior (pigeons emit pecking responses towards reward-predicting visual stimuli), or some learned functional aspects of the stimuli, such as their association with reward. Previous studies have hinted at the possibility that NCL neurons represent the reward value of conditioned visual stimuli 17,19,20, based on the observed reward-related neural modulation during post-choice, pre-reward delay phases as well as during reward consumption itself 20,22,29. Also, a recent study demonstrated that some NCL neurons either significantly increase or decrease firing for different conditioned stimuli signaling different reward amounts 19.
Here, we asked whether NCL neurons signal the 'integrated value' of visual cues, i.e. subjective value integrated across two different dimensions of reward: magnitude and delay to presentation. Moreover, we aimed to factor out sensorimotor contingencies confounded with cue value as a modulator of stimulus-related response modulation 20. To this end, we subjected birds to a sign-tracking paradigm in which distinct visual cues predicted different outcomes, namely rewards of large or small magnitude, available after a short or long delay, or non-reward. Because the animals' pecking rate at the visual stimuli scales monotonically with the desirability of the predicted reward, we used this measure to index subjects' CS valuation 30.

Results

Pecking rate but not pecking force reliably indexes stimulus value. We trained five pigeons on a sign-tracking paradigm in which discrete visual stimuli predicted food rewards of small or large magnitude ("m" and "M", respectively) featuring either short or long delays until delivery ("d" and "D"), or the unavailability of food on that trial (CS-; Fig. 1A). Reward magnitude was operationalized as the duration of food availability after stimulus offset and equaled 1-1.5 s (m) or 5-6 s (M), while delay to reward was either 1-2 s (d) or 5-6 s (D; durations were custom-tailored for each bird to achieve discriminative response behavior). For behavioral analysis, we registered all pecking responses directed towards the stimulus during the 5-s sample phase. Subjects were trained until responses clearly and stably differentiated between all stimuli and maintained the same ordinal ranking across at least four of five consecutive sessions. The birds took a median number of 29 sessions to achieve this criterion (range 23-31) and subsequently received movable multi-electrode array implants into the NCL for electrophysiological recordings. In the vast majority of recording sessions, subjects ranked the stimuli in the same order, namely CS-, mD (small magnitude, long delay), md (small magnitude, short delay), MD (large magnitude, long delay), and Md (large magnitude, short delay). Occasionally, the rank order contained a single inversion. Seven out of forty recording sessions were excluded from analysis, either because of stimulus inversion or because one of the four reward-associated stimuli received fewer than 25 key-pecking responses, precluding analysis of neural responses relative to key-peck events for that stimulus (see below). In the remaining 33 sessions, response rates reliably indicated stimulus ranking (Friedman test, p < 0.001; Fig. 1B, left and middle panels). The degree to which subjects' pecking rates differentiated between stimuli was quantified using the area under the receiver operating characteristic curve (AUROC) 31. The average discriminability value across all sessions and stimulus pairs equaled 0.87 (chance: 0.5, perfect discriminability: 1). The lowest values were obtained for the md-MD stimulus pair (mean: 0.71; Fig. 1B, right panel). Previous work showed that pecking rate indexes subjective value and predicts choice behavior in subsequent tests of stimulus preference 30. Having established that cue value strongly influences the rate of responding, we investigated whether it similarly modulates the intensity (i.e. force) of individual key-pecking responses as well.
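For readers who want to reproduce the discriminability measure, AUROC between two single-trial response-rate distributions can be obtained directly from the Mann-Whitney U statistic (AUROC = U / (n_A x n_B), a standard identity). The sketch below uses simulated pecking rates, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def auroc(rates_a, rates_b):
    """Probability that a random trial from condition A has a higher pecking
    rate than one from B: AUROC = U_A / (n_A * n_B); 0.5 = chance, 1 = perfect."""
    u_a, _ = mannwhitneyu(rates_a, rates_b, alternative="two-sided")
    return u_a / (len(rates_a) * len(rates_b))

# Simulated single-trial pecking rates (Hz); NOT the study's data.
rng = np.random.default_rng(0)
md = rng.normal(0.9, 0.4, 40)   # small magnitude, short delay
MD = rng.normal(1.2, 0.4, 40)   # large magnitude, long delay
print(auroc(MD, md))            # ~0.7, comparable to the md-MD pair above
```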
Figure 1C illustrates the mean force of pecking responses directed at a given stimulus, measured with a mechanoelectric transducer attached to the response key. Visual inspection suggests mean force to be largely similar for responses directed at the four reward-predicting stimuli (left and middle panels). Although there was a significant effect of stimulus across all five stimuli (Friedman test, p < 0.001), discriminability values were considerably lower than for response rate (right panel; average discriminability across all sessions and stimulus pairs 0.62; all averages ≤ 0.75). Importantly, response force did not increase monotonically with stimulus value for three of the four subjects for which force measurements were conducted. Thus, while the frequency of conditioned responding was found to be a useful indicator of cue value, force was not.

Behavioral evidence of stimulus discrimination unfolds within the first half second of stimulus presentation. The above analyses focused on data collapsed across the five seconds of stimulus presentation. In order to visualize the temporal dynamics of conditioned responding, we conducted a time-resolved analysis of key pecking. Figure 1D shows pecking rate as a function of time during the sample phase, separately for each stimulus and averaged over all animals and sessions. Following the peak at time 0, when the animal triggers stimulus presentation, the trajectories of the curves begin to diverge after a few hundred milliseconds. After one second of stimulus presentation, animals no longer responded to the CS- (blue) and exhibited maximum pecking rates to the most highly valued stimulus Md (green). For the other three conditioned stimuli, pecking rates increased from low to moderate values within the remaining 4 s of stimulus presentation, as is frequently found for fixed-interval schedules of reinforcement.

Figure 1 (legend, fragment). Left: Following an intertrial interval, the initialization stimulus was presented for at least 2 s, after which the first registered key peck initiated the trial. One of five stimuli was then presented for 5 s, during which no behavioral response was required. The sample was then extinguished, followed by either 1) a variable delay (short or long) and a variable reward period during which food was accessible via a food hopper for a short or long period of time, or 2) a 2-s time-out punishment period. Right: Stimuli and their associated reward properties. Stimuli signaled small ("m") or large ("M") magnitude of an upcoming reward (duration of food hopper activation), and a short ("d") or long ("D") delay until reward delivery. The most rewarding stimulus Md thus predicted a large reward after a short delay. Rewards were delivered with 50% probability; in case of a reward omission, the delay was increased by the designated feeding time.

Figure 1E provides a close-up of pecking rates in the first second following stimulus onset. It is evident that differential key pecking is present already after 200-300 ms, implying that animals have identified the conditioned stimuli by that time. Accordingly, neural correlates of stimulus valuation are expected to be found as early as 100-200 ms into the sample phase. Note that the drop in pecking rate in the first 200 ms is not the result of a sudden behavioral arrest resulting from stimulus presentation, but simply a reflection of the fact that stimulus presentation was triggered by a key peck (at time 0) and that consecutive pecks are on average separated by about 300 ms.
Therefore, the stimulus-dependent reduction of pecking rate compared to the initialization phase (Fig. 1D) indicates behavioral discrimination.

Neural stimulus discriminability emerges within 200 ms after stimulus onset. We analyzed response patterns of 162 neurons from 33 recording sessions. Figure 2 shows the histological reconstruction of recording tracks, which were all found to be within the borders of the NCL 10,15. To gain a first impression of the extent to which NCL neurons discriminate between conditioned stimuli at different time points during the sample phase, we calculated an effect size index (η²) for each neuron in 200-ms sliding windows (advanced in steps of 50 ms). This index denotes the fraction of the total variance in firing rates that can be attributed to the factor 'stimulus' and thus quantifies the degree to which neural firing rates differentiate between the conditioned stimuli. Figure 3A shows the magnitude of η² as a function of time during the sample phase, individually for all neurons (top and middle panels) and averaged across all 162 neurons (bottom panel). η² increases markedly shortly after stimulus onset, then decreases just as sharply within the first second and stays elevated over baseline levels during the remainder of the sample phase and into the delay preceding the outcome phase. Thus, the neural population differentiates best between the stimuli early in the sample phase (starting around 150-200 ms), leading the first behavioral indication of stimulus discrimination by about 50 ms (Fig. 3B). Figure 4A illustrates a neuron which showed an overall increase of firing rate during stimulus presentation (middle panel); in addition, firing rate differed significantly between stimuli (Kruskal-Wallis, p < 0.001), with maximum firing rates to the most highly valued stimulus (Md) and minimum firing rates to the least valued stimulus (CS-), a response pattern suggestive of integrated value coding. However, it is not straightforward to assess the source of these activity differences from this kind of analysis, since neural activity in the NCL could also be modulated by the animals' behavior; recall that birds direct a substantially higher number of key pecks at high-valued stimuli than at low-valued stimuli.

NCL neurons exhibit highly heterogeneous response profiles during stimulus presentation. In order to disentangle stimulus- and sensorimotor-related activity modulation, we constructed 'peri-peck time histograms' (PPTHs), referencing neural activity to the occurrence of individual key pecks, separately for each conditioned stimulus (the CS- received too few key pecks to allow construction of PPTHs). That way, the frequency of key pecks during presentation of different stimuli is effectively eliminated as a contributing factor to firing rate modulation. For the present example neuron, activity still differed between stimulus conditions after compensating for pecking rate (Kruskal-Wallis, p < 0.001; Fig. 4A, rightmost panel). Moreover, average firing rates were lowest for the least attractive reward-associated stimulus mD and highest for the most attractive stimulus Md, with the rank order of mean firing rates in the time window ±100 ms relative to key pecks similar to the rank order of the stimuli, as quantified by Kendall's rank correlation coefficient (tau = 0.67).
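The sliding-window η² analysis described above amounts to the between-stimulus sum of squares divided by the total sum of squares, computed window by window. A minimal sketch follows; the data layout (per-trial arrays of spike times relative to stimulus onset) is our assumption, not the authors' code.

```python
import numpy as np

def eta_squared(groups):
    """Effect size: fraction of total firing-rate variance explained by the
    factor 'stimulus' (SS_between / SS_total)."""
    all_rates = np.concatenate(groups)
    grand_mean = all_rates.mean()
    ss_total = ((all_rates - grand_mean) ** 2).sum()
    if ss_total == 0:
        return 0.0
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

def sliding_eta2(spike_times, stim_ids, t_end=5.0, win=0.2, step=0.05):
    """eta^2 in 200-ms windows advanced in 50-ms steps across the sample phase.
    spike_times: list of per-trial arrays of spike times (s, re stimulus onset);
    stim_ids: per-trial stimulus labels."""
    stim_ids = np.asarray(stim_ids)
    starts = np.arange(0.0, t_end - win + 1e-9, step)
    eta = np.empty(len(starts))
    for i, w0 in enumerate(starts):
        # Firing rate of each trial within the current window, in Hz.
        rates = np.array([((st >= w0) & (st < w0 + win)).sum() / win
                          for st in spike_times])
        eta[i] = eta_squared([rates[stim_ids == s] for s in np.unique(stim_ids)])
    return starts, eta
```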
Thus, the observed response pattern of this neuron is consistent with the hypothesis that NCL neural activity during stimulus presentation reflects subjective value integrated across two dimensions of reward. In contrast, Fig. 4B shows an example neuron which fired least for the highest-value and most for the lowest-value stimulus, but again the rank order of activity did not align perfectly with the behavioral responses (tau = −0.67). Figure 4C shows an example neuron that responds preferentially to two of the stimuli, namely those that predict large reward magnitudes (MD and Md); again, this preference was evident in both stimulus- and peck-referenced analyses (Kruskal-Wallis, p's < 0.001). This pattern resembles that of neurons responsive to single dimensions of reward which have been described in primate prefrontal cortex 32. The neuron shown in Fig. 4D exhibited an obvious preference for one of the visual stimuli, and this preference was equally strong in both stimulus- and peck-referenced displays (Kruskal-Wallis, p's < 0.001). Finally, the neuron in Fig. 4E fired phasically after onset of the CS- but was virtually silent both before and during stimulus presentation for all other stimuli. Overall, 127/162 units (78%) exhibited significant stimulus-modulated activity during the 5-s sample phase (p < 0.05, Kruskal-Wallis). The vast majority of NCL neurons (109/162, 67%) retained significant stimulus-related activity modulation when neural activity was referenced to key pecks, and the examples shown in Fig. 4 illustrate that some neurons' response patterns are consistent with a value-coding account, even after compensating for sensorimotor contingencies (i.e., neurons exhibiting significant modulations of their firing rate, along with highly positive or negative tau correlation values).

No evidence for value coding at the neural population level. Overall, 29/109 neurons (27%) exhibited perfect tau correlations (+1 or −1) between the animals' preference and firing rate. However, many more neurons displayed significant firing rate modulations during stimulus presentation that were unrelated to integrated cue value (e.g. Fig. 4D,E). Before concluding that the 29 neurons with perfect tau correlations indeed code for value, it is important to relate their frequency to that expected under the null hypothesis that NCL neurons represent other aspects of the visual stimuli. If a subset of NCL neurons indeed represented cue value, this should lead to an increased occurrence of units with high tau correlations (positive or negative) between behavioral and neuronal response rates. Focusing on all neurons whose firing rates differed significantly between stimuli (Kruskal-Wallis, p < 0.05), we statistically compared the empirical distribution of tau values (histogram in Fig. 5A) to that expected by chance (Fig. 5A, black line, obtained from 1,000 simulations in which spike counts were randomly allocated to stimuli) using the chi-square goodness-of-fit test. This account of chance expectancy corresponds to the hypothesis that neurons do discriminate between stimuli (as evidenced by significant response modulation) but are insensitive to their associated reward value. The empirical distribution of tau values (computed from PPTHs constructed using all pecks in the 5-s sample phase) shown in Fig. 5A is unimodal and centered close to zero; its shape bears close similarity to that of a random distribution, and the distributions accordingly do not differ significantly (chi-square test, p = 0.61).
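The permutation-based chance distribution and chi-square comparison described above could be implemented along the following lines. The data are hypothetical (each neuron reduced to four mean peck-referenced rates), and the choice to histogram tau over its seven discrete levels is our simplification, exploiting the fact that with four stimuli and no ties Kendall's tau can only take those values.

```python
import numpy as np
from scipy.stats import kendalltau, chisquare

rng = np.random.default_rng(1)

# With four reward-associated stimuli and no ties, tau takes 7 discrete values.
TAU_LEVELS = np.array([-1, -2/3, -1/3, 0, 1/3, 2/3, 1])

def tau_counts(taus):
    """Histogram tau values over their discrete levels."""
    return np.array([(np.abs(taus - v) < 1e-6).sum() for v in TAU_LEVELS])

def shuffled_tau_null(rates_per_neuron, value_ranks, n_shuffles=1000):
    """Chance distribution: randomly reallocate each neuron's mean
    peck-referenced firing rates to the stimuli (cf. the 1,000 simulations)."""
    taus = []
    for rates in rates_per_neuron:                 # rates: 4 mean rates
        for _ in range(n_shuffles):
            taus.append(kendalltau(rng.permutation(rates), value_ranks)[0])
    return np.array(taus)

# Hypothetical data: 109 neurons; value ranking mD < md < MD < Md -> [1, 2, 3, 4].
ranks = [1, 2, 3, 4]
neurons = [rng.normal(10, 2, 4) for _ in range(109)]
empirical = np.array([kendalltau(r, ranks)[0] for r in neurons])
null = shuffled_tau_null(neurons, ranks, n_shuffles=100)

obs = tau_counts(empirical)
expected = tau_counts(null) / len(null) * len(empirical)
print(chisquare(obs, f_exp=expected))  # a large p is consistent with chance
```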
However, the neural response pattern shown in Fig. 3 (stimulus discriminability rising after 200 ms, peaking around 350 ms, declining until about 1,000 ms and being maintained at a constant level thereafter) prompted us to repeat the analysis separately for the 200-1000 ms and 1000-5000 ms epochs of the stimulus presentation phase. The observed tau values did not differ significantly from chance for either the early or the late stimulus presentation epoch (chi-square test, p = 0.111 and p = 0.23; Fig. 5B,C, respectively), although it should be noted that only 29 neurons met our inclusion criteria for the early response epoch, with 12 of these being significantly modulated. Our analyses so far have focused on single-neuron correlates of integrated value, testing the hypothesis that single neurons' firing rates scale monotonically with value. A different way of representing integrated value would be to have a larger number of neurons responding to high-value than to low-value stimuli. However, the distribution of preferred stimuli across all neurons was flat and did not deviate significantly from a uniform distribution (chi-square test, p = 0.705, based on 99 units with significantly modulated PPTHs during the late stimulus epoch; there were too few neurons to analyze the early epoch).

The above analyses failed to yield evidence that NCL neurons represent cue value integrated over both dimensions of reward that were manipulated in this experiment. However, it is possible that value coding is indeed present in the NCL, but that individual neurons are sensitive only to a single reward dimension, such as magnitude. Such neurons have been reported in the primate frontal lobe 6, and we indeed found neurons whose response patterns appeared consistent with this hypothesis (see example in Fig. 4C). Accordingly, we tested whether average firing rates in PPTHs were modulated by the predicted rewards' magnitude, delay, or both, by means of a two-way analysis of variance (using the full 5-s stimulus epoch). Neurons were classified as 'pure' encoders of a certain dimension when they exhibited a significant main effect for that dimension in the absence of both a significant main effect for the other dimension and a significant interaction. By that criterion, 9/109 (8%) neurons were deemed pure magnitude encoders (including the neuron shown in Fig. 4C), and 16/109 (15%) neurons were deemed pure delay encoders. But again, caution is warranted before relating these numbers to chance expectancy. To estimate how many such neurons should be expected by chance, we ran a simulation in which the allocation of spike count distributions to cue values was shuffled. These simulations showed that, on average, 11-12 pure encoders of each dimension are to be expected, with a 95% range of 6-17. Therefore, this analysis does not provide any evidence for the existence of neurons in the NCL which are sensitive to single reward dimensions.
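The 'pure encoder' criterion described above reduces to a per-neuron two-way ANOVA over the two binary reward dimensions. A sketch using statsmodels (not the authors' original code; the data layout is an assumption):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def classify_pure_encoder(spike_counts, magnitude, delay, alpha=0.05):
    """Return 'magnitude', 'delay', or None for one neuron: a significant
    main effect for one dimension, with no other main effect and no
    interaction, following the criterion described above."""
    df = pd.DataFrame({'spikes': np.asarray(spike_counts, dtype=float),
                       'mag': pd.Categorical(magnitude),   # m / M per trial
                       'dly': pd.Categorical(delay)})      # d / D per trial
    fit = smf.ols('spikes ~ C(mag) * C(dly)', data=df).fit()
    p = anova_lm(fit, typ=2)['PR(>F)']
    sig_mag = p['C(mag)'] < alpha
    sig_dly = p['C(dly)'] < alpha
    sig_int = p['C(mag):C(dly)'] < alpha
    if sig_mag and not sig_dly and not sig_int:
        return 'magnitude'
    if sig_dly and not sig_mag and not sig_int:
        return 'delay'
    return None
```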
NCL neurons cluster according to their preferred visual cue. These results provide no evidence for the hypothesis that NCL neural activity during stimulus presentation reflects cue value in our paradigm. However, the fact remains that the vast majority of neurons did show significant stimulus-related modulation both with and without factoring out sensorimotor contingencies. If not their associated reward value, what aspects of the stimuli might be represented by these neurons? Finding an answer to this question is complicated by the strong heterogeneity of NCL neurons' response patterns. As illustrated by the examples in Fig. 4, some neurons exhibited a graded response to the visual cues, while others clearly preferred one or two stimuli. Accordingly, visual inspection of all neurons' peri-stimulus time histograms (PSTHs) did not reveal obvious clusters that might aid in the interpretation of NCL response patterns. To conduct an unbiased and systematic investigation of whether NCL neurons can be grouped into distinct functional classes, we performed a cluster analysis for the 98/162 neurons with moderate to high stimulus discriminability values (η² > 0.1 in any 100-ms bin during the 5-s sample phase). For each neuron, we computed PSTHs (100-ms non-overlapping bins) across the sample phase for all five stimuli, concatenated the five resulting PSTHs, and performed hierarchical cluster analysis with Pearson correlation as the similarity measure. Plotting neurons in principal component space (Fig. 6A), however, did not reveal clearly discernible clusters. Hierarchical clustering confirmed this impression, yielding a rather flat dendrogram with relatively small distances between clusters (Fig. 6B). Nonetheless, examination of solutions with cluster numbers ranging from 5 to 16 consistently resulted in the formation of clusters with a clear preference for one of the sample stimuli (Fig. 6C shows average z-transformed PSTHs for a 7-cluster solution). Importantly, a similar result was obtained when using concatenated PPTHs (4 consecutive 50-ms bins within ±100 ms relative to key pecks) rather than PSTHs (Fig. 6D-F), and the clusters were better separated, as visible in the dendrogram in Fig. 6E. Together, these results suggest that the main variable by which different neurons' response patterns can be separated is stimulus preference, and this becomes clearer when sensorimotor contingencies are factored out by using PPTHs (compare Fig. 6B-E).

Discussion

We set out to test the hypothesis that stimulus-related modulation of NCL neural activity represents cue value. We subjected pigeons to a Pavlovian sign-tracking paradigm in which different conditioned stimuli predicted different rewards. We then used the pecking rates of the animals during stimulus presentation as a behavioral indicator of the subjects' differential valuation of the conditioned stimuli. We observed stimulus-related activity modulation in the majority of NCL neurons both with and without compensating for sensorimotor contingencies. While some neurons' firing rates correlated with stimulus valuation, these neurons occurred about as often as expected by chance. Moreover, the numbers of neurons preferring either highly or lowly valued stimuli were roughly equal. Therefore, NCL neural responses are unlikely to reflect integrated cue value, at least under our experimental conditions. However, the strong stimulus-related firing rate modulation we observed, as well as the finding that many neurons preferentially fired for one of the visual cues, matches the recent description of neurons in corvid NCL that are involved in the representation and maintenance of visual stimulus information for cue-guided behavior. We will first briefly discuss the behavioral findings before moving on to an interpretation of the neurophysiological results.

Two behavioral parameters, the frequency and force of pecking responses, were analyzed as possible indices of cue value. Previous work has demonstrated that response frequency scales with reward expectancy and predicts stimulus preference in subsequent forced-choice tests 30.
We are unaware of previous studies investigating whether the force of key pecking is systematically related to stimulus valuation; however, pecking force has been shown to differ between first- and second-order conditioned stimuli, between stimuli predicting food and water reward, and between different degrees of food and water deprivation 33,34. Although we did detect significant differences in response force between differently valued stimuli, these differences were minor in comparison to those obtained with response frequency, especially for the four stimuli associated with reward. Moreover, response force did not increase monotonically with cue value for three of four subjects. It has been suggested that stimulus-directed pecking responses are in fact 'substituted' pecks for food consumption 35, as indicated by several stereotypical features such as a closing of the eyes immediately before the forward thrust of the head 36; this reported stereotypy is consistent with the small variability in pecking force which we observed. Our finding that response frequency, but not response force, strongly covaries with cue value shows that the former but not necessarily the latter factor needs to be taken into account when trying to link neural activity and cue value. In freely moving pigeons, many NCL neurons exhibit motor-related firing rate modulation 20,21. Analyzing NCL firing rates relative to key pecks directed at the conditioned stimuli effectively factors out response frequency as a contributor to firing rate modulations, thus providing the opportunity to correlate response frequency (as an indicator of cue value) with neural firing rates largely untainted by the sensorimotor contingencies of differential key pecking.

Several previous studies have reported stimulus-related neural response modulation in the NCL. In a recent study from our lab 20, we recorded NCL neurons from freely moving pigeons in a go/no-go task in which several visual stimuli varying along a single dimension (spatial frequency) were presented, but only a single stimulus was associated with reward. However, because the stimuli were arranged to be perceptually highly similar, animals responded to several stimuli, and response rate varied as a function of perceptual distance to the go-stimulus. Many neurons exhibited stimulus-related response modulation, and in a substantial number of cases pecking frequency and neural activity were (mostly negatively) correlated. We did not systematically manipulate reward value in that study, and due to relatively low numbers of key pecks we were unable to conduct analyses of firing rates referenced to key pecks as in Figs 4-6 of the present report, thus leaving open the question whether the stimulus-related response modulation of these neurons reflected reward value. A recent study by Koenen and coworkers 19 asked whether NCL neurons are modulated by reward amount. In a 'no-choice condition', pigeons were confronted with one of three differently colored reward-predicting stimuli on each trial. These stimuli were associated with either no, a small, or a large amount of food, and animals were trained to simply peck at each of these stimuli to obtain the associated reward. In the 'choice condition', animals instead had to choose between two simultaneously presented visual stimuli signaling different reward amounts (stimuli and associated reward amounts were identical in both conditions).
Based on results from monkey premotor cortex obtained in a similar paradigm 37, the authors expected to find neural signatures of reward amount in the choice but not the no-choice condition. Contrary to expectation, reward-modulated activity was observed in both conditions. At first glance, this finding seems to be at odds with our results. A possible explanation of this discrepancy could lie in the differing behavioral procedures. Pigeons in that study were operantly conditioned, while we employed sign-tracking, an instance of Pavlovian conditioning. Although our pigeons quickly learned that the different cues were associated with different outcomes and indicated stimulus discriminability by their behavior, they were never required to decide between differently valued stimuli. It is possible that value coding in the NCL is restricted to situations in which value-based choices have to be made. This does not by itself explain why Koenen et al. found reward modulation in both the no-choice and the choice condition. Possibly, however, learning to make value-based choices between stimuli alters the neural network such that NCL neurons are engaged in value-based comparisons; in consequence, the network might process the stimuli differently even when they are presented in a situation not requiring an active decision. If true, our failure to demonstrate value coding in the NCL might be a result of using a simple Pavlovian conditioning paradigm which does not require animals to compute the value of different choice options to guide their forthcoming actions (even though the consistent behavioral differentiation of stimuli clearly implies that a value comparison must occur somewhere). On the other hand, neural representations of value have been found in associative brain areas also in no-choice tasks, for example during trace conditioning in orbitofrontal cortex 38, a cued-saccade task in the lateral intraparietal area 39, and a simple go-task in dorsolateral prefrontal cortex 40, with the caveat that in the latter two studies some kind of active response of the animals was required. Future NCL recording studies using paradigms requiring value-based decisions instructed by visual cues may resolve this question.

What could be the origin of the stimulus-related modulation of NCL neural activity that we observed here? Cue-associated reward value seems to be a rather unlikely candidate under the present experimental conditions. Given that the only obvious reason for differential responding was stimulus identity, and that neurons clustered according to their stimulus preference but not any other obvious response characteristic (Fig. 6), basic sensory properties of the visual stimuli could be responsible. As outlined above, decades of research have supported the functional equivalence of NCL and PFC (see ref. 41 for review). PFC neurons respond to exteroceptive stimuli and differentiate the physical characteristics of these stimuli to some extent (e.g. dorsolateral PFC 40 and OFC 42). Interestingly, during learning of a visual discrimination task the number of cue-responsive PFC neurons increases with task acquisition; after learning, the vast majority of PFC neurons represents not only stimulus characteristics, but also their behavioral significance, such as their reward value or coupling to specific actions 43.
Just like PFC, the NCL is best characterized as a multimodal (associative) brain structure in which inputs from various higher sensory areas and those from memory- and motivation-related areas converge, and whose outputs are routed to premotor and motor structures such as the arcopallium and the basal ganglia 10,44. It is unlikely that cue-evoked NCL activity forms an essential part of basic visual processing. First, visual discrimination capacity is not impaired by either permanent lesion or transient inactivation of NCL 14,24,45. Second, in working memory tasks, NCL neurons are only weakly active during the actual stimulus presentation but increase their activity during the delay phase after stimulus offset, in which the cue has to be kept in working memory 27,28. Third, NCL neurons encode upcoming behavioral choices rather than current stimulus input during perceptual decision making 21. Fourth, a recent electrophysiological study has demonstrated that NCL neurons during cue presentation extract the numerosity of the displayed items while disregarding their shape or geometrical arrangement 46. Instead, we propose that the stimulus-related modulation we observed serves a 'permissive' or 'informative' role, in the sense that, to use visual stimuli to guide behavior, the NCL must receive visual information about the external world to link this information to actions and action outcomes. Miller and Cohen 47 likened the PFC to a switch operator in a system of railroad tracks. In this analogy, trains (neural activity, e.g. carrying sensory information) must be routed to their proper destinations (e.g. a behavioral response). PFC steps in when multiple trains are to be coordinated and re-routed to different destinations. In this view, the PFC is constantly fed sensory- and motor-related information to monitor the environment in relation to ongoing behavior, but only intervenes when currently executed behaviors have to be interrupted and other actions should be pursued. Taking the switch operator analogy to the NCL, stimulus-related firing could simply be a signature of visual information received from higher visual areas such as the entopallium 48. Since our paradigm, for the most part, does not require pigeons to handle ambiguous situations, NCL may not become engaged, in the sense that no modulation of processing in upstream sensory or downstream motor areas or in their mutual connections is required (Fig. 7). However, some simple change to our sign-tracking paradigm, such as requiring the animals to choose between the differently valued visual cues 19, may be enough to engage NCL circuitry to perform value-based comparisons.

If not in NCL, where might value signals be found in the avian brain during sign-tracking? In the mammalian brain, a wide range of structures have been implicated in the coding of reward, perhaps most notably the basal ganglia and dopaminergic brain stem nuclei 49. Substantially less is known regarding reward processing in the avian brain, but several studies have demonstrated reward coding at the level of the basal ganglia in domestic chicks; for example, Izawa and colleagues 50 found that neurons in the chick ventral striatum modulated their firing rates as a function of temporal reward proximity as well as reward magnitude in an operant color-discrimination task (see ref. 51 for similar results).
Another recent study employing Bengalese finches showed that neurons in Area X, a striatal nucleus of the avian song system, are modulated by food reward 52. This study is of particular interest because its operant task bears some similarity to our sign-tracking paradigm (finches had to peck at a visual cue for reward). Moreover, neural activity in the pigeon entopallium 48 and visual wulst 53 was found to be related to reward-predicting properties of visual cues in operant discrimination tasks. Together, these studies suggest that reward processing may be rather widespread in the avian brain, as is the case for the primate brain 54. To conclude, the response properties of NCL neurons are determined both by stimulus properties and by behavioral responses in a highly context-dependent manner. A fruitful direction for future research may involve coupling stimuli with distinct but well-controlled visual properties to not a single but several well-defined actions with specific outcomes, along with contextual changes requiring behavioral flexibility 55-57; such novel paradigms are needed to break down the variability in NCL neural responses into its constituent elements and refine our understanding of the mechanisms by which this brain structure exerts executive control 58.

Figure 7. The switch operator analogy 47 applied to the NCL. S1-S3 denote environmental stimuli, R1-R3 behavioral responses. The NCL receives sensory information from higher sensory areas (inbound arrows from S1-S3) and in turn modulates sensory processing (outbound arrows to S1-S3; 'attention'). Similarly, NCL projects to downstream motor centers and in turn receives afferent information from these centers (inbound and outbound arrows from and to NCL from R1-R3). Each stimulus has a strong connection to one of the responses (bold horizontal arrows). In simple situations, NCL does not need to interfere between ongoing stimulus-response chains (arrows from S1-S3 to R1-R3). In case of conflict, NCL biases S-R connections. See Discussion for further details.

Methods

Animals. Five homing pigeons (Columba livia forma domestica) served as subjects. Birds were housed individually in a colony room kept on a 12/12 h light/dark cycle (lights on at 8 am). Food access was restricted to experimental sessions and weekends, with the birds being constantly kept above 85% of their free-feeding weight. Water was available ad libitum. Subjects were kept and treated according to the German guidelines for the care and use of animals. The experiment was approved by a national ethics committee of the State of North Rhine-Westphalia, Germany.

Apparatus. Testing was conducted in a custom-built operant chamber measuring 33 by 35 by 36 cm (width by depth by height) and illuminated by a light bulb set into the side wall. The chamber was surrounded by a sound-attenuating shell. White noise was played at all times to mask extraneous sounds. Conditioned responses (key pecks) onto a translucent pecking key set into an opaque back wall were registered by electronic switches. The pecking key measured 5 cm by 5 cm and was located 25 cm above the floor. The force of individual key pecks was registered using a custom-built piezoelectric sensor attached to the pecking key. A flat-screen monitor mounted behind the wall was used to display visual stimuli (cues). Valid responses (i.e. those which activated the switches) were acknowledged by a feedback click.
Food reinforcement following stimulus presentation was provided by a food hopper located below the response key, which controlled access to a grain reservoir. During feeding, a feeder light just above the reservoir was activated.

Behavioral Paradigm. Subjects were trained on a Pavlovian sign-tracking paradigm in which distinct visual stimuli predicted rewards of differing magnitude, delivered after a variable delay. Figure 1A illustrates a single trial and an example stimulus set. Following an intertrial interval of 8 s, an orange initialization stimulus was displayed on the response key. After the animal pecked once at the key, a fixed interval (FI) schedule of 2 s commenced, so that the first pecking response after 2 elapsed seconds initiated the trial. Failure to respond within 2 s after the FI had elapsed aborted the trial, which was marked as an initialization omission. Correct initialization was followed by presentation of one of five distinct stimuli on the response key for a fixed time of 5 s, which was succeeded by the stimulus-specific outcome, irrespective of the subject's behavior. The CS- was followed by 2 s of mild punishment (playing an 80 Hz sawtooth wave sound and turning off the house lights), whereas the other four stimuli predicted reward at a unique combination of magnitude (i.e. time of access to the grain reservoir) and time until delivery. Magnitude could be small (1-1.5 s access to food, denoted as "m") or large (5-6 s access to food, "M"); similarly, delay to reward could be short (1 s of delay, denoted as "d") or long (5-6 s of delay, "D"). Reward parameters were adjusted for individual subjects within the described ranges to ensure a stable differentiation of stimuli for all subjects. A full stimulus set thus encompassed five stimuli: CS-, md, Md, mD, and MD. The images representing these conditions were selected from a larger pool, so that no two animals associated the same image with a given condition. A sample stimulus set and an illustration of the associated reward properties are shown in the right half of Fig. 1A. After the 5-s presentation time, the cue was extinguished and the feeding light was turned on for the stimulus-specific delay to reward. Reward was then delivered with a probability of 50% for all stimuli. In case of reward, the feeding light and feeder were activated for the time specified by the magnitude; otherwise, the feeding light was lit for the same time. Each session contained 200 trials (40 trials per stimulus), and birds were tested five days a week.

Behavior Analysis. During stimulus presentation, all subjects responded copiously, as previously described for sign-tracking procedures 1. We recorded pecking responses to all stimuli during the presentation time of 5 s, interpreting response rate as an indicator of subjective value 30. To ensure reliable differentiation of stimuli, we calculated stimulus discriminability as the area under the receiver-operating characteristic curve (AUROC) for the distributions of response counts to all possible stimulus pairs, and excluded all behavioral sessions in which the pairwise discriminability for any value-predicting stimuli was below 0.5. Sessions in which value-predicting stimuli did not receive at least 25 pecking responses were also excluded, because a low number of key pecks precluded reliable estimation of peri-peck time histograms (see below).
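The pairwise AUROC criterion can be computed directly from per-trial peck counts. A minimal sketch (the data layout and stimulus labels are assumptions; the original analysis pipeline was custom MATLAB code):

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score

def pairwise_discriminability(peck_counts):
    """AUROC for every pair of stimuli from per-trial peck counts.

    peck_counts : dict stimulus -> 1-D array of peck counts (one per trial)
    Returns dict (stim_a, stim_b) -> AUROC.
    """
    auroc = {}
    for a, b in combinations(peck_counts, 2):
        # label trials of stimulus a as 1, of stimulus b as 0, and ask how
        # well the raw peck count separates the two distributions
        y = np.r_[np.ones(len(peck_counts[a])), np.zeros(len(peck_counts[b]))]
        x = np.r_[peck_counts[a], peck_counts[b]]
        auroc[(a, b)] = roc_auc_score(y, x)
    return auroc
```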
Aside from response frequency, we also measured the force of pecking responses to assess whether stimuli of higher value might elicit more forceful key pecks. Pecking force was recorded via a piezoelectric transducer attached to the response key. We quantified the force of individual key pecks by rectifying and summating the vibration-induced voltage trace in a 100-ms window following registered key pecks. The measured signals were not calibrated against actual force measurements; results are thus presented in arbitrary units. We calculated stimulus discriminability by the force of responses using the AUROC, identical to what was done for response frequency.

Surgical Procedures. After stimulus-specific pecking rates had stabilized, pigeons were implanted unilaterally in the NCL (AP +6.5 to 7.0 mm, ML ±7.5 mm) with custom-built microdrives (modified from 59), allowing for the linear advancement of fifteen 25 μm formvar-coated nichrome wires and one 75 μm nichrome wire for differential referencing. Animals were anesthetized with isoflurane, with additional regular administration of the painkiller butorphanol (0.1 ml every 2 hours). Following fixation in a stereotactic apparatus, the scalp was cut and retracted to expose the skull. Eight to ten miniature stainless steel screws were driven into the skull for subsequent implant fixation. Craniotomies were performed at the indicated positions, the dura was removed, and the electrodes were lowered slowly to their final positions. A layer of vaseline was applied to the brain surface and dental acrylic was used to fix the electrodes to the skull. Animals were treated with the painkiller carprofen (concentration: 10 mg/ml, dosage: 1 mg per 100 g of body weight) for three days following surgery and allowed to recover for two weeks before recommencing training.

Electrophysiological Recordings and Data Analysis. Electrodes were advanced by at least half a revolution of the drive screw (125 μm) one hour before each recording session. Neural activity was recorded using the AlphaLab Stimulation and Recording System (Alpha Omega, Nazareth Illit, Israel). Signals were amplified 400-fold, sampled at 22,321 Hz, and stored in the AlphaLab file format. Offline analysis was performed in Spike2 Version 7 (Cambridge Electronic Design, Cambridge, UK) and custom-written MATLAB code. Signals were digitally band-pass filtered from 500 to 5,000 Hz; putative spike events were extracted by amplitude thresholds and sorted using principal component analysis and correlation clustering to yield single-unit data. There was no preselection of neurons for any particular properties. Since subject behavior was heterogeneous across stimuli (recall that a stable differentiation in response rate was a prerequisite for further analysis), we could not compare neuronal activity across the entire sample phase without confounding the subjects' valuation with the motor output of the animal. Therefore, we controlled for differential motor output (rate of key pecking) by focusing on neural responses in the temporal vicinity of key pecks delivered onto a given stimulus. We analyzed neuronal activity within ±100 ms of registered pecking responses to construct peri-peck time histograms (PPTHs); for visualization purposes, these were convolved with a Gaussian kernel with a standard deviation of 25 ms. Importantly, all analyses were conducted on raw spike counts. To avoid double inclusion of individual spikes, pecks occurring within 100 ms after the previous peck were eliminated.
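A compact sketch of the PPTH construction and the peck de-duplication rule just described (the original analyses used Spike2 and MATLAB; this Python version and its names are illustrative). With the defaults below, the four 50-ms bins within ±100 ms match the binning used in the clustering analysis above:

```python
import numpy as np

def ppth(spike_times, peck_times, window=0.1, bin_width=0.05):
    """Peri-peck time histogram: raw spike counts around each key peck.

    spike_times, peck_times : sorted 1-D arrays in seconds on a common clock.
    Returns bin edges and the per-peck average histogram.
    """
    # drop pecks that follow the previous one within 100 ms, so that the
    # same spikes are not counted twice
    keep = np.r_[True, np.diff(peck_times) > 0.1]
    pecks = peck_times[keep]
    edges = np.arange(-window, window + 1e-9, bin_width)
    counts = np.zeros(len(edges) - 1)
    for p in pecks:
        h, _ = np.histogram(spike_times - p, bins=edges)
        counts += h
    return edges, counts / len(pecks)
```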
We used the non-parametric Kruskal-Wallis procedure to test whether mean spike counts were significantly different (p < 0.05) across stimuli (identifying "stimulus-modulated" neurons). Furthermore, we correlated the pecking responses emitted onto a given reward-predicting stimulus with the degree of neuronal modulation using Kendall's rank correlation coefficient tau. For statistical evaluation on a population level, we compared the distribution of tau values against a shuffled distribution using the chi-square goodness-of-fit test. Shuffled distributions were generated by randomly allocating spike count distributions to distinct stimuli, averaging over 1,000 iterations. All analyses were performed in Matlab Version R2012a (The Mathworks, Natick, MA).

Histology. Upon completion of the experiment, birds were deeply anesthetized with equithesin (4.5-5.5 ml/kg body weight) and transcardially perfused with 0.9% saline at 40 °C, followed by 4% formaldehyde. Prior to anesthesia, 0.1 ml heparin was injected intramuscularly to prevent blood coagulation. Brains were embedded in gelatin before sectioning at 40 μm and subsequent staining with cresyl violet. Electrode positions were reconstructed by light-microscopic identification of the deepest and/or widest electrode track in reference to the stereotaxic atlas of the pigeon brain 60.
Genomic variation in tomato, from wild ancestors to contemporary breeding accessions

Domestication modifies the genomic variation of species. Quantifying this variation provides insights into the domestication process, facilitates the management of resources used by breeders and germplasm centers, and enables the design of experiments to associate traits with genes. We described and analyzed the genetic diversity of 1,008 tomato accessions including Solanum lycopersicum var. lycopersicum (SLL), S. lycopersicum var. cerasiforme (SLC), and S. pimpinellifolium (SP) that were genotyped using 7,720 SNPs. Additionally, we explored the allelic frequency of six loci affecting fruit weight and shape to infer patterns of selection. Our results revealed a pattern of variation that strongly supported a two-step domestication process, occasional hybridization in the wild, and differentiation through human selection. These interpretations were consistent with the observed allele frequencies for the six loci affecting fruit weight and shape. Fruit weight was strongly selected in SLC in the Andean region of Ecuador and Northern Peru prior to the domestication of tomato in Mesoamerica. Alleles affecting fruit shape were differentially selected among SLL genetic subgroups. Our results also clarified the biological status of SLC. True SLC was phylogenetically positioned between SP and SLL and its fruit morphology was diverse. SLC and "cherry tomato" are not synonymous terms. The morphologically-based term "cherry tomato" included some SLC, contemporary varieties, as well as many admixtures between SP and SLL. Contemporary SLL showed a moderate increase in nucleotide diversity when compared with vintage groups. This study presents a broad and detailed representation of the genomic variation in tomato. Tomato domestication seems to have followed a two-step process: a first domestication in South America and a second step in Mesoamerica. The distribution of fruit weight and shape alleles supports the view that domestication of SLC occurred in the Andean region. Our results also clarify the biological status of SLC as a true phylogenetic group within tomato. We detected Ecuadorian and Peruvian accessions that may represent a pool of unexplored variation that could be of interest for crop improvement.

Background

The domestication process of crop plants led to dramatic phenotypic changes in many traits that result from changes in the genetic makeup of the wild species ancestors [1,2]. The analyses of genomic variation and the structure of genetic diversity of cultivated crops and their wild relatives provide insights into the history of domestication, adaptation to local environments, and breeding [3,4]. The resulting analyses offer valuable information for germplasm management and the exploitation of natural variation to improve crops. Cultivated tomato (Solanum lycopersicum L.) (SL) is a member of the family Solanaceae, genus Solanum L., section Lycopersicon [5]. Its wild relatives are native to western South America, including the Galapagos Islands. S. pimpinellifolium L. (SP) is thought to be the closest wild ancestor of cultivated tomato [5-7]. SP accessions are found in Coastal Peru and Ecuador and are divided into three main genetic groups corresponding to the environmental differences found in the coastal regions of Northern Ecuador, the montane region of Southern Ecuador and Northern Peru, and the coastal region of Peru [8,9]. S. lycopersicum is divided into two botanical varieties: S. l. var.
cerasiforme (Dunal) Spooner, G.J. Anderson & R.K. Jansen (SLC) and S. l. var. lycopersicum (SLL). SLC is native to the Andean region encompassing Ecuador and Peru, but it is also found in subtropical areas all over the world [10]. SLC grows either as a true wild species, in home gardens, along roads, sympatrically with tomato landraces, or as a cultivated crop [9]. SLC thrives in the humid environments of Ecuador and Peru at the eastern edge of the Amazon basin, whereas SP occupies the drier Peruvian coasts and valleys and the wetter Ecuadorian coast [9,11,12]. Although there is no reproductive barrier between SP and SLC [13], the Andes mountains impose strong physical and ecological barriers to cross reproduction between these species.

Many details of tomato domestication remain debated, especially regarding the role of SLC in this process. The South American SLC native to the Ecuadorian and Peruvian Andes has been proposed to be an evolutionary intermediate between SP and cultivated SLL [6,9,14] or, alternatively, an admixture resulting from extensive hybridization between SP and SLL [15,16]. The location of tomato domestication also remains uncertain. Both Mesoamerica [14] and Ecuador and Northern Peru, near the center of origin of SP [17], have been proposed as the center of domestication. If the former were true, SLC would have had to migrate north to Mesoamerica as a wild or weedy species, where it would have been domesticated into SLL. Instead, a two-step domestication process has been proposed for tomato [9]. The first step would have consisted of a selection from SP or primitive SLC by early farmers, resulting in the Ecuadorian and Northern Peruvian SLC. The second step likely occurred in Mesoamerica and consisted of further selection from these pre-domesticated SLC after their migration from Ecuador and Peru. This second step completed the domestication process of tomato. Genetic data confirmed that European SLL accessions originated from Mesoamerica and constitute the genetic base of the SLL vintage varieties [9]. It has also been proposed that a genetic bottleneck was associated with the migration of SLL from Mesoamerica to Europe [18-20]. Blanca et al. [9] proposed that the main bottleneck happened during the migration from Peru and Ecuador.

Extensive breeding efforts have modified tomato over the last 100 years. Breeding goals were focused on improving SLL for disease resistance, adaptation to diverse production areas, yield and uniformity. These efforts resulted in the introduction of many introgressions from SP and more distant tomato relatives [21], leading to a broadening of the genetic diversity of SLL [21-23]. Another consequence of these breeding programs was the selection for specific traits that are characteristic of the fresh and processing markets, which has led to further diversification and genetic differentiation among market classes. The traits most likely to have been selected during the domestication of tomato were fruit weight and, to a lesser extent, shape. In recent years, several genes affecting these traits have been identified [24-29]. As the underlying polymorphism causing the change in allele function for all these genes is known, the presence of the derived and ancestral alleles is easily assayed. For example, in vintage SLL the majority of the shape diversity is explained by the derived alleles of the FAS, SUN, OVATE and LC genes [30].
What is not well understood is when and where these alleles arose and how they spread through the germplasm. Quantifying the allele frequencies of these loci among the SP and SLC populations will help to elucidate the process of selection that is at the foundation of tomato domestication. The aim of this study was to better delineate the evolutionary history of tomato, including its domestication. By using a dataset with over 7,000 SNPs and 1,008 accessions of SP, SLC and SLL, we aim to compare and contrast the genome-wide molecular diversity of populations spanning the entire red-fruited clade. Additionally, the allele frequencies of six fruit weight and shape genes were measured in order to elucidate the domestication process.

Plant material and passport data

We analyzed 1,008 tomato accessions from the species representing the red-fruited clade of tomato (Additional file 1: Table S1). Of these, 912 corresponded to accessions genotyped in studies conducted at COMAV, Spain [9], through the Solanaceae Coordinated Agricultural Project (SolCAP) in the USA [31] and at INRA, France [32]. These data sets were combined with an additional set of 96 accessions originating from vintage and processing germplasm genotyped in Ohio (62) and from the COMAV collection (34). Altogether, these 1,008 accessions represent 952 uniquely named accessions. Several accessions were independently genotyped in different experiments. For example, Moneymaker was represented several times, and these duplicates were used for quality control of the genotyping results between the laboratories. The numbers of uniquely named accessions per species, according to their passport data, included S. chmielewskii (SChm; 1 accession), crosses between S. lycopersicum and S. pimpinellifolium (SL x SP; 10 accessions), and one hybrid between S. l. lycopersicum and S. pennellii. The hybrids were included to determine the ability to detect heterozygous SNPs with the genotyping platform. A unified passport classification, which includes species name, collection site and use, was compiled for all accessions based on the information retrieved from the different sources and donors (Additional file 1: Table S1). For SP and SLC, the passport classification mainly reflected the collection site. An additional category for SLC was introduced as "SLC commercial cherry" to group the SLC accessions with a commercial purpose. For SLL, the vintage, landrace and heirloom categories were grouped together and classified collectively as vintage, consistent with the nomenclature of Williams and St. Clair [19]. Additionally, a category was created in SLL to include early breeding lines such as Moneymaker and Ailsa Craig. The SLL accessions derived from currently active crop improvement programs (i.e. contemporary to the time of writing) were categorized based on use (fresh market or processing) and location of breeding. Overall, sufficient information was available for 84% of the accessions to classify them beyond the species level. In cases where this was not possible, the passport classification only reflected the species (i.e., SP, SLC or SLL). For 48.3% of the accessions, geographic location information was available in the form of Global Positioning System (GPS) coordinates or from the location of the collection site (Additional file 1: Table S1).
Genotyping and data set merging

All samples were genotyped using the Tomato Infinium Array (Illumina Inc., San Diego, CA, USA) developed by the United States Department of Agriculture (USDA) funded SolCAP project (http://solcap.msu.edu/). The SolCAP SNP discovery work-flow was described previously [33], as were the details of the array [23]. The genotyping array contained probes for 8,784 biallelic SNPs. These SNPs represented a highly filtered and selected set, based on transcriptome sequences for SLL, SLC, and SP, optimized for polymorphism detection and distributed throughout the genome. Of these, 7,720 SNPs (88%) passed manufacturing quality control [23]. All SNPs on the array have been incorporated into the Solanaceae Genome Network database (http://solgenomics.net/), the SNP annotation file is available (http://solcap.msu.edu/tomato_genotype_data.shtml), and sequences are available through the Sequence Read Archive (SRA) at the National Center for Biotechnology Information (study summary SRP007969; accession numbers SRX111556, SRX111557, SRX111558, SRX111845, SRX111848, SRX111849, SRX111850, SRX111853, SRX111857, SRX111858, SRX111859, SRX111862, SRX111861).

Genomic DNA was isolated from fresh young leaf tissue. DNA concentrations were quantified using the PicoGreen assay (Life Technologies Corp., Grand Island, NY, USA) and diluted to 50 ng/μl in TE buffer (10 mM Tris-HCl pH 8.0, 1 mM EDTA). Genotyping was performed using 250 ng of DNA per accession following the manufacturer's recommendations. The intensity data were analyzed in GenomeStudio version 1.7.4 (Illumina Inc., San Diego, CA, USA). The automated cluster algorithm generated by the SolCAP project was used to obtain initial SNP calls. Visual inspection was used to assess the default clustering of each SNP, and calls were modified when the default clustering of a SNP was not clearly defined. There are three methods for SNP calling for the Illumina Infinium array: relative to the reference (also known as customer) strand, the design (also known as Illumina) strand, or the TOP strand (a designation based on the polymorphism itself and its flanking sequence). To merge data sets from three different laboratories that had used different SNP calling methods, we developed a Python script to facilitate detection, reorientation and merging of the data such that all SNPs are called relative to the design strand (the script is available upon request to J. Blanca).

Selection of SNPs for downstream analyses

The accessions were genotyped with 7,720 SNPs (Additional file 2: Table S2) that passed the manufacturing quality control; these constituted the raw data set. Of those, we removed 240 markers (3.1%) that had more than 10% missing data and 1,137 (14.7%) that had a major allele frequency above 0.95. For all analyses except the rarefaction and linkage disequilibrium (LD) analyses, SNPs that mapped closer than 0.1 cM to one another were removed as well, yielding a final dataset of 2,313 markers uniformly distributed across the genome. This filtering was done in order to avoid an overestimation of polymorphism and genetic distances among populations due to genomic introgressions from wild relatives. For this purpose a minimum genetic distance of 0.1 cM was chosen as a trade-off between the number of markers left for the analysis and the minimization of LD. Genetic distances were based on the genetic maps of Sim et al. [23].
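The three filtering steps can be expressed compactly. The sketch below assumes a dosage-coded genotype table and a genetic-map table; it illustrates the logic rather than reproducing the authors' scripts:

```python
import numpy as np
import pandas as pd

def select_snps(geno, gmap, max_missing=0.10, max_major=0.95, min_cm=0.1):
    """Apply the three SNP-filtering steps described above.

    geno : DataFrame (accessions x SNPs), allele dosages 0/1/2, NaN = missing
    gmap : DataFrame indexed by SNP name with columns 'chrom' and 'cM'
    """
    # 1) drop SNPs with more than 10% missing data
    geno = geno.loc[:, geno.isna().mean() <= max_missing]
    # 2) drop nearly monomorphic SNPs (major allele frequency > 0.95)
    p = geno.mean() / 2.0                        # frequency of one allele
    geno = geno.loc[:, np.maximum(p, 1.0 - p) <= max_major]
    # 3) keep at most one SNP per 0.1 cM to limit linkage between markers
    chosen = []
    ordered = gmap.loc[geno.columns].sort_values(['chrom', 'cM'])
    for _, grp in ordered.groupby('chrom'):
        last = -np.inf
        for snp, cm in grp['cM'].items():
            if cm - last >= min_cm:
                chosen.append(snp)
                last = cm
    return geno[chosen]
```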
Genetic classification and sample filtering

Principal Component Analyses (PCA) were used to explore the patterns of genomic variation in the entire collection without considering the a priori classification based on passport data (i.e., species, location and use). A three-level classification scheme, based on a series of hierarchical PCAs, was used to define genetic groups within species and genetic subgroups within genetic groups. PCAs were performed with the smartPCA application included in the Eigensoft 3.0 package [34,35]. This genetic classification was used in the subsequent analyses unless mentioned otherwise. Pairwise genetic distances were computed among accessions within each group at each level of the hierarchical classification. Kosman and Leonard's distance method [36] was used, and a violin plot was produced for each hierarchy level using the R package 'vioplot' [37]. When an accession was genotyped more than once and the genotypes were inconsistent (e.g., the samples were classified into different subgroups in the PCA), all data for the accession were removed from the analysis (see Additional file 1: Table S1), unless it was clear, based on the passport information, which genotype was correct (e.g., two entries from the same SLC accession collected in Peru, one grouping with other Peruvian accessions and another grouping with the mixture group). In total, 8 genotypes out of the 1,008 were removed due to inconsistent data. We assume that these rare inconsistencies were related to uncontrolled cross pollinations or seed mixing during regeneration. Genetic distances among samples of the same uniquely named accession were evaluated (see above) to check the reproducibility between genotyping datasets coming from different laboratories. For the genetic analyses, unless stated to the contrary, only one randomly chosen genotype representative of each uniquely named accession was used.

Diversity and genetic differentiation

For polymorphic loci with a major allele frequency lower than 0.95 (P95), the expected (He) and observed (Ho) heterozygosity were calculated using custom scripts for each hierarchy of the genetic classification. Differentiation among genetic subgroups was explored by calculating the differentiation index Dest [38] using custom scripts and Fst using Arlequin v. 3.5.1.3 [39]. Only groups with at least 5 individuals were considered for genetic diversity estimates, and mixture groups (SP mixture, SLC mixture and mixture) were not included in these analyses. Statistical significance of Dest and Fst was assessed after 1,000 permutations. An unrooted network was built based on the genetic differentiation matrix using the Neighbor-net algorithm implemented in SplitsTree v.4.13.1 [40]. Additionally, a neighbor-joining tree was created using the same distance matrix. Bootstrap values were obtained from 1,000 trees. The tree was built using functions included in the PyCogent v. 1.5.3 library [41]. Allelic richness and private allelic richness (private alleles are defined as alleles found exclusively in a single population) were estimated using the rarefaction method implemented in the software ADZE [42]. LD was calculated using TASSEL v.4.0 [43]. Pairwise r² was obtained for all markers within each chromosome and the data were fitted using local polynomial regression (LOESS) [44] implemented in R v. 3.0.1 [45]. Rarefaction and LD analyses were performed using genetic groups defined by the PCA and network analyses. These groups were defined as follows: SP, SLC Ecuador and Northern Peru, SLC non-Andean, SLL vintage and SLL contemporary (split for some analyses into SLL processing and SLL fresh).
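For concreteness, the Ho/He estimates with the P95 criterion reduce to a few lines given dosage-coded genotypes. The study used custom scripts; this sketch is illustrative:

```python
import numpy as np

def heterozygosity(geno):
    """Observed (Ho) and expected (He) heterozygosity for one genetic group.

    geno : array (n_individuals, n_snps) of allele dosages 0/1/2; NaN allowed.
    Only loci with a major allele frequency below 0.95 (P95) enter the means,
    mirroring the criterion described above.
    """
    geno = np.asarray(geno, dtype=float)
    n = (~np.isnan(geno)).sum(axis=0)            # genotyped individuals/locus
    p = np.nansum(geno, axis=0) / (2.0 * n)      # allele frequency
    poly = np.maximum(p, 1.0 - p) < 0.95         # P95 polymorphic loci
    ho = (geno == 1).sum(axis=0) / n             # observed heterozygote rate
    he = 2.0 * p * (1.0 - p)                     # Hardy-Weinberg expectation
    return ho[poly].mean(), he[poly].mean()
```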
Isolation by distance

Correlations between genetic, geographic and climatic distances were analyzed to infer patterns of isolation by distance or the effect of ecological conditions on the genetic structure. Pairwise genetic distances between accessions were computed using Kosman and Leonard's distance method [36]. Pairwise geographic distances were calculated, when GPS information was available, using the haversine formula [46]. Climatic data for accessions with GPS coordinates were obtained using the R package 'raster' [47]. Current climatic data interpolated from 1950 to 2000 were obtained from worldclim (http://www.worldclim.org) at 30 arc-second resolution (approx. 1 km). A PCA was carried out with all the climatic information, and the resulting scores were used to obtain pairwise climatic distances based on a Euclidean metric. Significance of the correlations between distance matrices was assessed with a Mantel test based on 1,000 permutations, implemented in the PyCogent Python library [41]. A density plot for each distance comparison was created using the kde2d function in the R 'MASS' package [45].

Phylogenetic analysis

A phylogenetic tree was built with SNAPP [48] to infer the evolutionary history of the tomato species in the Andean region encompassing Ecuador and Peru. SNAPP, which is part of the BEAST package [49], is a recently developed method that allows reconstruction of the species tree from unlinked SNPs by using a finite-sites model likelihood algorithm within a Bayesian Markov chain Monte Carlo (MCMC) framework. An MCMC chain was run for 2,000,000 steps with a sampling interval of 1,000 and a burn-in of 25%. Convergence of the posterior and likelihood distributions and the effective sample sizes of the model parameters were assessed using Tracer v.1.5 [50]. Due to the high computational demands of SNAPP, only one accession per genetic subgroup was used. For the same reason, not all genetic subgroups were considered; only SP and Peruvian, Ecuadorian and Mesoamerican SLC accessions were included. Three outgroup species were also included, namely S. galapagense, S. neorickii and S. chmielewskii.

Fruit weight and shape gene genotyping

Six markers that distinguish the wild-type and derived causal alleles of the fruit shape loci (sun, ovate, fas and lc) as well as the fruit weight loci (fw2.2 and fw3.2) were genotyped (Table 1 and Additional file 1: Table S1). lc (locule number) and fas (fasciated) control the number of locules, an important feature affecting fruit weight as well as shape. The gene lc is hypothesized to be an ortholog of WUSCHEL, which is required to maintain stem cell identity [28]. The fas mutation affects a YABBY2 transcription factor, which encodes a member of the family regulating organ polarity [27,51]. Two genes exhibit a major effect on fruit shape, namely sun [26] and ovate [25], positive and negative regulators of growth, respectively. The fruit weight gene fw2.2 negatively controls cell division and encodes a member of the Cell Number Regulator (CNR) family [24,52]. fw3.2 encodes an ortholog of KLUH, a P450 enzyme which increases weight through increased cell number in pericarp and septum tissues [29].
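Before turning to the results, the two core computations of the isolation-by-distance analysis described above (haversine geographic distances and Mantel permutation tests) can be sketched as follows. The implementation is illustrative; the study used the PyCogent Mantel implementation:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) between two points in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def mantel(d1, d2, n_perm=1000, rng=np.random.default_rng(2)):
    """Mantel test: correlation between two square distance matrices, with
    significance from random permutation of one matrix's row/column labels."""
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)                 # upper-triangle entries
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        r = np.corrcoef(d1[perm][:, perm][iu], d2[iu])[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)     # one-tailed p-value
```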
Genetic structure of the tomato accessions

To detect patterns of genetic structure within the collection, we conducted a global PCA (Figure 1) using the 2,313 selected SNPs. The graphical pattern of the first two principal components (PCs) is suggestive of an arch structure, with the three edges corresponding to SP, SLC and SLL, respectively. The small-fruited wild relative SP forms the left side, differentiated along both PCs. SLC corresponded to the top of the arch and was also distributed along both PCs, albeit less clearly than SP. SLL accessions are differentiated only along PC2, forming the right edge (positive PC1, distributed PC2). Additionally, a group of genotypes appeared in between the three main groups. The accessions in this region include all ten artificial SLL x wild species hybrids, and the accessions BGV007985, BGV012625 and LA1909 are already classified as interspecific hybrids in their passport data; thus we have called this group "mixture". The SP category was the most genetically diverse group (He = 0.21), followed by SLC (He = 0.17) and SLL (He = 0.12) (Table 2).

To identify clusters within each species (i.e., genetic groups) and sub-clusters within each cluster (i.e., genetic subgroups), additional PCAs were conducted in a hierarchical fashion with the accessions belonging to the same species (Figure 2 and Additional file 3: Figure S1, Additional file 4: Figure S2, Additional file 5: Figure S3, Additional file 1: Table S1). For SP, the first two PCs (explaining 33.5% of the total variance) showed that SP Ecuador, a group comprising Northern Ecuadorian accessions, formed a separate genetic group from the other SP accessions (Figure 2A and Additional file 3: Figure S1). These Ecuadorian accessions were further subdivided into three genetic subgroups: Ecuador 1, Ecuador 2 and Ecuador 3 (Additional file 3: Figure S1A and B). The remaining SP accessions were divided into two genetic groups: Peru (corresponding mainly to Coastal Peru and Northern Montane Peru) and Montane (Southern Ecuadorian Montane accessions) (Figure 2A and Additional file 3: Figure S1). Montane accessions were further subdivided into two genetic subgroups (Montane 1 and Montane 2), whereas the Peruvian accessions clustered into 9 categories (Additional file 3: Figure S1C-F). Accessions located in an intermediate position in the PCA were classified as SP mixture, and likely represent admixtures between SP accessions from different groups (Figure 2A). These admixtures could be from naturally occurring hybridizations or the result of accidental outcrossing events during the handling of the accessions in germplasm collections or regeneration in seed banks. The genetic diversity among the three SP groups ranged from He = 0.09 (Ecuadorian SP) to He = 0.15 (Peruvian SP) (Table 2).

For SLC, the first two PCs explained 16.0% of the total variance and showed a clustering based on geography (Figure 2B; Additional file 3: Figure S1). The Ecuadorian and Peruvian SLC formed two non-overlapping clusters in the PCA representation and showed a higher genetic diversity compared to SP Ecuador and SP Montane (SLC Ecuador He = 0.19 and SLC Peru He = 0.18, Table 2). An SLC group which included accessions from all over the subtropical regions of the world was called SLC non-Andean and was located between the two Andean clusters (Figure 2B). A distinct cluster named SLC-SP Peru was identified, composed of accessions from Southern Peru.
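The PCAs underlying this classification were computed with smartPCA (Eigensoft); an equivalent computation on a dosage matrix can be sketched as below. The per-SNP centering and binomial scaling follow the usual smartPCA convention, and the mean imputation of missing calls is an assumption of this sketch:

```python
import numpy as np
from sklearn.decomposition import PCA

def genotype_pca(geno, n_components=2):
    """PCA of a dosage matrix (n_accessions x n_snps, 0/1/2, NaN = missing)."""
    geno = np.asarray(geno, dtype=float)
    p = np.nanmean(geno, axis=0) / 2.0              # per-SNP allele frequency
    X = np.where(np.isnan(geno), 2.0 * p, geno)     # mean-impute missing calls
    X = (X - 2.0 * p) / np.sqrt(2.0 * p * (1.0 - p) + 1e-12)
    pca = PCA(n_components=n_components)
    return pca.fit_transform(X), pca.explained_variance_ratio_
```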
Each SLC genetic group could be further subdivided based on genetic structure. Ecuadorian SLC was split into four subgroups: three that divided Ecuador latitudinally (Additional file 4: Figure S2A and B, Additional file 6: Figure S4) and one that was named SLC vintage, since it mainly included accessions collected from South American markets as vintage tomatoes. Interestingly, the SLC vintage accessions often featured large fruits with many locules, a trait that may have been selected early for cultivation and consumption (Figure 3). The SLC vintage accessions clustered closely, but separately, relative to the three Ecuadorian genetic subgroups (Additional file 4: Figure S2A and B). The Peruvian SLC was divided into three subgroups that were named, from north to south, Peru 1, Peru 2, and Peru 3. The SLC non-Andean group was subdivided into Colombia, Costa Rica, Mesoamerica, Sinaloa (Mexico), South East Asia and Other, the last representing the rest of the subtropical regions of the world (mainly Europe, Africa and South American nations outside of Colombia, Ecuador and Peru). Similarly to SP, SLC accessions without a clear genetic clustering and without a common geographic origin were classified as SLC mixture. These mixture accessions were distributed between the Peruvian and Ecuadorian SLC clusters in the PCA (Figure 2B). In addition, closely related to the SLC non-Andean group were seven accessions with no obvious relationship according to the passport data; these were referred to as SLC 1.

The PCA for the SLL accessions showed that the first two PCs (13.6% of the total variance) separated five main genetic groups: vintage, fresh, processing 1, processing 2 and SLL 1 (Figure 2C). All SLL groups had low diversity (He = 0.06-0.10) compared with the Peruvian and Ecuadorian SLC (He = 0.177-0.188) (Table 2). The SLL vintage group was divided into subgroups that were differentiated using additional PCAs: Mesoamerica, vintage 1, vintage 2 and early breeding lines (Additional file 5: Figure S3A and B). The SLL fresh group comprised the subgroups fresh 1, fresh 2 and vintage/fresh (Figure 2C, Additional file 5: Figure S3C and D). The latter subgroup was named vintage/fresh because it included accessions classified as vintage as well as contemporary breeding fresh-market accessions. SLL fresh 1 was composed of Florida and North Carolina accessions, while SLL fresh 2 consisted of accessions from New York (Additional file 1: Table S1). The SLL processing 1 group was subdivided into three groups: 1-1, 1-2 and 1-3. The latter group comprised a subset of accessions from the Ohio breeding germplasm, whereas the remainder of the Ohio germplasm was found in the SLL processing 1-2 subgroup. The processing 1-1 subgroup included accessions from Oregon. The group SLL processing 2 was clearly separated from the other processing groups. This group was entirely composed of New York breeding materials, which represent a predominantly California genetic background with Phytophthora resistance introgressed from North Carolina fresh-market accessions. Finally, the SLL 1 group was located between SLL processing 1 and SLL fresh in the PCA (Figure 2C) and was composed of a mixture of accessions such as the plum tomatoes Rio Grande and NC EBR-6.

To determine the consistency of the structure obtained by the PCA analyses, we compared the distribution of genetic distances within the following hierarchy levels: species, genetic group, genetic subgroup and samples of the same uniquely named accession (Additional file 7: Figure S5).
As expected, the species level showed the highest distances, whereas the groups and subgroups showed progressively lower genetic distance values. All pairwise genetic differentiations among subgroups assessed by F_st and D_st were significant (p-value < 0.05) (data not shown). The distance among repeated samples of uniquely named accessions was very low, indicating high consistency among genotyping experiments.

Comparison of the genetic and passport classifications

The genetic classification derived from the PCAs was compared with the passport-based classification and showed overall good agreement (Figure 4 and Additional file 1: Table S1). Most disagreements were in SLC, followed by SLL (Figure 4). One striking difference between the two classifications occurred for 102 SLC accessions that were located in the PCA between SLC and SLL and classified as mixture (Figure 1). These accessions included many of the commercial cherry tomatoes. These data imply that most cultivated cherry tomatoes are not true SLC. Another notable exception to the correspondence between the genetic and passport classifications was the subgroup comprised of accessions that were listed as SLL vintage but were genetically classified as an SLC group closely related to SLC Ecuador. This cluster was classified as SLC vintage and consisted of genetically diverse germplasm that included accessions collected mostly at South American markets.

Population relationships

To determine the relationships between all subgroups, we constructed a neighbor network and a population phylogenetic tree reflecting subgroup relationships based on D_st distances (Figure 5A and Additional file 8: Figure S6). The group SLC-SP Peru was located at a genetic position between Ecuadorian SLC and SP and appeared to be the result of an admixture between these two species. Within SLC, groups found in close geographical proximity also tended to cluster together. The neighbor network showed two plausible paths for the evolution of SLC to SLL: 1) SLC Ecuador 3, SLC Colombia, SLC Costa Rica, SLC Mesoamerica; and 2) SLC Peru 1, SLC Peru 2, SLC Peru 3 and SLL Mesoamerica (Figure 5B). The SLL groups also showed that SLL vintage and the early breeding lines are genetically closely related to Mesoamerican SLL. The SLL fresh and SLL processing subgroups were more distant from the Mesoamerican and vintage SLL, with evidence of reticulation. In general, the accordance between the proposed hierarchical genetic classification, as represented in the neighbor network, and the population tree was high (Figure 5 and Additional file 8: Figure S6). The accession-based phylogenetic tree that included S. chmielewskii, S. neorickii and S. galapagense (Figure 6) showed that the Peruvian SP groups were basal to the red-fruited group and that Ecuadorian SP was phylogenetically the closest to SLC, with SLC Ecuador 1 basal to the entire SLC. Interestingly, the S. galapagense (SG) accession clustered very close to the Ecuadorian SP, a grouping that was also found in the PCA (Figure 1).

Isolation by distance and climate

We noted that most clusters in SP and SLC corresponded to the locations where the accessions were collected. Therefore, we sought to evaluate the significance of this finding by calculating the correlations between genetic, climatic and geographic distances (Table 3).
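The Mantel test used for these comparisons correlates the entries of two distance matrices and obtains a p-value by permuting one of them. A minimal sketch follows, with random matrices standing in for the genetic, climatic and geographic distances; the study's exact implementation may differ:

```python
# Minimal Mantel test sketch: correlation between two distance matrices with
# a one-sided permutation p-value. Matrices here are random stand-ins.
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)          # upper triangle, no diagonal
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]   # observed correlation
    count = 0
    n = d1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)                  # permute rows and columns together
        r = np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Example with random symmetric distance matrices
x = np.random.default_rng(1).random((20, 20)); d1 = (x + x.T) / 2; np.fill_diagonal(d1, 0)
y = np.random.default_rng(2).random((20, 20)); d2 = (y + y.T) / 2; np.fill_diagonal(d2, 0)
print(mantel(d1, d2))
```

Permuting rows and columns together preserves the internal structure of each matrix while breaking any association between the two.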
The highest correlations were found in SP, with a strong positive correlation between genetic and climatic distances (r = 0.67, Mantel p-value = 0.01) as well as between genetic and geographic distances (r = 0.53, Mantel p-value = 0.01) (Additional file 9: Figure S7). Two sets of accessions were explored in SLC, one including the subgroups from Ecuador and Northern Peru and the other including SLC non-Andean. For the Ecuadorian and Northern Peruvian SLC, the correlation between genetic and climatic distances was lower (r = 0.29, Mantel p-value = 0.01) than in SP, whereas the correlation between genetic and geographic distances was similar (r = 0.49, Mantel p-value = 0.01). When considering the SLC accessions together, a low correlation between genetic and climatic distances (r = 0.11, Mantel p-value = 0.09) as well as between genetic and geographic distances (r = -0.19, Mantel p-value = 0.01) was observed.

Diversity and heterozygosity

Expected heterozygosity (H_e) and observed heterozygosity (H_o) decreased in the succession from SP to SLC and SLL (Table 2 and Additional file 10: Figure S8). Within SP, the SP Peru group retained the highest diversity, followed by SP Montane and SP Ecuador. The Ecuadorian and Peruvian SLC (SLC Ecuador and SLC Peru) showed higher levels of diversity (H_e = 0.19 and 0.18) than SP Ecuador and SP Montane. In contrast with the high diversity of the Ecuadorian and Northern Peruvian SLC, the other SLC subgroups exhibited low diversity, similar to that found in vintage SLL. Within SLL, a similarly low level of observed heterozygosity was typical for most subgroups. However, when the contemporary SLL subgroups (processing and fresh) were combined, slightly higher levels of diversity were found compared with SLL vintage (H_e = 0.12 vs. 0.09), a situation that is likely due to the effect of introgression during breeding and to differentiation into distinct market classes (Additional file 10: Figure S8). To avoid biases in the genetic diversity estimates due to the different numbers of individuals per group, a rarefaction analysis was carried out (Figure 7). To explore whether genetic diversity estimates might be inflated by introgressed genomic segments from wild relatives present in contemporary SLL accessions, we conducted parallel analyses with two sets of markers. The first set included one marker every 0.1 cM (2,313 SNPs) (Figure 7A and C), and the second set included 6,343 SNPs, after removing monomorphic SNPs and SNPs with more than 10% missing data (see Materials and Methods) (Figure 7B and D). When using the smaller marker set, the average number of alleles per locus of SP and of the combined set of SLC Northern Peru and Ecuador was higher than in all other clusters (Figure 7A). When all markers were used, SLL fresh and SLL processing showed an allelic richness that was intermediate between Andean SLC and SP on the one hand, and non-Andean SLC and SLL vintage on the other (Figure 7B).

Figure 4 Comparison between the passport-based classification (columns) and the genetic-based classification (rows). The genetic classifications correspond to the clusters shown in Additional file 1: Table S1, and the passport classification is based on the information provided (see Materials and Methods for further details). The size of the squares is proportional to the number of samples corresponding to each genetic and passport group, and background colors highlight the different species and botanical varieties.
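The diversity statistics used above follow standard definitions: per-locus expected heterozygosity H_e = 1 - sum(p_i^2), observed heterozygosity as the fraction of heterozygous calls, and rarefaction as repeated subsampling of each group to a common size. A sketch under those definitions, with hypothetical 0/1/2 genotype data, is given below; the study's exact procedures may differ in detail:

```python
# Sketch: expected/observed heterozygosity and rarefied allelic richness for
# biallelic SNPs coded 0/1/2 (alt-allele copies). Data are hypothetical.
import numpy as np

def heterozygosity(g):
    p = g.mean(axis=0) / 2.0            # alt-allele frequency per locus
    he = 2.0 * p * (1.0 - p)            # H_e = 1 - p^2 - q^2 for two alleles
    ho = (g == 1).mean(axis=0)          # fraction of heterozygous calls
    return he.mean(), ho.mean()

def rarefied_richness(g, n_sub, n_rep=200, seed=0):
    """Mean alleles per locus (1 or 2) after subsampling to n_sub individuals."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_rep):
        sub = g[rng.choice(g.shape[0], size=n_sub, replace=False)]
        n_alleles = (sub < 2).any(axis=0).astype(int) + (sub > 0).any(axis=0)
        vals.append(n_alleles.mean())
    return float(np.mean(vals))

groups = {"SP": np.random.default_rng(1).integers(0, 3, (80, 1000)),
          "SLL vintage": np.random.default_rng(2).integers(0, 3, (250, 1000))}
n_min = min(g.shape[0] for g in groups.values())
for name, g in groups.items():
    he, ho = heterozygosity(g.astype(float))
    print(f"{name}: H_e={he:.3f}, H_o={ho:.3f}, "
          f"rarefied richness={rarefied_richness(g, n_min):.3f}")
```

Subsampling every group to the size of the smallest one is what makes the allelic richness values comparable across groups of unequal size.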
When all contemporary SLL accessions were combined into one group, the analysis with the smaller marker set showed a slight increase in allelic richness compared with separate analyses of the SLL processing and SLL fresh genetic groups (Additional file 11: Figure S9A). Using all markers, the allelic richness of the combined contemporary accessions approached that of SP (Additional file 11: Figure S9B). These findings suggest that introgressions found in the contemporary accessions may inflate estimates of genetic diversity, but also that differentiation into distinct market classes increased genetic divergence within SLL. The frequency of private alleles was also explored for the subset of markers (Figure 7C) and the whole dataset (Figure 7D). The highest proportion of private alleles was found in SP regardless of the marker dataset used, whereas the number of private alleles was virtually the same for all other groups, except for the processing group when using the complete marker set. This finding might indicate the presence of introgressions from genetically diverse relatives in SLL processing. LD was estimated between markers at different genetic distances from one another (Additional file 12: Figure S10). From highest to lowest degree of disequilibrium, the groups were: fresh, processing, vintage, Andean SLC, non-Andean SLC and SP. These results suggest that LD affects estimates of allelic richness, especially when dealing with groups with different degrees of LD.

Table 3 Isolation by distance and climatic distance: correlations between climatic, geographic and genetic distances in SP, SLC Ecuador and Northern Peru, and SLC non-Andean; the number of accessions (n), correlation coefficient (r) and p-value for the Mantel test are shown.

Origin and migration of the derived tomato fruit shape and weight alleles

Several genes involved in the transition from small, round fruit to large and variably shaped tomatoes have been cloned in recent years. In all cases, the nucleotide polymorphism associated with the change in fruit appearance is known. We wanted to investigate when and where the derived alleles of the six fruit shape and weight loci arose and how they migrated through populations during the evolution of tomato. For all fruit morphology loci, the derived allele was at very low frequency or not found in the SP accessions (Figure 8). The derived alleles of the fw2.2 and lc loci were both found at very low frequency in SP Ecuador but at much higher frequency (55% or more) in the Andean SLC groups (SLC Peru and SLC Ecuador). The lc mutation was also common in SLL vintage and SLL fresh accessions, whereas the derived allele was not found in the SLL processing types. The derived allele of fw2.2 was nearly fixed in all SLL groups. For fw3.2, the derived allele was found in SLC Ecuador and SLC Peru, albeit at lower frequency than lc and fw2.2. The derived allele did not become fixed in SLL vintage but became nearly fixed in the contemporary SLL accessions. The derived alleles of fas and ovate most likely arose in the Ecuadorian or Peruvian SLC accessions and were maintained at low frequency in the remaining SLC accessions. Of the SLL vintage accessions, 20% and 30% carried the derived alleles of ovate and fas, respectively. In the other SLL groups, the derived alleles of ovate and fas were found at low frequency in this dataset. However, the derived allele of ovate is quite common among Italian vintage cultivars, of which 38 to 47% carry the mutation [30].
The derived allele of sun was present at low frequency in SLL vintage, fresh and processing, whereas it was detected neither in Ecuadorian and Peruvian SLC nor in the Mesoamerican accessions.

Discussion

Key questions regarding the evolutionary history of cultivated tomato include where and when the crop was domesticated and the position of SLC in this evolutionary process. In this study, we interrogated a selection of 2,313 SNPs from the SolCAP array in nearly 1,000 unique accessions comprising SLL, SLC and the red-fruited wild relative SP. By combining accessions with robust passport data, we were able to test hypotheses about the origin of cultivated tomato. Our results support the two-step domestication hypothesis proposed by Blanca et al. [9] and are in line with recently published work on the origin of tomato [54]. As expected, genetic diversity was high in SP (Table 2), and the genetic clusters were explained by geographic distances and climatic zones (Table 3 and Additional file 9: Figure S7). The higher number of SP accessions analyzed compared with previous studies has allowed a more detailed definition of the SP populations, especially in Southern Peru, where a sequential colonization could be proposed based on the PCA (Additional file 3: Figure S1) and the network analysis (Figure 5). SLC accessions from Ecuador and Peru also showed genetic structure that correlated with geography (Table 3 and Additional file 9: Figure S7). Genetic diversity was high in Ecuadorian and Peruvian SLC but was reduced in SLC from Mesoamerica and elsewhere. Our results suggest that the major genetic bottleneck did not occur with the transport of SLL from Mesoamerica to Europe, but earlier, coinciding with the migration of SLC from Ecuador and Northern Peru to Mesoamerica (Table 2 and Additional file 9: Figure S7). In wild populations there is a strong correlation between geography, climate and genetic distances (Table 3 and Additional file 9: Figure S7). These correlations do not hold in the non-Andean SLC and the SLL genetic subgroups, a situation that is common for plants associated with human activities, either cultivated or weedy, owing to the movement of seeds by humans and to the artificial modification of their environments [55,56].

Phylogenetic relationships

The phylogenetic tree (Figure 6), the neighbor network analyses (Figure 5) and the high number of private alleles (Figure 7) support the status of SP as the basal group of the red-fruited species of the Lycopersicon section. Our data also support the view that Northern Ecuadorian SP is the closest ancestor of SLC. Northern Ecuadorian SLC likely originated from Ecuadorian SP, yet its high genetic diversity and its reticulate structure in the phylogenetic network suggest a complex history. The position of SG within SP contrasts with a recent study [57] in which SG was found to be closer to SLC than to SP. However, firm conclusions about the position of the Galapagos accessions will require further study, as both studies, that of Koenig et al. and the present one, are based on a small number of accessions, and Koenig et al. lacked SP accessions from Ecuador. The data suggest two possible scenarios for the origin of SLC. Ecuadorian SLC features twice the genetic diversity of Ecuadorian SP (Table 1); thus, it is not likely that SLC was simply derived from this SP subgroup, despite the two being very close phylogenetically. One hypothesis is that the subgroup named Peruvian SLC-SP represents the origin of SLC.
This genetic subgroup is also genetically close to Ecuadorian SP (Figure 6). However, the large geographic distance separating these subgroups challenges this scenario. It is possible that the Peruvian SLC-SP is instead the result of a secondary contact between SLC and SP. The second hypothesis is that ancestral populations of Northern Ecuadorian Coastal SP gave rise to SLC in Northern Ecuador across the Andes. Secondary gene flow from other SP populations (e.g., Montane SP from Southern Ecuador and Northern Peru), suggested by the reticulation of the phylogenetic network and the complex PCA structure, may then have enhanced the diversity of SLC. Alternatively, the sampling of Northern Ecuadorian SP may have been incomplete, or an ancestral, highly diverse population might have given rise to both Northern Ecuadorian SP and SLC. Ecologically, it is more plausible that SLC originated from Northern Ecuadorian SP. These Northern Ecuadorian SP accessions thrive in the wet and forested areas of Coastal Ecuador, a climate closer to the wet environment on the eastern side of the Andes where Ecuadorian SLC is found. In contrast, Peruvian SP is adapted to an arid climate. Climatic similarity of some SP and SLC populations may have facilitated gene flow through animal or human movement, despite the geographic distances. Possible mechanisms for gene flow between SP and SLC in this region have been proposed previously [9,58]. The Mesoamerican SLL vintage subgroup appeared to be the most ancestral SLL according to the phylogenetic trees and the network. This SLL genetic subgroup was closely related to SLC Peru 2 in the phylogenetic network and tree. Thus, our data clearly support that SLC evolved into Mesoamerican SLL. According to the analyses of this dataset, all other SLL are monophyletic, and all SLL groups originated from the SLL Mesoamerican accessions.

Proposed origin and domestication based on derived alleles for fruit weight and shape

The most ancestral SLC is found in Ecuador and Northern Peru and is characterized by high genetic diversity and morphological variability [9]. It spans a wide range of domestication states (from accessions collected in markets, and presumably cultivated at production scale, to weeds) and uses (from human consumption to animal feed), which suggests a certain degree of selection on SLC. This finding is supported by the fact that the derived alleles of lc, fw3.2 and fw2.2 are already prevalent in the ancestral SLC accessions from Northern Peru and across Ecuador (Figure 8). The derived alleles of lc and fw2.2 may have originated in the Ecuadorian SP and could represent the earliest known mutations to arise. However, this interpretation needs to be viewed with caution, as only two SP accessions carrying a single derived allele of each locus were identified (Additional file 1: Table S1). Interestingly, the SLC vintage group that clusters closely with Ecuadorian SLC included accessions that were collected from markets and feature fruits that are large, ribbed and multi-loculed (Figure 3). The strongest selection may have taken place in this subgroup, as all of its accessions carried the derived alleles of lc, fw2.2 and fw3.2, and half of them carried the derived allele of fas (Figure 3 and Additional file 1: Table S1). None of the other SLC subgroups were fixed for as many fruit weight and shape alleles as the SLC vintage category. Thus, it appears that SLC was being cultivated and that selection for larger fruit was taking place (Figure 3 and Additional file 1: Table S1).
SLC Mesoamerica carried both derived and ancestral alleles at most of the fruit shape and weight loci, while SLC Asia and SLC Other were completely fixed for the derived allele of fw2.2, suggestive of selection in the SLC germplasm grown outside the Americas. SLL arose in Mesoamerica, as there is no evidence of the existence of ancestral SLL in South America. All SLL accessions sampled from South America were found to carry introgressions from wild relatives, suggesting that they were derived from breeding efforts of the last 100 years. Therefore, to complete the domestication of SLL, SLC would have had to migrate to Mesoamerica, possibly as a semi-domesticated type. According to the network analysis, the PCA results and previous knowledge of the species' history, two SLC migrations can be suggested. SLC could have migrated from Southern Ecuador to Colombia and Costa Rica, arriving in Mesoamerica in a stepwise process (Figure 2D). However, a second possibility is also suggested by our results: SLC could have reached Mesoamerica from Northern Peru in one step. The distribution of fruit weight and shape alleles did not support one route of migration over the other. In any case, results from the gene diversity analysis suggest that the migration from the Ecuadorian or Northern Peruvian region to Mesoamerica led to a strong bottleneck, which eventually resulted in reduced variation in Mesoamerican SLL, as described by Blanca et al. [9]. The second phase of tomato domestication, in Mesoamerica, is suggested by the increase in the derived allele frequency of fw3.2. The allele frequencies at the fruit weight loci suggest that selection on fw2.2 and lc was important for the origin of SLC, while fw3.2 was important for the origin of SLL. Our results agree with a recent study [54] based on 360 tomato genomes, which also found evidence for a two-step domestication and identified new QTLs implicated in both steps of domestication and breeding. The American origin of the first European tomato is confirmed by the genetic relationship between the Mesoamerican and vintage SLL subgroups (Figure 6). It is remarkable that vintage SLL appears to have been derived exclusively from Mesoamerican germplasm. Although large-fruited vintage SLC were found in South America, they did not appear to contribute to the germplasm that migrated to Europe and the rest of the world. It is not possible with the current data to know why the Ecuadorian and Peruvian SLC did not contribute to the Spanish vintage gene pool brought to Europe, despite those regions also being under Spanish control, but we propose that climatic similarity between Mexico and Spain could have played a role.

Contemporary tomato diversity

Since the introduction of modern breeding in the 20th century, the pace of genetic change in SLL has accelerated. New germplasm has been created that, according to the PCA, the network and the population tree, differs substantially from the vintage accessions. These results are consistent with previous findings [19-22,31]. The contemporary tomatoes can be differentiated into four broad groups: fresh, processing 1, processing 2 and SLL 1. This broad differentiation among the contemporary groups reflects independent breeding efforts and selection histories for the fresh and processing accessions. The further subdivision of the contemporary groups can be explained by geographic origin or by founder effects in regional breeding programs. Similar results were previously reported by Sim et al. [21,31].
These subgroups differentiate accessions coming from the main public-sector breeding programs in North America. For processing tomatoes, these programs were historically carried out in California, the Midwest of the United States, the East Coast of the United States and Ontario, Canada. The programs commonly exchanged breeding materials, so it is to be expected that the genetic groups mix those origins, albeit in different proportions [21]. The neighbor network reticulation found in these subgroups is compatible with this history (Figure 5). Contemporary tomatoes are the result of introgressing genes from wild species into SLL, starting before 1920 [59]. The PCA and rarefaction analyses (Figure 7) provided insight into the effect of these breeding practices
Nonscanning large-area Raman imaging for ex vivo/in vivo skin cancer discrimination

Imaging Raman spectroscopy can be used to identify cancerous tissue. Traditionally, a step-by-step scanning of the sample is applied to generate a Raman image, which, however, is too slow for routine examination of patients. By transferring the technique of integral field spectroscopy (IFS) from astronomy to Raman imaging, it becomes possible to record entire Raman images quickly within a single exposure, without the need for a tedious scanning procedure. An IFS-based Raman imaging setup is presented which is capable of measuring skin ex vivo or in vivo. It is demonstrated how Raman images of healthy and cancerous skin biopsies were recorded and analyzed.

Introduction

Imaging Raman spectroscopy is a powerful tool for identifying chemicals and their distribution. When monochromatic light impinges on molecules, fractions of the scattered light are wavelength-shifted according to the molecular vibration states. Thus, Raman spectra are fingerprints that allow a contact- and label-free identification of chemical structures. In contrast to IR absorption spectroscopy, which also measures vibrational transitions, Raman spectroscopy works in an aqueous environment, which makes this method promising for biological analysis [1,2,3], especially in the field of medical diagnostics for the identification of cancerous tissue [4,5,6,7,8]. In surgical cancer treatment, the determination of resection margins is a much discussed topic [9,10,11]. Removing too much tissue stresses the patient, while margins that are too tight reduce the chances of recovery. Usually, a biopsy is taken and examined ex vivo by a pathologist. Based on the results, the surgeon later draws on experience to determine the border between cancerous and healthy tissue. To replace this time-consuming two-stage approach and to improve accuracy, there is a demand for spectroscopic methods that allow spatially resolved cancer detection in situ. Meanwhile, medical Raman microscopes are available that allow clinical examination of skin areas directly on patients [12]. However, commercial Raman microscopic systems for in vivo skin cancer detection are still single-channel systems, i.e., to obtain a Raman image from an area of skin, a time-consuming step-by-step scanning process is necessary [13]. Raman intensities are very low (only a fraction on the order of 1 × 10⁻⁷ of the scattered light is due to Raman scattering) and are in addition often superposed by fluorescence. Thus, even for only a few hundred pixels the measurement time typically adds up to many minutes or even hours, which is far too long for routine examinations on patients in vivo. To reduce acquisition times, various methods for parallel data collection have been described [14]. Of particular interest are full-throughput snapshot techniques, also called "multichannel spectroscopy", "3D spectroscopy" or "integral field spectroscopy" (IFS). These techniques work without any serial scanning procedure and do not fundamentally sacrifice light during the recording process. IFS was developed in astronomy more than three decades ago [15] to save scarce and expensive observation time at observatories. It is based on slicing a two-dimensional image into single strip-like segments and stringing them together into one long row in front of a long-slit spectrograph. This can be done, e.g.,
using a mirror stack image slicer or with a fiber bundle converter: at the sample side, the fibers of the bundle are arranged as a two-dimensional matrix; in front of the spectrograph's input, the fiber front surfaces are arranged in a straight line, lying side by side in a V-groove holder. After passing the collimator and camera optics and the dispersive element of the spectrograph, the light signals emerging from each fiber generate a family of individual spectra on a large-area detector. Data reduction software evaluates the raw signal, applies calibrations, and finally provides a data cube containing the entire spectral and spatial information. A review of IFS in astronomy is given in [16]. High-end IFS spectrographs have been installed in the MUSE (Multi-Unit Spectroscopic Explorer) system [17] at the Very Large Telescope observatory in Paranal, Chile, since spring 2014. MUSE consists of 24 connected spectrograph modules and is capable of acquiring a total of 90,000 spatial elements (also called spaxels) within a single exposure. From every spaxel the entire spectrum from 465 to 930 nm is recorded at a spectral resolution of 0.22 nm in a total of 4300 spectral bins. To a certain degree, imaging in vivo Raman spectroscopy and astronomical observations face similar challenges, namely to efficiently detect faint signals in the presence of bright background light. This notion was the motivation to adapt a spectrograph based on a MUSE design for use in medical Raman spectroscopy. A fiber-bundle-based optical setup was realized to record Raman images of 1 cm² areas of skin, which matches common sizes of lesions suspected to be cancerous. The objective of the project was to validate the concept of a future instrument that would allow dermatologists to promptly recognize in vivo the borders of cancerous tissue without the need for a time-consuming scanning procedure. The general capability of IFS for generating large-area Raman images was presented previously [18]. Similar Raman setups are described in [19,20,21]. Here, we present a discussion and characterization of a setup using an astronomy spectrograph with regard to Raman imaging of human skin ex vivo and in vivo. Finally, comparative measurements of healthy and cancerous human skin samples in vivo and ex vivo were performed.

Spectrograph and Optical Setup

Figure 1 shows a scheme of the experimental setup. To record Raman images, an image acquisition head based on two microlens arrays (MLAs) was realized. Figure 15, left shows the top of the image acquisition head with a skin sample and the upper MLA underneath. A detailed description of the Raman acquisition optics was given previously [18]. In brief: for excitation, a 784.5 nm diode laser with a tunable output power of up to 500 mW was used. Its fiber-optic output was connected to a square-core fiber with 600 µm core side length. Light emerging from square-core fibers shows a top-hat intensity distribution, which is favorable with regard to a preferably homogeneous illumination [22]. The excitation light passes a collimation lens and a clean-up filter, which removes the Raman and fluorescence background generated within the fiber as well as spontaneous emission of the laser, and is finally guided to the sample by a 45° dichroic mirror. An MLA in front of the sample generates 20 × 20 excitation spots on the sample with a 0.5 mm pitch, i.e.,
a square image with 1 cm side length and a sampling of 400 pixels can be recorded. In the opposite direction, the MLA collects the Raman signal from the sample. A pair of relay lenses guides the Raman signal to a further MLA that couples the signal into the fibers of the fiber bundle. A 785 nm notch filter and a long-pass (LP) filter remove the Rayleigh signal. The fiber bundle consists of 400 fibers (114/125/155 VIS/IR, NA = 0.22, Heracle, Germany). On the sample side, the fibers are arranged within a square plate containing 20 × 20 microholes at center distances of 0.5 mm (Fig. 2). At the spectrograph input, the fibers are arranged side by side within a V-groove holder, forming a pseudo-slit with a length of 118 mm. There are 421 V-grooves with a pitch of approximately 0.29 mm. The fibers of the bundle are arranged in groups of twenty. Additional V-grooves between the groups are occupied with fibers that form a fan-out cable intended for other applications (calibration, tests). Due to the gaps, the groups of twenty are easily distinguishable in the raw data, thus facilitating a quick inspection by eye before starting the data reduction process. A detailed description of the spectrograph was given previously [23]. The array of optical fibers is attached to a first plano-concave silica lens. The plane side of the lens was initially intended to be coupled with index-matching gel to minimize coupling losses. However, it proved far more advantageous to allow an easy back-and-forward sliding of the fiber holder: pushed back, the fibers could be illuminated with a high-power white-light LED array, and on the basis of the resulting light spots, the optical components of the head, especially the positions of the MLAs, could be adjusted. During the alignment procedure, the 785 nm long-pass filter was flipped away. To allow a routine check of the alignment, the index-matching gel was omitted. Instead, two small pieces of adhesive tape were put on the borders of the fiber holder; the tape pieces served as 30 µm spacers and prevented scratching when sliding the fiber holder towards the lens. As dispersive element, a volume phase holographic grating (VPHG) was used. The gelatin diffraction grating is circular with a diameter of 118 mm, while the entire VPHG component is square with a side length of 122 mm. The grating is optimized to cover a wavelength range from 350 to 900 nm. Finally, the diffracted transmitted light is focused onto the image plane of a custom-made CCD (charge-coupled device) camera. The detector is a large-area back-illuminated chip (CCD213, e2v, Chelmsford, UK) with 4096 × 4112 pixels and 15 µm pixel size. In the range from 400 to 800 nm, the detector shows a quantum efficiency of approx. 90%; at 900 nm it is approx. 60%. The spectrograph covers a wavelength range from 350 to 900 nm with a linear dispersion of 0.13 nm/pixel. However, for sources with a spectral energy distribution broader than one octave, the overlap of second-order signals must be suppressed by use of order-separating filters. A description of the camera detector and its readout configuration can be found in [24]. In the standard configuration, approx. 30 s are needed to read out the complete CCD chip. The control software saves the raw signal as a FITS file [25] and visualizes it using the freely available SAOImage DS9 software (Smithsonian Astrophysical Observatory, USA, available at http://ds9.si.edu). This viewer is useful for quick-look checks, e.g.,
to verify the presence of a suitable Raman signal by plotting a profile along the wavelength axis. For full data reduction and visualization, the raw data file is processed with the open-source software P3D [26,27] (available at https://p3d.sourceforge.io/). The main features of the software are: (I) the exact trace of the spectrum corresponding to each fiber is determined; due to optical aberrations, the signal traces of the fibers are not parallel to the pixel rows of the detector, and the related deviations are determined from the continuous signal traces generated on the detector when all fibers of the bundle are illuminated with white light. (II) The dispersion across the detector has an arc-shaped characteristic, an effect commonly known as spectral smile or keystone [28]; by use of a Ne lamp or another light source with discrete emission lines, a wavelength calibration is applied, i.e., the wavelength corresponding to each pixel is determined as a polynomial solution. (III) Every signal trace is assigned to the spatial position of the corresponding fiber in the matrix; finally, a data cube with two spatial axes and one spectral axis is provided as a FITS file. (IV) A viewer module of the software allows the cube to be inspected in different ways (display of spatial maps at a chosen wavelength, plots of spectra of selected regions) and simple data analysis tasks such as flux measurements and spectral line fits to be performed.

Epoxy Skin Phantoms

To characterize the setup, phantoms based on epoxy resin (epoxy casting resin "waterclear", R&G Faserverbundwerkstoffe, Germany) were prepared. Since the setup is intended to measure skin samples, the absorption and scattering properties of the phantoms were matched to human dermis [29] by adding proper amounts of TiO2 and black epoxy paste as scatterer and absorber, respectively. After curing, the surface was polished, finishing with sandpaper of 15 µm grain size. The scattering and absorption coefficients of different phantoms were measured (Lambda 900 UV/VIS/NIR spectrometer with integrating sphere; Perkin Elmer, USA) to be µs' = 2.1 ± 0.3 mm⁻¹ and µa = 0.05 ± 0.01 mm⁻¹, respectively, at 785 nm wavelength.

Image Acquisition Head

Raman imaging with IFS ideally requires a homogeneous laser illumination of the entire image field during the recording process. To verify the conditions of our setup, the illumination of a sample image was simulated by software (OpticStudio, Zemax LLC, Delaware, USA). The pseudo-color image of the simulation (Fig. 3, left) shows that most of the excitation energy is contained in 50 µm diameter spots at the focus positions of the individual microlenses. Because of the fill factor of 65%, the spaces between the microlenses are also weakly illuminated. At least for non-scattering samples, the remaining 35% do not contribute to excitation, especially since Raman signals arising from the gaps between the lenses are not coupled into the fibers at the other end of the image acquisition head. Due to optical aberrations, the geometrical spot sizes of the illumination are not exactly constant across the focal plane of the MLA but increase slightly near the edge of the field of view. The simulation reveals that the excitation energy within the spots drops from the MLA center to the penultimate rows by approximately 20%; from the penultimate row to the last row there is an additional drop of 40%. For comparison, Figure 3, right shows a real camera image of the excitation spots, obtained by switching on the laser and putting a piece of scale paper on top of the image acquisition head.
Due to scattering in the paper, the spots appear larger than predicted by the simulation. To a certain extent, the intensity impression is an artefact of the camera optics. The actual intensities were measured using a laser power meter in combination with a shadow mask. The total power of the excitation laser at the exit of the square fiber was 400 mW. Except for the four border rows, intensities of approximately 1 mW / 0.25 mm² were measured. Evidently, no significant loss of laser light occurs within the excitation pathway. At the border rows, however, the intensities drop to approximately 0.6 mW / 0.25 mm², which is in accordance with the simulation: the shaft of the MLA holder causes vignetting. In conclusion, an image field of 18 × 18 pixels is approximately homogeneously illuminated to within ±10%. It should be noted that the edges of the MLAs do not run exactly between microlenses; thus, the central microlenses are not precisely in the center of the holder, and hence the intensity gradients are not perfectly symmetric.

Spatial Resolution for Skin Samples

If a scattering sample like skin is examined, homogeneous illumination of the surface causes a decrease in spatial resolution, because multiple scattering in the sample leads to crosstalk between the pixels: photons originating from the illumination of a specific pixel also contribute to the Raman signal at neighboring pixels. To estimate this effect, a Monte Carlo simulation using the scattering and absorption coefficients of human skin, µs' = 1.46 mm⁻¹ and µa = 0.044 mm⁻¹ [29], was performed; see Figure 4. Each fiber of the matrix captures light arising from a certain section of the sample surface. From the magnification factor of the image acquisition head, the aberrations, and the fiber core diameters, it was estimated that these sections are spots with approximately 100 µm diameter. The resulting effect on the lateral resolution is shown in Figure 5 for the same human-skin model, where a homogeneously distributed Raman-active component c(r) is restricted laterally to a half space (c(r) = 0 for x < 0). This example is intended to correspond to an experiment in which the boundary of a large-area tumor region is examined. It is assumed that the cancerous region has a specific Raman signal that is not present in the healthy part. This assumption is a simplification: in reality, cancer is not indicated by a clear presence or absence of a definite Raman peak, but rather by slight changes of the spectrum. Nevertheless, the assumption is useful to depict the consequences of channel crosstalk. Figure 5 shows two situations: (I) all spots are illuminated simultaneously, as is the case for our setup (black circles), and (II) only one spot, located at the measurement position, is illuminated, as is the case for a pointwise scanning Raman microscope. For our setup, the transition of the Raman signal at the boundary (x = 0) is noticeably broadened to more than 1 mm, whereas a single-spot setup would result in a resolution considerably better than 0.5 mm.

Homogeneity of the Raman Images

The epoxy phantoms were used to examine the spatial signal intensity distribution of the setup.
The homogeneity of the samples, more precisely of the signals arising from the phantoms, was verified using a scanning single-channel Raman spectrometer (Laser- und Medizin-Technologie GmbH, Germany; excitation wavelength 785 nm) [5]. Spectra were measured at various positions of a phantom within a square of 2 cm border length. The variation of all unprocessed spectra was less than ±5%, confirming the homogeneity of the phantoms (Fig. 7). Although homogeneous phantoms were used as samples, the received signal distribution was not uniform. Figure 8 shows the average intensity distribution of the raw spectra of a phantom, which appears as a dome-like shape with center intensities approximately four times higher than at the corners. The intensity differences can be associated with the spectrograph's properties (I), the characteristics of the image acquisition head (II), and the scattering properties of the sample (III). The spectrograph's sensitivity (I) depends on the positions of the fibers in the V-grooves: due to the design of the spectrograph, light emerging from fibers at the border positions of the V-groove holder is vignetted to a certain extent. The flat-field correction reveals this characteristic. To perform the flat-field measurement, the fiber bundle matrix (Fig. 2) is removed from the image acquisition head and homogeneously illuminated using an integrating sphere. The fibers in the matrix are sorted with regard to their positions in the V-groove: fibers at the positions (X,Y) = (1,1) and (20,20) are located at the borders of the V-groove holder, whereas the fibers at (10,20) and (11,1) are in the center. As expected, the spatial sensitivity shows a half-pipe shape; intensity minima at the corners are probably caused by some vignetting due to the limited size of the available integrating sphere (data not shown). A further potential source of inhomogeneity is the distribution of the excitation intensity (II): as shown above, the excitation intensity declines from the center towards the border rows. A further reason for the dome shape is the scattering of the samples (III): as described above, scattering leads to significant signal contributions from neighboring spots. Fibers in the center of the matrix receive the largest contribution from their neighbors; when moving from the center to the borders, the number of neighbors and, accordingly, the signal decreases. Assuming homogeneous illumination and the absorption and scattering coefficients of skin, a calculation shows that the relative Raman intensity drops from 1 to 0.6 when moving 10 fibers diagonally from the center to a corner. Finally, the combination of (I), (II) and (III) leads to the dome-like shape shown in Figure 8.
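The scattering contribution (III) can be illustrated numerically: treating the neighbor contributions quoted for the Monte Carlo results of Figure 4 (0.23 in total from the four neighbors at 0.5 mm, 0.18 from the four at 0.71 mm, 0.25 from the eight at about 1.1 mm) as a convolution kernel over the 20 × 20 fiber grid reproduces a dome-like intensity map, since border fibers have fewer contributing neighbors. This is a schematic sketch, not the actual Monte Carlo code:

```python
# Sketch: dome-shaped signal map caused by crosstalk from neighboring
# excitation spots on the 20 x 20 fiber grid. Kernel weights follow the
# neighbor contributions quoted in the text (schematic only).
import numpy as np
from scipy.signal import convolve2d

k = np.zeros((5, 5))
k[2, 2] = 1.0                                   # the directly illuminated spot
for dy, dx in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
    k[2 + dy, 2 + dx] = 0.23 / 4                # neighbors at 0.5 mm
for dy, dx in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    k[2 + dy, 2 + dx] = 0.18 / 4                # diagonal neighbors at 0.71 mm
for dy, dx in [(2, 1), (2, -1), (-2, 1), (-2, -1),
               (1, 2), (1, -2), (-1, 2), (-1, -2)]:
    k[2 + dy, 2 + dx] = 0.25 / 8                # eight neighbors at ~1.1 mm

grid = np.ones((20, 20))                        # homogeneous sample, all spots lit
signal = convolve2d(grid, k, mode="same")       # summed signal per fiber
print(f"center/corner ratio: {signal[10, 10] / signal[0, 0]:.2f}")
```

The scattering term alone yields a center-to-corner ratio of roughly 1.4 in this sketch; combined with the excitation and vignetting factors (I) and (II), this moves toward the factor of about four observed in Figure 8.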
Since the actual absorption and scattering coefficients, and hence the intensity distribution, differ from sample to sample, a universal correction function cannot be applied. It was therefore decided to include normalization in the preprocessing of the spectra. The fluorescence background was removed using a 6th-order polynomial fitted to each spectrum [32]. Subsequently, standard normal variate (SNV) normalization was applied [33]. It should be mentioned that the laser unit used includes two excitation lasers separated by 1 nm. Thus, the setup is capable of removing background by use of shifted excitation Raman difference spectroscopy (SERDS) [18,34]. However, the SERDS option was not used in this work, since Raman spectra reconstructed from SERDS curves differ considerably from common Raman spectra, making a comparison to previously published results difficult. Further drawbacks of SERDS are the negative impact of photobleaching and the extension of the measurement time. It was therefore decided to apply a "classic" polynomial-based background removal algorithm. As shown in Figure 9, highly uniform spectra result from the preprocessing. From neighboring pixels it was estimated that the variance between the spectra is only approximately twice the variance due to noise. To visualize the variations between the spectra of a Raman image, a principal component analysis (PCA) of the set of 400 spectra of an image was performed. Each spectrum gk(λ) of the image is represented as a linear combination of the orthonormal set of principal components pm(λ),

gk(λ) = ḡ(λ) + Σm akm pm(λ), (2)

where ḡ(λ) is the average spectrum of the image. Most of the variance is described by a small subset of the principal components. Figure 10 shows the first three principal components (m = 1, 2, 3) for an epoxy phantom. There are variations between the spectra in the wavelength region of 810 to 840 nm, which are probably due to slight variations of the filter transmission spectra: near the filter edge, the transmission curve shows fringes, and the angle of incidence, and therewith the effective filter response, varies slightly with the position of each pixel. Except for two small peaks at 825 nm and 900 nm, the principal components do not contain Raman bands of the epoxy sample; it therefore seems reasonable that they only describe the instrument's variation of spectral sensitivity.

Comparison of the Multichannel Setup with a Single-Channel Spectrograph

The measurements of epoxy phantoms and the PCA analysis described above were also used to evaluate the influence of the Raman-image inhomogeneities in a medical application. A single-channel spectrograph (constructed at the Laser- und Medizin-Technologie GmbH, Germany) has been successfully applied to examine biopsy tissue samples [5]. For this setup, a Raman image was realized using a motorized XY stage. The spectral resolution was 0.25 nm, i.e., similar to that of our instrument. Partial least squares discriminant analysis (PLS-DA) was used to discriminate normal from precancerous tissue. PLS-DA results in a discriminant function that assigns each tissue spectrum gk a scalar value dk by means of a scalar product involving a weight function b and an average spectrum ḡ (Fig. 11),

dk = ∫ b(λ) [gk(λ) − ḡ(λ)] dλ, (3)

and the tissue type is inferred from the sign of dk. If a spectrum is modified by the image inhomogeneities described above, this can lead to a change of the sign of dk, so that the quality of the discriminant analysis, characterized by its sensitivity and specificity [35], is reduced.
In principle, each tissue spectrum gk can occur at any position of the Raman image. Therefore, the expansion coefficients akm of equation (2) can attain any value found in the PCA analysis of the Raman-image inhomogeneity. It follows that for each dk a distribution of values with a standard deviation σ(dk) is found. Assuming a normal distribution for each dk, average values for sensitivity and specificity have been derived according to equation 4, as shown in Figure 12 as a function of σ(dk). The standard deviation follows from the PCA analysis of the epoxy phantom; with equation 5, a value of σ(dk) = 0.003 follows. One can conclude from Figure 12 that the Raman-image inhomogeneity will have no significant effect on the tissue discrimination in this application.

Fig. 12 Reduction of sensitivity and specificity for the discrimination of two tissue classes (normal, precancerous).

Nevus at the forearm in vivo

The setup had already been tested on porcine skin samples ex vivo [18]. To verify its capability for human skin in vivo, the forearm of a volunteer was placed on top of the measurement head. Figure 13, left delineates the test area as a skin imprint of the measurement head. A nevus with approximately 3 mm diameter can be seen at the upper left. The recording time was 2 min. Due to the fluorescence of the melanin, the position of the nevus can easily be seen from the signal strength of the raw data (not shown). However, the aim of the experiment was to identify the nevus with the aid of Raman spectra alone, without any fluorescence background. Thus, the fluorescence background was removed by a polynomial fit, followed by a normalization of the spectra. PCA was applied to the resulting background-free Raman spectra. Figure 13, center shows the Raman spectra at and beside the position of the nevus in vivo. Figure 13, right shows the difference of the principal components 2 and 3 (PC2 − PC3) obtained from the PCA. The nevus is clearly visible as a peak. The diameter of the peak's footprint is approximately 7 pixels, which corresponds to 3.5 mm, i.e., the Raman image reflects the actual size of the nevus in the camera image. Thus, the achieved resolution of 0.5 mm matches the spacing of the fibers in the matrix.

Discrimination of skin regions in vivo

As a test case, Raman-spectral maps of six skin regions (forearm, volar forearm, palm, thumb, leg, foot) were measured on four volunteers. The aim of this experiment was to verify whether the system is able to distinguish between different body parts. After preprocessing as described above, principal component analysis (PCA) was applied to the total set of spectra [36]. The scores of the first two principal components are shown in Figure 14. As can be seen from Figure 14, the skin regions differ in their spectral features; for example, leg (cyan points) and foot (light brown points) are well separated and can be easily discriminated. To investigate whether the skin regions can be discriminated using more spectral information, an unsupervised clustering algorithm was applied [37]. Each spectrum was reduced to the scores of four principal components; inclusion of more than four components did not improve the result.
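The processing chain described above (6th-order polynomial background removal, SNV normalization, reduction to a few PCA scores, and minimum-variance hierarchical clustering) can be sketched as follows; the spectra are synthetic stand-ins, and the original implementation may differ in detail:

```python
# Sketch of the spectral processing chain: 6th-order polynomial background
# removal, SNV normalization, PCA, and Ward (minimum variance) clustering.
# Spectra are synthetic; real baseline fits are typically done iteratively
# so that Raman bands are not subtracted along with the fluorescence.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
wn = np.linspace(800, 1800, 500)                      # Raman shift axis / cm^-1
spectra = (np.exp(-((wn - 1448) / 15) ** 2)           # a Raman band
           + 1e-6 * (wn - 800) ** 2                   # smooth fluorescence
           + 0.02 * rng.standard_normal((400, 500)))  # noise; 400 image pixels

def preprocess(s, order=6):
    out = np.empty_like(s)
    for i, y in enumerate(s):
        baseline = np.polyval(np.polyfit(wn, y, order), wn)
        y = y - baseline                              # remove fluorescence
        out[i] = (y - y.mean()) / y.std()             # SNV normalization
    return out

scores = PCA(n_components=4).fit_transform(preprocess(spectra))
labels = fcluster(linkage(scores, method="ward"), t=6, criterion="maxclust")
print(np.bincount(labels)[1:])                        # sizes of the six clusters
```

Reducing each spectrum to four PCA scores before clustering keeps the cluster analysis focused on the dominant spectral variations rather than on noise.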
The distribution of the spectra over six clusters shows that the spectra of forearm, palm and thumb, and likewise the spectra of forearm and leg, are not clearly separated by the cluster algorithm (Table 1). Only the spectra of foot skin are clearly distinct, probably due to the very large thickness of the stratum corneum at this site. The main reason for the missing discriminability of the skin of different body sites is the inter-individual variance of the skin: the thickness of the skin layers, the microcirculation of blood in the papillary dermis and the melanin concentration vary between body sites. The concentration of carotenoids, which accumulate mostly in the stratum corneum, is an individual parameter depending on the skin area [38] and the volunteer's lifestyle [39]. This becomes apparent when the cluster algorithm is applied to all spectra of a single person in vivo, as shown in Table 2 for the spectra of one person, where most of the skin spectra of a body site are assigned to a single cluster. For a discrimination of skin of different body sites without reference to individual properties, a more refined discrimination analysis including, for example, supervised learning and feature extraction is necessary [40]. In any case, inter-area differences (Fig. 14) exist and should be taken into account when comparing cancerous and healthy skin samples. However, this is beyond the scope of this pilot study and would require the investigation of many more samples: clearly the objective of a detailed follow-up study.

Multiplex Raman for cancer diagnostics

Raman spectra of cancerous and healthy human skin tissue were recorded ex vivo and investigated with regard to a possible future diagnostic for skin cancer in vivo. Pairs of healthy and cancerous skin biopsy samples were supplied by the Department of Dermatology, Charité. The samples were taken from various patients after surgery and are listed in Table 3. The experiments were approved by the ethics committee of the Charité-Universitätsmedizin Berlin (EA1/340/16) and conducted according to the Declaration of Helsinki. All volunteers gave their written informed consent. Most of the samples were cylindrical, since they were taken by core biopsy. After removal from the patient, samples 4a and 4b were stored at 6 °C and measured one day later. All other samples were first frozen at -30 °C and stored. For the transportation from the hospital to the location of the Raman spectrograph, an ice-filled styrofoam box was used. Before starting the Raman measurements, the samples were slowly thawed in a refrigerator at 6 °C. Figure 15, left shows the biopsy sample 6b (Table 3) placed on top of the image acquisition head. Two parallel needles were used to hold the sample by clamping or skewering. Before and during the measurement, the sample was covered with a lid containing a wet paper tissue to avoid drying. For some of the samples, it was necessary to increase the excitation intensity. The maximum output power of the laser source used is 500 mW; taking into account the MLA fill factor and losses in the excitation pathway, this limits the excitation intensity to approx. 0.8 mW/pixel when illuminating all 400 lenses, which is safe for in vivo application. For these samples, replacing the 600 µm square-core fiber with a 300 µm square-core fiber concentrated the excitation power onto 100 lenses, i.e., the excitation power per spot increased four-fold. Since the number of available biopsy samples was too low to perform any reliable refined discrimination analysis, it was decided to assess the results on the basis of the average spectra.
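An assessment based on class-average spectra reduces each group to a mean ± σ band together with simple band-intensity descriptors, such as the Amide III region ratio discussed below. A minimal sketch of such a comparison, with synthetic spectra in place of the measurements and band limits taken from the regions named in the text, might look like this:

```python
# Sketch: class-average spectra (mean and sigma) and a band-intensity ratio
# of the kind used to compare normal skin and BCC. Spectra are synthetic.
import numpy as np

wn = np.linspace(800, 1800, 1000)          # Raman shift axis / cm^-1
rng = np.random.default_rng(0)
normal = 1.0 + 0.05 * rng.standard_normal((40, 1000))   # stand-in spectra
bcc = 0.95 + 0.05 * rng.standard_normal((30, 1000))

def band_mean(s, lo, hi):
    """Mean intensity of each spectrum within [lo, hi] cm^-1."""
    sel = (wn >= lo) & (wn <= hi)
    return s[:, sel].mean(axis=1)

for name, s in [("normal", normal), ("BCC", bcc)]:
    avg, sigma = s.mean(axis=0), s.std(axis=0)    # class-average band (cf. Fig. 16)
    ratio = band_mean(s, 1220, 1290) / band_mean(s, 1290, 1360)  # Amide III ratio
    print(f"{name}: amide III ratio = {ratio.mean():.3f} +/- {ratio.std():.3f}")
```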
Biopsy samples of normal skin from four patients and samples of basal cell carcinoma (BCC) from three patients were measured, summarized and normalized. The available samples were either entirely affected by cancer or entirely healthy; thus, within a sample, the spectral patterns were quite similar at all positions. Figure 16 shows the resulting ranges of Raman spectral values for both types of skin. The comparison of the Raman spectra reveals the following differences between normal skin and BCC:
- a narrower line shape for BCC in the Amide I region (1640-1680 cm⁻¹);
- a decreased intensity ratio of the region 1220-1290 cm⁻¹ (Amide III region) to the region 1290-1360 cm⁻¹ for BCC;
- decreasing bands in the region 830-980 cm⁻¹ for BCC.
These differences are in accordance with published results obtained with a single-channel Raman setup [41]. However, with our setup, far more pixels could be measured in a shorter time: in [41], 10 min of measurement time were needed to record the Raman spectrum of one single 100 µm diameter spot, whereas in this work the Raman spectra of 100 spots were recorded within 2 min. Even though the experimental conditions differ to a certain extent (up-to-dateness of the hardware, excitation wavelength, excitation power), the achieved increase in measurement speed from 10 min/spot to 1.2 s/spot is striking and documents the capability of IFS even in the field of optical cancer diagnosis.

Conclusion and Outlook

By transferring IFS from astronomy to imaging Raman spectroscopy, a setup was realized that is capable of measuring the Raman spectra of 400 pixels of a 1 cm² sample simultaneously, without any scanning procedure. Within this work, the applicability of this setup for the examination of human skin patches was investigated. A Monte Carlo simulation yielded that a spatial resolution of approximately 1 mm can be achieved when illuminating the whole sample area with the excitation light. This result was experimentally confirmed by recording the Raman image of a nevus in vivo. Finally, biopsy samples of cancerous (BCC, SCC and AK) and healthy parts of skin were examined. The changes in the Raman spectra match literature values obtained with a classic single-channel spectrometer. Although the few small biopsy samples available for this experiment did not allow us to directly localize distinct malignant and benign tissue on a single sample, as would be required to actually prove the capability of this method to determine resection margins as the ultimate goal for surgery, we were able to demonstrate its plausibility by means of in vivo measurements of a nevus. Clearly, the next step must be a detailed clinical study to validate this finding on the basis of statistically meaningful samples and larger pieces of tissue. However, already the results of our pilot study show great promise for the development of a minimally invasive optical medical device for detecting cancerous tissue in vivo in the future. Currently, a follow-up project is being undertaken to examine the implementation of IFS in a medical endoscope.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Fig. 1 Setup for generating Raman images from 1 cm² skin patches without scanning. See explanation in the text.

Fig. 2 Metal housing of the 20 × 20 fiber matrix. Every fiber front surface corresponds to one pixel of the Raman image.
Fig. 3 Left: simulated excitation intensity distribution at the sample side; the grey scale is logarithmic. Right: unprocessed (linear) camera image of a scale paper placed on top of the image acquisition head; the spots show the excitation laser light.

Fig. 4 Monte Carlo simulation of multiple scattering in skin. Left: XZ projection showing the traces of photons impinging on the skin sample at the zero position. Right: contributions of signals arising at neighboring excitation spots.

Figure 4, right shows the contributions of adjacent pixels in a 0.5 mm raster. A signal strength of 1.00 is assumed if only the pixel at the detection position (the pixel at the center of the inset) were illuminated. If the neighboring pixels are also illuminated, additional Raman-scattered photons reach the pixel in the center: the four neighboring pixels at 0.5 mm distance add 0.23 to the signal strength, the four neighboring pixels at 0.71 mm distance add 0.18, and at 1.1 mm distance there are 8 pixels contributing 0.25 in total. Adding up all neighboring contributions, it turns out that approximately half of the signal detected at a certain position comes from scattered light originating at adjacent pixels.

Fig. 5 Calculated Raman signal (normalized to the maximum value) for a large-area Raman source restricted to x > 0, for illumination of all pixels (fiber grid) and a stepwise illumination of only one pixel (single fiber). The abscissa is the position of the receiving fiber relative to the boundary of the Raman source.

Fig. 6 Monte Carlo simulation of the Raman signal contribution from different depths of skin. w_z(z) is the laterally integrated weight function (see Eq. 1). Bold: illumination of all excitation spots. Thin line: illumination at the detection spot only. The functions are normalized to unit area.

Figure 6 shows the depth dependence of the weight function w(r) for the Raman signal detected at one spot. The thin curve shows the signal contributions from Raman scattering at different depths as they would be received if only the detection pixel (spot) were illuminated; in this case, most of the signal would arise in a range from the surface to approximately 0.3 mm depth. The bold curve shows the corresponding contributions if all pixels (spots) are illuminated. In comparison to single-pixel illumination, the simultaneous illumination of all pixels results in the detection of Raman scattering processes at larger depths, due to the crosstalk discussed for Figure 4. In the case of real skin, the most intense Raman signals arise from the epidermis and the top of the papillary dermis.

Fig. 7 Raw spectra of an epoxy phantom measured with a single-channel spectrometer at different positions within a square of 2 cm border length.

Fig. 8 Intensity distribution when measuring a homogeneous phantom. Instead of a flat distribution, the result shows a dome shape.

Fig. 9 Average and range of variation (± σ) of normalized Raman spectra of a homogeneous epoxy sample.

Fig. 10 Principal component analysis: first principal components p_1 (bottom), p_2 (center), and p_3 (top) describing the largest variance between the spectra (the offsets are introduced for clarity).

Fig. 11 Average tissue spectrum (thin line) and weight function b (thick line) of the PLS-DA.
Fig. 13, left: Nevus on a forearm; the circular skin imprint caused by the image acquisition head has a diameter of 20 mm. Center: Raman spectra at the position of the nevus (dotted curve) and beside it (continuous curve). Right: The difference of the principal components PC2 and PC3 of the Raman spectra reflects the position of the nevus.

Fig. 14 Scores of two principal components of skin Raman spectra from four persons in vivo. Colors according to the analyzed skin of different body sites.

Fig. 15, left: Human skin biopsy sample from the external ear on top of the image acquisition head (sample 6b listed in Table 3). Center: Ex vivo Raman spectra of cancerous human skin tissue at position (8, 10) (dotted line) in comparison with the corresponding healthy biopsy sample (continuous line). Right: Intensity of the Raman signal at 1448 cm⁻¹ in color levels.

Fig. 16 Range of spectral values (average ± σ) for normal skin (gray) and BCC biopsies (black). Wavenumber regions marked with arrows are discussed in the text.

See Table 2 for an example; here most of the skin spectra of a body site are assigned to a single cluster.

Table 1 Partition of skin spectra in vivo from four persons into six clusters obtained by unsupervised hierarchical clustering using a minimum variance method [37].

Table 2 Partition of skin spectra in vivo into six clusters obtained by hierarchical clustering.

Table 3 Biopsy skin samples. BCC: basal cell carcinoma, SCC: squamous cell carcinoma, AK: actinic keratosis. Two consecutive samples originate from one volunteer, where the first sample is normal and the second sample is cancerous.
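A quick numerical check of two figures quoted above: the crosstalk contributions discussed for Fig. 4 and the measurement-speed comparison with [41]. This is a minimal Python sketch; the values are transcribed from the text, and the variable names are our own:

# Crosstalk: summed neighbor contributions quoted for Fig. 4 (the signal of the
# detection pixel alone is normalized to 1.00).
center = 1.00
shell_sums = {0.5: 0.23, 0.71: 0.18, 1.1: 0.25}   # distance in mm -> summed contribution
neighbors = sum(shell_sums.values())              # 0.66 from the three listed shells
print(f"neighbor fraction: {neighbors / (center + neighbors):.2f}")
# ~0.40 from the listed shells alone; contributions from more distant pixels
# push this toward the "approximately half" stated in the text.

# Throughput: 100 spots recorded in 2 min versus 10 min per spot in [41].
seconds_per_spot = 2 * 60 / 100
print(f"{seconds_per_spot:.1f} s per spot, speed-up ~x{10 * 60 / seconds_per_spot:.0f}")
# -> 1.2 s per spot, a roughly 500-fold increase in measurement speed.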
Lung Transplantation

We have conceived the human heart as the main source of our deep emotions and feelings, a place where our very consciousness resides, as portrayed by Edgar Allan Poe in his famous short story The Tell-Tale Heart: "I felt that I must scream or die! And now—again!—hark! Louder! Louder! Louder! Louder!" Dr. John Gibbon Jr.
used a heart-lung respirator for the first time in 1953 to keep a patient alive while performing heart surgery. Dr. Norman Shumway at Stanford developed and perfected the first surgical technique leading to heart transplantation surgery. After Dr. Christiaan Barnard's first orthotopic heart transplant in December 1967 and Dr. Shumway's first heart transplant in the United States in January 1968, heart transplantation became a standard therapeutic option for life-threatening congestive failure and started to be performed in the hundreds over the following years at different centers. Heart transplant surgery faced complications due in part to rejection and infection. However, the development of more selective immunosuppressive therapy and improvements in the prevention, detection, and treatment of infections allowed heart transplant surgery to increase rapidly worldwide. Four thousand ninety-six heart transplants (3529 in adults) were reported to the International Society of Heart and Lung Transplant Registry (ISHL) in 2011 [1]. The landscape of infection affecting heart transplant patients has been shaped by several factors: (A) implementation of more selective calcineurin-based immunosuppressive protocols; (B) lessened immunosuppressive induction regimens; (C) the institution of antimicrobial prophylaxis, resulting in a significant decrease or delay in the emergence of major infectious episodes, including Pneumocystis jirovecii (PCP), Nocardia spp., Listeria spp., Toxoplasma gondii, cytomegalovirus (CMV), herpes simplex virus (HSV), varicella zoster virus (VZV), and invasive fungal infections; (D) introduction of novel diagnostic technology facilitating earlier recognition and treatment of infections; (E) expansion of the criteria to select donors and recipients to include various scenarios dealing with HBV, HCV, and HIV infections [2]; and (F) a shift toward predominantly Gram-positive bacterial infections and multiresistant bacteria in recent years [3-5]. A Stanford team led by Dr. Bruce Reitz performed lung transplantation as a combined heart-lung transplant procedure in 1981 [6]. Shortly after, thoracic surgeons optimized the single- and double-lung transplant procedures. Improvements in surgical techniques, especially the bronchial anastomosis, and the evolution of flush-perfusion lung preservation decreased perioperative bronchial complications substantially. Similarly to heart transplantation, improvements in immunosuppressive regimens, antimicrobial prophylaxis, and graft preservation led to enhanced survival among lung transplant recipients. In contrast to cardiac transplantation, lung transplantation has faced the challenge of infections unique to the transplant of this organ. Mold infections of the anastomotic site, host-versus-graft disease, and serious infections with Mycobacterium abscessus, Chlamydia spp., bronchiolitis, and the Burkholderia cepacia complex are among the infectious complications rarely observed in other transplant patients [7]. Transplantation of thoracic organs has improved the quality of life and prevented the death of thousands of individuals worldwide. Graft survival and life expectancy have been markedly improved in these patients due to the introduction of more optimal immunosuppression, antimicrobial prophylaxis, and diagnostic technology allowing the earlier diagnosis and treatment of infection and rejection.
Finally, further control of infection is likely to result from the implementation of new approaches to assess the net state of immunosuppression in these patients.

Epidemiology

Infection was recognized as a major threat to thoracic transplantation from its early inception [8]. Several factors predispose thoracic transplant recipients to infections: (A) factors present before transplantation: age, presence of comorbidities (e.g., chronic kidney disease, diabetes mellitus, cancer), nutritional status, latent infections, colonization with healthcare-associated organisms, and occult community-acquired infections; (B) factors during the surgery: duration of the transplant procedure, graft injury including ischemic time, colonization or latent infection of the graft, surgical instrumentation (e.g., mechanical ventilation, invasive devices such as catheters, drains, and Foley catheters), ICU stay, and need for re-interventions; and (C) factors present after transplant: degree of immunosuppression, CMV infection, and rejection episodes.

Heart Transplant Infections

A total of 4096 heart transplants were performed in 2011. Heart transplant recipients have an average age of 54 years and are predominantly men (76%). They have a significant history of smoking (46%) and hypertension (45%), with cardiomyopathy (54%) followed by coronary artery disease (37%) as the leading causes of transplant [1]. The historical (pediatric and adult transplants between 1982 and 2011) 1-year, 5-year, and 10-year survival rates are 81%, 69%, and 50%, respectively. Overall median survival is 11 years, increasing to 13 years for those surviving the first year after transplantation. Although not associated with increased posttransplant mortality, infections before transplant can affect up to 25% of heart transplant candidates, with bronchitis and soft tissue infections being the most common [9]. Despite no major changes in the distribution of causes of death since 1994, infection remains a predominant cause of mortality during the first 3 years after transplant, accounting for up to almost 20% of deaths [3]. The global incidence of infections in heart transplant ranges between 30% and 60%, and the associated mortality between 4% and 15% [10]. The incidence of infection, measured as major infectious episodes per patient, has steadily declined from 2.83 in the early 1970s to 0.81 in the early 2000s [3,8,11]. The most frequent type of infection is bacterial (44%), followed by viral (42%), fungal including Pneumocystis jirovecii (14%), and protozoal (0.6%). Unfavorable functional outcomes are observed in patients who develop infections in the first year after transplant, mainly associated with bloodstream, CMV, and lung infections [12]. Pulmonary and central nervous system (CNS) infections are independent predictors of mortality among heart transplant recipients. Reactivation of latent parasitic infections residing in extra-cardiac tissues of the host or transmitted in the transplanted heart is an important consideration. The classic example is the reactivation of Trypanosoma cruzi. Chagas disease is a vector-borne illness transmitted by triatomine bugs, and it is endemic in Latin America. The ethnicity or origin of either the donor or the recipient from these regions should raise concern for possible reactivation.
Chagas reactivation was documented in 38.8% of cases in a cohort of Brazilian heart transplant recipients, where Chagas cardiomyopathy was the second most common indication for transplant (34.9%) [13]. Chagas can also reactivate from a transplanted heart procured from a seropositive donor and transplanted into a seronegative recipient. Although its prevalence has decreased substantially in the most recent eras, toxoplasmosis is another important consideration in this setting. Similarly to Chagas, Toxoplasma gondii (which also has a predilection for invading the myocardium) can be transmitted by reactivation of quiescent cysts in the recipient or the transplanted heart [14].

Lung and Heart-Lung Transplant Infections

By 2011, 3640 adults had received lung transplantation, the highest reported number of procedures up to that date, driven mainly by the increase of double-lung transplants. Double-lung transplant is indicated for septic lung diseases (e.g., cystic fibrosis). Around 66% of recipients were aged 45-65 years. The most frequent indications for transplant were COPD (34%), followed by interstitial lung disease (ILD) (24%), bronchiectasis associated with cystic fibrosis (CF) (17%), and α1AT deficiency-related COPD (6%) [15]. The overall (from 1994 to 2011) 1-year, 5-year, and 10-year survival rates among lung recipients are 79%, 53%, and 31%, respectively. Overall median survival is 5.6 years. Lung transplants from CMV-seronegative donors have better survival rates than those from CMV-seropositive donors. Thirty-day mortality was led by graft failure (24.7%) and non-CMV infections (19.6%). During the remainder of the first year, non-CMV infections were the leading cause of death (35.6%). Infection remains prominent as a cause of death beyond the first year after transplant, following bronchiolitis obliterans syndrome (BOS)/chronic lung rejection and graft failure [15]. Other infectious complications historically present among the ten primary causes of death within the first year include sepsis, pneumonia, and fungal infections [16]. A high lung allocation score (LAS) at the time of transplantation is associated with lower 1-year survival and higher rates of infections among lung transplant recipients [17]. Sixty-three adult heart-lung transplantations were reported to the ISHL registry in 2011. Sixty-six percent of recipients were 18-49 years old. Sixty-three percent of the indications were congenital heart disease and idiopathic pulmonary arterial hypertension. Heart-lung transplantation for CF was more frequent in Europe and other centers compared to North America. When compared to lung-only transplants, short-term survival was worse, but long-term survival was better, for heart-lung transplant recipients. Their 1-year, 5-year, and 10-year survival rates were 63%, 44%, and 31%, respectively. The median survival was 3.3 years, and 10 years for those surviving the first year. Similarly, they have graft failure (27%), technical complications (21.9%), and non-CMV infections (17.8%) as the leading causes of death during the first 30 days posttransplant. Non-CMV infections (35.1%) were the top cause of death after 1 month and within 1 year of transplant. After the first year, BOS/late graft failure and non-CMV infections were the predominant causes of death [15]. Among other risk factors for mortality in lung transplantation are cystic fibrosis, nosocomial infections, and mechanical ventilation before transplant [18].
Infections in lung transplant recipients are predominantly bacterial (48%), followed by viral (35%), fungal (13%), and mycobacterial (4%) [19]. In 60% of cases, the infection site is pulmonary. Risk factors for infection vary by the type of organism. Mechanical ventilation (MV) for >5 days immediately following transplant surgery and isolation of Staphylococcus aureus (SA) from airway cultures of the recipient were considered risk factors for invasive SA infections in a retrospective study of patients with lung and heart-lung transplants [20]. Likewise, risk factors for the development of healthcare-associated infections with Gram-negative organisms, Aspergillus, Legionella, and MRSA (methicillin-resistant Staphylococcus aureus) include prolonged MV, renal failure, use of ATG (antithymocyte globulin), and recurrent rejection episodes [21]. Additionally, α-1-antitrypsin deficiency and repeat transplantation are also risk factors for nosocomial infections. Mycobacterium tuberculosis transmission from lung donors with latent infection has been documented in highly endemic areas [22]. Colonization with MDR organisms (Pseudomonas aeruginosa, Burkholderia, Acinetobacter, nontuberculous mycobacteria (NTM), and Scedosporium) before transplant, especially important in CF patients, can predict the development of infections that are challenging to treat after transplant [23].

Pretransplant Screening of Recipients

Patients should undergo a comprehensive evaluation of potential infectious complications associated with transplantation. A detailed medical history should be obtained, including previous vaccinations, history of past infections, exposures (geographical, occupational, animal, etc.), travel, and foreign-born status, among others. Clinicians should perform routine serologies for the detection of pathogen-specific IgG for CMV, HSV, EBV (VCA), VZV, hepatitis B (HBsAg, HBsAb, HBcAb), HIV, hepatitis C, and syphilis. Toxoplasma IgG should also be performed in heart and heart-lung transplant candidates. Additionally, we recommend obtaining UA, urine culture, CXR, and a tuberculin skin test (TST) or a Quantiferon assay. In lung and heart-lung transplant candidates, sputum should be cultured for bacterial, fungal, and AFB studies. (These recommendations are gathered in a short checklist sketch at the end of this screening and prevention discussion.) Some centers advocate screening patients for colonization with MDR (multidrug-resistant) bacteria such as MRSA and VRE (vancomycin-resistant Enterococci), which may have an impact on the type of antibacterial prophylaxis used preoperatively or on the empirical antibiotics should sepsis develop in the immediate postoperative period. In potential lung recipients, previous respiratory colonization with MDR Pseudomonas, especially in CF patients, should not exclude them from transplant [24]. On the other hand, if colonization with B. cenocepacia (genomovar III) is present in CF, transplantation is relatively contraindicated [25,26]. Histoplasma capsulatum can reactivate during immunosuppressive therapy [32], but histoplasmosis after solid organ transplantation (SOT) is rare and attributable to transmission from the donor [33]. Furthermore, latent histoplasmosis can be present with negative serologies, and treatment after transplant carries a good outcome; therefore, the role of screening for histoplasmosis is of questionable significance [34].

Pretransplant Screening of Donors

The type of evaluation may change depending on whether the donor is alive or deceased and on the time available to collect the samples.
Similarly to recipients, donors should undergo a comprehensive assessment including a complete history, assessment of risk factors, exposures, immunizations, and previous or current infections. Donors should be screened for HIV, hepatitis B/C, syphilis, and tuberculosis. Furthermore, we recommend obtaining serologies for CMV, EBV, HSV, VZV, and Toxoplasma gondii, and for HTLV-1/HTLV-2 in endemic areas. In high-risk donors, the use of nucleic acid amplification tests (NAAT) for HBV, HCV, and HIV should be considered. Additionally, blood cultures to document an occult bacteremia are recommended. In lung transplant donors, we recommend obtaining respiratory cultures through bronchoscopy to detect colonizing organisms and target them to prevent invasive infections in the recipient. Culturing the media of the allograft during acquisition or processing has been advocated to reduce the risk of mycotic aneurysms among kidney transplant recipients, which may apply to other SOT [35]. Screening of donors for endemic mycoses is not well established. On the other hand, heart transplant donors should be screened for Chagas if the donor was born in Latin America [29]. Finally, it is important to highlight the increased recognition of emerging, unusual viral infections such as West Nile virus, lymphocytic choriomeningitis virus, rabies, and different human coronaviruses [34,36]. Testing for those organisms should be done based on individual assessments.

Immunizations

Immunization should be optimized before transplantation, since the recipient will have a better chance of mounting an adequate immune response [37]. The Advisory Committee on Immunization Practices (ACIP) [38] and the guidelines for immunizations in solid organ transplantation [39] recommend inactivated influenza vaccine annually. Tetanus, diphtheria, and acellular pertussis (Tdap) vaccine should be administered to all adults who have not previously received Tdap or have an unknown status. Varicella vaccination (two doses in patients without evidence of immunity) or a single dose of zoster vaccine, inactivated polio vaccine, hepatitis A/B, HPV (three-dose series through 26 years of age), and meningococcal and pneumococcal vaccines should be administered [38]. It is remarkably important to vaccinate all household members as well. BCG and rabies vaccines can be considered under some extenuating or exposure-related indications. See Table 2.3.

Avoidance of Exposures

Education of the patient and the family members is a cornerstone of establishing effective preventive measures. Emphasis should be placed on hand hygiene and food handling. Additionally, potential sources of bacteria, fungi (e.g., Aspergillus), and toxoplasmosis, such as plants and flowers, cleaning pets' litter or cages, eating uncooked meat, acquiring new pets, construction areas, farming, barnyard activities, and smoking marijuana, should be avoided. If such recreational or occupational exposures are unavoidable, appropriate gear, such as gloves, must be worn. Education about possible community exposures is also important. Close contact with persons with fever or rash potentially infected with VZV, herpes zoster, or influenza should be avoided as well. Patients should cook all meals thoroughly, wash all fruits and vegetables, and avoid all unpasteurized products. Safe sex practices are recommended. If any foreign travel is planned, evaluation in a specialized travel clinic is advisable.
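As a compact summary of the recipient screening panel recommended above, the following Python sketch assembles an organ-specific checklist. The item names and structure are our own illustrative shorthand, not a validated clinical protocol:

# Illustrative pretransplant recipient screening checklist (shorthand only).
BASE_SEROLOGIES = ["CMV IgG", "HSV IgG", "EBV (VCA) IgG", "VZV IgG",
                   "HBsAg", "HBsAb", "HBcAb", "HIV", "HCV", "syphilis"]
BASE_TESTS = ["UA", "urine culture", "CXR", "TST or Quantiferon assay"]

def recipient_screening(organ):
    # Start from the serologies and routine tests recommended for all candidates.
    items = BASE_SEROLOGIES + BASE_TESTS
    if organ in ("heart", "heart-lung"):
        items = items + ["Toxoplasma IgG"]  # heart and heart-lung candidates
    if organ in ("lung", "heart-lung"):
        items = items + ["sputum cultures (bacterial, fungal, AFB)"]
    return items

print(recipient_screening("heart-lung"))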
Prophylaxis

Guidelines for the management of surgical antimicrobial prophylaxis list cefazolin (2 g, or 3 g for patients weighing >120 kg, redosed every 4 h) as the recommended regimen for heart, lung, and heart-lung transplantation surgery; this weight-based rule is transcribed in the sketch following the posttransplant timeline below. Clindamycin (900 mg every 6 h) or vancomycin (15 mg/kg) can be substituted as alternative agents in beta-lactam-allergic patients [40,41]. This recommendation can be adjusted individually, based on local hospital surveillance data or previous knowledge of colonizing organisms (e.g., addition of aztreonam, gentamicin, or a single quinolone dose). However, the widespread use of quinolones may promote the resurgence of antimicrobial resistance. The antibiotic should be administered within 60 min before surgical incision (within 120 min for vancomycin or quinolones) and continued for 24-48 h in heart transplants and for 48-72 h, and no longer than 7 days, in lung and heart-lung transplant recipients. The recommendation to continue antibacterial prophylaxis until chest and mediastinal tubes are removed lacks sufficient evidence. Redosing will depend on the procedure duration and associated blood loss. The recipient does not need treatment if a localized infection was present in the donor, except in meningitis, where concomitant bacteremia often coexists; in meningitis and bacteremia, it is prudent to treat the recipient for 2-4 weeks [34]. Indications for antifungal prophylaxis in heart transplant recipients are not clear. A systematic review showed no benefit of antifungal therapy to prevent invasive fungal infections in transplant recipients other than liver [42]. A prospective cohort of heart transplant recipients showed that targeted prophylaxis (an echinocandin for a median of 30 days in the presence of at least one risk factor for invasive aspergillosis (IA): reoperation, cytomegalovirus disease, posttransplantation hemodialysis, or another patient with IA in the program 2 months before or after the procedure) was highly effective and safe in preventing IA episodes [43]; nevertheless, no consensus exists for universal antifungal prophylaxis in heart transplant recipients. Most centers have adopted antifungal prophylaxis including inhaled amphotericin B, oral itraconazole, or IV targeted echinocandin prophylaxis. In lung and heart-lung transplant recipients, fungal prophylaxis should be considered, especially if pretransplantation respiratory cultures, either from the donor lung or the recipient airways, show Aspergillus or Candida. One approach is to use inhaled amphotericin B (50 or 100 mg in extubated or intubated patients, respectively) daily until 4 days after transplant and then weekly until hospital discharge in patients with no known colonization [44,45]. If a mold has been isolated, voriconazole is recommended for up to 4 months after transplant. Although evidence and efficacy need to be confirmed, combination antifungal prophylaxis is used at some centers [46]. Pneumocystis jiroveci prophylaxis is done with trimethoprim-sulfamethoxazole (TMP-SMX) for 6 months, up to 1 year; some centers extend PJP prophylaxis lifelong. TMP-SMX also confers protection against Toxoplasma, Nocardia, and Listeria species. Alternatively, dapsone, inhaled pentamidine, or atovaquone can be used in patients with a history of sulfa allergy. TMP-SMX is recommended lifelong at many centers in toxoplasmosis-seronegative recipients of seropositive cardiac donors (Toxoplasma D+/R−) [11]. CMV prevention is recommended for all D+/R− and R+ patients.
There are two common strategies for CMV prevention: antiviral prophylaxis and preemptive therapy. Both approaches have similar success rates, each with its advantages and disadvantages [47]. Guidelines recommend valganciclovir or intravenous ganciclovir as the preferred antivirals. Oral ganciclovir is an option in heart transplant patients, although it possesses a low oral bioavailability and therefore a theoretical risk of increased resistance. Often, CMV immune globulin is used as an adjunctive agent. In heart recipients, prophylaxis is recommended for 3-6 months in D+/R− and 3 months in R+. In lung and heart-lung recipients, the duration of prophylaxis is 12 months and 6-12 months in D+/R− and R+ recipients, respectively [48]; these durations are captured in the lookup sketch after the timeline below. In D−/R− patients not otherwise receiving CMV-active agents, antiviral prophylaxis against other herpes viruses, such as HSV and VZV, should be considered. Use of oral CMX001 (an oral lipid conjugate of cidofovir) in hematopoietic-cell transplants reduced CMV-related events and may have a potential role in preventing CMV in other transplant settings [49]. Refer to Table 2.4 for a list of prophylaxis recommendations.

<1 Month

This period is most commonly characterized by nosocomial bacterial infections; thus, the bacterial organisms present are often MDR (e.g., VRE, MRSA). In heart transplant recipients, skin and soft tissue infections (SSTI), surgical site infection, and mediastinitis are of concern during this period. Likewise, lung and heart-lung transplant recipients may develop infections related to previous respiratory colonization (Pseudomonas, Aspergillus). Other significant infections include aspiration pneumonitis, healthcare- and ventilator-associated pneumonia, catheter-related bloodstream infections (CRBSI), nosocomial UTIs, and Clostridium difficile colitis. Donor-derived infections can occur during this period and include HSV, lymphocytic choriomeningitis virus (LCMV), rhabdovirus (rabies), West Nile virus (WNV), and HIV. Toxoplasma gondii and Trypanosoma cruzi are also serious donor-derived infections in heart transplant recipients that can develop within the first 6 months posttransplantation [50].

1-6 Months

During this period, reactivation of latent infections usually occurs. Hence, bacterial infections such as those caused by Nocardia asteroides, Listeria monocytogenes, and Mycobacterium tuberculosis typically occur. Additionally, fungal infections by Aspergillus spp., Cryptococcus neoformans, and P. jiroveci, and parasitic infections by Toxoplasma gondii, Leishmania spp., Strongyloides, and Trypanosoma cruzi can also be seen. Viral infections present during this period include herpesviruses (HSV, VZV, CMV, and EBV) and adenovirus.

>6 Months

Infections developing after 6 months are predominantly community-acquired pneumonia and urinary tract infections. Other diseases include infections with Aspergillus and Mucor species, Nocardia, and Rhodococcus, and late viral infections including CMV, hepatitis B and C, JC polyomavirus infection, posttransplant lymphoproliferative disorder (PTLD), HSV encephalitis, and viral community-acquired infections (e.g., coronavirus, West Nile virus, influenza).
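The weight-based cefazolin rule and the serostatus-dependent CMV prophylaxis durations referenced above reduce to simple lookups. The Python sketch below merely transcribes the values stated in this section; the function and key names are our own, the ranges are simplified for illustration, and real dosing is individualized:

# Transcription of prophylaxis rules stated in the text (illustrative only).

def cefazolin_dose_g(weight_kg):
    # Surgical prophylaxis: 2 g cefazolin, or 3 g for patients weighing >120 kg
    # (redosed every 4 h during the procedure).
    return 3 if weight_kg > 120 else 2

# CMV prophylaxis duration in months as (minimum, maximum), keyed by organ and
# donor/recipient serostatus; heart-lung follows the lung durations here.
CMV_MONTHS = {
    ("heart", "D+/R-"): (3, 6),
    ("heart", "R+"):    (3, 3),
    ("lung",  "D+/R-"): (12, 12),
    ("lung",  "R+"):    (6, 12),
}

def cmv_duration(organ, serostatus):
    organ = "lung" if organ == "heart-lung" else organ
    return CMV_MONTHS[(organ, serostatus)]

print(cefazolin_dose_g(130))             # -> 3
print(cmv_duration("heart-lung", "R+"))  # -> (6, 12)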
Infections

It is important to recognize transplant recipients as a patient population with increased susceptibility to infections, and to have a low threshold to perform a diagnostic workup in the presence of any concerning signs or symptoms. (Note that doses of valganciclovir, ganciclovir, and other antimicrobials may require adjustment for renal function.) Infection monitoring is also done in a structured way when preemptive therapy for CMV is in place (as opposed to universal prophylaxis). Protocols vary by transplant center but usually imply weekly CMV PCR or pp65 antigen monitoring [51]. Likewise, monitoring of cell-mediated immunity (CMI) using a Quantiferon-CMV assay may be useful for predicting late-onset CMV disease once CMV prophylaxis has been stopped [52]. CMI has also been monitored for EBV using an enzyme-linked immunospot assay [53]. Immunoglobulin G (IgG), C3, and IgG2 levels and NK cell counts have been proposed as attempts to identify the risk of infection in heart transplant recipients within the first year [54].

Drug-Drug Interactions

Significant drug-drug interactions exist among antimicrobial and immunosuppressive agents, and the patient's medication list should be reviewed carefully. Strong CYP3A4 inducers such as nafcillin reduce tacrolimus serum concentrations. In contrast, azoles such as fluconazole can result in increased levels of tacrolimus or cyclosporine. For voriconazole, the dose of tacrolimus needs to be reduced by two-thirds [55] and the cyclosporine dose by 50% [56]; a toy dose-adjustment sketch follows the fungal-infection discussion below. Rifamycins can have the opposite drug-drug interaction, decreasing the concentrations of prednisone, cyclosporine, tacrolimus, sirolimus, and mycophenolate mofetil (MMF) [57,58]. Likewise, tacrolimus administration along with quinolones may cause QT prolongation [59].

Infecting Microbial Agents

Bacterial

In heart transplant patients, bacterial infections have clinical manifestations similar to those commonly observed in other patient populations; however, clinical signs may be subtle or absent (e.g., the patient may be afebrile). They are the most frequent type of infection in this setting, accounting for up to 50% of all infections [3]. The most common are pulmonary infections, followed by bacteremias, mediastinal infections, and skin infections. Staphylococcus aureus (predominantly methicillin-resistant) can cause SSTI, ventilator-associated pneumonia, mediastinitis, CRBSI, other forms of bacteremia, and osteomyelitis. In contrast, coagulase-negative Staphylococcus is more commonly associated with CRBSI. Among Gram-negative bacteria, Pseudomonas aeruginosa is common, usually of pulmonary origin. Escherichia coli is the primary causal organism of UTIs. Extended-spectrum β-lactamase (ESBL)-producing Klebsiella pneumoniae, Escherichia coli, Klebsiella oxytoca, and Citrobacter freundii are also found in 2.2% of heart transplant recipients [60]. Nocardia species are well recognized as opportunistic pathogens in this setting. Although relatively rare in heart transplant recipients (frequency <1%), heart transplant is second only to lung transplant in the frequency of Nocardia infection [61-63]. Pertinent independent risk factors associated with the development of this infection in SOT include high-dose steroids, history of CMV disease, and high levels of calcineurin inhibitors [62].
With the almost universal prophylaxis with TMP-SMX, Nocardia infection is less common and often presents late, usually after 1 year posttransplant [63]. When it occurs, it predominantly affects the lung, which is the portal of entry for disseminated infections and CNS invasion. It can also cause skin nodules and abscesses. Listeria monocytogenes can also be seen in heart transplant recipients and can account for a significant proportion of the bacterial meningitis cases in this setting [64]. Additionally, myocarditis and myocardial abscesses with this organism have been documented [65]. Mycobacterium tuberculosis and nontuberculous mycobacteria (NTM), although documented to occur in heart transplantation, are rare in the United States [66,67]. However, it is important to recognize that tuberculosis (TB) can be more prevalent in some endemic regions and often presents with extrapulmonary involvement [68,69]. Legionellosis and Rhodococcus equi infection, with mainly pulmonary manifestations (pneumonia, pulmonary infiltrates, or cavitation), are other significant infections among heart transplant recipients [70].

Fungal

Fungal infections excluding PCP represent around 4.0% of all infections. Of these, invasive mold infections (IMI) contribute significantly to morbidity and mortality among heart transplant recipients. The incidence in this population can reach 10 per 1000 person-years, and the associated mortality is approximately 17% [71]. Aspergillus represents up to 65% of all IMI. Its median time of onset is about 46 days, although late presentation (>90 days) has more recently been recognized in association with receipt of sirolimus in conjunction with tacrolimus for refractory rejection or cardiac allograft vasculopathy [72]. The most common clinical presentation of aspergillosis includes fever, cough, and single or multiple pulmonary nodules [73]. Extrapulmonary manifestations include spondylodiscitis, infective endocarditis, mediastinitis, endophthalmitis, and brain and cutaneous abscesses [74-78]. Dissemination tends to affect the CNS in a good proportion of cases. Mucormycosis is the second most frequent mold infection affecting heart transplant recipients. Mucor, along with other non-Aspergillus molds (e.g., Scedosporium, Ochroconis gallopava), is associated with disseminated infections, CNS involvement, and poorer outcomes [79,80]. Pneumocystis jiroveci (PCP), although its incidence has been markedly reduced by the introduction of universal prophylaxis, is still a significant pathogen, and cases may occur late after heart transplant. Cryptococcosis, although infrequent among SOT patients, has its highest incidence in heart transplant recipients [81]. Usually, its manifestations present late and affect predominantly the lungs and the CNS. Histoplasmosis and coccidioidomycosis typically occur in the first year after transplant; antigenuria is the most sensitive diagnostic test for histoplasmosis in SOT [82]. Finally, Candida infections are an important cause of morbidity and mortality as well. The rate of colonization is higher than in the general population [83]. Candida most commonly causes oral mucosal infection. Although there has been a decline in invasive infections over time, these do occur, typically in the form of bloodstream infections secondary to catheter-related infections, tracheobronchitis, or disseminated disease [84]. Additionally, other confined end-organ injuries such as endophthalmitis and esophagitis can also be seen.
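Returning to the drug-drug interactions noted earlier in this section: the stated azole adjustments (tacrolimus reduced by two-thirds and cyclosporine by 50% when voriconazole is started) amount to fixed multipliers. The following is a toy Python sketch with our own naming, not clinical guidance; in practice, dosing is guided by therapeutic drug monitoring:

# Toy transcription of the voriconazole interaction adjustments stated in the text.
VORI_FACTORS = {
    "tacrolimus":   1.0 / 3.0,   # reduce the dose by two-thirds [55]
    "cyclosporine": 0.5,         # reduce the dose by 50% [56]
}

def dose_with_voriconazole(drug, current_dose_mg):
    # Returns the reduced calcineurin-inhibitor dose when voriconazole is added.
    return current_dose_mg * VORI_FACTORS[drug]

print(dose_with_voriconazole("tacrolimus", 6.0))    # 6 mg/day -> 2.0 mg/day
print(dose_with_voriconazole("cyclosporine", 200))  # 200 mg -> 100.0 mg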
Viral

CMV infection is of critical importance in SOT. In heart transplant recipients, CMV has been inconsistently associated with cardiac allograft vasculopathy [85]. Furthermore, CMV leads to upregulation of pro-inflammatory cytokines, an increased procoagulant response, left ventricular dysfunction, allograft rejection, and an increase in opportunistic infections [86]. The greatest risk for developing CMV disease is in CMV-negative recipients of CMV-positive organs (D+/R−), followed by D+/R+ and D−/R+. A clinical report estimated that the rate of CMV infection in heart transplant ranges between 9% and 35%, with disease present in around 25% of patients [87]. The clinical manifestations are not unique to heart transplant recipients and include a CMV syndrome (fevers, myalgias, arthralgias, malaise, leukopenia, and thrombocytopenia). CMV-associated end-organ injury in this setting most frequently includes pneumonitis and gastrointestinal disease [10]. Other manifestations comprise myelosuppression, hepatitis, and pancreatitis. In contrast to the high frequency observed in AIDS patients, chorioretinitis in heart transplant patients is relatively rare [87]. Guidelines on CMV diagnosis and management are discussed in more detail in Chap. 55 and have also been published elsewhere [88]. Other herpes viruses are important considerations as well. EBV-associated T-cell PTLDs are more frequent in heart transplant recipients (0.4%) than in other SOT patients [89]. PTLD is a significant contributor to morbidity and mortality in the pediatric heart transplant population [90]. Human T-lymphotropic virus type I (HTLV-1), human herpes virus (HHV)-6, HHV-7, and HHV-8 might play a role in EBV-negative T-cell PTLDs as well. Herpes viruses can manifest, as in other hosts, as mucocutaneous lesions for HSV, herpes zoster for VZV, infectious mononucleosis in the case of EBV, Kaposi sarcoma for HHV-8, and encephalitis for HHV-6/7. Hepatitis, colitis, pneumonitis, and gastrointestinal disease have also been attributed to dissemination of certain herpes viruses, which can present with disseminated skin lesions (with or without vesicle formation) and fever of unknown origin. Adenovirus has been associated with rejection, ventricular dysfunction, coronary vasculopathy, and the need for retransplantation. The current standard treatment for adenovirus is cidofovir, but outcomes are not optimal [91]. Chronic hepatitis without an identifiable cause should prompt testing for hepatitis E virus (HEV), as chronic HEV infection leads to the rapid development of fibrosis. HEV testing should be done with RNA PCR due to the delayed antibody response. We recommend decreased immunosuppression and ribavirin therapy for 3 months [92,93]. Other, less common manifestations that should be considered under the correct epidemiologic risk factors include HTLV-1/HTLV-2-associated myelopathy, rabies, lymphocytic choriomeningitis virus, subacute measles encephalitis, mumps (with associated parotitis, orchitis, vestibular neuritis, and allograft involvement), dengue virus, orf virus, human coronavirus, and influenza [36].

Parasitic

Cardiac transplantation itself is one of the predictors of the development of toxoplasmosis [94]. Other associated risk factors include negative serostatus before transplant, diagnosis of cytomegalovirus (CMV) infection, and high-dose prednisone. Toxoplasmosis can be transmitted by the donor heart (D+/R−, especially during the first 3 months) or can reactivate in the recipient (>3 months).
Most of these infections develop during the first 6 months posttransplant and are predominantly primary infections. About 22% of infected patients had disseminated infection, carrying an estimated 17% mortality. Toxoplasmosis can otherwise manifest with myocarditis, encephalitis, pneumonitis, or chorioretinitis. Diagnosis requires identification of tissue cysts surrounded by an abnormal inflammatory response, detection of Toxoplasma DNA in body fluids by PCR, or positive Toxoplasma-specific immunohistochemistry in affected organs. Posttransplant serological tests are not helpful for diagnosis and may be misleading, since results may or may not change regardless of the presence of toxoplasmosis [95]. The preferred treatment regimen is a combination of pyrimethamine with sulfadiazine [96]. Advanced Chagasic cardiomyopathy is a primary indication for heart transplantation in some centers [13]. Trypanosoma cruzi, the causal organism of Chagas disease, can be transmitted up to 75% of the time from infected heart donors (D+/R−) [97]. Additionally, Chagas disease can reactivate in a seropositive recipient (R+) once immunosuppression is in place. The reactivation rate can range between 22% and 90% in recipients with chronic chagasic cardiomyopathy undergoing heart transplant [98-100]. Additional risk factors for reactivation include rejection episodes, neoplasms, and use of MMF [98]. The mean onset of symptoms is approximately 112 days [101]. Once manifested, Chagas can present with nonspecific symptoms such as fever, malaise, anorexia, hepatosplenomegaly, and lymphadenopathy. Myocarditis, pericarditis, and encephalitis are also seen. Reactivation can mimic rejection and may exhibit congestive heart failure, AV block, and skin manifestations such as nodules and panniculitis. Increased eosinophil count and anemia can be indirect indicators of reactivation [102]. Diagnosis is made by visualization of circulating trypomastigotes in peripheral blood; additionally, blood and tissue PCR can be used, and tissue amastigotes can be seen in biopsy H&E preparations (Fig. 2.1). Finally, serologies are a crucial aspect of the diagnosis, especially if seroconversion has been documented. In asymptomatic individuals, when the diagnosis of Chagas has been established in the donor, monitoring should be instituted with weekly blood T. cruzi PCR and microscopy [29]. Preferred antitrypanosomal therapy consists of benznidazole; nifurtimox is an alternative treatment option. Posaconazole has antiparasitic activity but carries high failure rates [103,104]. GI disease with Isospora (Cystoisospora) belli, Cryptosporidium, Cyclospora, and Microsporidia has been reported to affect SOT recipients. Microsporidiosis can manifest with disseminated disease: fever, keratoconjunctivitis, CNS involvement, cholangitis, cough, and thoracic/abdominal pain [94]. Other rare parasitic infections affecting heart transplant recipients include leishmaniasis, strongyloidiasis, and free-living amoebas [94,105].

Skin, Soft Tissue, and Bone

The rate of surgical site infections (SSI), i.e., sternal wound infections, in patients receiving antimicrobial prophylaxis ranged from 5.8% to 8.8% following heart transplant procedures [41]. Heart transplantation itself is an independent risk factor for SSIs. Other risk factors include age, prophylaxis with ciprofloxacin alone, positive wire cultures, female gender, previous left ventricular assist device (VAD) placement, BMI >30 kg/m², previous cardiac procedures, and inotropic support for hemodynamic instability [41,106].
Similarly to other hosts, Staphylococcus species are the predominant organisms causing SSTIs; MRSA accounts for up to 21% of cases. Other Gram-positive etiologic agents include VRE (E. faecalis), coagulase-negative staphylococci, and other Enterococcus species. Candida and selected Gram negatives such as Enterobacteriaceae, P. aeruginosa, and Stenotrophomonas maltophilia can cause SSIs as well [107]. Sternal osteomyelitis often complicates deep SSI. Additionally, sternal wound infections by NTM and by fungi such as Aspergillus and Scedosporium have been documented [108,109]. Herpes zoster is also an important consideration and source of morbidity. Herpes zoster (HZ) is found as a complication in 19-22% of patients, with a median time to presentation ranging from 0.73 to 2.10 years [64,110]. Close to half may develop postherpetic neuralgia. Multi-dermatome involvement, zoster ophthalmicus, and meningoencephalitis are also described. Exposure to MMF is an independent risk factor; conversely, CMV prophylaxis reduces the risk for HZ.

Bloodstream

Bloodstream infections (BSIs) are a risk factor for mortality among heart transplant recipients. Likewise, SOT recipient status is an independent risk factor for developing bacteremia [111]. In heart transplant recipients, the rate of BSI ranges between 16% and 24%. The median onset is about 51-191 days, and the sources are, in order of frequency, the lower respiratory tract, the urinary tract, and CRBSI. Gram-negative bacteria are more commonly isolated; in order of frequency, they are E. coli, P. aeruginosa, and K. pneumoniae. The more common Gram-positive bacteria are S. aureus, S. epidermidis, E. faecalis, and L. monocytogenes. Directly attributable mortality is 12.2%. Among the identifiable independent risk factors for developing BSI are hemodialysis, prolonged intensive care unit stay, and viral infections [112,113]. Infective endocarditis (IE) is seen more frequently among heart transplant recipients than in the general population. When IE occurs, it most commonly involves the mitral and tricuspid valves, and Staphylococcus aureus and Aspergillus are the main etiologic organisms. The main predisposing factors in this setting are believed to be the frequent use of indwelling vascular catheters and the frequency of endomyocardial biopsies [114]. Staphylococcus aureus bacteremia in heart transplant recipients ranges from 10% to 38% [11,115]. The sources of SA bacteremia in SOT are CRBSI (30%), pneumonia (24%), wound (14%), endocarditis (10%), intra-abdominal infections (9%), bone and joint (7%), cardiac devices (3%), UTI (1%), and SSTI (1%) [115].

Chest

Immediately following heart transplant and during the 1st month, patients are most susceptible to developing pneumonia, most of which is healthcare- or ventilator-associated and therefore caused by nosocomial organisms such as MRSA, Pseudomonas aeruginosa, and other Gram negatives including Acinetobacter and ESBL-producing Enterobacteriaceae. Pneumonia is one of the major contributors to mortality in the early postoperative period, and pneumonia-related mortality approaches 15% [116]. After the 1st month, interstitial pneumonia and pneumonitis can develop, and the differential includes herpesviruses (HSV, CMV, VZV), respiratory syncytial virus (RSV), Toxoplasma gondii, and Pneumocystis jiroveci.
Pulmonary nodules with or without cavitation can be caused by fungal infections such as coccidioidomycosis, aspergillosis, mucormycosis, and cryptococcosis; by bacterial infections including actinomycosis, tuberculosis, atypical mycobacterial infections, Nocardia, Rhodococcus equi, and Gram-negative bacilli; and by noninfectious causes like pulmonary infarction or lymphoproliferative disorders [117,118]. Pulmonary nodules are seen in about 10% of patients, and the median detection time is about 66 days. The associated symptoms are fever and cough. The most frequent etiology is Aspergillus, followed by Nocardia and Rhodococcus; CMV is an exceedingly rare cause of pulmonary nodules. The diagnostic approach with the highest yield is transthoracic fine-needle aspiration, followed by bronchoalveolar lavage and transtracheal aspiration [118]. Community-acquired pneumonia caused by Streptococcus pneumoniae, Legionella spp., Mycoplasma, and influenza is another source of morbidity [10]. Mediastinitis is a common complication in this setting; in patients receiving antimicrobial prophylaxis, mediastinitis develops in 3-7% of patients [107,119]. A CT scan is usually necessary to determine the extent of the infection. MRSA, Staphylococcus epidermidis, Gram-negative bacteria, and Aspergillus fumigatus are frequently found as the causal organisms [120]. Antimicrobial therapy should be accompanied by aggressive surgical debridement [121].

Abdominal/Genitourinary

There are no distinctive abdominopelvic complications among heart transplant recipients. Clostridium difficile is a common hospital-related cause of diarrhea associated with the use of antimicrobials. Other etiologies of diarrhea secondary to acute gastroenteritis can present in a protracted way in this setting. Listeria infection can present as a febrile gastroenteritis illness as well. Nontyphoid Salmonella infection has been described as complicating the early postoperative period at a center in Taiwan [122]. Acute cholecystitis can affect heart transplant recipients, arguing for a low threshold for using ultrasound as a screening method [123]. Acute pancreatitis with abscess formation has also been described [124]. As noted above, hepatitis E can present with persistently abnormal liver tests. Although less frequent than in kidney transplant recipients, urinary tract infections are an important cause of morbidity; Foley catheters predispose to UTIs. The organisms most commonly involved are Gram-negative bacteria, Enterococcus, and Candida. Polyomavirus nephropathy by BK virus has been described in heart transplant recipients and might be a contributor to chronic kidney disease [125].

Central Nervous System

The need for urgent transplantation and multiple transfusions are independently associated with infectious neurologic complications, whose overall mortality can reach 12% [64]. Donor-derived meningoencephalitides affecting heart transplant recipients usually manifest within the first 30 days. These infections include West Nile virus, arenaviruses (e.g., LCMV), and rabies. WNV can manifest with a Guillain-Barré-like axonopathy with cerebrospinal fluid (CSF) pleocytosis. In addition to meningitis or encephalitis, ataxia, myelitis, optic neuritis, polyradiculitis, and seizures can also be observed [126]. WNV can also be acquired by the recipient in the community or through blood transfusions and present at a later time [127].
Other infectious forms of meningitis and encephalitis that can present after the 1st month include listeriosis, Streptococcus pneumoniae, Trypanosoma cruzi, Toxoplasma, HHV-6, and disseminated herpes virus infections (CMV, VZV, HSV, and EBV) [128-130]. The absence of appropriate primary prophylaxis or monitoring increases their risk. Aspergillus causes the majority of brain abscesses; additionally, Toxoplasma, tuberculosis, Listeria spp., Cryptococcus neoformans, Scedosporium spp., and Nocardia can be causative agents [129]. Concomitant pulmonary involvement is common, particularly for those whose portal of entry is the respiratory tract. Progressive multifocal leukoencephalopathy (PML), a demyelinating disease caused by reactivation of JC virus, has a usual median onset of 27 months; it carries a markedly high case-fatality rate, with a median survival of 6.4 months in SOT [131]. The use of rituximab as an antirejection treatment seems to confer an increased risk for PML [132]. HTLV-1-associated myelopathy (HAM) has been described in SOT as well.

Infecting Microbial Agents

Bacterial

Bacterial infections are the most common type of infection among lung and heart-lung transplant recipients. The anatomic site most frequently affected is the respiratory tract, usually manifesting with pneumonia, sinusitis, or tracheobronchitis. The primary sources are previous colonization, healthcare-associated acquisition, and procedure-related infection. For patients with cystic fibrosis (CF), knowledge of previous colonization results may provide some diagnostic and therapeutic advantages. Pseudomonas aeruginosa is a predominant colonizing pathogen in CF; however, Acinetobacter baumannii, Burkholderia species, Stenotrophomonas maltophilia, Achromobacter xylosoxidans, NTM, Pandorea, and Ralstonia are also observed [23]. Furthermore, pathogens known to cause nosocomial pneumonia during the 1st month include Staphylococcus aureus, Pseudomonas aeruginosa, other Gram negatives (Klebsiella pneumoniae, Enterobacter cloacae, Serratia marcescens, Escherichia coli, Acinetobacter species), and anaerobes. Gram-positive bacteria are a common source of infections, making up to 40% of them [133]. The sites most commonly affected are the respiratory tract, followed by bacteremia, skin, wound, and catheter-related infections. The pathogens most frequently identified are Staphylococcus species (77%), Enterococcus species (12%), Streptococcus species (6%), Pneumococcus (4%), and Eubacterium lentum (1%). Staphylococcus aureus infection can develop in up to 20% of lung recipients; SA most commonly causes pneumonia, followed by tracheobronchitis, bacteremia, intrathoracic infections, and SSTIs [20]. Streptococcus pneumoniae is community acquired and presents with pneumonia, usually after 6 months posttransplant. Pseudomonas aeruginosa has high rates of colonization (up to 40%) and disease (30%) [134]. Other significant bacterial infections that may present after the 1st month are Mycobacterium tuberculosis, NTM, Nocardia, Rhodococcus, and Legionella. Isolation of NTM in lung transplant recipients without evidence of disease is not associated with increased mortality [135]. Nocardiosis can occur in about 2% of lung transplant recipients, with a median time of onset ranging from 14.3 to 34.1 months [136,137]. Nocardia asteroides, N. farcinica, N. nova, and N. brasiliensis have been reported; N. farcinica appears to carry worse outcomes.
This infection can present as a breakthrough in patients receiving trimethoprim-sulfamethoxazole for P. jiroveci prophylaxis, although the isolates may remain susceptible. Mortality has been reported to range between 18% and 40%. The native lung is more frequently affected in single-lung transplant recipients. Nodules are the most prevalent radiographic finding. Extrapulmonary involvement affecting the skin and brain can be seen. Hypogammaglobulinemia and neutropenia appear to be additional risk factors for nocardiosis in this setting [137].

Fungal

Fungal infections are frequent complications in lung and heart-lung transplant. They occur in about 15-35% of recipients and carry an overall mortality close to 60% [138]. Aspergillus and Candida are the most frequent causative agents. Other important fungi include Cryptococcus spp., the agents of mucormycosis, endemic fungi (Histoplasma, Coccidioides, and Blastomyces spp.), Scedosporium spp., Fusarium spp., and dematiaceous molds. Candida infections are prominent during the 1st month after transplantation, and Candida can be one of the most common causes of BSI in this setting [139]. Although colonization of the upper airways and gastrointestinal tract is common, Candida can additionally cause mucocutaneous disease, tracheobronchitis, anastomotic site infections, CRBSI, and disseminated disease. Aspergillus spp. are the leading cause of invasive fungal infections; the attack rate is almost ten times that in other SOT patients (estimated incidence of 6% among lung transplant recipients) [140,141]. A. fumigatus is the most common species, but A. terreus, A. flavus, and A. niger have been described as well. The main predisposing risk factors in this setting are intense immunosuppression, previous colonization with Aspergillus spp., airway ischemia, and BOS. Single-lung transplant carries the greatest risk of developing invasive Aspergillus infection, with a higher mortality than in double-lung and heart-lung transplant recipients; single-lung recipients are usually older and more likely to have COPD as the indication for transplantation [140]. Aspergillus infections can present as tracheobronchitis, pneumonia, or disseminated disease. Extrapulmonary involvement includes sinusitis, infections of the CNS or orbits, and vertebral osteomyelitis. Aids in the diagnosis can include surveillance bronchoscopies (bronchoalveolar lavage stain and culture; biopsy), chest CT, and serum/BAL galactomannan, beta-D-glucan, and PCR. The presence of pulmonary nodular lesions in invasive infections can carry better outcomes [142]. Voriconazole is the treatment of choice. It is important to note that immune reconstitution inflammatory syndrome (IRIS) can develop, at a median of 56 days, in 7% of treated lung transplant recipients [143]. In Aspergillus tracheobronchitis, nebulized amphotericin B and debridement of the bronchial anastomosis are important adjuvant measures to systemic antifungal therapy [144,145]. Pneumocystis jirovecii pneumonia manifests from 1 to 6 months; its incidence has been reduced dramatically with universal TMP/SMX prophylaxis. Cryptococcosis, with a rate of 2% in lung transplant recipients, presents with pulmonary involvement, but dissemination with meningitis can occur. Furthermore, Cryptococcus skin manifestations like cellulitis and Cryptococcus-associated IRIS have been documented [146,147].

Viral

Viral infections are a common cause of morbidity among lung transplant recipients. The most common viruses are (1) CMV among the herpes viruses and (2) community-acquired respiratory viruses.
As in other SOT recipients, the highest risk of developing CMV infection is among D+/R− patients, followed by D+/R+, D−/R+, and D−/R−. This last scenario carries a risk of less than 5% [48,148]. Lung transplant recipients are at higher risk for CMV than other SOT recipients, with an estimated incidence of 30-86% [87]. The lung is considered a primary reservoir for CMV latency, and abundant lymphocytic tissue surrounds the transplanted organ. Additionally, the use of antilymphocyte antibodies to treat rejection or for immunosuppression, as well as infection with other herpesviruses, are further risk factors for CMV disease [149]. The interferon (IFN)-γ (+874T/T) polymorphism increases IFN levels and may predispose to CMV disease [150]. CMV is significantly associated with BOS, which reduces survival after the first year posttransplant [151]. CMV disease most commonly manifests as pneumonitis or viral syndrome and less frequently as gastrointestinal disease. Among lung transplant recipients, ganciclovir-resistant CMV carries increased morbidity and mortality [152]. Reported rates of infection with community-acquired respiratory viruses range from 7.7% to 64%. These infections are associated with an increased risk of pneumonia, graft dysfunction manifested by loss of lung function, BOS, high calcineurin inhibitor blood levels, and increased mortality [153][154][155]. These viruses include influenza, parainfluenza, respiratory syncytial virus (RSV), coronaviruses, human rhinovirus, adenovirus, human metapneumovirus, and bocaviruses. Hospitalization rates are higher for influenza and parainfluenza (50% and 17%, respectively) [154]. Symptoms are usually nonspecific. Diagnosis often requires detection of viral nucleoprotein antigens in nasopharyngeal swabs or bronchoalveolar lavage (BAL) by enzyme immunoassay or fluorescent antibody, or amplification of nucleic acid by PCR. Ribavirin may possess activity against paramyxoviruses (RSV, metapneumovirus, and parainfluenza) and can be administered inhaled, orally, or intravenously. Oseltamivir or zanamivir is the treatment of choice for influenza A or B [156]. Adamantanes (amantadine and rimantadine) are not active against influenza B, and there is markedly increased resistance among influenza A strains [156]. As in other SOT recipients, DNA viruses such as the non-CMV herpesviruses (HSV-1 and -2), VZV, HHV-6, -7, and -8, and EBV are a source of significant morbidity, including but not limited to CMV-negative viral syndrome, rash, pneumonitis, hepatitis, and encephalitis [157]. Lastly, polyomaviruses such as BK virus (BKV), JC virus (JCV), and simian virus 40 (SV40), although frequently encountered in lung transplant recipients with unclear causality, may worsen renal function or survival [158]. PTLD is also a well-recognized complication. A trend toward late PTLD presentation (>1 year) has been documented, in which B symptoms and extra-graft involvement are more predominant [159].

Parasitic

As in other immunosuppressive states, certain parasitic infections can complicate the course of lung and heart-lung transplant recipients. It is critical to elicit a detailed history and geographic risk factors to determine the risk of acquisition and the potential etiologic agent. Toxoplasmosis can result from primary infection or reactivation of latent infection, and it can develop in patients with a negative epidemiological history of cat ownership or consumption of undercooked meat.
In patients with primary toxoplasmosis, nonspecific symptoms such as fever, lymphadenopathy, or organ injury may be present. Reactivation can cause encephalitis with or without space-occupying brain lesions, seizures, chorioretinitis, fever of unknown origin, pneumonitis, myocarditis, and rash. Although cases of the lung fluke Paragonimus westermani have not been reported in lung transplantation, it remains a potential threat in areas where this organism is endemic. Other parasites that can target the lung in immunosuppressive states include Echinococcus, Schistosoma, and Strongyloides stercoralis [160]. Strongyloidiasis can present as hyperinfection syndrome [161]. Leishmania, although infrequently seen, has been reported among lung and heart-lung recipients [30]. Free-living amoebas can affect this population as well; amoebic granulomatous dermatitis and disseminated infection presenting with ulcerative skin lesions, respiratory failure, and seizures have been described in lung transplant recipients [162,163]. Finally, alimentary protozoa affect lung transplant recipients, including Cryptosporidium, which presents with diarrhea and may elevate tacrolimus levels [164], and microsporidia, which present with unusual manifestations such as myositis or granulomatous interstitial nephritis [165,166].

Skin, Soft Tissue, and Bone

The overall rate of SSIs is about 13%, with a significant proportion of infections being organ or space occupying (72%), deep incisional (17%), or superficial (10%) [18,41]. Independent risk factors for developing SSI are diabetes, female donor, prolonged ischemic time, and the number of red blood cell transfusions during the perioperative period [167]. SSIs are associated with a 35% mortality within the first year of transplantation. The most common organisms found to cause SSI or mediastinitis are P. aeruginosa, Candida species, S. aureus (including MRSA), Enterococcus, coagulase-negative Staphylococci, Burkholderia cepacia, E. coli, Proteus mirabilis, Serratia marcescens, Acinetobacter baumannii, Enterobacter cloacae, and Klebsiella species. In up to 33% of patients, the SSI-causative organisms correlate with pathogens previously colonizing the recipients' native lungs at the time of transplant [167]. The median onset is 25 days after lung transplant [167]. Although rare, NTM can cause SSIs among lung transplant recipients. The most frequently encountered are Mycobacterium avium complex, followed by Mycobacterium abscessus and Mycobacterium gordonae. NTM SSIs can be complicated by progressive disseminated disease or the requirement of lifelong suppressive therapy [135]. Other organisms, such as Mycoplasma hominis and Lactobacillus spp., have also been described. Deep infections can affect up to 5% of patients, and sternal osteomyelitis can account for up to 6% of these deep infections. Causative organisms for sternal osteomyelitis include Pseudomonas aeruginosa, Serratia marcescens, and Scedosporium. Non-sternal osteomyelitis affecting the calcaneus bone has complicated a disseminated infection with Aspergillus fumigatus [168].

Bloodstream

Bloodstream infections (BSIs) occur at an estimated rate of 25% among lung transplant recipients. A major proportion of BSIs occur in the early posttransplant period, and BSIs are significantly associated with worse survival [139,169]. The most common organisms encountered are Staphylococcus aureus, Pseudomonas aeruginosa, and Candida [139].
Pseudomonas aeruginosa BSI, which predominantly presents during the transplant hospitalization period and more commonly affects CF patients, is followed in frequency by Burkholderia cepacia and Candida albicans. Conversely, Staphylococcus aureus is the predominant organism after discharge from the transplant hospitalization. In an estimated 70% of BSIs the source was pulmonary, followed in frequency by CRBSI, gastrointestinal infection, peritonitis, and UTI. A pulmonary source of bacteremia in SOT often develops into septic shock [170]. Although unusual, cases of Aspergillus fumigatus endocarditis have been described following lung transplantation [171]. These patients often had CF as the underlying lung disease and presented at a median of 8 ± 6 months. This complication carries a high mortality and often requires a combination of antifungal therapy and valvular replacement surgery.

Chest

Infectious complications related to the chest cavity include mediastinitis, cardiac infections (pericarditis and myocarditis), lung parenchymal infections (nodular infiltrates, cavitation, or pneumonia), bronchial anastomosis infections, and pleural space infections (bronchopleural fistula and empyema). Empyema, followed by mediastinitis and pericarditis, together with surgical wound infections and sternal osteomyelitis, constitutes the most frequent deep SSI complications affecting the chest cavity. Empyema presents in around 3.6% of cases. It occurs during the first 6 months after transplantation (median 46 ± 39 days) and carries an estimated mortality of 28.6% [172]. The most common organisms found are Staphylococcus spp., E. coli, Enterobacter spp., Klebsiella spp., Mycoplasma hominis, VRE, and Candida. Furthermore, Mycobacterium abscessus has been isolated as a rare causative agent of empyema [173]. The degree of immunosuppression, reduced renal function, previous sternotomy, and re-exploration due to bleeding are listed as potential risk factors for mediastinitis [119]. There is an increased prevalence of mediastinitis caused by Gram negatives and fungi among lung transplant recipients. Causative organisms for mediastinitis are similar to those for SSI and are listed above. Infectious pericarditis can be present in up to 6% of patients (isolated organisms include MSSA, Mycoplasma hominis, and Scedosporium prolificans) [167,174,175]. Due to their high fatality rate, fungal bronchial anastomotic infections are critical to recognize. Pneumonia is believed to affect around 21% of lung recipients and 40% of heart-lung recipients. Nosocomial organisms cause early pneumonia, as in other posttransplant settings. The donor's lung seems to be the primary source of pneumonic infections, although the recipient's upper airways or sinuses are also potential sources. Preoperative colonization with Gram-negative rods and a colonized or infected donor bronchus or perfusate are recognized risk factors for pneumonia. Likewise, pretransplantation colonizing microorganisms from suppurative lung disease are associated with the development of pneumonia posttransplant [176]. The most common causal organisms are Pseudomonas aeruginosa, Staphylococcus aureus, and Aspergillus spp. Other pathogens include bacteria such as B. cepacia, Enterobacter species, S. maltophilia, Klebsiella species, S. epidermidis, and E. coli, and fungi such as Fusarium spp., Cryptococcus neoformans, and Paracoccidioides brasiliensis [176]. After the 1st month, pneumonia can present as local infiltrates, diffuse interstitial infiltrates, or nodules with or without cavitation.
The pattern of presentation may aid in identifying the possible causative microorganism. The list of potential pathogens is extensive and includes, in addition to those already mentioned, Nocardia, Chlamydia pneumoniae, Legionella, TB, NTM, Pneumocystis jirovecii, Rhodococcus, herpesviruses (CMV, HSV, and VZV), respiratory viruses, endemic fungi (e.g., histoplasmosis), mucormycosis, and Scedosporium spp. [177][178][179].

Abdominal/Genitourinary

As in other SOT, common infectious complications affecting the gastrointestinal or genitourinary tract include Clostridium difficile colitis and UTIs. Intra-abdominal complications carry an overall increased mortality [180]. Frequent GI symptoms presenting posttransplant are diarrhea, which can affect almost 30% of lung transplant recipients, and abdominal pain. Abdominal pain should prompt further investigation for potential intra-abdominal causes. In the pediatric population, the possibility of PTLD should be investigated, since it carries a high mortality [181]. Other described infectious intra-abdominal complications include digestive perforation (seen in 6%) [182], retroperitoneal abscesses, cholecystitis, perianal abscesses, esophagitis, pancreatitis, pancreatic abscesses, hepatitis, diverticulitis, appendicitis, CMV colitis, megacolon, and colon rupture [180,183,184]. In developing countries, persistently abnormal liver enzymes should prompt testing for HEV, with HEV RNA used for screening. Oral ribavirin seems to be safe and effective in this setting [185].

Central Nervous System (CNS)

CNS symptoms developing during the 1st month following lung or heart-lung transplantation should raise concern for donor-derived viral infections. LCMV is often accompanied by normal-to-low CSF glucose, markedly elevated protein, and mild pleocytosis [36]. Although of unclear benefit, ribavirin has been used. Donor-transmitted rabies is an uncommon but neurologically devastating complication that occurs within the first 30 days of transplant; lung transplantation has been described as a potential causal mechanism [186]. Other organisms known to cause meningitis in lung transplant recipients are Cryptococcus, tuberculosis, WNV, and herpesviruses [187,188]. Diagnosis of WNV in this setting requires nucleic acid amplification due to the unreliability of serologic testing. Scedosporium apiospermum infections often disseminate, causing CNS abscesses in addition to pulmonary involvement among lung transplant recipients [189]. It is important to differentiate this organism from other molds, since amphotericin B is ineffective against Scedosporium spp. In severe or refractory disease without appropriate surgical debridement, the addition of terbinafine to voriconazole may prove useful [190]. Other recognized organisms causing space-occupying brain lesions are Fusarium, Nocardia, Aspergillus, Toxoplasma, Cryptococcus neoformans, Listeria, and Cladophialophora bantiana [191][192][193]. PML, a late manifestation, can be associated with intensified immunosuppression or rituximab. Cidofovir followed by mirtazapine can be considered as a form of therapy for PML.

Conclusions

Infections in heart, lung, and heart-lung transplant recipients are a complex, dynamic, and evolving process. Many factors, such as demographics, timing, type of transplant, anatomy, and microbiology, among others, interplay in the development of these potentially fatal complications. Prompt recognition and treatment of these infections improve transplantation outcomes.
Inner Nuclear Membrane Protein, SUN1, is Required for Cytoskeletal Force Generation and Focal Adhesion Maturation

The linker of nucleoskeleton and cytoskeleton (LINC) complex is composed of the inner nuclear membrane-spanning SUN proteins and the outer nuclear membrane-spanning nesprin proteins. The LINC complex physically connects the nucleus and plasma membrane via the actin cytoskeleton to perform diverse functions, including mechanotransduction from the extracellular environment to the nucleus. Mammalian somatic cells express two principal SUN proteins, namely SUN1 and SUN2. We have previously reported that SUN1, but not SUN2, is essential for directional cell migration; however, the underlying mechanism remains elusive. Because the balance between adhesive force and traction force is critical for cell migration, in the present study we focused on focal adhesions (FAs) and the actin cytoskeleton. We observed that siRNA-mediated SUN1 depletion did not affect the recruitment of integrin β1, one of the ubiquitously expressed focal adhesion molecules, to the plasma membrane. Consistently, SUN1-depleted cells adhered normally to extracellular matrix proteins, including collagen, fibronectin, laminin, and vitronectin. In contrast, SUN1 depletion reduced the activation of integrin β1. Strikingly, the depletion of SUN1 interfered with the incorporation of vinculin into focal adhesions, whereas no significant differences in the expression of vinculin were observed between wild-type and SUN1-depleted cells. In addition, SUN1 depletion suppressed the recruitment of zyxin to nascent focal adhesions. These data indicate that SUN1 is involved in the maturation of focal adhesions. Moreover, disruption of the SUN1-containing LINC complex abrogates the actin cytoskeleton and the generation of intracellular traction force, despite the presence of SUN2. Thus, a physical link between the nucleus and cytoskeleton through SUN1 is required for the proper organization of actin; its loss suppresses the incorporation of vinculin and zyxin into focal adhesions and the activation of integrin β1, both of which depend on traction force. This study provides insights into a previously unappreciated signaling pathway from the nucleus to the cytoskeleton, which runs in the opposite direction to the well-known mechanotransduction pathways from the extracellular matrix to the nucleus.

INTRODUCTION

The linker of the nucleoskeleton and cytoskeleton (LINC) complex is a conserved molecular bridge that spans the nuclear envelope and connects the nucleoskeleton and cytoskeleton. LINC complexes consist of two protein families, namely the Klarsicht, Anc-1, and Syne homology (KASH) domain-containing proteins located on the outer nuclear membrane, and the Sad1 and UNC-84 (SUN) domain-containing proteins embedded in the inner nuclear membrane. The KASH and SUN domains bind to each other in the perinuclear space (Crisp et al., 2006; Razafsky and Hodzic, 2009; Starr, 2011). In the mammalian genome, six genes encode KASH-containing proteins, including four nesprins (nesprin-1 to -4), KASH5, and the Lymphoid Restricted Membrane protein (LRMP, also called Jaw1), whereas five genes encode SUN-domain-containing proteins, SUN1-5. Nesprins are associated with several cytoskeletal elements in the cytoplasm, including several microtubule motors, filamentous actin (F-actin), and intermediate filaments.
Among the four nesprins, nesprin-1 giant (nesprin-1G) and nesprin-2 giant (nesprin-2G) interact with F-actin in the cytoplasm (Padmakumar et al., 2004), and nesprin-2G has been directly shown to be subject to myosin-dependent tension (Arsenovic et al., 2016). Among the SUN1-5 proteins, SUN1 and SUN2 are widely expressed in somatic cells, whereas the expression of SUN3, SUN4, and SUN5 is largely restricted to male germ cells (Meinke and Schirmer, 2015). SUN proteins are associated with nuclear lamins and chromatin within the nucleoplasm. LINC complexes can perform diverse and tissue-specific functions, including homeostatic positioning of the nucleus, nuclear migration during development, DNA repair, nuclear shaping, chromosome movements during meiosis, signal transduction, and mechanotransduction (Horn, 2014; Hao and Starr, 2019; Birks and Uzer, 2021; Wong et al., 2021). These functions could be attributed to variations in the LINC complex components and the availability of a wide range of their binding partners (Hieda, 2017).

Integrins are receptors for the extracellular matrix (ECM), and their clustering induces the formation of nascent focal adhesions (FAs), also known as focal contacts. Certain nascent FAs mature into larger FAs, whereas others are rapidly turned over. FAs contain cytoplasmic scaffolding proteins such as vinculin and paxillin, which are associated with the force-generating actin cytoskeleton. Thus, FAs serve as the mechanical link between the ECM and actin fibers, whose contractility is essential for the maturation of FAs (Gardel et al., 2010). The actin cytoskeleton physically connects with the LINC complex components nesprin-1G and nesprin-2G. Accordingly, FAs communicate with the LINC complex via the actin cytoskeleton, thereby transmitting mechanical stimuli originating from outside the cells to the nucleus through the LINC complex (Poh et al., 2012; Versaevel et al., 2012; Lovett et al., 2013; Alam et al., 2015; Cho et al., 2017).

Several studies have indicated that the LINC complex affects cytoskeletal elements and the formation of FAs. For example, perturbation of the LINC complex using dominant-negative KASH (DN-KASH), which broadly interferes with the nesprin-SUN interaction, impairs the propagation of intracellular forces and disturbs the organization of the perinuclear actin and intermediate filament networks (Lombardi et al., 2011). Endothelial cells expressing DN-KASH show altered cell-cell adhesion, barrier function, cell-matrix adhesion, and FA dynamics (Denis et al., 2021). In addition, nesprin-1 depletion in endothelial cells increases the number of FAs, cell traction force, and nuclear height (Chancellor et al., 2010). Conversely, nesprin-2G-knockout fibroblasts, which are impaired in transmembrane actin-associated nuclear (TAN) line formation and show a loss of cytoplasmic and perinuclear actin staining, exhibited decreased FA size and number, decreased expression of FA proteins, and reduced traction force (Woychek and Jones, 2019). Thus, the LINC complexes act as nuclear nodes that bidirectionally transmit signals between the cytoskeleton and the nucleus. However, the functional importance of SUN proteins in the maturation of FAs and the integrity of the actin cytoskeleton has never been directly examined. In the present study, we investigated the effects of the depletion of SUN proteins on the actin cytoskeleton and FA maturation. We report that SUN1 is essential for proper actin organization, the generation of intracellular traction force, and the maturation of FAs.
In addition, these data suggest that elucidating the mechanism by which the LINC complex transmits nuclear features, such as the epigenetic histone code and nuclear lamina architecture, to the cytoskeleton will reveal its effects on diverse cellular functions.

Cell Culture and Transfection

HeLa cells were obtained from the Japanese Cancer Research Bioresources (JCRB) Cell Bank and grown in Dulbecco's modified Eagle's medium (low glucose, Fujifilm Wako Pure Chemical Corporation) supplemented with 10% fetal calf serum at 37°C in a 10% CO2 atmosphere. HeLa cells were used in this study unless otherwise stated. The human mammary epithelial cell line MCF10A (CRL-10317) was obtained from the American Type Culture Collection (ATCC) and cultured as previously described (Yokoyama et al., 2014). SUN1-knockout HeLa cells have been described previously (Nishioka et al., 2016) and were cultured as previously described. Transfection was performed using Gene Juice (Merck Millipore) as described previously (Satomi et al., 2020).

siRNA-Mediated Knockdown

The sequences of the siRNA pools against SUN1 (UNC84A) and SUN2 (UNC84B) have been described previously (Nishioka et al., 2016). The siRNAs were obtained from Nippon Gene (Tokyo, Japan). Cells were transfected with the indicated siRNAs or a non-targeting siRNA pool (siNC, Thermo Fisher Scientific, Waltham, MA) as a negative control using Lipofectamine RNAiMAX reagent (Invitrogen, CA, United States) as previously described (Hieda et al., 2021). Briefly, all siRNAs were used at a final concentration of 10 nM, and cells were fixed or harvested 48 h after transfection unless otherwise stated.

Immunostaining and Quantification of Focal Adhesions

The cells were fixed with 4% paraformaldehyde, and immunostaining was mostly performed as described previously using appropriate primary and secondary antibodies (Hieda et al., 2008) unless stated otherwise. For HUTS4 mAb staining, cells were fixed and stained without Triton X-100 permeabilization. Actin filaments were stained with 50 nM rhodamine-phalloidin for 30 min. For YAP staining, cells were cultured on type I collagen-coated cover glass (#4910-010, Iwaki, Japan). For Triton X-100 permeabilization before fixation, cells were washed twice with ice-cold transport buffer (TB: 20 mM HEPES, pH 7.3; 110 mM potassium acetate; 2 mM magnesium acetate; 5 mM sodium acetate; 0.5 mM EGTA; Adam et al., 1990) and subsequently incubated with TB containing 0.5% Triton X-100 for 5 min on ice, followed by fixation. Cells were viewed and captured with an Olympus IX81 with a Plan Apo 60×/NA 1.4 objective or an Olympus BX53 with a UPlanS Apo 40×/NA 0.95 objective lens, using an Olympus DP-73 camera, an Olympus U-HGLGPS light source, and Olympus U-FBNA (excitation 470-495, emission 510-550) and U-FGW (excitation 530-550) filters. Quantification of the number, area, and fluorescence intensity was performed using the ImageJ software (https://imagej.nih.gov/ij/). The number of FAs was quantified after thresholding and segmentation. The integrated density (i.e., the sum of all pixel values in the region of interest, ROI) was measured as "RawIntDen".
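The thresholding-and-segmentation quantification just described can also be sketched outside ImageJ. The following is a minimal Python/scikit-image equivalent of that workflow, not the ImageJ analysis itself; the Otsu threshold and the minimum object size are illustrative assumptions rather than the settings used in the study.

```python
# Minimal sketch of the FA quantification described above: threshold the
# image, segment it into connected regions, count them, and measure the
# integrated density (ImageJ's "RawIntDen" = sum of pixel values in the ROI).
# The Otsu threshold and minimum object size are illustrative assumptions.
import numpy as np
from skimage import filters, measure, morphology

def quantify_focal_adhesions(img: np.ndarray, min_size: int = 10):
    """Return the FA count plus per-FA areas and integrated densities."""
    mask = img > filters.threshold_otsu(img)                # thresholding
    mask = morphology.remove_small_objects(mask, min_size)  # segmentation clean-up
    labels = measure.label(mask)                            # one label per FA
    props = measure.regionprops(labels, intensity_image=img)
    areas = [p.area for p in props]
    raw_int_den = [p.mean_intensity * p.area for p in props]  # "RawIntDen"
    return len(props), areas, raw_int_den
```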
Lysate Preparation and Western Blotting

The total cell extract was collected using 2× sample buffer and sonicated, or passed 15 times through a 1 ml syringe fitted with a 24 G needle, to ensure lysate homogenization and genomic DNA shearing. Afterward, the protein concentration was analyzed using the Ionic Detergent Compatibility Reagent (Thermo Fisher Scientific) and the Pierce 660 nm Protein Assay Reagent (Thermo Fisher Scientific) according to the manufacturer's instructions. The total cell lysate was analyzed by western blotting using the indicated antibodies.

Adhesion Assay

Cell adhesion assays were performed as described previously. Briefly, 96-well plates were coated with 100 μL of 5 μg/ml laminin (AGC Inc., Tokyo, Japan), 5 μg/ml vitronectin (Wako Pure Chemical, Osaka, Japan), 5 μg/ml fibronectin (AGC Inc.), 5 μg/ml collagen type IC (Nitta Gelatin, Osaka, Japan), or 3% bovine serum albumin (BSA) and blocked with 3% BSA. Next, cells were added to the plates. After 2 h of incubation at 37°C, the plates were washed with phosphate-buffered saline (PBS) and the cells were stained with crystal violet. The absorbance was measured using a 595 nm measurement filter and a 630 nm reference filter. Experiments were repeated at least four times.

Integrin β1 Internalization and Recycling Assay

The internalization and recycling assays of integrin β1 were performed as described previously (Maekawa et al., 2017). Briefly, integrin β1 on the cell surface was labeled with Alexa 488-conjugated TS2/16 antibody in growth medium containing 30 mM HEPES (pH 7.6) on ice for 1 h. Next, the cells were washed with ice-cold PBS and the medium was replaced with fresh growth medium containing 30 mM HEPES (pH 7.6). Cells were incubated at 37°C for the indicated time (mentioned in the figure caption) to allow the internalization of fluorescent integrin β1. After internalization, the remaining fluorescence on the cell surface was quenched with an anti-Alexa 488 antibody. For the internalization assay, cells were fixed, and the signals inside the cells were imaged. To monitor the recycling of integrin β1, cells were re-incubated at 37°C for the indicated time points. After re-incubation, the surface fluorescence signal of integrin β1 was quenched again. Cells were subsequently fixed, and the signals inside the cells were imaged. For both internalization and recycling assays, images were quantified using the ImageJ software.

Evaluation of Traction Force

Traction force was visualized using the wrinkle formation assay as described previously (Kang et al., 2020). Briefly, the silicone substrate components CY 52-276 (Dow Corning Toray, Tokyo, Japan) were mixed at a weight ratio of 1.2:1 and spread on a coverslip to give a final elastic modulus of 5.4 kPa. To measure the elastic modulus in a separate experiment with the Hertz contact model, stainless steel beads were placed onto the substrate, the surface of which had been coated in advance with fluorescent microbeads through silane coupling. For this process, the silicone surface was treated with 2% (3-aminopropyl)trimethoxysilane (Sigma-Aldrich) in 90% EtOH for 30 min and then with 20% glutaraldehyde (Wako) for 5 min. By taking 3-dimensional images of the fluorescently labeled substrate using a confocal laser scanning microscope (FV-1000; Olympus), the indentation depth associated with the elastic modulus was measured. The coverslip was exposed to 4 mA oxygen plasma for 1 min for hydrophilization using a plasma generator (SEDE-GE, Meiwafosis, Tokyo, Japan), placed in a 6-well culture plate, and coated with 10 µg/ml fibronectin (Sigma-Aldrich), on which HeLa cells (parental wild-type or SUN1-depleted) were cultured at 37°C in a 5% CO2 stage incubator mounted on an inverted microscope (IX71; Olympus, Tokyo, Japan).
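As a numerical aside before the imaging step, the Hertz-model calibration just described can be sketched as follows. The formula is the standard Hertz result for a rigid sphere resting on an elastic half-space; the bead radius, densities, and indentation depth below are illustrative assumptions, not values from this study, and nu = 0.5 is the usual near-incompressibility assumption for silicone.

```python
# Numerical sketch of the Hertz contact calibration described above. For a
# rigid sphere of radius R on an elastic half-space, the indentation depth d
# satisfies F = (4/3) * (E / (1 - nu^2)) * sqrt(R) * d^(3/2), so the elastic
# modulus E follows from the measured d. Bead size, densities, and d are
# illustrative assumptions, not values from the paper.
import math

def hertz_modulus(d, R, rho_bead=7800.0, rho_medium=1000.0, nu=0.5, g=9.81):
    """Elastic modulus E (Pa) from indentation depth d (m) of a bead of radius R (m)."""
    volume = (4.0 / 3.0) * math.pi * R**3
    force = (rho_bead - rho_medium) * volume * g   # buoyancy-corrected bead weight
    return 3.0 * force * (1.0 - nu**2) / (4.0 * math.sqrt(R) * d**1.5)

# Example: a 0.5 mm stainless steel bead indenting the substrate by 20 um.
print(f"E = {hertz_modulus(d=20e-6, R=0.25e-3):.0f} Pa")
```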
48 h after incubation, cells exhibiting wrinkle formation were imaged with phase-contrast microscopy using a ×10 semi-apochromat objective lens (NA 0.3). The acquired images were analyzed to automatically detect the wrinkles generated by cellular traction force using a custom-made program written in the Fiji software. Briefly, images were processed with a two-dimensional fast Fourier transformation and then with a band-pass filter to extract the wrinkles, which were skeletonized into line segments and integrated to finally obtain their total length (the number of pixels) per cell as a traction index.

Depletion of SUN1 Affects Actin Organization

SUN1 is required for directional cell migration (Nishioka et al., 2016; Imaizumi et al., 2018); however, the underlying mechanism remains unknown. Both SUN1 and SUN2 proteins promiscuously interact with nesprin-2 and form the LINC complex (Stewart-Hutchinson et al., 2008; Ostlund et al., 2009; Haque et al., 2010; Sosa et al., 2012). Because nesprin-2G physically connects with the actin cytoskeleton (Zhen et al., 2002; Luxton et al., 2010), and its depletion reduces the ability to generate traction force (Woychek and Jones, 2019), we examined the involvement of SUN1 and SUN2 in the organization of the actin cytoskeleton using siRNAs. Depletion of the SUN1 and SUN2 proteins was confirmed by immunofluorescence microscopy and western blotting (Figures 1A,B). Quantitative analysis showed that more than 90% of SUN1 and SUN2 expression was depleted in the respective siRNA knockdown cells. Interestingly, cytoplasmic actin staining decreased in SUN1-depleted, but not SUN2-depleted, cells compared with the control cells (Figures 1A,C and Supplementary Figure S1A). The reduced staining intensity in the SUN1-depleted cells was rescued by the expression of mouse SUN1, which is siSUN1 resistant (Supplementary Figure S1B). In addition, some of the SUN1-depleted cells appeared to have increased actin ruffling at their periphery (Figure 1A and Supplementary Figure S1A, arrows), while obvious stress fibers and sub-nuclear actin structures were observed in the SUN2-depleted cells but not in the control cells (Figure 1A and Supplementary Figure S1C). However, the relative expression of β-actin protein remained the same in the control cells and the SUN1- or SUN2-depleted cells (Figure 1B and Supplementary Figure S1D). This is consistent with a previous report that the mRNA levels of β-actin remain unaffected by the depletion of SUN1 in HeLa cells (Thakar et al., 2017). These results suggest that SUN1 depletion affects the organization of filamentous actin. Thus, we focused on the function of SUN1 in subsequent experiments.

SUN1 Depletion Suppresses Vinculin Incorporation Into Focal Adhesions

Directional cell migration requires continuous turnover of FAs along the direction of cell movement (Lock and Strömblad, 2008). Thus, the effects of SUN1 depletion on FAs were examined by observing the localization of vinculin, a cytoskeletal adaptor protein in FAs (Carisey and Ballestrem, 2011). Depletion of SUN1 was confirmed by immunofluorescence microscopy (Figure 1D). An accumulated vinculin signal was observed at the periphery of both control and SUN1-depleted cells (Figure 1D), indicating that vinculin is recruited to the plasma membrane in both cases. Of note, we did not observe an elevated level of plasma membrane-localized vinculin in the SUN1-depleted cells, whereas Thakar et al. (2017) showed that SUN1-depleted HeLa cells have increased vinculin staining at the plasma membrane as well as an increased level of GTP-bound RhoA. This disparity could be caused by activation by the fibronectin coating on the coverslips they used. We did not observe an elevated level of GTP-RhoA in the SUN1-depleted cells (Supplementary Figure S1E). Because the SUN1-depleted cells show many long cellular processes, with accumulation of actin at the tips of these processes, that are not present in the control cells (Figure 1A and Supplementary Figure S1C), we analyzed the effects of SUN1 depletion on FA turnover using a microtubule-induced FA disassembly assay (Ezratty et al., 2009). The result showed that SUN1-depleted cells retain the ability to disassemble their FAs (Supplementary Figure S2A). Vinculin has two distinct conformations, namely an "open form" and a "closed form" (Bays and DeMali, 2017). A significant amount of vinculin is in the closed form at steady state and can be washed out by Triton X-100 treatment, whereas a certain amount of vinculin is Triton X-100 insoluble because it is incorporated into FAs and binds to actin filaments (Lee and Otto, 1997; Sawada and Sheetz, 2002; Yamashita et al., 2014). To explore the incorporation of vinculin into FAs, cells were treated with Triton X-100 before fixation and stained with an anti-vinculin antibody. Triton X-100 treatment caused the dispersion of vinculin signals into small punctate patterns in SUN1-depleted cells, whereas the condensed vinculin signals in siNC-transfected cells were resistant to Triton X-100 treatment (Figure 1E). The dispersion of the vinculin signal in the SUN1-depleted cells was rescued by the expression of siSUN1-resistant mouse SUN1 (Supplementary Figure S2B). Quantified data indicated that the integrated density of Triton X-100-resistant vinculin was decreased in SUN1-depleted cells compared with that in the control cells (Figure 1F), whereas the number of vinculin-positive dots was increased in SUN1-depleted cells (Figure 1G). The protein expression of vinculin in the SUN1-depleted cells remained unaltered (Figure 1H and Supplementary Figure S2C). Therefore, these results indicate that SUN1 depletion does not interfere with the recruitment of vinculin to the plasma membrane; however, it suppresses the conformational change of vinculin to a Triton X-100-resistant form (i.e., its incorporation into FAs), which requires cytoskeletal forces (Omachi et al., 2017). Of note, SUN1 itself was resistant to Triton X-100 treatment (Figure 1E, upper panel), probably because SUN1 associates with nesprins, lamins, and/or chromatin.

FIGURE 2 | SUN1 depletion reduces active integrin β1. (A) Cells were transfected with siSUN1 or siNC. Next, the cells were fixed and stained with anti-integrin β1 mAb (TS2/16), which recognizes both its active and inactive forms, and anti-SUN1 pAb. Scale bar, 10 μm. (B) The integrated density of the total integrin β1 staining was quantified using the ImageJ software. The values represent the mean ± standard deviation (SD). N > 100. ***p < 0.005 compared with siNC-transfected cells. (C) Cells were transfected with siSUN1 or siNC. The cell lysate was analyzed by western blotting using anti-integrin β1 (TS2/16) and anti-β-actin mAbs. The values represent the mean relative intensity of integrin β1 expression normalized to β-tubulin in the western blotting ± standard deviation (SD). (D) Cells were transfected with siSUN1 or siNC. Next, the cells were fixed and stained with anti-active integrin β1 mAb (HUTS4).
Note that the epitope of HUTS4 is localized in the extracellular domain of integrin β1. The Triton X-100 permeabilization step was omitted from the staining process in the case of HUTS4 mAb staining because it greatly increases the non-specific signal in the cytoplasm. Scale bar, 10 μm. (E) The integrated density of the active integrin β1 staining was quantified using the ImageJ software. The values represent the mean ± standard deviation (SD). N > 100. **p < 0.01 compared with siNC-transfected cells. (F) Cells were transfected with siSUN1 or siNC. Next, the cell-extracellular matrix (ECM) adhesion activity was measured using cell culture plates coated with fibronectin, vitronectin, laminin, or collagen type IC. Experiments were repeated four times and a representative result is shown. ***p < 0.005, *p < 0.05 compared with siNC-transfected cells. n.s., not significant. (G) The rate of internalization of integrin β1 was analyzed. Cells were transfected with siSUN1 or siNC. After 48 h of incubation, cell surface integrin β1 was labeled with Alexa 488-conjugated TS2/16 mAb and chased for 10 min. Next, the remaining cell surface fluorescence was quenched. The ratio of the fluorescence intensity inside the cell to that on the cell surface before chasing is shown (n = 30). The values represent the mean ± standard error of the mean (SEM). n.s., not significant. Representative images are shown in Supplementary Figure S4A. (H) Recycling of integrin β1 was analyzed in the SUN1-depleted cells. Cells were treated with siSUN1 or siNC. After 48 h of incubation, cell surface integrin β1 was labeled with Alexa 488-conjugated TS2/16 mAb and chased for 60 min to allow endocytosis. Next, the remaining fluorescence at the cell surface was quenched (time 0) and the cells were incubated to allow trafficking from endosomes to the plasma membrane. After the indicated incubation period, the cell surface fluorescence was again quenched. Representative images are shown in Supplementary Figure S4B. The fluorescence intensity inside the cells was measured, and the intensity of Alexa 488-TS2/16 is shown as a percentage of that at 0 min (n = 30). Data represent the mean ± standard error of the mean (SEM). n.s., not significant.

SUN1 Depletion Reduces Active Integrin β1 Levels

To explore the maturation of FAs in SUN1-depleted cells, we next focused on integrin β1, a ubiquitously expressed FA molecule that drives the establishment of nascent adhesion sites (Schiller et al., 2013). To examine the availability of integrin β1 at the plasma membrane, we stained integrin β1 in SUN1-depleted cells using an anti-integrin β1 mAb, TS2/16 (Sanchez-Madrid et al., 1982), which recognizes both the active and inactive forms of integrin β1. The intensity of total integrin β1 (i.e., the sum of the active and inactive forms) at the plasma membrane was clearly increased in SUN1-depleted cells (Figures 2A,B). In addition, an increased intensity of cell surface integrin β1 was observed following SUN1 depletion both in the non-cancerous breast epithelial cell line MCF10A (Supplementary Figure S3A) and in unfixed cells (Supplementary Figure S4A; 0 min). Western blotting showed a more than 1.5-fold increase in integrin β1 protein expression in the SUN1-depleted cells (Figure 2C). The activation of integrins promotes the recruitment of several adaptor proteins to form submicrometer clusters and trigger FA maturation (Bouvard et al., 2013).
Because the balance between active and inactive integrins is dynamically regulated (Khan and Goult, 2019), we analyzed the active form of integrin β1 using an anti-integrin β1 mAb, HUTS4 (Luque et al., 1996), which recognizes only the active form of integrin β1. Intriguingly, the intensity of cell surface active integrin β1 was decreased in SUN1-depleted cells (Figure 2D). The large HUTS4-positive structures observed in siNC-transfected cells disappeared in siSUN1-transfected cells, and weak filamentous structures were observed instead (Figure 2D). Quantification of the staining intensity showed a significant reduction in active integrin β1 in SUN1-depleted cells (Figure 2E). In addition, the ligand-binding form of integrin β1 in the control cells colocalized with vinculin, and these signals were Triton X-100 resistant (Supplementary Figure S3B). The decreased active integrin β1 in the SUN1-depleted cells was recovered by the expression of siSUN1-resistant mouse SUN1 (Supplementary Figure S3C). Cell adhesion to the ECM activates integrins (Shattil et al., 2010). Because SUN1-depleted cells showed diminished cell surface active integrin β1 (Figures 2D,E), the depletion of SUN1 could attenuate the adhesion activity of the cells. However, contrary to our expectations, SUN1-depleted cells showed slightly but reproducibly increased adhesion activity (Figure 2F). This is in agreement with the upregulation of integrin β1 expression at the plasma membrane (Figures 2A-C) and indicates that impaired adhesion is not responsible for the reduction of activated integrin β1 at the cell surface. We next studied the effect of SUN1 depletion on the intracellular trafficking of integrin β1. Integrins continuously cycle between the plasma membrane and internal compartments, with low lysosomal degradation rates (Lobert et al., 2010; Nader et al., 2016). The amount of active integrin β1 at the plasma membrane is regulated by the rate of endocytosis from the plasma membrane and of recycling from endosomes to the plasma membrane (De Franceschi et al., 2015). Thus, facilitated internalization of integrin β1 from the plasma membrane or its aberrant recycling to the cell surface could decrease cell surface active integrin β1. However, depletion of SUN1 affected neither the internalization efficiency of integrin β1 (Figure 2G) nor the recycling efficiency to the cell surface (Figure 2H). Therefore, these data indicate that SUN1 is not involved in the trafficking of integrin β1. Based on these findings, we assume that the reduction of active integrin β1 in SUN1-depleted cells could be related to the impaired actin cytoskeleton (Figure 1A), because physical forces exerted by actin fibers are transmitted to the cytoplasmic domain of integrins, thus activating them.

Depletion of SUN1 Abrogates the Maturation of Focal Adhesions at the Cytoskeletal Force-Dependent Step

Depletion of SUN1 suppressed the incorporation of vinculin into FAs and the activation of integrin β1. The maturation of FAs involves a stereotypical sequence of protein recruitment (Kuo et al., 2011). The initial stages of adhesion assembly occur in the actin-rich region at the cell periphery. Then, clustering of integrins occurs to form nascent adhesions, a step that is myosin-II independent. Additional FA proteins such as vinculin, phosphorylated paxillin, and FAK are then recruited in a process termed maturation (Zaidel-Bar et al., 2003; Oakes and Gardel, 2014).
To examine which step in FA maturation is disrupted in SUN1-depleted cells, we visualized FAs using antibodies against several FA-resident proteins, such as Tyr397-phosphorylated FAK (FAK-pY397), paxillin, Tyr118-phosphorylated paxillin (paxillin-pY118), and zyxin. FAK is a key tyrosine kinase involved in integrin signaling (Schlaepfer et al., 1999). The binding of integrin to the ECM and its clustering trigger the autophosphorylation of FAK at Tyr397 (Schlaepfer et al., 1999), which is critical for the maturation of FAs but independent of mechanical tension (Horton et al., 2016). Depletion of SUN1 did not influence the FAK-pY397 staining intensity, although the staining pattern was slightly altered (Figure 3A). Moreover, western blotting showed no significant differences in the expression of FAK protein and FAK Tyr397 phosphorylation between SUN1-depleted and control cells (Figure 3B). In addition, SUN1 depletion did not affect the protein expression or staining intensity of paxillin (Figures 3B,C), a scaffold protein in FAs that is recruited to newly formed adhesions in a tension-independent manner. In contrast, the level of paxillin-pY118, which depends on intracellular traction force (Pasapera et al., 2010), moderately decreased in the SUN1-depleted cells (Figures 3B-D). Moreover, zyxin staining largely disappeared in SUN1-depleted cells, whereas zyxin staining patterns in control cells showed a relatively large dotted distribution (Figures 3E,F). The decreased zyxin signal in the SUN1-depleted cells was rescued by the expression of siSUN1-resistant mouse SUN1 (Supplementary Figure S3C). Because intracellular traction forces drive FA growth and the recruitment of FA proteins such as zyxin (Zaidel-Bar et al., 2003), these results suggest a reduction in intracellular forces in SUN1-depleted cells.

FIGURE 3 | SUN1 depletion suppresses FA maturation. (A) Cells were transfected with siSUN1 or siNC. Next, the cells were fixed and stained with anti-Tyr397 phosphorylated FAK mAb. Scale bar, 10 μm. (B) Cells were transfected with siSUN1 or siNC. Afterward, the cell lysate was analyzed by western blotting using anti-FAK, anti-FAK-pY397, anti-paxillin, anti-paxillin-pY118, and anti-β-actin mAbs. (C) Cells were transfected with siSUN1 or siNC and stained with anti-paxillin or anti-paxillin-pY118 mAb. Scale bar, 10 μm. (D) The integrated density of the Tyr118 phosphorylated paxillin staining was quantified using the ImageJ software. The values represent the mean ± standard deviation (SD). *p < 0.05 compared with siNC-transfected cells. (E) Cells were transfected with siSUN1 or siNC. Next, the cells were fixed and stained with anti-zyxin mAb. Scale bar, 10 μm. (F) The integrated density of the zyxin staining was quantified. The values represent the mean ± standard deviation (SD). **p < 0.01 compared with siNC-transfected cells.

SUN1 is Involved in the Generation of Intracellular Forces

Depletion of SUN1 did not prevent the recruitment of inactive integrin β1 and vinculin at the plasma membrane or inhibit the autophosphorylation of FAK at Tyr397. In contrast, the loss of SUN1 reduced the number of FAs that contain vinculin and active integrin β1. These results suggest that the depletion of SUN1 suppresses the force-dependent step of FA maturation. Thus, to investigate the intracellular forces, we first visualized the nuclear localization of a transcriptional co-regulator, Yes-associated protein (YAP). Because YAP enters the nucleus in an intracellular force-dependent manner in several cell types including HeLa (Dupont et al., 2011; Finch-Edmondson and Sudol, 2016), it can be used as an indicator of intracellular forces. As reported previously (Panciera et al., 2017), YAP proteins were mostly present in the nucleus in control cells cultured on a glass coverslip (Figure 4A, left panel) because of the rigidity of glass as a substrate. In contrast, the depletion of SUN1 decreased the YAP signal in the nucleus and increased it in the cytoplasm (Figure 4A, right panel). The average staining intensity of YAP in the SUN1-depleted nucleus was decreased compared with that in siNC-transfected cells (Figure 4B), supporting a reduction in cytoskeletal forces. Next, we directly assessed the effects of SUN1 depletion on the generation of traction force, which can be assayed by assessing the ability of cells to induce the formation of wrinkles on deformable silicone substrates (Kang et al., 2020), because the contractile forces within a cell correlate with the length of the wrinkles (Burton and Taylor, 1997; Fukuda et al., 2017). The control cells exhibited extensive wrinkles on the substrate, whereas the loss of SUN1 suppressed the formation of wrinkles, indicating significantly decreased contractile forces in SUN1-depleted cells (Figures 4C,D). Because phase-contrast images showed differences in cell spreading between the SUN1-depleted and control cells (Figure 4C), the areas of cell spreading were quantified. The SUN1-depleted cells on the silicone substrates showed less spreading than the control cells; the SUN1-depleted cells on glass coverslips also spread less, but the effect was moderate (Supplementary Figure S5), suggesting a possible issue with mechanosensing.

DISCUSSION

In the present study, we demonstrated that the loss of SUN1 increases actin ruffling at the cell periphery and decreases cytoplasmic F-actin. In addition, loss of SUN1 weakened the intracellular forces under normal growth conditions. The maturation of FAs in SUN1-depleted cells was impaired at the force-dependent steps, such as the activation of integrin β1 and the incorporation of vinculin and zyxin. In contrast, the loss of SUN1 did not affect the levels of FAK phosphorylation at Tyr397 or the recruitment of a Triton X-100-soluble form of vinculin to the plasma membrane, both of which occur at the early stage of FA formation in a force-independent manner. Based on these results, we propose a model of how the inner nuclear membrane protein SUN1 participates in the maturation of FAs and cell migration under normal growth conditions (Figure 4E). Cells produce a contraction force through the actin cytoskeleton that connects FAs and the LINC complex (Figure 4E, upper panel). The depletion of SUN1 perturbs the proper actin organization, thereby abrogating the generation of contraction force (Figure 4E, lower panel). This effect on the actin cytoskeleton suppresses the maturation of FAs and cell migration. Therefore, the LINC complexes are critical not only for transmitting mechanical information from the cytoplasm to the nucleus but also for force-dependent cytoplasmic functions, such as the maturation of FAs.
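For reference, the wrinkle quantification behind Figures 4C,D, described in the Methods as a two-dimensional FFT, a band-pass filter, and skeletonization, can be sketched as below. This is an illustrative reconstruction in Python/scikit-image, not the custom Fiji program used in the study; the cutoff frequencies and the binarization rule are assumed values.

```python
# Illustrative reconstruction of the traction-index pipeline described in the
# Methods (2-D FFT band-pass, skeletonization, skeleton pixel count). This is
# NOT the authors' custom Fiji program; the cutoffs and threshold are assumed.
import numpy as np
from skimage.morphology import skeletonize

def traction_index(img: np.ndarray, low: float = 0.02, high: float = 0.2) -> int:
    """Return the total wrinkle length (skeleton pixel count) for one cell image."""
    # Band-pass filter in Fourier space to keep wrinkle-scale features only.
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    ny, nx = img.shape
    yy, xx = np.ogrid[:ny, :nx]
    # Normalized radial frequency of each Fourier component.
    r = np.hypot((yy - ny / 2) / ny, (xx - nx / 2) / nx)
    f[(r < low) | (r > high)] = 0
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    # Binarize the filtered image and skeletonize the wrinkles into lines.
    mask = filtered > filtered.mean() + 2 * filtered.std()
    skeleton = skeletonize(mask)
    # Traction index = number of skeleton pixels (total wrinkle length).
    return int(skeleton.sum())
```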
In addition, the findings of the present study have two important implications for our understanding of how the LINC complex functions in diverse physiological and pathological processes. First, SUN1 is an essential factor in actin organization and intracellular traction force. This is consistent with previous data showing that disruption of nesprin-1 or nesprin-2 alters the actin cytoskeleton and the ability to generate traction force (Chancellor et al., 2010; Lombardi et al., 2011; Woychek and Jones, 2019). Woychek and Jones (2019) showed that nesprin-2G knockout reduced the ability of fibroblasts to exert traction force on their substrates relative to control cells. In addition to these data, we found that the loss of SUN1 perturbed actin organization in HeLa cells and reduced their ability to generate traction forces on their substrates, suggesting that SUN1/nesprin-2G-containing LINC complexes are key regulators of actin cytoskeletal organization. Moreover, with regard to the LINC complex-associated cytoskeleton, it has been shown that SUN1-containing LINC complexes preferentially interact with microtubules, whereas SUN2-containing LINC complexes preferentially interact with actin networks during the homeostatic positioning of nuclei in fibroblasts (Zhu et al., 2017). In addition, TAN lines are identified by the accumulation of nesprin-2G and SUN2 along the perinuclear actin cables on the dorsal nuclear surface of fibroblasts (Luxton et al., 2010). In contrast to these reported preferences of SUN1- and SUN2-containing LINC complexes for microtubules and actin, respectively, our data demonstrate a critical function of SUN1 in actin organization, suggesting that SUN1-containing LINC complexes function differently, in association with actin or microtubules, in different cellular contexts. The present study highlights the differential functions of the SUN1 and SUN2 proteins, although both proteins promiscuously interact with nesprins to form the LINC complex (Padmakumar et al., 2005; Crisp et al., 2006; Ketema et al., 2007; Stewart-Hutchinson et al., 2008; Ostlund et al., 2009).

FIGURE 4 | SUN1 is involved in the generation of intracellular forces. (A) Cells were transfected with siSUN1 or siNC and then stained with anti-YAP mAb. Scale bar, 10 μm. (B) The average staining intensity of YAP was measured (n > 60). The values represent the mean intensity ± standard deviation (SD). ***p < 0.001 compared with siNC-transfected cells. (C) The wrinkle formation assay was performed using SUN1-knockout HeLa cells (Nishioka et al., 2016). In this assay, the contractile forces are visualized by wrinkle formation. (D) Quantification of the traction force-driven wrinkles (the number of pixels associated with the wrinkles) per individual cell was performed. Every test was conducted as a paired experiment to determine the relative contribution of SUN1 expression to the generation of cellular contractile forces. Data represent the mean ± standard deviation (SD). ***p < 0.001 compared with control (parental wild-type) cells. (E) Working model for the regulation of intracellular forces and FA maturation by SUN1. Control cells produce contractile force, mature their focal adhesions, and migrate normally (upper panel). In contrast, SUN1 depletion affects actin organization, and the SUN1-depleted cells do not produce contractile force, show suppressed FA maturation, and do not migrate (lower panel).
Second, the results of this study suggest a contribution of the loss of the LINC complex to cancer progression. We have previously reported the global loss of LINC complex components, including SUN1 and nesprin-2, in human breast cancer tissues (Matsumoto et al., 2015). Analysis using The Cancer Genome Atlas (TCGA) and the Genotype-Tissue Expression (GTEx) datasets showed downregulated expression of SUN1 and SUN2 across tumor types (Sharma et al., 2021). In addition, a study of 3,000 cancer genomes across nine cancer types identified mutations in the SYNE-1 gene encoding nesprin-1 as "drivers" in the development of cancer (Cheng et al., 2015). However, how the loss or mutation of LINC complex components affects cancer progression has remained elusive. In this study, the expression of integrin β1 was significantly enhanced in SUN1-depleted MCF10A and HeLa cells. Altered expression of integrins is frequently observed in tumor cells and is associated with poor clinical outcomes and cancer progression (Hamidi and Ivaska, 2018). For instance, integrins function in oncogenic growth factor receptor signaling, facilitate anchorage-independent survival of circulating tumor cells, and determine the colonization of metastatic sites. Upregulated integrin β1 in leading cells plays a role in collective cell migration, a common feature of metastatic cancer cells (Kato et al., 2014). Overexpression of integrins is associated with increased formation of metastases in several tumors (Yoshimasu et al., 2004; Tsuji et al., 2002; Sordat et al., 2002). In addition, upregulated integrin β1 could be involved in LINC-independent cell migration (Fracchia et al., 2020). Thus, these effects of the increased expression of integrin β1 in SUN1-depleted cells may contribute to cancer progression. The LINC complex has been shown to transfer mechanical stresses from the cytoskeleton to the genome to regulate gene expression (Chancellor et al., 2010; Simon and Wilson, 2011; Rashmi et al., 2012; Alam et al., 2016), possibly through chromatin remodeling (Iyer et al., 2012; Booth et al., 2015; Toh et al., 2015), dissociation of protein complexes inside the nucleus (Poh et al., 2012), and motion of intranuclear organelles, although the underlying molecular mechanism of transcriptional regulation by the LINC complex is largely unknown. The increased expression of integrin β1 protein in SUN1-depleted cells is likely caused by transcriptional regulation, because it is widely accepted that lysosomal degradation of integrin β1 is prevented under normal growth conditions (Böttcher et al., 2012; Steinberg et al., 2012). Integrin β1 is encoded by the ITGB1 gene, and its expression is regulated by myocardin-related transcription factor (MRTF)-A (also called Mkl1, Bsac, or Mal) and MRTF-B (also called Mkl2), which are transcriptional cofactors that work with serum response factor (SRF) (Miano et al., 2007). Studies conducted using mouse models have reported that the loss of SUN1 increases the expression of SUN2 (Chen et al., 2012; Wang et al., 2015), which is also regulated in an SRF-dependent manner (May and Carroll, 2018). Thus, depletion of SUN1 may activate the expression of ITGB1 via MRTF-A or MRTF-B. Altogether, the present study sheds light on the contribution of SUN proteins to transcriptional regulation.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
ACKNOWLEDGMENTS

We are grateful to Hiromasa Imaizumi (Kawasaki University of Medical Welfare) for valuable discussions. We thank Hiroshi Kimura (Tokyo Institute of Technology) for the gift of the anti-histone H3 mAb. We would like to thank Editage (www.editage.com) for English language editing.
An Image Processing Approach based on GNU Image Manipulation Program GIMP to the Panoramic Radiography

We have recently proposed, in a paper published by the International Journal of Sciences, the use of some freely available image processing tools to enhance images of the fundus of the eye. In particular, we discussed the use of GIMP, the GNU Image Manipulation Program, and of some wavelet filters and fractional gradient methods from other image processing programs. Here we propose using GIMP to enhance the images given by panoramic radiography. This approach can produce an output image that helps in detecting faint details in such radiographic plates, details which are quite important for dental treatments. Some case studies are discussed.

Introduction

Medical imaging is a sub-discipline of biomedical engineering, a new and rapidly evolving interdisciplinary field which aims at filling the gap between biology and several disciplines of engineering and applied science. It is therefore a branch of applied science, mainly oriented to medicine for both diagnostic and therapeutic purposes. In particular, medical imaging is devoted to the study of noninvasive techniques that aim at obtaining images of internal aspects of the body. Among the well-known techniques of medical imaging, we have radiology, ultrasonography, and magnetic resonance. These techniques comprise both technical aspects of data acquisition and problems connected with diagnostic interpretation.

Imaging and image processing turn out to be valuable means to infer some properties of biological structures from the corresponding observed signals [1]. Sometimes, the raw images obtained from diagnostic equipment need improvement. There are many resources useful for processing images, most of them freely available and quite friendly to use, which can help the user separate the objects relevant to a given study from the background of the image. In a recent paper [2], for instance, we proposed the use of some tools to enhance images of the fundus of the eye. In particular, we discussed the use of GIMP, the GNU Image Manipulation Program, and the use of wavelet filters and fractional gradient tools from other image processing programs [3,4]. Here we propose GIMP to enhance the images given by panoramic radiography. This approach can produce an output image which helps the detection of faint details in such radiographic plates. Some case studies are discussed.

GIMP software

GIMP is a free and open-source software package used for processing images and for free-form drawing. It is useful for resizing, cropping, and converting images between different formats, and it also covers several more specialized tasks. GIMP is designed to be augmented with plug-ins and extensions, which can improve its functionality.
According to the GIMP user manual, any image can be edited by considering it as made of many layers in a stack. A GIMP image is thus like a stack of transparencies, where each transparency is a layer. Each layer in an image is made of several channels. In RGB images there are normally three channels: red, green and blue. Colour sublayers look like slightly different grey images; when put together, they make a complete image. A fourth channel can exist, the alpha channel, which measures the opacity of the image. A toolbox allows access to the tools available for image editing. Among them we find filters and brushes, as well as transformation, selection, layer and masking tools. As concerns colours and grey-tones of images, we can adjust brightness and contrast, and also change them with the Curves tool. Gradients are also integrated into the toolbox: there are a number of default gradients included with GIMP, such as Laplace and Sobel, suitable for edge detection (approximated in the code sketch below). Moreover, GIMP has more than a hundred standard effects and filters, including those supporting sharpening and blurring of images. Some of them can be found in the GIMP Auto submenu. This submenu contains operations which automatically adjust the distribution of grey-tones, without requiring any input from the user. We can "Stretch Contrast", "Stretch HSV" and "Normalize" the histogram (the reader can find more details in the tutorials at http://www.gimp.org).
GIMP supports importing and exporting a large number of different file formats; GIMP's native format is designed to store all the information GIMP can hold about an image. The software supports image formats such as BMP, JPEG, PNG, GIF and TIFF. Other formats with read/write support include PostScript documents and X bitmap images. It can import Adobe PDF documents and the raw image formats used by many digital cameras, but cannot save to these formats.
Panoramic radiography
The panoramic radiography of dental arches is a procedure that produces a single image of the teeth, the upper and lower jaws and the jawbones. To obtain the projection of the dental arches, which are curvilinear structures, it is necessary to use X-ray techniques based on a tube rotating about the patient's head. Let us note that panoramic radiography is a form of tomography and that these techniques can be compared [5]. In panoramic radiography, images of multiple planes are recorded to produce a composite final image. The maxilla and mandible are in focus, whereas structures that are superficial or deep are blurred.
Panoramic radiography is an essential element in oral radiology and dentistry [6][7][8]. Its principle was described in 1922, but the first commercially available machines date from the early 1960s [9]. Today, such radiographic devices, which are fundamental for an initial assessment of the state of the teeth prior to a dental treatment, are rather common. After a panoramic image, the dentist can perform a more targeted intraoral radiography. Panoramic plates are also useful to evaluate the state of dentition in individuals of developmental age, and to highlight any irregular or impacted teeth and bone lesions. Moreover, they can reveal inflammatory problems or cystic tumours.
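Since the Sobel gradient and the automatic grey-tone adjustments mentioned above are standard image-processing operations, a minimal NumPy/SciPy sketch of their effect is given below as an orientation for the case studies that follow; it is an approximation of the corresponding GIMP tools, not GIMP's own implementation, and the file names are hypothetical.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import convolve

# Load a panoramic radiograph as a greyscale array in [0, 255].
# "panoramic.png" is a hypothetical file name.
img = np.asarray(Image.open("panoramic.png").convert("L"), dtype=float)

# Sobel kernels for horizontal and vertical intensity gradients.
kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
ky = kx.T

# The gradient magnitude approximates GIMP's Sobel edge detection.
gx = convolve(img, kx)
gy = convolve(img, ky)
edges = np.hypot(gx, gy)

# "Stretch Contrast": rescale grey-tones to span the full [0, 255] range.
stretched = (edges - edges.min()) / (edges.max() - edges.min() + 1e-9) * 255.0

# Inversion of grey-tones, as used for the figures with inverted tones.
inverted = 255.0 - stretched

Image.fromarray(inverted.astype(np.uint8)).save("panoramic_edges_inverted.png")
```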
In Figure 1, upper panel, we can see a basic panoramic radiograph image. The image is a courtesy of the Coronation Dental Specialty Group on Wikipedia. In the lower panel of the same figure, we can see the output of the Sobel filter for edge detection of GIMP. This filter detects the edges of objects in the image: in this manner the endodontic, periodontal and coronal-radicular details are enhanced. Let us stress that a sequence of panoramic images can be interesting for recording the evolution of teeth and planning their treatments. This is important in particular for teeth that have undergone endodontic treatments, such as root canal treatments, in their medical history [10].
In Figure 2 we can see how the panels of Figure 1 look after an inversion of colour tones made by GIMP. In the lower panel of Figure 2, we can see a clear enhancement of the details of the roots of the teeth. Panoramic images are also used for the mixed dentition, as shown in Figure 3 (courtesy Coronation Dental Specialty Group on Wikipedia): in it, we can see the wisdom teeth buds. The image is processed using the Sobel filter; the result is shown in the lower panel with grey-tones inverted.
In the following, we discuss some case studies where specific tools of GIMP are applied to enhance the details in images.
Cysts of the jaws
The bones of the jaws, mandible and maxilla, are the bones of the human body with the highest prevalence of cysts. Since these cysts rarely cause any symptoms [11], most are discovered during panoramic radiography. In the X-ray images, cysts appear as radiolucent (dark) areas, that is, areas permitting the passage of radiant energy, with radiopaque (white) borders [12]. In Figure 4, in fact, we see a cyst with the borders enhanced by the Sobel filter of GIMP. Let us emphasize that, with GIMP, we can select an area and process just the selected region.
Stafne defects
A Stafne defect, also known as a Stafne bone cyst, is a depression of the mandible on the lingual surface, that is, the side nearest the tongue. The Stafne defect is thought to be a normal anatomical variant and does not represent a pathologic lesion. This defect is usually discovered by chance during routine dental radiography [13,14]. In Figure 5 we can see a panoramic radiograph showing a Stafne defect in the right mandible, below the inferior alveolar nerve canal. The image is a courtesy of the Coronation Dental Specialty Group on Wikipedia. After selecting the part of the image containing the defect, we can apply the GIMP Auto submenu, which contains operations that automatically adjust the distribution of grey-tones without requiring any input from the user. Here we show the results of "Stretch Contrast", "Stretch HSV" and "Normalize". Using these automatic filters we can obtain some information on the density of the bone near and inside the defect.
Pattern of mental nerve
As discussed in References 15 and 16, the pattern of entry of the mental nerve into the mental foramen is an important pre-surgical landmark in mandibular premolar regions. As panoramic radiographs are routinely used in pre-surgical evaluation, the researchers in [16] undertook a study to evaluate the reliability of panoramic X-ray machines for determining the location of the mandibular foramen. The study revealed that the most common pattern of entry of the mental nerve was a straight one (about 79% of the total radiographs examined).
The researchers in [16] report that panoramic radiography may not be a very reliable imaging modality for identifying the presence of an anterior loop of the nerve, a condition which needs to be determined before planning surgical procedures. In any case, we can try to enhance the images of the nerve and mental foramen with GIMP. In Figure 6, we can see the mandibular canal after applying the GIMP Auto submenu operations "Equalize", "White Balance" and "Stretch Contrast". In Figure 7, we can see an example of image enhancement obtained by GIMP Retinex. This tool improves the visual rendering of an image when lighting conditions are not good. The algorithm at the root of the Retinex filter, the MultiScale Retinex with Color Restoration algorithm, is inspired by the biological mechanisms by which the eye adapts itself to such conditions. The result of the Retinex filter can be adjusted by selecting different levels, scales and dynamics. In Figure 8, we have another example of the use of Retinex. Retinex can also be applied to the whole image, as proposed in Figure 9: the enhancement of the mental canals in the output image is evident.
Conclusion
In this paper we have proposed the use of an image processing program to enhance the images obtained by means of X-ray panoramic radiography. Let us remark that several other studies on image processing for panoramic radiography are available in the literature. For instance, image processing can be used to obtain a panoramic X-ray device suitable for complete maxillofacial diagnoses, extending therefore the diagnostic coverage of panoramic images [17]. In [18], image processing is used to enhance the images when a reduction of the radiation dose is required, and in [19], an appropriate approach for the robust estimation of noise statistics in dental panoramic X-ray images is given. Here, aiming to reach a large audience, we have proposed the use of a program, GIMP, which is freely available on the web. In the paper we have shown that GIMP has several tools which can be quite useful to enhance and investigate the details of panoramic images. Some of them are able to automatically adjust the distribution of grey-tones without requiring any input from the user. These are, of course, the simplest to use in preliminary investigations. In the analysis of the mental canal and foramen, Retinex seems to be the tool providing the best results.
Figure 1 - In the upper panel, we can see a basic panoramic radiograph showing impacted wisdom teeth in a 16 year old. The image is a courtesy of the Coronation Dental Specialty Group on Wikipedia. In the lower panel, we can see the image obtained using the Sobel filter for edge detection of GIMP.
Figure 2 - In the upper panel, we can see the panoramic radiograph of Figure 1 with inverted color tones, as obtained by GIMP. The same for the lower panel, showing the edge detection. Note the details of the roots of the teeth.
Figure 3 - In the upper panel we can see a panoramic radiograph showing the mixed dentition of a nine year old child (courtesy Coronation Dental Specialty Group on Wikipedia). We can also see the wisdom teeth buds. The image is processed using the Sobel filter; the final result is proposed with the grey-tones inverted.
Figure 6 - Panoramic radiographs are routinely used in pre-surgical evaluation. In them, the pattern of the mandibular canal and the mental foramen are important landmarks. We can try to enhance the related images using GIMP; here, an example from the panoramic image of Figure 5, a courtesy of the Coronation Dental Specialty Group on Wikipedia. We can see the mandibular canal. After selecting the part of the image containing this canal, we can apply the GIMP Auto submenu. Here we used "Equalize", "White Balance" and "Stretch Contrast".
Figure 7 - To enhance the pattern of the mandibular canal and to see the mental foramen, we can use another GIMP tool, the "Retinex" tool. Retinex improves the visual rendering of an image when lighting conditions are not good. The algorithm at the root of the Retinex filter, the MultiScale Retinex with Color Restoration algorithm, is inspired by the biological mechanisms by which the eye adapts itself to such conditions. In this figure, Retinex is applied to the whole image (middle) and to a part of it (bottom). The original image (top) is that of Figure 1, a courtesy of the Coronation Dental Specialty Group on Wikipedia.
Figure 8 - Another image of the mandibular canal and mental foramen. We can see the different results obtained by the GIMP Auto submenu for "Equalize" and "White Balance", and by the "Retinex" GIMP tool. The original image is a courtesy of Wikipedia user Werneuchen.
Figure 9 - This is the panoramic image of Figure 1, as we can see it after applying the "Retinex" GIMP tool.
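As a complement to the case studies, the Retinex enhancement used in Figures 7-9 can be approximated outside GIMP with the usual multiscale Retinex formulation: the log ratio of the image to Gaussian-blurred copies of itself, averaged over several scales. The sketch below is a single-channel simplification; the scales and file names are illustrative assumptions, and GIMP's own filter additionally performs color restoration.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """Average of log(image / Gaussian-blurred image) over several scales."""
    img = img.astype(float) + 1.0  # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, sigma=s) + 1.0)
    out /= len(sigmas)
    # Rescale the result back to displayable 8-bit grey-tones.
    out = (out - out.min()) / (out.max() - out.min() + 1e-9) * 255.0
    return out.astype(np.uint8)

# "panoramic.png" is a hypothetical input file.
grey = np.asarray(Image.open("panoramic.png").convert("L"))
Image.fromarray(multiscale_retinex(grey)).save("panoramic_retinex.png")
```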
2018-12-11T07:56:26.883Z
2015-05-30T00:00:00.000
{ "year": 2015, "sha1": "ba4a7520538e7ca6f406e42f34ea15b5a03fc83c", "oa_license": "CCBY", "oa_url": "https://www.ijsciences.com/pub/pdf/V4201505721.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ba4a7520538e7ca6f406e42f34ea15b5a03fc83c", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
258381869
pes2o/s2orc
v3-fos-license
Type II pleuropulmonary blastoma mistaken for rhabdomyosarcoma: A case report
Introduction: Pleuropulmonary blastoma (PPB) is rare, representing 0.3 % of all pediatric cancers. PPB is classified into three subtypes and may progress from type I to types II and III, which carry a worse prognosis. Given its rarity, the diagnosis is frequently challenging. Case presentation: We report an occurrence of PPB in a 3-year-old girl who presented with recurrent pneumopathy. Imaging investigations revealed a large solid lesion in the left hemithorax. Biopsy followed by histological analysis suggested rhabdomyosarcoma. The patient received neoadjuvant chemotherapy before proceeding to complete tumor excision. Surgical exploration revealed that the tumor was primarily related to the parietal pleura and the lower lobe of the left lung. Histopathology of the tumor retained a definitive diagnosis of PPB type II. The postoperative course was uneventful, and a cerebral MRI ruled out brain metastasis. Adjuvant chemotherapy was administered. Discussion: The clinical expression of PPB is nonspecific and variable, ranging from a dry cough to respiratory distress. Standard radiography is the first examination to perform, and CT is the gold standard for characterizing thoracic masses. Surgery and chemotherapy are the pillars of treatment; the indications depend on the tumor type, its extent, and its resectability. Conclusion: PPB is an aggressive tumor that occurs only in children. Due to the rarity of PPB, evidence on optimal treatment is still insufficient. Careful follow-up is necessary to search for local recurrence or metastasis.
Introduction
Pleuropulmonary blastoma (PPB) is a rare and aggressive tumor that exclusively affects children [1]. Histologically, PPB includes a combination of blastoma and mesenchymal components with no malignant epithelial tissue. Three types of PPB can be distinguished based on the morphology of the tumor: type I, purely cystic; type II, a cystic and solid tumor; and type III, purely solid [2]. Progression from type I to type II and then III is possible, which is a particularity of PPB; the incidence of progression has been reported to be approximately 10 % [3]. The diagnostic and therapeutic management of PPB remains challenging given its rarity, unspecific clinical presentation, and severe prognosis. Here, we report a case of PPB in a 3-year-old girl who presented with a one-month history of recurrent pneumopathies. Our work is reported in accordance with the SCARE criteria [4]. Our aim was to highlight the diagnostic difficulties of PPB for both clinicians and pathologists.
Case report
A 3-year-old girl had been experiencing non-improving respiratory tract symptoms for one month, namely a non-productive cough and recurrent bronchial infections, despite receiving symptomatic treatment and antibiotics. There was no significant family history. The patient had no underlying medical illness, and there was no prenatal diagnosis of a lung mass or malformation. Physical examination revealed an altered general state with an absence of breath sounds at the left lung base, and the patient's oxygen saturation was 87 % on room air. There were no focal neurological deficits or fever. A chest X-ray showed an opaque mass in the left lung with contralateral mediastinal deviation (Fig. 1). A chest computed tomography (CT) scan was performed, revealing a large mass that almost completely occupied the left hemithorax and displaced the mediastinum to the right. There was a pleural effusion but no evidence of costal erosion or diaphragm invasion (Fig. 2).
Scan-guided needle biopsy was performed, and the subsequent histological analysis suggested rhabdomyosarcoma (RMS). The remaining staging ruled out other tumor localizations. The patient received 4 cycles of neoadjuvant IVA chemotherapy (ifosfamide, vincristine, and dactinomycin), which resulted in a satisfying response, as evidenced by a 75 % reduction in the mass observed on the control chest CT (Fig. 3). Surgical exploration, through an open left thoracotomy, revealed that the tumor was primarily related to the parietal pleura and the lower lobe of the left lung, occupying a quarter of it. Careful mobilization of the tumor through the fifth intercostal space was performed. En-bloc wedge resection was possible, without requiring resection of the entire lobe. The tumor was macroscopically completely resected, including the parietal pleura from the thoracic wall. The surgical specimen weighed 49 g and measured 9 × 5 × 3 cm, appearing both solid and cystic upon sectioning. The solid areas had a brain-like appearance, while the necrotic areas were whitish in color (Fig. 4A). The final histological examination of two entire slices indicated the presence of a viable malignant tumor proliferation at 60 %, comprising both cystic and solid components. The sarcomatous component was of rhabdomyosarcomatous and chondrosarcomatous types. The definitive diagnosis was confirmed as type II PPB, which was completely removed (Fig. 4B). The postoperative course was uneventful, and a cerebral MRI ruled out brain metastasis. Adjuvant chemotherapy was administered (ifosfamide, vincristine, actinomycin D, and doxorubicin) and was well tolerated. A follow-up chest CT scan confirmed the absence of tumor residue and complete regression of the pleural effusion (Fig. 5). Due to lack of funds, the patient was not tested for DICER1 gene mutations.
Discussion
The occurrence of primary pulmonary tumors in children is rare, accounting for only 0.3 % of all pediatric cancers [5]. Among pulmonary malignancies in children, metastatic lesions are the most frequently encountered [1]. Metastatic lung tumors in children are more likely to originate from Wilms tumor or osteosarcoma than from other types of cancer, such as RMS, hepatocellular carcinoma, hepatoblastoma, or Ewing's sarcoma [1]. Primary pulmonary malignancies in children are uncommon, and PPB represents <1 % of these cases [6]. PPB was first described by Manivel et al. in 1988 [7]. In 1997, Priest et al. classified PPB into three types based on its histological features: cystic tumor (type I), mixed cystic and solid tumor (type II), and pure solid tumor (type III) [8]. Type I PPB has the best prognosis among the three types, with a 5-year disease-free survival rate of 80-90 %. It is important to note that all reported deaths associated with type I occurred with progression to type II or III, further emphasizing the importance of early detection and treatment [3]. Histologically, PPB is characterized by a mixture of blastemal islands with high mitotic activity and areas of undifferentiated loose mesenchymal spindle cells. Initially, due to the presence of rhabdomyoblastic features, particularly in PPB type III, PPB was thought to represent an RMS of the lung [9,10]. PPB remains the most well-known DICER1-related malignant tumor, with up to 80 % of patients with PPB carrying a DICER1 mutation [11].
DICER1 mutations have been associated with an increased risk of several other types of tumors, including genitourinary embryonal RMS, Wilms tumor, anaplastic sarcoma of the kidney, Sertoli-Leydig cell tumors of the ovary, and others [10]. The wide spectrum of associated tumors underscores the importance of recognizing DICER1 syndrome in pediatric patients with unusual or multiple tumor types. Screening for DICER1 mutations does not have prognostic value [11,12]. PPB typically occurs in young children, with type I occurring in children younger than 3 years old, type II in children aged 3 to 6 years, and type III in children older than 6 years [2]. However, there have been a few reported cases of PPB in adolescence and young adulthood [3,13,14]. According to the literature, there have only been five reported cases of prenatal diagnosis, making it an exceptionally uncommon occurrence [15]. The clinical manifestation of PPB is nonspecific and can differ significantly. Symptoms can vary from a dry cough to respiratory distress. A non-productive cough, as presenting in our case, was reported by Bownes et al. [3]. Increasing breathlessness and respiratory distress were reported, respectively, in [6] and [14]. If a patient experiences recurring upper respiratory tract infections that do not respond to initial treatment, it could indicate the possibility of PPB. It is worth noting that PPB may also be discovered incidentally [16]. Standard radiography is the initial examination performed to investigate respiratory symptoms and can lead to the discovery of these tumors. This imaging technique typically reveals a large opacity, accompanied by a contralateral mediastinal deviation. Due to its significant size, determining the origin of this opacity can be challenging. X-rays may also reveal the presence of a liquid pleural effusion, pneumothorax, or hydropneumothorax [16]. CT is considered the gold standard for characterizing thoracic masses. The appearance of the mass on CT varies depending on the type of tumor [6]: type I tumors present as uni- or multilocular cystic masses with air present within the cysts; type II tumors appear as multilocular masses with solid portions; and type III tumors appear as solid masses with varied density and enhancement. PPB shows distant metastasis only in association with types II and III, and it has a propensity to metastasize to the brain, spinal cord, and bone [15]. PPB can display a variety of mesenchymal components and may show differentiation into cartilage, rhabdomyoblasts, or fibroblasts. In the current case, the biopsy might have captured only a portion of the mesenchymal component of the tumor, leading to an incorrect diagnosis of RMS. Although immunohistochemical staining (IHC) may not have much diagnostic value in pleuropulmonary blastoma, the diagnosis can be made on histopathological examination alone [17]. In the case of type II and type III PPB, the differential diagnosis often includes solid tumors commonly seen in pediatric patients, such as neuroblastoma, Ewing's sarcoma, RMS, and inflammatory myofibroblastic tumor. Infantile fibrosarcoma is another potential differential diagnosis, but this type of tumor occurs exclusively in neonates and infants [18]. The mainstays of treatment for PPB are surgery and chemotherapy, and the indications for each depend on the tumor type, extent, and resectability. Radical surgical resection is essential and may include cystectomy, segmentectomy, lobectomy, or pneumonectomy.
For patients with type I PPB, surgical resection alone can be curative, but adjuvant chemotherapy should be added if there is tumor spill, incomplete resection, or local invasion of adjacent structures [19]. Patients with type II and type III PPB require both systemic chemotherapy and surgical resection. In cases of large type II or III PPB with extensive pleural spread, extrapleural pneumonectomy may be required to achieve local control. Neoadjuvant chemotherapy can be used to optimize local control with radical surgery for large tumors exceeding 10 cm [20]. In the present case, neoadjuvant chemotherapy was used to minimize the radicality of the surgical approach, so local control was achieved with a simple wedge resection. Systemic chemotherapy is recommended for patients with type II or III PPB, and the regimen includes ifosfamide, vincristine, actinomycin-D, and doxorubicin [3]. There has been no demonstrated benefit of radiation therapy for patients with unresectable residual primary tumors [3].
Conclusion
Our case highlights the importance of correctly diagnosing PPB, a rare malignant tumor in children with variable clinical and radiological presentations that can mimic other solid tumors. Pathological confirmation is crucial for accurate diagnosis and appropriate treatment planning. Although surgery and chemotherapy are the mainstays of treatment, the optimal approach is not well established owing to the rarity of this tumor. Therefore, more research is needed to determine the best management strategies for PPB.
Consent
Written informed consent was obtained from the patient's parents/legal guardian for publication of this case report and accompanying images.
Ethical approval
Ethical approval was given by the Charles Nicolle Teaching Hospital Ethics Committee, Tunis, Tunisia.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
2023-04-29T15:12:37.571Z
2023-04-26T00:00:00.000
{ "year": 2023, "sha1": "750a3f16185f715e2839cad0a9a39bf243479eb2", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "cb1747f2c653b9cbbbf91f4287c47e0a53996ae0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
119132847
pes2o/s2orc
v3-fos-license
Invariants of links from the generalized Yang-Baxter equation
Enhanced Yang-Baxter operators give rise to invariants of oriented links. We expand the enhancing method to generalized Yang-Baxter operators. At present two examples of generalized Yang-Baxter operators are known, and recently three types of variations for one of these were discovered. We present the definition of enhanced generalized YB-operators and show that all known examples of generalized YB-operators can be enhanced to give corresponding invariants of oriented links. Most of these invariants are specializations of the polynomial invariant $P$. Invariants from generalized YB-operators are multiplicative after a normalization.
Introduction
Solutions to the Yang-Baxter equation are called Yang-Baxter operators and give rise to representations of braid groups in a canonical way. It is well known that every oriented link can be obtained by closing some braid. Based on this relation between braids and links, V. G. Turaev introduced the enhanced Yang-Baxter operators (briefly, EYB-operators) and defined an isotopy invariant T_S of oriented links from each EYB-operator S in [Tu]. In [RZWG] the generalized Yang-Baxter operator (briefly, gYB-operator) was proposed and a whole family of (2, k, 2^t)-type gYB-operators was defined. We discuss a (2,3,2)-type gYB-operator as one of the main examples. Another example of a gYB-operator, of (2,3,1)-type, appeared in [GHR], and three families of its variations were discussed in [Ch]. In this paper we generalize Turaev's enhancing method to obtain isotopy invariants of oriented links from the examples of gYB-operators. Note that from any ribbon category we have a link invariant for each object. If there is a generalized localization in the sense of [GHR], then some enhancement should exist, and it is reasonable to expect that the corresponding invariant recovers the one defined directly from the category.
Here are the contents of this paper in more detail. In section 2 we recall the gYB-operator and consider the enhanced generalized Yang-Baxter operators (briefly, EgYB-operators). In section 3 we define an invariant of oriented links associated with each EgYB-operator in a similar way as in [Tu]. In section 4 some examples of EgYB-operators and the corresponding link invariants are studied.
Notation and convention. In this paper V denotes a finite dimensional vector space over the complex number field C. However, all the discussion remains valid for any finitely generated free module V over a commutative ring with 1. By a link we mean an oriented link unless otherwise stated, as we mainly consider invariants of oriented links in this paper. We denote the identity map of V by Id_V and the identity map of V^{⊗m} simply by I_m when the vector space V is clear from the context. Each basis {v_1, . . . , v_d} of a d-dimensional vector space V gives us a basis {v_{i_1} ⊗ · · · ⊗ v_{i_n} | i_1, . . . , i_n ∈ {1, 2, . . . , d}} of V^{⊗n}. On this basis, each endomorphism f ∈ End(V^{⊗n}) can be represented as a multi-indexed matrix $(f^{i_1,\dots,i_n}_{j_1,\dots,j_n})$.
Acknowledgements. I am grateful to Eric Rowell for his encouragement and useful discussions.
2. The generalized Yang-Baxter operators
2.1. The enhanced Yang-Baxter operators. In this subsection we recall the EYB-operator introduced in [Tu].
Definition 2.1.1. An isomorphism R : V^{⊗2} → V^{⊗2} is called a Yang-Baxter operator (briefly, a YB-operator) if it satisfies the following Yang-Baxter equation:
$(R \otimes \mathrm{Id}_V)(\mathrm{Id}_V \otimes R)(R \otimes \mathrm{Id}_V) = (\mathrm{Id}_V \otimes R)(R \otimes \mathrm{Id}_V)(\mathrm{Id}_V \otimes R).$
For each f ∈ End(V^{⊗n}) one can define its operator trace Sp_n(f) ∈ End(V^{⊗(n−1)}).
If {v_1, . . . , v_d} is a basis of V, then for any i_1, . . . , i_{n−1}, j_1, . . . , j_{n−1} ∈ {1, 2, . . . , d},
$\mathrm{Sp}_n(f)^{i_1,\dots,i_{n-1}}_{j_1,\dots,j_{n-1}} = \sum_{k=1}^{d} f^{i_1,\dots,i_{n-1},k}_{j_1,\dots,j_{n-1},k}.$
Note that Sp_n(f) does not depend on the choice of basis of V and tr(Sp_n(f)) = tr(f), where tr is the ordinary trace of endomorphisms.
2.2. Extended operator trace and its properties. The above operator trace map Sp_n is obtained by fixing the last single tensor factor. We may define similar operator trace maps Sp_{k,m} : End(V^{⊗k}) → End(V^{⊗(k−m)}), m < k, by fixing the last m tensor factors as follows:
$\mathrm{Sp}_{k,m}(f)^{i_1,\dots,i_{k-m}}_{j_1,\dots,j_{k-m}} = \sum_{l_1,\dots,l_m=1}^{d} f^{i_1,\dots,i_{k-m},l_1,\dots,l_m}_{j_1,\dots,j_{k-m},l_1,\dots,l_m}.$
This operator trace map Sp_{k,m} does not depend on the choice of basis of V, and tr(Sp_{k,m}(f)) = tr(f) as well, because it is simply the composition Sp_{k−m+1} ∘ · · · ∘ Sp_k. In this notation, Sp_n is equal to Sp_{n,1}. We will use Sp_{3,2} in subsection 4.2. The following lemma is obtained directly from the definition.
2.3. The enhanced generalized Yang-Baxter operators.
Definition 2.3.1. An isomorphism R : V^{⊗k} → V^{⊗k} is called a generalized Yang-Baxter operator (briefly, a gYB-operator) of type (d, k, m) if it satisfies the following generalized Yang-Baxter equation and far-commutativity:
$(R \otimes I_m)(I_m \otimes R)(R \otimes I_m) = (I_m \otimes R)(R \otimes I_m)(I_m \otimes R),$
$(R \otimes I_{2m})(I_{2m} \otimes R) = (I_{2m} \otimes R)(R \otimes I_{2m}).$
Note that a (d, 2, 1)-type gYB-operator is the ordinary YB-operator on V of dimension d.
The (n-strand) braid group B_n is defined as the group generated by σ_1, σ_2, . . . , σ_{n−1} satisfying:
$\sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1} \ (1 \le i \le n-2), \qquad \sigma_i \sigma_j = \sigma_j \sigma_i \ (|i-j| \ge 2).$
Each gYB-operator gives rise to a representation of the braid group B_n → End(V^{⊗(k+m(n−2))}) via
$\sigma_i \mapsto I_{m(i-1)} \otimes R \otimes I_{m(n-1-i)}.$
We denote this representation by ρ^R_n and its image by im(ρ^R_n). For any vector space one may consider a trace inner product ⟨f, g⟩ on End(V) defined by ⟨f, g⟩ = tr(f* ∘ g), where f* is the hermitian conjugate of f. For any subset A of End(V), we denote by A^⊥ the perpendicular subspace of A in End(V) with respect to this trace inner product. That is, for any f ∈ A and g ∈ A^⊥, ⟨f, g⟩ = tr(f* ∘ g) = 0.
Definition 2.3.2. Fix positive integers k and m such that m < k. An enhanced generalized Yang-Baxter operator (EgYB-operator) is a collection {a gYB-operator R : V^{⊗k} → V^{⊗k}, µ : V → V, invertible elements α, β of C} which satisfies the following conditions for all n: (i) R commutes with µ^{⊗k}; (ii) the operators I_{m(n−1)} ⊗ (Sp_{k,m}(R^{±1} ∘ µ^{⊗k}) − α^{±1}β µ^{⊗(k−m)}) lie in im(ρ^R_n)^⊥. The condition (ii) above is strictly weaker than the condition (ii) in Definition 2.1.2. Indeed, the EYB-operator is the case that k = 2, m = 1, and the perpendicularity in (ii) is replaced by the equality Sp_2(R^{±1} ∘ µ^{⊗2}) = α^{±1}β µ. We will see some nontrivial cases in subsection 4.1.
3. Invariants of braids and links
It is well known that any oriented link can be obtained by closing a braid, that is, by connecting top endpoints with bottom endpoints by disjoint arcs (Alexander's theorem), and two braids produce isotopic links in this way if and only if these braids are related by a finite sequence of Markov moves ξ → η^{−1}ξη, ξ → ξσ^{±1}_n, where ξ, η ∈ B_n (Markov's theorem) (see Chapter 2 in [Bi]).
3.1. Invariants of braids. We call the braid generators σ_i positive crossings and their inverses negative crossings. For each braid ξ ∈ B_n, we denote the number of positive crossings by w_+(ξ) and the number of negative crossings by w_−(ξ). Then
$T_S(\xi) = \alpha^{\,w_-(\xi)-w_+(\xi)}\, \beta^{-n}\, \mathrm{tr}\!\left(\rho^R_n(\xi) \circ \mu^{\otimes(k+m(n-2))}\right).$
The following theorem provides key properties of T_S.
Theorem 3.1.1. For any EgYB-operator S, T_S(η^{−1}ξη) = T_S(ξ) and T_S(ξσ^{±1}_n) = T_S(ξ) for any ξ, η ∈ B_n, where ξσ^{±1}_n is regarded as an element of B_{n+1}.
3.2. Invariants of links. Using the same notation T_S, we define a map from the set of isotopy classes of oriented links to C by T_S(L) = T_S(ξ), where a link L is isotopic to the closure of a braid ξ. Markov's theorem and Theorem 3.1.1 show that this map on oriented links is well defined and an isotopy invariant. We denote the trivial knot by G_1 and the trivial n-component link by G_n. For a (d, k, m) EgYB-operator S, we have
(1) T_S(G_1) = β^{−1} tr(µ)^{k−m} and T_S(G_n) = β^{−n} tr(µ)^{k+m(n−2)}.
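As a concrete illustration of how the representation ρ^R_n and the trace in T_S combine, the following toy numerical sketch (not taken from the paper) uses the ordinary k = 2, m = 1 case with the flip operator R(v ⊗ w) = w ⊗ v, which satisfies the Yang-Baxter equation and admits the trivial enhancement µ = Id_V, α = β = 1; the dimension and braid words are illustrative choices.

```python
import numpy as np
from functools import reduce

d = 2  # dim V
# Flip operator R on V⊗V: R(e_a ⊗ e_b) = e_b ⊗ e_a; a YB-operator (k=2, m=1).
R = np.zeros((d * d, d * d))
for a in range(d):
    for b in range(d):
        R[b * d + a, a * d + b] = 1.0

def rho(sigma_word, n):
    """Image of a braid word under rho_n^R on V^{⊗n}.
    sigma_word: list of nonzero ints, +i for sigma_i, -i for its inverse."""
    total = np.eye(d ** n)
    for s in sigma_word:
        i = abs(s)
        Ri = reduce(np.kron, [np.eye(d ** (i - 1)),
                              R if s > 0 else np.linalg.inv(R),
                              np.eye(d ** (n - 1 - i))])
        total = total @ Ri
    return total

# Closure of sigma_1 in B_2 is the trivial knot; with mu = Id, alpha = beta = 1,
# T_S reduces to tr(rho(xi)).  Here tr(R) = d = 2 = tr(mu)^{k-m}, as in (1).
print(np.trace(rho([1], 2)))     # 2.0
# The closure of sigma_1 sigma_2 in B_3 is again the trivial knot; for this
# enhanced flip operator the trace agrees, illustrating Markov stability.
print(np.trace(rho([1, 2], 3)))  # 2.0
```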
We say that an invariant of links, say I, is multiplicative if I(L) = I(L_1) · I(L_2) for the disjoint union L = L_1 ⊔ L_2 of two links L_1 and L_2. From the formula (1), it is easy to see that the link invariant T_S obtained from an EgYB-operator is not necessarily multiplicative, while the invariant from an EYB-operator is so (see [Tu]). This is essentially because ρ^R_n(σ_i) and ρ^R_n(σ_{i+2}) may act nontrivially on the same tensor factor in the middle, and as a result the endomorphism corresponding to a link L = L_1 ⊔ L_2 is not necessarily expressed as a tensor product of two operators in End(V^{⊗(k+m(n−2))}). More specifically, this is the case when m < k/2. However, the link invariant T_S obtained from any EgYB-operator S is projectively multiplicative, and hence multiplicative after a normalization.
Theorem 3.2.1. If S is a (d, k, m) EgYB-operator, for any m and k, then the invariant T_S is projectively multiplicative:
$T_S(L_1 \sqcup L_2) = \mathrm{tr}(\mu)^{2m-k}\, T_S(L_1)\, T_S(L_2). \qquad (3.2.1)$
The proof is given in the Appendix. Note that in the case of EYB-operators m = 1 and k = 2, and thus the corresponding invariant is multiplicative without the factor in equation (3.2.1).
Remark 3.2.2. It is easy to see that T_S(L ⊔ G_1) = β^{−1} tr(µ)^m T_S(L) for any link L.
For a link diagram L_0, we may consider two link diagrams L_+ and L_− which are obtained by a local deformation introducing a positive and a negative crossing, respectively (see Figure 1). A Conway-type relation between the invariants of the links L_−, L_0, and L_+ is particularly interesting, and the following theorem is reproved in [Tu].
Theorem 3.2.3. There exists a unique mapping P from the set of isotopy types of oriented links into the ring Z[x, x^{−1}, y, y^{−1}] such that P(G_1) = 1 and, for any triple (L_−, L_0, L_+),
xP(L_+) + x^{−1}P(L_−) = yP(L_0).
4.1. (2,3,1) EgYB-operator of type I
Remark 4.1.1. (1) For the type I operator, every R^{±1}_i, and hence every element of im(ρ^R_n), acts diagonally on the last tensor factor. (2) For f, g ∈ End(V^{⊗N}) it is well known that
$\mathrm{tr}(f \circ g) = \sum_{I,J} f^{I}_{J}\, g^{J}_{I},$
where I and J run over multi-indices. (3) Suppose that g ∈ End(V^{⊗N}) acts on the last tensor factor off-diagonally; explicitly, that is, $g^{i_1,\dots,i_N}_{j_1,\dots,j_N} = 0$ whenever $i_N = j_N$. Then, by combining the facts above, one obtains the following: tr(ρ^R_n(ξ) ∘ g) = 0 for any ξ ∈ B_n, which means that g ∈ im(ρ^R_n)^⊥.
Theorem 4.1.2. Let R be a (2,3,1) gYB-operator of type I. Let µ = Id_V, α = e^{iπ/4}, and β = 1. Then S = (R, µ, α, β) is an EgYB-operator such that
T_S(L_+) + T_S(L_−) = T_S(L_0)
for any triple (L_−, L_0, L_+), and T_S(G_n) = 2^{n+1}.
Proof. The commutativity condition (i) in Definition 2.3.2 is trivial because µ is the identity. For the condition (ii), we need to show tr(ρ^R_n(ξ) ∘ [I_{n−1} ⊗ (Sp_{3,1}(R^{±1}) − α^{±1} Id^{⊗2}_V)]) = 0 for any ξ ∈ B_n. From Remark 4.1.1, it suffices to show that Sp_{3,1}(R^{±1}) − α^{±1} Id^{⊗2}_V acts on the last tensor factor off-diagonally. A direct computation shows this off-diagonal action on the second tensor factor; the R^{−1} case is easily derived from the above by complex conjugation. Hence, S = (R, µ, α, β) is an EgYB-operator. Now let us prove the Conway-type relation. For any triple (L_−, L_0, L_+) we may choose a braid ξ ∈ B_n such that the link L_0 is isotopic to the closure of ξ, and the links L_± are isotopic to the closures of σ^{±1}_1 ξ, respectively. The relation then comes directly from the definition of T_S and the fact that e^{−iπ/4}R + e^{iπ/4}R^{−1} = Id_{V^{⊗3}}. The last statement is obtained from the formula (1) and tr(µ) = tr(Id_V) = 2.
It is well known that any link diagram can be transformed into a diagram of a trivial link via finitely many crossing changes, that is, via replacing some positive crossings with negative crossings and vice versa. If we apply the Conway-type relation of Theorem 4.1.2 at each step of such a process, we obtain that for any link L the invariant T_S(L) is a finite sum of T_S(G_d)'s with integer coefficients.
More specifically, the values T_S(L) are multiples of 4, because the values of T_S(G_d) are so. Let us normalize the invariant T_S as follows: P_S(L) = (1/4) T_S(L). Then this invariant P_S satisfies the same Conway-type relation as given in the theorem above, and P_S(G_1) = 1. This is the case of the invariant P in Theorem 3.2.3 with x = y = 1, and by uniqueness it is a special case of the invariant P that appeared in [Tu].
Proof. The first statement is proved in the same way as Theorem 4.1.2, by a direct computation. The formula for trivial links follows directly from the formula (1) and tr(µ) = tr(Id_V) = 2. In this case there is no Conway-type relation, as the corresponding minimal polynomial is of degree 3. Instead we have a relation (2) involving the link L_{+2}, where L_{+2} denotes the link containing two positive crossings inside the disk we considered in Figure 1. A direct consequence of equation (2) is T_S(H) = 0, where H denotes the Hopf link. More generally, we may consider any disjoint union link L_1 ⊔ L_2 as L_0, and construct L_{+2} by braiding two parallel strands twice, where one of the two strands is taken from L_1 and the other from L_2. Then we still obtain T_S(L_{+2}) = 0. This does not come directly from equation (2), though. One way to see it is the following: any such link L_{+2} can be represented as the closure of a braid which contains σ²_i for some i and no other σ^{±1}_i before or after. The corresponding image under the representation ρ^R will contain R²_i and no other R^{±1}_i before or after. Now observe that R² acts off-diagonally on the second tensor factor, while R^{±1} acts diagonally on the first and third tensor factors. As a result, the trace of the corresponding image is zero. T_S(L) = 4 for a few of the simplest knots, and for a few simple links with two components the values are either 0 or 8, depending on the linking numbers. It seems that the invariant T_S depends only on the number of components and the linking numbers. Eric C. Rowell pointed out to me that the braid group representation ρ^R from type II can be seen to factor over the BMW-algebra with parameters r = q = e^{πi/4} (see [We]). This suggests a connection to a specialization of the Kauffman polynomial F.
Proof. The first statement is proved in the same way as Theorem 4.1.2, by a direct computation. The Conway-type relation can be shown in the same way as in Theorem 4.1.2, from the relation R + R^{−1} = √2 Id_{V^{⊗3}}. The last statement is a direct consequence of the formula (1) and tr(µ) = tr(Id_V) = 2. By defining P_S(L) = (1/(2√2)) T_S(L), with x = 1 and y = √2, we see that the invariant P_S satisfies the conditions in Theorem 3.2.3, and thus this is a special case of the invariant P that appeared in [Tu] as well.
2012-02-17T16:13:00.000Z
2012-02-17T00:00:00.000
{ "year": 2012, "sha1": "98192903f2986cbf47b20309f465e50d4765f44e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1202.3945", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "98192903f2986cbf47b20309f465e50d4765f44e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
14216533
pes2o/s2orc
v3-fos-license
In Vivo and In Vitro Genotoxic and Epigenetic Effects of Two Types of Cola Beverages and Caffeine: A Multiassay Approach
The aim of this work was to assess the biological and food safety of two different beverages: Classic Coca Cola™ (CCC) and Caffeine-Free Coca Cola (CFCC). To this end, we determined the genotoxicological and biological effects of different doses of lyophilised CCC and CFCC and of Caffeine (CAF), the main distinctive constituent. Their toxic/antitoxic, genotoxic/antigenotoxic, and chronic toxicity (lifespan assay) effects were determined in vivo using the Drosophila model. Their cytotoxic activities were determined using the HL-60 in vitro cancer model. In addition, clastogenic DNA toxicity was measured using internucleosomal fragmentation and SCGE assays. Their epigenetic effects were assessed on the HL-60 methylation status using some repetitive elements. The experimental results showed a slight chemopreventive effect of the two cola beverages against HL-60 leukaemia cells, probably mediated by nonapoptotic mechanisms. Finally, CCC and CAF induced a global genome hypomethylation evaluated in LINE-1 and Alu M1 repetitive elements. Overall, we demonstrate for the first time the safety of this famous beverage in in vivo and in vitro models.
Introduction
Diet may modify cancer risk and tumor behavior, since nongenotoxicological modulation, such as epigenetic regulatory processes, may be susceptible to changes caused by environmental factors. Therefore, constituents in food and dietary supplements could be involved in changes in gene expression, inducing epigenetic changes that increase the risk of developing some types of cancer over the lifetime [1,2]. Genotoxicological screening tests have been extensively used over time for assessing the health properties of compounds prior to their being considered safe substances. Nowadays, the list of foods with documented health-benefit activities is endless, and scientific evidence supporting the concept of health-promoting food ingredients is steadily growing [3]. Originally developed as medical supplements, cola-based drinks and several beverages such as beer and wine were proposed as medicinal substances [4,5]. However, a relationship between the consumption of these beverages and an increase in the prevalence of several diseases, such as child obesity, diabetes, hypertension, and dental diseases, has also been demonstrated [6][7][8]. In spite of the worldwide importance and spread of cola beverages, studies assessing their effects on health and wellbeing are quite scarce [9]. On the contrary, caffeine (CAF), which is a key ingredient in cola beverages as well as in coffee, tea, and some medicines, is one of the most investigated substances, probably due to the lack of consistent results over time [10][11][12]. In D. melanogaster, CAF has been related to a positive lifespan increase [13], but the results were contradictory when apoptotic and DNA-programmed fragmentation effects were studied [14,15]. Drosophila is being used more and more frequently as a model for many human diseases, including cancer [16][17][18]. Reiter et al. [19] determined that 77% of human disease genes are conserved in this fly, making it an important preliminary model in the study of human diseases. These flies are also often used to determine the mutagenicity of some substances.
Somatic cell mutations and apoptosis-resistance, widely associated with genetic toxicity and carcinogenicity, are frequently assayed using the in vivo Drosophila melanogaster model through the Somatic Mutation and Recombination Test (SMART) [20,21], which has been demonstrated to be a reliable assay to detect the genotoxic and antigenotoxic activity of single compounds and complex mixtures [22,23]. More recently, this fly model has also been increasingly used to study life extension, since there is a high homology between invertebrate and human genes involved in the aging process [24,25]. On the other hand, the determination of cytotoxicity, DNA internucleosomal fragmentation, and DNA single/double strand breaks in HL-60 promyelocytic cells is also used as a first step to detect toxicity, necrosis, and apoptosis in chemoprevention processes [26][27][28]. Biomedical research is focused on modifying the methylation pattern as a tool to understand cancer processes and other diseases. Medical epigenetics might act at the junction between the genome and the environment, modulating the effects of deleterious genes [29,30]. Therefore, the aim of this study was to determine the potential toxicity and DNA-protecting capabilities of lyophilised CCC, lyophilised CFCC, and CAF. Several endpoints related to degenerative processes, including toxicity, antitoxicity, genotoxicity, antigenotoxicity, and longevity, were determined using an in vivo Drosophila model. Furthermore, the in vitro chemopreventive activity of these compounds was also determined by assessing their cytotoxicity and their capability to produce DNA damage, internucleosomal fragmentation, or strand breaks in an HL-60 promyelocytic human cancer model, as well as the modulation of its methylation status in genomic repetitive sequences.
The analysis of CAF content was performed by HPLC/DAD (Perkin Elmer) in reverse phase (column C-18, 150 × 2.1 mm), with a gradient of water/phosphoric buffer and methanol as the mobile phase at a 1 mL/min flow rate. The injection volume was 10 µL and the column temperature 45 °C. The CAF identification was performed by retention time and spectrum adjustment obtained by DAD (SCAI, University of Córdoba).
In Vivo Fly Stocks. Two Drosophila melanogaster strains with genetic markers that affect the wing-hair phenotype were used: (i) mwh/mwh, carrying the recessive mutation mwh (multiple wing hairs) [31], and (ii) flr³/In (3LR) TM3, rip p sep bx 34e e s Bd S, where the flr³ (flare) [32] marker is a homozygous recessive lethal mutation which is viable in homozygous somatic cells once larvae start developing and produces deformed trichomes.
In Vitro Cell Culture Conditions. Promyelocytic human leukaemia (HL-60) cells were grown in RPMI-1640 medium (Sigma, R5886) supplemented with heat-inactivated foetal bovine serum (Linus, S01805), L-glutamine 200 mM (Sigma, G7513), and 1x antibiotic-antimycotic solution (Sigma, A5955). Cells were incubated at 37 °C in a humidified atmosphere of 5% CO₂. Cultures were plated at a 2.5 × 10⁴ cells/mL density in 10 mL culture bottles and passaged every 2 days.
Emerging adults of all groups were counted, and toxicity was determined as the percentage of hatched individuals in each treatment compared with the negative control. Antitoxicity was assessed using the same procedure and experimental concentrations as in the toxicity assays, but in combined treatments with 0.15 M H₂O₂, comparing the percentage of emerging adults with the positive toxicant control [34].
A Chi-square test was used to determine whether the tested compounds significantly inhibited the survival of flies. Negative control values were considered as the expected values in the Chi-square formula used in the toxicity assays, and positive control values in the antitoxicity assays [35]. The same concentrations of the toxicity and antitoxicity assays within the same substance were also compared.
Genotoxicity and Antigenotoxicity Assays. Genotoxicity assays were carried out following the wing spot test standard procedure [20]. Briefly, transheterozygous larvae for the mwh and flr³ genes were obtained by crossing four-day-old virgin flr³ females with mwh males in a 2:1 ratio. Four days after fertilization, females were allowed to lay eggs in fresh yeast medium (25 g yeast and 4 mL sterile distilled water) for 8 h in order to obtain synchronised larvae. After 72 h, larvae were collected, washed with distilled water, and clustered in groups of 100 individuals. Each group was fed with a mixture containing 0.85 g Drosophila Instant Medium (Formula 4-24, Carolina Biological Supply, Burlington, NC) and 4 mL water supplemented with the tested compounds at fixed concentrations (the highest and the second lowest from the toxicity assays) and with negative (H₂O) and positive (0.15 M H₂O₂) controls until pupae hatching (10-12 days). Adult flies were collected and stored in 70% ethanol until the wings were removed and mounted on slides using Faure's solution. Mutant spots were assessed on both dorsal and ventral surfaces of the wings in a bright light microscope at 400x magnification. The frequencies of each type of mutant clone per wing (single, large, or twin spot) were compared to the concurrent negative control and analysed applying the binomial Kastenbaum-Bowman test [36]. Antigenotoxicity tests were performed following the method described by Anter et al. [37]. The same compounds and concentrations were assayed in combined treatment with hydrogen peroxide (0.15 M) acting as the concurrent genotoxicant. Single and twin spots per wing were also recorded and compared with the concurrent positive control as described before. The recombination percentage was calculated following the Valadares et al. [38] procedure, and the inhibition percentages (IP) for the combined treatments were calculated from the control-corrected frequencies of clone formation per 10⁵ cells, according to Abraham [39]: IP = [(genotoxin alone − combined treatment)/genotoxin alone] × 100.
Chronic Treatments: Lifespan and Healthspan Assays. In order to obtain comparable results in all the in vivo assays, we used an F₁ progeny from the mwh and flr³ parental strains, produced by 24 h egg-laying in yeast, for all the longevity trials. We also tested the same compounds and concentrations as in the toxicity/antitoxicity experiments. Lifespan assays were carried out at 25 °C according to the procedure described by Fernandez-Bedmar et al. [23]. Briefly, synchronised 72±12-hour-old transheterozygous larvae were washed in distilled water, collected, and transferred in groups of 100 individuals into test vials containing 0.85 g Drosophila Instant Medium and 4 mL of the different concentrations of the compounds to be assayed. Adults emerging from pupae were collected under CO₂ anaesthesia and placed in groups of 25 individuals of the same sex into sterile vials containing 0.21 g Drosophila Instant Medium and 1 mL of the different concentrations of the compounds to be tested. Flies were chronically treated during their whole life.
The number of survivors was determined twice a week.
In Vitro Assays
2.5.1. Cytotoxicity Assay. The effect of the assayed compounds on cell viability was determined by the trypan blue exclusion test according to our standard procedures [37]. HL-60 cells were placed in 96-well plates (2 × 10⁴ cells/mL), cultured for 72 h, and supplemented with the same concentrations of CCC, CFCC, and CAF as in our toxicity/antitoxicity assays. The wide range of tested concentrations was intended to estimate the cytotoxic inhibitory concentration 50 (IC₅₀). After culture, cells were stained with a 1:1 volume ratio of trypan blue dye (Sigma, T8154) and counted in a Neubauer chamber at 100x magnification. The survival percentage of each treatment compared with the control was recorded in three independent replicates.
DNA Fragmentation Status. The ability of our compounds to induce DNA fragmentation was determined as described by Anter et al. [40]. Briefly, 10⁶ HL-60 cells were cocultured with 5 different concentrations of CCC, CFCC, and CAF (as selected in the toxicity/antitoxicity assays) for 5 h. After treatment, genomic DNA was extracted using a commercial kit (Blood Genomic DNA Extraction Mini Spin Kit, Canvax Biotech, Cordoba, Spain). Subsequently, DNA was incubated overnight with RNase at 37 °C and quantified in a spectrophotometer (NanoDrop ND-1000). Finally, 1200 ng DNA was electrophoresed in a 2% agarose gel for 120 min at 50 V, stained with ethidium bromide, and visualised under UV light. The apoptosis process is recognised by the appearance of internucleosomal DNA fragments that are multiples of 200 base pairs.
Clastogenicity: SCGE (Comet Assay). DNA integrity was assayed by SCGE as described by Olive and Banáth [41] with minor modifications. HL-60 cells (5 × 10⁵) in exponential growing phase were incubated in 1.5 mL of culture medium supplemented with different CCC and CFCC (0.7, 6, and 25 mg/mL) and CAF (0.004, 0.032, and 0.51 mM) concentrations for 5 h. After treatment, cells were washed twice and adjusted to 6.25 × 10⁵ cells/mL in PBS. Electrophoresis gels were prepared by pouring a 1:4 dilution (cells in liquid low-melting-point agarose at 40 °C, A4018, Sigma) onto slides. Gels were covered with a coverslip and allowed to solidify at RT for 30 min. Once the slides solidified, the coverslips were carefully removed and the slides were bathed in freshly prepared lysing solution (2.5 M NaCl, 100 mM Na-EDTA, 10 mM Tris, 250 mM NaOH, 10% DMSO, and 1% Triton X-100; pH 13) for 1 h at 4 °C. Thereafter, slides were equilibrated in alkaline electrophoresis buffer (300 mM NaOH and 1 mM Na-EDTA, pH 13) for 20-30 min at 4 °C. Once equilibrated, the slides underwent electrophoresis (20 V, 400 mA, for 15 min) in the dark and were immediately neutralised in cold neutral solution (0.4 M Tris-HCl buffer, pH 7.5) for 10 min. Finally, slides were dried overnight at RT in the dark. Gels were stained with 7 µL propidium iodide and photographed in a Leica DM2500 microscope at 400x magnification. At least 100 single cells from each treatment were analysed using the OpenComet software [42]. The Tail Moment (TM) data were analysed applying a one-way ANOVA and post hoc Tukey's test with SPSS Statistics for Windows, Version 19.0 (IBM 2010), to determine the effect of the tested compounds on HL-60 cell DNA integrity.
Methylation Status of HL-60 Cells. HL-60 cells were treated with different concentrations of CCC (3 mg/mL and 100 mg/mL), CFCC (3 mg/mL and 100 mg/mL), and CAF (0.016 mM and 0.51 mM) for 5 h.
Then, DNA was extracted as in the previously described DNA fragmentation assay. After that, the DNA was converted with bisulphite (EZ DNA Methylation-Gold Kit). Bisulphite-modified DNA was used for fluorescence-based real-time quantitative Methylation-Specific PCR (qMSP) using 5 µM of each forward and reverse primer (Isogen Life Science BV), 2 µL of iTaq Universal SYBR Green Supermix (Bio-Rad; it contains antibody-mediated hot-start iTaq DNA polymerase, dNTPs, MgCl₂, SYBR Green I dye, enhancers, stabilizers, and a blend of passive reference dyes including ROX and fluorescein), and 25 ng of bisulphite-converted genomic DNA. PCR conditions included an initial denaturation at 95 °C for 3 minutes and an amplification consisting of 45 cycles at 95 °C for 10 seconds, 60 °C for 15 seconds, and 72 °C for 15 seconds, with fluorescence acquired at the end of each elongation cycle. After that, the melting curve was determined, increasing 0.5 °C each 0.05 seconds from 60 °C to 95 °C while acquiring fluorescence. qMSP was carried out in 48-well plates in a MiniOpticon Real-Time PCR System (MJ Mini Personal Thermal Cycler, Bio-Rad) and analysed with the Bio-Rad CFX Manager 3.1 software. The housekeeping Alu-C4 was used as a reference to correct for total DNA input. Alu-C4 and the target repetitive elements Alu M1, LINE-1, and Sat-α were obtained from Isogen Life Science, and their sequences are shown in Table 1 (primer information [43]: forward and reverse primer sequences, 5′ to 3′). Each sample was analysed in triplicate. The C_T results were obtained from each qMSP run. Data were normalised with the housekeeping Alu-C4 using the comparative C_T method (ΔΔC_T) of Nikolaidis et al. [45] and Liloglou et al. [46]. One-way ANOVA and post hoc Tukey's test were used to evaluate the differences between the tested compounds, repetitive elements, and concentrations.
In Vivo Assays
3.1.1. Toxicity/Antitoxicity. Toxicity assays showed that CCC, CFCC, and CAF are not toxic to D. melanogaster larvae (Table 2, simple treatment); CFCC was significantly toxic only at the highest concentration. (Table 2 notes: asterisks (*) indicate significant differences (one-tailed) with respect to the hydrogen peroxide control group or the untreated control group, with a Chi-square value higher than 5.02 [35]; the delta symbol (Δ) indicates significant differences between the same concentrations used in the toxicity and antitoxicity assays within the same treated substance.) All the studies and results on CAF must be viewed with caution, since CAF shows a dose-dependent effect and is known to be toxic at high concentrations [47]. Antitoxicity results showed that CCC and CFCC exerted an overall significant protective effect against H₂O₂-induced toxicity in Drosophila larvae at most of the tested concentrations, with a negative dose-dependent effect (Table 2, combined treatment). Although CCC and CFCC were able to revert to some extent the damage caused by hydrogen peroxide, the survival obtained in the antitoxicity assay was lower than in the toxicity assay in flies treated with 6, 25, and 100 mg/mL of these beverages. On the other hand, the 2 lowest concentrations were able to totally revert the oxidative damage caused by the genotoxin used. On the contrary, none of the assayed CAF concentrations produced any significant protective effect. Table 3 shows the results obtained in the genotoxicity assays (SMART). After applying the binomial Kastenbaum-Bowman test, all tested substances gave negative results and were nongenotoxic.
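As an aside for readers less familiar with the comparative C_T normalisation described above, the following minimal sketch shows how a relative methylation status of the kind plotted in Figure 5 could be computed; all C_T values in it are hypothetical placeholders, not data from this study.

```python
# Comparative ddCT normalisation of qMSP data: the target repeat is corrected
# by the Alu-C4 input control and expressed relative to the untreated sample.
# All CT values below are hypothetical placeholders, not measurements.

def relative_methylation(ct_target_treated, ct_aluc4_treated,
                         ct_target_control, ct_aluc4_control):
    d_ct_treated = ct_target_treated - ct_aluc4_treated  # input-corrected, treated
    d_ct_control = ct_target_control - ct_aluc4_control  # input-corrected, control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)  # RMS > 1: hypermethylation; RMS < 1: hypomethylation

# Example: a target repeat in treated vs untreated cells (placeholder values).
print(relative_methylation(26.1, 18.0, 25.2, 18.1))  # 0.5 -> hypomethylation
```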
Genotoxicity/Antigenotoxicity. Hydrogen peroxide is a potent inducer of oxidative damage and a mediator of ageing [48]. It has been used as a genotoxicant in many assays using Drosophila as the experimental animal [23,40], as well as in other models. The mutation rates obtained in our study for this genotoxin (0.438 clones/wing) fall into the usual range described by different laboratories, validating the accuracy of the geno/antigenotoxicity assays. One of the important characteristics of the SMART is that it allows quantification of the different types of DNA damage induced by genotoxic compounds (recombination versus mutation). (Table 3 notes: frequency of clone formation is given as clones/wing/24,400 cells; the recombination percentage is calculated according to Valadares et al. [38]; inhibition percentage values were included when appropriate.) In the balancer-heterozygous genotype (mwh/TM3, Bd S), mwh spots are produced predominantly by somatic point mutation and chromosome aberrations. By scoring mwh/TM3 balancer-heterozygous wings it is possible to quantify the recombinogenic potency of the positive control. The frequency of mwh clones on the marker-transheterozygous wings (mwh single spots plus twin spots) was compared with the frequency of mwh spots on the balancer-transheterozygous wings; the difference in mwh clone frequencies quantifies the recombinogenic contribution. Recombinogenicity values for the combined treatments ranged between 55 and 89%; these figures are higher than the respective recombinogenicity induced by the positive control (54%). Therefore, our compounds induced antimutagenic rather than antirecombinogenic activity.
Chronic Treatment. Kaplan-Meier curves and the average lifespans of the flies are shown in Figure 1 and Table 4, respectively. The longevity of flies was increased by the CCC tested concentrations 3.125 and 25 mg/mL (p ≤ 0.05). CAF also increased the survival rates of Drosophila at intermediate concentrations (0.032 and 0.127 mM). CFCC significantly decreased the lifespan of Drosophila only at 100 mg/mL (p ≤ 0.001). On average, whereas CCC and CAF increased Drosophila lifespan by more than 15%, CFCC decreased it by less than 19%.
In Vitro Assays
3.2.1. Cytotoxicity. Both beverages were cytotoxic to the HL-60 line, inhibiting leukaemia cell growth with a positive dose effect (Figure 2). Furthermore, the IC₅₀ was similar for both beverages (19 and 20 mg/mL for CCC and CFCC, resp.). CAF concentrations were experimentally increased to reach the IC₅₀, since the original tested concentrations did not induce any remarkable cytotoxic effect on promyelocytic cells (data not shown). The highest tested concentration (20.4 mM), which was 40 times higher than the corresponding content in CCC and CFCC, could only inhibit cell growth by about 40%, without reaching the IC₅₀.
DNA Stability Evaluation. The typical ladder pattern of cells with fragmented internucleosomal DNA was weakly induced only by CCC and CFCC at 25 mg/mL supplementation (Figure 3), and it was not observed with any CAF treatment. The ability of the compounds to induce strand breaks in the DNA structure was determined by the alkaline comet assay. Based on the results obtained with the previous in vitro assays (cytotoxicity and DNA internucleosomal fragmentation), only three concentrations of each compound were tested. After 5 h exposure, all compounds induced a significant (p ≤ 0.001) increase in the TM parameter with respect to the control, except for CFCC at a 25 mg/mL concentration and CAF at 0.51 mM (Figure 4).
Despite this significant increase, all TM values were lower than 4.4, suggesting that these compounds affect HL-60 cells mainly through a necrotic pathway. The relative normalised methylation status (RMS) of the three repetitive sequences (LINE-1, Alu, and Sat-α) in the HL-60 cell line treated with the tested compounds is shown in Figure 5. RMS decreased in both Alu M1 and LINE-1 sequences when cells were treated with CCC, in a negative dose-dependent manner. However, we obtained hypomethylation in Sat-α sequences treated with 3 mg/mL and hypermethylation at the highest concentration (100 mg/mL) of CCC. CFCC induced hypermethylation in LINE-1 at the 3 mg/mL concentration and hypomethylation at 100 mg/mL. A decrease of methylation status was found in Alu M1 sequences when cells were treated with 100 mg/mL CFCC. On the contrary, both assayed concentrations of CFCC were able to hypermethylate Sat-α sequences. Regarding CAF, a decrease of methylation status was observed in Alu M1 and LINE-1 repetitive elements treated with 0.016 mM CAF and with 0.016 and 0.51 mM CAF, respectively. In contrast, an increase of methylation status was found in Sat-α sequences when cells were treated with 0.016 mM CAF. As Tukey's test demonstrated, at a given concentration the same demethylation pattern was observed across the three repetitive elements when cells were treated with CCC and CAF, except for the lowest concentration of CAF when Sat-α is analysed. Nevertheless, CFCC differs from CCC and CAF, as indicated by asterisks in Figure 5.

Effect of Cola Beverages and Caffeine on the D. melanogaster In Vivo Model. Soft drinks have been related to several harmful effects on health, such as childhood obesity and appetite increase, diabetes, hypertension, and dental diseases [6][7][8]. They were even related to school intoxication outbreaks, although in the end these events were attributed to a mass sociogenic illness [49]. Nevertheless, studies systematically assessing the toxicological effects of cola beverages are scarce [50,51] or show contradictory results, as in the case of CAF. Drosophila is considered an accurate in vivo model for the study of human disease, and further substantial contributions in this sense are expected [52]. To our knowledge, this is the first attempt to characterise the genotoxic effect of these beverages using in vivo (Drosophila melanogaster) and in vitro (HL-60) models, as well as CAF, using experimental doses mimicking the concentrations present in cola drinks. The lack of toxicity observed in our results is reasonable since these beverages are consumed worldwide and strictly regulated by governments and agencies. Furthermore, the use of "physiological" CAF doses could explain the harmlessness of the compound, since its effect has been widely demonstrated to be highly dependent on the dose consumed [53]. On the other hand, differences in sugar content between the beverages (11.1% versus 10.6% w/v in CFCC and CCC, resp.) could explain the different toxicity levels found in the Drosophila assays. Several toxic and side effects have been reported due to the high carbohydrate concentrations of beverages, particularly glucose and fructose. In flies, it has also been demonstrated that those carbohydrates can be converted into glyoxal, which reduces the number of emerged adults and the pupation time [54]. In our study, only CCC and CFCC exerted a significant antitoxic activity against H2O2-induced oxidative damage in Drosophila. On the contrary, CAF showed neither toxic nor antitoxic effects.
Since the effect of CAF has been widely described as dose-dependent, the lack of toxicity observed in our experiments was probably due to the low concentrations (equal to those found in the cola beverages) tested. In this sense, it has been demonstrated that CAF can exert an antioxidant effect when consumed at moderate doses; at higher doses it can be neurotoxic by increasing dopamine release [55,56] or can even inhibit autophagy in a dose-dependent manner [57]. Our results are more in agreement with Zhao et al. [58], who recently found that the antioxidant properties of CAF are very weak and probably overestimated. On the other hand, it is well known that there are several additional compounds in Coca Cola, such as carbohydrate syrups, phosphoric acid (E-338), and class IV caramel colorants, but none of them has been reported to be antioxidant [54,59]. Therefore, we hypothesise that the antioxidant effects of CCC and CFCC could be explained by other undeclared components of these beverages, considering that part of their formula is an industrial secret. Research using Drosophila has provided seminal insights into gene function that are relevant to human health [60]. The genomic stability (lack of genotoxicity) observed in Drosophila with all the compounds assayed confirmed their safety. Previous reports determined that cola drinks could be mutagenic by inducing chromosomal abnormalities and liver adducts in mice [61,62]. However, those results are at least controversial, since the mutagenic effects were observed after 1 day of treatment with cola intakes equivalent to 600 mL in humans. On the contrary, our study agrees with Tóthová et al. [63], who, in a 6-month experimental design with rats drinking cola beverages ad libitum, demonstrated neither harmful effects nor changes in the gene expression pattern. CAF is one of the most investigated genotoxic substances, probably because the results obtained over time are not consistent (reviewed by Nehlig and Debry [64]). The absence of genotoxicity was reported long ago using different models: in Drosophila germ cells [65], in the Salmonella Ara test [66], and in the micronucleus assay [11]. On the contrary, mutagenic results have been reported with the Sex-Linked Recessive Lethal (SLRL) test in Drosophila germ cells [67,68]. Furthermore, it has been demonstrated that CAF can enhance the effect of many DNA-damaging agents [64]. Our results agree with those reported by Graf and Würgler [10], who used the same experimental model and demonstrated that CAF genotoxic effects are weak and nonsignificant. An interesting finding was the antigenotoxic differences among the two cola beverages and CAF. Our hypothesis is that the beverages' effects could be mediated in part by their differential CAF content. Although in vitro studies indicated that CAF is able to scavenge hydroxyl radicals [69], this ability was not clearly observed at the highest concentration in our in vivo antigenotoxicity assays. In this sense, 0.51 mM CAF did not induce antigenotoxic activity whereas, contrarily, the lowest CAF concentration (0.016 mM) did, making it the most antimutagenic compound according to the recombination percentage data. In contrast, CAF has been shown to be nonantimutagenic in the Ames test at 0.19 mM [70], although this depends on environmental factors [64]. Both cola beverages also revealed an inhibitory effect on the frequency of mutant spots induced by hydrogen peroxide, reflecting an antimutagenic activity [71].
The different IP values of 166.67% and 96.93% for CCC and CFCC, respectively, at the lowest tested concentration could be due to the CAF content of CCC (0.016 mM CAF), since CFCC does not contain CAF. This is in agreement with several reports showing the antigenotoxic capacity of CAF against X-rays [72,73] and ethyl methanesulfonate (in the SMART assay [74] and in yeast at 15 mM [75]). The IP value of CCC at 100 mg/mL decreased to 98.88%, and this could be due to the absence of antigenotoxicity observed at the highest CAF concentration. CAF did not present antigenotoxic activity in the mouse micronucleus test [11], although these authors assayed higher concentrations than those tested herein. However, the antigenotoxic ability of CCC and CFCC could also be due to another undeclared compound in the beverage formula or to the presence of fructose, reported to be desmutagenic against heterocyclic amines (Trp-P-1) [76]. Drosophila melanogaster is an excellent model for the study of ageing because adults show many similarities with the cellular senescence observed in mammals [77]. This is the reason why this particular model is frequently used to understand the relationship between nutrient metabolism and ageing mechanisms [25]. To our knowledge, the antiageing and antidegenerative effects of CCC and CFCC were assayed for the first time using D. melanogaster in our study. We demonstrated that CCC increased both lifespan and healthspan, whereas CFCC in general decreased both longevity indexes. However, these effects may not be related to the lack of mutagenicity produced by CCC and CFCC since there were no differences between the beverages in the genotoxicity assays. Environmental factors, such as the diet of larvae, play a vital role in life expectancy. This has also been reported in humans, with sugared soft drinks associated with diabetes and obesity, both diseases playing an important role in decreasing life expectancy [78]. Therefore, the higher carbohydrate content of CFCC (compared with CCC) could explain the differences observed in the longevity assays. We also demonstrated that CAF at 0.032 and 0.127 mM significantly increased lifespan in Drosophila, without significant effects at lower doses. Interestingly, our results showed a reduced, though not significant, lifespan in flies when higher concentrations were assayed. This was previously reported by Nikitin et al. [13], who demonstrated a negative effect of CAF on Drosophila lifespan at higher concentrations (25-fold higher than ours). A possible explanation could be that CAF produces a slimming effect by stimulating metabolism, which is associated with shorter life expectancies [79,80].

Effect of Cola Beverages and Caffeine on In Vitro Cancer Model Cells. The in vitro evaluation of the anticancer properties of nutraceutical compounds or foods is the first step on a long path toward conclusions that can be suitably extrapolated to humans. Here, we determined the potential chemopreventive effect of CCC, CFCC, and CAF on a human cancer cell model (the HL-60 cell line). CCC and CFCC similarly decreased the survival rate of HL-60 leukaemia cells in a positive dose-dependent manner. Kapicioglu et al. [81] reported the ability of cola drinks to inhibit proliferation of gastric mucosal cells, although those cells were not cancerous. Conversely, Nowacki et al. [82] reported that CCC was able to induce an increase in fibroblast proliferation, probably due to its sugar content, which could trigger a carcinogenic process.
However, the rate of increase of this proliferation depended on where the CCC was bought. Our results showed that CAF induced only weak cytotoxicity in HL-60 cells, since none of the tested concentrations reached the IC50. Therefore, we demonstrated that the cytotoxicity of CCC and CFCC cannot be due solely to their CAF content. Previous reports showed that CAF inhibited HL-60 growth at 5 mM [83]. More recently, Rosendahl et al. [84] demonstrated an inhibitory effect of CAF against human breast cancer cells, with an IC50 of roughly 5 mM. Similarly, Pitaksalee et al. [57] showed inhibition of autophagy with CAF supplementation of 10 mM in a neuroblastoma cell line. These recent reports support our findings, suggesting that CAF could be cytotoxic only at higher concentrations and in a positive dose-dependent manner. The degradation of genomic DNA into internucleosomal fragments has been proposed as a major mechanism in cancer cell apoptosis. We determined that CCC and CFCC induced only a weak proapoptotic DNA internucleosomal fragmentation at the higher concentrations. Conversely, this activity was not observed at the corresponding CAF concentration tested. In this sense, previous reports by different authors are contradictory. It has been demonstrated that CAF protects HL-60 [14] and endothelial [85] cells against certain types of induced apoptosis in a dose-dependent manner and only at higher concentrations. The existence of a dose-dependent response pattern [55,56] has recently been demonstrated by Wang et al. [86], showing that 2 mM CAF enhanced the proapoptotic effect of cisplatin in lung cancer cells; these results could also explain the differences among CAF studies since they suggest that low CAF concentrations do not induce apoptosis by themselves but by enhancing a different apoptotic pathway. For these reasons, we performed alkaline SCGE in order to detect DNA damage [87], a test widely used to determine whether cells are undergoing apoptotic and/or necrotic pathways [41]. The use of such a test in transformed cells for the screening of substances with clastogenic DNA-strand break activity could be considered a very early-stage screen in the search for molecules for the treatment of acute promyelocytic leukaemia [88]. It is assumed that apoptosis occurs when treatments induce a TM > 30 (hedgehog pattern), whereas control cells remain below 2 (no tails). On the contrary, necrosis shows a short comet-tail pattern since the majority of the damaged DNA remains in the comet head [89]. Our results showed that the damage induced by CCC, CFCC, and CAF in HL-60 cells was characterised by necrosis (short tails, TM < 5, Figure 4). These results agree with our cytotoxicity and DNA fragmentation assays, demonstrating that CCC and CFCC induced cell death in HL-60, probably mediated by a necrotic pathway. Both beverages and CAF produced the same DNA damage pattern (class 1; TM between 1 and 5 according to Fabiani et al. [90]), whereas class 0 was detected in their concurrent controls (TM lower than 1, no visible comet). In the same way, our results agree with those of Rayburn et al. [91], who reported that CAF supplementation (0-2 mM) did not produce DNA-strand breaks in CHO cells. Consistent with our results, several authors demonstrated that CAF induced apoptotic cell death in glioma and lung cancer cells at higher doses (10-20 mM), suggesting again that CAF acts in a positive dose-dependent manner [92].
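As a compact restatement of the tail-moment thresholds used above (class 0: TM < 1, no visible comet; class 1: TM 1-5, short tails consistent with necrosis per Fabiani et al. [90]; hedgehog/apoptotic pattern: TM > 30), here is an illustrative Python classifier; the "intermediate" label for TM values between 5 and 30 is our own placeholder, not a category from the cited scales.

def classify_tail_moment(tm):
    # Thresholds follow the interpretation quoted in the text above.
    if tm < 1:
        return "class 0 (undamaged, no visible comet)"
    if tm < 5:
        return "class 1 (short tail, necrotic pattern)"
    if tm <= 30:
        return "intermediate damage (label assumed here)"
    return "hedgehog (apoptotic pattern)"

for tm in (0.4, 2.1, 4.4, 35.0):
    print(tm, "->", classify_tail_moment(tm))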
However, recent studies demonstrated that CAF could induce a comet-tail pattern even at low concentrations (0.1-2 mM; [12]), but these studies were performed in yeast or in a different cell line (K562). Therefore, this could also suggest that CAF-induced apoptosis differs depending on the in vitro model employed. Another interesting point is that the SCGE assay has been described as relatively insensitive, since positive results would not be found (no scorable comets) when the tested compounds are highly cytotoxic [93]. However, although the beverages were cytotoxic in our study, the cytotoxicity assay was performed after 72 h of treatment whereas the SCGE assays were conducted for only 5 h. Regarding epigenetics, it is currently known that environmental factors are involved in gene expression. In cancer cells, the genome is globally hypomethylated, inducing transposable element activity and thus triggering genome instability [94]. Correspondingly, the silencing of tumor suppressor genes is closely associated with hypermethylation [95]. Repetitive elements are highly methylated in normal somatic cells, contributing to a global genomic hypermethylation [43,94] and suppressing the transposable activity of repetitive elements. Nevertheless, much remains unknown, especially concerning the mechanisms that modulate the epigenetic changes in cancer cells. Biomedical research has focused on hypomethylating agents since hypermethylation is highly related to gene silencing; such agents could therefore reactivate tumor suppressor genes, which would be a positive highlight, although their benefit for human therapies is not yet clear and much more investigation is needed [96]. We studied three different repetitive elements: LINE-1, Alu M1, and Sat-α. Long interspersed nuclear elements (LINE) are abundant retrotransposons and represent about 17% of the human genome. Although LINE-1 elements have a nonrandom distribution, they accumulate primarily in G-positive bands, which are AT-rich regions of chromosomes [97]. LINE-1 elements also accumulate in regions of low recombination rate, mainly on the X-chromosome [98]. Alu elements belong to the SINE (short interspersed nuclear elements) family, being the most abundant (accounting for about 10% of the whole human genome [43]) and predominantly present in noncoding and GC-rich regions [97,99]. Sat-α (satellite alpha DNA) repeats are composed of tandem repeats of 170-bp DNA sequences, are AT-rich, and represent the main DNA component of every human centromere, constituting about 5% of total human DNA [97,100]. Therefore, examination of the methylation status of LINE-1 and Alu regions has served as an approach for measuring global methylation levels, since in this way 32% of the human genome is evaluated [101]. Our methylation status results showed that CCC may generally hypomethylate the global genome, although 100 mg/mL CCC hypermethylated the Sat-α repetitive element. We also observed a significant negative dose-dependent effect in every target repetitive element, with an average hypomethylation rate of 50%. Nevertheless, the overall hypermethylation rate induced by the CFCC treatments was 328%, and a decrease of methylation status was observed only at Alu M1 and LINE-1 sequences treated with 100 mg/mL CFCC. This hypermethylation could be considered a benefit since LINE-1 is associated with the c-met oncogene, which would be silenced [102]. Xu et al.
[103] demonstrated that caffeine (0.3 mM) enhanced the methylation ratio of multiple single CpG sites, as well as the total methylation ratio at nt −358 to −77 of the hippocampal 11β-HSD-2 promoter of primary fetal hippocampal neurons in rats. However, 4 and 40 µM CAF were able to induce hypomethylation of single CpG sites by inhibiting the DNMT3 enzyme, but did not decrease the global methylation status of the proximal promoter of the human StAR gene [104]. The present CAF results are in agreement with Ting et al. [105], since 16 µM CAF was able to induce hypomethylation of LINE-1 and Alu M1 sequences, as was 0.51 mM CAF in LINE-1. However, Sat-α (an AT-rich element) was methylated when cells were treated with 16 µM CAF. It has been demonstrated that the expression of satellite sequences is associated with hypomethylation in cancer cells; thus methylation of satellite sequences is a potential mechanism for silencing satellite expression in transformed cells [105]. These results could suggest that CAF may be one of the compounds responsible for the global hypomethylation status induced by CCC. Statistical analysis showed that the methylation status induced by CCC and CAF in each repetitive element was not significantly different. Conversely, CFCC induced a different methylation status. Therefore, the effects of CCC on the methylation status of HL-60 cells could be explained by those induced by CAF. It is clear that much more information is needed to ascertain the role of food and beverages in epigenomes, since hypomethylation mechanisms are not clear in every type of tumor. In addition, the hypomethylation and hypermethylation status of repetitive elements depends on both the concurrent control [102] and the target repetitive elements selected to evaluate the global methylation status. To our knowledge, this is the first attempt to assess DNA methylation changes induced by CCC, CFCC, and CAF in human leukaemia cells. An apparent lack of dose-dependent effect was observed for almost all parameters analysed at the individual, cell, and molecular levels. Based on the obtained results, we found a clear-cut dose-dependent effect only when CCC was tested in the antitoxicity, cytotoxicity, and methylation bioassays. A threshold level of concentration may be needed to obtain some biological effects [106]; we found such a threshold in the rest of the assays and compounds for toxicity, antitoxicity, longevity, healthspan, DNA fragmentation, and SCGE.

Conclusions

In conclusion, our experimental results show a slight chemopreventive effect of the two cola beverages against HL-60 leukaemia cells, probably mediated by nonapoptotic mechanisms. CCC and CAF induce a global genome hypomethylation as evaluated in LINE-1 and Alu M1.
Acute Psychosis Associated with Subcortical Stroke: Comparison between Basal Ganglia and Mid-Brain Lesions

Acute onset of psychosis in an older or elderly individual without a history of previous psychiatric disorders should prompt a thorough workup for neurologic causes of psychiatric symptoms. This report compares and contrasts the clinical features of new onset of psychotic symptoms between two patients, one with an acute basal ganglia hemorrhagic stroke and another with an acute mid-brain ischemic stroke. Delusions and hallucinations due to basal ganglia lesions are theorized to develop as a result of frontal lobe dysfunction causing impairment of reality-checking pathways in the brain, while visual hallucinations due to mid-brain lesions are theorized to develop due to dysregulation of inhibitory control of the ponto-geniculate-occipital system. Psychotic symptoms occurring due to stroke demonstrate varied clinical characteristics that depend on the location of the stroke within the brain. Treatment with antipsychotic medications may provide symptomatic relief.

Introduction

Acute onset of hallucinations and delusions in older and elderly patients with no known history of previous psychiatric disorders should prompt a thorough investigation for secondary or neurologic causes of psychotic symptoms. Previous reports describe acute onset of psychosis resulting from acute stroke or other structural lesions affecting several different brain areas, including the prefrontal and occipital cortices, and subcortical locations such as the basal ganglia, thalamus, mid-brain, and brainstem [1,2]. Several different mechanisms are postulated to explain the acute development of psychotic symptoms due to acquired brain lesions, including direct injury to the frontal lobes or disruption of normal frontal lobe functioning through damage to connections between the prefrontal cortices and subcortical structures causing impairment of reality monitoring functions [2,3]; direct insult to primary visual cortices (Anton's syndrome) causing misinterpretation of signals from undamaged visual association cortex areas [1,4]; and loss of inhibitory control of ponto-geniculate-occipital connections leading to visual hallucinations that have been described as similar in nature to rapid eye movement (REM) sleep (peduncular hallucinosis) [2]. Consequently, we hypothesized that new-onset psychosis due to acquired brain lesions may display different clinical characteristics depending on the location of the responsible lesion and the underlying brain structures and mechanisms involved. In this report we compare two cases of new-onset acute psychosis resulting from subcortical strokes located in different brain regions, the basal ganglia and the mid-brain. The cases discussed in this report underwent brain imaging studies, general physical and neurological examinations, routine blood tests, and other studies related to the workup of stroke. In both cases the acutely developing psychotic symptoms were not the result of coexistent delirium and responded well to treatment with antipsychotic medications.

Case Presentation 1: Left Basal Ganglia Hemorrhagic Stroke

A fifty-nine-year-old right-handed male with no known past medical history was brought by emergency medical services to the emergency room after acute onset at home of right-sided weakness and visual and auditory hallucinations that started approximately eight hours prior to arrival.
The patient was alert and oriented to self, location, and date. Examination revealed a right lower facial droop and right hemiparesis with associated pronator drift. Sensation for light touch and pin-prick was normal in the upper and lower extremities bilaterally. Deep tendon reflexes were brisk on the right side compared to the left side. The patient reported content-specific delusions that the right side of his body was "rotting," that he had a tooth that was decaying in the right side of his mouth, and that the nurses had injured the right side of his body when transporting him. Despite repeated reassurance by his treating physicians that none of these were true, he continued to hold these fixed false beliefs. His visual hallucinations consisted of seeing colors and lights, and his auditory hallucinations of hearing voices telling him that the right side of his body was "dead." He was treated with low-dose risperidone and his hallucinations steadily decreased in frequency over the course of the next two weeks. Initial laboratory assessment showed a normal serum chemistry panel, normal complete blood cell count, normal urinalysis, and a negative urine toxicology screen for illicit substances. Serum HIV testing was negative and the thyroid stimulating hormone level was within normal limits. A 1.5 Tesla magnetic resonance imaging scan of the brain without contrast showed a 3.8 cm by 2.2 cm intraparenchymal hematoma located in the left basal ganglia with adjacent edema, likely affecting the corona radiata and possibly extending to the optic radiations. There was no midline shift. Gray-white differentiation was preserved and the ventricles, sulci, and cisterns were normal. Additionally, no extra-axial fluid collections or significant atrophy was present, and there was no evidence of acute or subacute ischemic change. Small periventricular hyperintensities were present in the white matter on a fluid attenuated inversion recovery (FLAIR) sequence, consistent with chronic small vessel vascular disease (Figure 1).

Case Presentation 2: Peduncular Hallucinosis due to Ischemic Stroke

A fifty-two-year-old right-handed female with a past medical history significant for type 2 diabetes, hypertension, and hyperlipidemia was brought to the emergency room by a friend with new onset of dizziness and unsteadiness when walking that had developed suddenly approximately one month earlier. She also reported new onset of double vision and an occipital headache, both of which had developed acutely three days prior to presentation, and visual and auditory hallucinations that developed one day prior to presentation. On presentation to the emergency room, the patient was obtunded but arousable, and she could answer questions and follow simple commands when aroused. The patient was noted by the nursing staff to be grabbing at unseen objects. When aroused, the patient was oriented to self, location, and date. Cranial nerve examination showed fixed dilated pupils bilaterally that were not reactive to light, bilateral exotropia of the eyes at rest, and complete paresis of ocular movements. The corneal and gag reflexes were present. The patient had purposeful movement in all extremities but was noted to have more spontaneous movement of the left-side limbs than the right side. Sensation for light touch and pin-prick was normal in the upper and lower extremities bilaterally. Deep tendon reflexes were also normal and symmetrical throughout. The patient's visual hallucinations were formed and consisted of seeing a deceased uncle.
The patient's auditory hallucinations consisted of intermittently hearing the deceased uncle's voice saying indistinct words and sentences. The patient demonstrated preserved insight: she was aware that the hallucinations were not real and that her uncle was deceased and therefore could not be present and talking to her. The patient received a single dose of haloperidol in the emergency room due to agitation, which temporarily resolved the auditory and visual hallucinations for the remainder of that night. The patient's hallucinations were initially worse at night but then gradually decreased on scheduled haloperidol. The initial laboratory assessment for this patient showed a normal serum chemistry panel, normal complete blood cell count, normal urinalysis, and a negative urine toxicology screen for illicit substances. A 1.5 Tesla magnetic resonance imaging scan of the brain without contrast showed areas of restricted diffusion on DWI sequences located bilaterally in the thalami, the left cerebral peduncle, the mid-brain, and the right external capsule, consistent with acute infarcts. Additional small, scattered white matter hyperintensities were present in the periventricular regions bilaterally on the FLAIR sequence, consistent with small vessel vascular disease. The ventricles, cisterns, and sulci were normal in appearance and there was no significant atrophy. There were no intra-axial or extra-axial fluid collections, no mass, and no midline shift (Figure 2).

Discussion

Psychosis is a relatively rare complication after stroke, with one large cohort study reporting a cumulative incidence of psychotic disorders of just 6.7% in the twelve years after first stroke [5]. Psychosis in patients with basal ganglia lesions is theorized to result from decreased reality testing or reality monitoring and typically manifests as a combination of content-specific delusions (usually with a paranoid quality) and sometimes visual or auditory hallucinations, although these are reported less frequently [2,3,6]. The first case presented in this report was typical of patients who develop delusions and hallucinations due to basal ganglia lesions. The patient primarily reported fixed false beliefs related to his new physical impairment on the right side of the body, including a paranoid component as evidenced by his belief that the hospital nurses had injured him. These false beliefs were content-specific and he did not exhibit delusional thinking in other areas. His visual hallucinations were unformed and his auditory hallucinations also primarily related to the new physical impairment on the right side of his body, suggesting a content-specific quality to the auditory hallucinations as well. Interestingly, the patient's lateralized nihilistic somatic delusions (that his right side was rotting and a tooth in the right side of his mouth was decaying), accompanied by auditory hallucinations telling him that the right side of his body was dead, may be consistent with Cotard syndrome (délire des négations), which has been reported in primary psychiatric disorders as well as in neurologic disorders such as dementia, traumatic brain injury, and seizures [7]. Delusions and hallucinations caused by lesions in the basal ganglia are thought to occur due to disruption of the normal self-corrective functions that prevent the development of odd beliefs, as well as impairment of the sense of familiarity, which may contribute to the development of paranoid ideation [3].
Normal frontal lobe functioning depends on five frontal-subcortical circuits that, when damaged, can alter normal behavior and contribute to the development of neuropsychiatric symptoms [8]. Previous reports have demonstrated that lacunar infarction in the basal ganglia is sufficient to cause prefrontal lobe hypometabolism on positron emission tomography (PET) imaging, suggesting decreased or altered frontal lobe functioning as a direct result of the lacunar stroke [2,9,10]. Both patients also were noted to have chronic white matter small vessel ischemic disease, which could further disrupt the frontal-subcortical pathways, further facilitating the behavioral sequelae mentioned. In the case presented in this report, the left basal ganglia lesion was quite extensive, likely causing disruption not only of the frontal-subcortical circuits mentioned above but also of additional subcortical structures such as the external capsule, thalamus, and posterior limb of the internal capsule. This lesion was much larger, for instance, than the right caudate lacunar stroke lesion previously reported to cause content-specific delusions [2]. Additionally, the basal ganglia lesion described in this report affected the dominant hemisphere, while the previous case report involving the caudate lacunar stroke involved the nondominant hemisphere [2]. Taking these two case reports together, it appears that unilateral basal ganglia lesions in either the dominant or nondominant hemisphere are sufficient to produce delusions [2]. This is supported by the previous case report, which included functional neuroimaging data obtained from brain fluorodeoxyglucose positron emission tomography (PET) imaging in addition to the structural analysis [2]. The previous case report used the PET imaging to determine that a unilateral lesion affecting this system can produce bilateral alterations of prefrontal functioning, which is theorized to be necessary for the generation of delusions and other psychotic symptoms [2]. However, it is not currently known, and has not previously been reported, whether unilateral prefrontal dysfunction or lesions would be sufficient to produce psychotic symptoms. Additional research utilizing advanced neuroimaging techniques, such as diffusion tensor imaging to visualize disruption of specific brain pathways and connections, would be useful for further identifying and defining the pathways implicated in the generation of acquired psychotic phenomena. Peduncular hallucinosis, in contrast, was first described in the early 1920s by Jean Lhermitte [1,4]. The pathophysiology of peduncular hallucinosis was then elucidated through autopsy studies, and the term "peduncular hallucinosis" was coined in reference to the cerebral peduncles being the predominant anatomic structures thought to be involved [1,4]. Peduncular hallucinosis is described as having a dream-like quality, including vivid and colorful visual images [1,4]. The images reported by patients with this disorder, which can be scenic or bizarre, are almost always formed and consist of complex objects or people [1,4]. Lilliputian hallucinations of either animals or people have been reported as well [1,4]. Usually there is preserved insight and the hallucinations are considered to be egosyntonic [1,4]. Additionally, there is a high percentage of hypnagogic hallucinations that predominantly occur in the evening when falling asleep and are thought to be related to a derangement of the mid-brain and brainstem mechanisms that contribute to control of the sleep-wake cycle [1].
The case presented in this report displayed hallucinations typical of those reported to occur with mid-brain and thalamic injuries. Specifically, the patient had formed hallucinations with preserved insight that were worse at night, consistent with a hypnagogic component. However, there were no bizarre or Lilliputian qualities to the hallucinations, which have also been described to occur with peduncular hallucinosis. A previous report by Benke in 2006 described auditory hallucinations in addition to visual hallucinations in a series of five cases [11]. All of the patients described by Benke reported visual and auditory hallucinations, with three of the five patients also reporting tactile hallucinations [11]. The auditory hallucinations described by Benke included voices, both distinct and indistinct, and sounds made by animals and, in one case, a train [11]. This is similar to the auditory hallucinations described by the patient in this report, who reported hearing indistinct voices from deceased relatives. However, the patient described in this report did not experience auditory hallucinations of sounds made by animals or inanimate objects such as trains, both of which were reported by some of the patients described by Benke [11]. It is not clear from the literature whether involvement of certain brain structures predisposes to auditory hallucinations of voices, animals, or inanimate objects, or whether personal experience and life-experience-related factors play a role in the types of auditory hallucinations experienced. Taken together, these cases support the idea that psychosis is a clinical syndrome that can arise from alteration of several distinct underlying brain structures and mechanisms and that clinical differences in the quality of the psychotic symptoms may reflect the particular underlying systems involved. Specifically, content-specific delusions with paranoid ideation likely suggest a process disrupting normal frontal lobe reality monitoring and checking systems, while formed hallucinations with preserved insight and related to changes in the sleep cycle may suggest a release phenomenon with decreased suppression of spontaneous visual cortex functioning by brainstem and mid-brain structures. Since both types of hallucinations responded well to antipsychotic medications, there is the possibility of a shared common pathway or structure, necessary for psychosis to develop, that responds to antidopaminergic medications.
Analysis and Characterization of Glutathione Peroxidases in an Environmental Microbiome and Isolated Bacterial Microorganisms

Glutathione peroxidases (Gpx) are a group of antioxidant enzymes that protect cells or tissues against damage from reactive oxygen species (ROS). The Gpx proteins identified in mammals exhibit high catalytic activity toward glutathione (GSH). In contrast, a variety of non-mammalian Gpx proteins from diverse organisms, including fungi, plants, insects, and rodent parasites, show specificity for thioredoxin (TRX) rather than GSH and are designated as TRX-dependent peroxiredoxins. However, studies of the properties of Gpx in environmental microbiomes or isolated bacteria are limited. In this study, we analyzed the Gpx sequences, identified the characteristics of the sequences and structures, and found that the environmental microbiome Gpx proteins should be classified as TRX-dependent, Gpx-like peroxiredoxins. This classification is based on the following three items of evidence: (i) the conservation of the peroxidatic Cys residue; (ii) the existence and conservation of the resolving Cys residue that forms the disulfide bond with the peroxidatic cysteine; and (iii) the absence of dimeric and tetrameric interface domains. The conservation/divergence pattern of all known bacterial Gpx-like proteins in public databases shows that they share common characteristics with those from the environmental microbiome and are also TRX-dependent. Moreover, phylogenetic analysis shows that the bacterial Gpx-like proteins exhibit a star-like radiating phylogenetic structure forming a highly diverse genetic pool of TRX-dependent, Gpx-like peroxidases.

Gpx are selenoproteins and employ a selenocysteine (SeCys) in place of Cys at the catalytic site. However, mammalian Gpx5, Gpx7, and Gpx8 have unknown electron donors or low Gpx activity. The SeCys-based mammalian Gpx proteins have been named canonical Gpx and share common primary sequences and tertiary structures. While the SeCys-based canonical Gpx proteins share a similar thioredoxin fold with the Prx members in their tertiary structure, they are phylogenetically distant from the Prx family, with sequence similarities lower than 20% [7]. They also harbor signatures significantly different from Prx. First, the SeCys is responsible for the redox cycle and a resolving Cys is absent. Second, the canonical mammalian Gpx proteins contain a functional helix/dimeric loop with the motif "PGGG" that contributes to dimeric interactions and a variable oligomerization loop at the C-terminus that mediates contacts between two dimers [7]. On the other hand, the non-mammalian Gpx orthologs normally have a Cys residue at the catalytic site and use TRX rather than GSH as the reducing agent. Recently, numerous orthologs of Gpx were identified in a variety of non-mammalian organisms, such as fungi [8,9], plants [10,11], insects [12], and rodent parasites [13][14][15]. These non-mammalian Gpx orthologs exhibit significant sequence and structural similarities with the canonical mammalian Gpx, but were shown to have a higher preference for TRX over GSH as the electron donor and should be functionally classified as Prx. Physicochemical studies of the Gpx-like proteins in several non-mammalian organisms, such as Plasmodium [15], Drosophila [12], Chlorella [11], and Trichoderma [9], have confirmed the catalytic preference for TRX over GSH. The biochemical mechanisms of the redox interaction between Gpx and TRX from Arabidopsis were also investigated [16].
Indeed, it has been proposed that the majority of the non-mammalian Gpx-like peroxidases have substrate specificity for TRX and are more ancient than the canonical mammalian Gpx proteins, which use GSH as the reductant [17,18]. The crystal structures of several Gpx-like proteins from non-mammalian organisms, such as yeast [8], poplar [10], and Drosophila [12], were determined to investigate the genetic and structural characteristics of the redox specificity. On one hand, the studies showed that the TRX-dependent Gpx-like proteins share common structural properties with the canonical Gpx proteins, viz.: (i) both form the typical thioredoxin-fold, with four strands forming internal β sheets surrounded by four α-helices which are then interconnected by several loops; (ii) both form a tetrad redox center comprising four residues, Cys (or SeCys), Gln, Trp, and Asn, where the Cys or SeCys residue is the peroxidatic residue and facilitates the nucleophilic attack on the hydroperoxides by forming hydrogen bonds with Gln and Trp [19]. On the other hand, the TRX-dependent, Gpx-like proteins exhibit significant differences from the canonical Gpx proteins: (i) a highly conserved peroxidatic Cys has been identified in TRX-dependent, Gpx-like proteins, whereas SeCys is found in the canonical Gpx proteins; (ii) another so-called "resolving Cys" is present in TRX-dependent, Gpx-like proteins but not in the canonical Gpx, and is involved in the catalytic cycle by forming a disulfide bond with the peroxidatic Cys. The resolving Cys is considered to be required for the Gpx orthologs to use TRX as the electron donor through formation of an intramolecular disulfide bond with the peroxidatic Cys [17]; (iii) the oligomerization domain for formation of the multimeric structure, typical of the canonical Gpx proteins, is lacking in TRX-dependent, Gpx-like proteins. Notably, these characteristics distinguish TRX-dependent, Gpx-like proteins from the canonical Gpx and align with those of some Prx family proteins, e.g., the PrxQ subtype, supporting a common catalytic substrate and mechanism with this family [1]. The presence of Gpx-like proteins in bacteria has been regarded as widespread and even ancient [20]. However, the details of the sequences, structures, and phylogeny of this class of proteins in bacteria have not been sufficiently investigated, and the identification of these proteins in the environmental microbiome has not been reported. Environmental microbiomes are a rich source for discovering novel genes. In particular, the microbiome from soils has the potential to evolve a rich pool of antioxidant proteins due to the oxidative stresses of the soil niche. In the present study, we leverage this potential of the soil microbiome by mining data from our in-house dataset (PRJNA237577 and SRP036853) and the NCBI environmental sequence dataset (env_nr) [21]. We investigated the characteristics of Gpx-like proteins from environmental microbiome metagenomes and from isolated bacterial species in terms of sequences, structures, and phylogeny. Furthermore, we provide evidence that most of the Gpx-like proteins encoded by the environmental microbiome and by isolated bacterial species should be classified as TRX-dependent, Gpx-like peroxidases with TRX as the reducing agent.
Gpx-Like Protein Sequences from Environmental Microbiome and Isolated Bacterial Species

The microbiome Gpx-like proteins were obtained from the metagenomes of the data depository SAMN02630628 and the NCBI env_nr database (last accessed on Oct. 6, 2021) (www.ncbi.nih.gov). The metagenomic sequences from the depository SAMN02630628 were assembled using Velvet (version 1.0.18) [25] and the genes were predicted using MetaGeneMark (version 3.25) [26]. The Gpx-like proteins from the environmental microbiome datasets were extracted based on a functional annotation of "glutathione peroxidase" or by a BLAST homology search against a set of previously prepared Gpx proteins (see the previous section in Materials and Methods). The bacterial Gpx-like proteins were collected from the UniProt database using two methods [22]: (i) by advanced search with the keyword "glutathione peroxidase" in "Gene Name" and "bacterium" in "Taxonomy" on the web at www.uniprot.org; (ii) by sequence comparison using BLAST against a set of previously prepared Gpx proteins. The protein entries derived from the two methods were combined and de-duplicated. The extracted proteins were processed by discarding sequences that were too short or too long and trimming the divergent N- and C-termini in the multiple sequence alignment. The non-redundant protein set was derived by running cd-hit on the processed proteins [27]. The multiple sequence alignment of the Gpx-like proteins was performed using Clustal Omega (version 1.2.2) [23] and the conservation pattern was identified using ConSurf (version 3) [28].

Structural Modeling

The three-dimensional structure of MtGpx0 was modeled using the crystal structure of Schistosoma Gpx4 (2V1M) as the template (with similarity ~43%) on the Phyre server (version 2.0) [29]. The structure model was also predicted using AlphaFold2 [30]. All the structures were presented and analyzed with PyMOL (version 2.5.1) [31]. The model covers residues 28-186 of MtGpx0. A robust structure is lacking at the extreme N-terminus due to the high diversity of the sequences in this region, which consequently yielded divergent structures.

Phylogenetic Analysis

The phylogenetic tree was constructed from the multiple aligned sequences using MEGA (version 6) [32], with more than 1000 bootstrap replicates, using the neighbor-joining method. The phylogenetic network for the same set of Gpx-like proteins was generated by SplitsTree (version 4.15.1) using the neighbor-net method [33]. To reduce the influence of alignment gaps on tree building, large gap regions, such as the oligomerization domain in the multiple aligned sequences, were removed.

Comparison of the Primary Sequences of Gpx-Like Proteins from Microbiome Metagenomes with the Gpx or Gpx-Like Proteins from Non-Microbial Species

A total of 1,319 Gpx-like proteins were identified from the environmental microbiome metagenomes in the combined NCBI env_nr dataset and our in-house dataset (see Materials and Methods). Using this set of proteins, we created a non-redundant set of 392 proteins (averaging 186 amino acids in length) by merging the proteins with pair-wise similarities greater than 80%. Multiple sequence alignment of the 392 proteins exhibits a well-conserved pattern (Fig. S1). Construction of their phylogenetic tree shows that many proteins are closely clustered (Fig. S2).
We therefore further selected 26 representative proteins, which cover the major branches and are distant from each other (similarities averaging 45.7% and ranging between 40 and 65%) (Fig. S3A). To further assess the similarity between the environmental microbiome-encoded Gpx-like proteins and the non-microbial Gpx-like proteins, we performed a multiple sequence comparison between nine representatives of the 26 microbiome-encoded Gpx-like sequences and those encoded by yeast, poplar, Drosophila, Schistosoma, and humans (Fig. 1). As a result, the microbiome-encoded Gpx-like sequences were shown to be highly similar to the known non-mammalian Gpx-like Prx proteins (yeast Hyr1, poplar PtGpx5, Drosophila DmGpx) or phospholipid peroxidases (Schistosoma SmGpx and human Gpx4), with average similarities of ~42% compared to ~30% with the canonical human Gpx proteins (Gpx1-Gpx3 and Gpx5-8). The relationship is also supported by their phylogenetic structure, where the microbiome-encoded Gpx-like proteins are clustered together with yeast Hyr1, poplar PtGpx5, Drosophila DmGpx, Schistosoma mansoni SmGpx, and human Gpx4 (Fig. S3B). There is no detectable sequence similarity between the microbiome Gpx-like proteins and the canonical bacterial Prx proteins (<20%). This indicates that the microbiome-encoded Gpx-like proteins are closely related to the Gpx-like Prx proteins at the primary sequence level.

The Gpx-Like Proteins from Microbiome Metagenomes Exhibit Genetic Properties Analogous to TRX-Dependent Prx

We performed a comparative analysis of the primary sequences and secondary structures of the Gpx-like proteins from the environmental microbiome dataset and from non-microbial species to examine the characteristics of the environmental microbiome-encoded Gpx-like proteins. We found that the four active-site tetrad residues (SeCys61 or Cys61, Gln95, Trp150, and Asn151; blue triangles in Fig. 1) are conserved among the compared sequences, except that the peroxidatic Cys is replaced by SeCys in mammalian Gpx (except in Gpx5, Gpx7, and Gpx8) (encoded by the IUPAC letter U, shaded in pink in Fig. 1) [34]. A second Cys residue, Cys107, was found in a region called the "Cys block" on the α1a helix in the Gpx-like sequences from the environmental microbiome metagenomes and the other three organisms (yeast, poplar, and Drosophila) (framed in orange), but not in the canonical human Gpx proteins. This second Cys residue in the "Cys block" was identified as the "resolving Cys", forming a disulfide bridge with the peroxidatic Cys in the non-mammalian Gpx [8,10,12]. The disulfide bridge was shown to be essential for regeneration of the redox state of Prx in the catalytic cycle [12] and has been proposed to be specific for reduction by TRX rather than GSH [35]. Indeed, we found that the resolving Cys is highly conserved among the Gpx-like proteins from the environmental microbiome dataset (>99%), indicating the indispensable role of this Cys in catalysis by the environmental microbiome proteins (Fig. S1). The high prevalence of the two Cys residues in the Gpx-like proteins is analogous to that in the 2-Cys Prx enzymes, except that the location of the resolving Cys in the latter group is variable among different Prx subtypes [1,36]. Another notable feature of the microbiome-encoded Gpx-like proteins is the lack of the dimer loop/functional helix (framed in green in Fig. 1) and the tetramer loop/oligomerization loop (framed in light green in Fig. 1)
in comparison with the canonical human Gpx, where the two loop domains are responsible for the subunit interface interactions. This suggests that the microbiome-encoded Gpx-like proteins are unable to form oligomeric quaternary structures (see the next sections). The absence of the multimeric domains was also observed in other TRX-dependent Gpx-like proteins, such as the monomeric Drosophila DmGpx and yeast Hyr1, in analogy to the Prx subtype PrxQ [1]. The exception is the dimeric poplar PtGPX5, whose dimerization is induced not by the typical dimer loop domain but by sporadic hydrophobic or polar residues [10].

The Gpx-Like Proteins from Microbiome Metagenomes Share a Core Tertiary Structure with Gpx and Prx Family Proteins

To investigate the functional implications of the microbiome-encoded Gpx-like proteins, we studied their structural properties. Considering that the pair-wise similarities between the microbiome-encoded Gpx-like proteins are high (40-65%) and the key functional domains are conserved, we used the longest sequence (MtGpx0, 186 residues) as a representative basis for building the structural model, using homology modeling with Phyre2 [29] and deep learning prediction with AlphaFold2 [30] (Figs. 2A-2C). The structures from the two methods are highly consistent, with an RMSD of 0.59 Å (Fig. 2D). The modeling showed that the protein MtGpx0 forms the thioredoxin-fold typical of both Gpx and Prx, as the four internal β-strands form the central β sheets flanked by three α-helices [37]. The fold comprises the N-terminal motif β1α1β2 and the C-terminal motif β3β4α3 connected by the helices α1a and α2 (Fig. 2A). The modeled structure also contains the variable extension or insertion fragments typical of Gpx and Prx family proteins, including two additional β-strands at the N-terminus, i.e., β1a and β1b, folding into a β hairpin, and an extra α helix, α1a, inserted near the C-terminus [38].

[Figure 1 legend: The compared proteins with known structural information include yeast Gpx3 (3CMI), poplar Gpx5 (2P5QA), Drosophila melanogaster DmGPx (AAF47761), Schistosoma mansoni Gpx4 (2V1M), human Gpx4 (2OBIA), human Gpx5 (O75715), human Gpx3 (2R37A), human Gpx6 (P59796), human Gpx1 (2F8AA), human Gpx2 (2HE3A), human Gpx7 (2P31A), and human Gpx8 (3CYNA). The secondary structure was obtained from Schistosoma mansoni Gpx4 and is displayed at the top of the alignment. The α-helices and 3₁₀-helices are represented by coils and labeled α and η, respectively. The β-strands are represented by arrows and labeled β; β-turns are labeled TT. Identical residues in the same column are shaded in red and displayed in white letters, while homologous residues are displayed in red letters and framed in blue boxes. The catalytic residues (Cys61 or SeCys61, Gln95, Trp150, and Asn151) are indicated with blue triangles at the bottom of the alignment. The SeCys residues (IUPAC letter U) are shaded in pink. The block containing the resolving Cys, i.e., the "Cys block", is framed in orange. The regions for the dimer interface loop (the functional helix) and the tetramer interface loop (the oligomerization loop) in the five human Gpx proteins are framed in green and light green, respectively.]

The Gpx-Like Proteins from Environmental Microbiome Metagenomes Share Functional Motifs with the TRX-Dependent Prx Proteins

In the modeled structure of MtGpx0, the resolving Cys107 on the helix α1a faces the peroxidatic Cys61 in the loop preceding α1. This conformation enables the two Cys to form the intramolecular disulfide bond.
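As an illustration of how this Cys61-Cys107 geometry can be checked in such a model, the following Biopython sketch measures the Sγ-Sγ separation between the two residues; the model file name and chain ID are assumptions, while the residue numbering follows the text.

from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
structure = parser.get_structure("MtGpx0", "mtgpx0_model.pdb")  # assumed file name
chain = structure[0]["A"]                                       # assumed chain ID

sg_peroxidatic = chain[61]["SG"]   # Sγ of the peroxidatic Cys61
sg_resolving = chain[107]["SG"]    # Sγ of the resolving Cys107
distance = sg_peroxidatic - sg_resolving  # Biopython overloads '-' as inter-atom distance (Å)
print(f"Cys61(SG)-Cys107(SG) distance: {distance:.1f} Å")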
The formation of the disulfide bond was shown to be required for reacting with TRX [7]. The distance between the two Cys residues in the simulated model is estimated to be 14.3 Å, which is out of the range of a covalent bond interaction. This large distance corroborates the previous observation that structural rearrangements of the Cys-containing fragments are necessary to form the disulfide bond [37]. We observed that the helix α1a and the surrounding loop are rich in negatively charged residues (Glu96, Asp101, Glu102, and Glu105) and in residues favoring turn-like structures (Pro97, Gly98, Ser99, Thr100, Thr104, Ser108, and Asn110) (Fig. 3A). The negative charge could destabilize the helix and lead to unwinding of the helix α1a, as demonstrated in the oxidized form of the TRX-dependent, Gpx-like protein from poplar [10]. Therefore, these residues confer flexibility on α1a and the surrounding loop, facilitating the formation of the disulfide bond. The resolving Cys is absent in canonical human Gpx proteins (Figs. 1 and S1) and has been shown to be involved in TRX recognition in plant Gpx [10]. In contrast, the residues specific for GSH binding (Arg57, Arg185, and Met147 in bovine Gpx [3]) are absent in the microbiome-encoded Gpx-like proteins, further supporting the functional relationship of the Gpx-like proteins with TRX-dependent Prx. The tetrad active-site residues (Cys61, Gln95, Trp150, and Asn151) form a cleft on the structure surface with a relative solvent-accessible area of 52.7%, making it accessible to the solvent (Figs. 4A and 4B). This ratio is comparable to the 44% for Schistosoma SmGpx and 34% for poplar PtGPX5. The active tetrad is surrounded by several non-charged residues, i.e., Ser59, Gly62, Phe63, Thr64, Phe92, Gly93, and Ser169, and one positively charged residue, Lys60 (Fig. 4C). The distribution of the surrounding residues leads to a mixed, non-charged and weakly positively charged surface (Fig. 4D), in contrast to the uniformly distributed surface charges in the active-site cleft of SeCys-containing mammalian Gpx proteins [10]. To explore the oligomerization state of MtGpx0, the modeled structure was aligned to the subunit of tetrameric human Gpx3 and to monomeric human Gpx4 (Fig. 5). MtGpx0 deviates more from the subunit of tetrameric Gpx3 (with an RMSD of 0.737 Å) than from the monomeric Gpx4 (with an RMSD of 0.394 Å).

[Fig. 5 legend: Structural alignment of MtGpx0 with the tetrameric human Gpx3 in (A) and the monomeric human Gpx4 in (B). MtGpx0 deviates from the subunit of tetrameric Gpx3 in the dimer interface (the so-called functional helix containing the peroxidatic Cys) and the tetramer interface domain (the so-called oligomerization loop) of Gpx3 (highlighted in blue in A), but shows higher coincidence with the monomeric human Gpx4 in the two regions (highlighted in purple circles).]

The difference between MtGpx0 and human Gpx3 is remarkable at the dimer interface and tetramer interface of Gpx3 and is mechanistically induced by the absence of the two interface domains in MtGpx0 (Figs. 5A and 1). Instead, MtGpx0 is well superimposed on the monomeric human Gpx4 in these two regions (Fig. 5B). The structural comparison at the oligomeric interfaces strongly indicates the monomeric state of MtGpx0. Until now, all the reported TRX-dependent, Gpx-like proteins are monomers, except poplar PtGPX5, whose dimerization is induced by alternative mechanisms other than oligomeric interface domains [10].
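For readers who want to estimate a relative solvent accessibility like the 52.7% quoted above, here is a hedged sketch using Biopython's Shrake-Rupley implementation; the model file name is an assumption, and the normalisation constants are the theoretical maximum accessible areas of Tien et al. (2013), which may differ from the scale the authors actually used.

from Bio.PDB import PDBParser
from Bio.PDB.SASA import ShrakeRupley

parser = PDBParser(QUIET=True)
structure = parser.get_structure("MtGpx0", "mtgpx0_model.pdb")
ShrakeRupley().compute(structure[0], level="R")  # per-residue SASA in Å^2

# Theoretical maximum accessible surface areas (Tien et al. 2013), in Å^2:
MAX_ASA = {"CYS": 167.0, "GLN": 225.0, "TRP": 285.0, "ASN": 195.0}

chain = structure[0]["A"]
for resnum in (61, 95, 150, 151):  # the tetrad: Cys61, Gln95, Trp150, Asn151
    res = chain[resnum]
    rel = res.sasa / MAX_ASA[res.get_resname()] * 100
    print(res.get_resname(), resnum, f"relative SASA = {rel:.1f}%")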
Taken together, the structural properties presented here for MtGpx0, i.e., the conformation of the two Cys residues, the substrate binding sites, the surface charge distribution, and the oligomeric state, clearly point to the functional homology of MtGpx0 with the TRX-dependent Prx, in spite of its high sequence similarity with the canonical Gpx proteins.

Patterns of Sequence Conservation among All Bacterial Gpx-Like Proteins

Using sequence alignment and structural modeling, we have shown that Gpx-like proteins from environmental microbiome metagenomes harbor the key functional motifs required for TRX-dependent catalysis. To determine whether the functional motifs related to TRX dependence could be extended to other bacterial Gpx-like proteins, we collected the known bacterial proteins homologous to Gpx-like proteins from the UniProt database and performed preprocessing to improve the sequence quality (see Materials and Methods) [22]. The final processed sequence set contains 1,997 entries with comparable lengths and identifiable homologies. The 1,997 protein entries were then used for multiple sequence alignment and conservation-pattern profiling.

Based on the primary sequence and secondary structure alignments, we established the conservation pattern, shown in gradient colors (Figs. S4A and S5). The proteins have pair-wise similarities of 35-55%, averaging 45% (Fig. S4B). We observed that the conservation pattern of the bacterial Gpx-like proteins is consistent with that of the environmental microbiome-encoded Gpx-like proteins, suggesting their functional homology (Figs. S4A and S1). From the sequence alignment profile, four conserved domains were obtained (Table 1). Overall, the conserved domains represent 53% of the whole protein length, involving 99 residues. The four conserved domains contain the residues essential for TRX-dependent catalysis: the four active-site residues (Cys, Gln, Trp, and Asn, solid underlined), the resolving Cys residue (double underlined), the residues surrounding the active-site cleft (wave underlined), and the residues surrounding the disulfide bond (dotted underlined) (Tables 1 and 2).

Table 1 (legend). Columns: Domain, Locus, Sequences. The numbering is consistent with the coordinates in Fig. 1. Solid underline: active-site residues. Double underline: resolving Cys. Wave underline: surrounding residues of the catalytic center. Dotted underline: surrounding residues of the disulfide bond.

Similar to MtGpx0, the surrounding environment of the active-site cleft in bacterial Gpx-like proteins is a mixture of non-charged and charged residues (Table 2). More importantly, most bacterial Gpx-like proteins also lack the dimerization domain and the oligomerization domain, indicating their monomeric state, with the exception of a small proportion of proteins (17%) containing the oligomerization loop (Fig. S4). The four domains also harbor conserved residues located on the core elements of the tertiary structure, such as the aliphatic residues Gly and Pro, residing in the turns/bends of the tertiary structure, and the hydrophobic residues Phe, Trp, Val, Leu, and Ile, constituting the inner β-strands of the tertiary structure (Table 3). Those conserved residues in the four domains are the key elements forming the inner core of the thioredoxin fold (Fig. S4C).
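The conservation-pattern profiling described above can be approximated with a simple per-column entropy scan of the alignment. A minimal sketch follows; the alignment file name is an assumption, and real profiling (as in the figures) would typically weight sequences and use substitution-matrix-aware scores.

```python
# Minimal sketch: per-column conservation of an MSA via Shannon
# entropy; low entropy means high conservation (0 bits if invariant).
import math
from collections import Counter

def read_fasta(path):
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

msa = list(read_fasta("bacterial_gpx_aligned.fasta").values())

for col in range(len(msa[0])):
    column = [s[col] for s in msa]
    if column.count("-") > len(column) / 2:
        continue  # mostly gap: skip
    counts = Counter(c for c in column if c != "-")
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    print(f"col {col + 1}: H = {entropy:.2f} bits")
```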
Taken together, the Gpx-like proteins from isolated bacterial species, similar to those from the environmental microbiome, plants, and fungi, share common structural elements and functional motifs specific for TRX-dependent catalysis.

Phylogenetic Structure of Bacterial Gpx or Gpx-Like Proteins

To investigate the sequence diversity and phylogenetic structure of the bacterial Gpx-like proteins, we created a non-redundant protein set by merging the protein sequences with similarities greater than 70%, thus obtaining 376 representative sequences for phylogeny construction. Interestingly, the phylogeny exhibits a star-like structure, and most of the branches have a bootstrap support value lower than 70, with the exception of some external branches (collapsed as triangles in Fig. S6A). The low bootstrap confidence of the internal branches is further illustrated by the highly inter-connected phylogenetic network constructed by SplitsTree (Fig. S6B) [33]. Another notable feature of the phylogeny is that proteins from the same phylum scatter across multiple branches, while single branches contain a mixture of proteins from multiple phyla. This highly intermingled phylogeny of bacterial Gpx-like proteins is in contrast with the clear grouping structure of canonical Gpx proteins from vertebrates, where the Gpx proteins cluster in several clades according to their substrates or subcellular localizations [18]. The phylogenetic differences of Gpx proteins between bacteria and vertebrates further support the view that the TRX-dependent, Gpx-like proteins in bacteria are very ancient and have evolved for a long time from a common ancestor, while the occurrence of GSH-dependent, canonical Gpx proteins in vertebrates might be a recent event [17].

Discussion

Peroxiredoxins (Prx) and glutathione peroxidases (Gpx), usually using TRX and GSH as reducing agents, respectively, have attracted extensive research interest due to their primary roles in protecting cells from oxidative damage. Rapidly accumulating research findings have revealed the surprisingly high versatility and complexity of these two families of proteins regarding their catalytic mechanisms, sequence classifications, and evolution. In addition to the heterogeneous subtypes with subtle differences within each family, the two families of proteins also exhibit interconnections with respect to function and evolution. Previous studies have identified multiple Gpx-like peroxidases in diverse organisms, including fungi [8,9], plants [10,11], insects [12], and rodent parasites [13][14][15], in which the Gpx-like proteins exhibit significant sequence homologies with canonical Gpx proteins but higher substrate specificity for TRX than for GSH. It has been proposed that most of the non-mammalian glutathione peroxidases should be classified as Gpx-like peroxiredoxins with TRX as the reducing agent [18]. However, the properties and classification of this family of proteins in prokaryotes, particularly in the environmental microbiome, have not been systematically studied. In this study, we addressed these issues for the first time by comprehensively analyzing the Gpx-like proteins from environmental microbiome metagenomes using public databases and our own dataset [21]. We also analyzed the Gpx-like proteins from individually isolated bacteria by mining a set of about 2,000 publicly available sequences, which is thus far the largest dataset in this regard.
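For readers who want to reproduce the representative-set reduction described above, the sketch below shows a greedy clustering at a 70% similarity threshold. It is a toy stand-in: similarity here is difflib's ratio, a crude proxy for the alignment-based identity a production tool such as CD-HIT would use.

```python
# Minimal sketch: greedy clustering that merges sequences above a 70%
# similarity threshold and keeps one representative per cluster.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def greedy_representatives(seqs, threshold=0.70):
    reps = []
    for seq in sorted(seqs, key=len, reverse=True):  # longest first
        if all(similarity(seq, rep) < threshold for rep in reps):
            reps.append(seq)
    return reps

# Toy usage: near-identical sequences collapse to one representative.
toy = ["MKVLITGASG", "MKVLITGASG", "MKVLVTGASG", "MATPWHNQRK"]
print(len(greedy_representatives(toy)))  # -> 2
```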
By performing primary sequence and tertiary structural analyses, we show that the Gpx-like proteins from the environmental microbiome and isolated bacteria share high sequence homologies with the canonical Gpx proteins but use TRX as the reductant, and should therefore be characterized as TRX-dependent peroxiredoxins. The evidence is as follows: (i) the conservation of a peroxidatic Cys in place of SeCys; (ii) the existence of a second "resolving Cys" in the helix α1a for forming the disulfide bond with the peroxidatic Cys through a conformational change; and (iii) the absence of the dimerization domain and oligomerization domain required for the formation of a multimeric structure. These three features were proposed as the minimal requisites for classifying Gpx-like proteins as TRX-dependent peroxiredoxins [17].

Of particular interest, a small proportion of sequences from the environmental microbiome (4.2%) and isolated bacterial species (17%) contain the oligomerization loop domain but lack the dimerization loop domain (Figs. S1 and S4). Based on evolutionary parsimony, it is reasonable to propose that the occurrence of the oligomerization loop is a late evolutionary event postdating the ancestral sequences that lack the loop. However, this oligomerization loop was not observed in Gpx-like proteins from fungi, insects, or higher plants based on the analysis of data from PeroxiBase [2]. It appears that the small proportion of bacterial sequences containing this loop are the only Gpx-like proteins, other than the canonical mammalian Gpx, that "evolved" this loop. This loop is a manifestation of the interconnection between the two groups of proteins, Prx and Gpx. Moreover, based on the previous consensus that the bacterial Gpx-like proteins are more ancient than the canonical mammalian Gpx proteins [17], we propose two possible explanations for the sequences containing the oligomerization loop: (i) they are the remnants of GSH-dependent survivors in bacteria in a GSH-deficient and TRX-rich environment, later adopted by mammals facing a GSH-rich condition; (ii) they could be a consequence of convergent evolution between some bacteria in a GSH-only environment and mammals in the normal GSH-rich condition. Regardless, the external environments played key roles in shaping the evolution of the two groups of proteins.

The TRX-dependent catalysis of Gpx-like proteins in bacteria and the GSH-dependent catalysis of Gpx in mammals might simply reflect the availability of TRX or GSH in their cellular environments. In bacteria, GSH is very limited or absent, especially in Gram-positive bacteria [39]. Therefore, bacteria still maintain the Cys-based Gpx using the highly available TRX as the reducing agent. The mammalian Gpx proteins may have evolved due to the high concentration of GSH in eukaryotic cells and the higher catalytic efficiency of SeCys compared with Cys. This represents an evolutionary shift of reducing agent achieved by mutating only a small number of key elements, without drastically changing the sequences. The shaping effect of the environment is also manifested in the phylogenetic structure of the bacterial Gpx-like proteins. The availability of a large number of Gpx sequences allowed us to build a phylogeny capturing a sufficient diversity of bacterial taxonomy. The phylogeny of the bacterial Gpx-like proteins exhibits a star-like radiating structure, in which the proteins are unable to form confidently supported branch splits.
This structure is different from the previous report [18], where the bacterial Gpx-like proteins formed three distinct groups; the difference is probably due to the small number of sequences (fewer than 50) included in that study. The star-like structure is also in clear contrast to the situation for the canonical Gpx proteins in mammals [18], where the Gpx proteins cluster in several well-defined clades. In comparison with the relatively narrow living environments of mammals, bacteria inhabit a wide spectrum of environmental conditions and thus may face diverse oxidative challenges in these environments. The highly diversified genetic profile of Gpx-like proteins in bacteria might be the consequence of adapting to the distinct oxidative stresses encountered in a wide range of environmental niches.

In conclusion, we have comprehensively analyzed the Gpx-like proteins encoded by the environmental microbiome and isolated bacterial species, and identified the characteristics of these proteins in terms of sequence, structure, and phylogeny. We showed that the Gpx-like proteins from bacteria should be classified as TRX-dependent peroxiredoxins using TRX as the reductant rather than GSH. The high diversity of this group of proteins may provide a genetic pool to be used as a resource for antioxidant applications or as targets of antibacterial agents. At the same time, the huge diversity of the proteins poses challenges for future functional and application studies. Novel methods of high-throughput screening are needed to identify the sequences with high antioxidant capability under specific oxidative stresses. Also, the physiological relationship between the Gpx-like proteins and the Prx proteins, and the regulatory network within bacteria, remain to be delineated, which will help to enhance the understanding of the molecular mechanisms by which bacteria protect cells from diverse oxidative damage.
Exact Kohn-Sham potential of strongly correlated finite systems

The dissociation of molecules, even the simplest hydrogen molecule, cannot be described accurately within density functional theory because none of the currently available functionals accounts for strong on-site correlation. This problem has led to a discussion of properties that the local Kohn-Sham potential has to satisfy in order to correctly describe strongly correlated systems. We derive an analytic expression for this potential at the dissociation limit and show that the numerical calculations for a one-dimensional two-electron model system indeed approach and reach this limit. It is shown that the functional form of the potential is universal, i.e. independent of the details of the system.

I. INTRODUCTION

Over the years the improvement in exchange-correlation (xc) functionals has made density functional theory (DFT) [1,2] the tool of choice to accurately study and predict properties of many-electron systems. Applications range from atoms to molecules and nanostructures, biomolecules and solids, and cover diverse topics such as theoretical spectroscopy, e.g. optical, energy-loss, and time-resolved spectroscopy, electron transport, light-induced phase transitions, photochemistry, and electrochemistry [3,4]. Despite this success, major basic challenges remain that usually are manifestations of strong, static and dynamic, electron correlations [5]. Van der Waals interactions, localization in strongly correlated systems, open-shell molecules, and molecular dissociation are poorly accounted for by present functionals [5,6].

A general measure of inter-electron correlations is the ratio of the kinetic energy to the potential energy of the Coulomb interaction between electrons. While the kinetic energy is lowered by delocalization of electrons over the system, the Coulomb repulsion works in the opposite direction, trying to keep electrons far from each other and thus favoring the tendency to localization. In the condensed matter context this interplay of two opposite tendencies is commonly pictured in terms of the Hubbard on-site correlations that suppress tunneling of particles between atoms and lead to localization of electrons on lattice sites (or groups of sites). Strong Hubbard correlations are responsible for the dissociation of molecules, the physics of Mott insulators, non-itinerant magnetism in most magnetic dielectrics, the Coulomb blockade in quantum transport, etc. The failure of the common DFT functionals to capture the effects of Hubbard correlations led to the development of the LDA+U method [7] and its more elaborate counterpart, dynamical mean-field theory (DMFT) [8], to describe strongly correlated systems. On the other hand, it is absolutely clear that DFT, being an "in principle exact" theory, should be capable of describing the regime of strong correlations provided the proper xc potential is known. In this realm, it is fundamental to expand the knowledge of the relations fulfilled by the exact xc potential in order to move forward on the road towards the ultimate functional, the "holy grail of DFT". In the present work, we consider a prototypical example of physical behavior governed by strong Hubbard correlations, the dissociation of diatomic molecules, and discuss exact features of the xc potential v_xc necessary to describe the correlation-driven electron localization happening in the dissociation limit.
One such feature is well known: in the dissociation of heteroatomic molecules the Kohn-Sham (KS) potential v_s acquires a step in between the fragments to adjust the ionization potentials [9]. The value of this step is universal and is simply given by the difference of the ionization potentials of the two fragments of the dissociated molecule. Clearly, the presence of this step is necessary to prevent an unphysical flow of electrons to the fragment with a higher ionization potential. However, as we show below, it is not sufficient to correctly describe the dissociation, i.e. the strongly correlated, limit. In fact, in this limit the xc potential acquires a nontrivial structure even for the simplest homoatomic molecules, such as H_2.

An important step in understanding the behavior of the xc potential in the dissociation limit has been made in a series of works by Baerends and co-authors [10][11][12][13][14], who reconstructed the xc potential of a number of stretched diatomic molecules from an accurate many-body configuration interaction (CI) ground-state wave function. They noticed numerically that, in addition to the step, v_xc also shows a peak structure around the middle point between the two atoms [10][11][12][13]. A subsequent analysis has shown that the peak in v_xc is probably a general feature of the dissociation limit, which contradicts the common LCAO form of the molecular orbital, but can be reasonably well reproduced assuming that the two-electron wave function is of Heitler-London form, constructed out of the atomic KS orbitals [14,15]. The physical nature of this peak structure and its connection to the Hubbard correlations is the main subject of our paper.

We prove that the whole spatial dependence of the KS potential in the strongly correlated dissociation limit, including both the peak structure and the step (for heteroatomic molecules), is universal. It depends only on the asymptotic behavior of the density of the fragments, which, in turn, is mainly determined by the atomic ionization potentials [9]. In particular, we derive an analytic formula that allows us to recover the exact form of the xc potential in the dissociation limit from the knowledge of the ionization potentials of the independent fragments. This result adds one more item to the list of exact properties of the KS system and xc potential, such as Koopmans' theorem, the exact asymptotic form of v_xc for finite systems, and the exact relation of the asymptotics of the density to the asymptotics of the highest occupied KS state [9]. We also demonstrate that the peak structure in v_xc can be viewed as a manifestation of the Hubbard on-site correlations at the level of noninteracting KS particles. The physical significance of this peak is that it suppresses the quantum tunneling of KS particles between the two fragments, exactly what the Hubbard repulsion does for real electrons. This ensures that the fragments become physically independent. We emphasize that this effect is not accounted for by any of the currently available functionals and constitutes a stringent test for the future development of static and time-dependent functionals aimed at describing strongly correlated systems.

The structure of the paper is as follows. In Sec. II we discuss the physics of the strongly correlated dissociation limit in terms of both Hubbard on-site correlations and the KS formulation of DFT.
Using a simple analytically solvable model for a one-dimensional (1D) symmetric diatomic molecule we derive the asymptotic form of the KS potential and verify our findings numerically for more general symmetric 1D systems. In Sec. III we uncover the universal physics that governs the behavior of the KS potential in the dissociation limit, derive general exact analytic formulas valid for all two-electron systems, and verify them numerically for model 1D heteroatomic molecules. We also discuss generalizations of the results to more general many-electron systems. We then conclude the paper by summarizing our main results.

Within DFT the real interacting system is modeled by an artificial non-interacting KS system with the same ground-state density. The non-interacting particles are subject to an effective potential v_s via the KS equation [2] (atomic units are used throughout the paper),

$$\left[-\tfrac{1}{2}\nabla^2 + v_s(\mathbf r)\right]\varphi_j(\mathbf r) = \epsilon_j\,\varphi_j(\mathbf r). \qquad (1)$$

Since the KS particles are noninteracting there is no way to localize them on a particular atom, independently of the distance d between the fragments. For a symmetric molecule, like H_2 or Li_2, the KS particles responsible for the formation of the bond always occupy a symmetric orbital, with a probability of 1/2 to find either particle on each atom. Clearly, the behavior of the KS particles is very different from that of real physical electrons. The difference between the real world and the artificial world of KS particles becomes especially striking in the regime of strong correlations, and the dissociation of simple molecules provides us with a vivid example of this phenomenon. However, one piece of physical information, namely the ground-state density, is reproduced exactly by the KS system. Therefore, the real physics should be reflected in the properties of the KS system. Establishing a map of the physics governed by the strong Hubbard on-site correlations to the properties of the KS potential, i.e. a map of the real world to the world of KS particles, is the main subject of this work.

In order to find this map we mainly concentrate on a minimal model that captures all the key physics of dissociation: a system of two electrons in a potential formed by two nuclei/potential wells. In Sec. III we argue that the main conclusions are transferable to the more general many-electron case. In the case of two electrons in a singlet state only one spatial KS orbital is occupied. Therefore, the density is given by n(r) = 2|φ_1(r)|² = 2φ_1²(r), because the orbital can always be chosen to be real. Hence, by inverting Eq. (1), the exact KS potential is given by

$$v_s(\mathbf r) = \epsilon_1 + \frac{\nabla^2\sqrt{n(\mathbf r)}}{2\sqrt{n(\mathbf r)}}, \qquad (2)$$

with n(r) being, by construction, the exact ground-state density of the two-electron system. Hence, given the exact two-body ground-state wave function Ψ(r_1, r_2) one can calculate the density n(r) = ∫dr_2 |Ψ(r, r_2)|², and then recover the exact KS potential by inserting n(r) into Eq. (2). This formally maps the physical two-body wave function to the KS potential. However, extracting the physics behind this formal map is not as simple as one may think, since in a general 3D case the wave function Ψ(r_1, r_2) is a complicated object given fully numerically, e.g. from CI calculations, and, moreover, may be numerically problematic for realistic systems when one reaches the dissociation limit. Therefore, it is instructive to look first at some simplified models and then, after the essential physics is understood, return to realistic situations.
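The inversion of Eq. (2) is straightforward to carry out on a grid. The sketch below recovers v_s(x) - eps_1 from a given density by finite differences; the Gaussian test density is an arbitrary stand-in used only to check the numerics, not a physical molecular density.

```python
# Minimal sketch: numerical inversion of Eq. (2) on a uniform grid
# (atomic units), checked against an analytically known case.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
n = np.exp(-x**2)                      # placeholder density

sqrt_n = np.sqrt(n)
d2 = (sqrt_n[2:] - 2 * sqrt_n[1:-1] + sqrt_n[:-2]) / dx**2
vs = d2 / (2 * sqrt_n[1:-1])           # v_s(x) - eps_1 on the inner grid

# Analytic check: for n = exp(-x^2) one finds (x^2 - 1)/2 exactly.
exact = (x[1:-1]**2 - 1) / 2
print(np.max(np.abs((vs - exact)[600:-600])))   # ~1e-3 on the interior
```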
An obvious simplification, which still contains all the physical ingredients of the original problem, is to consider a system of two interacting particles in one dimension. The corresponding two-electron Schrödinger equation takes the form

$$\left[-\tfrac{1}{2}\partial_{x_1}^2 - \tfrac{1}{2}\partial_{x_2}^2 + v_{ext}(x_1) + v_{ext}(x_2) + v_{int}(|x_1 - x_2|)\right]\Psi(x_1, x_2) = E\,\Psi(x_1, x_2), \qquad (3)$$

where v_ext(x) is the external potential, and v_int(|x - x'|) is the potential of the inter-particle interaction. At the end of this section and in Sec. III we present results based on the full numerical solution of Eq. (3). However, to gain some physical insight into the shape of v_s in the dissociation limit we simplify the model even further to make it analytically solvable.

B. Analytical model of strongly correlated electrons

First, we assume that the external potential in Eq. (3) is given by a sum of two δ-function wells of equal strength, v, located at the points x = ±d/2,

$$v_{ext}(x) = -v\left[\delta(x - d/2) + \delta(x + d/2)\right]. \qquad (4)$$

Similarly, we take the interaction to be a zero-range delta-potential of strength λ,

$$v_{int}(|x - x'|) = \lambda\,\delta(x - x'). \qquad (5)$$

Physically, in the dissociation limit the only role of the interaction is to block the inter-atomic tunneling. Therefore, in that limit, the behavior is expected to be universal and independent of a particular form and/or strength of the interaction. This leads us to the last simplifying assumption, namely the limit of infinitely strong δ-repulsion, λ → ∞. Now the problem becomes immediately solvable by the so-called Girardeau mapping [16] (see also a more recent review [17]), which allows one to map the ground state of strongly interacting "hardcore" bosons (a symmetric wave function) to the ground state of noninteracting fermions (an antisymmetric wave function). In our two-particle case the exact ground-state (singlet, i.e. symmetric) wave function takes the form

$$\Psi(x_1, x_2) = \frac{1}{\sqrt{2}}\,\big|\varphi_1(x_1)\varphi_2(x_2) - \varphi_2(x_1)\varphi_1(x_2)\big|, \qquad (6)$$

where φ_1(x) and φ_2(x) are the two lowest states of the following one-particle Schrödinger equation,

$$\left[-\tfrac{1}{2}\partial_x^2 + v_{ext}(x)\right]\varphi_j(x) = \epsilon_j\,\varphi_j(x). \qquad (7)$$

In other words, the ground state of two infinitely interacting particles in a singlet state is given by the modulus of the ground-state wave function of two noninteracting spinless fermions in the bare external potential v_ext(x). The two lowest-energy solutions of Eq. (7) with the external potential of Eq. (4) are easily found,

$$\varphi_\pm(x) = C_\pm\left[e^{-\alpha_\pm|x - d/2|} \pm e^{-\alpha_\pm|x + d/2|}\right], \qquad (8)$$

where C_± are the normalization constants. The parameters α_±, which determine the corresponding eigenvalues ε_{1,2} = ε_± = -α_±²/2, are the solutions of the following dispersion equations [22],

$$\alpha_\pm = v\left(1 \pm e^{-\alpha_\pm d}\right). \qquad (9)$$

Using the exact ground-state wave function (6) we obtain the exact density,

$$n(x) = \varphi_+^2(x) + \varphi_-^2(x), \qquad (10)$$

and, finally, by inserting n(x) into the 1D version of Eq. (2), the exact KS potential for our strongly correlated two-electron system (for x ≠ ±d/2),

$$v_s(x) = \frac{\left(\varphi_+\varphi_-' - \varphi_-\varphi_+'\right)^2}{2\,n^2(x)} - \frac{\epsilon_+\varphi_+^2 + \epsilon_-\varphi_-^2}{n(x)} + \epsilon_-. \qquad (11)$$

Equation (11) gives the exact KS potential for any distance between the wells. In the dissociation limit, vd ≫ 1, α_± → v and ε_± → -v²/2. Therefore, the last two terms in Eq. (11) cancel, while the remaining first term simplifies to

$$\Delta v_s(x) = \frac{I}{\cosh^2\!\big(2\sqrt{2I}\,x\big)}, \qquad (12)$$

where I = v²/2 is the ionization potential of a separate fragment, the delta-potential well of strength v. Hence, we have found that the exact KS potential in the dissociation limit has the form of a "wall" built up between the two fragments of the "molecule". The shape of this wall looks quite close to the peak structure observed numerically in previous works [10][11][12][13].

C. 1D model for homoatomic dissociation

It is physically plausible to expect that the behavior in the dissociation limit is independent of the particular form and strength of the interaction, and that the asymptotic form of v_s(x) for more general systems is similar to that given by the simple formula (12). We now verify this expectation for a 1D system of two particles in a more general, but still symmetric, external potential, namely

$$v_{ext}(x) = -\frac{v}{\cosh^2(x - d/2)} - \frac{v}{\cosh^2(x + d/2)}, \qquad (13)$$

together with a finite-range repulsive interaction of the same shape,

$$v_{int}(|x - x'|) = \frac{b}{\cosh^2(x - x')}. \qquad (14)$$
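The dispersion equations (9) are transcendental but trivially solved numerically. A minimal sketch (parameters illustrative):

```python
# Minimal sketch: solve alpha_pm = v (1 +/- exp(-alpha_pm d)) of
# Eq. (9) and evaluate the wall of Eq. (12) at the midpoint.
import numpy as np
from scipy.optimize import brentq

v, d = 1.0, 8.0

alpha_plus = brentq(lambda a: a - v * (1 + np.exp(-a * d)), 1e-6, 2 * v)
alpha_minus = brentq(lambda a: a - v * (1 - np.exp(-a * d)), 1e-6, 2 * v)
eps_plus, eps_minus = -alpha_plus**2 / 2, -alpha_minus**2 / 2
print(alpha_plus, alpha_minus, eps_plus, eps_minus)  # alphas -> v as d grows

I = v**2 / 2                              # fragment ionization potential
x = 0.0
wall = I / np.cosh(2 * np.sqrt(2 * I) * x) ** 2
print(wall)                               # peak height equals I at x = 0
```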
The choice of the 1/cosh² shape for the wells and the interaction potential is arbitrary. It is simply a matter of convenience, as the 1D Schrödinger equation with a 1/cosh² potential is exactly solvable [19], which allows us to control the accuracy of our numerical calculations. In addition, the finite-range interaction (14) allows us to reach the dissociation limit in a controllable way without numerical instabilities. For the numerical solution of Eq. (3) with general v_ext and v_int, we note that the 1D two-particle problem defined by Eq. (3) can be formally interpreted as a 2D one-particle problem with the Hamiltonian

$$H = -\tfrac{1}{2}\partial_x^2 - \tfrac{1}{2}\partial_y^2 + V(x, y), \qquad (15)$$

where the effective 2D one-particle potential is defined as

$$V(x, y) = v_{ext}(x) + v_{ext}(y) + v_{int}(|x - y|). \qquad (16)$$

Consequently, the exact ground-state wave function Ψ(x, y) and the exact one-dimensional ground-state density for the physical two-particle system, n(x) = ∫dy |Ψ(x, y)|², can be obtained numerically from any computer code that is able to treat non-interacting electrons in two dimensions. All our calculations in this work were carried out with the OCTOPUS code [20]. The exact KS potential, v_s(x), for the v_ext and v_int of Eqs. (13) and (14) with v = 0.9 and b = 0.5, and varying interwell distance d, is shown in Fig. 1. At first sight the results look very surprising: starting from a certain distance, d = 8 a.u. for these particular parameters, the shape of the KS potential saturates exactly at the form given by the analytic formula (12).

A. Universality of the Kohn-Sham potential

In order to understand the nature of the universal peak in the asymptotic form of the KS potential we turn back to our first simple model with an infinite zero-range repulsion and look more closely at the behavior of the exact density determined by Eq. (10). In the dissociation limit, vd ≫ 1, the functions φ_+(x) and φ_-(x) become simple symmetric and antisymmetric combinations of "atomic" orbitals. Taking the squares and summing them up, as suggested by Eq. (10), we find that all interference terms, i.e. the cross-products of different atomic orbitals, cancel, and the total density reduces to a sum of two atomic densities,

$$n(x) = \varphi_a^2(x - d/2) + \varphi_a^2(x + d/2), \qquad (17)$$

where φ_a(x) = √v e^{-v|x|} is the orbital of an isolated well. This is exactly what Hubbard on-site correlations do: they destroy the inter-atomic tunneling/interference, which localizes the electrons on separate sites, and eventually makes the density the sum of the densities of two physically independent fragments. On the KS side of the mirror, the KS potential, whatever it is, cannot localize the KS particles. However, by building up a self-consistent wall between the fragments it suppresses the tunneling/interference of the atomic KS orbitals to mimic the density distribution of the two independent atoms. Thus, the physics of the on-site Hubbard correlations in the real world is mapped to the wall in the KS potential in the artificial world of KS particles. It is, therefore, not surprising that the universality of the physics in the dissociation limit is reflected in the universal form of the asymptotic KS potential. Since for DFT only the density distribution is essential, the general condition that determines the KS potential in the dissociation limit is simply

$$n(\mathbf r) = n_1(\mathbf r) + n_2(\mathbf r), \qquad (18)$$

in other words, the total density n(r) is equal to the plain sum of the densities, n_1(r) and n_2(r), of the two independent fragments. The asymptotic form of the KS potential should be such that it supports the density distribution given by Eq. (18).
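The 2D mapping of Eqs. (15) and (16) is easy to reproduce with a sparse finite-difference diagonalization. The sketch below is a coarse-grid illustration, not a converged calculation of the kind performed with OCTOPUS in the paper:

```python
# Minimal sketch: diagonalize the two-particle 1D Hamiltonian as a 2D
# one-particle problem and integrate out one coordinate to get n(x).
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import eigsh

v, b, d = 0.9, 0.5, 8.0          # well depth, repulsion, separation
N, L = 201, 30.0                  # coarse grid, for illustration only
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

def vwell(z):
    return -v / np.cosh(z - d / 2) ** 2 - v / np.cosh(z + d / 2) ** 2

# 1D kinetic operator -1/2 d^2/dz^2 (three-point stencil), lifted to 2D.
T1 = -0.5 * diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
T = kron(T1, identity(N)) + kron(identity(N), T1)

X, Y = np.meshgrid(x, x, indexing="ij")
V = vwell(X) + vwell(Y) + b / np.cosh(X - Y) ** 2
H = (T + diags(V.ravel())).tocsr()

E0, vec = eigsh(H, k=1, which="SA")        # 2D ground state
psi = vec[:, 0].reshape(N, N)
psi /= np.sqrt((psi**2).sum() * dx * dx)   # normalize

n = (psi**2).sum(axis=1) * dx              # n(x) = int dy |Psi(x,y)|^2
print(E0[0], n.max())
```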
As the densities n_1(r) and n_2(r) decay exponentially from different sides, the only way to mimic this at the level of a single KS orbital is to insert a potential wall in the middle region. Having understood the key physics, we are ready to go to more complex systems.

B. Kohn-Sham potential of heteroatomic 1D molecules

It is now straightforward to find the form of the KS potential in the dissociation limit for a general 1D molecule formed by two different wells. Assuming that the densities, n_1(x) and n_2(x), corresponding to one electron sitting in a separate well, are known, we require that the total density n(x) is given by their sum, Eq. (18), and substitute this sum into Eq. (2). The result can be reduced to a form that looks structurally similar to Eq. (11),

$$\Delta v_s(x) = \frac{\left(\sqrt{n_1}\,\partial_x\sqrt{n_2} - \sqrt{n_2}\,\partial_x\sqrt{n_1}\right)^2}{2\,(n_1 + n_2)^2} + \frac{I_1 n_1 + I_2 n_2}{n_1 + n_2} - I, \qquad (19)$$

where I_{1,2} are the ionization potentials of the fragments and I = min{I_1, I_2} is the ionization potential of the total system. Equation (19) is valid in the dissociation limit, and from its structure it is clear that Δv_s(x) has a nontrivial x-dependence (i.e. differs from a constant) only far away from the "atoms", where the densities fall off exponentially. Therefore, for the practical evaluation of v_s in the dissociation limit, it is sufficient to know only the asymptotic behavior of the density of the separate fragments. In the 1D case, the asymptotics of the densities n_1(x) and n_2(x) have the following general form,

$$n_{1,2}(x) \to A_{1,2}\, e^{-2\alpha_{1,2}|x \pm d/2|}, \qquad (20)$$

where the exponents α_{1,2} are related to the ionization potentials of the atoms, I_{1,2} = α_{1,2}²/2, and A_{1,2} are prefactors to the exponential decay. Inserting Eq. (20) for a symmetric molecule (equivalent wells with A_1 = A_2 and α_1 = α_2) into Eq. (19), we immediately recover our first model result of Eq. (12), thus confirming its universality.

In a general asymmetric case (different wells or a "heteroatomic" molecule) a new qualitative feature, a "shelf", appears. This shelf in v_s is such that it aligns the ionization potentials of the atoms. Formally, it results from the last two terms in Eq. (19). It is convenient to represent the KS potential as a sum of two different contributions, i.e. Δv_s(x) = v_s^{(1)}(x) + v_s^{(2)}(x). For the region between the wells, i.e. for -d/2 < x < d/2, the two contributions correspond to the "wall" and the "shelf" discussed before. They are given by

$$v_s^{(1)}(x) = \frac{(\alpha_1 + \alpha_2)^2/8}{\cosh^2\!\left[(\alpha_1 + \alpha_2)(x - x_0)\right]}, \qquad (21)$$

$$v_s^{(2)}(x) = \frac{I_1 - I_2}{2}\left\{1 - \tanh\!\left[(\alpha_1 + \alpha_2)(x - x_0)\right]\right\}, \qquad (22)$$

with

$$x_0 = \frac{\ln(A_1/A_2) - (\alpha_1 - \alpha_2)\,d}{2\,(\alpha_1 + \alpha_2)}. \qquad (23)$$

Here, and in the following, we have assumed that α_1 ≥ α_2, i.e. that the left fragment has a larger ionization potential. Obviously, for a symmetric configuration, v_s^{(2)} vanishes identically, i.e. there is only a peak in this case. Also, in this case x_0 = 0, i.e. the peak is exactly in the middle between the two identical fragments, as expected from symmetry. For the region x < -d/2, to the left of the deeper well, the corresponding expressions read

$$v_s^{(1)}(x) = \frac{(\alpha_1 - \alpha_2)^2/8}{\cosh^2\!\left[(\alpha_1 - \alpha_2)(x - x_0')\right]}, \qquad (24)$$

$$v_s^{(2)}(x) = \frac{I_1 - I_2}{2}\left\{1 + \tanh\!\left[(\alpha_1 - \alpha_2)(x - x_0')\right]\right\}, \qquad (25)$$

with

$$x_0' = -\,\frac{(\alpha_1 + \alpha_2)\,d + \ln(A_1/A_2)}{2\,(\alpha_1 - \alpha_2)}. \qquad (26)$$

Contrary to before, v_s^{(1)} does not describe a peak here; it can actually be shown that the potential is strictly monotonically increasing, describing the building up of the shelf, or its return to zero, depending on the direction from which one approaches x_0'. Also, for the symmetric case, both contributions vanish, as there is no shelf in that case. For the region x > d/2 the potential decays exponentially without specific features.

We emphasize that neither the specific form of the fragments nor the type of interaction between the electrons enters the derivation of our analytical result directly. The specifics of the fragments appear in the result only via the parameters α_{1,2} and A_{1,2}. The former describes how fast the density decays, i.e.
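The structure of Eq. (19) can be visualized directly from the exponential asymptotics of Eq. (20). A minimal sketch (illustrative parameters) that reproduces the wall-plus-shelf shape:

```python
# Minimal sketch: evaluate Eq. (19) from the exponential fragment
# densities of Eq. (20); illustrative parameters only.
import numpy as np

d = 12.0
a1, a2 = 1.0, 0.8                 # alpha_1 >= alpha_2 (left fragment deeper)
A1 = A2 = 1.0
I1, I2 = a1**2 / 2, a2**2 / 2
I = min(I1, I2)

x = np.linspace(-d, d, 4001)
n1 = A1 * np.exp(-2 * a1 * np.abs(x + d / 2))
n2 = A2 * np.exp(-2 * a2 * np.abs(x - d / 2))

s1, s2 = np.sqrt(n1), np.sqrt(n2)
ds1, ds2 = np.gradient(s1, x), np.gradient(s2, x)

dvs = (s1 * ds2 - s2 * ds1) ** 2 / (2 * (n1 + n2) ** 2) \
      + (I1 * n1 + I2 * n2) / (n1 + n2) - I

print(dvs.max())        # wall height, close to (a1 + a2)**2 / 8
print(dvs[0], dvs[-1])  # shelf ~ I1 - I2 at far left, ~0 at far right
```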
it is directly related to the ionization potential of each fragment. The latter is connected to the normalization of the wave function and enters the potential only as a logarithmic correction to the positions of the peak and the shelf, without changing the shape of the potential. For a symmetric system the potential is completely determined by the ionization potential of the two fragments. In all cases, symmetric and asymmetric, the functional form of the KS potential is universal; only the position, the width, and the height of the peak depend on the system under consideration. Both in the symmetric and in the asymmetric case the presence of the universal wall reflects Hubbard correlations: the potential wall suppresses the tunneling and drives the KS density to the density corresponding to physically independent subsystems.

To ensure that our universal analytical formulas are indeed correct, we performed numerical calculations for an asymmetric two-electron system with the external potential given by the sum of two different potential wells,

$$v_{ext}(x) = -\frac{v_1}{\cosh^2(x + d/2)} - \frac{v_2}{\cosh^2(x - d/2)}, \qquad (27)$$

with v_1 = 0.9 and v_2 = 0.7. As before, for the interaction we keep the finite-range potential of Eq. (14). Since the one-particle problem with a 1/cosh² potential is exactly solvable [19], the parameters α_{1,2} and A_{1,2} entering our asymptotic formulas, Eqs. (21)-(26), are available in analytic form. In particular, for the pre-exponential factors in the asymptotics of the "atomic" densities we get

$$A_{1,2} = \frac{4^{\alpha_{1,2}}\,\Gamma(\alpha_{1,2} + 1/2)}{\sqrt{\pi}\,\Gamma(\alpha_{1,2})}, \qquad (28)$$

where Γ denotes the usual Gamma function.

In Fig. 2 we show the comparison of the analytic KS potential given by Eqs. (21) and (22) and the KS potential obtained from the full numerical solution of the problem defined by Eqs. (3), (14), and (27). As expected, in the asymmetric case, v_1 ≠ v_2, the KS potential acquires a shelf structure in addition to the peak. The shelf is a direct result of the necessary alignment of the KS energy levels (the ionization potentials) in the two fragments [9,21]. It is already not so surprising to see that the KS potential again approaches the analytic asymptotic form with increasing distance.

Fig. 2 (caption, fragment). Comparison of the numerical KS potential with the asymptotic form of Eqs. (21) and (22); the potential returns to zero at large negative x.

As in the analytic calculation, the exact position of the peak and the shelf depends slightly on the distance between the two wells, always being closer to the deeper well. While in the symmetric case, see Fig. 1, the dissociation limit is reached at a distance of around 8 a.u., in the asymmetric case around 11 a.u. is necessary. In both cases the numerical results agree perfectly with the analytical expression. The larger distance necessary in the asymmetric case is a result of the shallower right potential well. Unfortunately, the analytical result of the shelf returning to zero cannot be verified numerically for the systems at hand: the position x_0', Eq. (26), is so far away from the actual potential wells that the density there is numerically zero. There is, however, no doubt that the shelf returns to zero exactly as predicted by the analytic formula.

C. Generalizations to three-dimensional and many-electron systems

The general argumentation used in the previous subsection to derive the exact KS potential in the dissociation limit is not restricted to 1D systems. The general physical condition for dissociation is that the density is given by the sum of the densities of the independent fragments, Eq. (18), because the inter-fragment tunneling is destroyed by Coulomb correlations. The inversion formula of Eq.
(2) is also valid for any two-particle system independently of the dimensionality. Combining it with the condition (18), the exact KS potential in the dissociation limit takes the form

$$\Delta v_s(\mathbf r) = \frac{\big|\sqrt{n_1}\,\nabla\sqrt{n_2} - \sqrt{n_2}\,\nabla\sqrt{n_1}\big|^2}{2\,(n_1 + n_2)^2} + \frac{I_1 n_1 + I_2 n_2}{n_1 + n_2} - I, \qquad (29)$$

where all notations are the same as in Eq. (19). Using this formula we can recover the exact limiting functional form of the KS potential for any two-particle object dissociating into two one-particle fragments. The only required input is the long-range asymptotics of the independent fragments, which is mainly determined by their ionization potentials. It is important to emphasize that the pre-exponential factors give only weak logarithmic corrections to the positions of the wall and the shelf.

As an illustration, we present the exact KS potential that controls the dissociation limit of the H_2 molecule. The final result, obtained by inserting the ionization potential of the hydrogen atom and the electronic densities of two independent hydrogen atoms, located at the points R_1 and R_2, into Eq. (29), takes the form

$$\Delta v_s^{H_2}(\mathbf r) = \frac{\left(\hat{\mathbf r}_1 - \hat{\mathbf r}_2\right)^2}{8\,\cosh^2(r_1 - r_2)}, \qquad (30)$$

where r_{1,2} = r - R_{1,2} are the vectors between the two protons and the considered point in space, and r̂_{1,2} are the corresponding unit vectors. The KS potential for the hydrogen molecule, Eq. (30), is shown in Fig. 3. It is easy to see from Eq. (30) that along the molecular axis Δv_s^{H_2}(ρ = 0, z) is exactly of the 1D form of Eq. (12), while in the perpendicular direction it has a Lorentzian shape,

$$\Delta v_s^{H_2}(\rho, z = 0) = \frac{1}{2}\,\frac{1}{1 + (2\rho/d)^2},$$

with the width increasing with increasing interatomic distance d = |R_1 - R_2|. Similarly, we can obtain an explicit form of the exact KS potential for any two-electron system in the strongly correlated dissociation limit.

Moreover, one can argue that the general formula (29) remains valid also for many-electron systems in those cases where the separate fragments have a single electron in the highest occupied KS orbital. Indeed, in this case the asymptotic behavior of the density away from the atoms is completely determined by the two KS particles in the highest occupied KS molecular orbital (KS HOMO), while the rest of the electrons effectively contribute to the rigid atomic cores. Therefore, the asymptotic form of the KS potential can be obtained by inverting only one KS equation, namely that for the KS HOMO, and, hence, the two-particle formula (2) remains asymptotically valid.

IV. CONCLUSIONS

In conclusion, we have presented a recipe to calculate the exact KS potential of systems in their dissociation limit. The main ingredient is the ionization potential of the dissociated fragments, a quantity that is readily available from spectroscopic data. We have presented explicit results for a one-dimensional model system and the hydrogen molecule. It is shown that the functional form of the potential is independent of the specific system and the details of the interaction, as long as the latter is repulsive and sufficiently strong. For the 1D model system the numerical results approach the analytical one as the distance between the two fragments is increased; hence, they confirm our analytical result perfectly for both a symmetric and an asymmetric system. Our results not only pose a strong constraint for the development of exchange-correlation functionals but also introduce an alternative way to look at electron localization in strongly correlated systems. How to incorporate these effects in a density-functional treatment remains a challenge. It is especially intriguing to explore the implications of our universal results for quantum transport in the regime of Coulomb blockade. It is natural to expect that the potential wall in the KS potential should modify the tunneling probability when the transport is described in terms of KS DFT.
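As a consistency check on the reconstructed Eq. (30), the sketch below evaluates it along the molecular axis and compares with the 1D form of Eq. (12) for I = 1/2; the internuclear distance is illustrative:

```python
# Minimal sketch: evaluate the H2 wall of Eq. (30) on the axis and
# compare with Eq. (12) at I = 1/2 (atomic units).
import numpy as np

d = 10.0
R1, R2 = np.array([0.0, 0.0, -d / 2]), np.array([0.0, 0.0, d / 2])

def dvs_h2(r):
    r1, r2 = r - R1, r - R2
    n1, n2 = np.linalg.norm(r1), np.linalg.norm(r2)
    u = r1 / n1 - r2 / n2
    return np.dot(u, u) / (8 * np.cosh(n1 - n2) ** 2)

for z in (0.0, 0.5, 1.0):
    on_axis = dvs_h2(np.array([0.0, 0.0, z]))
    eq12 = 0.5 / np.cosh(2 * z) ** 2   # I / cosh^2(2 sqrt(2I) z), I = 1/2
    print(z, on_axis, eq12)            # the two columns agree
```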
Effects of cutter parameters on shearing stress for lettuce harvesting using a specially developed fixture

To investigate the optimal parameter combination of a reciprocating cutter for harvesting hydroponic lettuce automatically, a shear fixture was designed for cutting lettuce stems on a universal materials tester. The effects of blade distance, sliding cutting angle, skew cutting angle, and shearing angle on shearing stress were investigated in this study. The orders of significance of single factors and double factors were analyzed using response surface methodology (RSM). A scanning electron microscope was used to observe the microstructure of the lettuce stem and analyze the shearing characteristics at the microscopic level. The RSM results showed that the order of significance for single factors was (i) sliding cutting angle, (ii) shearing angle, (iii) skew cutting angle, and (iv) blade distance. The sliding cutting angle had a highly significant influence on the shearing stress. The order of significance for double factors was (i) blade distance and shearing angle, (ii) sliding cutting angle and skew cutting angle, and (iii) sliding cutting angle and shearing angle. A quadratic model relating the factors to the shearing stress was built from the response-surface results. The optimized combination of factors giving the minimum shearing stress was obtained; it reduced the shearing stress by 69.9% relative to the maximum value. This research can provide a reference for designing lettuce-cutting devices.

Introduction

Hydroponic lettuce is a common greenhouse vegetable whose production scale is increasing rapidly [1]. Nowadays, hydroponic lettuce (Lactuca sativa) is mainly harvested manually in China, with low efficiency and high labor costs [2]. Mechanized or automated harvesting is the most promising way to solve this issue [3]. Blade pattern, blade installation angle, and other cutter parameters of crop harvesters have a marked influence on harvesting quality, and cutter design is an extremely active area of crop-harvester research [4,5]. Cho et al. [6] developed a harvesting robot for hydroponic lettuce in which a cylinder drove a blade to cut the lettuce, but the blade parameters were not studied. Igathinathane et al. [7] studied the effect of the orientation of corn stalks on mechanical cutting; the blade had a 30° single-bevel sharp knife edge. To optimize the cutting-blade design of a sugarcane harvester, Mathanker et al. [8] investigated how the cutting speed and oblique blade angle affected the cutting energy. In China, the reciprocating cutters on rice and wheat harvesters, especially the standard blade and cutter arrangement type, are often used as references for designing crop harvesters [9][10][11][12][13]. Reciprocating cutters have been used on commercial harvesters for lettuce planted in soil [14]. However, the existing reciprocating cutters on lettuce harvesters are unsuitable for harvesting hydroponic lettuce, because the agronomic conditions of hydroponics and soil culture differ. The stems of hydroponic lettuce are relatively short, and their physical properties differ markedly from those of long-stalked crops such as wheat. Besides, hydroponic lettuce should be harvested intact, because leaf loss reduces the selling price. Therefore, cutting height is a necessary parameter for designing a lettuce harvester.
Using the traditional standard blade and cutter parameters for harvesting hydroponic lettuce would lead to a relatively large cutting device and cause problems such as significant cutting resistance and an excessive cutting height. Therefore, foundational research is required on blade patterns and installation parameters that realize low-resistance cutting. Such research will provide a reference for designing lettuce-cutting devices.

The shearing characteristics of crop stalks have been used as references for designing the cutting devices of harvesters [15][16][17][18][19][20][21][22][23][24][25]. Ghahraei et al. [26] selected knife edge angle, knife shear angle, knife approach angle, and knife rake angle as test factors and studied how they affected the cutting force and cutting energy; a rotary impact cutting system was thus developed. A rotary cutter was used in a cabbage harvester, and the best cutting position and combination for cutting the cabbage roots were obtained; the study showed that the best cutting combination was single-point support, sliding cutting, downward inclining, and a low cutting speed, while blade distance was not considered. Chen et al. [27] conducted laboratory experiments on a reciprocating cutter for cutting hemp, in which the cutting force and energy for differing moisture contents were measured. İnce et al. [28] studied the shearing characteristics of sunflower stalks, taking the shearing stress as the test index to obtain the cutting force per unit area. Because the shearing force changed with stalk diameter, selecting shearing stress as the test index could eliminate the influence of stalk diameter on the cutting force. Some other studies were conducted on crops planted in the greenhouse. Gao et al. [29] designed a lettuce harvester and an experiment based on response surface methodology (RSM) to optimize factors such as the cutting position and style. However, their cutter had no counter shear, and the characteristics of universal lettuce cutters have not yet been studied.

The above foundational studies mainly concern how factors such as the blade pattern and installation affect the shearing force and energy when a blade cuts stalks. However, there are few studies on the comprehensive effects of the blade shape parameter (namely, the sliding cutting angle) and the blade installation parameters (namely, blade distance, skew cutting angle, and shearing angle) on the shearing stress for a reciprocating lettuce-cutting device. Moreover, there is little information available in the literature about the relationship between lettuce-stem microstructure and shearing-force characteristics. To optimize the cutter parameters of a hydroponic-lettuce cutting device and study the shearing characteristics of lettuce stems, a shear fixture was designed in this study. Four test factors were selected, i.e., (i) the blade distance, (ii) the sliding cutting angle, (iii) the skew cutting angle, and (iv) the shearing angle. An RSM test was designed to study the order of significance of these factors and their optimized combination. A scanning electron microscope (SEM) was used to observe the microstructure of the lettuce stem and analyze the shearing process at the microscopic level. This work provides a reference for designing and improving miniaturized lettuce-cutting devices.

Materials and methods

The Naiyou variety of hydroponic lettuce was used in the experiment.
The lettuces were cultivated for 45 d in a plant factory (Xutian Photoelectric Co., Ltd, Xi'an, China). A total of 100 lettuces were selected randomly: 90 were used for the RSM tests (divided into 30 groups of three lettuces each) and the remainder were used for the verification tests. Leaves and roots were removed manually to obtain lettuce stems as the experimental samples (Figure 1), which had diameters of 14 to 20 mm.

Figure 1 Test material and sample preparation: a. test material; b. sample preparation

Design of shear fixture and operating principle

To measure the shearing force of the lettuce stem, a shear fixture was designed to work on a universal materials tester (HY-0230, Shanghai Hengyi Testing Instruments Co., Ltd, China). The system for measuring the shearing force is shown in Figure 2; the maximum measurable shearing force was 100 N. To measure the shearing force as accurately as possible, a cutting speed of 50 mm/min was used [30].

The blade distance between the edge and the counter shear influenced the stem stiffness when the cutting combination was kept constant as single-point clamping. The blade distance was changed by moving the blade-distance adjustment plate (Figure 2b). The sliding cutting angle was an important parameter influencing the blade shape [16,30], and the blade velocity was inclined to the blade edge because of the sliding cutting angle. During sliding cutting, the blade velocity could be resolved into the sliding-cutting velocity and the hewing velocity, which were parallel and perpendicular to the blade edge, respectively. The sliding cutting angle was the tilt angle of the blade edge, i.e., the angle between the blade velocity and the hewing velocity. The blade was installed on two plates, and the sliding cutting angle could be adjusted by regulating the blade installation site on the sliding-cutting-angle adjustment plate (Figure 2b). The skew cutting angle was the angle between the cutting face and the stem axis. Skew cutting occurred when the cutting face was inclined to the stem axis while the cutting direction remained perpendicular to the stem axis. The skew cutting angle was regulated by revolving the installation angle of the skew-cutting-angle adjustment plate (Figure 2b). The shearing angle was defined as the angle between the cutting direction and the stem axis. When the cutting face of the blade was inclined to the stem axis, the shearing angle was set by revolving the installation angle of the stem fixture on the shearing-angle adjustment brackets (Figure 2b).

Scanning electron microscopy

To analyze the shearing characteristics of lettuce stems from a microstructural perspective, the matrix and epidermis of a lettuce stem were scanned by SEM (TM3030; Hitachi, Ltd., Japan). The steps to prepare a sample for SEM scanning were as follows: 1) a transverse section and a longitudinal section of a lettuce stem were made, with a specimen thickness of 1 mm; 2) to avoid deterioration of the ultrastructure of the specimen surface, the specimens were freeze-dried for 6 h in a freeze-drying device (SJIA-12N, Ningbo Sjialab Equipment Co., Ltd., China) at a drying temperature of -50°C; 3) to improve the efficiency of generating secondary electrons and prevent the specimens from charging during SEM observation, the surfaces of the specimens were coated with Au and Pd using a metal coating apparatus (MSP-2S, IXRF Systems, Inc., USA).
To observe the microstructures of the matrix and epidermis, the image magnification was set to 100×. The microstructural character of the matrix was observed from the transverse section and the longitudinal section; the magnification of the matrix images was 1000× to show the microstructure more clearly.

Mechanical tests

The smooth-edged blade used in the RSM test was made of 0.5 mm thick carbon steel with a 60° blade edge. The fixture clamped the lettuce stem in a single-point clamping manner. The shearing force-displacement curve that reveals the shearing process of the lettuce stem was obtained with the universal materials tester. The accuracy of the load cell was 0.3%, and the data sampling frequency was 50 Hz. The stroke and speed accuracies were both ±0.5% of the indicated value.

Ranges of test-factor values

Blade distance, sliding cutting angle, skew cutting angle, and shearing angle were chosen as the test factors. From a preparatory experiment, a blade-distance range of 0.5 to 2.5 mm was chosen. From a previous study [9], a sliding-cutting-angle range of 0° to 40° was selected. The skew cutting angle and the shearing angle are the reference bases for the inclination angle of the cutter. Because the stems used are short, the blade would damage the lettuce leaves if the skew cutting angle or shearing angle were too large. Based on the requirements of lettuce harvesting, the range of values for the skew cutting angle and the shearing angle was set from 0° to 20°.

Test index

Because the shearing force changed with lettuce-stem diameter, shearing stress was chosen as the index according to previous studies [17,28,31]; the shearing stress represents the shearing force per unit area. The shearing-force measurement system (Figure 2) could measure only the shearing force, and the shearing stress τ (in megapascals) was calculated by Equation (1) [28]:

τ = F_max / S    (1)

where F_max is the maximum shearing force, N; and S is the cross-sectional area of the stem at the shearing plane, mm². The profile of the stem cross-section was traced on graph paper, and the area was then measured from the graph paper.

Design of RSM test

To obtain (i) the combination of factors giving the minimum shearing stress, (ii) the order of significance of the factors, and (iii) how the factors affect the shearing stress, an RSM test using the central composite inscribed (CCI) method was designed and conducted in Design-Expert 7.0 software (Stat-Ease, Inc., USA). The four factors were the blade distance (factor A), the sliding cutting angle (factor B), the skew cutting angle (factor C), and the shearing angle (factor D). To keep the factor levels within the ranges of test-factor values, the alpha value (the distance between the lower level and the zero level) was set to 0.5 in the software, and the coded factor levels were calculated by the software (Table 1). The Design-Expert calculation gave 30 test runs; each run was repeated three times and the mean shearing stress was recorded as the result of that run.

The microstructure of the lettuce stem comprised mainly epidermis and matrix (Figure 3). The epidermis was composed primarily of fibers wrapped tightly around the matrix. The matrix was composed of parenchyma cell walls, arranged in fascicular clusters with cavities; the thin cell walls formed the vascular tissue (Figure 4). The lettuce-stem microstructure (Figures 3 and 4) was similar to that of cabbage roots, as reported by Du et al. [30].
The microstructural characteristics of the fibers and of the parenchyma cell walls and cavities were likely to affect the shearing force. Figures 3 and 4 show that the force to cut the epidermis fibers is relatively high, while the force to cut the matrix is relatively low. The SEM images and the shearing force-displacement curve made it possible to analyze the shearing process at the microscopic level.

Figure 5 shows a typical shearing force-displacement curve, the conditions being a blade distance of 1.5 mm, a sliding cutting angle of 20°, a skew cutting angle of 15°, a shearing angle of 10°, and a cutting-position area of 180 mm². A typical curve (Figure 5a) had two peaks and could be divided into three distinct sections representing 1) the initial stage of shearing (from the start point H to the first peak I); 2) the interim stage of shearing (from the first peak I to the final peak J); and 3) the final stage of shearing (from the final peak J to the failure point K). The typical shearing force-displacement curve (Figure 5), with its two peaks and three distinct sections, was similar to those observed when shearing hemp, sunflower stalks, and corn stalks [7,27,28].

In Figure 5, the shearing force increased from the start point H to the first peak I as the width of dense fibrous epidermis tissue being cut (Figure 3b) reached its maximum. Once the blade had passed point I, both the epidermis and the matrix were being cut, and only the dense fibrous tissues remained on either side of the epidermis. Because the matrix contained many cavities (Figure 4), the shearing force decreased and the curve became smoother. When the blade reached point J (Figure 5b), the width of the dense fibrous epidermis tissue being cut (the cut position is at point J in Figure 5b) reached a maximum again; hence, the shearing force reached the second peak (Figure 5a). Once the blade had passed point J, the force dropped as the amount of epidermis being cut decreased, and shearing was complete when the blade reached point K. The shearing force-displacement curve showed that the position of maximum force was related not only to the blade travel but also to the structural characteristics of the lettuce stem (Figures 3 and 4). The maximum force occurred where the fibrous epidermis tissue was densest, in accord with previous studies [16,27,28,30].

Results of the RSM test

Table 2 gives the results of the RSM test (30 runs). The maximum and minimum shearing stresses among the 30 runs were 3.9321×10^4 Pa (test group 8) and 1.2217×10^4 Pa (test group 24), respectively; they were markedly larger or smaller than those of the other groups. From these results (Table 2), a quadratic polynomial model relating the four factors to the shearing stress was built in Design-Expert, as shown in Equation (2), where τ is the value of the shearing stress, and a, b, c, and d are the values of blade distance (factor A), sliding cutting angle (factor B), skew cutting angle (factor C), and shearing angle (factor D), respectively. The analysis of variance (ANOVA) for the quadratic model (Table 3) showed that the regression model was significant (p < 0.0001) and that its lack of fit was insignificant (p = 0.5678), meaning that the model could be used to find the optimal combination of factors and their order of significance.
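For orientation, the RSM fitting step can be reproduced with generic least squares: fit a full quadratic in the four factors and minimize it within the factor ranges. The sketch below uses random placeholder data in place of Table 2, so its output is illustrative only:

```python
# Minimal sketch: fit a full quadratic response surface in four
# factors and find its minimizer inside the factor bounds.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
lo = np.array([0.5, 0.0, 0.0, 0.0])       # A (mm), B, C, D (degrees)
hi = np.array([2.5, 40.0, 20.0, 20.0])
X = lo + (hi - lo) * rng.random((30, 4))  # stand-in for the CCI design
tau = rng.random(30)                      # stand-in responses

def features(x):
    a, b, c, d = x
    return np.array([1, a, b, c, d, a*b, a*c, a*d, b*c, b*d, c*d,
                     a*a, b*b, c*c, d*d])

coef, *_ = np.linalg.lstsq(np.array([features(x) for x in X]), tau,
                           rcond=None)

model = lambda x: float(features(x) @ coef)
res = minimize(model, x0=(lo + hi) / 2, bounds=list(zip(lo, hi)))
print(res.x, res.fun)   # stress-minimizing combination (for this fit)
```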
The order of significance of the single factors was (i) the sliding cutting angle, (ii) the shearing angle, (iii) the skew cutting angle, and (iv) the blade distance, with p values of <0.0001, 0.0049, 0.0236, and 0.5057, respectively; the sliding cutting angle thus had a remarkable influence on the shearing stress. The order of significance of the double factors was (i) the blade distance and shearing angle, (ii) the sliding cutting angle and skew cutting angle, and (iii) the sliding cutting angle and shearing angle, with p values of 0.0028, 0.0395, and 0.0942, respectively.

Effect of significant factors on shearing stress

The ANOVA results (Table 3) show that the sliding cutting angle (factor B) had the most significant influence on the shearing stress among the single factors, and that the combination of blade distance and shearing angle (factors A and D) was the most notable of the double factors. It was therefore necessary to analyze how these factors affect the shearing stress.

To analyze how the sliding cutting angle affects the shearing stress, runs from Table 2 with the same values of factors A, C, and D but different values of factor B were selected. The results (Figure 6) showed that the shearing stress at a sliding cutting angle of 40° was smaller than that at 0°. Cutting with a sliding cutting angle of 0° is known as hewing, and cutting at any other sliding cutting angle as sliding cutting. The shearing stress associated with sliding cutting was smaller than that associated with hewing; this agrees with the findings of Cheng et al. [9], who reported how the sliding cutting angle of a reciprocating bush cutter affected the cutting force.

The shearing stress under sliding cutting was lower because of the influence of the sliding cutting velocity on the blade edge (Figure 7; see also the numerical sketch following this section). Because of the sliding cutting angle β, the blade velocity V can be decomposed into V_n and V_t, perpendicular and parallel to the blade edge, respectively (Figure 7a). Since V_t = V sin β, V_t increased as the sliding cutting angle increased from 0° to 40°. The effective edge angle of the blade also changed under the influence of the sliding cutting angle (Figure 7b). With the blade velocity in the hewing state defined as V_0, the edge angle of the blade was α = ∠OMQ (Figure 7b). Under the sliding-cutting state, the blade velocity changed to V_1, and α changed to ∠TME (Figure 7b). The analysis showed that ∠TME was smaller than ∠OMQ, meaning that the effective edge angle decreased in the sliding-cutting state and the blade edge became effectively sharper. Therefore, the lettuce-stem shearing stress was lower under sliding cutting.

Figure 7 Blade-velocity resolution (a) and edge-angle variation (b) under sliding cutting

Figure 8 shows how the interaction of factors A and D affected the shearing stress, with the sliding cutting angle and skew cutting angle held at 20° and 10°, respectively. Under this A-D interaction, the shearing angle had a more evident effect on the shearing stress than the blade distance. As the shearing angle increased, the shearing stress first decreased and then increased, reaching its minimum at a shearing angle of around 10°. Therefore, the best installation shearing angle for a lettuce-harvester cutter was around 10° under the above interaction conditions.
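The sharpening effect described above can be made quantitative. The sketch below assumes the standard slide-cutting geometry, in which the effective wedge angle α' seen along the motion direction satisfies tan α' = tan α · cos β; this relation is an assumption introduced here for illustration, not a formula given in the paper, though it is consistent with the observation that ∠TME is smaller than ∠OMQ.

    import math

    # Decompose the blade velocity and estimate the effective edge angle under
    # sliding cutting. tan(alpha_eff) = tan(alpha) * cos(beta) is a standard
    # slide-cutting geometry assumption, not a formula from the paper.
    def slide_cutting(v, beta_deg, alpha_deg):
        beta = math.radians(beta_deg)
        v_n = v * math.cos(beta)   # component perpendicular to the blade edge
        v_t = v * math.sin(beta)   # component parallel to the blade edge
        alpha_eff = math.degrees(math.atan(math.tan(math.radians(alpha_deg)) * math.cos(beta)))
        return v_n, v_t, alpha_eff

    # At beta = 40 deg, a 60 deg edge behaves like a ~53 deg edge:
    print(slide_cutting(1.0, 40.0, 60.0))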
The effect of the shearing angle on the shearing stress was consistent with the finding of Du et al. [30] that a downward inclining mode is one way to decrease the force required to cut cabbage roots.

Optimal factor combination and test verification

To obtain the optimal combination of factors minimizing the shearing stress, the optimization function of the Design-Expert software was applied. Several optimal combinations were found; the one with the lowest shearing stress (0.8026×10^4 Pa) had a blade distance of 1.66 mm, a sliding cutting angle of 39.88°, a skew cutting angle of 12.99°, and a shearing angle of 11.15°. A verification test of this software-based result was conducted and repeated ten times (Table 4). The average shearing stress was 1.1852×10^4 Pa, with a standard deviation of 73 Pa and a coefficient of variation of 0.62%. The factor values solved by Design-Expert were specified to a precision that could not be reproduced exactly in the physical test, so the verification result differed slightly from the software prediction.

The average shearing stress of 1.1852×10^4 Pa in the present study is less than that reported by Gao et al. [29] (cutting force 17.4 N, diameter range 8.0-16.5 mm), who optimized the factors of a lettuce harvester without considering the influence of the sliding cutting angle. The RSM run with the maximum shearing stress (3.9321×10^4 Pa), obtained with a blade distance of 0.5 mm and a sliding cutting angle, skew cutting angle, and shearing angle all of 0° (No. 8 in Table 2), was chosen as the control group. The shearing stress of the optimal combination was 69.9% lower than that of the control group, demonstrating a clear optimization effect. Decreasing the shearing stress provides a basis for designing a smaller cutting device, which could in turn ensure that the blade can cut the short stems of hydroponic lettuce. The optimal combination of factors presented herein could act as a reference for designing a miniaturized lettuce-cutting device.

Conclusions

A shear fixture was designed to provide a means of regulating the blade distance, sliding cutting angle, skew cutting angle, and shearing angle, and RSM was employed to optimize the cutter parameters. The typical shearing force-displacement curve has two peaks, and the maximum shearing force appears where the cut width of the dense fibrous epidermis tissue is greatest. The single-factor order of significance was (i) sliding cutting angle (factor B), (ii) shearing angle (factor D), (iii) skew cutting angle (factor C), and (iv) blade distance (factor A); the sliding cutting angle had a highly significant influence on the shearing stress. The double-factor order of significance was (i) blade distance and shearing angle (A-D), (ii) sliding cutting angle and skew cutting angle (B-C), and (iii) sliding cutting angle and shearing angle (B-D). The optimal combination of factors was a blade distance of 1.66 mm, a sliding cutting angle of 39.88°, a shearing angle of 11.15°, and a skew cutting angle of 12.99°. It gave a minimum shearing stress of 1.1852×10^4 Pa, 69.9% lower than the maximum shearing stress.
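The two summary statistics quoted above follow directly from the reported values; a minimal check, using only numbers reported in this study:

    # Verify the quoted reduction and coefficient of variation.
    tau_max, tau_opt, sd = 3.9321e4, 1.1852e4, 73.0
    print(f"reduction vs. control: {100.0 * (1.0 - tau_opt / tau_max):.1f}%")  # -> 69.9%
    print(f"coefficient of variation: {100.0 * sd / tau_opt:.2f}%")           # -> 0.62%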
STRATEGIC ASPECTS OF COMMUNICATION FOR CHANGING SOCIAL HABITS IN WASTE SORTING

The article analyzes the theoretical principles and practical experience of planning communication with society to change its behavior toward reducing waste generation and promoting recycling. Research objective: analysis of a case study examining the communication of a Latvian regional utility service provider with its customers, and development of proposals for the company's communication strategy to change the waste sorting habits of the community. Within the framework of the research, waste management and communication experts were interviewed and the previous experience of SIA "Kuldīgas komunālie pakalpojumi" was analyzed. Media content analysis (2017-2019), a customer survey, and focus group interviews were conducted. The analysis of the results revealed that SIA "Kuldīgas komunālie pakalpojumi" lacks a unified strategic approach in its communication with customers. The company's communication strategy for 2021-2023 was developed, and a follow-up analysis is planned after its implementation. The article concludes with recommendations for developing a business communication strategy to change the waste sorting habits of society.

INTRODUCTION

How to increase the collection of recyclable waste so that it does not end up in landfills is a topical question all over the world. With population growth, migration to cities, economic development, and changing packaging-use tendencies, the amount of waste is rising: in the EU in 2016 it was 483 kg per citizen, and in Latvia 410 kg per citizen (Latvijas Zaļais Punkts, 2019b). Optimal waste management reduces waste production, mitigates the socio-environmental problems linked with waste, and boosts the production of energy and useful materials. According to a study by the European Commission, the member states with the least developed waste management are Bulgaria, the Czech Republic, Greece, Estonia, Italy, Cyprus, Lithuania, Latvia, Malta, Poland, Romania, and Slovakia (VARAM, 2013: 5).

Research on behavior change in this field often builds on the theory of planned behavior (Ajzen, 1991; 2012). One study (Russell et al., 2017) found that participants with a greater sense of control and broader regulatory support to reduce waste had stronger intentions to engage in such behavior. It was also found (Klöckner, 2013) that interventions planned to change individuals' behavior should not only include attitude-change campaigns but also focus on reducing behavior, strengthening social support, and increasing self-efficacy by providing specific information on how to proceed. Personal moral duty, perceived behavioral control, and subjective norm have a positive effect on young people's intentions regarding waste sorting (Shen et al., 2019), while attitudes and concerns regarding the environment do not show such an effect. Chinese researchers found that satisfaction with waste sorting is associated with indicators of engagement, enthusiasm, social interaction, and active participation; however, the importance of this commitment varies considerably from one region of the country to another, and there are also gender differences in these indicators. Differences in age, educational background, and monthly income are also related to differences in sorting behavior. A study conducted in Denmark (Nainggolan et al., 2019) shows the diversity of household choices related to household waste sorting and their relation to socio-demographic indicators.
There are also differences in the distribution of self-reported time for waste sorting and treatment and in the use of recycling facilities. It was found (Chen & Gao, 2020) that subsidies are an important factor influencing the waste sorting behavior of municipal residents. Interestingly, young individuals and people with low monthly income were found to have higher awareness of sorting behavior than others. A study of college students (Hao et al., 2020) identified the following factors influencing waste sorting behavior: convenience of waste sorting facilities, willingness to sort waste, knowledge of the related field, attitude towards waste sorting, peer pressure, and the existence of a reward and penalty system. The same study (Hao et al., 2020) found that although mandatory waste sorting measures have been introduced and college students have a basic knowledge of waste sorting, they have difficulty categorizing some secondary raw materials (glass, hazardous waste, light bulbs, etc.).

RESEARCH-BASED COMMUNICATION

To stimulate the introduction of new habits and ensure that new activities are maintained, a set of different communication methods and activities should be used. When developing a waste management policy (Chen & Gao, 2020), it should be taken into consideration that the intensity of communication and learning among the municipal population influences their decisions on waste sorting. One study (Czajkowski et al., 2019) found that communicating a descriptive social norm is positively related to changes in individuals' waste sorting behavior. "One of the most effective ways to motivate people is through social impact. There are two categories of social impacts. The first concerns information. The second is related to pressure from others." (Thaler & Sunstein, 2009: 59). When creating educational campaigns, work with different audiences should differ, because habits are formed more firmly in a middle-aged person than in a primary school student (Dispenza, 2015). Communication with young people under the age of 18 is particularly important for influencing the attitudes and behavior of older generations towards waste management (Kozel et al., 2019). The group preference system (including family choice and organizational and social preference) plays a significant regulatory role in waste sorting behavior. Similarly, in a study on changing people's habits to promote cycling (Wunsch et al., 2016), the authors found that social dynamics (motivating others or being motivated by others) strongly influenced participants, indicating that emotional aspects (team spirit, fun) have greater potential than more rational factors such as health or the environment.

According to the communication model developed by DEFRA (WRAP, 2013), four elements are necessary for change to take place in the behavior of individuals: opportunity, involvement, encouragement, and example. Opportunity makes waste sorting easier: people need help to make choices, so education, skills, and quality information need to be provided. Involvement means giving effective signals and choosing the most appropriate methods to promote waste sorting; DEFRA points out that involvement means getting people to take personal responsibility for what they do. Example means that the company demonstrates in-house recycling, reuse, and waste prevention schemes: employees' stories of how they sort, recycle, and compost waste are published, and local businesses and communities show their commitment to sorting waste.
In addition, ensuring a consistent policy is important (WRAP, 2013: 115-127). The Berlin communication concept on organic waste collection (Bund-Berlin, 2020) states that citizens need to be informed individually, specifically, actively, and purposefully. It also emphasizes that information campaigns and public relations alone are not enough to ensure a stable attitude towards waste sorting and to achieve behavioral change in the population, and that communication should not consist of warnings delivered with a raised index finger; instead, real action algorithms must be provided. Based on the analysis of the research results, the research objective was set: analysis of a case study examining the communication of a Latvian regional utility service provider with customers, and development of proposals for the company's communication strategy to change the waste sorting habits of the community.

METHODS

To obtain broader insight into the problem, interviews were conducted with waste management specialists, and focus group interviews were conducted with the most active clients on the existing problems and possible solutions. To obtain an objective overview of publications of the last three years (2017-2019), a content analysis of publications on waste sorting issues was performed on the municipal informative publication "Kuldīgas Novada Vēstis" and the local newspaper "Kurzemnieks". In February 2020, a survey of KKP customers was conducted: 2,603 electronic questionnaires were distributed, and 781 respondents answered.

SELECTION

As the waste sorting opportunities KKP provides to citizens in the city and in the countryside differ substantially, it is important that the respondents represent the opinions of citizens from both the city (48.2%) and the 13 parishes of Kuldīga municipality. The respondents mainly live in private houses (86%), with 14% in apartment buildings. Most of the respondents (67%) were women. The questionnaire was mostly answered by respondents with higher education (55.9%), somewhat fewer with secondary education (40.9%), and a minority with basic education (3.2%). People aged 21-30 make up only 6.5% of respondents; to determine the involvement and opinions of this part of the audience regarding waste recycling, another survey targeted specifically at this audience could be conducted.

Table 1 shows that, of all the waste collected in Kuldīga municipality in 2019, only 11% was submitted for recycling. As mentioned previously, Directive 2018/851 of the European Parliament and of the Council obliges member states to recycle up to 55% of the waste produced in households and companies. In Kuldīga municipality, waste management is provided by the municipal utility company SIA "Kuldīgas komunālie pakalpojumi". There are 23,383 people living in Kuldīga region and a total of 67 waste sorting places; thus, one sorting point has been established for an average of 349 people, which is twice as dense as required by the regulations of the Cabinet of Ministers of the Republic of Latvia.

RESULTS

Residents can also deliver sorted waste (plastic, paper, metal, glass) to the sorted-waste reception area free of charge. For several years, KKP has been supplying 240-liter containers free of charge to private-house residents in Kuldīga for separating recyclable waste from municipal waste.
Since the second half of 2018, when KKP provided customers with separate containers for glass in private homes, the volume of collected glass has increased: 90.87 tons of glass were collected in 2018, compared with 235.88 tons in 2019. Binding regulations of Kuldīga Municipality Council No. 2011/23 stipulate that collection of a 240-liter municipal waste container costs 4.16 euros, while collection of sorted waste costs 2.21 euros. A "Waste Sorting Instruction" has been developed in Kuldīga municipality and repeatedly delivered to all residents. In addition, information is regularly published in the Kuldīga Municipality newsletter "Kuldīgas Novada Vēstis" (hereinafter KNV), on the municipal website www.kuldiga.lv and its Facebook profile, in the local newspaper "Kurzemnieks", and on the KKP website www.kkp.lv and its Facebook profile. In April 2019, an educational campaign "Let's put our efforts together, but waste separately" was organized for the first time, during which more than 1,200 children and young people from general education schools and kindergartens in Kuldīga region were educated about waste sorting on site at the utility company. KKP was one of the first in Latvia to respond to the campaign of the World Wide Fund for Nature and the Nature Protection Board, "Going into nature, take back what you bring!", launched in 2018, in which these institutions called on every organization that manages natural objects and organizes events in nature to place information signs instead of waste bins.

WASTE SORTING HABITS OF KULDĪGA RESIDENTS

More than half of the KKP respondents (63.3%) answered convincingly that they participate in waste sorting. On the positive side, 12% of respondents who do not currently sort waste still plan to do so. Summarizing the views of KKP clients on what hinders waste sorting, three groups of factors were distinguished (see Table 2): organizational factors, factors related to lack of information and communication, and factors related to habits. Information on the organizational factors identified during the study (see Table 2) and the citizens' suggestions on how to address them (see Table 3) was passed on to the responsible KKP specialists. One of the most important factors behind non-sorting, indicated by 50.6% of respondents (mostly residents of rural areas), is that waste sorting containers are not available at their place of residence. KKP plans to increase the amount of sorted waste collected, which will be communicated to the population.

KKP's communication with the public is considered understandable by 71% of respondents. Most residents indicated that the information is sufficient and understandable, emphasizing that "whoever wants to, understands". However, 24% of clients indicated that they have not looked into the issue at all; for them, it is necessary to find the most appropriate communication tools that would draw their attention to waste sorting and motivate them. The residents who noted that the information was insufficient and incomprehensible (5%) indicated that information needs to be repeated constantly, that there is no general clarity about sorting the plastic packaging of various foodstuffs, and that more detailed explanations are needed on which polyethylene products are suitable for sorting, etc.
Most of the population is aware that a household can save financial resources by sorting waste, but 37% of respondents do not yet know this. Communicating additional information on financial savings (or losses) can encourage citizens to become more involved in waste sorting. In the future, when creating newsletter content, attention should be paid to the fact that only 17% of the population know that batteries and accumulators, electrical appliances, and light bulbs can also be disposed of free of charge at KKP, Dārzniecības iela 9, Kuldīga. Particular attention should be paid to the myths and fake news mentioned by respondents: the lack of knowledge about what happens to waste after collection, the supposed cost of washing glass containers before handover, the belief that waste sorting is useless, etc.

The citizens' suggestions for promoting sorting (Table 3) included: making bags of different colors and handing them out to residents so that they can sort waste already in the apartment; informing the public; improving the containers (it is difficult to throw waste through the small openings, and waste needs to be flattened or crushed first); placing even more containers closer to households; introducing a deposit system; stipulating mandatory waste sorting by law; and increasing tariffs for unsorted waste.

Several respondents indicated that they have a "Waste Sorting Instruction" at home that is easy to follow when sorting waste. As this is a communication tool that is easy to use and provides information in a clear way, it should be re-sent periodically in paper format. Although almost half of the residents of Kuldīga municipality still do not sort waste, 71% of respondents indicate that the information is sufficient, understandable, and accessible. Some of these respondents commented: "If there is a desire to sort waste, a great deal of information can be found. The problem is the desire or unwillingness to do so", "No need for additional information, everything is already clear", "I do not need additional information, everything is clear", "We are aware of the importance, there is no need to spend resources on information campaigns". The opinion of these respondents should be taken into consideration when compiling the newsletter, indicating separately which news items are of a general nature and which are new and topical and deserve special attention. The vast majority indicate that the most accessible information for them is on the internet, including the KKP website www.kkp.lv and its Facebook profile. However, 24% of respondents obtain information from other channels: together with the monthly invoice for waste collection, from personal e-mails addressed to the customer, from posters outdoors, from information stickers on containers, from radio and television, and from newspapers. The reasons given by the population can help to improve the service, by revising the frequency of container collection at publicly available sorted-waste collection points and by creating the option to order waste removal electronically.
Explanatory information is needed on several points: what determines the density of containers; which materials can be sorted; why a more diverse range of waste is not accepted for sorting; why containers with hatches are chosen instead of hinged lids; that glass containers do not have to be washed but simply emptied; that every resident, including residents of apartment buildings, has the right to dispose of sorted waste at any publicly available collection point for sorted waste; and that the sorted waste collected by KKP is not taken to a landfill for disposal but handed over for recycling. The analysis of KKP's previous communication with customers and of customers' waste sorting habits showed that the work so far has been done in a campaign-like manner, with no systematic media monitoring, content analysis, or measurement of the level of involvement. The communication strategy developed for KKP will optimize the existing communication, making it planned, targeted, and measurable.

DISCUSSION

The KKP communication strategy for convincing the public of the importance of waste sorting has been developed for the next three years, from 2021 to 2023, based on the strategy structure proposed by O. Kazaka (Kazaka, 2019): description of the field; goals; tasks; description of the target audience; positioning; communication directions; communication plan; and criteria for evaluating the effectiveness of communication. KKP is a capital company of Kuldīga municipality which, as a good-governance company, genuinely cares that the residents of Kuldīga city and the 13 parishes of the municipality live in a clean, tidy, and well-managed environment. In addition, Kuldīga, with its unique old town in the ancient valley of the river Venta, a UNESCO World Heritage Site, has an unwritten obligation to take care of the environment. Being on the UNESCO World Heritage List gives the place a quality mark; it enables the city to attract additional resources in the fields of education, science, and culture, stimulates the preservation and protection of the historical center of Kuldīga, attracts tourists, and promotes high-quality development of the city and the well-being of the population. Moreover, KKP, as a municipal capital company, has access to extensive and diverse information channels and resources for communication with the residents of the municipality, allowing it to reach the public as widely as possible to promote waste sorting.

KKP's mission is to be a team of experts, always one step ahead, providing versatile and innovative services to every customer; its vision, in line with the company's positioning, is to be "Citizens' Partner No. 1 in improving the environment and everyday life, and in service innovation". The company's values are professional employees, quality services, a masterly attitude, and educated customers. The aim of KKP's corporate communication is to persuade the public to sort waste voluntarily and gladly, rather than throwing everything together for disposal in landfills. The goal of the communication strategy is, by improving the availability of waste sorting infrastructure and carrying out planned public information and education, to transfer 20% of the total amount of collected waste for recycling by the end of 2021, 30% by 2022, and 40% by 2023.
The main segments of the target audience are: KKP clients who already participate in waste management; senior-group pupils of Kuldīga municipality pre-school education facilities, who should develop an interest in waste sorting at an early age; students of Kuldīga municipality general education schools; various company teams; and seniors. Several tasks have been set. As the greatest audience coverage is achieved by mixing communication types, several communication channels must be used. Information on waste sorting should be distributed on the internet (including the KKP website www.kkp.lv and Facebook profile), together with the monthly invoice for waste removal, in personal e-mails, on radio and television, in newspapers, and on information stands, and lectures should be organized for on-site training. Social media should be used to the fullest, as it enables two-way communication: the opportunity to have a dialogue with customers. In cooperation with the newspaper "Kurzemnieks", thematic pages on waste sorting are to be created regularly.

Communication direction: promotion of waste sorting among potential and existing customers. It is planned to organize waste sorting training for all KKP employees and to involve those employees in organizing various educational events and campaigns; to attract well-known people who share their experience of waste sorting; and to regularly publish the views of opinion leaders who can influence the decisions of the target audience: educators, children and young people, doctors, environmentalists, animal friends, religious leaders, the municipal leadership, and athletes. Good examples of how people sort waste need to be communicated, such as people who collect paper and hand it in during paper-waste collection campaigns, dispose of batteries in special containers in supermarkets, return plastic bottles through the deposit system in Lithuania, or compost bio-waste in their backyard gardens. Educational events should be organized for schools, kindergartens, and business teams, with particular attention to the younger generation, who are responsive to waste sorting and pass on the acquired knowledge to their families, relatives, and friends.

To evaluate the effectiveness of communication, various activities are planned: a customer survey (which shows what is happening), focus group discussions (which show why it happens and how the current situation can be influenced), analysis of publications (using qualitative and quantitative content analysis), and evaluation of the effectiveness of communication on Facebook (level of coverage, size of the audience, tonality, content, and level of involvement: "likes", comments, shares, and other activities related to the organization's profile). Media monitoring, content analysis, and the level of involvement will be measured every week, while the overall picture of waste sorting habits and views on the importance of sorting will be measured once a year or as needed for a specific campaign or event. Considering the set communication goals, target audience, tasks, positioning, and communication direction, a KKP communication plan for 2021-2023 has been developed to convince the public of the need for waste sorting. For each activity, the implementation time, theme and content summary, target audience, communication channels and tools to be used, and feedback are defined.
The communication action plan does not include day-to-day tasks such as preparing press releases, creating information schedules, and producing other illustrative materials. The persons responsible for implementing the action plan have been determined. In developing KKP's communication strategy, great care was taken to make it flexible: it is important not only to implement the developed strategy and plan, carry out the planned measures on time, involve appropriate people, and use appropriate communication channels and tools for each target group, but also to sense the situation in society, the market, and the world. For example, due to the state of emergency declared in the country, the implementation of educational activities unfortunately had to be postponed; during that time, schools in the region were instead invited to submit entries to a drawing competition on their families' contributions to waste sorting. Implementation of the developed KKP communication strategy will optimize current communication, making it planned, targeted, and measurable.

Studies (Hindawi, 2018; Adomavičiūtė et al., 2012; etc.) mention the following as the main obstacles to waste recycling: 1) government plan and budget: insufficient special government regulation and budget for municipal solid waste management; 2) insufficient education of households: households are unaware of the importance of recycling; 3) technology: lack of efficient recycling technologies; 4) management costs: the high cost of manual waste classification. In Latvia, the main reason given for not sorting waste is the incomplete waste sorting infrastructure, but the role of strategic and purposeful communication in this process is not questioned.

Solving the organizational problems identified during the research is largely hindered by uncertainty about national-level policies. Only on January 22, 2021, did the Cabinet of Ministers adopt the National Waste Management Plan for 2021-2028, which envisages expanding the system of separate waste collection, developing the institutional system of waste management, creating stronger waste management regions, and implementing the principles of the circular economy to significantly increase waste recycling and reduce the amount of waste going to landfill (Minister Plešs: the national waste management plan will ensure the development of the sector, 2021). The plan supports the reform of municipal waste management regions proposed by the MEPRD and will move from ten waste management regions to five (Cabinet Order No. 45, 22.01.2021). The KKP communication strategy for 2021-2023, developed in the course of the study described above, will be reviewed in light of the official information received regarding the National Waste Management Plan for 2021-2028.

The communication strategy of the Kuldīga municipal utility aimed at changing public habits in waste sorting is based on the conclusions of the situation analysis and envisages specific, purposeful activities for each identified target audience segment. However, it should be noted that the KKP budget does not allow the development of special video materials tailored to the respective groups with the involvement of people popular not only in Kuldīga region but throughout Latvia, who could address the respective target groups more effectively.
Neither the budget nor the available resources allow the development of mobile applications or games (for smartphones) specifically targeted at each target group, the effectiveness of which has been demonstrated by recent studies (Zhao et al., 2016; Kozel et al., 2019; Hughes & Boothroyd, 2020; etc.). It is important that such activities be developed at the state level, based on the state-wide waste management policy of the Ministry of Environmental Protection and Regional Development. The developed KKP strategy requires a survey of the population on their waste management habits at least once a year. Based on research results (Chen & Gao, 2020; Shen et al., 2019; Abduh et al., 2018; etc.), it is preferable to also include survey questions about the psychological characteristics of the respondents, which would allow more purposeful segmentation of client groups and the development of more precise methods of communication with them, including targeted training methods. Special attention should be paid to the age group up to 18 years, because according to research results (Kozel et al., 2019) it is this group that most strongly influences the attitudes and behavior of older generations in the field of waste management.

The influence of social groups must be taken into account when developing a communication strategy for changing public habits in waste sorting. "If an individual cares about another's thoughts about himself (perhaps based on the misconception that others are paying attention to what he/she is doing), the individual could follow the crowd to avoid anger or to gain favor" (Thaler & Sunstein, 2009: 59). By purposefully creating an environment in which positive emotions can be gained together while performing socially desirable activities (waste sorting), the motivation of individuals can be increased. Studies (Wunsch et al., 2016) show that emotional aspects (team spirit, fun) have more potential than more rational factors such as health or environmental considerations.

In conclusion, we would like to emphasize that waste sorting is a complex problem that must be addressed globally, across Europe, at the national and regional levels, and at the municipal and individual levels. It is important to solve not only communication and education issues but also organizational problems. British communication specialists (WRAP, 2013: 7) also recommend assessing the impact on the company's resources and capacity, to determine whether they will be sufficient for successful communication: whether the company's employees will be able to collect the additional material, whether there will be enough containers and vehicles to collect it, whether employees will be able to answer residents' questions politely, whether customer service specialists will be ready to answer additional questions, and whether all the necessary information will be available on the website.
Mathematical Analysis of Cytokine-Induced Differentiation of Granulocyte-Monocyte Progenitor Cells

Granulocyte-monocyte progenitor (GMP) cells play a vital role in the immune system by maturing into a variety of white blood cells, including neutrophils and macrophages, depending on exposure to cytokines such as various types of colony stimulating factors (CSFs). Granulocyte-CSF (G-CSF) induces granulopoiesis and macrophage-CSF (M-CSF) induces monopoiesis, while granulocyte/macrophage-CSF (GM-CSF) favors monocytic and granulocytic differentiation at low and high concentrations, respectively. Although these differentiation pathways are well documented, the mechanisms behind the diverse behavioral responses of GMP cells to CSFs are not well understood. In this paper, we propose a mechanism of interacting CSF receptors and transcription factors that controls GMP differentiation, convert the mechanism into a set of differential equations, and explore the properties of this mathematical model using dynamical systems theory. Our model reproduces numerous experimental observations of GMP cell differentiation in response to varying dosages of G-CSF, M-CSF, and GM-CSF. In particular, we are able to reproduce the concentration-dependent behavior of GM-CSF-induced differentiation and propose a mechanism driving this behavior. In addition, we explore the differentiation of a fourth phenotype, monocytic myeloid-derived suppressor cells (M-MDSCs), showing how they might fit into the classical pathways of GMP differentiation and how progenitor cells can be primed for M-MDSC differentiation. Finally, we use the model to make novel predictions that can be explored by future experimental studies.

INTRODUCTION

Hematopoietic stem cells differentiate into blood cells (neutrophils, monocytes, red blood cells, etc.) in a finely regulated process called hematopoiesis. In this branching process, each branch point represents a cell differentiating into one of two alternative lineages. Stimulatory factors, such as cytokines, induce differentiation into one lineage over another, and cross-antagonistic transcription factors maintain commitment to the chosen lineage (1, 2). In the myeloid branch of hematopoiesis, granulocyte-monocyte progenitor (GMP) cells differentiate into essential cells of the innate immune system, including granulocytes (neutrophils, eosinophils, and basophils) and monocytes (which further differentiate into macrophages and dendritic cells), depending on the local concentrations of specific colony stimulating factors (CSFs) (3, 4). Proper orchestration of GMP differentiation is therefore of vital significance to human health. For instance, myeloid cells are often targeted with CSFs to treat a variety of diseases including arthritis, infections, pneumonia, cancer, type 1 diabetes, and neutropenia (5-7). A better understanding of the biological responses of myeloid cells to these stimuli will be useful for refining and developing new therapeutic strategies.

Despite the vital roles that cells of the GMP lineage play in the body, much is still unknown about the dynamics of their differentiation. Laslo et al. suggested that PU.1 and C/EBPα stimulate the cross-antagonistic transcription factors Egr-2 and Gfi-1 to maintain monocytic and granulocytic commitment, respectively (15). This cross-antagonistic relationship, which is thought to be critical to gene regulation within the myeloid lineage, was modeled by Laslo et al.
with a simple, symmetrical interaction motif that exhibits lineage commitment of monocytes and granulocytes in response to external signals. However, the simple motif they propose cannot explain more complex behavior, such as GMP responses to low and high doses of GM-CSF. It is also not well understood how GMP cells respond to varying concentrations and combinations of cytokines, nor how GMP cells differentiate into myeloid-derived suppressor cells (MDSCs), immature myeloid cells that exhibit both granulocytic and monocytic traits (18-20). MDSCs have anti-inflammatory properties and serve a beneficial role in a variety of pathological conditions (21, 22); nonetheless, they are more often associated with promotion of cancer growth. It is well documented that MDSCs promote angiogenesis and metastasis, and many studies suggest that suppression of these cells may be a promising clinical target in cancer therapy (18, 23-28). While originally lumped into one heterogeneous group, MDSCs have been reclassified into two separate types: polymorphonuclear (PMN)-MDSCs and monocytic (M)-MDSCs (18, 23, 29). Distinguishing between these subsets is crucial, as they have different mechanisms of immunosuppression, respond to different cytokines, and are more closely associated with different tissues and cancers (23, 30, 31). While PMN-MDSCs typically exist at higher population densities than M-MDSCs, M-MDSCs are more potent suppressors of inflammation on a per-cell basis (30, 32). Of the two subsets, we focus on M-MDSCs, as our model does not include the downstream transcription factors necessary to distinguish between PMN-MDSCs and other cells of the granulocyte lineage.

In this paper, we propose a new model of the internal regulatory network that governs GMP cell differentiation and of how various cytokine signals feed into this regulatory network. We convert our network diagram into a set of nonlinear ordinary differential equations (ODEs) and study their properties by dynamical systems theory. We first explore the polarization of GMP cells resulting from G-CSF and M-CSF signals. Next, we explore the dynamics of the system in response to GM-CSF and propose a mechanism driving the complex behavior observed in GM-CSF experiments. We also explore how M-MDSCs may fit into this differentiation scheme, including the stability of the state and the nature of the phenotype itself. Finally, we evaluate the system's response to cytokine combinations and provide insight into the spectrum of behaviors induced by signaling crosstalk.

MATERIALS AND METHODS

The Proposed Regulatory Network and its Molecular Basis

PU.1 and C/EBPα are thought to be master regulators of myelopoiesis, as C/EBPα favors granulopoiesis and PU.1 favors monopoiesis (33, 34). In this subsection we summarize the experimental evidence characterizing the interactions of PU.1, C/EBPα, and their closely interacting partners, in order to motivate the regulatory network (Figure 2A) that we use to understand the differentiation of GMP cells. First, we note that the roles of C/EBPα and C/EBPβ appear to be redundant in hematopoiesis: when the C/EBPβ gene was knocked into the C/EBPα locus, no significant changes in hematopoiesis or gene expression were ascertained. Since these proteins have highly conserved C-terminal dimerization- and DNA-binding domains, it is reasonable to assume that they bind to the same promoter sites (35).
It is possible, indeed probable, that these proteins differ in regulation at the transcriptional level; however, it has been demonstrated that GM-CSF and G-CSF upregulate both C/EBPα and C/EBPβ (36, 37). Furthermore, both C/EBPα and C/EBPβ exhibit positive autoregulation (38, 39). Due to this overlap of structure and function, we lump C/EBPα and C/EBPβ into one node/variable, called C/EBP. Unless otherwise specified, "C/EBP" refers to the combination of C/EBPα and C/EBPβ rather than the entire family of C/EBP transcription factors.

The interactions between PU.1 and C/EBP are intriguing, as they have both antagonistic and synergetic relationships (Figure 2A). It has been demonstrated that the promoter of the SPI1 gene (encoding the PU.1 protein) has multiple potential C/EBP binding sites, and that C/EBPα can induce PU.1 expression by binding directly to the promoter to activate transcription (40, 41). Alternatively, C/EBPα can inhibit PU.1 indirectly by upregulating the Gfi1 gene (42). Gfi-1, in turn, physically binds to PU.1 to inhibit its activity as a transcription factor (43). These effects are amplified, since PU.1 auto-activates its own promoter site (44). Furthermore, Gfi-1 binds directly to numerous PU.1 target genes to repress PU.1's transcriptional activities (43); we suspect that this process could further inhibit SPI1 transcription, given possible positive feedback loops between PU.1 and its downstream targets. In addition, PU.1 antagonizes C/EBP either directly or through activation of IRF8, which creates a mutual-inhibition circuit between PU.1 and C/EBP (45, 46). IRF8 physically interacts with C/EBPα to prevent it from binding to chromatin and promoting transcription of target genes (47). While, to the best of our knowledge, no studies have demonstrated that C/EBPβ is inhibited by IRF8, it has been demonstrated that IRF8 binds to and inhibits C/EBPε, suggesting that it may act similarly on C/EBPβ (47). Furthermore, it has been shown that IRF8 knockdown induces C/EBPβ expression in dendritic cells (48). PU.1 has also been shown to inhibit the transcriptional activity of C/EBPα and C/EBPβ in adipocyte differentiation via direct protein-protein interactions (46). Similar interactions may occur in myelopoiesis, as it has been shown that C/EBPα directly interacts with PU.1 to block PU.1-induced dendritic cell commitment (49). Despite this evidence, we do not model a potential direct interaction between PU.1 and C/EBP, as the regulatory details are not clear in the context of GMP differentiation and a mutual inhibitory relationship is already captured within our motif.

Egr-2, another downstream transcription factor promoted by PU.1, forms a complex with Nab-2 to inhibit Gfi-1. Conversely, Gfi-1 represses Egr-1 and -2, reducing the concentration of the Nab/Egr complex (15, 50). Thus, the Egr/Gfi-1 relationship creates a second layer of antagonism within this myeloid differentiation system. Since Gfi-1 can inhibit Egr expression, but not Nab, we simplify our model by excluding Nab, with the assumption that the concentration of the Egr-Nab complex will be proportional to the concentration of Egr.

Within our model, three receptors (M-CSFR, G-CSFR, and GM-CSFR) transduce cytokine signals to regulate transcription factor activity (Figure 2B).

FIGURE 2 | Regulatory network driving GMP differentiation in response to M-CSF, G-CSF, and GM-CSF. (A) Transcription factor network. (B) Cytokine signaling and regulatory network.
Regulatory motifs are expressed in terms of direct and indirect interactions among proteins, where a line with an arrowhead represents the activation of one protein by another and a line with a circular head represents inhibition. Blue and red ovals denote proteins highly expressed in the monocyte and granulocyte lineages, respectively. GM-CSFR is represented by a purple oval as it can signal for both monopoiesis and granulopoiesis. Cytokines are denoted by rectangles.

These transcription factors, in turn, regulate expression of the receptors, thereby creating positive and negative feedback loops. We model PU.1 as the primary target of M-CSF signaling, since M-CSF induces monocyte differentiation and PU.1 is a master regulator of monopoiesis. Although we do not know of any confirmed pathway, it is known that M-CSF can signal through ERK to activate a transcription factor, Sp1, which can bind to multiple sites on the SPI1 promoter (44, 51); thus, it is plausible that M-CSF induces PU.1 expression through Sp1. PU.1, as well as C/EBPα, C/EBPβ, and Egr-2, binds to the M-CSFR promoter region to activate transcription, creating a positive feedback loop between PU.1 and M-CSFR (1, 13, 50, 52). Gfi-1, however, binds to the promoter to disrupt transcription (53).

Conversion of the Interaction Diagram Into a Mathematical Model

To convert the interaction diagram in Figure 2B into a set of nonlinear ODEs, we use a formalism called "standard component" modeling (65). Each of the eight proteins in Figure 2B (excluding cytokines) is governed by an ODE of the form

dX_i/dt = ρ_i [H(W_i) − X_i],   with W_i = ω_i^o + Σ_j ω_{i,j} X_j.

The (relative) concentration or activity of protein i is denoted by the variable X_i(t), 0 ≤ X_i(t) ≤ 1. The function W_i(X_j) accounts for all interactions within the network that directly affect the rate of change of X_i, such that ω_{i,j} quantifies the direction and strength of the effect that protein j exerts on protein i: negative values represent inhibition, while positive values represent activation. The time scale for the rate of change of X_i(t) is determined by 1/ρ_i, and the value of ω_i^o determines the value of X_i when it is not receiving stimulus from any X_j. One unit of the time variable, t, is roughly 2 h in our simulations. The nonlinear function H(W) = 1/(1 + e^(−σW)) in this ODE is a sigmoidal function of W, with steepness determined by the parameter σ. Many biological phenomena, such as phosphorylation cascades and transcriptional regulation, are characterized by sigmoidal response curves; H(W) captures such behavior in a very convenient way.

As an example, the ODE for C/EBP activity takes the form

d[C/EBP]_T/dt = ρ_TF {H(W_C/EBP) − [C/EBP]_T},

where the full expression for W_C/EBP is given in the Supplementary Material. We use ρ_TF rather than ρ_C/EBP because all transcription factors have the same time scale in our model. Note that this ODE distinguishes between two concentrations of C/EBP: its "total" concentration, [C/EBP]_T, and the concentration of the "free" form of the protein, [C/EBP]_F. C/EBP is considered free when it is not bound to IRF8; therefore [C/EBP]_F represents the active portion of [C/EBP]_T, where

[C/EBP]_T = [C/EBP]_F + [C/EBP:IRF8]

and [C/EBP:IRF8] denotes the concentration of the C/EBP-IRF8 complex. Similarly,

[IRF8]_T = [IRF8]_F + [C/EBP:IRF8].

Since protein-protein binding is governed by the law of mass action, and the timescale for association and dissociation of proteins is likely to be much faster than the other time scales in the model, we assume that, at any given time, the reaction [C/EBP]_F + [IRF8]_F ⇌ [C/EBP:IRF8] is at equilibrium.
Thus, the complex concentration [C/EBP:IRF8] is fixed at every instant by the mass-action equilibrium condition together with the conservation relations above, so the free concentrations [C/EBP]_F and [IRF8]_F can be computed from the total concentrations at any time.

Regarding binding of external cytokines to their membrane-bound receptors, we assume that the cytokine concentration, [L], is constant and much greater than the total concentration of receptors, [R]_T. In this case, the concentration of the receptor:cytokine complex, [R:L], is given by the function

[R:L] = [R]_T [L] / (K_d + [L]),

where K_d is the dissociation constant of the receptor:cytokine complex. The cytokine concentrations are "inputs" to the model; the total receptor concentrations are dynamic variables of the model. Parameter values were hand-tuned so that the behavior of the system in response to cytokines aligns with experimental observations. For a more detailed discussion of parameter tuning, a table of parameter values, and the complete set of equations constituting our mathematical model, see the Supplementary Material.

Computational Methods

All quantitative simulations were computed using the deterministic ODE solver ode45 in MATLAB. To simulate a population of GMP cells, we generate a set of cells with stochastically varying initial conditions, taking the steady-state concentrations of all variables in a naïve GMP cell (with no cytokine stimulation) and varying each initial concentration by a random factor drawn from a normal distribution with mean = 1 and standard deviation = 0.2. Although our model consists of eight nonlinear ODEs, we characterize its behavior in a pseudo-phase plane spanned by only two variables, [PU.1] and [C/EBP]_T (Figure S1). To locate the pseudo-steady state of the remaining variables, their admissible range is scanned over subintervals; any subinterval for which the sign of d[Gfi1]/dt changes is further subdivided into ten sub-subintervals, and the iterative process is repeated until we have a good approximation of the pseudo-steady state (Figure S1).

To construct heat maps, we simulate 500 stochastically generated cells, using the method specified earlier, under each cytokine condition. From the differentiated population's ratios, each pixel was assigned an RGB value in which the red, green, and blue intensities are the fractions of the population that differentiated into granulocyte progenitors (GP), M-MDSCs, and monocytes (MO), respectively, divided by the size of the largest population category (including undifferentiated GMP cells).

For those interested in exploring our model further, we provide two resources for utilizing the model and conducting simulations. The supplementary code provides an ODE file, a stochastic simulation function, and a user-friendly MATLAB script, "MainScript.m", to produce time-course simulations, figures, and stochastic simulations under user-specified conditions. A more extensive resource is provided online at https://github.com/bronsonweston/GMP-Modeling, which includes all algorithms previously mentioned and provides a script, "FigureGenerating.m", to easily reproduce any of our results; this code can also be used as an example for conducting alternative simulations not explored in this study.

ASSUMPTIONS

As with any model, we have made several simplifying assumptions to avoid unnecessary complexity. First, we ignore autocrine feedback loops of the GMP lineage: we maintain constant cytokine concentration(s) in order to evaluate the effects of the stimulus input, rather than accounting for how the cell may change external conditions, and we assume that the cytokine production of an individual cell has a negligible impact on the initial decision-making process of GMP differentiation. Additionally, we assume that all protein isoforms function similarly in the context of our network.
For example, G-CSFR has seven isoforms, four of which are involved in granulopoiesis (66); we assume that [G-CSFR] is the sum of these isoforms, weighted according to the contribution of each to granulopoiesis. The GMP differentiation network has many mechanisms for generating sigmoidal nonlinearities, such as dimerization of receptor subunits and cooperativity of transcription factor binding to DNA promoter sites; we assume that our sigmoidal functions, H(W_i) = 1/(1 + e^(−σW_i)), adequately capture the cumulative nonlinear effects of these molecular mechanisms.

In addition, we assume that all transcription factors function on the same time scale, and that all receptors function on the same, ten-fold slower time scale (ρ_R = ρ_TF/10). It is hard to know precisely what timescales these proteins function on. While transcription factors are often functional immediately after synthesis, receptors must be trafficked to the periphery of the cell, diffuse within the cell membrane, assemble with other subunits, and bind to cytokines before a signal can be transduced back into the cell, after which the signal itself may take some time to reach its downstream target. At the very least, we would expect a significant time delay between production of a receptor and its impact on the expression of downstream genes. For these reasons, we justify using a slower timescale for receptors than for transcription factors.

Finally, we assume that receptor activation does not have a significant negative feedback mechanism. Although it has been observed that the level of a receptor, such as M-CSFR or GM-CSFR, is reduced after stimulation by its own ligand (67), we choose to ignore these feedback loops, as we are interested in the initial stages of cell differentiation, which are dominated by the positive feedback loops included in our model.

A Motif for GMP Cell Differentiation

The primary objective of this paper is to construct and analyze a dynamic model of the differentiation of GMP cells into the monocyte and granulocyte lineages. Before describing the results derived from our model, we compare it briefly to the work of Laslo et al., who proposed a simple, symmetric model of the interactions among C/EBPα, PU.1, Gfi-1, and Egr (15). The purpose of their model was to demonstrate that mutual antagonism between Gfi-1 and Egr can be a mechanism for inducing commitment to the monocytic and granulocytic lineages. While achieving its intended purpose, the model's forced symmetry and its neglect of critical regulatory mechanisms limit its predictive capacity and its ability to explain more complex phenomena of myelopoiesis. We improve upon the Laslo model with new, biologically relevant interactions, including a fifth transcription factor, IRF8, as well as new signaling pathways, CSF receptors, and regulatory mechanisms for these receptors. These additional interactions break the symmetry of Laslo's model but extend the range of behaviors we can model. Rather than modifying the equations of Laslo's model, we derive a new set of equations based on our standard-component modeling approach; justification for these changes can be found in the Methods section.

Our motif for GMP cell differentiation is depicted in Figure 2B. We convert this signaling network into a set of nonlinear ODEs (see Table S1) with parameter values specified in Table S2. Sample simulations for monocyte and granulocyte differentiation are presented in Figure 3.
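To make the formalism concrete, the following is a minimal sketch of the standard-component equations, reduced to a hypothetical two-variable mutual-antagonism caricature of PU.1 and C/EBP (the paper's simulations use MATLAB and ode45; Python is used here only for illustration). The weights, bias terms, and time scale are illustrative stand-ins, not the parameter values of Table S2.

    # Standard-component caricature: dX/dt = rho*(H(W) - X), H(W) = 1/(1+e^(-sigma*W)).
    # Weights and biases below are hypothetical, not the published parameters.
    import numpy as np
    from scipy.integrate import solve_ivp

    sigma = 5.0
    H = lambda W: 1.0 / (1.0 + np.exp(-sigma * W))

    def rhs(t, x, w_pu, w_cebp, rho=1.0):
        pu, cebp = x
        W_pu = w_pu + 2.0 * pu - 3.0 * cebp      # auto-activation minus cross-inhibition
        W_cebp = w_cebp + 2.0 * cebp - 3.0 * pu
        return [rho * (H(W_pu) - pu), rho * (H(W_cebp) - cebp)]

    # An M-CSF-like bias toward PU.1 (w_pu > w_cebp) drives a naive-like
    # initial condition to a (PU.1 high, C/EBP low) "monocyte" state.
    sol = solve_ivp(rhs, [0.0, 50.0], [0.1, 0.1], args=(0.2, -0.5))
    print(sol.y[:, -1])   # final [PU.1], [C/EBP]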
To gain some insight into these simulations, we use the notion of a "phase plane" from dynamical systems theory (68-70). Although our system of eight nonlinear ODEs defines an eight-dimensional phase space, we can reduce it to a two-dimensional phase plane by making pseudo-steady state approximations on six of the eight variables, leaving the concentrations of "master regulators" PU.1 and C/EBP as the fundamental state variables. The method by which we effect this reduction is explained in the section on "Computational Methods." On the phase plane (Figure 4A), we plot nullclines for the state variables PU.1 and C/EBP in the case of no cytokine stimulation. The PU.1 nullcline is the locus of points where d[PU.1]/dt = 0, and the C/EBP nullcline is the locus of points where d[C/EBP]_T/dt = 0. Where these nullclines intersect lie steady states of the full eight-dimensional set of ODEs. With no cytokine stimulation, these nullclines intersect five times to form three stable steady states (nodes) and two saddle points. The stable steady state with low levels of both C/EBP and PU.1 corresponds to a naïve GMP cell, whereas the other two stable steady states correspond to granulocyte and monocyte progenitor cells, depending on whether C/EBP or PU.1 is elevated, respectively. For the case of no cytokine stimulation, the GMP cell will sit indefinitely in the naïve state. It is important to recognize that, in our model, "low" and "high" are relative. GMP cells are not typically described as having low concentrations of PU.1 and C/EBP, since both transcription factors are required for the transition of a common myeloid progenitor into a GMP cell (4,33). In the framework of our model, however, it is appropriate to describe the GMP state as (PU.1^low, C/EBP^low), the granulocyte progenitor state as (PU.1^low, C/EBP^high), and the monocyte state as (PU.1^high, C/EBP^low). It is also important to note that, while PU.1 expression is elevated in neutrophils, PU.1 remains low in early granulocyte progenitors (71).

M-CSF Induces Monopoiesis

We begin our investigation of external signaling by exploring how nullclines shift in response to M-CSF stimulation. Comparing Figures 4A,B, we see that, in response to M-CSF, the PU.1 nullcline moves and the naïve GMP state is lost. Although both the monocyte (PU.1^high, C/EBP^low) and granulocyte progenitor (PU.1^low, C/EBP^high) states remain, the loss of the naïve state drives the stochastically generated cells into the monocyte lineage (Figure 4B), with the switch-like time courses shown in Figure 5C. Although we use [C/EBP]_T and [PU.1] as primary markers of cell type, temporal changes in the other transcription factors (Figure 3A) give a more complete picture of the dynamics of the system. In the early stages of monopoiesis, we see an immediate increase in PU.1, IRF8, and Egr-2. IRF8 binds to C/EBP, resulting in a slight decrease in C/EBP activity, while Egr-2 represses Gfi-1, relieving suppression of PU.1. Furthermore, PU.1 upregulates itself, resulting in the switch-like behavior that is demonstrated in Figure 5C. Receptors such as GM-CSFR and M-CSFR are heavily upregulated while G-CSFR remains at a lower level (Figure 3C).

FIGURE 4 | Nullcline movement due to M-CSF eliminates the GMP state and induces differentiation into the monocyte lineage. Blue and red lines are the PU.1 and C/EBP nullclines, respectively. Black circles and white circles designate stable and unstable steady states, respectively. Cyan asterisks represent stochastically generated initial conditions, while blue dashed lines represent the cellular trajectories of monopoiesis from these initial conditions.
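A nullcline plot of the kind shown in Figure 4A can be generated by contouring the two reduced rate functions over a grid; the sketch below uses toy stand-in equations (mutual antagonism plus autoactivation), since the actual reduced functions come from the pseudo-steady state procedure described above.

```matlab
% Minimal MATLAB sketch of nullcline plotting on the (PU.1, C/EBP) phase
% plane. The right-hand sides are toy stand-ins, not the authors'
% equations, which are given in Table S1 of the paper.
H = @(w) 1 ./ (1 + exp(-5 * w));               % sigmoidal response, sigma = 5
dPU1  = @(p, c) H(2*p - 1.5*c - 0.4) - p;      % toy d[PU.1]/dt
dCEBP = @(p, c) H(2*c - 1.5*p - 0.4) - c;      % toy d[C/EBP]_T/dt

[P, C] = meshgrid(linspace(0, 1.5, 400));      % grid over the phase plane
figure; hold on
contour(P, C, dPU1(P, C),  [0 0], 'b');        % PU.1 nullcline: dPU1 = 0
contour(P, C, dCEBP(P, C), [0 0], 'r');        % C/EBP nullcline: dCEBP = 0
xlabel('[PU.1]'); ylabel('[C/EBP]_T')
legend('PU.1 nullcline', 'C/EBP nullcline')    % intersections = steady states
```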
G-CSF Induces Granulopoiesis

G-CSF stimulation changes the landscape of the (PU.1, C/EBP) phase plane (Figure 6A) more drastically than M-CSF stimulation. Surprisingly, the PU.1 nullcline is more sensitive to changes in G-CSF than the C/EBP nullcline. As a result, there remain five intersection points, but only two are stable (the monocyte and granulocyte progenitor states). The other three steady states are two saddle points and an unstable node. There appear to be two additional intersection points of these nullclines; however, the apparent intersections are an artifact of projecting the nullclines onto the (PU.1, C/EBP) phase plane. By plotting the nullclines in a three-dimensional phase space in Figure 6B, we show that the nullclines intersect only five times. The bifurcation diagram (Figure 6C) is in agreement with our nullclines, and shows that the GMP state disappears at [GCSF] ≈ 0.21, with only two stable steady states remaining (monocyte and granulocyte progenitor) and three unstable steady states. Despite the bistable nature of the system under G-CSF stimulation, GMP cells preferentially differentiate into granulocytes due to the locations of the basins of attraction of the two stable steady states (Figure 6A). While experiments suggest that G-CSF induces granulopoiesis, the dynamical changes during this process of differentiation are not well documented. Our model (Figure 3B) suggests that G-CSF stimulation results in an initial increase of PU.1 expression, due to increased C/EBP activity, before PU.1 is eventually suppressed by Gfi-1. Egr-2 is also suppressed directly by Gfi-1, and IRF8 is suppressed when PU.1 activity decreases. The system reaches steady state as a granulocyte progenitor cell with high expression of C/EBP, Gfi-1 and G-CSFR, as well as moderate expression of GM-CSFR (Figure 3D). Interestingly, when comparing the differentiation time of M-CSF induced monopoiesis and G-CSF induced granulopoiesis (Figures 3A,B), we find that GMP cells commit to the granulocyte progenitor state more quickly than to the monocyte state. This is likely a result of the fact that the auto-activation of C/EBP is stronger in our model than that of PU.1. It is known that it can take approximately 6 days for a monoblast (the earliest stage of monopoiesis) to mature into a monocyte, while it takes a GMP cell 1.5-2 days to mature into a promyelocyte (72,73). As the transcription factor expression levels of our granulocyte progenitor state are similar to those of the promyelocyte state, we find that these temporal ratios are consistent with our simulations (71). However, we must note that, while these times are consistent with the literature, our model suggests that the differentiation time is concentration dependent (Figure S2).

Low Concentrations of GM-CSF Favor Monopoiesis While Higher Concentrations Favor Granulopoiesis

An important question we wish to address in this paper is: what possible mechanism can explain the concentration-dependent behavior of GM-CSF induced differentiation? GM-CSF signals upregulate C/EBP, which in turn promotes PU.1 and Gfi-1 transcription. However, Gfi-1 and PU.1 are mutually antagonistic, and PU.1 suppresses C/EBP activity via IRF8 (Figure 2B). Thus, C/EBP can inhibit PU.1 through Gfi-1, or suppress itself and Gfi-1 via activation of PU.1.
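A brute-force version of such a one-parameter bifurcation scan can be sketched as follows; this is an illustration using a toy scalar system, not the authors' continuation analysis, and it reveals only the stable branches.

```matlab
% Minimal MATLAB sketch of a brute-force one-parameter scan in the spirit
% of Figure 6C: for each G-CSF level, integrate to steady state from many
% random initial conditions and record where the trajectories settle.
% rhs is a toy scalar stand-in, not the authors' model.
rhs = @(x, g) 1 ./ (1 + exp(-8*(x + g - 1))) - 0.8*x;   % toy scalar dynamics
gVals = linspace(0, 0.5, 51);
figure; hold on
for g = gVals
    for x0 = rand(1, 20)*1.5                      % 20 random starts per g
        [~, x] = ode45(@(t, x) rhs(x, g), [0 200], x0);
        plot(g, x(end), 'k.')                     % stable states pile up as dots
    end
end
xlabel('[GCSF] (toy units)'); ylabel('steady-state concentration')
```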
We propose that this combination of positive and negative interactions that C/EBP has with PU.1, together with the asymmetrical nature of the system, manifests itself in the concentration-dependent outcomes of GM-CSF induced GMP differentiation. At low levels of GM-CSF (Figures 7A,C), both C/EBP and PU.1 rise swiftly. PU.1's positive autoregulation drives it to increase faster than Gfi-1, promoting IRF8 and Egr-2 production in the process. IRF8 binds to and suppresses C/EBP, preventing C/EBP-induced expression of Gfi-1, while Egr-2 directly suppresses Gfi-1. Eventually, Gfi-1 is irreversibly suppressed and PU.1 is dominant. The resulting phenotype resembles that of the monocyte lineage. Thus our model agrees with experimental observations that low concentrations of GM-CSF encourage monopoiesis (9). When we compare M-CSF and GM-CSF induced monopoiesis (Figures 3A, 7A), we find that the final expression patterns are very similar; however, the evolution of transcription factor expression is different. Notably, during GM-CSF induced monopoiesis, C/EBP levels and Gfi-1 levels rise substantially prior to being suppressed, while C/EBP and Gfi-1 remain low in M-CSF induced monopoiesis. Our model also suggests that GM-CSF induced monopoiesis is more rapid than M-CSF induced monopoiesis (Figure S2). At higher concentrations of GM-CSF (Figures 7B,D), C/EBP increases more rapidly due to a combination of stronger GM-CSF stimulation, suppression of IRF8, and C/EBP positive autoregulation. The rapid increase in C/EBP results in acceleration of Gfi-1 production. While PU.1 expression is also enhanced, PU.1 relies heavily on its own capacity to autoactivate. Therefore, when C/EBP is increased, there is a delay before PU.1 reaches its maximum production rate; however, Gfi-1 reaches its maximum production rate immediately. Thus, Gfi-1 is more responsive than PU.1 to a change in C/EBP. Furthermore, Gfi-1 directly suppresses PU.1 and Egr-2, while PU.1 must upregulate Egr-2 to inhibit Gfi-1. If Gfi-1 increases faster than PU.1, it halts PU.1-induced Egr-2 expression and establishes dominance over PU.1. In this way, our model predicts that high concentrations of GM-CSF will induce granulopoiesis over monopoiesis, a result which is consistent with experimental observations (9). We find that, even though the differentiation times of GM-CSF and G-CSF induced granulopoiesis are very similar (Figure S2), during GM-CSF induced granulopoiesis the PU.1 and IRF8 levels spike considerably higher than during G-CSF induced granulopoiesis (Figures 3B, 7B). This is likely due to greater Gfi-1 activity during earlier stages of G-CSF induced granulopoiesis. Intriguingly, our model suggests that GM-CSFR expression decreases slightly after granulocytic commitment, and remains at lower levels than in the monocytic lineage. Experimental evidence shows that, indeed, GM-CSFR expression is higher in monocytes than in granulocytes, despite the fact that higher concentrations of GM-CSF favor granulopoiesis over monopoiesis (9,74,75). To explore why, we examine the incoming signal strength of GM-CSF over time with high and low GM-CSF concentrations (Figure 7E). We find that the incoming GM-CSF signal is stronger in the short term under high-dose conditions (granulopoiesis); however, the signal strength begins to decrease after ∼24 time units due to reduced GM-CSFR expression.
In contrast, at lower doses of GM-CSF (monopoiesis), the signal strength remains low until ∼24 time units, when it increases substantially in a hyperbolic fashion to levels much higher than in granulopoiesis. We propose that the sudden increase in signal is due to a switch-like mechanism, resulting from the positive feedback loop involving GM-CSFR, C/EBP and PU.1. As a result of this mechanism, we observe that the lower the GM-CSF concentration, the longer it takes for the switch to kick in. We conclude that, at low GM-CSF concentrations, the delay in the switch event permits PU.1 to establish dominance over Gfi-1 and C/EBP, and commit to the monocyte lineage. The signal strength of GM-CSF is half-maximal at ∼30 time units after stimulation. At this point in monopoiesis (Figure 7A), Gfi-1 is subdued, C/EBP is on a steep decline, and monocytic transcription factors are highly expressed. Thus, by the time the GM-CSF signal is strong, the cell is already committed to the monocyte lineage. Similarly, with higher levels of GM-CSF, the cell has decisively committed to the granulocytic lineage at the point of maximum signal strength (∼24 time units in Figure 7B). Our results suggest that in both monopoiesis and granulopoiesis the GM-CSFR signaling capacity changes significantly after the cell has already committed to one lineage over another. If this is true, then the high concentration of GM-CSFR in monocytes relative to granulocytes must serve a function other than lineage commitment. One possibility is that GM-CSFR signaling, or lack thereof, is crucial for regulating proteins not accounted for by this model. Alternatively, GM-CSF signaling may function to upregulate C/EBP in the monocytic lineage, since it is necessary for AP-1 to bind with C/EBP to promote monocytic genes (76). It is also possible that downregulation of GM-CSFR is crucial for proper granulocyte development, as C/EBPα is downregulated in later stages of granulopoiesis (77). While future experimental studies may clarify these issues, our model does lead us to an additional conclusion that we will discuss in the subsequent section.

In summary, we find that higher concentrations of GM-CSF result in a higher initial signal to stimulate granulopoiesis; however, the signal decreases and levels off after the cell has committed to the granulocyte lineage. Lower concentrations of GM-CSF initially have lower signal strengths to initialize monopoiesis; however, GM-CSFR is upregulated to high levels after monocytic commitment, resulting in a greater GM-CSF signal strength in the monocytic lineage.

GM-CSF Induces M-MDSC Differentiation

If low levels of GM-CSF induce monopoiesis and high levels induce granulopoiesis, what happens when we try something in the middle? Remarkably, our model predicts that moderate exposure to GM-CSF can induce GMP differentiation into a hybrid state: PU.1^high, C/EBP^high (Figures 8A,B). Moreover, we find that the dynamics of this process are strikingly similar to GM-CSF-induced monopoiesis. While C/EBP and PU.1 both rise swiftly early in the process, there is a lag in GM-CSFR expression, allowing PU.1 to establish dominance over C/EBP and Gfi-1. Thus, the cell begins to resemble the monocytic phenotype. However, when GM-CSFR approaches maximum expression, the signal becomes strong enough to induce a switch in C/EBP behavior, resulting in high C/EBP expression. Furthermore, a large fraction of C/EBP binds with IRF8, restricting its capacity to activate granulocytic genes.
As a result of this and high levels of Egr-2, Gfi-1 remains repressed. The outcome is a new hybrid state (PU.1^high, C/EBP^high). Naturally, the question arises: is there a myeloid cell that fits this description? Indeed, M-MDSCs fit this profile, as these monocytic cells presumably express high levels of PU.1 and are known to highly express C/EBPβ (78). Furthermore, M-MDSCs highly express IRF8 relative to granulocytes, and are likely to express high levels of Egr-2 and low levels of Gfi-1, as these are mutually antagonistic master regulators of the monocytic and granulocytic lineages, respectively (15,61). Because the hybrid state fits the expected expression profile of M-MDSCs and displays behavioral characteristics observed in M-MDSC experiments (as discussed below), we propose that this hybrid state is representative of M-MDSCs and refer to this state as the M-MDSC state for the remainder of the paper. We have described three distinct expression profiles that result from different GM-CSF concentrations, but it is still unclear which phenotypes are favored over the entire range of GM-CSF concentrations. To evaluate this "favorability spectrum," we simulated 10,000 stochastically generated cells under different GM-CSF conditions (Table 1). The results confirm that the monocytic state is heavily favored at lower concentrations of GM-CSF. However, the population ratio shifts toward granulocytes as the dose of GM-CSF is increased. We also observe a distinct dichotomy in the expression of monocytes and M-MDSCs, suggesting that GM-CSF induces some kind of toggle switch. To explore these effects further, we computed one-parameter bifurcation diagrams with respect to GM-CSF concentration (Figure 8C). Indeed, we find that a toggle switch (saddle-node bifurcation) does occur from the monocyte state to the hybrid state when [GMCSF] ≈ 0.86. This suggests that the monocyte state is unstable at high GM-CSF concentrations, while the M-MDSC state is dependent on significant cytokine stimulation.

TABLE 1 | We simulated the differentiation of 10,000 stochastically generated cells at increasing GM-CSF concentrations over a time period of 150 time units (∼300 h). We classified the final state as "naïve GMP," "granulocyte progenitor," "monocyte" or "M-MDSC." The results show that monocytes are heavily favored at low concentrations of GM-CSF, while granulocytes are favored at high concentrations. Monocyte differentiation yields to the M-MDSC phenotype at higher GM-CSF concentrations.

To better understand the dynamics of cell differentiation at varying GM-CSF concentrations, it is helpful to consider the phase planes and cell trajectories in Figure 9. We find that the PU.1 nullcline does not respond to GM-CSF; however, the C/EBP nullcline moves in such a way that the granulocyte state remains fixed in position and the monocyte state shifts substantially. In agreement with the bifurcation diagrams, the nullclines show that, as [GMCSF] increases, the monocyte state moves toward higher concentrations of C/EBP. Furthermore, with this changing nullcline landscape, the basins of attraction alter, resulting in a shift of favorability toward the granulocyte progenitor state. The representative cell trajectories (dashed lines in Figure 9) are good indicators of how the nullcline shifts affect cell differentiation. At [GMCSF] ≈ 0.86, the C/EBP nullcline lifts away from the PU.1 nullcline, so that the monocyte state disappears and the M-MDSC state is revealed.
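The classification step behind Table 1 can be sketched as thresholding the final master-regulator levels; the threshold value and the random end states below are our own placeholders, since the paper does not give numerical classification criteria.

```matlab
% Minimal MATLAB sketch of classifying simulated end states into the four
% categories of Table 1 using thresholds on the two master regulators.
% The threshold (0.5) and the random matrix X are illustrative only.
X   = rand(10000, 2);                    % columns: final [PU.1], [C/EBP]_T
hiP = X(:,1) > 0.5;                      % "high" PU.1 calls
hiC = X(:,2) > 0.5;                      % "high" C/EBP calls

labels = strings(size(X, 1), 1);
labels(~hiP & ~hiC) = "naive GMP";
labels(~hiP &  hiC) = "granulocyte progenitor";
labels( hiP & ~hiC) = "monocyte";
labels( hiP &  hiC) = "M-MDSC";          % hybrid PU.1-high / C/EBP-high state

[phenotype, ~, idx] = unique(labels);    % tally the population per phenotype
counts = accumarray(idx, 1);
table(phenotype, counts)
```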
Figure 9 shows that M-MDSC differentiation follows a trajectory similar to that of GM-CSF induced monopoiesis, as we observed before when comparing Figures 7A, 8A. The pattern of monocyte differentiation is particularly interesting. The differentiating cells move in an arching fashion, first toward states of high PU.1 and C/EBP and then toward states of high PU.1 and low C/EBP; overshooting the monocyte steady state, they make a second turn-around, involving increasing concentration of C/EBP, as they approach the stable monocyte steady state. This pattern is seen in Figures 7A, 8A as well, where the C/EBP concentration rises (the arching phase), plummets (the passing phase) and then begins to rise again (the second-turn phase). Differentiation dynamics of the M-MDSC phenotype are quite similar, the critical difference being that the final steady state has much larger concentrations of C/EBP than is typical of monocyte cells. These results suggest that the stability of the monocyte and M-MDSC states is dependent on the extracellular GM-CSF concentration. Thus, a monocyte exposed subsequently to higher levels of GM-CSF should transition into the M-MDSC state. This result is consistent with experiments that suggest tumor-conditioned media can convert monocytes into M-MDSCs and that GM-CSF can induce M-MDSC differentiation from myeloid progenitors (31,79,80). Similarly, the model suggests that M-MDSCs that are removed from GM-CSF stimulus should be destabilized. Figure S3 explicitly shows how these transitions can occur. We find that the ability of GM-CSF to convert monocytes into M-MDSCs is partially due to high expression of GM-CSFR within the monocyte lineage, as the signal strength must be sufficiently high to induce this transformation. This suggests one possible biological motivation for monocytes to express such high levels of the receptor, as this monocytic plasticity may be useful in a variety of pathological conditions.

Combined Treatment With G-CSF and M-CSF Results in a Heterogeneous Population of Granulocytes and Monocytes

If G-CSF and M-CSF promote granulopoiesis and monopoiesis, respectively, what happens when we expose a cell to both simultaneously? A heat map of M-CSF and G-CSF stimulation (Figure 10A) suggests that M-CSF may inhibit granulopoiesis at lower concentrations of G-CSF. However, when both cytokines are introduced at higher levels, our model suggests that the resulting population will be a heterogeneous mix of both granulocytes and monocytes, a result in agreement with experimental observations (8). Surprisingly, the model suggests that GMP cells stimulated by both G-CSF and M-CSF never differentiate into M-MDSCs. Phase plane analysis suggests that the M-MDSC state does not exist under such conditions (Figure S4).

G-CSF Can Push Cells Toward Monopoiesis in Low Signaling Conditions

An intriguing phenomenon occurs when G-CSF is paired with low doses of M-CSF. Figure S5 (an alternative view of the lower-left corner of Figure 10A) shows that G-CSF can induce monopoiesis at M-CSF concentrations too weak to stimulate differentiation alone. Although several cells differentiate into granulocytes under these conditions, the fact remains that a larger percentage of GMP cells differentiate into monocytes than if G-CSF were absent. G-CSF has a similar effect when paired with GM-CSF. Figure S6 (an expanded view of the lower-left corner of Figure 10B) shows that low concentrations of G-CSF can actually lower the GM-CSF dose required to induce monopoiesis.
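The monocyte-to-M-MDSC conversion of Figure S3 amounts to a mid-simulation step change in GM-CSF, which can be sketched with two chained ode45 calls; the toy one-variable bistable switch below stands in for the full eight-variable model.

```matlab
% Minimal MATLAB sketch, in the spirit of Figure S3, of a cytokine step
% change applied mid-simulation: equilibrate at low GM-CSF, then raise it.
% The scalar right-hand side is a toy bistable switch, not the authors'
% model (Table S1); parameter values are made up.
rhs = @(x, g) 1 ./ (1 + exp(-8*(x + g - 1))) - 0.8*x;   % toy C/EBP-like dynamics

x0 = 0.1;                                                % low-C/EBP (monocyte-like) start
[t1, x1] = ode45(@(t, x) rhs(x, 0.3), [0 75],   x0);       % phase 1: low GM-CSF
[t2, x2] = ode45(@(t, x) rhs(x, 0.9), [75 150], x1(end));  % phase 2: high GM-CSF

plot([t1; t2], [x1; x2]); xlabel('time'); ylabel('toy [C/EBP]_T')
% The jump to the high branch after t = 75 mimics the monocyte-to-M-MDSC
% transition; rerunning phase 2 with g = 0 (cytokine withdrawal) sends the
% toy cell back to the low branch, mimicking destabilization of the state.
```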
When [GCSF] = 0, significant monocytic development is not triggered until [GMCSF] > 0.35; however, with [GCSF] = 0.05, the required minimal dose of GM-CSF decreases to 0.2. When the concentration of G-CSF is increased further, however, it pushes the system toward granulopoiesis. Ultimately, these results suggest that, for cells that are primed for the monocyte lineage but do not have quite enough stimulus to initiate the process, G-CSF can provide the small push that is necessary to initiate monopoiesis. However, if G-CSF is introduced at higher concentrations, it will induce granulopoiesis at the expense of monopoiesis.

G-CSF Can Inhibit or Encourage M-MDSC Development When Paired With M-CSF and GM-CSF

Having evaluated the effects of G-CSF coupled with M-CSF and GM-CSF individually, we naturally progress to evaluate G-CSF effects when paired with equal signals from GM-CSF and M-CSF (GM/M-CSF). As one might expect, our model predicts that, when paired with low levels of GM/M-CSF, G-CSF can induce GMP cells to favor granulopoiesis over monopoiesis (Figure 10C). However, at higher levels of GM/M-CSF that still favor the monocyte phenotype (the interval [0.65, 0.9] in Figure 10C), G-CSF can push the system in favor of M-MDSC development. In fact, the closer the GM/M-CSF signal is to the M-MDSC switch threshold (≈0.9), the less G-CSF is required to induce the M-MDSC phenotype. The capacity for G-CSF to induce GMPs to differentiate into M-MDSCs suggests that already differentiated monocytes in similar conditions (with GM-CSF) can also be pushed into the M-MDSC state by G-CSF. These results are intriguing, as G-CSF is typically associated with its effects on granulopoiesis and PMN-MDSCs, rather than M-MDSCs (31,81). Furthermore, we find that, in conditions that favor M-MDSCs, additional G-CSF can push GMP cells in favor of granulopoiesis. Therefore, our model predicts that G-CSF can either promote or inhibit M-MDSC development, depending on the extracellular conditions.

M-CSF Can Induce M-MDSC Differentiation When Mixed With GM-CSF

It has been well documented that both M-CSF and GM-CSF can contribute to M-MDSC development (82,83). Since our model indicates that M-CSF alone cannot induce the M-MDSC phenotype, we test to see what effect it has when coupled with GM-CSF (Figure 10D). We find that a GM-CSF primed system is hyper-sensitive to M-CSF, as even very low doses of M-CSF can arrest granulopoiesis to favor M-MDSCs. (For detailed effects on granulopoiesis, see Figure S7.) This suggests that the effects of M-CSF are minimally concentration-dependent. We suspect that this extreme sensitivity is unrealistic for real-life conditions, as the sensitive behavior would likely be washed out by other disturbances, such as other cytokines in vivo or in growth serums. Regardless, this calculation suggests that M-CSF paired with GM-CSF makes for a much stronger inducer of the M-MDSC phenotype than either cytokine alone. Finally, we evaluate GMP behavior when M-CSF is paired with equal concentrations of G-CSF and GM-CSF (GM/G-CSF) (Figure 10E). We find that, under conditions that would otherwise encourage granulopoiesis, M-CSF can induce both monocytes and M-MDSCs. In contrast, when M-CSF was exclusively paired with G-CSF, M-MDSCs are not produced (Figure 10A). Furthermore, Figure 10E is at odds with Figure 10D, where the effects of M-CSF with GM-CSF alone are not concentration dependent. However, as M-CSF is increased in a GM/G-CSF system, the proportion of M-MDSCs increases in a concentration-dependent manner.
We suggest that the situation in Figure 10E is more realistic for experimental and biological settings than Figure 10D, as the concentration-dependent behavior is likely more robust to biological disturbances.

DISCUSSION

In consideration of the crucial roles played by cells of the GMP lineage in human health and disease, we have proposed a molecular regulatory network for the differentiation of GMP cells (Figure 2B), based on known facts about the underlying molecular controls of this aspect of hematopoiesis. From our proposed network we have constructed a dynamical model of GMP cell lineage commitment (see Supplementary Material for a complete specification of the mathematical model), and we have used numerical simulations and bifurcation analysis to reveal the dynamical properties of this control network.

Concentration-Dependent Effects of GM-CSF Signaling on GMP Differentiation

Investigating the concentration-dependent response of GMP cells to GM-CSF, we uncovered three main features of the response. First: the dual regulatory effects of C/EBP on PU.1; C/EBP induces PU.1 directly by promoter stimulation and inhibits PU.1 indirectly through stimulation of Gfi-1 (40)(41)(42)(43). The balance of these effects is dependent on the concentration of GM-CSF. At high concentrations of GM-CSF, C/EBP increases quickly, resulting in a swift rise in Gfi-1 and repression of PU.1, thereby inducing granulopoiesis. The result, that C/EBP is an antagonist of PU.1 in granulopoiesis, is in agreement with Wang et al. (84), who showed that induction of an isoform of C/EBPα downregulates the SPI1 gene (encoding PU.1) to promote granulopoiesis. Alternatively, our model predicts that C/EBP has a positive impact on PU.1 in GM-CSF induced monopoiesis, as a result of a slower increase in C/EBP, allowing PU.1 enough time to upregulate itself and establish dominance. Second: PU.1's indirect antagonism of C/EBP and Gfi-1 is essential to commit the cell toward monopoiesis. Third: GM-CSFR signaling forms positive feedback loops with PU.1 and C/EBP. When stimulated, GM-CSFR transmits a signal to C/EBP to increase both C/EBP and PU.1 expression. These proteins, in turn, upregulate GM-CSFR, resulting in a stronger GM-CSF signal, which results in even greater stimulation of C/EBP. These feedback loops create a sensitive, switch-like response of gene expression to GM-CSFR stimulation. We find that the lower the GM-CSF concentration, the longer it takes for the switch to kick in. In GM-CSF-induced granulopoiesis, the switch kicks in early, to allow sufficient upregulation of C/EBP and Gfi-1. In GM-CSF-induced monopoiesis, the switch is delayed, to allow PU.1 to upregulate itself and repress C/EBP and Gfi-1. In this way, we propose that these three dynamic features of the control system work synergistically to produce the unique behaviors associated with GM-CSF-induced differentiation.

Differences Among CSF-Induced Differentiation Processes

Given that our model successfully captures the endpoints of GMP differentiation induced by G-, M-, and GM-CSF, we propose that our model can also offer significant insights into the different temporal patterns of protein concentrations during the differentiation processes. For example, GM-CSF induced monopoiesis exhibits a significant spike in C/EBP and Gfi-1 concentrations in its early stages, followed by suppression of both proteins, whereas M-CSF induced monopoiesis does not exhibit such a spike.
It is possible that these differences could influence downstream transcription factors not accounted for in our model, perhaps resulting in different subtypes of monocytes. (Alternatively, these incongruences may be short-lived, making no difference to the final phenotype.) Nonetheless, our model predicts that the final concentration of C/EBP in monocytes is dependent on the signaling strength of GM-CSF (see the MO branch in Figure 8C). Since the subtype of the monocyte may well depend upon its level of expression of C/EBP, the concentration of GM-CSF in the micro-environment of a differentiated monocyte may have immediate implications for the phenotype of the cell. Intriguingly, it has been shown that GM-CSF induced monocyte-derived macrophages are distinctly different in genetic expression from M-CSF induced monocyte-derived macrophages (85,86). Perhaps GM-CSF's influence on C/EBP concentration in this lineage plays some role in the differences observed in these macrophages. In addition, our model suggests that monopoiesis may be induced more quickly by GM-CSF than by M-CSF. If true, GM-CSF may be better suited for emergency monopoiesis than M-CSF. Similarly, we find that GM-CSF induced granulopoiesis exhibits a larger spike in PU.1 and IRF8 concentrations in its early stages than G-CSF induced granulopoiesis. Although these differences are not as dramatic as the differences between M-CSF and GM-CSF induced monopoiesis, we cannot dismiss the possibility that these differences may affect downstream transcription factors and prime the cells for different subtypes of granulocytes. For example, it has been shown that GM-CSF has a higher propensity for inducing eosinophils than G-CSF (8,87).

GM-CSFR EXPRESSION PATTERNS OF MYELOID CELLS

An unexpected finding of our model, which agrees with experimental data, is that cells of the granulocyte lineage express lower concentrations of GM-CSFR than monocytes (74,75). This is counter-intuitive, as granulocytic differentiation is favored over monocytic differentiation at higher concentrations of GM-CSF (9). Our model suggests that the signal strength of GM-CSFR is stronger in the initial commitment step of GM-CSF-induced granulopoiesis when compared to monopoiesis. However, after the lineage fate is fixed, the concentration of GM-CSFR continues to increase in monopoiesis, but decreases slightly in granulopoiesis (Figure 7E). We suspect that these conditions may be crucial for cellular maturation. It is possible that lower levels of GM-CSFR are required to prevent excessive stimulation of C/EBPα, as C/EBPα is downregulated in later stages of granulopoiesis (77). It is also possible that high GM-CSFR expression is important in later stages of monopoiesis, to stimulate C/EBP. It is known that C/EBP not only stimulates PU.1, but forms a complex with AP-1 in monocytes to activate monocytic genes rather than granulocyte genes (40,76). Thus, the capacity to receive a strong GM-CSF signal may be important for gene regulation within the monocytic lineage. In agreement with this hypothesis, our results suggest that expression of C/EBP in monocytes increases as the GM-CSF concentration increases.

Dynamics of the Monocytic Myeloid-Derived Suppressor Cell

Our model predicts that the differentiation dynamics of M-MDSCs are very similar to typical monopoiesis; however, once the cell has committed to the monocytic lineage, there is a substantial upregulation of C/EBP.
Since M-MDSCs have been shown to express high concentrations of C/EBPβ, but not C/EBPα, we suspect that some mechanism not captured by our model selectively suppresses C/EBPα (78). We get by without this mechanism, as the functions of C/EBPβ and C/EBPα are redundant in hematopoiesis (35). Our model suggests that a significant fraction of C/EBP in M-MDSCs (and monocytes) is bound to IRF8, suggesting that its impact on granulocytic genes is diminished in these cells. The model suggests that, just as in monocytes, PU.1, IRF8, Egr-2, M-CSFR, and GM-CSFR are all expressed at high levels in M-MDSCs, while G-CSFR is expressed at a level somewhere between that of a monocyte and that of a granulocyte progenitor. Thus, G-CSFR is potentially a usable marker to distinguish between monocytes and M-MDSCs. Of course, if the variance of G-CSFR expression is large in monocytes or M-MDSCs, G-CSFR will not be an effective marker. Regardless, this suggests that G-CSF may have a stronger influence on M-MDSCs than on monocytes. Intriguingly, our model suggests that high GM-CSF concentrations can induce a monocyte to morph into an M-MDSC. This behavior is a consequence of high expression of GM-CSFR in monocytes. Additionally, our results suggest that the stability of this M-MDSC state is dependent on GM-CSF stimulation. Thus, if an M-MDSC is removed from cytokine stimulation, the phenotype of the cell will change. These results agree with the literature, as monocytes can be programmed into M-MDSCs in tumor microenvironments, and terminally differentiate into macrophages and dendritic cells when removed from stimulatory conditions (31,32). However, as our model is not designed to simulate terminal differentiation into macrophages and dendritic cells, it predicts that M-MDSCs will revert back into monocytes when GM-CSF is withdrawn. We hypothesize that M-MDSCs can be destabilized via the mechanism of our model, but rather than reverting back to monocytes, will terminally differentiate due to other variables not accounted for in our model.

CSF Synergies and Crosstalks

We find that G-CSF may play a more dynamic role in GMP differentiation than has been previously proposed. G-CSF is typically thought of as an inducer of the granulocyte lineage, but our model suggests that GMP cells likely exhibit an entire spectrum of differentiation behaviors in response to G-CSF due to signaling crosstalk. We find that, at concentrations of M-CSF not quite sufficient to induce monopoiesis, small concentrations of G-CSF can provide the nudge necessary to initiate monocytic differentiation. We see a similar phenomenon when G-CSF is introduced with GM-CSF: if a cell is primed for monopoiesis, a small concentration of G-CSF may provide the stimulus needed to induce monopoiesis. However, when G-CSF is increased to higher concentrations, monopoiesis is arrested in favor of granulopoiesis. The model also predicts that G-CSF can induce M-MDSC development under the right conditions. Our simulations suggest that normal monopoiesis, in response to simultaneous stimulation by a combination of moderate levels of M- and GM-CSFs, can be skewed in favor of M-MDSCs if paired with G-CSF (see Figure 10C). This also suggests that G-CSF can induce monocytes in such conditions to differentiate into M-MDSCs. On the other hand, under conditions that normally favor M-MDSC development, higher G-CSF concentrations will push differentiation in favor of granulopoiesis.
Therefore, our model suggests that G-CSF can either promote or inhibit M-MDSC differentiation, depending on extracellular conditions. These predictions should be tested in a laboratory environment, as the implications are far-reaching. It is possible that low levels of G-CSF may be utilized in vivo to aid monopoiesis and M-MDSC development. Contrary to the dynamic role of G-CSF, our model suggests that M-CSF plays an exclusively antagonistic role in granulopoiesis. We predict that, under conditions of low G-CSF concentration, M-CSF can interfere with granulopoiesis to arrest GMP differentiation. We also find that M-CSF may drive M-MDSC differentiation under conditions that would normally favor granulopoiesis, depending on the relative concentrations of GM-CSF and G-CSF. Furthermore, our model suggests that pairing high concentrations of M-CSF and GM-CSF may be a potent inducer of M-MDSCs.

CSFs as Clinical Targets

Cumulatively, our results suggest that M-CSF, GM-CSF, and G-CSF can all favor M-MDSC development, depending on extracellular conditions. We suspect that high concentrations of M-CSF and GM-CSF, as well as lower concentrations of G-CSF, may be present in some biological environments that support M-MDSC development, such as a tumor micro-environment. Indeed, several tumors associated with MDSCs have been reported to express M-CSF, GM-CSF, and/or G-CSF (18). We propose that a model such as ours can be used to explore the effects of tumor-specific conditions on hematopoiesis. For instance, our model suggests that G-CSF may contribute to M-MDSC differentiation under some, but not all, conditions that are otherwise favorable to monocyte differentiation. Thus, inhibiting G-CSF may be a successful strategy to destabilize the M-MDSC state in a tumor micro-environment where G-CSF is expressed alongside M-CSF and GM-CSF. However, while G-CSF's role in M-MDSC development is more context dependent, our results suggest that M-CSF and especially GM-CSF signaling act as much stronger inducers of M-MDSCs. Interestingly, while GM-CSF may induce M-MDSCs independent of other CSFs, the model suggests that G-CSF and M-CSF are reliant on GM-CSF to induce the M-MDSC state. Therefore, we propose that, among the CSFs, GM-CSF is the most promising therapy target for M-MDSC-associated tumors, while M-CSF may be an excellent alternative. In agreement with our results, knockdown of tumor-released GM-CSF in mice significantly reduced M-MDSC populations and resulted in increased anti-tumor immunity (79). In another study, inhibiting M-CSFR signaling suppressed M-MDSC populations, while making no difference to the PMN-MDSC population. Furthermore, when paired with the VEGFR-2 antibody, blocking M-CSFR signaling resulted in a significant reduction in tumor angiogenesis (25). In both instances, the ratio of PMN-MDSCs to M-MDSCs increased, suggesting that these effects are due, in part, to altered differentiation rather than proliferation. Alternatively, since MDSCs may be useful in a variety of pathological conditions, such as sepsis and burns (22,88), an effective therapeutic strategy may be to upregulate M-MDSCs by administering a combination of GM-CSF and M-CSF (see Figures 10C,D). We acknowledge that in vivo other cytokines that are similar to GM-CSF (such as IL-3) may play comparable roles in M-MDSC differentiation (89). Thus, M-CSF and G-CSF may still influence M-MDSC differentiation under conditions where GM-CSF is absent, increasing their value as therapeutic targets.
Network Topology

Ultimately, the behavior of the model is a consequence of the network topology, i.e., multiple feedback and feedforward loops in the reaction mechanism (Figure 2B) and the relative strengths of these interactions (e.g., the ω_i,j's in our mathematical model). For example, direct positive feedback loops of C/EBP and PU.1 are crucial for switch-like behavior and are required for the stability of the granulocyte and monocyte phenotypes, respectively. Additional positive feedback loops exist within the mutually antagonistic architecture of the network. As C/EBP can antagonize PU.1 through Gfi-1, PU.1 forms two positive feedback loops by inhibiting C/EBP through IRF8 and by inhibiting Gfi-1 through Egr-2. These positive feedback loops are crucial to the stability of the monocytic phenotype. In contrast, C/EBP forms a positive feedback loop by activating Gfi-1, which in turn prevents PU.1 from upregulating IRF8 and inhibiting C/EBP. Thus C/EBP can exist at high concentrations by suppressing PU.1 through this positive feedback loop. Furthermore, Gfi-1 has a positive feedback mechanism by inhibiting PU.1 and Egr-2, which in turn would inhibit Gfi-1. These feedback loops are critical to the irreversibility of the granulocyte phenotype. More positive feedback mechanisms exist between receptors and transcription factors. For example, activated M-CSFR stimulates PU.1 and PU.1 stimulates the expression of M-CSFR. These types of positive feedback loops make the system more responsive to cytokine stimuli. Finally, C/EBP forms a negative feedback loop by activating PU.1, which in turn activates IRF8. This negative feedback loop is necessary for GM-CSF induced monopoiesis, as C/EBP must be suppressed, but only after it has activated PU.1. Feedforward loops also make significant contributions to the behavior of the system. For instance, GM-CSFR activates C/EBP both directly and indirectly (by inhibiting IRF8), thus forming a coherent feedforward loop. The inhibition of IRF8 by GM-CSF is important for GM-CSF induced differentiation of GMPs into M-MDSCs (analysis not shown). Another example is the incoherent feedforward loop by which C/EBP activates PU.1 directly and inhibits PU.1 indirectly through Gfi-1. This incoherent feedforward loop is crucial to the concentration-dependent nature of GM-CSF induced differentiation, as we discussed previously.

Limitations of the Model

While our model makes several intriguing predictions, we acknowledge that the model neglects many genes and proteins that play important roles in hematopoiesis. Therefore, in interpreting our model's results, we must be aware of its limitations. We designed the model specifically to capture the initial decision-making stages of GMP differentiation, rather than the terminal stages of granulopoiesis and monopoiesis. We propose that transcription factors downstream of our network will play large roles in the maturation of granulocyte progenitor and monocyte cells, but only subtly affect the initial dynamics of lineage commitment. Additionally, our model is limited to qualitative predictions. Although experiments often report quantitative measurements, it is impossible to compare these quantitative experimental results with our simulations for a variety of reasons. First of all, our calculations are made in dimensionless units, and the "real life" equivalent of 1 unit of M-CSF is not necessarily equivalent to 1 unit of G-CSF.
Secondly, laboratory experiments typically utilize cytokine-enriched serum, with undefined serum components apparently necessary for cell survival and growth. These serum components are not accounted for in our model and may drastically impact how cells differentiate (87). Furthermore, cell-to-cell signaling, unaccounted for in our model, may impact differentiation dynamics in experimental cultures. Finally, and perhaps the most important limitation of all, our model does not consider the impact of cytokine signaling on cellular responses such as proliferation and apoptosis. These responses may drastically impact the ratios of differentiated cells in experimental cultures. For example, while granulopoiesis may be favored under some conditions, rapid monocyte proliferation may skew experimental results in favor of a larger monocyte fraction.

Summary

We have presented a novel model of GMP cell differentiation and explored the molecular control system's dynamics to provide insight into experimental observations and to make new predictions. We investigated the concentration-dependent nature of GM-CSF-induced differentiation, and proposed a mechanism that can explain its mysterious behavior. We explored the dynamics of CSF signaling crosstalk and found that, while G-CSF may encourage monopoiesis under some conditions, it is likely that M-CSF always has an inhibitory effect on granulopoiesis. Furthermore, our model demonstrates how both GMP cells and monocytes may differentiate into M-MDSCs, providing new insight into how this bizarre phenotype fits into classical GMP cell differentiation. We found that G-CSF, M-CSF, and GM-CSF may all favor M-MDSC development under different conditions. Moreover, we proposed that, among the CSFs, GM-CSF is the most potent inducer of this phenotype. As for any "model" of a cellular control system, our model has limitations and potential sources of inaccuracy. For example, our model is not suitable for making quantitative predictions or capturing terminal states of GMP differentiation. Nonetheless, we are confident that our results have utility, as the dynamic processes captured by our model align with numerous experimental observations. Therefore, we welcome experimental evaluation of any of the qualitative predictions we have made.

AUTHOR CONTRIBUTIONS

JT, BW, and LL were all involved in the project's conceptual development. BW designed and ran all computational simulations. BW developed all figures and wrote the first draft of the manuscript. All authors contributed to revising this draft into final form.

FUNDING

This work was partly funded through NIH grant HL115835 to LL.
Fenofibrate suppresses the progression of hepatoma by downregulating osteopontin through inhibiting the PI3K/AKT/Twist pathway

Primary hepatic carcinoma (PHC) is a leading threat to cancer patients, with few effective treatment strategies. OPN is found to be an oncogene in hepatocellular carcinoma (HCC) with potential as a therapeutic target for PHC. Fenofibrate is a lipid-lowering drug with potential anti-tumor properties, which has been reported to suppress OPN expression. Our study proposes to explore the molecular mechanism of fenofibrate in inhibiting HCC. OPN was found extremely upregulated in 6 HCC cell lines, especially Hep3B cells. Hep3B and Huh7 cells were treated with 75 and 100 μM fenofibrate, while OPN-overexpressed Hep3B cells were treated with 100 μM fenofibrate. Decreased clone number, elevated apoptotic rate, reduced number of migrated cells, and shortened migration distance were observed in fenofibrate-treated Hep3B and Huh7 cells, and these effects were markedly abolished by the overexpression of OPN. Furthermore, the facilitating effect against apoptosis and the inhibitory effect against migration of fenofibrate in Hep3B cells were abolished by 740 Y-P, an agonist of PI3K. A Hep3B xenograft model was established, followed by treatment with 100 mg/kg and 200 mg/kg fenofibrate, while the OPN-overexpressed Hep3B xenograft was treated with 200 mg/kg fenofibrate. Tumor growth was repressed by fenofibrate, and this repression was notably abolished by OPN overexpression. Furthermore, the inhibitory effect of fenofibrate on the PI3K/AKT/Twist pathway in Hep3B cells and the Hep3B xenograft model was abrogated by OPN overexpression. Collectively, fenofibrate suppressed the progression of hepatoma by downregulating OPN through inhibiting the PI3K/AKT/Twist pathway.

Introduction

PHC is one of the most common cancers worldwide, including HCC (75-85%) and intrahepatic cholangiocarcinoma (10-15%) (Sung et al. 2021). Approximately 47% of HCC patients are diagnosed in China, where it is the fifth leading cause of death (Zhou et al. 2019; Petrick et al. 2020). The main risk factors for HCC are hepatitis virus infection (hepatitis B virus or hepatitis C virus), heavy alcohol consumption, obesity, and autoimmune liver disease. At present, the treatment for HCC is mainly divided into surgical treatment and non-surgical comprehensive treatment. Surgical resection is the first choice for early HCC; however, the possibility of relapse is high, with a 5-year recurrence rate of up to 70% (Forner et al. 2018). Molecular targeted therapy and immunotherapy are options for advanced hepatoma, but they are costly and carry multiple side effects. The median survival time of HCC patients is not more than two years (Yang et al. 2019; Vogel and Saborowski 2020). Therefore, it is urgent to explore effective treatment strategies for HCC.

Osteopontin (OPN) is a phosphorylated glycoprotein that exerts a variety of biological effects by binding to receptors such as integrin and CD44 (Chernaya et al. 2018; Klement et al. 2018). It is involved in pathophysiological processes such as bone formation, mineralization, reconstruction, inflammatory response, vascular diseases, and the development of tumors (Foster et al. 2018; Lok and Lyle 2019). A previous study has shown that the proliferation, invasion, and metastasis of tumor cells are facilitated by OPN, accompanied by an inhibition of apoptosis (Huang et al.
2017). It is reported that the production of vascular endothelial growth factor is induced by OPN to facilitate the progression of angiogenesis, which further mediates the resistance of cancer cells to chemotherapy (Du et al. 2017; Ouyang et al. 2018). Studies have shown that OPN mediates the occurrence and development of a variety of tumors (Wong et al. 2017; Cao et al. 2019). Furthermore, OPN is significantly upregulated in liver cancer tissues and serum of HCC patients, and it plays a critical role in the occurrence, development, metastasis, and recurrence of HCC (Cao et al. 2012). A recent study has confirmed that OPN facilitates the progression of HCC by activating the PI3K/AKT/Twist signaling pathway (Yu et al. 2018). Therefore, OPN may become a novel effective target for the treatment of HCC.

Fenofibrate belongs to the third generation of phenoxyaromatic acid lipid-lowering drugs, which significantly reduces total cholesterol (TC) and total triglyceride (TG) by activating peroxisome proliferator-activated receptor-α (Staels et al. 1998). Fenofibrate is commonly used in the treatment of hypercholesterolemia, hypertriglyceridemia, and mixed hyperlipidemia in clinical practice. Recently, a marked antitumor effect of fenofibrate has been reported (Kong et al. 2021). However, the mechanism of action remains unclear. The latest study reported that the expression of OPN was suppressed by fenofibrate (Rowbotham et al. 2018). Our study aims to explore the molecular mechanism of fenofibrate in inhibiting the progression of HCC.

Cells and treatments

The normal human hepatocyte cell line (L-02 cells) and six HCC cell lines (Hep3B cells, HepG2 cells, Huh7 cells, MHCC97H cells, HCCLM3 cells, and HCCLM6 cells) were obtained from iCell (China) and cultured in DMEM medium containing 10% FBS at 37 °C and 5% CO2. To obtain OPN-overexpressed cells, Hep3B cells were transfected with adenovirus containing pcDNA3.1-OPN, with pcDNA3.1-NC as a negative control. After 48 h of transfection, cells were collected and the transfection efficacy was verified using the Western blotting assay.

CCK-8 assay

Cells were seeded in 96-well plates for 24 h, followed by the addition of 10 μL CCK-8 solution. After incubation for 2 h, the OD value was detected using a microplate reader (CMaxPlus, MD, USA).

Wound healing assay

When the cell density reached more than 90%, a 200 μL pipette tip was used to make a scratch in each well; the medium was then discarded and replaced with incomplete DMEM. The scratch in each well was photographed. Cells were returned to the incubator, and the scratch in each well was photographed again after 24 h. Scratch widths at 0 h and 24 h were measured, and the corresponding migration rate was calculated.

Clone formation assay

Two thousand cells/well were seeded in 6-well plates and incubated for 10 days. When macroscopic colonies appeared, the supernatant was aspirated and cells were fixed with a 3:1 mixture of methanol and acetic acid at room temperature for 5 min. Methanol solution containing crystal violet was added and left for 15 min. The supernatant was aspirated, and plates were air-dried at room temperature for observation using an optical microscope (AE2000; Motic, China).
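Assuming the standard wound-closure definition (the paper does not spell the formula out), with W_0 and W_24 denoting the scratch widths at 0 h and 24 h, the migration rate referred to above is:

Migration rate (%) = (W_0 − W_24) / W_0 × 100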
Transwell assay

1.5 × 10^5 cells cultured in serum-free medium were seeded into the upper chamber of the Transwell insert (3422; Corning, USA), and the lower chamber was filled with medium supplemented with 20% FBS. After 24 h of incubation, cells were wiped off the upper chamber, and those in the lower chamber were stained with crystal violet. Finally, the migrated cells were counted using an optical microscope (AE2000; Motic, China).

Detection of apoptosis by flow cytometry

1 × 10^6 cells were collected and washed with PBS buffer, then re-suspended in 300 μL of pre-cold 1× Annexin V-FITC binding buffer. Cells were then incubated with 5 μL Annexin V-FITC reagent and 10 μL PI reagent for 10 min in the dark at room temperature. Lastly, cells were loaded onto the flow cytometer (C6, BD, USA) for the analysis of apoptosis.

Animals and xenograft model

Twenty-four female nude mice (7-9 weeks old) were purchased from Charles River (China). After 7 days of adaptive feeding, nude mice were randomly divided into 4 groups: Control, 100 mg/kg fenofibrate, 200 mg/kg fenofibrate, and OPN OE + 200 mg/kg fenofibrate. Six nude mice were used in each group. For the establishment of the xenograft model, 5 × 10^6 cells were inoculated subcutaneously into the back of the axilla of each mouse in a volume of 0.25 mL/mouse. The tumor volume was recorded every three days. Administration was initiated when the tumor volume reached approximately 100 mm^3. In the control group, the Hep3B cell xenograft model was established, followed by oral dosing with normal saline for 14 days. In the 100 mg/kg and 200 mg/kg fenofibrate groups, the Hep3B cell xenograft model was established, followed by daily oral dosing with 100 mg/kg or 200 mg/kg fenofibrate, respectively, for 14 days. In the OPN OE + 200 mg/kg fenofibrate group, the OPN-overexpressed Hep3B cell xenograft model was established, followed by daily oral dosing with 200 mg/kg fenofibrate for 14 days. Tumors were weighed and sampled at the end of the study.

Statistical analysis

Data are presented as mean ± SD and were analyzed by one-way ANOVA using GraphPad Prism 7.0. P < 0.05 was considered statistically significant.

The determination of the HCC cell line and the concentration of fenofibrate

Firstly, the level of OPN in normal hepatocytes and 6 HCC cell lines was determined. Compared to L-02 cells, OPN was extremely upregulated in the 6 HCC cell lines, with the highest expression observed in Hep3B cells (Fig. 1A). Furthermore, to determine the working concentration of fenofibrate and the target HCC cell lines, cells were treated with 0, 12.5, 25, 50, 75, and 100 μM fenofibrate, followed by evaluation of cell viability using the CCK-8 assay (Fig. 1B). In L-02 cells, fenofibrate had no impact on cell viability at any concentration. In Hep3B and Huh7 cells, cell viability was signally repressed by fenofibrate in a concentration-dependent manner. However, in HepG2 cells, only minor changes in cell viability were observed with fenofibrate. Accordingly, Hep3B and Huh7 cells, together with 75 and 100 μM fenofibrate, were used in subsequent assays.
Fenofibrate inhibited the proliferation and migration, and facilitated the apoptosis, of HCC cells by downregulating OPN

To obtain OPN-overexpressed cells, Hep3B cells and Huh7 cells were transfected with adenovirus containing pcDNA3.1-OPN, with pcDNA3.1-NC as a negative control. Compared to pcDNA3.1-NC, OPN was dramatically upregulated in the pcDNA3.1-OPN groups (Fig. 2A), indicating the successful establishment of OPN-overexpressed Hep3B and Huh7 cells. Subsequently, HCC cells were treated with 75 and 100 μM fenofibrate for 24 h, while OPN-overexpressed HCC cells were treated with 100 μM fenofibrate for 24 h. In Hep3B cells, the colony number was dramatically reduced from 131.0 to 66.7 and 24.7 by 75 and 100 μM fenofibrate, respectively. Compared to 100 μM fenofibrate, the colony number was reversed to 108.3 by the overexpression of OPN. In Huh7 cells, the colony number was dramatically reduced from 173.0 to 111.7 and 66.0 by 75 and 100 μM fenofibrate, respectively. Compared to 100 μM fenofibrate, the colony number was reversed to 137.0 by the overexpression of OPN (Fig. 2B). Furthermore, in Hep3B cells, the apoptotic rate in the control, 75 μM fenofibrate, 100 μM fenofibrate, and OPN OE + 100 μM fenofibrate groups was 4.62%, 23.69%, 30.88%, and 16.29%, respectively. In Huh7 cells, the apoptotic rate in the control, 75 μM fenofibrate, 100 μM fenofibrate, and OPN OE + 100 μM fenofibrate groups was 4.20%, 22.39%, 35.98%, and 17.37%, respectively (Fig. 2C). In Hep3B cells, the number of migrated cells was markedly reduced from 213.3 to 123.0 and 65.7 by 75 and 100 μM fenofibrate, respectively. Compared to 100 μM fenofibrate, the number of migrated cells was reversed to 148.0 by the overexpression of OPN. In Huh7 cells, the number of migrated cells was markedly reduced from 286.0 to 120.0 and 105.3 by 75 and 100 μM fenofibrate, respectively. Compared to 100 μM fenofibrate, the number of migrated cells was reversed to 181.3 by the overexpression of OPN (Fig. 2D). Moreover, in Hep3B cells, the migration distance observed in the wound healing assay in the control, 75 μM fenofibrate, 100 μM fenofibrate, and OPN OE + 100 μM fenofibrate groups was 68.8%, 35.3%, 19.5%, and 59.1%, respectively. In Huh7 cells, the migration distance observed in the wound healing assay in the control, 75 μM fenofibrate, 100 μM fenofibrate, and OPN OE + 100 μM fenofibrate groups was 62.9%, 46.7%, 32.0%, and 53.9%, respectively (Fig. 2E). A dramatic inhibitory effect of fenofibrate on the in vitro growth and migration of HCC cells was observed, which might be mediated by the downregulation of OPN.

Fenofibrate suppressed the PI3K/AKT/Twist pathway in Hep3B cells by downregulating OPN

We further examined the effect of fenofibrate on the PI3K/AKT/Twist pathway in Hep3B cells. Firstly, the level of OPN was greatly suppressed by 75 μM and 100 μM fenofibrate, and this suppression was greatly reversed by the introduction of pcDNA3.1-OPN (Fig. 3). Furthermore, PI3K, p-AKT/AKT, Twist, and N-cadherin were extremely downregulated, while E-cadherin was extremely upregulated, by 75 μM and 100 μM fenofibrate. Compared to 100 μM fenofibrate, the levels of PI3K, p-AKT/AKT, Twist, and N-cadherin were signally elevated, while the E-cadherin level was greatly decreased, by the overexpression of OPN.
The inhibitory function of fenofibrate on Hep3B cells was abolished by an agonist of PI3K

To confirm whether the regulatory function of fenofibrate in Hep3B cells was associated with the PI3K/AKT/Twist pathway, Hep3B cells were treated with 100 μM fenofibrate in the presence or absence of 10 μM 740 Y-P. The apoptotic rate of Hep3B cells was markedly increased from 4.03% to 33.96% by fenofibrate, and this increase was greatly reduced to 18.13% by co-treatment with 740 Y-P (Fig. 4A). Furthermore, the migration distance in the control, fenofibrate, and fenofibrate + 740 Y-P groups was 63.93%, 31.44%, and 47.23%, respectively (Fig. 4B). Moreover, the repressed OPN level observed in fenofibrate-treated Hep3B cells was markedly increased by co-treatment with 740 Y-P (Fig. 4C).

Fenofibrate inhibited the in vivo growth of Hep3B cells by downregulating OPN

To verify the anti-tumor property of fenofibrate, a xenograft model was constructed. We found that tumor volume was markedly suppressed by 100 mg/kg and 200 mg/kg fenofibrate. Compared to the 200 mg/kg fenofibrate group, this suppression of tumor volume was largely reversed by the overexpression of OPN (Fig. 5A). Furthermore, the tumor weight in the control, 100 mg/kg fenofibrate, 200 mg/kg fenofibrate, and OPN OE + 200 mg/kg fenofibrate groups was 0.31 g, 0.12 g, 0.05 g, and 0.23 g, respectively (Fig. 5B). The tumor growth inhibition rate in the 100 mg/kg and 200 mg/kg fenofibrate groups was 61.50% and 82.51%, respectively; compared to the 200 mg/kg fenofibrate group, the tumor growth inhibition rate was decreased to 25.22% by the overexpression of OPN (Fig. 5C). Images of the tumors are shown in Fig. 5D.

Fenofibrate suppressed the PI3K/AKT/Twist pathway in tumor tissues by downregulating OPN

Lastly, the regulatory mechanism of fenofibrate was verified in tumor tissues. OPN, PI3K, p-AKT/AKT, Twist, and N-cadherin were markedly downregulated, while E-cadherin was upregulated, in tumor tissues by 100 mg/kg and 200 mg/kg fenofibrate. Compared to the 200 mg/kg fenofibrate group, the levels of OPN, PI3K, p-AKT/AKT, Twist, and N-cadherin were markedly elevated, while the E-cadherin level was markedly decreased, by the overexpression of OPN (Fig. 6).

Discussion

PHC is one of the most common malignant tumors of the digestive system in the world. Its prevalence is relatively high in Asia and other parts of the world, and its incidence is also increasing in Africa and Western countries (Chidambaranathan-Reghupaty et al. 2021). Most patients with PHC are treated with surgical resection, including liver transplantation, while standard chemotherapy and radiotherapy have limited efficacy. High invasiveness and metastasis are two major characteristics of PHC, resulting in poor prognosis even after surgical resection. The key to the treatment and prognosis of PHC is to inhibit and reduce the invasion and metastasis of liver cancer (Haber et al. 2021). In our study, the proliferation and migration of Hep3B cells were markedly suppressed by fenofibrate, accompanied by an elevation of the apoptotic rate, which is in accordance with the performance of fenofibrate in breast cancer cells (Li et al. 2014), ovarian cancer cells (Wang et al. 2014), and pancreatic cancer cells (Hu et al. 2016). Furthermore, the in vivo xenograft model further confirmed the anti-tumor property of fenofibrate against HCC, which was also observed in a PC-3 xenograft model (Tao et al. 2018).
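As a consistency check on the in vivo results above: the tumor growth inhibition (TGI) formula is not stated in the text, but the reported rates are approximately reproduced if TGI is computed from the mean tumor weights in the common way (an assumed reconstruction, not the authors' stated method):

$$\mathrm{TGI} = \left(1 - \frac{\bar{W}_{\mathrm{treated}}}{\bar{W}_{\mathrm{control}}}\right) \times 100\%$$

With the mean weights reported above (control 0.31 g), this gives (1 - 0.12/0.31) × 100% ≈ 61.3% for 100 mg/kg (reported 61.50%), (1 - 0.05/0.31) × 100% ≈ 83.9% for 200 mg/kg (reported 82.51%), and (1 - 0.23/0.31) × 100% ≈ 25.8% for OPN OE + 200 mg/kg (reported 25.22%).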
OPN is a secreted phosphorylated glycoprotein that is widely distributed in human tissues and exerts various functions, such as mediating cell adhesion, promoting neovascularization, and inhibiting cell apoptosis. It has been shown that OPN is closely related to the occurrence, development, metastasis, and recurrence of multiple malignant tumors (Coppola et al. 2004). Lin et al. (Lin et al. 2013) reported that OPN is abundantly synthesized and secreted by malignant tumor cells, especially in HCC. Therefore, in recent years, the relationship between OPN and the progression of HCC has become a topic of intense research. Using quantitative PCR, Gotoh et al. (Pan et al. 2003) found that the expression level of OPN in HCC tissues was significantly higher than that in normal liver tissues; furthermore, OPN was strongly expressed in the cells surrounding tumor nodules. Pan et al. (Pan et al. 2003) found that elevated AFP, p53 mutation, large tumor size, late stage, high grade, early recurrence or metastasis, and low 10-year survival rate were closely related to high OPN mRNA levels in HCC patients. Moreover, in some patients with early HCC, a high OPN mRNA level has predictive value for early recurrence. In our study, a marked upregulation of OPN was observed in the HCC cell lines examined, consistent with previous reports (Wu et al. 2022). Furthermore, a close relationship between the expression of OPN and the suppressive effect of fenofibrate on cell viability was observed across the three HCC cell lines tested, implying that the anti-tumor property of fenofibrate might be associated with OPN. In both Hep3B cells and tumor tissues of the Hep3B cell xenograft model, OPN was markedly downregulated by fenofibrate, in line with previous research (Rowbotham et al. 2018; Moxon et al. 2020). Moreover, the in vitro and in vivo anti-tumor functions of fenofibrate were markedly abolished by the overexpression of OPN, suggesting that the function of fenofibrate is mediated by OPN.

Twist, first identified in Drosophila in 1983, is a highly conserved basic helix-loop-helix DNA-binding transcription factor that specifically regulates the expression of key target genes. Twist was initially found to regulate embryonic development by promoting epithelial-mesenchymal transition (EMT) (Jin et al. 2020). With continued in-depth study of the biological mechanism of EMT, Twist has been identified as a key regulator of EMT that is closely related to tumor invasion and metastasis (Yu et al. 2012); it represses the E-cadherin promoter, thereby driving EMT (Vermani et al. 2020). An earlier study (Li and Zhou 2011) of Twist expression in MCF-7 and Hela cells found that high Twist expression induced the morphological changes of EMT, accompanied by activation of the Akt and β-catenin signaling pathways. In our study, the anti-tumor property of fenofibrate was accompanied by inhibition of PI3K/AKT/Twist signaling and EMT progression, as also observed in a fenofibrate-treated renal transplant model (Wang et al. 2019). Furthermore, the regulatory effect of fenofibrate on PI3K/AKT/Twist signaling and EMT progression was abolished by the overexpression of OPN, implying that OPN is a key mediator in the regulatory mechanism of fenofibrate. In future work, the regulatory mechanism of fenofibrate will be further verified by co-treating HCC cells with fenofibrate and an agonist of Twist.
Fig. 1 The HCC cell line and the concentration of fenofibrate were determined. A The expression level of OPN in L-02 cells and HCC cells was detected by Western blotting (**p<0.01 vs. L-02 cells). B The cell viability of L-02 cells and HCC cells was detected by the CCK-8 assay (*p<0.05 vs. 0 μM, **p<0.01 vs. 0 μM)

Fig. 2 Fenofibrate inhibited the proliferation and migration, and facilitated the apoptosis, of Hep3B cells and Huh7 cells by downregulating OPN. A The expression level of OPN was detected by Western blotting (**p<0.01 vs. pcDNA3.1-NC). B The growth of Hep3B cells and Huh7 cells was evaluated by the clone formation assay

Fig. 4 The inhibitory function of fenofibrate on Hep3B cells was abolished by the agonist of PI3K. A Apoptosis was determined by flow cytometry. B The migration ability was assessed by the wound healing assay

Fig. 5 The in vivo growth of Hep3B cells was inhibited by fenofibrate via downregulation of OPN. A The curve of tumor volume during the experiments. B The tumor weight at the end of the experiment. C The tumor growth inhibition rate in each group relative to the control
Two-Step Continuous Cooling Heat Treatment Applied in a Low Carbon Bainitic Steel

Thermo-mechanical treatments using continuous cooling after forging are an established method for producing bainitic steels, mainly because they eliminate energy-intensive additional heat treatment processes. In the industrial sector, the cooling is usually applied in an uncontrolled manner, which can be detrimental to the resulting microstructural morphology and, consequently, to the final product properties. In this study, a new controlled two-step cooling route based on the principles of bainitic displacive growth was designed and applied to a 0.18C (wt-%) steel. An inverse finite element method was applied to the cooling data to obtain the evolution of temperatures in the samples during cooling, allowing point-to-point cooling rates to be assessed. Investigations via X-ray diffraction, optical microscopy and hardness testing revealed a variation of bainitic morphology, namely a transition from granular bainite to lath-like bainite with relatively high hardness and constituent/phase refinement.

Introduction

Recently, there has been increasing economic and ecological interest in the reduction of energy consumption. In the steel production and manufacturing industry, new alloys and processing routes are being employed to make the processing chain leaner by replacing energy-intensive production steels. One expressive example of this is the substitution of quenched and tempered steels by continuous cooling steels, which achieve their final microstructure directly after manufacturing-integrated thermomechanical processing (such as hot rolling or forging) [1-3]. This change in the production chain results in a microstructural transition from tempered martensite to a wide array of bainitic microstructures, in which the morphology of the microstructure and the resulting mechanical properties are closely related to the alloy's chemical composition.

Bainite transformation is a physical phenomenon extensively studied in the field of modern metallurgy [4-6]. This interest can be attributed to the difficulties in understanding the transformation's thermodynamic aspects [7] as well as the wide range of industrial applications this microstructure provides [8-10]. To explore the thermodynamic aspects of the bainitic transformation, multi-step heat treatments (MSHT) have been developed in recent years. They consist of adding temperature steps to the austempering process. This group of heat treatments (HT), which is focused on further enhancing the outcome of conventional one-step austempering treatments, has proven to be an effective tool for fine-tailoring the properties of bainitic laths and, especially, the characteristics of the resulting room-temperature retained austenite (RA). Wang et al. [11] utilized a MSHT to achieve nanostructured bainite in a medium-carbon (0.30% C) steel. This was achieved by carbon partitioning to the austenite, which reduced the martensite start temperature of the steel with each step, allowing each succeeding bainite formation to have continuously thinner laths. The succeeding steps also allowed the transformation of blocky RA into bainite and film austenite, increasing both the ductility and yield strength of the multi-step treated samples. Avishan et al.
[12] investigated the effect of a two-step HT focused on optimizing the transformation-induced plasticity (TRIP) effect by tailoring the properties of the retained austenite. They found that extensively refining the retained austenite films results in an overly stable austenite, hindering the benefit of the TRIP effect. Mousalou et al. [13] also achieved nanostructured bainite in a low-carbon (0.26% C) steel. They reported an expressive toughness improvement in comparison to the conventional one-step treatment, which was attributed to the introduction of a higher variety of variants, increasing the high-angle boundaries and thus hindering crack propagation. Li et al. [14] modified the conventional two-step HT by adding a brief step before beginning a lower-temperature austempering. With the first step being at a higher temperature, they lowered the number of sites for bainite nucleation as well as the undercooling between the 1st and 2nd steps, resulting in higher amounts of RA at the expense of bainitic lath refinement. With this, they achieved a more effective TRIP effect, resulting in improved elongation and ultimate tensile strength.

These studies showed the benefits of conducting multiple-step austempering. However, no efforts have been directed towards designing multiple-step continuous cooling heat treatments. Accordingly, the present work addresses the microstructural response of a low-carbon bainitic steel, DIN 18MnCrSiMo6-4, when subjected to a two-step continuous cooling after austenitizing, simulating the hot forging step. A usual treatment for this material is uncontrolled air cooling directly after hot rolling or forging operations. The air-cooled microstructure consists of a ferritic-bainitic matrix embedded with quasi-axial islands of martensite-austenite constituents (MA), namely Granular Bainite (GB), which characterizes the as-received microstructure of this material. The focus of the present work was to evaluate the effects of dividing the cooling process into two steps with different cooling rates. The first step involves intense cooling and the second air cooling. The aim of the first step is to provide a higher driving force for the beginning of the bainitic transformation, influencing the resulting bainitic morphology and its properties.

Methodology

In order to analyze the effect of the cooling strategy on the microstructure, optical microscopy, hardness testing and X-ray diffraction were utilized. Furthermore, simulation of the cooling curves was performed to assess the evolution of the temperature map. Accordingly, this section describes all utilized materials, methods and techniques, including sample description and preparation.

Sample geometry, chemical composition and initial microstructure

Samples were machined into cylinders of 54 mm length by 38 mm diameter from DIN 18MnCrSiMo6-4 hot rolled bars from Swisstec (SwissSteel), Emmenbrücke, Switzerland. Table 1 presents the chemical composition of the DIN 18MnCrSiMo6-4 steel employed in this investigation. The as-received microstructure of the samples is presented and further discussed in Section 3.1.

Microstructural characterization

All samples were cut at their half-length for analysis of the transverse section and then ground with a sequence of silicon carbide papers down to 1200 grit. Subsequently, samples were polished with 1 µm diamond paste for metallographic analysis. For optical microstructure analysis, samples were etched with 2% vol.
Nital etchant for 10 seconds, whereas for the analysis of prior austenitic grain size (PAGS), the samples were etched with an aqueous solution of picric acid (42 mL wetting agent, 58 mL of distilled water and 2.5 g of picric acid) by swabbing for 6 minutes. Micrographs of the transverse section's core, mid-radius and surface regions of the heat-treated samples were analyzed. For the as-received condition, only the mid-radius was analyzed as a reference condition. The Vickers hardness was also investigated in the aforementioned regions. For each region, 5 indentations were carried out with a load of 1 kgf and an application time of 10 seconds.

X-ray diffraction (XRD) analysis was carried out in the same regions as the hardness and metallography analyses. A Seifert MZ VI E diffractometer (GE Inspection Technologies) operated at 33 kV and 40 mA with a Cr radiation source (wavelength λ = 2.29 Å) was used in the analysis. The diffraction intensities were acquired by a line position sensitive detector in the range 60° < θ < 164° with a total scanning time of 1.3 h. In order to avoid the influence of residual stresses generated by sample preparation, electrolytic removal of a 100 µm layer was performed with a solution of 20% H3PO4 and 80% H2SO4 before the XRD measurements. The obtained diffraction patterns were analyzed by the Rietveld refinement method with the software TOPAS 4.2 (Bruker-AXS, Karlsruhe, Germany). As the quantity of carbides in this steel is low, and thus of limited detectability with this method, only α-Fe (bainitic ferrite/martensite) and austenite were considered in the Rietveld refinement. The instrumental contribution to the peak broadening was removed by convolution of the instrumental function determined with NIST LaB6 standard calibration powder. For the calculation of the austenite carbon content, the Dyson and Holmes equation was used [15].

Development of the aimed two-step continuous cooling

Aiming to achieve the lowest possible temperature for bainitic transformation while avoiding austenite transformation to martensite or polygonal ferrite, a cooling path was designed, as shown in Figure 1 by the dashed curve superposed on the steel's continuous cooling transformation (CCT) diagram. This cooling path consists of employing two different cooling media, the first generating a high cooling rate and the second a lower cooling rate, as indicated by the dashed line in Figure 1. The aim was to reach 450 °C at the beginning of the bainitic transformation and then transition to a slow cooling across the bainitic field. It should be noted that utilizing a one-step CCT diagram for a two-step heat treatment will result in deviations in the transformation curve shape because of the different evolution in length change caused by the cooling rate variation in the proposed heat treatment. However, the deviations in this work should be minor, as the beginning of the transformations should only be delayed by the high cooling rate of the first step [16]. Based on the elaborated thermal route, a preliminary investigation was done to test room-temperature (20 °C) oil and water baths as cooling media for the first step. The quenching data allowed estimating, for each cooling medium, the required time for achieving the goal temperature and the respective cooling rate, and thus the duration of the first step. Subsequently, the two-step heat treatment was carried out based on the obtained data, using the higher-intensity medium followed by air.
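Returning to the XRD methodology above: the austenite carbon content follows from the Rietveld-refined lattice parameter through the Dyson and Holmes relation [15]. The sketch below illustrates the back-calculation in Python; the linear form and the coefficient values are assumptions of the commonly quoted type a_γ = a₀ + k_C·wC (plus smaller alloying-element terms), and the original reference should be consulted for the exact equation.

```python
# Illustrative back-calculation of austenite carbon content from the
# refined lattice parameter, in the spirit of the Dyson-Holmes relation.
# A0 and K_C are assumed placeholder coefficients, not the exact values
# of reference [15]; smaller alloying-element terms are neglected here.

A0 = 3.556    # assumed carbon-free austenite lattice parameter, Angstrom
K_C = 0.0453  # assumed sensitivity to carbon, Angstrom per wt.% C

def carbon_content(a_measured: float) -> float:
    """Solve a_gamma = A0 + K_C * wC for the carbon content wC (wt.%)."""
    return (a_measured - A0) / K_C

# Example: a refined lattice parameter of 3.60 Angstrom would imply
print(f"wC ~ {carbon_content(3.60):.2f} wt.%")  # ~0.97 wt.% C
```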
The chosen temperature for austenitizing was 1000 °C, based on the work of Silveira et al. [17]. With this austenitization temperature, the PAGS of the as-received and post-treatment conditions remained similar, ruling out the influence of Hall-Petch strengthening on the hardness comparison. All of the heat treatment steps were carried out at atmospheric pressure with the aid of a muffle furnace with a standard air atmosphere. In order to evaluate the cooling rates and the reproducibility of the proposed thermal routes, thermocouples were inserted into the samples. Furthermore, the measured cooling curves were used for the determination of heat transfer coefficients by means of Finite Element Method (FEM) inverse analysis. The simulation allowed evaluating the temperature field evolution, and thus the calculation of the cooling rates at any position in the samples. Temperature mapping was determined by means of 3 thermocouples distributed along the sample's height, as illustrated in Figure 2. For the assessment of reproducibility, each heat treatment was carried out three times.

Simulation of the two-step continuous cooling treatment by finite element method

By means of the experimental cooling curve data, an inverse analysis was done in order to evaluate the input parameters for the simulation of the two-step continuous cooling heat treatment. FEM was performed with the DEFORM® commercial software package. The simulation model considered the sample as a physical model based on the constitutive heat exchange equations [18-20]. Thereby, the evolution of the temperature in the system is given by:

$$\rho C \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + Q \qquad (1)$$

where $T(x, y, z, t)$ is the transient temperature distribution in °C, $\rho$ is the material density, $C$ is the specific heat, $k$ is the thermal conductivity and $Q$ is the internal heat generation per volume unit. Equation 1 describes the evolution of temperature through time and is governed by the competition of internal heat conduction and dissipation. The boundary conditions considered were those of radiation, conduction and convection, which are modelled by the following equations:

$$\theta_r = \sigma_s \varepsilon \left(T^4 - T_0^4\right) \qquad (2)$$

$$\theta_c = h \left(T - T_0\right) \qquad (3)$$

where $\theta_r$ is the radiative heat flux, $\theta_c$ is the convective heat flux, $\sigma_s$ is the Stefan-Boltzmann constant, $T_0$ is the room temperature, $\varepsilon$ is the emissivity and $T$ is the sample temperature. The value $h$ represents the global conduction and convection heat transfer coefficient, which varies with the thermal conductivity $k$ and temperature. The heat-treatment modelling consisted of dividing the cooling into multiple simulation steps depending on the global heat transfer coefficient, thermal conductivity and specific heat, according to the values of these parameters for the specific chosen step. In this regard, the heat treatment simulation considered its input parameters based on the following three steps: 1. the high cooling rate of the first step; 2. a transitional period from the first to the second step; 3. calm air cooling in the second step. The second simulation step was implemented in order to account for the smooth transitional behavior of cooling curves subjected to a transition in cooling media. For each region of the sample (top, core and bottom), four points were considered to more accurately evaluate the temperature evolution with the FEM: one shared the center position of the thermocouple and three more were distributed on a radius of 0.5 mm around the thermocouple center. A tetrahedral mesh was used for modelling the sample, and the boundary conditions were based on the chosen heat treatment parameters.
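To make the governing equation (1) and the convective boundary condition (3) concrete, the sketch below integrates a one-dimensional analogue of the two-step route with an explicit finite-difference scheme, switching the heat transfer coefficient from a water-like to an air-like value at 11 s. It is a conceptual stand-in for the DEFORM® model, not a reproduction of it; all property and boundary values are assumed placeholders (the actual study inputs are those of Table 2 below).

```python
import numpy as np

# Minimal 1D explicit finite-difference sketch of Eq. (1) with the
# convective boundary of Eq. (3). All values are illustrative placeholders.
L, n = 0.019, 40                   # half-thickness (m), comparable to the
dx = L / (n - 1)                   # sample radius, and number of nodes
rho, cp, k = 7800.0, 650.0, 30.0   # assumed steel properties (SI units)
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha           # stable explicit time step
T = np.full(n, 1000.0)             # austenitizing temperature, deg C
T_inf = 20.0                       # cooling-medium temperature

def h_of(t):                       # two-step route: water, then still air
    return 3000.0 if t < 11.0 else 30.0   # assumed W/(m^2 K) values

t = 0.0
while t < 21.0:
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = Tn[1]                  # symmetry (adiabatic) at the core
    # energy balance at the surface node: conduction in, convection out
    Tn[-1] = T[-1] + dt / (rho * cp * dx) * (k * (T[-2] - T[-1]) / dx
                                             - h_of(t) * (T[-1] - T_inf))
    T, t = Tn, t + dt

print(f"core {T[0]:.0f} C, surface {T[-1]:.0f} C after {t:.1f} s")
```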
Table 2 presents the simulation parameters.

Results

This section addresses the results of the present work with brief discussion. A general in-depth discussion is carried out in Section 4.

Characterization of the as-received condition

Regarding microstructural characterization, annotations accompany all presented micrographs. Lath-like bainite (LLB) is indicated with parallel arrows showing its growth directions. Ellipses delimit regions with granular bainite (GB), polygonal ferrite (PF) and martensite-austenite constituents (MA). Dashed lines superposed on grain boundaries indicate prior austenite grain boundary (PAGB) locations. Figure 3a shows the microstructure and hardness results, whereas Figure 3b shows the austenitic grain structure and austenitic grain size for the as-received condition. Figure 3c shows the area indicated by a rectangle in Figure 3a at higher magnification. Additionally, in Figure 3c, the scale (lower right corner) shows the numeric value of the prior austenitic grain size corresponding to the microstructure.

The as-received condition of the material presents a granular bainitic matrix with MA constituents, PF and scarce regions of lath-like bainite, as shown in Figure 3a. Figure 3b shows this microstructure's characteristic PAGB, from which the grain size was measured. Figure 3c shows the selected rectangular area of Figure 3a at higher magnification. In this image it is possible to identify the MA constituents, indicated by the ellipses. Parallel arrows indicate the bainite laths and their growth direction, and the dashed line indicates a grain boundary. This microstructure is typically obtained in air-cooled low-carbon, medium-silicon steels [21,22], the cooling rate in this case being 1 °C/s. The lack of cementite in this microstructure is due to the steel's silicon content: silicon has low solubility in cementite, delaying its precipitation considerably [23]. As shown by the area delimited by a trapezoid in Figure 3c, some regions exhibit higher resistance to Nital etching. Reisinger et al. [24] confirmed in their work that these regions are indeed also granular bainite; however, they have a preferred crystallographic orientation of {001} for the bainitic growth direction. Figure 4 shows the resulting preliminary oil and water cooling curves as well as the curve for the conventional air cooling (approximately 1 °C/s) plotted over the CCT diagram. Figure 4 only presents the data obtained for the bottom thermocouple, as this region was the one with the highest reached cooling rate. Because the interest lay only in evaluating the cooling rate for the first treatment step, further cooling data were omitted from Figure 4.

Two-step continuous cooling heat treatment

As Figure 4 indicates, the oil medium would not avoid the polygonal ferrite field and only water would satisfy the high-cooling-rate prerequisite for the two-step heat treatment under development. Therefore, 20 °C water was chosen as the cooling medium for the first step. The data from the preliminary quenching experiments then allowed estimating the required time in the water bath for the first step of the heat treatment. Comparing the overall surface finish of the two-step samples with the directly quenched ones through visual inspection revealed no apparent differences. This is expected, since the proposed heat treatment can be seen as an interrupted quenching treatment.
Figure 5a shows the average temperature of all cooling experiment trials, as well as two vertical dashed lines at 11 and 21 seconds. The 11 s dashed line marks the end of the first heat treatment step, i.e., when the sample was removed from the water and air cooling (the second step) began. The 21 s dashed line indicates the lowest temperature reached before the outer region's temperature began to rise due to heat exchange with the core, already in air. The average process cooling rates were calculated from the highest process temperature (austenitization) for these two intervals; the results are shown in Figure 5b. Further on, in Section 3.3, experimental and simulation results are compared for the 11 s and 21 s instants indicated in Figure 5a.

Based on the superposition with the CCT curve of the material, the use of water in the first step of the two-step heat treatment and air in the second step would be enough to begin the bainitic transformation at lower temperatures, while the air cooling would avoid the transformation of austenite to polygonal ferrite and martensite. The curves shown in Figure 5a exhibited a transitional cooling behavior between the end of the first step and the lowest temperature reached, accompanied by a high dispersion in the obtained data. However, as shown in Figure 5b, as the heat treatment progressed in air the cooling rates tended to a uniform value of around 20 °C/s. This trend is also seen in the dispersion of the cooling rates, which decreased as the treatment continued. The temperature evolution indicates that, after reaching a maximum gradient, an inversion of temperature occurs between the core and outer surfaces, implying that progress towards homogeneity occurs before any transformation takes place. From a stress evolution perspective this is desirable, since the transformations will occur practically simultaneously with a low temperature gradient during continuous cooling, similarly to martempering [25]. Furthermore, even for thicker parts where the transformations occur with a higher temperature gradient, the volumetric change from austenite to bainite is not as pronounced as for martensite [26]. Therefore, considerable residual stress gradients are not expected, unlike in a conventional quenching process. Some early surface residual stress measurements showed an axisymmetric compressive residual stress of about 200 MPa at the surface. Figure 6 presents the results obtained by the inverse FEM analysis using the experimental data for the heat transfer parameters: the global heat transfer coefficient h, heat conductivity k and specific heat c, which were the main input values used in the simulation model.

Two-step continuous cooling simulation

The simulation results are shown in Figure 7. Figure 7a shows the simulated cooling curves plotted over the experimental data, and Figures 7b and c show the temperature maps at 11 and 21 seconds, respectively. Figure 8 shows the cooling rates for the experimental and simulated conditions at 21 seconds, determined by inverse FEM analysis from the data shown in Figure 7. Although the model used for the treatment did not reproduce the temperature rise during the third simulation step, as seen in Figure 7a, the overall experimental and simulated cooling rates at the end of the second step were similar, as shown in Figure 8.
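Conceptually, the inverse analysis behind Figure 6 chooses the heat transfer coefficient h so that the simulated cooling curve reproduces the thermocouple data. The sketch below illustrates this with a least-squares fit, using a lumped-capacitance forward model as a stand-in for the FEM model; all numbers are assumed for illustration and are not the study's values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Conceptual sketch of the inverse analysis: fit h so that a simple
# forward model matches a measured cooling curve. The lumped-capacitance
# model stands in for the DEFORM FEM model; all values are illustrative.
rho, cp, v_over_a = 7800.0, 650.0, 0.0075   # assumed properties; V/A in m
t = np.linspace(0.0, 11.0, 50)              # first (water) cooling step
T0, T_inf = 1000.0, 20.0

def forward(h):                             # lumped-capacitance curve
    tau = rho * cp * v_over_a / h
    return T_inf + (T0 - T_inf) * np.exp(-t / tau)

rng = np.random.default_rng(1)
T_meas = forward(2500.0) + rng.normal(0.0, 5.0, t.size)  # synthetic data

res = minimize_scalar(lambda h: np.sum((forward(h) - T_meas) ** 2),
                      bounds=(100.0, 10000.0), method="bounded")
print(f"estimated h ~ {res.x:.0f} W/(m^2 K)")  # recovers ~2500
```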
The results obtained from the temperature maps at 21 seconds, which correspond to the approximate cooling rates before the beginning of the bainitic transformation, were therefore extrapolated to the regions in which the microstructural observations were made, allowing a comparison between the obtained cooling rate and the resulting post-treatment microstructural morphology. The experimental and simulated cooling rates are presented along with the microstructures in the next section.

Microstructural characterization of the two-step heat treatment

Figure 9a shows the average prior austenitic grain size for the 1000 °C austenitization after a 20-minute holding time. Figures 9b and c show the microstructure and hardness for the core region of the sample, which was subjected to an average cooling rate of 27 °C/s in the 0 to 21 seconds interval. Figure 9c shows some PAGB-superposed dashed lines and arrows parallel to the growth direction of the bainitic laths. Here, the scale was also modified to show the numerical value of the PAGS corresponding to the microstructure. As seen in Figure 9a, abnormal grain growth occurred, resulting from the total or partial dissolution of precipitates such as TiC and AlN. This phenomenon causes undesirable microstructural inhomogeneity because of its anisotropic character. Regarding the bainitic transformation, Lan et al. [27] compared the effect of PAGS on the bainitic transformation in view of the distribution of carbon in the lattice, concluding that coarser austenitic grains contribute to the formation of interlocked bainitic laths and increase the temperature at which the bainitic transformation begins. Furthermore, Caballero et al. [28] concluded that a larger PAGS favors the formation of granular bainite, which in turn favors the formation of mixed granular and lath-like bainite microstructures in steels with inhomogeneous grain size. Figure 10a shows the microstructure and hardness for the mid-radius of the same sample. This region had a simulation-obtained average cooling rate of 31 °C/s in the interval between 0 and 21 seconds. Figure 10b shows the magnified delimited region of (a), with a region of GB indicated by an ellipse, PAGB indicated by superposed dashed lines, and arrows parallel to the bainitic lath growth direction. The scale was modified in order to show the numerical value of the PAGS corresponding to the microstructure.

Phase quantification through X-ray diffraction

Table 3 presents the austenite and ferrite proportions obtained by X-ray diffraction. As seen in the table, there is a slight increase in the ferrite quantity and austenite carbon content for the two-step heat-treated sample. Additionally, the ferrite volume can be considered to comprise only bainitic ferrite, since the transformation to polygonal ferrite in the heat-treated samples was suppressed. The results in Table 3 corroborate the incomplete reaction phenomenon seen in bainitic steels, where, according to Garcia-Mateo et al. [6] and Caballero et al. [29], lower transformation temperatures should result in higher volumes of bainite. However, the obtained phase quantities are within the accuracy of the method, about ±3 wt.-%, indicating that the effects on the phase contents are not pronounced. Nevertheless, the morphology and carbon content of the retained austenite are affected by the performed treatment, which is expected to influence the final mechanical properties, in particular the TRIP behavior under load.
General Discussion

Granular bainite, which forms the matrix of the material in its as-received condition, was characterized by Zajac et al. [22] as an aggregate of irregular ferrite interwoven with islands of different morphologies and chemical compositions, dependent on carbon partitioning during cooling. In the present work, the ferrite is surrounded by MA, as shown in Figure 3c, because of the suppression of cementite precipitation caused by silicon [23]. As seen from the conflicting views on GB growth modelling in many works [21,30,31], there is still no consensus on the GB growth mechanism, since this microstructure presents both diffusional and adiffusional morphological evidence. As an example, this can be noticed in the partial maintenance of prior austenite grain boundaries. In Figure 3c, considering the average PAGS shown in the scale, it is noticeable that most of the prior boundaries have been consumed after the transformation, making way for a microstructure with a random granular aspect. However, this microstructure still exhibits regions with lath-like bainite and maintained PAGB, with the MA morphology following either the shape of the grain boundaries or the bainitic laths, in concordance with what was proposed in the works of Li and Baker [32] as well as Li et al. [33]. The microstructural duality of GB can be understood as the result of local driving force variation related to the inhomogeneous partitioning of carbon during continuous cooling transformation. This is similar to the dynamic partitioning effect seen in the work of Y. J. Li et al. [34]. Besides that, the presence of polygonal ferrite, as found in Figure 3, is connected to the slow cooling used for the pre-process, which allows the reconstructive transformation from austenite to ferrite.

The effects of the two-step heat treatment, in the context of the higher driving force for the bainitic transformation conferred by the rapid-cooling first step, resulted in an evident microstructural change between the heat-treated and as-received conditions. Primarily, the higher available driving force resulted in the formation of larger packets of lath-like bainite, which replaced the former granular bainite matrix, as shown in Figures 9 and 10. Furthermore, the transformation to polygonal ferrite was avoided due to the diminished diffusion in the temperature range in which the transformation occurred. Regarding the hardness variation between the as-received and heat-treated samples, there was an approximate increase of 55 HV for the latter, as shown in Figures 3, 9 and 10. This can be explained by a coupled effect of the suppression of polygonal ferrite and the dislocation generation at the austenite/bainite interfaces. The key mechanism responsible for the change in the bainitic morphology and properties is connected to the mechanism of bainitic growth and the higher mechanical resistance of austenite at lower temperatures. As stated by Cornide et al. [35], displacive bainitic growth depends on the variation of the lattice parameter from FCC to BCC, leading to necessary strain accommodations which arise in the form of dislocations in the surrounding lattice. Dislocation build-up at the growing bainitic interface is also the cause of growth cessation, because of the higher driving force required to surpass the increasingly dislocated structures.
This phenomenon of mechanical stabilization is intensified as the temperature decreases, leading to thinner bainitic laths due to the hindrance of interface development. He et al. [4] connected the interface dislocation build-up to its influence on mechanical properties, supporting the hardness variation observed in the present work. On the other hand, in accordance with the works of Singh et al. [36] and Caballero et al. [37], the intrinsically higher austenite resistance at lower temperatures also acts as an interface motion suppressor, contributing to lath refinement. At the optical microscopy scale, this was seen in the increased phase/constituent boundaries in the images, making the differentiation of the microconstituents more difficult. As a consequence of the diffusionless growth of lath-like bainite, there is no rearrangement of previous boundaries, which explains the maintenance of the PAGB in Figure 9c and Figure 10b. Although the obtained microstructure and increase in hardness support the evidence of lath/constituent refinement, it is not yet possible to completely assess the efficacy of the proposed two-step heat treatment in the tailoring of mechanical properties. It is expected that the microstructural refinement mainly improves the yield strength [4,38] and impact toughness [39,40], while the higher content of austenite contributes to the RA stability [41,42] of materials treated this way. However, tensile and impact tests were not in the scope of the present work. Finally, comparing the core and mid-radius regions, there was a slight increase in hardness for the mid-radius region. This, in agreement with Garcia-Mateo et al. [6], is further evidence of lath refinement caused by the higher supercooling of this region, as indicated by the simulation results (see Section 3.4).

Conclusions

• A novel two-step continuous cooling heat treatment is proposed. It was designed and implemented on a low-carbon bainitic steel. The heat treatment offers a promising path for the microstructural optimization of continuous cooling bainitic steels, at the expense of requiring a prior study of the cooling behavior of the part;
• The higher driving force supplied to the bainitic transformation allows higher volumes of lath-like bainite to form, replacing the blocky granular bainite matrix while avoiding reconstructive transformations;
• The difference in the growth mechanisms of the investigated microstructures is reflected in the state of the prior austenitic grain boundaries, the hardness and the phase/constituent boundaries;
• As seen from the experimentally and simulation-obtained cooling rates, the designed heat treatment was reproducible.
Efficacy, safety, and tolerability of combined pirfenidone and N-acetylcysteine therapy: a systematic review and meta-analysis

Background

While antifibrotic drugs significantly decrease lung function decline in idiopathic pulmonary fibrosis (IPF), there is still an unmet need to halt disease progression. Antioxidative therapy with N-acetylcysteine (NAC) is considered a potential additional therapy that can be combined with antifibrotics in some patients in clinical practice. However, data on the efficacy, tolerability, and safety of this combination are scarce. We performed a systematic review and meta-analysis to appraise the safety, tolerability, and efficacy of the combination compared to treatment with pirfenidone alone.

Methods

We systematically reviewed all the published studies with combined pirfenidone (PFD) and NAC (PFD + NAC) treatment in IPF patients. The primary outcomes referred to the decline in pulmonary function tests (PFTs) and the rates of IPF patients with side effects.

Results

In the meta-analysis, 6 studies with a total of 319 IPF patients were included. The PFD + NAC group was comparable to the PFD alone group in terms of the predicted forced vital capacity (FVC%) and predicted diffusion capacity for carbon monoxide (DLco%) from treatment start to week 24. Side effects and treatment discontinuation rates were also comparable in both groups.

Conclusion

This systematic review and meta-analysis suggests that combination with NAC does not alter the efficacy, safety, or tolerability of PFD in comparison to PFD alone in IPF patients.

Background

Idiopathic pulmonary fibrosis (IPF), the most common fibrotic interstitial lung disease (ILD), is a chronic, progressive, and irreversible disease characterized by progressive extracellular matrix accumulation leading to respiratory insufficiency. The management strategies for IPF include relieving symptoms, maintaining patient quality of life and slowing disease progression. Apart from nonpharmacological treatments such as long-term oxygen therapy or rehabilitation, antifibrotics are the gold standard and should be started as soon as possible after the diagnosis of IPF [1].

Pirfenidone (PFD), an oral pyridone with antifibrotic, anti-inflammatory and antioxidant functions, is currently approved for the treatment of IPF in most countries and recommended by the latest guidelines [1,2]. Evidence from the CAPACITY and ASCEND randomized controlled trials (RCTs) showed a significant reduction in the relative decline in forced vital capacity (FVC) over 72 weeks compared to the placebo group [3,4]. Furthermore, pooled analyses and meta-analyses suggested a lower relative risk of death in PFD-treated patients than in placebo-treated patients [5]. N-acetylcysteine (NAC), a precursor of the tripeptide glutathione (γ-glutamyl-cysteinyl-glycine), can replenish glutathione storage levels, increase antioxidant capacity and correct the imbalance of oxidants and antioxidants associated with fibroproliferation [6]. On the basis of the negative results of the PANTHER trial [7], NAC did not receive a positive recommendation as a treatment for IPF in the latest international guidelines [1,7]. A substantial number of IPF patients receive combined PFD and NAC therapy [8-10]; however, data on the efficacy, safety, and tolerability of this combination are scarce. A recent placebo-controlled trial (PANORAMA) found that the rate of skin side effects was higher in the PFD + NAC group than in the PFD alone group [11].
However, other studies, including a study with inhaled NAC, suggest a slower lung function decline and a similar side effect profile in patients receiving PFD + NAC treatment compared with patients receiving PFD alone [8,12-14]. Here, we systematically reviewed all studies with combined PFD and NAC treatment in IPF patients and performed a meta-analysis to compare the efficacy, safety, and tolerability of treatment with combined PFD and NAC vs treatment with PFD alone.

Literature search

This systematic review and meta-analysis was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and the PRISMA 2009 checklist. In addition, the meta-analysis was registered in PROSPERO (registration number: CRD42019134890). A structured literature search was performed for studies on the safety and efficacy of combined PFD and NAC treatment in IPF patients. The following databases were searched from the earliest available dates to May 2019: PubMed, EMBASE, the Cochrane Library, Ovid, ProQuest, Web of Science and Chinese databases (including the China National Knowledge Infrastructure (CNKI), Chinese VIP Information (VIP), and the Wan Fang database). In addition, "clinicaltrials.gov" and the bibliographies of previous meta-analyses on PFD or NAC were checked for relevant studies. The search terms included "idiopathic pulmonary fibrosis", "IPF", and "pulmonary fibrosis" for the disease and "pirfenidone", "Esbriet", and "acetylcysteine" for the intervention. No language or research type restriction was adopted.

Study selection

The inclusion criteria for the meta-analysis were as follows: (1) IPF patients diagnosed according to the 2011 American Thoracic Society/European Respiratory Society (ATS/ERS) guidelines [15]; (2) interventions referring to combined PFD and NAC treatment, regardless of whether administration was oral or inhaled; and (3) a control group consisting of patients who received PFD alone. All appropriate studies were included in the meta-analysis. Two reviewers (HYS and DWY) inspected all studies after removing duplicates by reviewing titles and abstracts. Relevant studies were assessed by viewing the full-text articles to select those that met the inclusion criteria mentioned above. Disagreements were resolved by consensus-based discussion.

To collect data on combined PFD + NAC therapy published in observational or retrospective studies involving patients with combined PFD, NAC, and corticosteroid/proton pump inhibitor treatment, we contacted the corresponding authors, obtained the original data regarding PFD + NAC therapy from some studies, and excluded patients who received glucocorticoids in addition to PFD + NAC. Other studies with a questionable combined therapy group, incomplete data or an inappropriate control group were excluded [16,17]. All patients included in the meta-analysis had not received glucocorticoids since the pirfenidone treatment began.
Data extraction and quality scoring

Two reviewers (HYS and XRL) extracted data from the included studies, including the following baseline characteristics: (1) first author, publication year, study type, and numbers of patients in the PFD + NAC group and PFD group; (2) changes in pulmonary function test (PFT) parameters, such as the change in the predicted forced vital capacity (ΔFVC%) and the change in the predicted diffusion capacity for carbon monoxide (ΔDLco%); and (3) the number of side effects, including skin reactions (photosensitivity and skin rash) and gastrointestinal reactions (anorexia, diarrhoea, and reduced appetite); the number of intolerable side effects leading to treatment discontinuation was also recorded. The quality of the included observational studies was estimated using the Newcastle-Ottawa Quality Assessment Scale (NOS). Two reviewers (HYS and XRL) independently assessed the quality of the included studies in the following three domains: selection, comparability, and outcome. Each study score ranges from 0 to 9 stars in the NOS scoring system [18]. The randomized controlled studies were assessed with the Cochrane Collaboration risk of bias assessment tool [19].

Data analysis

The data extracted from the selected trials were used to generate forest plots in Stata SE 13.0 software (Stata Corp, College Station, TX, USA). The risk of patients experiencing side effects and other binary parameters are expressed as odds ratios (ORs) for both the included cohort and case-control studies. The changes in the PFT parameters and other continuous parameters are presented as standardized mean differences (SMDs), since different studies adopted various PFT inclusion standards. We examined the level of heterogeneity to determine which type of analysis to use: if heterogeneity was low (I² less than 40%), we used a fixed effects model; if the I² statistic was greater than 40%, we applied a random effects model to summarize the data. Patients treated with the combination of PFD and inhaled NAC were only included in one case-control study [14]; consequently, the sensitivity analysis excluding the case-control study and the secondary analysis restricted to oral administration studies coincide and were completed in one step. Two-tailed p values less than 0.05 were considered significant.

Study characteristics and quality scores

After the removal of duplicates and selection by viewing the abstracts and titles, a full-text review of 35 articles was performed. Six [8,11-14,20] and five [8,12-14,20] studies were included in the qualitative and quantitative analyses, respectively (Fig. 1). The systematic review comprised a total of 319 patients (PFD + NAC group n = 144, PFD alone group n = 175). The studies were conducted in Europe (n = 4), Japan (n = 1) and China (n = 2). One study was a controlled clinical trial (the PANORAMA trial [11] by Behr et al.), four were cohort studies [8,12,13,20], and one was a case-control study [14]. Of note, one RCT was excluded because only the conference abstract was available [21]. A meta-analysis of observational real-world studies including 207 patients was performed. The general characteristics of the studies are shown in Table 1. The average quality score of the included observational studies based on the NOS was 7.25 for the cohort studies and 6 for the case-control study. The only RCT, conducted by Behr et al. [11], was of high quality when assessed with the Cochrane Collaboration risk of bias assessment tool.
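The fixed/random-effects decision rule described in the data analysis above can be made concrete with a short sketch. The snippet below computes Cochran's Q and I² for a set of log odds ratios and pools them with an inverse-variance fixed-effect model, switching to DerSimonian-Laird random effects when I² exceeds 40%; the per-study values are hypothetical, not the study data.

```python
import numpy as np

# Sketch of the pooling rule described above. Per-study values are
# hypothetical placeholders, not the data of this meta-analysis.
log_or = np.array([0.15, -0.10, 0.40])   # hypothetical per-study ln(OR)
var = np.array([0.20, 0.25, 0.30])       # hypothetical variances

w = 1.0 / var                            # inverse-variance (fixed) weights
mu_fe = np.sum(w * log_or) / np.sum(w)
Q = np.sum(w * (log_or - mu_fe) ** 2)    # Cochran's Q
k = len(log_or)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

if I2 <= 40.0:                           # low heterogeneity: fixed effect
    mu, se = mu_fe, np.sqrt(1.0 / np.sum(w))
else:                                    # DerSimonian-Laird random effects
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    w_re = 1.0 / (var + tau2)
    mu = np.sum(w_re * log_or) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))

print(f"I^2 = {I2:.1f}%, pooled OR = {np.exp(mu):.2f} "
      f"(95% CI {np.exp(mu - 1.96 * se):.2f}-{np.exp(mu + 1.96 * se):.2f})")
```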
The detailed quality characteristics are shown in Table S1.

Effect of combined pirfenidone and acetylcysteine therapy on lung function parameters

The predicted ΔFVC% from baseline to week 24 was available in four studies with a total of 108 patients (PFD + NAC: n = 48, PFD alone: n = 60). Because no standard deviation values were provided, the study by Sakamoto et al. [14] was excluded; therefore, only three studies [8,12,20] were included in this part of the meta-analysis. Given the moderate heterogeneity (I² = 62.5%, p = 0.069), the random effects model was applied. The results showed that PFD + NAC therapy had no additional benefit in reducing the decrease in lung function (SMD = −0.09, 95% CI −0.86 to 0.69, p = 0.295, Fig. 2a) compared to PFD alone.

Intolerable side effects leading to treatment discontinuation were reported in three studies with a total of 100 patients [8,14,20] (PFD + NAC n = 34, PFD alone n = 66). No significant heterogeneity (I² = 0%, p = 0.762) was observed among these studies. The results showed that combined PFD + NAC therapy did not increase the risk of intolerable side effects (OR = 2.85, 95% CI 0.84-9.59, p = 0.092, Fig. 3b) in comparison with PFD therapy.

Qualitative analysis and sensitivity analysis

Funnel plots and Egger's test could not be used to check for the existence of publication bias because our meta-analysis included fewer than 10 studies [19]. In addition, the secondary meta-analysis with only oral NAC studies and the sensitivity analysis excluding the case-control study resulted in p values of 0.249, 0.611 and 0.955 for gastrointestinal, skin and intolerable side effects, respectively; the forest plots composed of only oral NAC studies can be found in the Supplementary Material (Figures S1, S2 and S3). In the ensuing quantitative analysis comparing the results of the meta-analysis and Behr's RCT (Table 2), the safety, tolerability, and efficacy outcomes [11] were similar in the PFD + NAC treatment group and the PFD monotherapy group, except for a significantly higher rate of skin adverse effects in the RCT (p values in meta-analysis vs RCT: 0.097/0.038).

Discussion

The present meta-analysis did not show superior efficacy of combined PFD plus NAC therapy in slowing lung function decline in IPF, and it showed safety and tolerability comparable to PFD alone. The antifibrotic drug pirfenidone can significantly reduce lung function decline in IPF patients; therefore, it is recommended in international guidelines as the treatment of choice [1]. However, patients still present with gradually worsening symptoms and a constant loss of quality of life [22], and the outcome is comparable to that of many malignant diseases [23]. There is still an unmet need to halt disease progression. Antioxidative therapy with NAC is discussed as a potential additional therapy in some patients in clinical practice. The randomized placebo-controlled trial IFIGENIA investigated NAC added to the standard treatment of prednisone plus azathioprine in 182 mild to moderate IPF patients over 48 weeks [24]. Combined therapy with high-dose NAC (1800 mg/d), prednisone and azathioprine significantly preserved the absolute vital capacity (VC) and DLco compared to the combination of prednisone and azathioprine alone [24].
However, the results of the PANTHER-IPF trial [25], which also enrolled patients with mild to moderate IPF, showed no significant difference in the decline in FVC and showed a higher rate of serious adverse effects [25], and especially a higher mortality rate, in patients receiving triple therapy than in patients receiving the placebo. While another report of the PANTHER trial also demonstrated no benefit of NAC over the placebo [7], a post hoc analysis of the PANTHER study [26] suggested that the genotypic background of IPF patients may have an impact on the effects of NAC treatment. MUC5B and TOLLIP SNPs were retrospectively investigated in a subgroup of patients in the PANTHER trial. Patients with the rs3750920 (TOLLIP) TT genotype (25% of all patients) showed favourable outcomes regarding a reduction in the risk of the composite endpoint, defined as death, transplant, hospitalization or ≥10% FVC decline, while patients with the CC genotype had a nonsignificant increase in the composite physiological index (CPI) [26].

Regarding lung function decline (especially FVC), our meta-analysis demonstrated comparable outcomes between the PFD + NAC group and the PFD monotherapy group. Considering that the majority of studies included in this meta-analysis enrolled Caucasian patients with mild to moderate IPF (predicted FVC from 50 to 90%), the heterogeneity among these studies may be related to ethnicity, because the studies by Ma and Sakamoto [12,14], which showed favourable efficacy results for the combination treatment, enrolled Asian patients. In addition, a speculative explanation for this phenomenon could be that the proportion of patients with the TOLLIP TT genotype in the treatment groups differed among the studies, but these data were not available [7,26]. Furthermore, direct antioxidant and anti-inflammatory effects on the alveoli by inhaled instead of oral NAC treatment may also contribute to the favourable outcomes in Sakamoto's study [14].

Fig. 3 Forest plot of the safety profile (outcome measure: at least one side effect, (a)) and tolerability profile (outcome measure: intolerable side effects leading to treatment discontinuation, (b)) between the combined pirfenidone and acetylcysteine group and the pirfenidone alone group

There are some considerations regarding the safety and tolerability of PFD and NAC treatment in IPF patients. Gastrointestinal side effects (diarrhoea, anorexia, etc.) and skin side effects (photosensitivity and skin rash) are the most common adverse effects experienced by IPF patients receiving PFD treatment [27,28]. Compared to the findings from the PANORAMA trial, our meta-analysis showed a similar rate of side effects, except for a lower rate of skin side effects. The exact reason for this difference is unclear but may be related to differences in the patients' habits, such as the time spent outdoors or the use of skin protection creams [11]. Our meta-analysis has several limitations. First is the small number of included studies. Second, the meta-analysis included only one RCT; the rest of the studies were observational studies and real-world experiences. Third, the lung function decline assessment was partial, because scarce data were available for the 6MWD and blood gas analysis; therefore, we cannot exclude improvements in other outcome measures due to treatment with combined PFD + NAC.
Fourth, the random effects model, which is generally used to analyse the overall effect when moderate heterogeneity exists (I² > 40%), was applied for the analysis of patients experiencing at least one side effect and for assessing differences in the FVC% decline between groups, leading to wider confidence intervals and a more conservative conclusion.

Conclusions

In conclusion, this systematic review and meta-analysis suggests that the combination of PFD and NAC does not alter the efficacy, safety, or tolerability of PFD in comparison to PFD alone in the IPF study population. High-quality, multi-centre RCTs and large-sample real-world observational studies evaluating the safety, tolerability, and efficacy of PFD + NAC therapy vs PFD monotherapy and investigating the genetic background of patients are needed to validate these results.

Additional file 1: Table S1. Quality scores of the observational studies in the meta-analysis based on the NOS scoring system. Figure S1. Forest plot of the efficacy profile (outcomes: the predicted decline in FVC% (Figure S1-a) and DLco% (Figure S1-b)) between the combined pirfenidone and acetylcysteine group and the pirfenidone alone group, with only oral NAC studies. Abbreviations: FVC: forced vital capacity, PFD: pirfenidone, NAC: N-acetylcysteine. Figure S2. Forest plot of the safety profile (outcome measure: at least one side effect, Figure S2-a) and tolerability profile (outcome measure: intolerable side effects leading to treatment discontinuation, Figure S2-b) between the combined pirfenidone and acetylcysteine group and the pirfenidone alone group, with only oral NAC studies. Abbreviations: PFD: pirfenidone, NAC: N-acetylcysteine. Figure S3. Forest plot of the specific safety profile (outcome measures: gastrointestinal side effects (Figure S3-a) and skin side effects (Figure S3-b)) between the combined pirfenidone and acetylcysteine group and the pirfenidone alone group, with only oral NAC studies. Abbreviations: PFD: pirfenidone, NAC: N-acetylcysteine.
Noncommutative Unification of General Relativity with Quantum Mechanics and Canonical Gravity Quantization

The groupoid approach to the noncommutative unification of general relativity with quantum mechanics is compared with the canonical gravity quantization. It is shown that by restricting the corresponding noncommutative algebra to its (commutative) subalgebra, which determines the space-time slicing, an algebraic counterpart of superspace (the space of 3-metrics) can be obtained. It turns out that when this space-time slicing emerges the universe is already in its commutative regime. We explore the consequences of this result.

Introduction

In recent years a new approach has appeared to the quantization of gravity, one based on noncommutative geometry. The idea is to make space-time a noncommutative space (which is essentially nonlocal), with the hope that in this way at least some major obstacles to the gravity quantization could eventually be overcome. There are many attempts in this direction [1]. In [2] we have followed Connes [3, p. 99] who, in order to make a space X noncommutative, defines a noncommutative algebra not directly on X but rather on a groupoid over X. This approach, which has been further developed in the series of works [4], will be called the groupoid approach to the unification of general relativity and quantum mechanics. The aim of the present paper is to compare the groupoid approach with the canonical gravity quantization [5], which can be thought of as a "reference point" for other methods of quantizing gravity. The groupoid approach is "more radical" in the sense that in this approach the noncommutative counterpart of the differential structure is quantized, whereas in the canonical method three-metrics play the role of "quantization variables". We show that in spite of this difference the superspace formulation of general relativity (which could be regarded as a prerequisite of the canonical quantization) can be obtained from the groupoid approach if the corresponding noncommutative algebra is restricted to its commutative subalgebra which determines a suitable slicing of space-time. Consequently, in the groupoid approach, when the space-time slicing appears, gravity is already in its "classical (non-quantum) regime". However, this conclusion could follow from a simplification inherent in our model, and could eventually be avoided if one considers a more general module of the noncommutative counterpart of vector fields (the module of derivations of a given algebra). We organize our material in the following way. To make the paper self-contained and to fix our notation, in Section 2 we give a summary of the groupoid approach to the noncommutative unification of general relativity with quantum mechanics. In Section 3, we define the noncommutative algebraic counterpart of the standard concept of superspace (the space of three-metrics). The comparison of the canonical gravity quantization with the groupoid approach is done in Section 4, and some conclusions and comments are collected in Section 5.
Basic ideas of the model

The main idea of the groupoid approach to the unification of general relativity and quantum mechanics is to forget, in the very beginning, the concept of space-time and start with the abstract space G = E × Γ, where E is the total space of a principal fibre bundle and Γ its structural group, such that the orbits of the action of Γ on E form a smooth manifold M interpreted as space-time (this construction can eventually be generalized to the category of differential spaces of constant dimension, see [6]). We endow G with the groupoid structure. In the present paper, for the sake of concreteness, we shall assume that E is the total space of the frame bundle over a space-time manifold M, and Γ the group SO(3,1). Of course, M = (G/SO(3,1))/SO(3,1). Then one defines the algebra as the (intrinsic) direct sum

A = A_const ⊕ C∞_c(G, C),

where A_const = pr*(C∞(M, C)), and C∞_c(G, C) is the family of smooth, compactly supported, complex valued functions on G. The multiplication in the algebra A is defined in the following way: (1) if a, b ∈ C∞_c(G, C), their product is the convolution

(a ∗ b)(γ) = ∫_{G_p} a(γ₁) b(γ₂) dγ₁,  γ = γ₁ ∘ γ₂,

with γ, γ₁, γ₂ ∈ G_p, G_p being the fiber in G over p ∈ E (equivalently, γ₂ = γ₁⁻¹ ∘ γ); integration is with respect to the Haar measure; (2) if a, b ∈ A_const, they are multiplied in the usual way, i.e., a ∗ b = a · b; (3) if a ∈ A_const and b ∈ C∞_c(G, C), they are multiplied pointwise. A is evidently a noncommutative algebra. We also define the involution of a ∈ A by a*(γ) = a(γ⁻¹), where γ = (p, g), p ∈ E, g ∈ Γ. (One should notice that we have corrected the definition of the algebra A as compared with our previous works (see [2,4]). This correction does not change our previous results.) Let us also define the subalgebra A_proj = π*_M(C∞(M, C)) ⊂ A_const. It plays an important role in our model, since by restricting the algebra A to the subalgebra A_proj we recover the space-time manifold of general relativity.

Let us consider the set DerA of all derivations of the algebra A. DerA is a Z(A)-module, where Z(A) denotes the center of A, and can be regarded as a noncommutative counterpart of vector fields. In the following, we shall consider the noncommutative differential geometry as defined by the Z(A)-submodule V = V_E ⊕ V_Γ of DerA, where V_E and V_Γ consist of the derivations of A parallel to E and Γ, respectively (this is only a simplifying assumption which in the general case should be relaxed). First, we define a metric on the Z(A)-submodule V as a Z(A)-bilinear, non-degenerate, symmetric mapping g : V × V → A, and for our model we choose the following metric adapted to the product structure of V:

g = pr*_E g_E + pr*_Γ g_Γ,

where g_E and g_Γ are metrics on E and Γ, respectively, and pr_E and pr_Γ are the obvious projections. It turns out that the "vertical component" pr*_Γ g_Γ of the metric g is essentially unique (this is true for a broad class of derivation-based noncommutative differential calculi, see [7]), whereas the "parallel component" pr*_E g_E of g is a lifting of the Lorentz metric in space-time M (see also [8]). Now, with the help of the Koszul formula, we define the linear connection, and then the curvature and the usual Ricci operator R : V → V, which is the counterpart of the Ricci tensor with one index up and one index down (for details see [2]). In this way, we have all the quantities needed to write the noncommutative Einstein equation

G(u) = 0, u ∈ V, (2)

where G : V → V is the Einstein operator constructed from R (derivations v ∈ V_Γ satisfy it trivially, see [8]). Let us consider the representation π_q of the algebra A in the Hilbert space H = L²(G_q),

π_q : A → B(H),

where B(H) denotes the algebra of bounded operators on H and G_q is the fiber of G over q ∈ E, given by the formula

(π_q(a)ξ)(γ) = ∫_{G_q} a(γ ∘ γ₁⁻¹) ξ(γ₁) dγ₁,  ξ ∈ L²(G_q).

The integral is taken with respect to the Haar measure.
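The convolution product and the regular representation defined above can be made concrete in a finite toy model: below, the structure group is replaced by the finite group S3 and the Haar integral by a sum over a single fiber. This is a minimal sketch illustrating only the algebraic identities (noncommutativity, homomorphism and involution properties of π); the function names compose, inverse, convolve and pi are ours, and the model is not a claim about the physics.

```python
# Toy model: fiberwise convolution algebra and its regular representation,
# with S3 standing in for SO(3,1) and a sum standing in for the Haar integral.
import itertools
import random
import numpy as np

def compose(p, q):
    """Permutation composition (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(itertools.permutations(range(3)))      # the 6 elements of S3
idx = {g: i for i, g in enumerate(G)}

def convolve(a, b):
    """(a*b)(γ) = Σ_{γ1} a(γ1) b(γ1^{-1}∘γ)."""
    return {g: sum(a[g1] * b[compose(inverse(g1), g)] for g1 in G) for g in G}

def pi(a):
    """Regular representation: matrix M[γ, γ1] = a(γ∘γ1^{-1}) acting on l²(G)."""
    M = np.zeros((len(G), len(G)))
    for g in G:
        for g1 in G:
            M[idx[g], idx[g1]] = a[compose(g, inverse(g1))]
    return M

random.seed(0)
a = {g: random.random() for g in G}
b = {g: random.random() for g in G}
ab, ba = convolve(a, b), convolve(b, a)

print(any(abs(ab[g] - ba[g]) > 1e-12 for g in G))   # True: a*b ≠ b*a
print(np.allclose(pi(ab), pi(a) @ pi(b)))           # True: π(a*b) = π(a)π(b)
a_star = {g: a[inverse(g)] for g in G}              # involution for real-valued a
print(np.allclose(pi(a_star), pi(a).T))             # True: π(a*) = π(a)†
```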
The completion of A with respect to the norm ‖a‖ = sup_{q∈E} ‖π_q(a)‖ is a C*-algebra (see [3, p. 102]). We shall denote this algebra by E. We assume (as a separate axiom) that the dynamics of a quantum gravitational system is described by the following equation:

i π_q(v(a)) = [F_v, π_q(a)], (4)

for every q ∈ E, where v ∈ kerG, and (F_v)_{v∈kerG} is a one-parameter family of operators on H. We shall also assume that [F_v, π_q(a)] is a bounded operator. The fact that v ∈ kerG makes of eqs. (2) and (4) a "noncommutative dynamical system". We could also say that the noncommutative Einstein equation (2) plays the role of a "boundary condition" for the quantum dynamical equation (4). To solve this system means to find the set E_G of those a ∈ A which satisfy eq. (4) for every q ∈ E and every v ∈ kerG. It can be easily verified that it is a subalgebra of E. Let Ē_G be the smallest closed involutive subalgebra of the algebra E containing E_G; Ē_G is said to be generated by E_G. Since E is a C*-algebra and every closed involutive subalgebra of a C*-algebra is a C*-algebra (see [10, Sec. 1.3.3]), Ē_G is also a C*-algebra; it will be called the Einstein C*-algebra or simply the Einstein algebra, and the pair (Ē_G, kerG) the Einstein differential algebra. Now, the idea is to perform quantization with the help of the usual C*-algebraic method (see, for instance, [11], [12, chapter 9]) with the Einstein algebra Ē_G as our basic C*-algebra. According to this method, a quantum gravitational system is represented by Ē_G, and its observables by Hermitian elements of Ē_G. If a is a Hermitian element of Ē_G, and φ a state on Ē_G, then φ(a) is the expectation value of the observable a when the system is in the state φ. It can be shown that this gravity quantization scheme correctly reproduces the usual general relativity (on space-time) and quantum mechanics (in the Heisenberg picture) when the algebra A is restricted to its center Z(A) (or to some subset of Z(A)) (see [2,8]).

Algebraic version of superspace

The quotient space S(S) = Riem(S)/Diff(S), where Riem(S) is the space of all Riemannian metrics on a three-dimensional manifold S and Diff(S) the group of its diffeomorphisms, is called superspace. Its global properties were studied by Fischer [13] (see also [14]). In a particular coordinate system, any metric h ∈ Riem(S) can be represented as a covariant metric tensor h_ij(x) or as a contravariant metric tensor h^ij(x), x ∈ S. Then, as shown by DeWitt [15], there exists a metric on S(S), called the Wheeler-DeWitt metric, which assumes the form

G^{ijkl} = (1/2) h^{1/2} (h^{ik} h^{jl} + h^{il} h^{jk} − 2 h^{ij} h^{kl}).

It has the signature (− + + + + +) at each point of the 3-geometry. Let us now consider a slicing (S_t)_{t∈T} of M such that each S_t is diffeomorphic to S, and let A_S be the subalgebra of A consisting of functions constant on the sets pr⁻¹(S_t), t ∈ T (recall that p, q ∈ E belong to the same fibre of pr if there is g ∈ Γ such that q = pg). Now, it can be easily seen that A_S ⊂ A_proj ⊂ Z(A). Indeed, pr⁻¹(x) ⊂ pr⁻¹(S_t) for every x ∈ S_t. Consequently, the differential algebra (A_S, V_S), with V_S the corresponding submodule of derivations, is commutative. We denote the set of all metrics in the module V_S by Riem(A_S). As an analogue of Diff(S) we should take the set IsoA_S of all isomorphisms of A_S into itself. Any isomorphism f : A_S → A_S induces the mapping f̄ : V_S → V_S (which is also an isomorphism) given by f̄(v)(α) = f(v(f⁻¹(α))), where v ∈ V_S, α ∈ A_S. We thus have the action of IsoA_S on Riem(A_S), and therefore one has the algebraic counterpart of superspace,

S(A) = Riem(A_S)/IsoA_S.

Noncommutative gravity and canonical quantization

We now briefly recollect the canonical method of quantizing gravity to compare it with our approach. Any space-time metric can be locally written in the form

ds² = −(N² − N_i N^i) dt² + 2 N_i dx^i dt + h_ij dx^i dx^j,

where h_ij, i, j = 1, 2, 3, is the metric tensor on the spacelike hypersurface S, t = const. N is called the lapse function; it measures the proper time separation between the hypersurfaces t = const. The so-called shift vector N^i measures the deviation of the curves x^i = const from the normal to S (in the following we use units such that c = ℏ = 1).
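Before the extrinsic curvature and momenta are recalled, a quick numerical aside verifies the signature claim made above for the Wheeler-DeWitt metric. At the flat fiducial metric h_ij = δ_ij, the quadratic form of G^{ijkl} on symmetric 3×3 matrices reduces to G(K, L) = tr(KL) − tr(K)tr(L); the sketch below (our own construction, not DeWitt's) counts the eigenvalue signs of its Gram matrix, and the inertia of the form is independent of the chosen basis.

```python
# Numerical check of the DeWitt supermetric signature (− + + + + +) at h = δ.
import numpy as np

def sym_basis():
    """Six elementary symmetric 3x3 matrices spanning Sym(3)."""
    basis = []
    for a in range(3):
        for b in range(a, 3):
            E = np.zeros((3, 3))
            E[a, b] = E[b, a] = 1.0
            basis.append(E)
    return basis

B = sym_basis()
# Gram matrix of the quadratic form G(K, L) = tr(KL) − tr(K)tr(L).
gram = np.array([[np.trace(K @ L) - np.trace(K) * np.trace(L)
                  for L in B] for K in B])
eig = np.linalg.eigvalsh(gram)
print(np.sum(eig < 0), np.sum(eig > 0))   # prints "1 5": one negative direction
```

The single negative direction is the trace (conformal) mode, in agreement with the signature (− + + + + +) quoted above.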
The extrinsic curvature of S can be written as

K_ij = (1/2N)(N_{i|j} + N_{j|i} − ∂h_ij/∂t),

where the stroke "|" denotes covariant differentiation with respect to the 3-metric h_ij. The momentum canonically conjugated to h_ij is given by

π^{ij} = −h^{1/2}(K^{ij} − h^{ij} K).

The classical Hamiltonian is

H = ∫ d³x (N H_0 + N_i H^i),

with

H_0 = G_{ijkl} π^{ij} π^{kl} − h^{1/2}(³R − 2Λ),  H^i = −2π^{ij}_{|j},

where G_{ijkl} = (1/2) h^{−1/2} (h_{ik} h_{jl} + h_{il} h_{jk} − h_{ij} h_{kl}) is the inverse DeWitt metric, ³R is the scalar curvature of h_ij, and Λ the cosmological constant. By making the standard substitution h_ij → h_ij, π^{ij} → −i δ/δh_ij (δ is the functional derivative), one obtains the counterpart of the Schrödinger equation

Ĥ Ψ[h_ij] = 0.

The Ĥ_0-part of this equation is the celebrated Wheeler-DeWitt equation. This is the fundamental equation for the "wave function of the universe" Ψ[h_ij], which is a functional of the 3-metric (we do not take into account any matter fields). We should emphasize that in the Wheeler-DeWitt approach it is the 3-metric that is quantized (and the momentum canonically conjugated to it), whereas in our approach the "quantization variables" are elements of the Einstein C*-algebra Ē_G. However, we can ask the question: what would happen to the equations of our theory (eqs. (2) and (4)) if we restrict Ē_G to (Ē_G)_S, i.e. if we go to the "superspace limit"? Since (Ē_G)_S ⊂ Z(Ē_G), eq. (4) reduces to the trivial identity (0 ≡ 0) and hence becomes insignificant. We are left with eq. (2) which, in this case, is reduced to the usual Einstein equations. In this way, gravity decouples from quantum mechanics. This is an important conclusion: if we go to the superspace limit, quantum gravity effects become negligible. In this process, the slicing of space-time emerges, and consequently the concepts of time and instantaneous spaces become meaningful. This means that we are well beyond the Planck threshold, in the non-quantum gravity regime (see [16], where the emergence of time from the noncommutative era has been studied). As is well known, the Wheeler-DeWitt equation corresponds to the stationary Schrödinger equation. Eq. (4) plays a similar role in our approach since, for weak gravitational fields, it reduces to the Schrödinger equation (in the Heisenberg picture of quantum mechanics) [8]. However, one should not forget that the Wheeler-DeWitt equation is an equation for three-metrics, whereas eq. (4) is an equation for elements of the algebra Ē_G.

Concluding remarks

We have demonstrated that if, in the groupoid approach to the unification of general relativity and quantum mechanics proposed in [2], the algebra A = A_proj ⊕ C∞_c(G, C) is restricted to its subalgebra A_S, consisting of functions constant on pr⁻¹(S_t), t ∈ T, where (S_t)_{t∈T} is a time slicing of space-time M, one obtains the superspace formulation of general relativity. The important point is that our approach shows that at the level where the time slicing of space-time appears, quantum gravity effects are already insignificant (i.e., gravity is too weak to exhibit quantum effects; see Section 4 above). This seems reasonable, since in the quantum gravity regime we would expect some kind of "foamy mixture" of space and time, which is excluded by a well defined time slicing of space-time. This conclusion could be the consequence of a simplifying assumption incorporated into our model, namely that our noncommutative differential algebra is based on the submodules of derivations parallel to E and Γ, respectively. In this model, "geometry parallel to E" is, in principle, responsible for gravity effects and "geometry parallel to Γ" is responsible for quantum effects.
The fact that we have neglected "mixed terms" (those coming both from V_E and V_Γ) means that in our model gravity is "weakly coupled" to quantum effects. Consequently, if we restrict the algebra A to its subalgebra A_S (this restriction essentially means that the slicing of space-time enters the scene), all terms parallel to Γ are automatically switched off. Such terms would be responsible for a "fluctuating slicing" of space-time, which could be enough for an approximate validity of the canonical quantization of gravity. The decisive step in checking this hypothesis would be to construct a counterpart of our model based on a more general module of derivations. An analogous situation occurs in the canonical quantization approach. One begins with the sliced classical space-time (with no quantum effects). Then one performs the canonical quantization, as a result of which 3-geometries begin to fluctuate, and the sliced regime of space-time becomes "fuzzy". As is well known, when Einstein's equations are formulated as a constrained Hamiltonian system, the Hamiltonian constraint and the equations of motion determine the evolution of three-metrics in superspace, and the momentum constraint implies that the Hamiltonian flow is orthogonal (in the Wheeler-DeWitt metric) to the orbits of the diffeomorphism group (although these two directions need not be disjoint [14]). Since in our algebraic approach the submodule V_S corresponds to the family of vector fields on the superspace S(A), the above mentioned regularities should be reflected in the structure of this submodule.
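As a final numerical aside on the canonical variables recalled in Section 4, the sketch below evaluates the conjugate momentum π^{ij} = −h^{1/2}(K^{ij} − h^{ij}K) for a sample 3-metric and extrinsic curvature. Sign and normalization conventions (e.g., the 16πG factor) vary between references, so this follows only the form quoted above, with arbitrary illustrative numbers.

```python
# Sample evaluation of the ADM momentum π^{ij} = −√h (K^{ij} − h^{ij} K).
import numpy as np

h = np.diag([1.0, 2.0, 4.0])          # sample 3-metric h_ij
K = np.array([[0.1, 0.0, 0.0],        # sample extrinsic curvature K_ij
              [0.0, 0.2, 0.1],
              [0.0, 0.1, 0.3]])

h_inv = np.linalg.inv(h)
K_up = h_inv @ K @ h_inv              # raise both indices: K^{ij} = h^{ik} h^{jl} K_kl
trK = np.trace(h_inv @ K)             # K = h^{ij} K_ij
pi_up = -np.sqrt(np.linalg.det(h)) * (K_up - h_inv * trK)
print(pi_up)                          # the momentum conjugate to h_ij
```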
Electrostatically tunable MOEMS waveguide Bragg grating-based DWDM optical filter

Abstract. An electrostatically actuated MEMS cantilever beam-based waveguide Bragg grating tunable optical filter has been designed and simulated. The tunable filter is obtained by shifting the reflected wavelength of the waveguide Bragg grating located on the electrostatically actuated cantilever beam. An approach to increasing the electrostatic actuation of the beam by placing an electrode underneath the beam is used, and a large wavelength tuning range for the optical filter is achieved. Dimensions of the device are chosen such that the full-width-half-maximum is 0.75 nm, thus capable of filtering adjacent channels of the dense wavelength division multiplexing (DWDM) network. The filter has a tuning range of 10.65 nm (1552.52 to 1563.17 nm), providing add/drop functionality for 14 adjacent DWDM channels.

Introduction

When the micromachining and electronic properties of silicon are combined with a buried layer of silicon oxide (silicon-on-insulator, SOI), a high refractive index contrast, optically transparent medium is obtained for wavelength division multiplexing (WDM) applications. The integration of optical waveguides on a silicon substrate permits a wide variety of communication devices. [1][2][3][4][5] In dense WDM (DWDM) networks, dynamic optical devices such as tunable optical filters are required to carry out wavelength (channel) selection. Tuning range and extinction ratio are some of the important parameters to consider while designing a tunable filter. Different optical structures have been proposed to achieve tunable filters. Silicon-on-insulator bandpass filters in a Mach-Zehnder configuration with one arm loaded with a ring resonator have been demonstrated 6 where the filter bandwidth can be tuned from 10% to 90% of the free-spectral range (FSR) at an acceptable off-band rejection. However, the footprint of Mach-Zehnder interferometer (MZI)-based structures is larger when compared to ring resonators and gratings. Compact ring resonators can be used as spectral filters, and a very narrow full-width half-maximum of 200 pm has been reported using ring resonator filters, but they suffer from a low tunable range through effective index variation. 7,8 Waveguide gratings as integrated filters have a footprint smaller than an MZI but larger than a ring resonator. However, they are capable of being widely tuned while having the frequency selectivity necessary for a DWDM network. Properties of the optical filters that are commonly modified by various tuning methods are bandwidth, peak amplitude, and FSR, apart from peak position. Integrated tunable devices designed to tune filter bandwidth, peak amplitude, or FSR are wavelength specific and hence cannot be used to cover an entire spectrum of the optical network. Tuning in integrated optical filters can be achieved by optoelectronic and MOEMS-based techniques. The optoelectronic techniques involve methods such as ion implantation/carrier injection 9 or electro-optic 10 effect-based tuning. Due to the inherent electronic nature of the tuning, these techniques have high modulation speeds. However, they suffer from a limited range of modulation or tunability. MOEMS techniques include thermo-optic 11 and acousto-optic 12 effects, apart from MEMS-based electrostatic actuation [13][14][15] techniques. A wavelength shift of 18 nm using the thermo-optic effect 16 in a Bragg grating-based silicon-on-insulator (SOI) rib waveguide loaded with a heater has been reported, but it has a broader (4 nm) −10 dB bandwidth.
This broader bandwidth cannot filter more than four channels in DWDM filter applications. The acousto-optic effect is based on a change of the refractive index of a medium due to the presence of sound waves in it. Tuning techniques based on the acousto-optic effect are limited by the acoustic source design and the speed of the acoustic wave. However, when combined with other components such as Bragg gratings and standard diffraction mirrors, Bitauld et al. 17 have observed a very high wavelength resolution. Using MEMS, tuning is easy and monolithic integration is possible. 18 Electrostatic actuation-based MEMS tuning is achieved through micromachined structures such as beams, bridges, and diaphragms, and by considering elasto-optic, stress-optic, and strain-optic effects. In MOEMS tuning techniques, since mechanical perturbation is involved, the tuning speed is lower compared to optoelectronic ones. However, a large tuning range is obtained with MOEMS techniques. Further, multiple optical devices can be tuned simultaneously, with the actuation levels depending on the physical configuration of the optical devices with respect to the MOEMS structure. In this work, we present the design of a filter with tunable peak position based on Bragg gratings with MEMS as the tuning control mechanism. This filter can be used to cover 30% of the conventional band of the optical spectrum, thus providing a very large tuning range. A grating is placed on a cantilever beam such that perturbation of the beam due to electrostatic actuation causes strain along the length of the waveguide grating. Based on the work presented in Ref. 19, an integrated guided-wave MOEMS filter is designed. The above technique enables a larger tuning range of the filter. In the rest state, the designed grating has a uniform period, and under mechanical perturbation the period changes. The change in period is kept uniform by appropriate positioning of the waveguide grating on the beam. Due to the uniform change in period, the Bragg wavelength of the grating changes. With the chosen dimensions of the structure, we have obtained a tuning range of 10.65 nm with a full-width-half-maximum (FWHM) of 0.75 nm, which can be used to filter 14 adjacent channels in the C-band.

Figure 1(a) shows a schematic of the proposed electrostatically actuated MEMS cantilever beam-based waveguide Bragg grating tunable filter for the C-band optical DWDM network. The cantilever beam is composed of a metal layer at the bottom, over which is a silicon dioxide beam with an amorphous silicon waveguide and waveguide grating on top. The metal layer functions as the electrode for electrostatic actuation. There is an air gap between the bottom of the beam and the ground plane on the substrate. The cross-sectional view of the beam with the rib waveguide grating is shown in Fig. 1(b). When the beam deflects due to electrostatic actuation, the grating period changes, giving rise to a shift in the Bragg wavelength. As the Bragg grating acts like a filter, tunability can be achieved by varying the period.

Proposed Fabrication Procedure

Fabrication of MEMS Bragg reflectors presented in the literature 18,20 generally refers to bulk micromachining of the substrate to realize the MEMS device. In our design, we propose the adaptation of fabrication procedures developed for plasmonic optical waveguides 21 to realize large actuation in the MEMS electrostatically actuated beam. The proposed fabrication steps are given in Fig. 2.
The proposed structure can be realized by surface micromachining techniques, starting with a silicon wafer. 19 The silicon dioxide layer is grown by PECVD. Titanium is deposited over the oxide, which forms the bottom electrode. Etching and patterning of the silicon dioxide and spin coating of the photoresist are carried out. Lithography of the photoresist as a sacrificial layer is done. To achieve adhesion of the metal with the oxide layer, titanium is deposited over the oxide layer by sputtering. Following this, gold is electroplated to form the top electrode. Silicon dioxide and amorphous silicon layers can then be deposited consecutively by PECVD on the metal layer. 21 E-beam lithography to define the beam, waveguide, and grating is carried out. The sacrificial photoresist is to be carefully chosen such that it is insensitive to the developing and resist removal steps of E-beam lithography. This can be achieved by the use of an alkaline developer and corresponding photoresist for E-beam lithography, and by using SU-8 2000 as the sacrificial photoresist, which can be stripped using oxidized acid solutions but not with conventional solvent-based resist strippers. The sacrificial photoresist can be stripped out to release the beam.

Design

For electrostatic actuation of MEMS beams, the ground electrode is chosen as gold. Generally, the second electrode is the topmost metal layer on the MEMS beam, and its dimension determines the electric field distribution along it. The air gap and the material composition of the beam, apart from its dimensions, determine the pull-in voltage and thus the usable actuation range of the MEMS beam. As the gap between the electrodes decreases and the medium present between them has a lower dielectric constant, a better mechanical response is obtained for an applied electric potential. Thus, the electrode present underneath the MEMS beam, as shown in Fig. 1(b), maximizes the strain obtained in it. This allows the beam top surface to be used for other optical waveguides and devices. This configuration is free from plasmonic and electro-optic effects even for MEMS beams of nominal thickness. Figure 3 shows the dimensions of the proposed device. The composite MEMS cantilever beam has length 1400 μm, width 100 μm, and thickness 5.1 μm. A silicon rib waveguide of cross-section 1.5 × 1.6 μm² runs along the length of the cantilever beam, as shown in Fig. 3. A waveguide Bragg grating of 600 μm length is positioned at a distance of 750 μm from the foot of the cantilever beam. The following sections detail the mechanical, optical, and optomechanical design considerations and optimization for the proposed device.

Mechanical Design

The tuning speed of the optical filter depends on the mechanical resonance of the MEMS structure, as the technique used for tuning is a change in the stress/strain of the MEMS beam in response to an applied electric potential. The analysis of the design of beam parameters such as the suspended beam length, the thickness of the beam, and the gap between the ground and the bottom of the beam is shown in Figs. 4-7. Figure 4 shows the variation of the fundamental frequency with the length of the beam. The variation of the resonant frequency with the length and thickness of the beam is given in Eq. (1),

f₁ = (λ₁²/2π)(t/l²)√(E/12ρ),  λ₁ ≈ 1.875, (1)

where l is the length of the beam, t is the thickness, E is the elastic modulus, and ρ is the density. From Eq. (1), we can see that the resonant frequency is inversely proportional to the square of the length. For a narrow FWHM, a longer grating is required. For appropriately placing the 600 μm grating, we have chosen a beam length of 1400 μm, which has a fundamental frequency of 2.5 kHz under no actuation.
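A quick sanity check of Eq. (1) is sketched below, treating the composite beam as if it were uniform fused silica (E ≈ 70 GPa, ρ ≈ 2200 kg/m³) — an assumption for illustration only, since the paper's analysis uses thickness-weighted properties of the actual layer stack.

```python
# Sanity check of Eq. (1) with assumed fused-silica-like beam properties.
import math

E, rho = 70e9, 2200.0        # assumed effective modulus (Pa) and density (kg/m^3)
l, t = 1400e-6, 5.1e-6       # beam length and thickness from the design (m)
f1 = (1.875**2 / (2 * math.pi)) * (t / l**2) * math.sqrt(E / (12 * rho))
print(f"f1 ≈ {f1:.0f} Hz")   # ≈ 2.4 kHz, close to the quoted 2.5 kHz
```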
The width of the beam is taken as 100 μm. Figure 5 shows the fundamental frequency variation with the thickness of the beam. It can be seen that an increase in thickness increases the fundamental frequency, as evident from Eq. (1), but the strain decreases with increasing thickness, as shown in Fig. 6. For this device configuration, the thickness is taken as 5.1 μm. At a given voltage, the strain reduces with an increase in the gap, as shown in Fig. 7. The strain variation with increasing actuation voltage for a chosen gap value of 14 μm is shown in Fig. 8. The pull-in voltage is estimated using the material properties in Table 1, where the Young's modulus and Poisson ratio are taken as the weighted average values proportional to the individual material thicknesses:

V_pull-in = √(8 B g³ / (27 ε₀ w l)), (2)

where B is the effective spring constant of the composite cantilever,

B = (2/3) Ê w (t/l)³. (3)

If the beam has a narrow width relative to its thickness, Ê is the Young's modulus E; otherwise E and Ê, the plate modulus, are related as 22

Ê = E / (1 − ν²). (4)

Therefore, as a trade-off, a cantilever silicon-based beam of 1400 μm length, 100 μm width, and 5.1 μm thickness with an air gap of 14 μm is considered here. The fundamental frequency of the device varies with the application of an external static voltage, due to a build-up of strain in the structure under steady-state conditions, as shown in Fig. 9.

Optical Design

In this work, we have chosen a rib waveguide with a 1.5 μm thickness and 1.6 μm width. The refractive indices of the silicon dioxide and a-silicon are taken as 1.45 and 3.545 near the C-band, respectively, as shown in Fig. 10. This rib waveguide is designed to operate in a single mode at 1552.52 nm with an effective refractive index of 3.49, as evident from the mode profile shown in Fig. 11. The waveguide Bragg grating consists of periodic corrugations along the length of the rib waveguide with a period of 222.3 nm and has a Bragg wavelength of 1552.52 nm. The length of the grating is 600 μm, and the modulation depth of the grating is 45 nm, to obtain an FWHM lower than 0.8 nm, which is desirable for a C-band DWDM optical communication network with a channel spacing of 100 GHz. Using finite-difference time-domain (FDTD) simulation, the waveguide grating is found to have a propagation loss of 0.15 dB/cm.

Optomechanical Design

The waveguide Bragg grating is positioned along the length of the cantilever beam toward its free end. When a voltage is applied, the Bragg wavelength shifts due to two effects: a change in the period with strain, and a change in the refractive index of the waveguide with stress. In this design, the impact of strain on the optical properties of the Bragg grating is greater than that due to stress. As seen in Fig. 12, the peak stress occurs at the foot of the beam and remains of the order of 10⁶ Pa. The instantaneous grating period under actuation is

Λ(V) = Λ₀(1 + ε), (5)

where Λ₀ is the initial period of the grating and ε is the value of the strain over the grating length at a given voltage V. It can be noted from Fig. 12 that ε varies nonuniformly along the grating length, thus causing a chirp in the grating period. It can be noted that the strain is maximum near the free end of the beam and the strain variation along the beam length is minimum. Thus, the position of the Bragg grating of 600 μm length is chosen as being 750 μm from the fixed end, as shown in Fig. 12.
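A rough consistency check of Eqs. (2)-(4) as reconstructed above is sketched below, again assuming fused-silica-like effective properties (E ≈ 70 GPa, ν ≈ 0.17) as placeholders, since Table 1 is not reproduced here. The result landing near the quoted 12.1 V pull-in supports the reading of the equations, but it should be treated as an estimate, not the paper's FEM value.

```python
# Consistency check of the reconstructed pull-in voltage, Eqs. (2)-(4).
import math

E, nu = 70e9, 0.17                           # assumed effective beam properties
l, w, t, g = 1400e-6, 100e-6, 5.1e-6, 14e-6  # design dimensions (m)
eps0 = 8.854e-12                             # vacuum permittivity (F/m)

E_plate = E / (1 - nu**2)                    # plate modulus for a wide beam, Eq. (4)
B = (2.0 / 3.0) * E_plate * w * (t / l)**3   # effective spring constant, Eq. (3)
V_pi = math.sqrt(8 * B * g**3 / (27 * eps0 * w * l))   # Eq. (2)
print(f"V_pull-in ≈ {V_pi:.1f} V")           # ≈ 12.3 V, near the quoted 12.1 V
```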
For accurate analysis of the grating with the nonlinear chirp, the instantaneous period of the grating is approximated by a fourth-order polynomial function, which keeps the fitting error minimal.

Power Consumption

The power consumption for the operation of the proposed device is estimated as 0.122 nW. It has two contributing factors: the energy stored as strain in the beam, and the capacitive energy. The strain energy is obtained by taking the line integral of the strain energy density over the beam length and multiplying by the cross-sectional area, since the beam has a uniform cross-section. The variation of the strain energy density over the length at a voltage of 12.07 V is shown in Fig. 13, and the maximum strain energy obtained is 1.8 pJ. In the proposed structure, the two electrodes and the gap between them form a capacitor. The capacitive energy, the energy required to maintain the potential difference across these electrodes, is 0.12 nJ at the maximum voltage of 12.07 V. Thus the maximum power consumption for the proposed structure is estimated as 0.122 nW.

Simulation and Results

The mechanical characteristics of the cantilever beam were studied numerically using finite element analysis, and the optical characteristics of the waveguide grating were analyzed numerically using coupled mode theory. The Bragg wavelength for the unperturbed grating with period 222.3 nm at no actuation is 1552.52 nm, with an FWHM of 0.75 nm, which is less than the 0.8 nm required for the DWDM optical network. When the cantilever beam is actuated electrostatically with 12.07 V applied voltage, the Bragg wavelength shifts to 1563.17 nm, as observed from Fig. 14, thus providing a total tuning range of 10.65 nm. By careful design optimization, minor effects that affect device performance can be minimized. Temperature invariance along the length can be achieved by appropriate positioning of the electrode contact pads, which are the major cause of heating/temperature variation. The waveguide and grating carrying the DWDM optical signal are on the top surface of the largely insulating (SiO₂) beam of 5.1 μm thickness, and the electric potential is applied to the bottom of the beam. Thus, the optical medium is insulated from electrostatic field edge effects. Figure 15 shows the variation of the FWHM with applied voltages up to the pull-in value. It can be seen from Fig. 15 that the FWHM lies below 0.8 nm for all the voltages. By applying different voltages to the cantilever beam, the Bragg wavelength shifts, and hence we can achieve a tunable filter. Table 2 shows the Bragg wavelength for various applied voltages from 0 to 12.07 V. The maximum voltage that can be applied to the cantilever beam is limited by the pull-in voltage, which is 12.1 V for the chosen parameters. From Table 2, we can observe that 14 DWDM channels (1552.52 to 1563.17 nm) can be tuned by applying 0 to 12.07 V, and the maximum peak position deviation is not greater than 0.04 nm at the given applied voltages. Figure 16 shows the reflected Bragg wavelength for applied voltages from 6.9 to 12.07 V. This variation can be put in a cubic polynomial fit as Eq. (6), where λ is the wavelength in nm and V is the voltage in volts:

λ(V) = 0.0041039V³ + 0.51784V² − 0.40089V + 1552.5. (6)
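The link between the Bragg condition and the strain tuning described above can be sketched in a few lines: with λ_B = 2 n_eff Λ and Λ(V) = Λ₀(1 + ε) from Eq. (5), a uniform strain ε shifts the peak by Δλ ≈ λ_B ε when index changes are neglected (as the paper argues strain dominates stress in this design). The values below are taken from the text; the small offset from 1552.52 nm reflects rounding of n_eff.

```python
# Bragg condition and strain-tuning estimate using the quoted design values.
n_eff = 3.49                     # effective index of the rib waveguide mode
period0 = 222.3e-9               # unstrained grating period Λ0 (m)
lam0 = 2 * n_eff * period0       # first-order Bragg wavelength
target_shift = 10.65e-9          # quoted tuning range (m)
eps = target_shift / lam0        # uniform strain needed for the full range
print(f"λ_B(0) ≈ {lam0*1e9:.2f} nm, required strain ε ≈ {eps:.3%}")
# → λ_B(0) ≈ 1551.65 nm, ε ≈ 0.686%
```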
Changes in Proteolysis in Fermented Milk Produced by Streptococcus thermophilus in Co-Culture with Lactobacillus plantarum or Bifidobacterium animalis subsp. lactis During Refrigerated Storage

Proteolysis in fermented milk, a complex and dynamic process, depends on the starter cultures used. This study aimed to evaluate the influence of Lactobacillus plantarum or Bifidobacterium animalis subsp. lactis, or both, co-fermented with Streptococcus thermophilus, on the changes in the proteolysis profile of fermented milk during 21-day storage at 4 °C, including the pH value, proteolytic degree, protease activity, aminopeptidase activity, free amino acid content, and electrophoresis performance. The results showed that the treatments with co-cultures exhibited a higher amount of free amino groups and higher neutral protease activity at the extracellular level, but lower pH values and lower aminopeptidase activities towards the six substrates at the intracellular level, than the single-strain S. thermophilus treatments over the refrigerated storage. In co-fermentation with S. thermophilus, B. animalis subsp. lactis did not significantly affect the concentrations of most free amino acids, while contributions of L. plantarum were found. Electrophoresis indicated that the mixed starters, especially the co-cultures containing L. plantarum, showed a stronger degradation of caseins than the pure S. thermophilus culture. These findings suggest that culture combinations may influence the proteolysis characteristics of the fermented products, and probiotic cultures must be carefully chosen for fermented production.

Introduction

Fermented milk is defined as the product of milk acidification by lactic acid bacteria (LAB) during metabolic fermentation. Milk provides most LAB with excellent substrates for growth; however, it can only support two to four cell generations [1], which can be explained by the fact that the milk medium does not contain sufficient free amino acids and peptides for the reproduction of the organisms [2,3]. Consequently, LAB depend on their complex proteolytic system to hydrolyze milk proteins and release free amino groups for proliferation. It is reported that proteolytic action in milk involves the formation of polypeptides and oligopeptides through the action of bacterial proteases [4,5], followed by the further formation of free amino acids and smaller peptides by bacterial peptidases [6]. Several studies have revealed that the proteases and peptidases liberated from LAB are active throughout fermentation and post-storage in fermented dairy products [3,7,8], and they show specificity towards particular split sites or sequences during proteolysis [9]. In particular, aminopeptidases are thought to be among the most important peptidases in fermented milk, because of their capability to release single amino acid residues from the oligopeptides formed by extracellular protease activity [3,10]. The activities of proteolysis in fermented milk products affect not only the matrix structures and nutritional values of these products [9], but also their functional characteristics [11,12], and the peptides and amino acids liberated by the proteolytic activities are flavor compounds by themselves, or they serve as precursors for catabolic reactions [13]. It has been well-established that proteolysis in fermented milk is a major event in flavor development and texture improvement through the degradation of proteins [14].
Therefore, proteolytic properties are one of the key determinants of dairy product quality. In recent years, there has been an increasing interest in fermented milk products containing probiotic bacteria, as these probiotics are capable of colonizing the gut and improving the health-promoting functions of fermented milk [15]. Some probiotic bacteria (such as Lactobacillus plantarum and Bifidobacterium spp.) grow poorly in milk on account of the lack of the essential proteolytic activity [16,17], and the practical approach is to culture these species in combination with Streptococcus thermophilus in the fermentation, so as to enhance the viability of the probiotic bacteria and to shorten the fermentation time [18,19]. This practice may cause interactions and changes in the proteolysis patterns when these probiotics are co-cultured with S. thermophilus in milk. In the past, the proteolytic activities and proteolytic enzymes of pure strains of individual L. plantarum, Bifidobacterium spp., and S. thermophilus have been determined and characterized extensively in fermented milk [3,20]. However, to our knowledge, no studies have been published on the effect of co-cultures of S. thermophilus with L. plantarum and/or B. animalis subsp. lactis on the proteolytic profiles in fermented milk. Our previous study reported that the species and combinations of probiotics affect the product characteristics of fermented milk, and that S. thermophilus in co-culture with a 1:2 ratio of B. animalis subsp. lactis and L. plantarum may have potential for the production of fermented dairy products [19]. In the current study, we investigated the changes in proteolysis in fermented milk with different combinations of cultures over 21-day storage at 4 °C. The bacterial cultures were as follows: S. thermophilus (St), S. thermophilus with B. animalis subsp. lactis (StBa), S. thermophilus with L. plantarum (StLp), and S. thermophilus in co-culture with 1:2 of B. animalis subsp. lactis and L. plantarum (StBaLp). The objective of the work was to further study the effects of the combined use of probiotics on the proteolysis properties of fermented milk.

Changes in pH

The pH changes in fermented milk over the course of storage at 4 °C are shown in Table 1. All of the batches of fermented milks showed a rapid decline in pH values with the increase of storage time, which may reflect the growth of the bacteria and the accumulation of lactic acid [21]. Casarotti et al. [18] and Tian et al. [22] observed similar pH patterns in co-fermentation yogurt from S. thermophilus co-cultured with probiotics. This may be due to the continuous fermentation of LAB during storage [23], which may lead to the hydrolysis of milk proteins. Fermentation with S. thermophilus in conjunction with B. animalis subsp. lactis or L. plantarum, or both, significantly accelerated the pH drop of the fermented milks (p < 0.05) compared with a single culture of S. thermophilus over the storage period. Obviously, the StBaLp samples showed the lowest pH values over the monitored storage period.

Proteolytic Activity

There was a progressive increase in the amount of α-amino groups in fermented milk along with the extension of the storage time (Table 2). At the initial storage, fermented milk samples from StBaLp showed the greatest proteolytic capacity, producing the largest amount of free amino groups (0.27 mmol/L; p < 0.05), followed by the StLp and St samples, whereas the StBa treatments released the lowest amount of free amino groups (0.21 mmol/L).
No significant difference in the proteolytic activity was observed among the co-culture samples after 7 or 14 days of refrigeration, but the proteolytic activity of the co-culture samples was higher than that of the fermented milk from single S. thermophilus. On the 21st day, the proteolytic activity of the StLp- and StBaLp-fermented milks was stronger than that of StBa (p < 0.05), and the StBa samples had a similar value to the St samples (p > 0.05). Shihata and Shah [3] revealed that Bifidobacterium spp. exhibited a lower free amino release ability compared with S. thermophilus and other lactobacilli. The strain of L. plantarum is considered to have a highly proteolytic activity in dairy systems [24,25], which is consistent with our findings in this study. In all of the cases, the StBaLp-fermented milks showed the strongest proteolytic capacity.

Protease Activity

A variety of proteases generated by LAB during milk fermentation, with a few exceptions, break down casein to large peptides [26]. As shown in Figure 1, the proteases were mainly located extracellularly at the cell wall, and the activity in the extracellular extracts (EE) was about 10-20 times that of the intracellular extracts (IE). The acidic protease activities at the EE level in the treatments declined to minimum values on day 14 of storage and then rose until day 21, except for the StBa samples, which increased on the 7th and 21st days and decreased on the 14th day, compared with the activity at the previous sampling point. The sharp increase in acidic protease activity at day 21 may be due to the proteolytic enzymes released by the bacteria upon autolysis. Our previous study also confirmed that the viability of the three strains in the fermented milks showed a drop on day 21 [19]. On the first day of cold storage, the StBa samples displayed the lowest acidic protease activity at the EE level among all of the treatments; this pattern may explain the low proteolytic activity of the StBa-fermented milk during the same period (as shown in Table 2). It could be noted that the activities of the acidic protease were higher than those of the neutral protease at the EE level (p < 0.05). This is in accordance with the previous findings of Yang et al. in probiotic soy yogurt [7]. Zakharov et al. [27] stated that most of the acidic proteases showed an optimal activity at an acid pH, as they might belong to the papain-like family of cysteine proteases. The neutral protease activity increased in the first seven days and then gradually decreased to the end of the storage period, which is in harmony with the results obtained by Ohmiya and Sato [28]. The decrease of neutral protease activity was probably because of the inhibition of activity by acid stress caused by the low pH environment. In the last two weeks of storage, a marked difference in the activity of the neutral protease among the fermented milks at the EE level was found, and the activity of the St-fermented milk was significantly lower than that of the co-fermented milk (p < 0.05).
Aminopeptidase Activity The activities of the aminopeptidases in the EE and IE of all of the samples over storage are plotted in Figure 2. The specific activities against the six substrates tested at the EE level were observed to be different to that at the IE level (p < 0.05). The aminopeptidase activity in the IE of all of the batches of fermented milks elevated to a peak at day 7 (p < 0.05), followed by a gradual decrease (p < 0.05), while the samples from different starter cultures showed a different affinity for the substrates at the EE level. Almost all of the treatments showed a higher IE activity over storage, however, an exception was also observed where the EE of the StBa-fermented milk showed a higher aminopeptidase activity than the IE for proline-containing substrates. Although it is generally accepted that most aminopeptidase are located inside the cells, the presence of some peptidases in the cell wall fraction [29], and some extracellular aminopeptidases released in the proteolytic pathway of LAB, have been detected [3,9]. The lower values at the IE level, recorded in co-culture samples, than in the St samples throughout storage were probably due to the loss of activity of aminopeptidases at the low pH values [14]. On the first day, the StLp and StBa samples demonstrated the highest and lowest affinity for the six substrates at the IE level, respectively, but the StBa samples showed the highest activity at the EE level among the co-culture samples. On day 7 of refrigeration, the aminopeptidase activity of the StBa samples towards six substrates was reduced to a minimum at the EE level. However, at the IE level, the StBa samples displayed a greater specificity towards the substrate Met-ρ-NA than that of the StLp and StBaLp samples, while towards the other five substrates it was lower than that of StLp, but higher than the StBaLp samples. It was not difficult to find that StBa-fermented milks had distinguished aminopeptidase properties from those of the two co-fermented milks in terms of their specific activity towards six substrates from day 1 to day 7, which may be related to the special nitrogen requirements of B. animalis subsp. lactis at different stages of growth [30]. After prolonged storage for 14 days, the StLp and StBaLp samples showed the highest and lowest intracellular specific activity, respectively, whereas the highest and lowest activity at the EE level was monitored in the StBa and StLp samples among the co-culture treatments. At 21 days of storage, StBa treatments presented the highest affinity against six substrates at the IE level, while StLp and StBaLp treatments did not show any significant difference towards the other four substrates, except for the derivatives of methionine and proline. Aminopeptidase Activity The activities of the aminopeptidases in the EE and IE of all of the samples over storage are plotted in Figure 2. The specific activities against the six substrates tested at the EE level were observed to be different to that at the IE level (p < 0.05). The aminopeptidase activity in the IE of all of the batches of fermented milks elevated to a peak at day 7 (p < 0.05), followed by a gradual decrease (p < 0.05), while the samples from different starter cultures showed a different affinity for the substrates at the EE level. 
Almost all of the treatments showed a higher IE activity over storage; however, an exception was also observed where the EE of the StBa-fermented milk showed a higher aminopeptidase activity than the IE for proline-containing substrates. Although it is generally accepted that most aminopeptidases are located inside the cells, the presence of some peptidases in the cell wall fraction [29], and some extracellular aminopeptidases released in the proteolytic pathway of LAB, have been detected [3,9]. The lower values at the IE level recorded in the co-culture samples, compared with the St samples, throughout storage were probably due to the loss of activity of the aminopeptidases at low pH values [14]. On the first day, the StLp and StBa samples demonstrated the highest and lowest affinity for the six substrates at the IE level, respectively, but the StBa samples showed the highest activity at the EE level among the co-culture samples. On day 7 of refrigeration, the aminopeptidase activity of the StBa samples towards the six substrates was reduced to a minimum at the EE level. However, at the IE level, the StBa samples displayed a greater specificity towards the substrate Met-ρ-NA than the StLp and StBaLp samples, while towards the other five substrates it was lower than that of StLp but higher than that of the StBaLp samples. It was not difficult to find that the StBa-fermented milks had aminopeptidase properties distinct from those of the other two co-fermented milks in terms of their specific activity towards the six substrates from day 1 to day 7, which may be related to the special nitrogen requirements of B. animalis subsp. lactis at different stages of growth [30]. After prolonged storage for 14 days, the StLp and StBaLp samples showed the highest and lowest intracellular specific activity, respectively, whereas the highest and lowest activity at the EE level was observed in the StBa and StLp samples among the co-culture treatments. At 21 days of storage, the StBa treatments presented the highest affinity against the six substrates at the IE level, while the StLp and StBaLp treatments did not show any significant difference towards the other four substrates, except for the derivatives of methionine and proline.

Figure 2: Bars marked with different lower-case letters indicate significant differences among days of storage for the same batch of fermented milks (p < 0.05); bars marked with different upper-case letters indicate significant differences among starter cultures within the same period (p < 0.05).
Free Amino Acid Content The changes in free amino acids (FAAs) of the samples during storage are listed in Table 3. Initially, the total FAA concentration was 34.89 mg/kg and 36.00 mg/kg in the St and StBa samples, respectively. After 21 days, the total FAA concentration dramatically dropped to 26.02 mg/kg in the St samples, and 29.35 mg/kg in the StBa samples. It suggested that the capacity of the amino acids generated by S. thermophilus or B. animalis subsp. lactis was insufficient and could not meet the requirements of the bacteria at the late stage of storage. A similar trend for non-fat yoghurt over Free Amino Acid Content The changes in free amino acids (FAAs) of the samples during storage are listed in Table 3. Initially, the total FAA concentration was 34.89 mg/kg and 36.00 mg/kg in the St and StBa samples, respectively. After 21 days, the total FAA concentration dramatically dropped to 26.02 mg/kg in the St samples, and 29.35 mg/kg in the StBa samples. It suggested that the capacity of the amino acids generated by S. thermophilus or B. animalis subsp. lactis was insufficient and could not meet the requirements of the bacteria at the late stage of storage. A similar trend for non-fat yoghurt over storage has been reported by Damir et al. [31]. Nevertheless, the total amounts of FAA increased to values of 39.31 mg/kg and 37.77 mg/kg from 37.68 mg/kg and 35.58 mg/kg on day 21 of storage for the StLp and StBaLp treatments, respectively. Statistically, the levels of total FAA concentrations in the co-culture samples were higher than that of the St samples (p > 0.05). Loadings based on the principal component analysis (PCA), in Figure 3, showed that fermented milks between StLp and StBaLp on the 21st day were positioned close to each other in the PCA diagram of PC 1 against PC 2, and the adjacent positions were found between St and StBa after 21 days storage. Note: Data are means ± standard deviation (n = 3). Different lower-case letters in the same row indicate significant differences among days of storage (p < 0.05). The PCA diagram of the FAAs data, in Figure 4, clearly explained the differences between the samples, and the relationships between attributes for the first three PCs, which were responsible for 90.33% of the variance contribution. The first PC was essentially an aspect of Gly, Cys, and Ala (positive correlation to PC 1), and Phe, Ile, and Met (negative correlation). PC 2 was Thr, Pro, and Glu (positive correlation to PC 2), and Val (negative correlation). The attributes of Leu were positively correlated with PC 3. When B. animalis subsp. lactis or L. plantarum was co-cultured with S. thermophilus, the contents of Pro, Glu, and Gly in the fermented milk could be significantly improved compared with the single S. thermophilus. The increase in these FAAs' concentrations may be attributed to the biosynthesis activity of probiotics [32]. Proline was the amino acid with the highest concentrations in all of the fermented milk specimens. The results agree with the report of Yang et al. [7], who investigated fermented milks prepared from mixed cultures of S. thermophilus, and L. bulgaricus with L. helveticus. A large increase in the Ala concentration was observed in samples with mixed strains containing L. plantarum, and alanine is related to a sweet taste. The concentrations of branched-chain amino acids (Val, Leu, and Ile), which are considered to be important precursors of flavour compounds in samples without L. plantarum, were higher than that in the samples with L. 
The values of the sulfur-containing amino acid Met in the St and StBa samples were significantly higher than those in the StLp- and StBaLp-fermented milks, while for Cys, the values in the St and StBa treatments were lower than those in the StLp and StBaLp treatments. What was demonstrated here was that L. plantarum in co-culture with S. thermophilus significantly changed the amino acid metabolism of S. thermophilus.

Electrophoresis Analysis

The changes in the protein bands of the fermented milk with different starter cultures stored for 1, 7, and 21 d at 4 °C are presented in Figure 5. As expected, all four fermented milks had typical milk-like protein characteristics, the most abundant proteins being α-lactalbumin (α-La), β-lactoglobulin (β-Lg), κ-casein (κ-CN), β-casein (β-CN), α-casein (α-CN), and bovine serum albumin (BSA), as confirmed by the molecular-weight standard proteins. The band intensities of κ-CN, β-CN, and α-CN of the fermented milks visually decreased from day 1 to day 21, while the electrophoretic bands referring to α-La and β-Lg were almost unchanged in all of the treatments. This could be explained by the fact that caseins are the main substrate of the proteolytic system of LAB, which accounts for the correlation of the accumulation of free amino groups in the growth medium with the degradation of caseins [33,34]. As shown in Figure 5, the samples showed different proteolysis patterns depending on the starter cultures; this effect is partly due to the proteolytic activity of the probiotics. A new distinguishable band near 33 kDa (ranging from αs-CN to β-CN) could be visualized in the samples with the presence of L. plantarum, compared with the samples with an absence of L. plantarum. This accounted for the intrinsically high proteolytic activity of L. plantarum. Ghosh et al.
[35] observed that a mixture of S. thermophilus and L. plantarum had stronger proteolytic activity towards milk protein than S. thermophilus alone, and the mixed cultures gave bands beyond 10 kDa that were otherwise not visible. After 21 days of storage, there was relatively little change in the intensities of the protein bands in the St sample, whereas obvious casein degradation, corresponding to κ-CN, β-CN, and αs-CN, was observed in the co-fermented milks throughout storage. The electrophoresis analysis in this study was consistent with the results of the proteolytic activity of the fermented milk.

Microbial Strains and Their Activation

A strain of Streptococcus thermophilus in freeze-dried direct vat set form, with an activity of 250 Danisco units, was kindly provided by Danisco (Kunshan, China). Lactobacillus plantarum (CICC-20263) and Bifidobacterium animalis subsp. lactis (CICC-21717) were obtained from the China Center of Industrial Culture Collection (Beijing, China). Both L. plantarum and B. animalis subsp. lactis were inoculated from their stock cultures (stored at −70 °C) by giving one transfer in deMann-Rogosa-Sharpe (MRS) broth (Hopebio, Qingdao, China) at 37 °C for 24 h. After two successive transfers in MRS, these activated cultures were further transferred into a sterilized milk medium at 37 °C for 6-7 h, to reach an initial microbial concentration of 10⁷ cfu/mL for the experiment.

Fermented Milks Preparation

Four bacterial culture combinations were used to ferment milk in the trials: St, StBa, StLp, and StBaLp. The fermented milks were prepared according to a reported procedure [36]. Fresh milk (New Hope Dairy, Chengdu, China) was heated at 90 °C for 10 min and cooled to 43-45 °C in an ice bath. Subsequently, the milk was mixed with the bacterial cultures. In all of the treatments, S. thermophilus was added at 0.1% (w/v) of milk (approximately 10⁸ cfu/mL), while B. animalis subsp. lactis and L. plantarum were added alone or simultaneously at an initial concentration of 10⁷ cfu/mL of milk. The mixtures were put into sterilized glass containers and incubated at 42 °C until a pH of 4.60 was reached. After incubation, the fermented milk samples were stored at 4 ± 1 °C for 21 days. The samples were analyzed at weekly intervals, up until the third week.

pH Determination

The fermented milk was vortexed with deionized water at 1:9 (w/w) before determination [37].
The changes in the pH value of the samples were monitored using a pH meter (Sanxin, Shanghai, China) at room temperature (20 ± 2 °C).

Proteolysis Evaluation

The proteolytic activity was evaluated by measuring the free amino groups using the o-phthaldialdehyde (OPA) method of Church et al. [38]. Briefly, 2 g of fermented milk and 1 mL of deionized water were mixed with 5 mL of 0.75 mol/L trichloroacetic acid (TCA) for 10 min, and filtered using Whatman No. 4 filter paper. Aliquots of 200 µL of the TCA filtrate and 4 mL of the OPA reagent were vortexed and reacted at room temperature (20 ± 2 °C) for 10 min. The absorbance at 340 nm was measured with a spectrophotometer (Aoyi, Shanghai, China). The free amino groups were calculated against L-leucine standards (0-10 mmol/L).

Crude Enzyme Extraction

The extracellular extracts (EE) and intracellular extracts (IE) from the growth medium were prepared according to the method of Ramchandran & Shah [39]. The cells were collected from the fermented milk (10 g) by centrifugation at 12,000× g for 15 min (4 °C). The supernatant was harvested as the EE for the enzyme assays, while the pellet was washed twice with 10 times the volume of 0.9% (w/v) NaCl saline water and centrifuged (5000× g for 15 min at 4 °C) each time to remove the saline water. The washed pellet was resuspended in 5 mL of 0.05 mol/L Tris-HCl buffer (pH 8.5) and sonicated at 40 kHz (Scientz Biotechnology Co., Ltd., Ningbo, China) for 10 min at 4 °C. The supernatant obtained after centrifugation (12,000× g for 15 min at 4 °C) was designated as the IE for the enzymatic assays.

Protease Assays

The protease activity in fermented milk was determined using the method of Li et al. [40]. In brief, the substrate casein (Sigma-Aldrich, St. Louis, USA) was dissolved and diluted to a final concentration of 1% (w/v) with 0.05 mol/L lactate buffer (pH 3.0) and phosphate buffer (pH 7.0), respectively. The reaction system, containing 1 mL of enzyme extract and 1 mL of substrate solution, was incubated at 37 °C for 20 min; the enzymatic reaction was then terminated by adding 2 mL of 0.4 mol/L TCA. Subsequently, the mixtures were centrifuged at 5000× g for 15 min (4 °C) to obtain the supernatant. A volume of 1 mL of supernatant, 5 mL of 0.4 mol/L sodium carbonate, and 1 mL of Folin-Ciocalteu reagent (Yuanye Bio-Technology Co., Ltd., Shanghai, China) were incubated at 37 °C for 20 min. The optical densities of the reaction mixtures were read with a spectrophotometer (Aoyi, Shanghai, China) at 660 nm. Enzyme blanks were also measured as controls. One unit of protease activity was defined as the amount of enzyme that releases 1 µg of tyrosine per min per mL from casein at 37 °C.

Aminopeptidase Assays

The aminopeptidase activities in fermented milk were analyzed using chromogenic substrates, namely the amino acid derivatives of p-nitroanilide (p-NA): Lys-p-NA, Leu-p-NA, Met-p-NA, Ala-p-NA, Arg-p-NA, and Pro-p-NA, by the method reported by Fernandez-Espla et al. [41]. A volume of 200 µL of enzyme extract, 800 µL of 50 mmol/L Tris-HCl buffer (pH 7.0), and 100 µL of 10 mmol/L substrate solution (Sigma-Aldrich, St. Louis, USA) were incubated at 37 °C for 20 min. The reaction was stopped by the addition of 2 mL of 30% (v/v) acetic acid. The released p-nitroaniline was monitored by measuring the absorbance at 410 nm (Aoyi, Shanghai, China), and its content was calculated using a molar absorption coefficient of 9024 L·mol⁻¹·cm⁻¹. One unit of enzyme activity was defined as the amount of enzyme required to release 1 µmol of p-nitroaniline per min per L of extract under the above assay conditions.
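For illustration, the absorbance-to-activity conversion in this assay is a direct application of the Beer-Lambert law; the following minimal Python sketch uses the reaction volumes given above, while the 1 cm path length and the example absorbance readings are assumptions introduced purely for the example:

```python
# Minimal sketch of the Beer-Lambert conversion behind the aminopeptidase
# assay: c = A / (epsilon * l). The reaction volumes follow the method text;
# the 1 cm path length and the example readings are assumptions.

EPSILON = 9024.0      # molar absorption coefficient of p-nitroaniline, L/(mol*cm)
PATH_CM = 1.0         # assumed cuvette path length, cm
T_MIN = 20.0          # incubation time, min
V_EXTRACT_L = 200e-6  # enzyme extract per reaction, L (200 uL)
V_TOTAL_L = 3.1e-3    # 0.2 + 0.8 + 0.1 mL reaction mixture + 2 mL stop solution

def aminopeptidase_units(a410_sample: float, a410_blank: float) -> float:
    """Activity in umol p-nitroaniline released per min per L of extract."""
    conc_mol_per_l = (a410_sample - a410_blank) / (EPSILON * PATH_CM)
    released_umol = conc_mol_per_l * V_TOTAL_L * 1e6  # mol -> umol in the cuvette
    return released_umol / (T_MIN * V_EXTRACT_L)

# Hypothetical readings for a sample and its enzyme blank:
print(f"{aminopeptidase_units(0.45, 0.05):.1f} U/L of extract")
```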
Free Amino Acid Analysis

The free amino acid contents of the fermented milk samples were determined with an automatic amino acid analyzer (Sykam, Eresing, Germany), equipped with an LCA K 06/Na analytical column (150 mm × 4.6 mm, 7 µm), according to the method published by Das et al. [42]. The fermented milk samples were precipitated with 5% (w/v) TCA and centrifuged (10,000× g for 20 min at 4 °C). The supernatants were then filtered through a 0.45 µm membrane (Jinteng, Tianjin, China) prior to analysis. The samples were run with a two-solvent gradient: solution A was 40 mmol/L citric acid buffer (pH 3.45), and solution B was 70 mmol/L citric acid buffer (pH 10.85). The injection volume was 50 µL, the flow rate was 0.45 mL/min, and the FAAs were detected at 570 nm and 440 nm. Quantitative FAA data were obtained by comparison with known amino acid standards (Sigma-Aldrich, St. Louis, USA).

Polyacrylamide Gel Electrophoresis

The fermented milk samples were prepared for electrophoretic analysis according to Laemmli [43]. Each sample (1 mg/mL) was diluted with loading buffer and denatured in a boiling water bath for 5 min. After centrifugation at 10,000× g for 10 min (4 °C), 10 µL of the harvested protein supernatant was loaded per lane, and the protein bands were separated using a 12% polyacrylamide gel with a 5% stacking gel. The separation was completed after 1.5 h at a voltage of 120 V. The gel was stained with 1 g/L Coomassie brilliant blue R-250 (Sigma-Aldrich, St. Louis, USA) for 1 h, and then de-stained with a solution of 7.5% (v/v) methanol and 7.5% (v/v) acetic acid for 6 h. The gel pictures were taken using a Bio-Rad gel imager (Hercules, CA, USA). The protein concentration of the samples was measured by the Coomassie blue method [44].

Statistical Processing

All of the assays were performed in triplicate. The data were analyzed by two-way analysis of variance (ANOVA) with repeated measures for a full 4 × 4 factorial design. Where significant interactions of starter treatments with time points were indicated, statistically significant differences among the treatments or storage times were tested by one-way ANOVA with 95% confidence intervals, and multiple comparisons were performed with Duncan's method using the SPSS 21.0 software package (IBM, Chicago, USA). The overall differences between the samples in the free amino acids were assessed with principal component analysis.
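As an illustration of the final PCA step, the hedged sketch below shows how loadings and explained variance for the first three components can be computed with scikit-learn; the FAA matrix here is randomly generated, not the study's data:

```python
# Sketch of the PCA step for a free-amino-acid matrix (rows = samples,
# columns = FAAs). The data are randomly generated placeholders; only the
# workflow (autoscaling -> PCA -> loadings/scores) mirrors the analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

faa_names = ["Gly", "Cys", "Ala", "Phe", "Ile", "Met",
             "Thr", "Pro", "Glu", "Val", "Leu"]
rng = np.random.default_rng(seed=1)
X = rng.gamma(shape=2.0, scale=1.5, size=(16, len(faa_names)))  # 16 samples

X_std = StandardScaler().fit_transform(X)  # autoscale each FAA column
pca = PCA(n_components=3).fit(X_std)
scores = pca.transform(X_std)              # sample coordinates on PC1-PC3
loadings = pca.components_.T               # FAA contributions to each PC

print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
for name, (pc1, pc2, pc3) in zip(faa_names, loadings):
    print(f"{name:>3}: PC1={pc1:+.2f}  PC2={pc2:+.2f}  PC3={pc3:+.2f}")
```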
2019-10-17T03:10:12.259Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "d2b05aa4e5556becee2ff8c2268d9ae488948ec7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/24/20/3699/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d2b05aa4e5556becee2ff8c2268d9ae488948ec7", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
265014036
pes2o/s2orc
v3-fos-license
Atmospheric dispersion modelling and dose projection under high uncertainty conditions

Understanding the overall magnitude of the deviations that may occur within the results of one or more codes makes it possible to avoid discrepancies in decision making in the context of emergency preparedness and response. The uncertainty of the assessment input data plays a significant role in this. Currently, emergency centers around the world use a number of atmospheric dispersion modelling and dose projection tools that have the same functionality and are used for the same purpose, but may produce different results. This article examines the problem of uncertainty in the results of atmospheric dispersion modelling and dose projection, which is introduced at the input data stage of actual software products and decision support systems. The paper lists the main factors that can affect the uncertainty of the assessment results. Using the JRODOS system as an example, possible options for describing the source for the spectrum of emergency events at NPPs are considered. Special attention is paid to the assimilation of radiation monitoring results and to the response to hostilities.

Introduction

Modern approaches to emergency response to radiation accidents at nuclear facilities worldwide are coherent. They aim to prevent human losses and establish control over the source releasing radioactive substances. Given the principle of the peaceful use of atomic energy, the mechanisms and procedures for responding to such accidents are not designed for events of a terrorist or military nature. Software assessment tools and decision-making support systems for population protection during radiation accidents are no exception. These tools are most effective when the full range of input parameters needed to run the models is available [1,2]. However, the response procedures require decision making based on incomplete data, or within the scope of information currently available to the decision-maker under unforeseen conditions.
Today, world emergency centers use modern assessment tools such as the European ARGOS system [3] or the American regulatory RASCAL software complex [4]. The development of RODOS, a complex real-time decision support system for off-site response to radiation accidents, has been actively supported and coordinated since the beginning of the 1990s within the framework of the European Commission's scientific programs. The Java version of this system is called JRODOS [5]. The further development of this product continues: improved versions are periodically presented by the leading developer of JRODOS, Forschungszentrum Karlsruhe GmbH, on behalf of the JRodos Developers Consortium, which includes 15 institutions from different European countries. Since 2013, this system has been successfully used in the emergency centers of Ukrainian NPPs and the regulatory body during emergency response measures for nuclear and radiological accidents. The JRODOS system consists of several mathematical models and databases for conducting predictive calculations of the consequences of possible radiation accidents and for planning urgent and early countermeasures for protecting the public. The system increases the technical and strategic capabilities for responding to national and cross-border emergencies. The JRODOS models and databases can be adapted to different characteristics of the NPP location and to geographical, meteorological, and environmental conditions. With a continuous feed of numerical weather prediction data and timely information on the chronology and activity of the release, the system allows prompt forecasting of radiological consequences on local and global spatial scales.

Uncertainties problem

Some decision support systems (DSS) have a flexible interface and allow the simulation input data to be specified in several ways. This variability of the data entry approach facilitates a prompt response when information on the state of the affected object is incomplete. The package of primary input data includes the source term data, meteorological conditions, calculation settings, and the desired list of results or their format.

Figure 1 presents the impact of uncertainties regarding the release under unstable meteorological conditions: the zones for taking immediate countermeasures for population protection in the case of a severe accident may differ significantly depending on the moment of release.

In addition to the chronological uncertainty of the temporal distribution of emissions, the magnitude and radionuclide mixture of the release are also uncertain; they may differ significantly and depend on the initial activity of the dose-forming radionuclides in the inventory. Various views are currently being considered within the framework of many international projects. Unlike the chronological uncertainty, this type of uncertainty can be significantly reduced at the emergency preparedness stage by comparing existing approaches, such as [6].
The World Meteorological Organization investigates uncertainties associated with numerical weather forecast data [7]. Some modern DSS allow the consequences of a case to be analyzed for several variants of meteorological conditions or source terms. Such studies make it possible to determine the influence of forecast quality on the final result, including the configuration of the zones for adopting urgent countermeasures for population protection. In practice, comparative analysis of the calculation results obtained by various organizations is carried out mainly within the framework of international projects, and less often within special emergency exercises. The work [8] contains a list of examples and approaches for comparing results obtained using different codes or DSS.

Radionuclide vector, physical and chemical forms

Radionuclide mixes and the physicochemical forms of a release depend on a complex of factors, such as the activity of radionuclides in the reactor core and spent fuel pool, the features of the safety systems, the phenomenological stage of fuel damage, etc. Grouping by physicochemical classes describes the behavior of the radioactive vapor-gas mixture within the containment; a more generalized distribution of radionuclides by physicochemical forms is also characteristic of the later modeling stages.

Several mathematical models based on the processes of heat and mass transfer and aerodynamics are used to describe radionuclide transport in the closed emergency rooms of nuclear facilities. These models are part of integrated calculation software products such as MELCOR [9], MAAP, CONTAIN, etc. These codes use analytical and numerical solutions, operating with empirical and semi-empirical relations. They describe the transport of nuclear fuel fission products and calculate the power and composition of the emission of radioactive substances from the premises of emergency objects into the atmosphere. The leading representatives of this group of integral codes have a similar structure and cover the main stages of modeling the transport of radioactive substances in process rooms for most design-basis and beyond-design-basis accidents considered during the safety analysis of nuclear power plants.

For example, MELCOR [9] is an engineering-level integrated computer code that simulates severe accidents at nuclear power plants with light-water reactors. The code was developed at Sandia National Laboratories for the US regulatory body, and is actively used by International Atomic Energy Agency (IAEA) member countries, including Ukraine. MELCOR allows the simulation of a wide range of emergency processes at a nuclear facility: it models the transport of fission products along with such processes as the thermal-hydraulic response of safety systems and adjacent structures, the degradation and movement of fuel masses, the interaction of core melt with the concrete of building structures, and the generation, transport, and combustion of hydrogen. The basis of modeling with this code is a nodalization scheme: a spatial division of the power unit into separate volumes according to the contribution of each piece of equipment or room to the determining parameters of the emergency process. Thermal-hydraulic parameters within the same volume at a particular moment are considered uniform. MELCOR groups fission products by their chemical properties: the behavior of chemical elements and their isotopes is regarded as the same within the same class.
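To make this class-based bookkeeping concrete, the following minimal sketch groups nuclide activities into a few representative chemical classes so that all isotopes in a class can be treated alike; the class list is abbreviated and all activities are invented placeholders, not MELCOR data structures:

```python
# Sketch of class-based bookkeeping: every isotope in a class is transported
# alike, so per-class totals are all the transport model needs to carry.
# The class list is abbreviated and the activities are invented placeholders.
RN_CLASSES = {
    "noble_gases":   ["Kr-85", "Kr-88", "Xe-133", "Xe-135"],
    "alkali_metals": ["Cs-134", "Cs-137"],
    "halogens":      ["I-131", "I-132", "I-133"],
    "chalcogens":    ["Te-132"],
}

inventory_bq = {  # hypothetical end-of-cycle activities, Bq
    "Kr-85": 2.1e16, "Kr-88": 1.2e18, "Xe-133": 3.5e18, "Xe-135": 7.0e17,
    "Cs-134": 2.4e17, "Cs-137": 2.0e17,
    "I-131": 1.7e18, "I-132": 2.5e18, "I-133": 3.4e18,
    "Te-132": 2.4e18,
}

def class_activity(cls: str) -> float:
    """Total activity carried by one chemical class."""
    return sum(inventory_bq.get(nuclide, 0.0) for nuclide in RN_CLASSES[cls])

for cls in RN_CLASSES:
    print(f"{cls:>13}: {class_activity(cls):.2e} Bq")
```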
There are the following ways of obtaining data on the radionuclide mix in a radioactive release: 1) receiving information from the actual measurement points of the release control subsystem (vent stack); 2) using the general JRODOS library for forming the source term; 3) analytical assessment of the source term according to the phenomenological stages of fuel damage.

The latter two paths are optional if the information provided by the first path is sufficient for the current calculations in the JRODOS system; when information is wholly or partially lacking, using the two named ways of forming the data becomes necessary.

The initial data for estimating the source term are data on the reactor inventory, or data on the radionuclide vector and activity of the coolant in the absence of above-normative damage to the reactor core [10,11]. Data from the general JRODOS library can be used to generate the source term; it should contain source terms for the defined chronological stages: coolant release; gap release; fission product release due to partial fuel damage of the reactor core; fission product release due to reactor core melting (in-vessel and ex-vessel phases).

The release parameters for accidents with different degrees of damage to the reactor core are determined by the behavior of several chemical elements and their release into the coolant in the case of reactor core damage. The document NUREG-1465 [12] provides information on the approximate relative share of the fission products released from the reactor core into the containment air space at various stages of fuel damage for PWR- and BWR-type reactors.

Special attention is paid to the distribution of iodine radionuclides when assessing the release parameters, since iodine is a dose-forming element whose physicochemical forms determine the most critical exposure pathways. It is necessary to consider the distribution of iodine among its physical and chemical forms to determine the rate of dry deposition and the washout of iodine from the radioactive cloud. The JRODOS system makes it possible to take into account three forms of iodine: aerosol-bound, elemental (molecular), and organically bound. In modern approaches to realistic forecasting of the radiological consequences of accidents at water-cooled, water-moderated reactors, distinct distributions are used for the two escape routes of the vapor-gas mixture: the containment, and the fast-acting steam dump valve discharging steam into the atmosphere. A distribution with a higher share of organic iodine is chosen for a conservative assessment (especially for transboundary transport).
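To illustrate how stage-wise release fractions and iodine speciation combine into a containment source term, consider the minimal sketch below; all inventories, fractions, and the iodine form split are illustrative placeholders, with actual values to be taken from NUREG-1465 or the JRODOS library:

```python
# Sketch: in-containment source term as inventory * stage release fraction,
# with radioiodine split into physicochemical forms. All numbers here are
# illustrative placeholders; actual fractions should be taken from
# NUREG-1465 or the JRODOS source term library.

inventory_bq = {"I-131": 1.7e18, "Cs-137": 2.0e17, "Xe-133": 3.5e18}

release_fraction = {   # fraction of core inventory reaching containment air
    "gap_release":     {"I-131": 0.05, "Cs-137": 0.05, "Xe-133": 0.05},
    "early_in_vessel": {"I-131": 0.35, "Cs-137": 0.25, "Xe-133": 0.95},
}

# Split of released iodine among its three forms (placeholder values):
iodine_forms = {"aerosol": 0.95, "elemental": 0.0485, "organic": 0.0015}

def containment_source(stage: str) -> dict[str, float]:
    src = {n: a * release_fraction[stage][n] for n, a in inventory_bq.items()}
    i131 = src.pop("I-131")  # speciate iodine: deposition/washout differ by form
    for form, frac in iodine_forms.items():
        src[f"I-131 ({form})"] = i131 * frac
    return src

for nuclide, activity in containment_source("early_in_vessel").items():
    print(f"{nuclide:>18}: {activity:.2e} Bq")
```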
Leakage pathways

Radionuclides can bypass a barrier such as the containment during an accident, or they can first enter the air space of the containment and only then reach the environment through containment leakage (figure 2). The first propagation pathway is typical of accidents caused by leakage from the primary circuit into the secondary circuit (failure of a steam generator collector): the radionuclides bypass the containment, immediately entering the secondary circuit and reaching the atmosphere without purification. The other propagation pathway is characteristic of accidents associated with ruptures of primary circuit pipelines, up to the maximum design-basis accident. The presence or absence of retention must be considered to correctly assess the radionuclides released from the containment to the environment. Depending on the release pathway, the discharge may be subject to various mechanisms of radionuclide retention by safety systems (sprinkler system, bubbling) and to natural retention mechanisms (sedimentation, decay). The released activity also depends on how long the radionuclides are delayed before release. The retention factor refers to the ratio of the activity of iodine and long-lived aerosols released into the environment to the activity created by the accident (data from NUREG-1228 [6]).

The efficiency of the filtering system through which the vapor-gas mixture passes is considered in the case of releases after cleaning by filtering means, provided their efficiency is preserved during the course of the accident. In this case, the distribution of radioiodine among physicochemical forms changes dramatically. Practical calculations show that, in accident scenarios with an operating filtered containment venting system (FCVS), the dose-forming groups will be the noble gases (Kr, Xe) and organic compounds of radioiodine. In such cases, the delay time before release into the atmosphere plays a significant role in the resulting radiological consequences.

The rate of radionuclide release from the containment depends on the containment leakage rate. The following leakage intensities from the containment are accepted in international practice: 0.1-0.3%/day (normal leakage for PWR-type reactor containments; 0.3%/day for VVER-1000); 100% per day (failure of containment isolation valves); 100% per hour (corresponds to destruction of the containment).

The delay time of radionuclides before release into the atmosphere is a determining factor in the calculation of public exposure doses, in particular the external exposure from noble gases (cloud dose) and the inhalation dose from radioiodine. Knowing the inventory at the end of the fuel campaign, it is possible to find the activity at any time after the shutdown of the reactor, as well as at any stage of the steam-gas mixture's residence in the free volume of the containment.

Effective release height

Methodological approaches to assessing the effective release height are presented quite ambiguously in the literature. However, all of these approaches introduce the following concepts: release from tall stacks; release from low stacks (IAEA safety guide No. 50-SG-S3 [13]). In the second case, the downwash effect can also be considered (in estimating the initial parameters of the atmospheric dispersion) to increase the realism of the calculation.
The plume rise produced by the heat energy of the release under the current parameters is determined by pre-programmed calculation procedures (Mathcad/Excel) as part of the real-time calculation. They follow the IAEA guide No. 50-SG-S3 for the cases of: unstable and neutral stability classes (medium and high boundary of the mixing layer); stable atmospheric conditions (low boundary of the mixing layer).

It should be noted that reducing the effective release height increases the degree of conservatism in the assessment of radiological consequences in the near range.

The JRODOS system allows the total release height to be set. It treats the plume rise as a dynamic component resulting from the heat energy of the release, using the additional parameters: thermal power of the release, vertical flow rate, and cross-sectional area (nozzles, vent stacks, etc.).

Types of input data entry

Modern DSS have a reasonably flexible policy for entering initial data. They allow the source term to be entered either as release fractions of the reactor core inventory on each time interval, or as the integral release activity without reference to the reactor core inventory.

Modern DSS are moving to the IRIX (International Radiological Information Exchange [14]) format as the source library standard. The standard significantly facilitates data exchange between organizations and in an international context (table 1). TECDOC-955 was one of the first international documents covering the systematization of source terms for NPP severe accidents. The source selection algorithms are based on the event tree concept, with branching criteria corresponding to the factors affecting the release intensity (power unit status, operation of safety systems, retention and filtration of the vapor-gas mixture, etc.). Such algorithms are implemented in the international InterRAS system and its subsequent evolution as a software product, RASCAL.

Assimilation of radiation monitoring data

The application of radiation measurement data from near the emergency power unit (monitoring grid, mobile vehicles, etc.) contributes to the validation of the model and the confirmation of the results of atmospheric dispersion modeling and dose projection. However, these systems may be partially unavailable in the case of military attacks or occupation, and the reliability of the information provided by the radiation monitoring stations remains a separate issue.

Ukraine still needs an integrated automated monitoring system for detecting, analyzing, and forecasting the possible radiological consequences of accidents. An accidental release may spread beyond the sanitary protection zones of nuclear power plants, other atomic installations, and radiation-hazardous objects in Ukraine and beyond. The development of an integrated automated radiation monitoring system is, however, only planned until 2024 [15].
There are currently many challenges in developing real-time radiation impact assessment tools. One of the ambitious directions of DSS development is solving the inverse problem: determining the coordinates and characteristics of the emission source based on the results of field measurements. The practice of calculations for a wildfire in the Chornobyl exclusion zone shows that it is usually possible to estimate the integral characteristics of the release quite quickly if measurement data are available in the near range. However, on relatively large spatial scales this procedure requires considerable time to collect and process the data needed for an inversion calculation.

Conducting inverse modelling for events on a large spatial scale requires specialized software tools and separate methodological approaches. In addition, the task is complicated because the format and completeness of the output data are individual in each case. Methodical approaches are also currently lacking for incorporating radiation monitoring data into the atmospheric dispersion models constructed by the DSS and for their subsequent correction or refinement in real time. The issue of forming universal approaches to inverse modelling remains open.

The problem of forecasting the radiological consequences of military actions

The emergency preparedness and response phases are divided by the principal criterion (the announcement of the event class) according to the current IAEA classification for peacetime conditions. The classification of events at objects of the first threat category (for example, nuclear power plants) under declared martial law is not regulated by national or international regulations. It can be assumed that the situation around the Zaporizhzhia NPP is intermediate, given the repeated activation of the crisis center of Ukraine's regulatory body during the first nine months of the full-scale invasion; it synthesizes elements of both the preparedness and the response phases. Some examples of air mass movement trajectory modeling for the Zaporizhzhia NPP are presented in fig. 3.

Today, there is no experience in international practice of performing safety analyses of nuclear installations under war conditions, including a lack of methodology and initial data for their conduct (intensity of shelling, degree of damage to buildings and structures from the impact of various types and calibers of ammunition, personnel actions and population behavior in conditions of hostilities and extreme stress, etc.).

Considering the above, several conditional reference scenarios of severe damage to the reactor core of a VVER-1000/V-320 type reactor plant were considered as representative events for NPP industrial sites. The creation of the release library made it possible to simulate multi-unit scenarios, such as a total station blackout for all power units on-site. Additionally, for the Zaporizhzhia NPP, an accident at a spent nuclear fuel dry storage facility was considered, given the assumption of possible mechanical destruction of a spent fuel cask (as a result of hostilities or a terrorist attack). It was considered for one VSC-24 container containing 24 spent fuel assemblies with a minimum cooling time of 5 years.
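The decisive role of the delay time, whether the hold-up of the steam-gas mixture before release or the multi-year cooling of spent fuel, can be illustrated with simple exponential decay; in the sketch below, the half-lives are nominal literature values and the scenario times are chosen purely for illustration:

```python
# Sketch: activity remaining after a delay, A(t) = A0 * exp(-ln(2) * t / T_half).
# Half-lives are nominal literature values; the times and the unit initial
# activity are chosen purely for illustration.
import math

DAY = 86400.0
YEAR = 365.25 * DAY
HALF_LIFE_S = {
    "I-131":  8.02 * DAY,
    "Xe-133": 5.25 * DAY,
    "Cs-137": 30.1 * YEAR,
}

def decayed_fraction(nuclide: str, delay_s: float) -> float:
    return math.exp(-math.log(2.0) * delay_s / HALF_LIFE_S[nuclide])

# Hold-up of the steam-gas mixture in the containment before release:
for nuclide in ("I-131", "Xe-133"):
    frac = decayed_fraction(nuclide, 1 * DAY)
    print(f"{nuclide}: 24 h hold-up retains {frac:.1%} of the activity")

# Spent fuel with a minimum cooling time of 5 years: short-lived nuclides
# have decayed away and long-lived ones such as Cs-137 dominate.
print(f"Cs-137 after 5 y: {decayed_fraction('Cs-137', 5 * YEAR):.1%}")
print(f"I-131 after 5 y:  {decayed_fraction('I-131', 5 * YEAR):.1e}")
```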
The ambiguity of the results of atmospheric dispersion modeling and dose projection rests on several uncertainty factors, from the input information to the final reported endpoints. Among them, the following can be distinguished:
• detail and completeness of the input data for assessment and analysis of the situation (state of the nuclear installation, source term, pathway and effective height of the release, time resolution, physical and chemical forms, number of calculated radionuclides, etc.);
• provider and completeness of the numerical weather data parameters (spatial and temporal resolution, completeness of the list of meteorological parameters);
• atmospheric dispersion model and its parameterization;
• dose models, number of reference age groups, exposure routes;
• countermeasures model, and the form and completeness of the final presentation of the assessment or forecast results (scale, data format, deterministic or probabilistic interpretation);
• type and completeness of the accompanying databases (roughness height of the underlying ground surface, population density, land use, types of shelters, features of the infrastructure, dose coefficients, etc.);
• degree of experience and qualification of the expert.

The research results presented in this publication are the culmination of the collective efforts and distinct contributions of each author:
• Volodymyr O. Artemchuk: Conceived the research idea, provided the rationale for its relevance, and played a pivotal role in drafting the article. Additionally, contributed to the formulation of key conclusions.
• Yurii O. Kyrylenko: Conducted extensive research on the radionuclide composition and the physical and chemical forms involved. Developed a comprehensive diagram illustrating the paths of steam-gas mixture leakage and contributed significantly to the corresponding report.
• Iryna P. Kameneva: Undertook a comprehensive review of contemporary assessment tools used in global crisis centers, providing valuable insights. Focused on the intricate issues of uncertainty and conducted research related to the assimilation of emergency monitoring data.
• Valeriia O. Kovach: Conducted in-depth studies aimed at determining the effective height of discharges and provided detailed descriptions thereof. Furthermore, played a significant role in defining input parameters.
• Andrii V. Iatsyshyn: Supported the justification of the research's relevance and played a crucial role in conducting research related to the prediction of radiation consequences in military events. Additionally, contributed to the formulation of key conclusions.
Each author's unique expertise and dedication have been instrumental in the completion of this research, enriching its depth and breadth.

Conclusions

The consequences of acts of nuclear terrorism or military attacks on nuclear facilities may significantly impact the public and the environment. They are associated with high uncertainties or insufficient initial data for calculations. Modern emergency preparedness and response modeling tools (such as DSS) are not designed for use under conditions of such uncertainty.

At the same time, there are many methodical approaches to deriving the source term during accidents accompanied by a significant release of radioactive substances into the environment. These approaches help to approximate, and sometimes re-analyze, the dynamic picture of radionuclide concentrations in the air and total fallout. They also allow a comprehensive assessment of the impact on the public and the environment. A review of their application features showed that the development of approaches to the source term description is an effective tool for providing initial data in the various variants and forms necessary for calculating radiation consequences in DSS and other software tools.

The main characteristics of the source term represent a package of initial data for modeling atmospheric dispersion and dose projection during a severe accident at a nuclear power plant. It was found that a universal methodology and procedures are lacking for responding to events with a high degree of uncertainty, particularly in the data regarding the source term.

The problem of uncertainties requires further research and analysis from the point of view of the experience gained during the response since the beginning of the full-scale invasion of the Russian Federation, the military attack, and the seizure of the Zaporizhzhia NPP at the beginning of March 2022.

Figure 2. Leakage pathways scheme of the steam-gas mixture.

Table 1. Types of source terms by methods of input data entry on the share/activity of the release in the modern DSS JRODOS [5]. F7: Activity release rate [Bq/s or Bq/h] on the interval, individually for each nuclide, without reference to the reactor core inventory.
2023-11-05T16:13:36.187Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "1b6cd6c128ab95dd410305474d12ebf7a4d456b8", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/1254/1/012028/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "0dbf8fbeda31171a780efe1a961b5ade53f2c59e", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Physics" ] }
259689026
pes2o/s2orc
v3-fos-license
Review of Development and Recent Advances in Biomedical X-ray Fluorescence Imaging The use of X-rays for non-invasive imaging has a long history, which has resulted in several well-established methods in preclinical as well as clinical applications, such as tomographic imaging or computed tomography. While projection radiography provides anatomical information, X-ray fluorescence analysis allows quantitative mapping of different elements in samples of interest. Typical applications so far comprise the identification and quantification of different elements and are mostly located in material sciences, archeology and environmental sciences, whereas the use of the technique in life sciences has been strongly limited by intrinsic spectral background issues arising in larger objects. This background arises from multiple Compton-scattering events in the objects of interest and strongly limits the achievable minimum detectable marker concentrations. Here, we review the history and report on the recent promising developments of X-ray fluorescence imaging (XFI) in preclinical applications, and provide an outlook on the clinical translation of the technique, which can be realized by reducing the above-mentioned intrinsic background with dedicated algorithms and by novel X-ray sources. Introduction Since the famous discovery of X-rays by Wilhelm Conrad Röntgen in 1895 [1], various modalities of X-ray imaging have been developed, which all share the common feature of providing insights into organisms in a non-invasive manner. The basic principle of X-ray radiography was introduced into clinical practice in 1896, demonstrating the potential of X-ray imaging even for soft tissue by injecting a contrast medium [1]. In parallel to the rapid introduction of novel applications, many physicists worked hard on improving the basic X-ray technology, e.g., by improving the focusing of electron beams or optimizing the quality of the fluorescing screens to capture images of better quality [1]. While most of these early imaging methods relied on the fact that the different transmission of X-rays in structures of varying density creates contrast on an imaging screen, Max von Laue discovered the principle of X-ray diffraction by crystals in 1912 [2]. After the confirmation of the diffraction discovery by William Henry Bragg and William Lawrence Bragg, a father and son, with an alternative method, two new research fields were born, X-ray crystallography and X-ray spectroscopy [2]. X-ray crystallography has since matured to one of the key methods in solving the three-dimensional structure of proteins, information which is essential to unravel the molecular function of proteins [3]. Nevertheless, several obstacles are still associated with the structural determination of proteins, such as the necessity to obtain large and well-diffracting crystals and the instability of purified proteins, which can be overcome by new techniques and strategies that have been developed over the past decade [3]. Besides crystallography, X-ray spectroscopy has also become an essential tool for obtaining local constituents' data, in particular to identify features observed in imaging systems [4]. The sophistication of large-scale synchrotron beamlines capable of both soft and hard X-ray spectroscopy in recent years provides a wide variety of experiments that combine X-ray absorption (XAS) and X-ray emission (XES) to obtain detailed electronic structural information from a given absorber [5].
Besides techniques and applications using X-rays to study the smallest structures, radiological techniques were also gradually improved over the years, such as the now widespread computer tomography [1]. X-ray computed tomography (CT) consists of measuring attenuation profiles of transverse slices of patients from many different angular positions by using a fan or cone beam from an X-ray tube, in conjunction with a detector array traveling on a circular path opposite the X-ray source around a patient [6]. CT scans are now typically used to diagnose many diseases, such as various types of cancers, heart diseases such as myocardial disease and analyses of the liver or pancreas of patients [7]. Since the 1970s and 1980s, the speed of image acquisition has substantially improved, and modern CT scanners are capable of imaging patients in a matter of seconds or less [8]. A major drawback of CT is that large masses within the gastrointestinal tract may not be visible during scans, and more sophisticated methods such as dual-energy CT are required to differentiate materials with the same attenuation at a certain energy for better lesion depiction [7,9]. X-ray fluorescence (XRF) spectrometry, on the other hand, is based on the principle that individual atoms emit characteristic X-ray photons upon excitation by an external energy source. The abbreviations XRF and XFI are often used interchangeably, hence it is noted here that both versions will be used in the following, where the choice follows the usage of the cited publication. In contrast to other molecular imaging methods, the spatial resolution in XFI only depends on the size of the applied X-ray beam and does not face any physical limitations [10]. In order to make entities of interest visible with XFI, dedicated markers have to be coupled to them, such as metallic nanoparticles or molecular tracers, for example, iodine atoms. Considering that these markers do not decay over time, longitudinal studies are possible to study the biodistribution of labeled entities over long timespans in one and the same object. Furthermore, several entities can be tracked simultaneously by using different marker elements, and measurements on completely different size scales, from full-body in vivo scans of small animals to the single-cell level, are feasible with XFI [11,12]. Especially the last two aspects are an advantage compared to other commonly used imaging methods, such as positron emission tomography (PET), where only single markers can be imaged over limited timespans due to the half-life of only 110 min of ¹⁸F, the workhorse of PET [13]. Compared to XFI, the sensitivity of CT is reduced, for instance, a tumor marked with gold nanoparticles in a tumor-bearing mouse model is undetectable for CT, while XFI can clearly locate it [12], as described in detail below. The main reason for this difference in the detection sensitivity is the fact that the contrast in CT arises due to a difference in photon counts in the forward (transmission) direction, while XFI is a spectroscopic method, where the fluorescence photons are emitted isotropically and the detection limit only depends on the spectral background in the signal region determined by multiple Compton scattering. Glocker and Schreiber were the first to perform quantitative analysis of materials using XRF in 1928; however, only in the 1950s did the first commercially produced X-ray spectrometers become available, making the technique practicable [14].
Since then, improvements on the source, as well as on the detector side, have resulted in modern benchtop and handheld XRF systems, which are nowadays used in a variety of disciplines, such as forensic science, pharmaceuticals, cultural heritage and many others. This review article aims to provide a historical overview of the development of X-ray fluorescence measurement techniques and to summarize the recent developments in the field. A detailed description of different experimental setups, the X-ray sources used and the thereby achievable detection limits in various application areas is presented. In addition to the current status in preclinical research, the translation of XFI to clinical applications is presented. History and Developments of X-ray Fluorescence Measurement Techniques In 1968, the first measurements of iodine in vivo in humans using an XRF setup were performed, resulting in a high concentration of about 400 µg/g in the thyroid gland [15]. Since the 1970s, the techniques have been improved to also allow determination of cadmium, lead, mercury, platinum and gold, with the main focus of connecting metal element abundance with surveillance of heavily exposed workers [16]. The in vivo application of X-ray fluorescence analysis was found to be limited to elements with atomic numbers larger than about 40 in 1980 [17] due to the absorption of the emitted characteristic radiation within the object. In those first in vivo measurement setups, different radiation sources, radionuclide sources and X-ray tubes were compared, and the emitted radiation was detected with Ge(Li) detectors in combination with a collimator in front of the detector [17]. Recorded spectra of X-ray fluorescence measurements of lead in water when using ⁵⁷Co sources showed a high background level, mostly caused by multiply scattered primary photons, and thereby showed limits in the minimum detectable concentration [17]. Similar minimum detectable concentration values could be achieved with X-ray tubes in combination with dedicated filters. However, those reported studies were carried out in human fingerbones, and it is stated in [17] that it is not possible to carry out detailed studies of the distribution of lead in the skeletons of occupationally exposed persons by means of X-ray fluorescence measurements in vivo because of the low sensitivity in measurements of deep-lying bones. Instead, X-ray fluorescence analysis is suggested to be used on autopsy samples, and hence only in an invasive way. The minimal detectable concentration of cadmium was studied by using a kidney model placed in water, with the conclusion that the sensitivity is very dependent on the layer of tissue between the detector and the kidney surface, and that the measurements mainly reflect the concentration in the kidney cortex instead of the whole organ [17]. An improved technique presented in [16] uses partly polarized photons and a detection angle of 90° to achieve a minimum background. A modified X-ray therapy tube is used in combination with rods and foils in order to make the scattered beam more monoenergetic and to also reduce the absorbed dose to the person sitting on a specifically designed chair. Kidney localization is performed using ultrasound prior to the fluorescence measurements. Collimated detectors containing thick sensors of either Si(Li) or Ge point at right angles to both the primary and the scattered beam, with the goal of reducing the number of background counts.
Optimization strategies in [8] mainly consist of increasing the measurement time and using an X-ray tube, which mainly produces characteristic radiation of a high fluence rate, to further decrease the detection limit. A detailed discussion regarding the choice of the X-ray source, geometry and measurement sensitivity is presented in [18]. There, three main factors which affect the choice of the photon source are listed, namely, the need to maximize the lead X-ray fluorescence yield per incident photon, an adequate penetration depth, as well as minimizing the spectral background in the lead signal region. Compton-scattered photons are identified as the main source of background; hence, it is important to have the Compton scatter peak as far as possible from the lead X-ray peaks of interest [18]. ¹⁰⁹Cd is identified as the source with the best parameters, together with a special collimator design to optimize the field of view and thereby reduce unnecessary doses to the subject and minimize the energy range and intensity of the detected Compton-scattered photons [18]. The estimations presented in [18] are based on the assumption that Compton scattering is isotropic in the laboratory system, and they suggest normalizing the detected lead counts to the coherent scatter peak (i.e., from Rayleigh scattering) in order to compensate for variations in object size and shape and in overlying tissue thickness. Besides measurements of lead concentrations in exposed workers, cisplatin, a cytostatic agent which has been proven to be successful in the treatment of malignant tumors, was followed in vivo by means of X-ray fluorescence [19]. In this study, a measurement setup for plane-polarized photons, in which the primary beam is scattered in two mutually orthogonal directions at a target with a low atomic number but a high density, is used to reduce the background contribution from incoherently scattered (Compton) photons to about 40%, compared to unpolarized radiation [19]. Similar to the previous studies mentioned above, the minimum detectable concentration at a depth of 4 cm is about 8 µg/g for a 30 min measurement time [19]. However, one cannot compare this low marker concentration with preclinical experiments, because in a human kidney, the total mass of cisplatin is, even at such low concentrations, effectively higher than in a mouse kidney. Even though the method based on ¹⁰⁹Cd was considered very effective for a number of years, it is not capable of measuring low-level lead concentrations as they are present in the general population [20]. With the help of Monte Carlo simulations and experiments using phantoms, Nie et al. [20] designed an improved system and predicted that it would be about three times more sensitive than the conventional system. In parallel, the construction of dedicated synchrotron light sources providing high-flux, focused, monochromatic, tunable and polarized X-ray pencil-beams, and the development of computer-assisted tomography for imaging, led to a new technique called X-ray fluorescence tomography [21]. The first runs were carried out at the National Synchrotron Light Source at Brookhaven National Laboratory in 1985, using a setup consisting of a monochromatic, focused and collimated X-ray beam, hitting a sample mounted on a goniometer and measuring the emitted characteristic X-ray fluorescence photons with a Si(Li) detector positioned at 90°.
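The kinematic argument behind these source and geometry choices can be made explicit with the Compton formula E' = E / (1 + (E / m_e c²)(1 − cos θ)); the short sketch below compares the scattered energy of the 88 keV ¹⁰⁹Cd gamma line with the Pb Kα1 line at about 75 keV (nominal textbook energies):

```python
# Sketch: Compton-scattered photon energy versus scattering angle,
# E' = E / (1 + (E / m_e c^2) * (1 - cos theta)), used to judge how far the
# scatter peak sits from the Pb K X-ray signal region. Line energies are
# nominal textbook values.
import math

ME_C2_KEV = 511.0        # electron rest energy
E0_KEV = 88.03           # 109Cd gamma line
PB_KALPHA1_KEV = 74.97   # Pb K-alpha1

def compton_energy(e0_kev: float, theta_deg: float) -> float:
    theta = math.radians(theta_deg)
    return e0_kev / (1.0 + (e0_kev / ME_C2_KEV) * (1.0 - math.cos(theta)))

for theta in (60, 90, 120, 150, 180):
    e = compton_energy(E0_KEV, theta)
    print(f"theta = {theta:3d} deg: Compton peak at {e:5.1f} keV "
          f"({e - PB_KALPHA1_KEV:+5.1f} keV relative to Pb K-alpha1)")
```

Under these nominal values, near-backscatter geometries push the single-scatter Compton peak roughly 10 keV below the Pb Kα1 line, whereas at 90° the two nearly coincide for this source, which is consistent with the polarized 90° setups described above relying on polarization rather than kinematics for background suppression.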
Already, those early studies could show that computerized fluorescence tomography of small samples to study elemental distributions of minor and trace elements is practical, and that spatial resolutions on the micrometer scale are feasible [21]. However, due to the limited access to synchrotron facilities, X-ray fluorescence computed tomography (XFCT) could not be made widely available for experiments, leading to the first attempts of designing bench-top systems [22]. At least one early study [23] concluded that an XFCT system using a special X-ray tube that can produce quasi-monochromatic X-rays would not be practical for human applications in terms of achievable spatial resolution, minimum detectable concentration and scanning time [22]. In [23], a parameter set for medical applications was studied, consisting of a water cylinder with a 30 cm diameter as phantom, gadolinium (Gd), iodine (I) and gold (Au) as marker substances, an absorbed dose of 10 mGy, as well as a dedicated detector and collimator geometry. The collimator was considered ideal, meaning that the lamellas were assumed to be infinitely thin and, at the same time, perfect radiation absorbers. Likewise, the multiple simulated detector elements were considered ideal, meaning that real properties such as efficiency and escape peaks were neglected. Different setups (fan-beam or pencil-beam), phantom geometries and sizes, detector angles, as well as excitation energies are studied in simulations and experiments in [23], with the conclusion that the main reasons for the weak performance of XFCT in a clinical scenario stem from the underlying physics, and therefore, cannot be overcome by technological progress on a mid- or long-term time scale. This has long been seen as a show-stopper for translation into clinical applications. Translation of X-ray Fluorescence Imaging (XFI) to Clinical Applications This intrinsic "background problem" [24] arising in large objects can, however, be solved by using X-rays of high brilliance, in combination with advanced spatial and spectral filtering, leading to the necessary reduction of the intrinsic spectral background in X-ray fluorescence imaging (XFI) [24]. It is well-known from several previous studies that this problematic background arises from multiple Compton-scattering events, which lower the photon energy into the signal range of interest. The larger the object, the higher the amount of background photons, as the probability of many sequential Compton-scattering events of each single incident photon is larger when compared to a preclinical setup with only mouse-sized objects, as the human-sized objects are significantly larger than the mean free path length of the X-ray photons. An advanced spatial and spectral filtering algorithm can be derived from the strong anisotropy of the background and used to minimize intrinsic background contributions to the measured signal, without concomitant signal losses [24]. It was found that the main factors determining the yield of each photon path depend on the total path length and the relative solid angle of a detector's pixel with respect to the position of the Compton scattering. Taking these main factors into account, the strong anisotropy of the Compton background can be explained and used for a pixel selection algorithm.
Based on this finding, a numerical study in [24] demonstrates the practicability of XFI in human-sized objects, as immune cell tracking with a minimum detection limit of 4.4 × 10⁵ cells or 0.86 µg of gold in a cubic volume of 1.78 mm³ can be achieved [25]. A comparison of the XFI setup presented in [25] with currently available clinical molecular imaging methods reveals the up- and down-sides of the proposed setup. A clear advantage when comparing XFI to PET/SPECT is the achievable spatial resolution in the mm range, which is only limited by the size of the incident X-ray beam in XFI, whereas physical limits such as acollinearity and positron range do exist in PET [26], leading to typical resolution values between 5 and 10 mm [25]. Sensitivity levels for micromolar and nanomolar gold concentrations, as demonstrated in [25], lie in between the achievable levels in magnetic resonance imaging (MRI) and nuclear medicine, similar to image acquisition times, which strongly depend on the area of interest [25]. Like any other imaging technique, XFI requires dedicated markers, e.g., gold nanoparticles in the context of human-sized objects, but unlike the radioisotopes needed in PET/SPECT, XFI markers do not decay over time, and hence allow longitudinal studies over arbitrarily long time windows (as long as the markers remain in the body). In addition, not only can a single marker element be measured in an XFI scan, but also multiple marker elements simultaneously, called multi-tracking, which is another clear advantage over other imaging modalities as different aspects can be studied in a single scan. A current drawback of the human-sized XFI setup is the high effort required in technology, especially for suitable X-ray detectors and collimators, which become very costly when produced at the size used in the simulations [24,25]. The simulations presented in [25] used an X-ray detector with a big hole on one side to move the voxel phantom inside, which would also be required in a realistic scenario with patients. However, this, in turn, leads to a loss of the sensitive detector area, which is crucial to reach a high sensitivity level. Besides the fact that more simulations are required to determine an optimal detector layout, 4π detectors do not exist as yet, and typical detection areas are rather in the range of a few tens of mm², as used in the demonstration measurements presented in [24]. Therefore, further developments in suitable X-ray detector technology, which is capable of measuring energies and absolute numbers of photons at the same time, are one essential step towards a clinical translation of XFI. Furthermore, the use of a synchrotron X-ray source as simulated in [25] is impractical for clinical applications, as those machines are huge, expensive and only have very limited access. One potential solution for the translation into clinics is the use of compact, laser-driven X-ray sources, which have become an active field of research in recent years [27][28][29][30]. By using state-of-the-art high-power lasers, it is possible to accelerate electrons to relativistic energies over very short distances due to the creation of highly intense plasma wakefields. Typical laser-wakefield accelerators (LWFA) provide an accelerating field gradient more than 1000 times higher than conventional radiofrequency (RF)-cavity-driven accelerators and are thus much more compact [29].
The concept presented in [29] uses only one single high-power laser beam, which is divided into two synchronized light pulses, of which one pulse drives the LWFA and the other one acts as an undulator by scattering from the relativistic electrons. By using this principle of inverse Compton scattering (also called Thomson scattering in the energy range relevant for medical imaging), quasi-monoenergetic and tunable X-rays can be produced [29]. A dedicated design study for a compact laser-driven source for medical X-ray fluorescence imaging presents an optimization procedure, with the goal to produce X-ray beams of sufficient quality for XFI studies [31]. Several recently published studies demonstrate the basic requirements of the source proposed in [31], such as the stability of a compact laser-plasma accelerator over a typical clinical working day of 8 h, as well as the energies required for producing XFI-suitable incident beams [32]. As high-sensitivity XFI measurements require an incident bandwidth below 15% FWHM [24], which is not fulfilled by typical Thomson sources, an additional electron-focusing device, namely an active plasma lens, has to be implemented in the setup in order to produce tunable X-rays with percent-level bandwidths [33]. The very first XFI demonstration measurements at such a source have shown that the principle works; however, improvements such as an increase in the laser repetition rate and background reduction on the source side are still necessary [34]. Current Status and Recent Promising Developments of Preclinical XFI Research Considering that X-ray fluorescence measurements have been seen as impractical for routine in vivo imaging, especially in terms of the scanning time [22], several research groups have instead focused on imaging of smaller objects, mostly in connection with gold nanoparticles (GNPs). In [22], ordinary polychromatic diagnostic energy X-rays from a conventional X-ray tube were used to perform XFCT imaging of GNP-containing objects inside phantoms mimicking tumors/organs within a small animal. While earlier developments of XFCT benchtop settings produced rather disappointing results, adapted approaches using a pencil-beam from polychromatic X-rays could demonstrate the detection of biologically relevant concentrations of GNPs (1-2% by weight) [12] (see Figure 1). Figure 1 demonstrates that the detection sensitivity of XFCT is substantially higher compared to CT. While [12] shows this convincing result, there is no detailed discussion on why this is the case; thus, we wanted to explicate this finding in more detail. CT, or for reasons of simplicity a general X-ray absorption image, relies on a signal difference between neighboring rays, leading to a visible contrast. If we assume, for simplicity, that there are two neighboring rays which traverse a given object of the same thickness for both rays, then a contrast is visible if and only if the difference in (detected) photon counts is significantly larger than the statistical noise of both counts. Such a significant difference can only arise if, along the volume of both rays, there is a sufficient difference in the electron density. Therefore, if the tumor size and/or its density difference compared with its surrounding is too small, then no significant signal difference over noise is possible, and the tumor remains invisible, as shown in Figure 1.
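This count-statistics argument can be written out in a few lines; the following sketch applies Poisson noise and a Rose-type visibility threshold (k of roughly 3 to 5) to two neighboring rays, with purely hypothetical counts:

```python
# Sketch: transmission contrast versus Poisson noise for two neighboring
# rays. A feature is taken as visible when the count difference exceeds
# k standard deviations of the difference (Rose-type criterion, k ~ 3-5).
# All counts below are hypothetical.
import math

def contrast_to_noise(n1: float, n2: float) -> float:
    """CNR of two independent Poisson counts (variance = mean)."""
    return abs(n1 - n2) / math.sqrt(n1 + n2)

def visible(n1: float, n2: float, k: float = 5.0) -> bool:
    return contrast_to_noise(n1, n2) >= k

n_bg = 10_000  # counts in the ray through background tissue
for deficit in (0.001, 0.01, 0.05, 0.20):  # fractional count deficit, tumor ray
    n_tumor = n_bg * (1.0 - deficit)
    cnr = contrast_to_noise(n_bg, n_tumor)
    print(f"deficit {deficit:5.1%}: CNR = {cnr:5.2f} "
          f"-> visible: {visible(n_bg, n_tumor)}")
```

With these hypothetical numbers, only deficits of several percent and more cross the visibility threshold, which is the count-statistics reason why a small, weakly attenuating tumor can stay invisible in CT while remaining detectable through its fluorescence signal.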
The detection sensitivity in X-ray imaging and CT hence solely relies on density differences within the object to be measured. Obviously, the local amount of gold nanoparticles in Figure 1 does not suffice to create a sufficiently increased electron density compared to the surroundings of the tumor.
In contrast to X-ray absorption imaging, X-ray fluorescence imaging does not rely on such local density differences, but only on the ability to excite and detect a sufficient number of characteristic fluorescence photons, whereby this number needs to be put in relation to the spectral background in the element-specific signal energy region. Since the attenuation of X-rays in mouse-sized objects does not play a major role (a key advantage over optical fluorescence), XFI only requires a sufficiently large local number of markers at the site of interest, such as a tumor or inflammatory region, but no contrast with the surroundings. This difference in the corresponding image generation processes explains the much higher sensitivity of XFI compared to X-ray absorption imaging. Besides pencil-beam approaches, cone-beam implementations of XFCT have also been developed, which allow fluorescence signal acquisition, a crucial aspect for making XFCT suitable for in vivo imaging under the practical constraints of X-ray dose and scan time [12]. In a typical benchtop XFCT setup, as presented in [12], diagnostic X-ray tubes are used in combination with dedicated filters in order to optimize the incident spectrum in terms of quasi-monochromatization and dose; for example, 125 kVp X-rays filtered with 2 mm of tin. With such a setup, a tumor-bearing mouse injected with GNPs was successfully imaged, demonstrating the capabilities of benchtop XFCT under the conditions most relevant to in vivo imaging [12]. In recent years, nanoparticles (NPs) have been emerging as attractive new contrast agents in biomedical imaging due to their capacity for higher sensitivity and for (targeted) drug delivery. In addition, they offer flexible tailoring of both physical and biochemical properties [35]. NPs of different elements have been used for XFCT demonstration experiments, e.g., Mo, Gd and Au, reaching different levels of spatial resolution and sensitivity [35]. In a recent proof-of-principle study presented in [35], mice were imaged in vivo in an XFCT setup reaching 100 µm spatial resolution and demonstrating longitudinal imaging by imaging each mouse 5 times (1 h, 1 week, 2 weeks, 5 weeks and 8 weeks post tail-vein injection of an NP suspension at a 1% Mo mass fraction). The setup combined a laboratory pencil-beam arrangement with the sensitive detection of tailored MoO₂ NPs and real-time monitoring of respiration and body temperature under anesthesia [35]. A liquid-metal-jet microfocus source was coupled to a multilayer Montel mirror, which had a Gaussian reflectance profile centered at 24 keV with a FWHM of about 1.4 keV, hence creating a quasi-monochromatic pencil beam of 100 µm in diameter [35]. These spectral characteristics are ideal for X-ray fluorescence studies with MoNPs, as their K-absorption edge at 20 keV allows a significant separation from the main Compton-scattering peak at energies above 23 keV [35]. A whole-body projection image with a size of 40 mm × 70 mm took around 15 min, while a local-region XFCT and CT with 30 projections, each covering 40 mm × 20 mm, took around 1 h to acquire [35]. For the 2D 15-min scan, a radiation dose of 1 mGy was estimated by means of Monte Carlo simulations, using the same imaging geometry as in the experiment and the voxelized digital mouse phantom DIGIMOUSE [36] as the simulated object [35]. However, the XFCT mode required a dose of 22 mGy.
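The dose figures just quoted, combined with the linear exposure-time scaling of the detection limit reported in the next paragraph, imply a simple three-way trade-off. The sketch below uses the published baseline values from [35]; strictly linear scaling of all three quantities is an assumption taken from the text.

```python
# Baseline values for the local-region XFCT scan reported in [35].
BASE_LIMIT_MG_ML = 0.05   # Mo detection limit at baseline pixel exposure
BASE_DOSE_MGY = 22.0      # radiation dose of the XFCT mode
BASE_SCAN_H = 1.0         # approximate acquisition time

def scaled(exposure_factor: float) -> tuple[float, float, float]:
    """Detection limit, dose and scan time after multiplying the per-pixel
    exposure time by `exposure_factor` (linear scaling assumed)."""
    return (BASE_LIMIT_MG_ML / exposure_factor,
            BASE_DOSE_MGY * exposure_factor,
            BASE_SCAN_H * exposure_factor)

for factor in (1, 2, 5):
    limit, dose, hours = scaled(factor)
    print(f"exposure x{factor}: limit {limit:.3f} mg/mL, "
          f"dose {dose:.0f} mGy, scan {hours:.0f} h")
```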
The relative clearance of the measured whole-body signal over time could be correlated to the clearance of the injected nanoparticles, reaching signals close to the background level after 8 weeks [35]. It must be noted here that no quantitative conclusions could be drawn from the full-body projection images, since effects such as self-absorption of fluorescence photons can only be modeled from tomographic data [35]. Therefore, additional in vivo XFCT and CT scans were acquired in [35] with the liver as the region of interest, due to the major accumulation of NPs observed in that organ; these were then analyzed with an iterative XFCT reconstruction algorithm that allows quantitative determination of NP concentrations. The detection limit of the imaging system was estimated at 0.05 mg/mL of Mo, but it is noted in [35] that this limit can be improved linearly with an increased pixel exposure time, which, however, implies a higher radiation dose and longer scan times. A drawback of this achievement is that no elements heavier than Mo can be imaged, due to the current limitation in photon energy of the liquid-metal-jet source used. Besides the use of pencil-beam setups with near-monochromatic incident radiation, in vivo biodistribution measurements of gold nanoparticles (GNPs) have also been demonstrated with polychromatic fan-beam X-rays [37]. The combination of a transmission CT detector installed in an existing pinhole XRF imaging system using a two-dimensional cadmium zinc telluride (CZT) camera allows functional and anatomic information to be acquired on the same platform [38]. The pinhole XRF system used in [38] comprised a tungsten fan-beam collimator, a lead pinhole collimator and a CZT camera, reaching a spatial resolution of 4.4 mm. Due to the different optimal energy spectra for XRF and CT imaging (high energies above the K-edge of GNPs for XRF, and low energies to produce sufficient contrast on CT), images had to be acquired sequentially on the same platform [38]. Nevertheless, the use of a 2D array detector reduces the excessive image acquisition time and radiation dose, due to the fact that 2D XRF images can be acquired directly without image reconstruction [38]. The radiation dose delivered in the dual-imaging setup was 59.1 mGy for the XRF images and an additional 321.7 mGy for the CT image acquisition, which should be reduced by optimizing the X-ray tube parameters, the filter material used, and the scanning procedure in general [38]. Moreover, the detection limit of 0.01 wt.% needs to be further improved, e.g., by replacing the CZT detector used with pixelated detectors of better energy resolution and higher maximum count rate performance [38]. Conclusions Since the very first applications in the late 1970s, the method of X-ray fluorescence imaging has made substantial improvements, especially regarding the achievable minimum detection sensitivity and its usage in different application areas. While the first studies mainly used radioactive isotopes and special geometric configurations, setups used nowadays can be realized either at synchrotrons or with conventional X-ray sources, where the applied beam diameters and especially the radiation dose can be monitored and controlled with much higher precision. Even though the suitability of X-ray fluorescence imaging for preclinical and clinical applications was considered unlikely in the early 2000s, there has been tremendous progress in improving the modality in recent years.
On the one hand, compact setups have been developed in order to enable measurements at existing X-ray systems, while on the other hand, different strategies have evolved to overcome intrinsic background limitations. The use of dedicated filters, collimators, pinholes or pixelated detectors nowadays allows the detection of low marker concentrations even in large objects, which clearly paves the way towards future clinical applications. A current limitation lies in the fact that measurements of the highest sensitivity can only be performed at synchrotron beamlines, which in turn strongly limits the potential applications. Therefore, it is essential to further develop compact systems which will allow the modality to be used in laboratory and clinical environments. Different strategies for such compact X-ray systems have already been demonstrated, most of which combine XFI and CT imaging in order to gain functional and anatomical information in one measurement. The main challenge of these systems currently lies in the fact that the incident radiation from a conventional X-ray tube needs to be focused and monochromatized in order to achieve measurements of the highest spatial resolution and detection sensitivity. One promising solution is the use of dedicated X-ray optics, which allow a certain X-ray energy of interest to be focused; however, their efficiency needs to be improved to allow measurements with acceptable imaging times and radiation doses [39]. Overall, the application areas of XFI are manifold, ranging from measurements of elemental distributions in non-destructive testing, to uptake studies of certain entities into single cells, to different applications in medical imaging, such as biodistribution studies of new medical drug compounds or tumor localization measurements with the highest precision. Thus, XFI bears the convincing potential to complement other already well-established molecular imaging methods in areas where XFI offers unprecedented data, e.g., the simultaneous in vivo tracking of different immune cell subtypes in preclinical research, with both high spatial resolution and sensitivity.
The Versatile Modiolus Perforator Flap Supplemental Digital Content is available in the text. Background: Perforator flaps are well established, and their usefulness as freestyle island flaps is recognized. The locations of vascular perforators and the classification of perforator flaps in the face remain debated subjects, despite several anatomical studies showing consistent findings. In our experience using freestyle facial perforator flaps, we have located areas where perforators are consistently found. This study focuses on a particular perforator lateral to the angle of the mouth, at the modiolus, and on the versatile modiolus perforator flap. Methods: A cohort case series of 14 modiolus perforator flap reconstructions in 14 patients and a color Doppler ultrasonography localization of the modiolus perforator in 10 volunteers. Results: All 14 flaps were successfully used to reconstruct the defects involved, and the location of the perforator was at the level of the modiolus as predicted. The color Doppler ultrasonography study detected a sizeable perforator at the level of the modiolus lateral to the angle of the mouth within a radius of 1 cm. This confirms the anatomical findings of previous authors and indicates that the modiolus perforator is a consistent anatomical finding, and flaps based on it can be recommended for several indications, from the reconstruction of defects in the perioral area to the cheek and nose. Conclusions: The modiolus is a well-described anatomical area containing a sizeable perforator that is consistently present and readily visualized using color Doppler ultrasonography. We have used the modiolus perforator flap successfully for several indications, and it is our first choice for perioral reconstruction. MATERIALS AND METHODS We performed a volunteer study to confirm the location of the modiolus perforator using CDU on 20 hemifaces, and a prospective clinical series using the modiolus perforator as a pedicle for a freestyle perforator flap design. CDU Volunteer Study We examined 10 volunteers bilaterally by CDU, 3 men and 7 women aged 26 to 57 (median 43), using a BK Medical color Doppler ultrasonographer with a 10- to 12-MHz linear transducer. The technique was performed as described above, and the location of the perforator was marked with a permanent marker (red dot). The corresponding CDU screen images are shown next to the clinical image (Fig. 1). Clinical Study We reviewed 14 cases, 3 male and 11 female patients aged 6 to 85, reconstructed with an island flap based only on the modiolus perforator lateral to the angle of the mouth. Four patients were smokers. The surgical indications were defects following removal of basal cell carcinoma in 6 cases, malignant melanoma in 4, squamous cell carcinoma in 2, atypical fibroxanthoma in 1, and trichoid epithelioma in 1. The reconstructions were performed on the cheek in 6 cases, the upper lip in 5, the nose in 2 and the lower lip in 1. The operative technique was either freestyle exploration or guided by preoperative CDU localization. Freestyle Technique The perforator location was explored through a nasolabial incision in a caudal direction until the perforator was localized. The flap was dissected circumferentially around the perforator, enabling free rotation (Fig. 2 and Video 1) (See Supplemental Digital Content 1, which displays the versatility of the modiolus perforator flap and range of motion. 
This video is available in the "Related Videos" section of the Full-Text article on PRSGlobalOpen.com or available at http://links.lww.com/PRSGO/A178.) The perforator was not skeletonized in any of the cases. A simple detachment of the surrounding adhesions to the zygomaticus major, risorius, and depressor anguli oris muscles was performed to enable flap rotation (Fig. 3). CDU-guided Technique The facial artery was identified below the angle of the mouth. The artery was then followed slowly upward until the modiolus perforator was identified. The location was then marked with a permanent marker. The flap was designed based on the CDU findings and the size of the defect, and surgery commenced as described above (Fig. 4). CDU Volunteer Study We identified a usable perforator close to the modiolus by CDU bilaterally in 10 subjects, 3 males and 7 females, median age 42 (26-57) years. In the majority of cases, we found that the perforator branched off from the main artery as a single branch; however, in a few cases, it divided into 2 or 3 branches. In most cases, the perforator was curved or even S-shaped as it passed between the muscles. The perforator branching point from the facial artery is marked with a red dot in the figures. Despite the observed variations in the branching point, the perforator appeared to pass through to the subcutis lateral to the angle of the mouth at the level of the modiolus in all cases. Clinical Study We performed 14 perforator flaps based on the modiolus perforator in 14 patients (Table 1). The location of the perforator was at the level of the modiolus as predicted. The reconstructive goal was achieved in all 14 cases; however, in 3 patients, who were heavy smokers, a revision and further corrective procedures were needed due to distal tip necrosis. The perforator was identified by surgical exploration in 6 cases and guided by CDU in the remaining 8 cases. DISCUSSION The modiolus has been described as a fibrous chiasma, a condensation of the deep and superficial facial fascia, where the facial muscles join to form an insertion at the angle of the mouth. 3,4 The facial artery runs lateral to it, superficial to the buccal fat pad, in a window marked by the zygomaticus major muscle superiorly and the risorius muscle inferiorly. [4][5][6] The results of this article show that this window contains a sizeable perforator that is consistently present and can readily be visualized and identified by CDU. We refer to it as the modiolus perforator. The facial artery is kinked in a lazy-S shape in this area, which adds to its mobility during facial expression and mouth opening. This added mobility has been beneficial for the advancement of some of our flaps, up to 4 cm, especially when used in a V-Y fashion (Fig. 3). Three anatomical studies describe the facial artery perforators and share findings similar to ours, indicating the consistency of a perforator lateral to the angle of the mouth. Hofer et al 7 reported a series of 5 patients in combination with an anatomical study that showed a high density of perforators lateral to the mouth. Ng et al 8 named it reference point A, inferolateral to the angle of the mouth, and Qassemyar et al 9 referred to the perforator lateral to the angle of the mouth. CDU is known to be a good tool for identification of the facial artery; however, localization of the small perforators has until now been deemed unclear or unavailable. [7][8][9] We tested the accuracy of CDU as a tool for identification of the modiolus perforator on a random sample of 10 individuals (20 hemifaces). 
We were readily able to identify a sizable perforator at the modiolus level bilaterally in all cases (Fig. 1). The modiolus perforator is a consistent finding and can easily be located by pre- or perioperative CDU, or simply by careful exploration just lateral to the fibrous skin attachments of the orbicularis muscle. The modiolus perforator flap is in fact a variation of the well-known nasolabial flap and has great potential to become a workhorse flap for the reconstruction of lip, cheek, and selected nasal defects. The flap can be designed either as a V-Y advancement flap or a propeller flap, depending on the location and the size of the defect. It allows for successful reconstruction of a whole anatomical subunit, replaces like with like, and has a forgiving donor site, which can be closed directly. The localization of the perforator with CDU will most certainly make it more accessible in the near future. We have successfully used both propeller and advancement modiolus perforator flaps for different indications, and it has become our first choice for perioral reconstruction. This article appears to be the first to recognize the benefits of CDU in the localization of facial artery perforators for a freestyle flap design, and we postulate that this will positively affect its application in the future. CONCLUSIONS We have shown that perforators can readily be visualized by the operative plastic surgeon using a modern CDU device, and have verified the consistency of a significant facial artery perforator lateral to the angle of the mouth, the modiolus perforator. The average diameter of 1 mm provides a reliable vascular basis for an advancement or propeller flap design for various reconstructive purposes in the area.
The morphology and scaling law model of polyvinylidene fluoride/carbon fiber using electrospinning technique Electrospinning is a simple process used to produce polymer fibers with diameters ranging from micrometers to nanometers. The purpose of this study was to investigate the morphology and scaling law model of Polyvinylidene Fluoride/Carbon (PVDF/Carbon) fiber mats. Fibers were made at concentrations of 15% w/w (FPC1), 18% w/w (FPC2), 21% w/w (FPC3) and 24% w/w (FPC4), with 2% (w/w) carbon added to each solution. The electrospinning process parameters used were a voltage of 10 kV, a needle-tip-to-collector distance of 12 cm, and a flow rate of 0.1 ml/hour. The results showed that the morphology of FPC1-FPC2 fibers was beaded, while FPC3-FPC4 fibers had bead-free structures. The average diameters of FPC1, FPC2, FPC3 and FPC4 were 910 nm, 1123 nm, 1349 nm and 1506 nm, respectively. The scaling law model yielded an R-squared (R²) of 0.9992 for the experiment, indicating a highly linear relationship between theory and experiment. The Polyvinylidene Fluoride/Carbon (PVDF/Carbon) fiber composite will be used for water filtration. Introduction Electrospinning is a simple process used to produce polymer fibers with diameters ranging from micrometers to nanometers [1][2][3]. The fibers are made by spraying a polymer solution using a high-voltage electrostatic field, supported by a solution flow pump mechanism. Fabrication via electrospinning can produce continuous fibers for large-scale, practical and efficient use, with easily controlled dimensions [1][2][3]. Electrospinning is used to produce fine, dense nanofibers with very small diameters, large surface areas, and very small pore sizes. These properties allow nanofibers to be used for various applications, such as filtration [4,5], water filtration [6], wound dressings [6,9], drug delivery [2,6,9] and tissue engineering [1,6]. Many polymers have been successfully electrospun into fibers in various studies, for example Poly (acrylonitrile) [9], Poly (ether sulfone) [10], Poly (vinyl alcohol) [1,6] and Poly (vinylidene fluoride) [10,11]. Polyvinylidene Fluoride (PVDF) is hydrophobic, and this polymer has larger molecules than other conventional membrane polymers, for example PVP, PVA or PPL [11][12][13], which makes it difficult to electrospin. Therefore, the properties of PVDF are stabilized by mixing in carbon materials. The resulting combination of the two materials can be used as an engineering material for air and water filters because it contains hydrophilic functional groups such as carboxyl, epoxy, and hydroxyl [15]. Thus, the two materials are well suited for combination. Previous studies have reported electrospinning of pure PVDF with carbon [15][16][17], obtaining beadless fiber morphologies with diameters of 80-160 nm [16][17], and research on pure PVDF has also produced beadless fibers with diameters of 150-400 nm [17]. However, morphological studies and predictions of fiber diameter using mathematical models are still few. According to Jauhari (2020), prediction is important in order to reduce the risk of failure in forming the polymer fiber mat, so that a stable diameter is produced [13][14][15]. 
The complex system is understood through scaling law (SL) modeling, which combines the process parameters (flow rate, concentration, conductivity, electric field, inertia, surface tension, and viscosity) [14,15,21]. In this study, we discuss the morphology of PVDF/carbon fibers and the prediction of fiber size using a scaling law model. Preparation of Reduced Graphene (rGO). Three grams of carbon and 18 grams of KMnO4, to which a H2SO4:H3PO4 solution (360:40 mL v/v) was added, were stirred at 50 °C for 12 hours. The solution was then combined with 30 mL of 30% H2O2 and left to settle for 1 day. Next, the solution was centrifuged at a rotation speed of 3,000-4,000 rpm for 15-20 minutes and decanted to separate the filtrate and the residue. The residue was washed in 200 mL of distilled water, then washed twice with 200 mL of 30% HCl, 200 mL of acetone, and 200 mL of ethanol. The residue was dried to obtain solid Graphene Oxide (GO). GO was dissolved in 100 mL of distilled water at 1% (w/v) and heated to boiling. The solution was then stirred while adding 5% (w/v) ascorbic acid at 60 °C for 2 hours. After that, it was centrifuged at a rotation speed of 3,000-4,000 rpm for 15-20 minutes, and the residue was dried at 100 °C, yielding graphene as the final result. Synthesis of Fibers Polyvinylidene Fluoride/carbon (PVDF/carbon) fibers were prepared by dissolving PVDF at various concentrations, namely 15% (w/w), 18% (w/w), 21% (w/w) and 24% (w/w), with the addition of 2% (w/w) carbon. The mixtures were then dissolved in DMF solvent on a hotplate with magnetic stirring (Therumo Sci., Japan) at a temperature of 80 °C for 24 hours at a constant speed of 300 rpm, and labeled FPC 1, FPC 2, FPC 3, and FPC 4, respectively. Each solution was transferred into a 10 ml syringe (Terumo, Japan) and spun using electrospinning (Nanolab ES/DS 106, Malaysia). The process parameters used were a flow rate of 3.33 μl per minute, a high voltage of 13 kV, a needle-tip-to-collector-drum distance of 10 cm, and a collector drum rotation of 250 rpm. The room humidity was kept constant at 45% and the temperature at 25 °C. During fiber collection, a camera was used to monitor the Taylor cone formed at the tip of the syringe. The morphology of FPC 1, FPC 2, and FPC 3 fibers was observed using a fluorescence microscope (MiF) (Optika B-380 Material Science MET, Italy). The diameter sizes were analyzed using ImageJ 1.52a software (National Institutes of Health, USA), and the results were fitted with normal distributions using OriginPro 2018 software (OriginLab Corporation, USA). Scaling Law Model Scaling Law (SL) is a law that expresses the physical phenomena that occur when the size of a device or material is reduced. SL has been widely applied in solid-state physics to describe the sizes of polymers and particles that make up solid materials. According to SL, the diameter of a polymer synthesized using electrospinning is a function of the charge voltage, solution density, molecular weight, flow rate, electrical conductivity, polymer volume fraction, and dielectric constant. The scaling law (SL) model provides theoretical predictions of the diameters of particles and fibers as power-law functions of these parameters [15,16], and has previously been applied to the prediction of polymer particle size [16][17][18][19]. The dielectric constant values were determined, while the viscosity values, polymer particle size, and droplet size were predicted. 
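The displayed form of the scaling relation did not survive extraction. For orientation, a widely used terminal-jet scaling law of this family (Fridrikh et al.) relates the final fiber diameter d to the flow rate Q and jet current I as shown below; whether the authors used exactly this form is an assumption on our part.

```latex
% Representative electrospinning scaling law (Fridrikh et al., 2003),
% shown as an example of the d ~ (Q/I)^{2/3} family referred to above.
\[
  d \;\sim\; \left( \gamma\,\bar{\varepsilon}\,
      \frac{Q^{2}}{I^{2}}\,\frac{2}{\pi\,\bigl(2\ln\chi - 3\bigr)} \right)^{1/3}
\]
% gamma: surface tension;  \bar{\varepsilon}: dielectric permittivity of
% the ambient medium;  Q: flow rate;  I: jet current;  chi: jet aspect ratio.
```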
The evaluation results show that, with polymer control and appropriate process parameters, polymer particles with stable sizes and controlled properties can be produced. Fibers were produced at PVDF concentrations of 15%, 18%, 21% and 24% (w/w). The process parameters used were a flow rate of 3.33 μl per minute, a high voltage of 13 kV, a needle-tip-to-collector-drum distance of 10 cm, and a collector drum rotation of 250 rpm. The observed morphology consists of continuous, regular strands. The MiF results confirmed that the lowest concentrations in this study, namely 15% and 18%, produced bead structures in the FPC 1 and FPC 2 fibers, whereas the high concentrations, namely 21% and 24%, produced a bead-free structure in the FPC 3 and FPC 4 fibers. The formation of beads is related to viscosity and surface tension. Solutions with low polymer concentration have low viscosity and few polymer chain entanglements, so the elongation process during electrospinning is incomplete; this causes the formation of beaded fibers [3,20]. In addition, increasing the surface tension has the effect of reducing the surface area per unit mass of the solution [9,21]. When free solvent molecules are present at high polymer concentration, the solvent molecules tend to aggregate, forming fibers with beads [3,9]. The fiber diameter distributions, ranging from 550 to 1800 nm, are shown in Figure 2. The diameter distributions of the FPC1, FPC2, FPC3 and FPC4 fibers had coefficients of variance (cv) of 0.16, 0.17, 0.12, and 0.09, respectively. A distribution is considered homogeneous when the ratio between the standard deviation and the mean fiber diameter is less than 0.3 [22,23]. The cv results confirm that all fiber distributions were homogeneous. The mean diameters (d) of the FPC1, FPC2, FPC3 and FPC4 fiber mats were found to be 910 nm, 1123 nm, 1349 nm and 1506 nm, with standard deviations of 161 nm, 156 nm, 158 nm and 156 nm, respectively. It was observed that increasing the concentration of the polymer solution results in a larger mean diameter, which is related to the increased number of polymer chains in solution at higher concentration [20,24]. The addition of 2% (w/w) carbon also significantly affected the total solution concentration and thereby increased the average fiber diameter. This is supported by previous studies, in which pure PVDF had average diameters ranging from 200-400 nm. In addition, the stretching of the fibers by the Coulomb force took less time because the high-concentration solution dries faster. The logarithms of the fiber diameter measurements, for both empirical data and theoretical calculations, are shown in Figure 3 as a function of flow rate and electrical conductivity. The circles, triangles and hexagons are data obtained from previous studies of pure polymer [20] and PVP/ETH composites [19], and the present experimental results [20,21]. The R-squared (R²) of the fit to the experimental data is 0.9992, indicating that the fiber diameters agree closely with the theoretical model. Thus, the fiber diameter can be controlled by adjusting the parameters of the electrospinning process, namely the electrical conductivity and the flow rate. Conclusion The morphology and scaling law model of Polyvinylidene Fluoride/carbon (PVDF/carbon) fibers have been successfully produced and predicted using the electrospinning technique. 
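The homogeneity criterion and the linear log-log relationship described above can be checked with a few lines of analysis code. The diameters and standard deviations below are those reported in the text; fitting against concentration, rather than against the flow-rate/conductivity variable used for the published R² of 0.9992, is an illustrative simplification, so the computed cv and R² values will come out close to, but not identical with, the quoted ones.

```python
import numpy as np

conc = np.array([15.0, 18.0, 21.0, 24.0])           # PVDF % w/w (FPC1..FPC4)
d_mean = np.array([910.0, 1123.0, 1349.0, 1506.0])  # mean diameters (nm)
d_std = np.array([161.0, 156.0, 158.0, 156.0])      # standard deviations (nm)

# Homogeneity check: coefficient of variance below 0.3 [22,23].
cv = d_std / d_mean
print("cv:", np.round(cv, 2), "-> homogeneous:", bool(np.all(cv < 0.3)))

# Power-law model d ~ A * conc^beta, fitted by linear regression in
# log-log space; R^2 quantifies how linear the relationship is.
slope, intercept = np.polyfit(np.log(conc), np.log(d_mean), 1)
pred = intercept + slope * np.log(conc)
ss_res = np.sum((np.log(d_mean) - pred) ** 2)
ss_tot = np.sum((np.log(d_mean) - np.log(d_mean).mean()) ** 2)
print(f"beta = {slope:.2f}, R^2 = {1 - ss_res / ss_tot:.4f}")
```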
Nanofibers were produced optimally at PVDF concentrations of 15% (w/w), 18% (w/w), 21% (w/w) and 24% (w/w), with the addition of carbon at a concentration of 2% (w/w). The electrospinning process parameters were a needle-tip-to-collector-drum distance of 10 cm, a flow rate of 3.33 μl per minute, and a high voltage of 13 kV. The MiF results showed that fibers at concentrations of 15% and 18% had beaded structures, while those at 21% and 24% were bead-free. The mean diameters of the FPC1, FPC2, FPC3 and FPC4 fiber mats were found to be 910 nm, 1123 nm, 1349 nm and 1506 nm, respectively. In addition, the results showed that the average fiber diameters were in very good agreement with the theoretical model developed in previous studies.
Mutations in Genes Producing Nitric Oxide and Hydrogen Sulfide and Their Connection With Apoptotic Genes in Chronic Myeloid Leukemia Background Despite advances in chronic myeloid leukemia (CML) genetics, the role of mutations in nitric oxide (NO)- and hydrogen sulfide (H2S)-producing genes and their relationship to apoptotic genes is unclear. Therefore, this study investigated mutations in NO- and H2S-producing genes and their interactions with apoptotic genes using Sanger sequencing and next-generation sequencing (NGS). Methodology A complete blood count (CBC) was carried out to measure the total number of white blood cells, while IL-6 levels were assessed in both control and CML patients using an ELISA technique. Sanger sequencing was used to analyze mutations in the CTH and NOS3 genes, whereas NGS was applied to examine mutations on all chromosomes. Results White blood cell (WBC) and granulocyte counts were significantly higher in CML patients compared to controls (p<0.0001), and monocyte counts were similarly higher (p<0.05). Interleukin-6 (IL-6) levels were significantly elevated in CML patients compared with controls (p<0.0001), indicating a possible link to CML etiology or progression. Multiple mutations were identified in both genes, notably in CTH exon 12 and in the NOS3 variants VNTR, T786C, and G894T. This study also measured IL-6 concentrations using IL-6 assays, identifying its potential as a prognostic marker for CML. WBC counts, granulocyte counts, and mid-range absolute (MID) counts were significantly higher in CML patients than in normal control individuals. NGS identified 1643 somatic and sex chromosomal abnormalities and 439 actively expressed genes in CML patients. Compared with other databases, the findings imply a genomic landscape in CML development that extends beyond the BCR-ABL1 mutation. Conclusion In conclusion, this study advances the understanding of the genetic characteristics of CML by identifying mutations in the NO- and H2S-producing genes and their complex connections with genes involved in apoptosis. The comprehensive genetic profile obtained by Sanger sequencing and NGS provides possibilities for identifying novel therapeutic targets and personalized treatments for CML, thereby contributing to developments in hematological diseases. 
Hydrogen sulfide (H2S) is synthesized internally within mammalian tissues by the enzymatic actions of cystathionine-β-synthase (CBS), cystathionine γ-lyase, and 3-mercaptopyruvate sulfurtransferase, which is located in the mitochondria. This mechanism regulates the vascular diameter and protects the endothelium against oxidative stress, ischemia, reperfusion damage, and chronic inflammation by activating potassium channels. DNA extraction and quantification The extraction of genomic DNA from blood samples collected from persons diagnosed with CML was performed using a genomic blood DNA isolation kit (Hibrigen, Turkey) according to the manufacturer's instructions, with some modifications. In summary, blood samples were obtained and promptly handled within a specified time period to avoid DNA deterioration. After extracting the DNA, we assessed both the amount and the quality of the isolated genomic DNA. The DNA concentration was measured using a nanodrop spectrophotometer at a wavelength of 260 nm. In addition, the quality of the extracted DNA was assessed by determining the A260/A280 ratio. A ratio between 1.8 and 2.0 indicates that the DNA is free of contaminants and lacks protein or other impurities. Only DNA samples with A260/A280 ratios within the acceptable range were selected for downstream applications, ensuring high-quality genomic DNA for further molecular analysis. Determination of genotype Three genetic variants within the NOS3 gene and one variant of the CTH gene were studied. Individual amplification of DNA for each variant was performed using polymerase chain reaction (PCR), followed by gel electrophoresis and sequencing analysis. DNA sequencing plays a vital role in understanding genetic diversity and uncovering potential implications for health and disease susceptibility. The PCR product underwent sequencing, specifically Sanger sequencing. Initially, the sample sequence was processed at the Kahramanmaraş Sütçü Imam University ÜSKIM Laboratory, following purification and amplification with specific primers in both directions. Subsequently, a sequencing library was created using the Applied Biosystems ABI 3100 AVANT DNA Sequencer (Thermo Fisher Scientific Inc., Waltham, MA) to enable thorough sequencing analysis. The resulting extension file (AB1) was then scrutinized using Mutation Surveyor software, version 5.2.0 (SoftGenetics, State College, PA) to detect any mutations or variations in the target sequence. 
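A minimal sketch of the purity gate described above is given below: samples are kept for sequencing only if their A260/A280 ratio falls within the accepted 1.8-2.0 window. The sample names and readings are hypothetical.

```python
# Hypothetical nanodrop readings: name -> (concentration ng/uL, A260/A280).
samples = {
    "CML_01": (85.2, 1.86),
    "CML_02": (42.7, 1.64),   # below 1.8: likely protein contamination
    "CML_03": (118.9, 1.95),
}

def passes_qc(a260_a280: float, lo: float = 1.8, hi: float = 2.0) -> bool:
    """Purity criterion for genomic DNA used in downstream sequencing."""
    return lo <= a260_a280 <= hi

accepted = sorted(name for name, (_, ratio) in samples.items()
                  if passes_qc(ratio))
print("accepted for sequencing:", accepted)
```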
NGS has transformed genomics by granting scientists unparalleled access to extensive genetic information. An essential stage in this procedure is the preparation of the sequencing library, which entails converting the DNA of interest into a format suitable for the sequencing platform. For this reason, we transferred the DNA samples to the Istanbul Laboratory in Istanbul, Turkey. After checking the quality and purity of the DNA samples through nanodrop analysis, we proceeded to the library preparation step. The library preparation process typically commences with fragmentation of the target DNA, followed by adapter ligation and PCR amplification. During library preparation and sequencing, numerous sequence artefacts can negatively affect the quality of the raw data for downstream analyses. Therefore, quality control and preprocessing of the raw data are crucial steps to ensure the accuracy and reliability of the sequencing results. Various tactics, such as paired-end and mate-pair sequencing, can be applied, which help the assembly of short sequences into contigs and scaffolds. After preparing the library through the standard protocols, we conducted the sequencing step using the DNBSEQ-G400 flexible genome sequencer (MGI Tech Co., Ltd, Thailand), which is built on a new flow cell system that can flexibly support a range of sequencing modes. The raw data were analyzed using the SAMtools software (Sanger Institute). IL-6 concentration Patients with CML had markedly increased (p<0.0001) levels of IL-6 compared to control individuals, indicating that IL-6 may play a role in the onset or progression of CML, as shown in Table 1 and Figure 1D. Sanger sequencing Mutations were found in the NOS3 and CTH genes of 40 sequenced CML patients and compared against external databases (gnomAD, COSMIC, and cBioPortal). Sequencing of CTH identified missense, substitution, inversion, and duplication mutations in exon 12 (Figure 2a). The missense variant at position (1:70904800) was replicated in multiple patients, and the heterozygous mutation (28400G>GT) led to an amino acid change (serine>isoleucine) (dbSNP:1021737); all mutations were located at the end of the cys_met_meta_pp domain, as shown in Figure 3A and Appendix A. 
Additionally, the NOS3 gene, which was sequenced using three primer sets (VNTR 4a/b, T786C, and G894T), showed numerous mutations at different locations in the gene (Figure 2b) when compared to external databases (gnomAD, COSMIC, and cBioPortal). The T786C and G894T mutations were located in the NOS3 gene domain, whereas the VNTR change occurred in intron 3 in all T786C patients. These mutations included missense, substitution, synonymous, splice region, and intron mutations. In addition, the (dbSNP:1799983) variant, present in many samples, was a missense mutation changing the nucleotides (8468T>TG) at position (7:150696111). The other three mutations were in the splice region, at positions (7:150696187, 7:150696176, and 7:150696178), with the variants (8533, 8535G>GC, and 8544G>GA) (Figure 3b; Appendix B). The G894T region was also sequenced, and different types of variation were identified, including nucleotide modifications and substitution mutations in all patients. The variant (dbSNP:2070744) was identified through the nucleotide change (2436C>T) at position (7:150690079) (Figure 3d; Appendix C). The VNTR modification was also located on NOS3, and the variations that altered nucleotides included mutation types such as substitution, duplication, and insertion. The variant (dbSNP:3918168) is produced by a nucleotide change (6714G>GA) at location (7:150694357). This variant also resulted in duplication and insertion mutations, including (6725_6751het_dupAGTCTAGACCTGCTGCGGGGGTGAGGA) at locations (7:150694368_7:150694394). The VNTR also had an insertion mutation due to a changed nucleotide (6751_6752) (Figure 3e; Appendix D). Next-generation sequencing Next-generation whole-genome sequencing identified 1643 somatic and sex chromosomal abnormalities and 439 expressed genes in CML patients. The results were cross-referenced with the gnomAD, COSMIC, and cBioPortal databases. Patients with CML expressed 439 genes. Figure 4A shows how all chromosomes contribute to CML. Specifically, the X chromosome carries 96 of the 106 sex-chromosome variants. Ninety-four of these are intron alterations in expressed genes, including both upregulated and downregulated ones; the affected genes CXorf36, ASB11, ZRSR2, and TENM1 are shown in Figure 4C. The remaining two mutations (out of 96) are unidentified. There are 10 chromosome Y variants in the intronic regions of four genes. Furthermore, chromosome 1 has 98 mutations: 69 mutations in 29 expressed genes, while 29 remain unidentified. Among the 69 mutations, the frameshift-deletion mutation in CHD1L and the splice region variant in PIK3CD stand out; 67 of the 69 variations are intronic. Furthermore, chromosome 2 had 163 alterations, with 95 in 44 expressed genes. The non-transcribed region (AC012363.8) had 14 mutations at the same location as MTND4P26, whereas EMILIN1 gained a missense mutation. There were also mutations in the intron and 3' untranslated region (3'UTR) of the GCA gene. There were 53 more unidentified mutations, including 15 in non-coding areas (Table 2). 
TABLE 2: Variation distribution on genes and chromosomes Furthermore, chromosome 3 revealed 121 mutations in 28 expressed genes. Thirty-four of these mutations remain unidentified, while two were synonymous and located on the PLCL2 non-coding transcript exon of the NPRL2 gene. The FLNB and TBCCD1 genes had two identical missense mutations. The remaining alterations were intron-based. Another 14 genes with 81 variations were found on chromosome 4; approximately 32 of these mutations were unidentified, while 49 were assigned. Two synonymous mutations and one missense FGFR3 mutation were identified, and the remaining alterations affected intronic regions of genes. Of the 87 mutations on chromosome 5, 40 were unidentified. Figure 4b shows the other 47 variations across 27 expressed genes. These included a 3'UTR mutation and a missense mutation in the CDH9 gene, as well as non-protein-coding exons in the RP11-232L2.1 gene. A mutation in the 5' untranslated region (5'UTR) of PPP2R2B and a frameshift mutation in GM2A were also identified; the remaining mutations were intronic. There were 14 active genes on chromosome 6. These genes contained 43 variations, including 24 unidentified mutations. The remaining 19 mutations were distributed among the 14 genes, with six occurring in similar numbers on the non-coding transcript exons of RP11-288G3.4 and HLA-V. Mutations in the HLA-DOA splice region and the TULP4 3'UTR were also identified; the remaining mutations were intronic. Chromosome 7 had a total of 141 variations. Of these, 79 were unidentified mutations, while 62 were associated with 25 specific genes. This investigation identified three mutations in the 3'UTR of the AQP1 gene. We also detected two missense mutations in MUC12 and SMO, two 3'UTR mutations in STRIP2, and a CEP41 mutation. All the remaining ones were intronic. 
Furthermore, chromosome 8 contained a total of 101 genetic variations; 71 were unidentified, and 30 affected 17 expressed genes. Equal numbers of mutations occurred in the non-coding transcript exons of SMARCE1P4 and RP11-468O2.1. The 5'UTR of the CTSB gene had one mutation, whereas the other mutations were located in introns. Of the 68 variants found on chromosome 9, 17 were linked to actively expressed genes. At least 24 variants carried mutations whose identity was not known, whereas 44 variations carried identified mutations. The 44 variations consisted of two missense mutations in the KANK1 gene, three mutations in the 3'UTR of the CDKN2A gene, a mutation in a non-coding transcript exon of the CCL27 gene, a missense and a synonymous mutation in the SURF6 gene, and three mutations in the 3'UTR of the MED22 gene. The remaining mutations were found in introns. The dataset contained a total of 72 mutations located on chromosome 10. There were 22 variations with unidentified alterations, whereas 50 variants were associated with 27 genes. Three mutations were detected in the 3'UTR of the VPS26A gene, whereas the other modifications were within introns. On chromosome 11, 26 of 83 variations were linked to mutations that remain unidentified. Of the 57 identified modifications, 30 were associated with intronic regions of expressed genes. However, there were three missense mutations in the PIDD1 gene and three missense and synonymous mutations in the MUC6 gene. Chromosome 12 contained a total of 91 genetic variants. Approximately 36 changes were associated with unidentified mutations, while the remaining mutations were associated with 24 functional genes. The KANSL2 gene harbored two synonymous alterations in its 3'UTR. ITGA7 and NEMP1 had missense mutations, whereas HOXC-AS3 had a mutation in a non-coding transcript exon. Additionally, chromosome 13 had 41 variants: 20 were related to unexplained mutations, whereas 13 were intron mutations in four expressed genes, TPTE2, PAN3, RXFP2, and LMO7. Chromosome 14 had 25 variants, of which two were unknown mutations. Except for synonymous NEK9 mutations, the remaining 23 variants corresponded to introns of 13 expressed genes. Chromosome 15 had 36 variants, four unexplained mutations, and nine related to expressed genes. All of these changes were intronic except for a missense mutation in the MTHFS gene. Unknown mutations accounted for 16 of the 35 variants on chromosome 16. The other 19 variants affected intronic regions of nine genes, including JPT2, TRAF7, and PLCG2. Chromosome 17 had 68 variations. Among these, 13 were linked to unknown mutations, while 55 were related to 25 expressed genes. The ITGAE gene had a frameshift mutation, and the 3'UTRs of SMTNL2 and CHRNE were also mutated. ULK2 had a synonymous mutation, whereas WIPF2 had both insertion and synonymous mutations. The remaining mutations were again intronic. 
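The chromosome-by-chromosome bookkeeping above amounts to tallying an annotated variant list by chromosome, gene and consequence. The sketch below shows this aggregation pattern on a few hypothetical records (in practice the input would be a parsed, annotated VCF).

```python
from collections import Counter

# Hypothetical annotated variants: (chromosome, gene or None, consequence).
variants = [
    ("chr1", "CHD1L", "frameshift_deletion"),
    ("chr1", "PIK3CD", "splice_region"),
    ("chr1", None, "unknown"),
    ("chr2", "EMILIN1", "missense"),
    ("chr2", "GCA", "intron"),
]

per_chrom = Counter(chrom for chrom, _, _ in variants)
unidentified = Counter(chrom for chrom, gene, _ in variants if gene is None)
consequences = Counter(cons for _, _, cons in variants)

print("variants per chromosome:", dict(per_chrom))
print("unidentified per chromosome:", dict(unidentified))
print("consequence breakdown:", dict(consequences))
```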
Discussion Elevated WBC counts are commonly observed in individuals diagnosed with CML [13]. A cost-effective and direct approach for detecting CML involves differential analysis and CBC techniques [14]; CBC parameters generally change with cancer incidence [15] and also after chemotherapy administration [16]. During the occurrence of cancer, an increase in the total WBC count is observed, and following treatment the WBC counts may subsequently decrease. For this reason, the WBC count obtained via the CBC test has emerged as a biomarker for the detection of leukemia. This study observed high total WBC, granulocyte, and MID counts. IL-6 has been postulated as a potential prognostic marker for CML [17]. Accordingly, IL-6 levels may rise significantly during the course of CML, exceeding the baseline rate. The findings obtained here were statistically significant, demonstrating an increase in IL-6 levels with the onset of cancer. The nucleotide sequences of the NOS3 and CTH genes were determined using Sanger sequencing. In CML patient samples, various changes were found in the CTH gene, all located on exon 12 outside the cys-met-meta-pp domain. However, to our knowledge, no previous study has found a relationship between the CTH gene and CML, and this is the first study to show an extensive number of mutations in the CTH gene [18]. Furthermore, the NOS3 gene exhibited distinct mutations at the VNTR, T786C, and G894T sites, which have previously been studied in colorectal cancer [19]. Notably, all these variants were found within the NOS3 gene, except for specific variants in the VNTR. Together with the tyrosine kinase activator and BCR-ABL1 genes [20], these results show that the NOS3 gene is expressed in people with leukemia. Many genetic disorders and syndromes have been identified in recent decades using NGS technologies. The utilization of NGS is rapidly becoming standardized as a diagnostic tool and for molecular patient monitoring, enabling the evaluation of treatment effectiveness [21]. The present study's findings indicate that 1643 variations were seen across the 22 autosomes and the X and Y chromosomes. Furthermore, gene expression analysis revealed that 439 genes were actively expressed. Additionally, two genes were sequenced using the Sanger method, while one marker (IL-6) was measured using the ELISA technique. Nevertheless, the findings indicate that, apart from BCR-ABL1, several genes are linked to CML development. A few study limitations may affect the generalizability and interpretation of the results. The study's sample size may not represent the CML population, limiting its external validity. The study's approach relies primarily on observational and genetic analysis, which may introduce biases or confounding factors that are not adequately controlled and addressed. 
Conclusions The study thoroughly investigated the genetic landscape of CML, revealing insights into the delicate interaction between NO, H2S, gene mutations, and apoptotic genes. The NOS3 and CTH gene mutations were identified using Sanger sequencing and NGS, suggesting novel links to CML pathogenesis. The study found previously unknown mutations in the CTH gene and expanded the understanding of its role in CML. Additionally, various mutations in the NOS3 gene, such as the VNTR, T786C, and G894T variations, revealed CML's complex genetic landscape. The NGS study found 1643 somatic and sex chromosomal abnormalities and 439 actively expressed genes, revealing CML's genomic complexity beyond the well-known BCR-ABL1 mutation. These findings highlight the potential of NGS as a diagnostic and prognostic tool, providing insights into personalized treatment approaches for CML that extend beyond BCR-ABL1-targeting strategies. FIGURE 2: Sanger sequence analysis through the cBioPortal database. (A) A lollipop mutational map showing the CTH gene mutation. (B) A lollipop mutational map showing the NOS3 gene (VNTR, T786C, and G894T). PTM, post-translational modification. FIGURE 3: Electropherograms showing the mutational sample with reference. (A) A Sanger sequence chromatogram for the CTH gene showing the missense mutation (dbSNP:1021737) and the amino acid change (serine>isoleucine) at position (28400G>GT); (B) a Sanger sequence chromatogram for the NOS3 gene showing the mutation in the splice region on T786C that changes the nucleotide (8535G>GC); (C) a Sanger sequence chromatogram for the NOS3 gene showing the mutation in the splice region on T786C that changes the nucleotide (8371C>T); (D) a Sanger sequence chromatogram for the NOS3 gene showing the substitution mutation (dbSNP:2070744) that changes the nucleotide (2436C>T), located on G894T; (E) a Sanger sequence chromatogram for the NOS3 gene showing the duplication mutation (6725_6751het_dupAGTCTAGACCTGCTGCGGGGGTGAGGA) located in the VNTR; (F) a Sanger sequence chromatogram for the NOS3 gene showing the insertion mutation (6751_6752het_INSAGTCTAGGACCTGCTGCGGGGGTGAGGA) located in the VNTR. FIGURE 4: Next-generation sequencing (NGS) analyzed through the SRplot database. (A) The chromosome distribution map illustrates the highest concentration of chromosomes; (B) a two-dimensional Circos plot displaying four columns, the first indicating chromosomes, the second showing starting coordinates, the third indicating end coordinates, and the fourth representing the fold change reflecting gene upregulation and downregulation during cancer progression and GC content variability; (C) an RCircos diagram (version 1.2.2, an R package for Circos 2D track plots) depicting gene names to showcase expressed genes and copy number variation, with chromosome location information in the first three columns, along with gene name locations, while log2fc is displayed in another column.
Stationary states of activity-driven harmonic chains We study the stationary state of a chain of harmonic oscillators driven by two active reservoirs at the two ends. These reservoirs exert correlated stochastic forces on the boundary oscillators, which eventually leads to a nonequilibrium stationary state of the system. We consider the three most well-known dynamics for the active force, namely, the active Ornstein-Uhlenbeck process, the run-and-tumble process and the active Brownian process, all of which have exponentially decaying two-point temporal correlations but very different higher-order fluctuations. We show that, irrespective of the specific dynamics of the drive, the stationary velocity fluctuations are Gaussian in nature, with a kinetic temperature which remains uniform in the bulk. Moreover, we find the emergence of an 'equipartition of energy' in the bulk of the system -- the bulk kinetic temperature equals the bulk potential temperature in the thermodynamic limit. We also calculate the stationary distribution of the instantaneous energy current in the bulk, which always shows a logarithmic divergence near the origin and asymmetric exponential tails. The signatures of specific active driving become visible in the behavior of the oscillators near the boundary. This is most prominent for the RTP- and ABP-driven chains, where the boundary velocity distributions become non-Gaussian and the current distribution has a finite cutoff. I. INTRODUCTION The study of nonequilibrium steady states (NESS) of extended systems driven by equilibrium reservoirs has been of long-standing interest. Perhaps the simplest example is that of a harmonic chain connected to two thermal reservoirs at the ends, which was studied by Rieder, Lebowitz and Lieb in 1967 [1]. It was shown that this system reaches a Gaussian NESS which carries a constant energy current, even in the limit of thermodynamically large system size. Several generalizations of this model have been studied over the past decades, ranging from the inclusion of anharmonic interactions, pinning potentials and disorder, which show non-trivial stationary state behavior including anomalous transport and non-linear temperature profiles [2][3][4][5][6][7][8][9][10]. An important question that arises naturally is how the stationary state of an extended system is affected when it is driven by nonequilibrium reservoirs that violate the fluctuation-dissipation relation [11][12][13][14][15]. Active reservoirs are a special class of nonequilibrium reservoirs consisting of self-propelled particles like bacteria or Janus beads [16][17][18]. The action of active reservoirs on single probe particles has been a topic of increasing interest over the past few years, due to unusual emergent features like negative viscosity and modification of the equipartition theorem [19][20][21][22][23][24][25][26][27][28][29]. Recently, the effect of active reservoirs on extended systems has been studied in a simple setting similar to the model proposed by Rieder, Lebowitz and Lieb -- an ordered chain of harmonic oscillators connected to two active reservoirs which exert exponentially correlated stochastic forces on the boundary oscillators [30]. It was shown that this simple system exhibits some remarkable features like negative differential conductivity and current reversal. Both the average energy current and the kinetic temperature profile, which were computed exactly, depend only on the autocorrelation of the active force, and this holds true irrespective of the specific dynamics. 
However, the signatures of the specific dynamics of the active forces are expected to be present in the higher order fluctuations of these observables. In this paper we study the NESS of a harmonic chain driven by different kinds of exponentially correlated active forces. In particular, we consider the three most well-known active processes, namely, the Active Ornstein-Uhlenbeck Process (AOUP) [31], Run-and-Tumble Process (RTP) [32,33] and Active Brownian Process (ABP) [34,35], to model the dynamics of the active forces. To characterize the NESS, we focus on the behavior of the energy current, velocity and potential energy fluctuations of the oscillators. Surprisingly, we find that the bulk properties in the NESS are universal and do not depend on the specific dynamics of the active forces. More specifically, we show that, in all three cases, the instantaneous current distribution in the bulk has a logarithmic divergence near the origin as well as asymmetric exponential tails. We also find that the velocity fluctuations of the bulk oscillators are Gaussian, which is accompanied by an 'equipartition of energy': in the thermodynamic limit, the kinetic and potential temperatures become equal in the bulk, which we show analytically. The signatures of the specific dynamics of the active force become visible in the behavior of the oscillators near the boundaries. In particular, we show that the velocity distributions of the boundary oscillators show different non-Gaussian features for the ABP and RTP driven chains. On the other hand, the Gaussian nature of the AOUP active force ensures that the boundary velocity fluctuations remain Gaussian in this case. The instantaneous current distributions at the boundaries show more surprising features: for ABP and RTP drives, the boundary current distributions have semi-finite supports, which can be understood from the bounded nature of the driving forces in these cases. For AOUP, on the other hand, the boundary current distribution has exponential tails, which we compute exactly.

The paper is organized as follows. In the next section we introduce the setup and give a brief summary of our results. Sections III and IV are devoted to the study of the temperature profile and velocity distributions of the oscillators. The behavior of the current distributions is discussed in Sec. V. We conclude with some general remarks in Sec. VI.

FIG. 1. Schematic representation of a harmonic chain of oscillators connected to two nonequilibrium reservoirs at the two ends. Apart from the usual thermal noise, the boundary oscillators are driven by auto-correlated active forces f_1(t) and f_N(t).

II. MODEL AND RESULTS

We consider a chain of N oscillators, each with mass m, connected by springs of stiffness k. The chain is connected to two active reservoirs which exert exponentially correlated stochastic forces on the boundary oscillators, in addition to the usual white noise and dissipative forces coming from thermal reservoirs [see Fig. 1]. The displacement x_l of the l-th oscillator from its equilibrium position follows the equations of motion,

m v̇_l = k(x_{l+1} − 2x_l + x_{l−1}) + δ_{l,1}[−γ v_1 + ξ_1(t) + f_1(t)] + δ_{l,N}[−γ v_N + ξ_N(t) + f_N(t)],  (1)

where v_l = ẋ_l and we have assumed fixed boundary conditions, x_0 = x_{N+1} = 0. The white noises ξ_1 and ξ_N acting on the boundary oscillators denote the forces from the thermal components of the reservoirs, which satisfy the fluctuation-dissipation relation [36],

⟨ξ_j(t) ξ_j(t′)⟩ = 2γ T_j δ(t − t′),  j = 1, N.  (2)

Here T_1 and T_N denote the temperatures of the reservoirs and, for simplicity, we have assumed that the dissipation coefficient γ is the same for both reservoirs.
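For readers who want to reproduce the qualitative behavior, the dynamics of Eq. (1) can be integrated directly. The sketch below is a minimal Python implementation with a plain Euler scheme and T_1 = T_N = 0 (the convention adopted below); the function names, parameter values, and the first-order integrator are illustrative assumptions, not the scheme used for the results in this paper, which employ a stochastic second-order Runge-Kutta algorithm.

```python
import numpy as np

def simulate_chain(f_left, f_right, N=32, m=1.0, k=1.0, gamma=1.0,
                   dt=1e-3, steps=200_000):
    """Euler integration of Eq. (1) with T1 = TN = 0 (no thermal noise).

    f_left, f_right: callables returning the active forces f_1, f_N at step i.
    Returns (x, v) snapshots sampled every 100 steps."""
    x = np.zeros(N)
    v = np.zeros(N)
    traj = []
    for i in range(steps):
        # nearest-neighbour spring forces with fixed ends x_0 = x_{N+1} = 0
        xp = np.pad(x, 1)
        acc = k * (xp[2:] - 2*x + xp[:-2]) / m
        # dissipation and active driving act only on the boundary oscillators
        acc[0] += (-gamma*v[0] + f_left(i)) / m
        acc[-1] += (-gamma*v[-1] + f_right(i)) / m
        v = v + acc*dt
        x = x + v*dt
        if i % 100 == 0:
            traj.append((x.copy(), v.copy()))
    return traj
```

Active-force inputs can be supplied from the three process sketches given after Eq. (3) below.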
The active forces f_j(t) are assumed to be exponentially correlated colored noises,

⟨f_j(t) f_j(t′)⟩ = a_j² e^{−|t−t′|/τ_j},  j = 1, N,  (3)

where τ_{1,N} measure the activity of the reservoirs. The linear Langevin equations (1) can be straightforwardly solved in the frequency domain to obtain [6],

x̃_l(ω) = Σ_{j=1,N} G_{lj}(ω) f̃_j(ω),  (4)

where f̃_j(ω) is the Fourier transform of f_j(t) with respect to t and G(ω) is the Green's function matrix; see Appendix A for the detailed solution. Clearly, the stationary state distribution of {x_l, v_l} would depend on the statistical properties of the active force f_j(t) through f̃_j(ω). From Eq. (4), it is clear that the two-point dynamical correlations of physical observables which are linear in x_l(t) involve only the two-point correlation ⟨f̃_i(ω) f̃_j(ω′)⟩ = δ_{ij} g̃(ω, τ_j) δ(ω + ω′), where g̃(ω, τ_j) is the frequency spectrum of the active force and is given by a Lorentzian,

g̃(ω, τ_j) = a_j² τ_j / [π (1 + ω² τ_j²)].  (5)

In the following we consider three different dynamical processes which correspond to very different fluctuations of f_j(t), although each has an exponentially decaying auto-correlation of the form Eq. (3).

I. Active Ornstein-Uhlenbeck Process (AOUP): We first consider the scenario where the active force at each boundary undergoes an independent Ornstein-Uhlenbeck process [31,37],

τ_j ḟ_j(t) = −f_j(t) + √(2 D_j) η_j(t),  (6)

where η_j(t) is a Gaussian white noise with ⟨η_j(t)⟩ = 0 and ⟨η_j(t) η_j(t′)⟩ = δ(t − t′); the diffusion constant D_j denotes the strength of the noise. The linear nature of the process and the Gaussian nature of the noise lead to a Gaussian propagator for the active force f_j(t): starting from f_j⁰, the force at time t is Gaussian with mean f_j⁰ e^{−t/τ_j} and variance (D_j/τ_j)(1 − e^{−2t/τ_j}).  (7)

Evidently, the stationary distribution of f_j is also Gaussian with ⟨f_j⟩ = 0 and ⟨f_j²⟩ = D_j/τ_j. Equation (7) implies that the stationary two-point correlation of the active force ⟨f_j(t) f_j(t′)⟩ is given by Eq. (3) with

a_j² = D_j/τ_j.  (8)

The linear nature of the system and the Gaussian nature of the active force f_j ensure that, for the AOUP drive, the joint probability distribution of {x_l, v_l} is also Gaussian,

P({x_l, v_l}) ∝ exp(−½ Wᵀ Σ⁻¹ W),  (9)

where Wᵀ = (v_1, · · ·, v_N, x_1, · · ·, x_N) and Σ is the corresponding 2N × 2N dimensional positive-definite correlation matrix.

II. Run-and-tumble process (RTP): In this case we consider the active force f_j(t) to be a dichotomous noise similar to the famous Run-and-Tumble process [33,38],

f_j(t) = A_j σ_j(t),  (10)

where σ_j(t) alternates between 1 and −1 with rate α_j. In this case f_j = ±A_j can take only two discrete values and the corresponding propagator is given by [39],

P(f_j, t | f_j⁰) = ½(1 + e^{−2α_j t}) δ_{f_j, f_j⁰} + ½(1 − e^{−2α_j t}) δ_{f_j, −f_j⁰}.  (11)

Clearly, in the stationary state, the two values of f_j occur with equal probability 1/2. It is straightforward to see that this process leads to a two-point auto-correlation of the form Eq. (3) with τ_j = 1/(2α_j) and a_j = A_j. However, the higher order correlations of f_j, computed from Eq. (11), are quite different from those of the AOUP and, in general, the stationary state distribution P({x_l, v_l}) is expected to be non-Gaussian.

III. Active Brownian process (ABP): The third case refers to the scenario where the active force evolves according to the active Brownian dynamics [35,40],

f_j(t) = A_j cos θ_j(t),  with  θ̇_j(t) = √(2/τ_j) ζ_j(t),  (12)

where ζ_j refers to a Gaussian white noise with ⟨ζ_j(t)⟩ = 0 and ⟨ζ_j(t) ζ_j(t′)⟩ = δ(t − t′). Clearly, θ_j(t) undergoes a standard Brownian motion, which leads to a Gaussian propagator [39],

P(θ_j, t | θ_j⁰) = √(τ_j/4πt) exp[−(θ_j − θ_j⁰)² τ_j/(4t)].  (13)

The corresponding distribution for f_j = A_j cos θ_j(t) eventually reaches a stationary state,

P(f_j) = [π √(A_j² − f_j²)]⁻¹,  |f_j| < A_j.  (14)

The auto-correlation ⟨f_j(t) f_j(t′)⟩ is given by Eq. (3) with a_j = A_j/√2.
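To make the distinction between the three drives concrete, here is a minimal Python sketch of discretized AOUP, RTP and ABP forces. All three produce the correlation of Eq. (3), with (a_j², τ_j) equal to (D/τ, τ), (A², 1/2α) and (A²/2, 1/D_r), respectively; the Euler discretization and the parametrization of the ABP angle by a rotational diffusion constant D_r = 1/τ are assumptions of the sketch.

```python
import numpy as np
rng = np.random.default_rng(1)

def aoup(n, dt, tau, D):
    """Ornstein-Uhlenbeck force, Eq. (6); stationary variance D/tau."""
    f = np.empty(n)
    f[0] = rng.normal(0.0, np.sqrt(D/tau))
    for i in range(1, n):
        f[i] = f[i-1] + (-f[i-1]*dt + np.sqrt(2*D*dt)*rng.normal()) / tau
    return f

def rtp(n, dt, alpha, A):
    """Dichotomous run-and-tumble force, Eq. (10); flips between +-A at rate alpha."""
    s = rng.choice([-1.0, 1.0])
    f = np.empty(n)
    for i in range(n):
        if rng.random() < alpha*dt:   # tumble event
            s = -s
        f[i] = A*s
    return f

def abp(n, dt, Dr, A):
    """Active Brownian force A*cos(theta), Eq. (12), with a diffusing angle."""
    theta = rng.uniform(0.0, 2*np.pi)
    f = np.empty(n)
    for i in range(n):
        theta += np.sqrt(2*Dr*dt)*rng.normal()
        f[i] = A*np.cos(theta)
    return f
```

Averaging f(t)f(0) over many realizations of any of the three generators recovers the exponential decay of Eq. (3), while the one-time distributions (Gaussian, two delta peaks, and arcsine-shaped, respectively) expose their very different higher-order statistics.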
However, the higher order correlations of f_j for this case are different from those of both the AOUP and the RTP, and the stationary state weight P({x_l, v_l}) is expected to be non-Gaussian as well as different from that in the RTP driven case.

Clearly, despite having the same two-point auto-correlation given by Eq. (3), the dynamical nature of the active force f_j is very different in all three cases. We expect to see the signatures of these specific dynamics in the stationary states of the different activity driven harmonic chains. To characterize the stationary state properties of the activity driven chain we focus on the potential energy, local velocity, and current fluctuations in the harmonic chain, both in the bulk and at the boundaries. We support our analytical results with numerical simulations using a stochastic second-order Runge-Kutta algorithm [41,42]. Note that, for a harmonic chain, the energy current in the stationary state splits into two components: a thermal one, J_therm, proportional to the temperature difference (T_1 − T_N) of the thermal reservoirs, and an active one, J_act, which depends on the activity driving [30]. Since we are mainly interested in characterizing the activity driven stationary state, we use T_1 = T_N = 0 for the remainder of the paper. Before going into the details of the computation, we first present a brief summary of our main results.

Temperature profile: We first compute the local potential temperature profile, defined from the average potential energy ⟨U_l⟩ of the l-th oscillator as

T̃_l = 2⟨U_l⟩.  (17)

We show that T̃_l becomes uniform in the bulk (i.e., for 1 ≪ l ≪ N) in the thermodynamic limit N → ∞, and that the bulk potential temperature equals the bulk kinetic temperature T̂_bulk computed earlier [30], which indicates the existence of an 'equipartition of energy'.

Velocity distribution: We also measure the stationary probability distribution P(v_l) of the velocities of the oscillators and show that, surprisingly, in the limit of thermodynamic size, for any activity of the reservoirs, the velocity distributions of the bulk oscillators are Gaussian with width T̂_bulk, irrespective of the dynamics of the active force. The velocity distributions of the oscillators near the boundaries, however, are non-Gaussian for ABP and RTP driven chains, and depend on the specific driving dynamics.

Current distribution: Another observable of immense importance is the energy current flowing through the system. We show that, for the bulk oscillators, P(J_l), the probability distribution of the instantaneous current J_l flowing from the (l − 1)-th to the l-th oscillator, exhibits certain universal features irrespective of the specific dynamics of the active force: the distribution diverges logarithmically for |J_l| → 0 and shows asymmetric exponential decay for large |J_l|, with rates set by J_act, g_l and u_l, defined in Eqs. (34) and (39). In fact, the Gaussian nature of the stationary state of the AOUP driven chain allows us to exactly compute the stationary current distribution in the bulk, Eq. (40), which involves K_0(z), the zeroth-order modified Bessel function of the second kind [43]. We also compute the boundary current distribution for the AOUP driven chain, which has the same qualitative shape as the bulk current distribution. For RTP and ABP driven chains, however, the boundary current distributions are strikingly different, which we measure numerically.
III. TEMPERATURE PROFILE

It is often convenient to consider a local 'kinetic temperature' for driven oscillator chains, which can be defined from the average kinetic energy of the l-th oscillator as

T̂_l = m⟨v_l²⟩.  (22)

For an activity driven harmonic chain, it has been shown that the kinetic temperature attains a uniform value T̂_bulk in the bulk, with an exponentially decaying boundary layer [30]. For a harmonic chain, one can also define a local 'potential temperature', T̃_l, from the average potential energy ⟨U_l⟩ of the l-th oscillator [see Eq. (17)], defined as,

⟨U_l⟩ = (k/4) ⟨(x_l − x_{l−1})² + (x_{l+1} − x_l)²⟩.  (23)

To compute ⟨U_l⟩, we need position correlations of the form ⟨x_l(t) x_n(t)⟩ in the stationary state, for n = l, l ± 1. From Eq. (4) we have,

⟨x_l(t) x_n(t)⟩ = Σ_{j=1,N} ∫ dω G_{lj}(ω) G_{nj}(−ω) g̃(ω, τ_j),  (24)

where g̃(ω, τ_j), given in Eq. (5), denotes the Lorentzian spectrum of the active force. These correlations can be computed exactly using the explicit form of G_{ln}(ω). The details of the computation are provided in Appendix C; here we quote the main results. It turns out that the average potential energy can be expressed as the sum of two contributions from the two reservoirs,

⟨U_l⟩ = (k/4) [U_1(l, τ_1) + U_N(l, τ_N)].  (25)

Here U_1(l, τ_1) and U_N(l, τ_N) are the contributions from the left and right reservoirs, respectively [see Appendix C]. We find that U_j(l, τ_j), for bulk oscillators, is independent of l in the thermodynamic limit N → ∞, where j = 1, N. Consequently, the potential temperature profile T̃_l attains a uniform value T̃_bulk in the bulk. In fact, from Eqs. (25) and (22) and the above result, it is clear that

T̃_bulk = T̂_bulk,  (27)

i.e., the bulk kinetic and potential temperatures are identical in the thermodynamic limit. Note that Eq. (27) holds irrespective of the specific form of the dynamics. Fig. 2 shows plots of T̂_l and T̃_l for AOUP, RTP and ABP for two sets of τ_1 and τ_N, and validates our prediction Eq. (27). The potential temperatures of the oscillators near the boundaries, calculated explicitly in Appendix C, are different from their respective kinetic temperatures. The difference is illustrated in the inset of Fig. 2(a) for two sets of τ_1 and τ_N.

IV. VELOCITY DISTRIBUTIONS

The probability distribution of the velocities plays an important role in the characterization of the NESS of the oscillator chain. In the presence of a thermal gradient such a system usually reaches a stationary state where the velocity fluctuations of the l-th oscillator are typically Gaussian with the width given by its local kinetic temperature [2,3]. In this section we explore the fluctuations of the velocities of the individual oscillators in the presence of the different active drivings. For the AOUP driven chain, as mentioned before, the joint probability distribution P({x_l, v_l}) is a multivariate Gaussian [see Eq. (9)]. Consequently, the marginal velocity distribution P(v_l) must also be a Gaussian,

P(v_l) = √(m/2πT̂_l) exp(−m v_l²/2T̂_l),  for l = 1, 2, · · ·, N,  (28)

where T̂_l = m⟨v_l²⟩ is the average kinetic temperature of the l-th oscillator. This is illustrated in Fig. 3(a), where the numerically measured velocity distribution of the middle oscillator (l = N/2) is plotted along with the corresponding Gaussian, which shows perfect agreement. For RTP and ABP driven chains, on the other hand, Eq. (9) is not expected to hold. Surprisingly, however, numerical simulations show that for oscillators in the bulk, the typical velocity fluctuations are still Gaussian. This is shown in Fig. 3(a), where the scaled velocity distributions of the l = N/2-th oscillator of the RTP and ABP driven chains are compared with Eq. (28), showing an excellent agreement.
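The equipartition statement of Eq. (27) is straightforward to test on sampled trajectories. The sketch below uses the bond-splitting definition of Eq. (23) together with T̃_l = 2⟨U_l⟩ from Eq. (17) (so that T̃_l = T in equilibrium); the array layout and helper name are assumptions of the sketch.

```python
import numpy as np

def temperature_profiles(X, V, m=1.0, k=1.0):
    """Kinetic and potential temperature profiles from stationary samples.

    X, V: arrays of shape (samples, N) of positions and velocities
    (e.g., collected with the integrator sketched in Sec. II)."""
    T_kin = m*np.mean(V**2, axis=0)                 # hat-T_l = m <v_l^2>, Eq. (22)
    Xp = np.pad(X, ((0, 0), (1, 1)))                # fixed ends x_0 = x_{N+1} = 0
    bond = 0.5*k*(Xp[:, 1:] - Xp[:, :-1])**2        # energies of the N+1 bonds
    U = 0.5*(bond[:, :-1] + bond[:, 1:])            # half of each adjacent bond, Eq. (23)
    T_pot = 2*np.mean(U, axis=0)                    # tilde-T_l = 2 <U_l>, Eq. (17)
    return T_kin, T_pot
```

In the bulk the two profiles should coincide as N grows, while near the boundaries they differ, as in the inset of Fig. 2(a).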
Nevertheless, the signatures of the underlying non-Gaussian stationary states become apparent in the velocity fluctuations of the oscillators near the boundaries. Figure 3(b) shows a plot of the marginal distribution P(v_1) of the left boundary oscillator: the non-Gaussian nature of the distribution is very clear for RTP, while for ABP the deviation from the Gaussian form Eq. (28) becomes prominent at the tails. For the AOUP driven chain the boundary velocity fluctuations are also Gaussian, as expected.

V. CURRENT FLUCTUATIONS

The NESS of an activity driven harmonic chain is characterized by the existence of an average energy current flowing through the system, which can be computed exactly [30]. The instantaneous currents at the left and right boundaries, J_1 and J_{N+1}, are defined as the rates of work done by the left and right reservoirs on the system, respectively,

J_1 = (−γ v_1 + f_1) v_1,  J_{N+1} = (−γ v_N + f_N) v_N.  (29)

The instantaneous energy current flowing from the (l − 1)-th to the l-th oscillator is given by,

J_l = (k/2)(x_{l−1} − x_l)(v_{l−1} + v_l).  (30)

The Hamiltonian nature of the bulk dynamics ensures that in the stationary state,

⟨J_l⟩ = J_act for all l,  (31)

where J_act is the average energy current flowing through the system. It has been shown [30] that the average active current is given by a Landauer-like formula, in which |G_{1N}(ω)|² denotes the phonon transmission coefficient and g̃(ω, τ_j) corresponds to the Lorentzian spectrum of the j-th active reservoir. The presence of the non-trivial reservoir spectra makes the activity driven current different from the thermally driven scenario, where the average current is given by an analogous formula in which the Lorentzian spectra are replaced by the temperatures T_1 and T_N of the thermal reservoirs attached at the two ends of the chain. For a thermodynamically large chain of oscillators driven by active forces satisfying Eq. (3), the average active current is given by Eq. (34), whose dependence on the activities enters through functions E(τ_j) and the amplitudes a_j(τ_j). Note that E(τ_j) is non-monotonic in τ_j, and its form does not depend on the specific active force dynamics. However, J_act also depends on a_j(τ_j), which makes its τ_j dependence different for the different models. In particular, for AOUP, a_j ∝ 1/√τ_j, which results in an active current that monotonically decreases as a function of τ_j, as illustrated in Fig. 4(a). On the other hand, for RTP and ABP, a_j does not depend on τ_j, resulting in a non-monotonic behavior of J_act, indicating the emergence of a negative differential conductivity. This is shown in Figs. 4(b) and (c) for RTP and ABP, respectively. More apparent signatures of the specific active force are expected to be encoded in the higher order fluctuations of the instantaneous current, which we investigate next.

A. Stationary distribution of J_l in the bulk

We start with the stationary distribution of the instantaneous current, P(J_l), for the bulk oscillators. From Eq. (30) we can write

P(J_l) = ∫ dv_{l−1} dv_l dx_{l−1} dx_l P(v_{l−1}, v_l, x_{l−1}, x_l) δ(J_l − (k/2)(x_{l−1} − x_l)(v_{l−1} + v_l)).  (35)

First we consider the AOUP driven chain. The Gaussian nature of the stationary state [see Eq. (9)] in this case implies that the joint distribution of {v_{l−1}, v_l, x_{l−1}, x_l} is also a multivariate Gaussian,

P(W_l) ∝ exp(−½ W_lᵀ Σ_l⁻¹ W_l),  (36)

where W_lᵀ = (v_{l−1}, v_l, x_{l−1}, x_l) and Σ_l is the corresponding 4 × 4 correlation matrix [see Eq. (D16) in Appendix D]. To compute P(J_l), it is most convenient to consider its Fourier transform with respect to J_l, which is the moment generating function,

⟨e^{iμJ_l}⟩ = ∫ dv_{l−1} dv_l dx_{l−1} dx_l e^{iμJ_l} P(W_l).  (37)

Using Eqs. (36) and (37) and performing the Gaussian integrals, we get,

⟨e^{iμJ_l}⟩ = [1 − 2iμ J_act + μ² g_l²]^{−1/2},  (38)

where g_l and u_l denote stationary correlations, defined in Eq. (39) and satisfying u_l² = g_l² + J_act². Here J_act is the average active energy current given in Eq. (34).
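The numerically measured current distributions discussed in this section require sampling the instantaneous currents along a trajectory. A minimal estimator is sketched below; the symmetrized bond-current expression restates Eq. (30) as reconstructed above, and the boundary current follows Eq. (29).

```python
import numpy as np

def bulk_current(X, V, k=1.0):
    """Instantaneous current J_l from oscillator l-1 to l, Eq. (30),
    for l = 2..N; X and V have shape (samples, N)."""
    return 0.5*k*(X[:, :-1] - X[:, 1:])*(V[:, :-1] + V[:, 1:])

def boundary_current(v1, f1, gamma=1.0):
    """Rate of work done by the left reservoir, J_1 = (-gamma*v_1 + f_1)*v_1, Eq. (29)."""
    return (-gamma*v1 + f1)*v1
```

Histogramming these samples gives the distributions compared with Eqs. (40) and (50) in Figs. 5-7.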
The current distribution can be exactly computed by taking the inverse Fourier transform of Eq. (38) [see Appendix D for the details], which yields,

P(J_l) = (1/π g_l) exp(J_act J_l/g_l²) K_0(|J_l| u_l/g_l²),  (40)

where K_0(z) is the zeroth-order modified Bessel function of the second kind. In the thermodynamic limit, g_l can be computed explicitly [see Appendix D] and is given by Eq. (41), with

η_j = (1 + 4kτ_j²/m)^{1/2} − 1.

Figure 5(a) compares the numerically measured P(J_l) at l = N/2 with the analytical prediction Eq. (40) and shows excellent agreement. Interestingly, the current distribution is asymmetric and diverges near J_l = 0, despite having a nonzero mean. In fact, from Eq. (40), using the asymptotic behavior of K_0(z) for z → 0, we get,

P(J_l) ≃ −(1/π g_l)[ln(|J_l| u_l/(2 g_l²)) + γ_E],  (42)

near J_l = 0. Here γ_E ≈ 0.577216 is Euler's constant. This logarithmic divergence is illustrated in Fig. 5(b) for different values of the activity. On the other hand, P(J_l) shows asymmetric exponential decay at the tails. It should be mentioned here that the form of the distribution (40) is the same as the ones obtained previously in the context of time-integrated heat current fluctuations of Brownian particles in an active environment [44] and the relaxation of harmonic oscillators subjected to a temperature quench [45].

For RTP and ABP driven chains, the current distribution cannot be computed exactly, since P({x_l, v_l}) is not known explicitly. However, as we have shown in Sec. IV, the velocity distribution of the bulk oscillators P(v_l) is Gaussian even in these cases, and one can then expect Eq. (36) to hold approximately for 1 ≪ l ≪ N. In that case, the bulk current distribution for ABP and RTP driven chains should also follow Eq. (40). We investigate the validity of this approximation using numerical simulations: Figs. 6(a) and (b) compare the measured instantaneous current distributions for ABP and RTP drivings with Eq. (40). Indeed, a very good agreement is observed, including the logarithmic divergence near the origin, validating our analytical prediction for all three different active drivings. The higher moments of the bulk current can, in principle, be calculated from Eq. (40). In particular, the second moment is given by [see Appendix D 3],

⟨J_l²⟩ = u_l² + 2 J_act².  (43)

We compare this prediction with numerical simulations in Fig. 6, which again show very good agreement, even for ABP and RTP driven chains.

B. Instantaneous current distribution at the boundary

The signatures of activity become apparent in the current fluctuations near the boundary. Using the definition of the boundary current J_1 given in Eq. (29), the corresponding stationary distribution can be written as,

P(J_1) = ∫ dv_1 df_1 P(v_1, f_1) δ(J_1 − (−γ v_1 + f_1) v_1).  (44)

For the AOUP driven chain, we can again use the Gaussian nature of the driving force to write,

P(v_1, f_1) ∝ exp(−½ W_1ᵀ Σ_1⁻¹ W_1),  (45)

where W_1ᵀ = (v_1, f_1) and Σ_1 is the corresponding correlation matrix [see Appendix D 1]. To obtain P(J_1), we proceed in the same manner as in Sec. V A and first compute the moment generating function. Performing the Gaussian integrals, we arrive at an expression which is very similar to the moment generating function of the bulk current,

⟨e^{iμJ_1}⟩ = [1 − 2iμ J_act + μ² g_1²]^{−1/2},  (47)

where u_1 and g_1 are the boundary analogues of the correlations in Eq. (39) [see Eq. (D7)]. Once again, we can compute the inverse Fourier transform exactly [see Appendix D for the details], which yields an explicit form for the boundary current distribution,

P(J_1) = (1/π g_1) exp(J_act J_1/g_1²) K_0(|J_1| u_1/g_1²).  (50)

P(J_{N+1}) can be computed exactly following the same procedure. Clearly, the shape of the boundary current distribution is qualitatively similar to that in the bulk for the AOUP driven chain. In Fig. 7(a), the numerically measured P(J_1) is plotted along with the analytic curve Eq. (50), which, as expected, shows excellent agreement.
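The functional form of Eqs. (40) and (50), an exponential tilt multiplying K_0, is exactly the classical density of a product of two jointly Gaussian random variables, which makes it easy to verify independently. The following self-contained Python check compares a Monte Carlo histogram of J = XY against that closed form, with the identifications u = σ_X σ_Y, J_act = ⟨XY⟩ and g² = u² − J_act²; the parameter values are illustrative, not those of the chain.

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(2)
sx, sy, rho = 1.0, 0.7, 0.4                  # illustrative parameters
u, Jact = sx*sy, rho*sx*sy                   # u = sigma_X*sigma_Y, J_act = <XY>
g2 = u**2 - Jact**2                          # g^2 = u^2 - J_act^2

cov = [[sx**2, Jact], [Jact, sy**2]]
X, Y = rng.multivariate_normal([0, 0], cov, size=1_000_000).T
J = X*Y                                      # product of jointly Gaussian variables

hist, edges = np.histogram(J, bins=200, range=(-4, 4), density=True)
centers = 0.5*(edges[1:] + edges[:-1])
pdf = np.exp(Jact*centers/g2)*k0(np.abs(centers)*u/g2)/(np.pi*np.sqrt(g2))
# the largest deviations sit in the few bins around the log-divergence at J = 0
print(np.max(np.abs(hist - pdf)))
```

The tilt e^{J_act J/g²} makes the two exponential tails asymmetric, while the K_0 factor produces the logarithmic divergence at the origin, exactly as described for P(J_l).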
For RTP and ABP driven chains, however, the distributions of the boundary currents are drastically different. Figure 7(b) shows P(J_1) for the RTP driven chain, which has a monotonically increasing shape and reaches a maximum at J_1 = J_1^max, which is independent of τ_1 and τ_N. It also appears that P(J_1) has a semi-finite support: it vanishes for J_1 > J_1^max. For ABP, on the other hand, the distribution shows a maximum at J_1 = 0, although the finite cutoff at J_1 = J_1^max is still present in this case. It is hard to compute P(J_1) in these two cases. However, the existence of the finite cutoff directly follows from the boundedness of the active force f_j for RTP and ABP. In fact, from the definition J_1 = (−γ v_1 + f_1) v_1, it is clear that J_1 reaches its maximum value for v_1 = f_1^max/(2γ), where f_1^max denotes the maximum value of the active force. This, in turn, leads to

J_1^max = (f_1^max)²/(4γ).  (51)

This upper cutoff is indicated in Figs. 7(b) and (c) with vertical dashed lines, which perfectly agree with the numerically measured distributions. Using a similar argument, one can show that the instantaneous current at the right boundary has a lower cutoff at J_{N+1}^min.

VI. CONCLUSIONS

In this work, we study the stationary state properties of a harmonic chain driven by active reservoirs, which exert exponentially correlated stochastic forces on the boundary oscillators. Considering three different dynamics of the active force, namely the active Ornstein-Uhlenbeck process, Run-and-Tumble process and active Brownian process, we show that the typical stationary state behavior of the bulk oscillators does not depend on the specific driving. In fact, the bulk kinetic temperature, potential temperature, local velocity and instantaneous current distributions, which we compute analytically, all show the same qualitative features irrespective of the specific form of the activity driving. Surprisingly, in spite of the inherently nonequilibrium nature of the driving, the velocity distribution of the oscillators in the bulk is Gaussian for all three different drivings. The shape of the bulk current distributions also turns out to be universal, with a logarithmic divergence near the origin and asymmetric exponential tails. Moreover, the bulk kinetic temperature turns out to be the same as the bulk potential temperature, which indicates an equipartition of energy in the bulk of the system. On the other hand, the behavior of the oscillators near the boundaries bears clear signatures of the specific active driving. In fact, unlike the bulk current, the current at the boundary turns out to have a semi-finite support for RTP and ABP driven chains, the cutoff of which we also compute analytically.

This work adds a significant step towards the understanding of activity driven transport. It would be interesting to study the dynamical behavior of the activity driven chain, in particular the relaxation to the stationary state and how it differs from the thermally driven scenario. Another relevant question is how the NESS changes when the active reservoirs have more than one time-scale [39,46]. It is also worthwhile to ask how the stationary state behavior changes if the reservoirs are modeled by an extended active particle chain, similar to [47,48].

Appendix A: Exact solution of the Langevin equations

The Langevin equations (1) can be solved using a matrix Green's function method [6]. For the sake of completeness we provide the detailed solution in this section. It is convenient to recast Eq. (1) in the matrix form,
m Ẍ(t) = −k Φ X(t) − γ B Ẋ(t) + F(t),  (A1)

where X(t) = (x_1, . . . , x_N)ᵀ, F(t) is the active force vector with components F_l(t) = δ_{l,1} f_1(t) + δ_{l,N} f_N(t), Φ is the tridiagonal coupling matrix with elements Φ_{ll} = 2 and Φ_{l,l±1} = −1, and B = diag(1, 0, . . . , 0, 1). Equation (A1) can be solved exactly using the Fourier transform,

X̃(ω) = (1/2π) ∫ dt X(t) e^{iωt}.  (A3)

In the frequency domain, Eq. (A1) reduces to an algebraic equation,

X̃(ω) = G(ω) F̃(ω),  (A4)

where G(ω) is the Green's function matrix defined by,

G(ω)⁻¹ = −m ω² 𝟙 − iωγ B + k Φ,  (A5)

and F̃(ω) is the Fourier transform of the active force vector F(t). The exponential auto-correlation of F(t) leads to,

⟨F̃_i(ω) F̃_j(ω′)⟩ = δ_{ij} g̃(ω, τ_j) δ(ω + ω′),  (A6)

where,

g̃(ω, τ_j) = a_j² τ_j / [π (1 + ω² τ_j²)].  (A7)

From Eq. (A5), it is clear that G(ω) is a symmetric matrix and its complex conjugate satisfies G*(ω) = G(−ω). The elements of G can be obtained by exploiting the tridiagonal structure of G⁻¹ [49]. In particular, we will need the elements in the first and last rows, Eq. (A8), which are expressed in terms of auxiliary functions θ_l satisfying recursion relations, with the boundary conditions θ_0 = 1 and θ_1 = −mω² + 2k − iωγ. These recursion relations can be explicitly solved, Eqs. (A11) and (A12), where ω and q are related through,

ω = ω_c sin(q/2),  (A13)

with ω_c = 2√(k/m). Moreover, for notational simplicity, we have introduced the functions a(q) and b(q) in Eq. (A14). Note that, for |ω| < ω_c, i.e. for frequencies within the characteristic band of the harmonic chain, q ∈ [−π, π], whereas for |ω| > ω_c, q becomes complex.

Appendix B: Velocity correlations

In this section, we provide the details of the computation of the nearest-neighbour velocity correlations ⟨v_{l−1}(t) v_l(t)⟩ = ⟨Ẋ(t) Ẋᵀ(t)⟩_{l−1,l} in the steady state. To this end, using (A3) and (A4), we first write the frequency-space representation of ⟨Ẋ Ẋᵀ⟩. Using it along with Eq. (A6), it is clear that there are two separate contributions from the left and right reservoirs. In the following we explicitly compute the contribution V_l(τ_1) coming from the left reservoir; the contribution from the right reservoir can be computed similarly. Using Eq. (A8), V_l(τ_1) is expressed as an integral over ω. Clearly, V_l(τ_1) will have non-zero contributions from only the even components of the integrand. Hence, using the explicit forms of θ_l and θ_n from Eqs. (A11) and (A12) and keeping only the terms which are even in ω, we arrive at the corresponding integral expression. Finally, since we are interested in calculating the correlation function in the bulk, we take l = N/2 + ε with ε ≪ N. At this point, it is important to remember that, for ω > ω_c, q becomes complex. Thus, in the large-N limit, the integrand vanishes exponentially as e^{−2Nq̄} in the region ω > ω_c (where q̄ is real). Therefore, the range of the integration reduces to 0 ≤ ω ≤ ω_c, or, in terms of q, 0 ≤ q ≤ π. Moreover, in the thermodynamic limit, sin Nq and cos Nq are highly oscillatory and the resulting integrand can be well approximated by averaging over the fast oscillations in x = Nq [10]. This averaging can be performed using the identities (B7). Identifying c_1, c_2 and d as the real and imaginary parts of a(q), and the real part of b(q), respectively [see Eq. (A14)], the resulting integral can be performed exactly using the explicit forms of ω(q) and g̃(ω, τ_1) from Eq. (A13) and Eq. (A7). Similarly, the contribution from the right reservoir V_l(τ_N) can also be calculated. Combining these results, we finally get the bulk velocity correlation, where

η_j = (1 + 4kτ_j²/m)^{1/2} − 1.

Appendix C: Potential energy profile

In this section, we explicitly compute the average potential energy of the l-th oscillator in the NESS, defined by Eq. (23). As mentioned in Eq. (25), the average potential energy of the l-th oscillator can be written as (k/4) Σ_{j=1,N} U_j(l, τ_j), where U_j(l, τ_j) denotes the contribution from the j-th reservoir. Using Eq. (24) in the definition (23), we obtain, for l ≠ 1, N, the integral representation (C1), with j = 1, N; for the boundary oscillator l = 1 the corresponding representation is Eq. (C2). Note that the Fourier transform g̃(ω) of the two-point auto-correlation function of the colored noise is an even function of ω.
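The Green's function construction of Appendix A is easy to check numerically by building G(ω) as a direct matrix inverse. The short Python sketch below assumes the matrix form G(ω)⁻¹ = −mω²𝟙 − iωγB + kΦ of Eq. (A5) and verifies the symmetry properties stated after Eq. (A7).

```python
import numpy as np

def greens_matrix(omega, N=16, m=1.0, k=1.0, gamma=1.0):
    """G(omega) from Eq. (A5), obtained by direct numerical inversion."""
    Phi = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # fixed-end coupling
    Ginv = -m*omega**2*np.eye(N, dtype=complex) + k*Phi
    Ginv[0, 0] += -1j*omega*gamma                          # boundary dissipation
    Ginv[-1, -1] += -1j*omega*gamma
    return np.linalg.inv(Ginv)

w = 0.7
G = greens_matrix(w)
assert np.allclose(G, G.T)                         # G is symmetric
assert np.allclose(G.conj(), greens_matrix(-w))    # G*(w) = G(-w)
```

The same matrix gives direct access to the transmission factor |G_1N(ω)|² entering the Landauer-like current formula, and its rapid decay for |ω| > ω_c reflects the band edge at ω_c = 2√(k/m).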
Therefore, in Eqs. (C1) and (C2), we can neglect the terms with odd powers of ω, as they would give vanishing contributions. In the following, we compute the non-zero contributions explicitly for a thermodynamically large chain.

1. Potential energy for the bulk oscillators (1 ≪ l ≪ N)

We start with the computation of ⟨U_l⟩ for the oscillators in the bulk, i.e., for 1 ≪ l ≪ N. Using Eqs. (A8), (A11), and (A12) in Eq. (C1), we obtain the contribution from the left reservoir, where we have kept only the even component of the integrand in (C1), for l ≠ 1, N. Similarly, the contribution from the right reservoir can be expressed as an analogous integral. It is easy to see that, in the thermodynamic limit, the integrand vanishes in the region ω > ω_c [see the discussion after Eq. (B6)]. Moreover, averaging over the fast oscillations in this limit using the identities (B7), and using the ω-q relation (A13) together with the explicit form of g̃(ω, τ) from Eq. (A7), we arrive at an integral over q which can be computed exactly. The contribution from the right reservoir can be similarly obtained and turns out to be of the same form; combining the two yields the final expression for the average potential energy of the bulk oscillators in the thermodynamic limit.

2. Potential energy for the oscillator near the left boundary

The average potential energy of the left boundary oscillator ⟨U_1⟩ has two contributions, U_1(1, τ_1) and U_N(1, τ_N), from the reservoirs at the two ends, given by Eq. (C2). Substituting the explicit forms of G_{lm} from Eqs. (A8) and (A12), we get the contribution from the left reservoir, where, as before, we have kept only the terms which are even in ω; the contribution from the right reservoir has a similar form. Once again, in the thermodynamic limit N → ∞, the integrand vanishes for ω > ω_c and shows fast oscillations for ω < ω_c. Averaging over these fast oscillations as before, the right-reservoir contribution reduces to an integral which can be evaluated numerically, remembering ω = ω_c sin(q/2) and using g̃(ω, τ_j) from Eq. (A7). In contrast, the contribution from the left reservoir is non-zero on the whole domain 0 < ω < ∞. In this case, it is convenient to consider the contributions from inside the band (0 ≤ ω ≤ ω_c) and outside the band (ω > ω_c) separately. The contribution from inside the band, Eq. (C18), can again be averaged over the fast oscillations in x = Nq in the thermodynamic limit. Outside the band, i.e., for ω > ω_c, q becomes complex, and to compute the corresponding contribution it is convenient to define q = π − iq̄, where q̄ is a real variable and ω = ω_c cosh(q̄/2). The integral in Eq. (C18) takes a much simpler form in terms of q̄, where we have used Eq. (C10) and the identities,

sin nq = (−1)^{n+1} i sinh nq̄,  cos nq = (−1)^n cosh nq̄,  (C20)

for integer n, to average over the fast oscillations in Nq̄. Finally, the average potential energy of the left boundary oscillator can be evaluated by combining Eqs. (C13), (C16) and (C19). For 1 < l ≪ N/2 the average potential energy can be computed following a similar procedure, and we quote the final result here. In this case, the contribution from the right boundary simplifies to,

U_N(l, τ_N) = (1/2πkγ) ∫_0^π dq [(1 + cos q)/(k² + γ²ω²)] [γ²ω² cos(2lq − 2q) + k² cos(2lq)] g̃(ω, τ_N).  (C21)

As before, the left reservoir contribution comprises two parts, one coming from inside the band and the other from outside the band, which can be computed numerically.

Appendix D: Distribution of the instantaneous currents

1. Boundary current distribution for the AOUP driven chain

For the AOUP drive, the stationary joint distribution of the left boundary velocity v_1 and active force f_1 is a bivariate Gaussian,

P(v_1, f_1) ∝ exp(−½ W_1ᵀ Σ_1⁻¹ W_1),  (D1)

where W_1ᵀ = (v_1, f_1), and the corresponding 2 × 2 correlation matrix Σ_1 is positive-definite.
Using Eq. (D1), the distribution of J_1 can be expressed as,

P(J_1) = ∫ dv_1 df_1 P(v_1, f_1) δ(J_1 − (−γ v_1 + f_1) v_1).  (D3)

The corresponding moment generating function, which is nothing but the Fourier transform of P(J_1), is given by,

⟨e^{iμJ_1}⟩ = ∫ dv_1 df_1 e^{iμ(−γ v_1 + f_1) v_1} P(v_1, f_1),  (D4)

where ⟨f_1²⟩ = D_1/τ_1 in the stationary state. The Gaussian integrals over v_1 and f_1 can be evaluated exactly using Eq. (D2), yielding,

⟨e^{iμJ_1}⟩ = [g_1² (a + iμ)(b − iμ)]^{−1/2},  (D6)

with a = (u_1 + J_act)/g_1² and b = (u_1 − J_act)/g_1². Here u_1 and g_1 correspond to certain stationary state correlations, given by,

u_1 = [(D_1/τ_1 − 2γ J_act − γ² T̂_1) T̂_1]^{1/2},  g_1² = det(Σ_1) = u_1² − J_act²,  (D7)

where T̂_1 = ⟨v_1²⟩ is the kinetic temperature of the oscillator at the left boundary, which has been calculated in Ref. [30]. The current distribution can be obtained by taking the inverse Fourier transform of the moment generating function Eq. (D6),

P(J_1) = (1/2π) ∫ dμ e^{−iμJ_1} ⟨e^{iμJ_1}⟩.  (D8)

To evaluate this complex integral explicitly, we need to choose a convenient contour. Both a and b are real positive quantities [Eq. (D7)]. Hence the integrand in Eq. (D8) has two branch points, at μ = ia and μ = −ib. We choose the corresponding branch cuts as shown in Fig. 9. Now, for J_1 < 0, one can draw a closed contour ABCDEFA which has no singularities inside, and hence,

I_{A→B} + I_{B→C} + I_{C→D} + I_{D→E} + I_{E→F} + I_{F→A} = 0,  (D9)

where I_{α→β} denotes the integral Eq. (D8) evaluated along the path α → β. Clearly, for J_1 < 0, the contributions from I_{B→C} and I_{F→A} vanish when the radius R of the arcs goes to infinity. Similarly, I_{D→E} → 0 when the radius of the circular arc DE vanishes. Hence, from Eq. (D9) we have,

P(J_1) = I_{A→B} = −(I_{C→D} + I_{E→F}).  (D10)

To evaluate I_{C→D} and I_{E→F}, we note that along the segments CD and EF, μ = ia + r e^{iπ/2} and μ = ia + r e^{−i3π/2}, respectively, where r ∈ [0, ∞). Substituting these in Eq. (D8) and using Eq. (D10), we finally get, for J_1 < 0, an expression involving K_0(z), the modified Bessel function of the second kind (D11). The distribution for J_1 > 0 can be computed similarly by choosing the contour ABGHIJA; in this case, we get the analogous expression (D12). Using the explicit forms of a and b and combining Eqs. (D11) and (D12), we get the complete boundary current distribution, which is quoted in Eq. (50).

Higher moments of the active current can, in principle, be computed from Eq. (38) or (40) for the AOUP driven chain. In this case, the second moments of the bulk and boundary currents, respectively, are given by,

⟨J_l²⟩ = u_l² + 2J_act²,  (D21)    ⟨J_1²⟩ = u_1² + 2J_act².  (D22)

For RTP and ABP driven chains, as discussed in Sec. V A, Eq. (40) describes the fluctuations of the bulk current reasonably well. Hence, we expect Eq. (D21) to also hold in these cases, which indeed is the case, as shown in Figs. 8(b) and (c). The boundary current distributions for RTP and ABP, however, are drastically different [see Sec. V B] and, consequently, Eq. (D22) is not expected to describe the variance of the boundary currents in these scenarios. Hence we take recourse to numerical simulations in this case: Figs. 10(b) and (c) show plots of the numerically measured ⟨J_1²⟩ for the RTP and ABP driven chains, respectively. It turns out that, similar to the behavior of the average current, the second moment also shows non-monotonic behavior in these cases.
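As an independent check of the contour calculation, the inverse Fourier transform leading to the K_0 form can also be done by brute-force quadrature of Eq. (D8). The sketch below uses the moment generating function [1 − 2iμJ_act + μ²g²]^{−1/2}, which has the same branch-point structure as Eq. (D6), with illustrative values in place of J_act and g.

```python
import numpy as np
from scipy.special import k0

Jact, g2 = 0.4, 0.7                          # illustrative J_act and g^2
u = np.sqrt(g2 + Jact**2)                    # u^2 = g^2 + J_act^2
mu = np.linspace(-2000, 2000, 2**20)
dmu = mu[1] - mu[0]
phi = (1 - 2j*mu*Jact + mu**2*g2)**(-0.5)    # MGF, same structure as Eq. (D6)

for J in (0.5, 1.0, -1.0):
    # P(J) = (1/2pi) * integral of exp(-i*mu*J) * phi(mu), Eq. (D8)
    P_num = (np.exp(-1j*mu*J)*phi).real.sum()*dmu/(2*np.pi)
    P_exact = np.exp(Jact*J/g2)*k0(abs(J)*u/g2)/(np.pi*np.sqrt(g2))
    print(J, P_num, P_exact)                 # agree to roughly 1e-3
```

For J_1 < 0 and J_1 > 0 the quadrature reproduces the two contour results, Eqs. (D11) and (D12), combined in Eq. (50).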
Understanding Fundamental Phenomena Affecting the Water Conservation Technology Adoption of Residential Consumers Using Agent-Based Modeling

More than one billion people will face water scarcity within the next ten years due to climate change and unsustainable water usage, and this number is only expected to grow exponentially in the future. At current water use rates, supply-side management is no longer an effective way to combat water scarcity on its own. Instead, many municipalities and water agencies are looking to demand-side solutions to prevent major water loss. While changing conservation behavior is one demand-based strategy, there is a growing movement toward the adoption of water conservation technology as a way to address water resource depletion. Installing technology in one's household requires additional costs and motivation, creating a gap between the overall potential households that could adopt this technology and how many actually do. This study identified and modeled a variety of demographic and household characteristics, social network influence, and external factors such as water price and rebate policy to examine their effect on residential water conservation technology adoption. Using agent-based modeling and data obtained from the City of Miami Beach, the coupled effects of these factors were evaluated to examine the effectiveness of different pathways towards the adoption of more water conservation technologies. The results showed that income growth and water pricing structure, more so than any of the demographic or building characteristics, impacted household adoption of water conservation technologies. The results also revealed that the effectiveness of rebate programs depends on conservation technology cost and the affluence of the community. Rebate allocation did influence expensive technology adoption, with the potential to increase the adoption rate by 50%. Additionally, social network connections were shown to have an impact on the rate of adoption independent of price strategy or rebate status. These findings will lead the way for municipalities and other water agencies to more strategically implement interventions to encourage household technology adoption based on the characteristics of their communities.
Introduction

Water is undeniably necessary, supporting 7.4 billion people and over 8.7 million species of life. However, the growing human population and the consequences of climate change have created widespread water scarcity that is only expected to worsen in the coming decades. By 2025, 1.8 billion people around the globe will face water scarcity [1]. Beyond damaging an individual's quality of life, water scarcity also negatively impacts ecosystem health and political and social stability [2]. Climate change adds further pressure to water resources, and government officials and policy advocates have taken two different approaches to address growing water concerns: supply-side management and demand-side management [3]. Supply-side management focuses more on increasing the availability of water through the development and renewal of water infrastructure systems and identifying new water sources [4]. This encompasses the creation of reservoirs, water pumps, and irrigation systems to continue to have adequate water supplies. Supply-side solutions have been effective historically; however, they do not influence the water use patterns of the consumer, which is the next necessary step in managing demand growth [4]. Demand-side management is based on the idea that lowering a household's (or other users') usage of water will subsequently reduce water demand. While implementing demand-side management to govern a typically inelastic good is controversial among economists and planners, it has been shown in many studies to be effective in alleviating water scarcity [5][6][7][8]. Ref. [6] reviewed different demand-side management tools and explored their potential and effectiveness to save water under varying conditions in developed countries. At its core, reducing residential water demand can be done by changing behavior or technology [6]. Changing someone's behavior, according to [9,10], is a process including incentives and disincentives, the modeling of behaviors, education, and persuasive communication. These techniques work best with mostly-engaged audiences, are adopted infrequently, and are less likely to save water if people do not trust the water authorities [11]. Despite all of these multifaceted approaches, changing behavior tends to pan out only in the short term, while the comprehensive installation of water-efficient appliances in households has been shown to reduce indoor consumption by 35-50% [6]. Change in technology is meant to curb the problems with behavioral conservation changes by erecting a more permanent fixture for conservation. In a report on California's water scarcity, [12] found that one-third of the state's water usage could be saved with existing conservation technology. This total equates to more than 2.3 million acre-feet of water. As technology improves, as it has drastically since this report was written in 2003, water savings will only become more prominent. The dire state of water scarcity has diminished the sufficiency of supply-side management. It will eventually become too difficult to track down additional water sources, or there will simply be no more water left to find. Because of this, more research is needed on demand-side approaches. Additionally, although there are two parts to demand-side water management, change in technology will be the most permanent, applicable method heading into the coming decades [13]. Change in behavior is typically ephemeral, while technology is more easily maintained through water policy adoption. However, technology's impact on policy implementation and household
adoption patterns still needs to be specified and characterized. Governmental rebate availability, demographic and household characteristics, and external factors are variables that can cause different adoption patterns. Additional costs or potential savings of technology adoption can also be highly variable [14][15][16]. In addition, the role of "word of mouth" through social network interactions has been shown to be influential in the adoption process [17]. While some of these influential factors have been researched to promote policy change and growth, there is a deficiency in the existing literature as to how they all intersect and challenge water conservation technology adoption.

To mitigate water scarcity, understanding why, and to what extent, households adopt conservation technology based on demographic and household characteristics, social interactions, technology cost, water price and other factors is crucial. To this end, the study presented in this paper aimed to investigate the underlying factors and behaviors affecting the water technology adoption of residential consumers through the use of agent-based modeling (ABM). In the agent-based model of the current study, households are agents categorized into the three adoption states of non-adopter, potential adopter, and adopter, based on the theory of innovation diffusion [18]. The transition of agents between non-adopter and potential adopter is driven by the adoption utility of households, which is determined by their demographic and household characteristics [19]. Another mechanism triggering this transition is social interaction, which influences households' adoption decision-making based on the theory of peer effect [20]. In addition, per the theory of affordability, if the adoption of a new technology is economically affordable for households [21], they would adopt it and thus transition from the potential adopter state to the adopter state.

Unlike studies that focus on residential water use behaviors [21], conservation technology effectiveness [22,23], and demand projection [24,25], the current study investigated how changes in different mechanisms (such as water price structure) can affect the adoption rate of conservation technology (rather than residential water demand). Hence, the outputs of the ABM developed in this study are the number and type of adopted water conservation technologies under the influence of various factors (e.g., socio-demographic characteristics, social networks, and water policies). In fact, the outcomes of the model developed in this study can supplement the information from residential water demand projection models in order to incorporate the effects of water conservation technology adoption in projecting future demands under various scenarios.

Background

Despite there being an immediate need for households to begin conserving water, there is limited knowledge within the scientific community on the reasons people adopt water conservation practices in the first place. Water conservation encompasses both behavioral conservation and technology adoption. Because the scope of water conservation is so vast, with both behavioral and technological possibilities, this study focused on water conservation technology adoption as a means of resolving problems with water scarcity. More specifically, we plan to examine the underlying mechanisms affecting a household's willingness to adopt water conservation technologies.
Most of the recent literature on residential water conservation management and technology adoption incorporates some of the following features: water conservation affordability, water price and incentives, education and demographics, household/building attributes, and social network influence. In the following sub-sections, some of the studies on residential water conservation and technology adoption were used for identifying various influencing mechanisms and factors. Although recent studies in this field have contributed thoroughly to water management and the understanding of household influence on water conservation technology, there is currently little to no research assessing all of these mechanisms and factors at once. The remainder of this section summarizes the various mechanisms and factors affecting the water conservation technology adoption of households.

Water Conservation Affordability

Public acceptance of water conservation technology adoption is integral, but also highly variable [14,16]. The characteristics that influence the potential installation of water conservation technologies are not fully understood. According to [16], cost is one of the largest deterrents of, or motivations for, adopting water-saving technologies. The more expensive a technology, the less likely a household will install it. Income level plays a similar role in influencing the public perception of water-saving technology adoption [26]. Ref. [27] claims that higher-income households are more willing to adopt technologies. Those with less income, conversely, may simply struggle to afford new technologies.

Water Price and Incentives

Directly reflecting cost and income, external factors such as water pricing and rebate programs play a role in water-saving technology adoption [6]. In a study of 13 California cities, it was found that certain price-based deterrents of water consumption were more influential on conservation than installing water-saving technology [28]. The higher the price of water, the less technology one would adopt; conversely, the lower the price of water, the more technology one would install [28]. Ref. [6] argues that the comprehensive adoption of water conservation technologies can only be implemented by setting effective regulations and incentives. This sentiment is echoed by another study, which supports the implementation of rebate programs, particularly for showerheads and clothes washers [29]. However, in older studies, government control and assistance were regarded as counterproductive, causing more grief than environmental pay-off [30]. Ref. [31] asserts that households avoid government programs because they cause increased confusion, provide limited choices, take too much time to install, and do not show direct conservation effects. To solve this, the greater the financial benefit a government entity or utility employs to encourage water-saving technology adoption, the greater the non-financial resources, such as marketing and education, that are needed [27]. While there are conflicting perspectives, it is clear that water pricing and other external factors have potential effects on water conservation technology adoption.
Education and Demographics

Education and awareness can be just as influential as government financial incentives. Education correlates positively with public acceptance of water-conserving practices [14,16,32]. The development of a greywater reuse program in Barcelona was considered a success due to its awareness efforts and education [33]. For water conservation in general, the more knowledge a household has of conservation practices, whether through behavior or technology, the more that household conserves water [32]. Along with education, researchers have found other demographics that influence a household's willingness to adopt water conservation technology. One example is home ownership status; those who own their home are more likely to consider long-term water conservation solutions such as technology [34,35]. Gender can also make a small impact; since women are commonly heads-of-households, they are more likely to make water conservation technology decisions [19].

Household/Building Attributes

There are studies that show that the specific characteristics of a house itself reflect a particular willingness of the household to adopt water conservation infrastructure. Firstly, the age of a home dictates openness to new technology [36]. The newer the home, the more likely it is to already have water-saving infrastructure [36]. House size also influences public perception, for those who live in bigger homes may also incur larger water costs and, thus, feel more obliged to invest in water- and cost-saving technology [32]. Installing water conservation infrastructure outside the home can also restore water supplies. Households with larger open spaces are more willing to incorporate technology, since outdoor areas significantly contribute to water usage [37].

Social Network Influence

Recent studies have shown that, both in developing and developed countries, social networks and peer effects are important phenomena in human technology adoption behavior [17,38]. Individual consumer attitudes are modified over time through social influence and interactions [39]. Contextually, households share information and learn from one another. A head-of-household is likely to adopt water-efficient technology based on interactions with someone who has adopted the technology. Technology-adopting families educate others on the benefits of the technology through their interactions with it. Intuitively, households are more likely to adopt when they know and are connected to other adopters [40]. Through community, people are connected through different means: family, work, neighborhoods. Interactions among households depend on the structure of the social networks through which they are connected [41]. Scientifically, however, it is difficult to identify all possible connections based on empirical data [17].
Significance

Understanding the underlying mechanisms of water conservation technology adoption patterns is relevant because water scarcity is becoming a worldwide epidemic. There are two ways conservation can combat this problem: changing conservation behavior and changing conservation technology. While changing conservation behavior has made significant strides in water preservation, it is not the only piece of the puzzle [12]. It has been discussed that technology improvement is a quicker and more permanent method [6]. However, more research is required to understand the full potential that technology has for water conservation in households. Changing conservation technology in conjunction with behavioral changes can help alleviate water scarcity altogether. As technology improves, as it does every day, there will need to be methods for implementing the technology in households with different demographic, household, and external factors. Households are the agents adopting the technology; therefore, knowing their variability in adoption probability is the next big step in improving the status of drought and water scarcity.

There has yet to be research done that can simultaneously analyze all the demographic, household, and external factors (i.e., water pricing structure and rebate policy), as well as the social networks, that could influence a household's decision to install water conservation technology. Without this information, government agencies will have no starting point for raising awareness or creating proper policies and regulations to encourage technology adoption. Conservation measures will not be grounded in any knowledge of household influences, making them futile. By focusing on these demographic, household, social and external factors, all aspects of demand-side water management can be evaluated together to solve larger societal and political problems regarding water scarcity and climate change.

Methodology

To implement this research, a simulation approach was used. The simulation approach enables replicating many different types of populations, while other methods (such as conducting surveys and interviews) can only reflect one particular population at a time [42]. According to [43], simulation is an effective method for theory development when (i) a theoretical field is new; (ii) the use of empirical data is limited; and (iii) other research methods fail to generate new theories in the field. These traits are consistent with the current study of water conservation technology adoption. The chosen simulation technique for this study is agent-based modeling.
Agent-Based Modeling

Agent-based modeling (ABM) is a powerful modeling technique that focuses on the individual active components of a system [44]. In ABM, active components (e.g., human entities) are characterized as agents, each with a set of social capabilities and goals, values, and preferences. Agents exist in an environment defined by specific rules/micro-behaviors and can inform or evolve their goals or priorities over time [45]. ABM can account for (1) various rational and behavioral decision-making rules for different agents; and (2) an agent's reactions to other agents' decisions. The use of ABM will enable (1) discovering what factors and micro-behaviors result in technology adoption decisions; (2) juxtaposing the preferences of various households with the range of conservation technology alternatives to determine the distribution of expected conservation outcomes; and (3) exploring effective intervention strategies to enhance water conservation technology adoption. In addition, the use of ABM will enable the construction of a theoretical space that will include a range of community profiles in terms of demographics, water use, social network structures, and other factors. ABM can replicate many different types of populations and project diverse, tangible scenarios throughout future years [46,47].

ABM has been successful in studying complex behaviors, policy analysis in infrastructure systems [48,49], and water demand management. Refs. [4,24,50] have utilized ABM as a successful tool to analyze water management systems. Ref. [50] demonstrated that ABM is a useful methodological approach to dealing with the complexity derived from the multiple factors influencing domestic water management in emergent metropolitan areas. Ref. [4] developed an ABM framework for assessing consumer water demand behavior against different degrees of water supply and water supply systems. Their model incorporated both consumers and policy-makers as agents as they adapted their behaviors to different water supply systems and rainfall patterns. Studies such as these have set a precedent that agent-based modeling is a viable research tool for water use and management issues.

ABM has also been successfully adopted in the evaluation of complex phenomena in human-technical systems, such as the adoption of environmentally-friendly technologies [38,41,51,52]. For example, Ref. [41] developed an agent-based model for the adoption of residential solar photovoltaic (PV) systems. In addition, other studies, such as one conducted by [38], showed that ABM can be useful in the simulation of the adoption behavior of innovative energy conservation technologies by capturing the underlying mechanisms affecting the decision-making behaviors of households. In another study, Ref. [52] adopted ABM to simulate the technology adoption behaviors related to three water-related innovations among households in Southern Germany. This study demonstrated that ABM enables capturing the effects of various factors and attributes (e.g., geographic attributes, heterogeneous agents, and decision processes). According to [52], ABM provides a more realistic model of innovation diffusion in comparison with aggregated models such as the Bass model.
Ref. [52]'s research evaluated the trends of innovation diffusion under several water strategies and policies by developing an empirically-based ABM. However, their model differs from the one in the current study, in which a theoretically-driven ABM was developed that enables policymakers to test various intervention strategies to further diffuse water-efficient infrastructure in their application area. In particular, the model in the current study captures the effects of social networks in conjunction with several other socio-demographic factors in understanding household behaviors related to water conservation technology adoption.

In addition, ABM provides a useful tool for conducting exploratory analysis. Exploratory analysis [53,54] utilizes computational models and simulation experiments to conduct scenario analysis and evaluate the behavior of complex systems [47,55]. Exploratory analysis has been utilized in different studies (e.g., [56,57]) for the evaluation of environmental policies. Unlike traditional simulation approaches, exploratory analysis does not aim to predict the behavior of a system and does not intend to optimize a system. Instead, exploratory analysis focuses primarily on considering different policy scenarios based on changes in system behavior and future uncertainty. To this end, ABM enables capturing the adaptive behaviors and complex interactions that affect the patterns of behaviors in a phenomenon of interest [58]. Hence, ABM was selected in this study to conduct an exploratory analysis of the underlying mechanisms affecting water conservation technology adoption by residential consumers.

Theoretical Framework

The ABM in this study was created based on a number of theoretical elements, including the theories of Innovation Diffusion, Peer Effect, and Affordability. Demographic and building characteristics, external factors, and social interactions all play a role in whether or not a household adopts water conservation technology. As discussed in Section 2, there have been many studies that analyze the influence of certain demographic, household, and external factors on water conservation technology adoption in isolation; however, theoretically, all of these attributes have the potential to influence a household's willingness to adopt a conservation technology. To this end, the theory of Innovation Diffusion was adopted to capture the coupled effect of income level, education, ownership status, house age, water pricing regimes, rebate availability, technology cost, and social networks concurrently. Based on Innovation Diffusion Theory (IDT), in adopting new technologies, a population can be divided into three groups: non-adopters, potential adopters, and adopters [18]. Non-adopters are individuals who do not consider adopting a new technology. In contrast, potential adopters are individuals who do consider adopting new technologies. Different demographic and household attributes can influence whether an individual is a non-adopter or a potential adopter. A potential adopter may become an adopter if adopting the technology is economically affordable. Based on a similar premise, in this study, households were divided into three categories (i.e., non-adopter, potential adopter, and adopter) in terms of their position on water conservation technology adoption. The transitions of households between these categories depend on their demographic characteristics, household attributes, and peer influence, as well as water price and technology price factors.
theoretical framework of these transitions is depicted in Figure 1. The different components of the ABM framework are explained in the following section.

Computational Simulation

The creation of a computational representation of the proposed ABM theoretical framework entails the construction of mathematical models and algorithms that capture the theoretical logic representing the behaviors of households in the adoption of water conservation technology. AnyLogic 7.0 was utilized to create the computational ABM. In the ABM framework proposed in this study, an agent (household) is the main target of influence, and the model shows how the agents' behaviors change over a designated period of time. The model incorporates only one agent class, the households. The households were divided into three categories (i.e., non-adopter, potential adopter, and adopter), defining their position on water conservation technology adoption. The transitions of households between these categories depend on their demographic and social attributes as well as on water price and technology price factors. A household agent, based on its attributes, can transition from one state to another: from non-adopter to potential adopter, and from potential adopter to adopter. These transition functions ultimately steer an agent toward or against a particular outcome. The variables related to the household socio-demographic characteristics, including household income, head education, age and gender, house ownership status, and household size, as well as the household building attributes, such as house size, house age, and garden size, were used to determine one parameter, called the Adoption Utility, presented in Equation (1):

Adoption Utility = Σ_i (β_i × x_i), (1)

where x_i is the value of the i-th socio-demographic or building attribute of the household and β_i is its coefficient. The variables related to the socio-demographic and building attributes of the households, as well as the coefficients of these variables, were abstracted from the study conducted by [19]. The variables and their coefficients are summarized and documented in Appendix A (Table A1). For example, the Adoption Utility of a household whose head is a female college graduate, with no other demographics considered, is calculated as 2.91(education) × 1(yes) + 1.21(gender) × 1(female). If the utility value is greater than or equal to a user-inputted utility threshold, it triggers the transition from non-adopter to potential adopter. The threshold indicates a measure of sensitivity: a model user can increase the adoption utility threshold in order to increase the importance placed on the demographic and household characteristics. For this particular model, the lowest possible theoretical threshold is 3000, while the maximum threshold is 60,000. The utility threshold is important because it allows the model to simulate a variety of community profiles. Because the utility value and the threshold are based on the demographic characteristics and on the importance of those characteristics, respectively, variations in the threshold values make it possible to explore a range of community profiles. Communities have varying characteristics (e.g., income, education, or even house size distribution), and through the use of the utility threshold these differences among communities can be reflected in the analysis.
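As a concrete illustration of this rule, the minimal Python sketch below computes a linear Adoption Utility and compares it against the threshold. Only the education (2.91) and gender (1.21) coefficients appear in the text; every other attribute name, weight, and example value is a placeholder standing in for the Table A1 entries, which are not reproduced here.

```python
# Sketch of the Adoption Utility rule (Equation (1)). Coefficients other
# than education (2.91) and gender (1.21) are placeholders, not Table A1.
COEFFICIENTS = {
    "education": 2.91,   # 1 if the household head is a college graduate
    "gender": 1.21,      # 1 if the household head is female
    "income": 0.25,      # placeholder weight (per dollar of annual income)
    "ownership": 1.0,    # placeholder: 1 if owner-occupied
    "house_size": 0.5,   # placeholder weight (per square foot)
}

def adoption_utility(attributes):
    """Linear combination of household attributes (Equation (1))."""
    return sum(COEFFICIENTS[k] * v for k, v in attributes.items())

def becomes_potential_adopter(attributes, utility_threshold):
    """Non-adopter -> potential adopter when utility meets the threshold."""
    return adoption_utility(attributes) >= utility_threshold

# Example: female college-graduate head with placeholder income/house values.
household = {"education": 1, "gender": 1, "income": 85_000,
             "ownership": 1, "house_size": 1600}
print(becomes_potential_adopter(household, utility_threshold=30_000))
```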
The function rule that triggers the transition from potential adopter to adopter is based on the Affordability Theory. Affordability is defined as the ability of households to pay for their water expenditures [59]. A household's annual water expenditures include the annual water bill plus the costs of the new water conservation technologies adopted up to that year. In this model, a household's Affordability Index is measured as the household's annual water expenditures as a percentage of its annual income, as shown in Equation (2) [21,59]:

Affordability Index = [B + Σ_T (C_T − R_T) × n_T] / I × 100%, (2)

where B is the household annual water bill, I is the household annual income, T indexes the water conservation technologies available for adoption, C_T is the average initial cost of purchasing technology T, R_T is the available rebate for the adoption of technology T, and n_T is the number of units of technology T in the household.

If the Affordability Index of a household agent is less than the user-defined affordability threshold value, the household agent will transition from potential adopter to adopter. If it exceeds the affordability threshold, the adoption of the technology is not affordable, and the agent will remain a potential adopter. In other words, a household adopts the offered conservation technologies until the household's Affordability Index exceeds the affordability threshold value. The affordability threshold value is a function of income, water price, and water technology costs. Since water price might be regulated based on the income profile of communities, the affordability threshold can be location-specific. The affordability threshold ranges from 1-3% according to studies conducted by the California Department of Public Health, the US Environmental Protection Agency, and United Nations Development Programs [60].

In the affordability measurement process, the water price regime is incorporated into the model as an input parameter. Three different water pricing structures were assessed: fixed price, fixed charge, and block prices. The fixed price strategy charges a set rate per unit of water; for example, one cubic meter of water costs a household $1.16. A noteworthy feature of this pricing strategy is that the cost depends directly on how much water is used. Conversely, the fixed charge is a pre-established flat rate ($25.25) per month, regardless of how much water is actually consumed. Block pricing is similar to fixed pricing in that it is a volumetric strategy in which the charge depends on how much water is used. However, instead of charging all consumption at the same unit rate, block pricing charges households at rates that rise with the amount of water they consume: households who typically use more water are charged at a higher rate than those who use less. More specifically, households using less than 0.65 m³/day of water are charged $0.95 per m³; households using between 0.65 and 1.5 m³/day are charged $1.14 per m³; and households using more than 1.5 m³/day are charged $1.37 per m³. These water pricing structures were proposed by [23], and the price values are based on the Miami-Dade Water and Sewer Department's rates [61].
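The pricing structures and the affordability test translate directly into code. The following sketch uses the rates and block tiers quoted above; the technology cost, rebate, and count in the example are placeholders for the Table A2 values.

```python
# Sketch of the three pricing structures and the affordability test
# (Equation (2)). Rates and tiers are those quoted in the text.
def annual_water_bill(daily_use_m3, pricing):
    if pricing == "fixed_charge":       # flat $25.25 per month
        return 25.25 * 12
    if pricing == "fixed_price":        # $1.16 per cubic meter
        return 1.16 * daily_use_m3 * 365
    if pricing == "block":              # tiered volumetric rate
        rate = 0.95 if daily_use_m3 < 0.65 else (1.14 if daily_use_m3 <= 1.5 else 1.37)
        return rate * daily_use_m3 * 365
    raise ValueError(f"unknown pricing structure: {pricing}")

def affordability_index(bill, income, adopted):
    """Annual water expenditures as a percentage of annual income.
    `adopted` holds one (cost, rebate, count) tuple per technology."""
    expenditures = bill + sum((c - r) * n for c, r, n in adopted)
    return 100.0 * expenditures / income

# A potential adopter keeps adopting while the index stays below threshold.
bill = annual_water_bill(daily_use_m3=1.2, pricing="block")
index = affordability_index(bill, income=55_000, adopted=[(250.0, 50.0, 2)])
print(round(index, 2), index < 1.5)     # threshold in the 1-3% range
```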
Technology cost was also incorporated into the model as a parameter affecting the Affordability Index. An agent is able to adopt six main types of water conservation technology: high-efficiency bathroom faucets, kitchen faucets, showerheads, toilets, washing machines (clothes), and dishwashers. Ref. [23] conducted a study on the cost and efficiency of these technologies, which is documented in Table A2 of Appendix A, along with the rebate information that the City of Miami Beach Utility offers for each of these technologies [62]. Each technology's water-saving capacity is considered a measure of water demand reduction, as the technology is new and more water-efficient. The rebates can affect the technology cost as well: if household agents expect to receive money back, the costs may be perceived as more affordable according to the established Affordability Index. This, in turn, impacts the model outputs.

Equations (1) and (2) make up the Adoption Utility and the Affordability Index, which define the adoption state of each household agent (i.e., non-adopter, potential adopter, and adopter). There is another phenomenon that can lead a household agent to transition from the non-adopter state to the potential adopter state, and that is social network influence from other agents. According to the theory of Peer Effect, household agents can have connections to each other; through these connections between non-adopter and adopter households, non-adopter agents may communicate with adopter agents and thus be influenced by them into making decisions regarding the adoption of a new technology [20,63]. The model considers and implements five structures of social networks, the descriptions of which are shown in Appendix A (Table A3). Once the model has established a network according to the given structural parameters, it proceeds to simulate the social influence between connected agents. Given a user-defined likelihood of influence, if a non-adopter agent is connected to an adopter agent, there is a chance that the non-adopter will transition into the potential adopter state. Further details about social network influence modeling can be found in [64].

Figure 2 depicts all the transition rules between the three adoption states of the household agents. As shown in Figure 2, each agent, which is initially in the non-adopter state, can become a potential adopter based on its adoption utility or on influence from social networks, and then immediately becomes an adopter if the conservation technology is affordable. Hence, it is possible for a non-adopter to become an adopter in one time step of the simulation. However, even within the same time step, a non-adopter agent must first become a potential adopter before it turns into an adopter, because a direct transition from the non-adopter state to the adopter state is not considered in the theory of innovation diffusion.
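Taken together, the transition rules amount to a small state machine evaluated for each agent at every time step. The sketch below is one plausible rendering of the Figure 2 logic, not the AnyLogic implementation itself; the agent fields and parameter values are illustrative.

```python
import random

# Sketch of one simulation time step for a household agent: utility or peer
# influence moves a non-adopter to potential adopter, and the affordability
# check (re-run in the same step) moves a potential adopter to adopter.
NON_ADOPTER, POTENTIAL, ADOPTER = "non-adopter", "potential adopter", "adopter"

def step(agent, neighbors, utility_threshold, afford_threshold, p_influence):
    if agent["state"] == NON_ADOPTER:
        peer_pressure = any(n["state"] == ADOPTER for n in neighbors)
        influenced = peer_pressure and random.random() < p_influence
        if agent["utility"] >= utility_threshold or influenced:
            agent["state"] = POTENTIAL       # never straight to adopter
    if agent["state"] == POTENTIAL:          # may fire in the same time step
        if agent["affordability_index"] < afford_threshold:
            agent["state"] = ADOPTER
    return agent["state"]

agent = {"state": NON_ADOPTER, "utility": 35_000, "affordability_index": 1.2}
print(step(agent, neighbors=[], utility_threshold=30_000,
           afford_threshold=1.5, p_influence=0.1))   # -> 'adopter'
```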
Income growth and household size growth were the last attribute input parameters for the model. All of these inputs generate a number of outputs, which demonstrate the basis of the type and timing of technology adoption by household agents. The simulation model outputs include the annual percentage distribution of all of the adoption states, the water demand reduction, and the different types of technology adopted over the predetermined simulation period of twenty years.
Model Initialization and Implementation

In addition to developing a theoretically-driven ABM of household water conservation technology adoption, empirical data were used as the values of the initial conditions and model parameters to calibrate the ABM. To this end, data from the City of Miami Beach were used in the implementation of the ABM. The City of Miami Beach has more than ten thousand residential water consumers. To reduce the computational complexity of the model, a sample of 280 households that statistically represents the demographic distribution of the population was randomly selected and divided into three zip codes to be modeled. All 280 agents start out as non-adopters and, depending on different influences, transition to potential adopter or adopter. The model then runs using Census data from these three zip codes, as well as individual household water use data provided by the Miami-Dade Utility. The Census data include information on median household income, education, average home ownership, and average household size. Since some of the data provided by the Census are only average values, a triangular distribution was used to assign each household a random value around the average (see Figure A1 in Appendix A). A uniform distribution was used to assign the household head age, the garden size, and the house size in square feet. Values of parameters such as head gender and house age were randomly assigned due to the unavailability of data. Moreover, data related to a household's water-using fixtures, such as the number of showerheads, toilets, and faucets, come from a custom distribution. While the model could have been built with hypothetical inputs not based on reality, utilizing real data helps to convey a better narrative about water technology adoption for future policy-making and regulation.
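The initialization just described can be sketched as follows. The zip-code averages and attribute ranges below are illustrative placeholders rather than the actual Miami Beach Census values; only the structure (triangular draws around averages, uniform draws over plausible ranges, random assignment where data are unavailable) mirrors the text.

```python
import random

# Sketch of initializing the 280-household population with triangular and
# uniform draws. Zip-code profiles and ranges are placeholders, not data.
def make_household(avg_income, avg_size):
    return {
        "state": "non-adopter",          # all agents start as non-adopters
        "income": random.triangular(0.5 * avg_income, 2.0 * avg_income,
                                    avg_income),
        "household_size": max(1, round(random.triangular(1, 2 * avg_size,
                                                          avg_size))),
        "head_age": random.uniform(20, 85),
        "house_size_sqft": random.uniform(800, 4000),
        "garden_size_sqft": random.uniform(0, 2000),
        "head_gender": random.choice(["female", "male"]),  # no data: random
    }

zip_profiles = [(52_000, 2.4), (61_000, 2.1), (48_000, 2.8)]  # placeholders
population = [make_household(*random.choice(zip_profiles)) for _ in range(280)]
print(len(population), population[0]["state"])
```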
Model Verification and Validation

ABMs are often criticized for relying on informal and subjective validation, or on no validation at all [65]. Validating ABMs developed for complex systems against historical data is difficult and often infeasible because of the stochastic nature of human-behavior models [48]. Ref. [52] argued that social-system models cannot be meaningfully tested for the appropriateness of their structure, as the interconnections of social processes are vague in the sense that competing theories exist for most phenomena. ABMs are therefore typically validated using internal verification of the features representing model quality [48,66]. The verification of the ABM developed in this study was conducted through a gradual, systematic, and iterative process. The internal validity of the model was ensured through the use of grounded theories for modeling the decision and behavioral processes of households. The theoretical and computational models were built rich in causal factors that can be examined to see what leads to particular outcomes. Each component of the model was checked for completeness, coherence, consistency, and correctness (4Cs) based on the performance of the model outputs. For instance, the model performance was verified by (i) taking the function of one component of the model and making sure it influences the outputs to the degree specified in the model; and (ii) running the simulation model with extreme values of each component and verifying the functionality of the model in that situation. Most errors discovered through verification had less to do with problems within the theories and more to do with coding mistakes; thus, most errors in the verification process were fixed relatively quickly and smoothly, after which the aforementioned four features (4Cs) of the model were ensured. As there are no aggregated independent data available regarding the adoption of such water technologies across various lifestyles [52], the external validity of the ABM was assessed by comparing the model outcomes with the findings of other studies in the area of water conservation technology adoption; this technique has also been applied by [67] for validating multi-agent models. As shown in Table 1, the results of the model reinforce what other studies have already noted. For example, the results of the model showed that the rate of adoption of water conservation technologies under various scenarios can lead to a 3-10% reduction in the overall water demand of the City of Miami Beach; this outcome is consistent with the findings of a study conducted by [68], which analyzed the impacts of water conservation incentives on water demand in Miami-Dade County through household surveys and reported a 6-14% reduction in water demand during the implementation of two 4-year water conservation incentive programs in the area. Similarly, regarding the effect of the social network structure, the model findings align with the observation that "Social network type is not significant in determining mean energy use change, but is when considering the time required the network to reach equilibrium" [40]. Regarding the effect of the household income level, the model found that income growth mostly influences a household's willingness to adopt water conservation technology, which is consistent with [30]: "We have previously found financial variables to be important supplements to attitude measures in technology adoption modeling".

Scenario Setting

After the model was verified and validated, it was used for simulation experimentation and scenario setting. Each of the three water price strategies was analyzed in the simulation model for different combinations of the model input parameters. The possible scenarios were established based on different combinations of the input parameters in the model, shown in Table 2.
Through the combination of various values of the input parameters, 230 scenarios were generated in total. The combinations of these scenarios reflect changes in water pricing structure, rebate status, income growth, household size growth, utility threshold, affordability threshold, and social network structure. Under each specific scenario, 100 Monte-Carlo experiment runs were conducted to determine the mean values of the output parameters (i.e., the number of adoptions and the resulting water savings). In addition, in order to compare the scenarios on an equal footing across the analysis, a base scenario was created as the reference point for the comparison. Table 2 also shows the values used for the parameters in the base scenario (see the last column). More details on which parameters were used, and on how they were varied in the experimentation process to provide a diverse and all-encompassing series of outputs, are presented in Appendix A (Table A4). The tested values and base-scenario settings include the following:

Income growth (%): −5; −4; −3; −2; −1; 0; 1; 2; 3; 4; 5 (base scenario: 0).
Household size growth (%): −5; −4; −3; −2; −1; 0; 1; 2; 3; 4; 5 (base scenario: 0).
Utility threshold: 10,000; 20,000; 30,000; 40,000; 50,000 (base scenario: 30,000).
Affordability threshold (%): 1; 1.5; 2; 2.5; 3 (base scenario: 1.5).
Social network structure: random (N = 1); distance-based (R = 100); ring lattice (N = 1); scale-free (M = 1); small-world (N = 1, P = 0.1) (base scenario: random, N = 1).
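Before turning to the results, the experimentation loop described above can be outlined in code. The sketch sweeps a subset of the Table 2 parameter levels and averages 100 Monte-Carlo runs per scenario; `run_simulation` is a hypothetical stand-in for the AnyLogic model, and the dummy value it returns exists only so the loop is runnable.

```python
import itertools
import random
import statistics

def run_simulation(pricing, rebate, income_growth, seed):
    # Hypothetical stand-in for one 20-year run of the agent-based model;
    # the real model would use the arguments, and the dummy value below
    # only keeps this sketch runnable.
    rng = random.Random(seed)
    return rng.uniform(5.0, 18.0)            # m3/day demand reduction (dummy)

levels = {
    "pricing": ["fixed_price", "fixed_charge", "block"],
    "rebate": [False, True],
    "income_growth": list(range(-5, 6)),     # percent, as in Table 2
}
mean_reduction = {}
for combo in itertools.product(*levels.values()):
    scenario = dict(zip(levels, combo))
    runs = [run_simulation(**scenario, seed=i) for i in range(100)]
    mean_reduction[combo] = statistics.mean(runs)

print(len(mean_reduction), "scenarios evaluated")   # 66 here; 230 in the study
```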
Results and Discussion

Using the developed agent-based model, scenario analyses of the simulated data were conducted in order to determine the effects of the different factors on the water conservation technology adoption of households. Due to the stochastic nature of the simulation model, the 100 experiments for each scenario led to varying outcomes, from which the mean values of the percentage of adopters, the number of adopted technologies, and the overall demand reduction were extracted and recorded. The results and the corresponding discussions were formulated using three different forms of analysis, as explained below.

Socioeconomic Scenario Analysis

Trend analysis across the various generated scenarios of income growth, water pricing strategy, rebate program, and utility threshold showed how much water households saved, how many households adopted, and which technologies were adopted under each scenario. Across these scenarios, certain trends regarding overall demand reduction due to the adoption of the technologies were discerned and are documented in Figure 3. The amount of residential water demand reduction due to the adoption of conservation technology was calculated based on the number and type of technologies adopted over the simulation period (i.e., 20 years). This study did not consider the behavioral aspects of water conservation; the calculated residential water saving potential is based only on the adoption of conservation technologies. If the water conservation behaviors of the users were also considered, the potential for residential water saving could be even more significant. Among the three water price strategies, the fixed charge strategy led to the greatest overall demand reduction. As shown in Figure 3, allocating rebates could increase this reduction by a further 24% (4 m³/day). The strategy of fixed charge with rebate resulted in water savings 8-12 m³/day greater than the strategy of fixed price without rebate across the various income growth rates, an increase of about 46-72% in the overall residential water demand reduction. In Figure 3, for all water price strategies and rebate statuses, as income increased, there was an exponential increase in the overall water demand reduction following the adoption of new and efficient technologies.

Although increased income led to more water savings derived from the adoption of conservation technologies, it might also lead to higher per capita water usage, because higher-income households have been shown to consume more water than lower-income households [29]. Hence, the relationship between water usage, the adoption of water conservation technologies, and income is complex. Therefore, the number of technologies adopted was also accounted for in this study, and this yielded interesting insights.
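The demand-reduction bookkeeping described above reduces to summing per-unit savings over the adopted technologies. In the sketch below, the per-unit daily savings are placeholders for the Table A2 water-saving capacities, which are not reproduced in the text.

```python
# Sketch of how the overall daily demand reduction follows from the number
# and type of technologies adopted. Per-unit savings are placeholders.
SAVINGS_M3_PER_DAY = {
    "toilet": 0.05, "washing_machine": 0.04, "dishwasher": 0.02,
    "showerhead": 0.03, "kitchen_faucet": 0.01, "bathroom_faucet": 0.01,
}

def demand_reduction(adoption_counts):
    """Total m3/day saved across the community for given adoption counts."""
    return sum(SAVINGS_M3_PER_DAY[t] * n for t, n in adoption_counts.items())

print(demand_reduction({"toilet": 120, "showerhead": 200, "dishwasher": 40}))
```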
Figure 4a shows an exponential trend in the total number of adoptions of expensive technologies (i.e., toilets, washing machines, and dishwashers) under the various water pricing structures and rebate programs. It was discovered that with rebate allocation, the total number of expensive technology adoptions increased by almost 50%, regardless of water price strategy or income growth. In Figure 4b, the adoption of inexpensive technologies (i.e., kitchen and bathroom faucets and showerheads) does not increase significantly (less than 10%) under any water price scheme when a rebate is included for affluent households (i.e., positive income growth rates); however, the increase is significant among households with negative income growth rates. In other words, the results showed that the effectiveness of rebate programs depends on two factors: (i) the type of technology (i.e., expensive or inexpensive) for which the rebate is allocated; and (ii) the affluence of the community in which the rebate program is implemented. Additionally, it can be observed that under the strategy of fixed charge with rebate allocation, the maximum number of inexpensive technologies was adopted, approximately independent of the income growth rate. What can be noted, however, is that across all of the other water price and rebate strategies, income growth leads to higher adoption of both expensive and inexpensive technologies.
The analysis also considered the sensitivity of the results to the utility threshold values. The utility threshold had a negative linear correlation with the adoption rate. Figure 5 shows the mean frequency of the adoption states (i.e., adopter, potential adopter, and non-adopter) under various utility threshold values in the base scenario. As the threshold increased, the percentage of adopters decreased, regardless of water price strategy or rebate status. The greater the threshold, the stronger the demographic and building characteristics have to be in order for a household to adopt; conversely, the lower the threshold, the lower the importance granted to these factors. For example, if demographic and building characteristics are anticipated to be unimportant in the adoption of water conservation technology for a specific community (i.e., a low utility threshold), the results show a potential adoption rate of as much as 67% under the base scenario.
Social Network Influence Examination

For all water pricing and rebate strategies, five structures of social networking were implemented and tested. Figure 6 demonstrates that among the social network structures, the highest percentage of households transitioned out of the non-adopter state through the scale-free network, followed by the distance-based and then the small-world networks. In the social networks with the random and ring lattice structures, the smallest percentage of households was influenced into adopting water conservation technology. The results also showed that the effect of the social network structure on the adoption of water conservation technology is independent of water price strategy and rebate status, although the adoption percentage fluctuates across the five social networks under each scenario of price strategy and rebate status.

Another analysis related to the effects of social network structures concerned the rate (speed) at which each structure reaches the adoption equilibrium state. The adoption equilibrium is a steady or stable state in which the adoption rate no longer changes [40]; from this point forward, there is no significant increase or decrease in the adoption rate. The faster a social network structure reaches the adoption equilibrium, the earlier technology diffusion happens [40] and, consequently, the earlier water is saved. In Figure 7, whenever a steady state was observed, it was identified as the time at which the adoption rate reaches equilibrium through the influence of social networks. As shown in Figure 7, among the social network structures, the distance-based network reached the equilibrium state most quickly, followed by the ring lattice and then the scale-free and small-world networks. The random network did not reach equilibrium over the twenty-year period. The results thus indicate that if the peer effect is activated through a distance-based network structure, it can speed up the diffusion of water conservation technology more than the other structures.
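For readers who wish to reproduce this comparison, the five structures map onto standard graph generators, as in the following sketch (assuming the networkx package is available). Parameter values mirror Table 2 (N = 1, R = 100, M = 1, P = 0.1); the random planar coordinates for the distance-based network are an added assumption.

```python
import random
import networkx as nx   # assumes the networkx package is installed

# Sketch of the five social network structures over the 280 households.
n = 280
pos = {i: (random.uniform(0, 1000), random.uniform(0, 1000)) for i in range(n)}

networks = {
    "random": nx.gnm_random_graph(n, n),                     # ~1 link per agent
    "ring_lattice": nx.watts_strogatz_graph(n, k=2, p=0.0),  # no rewiring
    "small_world": nx.watts_strogatz_graph(n, k=2, p=0.1),   # P = 0.1
    "scale_free": nx.barabasi_albert_graph(n, m=1),          # M = 1
    "distance_based": nx.random_geometric_graph(n, radius=100, pos=pos),
}
for name, g in networks.items():
    print(f"{name}: {g.number_of_edges()} edges")
```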
Under the base scenario, various numbers of connections per agent (N = 0-10) were tested for the random social network structure to evaluate the impact of an increasing connectivity level on the adoption rate across the agents' network. As shown in Figure 8, increasing the number of connections between the households improved their adoption rate significantly. However, increasing the connectivity level of agents beyond 5 connections in the random network had no additional impact on the adoption rate. This level of connectivity (i.e., N = 5) can be characterized as a tipping point, at which the effect of the connectivity level reaches a stable state.
The results of this study demonstrated that activating the peer effect through social networks in a community can accelerate the diffusion of innovation regardless of the structure of the social networks. Educating the public is one of the ways to achieve a greater rate of conservation diffusion [32]. The idea of social marketing can be used to design effective information campaigns that encourage water consumers to adopt water conservation technology. Informational programs through various means of social media can increase residents' knowledge about the benefits of adopting water conservation technologies. For instance, promoting water conservation technology adoption through mass media has the potential to reach a very large number of residential consumers [70]. Based on the results of the current study, future studies can further examine the effects of social media on users' choices of water conservation adoption.
Scenario Landscape Analysis

The results of the ABM simulation model were processed to generate the scenario landscape and to identify pathways towards desired outcomes. Classification and Regression Tree (CART) analysis was used to analyze the simulation data and explain the impact of the different factors affecting water conservation technology adoption. CART is a nonparametric data-mining technique that can select, from among a large number of variables, the most important variables in determining the desirable outcomes based on their interactions [71]. CART operates by recursively partitioning the data until the ending points, or terminal nodes, are reached using preset criteria. It therefore begins by analyzing all explanatory variables and determining which binary division of a single explanatory variable best reduces the deviance in the response variable (final output) to produce accurate and homogeneous subsets [72]. The CART analysis has two components: the predictor importance analysis and the regression tree. The predictor importance analysis distinguishes which variables carry the greatest significance for the response variable. The regression tree is a tree-structured representation in which a regression model is fitted to the data in each partition. The importance predictors of each parameter engender a tree diagram that illustrates all possible pathways (combinations of different values of the variables) toward or against the final response variable [73].

The predictor importance analysis of CART was conducted to determine which parameters (mechanisms) had the greatest effect on the model outputs. The results of this analysis are shown in Figure 9, which shows the importance of each independent parameter (e.g., income growth, water price structure, etc.)
in determining the different model outcomes: (a) Expensive Technology Adoption (ETA); (b) Inexpensive Technology Adoption; and (c) Overall Daily Water Demand Reduction (ODWDR). As shown in Figure 9 (panel c), income growth, affordability threshold, water price structure, and rebate program were the top four most important parameters (in descending order) affecting total technology adoption (which results in ODWDR). The structure of social networks, the utility threshold, and household size growth had less impact on water demand reduction. This order of importance was largely mirrored in the adoption of expensive technologies, for which income growth and affordability threshold, both economic parameters, were the most influential factors (panel a). In the adoption of inexpensive technologies, by contrast, water price was the most important parameter, followed by income growth and utility threshold (panel b); the adoption of inexpensive technologies was more dependent on socio-demographic and house characteristics (reflected in the utility threshold) than that of expensive technologies.

The simulated data were also utilized for meta-modeling using the regression tree of the CART analysis. The scenario landscape was created based on the best fit of the CART model (Figure 10). In Figure 10, each path includes a set of branches representing specific values of the most important parameters in determining the model outcome, based on the predictor importance analysis. Each path leads to a terminal node (shown with a bold border) representing the final outcome, which is the overall daily water demand reduction (ODWDR). In essence, the scenario landscape of adoption patterns (Figure 10) demonstrates how the results, in terms of residential water demand reduction derived from conservation technology adoptions, vary under different scenarios (combinations) of the underlying technology adoption mechanisms. As shown in Figure 10, residential water demand can potentially be reduced by as much as 5.8-18.3 m³/day (see the red and green nodes) through the adoption of water conservation technology under different scenarios, which translates to about a 3-10% reduction in the overall water demand of households in the service area.
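The predictor-importance step can be approximated with an off-the-shelf regression tree, as sketched below. The file name and column names are assumptions standing in for however the 230 scenario outcomes are exported; this is not the paper's actual CART software or settings.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Sketch of the CART predictor-importance analysis on the scenario results.
# "scenarios.csv" is a hypothetical export of the 230 scenario outcomes;
# the column names mirror the input parameters and the ODWDR output.
df = pd.read_csv("scenarios.csv")
features = ["income_growth", "household_size_growth", "utility_threshold",
            "affordability_threshold", "water_price_structure", "rebate",
            "social_network"]
X = pd.get_dummies(df[features])      # one-hot encode categorical inputs
y = df["odwdr"]                       # overall daily water demand reduction

tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)
ranked = sorted(zip(X.columns, tree.feature_importances_),
                key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```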
Concluding Remarks

As water scarcity becomes more critical, demand-side management methods for conservation are increasingly necessary. The agent-based model and scenario analysis revealed concrete methods for encouraging household adoption of water conservation technology. Firstly, income growth most influences potential adopter households' willingness to adopt, followed closely by water pricing strategy. With no regard to other factors, households adopted enough water conservation technologies to reduce daily water demand by more than 7 m³ (almost 8% of the city's daily residential water demand) under fixed charge water pricing. This reduction was not met under the volumetric charging strategies. While fixed charging strategies may lead people to pay less than their water use would indicate, they can make the adoption of water conservation technology affordable. This is especially true for households that are aware of water shortages, making them potential adopters.
Based on the assessment of different community profiles in the CART analysis, volumetric water charging strategies are best implemented in more affluent communities where income growth is more likely. Conversely, a fixed charge regime is best suited to less affluent communities, where income growth is less common. Rebate allocation programs increased the adoption rate, especially for expensive technologies, whose adoption increased by 50%. The findings suggest that municipalities and water agencies can use rebate allocation programs either with volumetric water pricing strategies or across less affluent communities; this pathway leads to a desired amount of water demand reduction. The adoption of inexpensive technology (i.e., kitchen and bathroom faucets and showerheads) did not increase appreciably when a rebate was included, especially in households with high income growth rates. In fact, the adoption of inexpensive technologies is significantly more dependent on socio-demographic and household characteristics than that of expensive ones. This indicates that encouraging households to adopt inexpensive technology requires outreach programs more than rebate policy.

Another important finding was related to the effects of social networks. The adoption percentage fluctuated across all five social networking schemes under each scenario of water price and rebate status. However, the distance-based network, among all network types, reached equilibrium in the shortest period. This means that the peer effect through neighboring social connections can speed up technology adoption more than the other social network structures.

In terms of water pricing, for households who are already potential adopters, implementing a fixed charge strategy makes the adoption of water conservation technology more affordable. Offering rebates for technologies along with volumetric water pricing will lead communities to adopt enough technology to reach the desired water demand reduction levels. More broadly, if agencies' goals are to increase the rate of technology adoption, they must consider which pricing and rebate policies will be the most successful in their particular community. The planning and governance of water prices have a greater effect on household adoption of water conservation technology than any demographic, household, or social networking factor. The results of this study are important for improving demand-side conservation management strategies. It should be noted that the modeling approach was utilized in this study to explore possible patterns of water conservation technology adoption and examine the underlying mechanisms rather than to make predictions. While the research fostered a unique way to evaluate water conservation technology patterns, past studies (see Table 1), despite using a variety of different methods, reached findings similar to the model's; this, in turn, served as a point of external validation of the model's results. These results provide a clear course of action for the future development of household water conservation technology adoption programs and provide further evidence that demand-side management strategies can help address urban water conservation problems.
Limitations and Future Studies

While the findings of this study will help municipalities and water agencies to strategically encourage the household adoption of water conservation technology, they do pose some limitations. Not every demographic characteristic of an individual could be accounted for, such as religious identity, race, sexual orientation, or even the number of children in the household. That is not to say that all of these demographics would have had an impact on the utility value and the household's adoption state, but their inclusion could have fostered more inclusive results. These characteristics were not considered due to a lack of information from the Census or from water research. In the future, these identities will hopefully become more prominent in mainstream Census and demographic research, allowing for their inclusion in such models. Another important note about this model is that the only dynamic parameters considered were house age and social network influence (peer effect). The other input parameters in the model (such as the threshold values) are static, which inhibits the ability to capture feedback mechanisms. Through a feedback mechanism, households can reflect upon their decisions and change accordingly [63]. For example, the water pricing stays the same over the simulation period (20 years) and does not change based on the rate of adoption. While it is possible for government officials to change the water pricing regime after a certain amount of time based on the adoption rate (a feedback mechanism), this model did not account for such changes. No feedback mechanism was incorporated in the model presented in this study because the inclusion of feedback mechanisms in the diffusion of innovations requires new methods of parametrization, calibration, and validation [74]. Hence, it is of great importance to consider feedback mechanisms in the water conservation technology adoption of households in future studies. Future studies can also evaluate additional mechanisms and phenomena affecting water conservation technology adoption. For example, the impact of implementing water outage policies in a community on the conservation technology adoption behavior of households could be added to the model developed in this study. Despite these limitations, this study presented valuable findings towards a better understanding of the underlying mechanisms of water conservation technology adoption by residential consumers.
Figure 6.The social network structure influence on the distribution of the adoption states over different scenarios. Figure 5 . Figure 5.The distribution of adoption states over different utility threshold values. Water 2018 , 24 Figure 5 . Figure 5.The distribution of adoption states over different utility threshold values. Figure 6 . Figure 6.The social network structure influence on the distribution of the adoption states over different scenarios. Figure 6 . Figure 6.The social network structure influence on the distribution of the adoption states over different scenarios. Figure 7 . Figure 7.The comparison of the time to reach the equilibrium state across the social network structures (a) Random; (b) Distance-based; (c) Ring lattice; (d) Scale-free; (e) Small-world. Figure 8 . Figure 8.The effect of the connectivity level in social networks on the adoption rate. Figure 7 . Figure 7.The comparison of the time to reach the equilibrium state across the social network structures (a) Random; (b) Distance-based; (c) Ring lattice; (d) Scale-free; (e) Small-world. Figure 7 . Figure 7.The comparison of the time to reach the equilibrium state across the social network structures (a) Random; (b) Distance-based; (c) Ring lattice; (d) Scale-free; (e) Small-world. Figure 8 . Figure 8.The effect of the connectivity level in social networks on the adoption rate. Figure 8 . Figure 8.The effect of the connectivity level in social networks on the adoption rate. Figure 9 . Figure 9.The predictor importance analysis for the model outcomes. Figure 9 . 24 Figure 10 , Figure 9.The predictor importance analysis for the model outcomes. Figure 10 . Figure 10.The scenario landscape of adoption patterns using Classification and Regression Tree (CART) analysis. Figure 10 . Figure 10.The scenario landscape of adoption patterns using Classification and Regression Tree (CART) analysis. If the distance between two agents is less than the given maximum connection range (the maximum distance in meters between agents for there to be a connection), then both agents are connected.are similar to the ring lattice, while also including some long-distance relationships.The neighbor link probability is the chance that two agents connected to the same neighbor may also connect to each other.multiple connections (considered as hubs), while others have very few connections.Number of hubs (M)M = 1-10 10 Figure A1 .Figure A1 . Figure A1.The average demographic and water consumption data of the zip codes used in the model. Table 1 . The external validation of the model findings. Table 2 . The variation of the input parameter values for the scenario setting. Table A4 . The variation of parameters for the model experimentation process.
Central Neurocytoma: A Review of Clinical Management and Histopathologic Features Central neurocytoma (CN) is a rare, benign brain tumor often located in the lateral ventricles. CN may cause obstructive hydrocephalus and manifest as signs of increased intracranial pressure. The goal of treatment for CN is a gross total resection (GTR), which often yields excellent prognosis with a very high rate of tumor control and survival. Adjuvant radiosurgery and radiotherapy may be considered to improve tumor control when GTR cannot be achieved. Chemotherapy is also not considered a primary treatment, but has been used as a salvage therapy. The radiological features of CN are indistinguishable from those of other brain tumors; therefore, many histological markers, such as synaptophysin, can be very useful for diagnosing CNs. Furthermore, the MIB-1 Labeling Index seems to be correlated with the prognosis of CN. We also discuss oncogenes associated with these elusive tumors. Further studies may improve our ability to accurately diagnose CNs and to design the optimal treatment regimens for patients with CNs. INTRODUCTION Central neurocytoma (CN) was first described in the 1980s by Hassoun et al. [1], who studied two patients with intraventricular tumors using electron microscopy. CN is a benign tumor of the central nervous system that is classified as a grade II tumor by the World Health Organization (WHO) [2,3]. A combination of treatments, such as surgery with adjuvant radiation, can be considered for CN despite its good prognosis [2][3][4]. Because of the tumor's rarity and its elusive nature, only a limited number of studies, case reports, and reviews have been published on CN. CNs are found in the anterior half of the lateral ventricle, although some have been reported in the third and fourth ventricles [29][30][31]. The tumor is also usually attached to the septum pellucidum near the foramen of Monro [32,33]. The cellular origin of CN is unclear; however, various authors have suggested CN may develop from neuronal cells, neuronal progenitor cells, neuronal stem cells, or multipotent precursor cells [11,21,29,[34][35][36]. CLINICAL MANIFESTATIONS CN may increase intracranial pressure by obstructing the interventricular foramen, which can lead to hydrocephalus [32,54]. Patients may also experience nausea, vomiting, headache, seizures, decreased consciousness, weakness, and memory or vision problems [4,7,30,38,[54][55][56][57]. In rare cases, intraventricular hemorrhage may also occur [58]. Patients with EVN present with similar symptoms, in addition to weakness and numbness in the limbs [20,42,59,60]. These symptoms are typically present for approximately 3-6 months, although the duration of symptoms can vary from a few days to many years [29,30,55,61]. The duration seems to be mostly related to tumor location, and does not seem to be correlated with the aggressiveness of the tumor [4]. SURGERY Surgical management with a gross-total resection (GTR) is currently the gold standard treatment for CNs; it often carries an excellent prognosis and minimizes the chances of CN recurrence [65]. GTR is achieved in nearly 30-50% of all CN patients. In an analysis of 310 patients with CN who underwent a GTR, there was a 99% five-year survival rate [14,54,65,66]. In comparison, individuals who had surgery with only subtotal resection (STR) had an 86% five-year survival rate.
STR of CN increases the rate of recurrence and decreases the rate of survival [65]. A recent multi-center study found that among 71 patients with CN, those with STR had a 3.8-fold higher risk of recurrence and adverse outcomes compared to patients with GTR [12]. For patients with STR, adjuvant radiotherapy was administered in this study. Fractionated radiotherapy Radiotherapy and radiosurgery are non-invasive adjuvant treatments, but the toxicities from radiation are still being weighed against the benefits of tumor control [65,67]. Because CNs usually have an excellent prognosis when GTR is achieved, radiation is not always indicated [25,54,55]. Radiotherapy and radiosurgery have been adopted as adjuvant treatments when GTR cannot be achieved, the patient is inoperable, or the tumor is aggressive [12,61]. A recent report suggests that fractionated radiotherapy (FRT) after STR yielded a statistically significant higher tumor control rate and improved survival in adults [11,68]. A higher 5-year progression-free survival has also been shown for patients who received adjuvant FRT after STR (67%) than for patients without FRT (53%) [69]. Stereotactic radiosurgery While FRT delivers multiple fractions of radiation at lower dosage, stereotactic radiosurgery (SRS) administers one higher dose (9 to 25 Gy) of radiation in 1-5 fractions. The first literature on the use of SRS for CN was published by Schild et al. [54]. SRS has been suggested to be potentially favorable to FRT. Although the difference was not statistically significant, Patel et al. [11] reported that adjuvant SRS for patients with STR demonstrated a 100% tumor control rate compared to an 87% tumor control rate for patients with adjuvant FRT [54]. Garcia et al. [70] also reported a higher tumor control rate of 93% with SRS versus 88% with FRT. The relative risk (RR) of recurrence for SRS versus FRT was 0.57 (95% CI: 0.21-1.57; log-rank p=0.85), and the RR for mortality was 0.23 (95% CI: 0.05-1.05; log-rank p=0.22), although neither was statistically significant. Fewer complications were noted for patients with SRS, although distant tumor recurrence was slightly higher in patients who received SRS than in those who received FRT. SRS is suggested to be at least as effective as FRT in achieving tumor control. Prognosis Schild et al. [54] reported a 5-year survival rate of 88% for patients who received FRT or SRS after surgical resection, while the 5-year survival rate for patients without adjuvant radiation was only 71%. Imber et al. [69] also found significantly improved survival rates when adjuvant radiotherapy was administered following STR. Patients with STR and FRT had a 67% 5-year survival rate, while patients with STR only had a 53% survival rate. Additionally, Kim et al. [71] reported that patients with STR had a lower recurrence rate after adjuvant therapy of FRT or SRS (1 of 12 patients) when compared to patients without adjuvant radiation (3 of 12 patients), although the difference was statistically insignificant. Overall, adjuvant therapy following incomplete resection of CNs appears to result in better tumor control. Toxicity Complications can arise from radiation therapy. A single institutional study found that 4 out of 7 patients who received FRT exhibited complications, such as white matter degradation or radiation necrosis, although FRT was effective in achieving local tumor control [71]. Chen et al.
[72] found that among 60 patients treated with radiation therapy, 28 patients exhibited grade I neurotoxicity, which resulted in short-term memory impairment and motor deficit. Seven patients displayed grade II neurotoxicity, while three patients had grade III neurotoxicity [72]. The associated symptoms included cognitive disturbance, hemianopsia, seizure, and involuntary movement [72]. CHEMOTHERAPY Although chemotherapy is not a primary treatment modality for CN, it has been used as an adjuvant or salvage therapy for recurrent CNs or inoperable patients [11,31,73]. There are no studies using chemotherapy as a primary form of treatment for CN, nor any comparing the efficacy of radiotherapy and chemotherapy as adjuvant treatments [4,11,[73][74][75]. Only a few case reports noted partial tumor regression following chemotherapy, and only one study reported a child with a complete response using a combination of topotecan, carboplatin, and phosphamide in three cycles [4,[75][76][77][78]. HISTOPATHOLOGICAL ANALYSIS AND MOLECULAR PATHOGENESIS CN has been relatively difficult to diagnose because of its histopathological similarity to other brain tumors, such as oligodendrogliomas and ependymomas [4,11,64]. Light microscopy alone is ineffective in identifying CNs [9,79,80]. Generally, immunohistochemistry is performed for the diagnosis of CN [9,29,79]. The histology of CN can vary throughout a single specimen and is typically benign (Fig. 5) [10]. The tumor cells create a "honeycomb pattern," and appear small and round with scant cytoplasm and stippled chromatin [4,10,24,31,32]. Since these characteristics are similar to the appearance of oligodendrogliomas, there is a potential for misdiagnosis [4,10,11,24,31,32]. Likewise, the pathological features of ependymomas are similar to the perivascular rosette or straight-line cell arrangements that are also seen in CN [10]. Therefore, multiple immunohistochemical markers are helpful in differentiating CNs from other tumors. Although less commonly used, electron microscopy can be another helpful tool in diagnosing patients with CNs by looking for parallel arrays of microtubules with dense-core neurosecretory granules and clear vesicles [7,21,24,36]. Immunohistochemical markers Synaptophysin is one of the major molecular markers for CN [10]. Positive staining for synaptophysin, a transmembrane glycoprotein present in presynaptic vesicles of neurons, is a strong indicator for neuronal cells and their neoplasms (Fig. 6). Synaptophysin staining is usually found in the fibrillary and perivascular areas of CN [9,81,82]. In addition to positivity for synaptophysin, negativity for neuron specific enolase (NSE) and vimentin has been reported to suggest CN over oligodendroglioma and clear cell ependymoma (Table 1) [4]. NSE is a glycolytic enzyme located in the cytoplasm of neurons [83]. Although it is present in CN, it lacks neuronal specificity and has been reported to be present in non-neuronal neoplasms [9,24]. Vimentin is an intermediate filament protein found in glial cells, and is usually present in oligodendrogliomas and clear cell ependymomas, but absent in CN [84,85]. Epithelial membrane antigen (EMA) is another protein that differentiates CN from oligodendroglioma and ependymoma. EMA, which is normally expressed in epithelial cells, is present in ependymal cells in the central nervous system, as well as in ependymomas [86].
Furthermore, EMA positivity has also been linked with other glial tumors such as glioblastoma, astrocytoma, and oligodendroglioma [86,87]. Thus, EMA positivity can suggest ependymoma and oligodendroglioma over CN [4]. Neuronal nuclei (NeuN) is present in the nuclei and perinuclear cytoplasm of post-mitotic neurons in the central nervous system. Positive staining for NeuN suggests the neuronal nature of neoplasms and is considered to be a reliable marker for clear cell neoplasms of the central nervous system, which include CN, oligodendroglioma, and clear cell ependymoma [4,88,89]. It has also been reported that positive staining for NeuN correlates with a lower proliferation index [4,90]. Glial fibrillary acidic protein (GFAP), which is detected in glial cell tumors, is usually absent in CN. It is the most abundant intermediate filament protein in astrocytes, and is usually present in astrocytes that infiltrate or surround CN [4,21,[51][52][53]91,92]. Cases of CN with GFAP positivity suggest glial differentiation of bipotential (astrocytic and neuronal) precursor cells, and also correlate with a more malignant disease course [93]. Neurofilament (NF), which exists as intermediate filaments in neurons, is largely absent in CN. This suggests that the full differentiation of CN cells to developed neurons is rare [12,34]. Vasiljevic et al. [12] reported that NF positivity is a key diagnostic difference between CN and pineal parenchymal tumor. Oligodendrocyte transcription factor 2 (Olig2), a transcription factor that regulates oligodendroglial differentiation, is also generally absent in CN. Olig2 can be used as a diagnostic marker for oligodendroglioma, and positivity for Olig2 suggests oligodendroglioma over CN [94,95]. This also argues for the rarity of CN cells undergoing glial differentiation. Chromogranin A (ChrA) is a neuroendocrine protein located on secretory vesicles of neurons. ChrA is generally absent in CN, but cases of positivity have been reported [96][97][98]. Peng et al. [96] suggested that the positivity of chromogranin A may be due to the presence of ganglion cells in CN. Overall, ChrA is not a reliable marker for the diagnosis of CNs; however, ChrA positivity in some CNs may provide an insight into the cellular and developmental origins of CN [98]. The MIB-1 Labeling Index (LI) has also been reported to be an indicator of tumor relapse. It has been reported that CNs with MIB-1 LI >2% had a 63% chance of recurrence, whereas CNs with MIB-1 LI <2% had only a 22% chance of recurrence over a 150-month period [4,99]. Furthermore, Chen et al. [93] found that out of the nine patients presented in the study, the four that experienced tumor recurrence or death from continuous tumor growth or surgical complications had MIB-1 LI >2%, suggesting that an MIB-1 LI >2% may indicate a more aggressive disease course. MIB-1 LI may also be used to determine the tumor grade. Sharma et al. [18] reported that the only proliferation marker that correlated with CN atypia is the MIB-1 LI. Söylemezoglu et al. [52] found that a MIB-1 LI >2% correlated with microvascular proliferation and suggested that a CN with MIB-1 LI >2% should be termed 'atypical' [4]. Atypical CNs are also known to spread through the cerebrospinal fluid and metastasize in the ventricles or the spinal cord [100][101][102][103]. Genetic alterations Many types of genetic mutations have been associated with CNs.
N-myc proto-oncogene (N-Myc), an oncogene associated with the development of other cancers such as neuroblastoma and medulloblastoma, is overexpressed in CN [34,[104][105][106]. The overexpression of N-Myc in neuroblastoma seems to indicate a poorer prognosis [34,107]. N-Myc is required for neural proliferation, but inhibits complete neural differentiation of neuronal progenitor cells [34,108]. The levels of N-Myc are inversely related to the tumor suppressor gene encoding Myc box-dependent-interacting protein (BIN-1), which has been found to be significantly underexpressed in neurocytomas, as well as other tumors [34,105]. This suggests that CN may contain a mutation somewhere in the pathway that includes both N-Myc and BIN-1, and that the respective overexpression and underexpression of N-Myc and BIN-1 may play a large role in CN tumorigenesis [34,105]. The phosphatase and tensin homolog (PTEN) gene is a tumor suppressor gene also found to be overexpressed in CN [105]. Musatov et al. [109] showed that PTEN overexpression inhibits neural differentiation in PC12 cells via phosphoinositide 3-kinase and mitogen-activated protein kinase pathways [34]. Thus, PTEN and N-Myc overexpression together could potentially account for the lack of full neuronal differentiation frequently seen in CN. In addition, Sim et al. [35] found that insulin-like growth factor 2 was overexpressed in CN cells compared to cells in the ventricular zone, and may play a key role in the proliferation of neurocytoma, similar to its role in the proliferation of glioblastoma multiforme [34,110]. Similarly, platelet-derived growth factor D (PDGF-D) and neuregulin 2 (NRG-2) were found to be overexpressed in CN. PDGF-D overexpression has been found to be involved with the maturation of certain tumors [34,111], whereas NRG-2 overexpression has been linked to the proliferation of neuroblasts, as well as aggressiveness in breast carcinoma [34,112,113]. Overall, PDGF-D and NRG-2 may also offer an explanation for the tumorigenesis of neuronal progenitor cells [34]. CONCLUSION CN is a benign tumor of the CNS that has an excellent prognosis. Surgery with gross total resection is the preferred treatment, correlated with the best long-term survival rates and local tumor control. Adjuvant radiotherapy may be considered for residual CN following STR, large CN size, or CNs near inoperable regions. Radiotherapy or chemotherapy as the primary treatment for CNs has not been thoroughly examined. Many histological markers are available for the diagnosis of CN, although some markers are also sensitive to other tumors. The MIB-1 LI is currently the most accurate tool to determine prognosis, tumor relapse, and tumor grade. Further molecular and genetic studies may offer insights into other immunohistochemical methods for improved diagnostic accuracy. Conflicts of Interest The authors have no financial conflicts of interest.
BCI2000Web and WebFM: Browser-Based Tools for Brain Computer Interfaces and Functional Brain Mapping BCI2000 has been a popular platform for the development of real-time brain-computer interfaces (BCIs). Since BCI2000's initial release, web browsers have evolved considerably, enabling rapid development of internet-enabled applications and interactive visualizations. Linking the amplifier abstraction and signal processing native to BCI2000 with the host of technologies and ease of development afforded by modern web browsers could enable a new generation of browser-based BCIs and visualizations. We developed a server and filter module called BCI2000Web providing an HTTP connection capable of escalation into an RFC6455 WebSocket, which enables direct communication between a browser and a BCI2000 distribution in real time, facilitating a number of novel applications. We also present a JavaScript module, bci2k.js, that allows web developers to create paradigms and visualizations using this interface in an easy-to-use and intuitive manner. To illustrate the utility of BCI2000Web, we demonstrate a browser-based implementation of a real-time electrocorticographic (ECoG) functional mapping suite called WebFM. We also explore how the unique characteristics of our browser-based framework make BCI2000Web an attractive tool for future BCI applications. BCI2000Web leverages the advances of BCI2000 to provide real-time browser-based interactions with human neurophysiological recordings, allowing for web-based BCIs and other applications, including real-time functional brain mapping. Both BCI2000 and WebFM are provided under open source licenses. Enabling a powerful BCI suite to communicate with today's most technologically progressive software empowers a new cohort of developers to engage with BCI technology, and could serve as a platform for internet-enabled BCIs. INTRODUCTION A brain-computer interface (BCI) is a system that translates brain activity into control signals for a computer. Modern incarnations of BCIs rely on rapid and low-latency brain signal acquisition, preprocessing, feature extraction, classification and/or regression, and frequently, postprocessing of the resultant control signal (Wolpaw et al., 2002). In the case of closed-loop BCI, some form of visual or auditory feedback is given to the user to inform them of their control performance, typically requiring a low round-trip latency from signal acquisition to output. BCI development typically requires performant implementations of data acquisition and signal processing algorithms, high-precision synchronization of external device telemetry, and, often, control of external software, requiring inter-process control or device input emulation (Wolpaw et al., 2000; Vaughan et al., 2003). These technical requirements make the development of software for this purpose extremely challenging; however, a number of existing software platforms bootstrap this development endeavor. BCI2000 has been a standardized research platform for BCI development for the last 15 years; it has been used by over 400 labs and has been cited in numerous publications (Schalk et al., 2004). OpenViBE is another platform that has been developed to support real-time BCI research, offering a graphical programming language for signal processing and visualization (Renard et al., 2010).
Additionally, a low-level communication protocol supporting signal acquisition and synchronization, called LabStreamingLayer, allows for TCP network streaming and synchronization of multi-modal data streams (Kothe, 2016) and could form the foundation of a BCI platform. Widespread adoption and advancement of web browser technology makes it an attractive target for a BCI platform. Recent advancements in browser technology and standards have enabled direct access to low-level system resources, such as graphics hardware and accelerometry/system sensors, through application programming interfaces (APIs) that expose this hardware and software functionality via easy-to-use yet powerful and performant JavaScript packages. Network-enabled services also implement publicly available APIs that allow developers to call upon remote computational resources, such as Amazon Web Services (AWS), or to query information from vast databases of indexed knowledge, such as Wikipedia and Google Image Search. Moreover, many libraries supporting visual presentation of user interfaces and data visualizations have been developed. For example, d3.js (Bostock, 2011) has been used to power interactive data visualizations with impressive performance and an expressive yet functional API. Many of the technologies readily available in the modern web browser would be useful for the development of a contemporary BCI: for example, the ability to tag data in real time with a speech transcription, via the WebSpeech API (Shires and Wennborg, 2012), or the ability to present stimuli in 3D using a virtual reality headset, via WebVR (Vukicevic et al., 2016) and three.js (Cabello et al., 2010). Visualization of the resulting data using d3.js (Bostock, 2011), or even sonification using the WebAudio API (Adenot et al., 2018), is a fruitful endeavor for understanding real-time BCI output. Existing BCI software suites generally provide some amount of interprocess communication, typically exposed via the user datagram protocol (UDP) or shared memory. However, browsers do not typically allow web apps to access UDP natively due to security concerns; further, existing communication schemes like BCI2000's AppConnector interface do not scale well to high data volumes, like those required to transmit human electrocorticography (ECoG) signals. BCI2000's existing interprocess communication tooling was designed with the transmission of control signals in mind, communicating signals in ASCII for simplicity instead of binary at the expense of inflating the data rate roughly 8-fold, an approach that sufficed until the need arose to transmit raw and processed ECoG data streams. Modern browsers implement a protocol built on top of TCP called WebSocket (Fette, 2011) that allows an HTTP client to escalate an existing connection to a general-purpose real-time bidirectional binary/ASCII communication interface. WebSockets are perfectly situated to facilitate the transfer of raw brain signals, extracted neural features, and processed control signals from a BCI software suite to a web app on a browser-enabled device, as well as the transfer of auxiliary sensor information from the web app back to the native software suite, all in real time. In this article, we present an implementation of the aforementioned interface as a plugin to BCI2000, which we call BCI2000Web.
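To make the escalation concrete, here is a minimal sketch of a browser-side consumer of such a binary WebSocket stream. The endpoint path and the frame layout are illustrative assumptions, not the actual BCI2000Web protocol; the point is that binary frames arrive as ArrayBuffers and can be read directly, without the data-rate inflation of an ASCII encoding.

```javascript
// Minimal sketch: consuming binary frames from a WebSocket in the browser.
// The endpoint ("/ws") and frame layout are assumptions for illustration.
const ws = new WebSocket('ws://localhost:8080/ws');
ws.binaryType = 'arraybuffer'; // receive binary frames as ArrayBuffers, not Blobs

ws.onopen = () => {
  console.log('HTTP connection escalated to a WebSocket');
};

ws.onmessage = (event) => {
  // A DataView reads typed fields directly out of the binary payload.
  const view = new DataView(event.data);
  const firstSample = view.getFloat32(0, true); // little-endian float (assumed layout)
  console.log('First value in this frame:', firstSample);
};
```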
ECoG Functional Mapping: A Testbed for Web Technologies In this report, we additionally demonstrate the utility of this new BCI2000Web interface with an example application that shares many technical requirements with a BCI: a functional mapping tool capable of visualizing cortical activation derived from ECoG recordings in real time using local processing at the bedside or in the operating room, and of synchronizing the final results to a centrally hosted repository. Functional mapping of eloquent cortex is a target application of great scientific and clinical impact. About a third of patients with epilepsy have seizures that are resistant to medication therapy. In many of these patients, seizures arise from a focal brain area, and if this area can be safely removed, seizure control can be achieved. When non-invasive testing cannot reliably identify the seizure onset zone as distinct from brain regions needed for normal neurological function, clinicians may choose to surgically implant electrodes in the depths of the brain (stereo-EEG) or on its surface (electrocorticography, or ECoG). These intracranial electrodes may be implanted for a week or more in order to reliably localize the onset of seizures. These electrodes also facilitate the identification of eloquent cortex, i.e., regions that are implicated in speech and language, as well as perception, movement, and other important brain functions. A technique called electrocortical stimulation mapping (ESM) is typically used to map these regions. During ESM, pulse-trains of electrical current are passed between pairs of the implanted electrodes to temporarily disable a small patch of cortex while the patient performs a simple language or motor task. A behavioral change elicited by this temporary lesion indicates that the stimulated area of the brain is necessary for task completion (Ojemann et al., 1989). This testing procedure is time-consuming and uncomfortable for the patient, sometimes eliciting after-discharges (Lesser et al., 1984; Blume et al., 2004); these after-discharges can also evolve into seizures, which can be of questionable utility for diagnosing ictal cortex (Hamberger, 2007). The limitations of ESM have motivated a complementary mapping technique based upon estimates of task-related changes in the power spectra, especially in high frequencies, of passive recordings of ECoG or stereo-EEG during behavioral tasks. This mapping technique, hereafter referred to as ECoG functional mapping, produces maps of task-related cortical activation, which may include cortex that is recruited by a task but not critical to task performance. In contrast, ESM uses a temporary electrophysiological disruption of cortical function to simulate the acute behavioral effects of tissue resection, and is presumed to be specific to areas critical to task performance. Nevertheless, a number of clinical studies have demonstrated good correspondence between ECoG functional mapping and ESM (Brunner et al., 2009; Wang et al., 2016). Moreover, several studies have shown that ECoG functional mapping can be used to predict post-resection neurological impairments, and in some cases it has predicted impairments that were not predicted by ESM (Wang et al., 2016). For these reasons, some epilepsy surgery centers have begun to use ECoG functional mapping as a complement to ESM, sometimes providing a preliminary map of cortical function that guides the use of ESM.
However, most epilepsy centers have not yet adopted ECoG functional mapping because of the lack of technical resources, especially software that can be used with their clinical EEG monitoring systems. Several ECoG functional mapping packages have been developed in recent years. For example, SIGFRIED acquires a large baseline distribution of neural activity in a calibration block, then rapidly accumulates estimates of cortical activation by averaging neural activity evoked by behavior in blocks of time (Brunner et al., 2009). A commercial product called cortiQ (Prueckl et al., 2013) is capable of performing this block-based mapping paradigm, which makes it possible for minimally trained clinical professionals to perform passive ECoG mapping. Both SIGFRIED and cortiQ are built using the BCI2000 framework and take advantage of the extensive optimizations and development legacy of the platform. A more nuanced mapping technique, termed spatial-temporal functional mapping (STFM), provides time-resolved, trial-locked results during a specific task by collecting pooled baseline activity from a pre-defined 1 s period before the onset of a trial, then performing a statistical test on each time/channel bin in a window of interest relative to trial onset (Wang et al., 2016). Though the results of STFM are more complicated and require more expertise to interpret than the block-based mapping used by SIGFRIED or cortiQ, they provide a more detailed map of the spatial-temporal evolution of task-related activation, which can help clarify the role of different areas activated by a given task; this is of clear utility in cognitive neuroscience research and of potential clinical utility in planning surgical resections. ECoG functional mapping relies on high-performance signal processing and sophisticated real-time visualization, making it a suitable application example for BCI2000 and BCI2000Web. We saw an opportunity to build an easy-to-deploy-and-use tool for both researchers and clinicians that delivers the time-resolved, trial-locked results of STFM at the bedside in a web application, using BCI2000Web as the underlying communication technology to drive a browser-based interactive visualization. As a demonstration of the potential of the BCI2000Web plugin, in this report we also present WebFM, a software suite built on top of Node.js and BCI2000Web for performing real-time functional mapping in a web browser.
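As a sketch of the per-bin statistic that STFM-style mapping computes, the function below compares the values observed in one time-channel bin across trials against that channel's pooled baseline distribution. Welch's two-sample t statistic is assumed here for illustration; the published method may differ in detail.

```javascript
// Welch's two-sample t statistic between one time-channel bin (across trials)
// and the channel's pooled baseline values. Illustrative, not the exact
// statistic used by any particular published implementation.
function welchT(binValues, baselineValues) {
  const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = (xs, m) =>
    xs.reduce((a, b) => a + (b - m) ** 2, 0) / (xs.length - 1);
  const m1 = mean(binValues);
  const m2 = mean(baselineValues);
  const v1 = variance(binValues, m1);
  const v2 = variance(baselineValues, m2);
  return (m1 - m2) / Math.sqrt(v1 / binValues.length + v2 / baselineValues.length);
}

// e.g., high gamma power in one bin vs. that channel's pooled baseline:
console.log(welchT([5.1, 4.8, 5.5, 5.0], [1.0, 1.2, 0.9, 1.1]));
```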
Signals propagate from the source module to the processing module to the application module, with interconnections facilitated by a network-based protocol (in older versions of BCI2000) or a shared memory interface (in more recent iterations). Each of the modules consists of a series of signal "filters, " which accept an incoming signal (as a channels-by-elements array) and output a derived signal, potentially of different dimensionality. A built-in Operator scripting language allows for setup and configuration of filters within an experimental session to occur automatically, and a Telnet interface exists in the Operator module, capable of accepting textual commands in the Operator scripting language from outside BCI2000. BCI2000Web To address remote control of BCI2000 and data transmission between BCI2000 and browsers, we developed a Node.js module called BCI2000Web that accepts Operator scripting language commands via WebSocket and transmits them to the Operator executable via Telnet, returning system output back to the client. It is primarily used to control data acquisition and signal processing parameters remotely via a connected WebSocket-enabled client, typically a browser. BCI2000Web has been developed as a service that runs within the Node.js runtime. Upon starting, it opens a Telnet connection to the Operator module and also functions as a basic HTTP server. While BCI2000's Telnet implementation only supports one client sending one set of instructions that are executed serially, BCI2000Web provides an interface that allows multiple clients to make requests to send commands to the Operator module; these commands are queued and executed sequentially, with responses sent back to the appropriate client FIGURE 1 | A full BCI2000 stack including a Signal Source, Signal Processing, Application, and Operator module communicates with BCI2000Web, implemented as a Node.js module, via Telnet. Browser-based remote control software and visualization tools interact with BCI2000Web, and receive raw and processed neural signals directly from the BCI2000 system modules, via WebSockets, while the application module presents stimuli to the patient, in this case, the word stimulus "HEALTH" for a word reading paradigm. asynchronously. BCI2000Web is capable of interfacing with an unmodified BCI2000 distribution and automating system configuration without any further software or modifications to BCI2000 modules. In order to transmit the raw and processed signal from the BCI2000 filter pipeline to the browser, however, source modifications within the system modules are required. The raw and processed signal is never sent directly to the Operator module, so the signal can only be transmitted to a browser by compiling secondary WebSocket servers into the existing modules at specific locations within the filter chain. This modification has been realized in our implementation as a generic "WSIOFilter" (WebSocket Input/Output GenericFilter) that can be instantiated multiple times into the BCI2000 filter chain. Each WSIOFilter defines a parameter specifying the address and port its WebSocket server is hosted on. Once an incoming connection is escalated to a WebSocket, this filter sends packets to the client in the BCI2000 binary format, first describing the dimensionality of the signal and the system state vector via a "SignalProperties" and "StateList" packet, then a "GenericSignal" and "StateVector" packet for the current system signal and state vector once per sample block. 
These filters can be instantiated several times in the signal processing chain for any particular signal processing module. This filter has also been included as a source module extension that enables transmission of the raw signal in all signal source modules, and an application module extension that enables transmission of the application module input-identical to the signal processing output-in all application modules. In practice, the amount of data being sent/received by instantiations of the WSIOfilter is directly related to CPU usage on the sending and receiving machines, while the latency of system throughput from recording to browser is more a function of the network setup and the number of network interface hops the data has to traverse. A WebSocket-enabled client is unlikely to natively understand the format of the incoming/outgoing messages on any of the aforementioned connections: our implementation of BCI2000Web adds some decorators to Operator scripting commands and Operator outputs to handle multiple clients, and the WSIOFilter output is implemented in the BCI2000 binary protocol. A JavaScript library, bci2k.js-available as a package on the Node package manager (NPM) registry-contains functions that manage the BCI2000 WebSocket connections and translate the binary BCI2000 format into readily usable data structures within a JavaScript context. Non-browser WebSocket-enabled clients will need to implement this functionality in order to communicate using these interfaces. WebFM: Browser-Based ECoG Functional Mapping Subdural ECoG recordings are the target modality for WebFM, the aforementioned functional mapping application; this modality has different signal processing requirements than scalp EEG. The signal processing module used in the system in the Johns Hopkins Epilepsy Monitoring Unit is a modification of the default SpectralSignalProcessing.exe module. This signal processing module consists of a chain of filters, the first of which is a spatial filter capable of applying a common average reference, a frequently used spatial filter for ECoG recordings (Liu et al., 2015). This is followed by a series of IIR Butterworth filters, including a fourth order low pass at 110 Hz, followed by a second order high pass at 70 Hz and a 4th order notch filter at 60 Hz. After the signal is downsampled to 500 Hz from the native sampling rate, it is passed through a spectral estimator filter, which generates an autoregressive model on a window of filtered data and uses the model coefficients to form an estimate of the signal's power spectrum, using the Burg method (Burg, 1968). A WSIOFilter is instantiated at this point in the filter chain, capable of streaming this estimated spectral content of the neural signals in real-time. A system diagram and description of the system topology is detailed in Figure 1. A language or motor task is parameterized as a BCI2000 .prm file and a collection of audio-visual stimuli in a git repository hosted on GitHub, available as packages that remote-control BCI2000 using the BCI2000Web server. Any number of these tasks can be checked out into the BCI2000Web distribution, and the server will automatically present them as startup options within the built-in BCI2000Web browser interface, shown and described in Figure 2. These paradigms typically specify a parameterization for StimulusPresentation.exe, a BCI2000 application module capable of presenting audio-visual stimuli to the patient with high-precision timing and sequence control. 
A browser is used to communicate with the bedside data-collection and stimulus-presentation machine, and to set up this system parameterization. (Because of this setup, it is notable that, when high-precision control isn't needed for stimulus presentation, the tasks presented to patients may themselves be interactive web applications, utilizing bci2k.js and BCI2000Web to inject behavioral markers into the data recorded by BCI2000.) A monitor and speaker connected to the bedside computer are set up in front of the patient, and a microphone is connected to the auxiliary analog inputs provided by the acquisition system, to be digitized synchronously with the electrophysiology. The WebFM/BCI2000Web system currently supports more than 20 possible experimental paradigms, including a task battery used for clinical assessment for functional localization. These paradigms are currently versioned in GitHub repositories with group permissions and access control managed by the authors. A setup script is provided with BCI2000Web that accepts a GitHub login and clones/updates all available task repositories into the proper location. Patients and Electrode Localization All aspects of this study were carried out in accordance with the recommendations of the Johns Hopkins Institutional Review Board with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Johns Hopkins Institutional Review Board. Before any functional mapping sessions occur with a patient, a post-operative computed tomography scan containing electrode locations is co-registered to a pre-operative magnetic resonance imaging scan of sufficient resolution (typically with voxel dimensions of 1 mm or less) to render the patient's cortical surface anatomy in high detail, using Freesurfer (Fischl, 2012) or BioImage Suite (Papademetris et al., 2006). These electrode locations are overlaid on a 2D rendering of the cortical surface. An image file depicting this cortical anatomy and electrode layout, as well as a comma-separated value (.csv) file containing the normalized image coordinates of each electrode, is uploaded to the WebFM server via controls within the WebFM browser interface. This layout doesn't typically change during a patient's EMU stay, and it is referenced and retrieved by using a subject identification code, effectively de-identifying the reconstruction for research purposes. Software During an ECoG functional mapping session, a browser running on the visualization device contacts the WebFM server and queries the bedside machine for the subject's identification code and what task is currently running. The WebFM server then serves the corresponding cortical reconstruction image and sensor location file, in addition to a bolus of JavaScript code that is capable of opening WebSockets to the BCI2000Web server and WSIOFilters running on the bedside machine. The code also contains the statistics packages and graphical libraries necessary for acquiring, analyzing, and visualizing the data. The browser then opens these data streaming WebSockets and performs the mapping without further contacting the WebFM server. After each trial of the task, the visualization is updated, and once a full task run has been collected, the resulting map can be saved back to the WebFM server for indexing and post-hoc inspection, available on the WebFM landing page, detailed in Figure 3.
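Because the electrode geometry and brain image are served over plain HTTP, a client can retrieve them with a simple fetch. The sketch below uses the public API paths documented in the Data Availability Statement; the JSON structure of the geometry file is an assumption for illustration.

```javascript
// Fetching a subject's sensor geometry from the WebFM HTTP API. The path
// follows the documented pattern /api/geometry/<subject>; the shape of the
// returned JSON (electrode name -> normalized 2D coordinates) is assumed.
fetch('https://www.webfm.io/api/geometry/PY17N009')
  .then((response) => response.json())
  .then((geometry) => {
    console.log('Loaded', Object.keys(geometry).length, 'electrode positions');
  })
  .catch((err) => console.error('Geometry request failed:', err));
```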
The statistics and visualization for WebFM are based on the techniques and methods described in Wang et al. (2016). The baseline window for the tasks is defined as a configurable period from 1,000 to 200 ms before the trial onset, and a baseline distribution is formed per channel from the pooled high gamma power values during this period. A two-way t-test is performed between the distribution for each time-channel bin and that channel's baseline distribution. The resulting p-values are corrected for multiple comparisons using the Benjamini-Hochberg (BH) procedure, controlling the false discovery rate at 0.05 (Benjamini and Hochberg, 1995). This correction is used to threshold the results displayed in the WebFM raster and spatial plots: time-channel bins that did not survive the BH correction are hidden from view. Any individual time point in this raster can be dynamically selected and visualized by "scrubbing" the mouse cursor over the raster display; this yields circles drawn on a two-dimensional representation of the electrode montage, highlighting which cortical locations were active during that particular time point across trials. An options dialog allows users to change baseline periods, modify visualization timing parameters and amplitudes, and make comparisons across task conditions and contrasts. The visualization is shown and further described in Figure 4. The visualization APIs exposed by WebFM can be used to implement a number of other visualizations as well. One mode of WebFM provides a visualization of raw high gamma activation in real time, as in Lachaux et al. (2007); other modifications have also been used to visualize the propagation of interictal spiking and seizure propagation across cortex. FIGURE 2 | A screenshot of the BCI2000 remote control interface. The paradigm index is hosted by BCI2000Web over HTTP. This page is populated by the experimental paradigms present on the host machine (center), with buttons to start sub-tasks and specific blocks (right). A pane in the top left reads out the current BCI2000 system state, in addition to a system reset button. In the bottom left, a link to the system replay menu allows for recorded BCI2000 .dat file playback for system testing and offline mapping. FIGURE 3 | The landing page for WebFM. A pane in the top left shows system state and houses buttons that start trial-based functional mapping paradigms and a "live" mode that visualizes neural activity on the brain in real time, as visualized in prior studies (Lachaux et al., 2007). A list of subject identifiers on the bottom left pane enables users to pull up previous/current subjects; a list of saved maps for the selected subject appears in the "Records" pane on the bottom right. The "+" in the top left of the "Subjects" pane allows operators to add new subjects to the database, and the "Metadata" pane at the top right allows operators to upload brain reconstruction images and normalized electrode locations for displaying functional mapping results. The brain images used for mapping are often overlaid with information about seizures and/or ESM results, so that functional activation can be easily visually compared with these data; the image shown in the center includes colored circles depicting the hypothesized spread of ictal activity during the subject's seizures. Deployment As of the time of writing, the WebFM system has been deployed at two sites: the Johns Hopkins Hospital and the University of Pittsburgh Medical Center.
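The Benjamini-Hochberg step-up procedure used for this thresholding is a standard algorithm and is compact enough to sketch in full: sort the p-values, find the largest rank k with p(k) <= (k/m)q, and reject the k smallest. This is a generic implementation, not the exact WebFM code.

```javascript
// Benjamini-Hochberg step-up procedure: returns a boolean mask (aligned with
// the input) marking which p-values survive FDR control at level q.
function benjaminiHochberg(pValues, q = 0.05) {
  const m = pValues.length;
  // Sort p-values ascending, remembering each one's original index.
  const order = pValues.map((p, i) => [p, i]).sort((a, b) => a[0] - b[0]);
  // Find the largest rank k (1-indexed) with p_(k) <= (k/m) * q.
  let cutoff = 0;
  for (let k = m; k >= 1; k--) {
    if (order[k - 1][0] <= (k / m) * q) { cutoff = k; break; }
  }
  // Reject exactly the `cutoff` smallest p-values.
  const mask = new Array(m).fill(false);
  for (let k = 0; k < cutoff; k++) mask[order[k][1]] = true;
  return mask;
}

// e.g., threshold a flattened array of time-channel p-values at FDR 0.05:
console.log(benjaminiHochberg([0.001, 0.02, 0.04, 0.3, 0.9]));
```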
Across these sites, WebFM has been used with three acquisition devices: the NeuroPort system (Blackrock Microsystems, Salt Lake City, UT), a Grapevine system (Ripple, Salt Lake City, UT), and the EEG1200 system (Nihon Kohden, Tomioka, Japan). Between these sites and amplifiers, WebFM has been used to create over 200 functional maps across 33 subjects. The majority of these subjects (19) were hospital inpatients undergoing epilepsy monitoring prior to resective surgery. Clinical staff in the Johns Hopkins Epilepsy Monitoring Unit have a link to the WebFM portal on their desktop machines and frequently use the passive ECoG mapping results when discussing surgical plans. The remaining 14 subjects were temporarily implanted with a 64-channel high-density ECoG strip during lead implantation for deep brain stimulation; for these subjects, WebFM was used to map sensorimotor cortex in the operating room. WebFM has even been used to generate maps of activity recorded at one site by researchers at another site in real time, utilizing virtual private networks. FIGURE 4 | An example of WebFM results for an image naming task in a subject with high-density (5-mm spacing) temporal-parietal-occipital electrode coverage. A horizon raster (Heer et al., 2009) to the left shows a time (x-axis) by channels (y-axis) plot of trial-averaged task-modulated high gamma power, thresholded for statistical significance with BH correction for an FDR of <0.05. Warm colors represent a statistically significant increase in task-modulated high gamma power, while cool colors indicate a statistically significant decrease. The left black vertical bar within the raster indicates the trial start (t = 0 s), where StimulusCode transitioned from zero to a non-zero value, indicating that a stimulus was being displayed. The right black vertical bar is a temporal cursor that interactively tracks the user's mouse gestures; the current time it indexes is shown in the top left corner, 0.538 s after stimulus onset. Buttons next to the selected time manipulate visualization properties. The current temporal slice is visualized on the brain image (right) as circles, with size and color indicating the magnitude of the z-score, with the same coloration as in the horizon chart. A button in the top right maximizes the display to occupy the full screen-space of the device; a gear icon next to the fullscreen icon presents a configuration dialog box containing options for saving results, changing visualization parameters, configuring real-time signal or BCI2000 state trial-triggering, and visualizing the raw signal, amongst much more functionality. A drop-down menu next to the gear icon turns on/off multiple visualization layers, enabling/disabling display of ESM, functional mapping, connectivity metrics, evoked responses, etc. A status message at the bottom right indicates WebFM has connected to BCI2000Web via bci2k.js, and a trial counter, in this image represented with an "[n]", increments as trials are delivered to and visualized by WebFM. DISCUSSION BCI2000Web and WebFM take advantage of several recent technological developments. First and foremost, these packages capitalize on advancements in the modern web browser, which is quickly becoming a platform capable of general-purpose computing. With a focus on frontend user interaction, many packages have been written in JavaScript that support the rapid implementation of interactive applications and visualizations.
WebFM in particular makes use of d3.js (Bostock, 2011) to provide a high-quality interactive visualization of trial-averaged high gamma modulation directly on the brain. The key to taking advantage of these web-based technologies is the implementation of BCI2000Web, which utilizes the WebSocket API to transmit binary-formatted brain data directly to the browser over TCP/HTTP, and which allows direct communication to and from BCI2000. While the experimental paradigms presented in conjunction with WebFM utilized the native BCI2000 stimulus presentation module to interact with the subject, the general-purpose access to Operator scripting over WebSockets provided by BCI2000Web easily lends itself to a different system architecture, in which a browser application itself is responsible for interacting with the subject and providing experimental markers sent via WebSocket; this topology is depicted in Figure 5. Several paradigm packages for BCI2000Web leveraging this architecture have been authored to date. Some make use of the WebSpeech API (Shires and Wennborg, 2012) to do real-time speech tagging and segmentation for tasks involving freely generated speech; another uses the WebMIDI and WebAudio APIs (Wyse and Subramanian, 2013) to register subject input on musical peripheral devices and perform high-performance audio synthesis in response. Public JavaScript APIs allow for rich BCI interactions, and experimental paradigms can pull upon web resources such as Google Image Search for providing varied and tailored stimuli at run-time. Extending this idea, it is easy to envision a system architecture in which users' neural data is sent to a browser application that communicates with a server backend in real time, allowing cloud-based services to apply sophisticated machine learning techniques that wouldn't be feasible otherwise on the client-side. Even further, one could develop a browser-based application that transmits multiple users' neural data to each other's clients, facilitating brain-based communication. FIGURE 5 | A system diagram depicting an experiment implemented in browser JavaScript running on an independent mobile device. The mobile device is running an experimenter-implemented web page in fullscreen mode which communicates directly with BCI2000Web for event logging, as well as with the Signal Processing module for receiving extracted neural control signals. A JavaScript package, bci2k.js, manages WebSocket connections that handle transmission of Operator scripting language commands and decodes neural control signals from a binary format. The mobile device is running a word reading paradigm (with the stimulus "HEALTH" currently presented) that has defined asynchronous experimental states, including markers for automated vocal transcription onsets using the WebSpeech API. A query for system state is also relayed by the BCI2000Web server. The benefit of such an architecture is that the patient interface is separated from the bedside clinical acquisition machine and can be left with the patient without concern of the patient manipulating the clinical datastream. Cross-device compatibility is another advantage to using the browser as a visualization and stimulus presentation platform. Any browser-enabled device (smartphone, tablet, PC, or even game console) can be used to present stimuli or visualize output.
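As a sketch of how a browser paradigm can inject behavioral markers like the speech-tagging described above, the snippet below pairs the WebSpeech API with an Operator scripting command sent over the command WebSocket. The endpoint, the raw-string framing (BCI2000Web decorates commands in practice), and the state name 'SpeechOnset' are all assumptions for illustration.

```javascript
// Hypothetical sketch: tag freely generated speech into the BCI2000 recording.
// In practice bci2k.js wraps this connection and command framing.
const control = new WebSocket('ws://bedside-machine:8080/'); // assumed endpoint

function markEvent(stateName, value) {
  // "SET STATE <name> <value>" is Operator scripting; the raw send is a
  // simplification of BCI2000Web's decorated command protocol, and
  // 'SpeechOnset' must be a state defined by the paradigm.
  control.send(`SET STATE ${stateName} ${value}`);
}

// WebSpeech fires speech start/end events that we forward as state changes.
const recognition = new webkitSpeechRecognition();
recognition.onspeechstart = () => markEvent('SpeechOnset', 1);
recognition.onspeechend = () => markEvent('SpeechOnset', 0);
recognition.start();
```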
Because of this "write-once, run-anywhere" development process, WebFM can be used by clinicians to view mapping results in real time on their smartphones from outside the patient's room while ECoG functional mapping is being run by technicians. Drawbacks and Caveats The rationale behind dividing processing (native binaries) from visualization (browser-interpreted JavaScript) stems from current limitations inherent to browsers. Browser-hosted JavaScript is rapidly advancing as a next-generation efficient computational platform with the advent of WebAssembly and ASM.js (Herman et al., 2014), but at the time of writing it is still too computationally demanding to perform real-time feature extraction and signal processing in the browser. Furthermore, browser access to low-level computer hardware and connected USB devices is only in the early development stages. Given these limitations, BCI2000Web was designed to take advantage of the device driver access and computational efficiency of the C++ code base that powers BCI2000 for acquisition device abstraction and signal processing/feature extraction. This architecture frees frontend developers from dealing with complicated signal processing code in JavaScript, and instead enables them to focus on user experience and design. In the future, a full-stack BCI2000 analog could be implemented directly within the browser, and BCI2000Web is a glimpse of what that software could empower for web developers with access to neural features. A significant amount of the development effort for BCI2000 has been spent on implementing high-performance signal processing and stimulus presentation software. Delivering audiovisual stimuli to subjects with a consistent yet minimal latency is a non-trivial task that BCI2000 has accomplished by interfacing with low-level graphics drivers in a nuanced way. Operating system version, bit-width (32 vs. 64), driver versions, compiler optimizations, and varying hardware capabilities collude to make this stimulus presentation problem a fragmented and moving target, one which BCI2000 has historically hit with surprising accuracy, achieving visual presentation latency on the order of one to two frames at a 60 Hz monitor refresh rate and audio latencies on par with modern audio production software (Wilson et al., 2010). The BCI2000 core team encourages developers to implement custom signal processing and stimulus presentation paradigms within this BCI2000 environment using documented C++ code templates in order to benefit from these optimizations. That said, so long as tasks are designed properly and ground-truth stimulus and response signals are collected (e.g., screen-mounted photodiodes and patient-facing microphones connected directly to auxiliary inputs on the amplifier), it is still possible to collect data of high scientific quality using the browser as the primary stimulus presentation software even if its stimulus display and communication latency are in question. We benchmarked the visual timing performance of a system with and without BCI2000Web modifications using the procedure in Wilson et al. (2010) on a platform comprising 64-bit Windows 7 with BCI2000 r5688, Google Chrome 67.0.3396.99, and a 256-channel 1,000 Hz recording from a Blackrock NeuroPort running with a 20 ms sample-block size; this is a standard configuration for a moderate-to-high channel-count ECoG recording running on an up-to-date clinical machine as of the time of writing.
We benchmarked the visual timing performance of a system with and without BCI2000Web modifications using the procedure in Wilson et al. (2010) on a platform comprising Windows 7 64-bit with BCI2000 r5688, Google Chrome 67.0.3396.99, and a 256-channel 1,000 Hz recording from a Blackrock NeuroPort running with a 20 ms sample-block size: a standard configuration for a moderate-to-high channel-count ECoG recording running on an up-to-date clinical machine as of the time of writing. An unmodified BCI2000 distribution on this system exhibits a visual latency (t3v, as expressed in Wilson et al., 2010) of 52 ms with a standard deviation of 8.0 ms. With BCI2000Web sending neural signals to a browser via WebSocket on the same acquisition machine, a mean visual latency of 60 ms with a standard deviation of 9.4 ms was observed. Using the hospital wireless network to send neural signals via WebSocket to a tablet PC running Windows 10 and the same version of Chrome results in a visual latency of 62 ms with a standard deviation of 13.4 ms. These latency metrics indicate a minimal impact on timing performance when using BCI2000Web. In many real-time BCI implementations, spectral feature extraction occurs in windows of 128-256 ms with a slide of 16-32 ms, and single-trial visual timing differences fall well within one windowing period. BCIs reliant upon time-domain features, in particular those that perform trial averaging of evoked response potentials, will be more sensitive to these latency differences, and it is critically important to run timing benchmarks for specific hardware/software/network configurations in these circumstances. It should be noted that these performance metrics are configuration-specific and are likely to vary significantly across use cases; BCI2000Web comes bundled with an A/V timing paradigm that can be used to collect timing-test data, but analysis of these latencies and interpretation of what constitutes sufficient performance are application-specific and are left to the end user.

CONCLUSIONS

The development of a communication protocol that connects one of the most widely adopted BCI research and development suites with the power of modern browser technologies is expected to accelerate the pace of development for BCI technologies. Newer software developers, primarily taught using these modern software development paradigms, can now develop new BCI applications and neural signal visualizations while leveraging the legacy and performance of native BCI2000 modules. We have developed and presented a web-based ECoG functional brain mapping tool using this technology, and we have successfully deployed it at two sites with a cohort of 33 patients over two years. BCI2000Web and WebFM together utilize the relative strengths of a highly optimized C++ code base in BCI2000 and the high-level visualization libraries within modern browsers to demonstrate a clinically useful and modern functional mapping tool. We have also used BCI2000Web for ongoing, albeit unpublished, BCI research projects, and we describe herein the advantages and potential uses of BCI2000Web in future BCI applications. This software is documented and released under permissive free and open source software licenses, and is put forward by the authors for use in the research and development of BCIs and in multi-site studies on the clinical efficacy of ECoG functional mapping.

DATA AVAILABILITY STATEMENT

A standalone distribution of BCI2000Web is available on GitHub (github.com/cronelab/bci2000web). This distribution comes packaged with pre-compiled BCI2000 binaries that contain WSIOFilter taps for data access. The bci2k.js package, which translates BCI2000 binary packets from signal taps into usable data structures and handles the Operator scripting language protocol, can be installed with NPM (npm install bci2k); its codebase is available on GitHub (github.com/cronelab/bci2k.js). WebFM can also be found on GitHub (github.com/cronelab/webfm).
All of this software is available under free and open source licenses. The data used in the live demo at www.webfm.io is available via the WebFM API: the subject's brain image, base-64 encoded, is located at www.webfm.io/api/brain/PY17N009; the subject's sensor geometry is located, in JSON format, at www.webfm.io/api/geometry/PY17N009; the high gamma activation data for the presented task (syllable reading) is located at www.webfm.io/api/data/PY17N009/SyllableReading.

AUTHOR CONTRIBUTIONS

BCI2000Web was developed by GM with assistance from MC. Testing and validation of BCI2000Web was performed by CC. WebFM was developed by MC with assistance and maintenance by CC and supervision by NC. Deployment and testing of WebFM in the Johns Hopkins Epilepsy Monitoring Unit was undertaken by GM, MC, and NC. This report was prepared by GM and MC, with input from NC.

FUNDING

Work on this article and the software presented herein has been supported by the National Institutes of Health (R01 NS088606, R01 NS091139).
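As a closing illustration of the WebFM API endpoints listed in the Data Availability Statement, the demo data can be retrieved with standard browser fetch calls. The URLs below come from the statement itself; the response content types and the image MIME type are assumptions, since the payload schemas are not documented here.

```javascript
// Sketch of pulling the public WebFM demo data (payload formats assumed).
const subject = 'PY17N009';
const base = 'https://www.webfm.io/api';

async function loadDemo() {
  // Sensor geometry is served as JSON.
  const geometry = await (await fetch(`${base}/geometry/${subject}`)).json();

  // The brain image is base-64 encoded; it can be used directly as an
  // image source via a data URL (PNG MIME type assumed).
  const brainB64 = await (await fetch(`${base}/brain/${subject}`)).text();
  const img = new Image();
  img.src = `data:image/png;base64,${brainB64}`;

  // High gamma activation data for the syllable reading task (JSON assumed).
  const data = await (await fetch(`${base}/data/${subject}/SyllableReading`)).json();
  return { geometry, img, data };
}

loadDemo().then(({ geometry }) => console.log(geometry));
```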
2019-02-13T14:03:55.948Z
2019-02-13T00:00:00.000
{ "year": 2019, "sha1": "3bdf2417eef9690170adbc5cd849d3136dc474fc", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2018.01030/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3bdf2417eef9690170adbc5cd849d3136dc474fc", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
216061198
pes2o/s2orc
v3-fos-license
COVID-19: A New Virus, but a Familiar Receptor and Cytokine Release Syndrome

Summary

Zhou et al. (Nature) and Hoffmann et al. (Cell) identify ACE2 as a SARS-CoV-2 receptor, and the latter show its entry mechanism depends on the cellular serine protease TMPRSS2. These results may explain proinflammatory cytokine release via the associated angiotensin II pathway and suggest a possible therapeutic target via the IL-6-STAT3 axis.

In the past two decades, severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV) were transmitted from animals to humans, causing the severe respiratory diseases SARS and MERS in endemic areas. In December 2019, another coronavirus with the ability for human-to-human transmission was discovered in patients with infectious respiratory disease in Wuhan, Hubei province, China. The disease, now termed coronavirus disease 2019 (COVID-19), has spread rapidly all over the world, resulting in a pandemic. COVID-19 is induced by the pathogenic SARS-coronavirus 2 (SARS-CoV-2) and is associated with 2,165,500 cases and 145,705 deaths as of April 17, 2020 (COVID-19 Map, Johns Hopkins University and Medicine). The major phenotype of COVID-19 is severe acute respiratory distress syndrome (ARDS) (Lu et al., 2020; Zhou et al., 2020). The genome sequence of SARS-CoV-2 is similar to, but distinct from, those of the two other coronaviruses, as it has about 80% sequence identity with SARS-CoV and about 50% with MERS-CoV. Interestingly, SARS-CoV-2 is about 90% identical at the whole-genome level with two bat coronaviruses, bat-SL-CoVZC45 and bat-SL-CoVZXC21, collected in eastern China. Protein sequence analysis showed that SARS-CoV-2 has seven conserved non-structural domains, just like SARS-CoV, suggesting that the two coronaviruses are related. Furthermore, SARS-CoV-2 has a similar receptor-binding domain structure to that of SARS-CoV, despite amino acid variation at some key residues. Thus, it is possible that SARS-CoV-2 uses the same cell entry receptor, angiotensin-converting enzyme II (ACE2), as SARS-CoV (Kuba et al., 2005; Li et al., 2003). Considering the high mortality rate of COVID-19 (6.7% worldwide), the development of effective therapeutics is an urgent issue and requires the identification of quality targets. In this regard, two papers have identified ACE2 as the cell entry receptor for SARS-CoV-2 (Hoffmann et al., 2020; Zhou et al., 2020). In addition, Hoffmann and colleagues showed that receptor-mediated virus entry was dependent on a serine protease, transmembrane serine protease 2 (TMPRSS2). Of note, clinically approved inhibitors of TMPRSS2 can prevent cell entry by SARS-CoV-2. Because alveolar type 2 cells highly express both ACE2 and TMPRSS2 in the steady state, these cells might be the primary entry cells for SARS-CoV-2 in the lung. Zhou et al.
(2020) performed virus infectivity studies using HeLa cells that did or did not express ACE2 proteins originating from humans, Chinese horseshoe bats, civets, pigs, and mice. They showed that SARS-CoV-2 can enter cells expressing all of these ACE2 proteins, except mouse ACE2, but could not enter cells lacking ACE2, suggesting that the virus utilizes ACE2 as its entry receptor. Additionally, SARS-CoV-2 did not enter cells lacking ACE2 that expressed either dipeptidyl peptidase 4 (DPP4) or aminopeptidase N (APN), the entry receptors for MERS-CoV and HCoV-229E, respectively. It is known that cell entry by the coronavirus requires the binding of the S1 region of the virus spike (S) protein to the cell surface receptor, followed by the fusion of the viral and cellular membranes mediated by the S2 subunit of the S protein. This process requires S protein priming by host cell proteases, which entails S protein cleavage at the boundary of the S1 and S2 proteins or within the S2 subunit. Analyzing which cellular factors are used by SARS-CoV-2 for cell entry should provide insights into viral transmission and reveal effective therapeutic targets. Hoffmann et al. (2020) performed a detailed analysis of the cell entry mechanism of SARS-CoV-2. They demonstrated that SARS-CoV-2 uses ACE2 for entry, and TMPRSS2 and the endosomal cysteine proteases cathepsin B and L (CatB/L) for S protein priming. They also showed that the SARS-CoV-2 S protein (SARS-2-S) is efficiently cleaved at the S1/S2 site in 293T cells. Furthermore, utilizing replication-defective vesicular stomatitis virus (VSV) particles bearing either SARS-2-S or the SARS-CoV S protein (SARS-S), they showed that both S proteins mediated entry into an identical spectrum of cell lines from various animal species, which is consistent with the amino acid residues essential for ACE2 binding by SARS-S being conserved in SARS-2-S (Lu et al., 2020; Zhou et al., 2020). In fact, both SARS-2-S and SARS-S entered ACE2-negative cells such as BHK-21 cells when the expression of human ACE2 or bat ACE2 was forced, but not with the expression of human DPP4 or human APN. Furthermore, an antibody against human ACE2 blocked both the SARS-S- and SARS-2-S-driven entry into human cell lines. These results indicated that SARS-2-S, like SARS-S, uses ACE2 for its cellular entry. Hoffmann et al. then investigated the protease dependence of SARS-CoV-2 entry. SARS-2-S-driven entry into 293T cells (TMPRSS2-negative) expressing ACE2 was inhibited by ammonium chloride, an inhibitor of CatB/L, while its entry into Caco-2 cells (TMPRSS2-positive) was less efficiently inhibited. A clinically proven TMPRSS2 inhibitor, camostat mesylate, partially inhibited SARS-2-S-driven entry into Caco-2 cells, while camostat mesylate together with E-64d, an inhibitor of CatB/L, completely inhibited the entry, suggesting that both TMPRSS2 and CatB/L are involved in SARS-CoV-2 entry. However, the forced expression of TMPRSS2 rescued SARS-2-S-dependent entry into CatB/L-suppressed 293T cells, suggesting that the entry of SARS-CoV-2 is induced when cells express TMPRSS2 regardless of CatB/L expression. SARS-2-S-driven entry into lung cells was also inhibited by camostat mesylate. These findings show that SARS-CoV-2 cell entry depends on surface molecules such as ACE2 and TMPRSS2. Together, Zhou et al. and Hoffmann et al. showed that SARS-CoV-2 uses ACE2 as its cell entry receptor, just as SARS-CoV does. Importantly, these results suggest therapeutic targets for COVID-19.
One is the binding between the SARS-2-S protein and ACE2, and the other is the serine protease activity of TMPRSS2 for SARS-2-S protein priming. These therapeutic targets may act on the initial phase of the SARS-CoV-2 infection, but not dominantly on the latter phase of the disease, because the extremely potent inflammation induced in the latter phase is the main cause of ARDS-mediated death (de Wit et al., 2016). Consistently, ARDS is a lethal syndrome caused by pneumonia, sepsis or aspiration due to "cytokine storms," in which immune cells and nonimmune cells release large amounts of proinflammatory cytokines that cause damage to the host. Hyper-activation of the NF-κB pathway is involved in this phenotype. One of the major pathways for NF-κB activation after coronavirus infection is the MyD88 pathway through pattern recognition receptors (PRRs), leading to the induction of a variety of pro-inflammatory cytokines, including interleukin-6 (IL-6), tumor necrosis factor alpha (TNFα) and chemokines (de Wit et al., 2016). ACE2 is a membrane protein and an inactivator of angiotensin II (AngII). Importantly, ACE2 is endocytosed together with SARS-CoV, resulting in the reduction of ACE2 on cells, followed by an increase of serum AngII (Kuba et al., 2005). Because ACE2 is also downregulated in lung-injury models and recombinant ACE2 suppressed ARDS development (Imai et al., 2005), severe lung inflammation itself might induce dysregulation of the renin-angiotensin pathway followed by ARDS development after SARS-CoV-2 infection. Indeed, SARS-CoV-induced ARDS in an animal model is prevented by inhibitors of angiotensin receptor type 1 (AT1R) (Kuba et al., 2005). AngII acts not only as a vasoconstrictor but also as a pro-inflammatory cytokine via AT1R (Eguchi et al., 2018). The AngII-AT1R axis also activates NF-κB and a disintegrin and metalloprotease 17 (ADAM17), which generates the mature forms of epidermal growth factor receptor (EGFR) ligands and TNFα, two NF-κB stimulators (Eguchi et al., 2018). ADAM17 induction also processes the membrane form of IL-6Rα to the soluble form (sIL-6Rα), followed by the gp130-mediated activation of STAT3 via the sIL-6Rα-IL-6 complex in a variety of IL-6Rα-negative nonimmune cells, including fibroblasts, endothelial cells, and epithelial cells (Murakami et al., 2019). STAT3 is required for full activation of the NF-κB pathway, and the main stimulator of STAT3 in vivo is IL-6, especially during inflammation, although there are nine other members of the IL-6 family of cytokines that can activate STAT3, at least in vitro (Murakami et al., 2019). Therefore, SARS-CoV-2 infection in the respiratory system can activate both NF-κB and STAT3, which in turn can activate the IL-6 amplifier (IL-6 Amp), a mechanism for the hyper-activation of NF-κB by STAT3, leading to multiple inflammatory and autoimmune diseases (Murakami et al., 2019). The IL-6 Amp induces various pro-inflammatory cytokines and chemokines, including IL-6, and recruits lymphoid and myeloid cells, such as activated T cells and macrophages, to the lesion to strengthen the IL-6 Amp in a positive feedback loop (Figure 1). Importantly, because IL-6 is a major functional marker of cellular senescence, the age-dependent enhancement of the IL-6 Amp might correspond to the age-dependent increase of COVID-19 mortality. Indeed, the ARDS seen with SARS-CoV-2 infection is a cytokine release syndrome (CRS), a disorder induced by cytokine storms.
The lethal side effect of CRS found with chimeric antigen receptor (CAR)-T cell therapies for leukemia and lymphoma is also associated with elevated inflammatory cytokines (Neelapu et al., 2018). It is possible that these enhanced pro-inflammatory cytokines are induced by the IL-6 Amp. Considering that the anti-IL-6R antibody tocilizumab is an effective treatment for CRS in CAR-T cell therapies (Neelapu et al., 2018), researchers might want to consider drugs with a similar mechanism of action for CRS in COVID-19. Taken together, the demonstration of ACE2 as the SARS-CoV-2 receptor for cellular entry provides a key target for therapeutic development during the initial phase of the infection. In its later phase, the potential dysregulation of the AngII-AT1R pathway downstream of ACE2 could lead to the cytokine release syndrome observed in COVID-19 patients, which may require targeting of cytokine pathways, particularly the IL-6-STAT3 axis.
2020-04-23T05:06:49.652Z
2020-04-19T00:00:00.000
{ "year": 2020, "sha1": "92d94dd352c9ebd593a8e2a6abcbc58310508628", "oa_license": null, "oa_url": "http://www.cell.com/article/S1074761320301618/pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "92d94dd352c9ebd593a8e2a6abcbc58310508628", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine", "Biology" ] }
245579346
pes2o/s2orc
v3-fos-license
THE EFFECTS OF MINERAL WOOL FLY ASH ON COHESIVE SOIL STRENGTH BEHAVIOUR

This research work represents updated results of cohesive soil strength improvement with mineral wool fly ash. In the investigations, these materials were used: Portland cement CEM I 42.5 R, fly ash obtained from a mineral wool production process, sand and clay. Mixtures were prepared as follows: dry mixing of Portland cement and fly ash; dry mixing of sand and clay; adding water into the Portland cement and fly ash; adding the sand and clay mixture into the already prepared Portland cement and fly ash suspension. The content of fly ash replacing Portland cement varied from 0% to 40%, and the sand content of the sand and clay mixture varied from 20% to 60%. After 24 hours, the investigated samples were taken out of their cylinder forms and kept in a desiccator at a humidity of 90% and a temperature of 20 °C. The uniaxial compressive strength of the samples was determined after 548 days and compared to previous research results obtained after 7, 28 and 183 days. The most predictable compressive strength is for samples whose composition is 100% cement and 0% fly ash; these samples also showed the highest compressive strength among all investigated samples. The change in compressive strength is minimal for samples with a 10–30% fly ash content. The most significant decrease in compressive strength was obtained for samples with 40% fly ash after 183 days.

Introduction

Soil stabilisation is widely used in many road construction applications. Stabilisation methods can be chosen that reduce the costs of soil stabilisation and solve ecological problems. Some methods, like stabilisation using recycled asphalt (Zarins, 2020), lime (Firoozi et al., 2017) or slag (Mahedi et al., 2018), have become traditional and are often used. Other methods, like stabilisation using glass waste (Baldovino et al., 2021), shredded tires (Behnood, 2018), volcanic ash (Ghadir & Ranjbar, 2018), fly ash (Jalal et al., 2020; Riekstins et al., 2020), or ferric chloride solution (an electronic industry waste) for hardening of polymer resins by soil grouting, are still being tested, and their applicability is relatively narrow. It is well known that the fly ash utilisation problem is complicated because fly ash accounts for 15% of total ash (Pundinaitė-Barsteigienė et al., 2017). Also, the chemical composition of fly ash varies (Bhatt et al., 2019; Kang et al., 2020) and depends on the industry type and the products involved (Cho et al., 2019). Fly ash collected during the production of mineral wool gains an advantage due to its constant and more predictable composition (Zakarka et al., 2019). The diversity of the chemical composition of fly ash motivates studies of the possibilities for reusing local fly ash, since different compositions make it possible to achieve different soil strengths and increments of stiffness (Deepak et al., 2020). For this reason, fly ash reuse has to be evaluated on a case-by-case basis, as there are many locally unique factors like transportation costs, recycling costs, landfill charges, labour costs, and environmental costs (Stonys et al., 2016; Vaitkus et al., 2018). Nevertheless, many countries have promoted the reuse of fly ash waste in sustainable construction (Amran et al., 2021). Also, it is essential to understand how the behaviour of soil stabilised with fly ash depends on the lifetime of the construction (Karim et al., 2020).
Most often, compression tests of soil stabilised with fly ash are performed after 2-7 days (Graytee et al., 2018), after 1-2 weeks (Liang et al., 2020), or after up to 1-2 months (Gu & Chen, 2020; Liang et al., 2020). There is little information in the literature on the results of tests of stabilised soil after 2-6 months (Chousidis et al., 2016; Jia et al., 2020; Wong, 2015; Yoobanpot et al., 2017; Zakarka et al., 2019) or even after 1-2 years (Giergiczny, 2019; Moon et al., 2016). This research presents fly ash as a stabiliser for cohesive soil, with results after 1.5 years compared to previous research (Zakarka et al., 2019) after 7-183 days. The obtained results give an understanding of the strength increment over 1.5 years of clay stabilised with fly ash. Sufficient strength after soil strengthening is considered achieved when the compressive strength exceeds 0.5 MPa (State Enterprise Lithuanian…, 2012). According to the initial testing plan, the investigations were to be performed after one year, but they were delayed by the COVID-19 pandemic (Župerkienė et al., 2021), and the results were presented only after 1.5 years. Also, this research provides a possibility to reduce the amount of mineral wool waste, because 2.5 million tons of mineral wool waste, one of the most underutilised materials, is generated annually in the European Union (Yliniemi et al., 2020).

Experimental setup

Samples were prepared by mixing these materials:
- Portland cement (C) CEM I 42.5 R, which complies with LST EN 197-1:2011/P:2013 Cement - Part 1: Composition, specifications and conformity criteria for common cements;
- fly ash (FA) obtained from a mineral wool factory in Vilnius (Lithuania) as mineral wool production waste, the chemical composition of which is presented in Table 1;
- sand, the granulometric composition of which is presented in Figure 1 and which was also used in previous research (Zakarka et al., 2019);
- clay powder (CP), the chemical composition of which is presented in Table 1;
- water.

The granulometric composition of the sand was determined in accordance with LST CEN ISO/TS 17892-4:2017 Geotechnical investigation and testing - Laboratory testing of soil - Part 4: Determination of particle size distribution and LST CEN ISO/TS 17892-12:2018 Geotechnical investigation and testing - Laboratory testing of soil - Part 12: Determination of liquid and plastic limits. The investigated sand has a coefficient of uniformity C_u = 2.77 and a coefficient of curvature C_c = 0.90. According to the Lithuanian Geology Survey (2019), the investigated sand is classified as uniform sand (SaU). For the investigated sand and clay mixtures, the plastic (W_p) and liquid (W_L) limits were determined without fly ash additives, as Trivedi et al. (2013) recommended. When 80% CP and 20% SaU are mixed, W_p = 15.1% and W_L = 28.4%. After mixing 60% CP and 40% SaU, W_p = 11.6% and W_L = 20.1%; after mixing 40% CP and 60% SaU, W_p = 9.3% and W_L = 16.3%. As stated in the Engineering Geological and Geotechnical Soil Investigations Classification (Lithuanian Geology Survey, 2019), all sand and clay mixtures are assigned to sandy low-plasticity clay (saCIL). Depending on the calcium oxide (CaO) content, fly ash is divided into classes C and F (Guo et al., 2017; Kim et al., 2003), which have different effects on mixtures. Fly ash is assigned to class C if CaO is 15-35% or SiO2 + Al2O3 + Fe2O3 ≥ 50%, and to class F if CaO is ~5% or SiO2 + Al2O3 + Fe2O3 ≥ 70%.
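As a quick aid to these definitions, the sketch below computes the two gradation coefficients and applies the fly ash class criteria quoted above. C_u = D60/D10 and C_c = D30²/(D10·D60) are the standard geotechnical definitions; the example particle diameters are hypothetical, not the measured gradation of the investigated sand.

```javascript
// Gradation coefficients from characteristic sieve diameters (in mm).
function gradationCoefficients(d10, d30, d60) {
  return {
    Cu: d60 / d10,                  // coefficient of uniformity
    Cc: (d30 * d30) / (d10 * d60),  // coefficient of curvature
  };
}

// Fly ash class per the CaO / oxide-sum criteria quoted in the text.
function flyAshClass(caO, oxideSum /* SiO2 + Al2O3 + Fe2O3, in % */) {
  if ((caO >= 15 && caO <= 35) || oxideSum >= 50) return 'C';
  if (caO <= 5 || oxideSum >= 70) return 'F'; // the text gives "CaO ~ 5%"
  return 'unclassified';
}

// Hypothetical sieve diameters in mm:
console.log(gradationCoefficients(0.18, 0.28, 0.50)); // Cu ≈ 2.78, Cc ≈ 0.87
```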
Assigning the investigated mineral wool fly ash to class C or F is complicated because the amount of SiO2 + Al2O3 + Fe2O3 is 49.65% (it could be assigned to class C), while the CaO amount is 3.52% (it could be assigned to class F). For each different composition of the mixture, three sets of cylinder samples were prepared, with a diameter of 4.5 cm and a height of 7.0 cm. In total, 15 different compositions were investigated, which are presented in Table 2. Mixtures were made as follows (Figure 2):
1) dry mixing of Portland cement and fly ash;
2) dry mixing of sand and clay;
3) adding water into the Portland cement and fly ash;
4) adding the sand and clay mixture into the already prepared Portland cement and fly ash suspension.

It was observed that the water ratio had to be increased to 1.5 to achieve proper mixing quality for some samples (Table 2). Such an increase in the water ratio made it possible to achieve the maximum compressive strength of the prepared sample (Fuller et al., 2018). After 24 hours, all investigated samples were taken out of their cylinder forms (with a diameter of 4.5 cm and a height of 7.0 cm) and kept in desiccators at a constant humidity of 90% and a temperature of 20 °C. The compressive strength of the samples was determined with a 100 kN electromechanical universal testing machine (Walter+Bai AG) after 548 days and compared to previous research (Zakarka et al., 2019) results obtained after 7, 28, and 183 days. The samples were loaded with the sanded surfaces contacting the testing machine platens. The top loading plate has a spherical hinge. A uniaxial compression ramp of 2 mm/min was applied. Before determining the uniaxial compressive strength, the density of the samples was identified (Figure 3). The lowest density (1.388 g/cm³) was obtained for samples of 60% Portland cement and 40% fly ash with 80% clay powder and 20% sand. The highest density (1.884 g/cm³) was obtained for samples consisting of 100% Portland cement and 0% fly ash with 40% clay powder and 60% sand. Sample density tends to increase when, for the same amount of fly ash, the amount of clay is decreased and the amount of sand is increased. Also, it was noticed that density depends on the fly ash amount, because fly ash additives decrease the total sample density.

Analysis of obtained results

The compressive strength of the investigated samples according to their composition is presented in Table 3, including the previous Zakarka et al. (2019) research results after 7, 28, and 183 days (the sample numbers in Table 3 correspond to Table 2). Table 3 also presents a view of the samples after compression. It was observed that, as the amount of fly ash increases, the quality of the mixture becomes worse. Due to the increased amount of fly ash, Portland cement conglomerates appear in poorly mixed samples. The size and the number of conglomerates depend on the fly ash amount. Nevertheless, after 1.5 years (548 days), each of the investigated samples reached more than 0.50 MPa compressive strength, which is assumed to be sufficient strength after soil strengthening (State Enterprise Lithuanian…, 2012). Fly ash concentrations were assessed on the sample failure plane. When the fly ash in the suspension was increased, larger fly ash concentrations and wider gaps among them were observed (Table 3). For samples No. 13-15 (Table 3), with 60% Portland cement and 40% fly ash, a total concentration of 70-90% fly ash in the failure plane was obtained. The fly ash proportion in the suspension was also analysed separately versus compressive strength.
The results for the different compression test periods are presented in Figures 4-7. The results presented in Figures 4-6 are compiled in conformance with Zakarka et al. (2019). Such a presentation of the results makes it possible to analyse the influence of the fly ash amount on the compressive strength for different clay and sand proportions (the plotted series are 100% C + 0% FA, 90% C + 10% FA, 80% C + 20% FA, 70% C + 30% FA and 60% C + 40% FA). It is seen from Figures 4-6 that as the amount of fly ash increases, the compressive strength decreases. The amount of clay in the sample has a significant influence on the compressive strength. The test data confirm this fact for the compressive strength without fly ash. After 548 days (Figure 7), for samples composed primarily of sand (40% clay and 60% sand), the amount of fly ash does not influence the compressive strength. For these samples, the average compressive strength obtained is more than 5.0 MPa. Samples mainly composed of clay (80% clay and 20% sand) tend to decrease in compressive strength as the amount of fly ash in the sample increases. For these samples, the obtained compressive strength decreases from 5.0 MPa to 3.0 MPa. The compressive strength obtained in the previous research of Zakarka et al. (2019) after 7-183 days showed that compressive strength decreases as fly ash increases. In the compressive strength results after 548 days, the fly ash amount has no clear influence on the compressive strength, except for samples made with 80% clay and 20% sand. A more significant influence on compressive strength was noticed for the sand added to the samples than for the fly ash. In samples without fly ash (here, the Portland cement content is 100%), the compressive strength increased by 77% from day 28 to day 548. For samples with the maximum fly ash amount (40%) and the minimum Portland cement amount (60%), the compressive strength increased by 34% (Figures 8-12). Analysing the summarised results presented in Figure 13, the most predictable compressive strength is observed for samples without fly ash (Table 3), where the binder is only Portland cement. Also, these samples obtained the highest compressive strength compared to the other samples. The change in compressive strength is minimal for samples with 10-30% fly ash. The highest decrease in compressive strength was obtained after 183 days for samples with 40% fly ash (Table 3). Nonetheless, the compressive strength of these samples increased after 548 days and is almost the same as for samples without fly ash.

Conclusions

The main objective of this study was to update previous research results and evaluate the compressive strength of sandy low-plasticity clay when fly ash and Portland cement additives are used. To achieve this objective, a series of uniaxial compression tests were conducted on samples with different proportions of Portland cement, fly ash, sand and clay. In total, 15 mixtures were investigated. The obtained results were compared to tests made with a binder of 100% Portland cement and without fly ash. For this mixture, the most predictable compressive strength was obtained. Also, for these samples, the highest compressive strength was obtained compared to the other sample mixtures. The change in compressive strength is minimal for samples with 10-30% fly ash. The highest decrease in compressive strength was obtained for samples with 40% fly ash after 183 days. Nonetheless, the compressive strength of these samples increased after 548 days and is almost the same as for samples without fly ash.
After 1.5 years (548 days), each of the investigated samples reached more than 0.5 MPa compressive strength, which is assumed to be sufficient strength after soil strengthening. It is rational to limit the fly ash admixture in Portland cement to 30% of the mass of the mixture. Judging by the results obtained after the various test periods, the use of fly ash to improve the compressive strength of cohesive soil is promising. In addition, further investigations are needed to create mixture recipes that depend on the soil type and the minimum compressive strength requirements.
2021-12-31T16:14:08.494Z
2021-12-28T00:00:00.000
{ "year": 2021, "sha1": "5b551e02e5b7918a55e909c09baf2ce818032fc7", "oa_license": "CCBY", "oa_url": "https://bjrbe-journals.rtu.lv/article/download/bjrbe.2021-16.545/2910", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a578f376933ca3ebc7f253298061e783bc479ffd", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
254237938
pes2o/s2orc
v3-fos-license
DNA barcoding survey of Trichoderma diversity in soil and litter of the Colombian lowland Amazonian rainforest reveals Trichoderma strigosellum sp. nov. and other species

The diversity of Trichoderma (Hypocreales, Ascomycota) colonizing leaf litter as well as the rhizosphere of Garcinia macrophylla (Clusiaceae) was investigated in primary and secondary rain forests in Colombian Amazonia. DNA barcoding of 107 strains based on the internal transcribed spacers 1 and 2 (ITS1 and 2) of the ribosomal RNA gene cluster and the partial sequence of the translation elongation factor 1 alpha (tef1) gene revealed that the diversity of Trichoderma was dominated (71 %) by three common cosmopolitan species, namely Trichoderma harzianum sensu lato (41 %), Trichoderma spirale (17 %) and Trichoderma koningiopsis (13 %). Four ITS1 and 2 phylotypes (13 strains) could not be identified with certainty. Multigene phylogenetic analysis and phenotype profiling of four strains with an ITS1 and 2 phylotype similar to Trichoderma strigosum revealed a new sister species of the latter that is described here as Trichoderma strigosellum sp. nov. Sequence similarity searches revealed that this species also occurs in soils of Malaysia and Cameroon, suggesting a pantropical distribution.

Electronic supplementary material: The online version of this article (doi:10.1007/s10482-013-9975-4) contains supplementary material, which is available to authorized users.

Introduction

The Amazon area is one of the largest regions on Earth covered with tropical rain forests and is one of the most biodiverse ecosystems, with approximately 60,000 species of vascular plants (Ter Steege et al. 2003; Hoorn et al. 2010). The efforts of multiple research groups have resulted in a considerable increase of our knowledge of the plants occurring in this region (Ter Steege et al. 2003; Pitman et al. 2001; Tuomisto et al. 2003; Baker et al. 2004; Phillips et al. 2004; Kreft et al. 2004; Duque 2004), whereas the diversity and ecology of the microfungi remain relatively underexplored. Fungi play a central role in many ecological processes in forest ecosystems, including the decomposition of plant litter and nutrient cycling. Although decomposition rates in tropical forests are typically higher than in temperate forests (Powers et al. 2009), this parameter is highly variable (Hättenschwiler et al. 2011). In tropical as well as cooler regions, colonization by endophytic and epiphytic phyllosphere fungi occurs in early stages of decomposition, when the loss of litter mass and chemical changes occur most rapidly (Osono and Takeda 2002). The composition and functioning of soil microbial communities are among the key factors that determine decomposition rates (Coûteaux et al. 1995; Hättenschwiler et al. 2011). Thus the composition of the communities of soil micro-organisms present in the nutrient-poor Amazonian rainforests may strongly impact the decomposition process. Trichoderma, a name now generally used in preference over the associated teleomorph Hypocrea, is a genus of primarily mycoparasitic/fungicolous filamentous fungi that contains species with great opportunistic potential, including the capability to decompose woody and herbaceous materials (see Druzhinina et al. 2011 for references). In-depth molecular evolutionary and taxonomic studies of Trichoderma have resulted in the distinction of about 200 currently recognized species (e.g. Samuels 2006c; Druzhinina et al. 2006, 2010a, 2010b,
2011; Jaklitsch 2009, 2011; Atanasova et al. 2013a). Species recognition in Trichoderma is usually based on the application of the genealogical concordance phylogenetic species recognition concept (Taylor et al. 2000) to the partial sequences of the translation elongation factor 1-alpha (tef1), endochitinase chi18-5, calmodulin (cal1) and other loci. The concept allows an assignment of species rank to a clade that is apparent on at least two single-locus phylograms and is not contradicted by the others. The relatively well-detailed molecular phylogeny of Trichoderma has resulted in the development of reliable tools for infrageneric DNA barcoding and for the recognition of new species (Druzhinina et al. 2005; Kopchinskiy et al. 2005; Atanasova et al. 2013a). Trichoderma diversity has been previously explored in Colombia, but only three species have hitherto been reported from the Colombian Amazon region, namely T. virens, T. asperellum and T. harzianum (Hoyos-Carvajal et al. 2009a). The development of microfungal communities in litter bags was studied in primary and secondary lowland rainforests in two regions of Colombian Amazonia, viz. Araracuara and Amacayacu, which are approximately 600 km apart, using a culturing approach to reveal the fungal succession of leaf litter in forests at different stages of regeneration. The fungi were isolated from the litter bags after different periods of decomposition. In the Amacayacu region this litter-related diversity was compared to that present on rootlets of Garcinia macrophylla (Clusiaceae), a tree species that occurred in all four Amacayacu plots. The objective of the study we present here was to investigate the diversity of Trichoderma and discuss the potential of these fungi for the decomposition of leaf litter in lowland tropical rainforest.

Study area

The studied forests in Colombian Amazonia belong to the tropical humid forest according to the life zone definition of Holdridge (Holdridge et al. 1971; Holdridge 1982), having an equatorial superhumid climate without a dry season (Type Afi of Köppen, 1936, cited by Duivenvoorden and Lips 1993). The average annual temperature is approximately 25 °C with over 100 mm precipitation every month, resulting in an average annual rainfall of 3,100-3,300 mm (Tobón Marín 1999). Two locations were selected in the Middle Caquetá region. The first location was on the lower terrace of the Caquetá River, near the Araracuara community (0°37′ S, 72°23′ W). The seven plots studied at this location are part of a mosaic of primary and secondary forests and of agricultural fields originating from slash-and-burn agriculture (chagras) at different stages of regeneration (López-Quintero et al. 2012). A second location in this region comprised a mature forest characterized by the presence of a dipterocarp tree species, Pseudomonotes tropenbosii (Dipterocarpaceae, Londoño et al. 1995), located about 50 km downstream from the Araracuara region in Peña Roja (00°34′ S, 79°08′ W) at 200-300 m altitude (López-Quintero et al. 2012). The second main study location was in the National Park Amacayacu (3°25′ S, 70°08′ W), which covers 293,500 ha of tropical humid forest. Here, two terra firme (i.e. non-flooded) plots and two várzea (i.e. flooded) plots were selected, each containing a mature and a regenerating forest. Full details on the forests studied and the plots selected are provided by López-Quintero et al. (2012).
Litter decomposition experiments and isolation procedure

Fresh mixed leaf litter from dominant trees occurring in the plots was collected from the forest floor. The litter was air-dried, weighed, and packed in 27 litterbags with a mesh size of 1 mm² at each location (thus 108 litterbags in total for all four locations), and the bags were placed directly on top of the litter layer on the forest floor of the respective plots. One litter bag was recollected after each of four different times of exposure, namely after 4-6, 9, and approximately 12 and 17 months of exposure on the forest floor. Microfungi were isolated from particles of fresh and decomposed leaf litter samples using a soil-washing method modified after Gams and Domsch (1967). Briefly, three grams of fine litter fragments were taken from the litter bags and washed three times, for 5 min each time, with 500 ml sterile distilled water using strong mechanical agitation. The washed particles were blotted dry aseptically, and four of them, with an approximate size of 4 mm², were placed in each of ten Petri dishes containing 2 % water agar. Thus 40 litter particles were used in total for each plot and time of isolation. After incubation for 7 days at 25 °C in the dark, mycelia growing out from the litter particles were picked, transferred to cornmeal agar (CMA, Difco) and further purified. In addition, 10 rootlets of Garcinia macrophylla were sampled from each plot at the Amacayacu location for the isolation of microfungi using the same isolation method, but plating approximately 10-mm-long and 1-mm-diam root fragments. All microfungi were preliminarily identified morphologically and subsequently by ITS-based DNA barcoding using the sequence similarity search available at the NCBI portal (http://www.ncbi.nlm.nih.gov). Here we present the observed diversity of the Trichoderma isolates (Table 1), whereas a full study on all fungal isolates will be presented elsewhere.

DNA extraction, PCR amplification and sequencing

Genomic DNA of Trichoderma was isolated using the QIAGEN DNeasy Plant Mini Kit following the manufacturer's protocol. The ITS1, 5.8S rRNA and ITS2 regions of the ribosomal RNA (rRNA) gene cluster were amplified using the primers ITS1 and ITS4 (White et al. 1990), sequenced using an ABI 3700 capillary sequencer (PE Biosystems) and further analyzed using the Lasergene software package (DNASTAR Inc.). Fragments of chi18-5 (GH18 chitinase CHI18-5, previously called ech42), cal1 (calmodulin) and tef1 (translation elongation factor 1 alpha) were amplified as described previously (Druzhinina et al. 2008; Jaklitsch et al. 2006). Chi18-5 is a protein-coding fragment, cal1 has one intron, and the tef1 fragment contains two introns as well as one complete and one partial exon. PCR fragments of these genes were purified (PCR purification kit, Qiagen, Hilden, Germany) and sequenced at Eurofins MWG Operon (Ebersberg, Germany).

DNA barcoding

All sequences were aligned for each locus separately and grouped into phylotypes using the MEGA 5 software. Unique phylotypes were identified as follows: ITS1 and 2 sequences were identified using the oligonucleotide barcode program TrichOKEY (www.isth.info; Druzhinina et al. 2005). Ambiguous cases were then subjected to the sequence similarity search tool BLASTN against the NCBI GenBank database (http://www.ncbi.nlm.nih.gov).
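To illustrate the general logic of oligonucleotide-barcode identification of this kind, the toy sketch below locates a genus-specific hallmark in an ITS fragment and then tests which species-specific variant follows it. The hallmark and barcode strings are invented placeholders, not the actual TrichOKEY hallmarks, which are defined in Druzhinina et al. (2005).

```javascript
// Toy barcode matcher (all sequences and hallmarks are hypothetical).
const GENUS_HALLMARK = 'GATCATTA';   // placeholder genus-specific hallmark
const SPECIES_BARCODES = {           // placeholder species-specific variants
  'T. exampleA': 'CCGGT',
  'T. exampleB': 'CTGGT',
};

function identify(its) {
  const at = its.indexOf(GENUS_HALLMARK);
  if (at < 0) return 'genus hallmark absent';
  // Check which species-specific variant immediately follows the hallmark.
  const following = its.slice(at + GENUS_HALLMARK.length);
  for (const [species, barcode] of Object.entries(SPECIES_BARCODES)) {
    if (following.startsWith(barcode)) return species;
  }
  return 'genus only (no species barcode matched)';
}

console.log(identify('ACGATCATTACTGGTAAGT')); // -> 'T. exampleB'
```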
All isolates that were not resolved by ITS1 and 2 sequences (T. longibrachiatum and H. orientalis, section Trichoderma, and others) were then identified by the analysis of the fourth intron of tef1, using a sequence similarity search against the NCBI GenBank and TrichoBLAST (www.isth.info, Kopchinskiy et al. 2005) databases. The NCBI accession numbers for the ITS sequences obtained in this study are listed in Table 1.

Phylogenetic analyses

DNA sequences were aligned with CLUSTAL X version 2.1 (Thompson et al. 1997; Larkin et al. 2007) and visually verified with GeneDoc version 2.6 (Nicholas and Nicholas 1997). Ambiguous fragments of the alignment were removed with the gBlocks server, with the less stringent options selected (Talavera and Castresana 2007). The loci used in this study were previously checked for the absence of intragenic recombination (Druzhinina et al. 2008). Neutral evolution was tested by linkage disequilibrium-based statistics and Tajima's test as implemented in DnaSP 4.50.3 (Rozas et al. 2003). The interleaved NEXUS file was formatted using PAUP*4.0b10 (Swofford 2002). The best nucleotide substitution model for each locus was determined using jMODELTEST (Posada 2003), and the unconstrained GTR + I + G nucleotide substitution model was applied to all loci. Metropolis-coupled Markov chain Monte Carlo (MCMC) sampling was performed using MrBayes v. 3.0B4 (Ronquist and Huelsenbeck 2003), with two simultaneous runs of four incrementally heated chains that were run for 1-3 million generations. The number of generations for each dataset was determined using the AWTY graphical system (Nylander et al. 2008) to check the convergence of the MCMC. Bayesian posterior probabilities (PP) were obtained from the 50 % majority rule consensus of trees sampled every 100 generations, after removing the first 300-500 trees (depending on the locus). PP values lower than 0.95 were not considered significant (Leaché and Reeder 2002).

Morphological examination

Growth rates of the isolates were assessed after inoculation near the margin of 9-cm-diameter Petri dishes using three different media, viz. CMA (Difco cornmeal agar) supplemented with 2 % D(+)-glucose monohydrate (i.e. CMD), SNA (synthetic nutrient-poor agar), and OA (oatmeal agar; for recipes of the latter two media see Gams et al. 2007), and incubated in the dark at 24, 27, 30, 33, and 36 °C. The colony radius was measured daily until the colonies reached the opposite side of the Petri dish. Colony color was characterized according to the Methuen Handbook of Color (Kornerup and Wanscher 1983). Conidial dimensions, based on 25 measurements for each isolate-medium combination, were determined from photographs taken with a Zeiss Axioskop 2 with interference contrast, using a 63×/1.5 Plan-Neofluar objective and a Nikon Ds-Fi1 camera. Images were processed with the Nikon NIS-Elements D software package. Conidiophore structures and measurements of phialides and hyphal cells were recorded at 2,000× magnification with a Wild camera lucida. Colony features were studied with a Leica MZ FLIII binocular microscope. For scanning electron microscopy, parts of the colonies growing on OA agar plates were fixed in 3 % glutaraldehyde/PBS and postfixed in 1 % osmium tetroxide. After dehydration through an ethanol and acetone series, the fungal cells were critical point-dried, followed by Pt/Pd sputter coating. Cells were viewed with a field emission scanning electron microscope at 5 kV (FEI, Eindhoven, The Netherlands) as described by Teertstra et al. (2009).
Phenotype microarrays

Growth of the putative new species and the respective reference strains (T. strigosum, T. strigosellum sp. nov. and T. sp. C.P.K. 3606) was analyzed on 95 carbon sources using the Biolog Phenotype MicroArray system for filamentous fungi (Biolog Inc.) as described before (Friedl et al. 2008; Atanasova et al. 2010). Incubation was performed under 12 h cyclic illumination at 25 °C. Statistical analyses were performed using Statistica 6.1 (StatSoft Inc.).

DNA barcoding of Trichoderma diversity revealed twelve known and four putatively new taxa

Ninety-four (88 %) out of 107 strains were recognized as 10 species of Trichoderma by using the oligonucleotide barcode programs TrichOKEY (Druzhinina et al. 2005) and TrichoBLAST, based on ITS1 and 2 and tef1 phylotypes, respectively (Table 2). The remaining 13 strains could not be reliably identified. Two isolates (FPFh19 and FH6-16) and one isolate (FPFh10), all from rootlets of Garcinia macrophylla in the Amacayacu flood plain forests (i.e. várzea), had tef1 phylotypes identical to those of DAOM 229990 (NCBI GenBank EU280015) and DAOM 229888 (EU280054), respectively, which were detected previously by Hoyos-Carvajal et al. (2009a) in rain forest soils of Peru. DAOM 229990 represents a putative new species in section Trichoderma, while DAOM 229888 formed a long isolated lineage distantly related to T. helicum (Hoyos-Carvajal et al. 2009a). Both species had previously been found in forest soil in Loreto near Iquitos, Perú (Hoyos-Carvajal et al. 2009a). Three other isolates [i.e. P4(129b), P4-4(64) and P1-6(13)] obtained from a 30-year-old secondary forest plot and a recently cut down primary forest plot in the Araracuara region have an ITS1 and 2 + tef1 haplotype related to T. rogersonii in the 'Small Koningii Branch' (Samuels et al. 2006b) and thus were assigned as T. cf. rogersonii. Seven isolates, CBS 102805, 102806, 102816, 102817 and 102818 from leaf litter decomposing for six months in the mature Pseudomonotes tropenbosii (Dipterocarpaceae) forest in Peña Roja, as well as strains P1-2(25) and P4(166) isolated from 17-month-old litter in a recently cut down forest (P1) and a 30-year-old secondary forest plot (P4) in Araracuara, shared highly similar ITS1 and 2 phylotypes and were attributed to section Trichoderma by TrichOKEY, but no species identification was obtained. Sequence similarity based on 291 nt of the tef1 large intron determined that these isolates are most closely related to T. strigosum (89-90 % similarity), while other species of section Trichoderma showed only 85 % similarity or less. Therefore they were tentatively identified as T. cf. strigosum (see below). Interestingly, three true T. strigosum isolates were also detected by TrichOKEY and confirmed by tef1 (Tables 1, 2).

A few cosmopolitan species dominate the Trichoderma mycoflora in the Amazonian leaf litter

Three species comprising 68 % of the isolates dominated the diversity of Trichoderma in the Amazon forests investigated, namely the T. harzianum complex (38 %), T. spirale (17 %) and T. koningiopsis (13 %) (Table 2). Note that these species were dominantly isolated from either leaf litter or Garcinia rootlets, and from terra firme, várzea and successional forests (Table 2). Interestingly, the T. harzianum species complex was represented by at least three genetically distinct phylogenetic species, namely T. inhamatum (1 strain), T. harzianum sensu stricto (2 strains) and T. cf. harzianum (=H. 'pseudoharzianum' sensu Druzhinina et al. 2010a, 41 strains).
The next most frequent species were the putatively novel taxon related to T. strigosum (=T. strigosellum sp. nov., see below) (7 strains), T. virens (6 strains), T. asperellum (3 strains) and T. asperelloides (3 strains). All other taxa were detected no more than twice. Thirteen Trichoderma species were isolated from litter bags at different stages of decomposition (Table 2), while only six taxa were detected from rootlets of Garcinia macrophylla (Table 2). The Trichoderma community from the decomposing litter was less diverse than that from fresh to little-decomposed leaves (Table 2). Fresh leaf litter and relatively little-decomposed leaves of 4-6 months yielded 84 isolates, compared to six isolates from 9- to 12-month-old leaves and 17 isolates from 17-month-old leaves. T. asperellum, T. asperelloides, T. sp. DAOM 229888, T. harzianum sensu stricto and T. hamatum were also isolated at least once from fresh leaf litter. All dominantly found species occurred in both the Amacayacu and Araracuara regions. Twelve species were observed in the Amacayacu plots and only six in Araracuara, including Peña Roja. T. strigosum and T. strigosellum sp. nov. (see below) were detected in Araracuara and Peña Roja, and T. cf. rogersonii was only observed in Araracuara. Trichoderma epimyces, T. virens, T. asperelloides, T. asperellum, T. hamatum, T. inhamatum, T. sp. DAOM 229888 and T. sp. DAOM 229990 were isolated in Amacayacu and not in Araracuara. No clear distinction was apparent between the numbers of isolates obtained from primary and secondary forests, nor between those isolated from terra firme and várzea forests in Amacayacu (Table 2).

Genealogical concordance phylogenetic species recognition confirms T. strigosellum sp. nov.

To reveal the exact phylogenetic position of the isolates identified as T. cf. strigosum in section Trichoderma, we applied the exact sequence of the 4th large intron of tef1 (as retrieved by TrichoMARK, www.isth.info, Druzhinina et al. 2005) to a sequence similarity search (BLASTN) against NCBI GenBank. The application of the precise intron sequence without flanking coding areas is necessary to obtain the most accurate identification result, one that is not biased by the strong similarities of the less polymorphic coding regions (exons). The taxonomy report obtained from this search revealed that, besides T. strigosum, the query isolates are related to Hypocrea valdunensis (1 hit), T. viride (teleomorph H. rufa, 79 hits) and T. viridescens (4 hits) (listed in decreasing similarity). The Bayesian phylogram constructed with tef1 sequences of T. strigosum and the query isolates (Fig. 1) demonstrated that T. strigosum and T. cf. strigosum are monophyletic and both belong to a statistically supported clade together with T. valdunensis and T. viride, while T. viridescens is their most distant genetic neighbor. Interestingly, the isolates of T. strigosum and T. cf. strigosum formed two statistically supported subclades, which allowed us to hypothesize that they may represent two sister species. To test this we constructed Bayesian phylograms based on the cal1 and chi18-5 phylogenetic markers (both unlinked to tef1; see Trichoderma genomes on the Mycocosm portal of DOE JGI and Kubicek et al. 2011). This analysis demonstrated that both subclades were also present on the chi18-5 and cal1 phylogenetic trees. The same two statistically supported subclades, corresponding to T. strigosum and T. strigosellum sp. nov.,
were also present on a concatenated phylogram of the tef1, ITS1 and 2, cal1 and chi18-5 loci (Supplementary materials). Thus, the isolates of T. cf. strigosum fulfill the criteria of the genealogical concordance phylogenetic species recognition concept (Taylor et al. 2000) and represent a new species, described below as T. strigosellum sp. nov. The phylogenetic position of the isolates identified as T. strigosum was also confirmed by this analysis (Figs. 1 and 2). A sequence similarity search conducted for all sequences of this new species revealed further strains of the species that until now had been identified as T. strigosum. These had been isolated from Malaysia (DAOM 230018), Colombia (DAOM 229937) and Cameroon (G.J.S. 05-02), suggesting that this new species has a broad, probably pantropical distribution (Figs. 1, 2). Its sibling species, T. strigosum, found during our studies in the same regions and habitats in Colombia as the new taxon, is further known from Brazil, from forest soil from North Carolina, USA, from soil under Theobroma cacao trees in Pastaza district, Peru, and from a forest in Turkey (Ismail Erper, Lea Atanasova, Irina Druzhinina, unpublished data), thus suggesting a geographically broad distribution for this species.

Physiological profiling of T. strigosellum sp. nov. and T. strigosum

We applied BIOLOG Phenotype MicroArrays with FF Phenotype microplates to further test whether T. strigosellum sp. nov. and T. strigosum are physiologically similar or may be distinguished by phenotypic characters. Carbon utilization by T. strigosellum sp. nov. was rather similar to that by T. strigosum, as both could grow on almost all tested carbon sources (Fig. 3a). m-Inositol, however, is hardly utilized by T. strigosellum sp. nov. In most cases T. strigosellum sp. nov. showed better growth than T. strigosum, especially on the best utilized carbon sources, such as D-lactose, N-acetyl-D-glucosamine, D-maltotriose, D-raffinose, maltose, lactulose, and stachyose. For some compounds, such as D-melibiose, D-sorbitol, L-ornithine, L-threonine, L-fucose, D-saccharic acid, glycyl-L-glutamic acid, and adonitol, growth was variable (Fig. 3a), but rather strain- and not species-dependent. The largest differences in hyphal growth were observed on carbon sources such as glycerol, amygdalin, m-inositol and maltitol (Fig. 3b). This analysis further supported our conclusion above on the divergence between T. strigosellum and T. strigosum. Furthermore, and in line with it, linear growth rates at 30 and 33 °C were higher for T. strigosellum sp. nov. compared to T. strigosum (Fig. 4).

Development of an ITS1 and 2 oligonucleotide barcode for T. strigosellum sp. nov.

We compared the ITS1 and 2 phylotypes of T. strigosellum sp. nov. and T. strigosum and found that five out of the eight available sequences for T. strigosellum sp. nov. had a 'species-specific' oligonucleotide barcode in the 5′ area of the ITS2 locus that immediately follows the genus-specific hallmark four (Druzhinina et al. 2005). Compared to T. strigosum, this hallmark contained one indel (an extra C), one T → C transition and one G → T transversion (Fig. 5). However, three strains of T. strigosellum sp. nov. displayed a phylotype identical to that of T. strigosum (Fig. 5). Thus the ITS barcode alone cannot reliably separate the two species, but it may attribute them to the T. strigosum clade. The two species are reliably differentiated in phylogenetic analyses of the tef1 large intron.

Discussion

Fungi play diverse roles in the functioning of tropical forest ecosystems.
Unfortunately, biodiversity studies on microfungi in Colombian Amazonian rainforests are still sparse. Here we investigated the diversity of Trichoderma species in decomposing leaf litter in a series of primary and secondary Amazon forests from the Araracuara and Amacayacu regions that were recently studied for mushroom and plant diversity (López-Quintero et al. 2012). The macrofungal diversity differed considerably between these two Amazon regions, but also between primary and secondary forests, as well as between flooded and non-flooded forests (López-Quintero et al. 2012). Hitherto, only three Trichoderma species had been reported from Colombian Amazonia, namely T. virens, T. asperellum and T. harzianum (Hoyos-Carvajal et al. 2009a). Thus the 15 Trichoderma species that we report from Colombian Amazonia, including four putative new species, show that the microfungal diversity of these forests deserves further exploration. Other species, such as T. atroviride, T. brevicompactum, T. erinaceus, T. hamatum, T. inhamatum, T. koningii, T. koningiopsis, T. longibrachiatum, T. reesei, T. viridescens, together with a few so far undescribed species, have been reported from other parts of the country (Veerkamp and Gams 1983; Hermosa et al. 2000, 2004; Kraus et al. 2004; Ortiz and Orduz 2000; Lee and Hseu 2002; De Souza et al. 2006; Samuels et al. 2006b; Méndez and Viteri 2007; Hoyos-Carvajal et al. 2009a), with T. harzianum, T. asperellum and T. asperelloides (reported as T. asperellum 'B') being most commonly isolated, followed by T. brevicompactum (Hoyos-Carvajal et al. 2009a, 2009b). Trichoderma cf. harzianum, T. koningiopsis, and T. spirale occurred in fresh leaves, but also in young to 17-month-decomposing leaves. The repeated presence of T. cf. harzianum, T. spirale, T. koningiopsis and T. virens (isolated more than three times) in freshly collected leaf litter suggests that these species may occur as leaf endophytes. Endophytic colonization of epigeous parts of tropical plants is known for several apparently rare Trichoderma species (see Druzhinina et al. 2011 for references) that we did not detect in this study. There are also indications that numerous common environmentally opportunistic species, such as T. cf. harzianum and T. hamatum (Druzhinina et al. 2005), may also become endophytes (Chaverri et al. 2011; Chaverri and Samuels 2013), and T. harzianum s.s. and T. asperellum were reported as endophytic in bean stem tissue by Hoyos-Carvajal et al. (2009b). However, our understanding of the functional diversity of Trichoderma species in the Colombian tropical lowland Amazon remains limited, especially with respect to this switch between endophytic and saprotrophic life styles. Trichoderma can be mycotrophic, feeding on living and dead fungal biomass. Recent genomic and transcriptomic studies (Kubicek et al. 2011; Druzhinina et al. 2011; Atanasova et al. 2013b) have proven that mycotrophy is the major genetic basis that allows Trichoderma to establish itself in a diversity of habitats, ranging from biotrophy on plants and animals to exclusive saprotrophy. According to the concept of Kubicek et al. (2011) and Druzhinina et al. (2011), Trichoderma is initially fungicolous, and this lifestyle gave rise to a number of derived nutritional strategies including biotrophy and saprotrophy. In this study we demonstrated that Trichoderma is present in the community of leaf litter-decomposing fungi in Colombian Amazonia.
However, whether Trichoderma is a primary decomposer in this ecosystem or whether it follows other fungi remains unresolved. A pioneering occurrence of many Trichoderma species has been repeatedly observed in soils of unstable ecosystems (summarized by Domsch et al. 2007). Moreover, Trichoderma species, together with fungi such as Mucor hiemalis and Absidia glauca, were found to appear later in the fungal succession of decomposing Swida leaves (Osono 2005). These findings demonstrate that Trichoderma spp. may play a role during various stages of litter decomposition. It appears remarkable that the diversity of Trichoderma in the biodiversity-rich ecosystem of the tropical lowland Amazon forest was found to be dominated by a group of cosmopolitan species with high opportunistic potential, such as T. cf. harzianum, T. spirale and T. koningiopsis (Atanasova et al. 2013a). Similar observations were made by Migheli et al. (2009) on Sardinia, located in the Mediterranean biodiversity hotspot, where the Trichoderma community did not contain any endemic species and was dominated by the same species as detected in the current study. Migheli et al. (2009) speculated on the relative role of human activity that favors establishment of invasive Trichoderma species and harms the presumed endemic community of the otherwise unique and species-rich environment. The results of the current study further support the view that a number of Trichoderma species that are most frequently detected in soil and litter form invasive communities that establish themselves in various ecosystems. However, the interaction between these 'strong' Trichoderma species and local infrageneric communities requires further investigation. The likely pantropical T. strigosellum sp. nov. differed ecophysiologically from its closest neighbor, the cosmopolitan species T. strigosum. Growth of T. strigosellum sp. nov. at elevated temperatures (e.g. 33 °C) was significantly better than that of T. strigosum, which may imply a greater fitness in the tropical lowland forest ecosystems where the species occurs. Trichoderma species have applications ranging from the production of enzymes and antibiotics (H. jecorina/T. reesei) to bioremediation of xenobiotic substances and biological control of plant-pathogenic fungi and nematodes (Druzhinina et al. 2011). Previous studies on Trichoderma from neotropic regions focused on the isolation of strains with antifungal activity against pathogens of agro-industrially important crops, e.g. cacao (Theobroma cacao) and coffee (Coffea spp.) (Samuels et al. 2006a; Hanada et al. 2008; Mulaw et al. 2010). Our data confirm that the Amazon region harbors a rich pool of Trichoderma species, including yet undescribed species, which allows us to better understand their role in important ecological processes of these ecosystems such as nutrient cycling. It is therefore likely that further diversity explorations of this important group of fungi from these regions will yield significant data. Note: the Latin stem strigosus applied to T. strigosellum sp. nov. and T. strigosum may be read in two different ways, typifying the differences between the species: striga (botanical Latin, a bristle-like hair) refers to the appearance of the conidiophore extensions in T. strigosum, whereas strigosus (Latin: meager or, of oratory, dull) reflects the plain appearance of the new species, which lacks conidiophore extensions.
Holotype In Herbarium Universidad de Antioquia as HUA 179963, with isotype in Herbarium CBS as CBS H-21054. Ex-type culture CBS 102817 (= C.P.K. 3604), isolated from leaf litter exposed for 6 months in litter bags placed on the forest floor in a Pseudomonotes tropenbosii (Dipterocarpaceae) forest in Peña Roja, Department Amazonas, Colombia, July 1999; collected by Carlos López Quintero. Morphology The new species is similar to T. koningii and T. koningiopsis but is differentiated morphologically by its much less developed aerial mycelium. It differs from its closest relative, T. strigosum, by the complete absence of sterile conidiophore elongations and by better growth at higher temperatures. (Fig. 6 Morphology of Trichoderma strigosellum sp. nov. CBS 102817: a. colony on cornmeal agar (CMA) at room temperature; b. branching conidiophores and phialides on CMA; c. conidia on CMA; d. drawing of conidiation and conidia from CMA; e. low-magnification SEM image of spore clusters; f. SEM of hyphae, phialides and conidiogenesis.) Colonies on OA dark grey-green (Fig. 6), reaching 7-8 cm diameter after 5 days on CMA at 24 °C, and 9 cm after 6 days on CMD, SNA and OA at 27 °C, but only 0.1-0.4 cm on these three media at 36 °C. Submerged mycelium of young colonies irregularly and loosely branched, spreading radially; aerial mycelium with central conidiation after 4 days on CMD; zonate, with zones on OA approximately 15-18 mm distant, with scattered small pustules of deep green colour (27E8); growth on SNA sparse, with loosely branched hyphae that form conidial heads; pustules hardly distinguishable, but becoming distinct near the margin of the plate after 7 days. Odor somewhat musty, but strain CBS 102805 had a coconut odor on OA. Comments Among the species related to the T. koningii complex treated by Samuels et al. (2006b), T. strigosellum needs to be compared with other species having narrow conidia. In this respect, T. strigosellum resembles T. koningii and T. koningiopsis, but it does not form abundant aerial mycelium. Phylogenetically, T. strigosellum is a sister species to T. strigosum, but differs from the latter by the absence of sterile appendages and by smaller conidia and narrower phialides. Because T. strigosum has inconspicuous sterile or fertile conidiophore elongations, the species was placed by Bissett (1991) in section Pachybasium. The new species does not have such elongations. T. strigosellum can be reliably identified by high sequence similarity (>93 %) or identity of the tef1 large intron sequence. Phylotypes of the tef1 large intron of T. strigosum share <90 % similarity with those of T. strigosellum. Note that no diagnostic coding regions were found for definitive species identification.
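As a rough illustration of the similarity criterion just stated (>93 % identity of the tef1 large intron for T. strigosellum versus <90 % shared with T. strigosum), here is a hedged sketch of how such thresholds might be applied in practice. The percent-identity function assumes pre-aligned sequences of equal length; the thresholds are the only values taken from the text, and the function and variable names are invented for illustration.

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences
    (columns where both sequences have a gap are ignored)."""
    pairs = [(x, y) for x, y in zip(a.upper(), b.upper()) if (x, y) != ("-", "-")]
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

def assign_species(query: str, ref_strigosellum: str, ref_strigosum: str) -> str:
    """Apply the thresholds stated in the species description."""
    pid_new = percent_identity(query, ref_strigosellum)
    pid_old = percent_identity(query, ref_strigosum)
    if pid_new > 93.0 and pid_old < 90.0:
        return f"T. strigosellum ({pid_new:.1f} % vs. {pid_old:.1f} %)"
    return f"inconclusive ({pid_new:.1f} % vs. {pid_old:.1f} %)"
```

In a real identification, the query and reference intron sequences would first be aligned with standard tools; the sketch only captures the final thresholding step.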
Ecophysiology of tropical tree crops: an introduction In this special issue, the ecophysiology of major tropical tree crops, considered here in a broader sense and including species such as banana, cashew, cassava, citrus, cocoa, coconut, coffee, mango, papaya, rubber, and tea, is examined. For most of these crops, photosynthesis is treated as a central process affecting growth and crop performance. The crop physiological responses to environmental factors such as water availability and temperature are highlighted. Several gaps in our database concerning the ecophysiology of tropical tree crops are indicated, major advances are examined, and needs for further research are delineated. INTRODUCTION Plant physiological research has a fundamental role in advancing the frontier of knowledge essential for a better understanding of plants and their interactions with surrounding environments (El-Sharkawy, 2006) for the entire or any period of the life cycle. The plant life cycle is an important aspect responsible for key differences among crops when growth and developmental strategies are considered. With regard to trees, in spite of their basic physiology being similar to that of annual species, there are some facets, such as size and complexity of organisation, that make them an exciting field of diversity. The concept of a tree is amenable to an array of definitions, which usually involve size as well as physiognomy. Here, I have considered what is meant by tree in a broad sense according to the definition of Hallé et al. (1978), summarised in the Merriam-Webster's Collegiate Dictionary (Mish, 1993): "… a woody perennial plant having a single usually elongate main stem generally with few or no branches on its lower part; a shrub or herb of arborescent form…". Such a definition provides a partial justification for the selection of the tropical tree crops examined here, embracing not only trees such as cashew, mango and rubber but also non-woody species such as banana, coconut and papaya (Table 1). Furthermore, only for a few tropical tree crops with greater economic importance in world trade (e.g. citrus and coffee) has there been a relatively considerable amount of basic research on environmental physiology, and much less as compared with temperate tree crops such as apple and stone fruit. (Table 1 Brief description of the tropical tree crops dealt with in this special issue. Unless otherwise stated, for all species, fruits are the harvestable yield. Sources: Alvim and Kozlowski (1977); León (1987); Smith et al. (1992); Schaffer and Andersen (1994); Last (2001).) The lack of fundamental research may be due partially to the fact that the majority of tropical tree crops are cultivated in third-world countries, where limited resources are available for adequately exploring the diversity amongst tropical plant species. This is another reason to justify the choice of the crops explored in this special issue.
There is a great deal of pliancy in the orientation, depth and style of each article of the present issue, but in the majority of them photosynthesis is treated as a major process affecting growth and crop performance. This is not surprising taking into account that 90-95% of plant dry mass is derived from photosynthetically fixed carbon, although a straightforward relationship between photosynthesis and crop yield is not always observed (Khanna-Chopra, 2000; Kruger and Volin, 2006). It is highlighted that highly productive species such as cassava, papaya and banana show high photosynthetic rates, which may reach values as large as 50 µmol m⁻² s⁻¹, as in cassava (El-Sharkawy et al., 1992). By contrast, slow-growing crops, such as citrus, cocoa and coffee, which have evolved as understory trees, are traditionally considered as displaying very low photosynthetic rates, seldom above 10 µmol m⁻² s⁻¹, even in the field under favourable growth conditions (DaMatta, 2003). This behaviour has mostly been associated with large diffusive, rather than biochemical, limitations to photosynthesis (Lloyd et al., 1992; DaMatta et al., 2001), which can become increasingly important particularly under stressful conditions such as drought and elevated temperatures. Plants are frequently exposed to a variety of harsh environmental conditions which negatively affect growth and crop yield. An understanding of the responses of crops to their environment is thus fundamental to minimise the deleterious impact of unfavourable climatic conditions and to manage them for maximum productivity. Boyer (1982), for instance, argued that water supply affects the productivity of trees and annual crops more than all other environmental factors combined. This aspect has been explored in depth in this issue, as for banana, cashew, cassava, coconut, papaya, and tea; however, the development of internal water deficit may be important to some crops such as coffee and mango in order to trigger phenological events such as flower bud release. Decreases in yield induced by low soil water supply may largely be associated with a decline in photosynthetic rates, either by a direct effect of dehydration on the photosynthetic apparatus or by an indirect effect by way of stomatal closure, which restricts CO₂ uptake. In addition to soil water deficits, atmospheric water deficit is also of particular relevance to tropical tree crops. This is due to their very low root hydraulic conductivity as compared with annuals, which brings about a pronounced effect of transpiration on tree water relations (DaMatta, 2003). Furthermore, in a tropical environment the range of evaporative demand is on average far higher than that of temperate zones. This implies that leaf water status changes much more diurnally in tropical trees than in many temperate trees or annuals, and leaf water deficits may occur under the high evaporative demand even without any soil water shortage, as in banana, cocoa, coffee, papaya, and tea. Therefore, regulation of leaf water status by atmospheric conditions is relatively more important in tropical tree crops than in many other crops. Yield of crop plants under soil and/or atmospheric drought stress will largely depend on adaptive mechanisms allowing them to maintain growth and a high photosynthetic production under prolonged drought conditions. However, studies on the effects of drought on crop performance are often complicated, firstly due to the complex nature of drought stress in the field, and secondly because crop yield may be affected
more directly by the smaller leaf area rather than by the decreased photosynthetic rate per unit leaf area during and following drought events. Long diurnal periods with air temperature above 35 °C are relatively common in tropical areas. High air temperatures may steeply increase the leaf-to-air temperature difference to values of 5 to 10 °C or more, as shown for tea. In any case, one of the major difficulties in interpreting the response of physiological processes such as photosynthesis to temperature, particularly in the field, is that an increase in temperature is associated with a rise in atmospheric vapour pressure deficit. Therefore, decreases in photosynthetic rates could be due to increases in temperature per se, or to increases in vapour pressure deficit leading to stomatal closure, or both. However, in contrast to temperate species (see, for example, Salisbury and Ross, 1992), there seems to be a broad adequate temperature range (20-35 °C, or even above) for photosynthesis, nearly corresponding to the normal temperature fluctuations frequently found in the hot environments in which tropical tree crops are generally grown (DaMatta, 2003). For example, by growing cassava in a hot environment, El-Sharkawy and Cock (1990) demonstrated that maximum net photosynthetic rates of around 30 to 36 µmol m⁻² s⁻¹ were common at leaf temperatures in the range 32 to 37 °C. They also pointed out that failure to adequately control air humidity and irradiance was responsible for the findings of several earlier reports suggesting that cassava has lower photosynthesis and lower and narrower optimum temperatures of 25 to 28 °C for maximum photosynthetic rates. Manipulation of microclimate for increasing the efficiency with which resources are used in agriculture has received renewed attention in the last few years. In agroforestry and inter-cropping systems, taller plant canopies may alter not only the radiation, but also air humidity and temperature around understory crops. Seedlings of many tropical tree crops grow better under shaded conditions than in full sunlight (e.g. cocoa, tea), perhaps because in full sun they are subjected to photoinhibition and/or have large root resistances to water uptake resulting in early stomatal closure (Huxley, 2001). During the juvenile phase of tree crops, inter-cropping with fast-growing crops such as cassava, papaya and banana is often successfully used. This allows not only improved light capture and biomass production per unit land area but also improved growth as a result of a more favourable water status. In fact, it may be suggested that shading, provided it is not excessive, may be advantageous for tree crop cultivation in the tropics considering that: (i) photosynthesis in several tropical tree crops is irradiance-saturated below full sunlight; (ii) in the tropics during most of the year incoming radiation is high and may lead to photoinhibitory damage, particularly when associated with water shortage; and (iii) an improved microclimate buffers air humidity and soil moisture availability, thereby allowing maintenance of leaf gas exchange for longer. Other reasons for maintaining shade trees within perennial crop plantations include the income provided by their fruit and/or timber (or latex if rubber is the dominant species), increasing awareness of the environmental costs associated with high-input monocrops, and biodiversity maintenance. Indeed, a growing body of evidence suggests that whenever correctly managed, inter-cropping and agroforestry schemes will
become a promising alternative for sustainability in tropical agriculture, as highlighted here for crops such as cocoa, coffee, rubber, and tea. FUTURE SCOPE There is an increasing trend of expanding tropical agriculture towards marginal and degraded lands where water shortage and unfavourable temperatures already constitute major constraints to crop yield. The scientific community has long been aware of the impact of the environment on plant productivity, and this aspect of plant biology has recently become a greater political and public concern in the wake of discussions surrounding global climate change (Chapple and Campbell, 2007). In any case, large areas of valuable irrigated land are facing crop conversion problems, either because they are allotted to less valuable annual crops or because of critical salinisation problems, with roughly one-fourth of world-wide irrigated land involved (Janssens and Subramaniam, 2000). Considerable areas with currently sufficient water will experience some degree of water shortage in the near future, e.g. India and parts of China (Wallace, 2000). The use of appropriate perennial crops in combination with adequate irrigation to exploit saline lands may be successful where annuals would normally fail. In addition, perennial crops may help to buffer the farmer's production against year-to-year oscillations in yields from rainfed annual crops. In effect, in the latter half of the last century, and most particularly in the last decade, the proportion between annual and perennial crops evolved in favour of the latter. There is also an expectation that the proportion of perennial crops to annual crops will continue increasing during this new century (Janssens and Subramaniam, 2000). Most of this increase involves tropical and subtropical tree crops. Needless to say, there are countless tropical species with potential agricultural use whose domestication remains unfulfilled.
As occurs with most tropical plant species, the gaps in our knowledge on the ecophysiology of tropical tree crops are vast, though significant advances have occurred in recent years. The bulk of research has slowly shifted from more observational studies on plant growth and developmental responses towards physiological processes, as can be seen when examining the current papers dealing with banana, citrus, coffee and mango, for example. Unfortunately, however, physiological research concerning tropical tree crops has been restricted to a few laboratories throughout the tropical/subtropical countries where those crops are chiefly grown. To date, much fundamental research has been conducted using potted plants without the appropriate calibration in the field, which can lead to a waste of time and resources since in most cases the results cannot be extrapolated, or simulated by crop modelling, to describe what may occur in natural environments (El-Sharkawy, 2006; Long et al., 2006). Even under field conditions, much emphasis on the ecophysiology of tropical tree crops has been focused at the leaf level without advancing substantially towards the canopy level. Furthermore, part of the available information obtained in field conditions rests on empirical experimentation rather than a scientific basis, with predominantly observational results and no mechanistic or functional links. With a few exceptions, the use of isotope techniques, fundamental biochemical and molecular studies, multiscale analyses, and crop simulation models has not yet been a major goal in basic and applied research on tropical tree crops. From the above, a deep understanding of the physiology of currently cultivated tropical trees and its impact on subsistence and commercial agriculture is a challenge to be met in the near future. Hopefully, this special issue will not only highlight some recent advances in the ecophysiology of tropical tree crops, but may also serve as a stimulus for further efforts in this important and challenging field of research. We still await the extension of the new tools of genetics, biochemistry and molecular biology, which are just beginning to be explored in major crops such as coffee and citrus, to other tropical tree crops. It must be emphasised, however, that if significant benefit to the farmer is to be attained, crop performance must also be evaluated under the naturally changing tropical environmental conditions. After all, yield improvement under such conditions is the major goal to be achieved.
Supplementation of vitamin E or a botanical extract as antioxidants to improve growth performance and health of growing pigs housed under thermoneutral or heat-stressed conditions Background Heat stress has severe negative consequences on performance and health of pigs, leading to significant economic losses. The objective of this study was to investigate the effects of supplemental vitamin E and a botanical extract in feed or drinking water on growth performance, intestinal health, and oxidative and immune status in growing pigs housed under heat stress conditions. Methods Duplicate experiments were conducted, each using 64 crossbred pigs with initial body weights of 50.7 ± 3.8 and 43.9 ± 3.6 kg and ages of 13 and 12 weeks, respectively. Pigs (n = 128) were housed individually and assigned within weight blocks and sex to a 2 × 4 factorial arrangement consisting of 2 environments (thermo-neutral (21.2 °C) or heat-stressed (30.9 °C)) and 4 supplementation treatments (control diet; control + 100 IU/L of D-α-tocopherol in water; control + 200 IU/kg of DL-α-tocopheryl acetate in feed; or control + 400 mg/kg of a botanical extract in feed). Results Heat stress for 28 d reduced (P ≤ 0.001) final body weight, average daily gain, and average daily feed intake (−7.4 kg, −26.7%, and −25.4%, respectively), but no effects of supplementation were detected (P > 0.05). Serum vitamin E increased (P < 0.001) with vitamin E supplementation in water and in feed (1.64 vs. 3.59 and 1.64 vs. 3.24), but not for the botanical extract (1.64 vs. 1.67 mg/kg), and was greater when supplemented in water vs. feed (P = 0.002). Liver vitamin E increased (P < 0.001) with vitamin E supplementation in water (3.9 vs. 31.8) and feed (3.9 vs. 18.0), but not with the botanical extract (3.9 vs. 4.9 mg/kg). Serum malondialdehyde was reduced with heat stress on d 2, but increased on d 28 (interaction, P < 0.001), and was greater (P < 0.05) for antioxidant supplementation compared to control. Cellular proliferation was reduced (P = 0.037) in the jejunum under heat stress, but increased in the ileum when vitamin E was supplemented in feed and water under heat stress (interaction, P = 0.04). Tumor necrosis factor-α in jejunum and ileum mucosa decreased with heat stress (P < 0.05) and was reduced by vitamin E supplementation under heat stress (interaction, P < 0.001). Conclusions The addition of the antioxidants in feed or in drinking water did not alleviate the negative impact of heat stress on feed intake and growth rate of growing pigs. Oxidative stress is defined as an imbalance in favor of oxidants as compared to the antioxidant system in the body [14]. Oxidative stress is produced by reactive oxygen species (ROS) such as the hydroxyl radical, peroxyl radical, hydrogen peroxide, superoxide anion radical and singlet oxygen [15]. These molecules produce oxidation in cells, causing damage to DNA, proteins, and lipids, altering the normal functioning of the cell, and producing measurable byproducts, including 8-hydroxydeoxyguanosine (8-OHdG), protein carbonyls and malondialdehyde (MDA) [16]. The first line of defense against oxidants is the endogenous antioxidant system, which includes superoxide dismutase, catalase, and glutathione peroxidase [17]. The second line of defense is antioxidants provided in the diet, such as vitamin E, vitamin C, carotenoids, polyphenols, selenium, and zinc [17].
Vitamin E is a fat-soluble vitamin and serves as a natural antioxidant in the body. The most biologically active form of vitamin E is D-α-tocopherol [15], and DL-α-tocopheryl acetate is the most common source of vitamin E used in animal diets. Vitamin E prevents lipid peroxidation by scavenging ROS and donating the electrons abstracted by free radicals from biomolecules [15]. Supplementation of vitamin E together with selenium improved intestinal epithelial barriers and alleviated oxidative stress in growing pigs housed under heat stress [18]. Vitamin E increased immune responses in broilers [19], egg production in laying hens [20], and feed intake in hens housed under heat stress [21]. On the other hand, Niu and coworkers reported no effects of vitamin E supplementation on body weight (BW), average daily feed intake (ADFI), and gain:feed ratio (G:F) in broilers housed under heat stress conditions [19]. Natural vitamin E (D-α-tocopherol) supplemented in the drinking water of pigs showed high absorption [22,23] and may be strategically used to decrease the negative effects of heat stress in pigs when feed intake is reduced. Polyphenols are compounds found in plants and serve to protect against insects, ultraviolet light, and physical damage [24]. Polyphenols have antioxidant properties preventing damage by ROS, can activate antioxidant enzymes, and inhibit oxidases. Supplementation of polyphenols in the diet decreased MDA concentrations in plasma [25] and reduced diarrhea and E. coli excretion in weaned piglets [26]. In addition to antioxidant properties, polyphenols have been suggested to increase digestive enzyme secretions, modulate the intestinal microbiota and morphology, improve immune system functioning, and provide anti-inflammatory properties [27]. The prospect of dietary polyphenol supplementation to alleviate oxidative stress associated with heat stress, through their ability to donate multiple electrons and quench free radicals, is promising and requires further study in swine. The objective of the present study was to investigate the hypothesis that supplementation of vitamin E and a botanical extract containing polyphenols, in feed or drinking water, could enhance growth performance, intestinal health, and oxidative and immune status in growing pigs housed under heat stress conditions. Animals, housing, and experimental design The study was conducted with a total of 128 pigs in duplicate experiments. In each experiment, 64 crossbred pigs (Smithfield Premium Genetics, Roanoke Rapids, NC, USA), equally divided into 32 barrows and 32 gilts, were used. Pigs had an initial body weight of 50.7 ± 3.8 and 43.9 ± 3.6 kg and age of 13 and 12 weeks for Exp.
1 and 2, respectively. Pigs were blocked by initial BW and sex and randomly assigned within blocks to a 2 × 4 factorial randomized complete block design using an experimental allotment program [28]. Therefore, there were a total of 16 blocks per treatment combination (8 blocks within each duplicate study). Factors consisted of 2 types of environments (thermo-neutral and heat-stressed) and 4 supplementation treatments applied as follows: (1) control diet (25 IU/kg of DL-α-tocopheryl acetate; CON); (2) control diet + 100 IU/L of D-α-tocopherol supplemented via the drinking water (Emcelle tocopherol, Stuart Products, Bedford, TX, USA; VEW); (3) control diet + 200 IU/kg of additional DL-α-tocopheryl acetate supplemented in the feed (Rovimix, DSM Nutritional Products, Parsippany, NJ, USA; VEF); and (4) control diet + 400 mg/kg of a botanical extract containing a variety of polyphenols supplemented in the feed (Promote AOX 50, Cargill, Minneapolis, MN, USA; POL), based on the recommended inclusion rate of the manufacturer. The vitamin E level of the control treatment corresponded to current industry recommendations [29], and the supplementation of 200 IU/kg corresponded to previous experiments [30]. Water supplementation of vitamin E was set at 100 IU/L to provide approximately the same amount of vitamin E on a daily basis as the feed supplementation, assuming a predicted water intake to feed intake ratio of 2:1. Within each experiment, pigs and treatments were allotted randomly within blocks into 2 rooms, each containing 32 pens. Pigs were housed individually, resulting in 16 pigs per treatment combination for the overall study. Dietary and water treatments were equally represented in each room and were randomly distributed within room to avoid potential location effects. Pens measured 0.91 m × 1.82 m and contained a stainless-steel cup waterer (AquaChief, Hog Slat, Inc., Newton Grove, NC, USA) and an individual stainless-steel feeder (Boar feeder, Hog Slat, Inc.). All pigs were provided ad libitum access to feed and drinking water. Immediately after pigs were allocated, they were provided water supplementation or dietary treatments for 7 d prior to the initiation of temperature treatments (adaptation period). Both rooms in each experiment were set at a constant temperature of 22 °C during the adaptation period, and after this period was completed, the environmental treatments (thermo-neutral and heat-stressed) were implemented for the subsequent 28-d period. The heat-stressed and thermo-neutral environmental treatments were represented by one room each within each experiment, resulting in 2 replicate rooms per environmental treatment for the study. Each room was equipped with an environmental control system (GL-5124LW Grower Direct, Monitrol, Inc., Boucherville, Quebec, Canada) to mimic, respectively, the high temperatures and temperature fluctuations commonly experienced during the day in the summer season, and normal thermo-neutral conditions. Temperatures for the heat-stressed room were set at 28.3, 29.4, 29.4, 31.1, 32.8, 33.3, 34.4, 35.6, 34.4, 31.7, 29.4 and 29.4 °C for 2400, 0200, 0400, 0600, 0800, 1000, 1200, 1400, 1600, 1800, 2000, and 2200 h, respectively. For the thermo-neutral room, temperatures were set at 18.9, 18.9, 20.0, 20.0, 21.1, 21.1, 22.2, 22.2, 21.1, 21.1, 20.0, and 20.0 °C for the same time points. Temperatures were recorded every 10 min using 3 data loggers (LogTag, Micro DAQ Ltd., Contoocook, NH, USA) distributed
in each room at approximately the same height as the pigs. Dietary treatment feeds were manufactured at the North Carolina State University Feed Mill Educational Unit (Raleigh, NC, USA). Diets were primarily based on corn and soybean meal and were formulated to contain 2.78 g standardized ileal digestible lysine per Mcal ME (Table 1) and met or exceeded all nutrient requirements for growing pigs as suggested by the National Research Council [31]. A basal mix containing all ingredients except the test ingredients was first created and divided into 4 batches. The first and second batches were used (without any supplement) as the control treatment (CON) and the treatment receiving vitamin E in the water (VEW). Vitamin E or the botanical extract was mixed with the basal diet to create the vitamin E (VEF) and the polyphenol-containing botanical extract (POL) treatments, respectively. To prepare the water supplementation treatment, a stock solution was prepared by adding concentrated vitamin E to water at a ratio of 0.0256:1. The vitamin E stock solution was subsequently metered into the drinking water at a rate of 1:128 vitamin E stock solution:drinking water using a water medication device (Dosatron DM11F, Hog Slat, Inc.). Treated water was supplied to randomly selected pens (within block) within each room (8 pens per room). To determine water disappearance, both rooms were equipped with 4 water meters each. Two water meters (Elster C700 digital Invision 5/8" × 3/4" bronze valve, Elster AMCO Water, Inc., Ocala, FL, USA) were located in the principal water system (one on each side of the room) to measure water consumption for treatments that were not receiving water supplementation (24 pens in each room), and 2 water meters (water meter 5/8" Arad, AradGroup, Dalia, Israel) were used to measure water intake for the water supplementation treatment (8 pens in each room). Growth performance and water intake Body weight was measured on d −7 (7 d prior to the initiation of heat stress), 0, 7, 14, and 28 to calculate ADG. Daily feed intake was measured as the difference between daily feed additions and feed remaining at the end of each weekly period, divided by 7 d. Gain:feed ratio (G:F) was calculated by dividing ADG by ADFI. Water intake was determined weekly by subtracting the reading on each water meter at the beginning of the period from the reading at the end of the period. Respiration rate and rectal temperature Respiration rate and rectal temperature were measured on d 0 (immediately prior to the initiation of the environmental treatments) as a baseline before heat stress was initiated. Likewise, respiration rate and rectal temperature were measured on d 1, 2, 3, 4, 5, 6, 7, 14, 21, and 28 of heat stress between 1300 and 1600 h (the peak of heat stress during the day). Respiration rate was determined by counting the number of flank movements during a 30-s period at rest, using a stopwatch, by the same observer during all evaluations. Rectal temperature was measured using a digital thermometer (GLA M700, GLA Agricultural Electronics, San Luis Obispo, CA, USA) after respiration measurements were completed.
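The two-step dilution described earlier in this section can be sanity-checked with simple arithmetic. The sketch below assumes a concentrate strength of about 500 IU/mL for the liquid D-α-tocopherol product, a figure not stated in the text, so the absolute numbers are illustrative; the point is that a 0.0256:1 stock dilution followed by 1:128 metering lands near the 100 IU/L target.

```python
# Verify that the two-step dilution approximates the 100 IU/L target.
concentrate_iu_per_ml = 500.0  # assumed concentrate strength (not given in the text)

# Step 1: stock solution at 0.0256 parts concentrate per 1 part water.
stock_iu_per_ml = concentrate_iu_per_ml * 0.0256 / (1 + 0.0256)

# Step 2: the medication device meters 1 part stock into 128 parts drinking water.
drinking_iu_per_ml = stock_iu_per_ml / 128.0

print(f"stock: {stock_iu_per_ml:.2f} IU/mL")                     # ~12.48 IU/mL
print(f"drinking water: {drinking_iu_per_ml * 1000:.0f} IU/L")   # ~98 IU/L (target 100)
```

Under the assumed concentrate strength, the final concentration comes out at roughly 98 IU/L, consistent with the stated 100 IU/L treatment level.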
Sample collection Blood samples from each pig were collected by venipuncture (jugular vein) using 20-gauge × 3.8 cm drawing needles (Vacuette, Greiner Bio-One, Kremsmünster, Austria) on d 2 and 28 (at 1200 h), representing short-term and long-term heat stress. Blood for serum analysis was collected into 10-mL vacuum tubes (BD Vacutainer serum, Franklin Lakes, NJ, USA). Blood was centrifuged at 4,000 × g for 10 min at 4 °C using a refrigerated centrifuge (Centra GP8R, Thermo IEC, Waltham, MA, USA) and serum was collected. Serum was aliquoted into 3 tubes of 2 mL capacity (Biotix, Inc., Neptune Scientific, San Diego, CA, USA) and stored at −80 °C until further analysis. Blood for complete blood cell count (CBC) analysis was collected into 6-mL vacuum tubes (BD Vacutainer containing 10.8 mg K₂EDTA) and immediately submitted to Antech Diagnostic Laboratory (Cary, NC, USA) for analysis. At the end of each experiment (d 28), 16 pigs in each room (a total of 64 pigs; 8 pigs per experimental treatment) were euthanized using a captive bolt gun, followed by exsanguination. Blood samples were collected at the time of exsanguination, processed as indicated previously, and stored at −80 °C. The abdominal cavity was opened, and 25 cm of the proximal jejunum (anterior to the duodenal-jejunal junction) and 25 cm of the distal ileum (10 cm proximal to the ileal-cecal junction) were excised. Mucosa samples from the proximal jejunum and distal ileum were scraped using a glass slide, placed into 2-mL tubes (Biotix, Inc.), snap frozen in liquid nitrogen, and subsequently stored in a −80 °C freezer until further analysis was conducted. Four cm of intact intestinal tissue from the jejunum and ileum was collected, rinsed in 0.9% saline, and fixed in 40 mL of 10% formaldehyde solution for 3 d for histological measurements. Approximately 100 g of liver from the center of the right lobule was collected and stored in a plastic bag at −20 °C for analysis of vitamin E. Chemical analyses Proximate analysis of the diets was conducted by the Agricultural Experiment Station Chemical Laboratories, University of Missouri (Columbia, MO, USA) using AOAC official methods [32]. Diets were analyzed for moisture (Method 934.01) and crude protein (Method …). Concentrations of vitamin E (IU/kg) in feed samples and in drinking water (IU/mL) samples were analyzed by DSM Technical Marketing Analytical Services (Belvidere, NJ, USA), using a high-performance liquid chromatography system with fluorescence detection following AOAC Official Method 971.30 [32] for α-tocopherol and α-tocopheryl acetate determination in foods and feeds. Vitamin E concentrations in serum samples collected on d 2 and 28 and concentrations of vitamin E in the liver were determined by the Veterinary Diagnostic Laboratory at Iowa State University (Ames, IA, USA) using high performance liquid chromatography.
Intestinal measurements Cross sections, 0.4 cm thick, of fixed tissue samples from the jejunum and ileum were taken, and 2 to 3 sections per pig were stored in cassettes submerged in 10% formalin by the North Carolina State University College of Veterinary Medicine Histopathology Laboratory (Raleigh, NC, USA) for hematoxylin and eosin (H&E) and Ki-67 staining of slides. Each microscope slide was photographed using an AmScope FMA050 microscope (AmScope, Irvine, CA, USA) and AmScope 3.7 software to capture and analyze images at 40× magnification. Eighteen randomly positioned villi and crypts were selected to measure villus height (from the top of the villus to the crypt junction), villus width (at the middle of the length of the villus), and crypt depth (from the crypt junction to the base of the crypt) based on previously described methods [33]. The villus height to crypt depth ratio was obtained by dividing each villus height by its own crypt depth. The proliferation rate of cells in the crypts was measured by staining for Ki-67, a protein located in the nucleus of proliferating cells, using a Ki-67 antibody. Microscope slides were scanned at 100× magnification using an AmScope FMA050 microscope and AmScope software. Images of fifteen crypts per sample were captured and evaluated using the ImageJS software [34]. The ratio of Ki-67 positive cells in each crypt of the jejunum and ileum tissue was calculated by dividing Ki-67 positive cells by total cells in the crypt. Concentration of cytokines in mucosa and serum Tumor necrosis factor-α (TNF-α) was measured in the mucosa of the proximal jejunum and distal ileum. Samples (0.75 to 0.80 g of mucosa) were combined with 1.5 mL of phosphate buffered saline (PBS; pH = 7.4) and subsequently homogenized (Tissuemiser, Bio-Gen PRO200, PRO Scientific Inc., Oxford, CT, USA). The samples were then centrifuged at 15,000 × g at 4 °C for 20 min. A 1-mL sample of supernatant was obtained and stored at −80 °C until it was analyzed. Total protein was evaluated in mucosal samples prior to analysis of TNF-α using the Pierce BCA protein assay kit (Thermo Scientific, Rockford, IL, USA). A porcine TNF-α ELISA kit (Quantikine, R&D Systems, Inc., Minneapolis, MN, USA) was used to analyze TNF-α. Mucosal concentrations of TNF-α were expressed in pg/mg of total protein. The intra-assay CVs were 7.3% and 4.1% for mucosal ileum and jejunum samples, respectively. Oxidative status in mucosa and serum Analysis of MDA was conducted in the mucosa of the ileum and jejunum and in serum. Samples of mucosa (100 mg) were homogenized using 1 mL of PBS and 10 µL of butylated hydroxytoluene. Concentrations of MDA in mucosal tissues and serum samples were analyzed using the OxiSelect TBARS assay kit protocol (MDA quantitation; Cell Biolabs, Inc., San Diego, CA, USA). Only ileum results are reported because jejunum samples were compromised during the last step of the assay. Absorbances were measured at 532 nm in a multi-detection micro-plate reader (Synergy HT, BioTek Instruments, Winooski, VT, USA). Results from MDA for ileum mucosa and serum samples were expressed in µmol/g of total protein and µmol/L, respectively. Intra-assay CVs were 2.4% and 9.0%, respectively.
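The two morphometric ratios defined above (villus height to crypt depth, and the Ki-67 index) reduce to simple averages once the per-structure measurements and counts are in hand. A minimal sketch with invented example numbers, not the study's data:

```python
# Villus:crypt ratio — each villus height divided by its own crypt depth,
# then averaged over the (typically 18) measured villus/crypt pairs.
def mean_villus_crypt_ratio(villus_heights_um, crypt_depths_um):
    ratios = [v / c for v, c in zip(villus_heights_um, crypt_depths_um)]
    return sum(ratios) / len(ratios)

# Ki-67 index — positively stained cells divided by total cells per crypt,
# averaged over the (typically 15) imaged crypts.
def mean_ki67_index(positive_counts, total_counts):
    fractions = [p / t for p, t in zip(positive_counts, total_counts)]
    return sum(fractions) / len(fractions)

# Invented example values (micrometres and cell counts):
print(mean_villus_crypt_ratio([420, 465, 390], [250, 240, 260]))  # ~1.71
print(mean_ki67_index([58, 61, 49], [120, 118, 110]))             # ~0.48
```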
Statistical analyses Data were analyzed using the PROC MIXED procedure of SAS (v. 9.4, SAS Institute Inc., Cary, NC, USA). The individual pig was used as the experimental unit. The model included environmental treatment, antioxidant supplementation treatment, and their interaction. Block nested within environment was used as the random effect. The least significant difference method was used to determine differences between means following a significant Fisher test. Statistical significance was considered at P < 0.05 and tendencies at 0.05 ≤ P ≤ 0.10. Room temperature, relative humidity, and water consumption The mean temperatures for the thermo-neutral and heat-stressed rooms were 20.5 ± 1.66 °C and 30.0 ± 3.46 °C, respectively, for Exp. 1, and 21.8 ± 3.66 °C and 31.7 ± 3.10 °C, respectively, for Exp. 2. Room temperatures fluctuated within the day, consistent with the experimental design (Fig. 1A and B). The relative humidity for the thermo-neutral and heat-stressed rooms was 54.6% and 52.4%, respectively, for Exp. 1, and 65.4% and 47.4%, respectively, for Exp. 2. Water disappearance per pig in the heat-stressed environment was lower than in the thermo-neutral environment (6.7 vs. 11.3 L/d; P = 0.007). Water supplementation with vitamin E increased water disappearance compared to the control water when pigs were housed in the thermo-neutral environment (14.3 vs. 8.4 L/d; P = 0.017), but water disappearance was not different due to vitamin E within the heat-stressed environment (6.77 vs. 6.62 L/d). Growth performance In Exp. 1, 1 pig (thermo-neutral with POL treatment) was removed from analysis due to very poor growth. In Exp. 2, 2 pigs (thermo-neutral with POL and VEF treatments) were removed due to excessive weight loss related to suspected ileitis (Lawsonia intracellularis). Subsequently, all pigs in Exp. 2 were individually treated daily from d 9 to 21 of the experiment with an oral dose of 8.8 mg/kg BW of tiamulin hydrogen fumarate (Denagard 12.5%, Elanco Animal Health, Greenfield, IN, USA). Three pigs (2 from the thermo-neutral environment with the VEW treatment, and 1 from the heat-stressed environment with the control treatment) were medicated until the end of the study, and 1 pig died (heat-stressed environment with VEF treatment). Body weight (BW) was decreased (P ≤ 0.06) in heat-stressed pigs during the last 3 weeks of the experiment, but not during the first week (Table 2). Heat stress reduced ADG and ADFI during each week (P < 0.04) and overall (P < 0.001). (Fig. 1 Mean temperatures of the thermo-neutral and heat-stressed environments from d 1 to 28 in Exp. 1 (panel A) and 2 (panel B); temperatures were measured every 10 min using data recorders.) Gain:feed was reduced (P = 0.030) in pigs exposed to heat stress during the first week, but was not impacted during the remainder of the study, or overall (P > 0.18). Dietary and water supplementation treatments did not significantly impact ADG, ADFI, or G:F, regardless of whether pigs were housed under heat-stressed or thermo-neutral conditions. In the first week of the experiment, ADG and G:F tended to increase (P < 0.09) with supplementation of antioxidants for pigs in the heat-stressed environment, but not in the thermo-neutral environment (interaction, P = 0.051 and 0.076, respectively). In week 2, ADG and G:F for pigs provided with vitamin E in the water were lower (P < 0.10) compared to pigs given vitamin E or POL in the feed when pigs were housed under thermo-neutral conditions, but not for heat-stressed pigs (interaction, P = 0.085 and 0.048).
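For readers without SAS, the mixed model described in the statistical analyses above (fixed effects of environment, supplementation and their interaction, with block nested within environment as a random intercept) can be approximated in Python with statsmodels. This is a hedged sketch of an analogous analysis, not the authors' code; the file name and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per pig, with columns
#   adg   - average daily gain (response)
#   env   - 'TN' or 'HS' (environmental treatment)
#   trt   - 'CON', 'VEW', 'VEF', or 'POL' (supplementation treatment)
#   block - block label, unique within each environment (so grouping on
#           block effectively nests block within environment)
df = pd.read_csv("pig_performance.csv")  # hypothetical file name

# Fixed effects: environment, supplementation, and their interaction;
# random intercept for block.
model = smf.mixedlm("adg ~ env * trt", data=df, groups="block")
fit = model.fit()
print(fit.summary())
```

Pairwise treatment comparisons (the LSD step in the original analysis) would then be computed from the fitted fixed-effect contrasts.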
(Table 2 Growth performance of pigs exposed to thermo-neutral and heat-stressed environments and provided antioxidants in feed or water; values are least square means of 16 pigs per treatment combination. Dietary treatments consisted of control diets (CON), vitamin E supplementation in water (VEW), vitamin E supplementation in feed (VEF), and botanical extract supplementation in feed (POL); dietary and water treatments were provided starting on d −7 and environmental treatments started on d 0.) The heat-stressed environment increased (P < 0.001) respiration rate (Fig. 2A) and rectal temperature in pigs (Fig. 2B). No significant differences in respiration rate or rectal temperature were detected among supplementation treatments (P ≥ 0.05). Respiration rate and rectal temperature decreased over the course of the experiment (P < 0.001) for both the thermo-neutral and heat-stressed environments, but the disparity between the two environments remained throughout the study. Histology and immunohistochemistry in the gut Villus height, villus width and crypt depth in the jejunum and ileum were not affected (P > 0.05) by environment, supplementation treatments, or their interaction (Table 3). The villus:crypt ratio in the jejunum was increased by dietary vitamin E supplementation compared with control (P = 0.046; +17.6%). Cellular proliferation measured with Ki-67 staining was greater due to heat stress in the jejunum (P = 0.037; +14.7%), but not in the ileum. Moreover, proliferation of enterocytes in the ileum was increased (P < 0.05) by dietary vitamin E and vitamin E in the drinking water for pigs housed in the heat-stressed environment compared to pigs supplemented with vitamin E in feed and water in the thermo-neutral environment (interaction, P = 0.043). Concentration of vitamin E in serum and liver The concentration of vitamin E in serum was increased (P < 0.001) by supplementation of vitamin E in water and in feed when compared with the control and botanical extract treatments (3.59, 3.24, 1.64 and 1.67 mg/kg, respectively). Serum vitamin E concentration tended to be greater when measured on d 28 vs. d 2 (2.62 vs. 2.45 mg/kg; P = 0.067). When measured on d 28, vitamin E in serum was increased by supplementation of vitamin E in feed and in drinking water when compared with the control and botanical extract treatments only in pigs housed in the thermo-neutral environment (interaction, P = 0.016), but not in the heat-stressed environment (Table 4). Supplementation of vitamin E in feed and in drinking water increased (P < 0.001) the vitamin E concentration in liver tissue (Table 4). The addition of vitamin E in water increased vitamin E in the liver to a greater extent than vitamin E supplementation in the feed (P < 0.05). The dietary botanical extract treatment did not affect vitamin E concentration in the liver (P ≥ 0.05). No significant differences (P ≥ 0.05) in liver vitamin E concentrations were found due to heat stress or the interaction of thermal environment and supplementation.
Oxidative status and cytokine concentrations The concentration of MDA in serum was increased by the dietary vitamin E, vitamin E in water, and dietary botanical extract treatments (P < 0.017), but not (P ≥ 0.05) by environment or the interaction between environment and supplementation (Table 4). (Table 3 Intestinal histology and immunohistochemistry in pigs exposed to thermo-neutral and heat-stressed environments and provided antioxidants in feed or water; values are least square means of 8 pigs per treatment combination, with dietary and water treatments provided from d −7 and environmental treatments from d 0 to 28.) Serum concentrations of MDA were greater (P < 0.001) when measured on d 28 compared with d 2. Moreover, heat stress reduced (P = 0.028) serum MDA concentrations on d 2, whereas, although numerically higher, serum MDA was not different (P = 0.213) when measured on d 28 of heat stress (interaction, P < 0.001). Additionally, the main effects of heat stress and antioxidant supplementation were not significant for MDA concentration in ileum mucosa (P ≥ 0.05). However, MDA concentration in ileum mucosa was greater (P < 0.05) for pigs housed under thermo-neutral conditions and fed the control diet compared to all other treatments (interaction, P = 0.005). Serum concentrations of IFN-γ, IL-1α, IL-1β, IL-2, IL-4, IL-6, IL-10, IL-12, IL-18, and TNF-α were not impacted (P ≥ 0.05) by environment, supplementation, or their interaction (Table 5). Serum concentrations of IL-8 were reduced (P < 0.05) and IL-1Ra concentrations tended to be increased (P = 0.056) by heat stress, but no effects due to supplementation or interactions were observed (P ≥ 0.05). Serum IFN-γ and IL-8 were higher on d 28 (P = 0.078 and P < 0.001) compared to d 2. In contrast, IL-1Ra, IL-12 and IL-18 were lower (P < 0.05) on d 28 compared to d 2. The concentration of TNF-α in the mucosa of the jejunum was decreased (P = 0.022) by the heat-stressed environment, and supplementation of vitamin E in water tended to increase (P = 0.064) TNF-α in jejunum mucosa (Table 5). TNF-α concentration in the mucosa of the ileum was decreased by the heat-stressed environment (P < 0.05) and by vitamin E supplementation in the water (P < 0.001), but not by the dietary vitamin E or botanical extract treatments. TNF-α was reduced in the ileum mucosa by vitamin E supplementation, but not by the dietary botanical extract, in the heat-stressed environment (interaction, P < 0.001). Complete blood count (CBC) Red blood cells, hemoglobin, and hematocrit percentage were reduced on d 28 by the heat-stressed environment, but this was not the case on d 2 (interaction, P < 0.05; Table 6). White blood cell, platelet, neutrophil, and monocyte counts were lower (P < 0.001) on d 28 compared to d 2, but no other differences were observed.
Discussion Heat stress reduces growth performance in pigs, as demonstrated in many studies [1,2,4,5,35]. In the present study, the impact of heat stress on BW could not be detected after 7 days of exposure, but BW was clearly and consistently reduced in subsequent weeks, ultimately resulting in a reduction of 7.4 kg (a 9% reduction) at the end of the 28-day study. The impact of heat stress has been reported to be dependent on pig body weight, with a greater negative impact in heavier pigs [5]. The reduction in ADFI associated with heat stress is the major contributor to decreased growth performance, although reduced ADFI does not always completely account for the decreased growth performance [2,36]. In high temperature environments, the body reacts by decreasing or avoiding any extra heat production that could increase core body temperature, including that from high feed intake. In the present study, ADG and ADFI in growing pigs were decreased by 26.7% and 25.4% due to heat stress, without impacting feed efficiency. These results are consistent with other reports in growing pigs [2,3,35-38]. Clearly, heat stress reduced performance in the present study, and we hypothesized that the use of antioxidants could ameliorate, in part, the negative effects of heat stress in growing pigs. However, the supplementation of vitamin E in water and the supplementation of vitamin E and botanical extract in feed did not affect BW, ADFI, or G:F, regardless of environmental temperature. Niu and coworkers reported that the addition of vitamin E in the diet did not affect BW or ADFI, but G:F was decreased using 100 mg/kg of dietary vitamin E in broilers, and no effects were observed using 200 mg/kg of vitamin E, regardless of heat stress [19]. (Table 5 Immune markers in serum and intestinal mucosa of pigs exposed to thermo-neutral or heat-stressed environments and provided antioxidants in feed or water; values are least square means of 16 pigs for serum and 8 pigs for tissue measurements.)
In growing pigs, dietary supplementation with vitamin E reduced feed efficiency, but no statistical differences were detected for ADG and ADFI [39]. Inclusion of dietary polyphenols (from grape pomace included at 7.5% [40] and 0.1% of a blended polyphenol additive [25]) did not produce any significant differences in growth performance when used in broilers and weaned piglets, respectively. In the present study, the botanical extract containing polyphenols did not affect growth performance of pigs. The response to dietary polyphenols can be affected by differences in absorption, metabolism, and interaction with other nutrients [24]. Indeed, there are many different polyphenolic compounds with potentially promising impacts on health, immune response, microbial balance, antioxidant status, and ultimately growth performance. However, these supplements need to be closely characterized in terms of the concentrations of active compounds, their source, and the specific extraction method, followed by clearly defined experimental protocols aimed at evaluating their efficacy [27]. The use of water by pigs to drink and to spray themselves to reduce core body temperature is expected to be higher in a high temperature environment. Although it was not a primary objective of the present study, the estimated disappearance of drinking water for pigs housed in the heat-stressed environment was 40.7% lower than in the thermo-neutral environment. Pigs in the present study only had access to cup waterers with a nipple inside the cup, specifically to minimize water wastage associated with behavioral changes such as wetting of the skin to increase evaporative heat losses. It should also be noted that the water in the heat-stressed rooms was warm due to the high temperature of the rooms, which may have caused the lower water consumption of heat-stressed pigs compared to the pigs housed in the thermo-neutral room. Others have reported decreased water consumption in pigs during hot temperatures [6,41] and reduced water intake in pigs when the drinking water was warm compared to cold water [42]. Supplemental vitamin E appeared to increase water disappearance within the thermo-neutral rooms, but not within the heat-stressed rooms. (Table 6 Complete blood count measured on d 2 and d 28 in pigs exposed to thermo-neutral or heat-stressed environments and provided antioxidants in feed or water; values are least square means of 16 pigs.)
High respiration rate and rectal temperature are positively correlated with heat stress in pigs when temperatures exceed 25 °C [3,18,43,44]. High body temperature is associated with thermoregulatory mechanisms sending blood flow to the periphery to dissipate the excess heat [45]. In the present study, heat stress clearly increased respiration rate and rectal temperature throughout the study, but antioxidant supplementation did not ameliorate these effects. Some acclimation to the heat-stressed and thermo-neutral conditions was observed, as indicated by a reduction in rectal temperature and respiration rate over time, similar to other studies [43,46]. Heat stress causes damage in the intestine due to a redistribution of blood flow to the periphery to dissipate heat, reducing blood flow to the splanchnic organs. Heat stress has been shown to cause damage to the tips of jejunal villi, shortening of villus height and decreased crypt depth [35,44], damage to the duodenal epithelium [7], and compromised intestinal integrity [9]. The impact of heat stress on intestinal dysfunction has recently been confirmed, and it was further demonstrated that these impacts were closely related to alterations in the intestinal microbiota [41]. Contrary to these reports, no effects of heat stress on histology in the ileum or jejunum were detected in the current study. However, heat stress increased cell proliferation in the jejunum as measured by Ki-67 staining. The addition of dietary vitamin E increased the villus:crypt ratio in the jejunum, but dietary supplementation with the botanical extract did not alter intestinal histology. Gessner and coworkers showed significant increases in the villus height:crypt depth ratio in the duodenum of 6-week-old piglets when using polyphenols (10 g/kg of grape seed and grape marc extract) in the diet [47]. The addition of vitamin E in feed and in water improved cell proliferation in the ileum of pigs housed under heat stress conditions, but not in pigs housed under thermo-neutral conditions, suggesting that the body accelerated cellular proliferation to compensate for cellular death caused by hypoxia during heat stress when vitamin E, but not the botanical extract, was supplemented. Several authors have reported that heat stress reduces serum vitamin E concentration [20,48], presumably because vitamin E reacts against the oxidation caused by heat stress, reducing its concentration in serum. In addition, reduced vitamin E intake due to an overall decrease in feed consumption associated with heat stress is expected to have a significant impact on vitamin E status. In the current study, serum vitamin E concentration was reduced from 2.76 to 2.48 mg/kg due to prolonged heat stress when measured on d 28, but not during short-term heat stress on d 2. Similarly, vitamin E concentrations were greater on d 28 compared to d 2 in pigs housed under thermo-neutral conditions, but not in heat-stressed pigs. Liver concentrations of vitamin E were not impacted by heat stress, in spite of the significant impact of heat stress on pig performance, including a substantial reduction in feed intake and thus reduced vitamin E intake. Serum and liver vitamin E concentrations were increased with vitamin E supplementation, especially when vitamin E was supplemented in the drinking water compared to dietary vitamin E.
The high concentrations of vitamin E in serum and liver with vitamin E supplementation in water could be due in part to the fact that the natural form of vitamin E (D-α-tocopherol) used in the water is more bioavailable than the synthetic form (DL-α-tocopheryl acetate) used in the feed. Similarly, Wilburn et al. reported greater concentrations of vitamin E in serum and liver when using natural RRR-α-tocopheryl acetate in water compared to the synthetic all-rac-α-tocopheryl acetate [23], confirming very efficient absorption of vitamin E when it is supplemented in the water [22,49]. In addition, based on the estimated water consumption, total daily vitamin E intake for pigs supplemented with vitamin E in the water was 900 IU compared to 500 IU when supplemented in feed. Thus, part of the response is likely related to greater vitamin E intake when it was supplemented in the water. The addition of the botanical extract did not impact serum vitamin E concentrations, similar to other reports [49,50], suggesting that the botanical extract used in the current study was not effective in sparing or regenerating vitamin E. On the other hand, Luehring et al. [51] showed that polyphenols in combination with low dietary vitamin E increased vitamin E in plasma and in liver of growing pigs when using fish oil to induce oxidative stress. The lack of response to polyphenols could be related to the low absorption rate of dietary polyphenols [44], the type and activity of the polyphenols used [27], or antioxidant functioning of polyphenols independent of vitamin E.

Malondialdehyde (MDA) is produced during lipid peroxidation in cells under oxidative stress [52]. In a study conducted by Montilla et al. [10], MDA was 2.5-fold greater in grower pigs (35 kg body weight) during a short 1-day period of heat stress compared with a thermo-neutral environment. In the present study, heat stress reduced serum MDA concentrations after short-term exposure, but not after longer-term heat stress. The reduction of MDA concentrations during short-term heat stress suggested that the enzymatic (superoxide dismutase, catalase, glutathione peroxidase) and nonenzymatic (vitamin A and vitamin E) antioxidant systems reacted effectively against oxidation, but that this could not be fully maintained during prolonged heat stress [17]. The inclusion of other dietary antioxidants, such as polyphenols, reduced MDA levels in muscle, liver, and plasma of broilers and piglets [51,53,54]. In contrast, supplementation with the botanical extract or with vitamin E, either in feed or drinking water, increased serum MDA concentrations when compared to the control treatment. Other studies found that vitamin E and polyphenol-based antioxidants did not affect MDA concentrations in loin muscle of finishing pigs, in diabetic and non-diabetic rats, and in piglets [39,49,55,56].

In the present study, no effects on MDA concentrations in the ileum due to heat stress or supplementation were observed. Lambert et al. reported no increase in lipid peroxidation products in the small intestine of rats housed under high temperatures (42.5 ºC) [57]. In contrast, Maini and co-workers [58] found that adding 200 IU of dietary vitamin E to diets fed to broilers under heat stress reduced MDA concentrations, attributed to support of the enzymatic and nonenzymatic antioxidant systems by vitamin E.
On the other hand, Ebrahimzadeh and others [40] showed a greater reduction in MDA levels when using polyphenols (7.5% grape pomace) than vitamin E (200 mg/kg of α-tocopheryl acetate in feed) in broilers. Intestinal cells pre-treated in vitro with Trolox (a water-soluble analogue of vitamin E) showed markedly reduced oxidative stress when compared with intestinal cells pre-treated in vitro with ascorbic acid [58].

Tight junctions provide structural integrity and barrier function in the intestinal epithelium. When they are dysregulated by heat stress, barrier function is altered and pro-inflammatory and anti-inflammatory cytokines are produced [59]. Thus, under heat stress, the pro-inflammatory cytokine TNF-α is produced [60]. In the present study, TNF-α in the ileum and jejunum was reduced by the heat-stress environment. We can speculate that the reduction of TNF-α in ileum and jejunum under heat stress is due to the inhibition of NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells), or that the peak of TNF-α occurred before the tissues were collected on d 28. Bouchama et al. [60] and Liu et al. [18] did not find significant changes in TNF-α in jejunum and ileum of pigs housed under heat stress when using dietary vitamin E and selenium. In the present study, supplementation of vitamin E in feed and in water resulted in a reduction of TNF-α when compared to the rest of the dietary treatments for pigs housed under heat stress. This may suggest that dietary supplementation with vitamin E reduced some inflammation in tissue of pigs during heat stress.

Serum TNF-α, IL-1α, IL-1β, IL-2, IL-4, IL-6, and IL-10 were not affected by heat stress, dietary supplementation, or day of measurement. Perhaps the heat-stress environment in the present study was not severe enough to produce inflammation in the body that could be detected in serum. Additionally, our previous work [49] did not find effects on serum TNF-α in weaned piglets supplemented with dietary vitamin E and polyphenols. In contrast, Gabler et al. [61] reported significantly lower serum TNF-α levels in pigs housed under heat stress on d 3. Likewise, Pearce et al. [62] showed a reduction in serum TNF-α of growing pigs under heat stress due to the inhibition of NF-κB by heat shock proteins produced during heat stress. Heat exposure reduced TNF-α in the ileum, suggesting that heat stress had effects at the local tissue level that probably could not be detected in serum. Thus, mucosal TNF-α expression can differ from circulating TNF-α [9,63]. Similarly, TNF-α concentrations in serum were not affected by heat exposure of increasing duration (0, 2, 4 and 6 h) in finishing pigs [35].

In the present study, serum IFN-γ concentration increased on d 28, but IL-12 and IL-18 were reduced on d 28 compared to d 2, showing low inflammatory responses, even though IL-12 and IL-18 act synergistically to induce IFN-γ [64]. Additionally, IL-8, a pro-inflammatory cytokine and activator of neutrophils in local inflammation [65], was reduced in serum by heat stress and increased on d 28 compared to d 2. In contrast, Liu et al. [18] did not observe changes in IL-8 in the jejunum and ileum of 20-kg pigs exposed to 20 ºC or 35 ºC when using dietary vitamin E and selenium. IL-1Ra is a natural anti-inflammatory cytokine which increases during inflammation [66] and has an antagonistic effect on IL-1β and IL-1α [64]. In the present study, serum IL-1Ra increased due to heat exposure, and IL-1Ra was reduced on d 28 compared to d 2.
Based on these results, heat exposure appears to have produced some inflammation, and serum IL-1Ra increased to counteract it. The reduction of IL-1Ra by d 28 suggests the early presence of injurious stimuli in the body [67], with subsequent resolution by d 28.

Red blood cell count, hemoglobin, and hematocrit percentage rise or fall together, increasing with deprivation of drinking water and decreasing with blood loss [68]. Red blood cells have membranes rich in polyunsaturated fatty acids and can be affected by oxidative stress, with their high oxygen concentrations serving as a precursor for ROS [69]. In the present study, red blood cell count, hemoglobin, and hematocrit percentage were reduced by 1.5%, 3.0%, and 3.7%, respectively, by the heat-stress environment on d 28. Likewise, Mendoza et al. [3] observed a small reduction of 1% in red blood cells, hemoglobin, and hematocrit due to heat stress in 39-kg BW pigs. Also, Adenkola et al. [70] showed a reduction of 19% in red blood cells in adult pigs over a 3-month period of thermally stressful environmental conditions (harmattan season). Thus, in the present study, the reduction of red blood cell count, hemoglobin, and hematocrit in the heat-stressed environment at d 28 could be associated with the oxidation of polyunsaturated fatty acids in the red blood cells by heat stress [69] and impaired synthesis of hemoglobin [71]. Even though the heat-stressed pigs had reduced water intake and were possibly dehydrated, this was not sufficient to elevate red blood cell count, hemoglobin, and hematocrit. All CBC values were within normal ranges [72]. Platelets are involved in aggregation, clot formation, and immunity [68,71]. Habibu et al. [71] reported a reduction in platelet count due to heat stress in cattle and ducks. In the present study, platelets were not impacted by heat stress, but they were reduced by 40% on d 28 compared to d 2, with the total values being 29% below the normal range [72].

White blood cells play a critical role in the immune system. In the present study, white blood cells, neutrophils, and monocytes were not affected by heat stress, but they were decreased on d 28 by 29%, 75%, and 69%, respectively, compared to d 2. In contrast, Mendoza et al. [3] reported reductions in neutrophils (−10%) due to heat stress. Adenkola et al. [70] reported increased numbers of white blood cells and neutrophils, but no differences in monocytes, during the hot-dry season (temperatures between 30 and 34 ºC) in adult pigs. In the present study, the reduction in white blood cells, neutrophils, and monocytes on d 28 may be due to the resolution of a potential injury in the pigs, even though values were within normal ranges [72].

In this study, supplementation of vitamin E in feed and in water, and of a dietary botanical extract containing a variety of polyphenols, did not affect red blood cells, hemoglobin, hematocrit, white blood cells, neutrophils, monocytes, or platelets. Attia et al. [73] did not find significant differences in complete blood count when dietary vitamin E was supplemented in the feed of broilers under heat stress. Likewise, Stukelj et al. [74] did not observe changes in hematological parameters of 7-week-old pigs when polyphenols were supplemented in the diet.
Conclusions

Heat stress clearly increased rectal temperature and respiration rate, which persisted throughout the study, and decreased growth performance of pigs, resulting in a 7.4-kg reduction in body weight over the 28-day study. The negative impact of heat stress on growth rate was primarily related to a reduction in feed consumption. In spite of the significant negative impact of heat stress on growth performance, there were no clear or consistent effects of heat stress on oxidative stress, serum cytokines, or intestinal morphology. Supplementation of vitamin E increased serum and liver concentrations of vitamin E, especially when provided via the water, but the polyphenol-containing botanical extract was not effective in improving vitamin E status. However, nutritional supplementation was not effective in improving growth performance, oxidative stress, or immune markers. Heat stress showed limited impacts on oxidative stress, intestinal morphology, and immune markers, which may have limited the potential impact of nutritional supplementation with vitamin E and plant-based polyphenols from the botanical extract. The addition of the antioxidants in feed or in drinking water in the current study did not ameliorate the negative effects caused by heat stress in growing pigs.

[Fig. 2: Effect of environment on respiration rate and rectal temperature measured on d 1, 2, 3, 4, 5, 6, 7, 14, 21, and 28; environment × day interaction (P < 0.001). Measurements were taken between 1300 and 1600 h (peak of heat stress during the day). Values are least squares means ± SEM of 64 pigs; means with different superscripts (a-d) differ (P < 0.05). (A) Respiration rate on d 0 was not different between treatments (P = 0.128; 19.64 and 18.36 respirations/30 s for the heat-stressed and thermo-neutral environments, respectively). Respiration rate in heat-stressed pigs was greater than in pigs housed under thermo-neutral conditions from d 1 through d 28, and decreased over time within both environments. (B) Rectal temperature on d 0 did not differ between environmental treatments (P = 0.312; 39.26 and 39.40 ºC for the heat-stressed and thermo-neutral environments, respectively). Rectal temperatures in heat-stressed pigs were greater than in the thermo-neutral environment on all days of measurement, and decreased over time in both environments.]

[Table footnotes: Set-point temperatures at 2-h intervals from 2400 to 2200 h were 18.9, 18.9, 20.0, 20.0, 21.1, 21.1, 22.2, 22.2, 21.1, 21.1, 20.0, and 20.0 °C for the thermo-neutral room and 28.3, 29.4, 29.4, 31.1, 32.8, 33.3, 34.4, 35.6, 34.4, 31.7, 29.4, and 29.4 °C for the heat-stressed room. Effects abbreviations: E = environment, S = supplementation, D = day, with interactions E × D, E × S, S × D, and E × S × D; interactive effects without significant differences or tendencies are not shown. Proliferation was evaluated by staining crypt cells with Ki-67 antibody; Ki-67 is a protein in the nucleus of proliferating cells.]
What Is History of Psychology? Network Analysis of Journal Citation Reports, 2009-2015

This essay considers the History of Psychology, its interests and boundaries, using the data behind the Journal Impact Factor system. Advice is provided regarding what journals to follow, which broad frames to consider in presenting research findings, and where to publish the resulting studies to reach different audiences. The essay itself has also been written for those with only passing familiarity with its methods. It is therefore not necessary to be an expert in network analysis to engage in "virtual witnessing" while considering methods or results: everything is clearly explained and carefully illustrated. The further consequence is that those who are new to the History of Psychology as a specialty, distinct from its subject matter, are introduced to the myriad historical perspectives within and related to psychology from the broadest possible perspective. A supplemental set of exemplary readings is also provided, in addition to cited references, drawing from identified sources from beyond the primary journals.

The history of psychology has two aspects: the content and the activity. The stuff and the doing. Textbooks are mostly full of stuff, and light on doing (Flis, 2016; Thomas, 2007). So, unfortunately, are most teachers. Most have never done any history at all.1 As a result, these teachers cannot authentically guide their students toward their own doings (Barnes & Greer, 2014; Bhatt & Tonks, 2002; Brock & Harvey, 2015; Fuchs & Viney, 2002; Henderson, 2006). And thus, the growth of the specialty has been stunted by an overabundance of the wrong kind of fertilizer: the history of psychology, rather than the History of Psychology (see Barnes & Greer, 2016; Capshew, 2014).2 This essay therefore attempts to redress the imbalance by using tools from the Digital Humanities to begin to describe the latter, and its doings, from a new perspective.

Defining the Doing

Specialists have discussed the doing of the History of Psychology at some length (e.g., Danziger, 1994, 2013; Furumoto, 1989, 2003; Teo, 2013a). But not all of these discussions have been straightforward or easy to follow (see commentary by, for example, Brock, 2014, 2017; Burman, 2017; Danziger, 1997, 1998; Green, 2016; Pettit & Davidson, 2014; Weidman, 2016). Fortunately, the issue can be simplified with a single observation. Notably, the doing is focused in three "primary" journals: the Journal of the History of the Behavioral Sciences, History of the Human Sciences (HHS), and History of Psychology (quoting Pickren, 2012, p. 25; see also Capshew, 2014, pp. 151-152, 171-172; Teo, 2013a, p. 843; Weidman, 2016, p. 248). These venues are where the History that has been done in Psychology is most often reported, when it is not reported directly in books, and thus we need only examine them to observe the evidence of its doing.
That is the goal here. The primary journals were used to "seed" a citation analysis (following Park & Leydesdorff, 2009). Taking this seeding as representative of the History of Psychology's "center," I then appealed to quantitative tools like network analysis to identify its "periphery" (following Danziger, 2006; Pickren, 2009; Teo, 2013b). In this way, it was possible to test the specialists' intuition regarding the relative importance and position of those three journals specifically. I also looked beyond the journals to identify disciplinary boundaries and more distant frontiers, thereby describing some of the recent "institutional ecology" that gives contemporary History of Psychology its shape (following Star & Griesemer, 1989).

From Citations to Networks

Such analyses typically focus on content and thereby articulate the what of a body of work. Or they focus on people, and so examine the who behind the what. Here, though, I have looked at the places where those people have published: the institutional black boxes to which articles are sent when specialist authors intend to contribute to their professional discipline. In other words, I have sought to identify the where (see Shapin, 1988-2007/2010). And, in this way, the what is made examinable in a new way.

In considering the citations that locate these wheres relative to each other, it is necessary to look in two directions: journals cited by articles published in the primary journals (outgoing) and journals publishing articles citing material from the primary journals (incoming). These relations then collectively define a "directed network" (see Newman, 2010, for a gentle but comprehensive introduction). And that in turn enables the application of a set of specific quantitative tools. The strengths of connections are thus empirically demonstrable, clusters identifiable, and importance calculable.

It was in this way that I modeled the recent History of Psychology as a collective or organized doing. This has been illustrated in network terms as the tight and coherent grouping of centrally interconnected parts: the doing's center is represented by the journals that are highly cited by the group as a whole, and which cite each other frequently, even as they are separated from peripheral others that are cited less often. That focus on the journals is then also what affords the wide-angle look at the doing as a discipline: the interests I've illustrated are taken to be generally representative, including of the interests of those authors who publish their histories in books (because there is no reason, in principle, for the interests examined in depth in books to be different from those discussed more superficially in journals).

The data come from a widely used third-party source: the Journal Citation Reports, Social Science edition (hereafter simply JCR), published at the time of writing by Thomson Reuters and subsequently acquired by Clarivate Analytics. In particular, I examined two different facets of this database: the journal-level citation summaries (in Studies 1 and 2) and the subject classifications (in Study 3). That in turn afforded certain strengths and weaknesses.
The data reported on here are identical to those behind the Journal Impact Factors (JIFs) that inform so many decisions in academia. As a result, I was able to take advantage of the controls implemented at the source to ensure that those metrics reflect real and substantive uses (see Hubbard & McVeigh, 2011). In other words, I treated the data as if the relations defined between journals were akin to meanings represented by a "controlled vocabulary" (following Burman et al., 2015). This is also what afforded my confidence in the results: although JIFs are sometimes dismissed as a flawed measure of productivity or importance, their calculation requires access to carefully vetted citation data. And it is these data that I examined, not data gathered from the wild and filtered according to my own interpretation of what ought to count. (For criticisms of the use of JIFs in psychology, see Hegarty & Walton, 2012, and, more generally, Braun, 2012.)

To format the data for analysis, I used Excel. Then I constructed and analyzed the networks using Gephi (Bastian, Heymann, & Jacomy, 2009). Similar procedures can now also be performed directly in R (see Costantini et al., 2015). The effect, however, is the same: relational data are gathered or reconstructed from an existing database, formatted in relational terms and uploaded into an analysis program, and then visualized and analyzed as networks. It's these that are usually presented as results, interpreted (sometimes using other quantitative tools), and discussed. That was my approach here too.

In what follows, a single investigation is presented through three connected empirical studies. This allows the narrative to build simply and incrementally. The parts are then discussed collectively, with new challenges, questions, and opportunities identified in conclusion.

Study 1. The Discipline as a Network of Influences

The first study looked specifically, and solely, at citation patterns, outgoing and incoming, at the level of the journals themselves. To do this, I relied on citation data from the summary reports provided by the JCR for each of the 7 years for which data exist for all three of the primary journals (2009-2015 inclusive).

The purpose, in this first study, was to use the primary journals to identify the main influences on the discipline: not just where its doings are done, but also where its methods come from and where its results find their audiences. The discussion then focuses on some of the power of network analysis, while highlighting a major pitfall into which the unwary traveler might easily stumble. The second study adds depth and detail, using the same data set, while focusing specifically on the doing of the History of Psychology. And the third study takes a step further, to address the question of what the History of Psychology is actually about: its ecology of interests, as well as how these can be grouped together into more meaningful superordinate categories.

Method

To begin, I accepted as given, as a premise, that specialist insiders recognize three journals as primary. That enabled me to use the citations within and to these journals as evidence of reach and influence. I took note of every journal cited by a primary journal and marked it in Excel as an outgoing citation. I also did the same for every primary journal that was cited in turn, marking each of these as an incoming citation. The relations thus defined have strengths quantifiable by the number of these journal-to-journal citations.
To construct the data set, I first created a series of spreadsheets. Each journal's incoming journal-level citations have an annual report in the JCR that spans several pages, and the citation data from each of these was imported and merged into a single page in a spreadsheet. The result, in my own work product, was a commonplace booklet for each journal: one page for every year, showing the citation patterns for all of the years considered.

From these booklets, I created summaries. The year-by-year details provided in the annual reports were collapsed into annual totals. I then consolidated those totals, so that the annual citation counts to each journal could be read horizontally across a single table: each column gave the count for 1 year's citations, and the sum along each row gave the total number of citations for all the years listed. Finally, using these consolidated workbooks, I created a single comma-separated values (CSV) file with just three columns: the originating journal (labeled "source"), the target journal (labeled "target"), and the total number of citations reported by the JCR over the full period of study (labeled "weight").

The CSV file is the output from Excel, but it is the input for Gephi. I then imported this file directly into Gephi's data laboratory as an "edge" table, allowing the software to automatically create the individual "nodes" for each of the identified journals. (Edges and nodes are the meat-and-potatoes of network analysis: the connections and the things-being-connected.) Note, however, that the directions of these edge tables are reversed relative to each other when considering inbound and outbound citations: inbound citations have the citing journal as their source and the primary journal as their target, and outbound citations have the primary journal as their source and the cited journal as their target. This distinction is crucial, too, because Gephi will not receive the data properly otherwise.

In the results that follow, I have reported two numbers for each journal: the citation counts and the number of years in which these citations were made over the study period. I did this because it was not initially obvious which of the two is more important for assessing the resulting network. Citation counts are the usual means of assessing productivity, and thus in this case present an obvious choice for defining strength of relation. But the consistency of citation could also be important for assessing connectedness between defining disciplinary features. And because that is what we are primarily interested in understanding, I chose to report both (with standard deviations calculated from citations but order of presentation influenced by consistency).

Results

Between 2009 and 2015, the three primary journals of the History of Psychology cited 357 different journals and were in turn cited by 247 different journals. In total, this reflects 5,245 outbound journal-to-journal citations and 2,257 inbound journal-to-journal citations. That said, however, each journal is also always its own biggest fan: self-citations, at the journal-to-journal level, are typical (and account for 557 of each of the two totals). But aside from suggesting that the authors themselves see a coherent discourse being presented at the journal level (and thus that a certain amount of self-citation is to be expected3), these are not in themselves meaningful for our purposes. Journals are obviously related to themselves.
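Before turning to the detailed counts, it may help readers who want to follow along outside of Excel and Gephi to see the pipeline just described sketched in code. The sketch below is illustrative only, not the procedure actually used: it assumes Python with the pandas and networkx libraries, and a hypothetical citations.csv file in the three-column source/target/weight format described above.

```python
import pandas as pd
import networkx as nx

# Hypothetical file in the three-column format described above:
# source (citing journal), target (cited journal), weight (total citations).
edges = pd.read_csv("citations.csv")

# Build a directed, weighted citation network. As with Gephi's edge-table
# import, nodes are created automatically from the edge list.
G = nx.from_pandas_edgelist(
    edges,
    source="source",
    target="target",
    edge_attr="weight",
    create_using=nx.DiGraph,
)

print(G.number_of_nodes(), "journals;", G.number_of_edges(), "citation relations")
```

The same direction convention applies as in the Gephi workflow: an inbound citation is a row whose source is the citing journal and whose target is the primary journal, while an outbound citation reverses the two; mixing these up corrupts the network in exactly the way the text warns against.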
Outbound citations. Outbound citations are a measure of the value and esteem in which sources are held by members of a discipline. They therefore afford a sense of what the History of Psychology is about, according to those who define it by their activities, as well as how it's done: cited sources reflect importations of both content and method (with no distinction).

Examining the means and standard deviations of the journal-to-journal citation counts to derive a basic guide (while also controlling for the different number of articles published by each of the three journals), we see that authors published in the Journal of the History of the Behavioral Sciences (JHBS) regularly cited the American Journal of Sociology (137 citations over the full 7 years of citations examined), American Psychologist (96 citations over 7 years), Isis (70 citations/7 years), American Sociological Review (69/7), History of Psychology (56/7), and the American Journal of Psychology (59/6). These venues were cited more frequently than two standard deviations above the mean number of journal-to-journal citations. Other consistently popular sources, cited between one and two standard deviations above the journal mean, included Psychological Review (50/7), HHS (24/6), American Economic Review (35/5), and American Journal of Psychiatry (34/5).

Authors published in History of Psychology (HoP) regularly cited the Psychological Review (147/7), American Psychologist (139/7), JHBS (137/7), and the American Journal of Psychology (109/7). Other consistently popular sources included Theory & Psychology (49/7), Psychological Bulletin (47/7), HHS (36/7), American Journal of Psychiatry (32/5), Isis (30/5), Journal of Social Issues (28/5), the French-language journal Année Psychologique (60/4), and Psychology of Women Quarterly (31/3).

This then implies that, in addition to the three primary journals, scholars interested in the History of Psychology ought to consider following four other journals regularly: American Psychologist (263 citations/7 years), Psychological Review (216/6), Isis (143/6), and the American Journal of Psychiatry (111/5). Other key nonprimary journals worth considering on this basis, in the sense that they're cited at significant levels by two of the primary journals, include the American Journal of Sociology (198/7), American Sociological Review (109/7), Theory & Psychology (83/7), American Journal of Psychology (168/6.5), and Social Studies of Science (60/5).

Inbound citations. Inbound citations are a measure of the uses to which work produced by members of a discipline is being put. They therefore give a sense of what the History of Psychology is good for: citing sources reflect use and interest, and so provide a glimpse of different audiences.

Again using journal means and standard deviations as a guide, we see that JHBS's inbound nonself citations came primarily from HoP (137 citations over the full 7 years of citations) and HHS (77 citations over 7 years). Less significant, but still noteworthy, are History of Psychiatry (37 over 6), Isis (28/6), and Theory & Psychology (32/5).

HoP is the heaviest self-citer in the group: 217 citations of 574 inbound (almost 38%). This is double the rate in HHS (18.2%) and nearly double that of JHBS (21.7%). Indeed, the journal's proclivity for self-reference skews the distribution of its citations so severely that no other journal rises to two standard deviations above the mean. Aside from itself, though, its main inbound sources are JHBS (46/7) and Theory & Psychology (27/6).
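The screening rule behind these lists, flagging journals cited more than one or two standard deviations above a journal's mean citation count, is simple to reproduce. A minimal sketch follows; the counts below are invented stand-ins, not the JCR figures used in the study, and the per-article normalization mentioned in the text is omitted for brevity.

```python
import statistics

def flag_key_journals(counts: dict[str, int], k: float = 1.0) -> dict[str, int]:
    """Return the journals cited more than k standard deviations above the mean."""
    mu = statistics.mean(counts.values())
    sigma = statistics.stdev(counts.values())
    return {j: c for j, c in counts.items() if c > mu + k * sigma}

# Invented example values standing in for one journal's outbound summary.
outbound = {"Journal A": 137, "Journal B": 96, "Journal C": 70,
            "Journal D": 12, "Journal E": 3}
print(flag_key_journals(outbound, k=1.0))  # the 1-sigma tier
print(flag_key_journals(outbound, k=2.0))  # the stricter 2-sigma tier
```

Run on the real per-journal summaries, this kind of thresholding reproduces the one- and two-sigma tiers reported above.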
Again, we can look at overlap to identify the key journals to follow. This time, though, only two nonprimary journals are indicated: Theory & Psychology (95/6) and Isis (44/4).

Mutual citations. Even without looking at a network visualization, it's clear from this that a small number of journals are both cited by and cite one or more of the primary journals at a significant enough level to consider them part of the "distant center" of the discipline. And only one is connected in both directions to all three: Theory & Psychology. But the question of how closely it's related to them can only be answered quantitatively. We therefore turn to that examination next.

Discussion

Two figures are presented to simplify these results, each illustrating one of the two aspects of the relational data reported in the JCR. Figure 1 presents a network using citation counts to set the strength of the connections between journals, and Figure 2 uses the number of years in which citations were made. Two technical elements are then also illustrated: node size is a function of PageRank in both figures, and shade is a function of Eigenvector Centrality.4 These are different measures of "importance" in the network (bigger and darker imply greater influence), and they are consistent here both with each other and between images. Although the lists above might therefore have been reordered slightly by reversing this focus, we calculate that the difference would have been insignificant.5

My preferred layout algorithm is called "Force Atlas 2" (Jacomy, Venturini, Heymann, & Bastian, 2014). This is a force-directed 2D spatial organizer that takes advantage of the multithreaded processing of modern computers, and thereby reduces the time required to produce an accurate and intuitively useful network. Its primary weakness is also shared by its competitors: the resulting illustration is a projection of a multidimensional object onto a two-dimensional surface, so positions are often underdetermined for weakly connected nodes (the map could have multiple configurations) or even misleading (unconnected nodes that would appear distant in three dimensions are sometimes shown close to each other in two dimensions). Indeed, that very thing has happened between Figures 1 and 2: weakly connected nodes vary widely in position along the outer edges of the network, even while strongly connected central nodes move very little. For this reason, the output from such analyses cannot simply be accepted as shown (cf. Burman, 2018).6

The result that matters most for our purposes, however, is straightforward: these analyses suggest that this doing is influenced primarily by 12 key journals. Again, the ordering is slightly different depending on the metric used, but following standard deviations provides a useful guide: HHS and JHBS are further than two standard deviations above the mean on all four centrality metrics, and HoP is further than one standard deviation above the mean for both measures of Eigenvector Centrality but not for PageRanks. The other nine journals are then all positioned at or above average importance for the discipline, but by less than HoP.
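Both importance measures are also available outside of Gephi. As a hedged sketch, reusing the directed graph G built in the earlier snippet and networkx's implementations rather than Gephi's:

```python
import networkx as nx

# PageRank: a recursive, globally normalized influence score; the values
# always sum to 1 across the network, which is what makes rescaling
# subsets straightforward.
pagerank = nx.pagerank(G, alpha=0.85, weight="weight")

# Eigenvector centrality: a journal is important if it is cited by
# important journals. Power iteration can converge slowly on sparse
# citation networks, hence the generous iteration cap.
eigenvector = nx.eigenvector_centrality(G, max_iter=1000, weight="weight")

# Rank journals by global influence, as used for node size in the figures.
for journal, score in sorted(pagerank.items(), key=lambda kv: -kv[1])[:12]:
    print(f"{journal}: PageRank={score:.4f}, eigenvector={eigenvector[journal]:.4f}")
```

Note that networkx's defaults need not match Gephi's settings (Gephi's statistics panel was run on a directed network with edge weights turned on), so scores computed this way should be expected to differ in detail even while the rank ordering of the central journals remains broadly stable.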
Still, from my perspective, PageRank is the better metric given my intent: it is very widely used, its calculations reflect a recursive process in which global connectedness plays an important role, and its outcome always sums to 1 across a data set (enabling the simple rescaling of subsets). It then follows from this that the most important nonprimary journal for specialist Historians of Psychology, in terms of its overall impact but not its JIF, is the nonprimary journal with the highest PageRank score: American Journal of Sociology. However, this journal's influence on the discipline is only a little greater than the others. Indeed, all nine are well within one standard deviation of HoP on the global metric of influence (PageRank of citations). Thus, I suggest that the key nonprimary influences ought to be considered collectively. They are American Journal of Sociology, American Psychologist, Isis, American Sociological Review, Psychological Review, American Journal of Psychiatry, American Journal of Psychology, Social Studies of Science, and Theory & Psychology.7

Yet it is curious that HoP, a primary journal, is by this analysis itself insignificantly different from the next-closest near-peripheral journals. This could perhaps be a side effect of its relative youth. (It was founded in 1998 and first received a JIF in 2009.) Yet it could also be a side effect of the means by which the data themselves were treated prior to visualizing the above textual summary: using a network to illustrate relational lists derived using standard deviations is easy to understand, but it is also potentially misleading when it comes to calculating the strengths of the underlying relations. For this reason, Study 2 considers the influence of all of the connected journals prior to the use of any filters. But the easiest way to make sense of this requires a bit of a digression.

Study 2. Close Friends and Distant Acquaintances

In life, we all routinely make a distinction between close friends and distant acquaintances. Yet no one would deny knowing somebody on the basis of the application of a statistical tool. Furthermore, in mathematical sociology, the statistically insignificant relations omitted from Study 1 are considered the primary means by which information flows through a network. Thus, in Study 2, my intent is to examine the full "strength of [the] weak ties" that bind the discipline together.

This expression, the strength of weak ties, is due to Mark Granovetter (1973, 1983).8 He argues that the power of network analysis is that it enables the synthesis of micro- and macro-level perspectives. This is also a perennial problem in the History of Science (see Galison, 2008). And, indeed, similar methods as those used here are now being used by Historians of Psychology to address it (e.g., Green et al., 2015a, 2015b; Pettit et al., 2015). That said, however, one of Granovetter's other insights is more germane to our particular interests; namely, the degree of overlap between two networks is directly proportional to the strength of their connection. In other words: to reach new audiences, specialist Historians of Psychology ought to target friendly journals that are nonetheless distant in the disciplinary network from the center and from each other. Of course, doing this requires that we know where the internal boundaries are. That's what we examine in Studies 2 and 3.
In presenting these concepts, Granovetter (1973) distinguished between ties of different relational strengths: "strong, weak, or absent" (p. 1361). In his case, this reflects the quality of a relationship between two people. Thus, a strongly connected pair can be called friends; a weakly connected pair, acquaintances.9 To operationalize these definitions in a way that was useful for the purposes of this study, I further defined the first two types of connection as involving mutual citation of greater or lesser strength, and the last as involving a one-way citation that may nonetheless be influential. That then enabled the reuse of the data set from Study 1, which this time was not pretreated in any way prior to the network analysis. A filter was instead applied from within the analysis program to focus specifically on journals with mutual citations and thereby identify the network-of-doings.

Method

The initial import of data from Excel into Gephi at first affords a disorganized cloud, rather than a recognizable network. Nodes are distributed randomly in conceptual space, attached by their edges but otherwise without identifiable shape or form. Although analyses can still be performed on the underlying relations, it is helpful to use the software's visualization tools as an interpretive guide. This serves as a check against error and as an aid to understanding.

When using the ForceAtlas2 layout algorithm, it is often useful to increase the separation between nodes and provide some additional spacing between clusters. For this reason, I like to turn on "Dissuade hubs." This has the effect of pushing noncentral subnetworks toward the outer edges of the network, and keeps the center for the primary network.10 The "expansion" layout algorithm is also both useful and straightforward in its effects: it increases the distance between each node by changing the scale of the network. (No substantive changes are made to the underlying geometry.)

Labels can be attached to nodes by selecting the Nodes table, in the data laboratory screen, and copying the ID column into the Label column. In the overview screen, the "show node labels" toggle must then be turned on. Font size will also inevitably need to be altered in order for labels to be legible.

In the visualizations prepared for Study 1, node size was set as a function of PageRank. This is done again in Study 2, so that the size of each node reflects its position in the overall network. In the statistics panel, this calculation is performed for a Directed Network with "Use edge weight" turned on. The results are then reflected in the visualization using the Appearance panel: node size must be changed using the PageRank attribute. I usually select a range of 10 to 100, but this is just so there's a useful amount of visual difference between the smallest node and the largest.

For Study 1, node color was set as a function of Eigenvector Centrality. The same thing has been done again. The resulting images were then converted to grayscale for publication because only the relative brightness is meaningful here.

After making these various changes, it is useful to reapply the layout algorithm(s). Afterward, I applied the Expansion algorithm until I was happy with the way the resulting network looked. (It should be easy to read when zoomed in.)
The unfiltered results of this process are shown in Figure 3, which is obviously uninterpretable at this scale except for one feature: there are three large circles at the center of the network, and they are highly interconnected. These are, unsurprisingly, the primary journals.

Results

This network takes into account both the inbound and outbound citations provided by the JCR. The consequence is that 494 journals are represented, accounting for 7,502 citations. (The total number is not the same as in Study 1 because of overlap: one fifth of the journals identified have citations going in both directions.) This then represents the entirety of the History of Psychology, for the years studied, insofar as its doings can be represented by examining journal-to-journal citations (and where the data from all of the represented journals are controlled by the JIF system).

Taking this full network into account, the specialists' intuition that there are three primary journals is confirmed: JHBS's PageRank score is 14.6 standard deviations above the mean, HHS's is 13.9σ above the mean, and HoP's 8.9σ. No other journal scores higher than two standard deviations above the mean on this global measure of influence, although both the American Journal of Sociology and American Psychologist do score higher than one standard deviation above the mean. But note too that this is also an "influence network," and what is needed for our purposes is more akin to a "social network."

To make this shift requires the elimination of Granovetter's (1973) "absent" ties. I did this by applying a "mutual degree" filter, set to hide all of the journals that do not have citations in both directions. In other words, all of the journals without both inbound and outbound connections to the primary journals are omitted from Figure 4. That then highlights the journals where most of the History of Psychology is done, in its broadest interpretation: the journals shown are those that both cite the primary journals and are cited by them.

From this perspective, the discipline's social network includes 97 journals (20% of the total number identified in the influence network). And they account for 4,830 citations (64.4% of the previous total). Beyond them exist ties in one direction or the other, but, following Granovetter's typology, these connections are unimportant given our goal. Yet we can certainly err on the side of inclusiveness: the highlighted journals can be understood to represent the History of Psychology from its center to its periphery. Not everything that could be included has been, of course, in part because journals without JIFs have been omitted (and, of course, books are absent too). That said, however, even journals on the near-periphery can be fairly distant; some are more "historical" than they are History, others are merely contextual, and still others publish only commentaries or book reviews discussing historical or contextual interests.

[Figure 3 note: A large number of journals are connected only weakly to the primary journals, and so are pushed to the edges of the visualization: the "far periphery." The number of citations is reflected in the width of the line connecting two nodes.]

Discussion

Changing the parameters of the mutual degree filter enables the quick determination of the discipline's different internal boundaries. This is again another measure of centrality, and can be understood to reflect the strength of the shared connection to the three primary journals.
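Gephi's mutual degree filter has no single one-line equivalent in networkx, but the same screen is easy to write by hand. The sketch below is again illustrative only, reusing the directed graph G from the earlier snippets; the reciprocated-tie count is a simplification of Gephi's filter, not a reimplementation of it.

```python
def mutual_degree(G, n):
    """Count reciprocated ties: journals cited by n that also cite n."""
    return sum(1 for m in G.successors(n) if G.has_edge(m, n))

# Drop Granovetter's "absent" ties: keep only journals with at least one
# citation relation running in both directions (the Figure 4 view).
social = [n for n in G.nodes if mutual_degree(G, n) >= 1]
H = G.subgraph(social).copy()

# Raising the threshold traces the internal boundaries discussed next;
# in the seeded network, a threshold above 4 isolates the primary journals.
core = [n for n in G.nodes if mutual_degree(G, n) > 4]
```

Because the network was seeded from the three primary journals, every recorded edge touches at least one of them, which is why thresholds on mutual degree behave in the way described in the text.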
As a function of method, the primary journals alone are visible above 4 mutual degrees: they are cited by all three of the primary journals, plus at least one other, and they have links going in both directions.11 This then in turn affords several levels of proximity, using Granovetter's (1973) typology, in treating the History of Psychology as a discipline: primary (>4 degrees), strong (2-3), weak (1), and absent (<1). To make a potentially useful distinction, I have split the strong group in two. Thus, we can refer to the 3-degree journals as the discipline's "outer center" and the 2-degree journals as its "near periphery."

The full range of associated journals is shown in Figure 4: the discipline's strong and weak ties. But because networks can be difficult to interpret close-up, in their details, I will proceed through the different layers in order of their PageRank scores. In the supplemental bibliography, I have also provided some examples of articles that struck me as especially interesting or relevant. I set an arbitrary cutoff of 1975 for these, and I used the number of citations since publication as a guide in helping me choose among them.

[Figure 4 note: In the visualization, edge thickness has been reduced to 30% of default (0.3). Node size is according to PageRank and shade by Eigenvector Centrality. The thick bars to the right of the three primary journals represent self-citations.]

Other journals identified by PageRank for their influence on the History of Psychology, but marked only as acquaintances having merely mutual recognition (weak friends, and thus describable as part of the discipline's far periphery), were led by the Psychological Bulletin (e.g., Rucci & Tweney, 1980; Wagemans et al., 2012). There are many other journals in this group too, of course, but the list quickly becomes unwieldy; just mentioning them by name pushes this essay over the journal's strict word limit. (Note that they are all still visible in Figure 4, and that an example from each has been included in the supplemental bibliography.)

It's clear from this examination that a wide variety of interests is reflected. More, in fact, than any individual could ever hope to engage. Indeed, it doesn't quite seem as though there is a unitary discipline reflected; the boundaries of the History of Psychology appear to be quite porous. But what of specific topical interests? To more clearly specify what it is that this collective is doing, separately and together, it would be useful to examine how the journals group together.

Study 3. From Friends to Interests

The usual way to group the members of a network is to conduct a "modularity analysis." This detects higher order clusters by following the geometry of their relations to each other. In this case, however, the usual way won't achieve the desired goal: we are not interested so much in the emergent properties of the network as we are in how its different parts connect to categories that have known meanings external to the network. Those are the focus of Study 3.

To gain access to these external meanings, I added a new layer to the data set. The resulting examination takes advantage of all of the citation data gathered for Study 1 and used in Study 2, but it also incorporates the topical categories provided in the JCR. I then used their meanings to provide a way to answer the big question that inspired this project: "What is History of Psychology?"
Method

Every entry in the JCR for each journal from Study 1 (those cited by a primary journal, or which cite a primary journal) has been reexamined from a new perspective. In all cases, at least one category was provided for the journal that associated it with a topical interest. As a result, the three primary journals now all share one category, "History of social sciences," even as HoP is also categorized as "Psychology, multidisciplinary" and HHS as "History & philosophy of science."

Constructing the new data layer was simply a matter of creating a new spreadsheet from the JCR, with one row for each category. To make the use of a single filter possible, the choice of direction was also important: the journal was set as the source and the category as the target. No weight column is required, as Gephi will assign a weight of 1 by default. This then means that the categories will have only negligible effects on network geometry, while still allowing the extant associations to be examined.

With these new data added, the layout algorithms were reapplied and the PageRanks recalculated. I also performed a new Modularity Analysis, and color-coded the nodes by cluster membership.13 To do this, I selected "color" in the Appearance panel. Then I assigned "modularity class" as the attribute.

To focus on the categories rather than the journals, I used a new filter: "indegree." Setting this equal to or greater than 4 meant that all of the nonprimary journals were eliminated from the visualization. (As a function of method, even the most important among the "friends" can receive only three inbound links: one from each of the primary journals.) What remained were the primary journals and the associated categories, with color-codings indicating group memberships provided by modularity analysis. Because their importance has been calculated using PageRank, these scores can also be normalized to reflect this subgrouping (without recalculating on the filtered geometry) and given as percentages.

Results

After the filtering, only 45 nodes are visible (see Figure 5). Three of these are the primary journals, and the rest are categories: what it is that the History of Psychology is mainly about. These categories are then associated with one of three calculated clusters, each of which attaches to a journal according to the strength of the underlying relations. But the cluster analysis will be discussed separately, while taking advantage of what follows.

These analyses suggest that the largest single contributor to the doing is "Sociology" (as indeed was recently observed by Araujo, 2017). However, two other categories also rise above two standard deviations from the mean: "Psychology, Multidisciplinary" and "Psychiatry." Between one and two standard deviations, we also find "Political Science" and "Social Sciences, Interdisciplinary." But it is surprising that "History" ranks only half a deviation above the mean. This is perhaps because "History & Philosophy of Science" does too.
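The category layer and the filtering can also be sketched in the same illustrative Python setup, with a hypothetical categories.csv mapping each journal to its JCR categories, and networkx's greedy modularity maximization standing in (imperfectly) for Gephi's Louvain-style modularity analysis.

```python
import pandas as pd
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical two-column file: source (journal), target (JCR category).
categories = pd.read_csv("categories.csv")
for _, row in categories.iterrows():
    # Unit weights, as in the text, so that the category layer barely
    # perturbs the network geometry.
    G.add_edge(row["source"], row["target"], weight=1)

# The "indegree >= 4" filter: in the seeded network, a nonprimary journal
# can receive at most three inbound links (one per primary journal), so
# only the primaries and well-shared categories survive.
visible = [n for n in G.nodes if G.in_degree(n) >= 4]

# A stand-in for Gephi's modularity classes. Greedy modularity
# maximization works on an undirected view of the graph.
clusters = greedy_modularity_communities(G.to_undirected(), weight="weight")
for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {sorted(cluster)[:5]} ...")
```

The community-detection algorithm differs from Gephi's, so cluster boundaries recovered this way will only approximate the three journal-centered clusters reported below.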
Removing this confound by combining the categories using their external meanings is not a straightforward thing to do. Of course, the Psychology categories are easy to bring together under one banner because they are all explicitly marked as "Psychology." And the Health-related categories are only slightly less obvious. But the others required some interpretation. The resulting higher order clusters are debatable, of course, but it seems to me that there are seven groups. And these make good sense from the point of view of a specialist in this field. Sorting them by the percentage they own of the network then gives the following breakdown: Psychology (28.1%),14 Social Control and Intervention (18.1%),15 Money and Power (14.8%),16 People and Places (13.7%),17 Health (11.3%),18 Historiography of Science (8.1%),19 and general Social Sciences (5.9%),20 the specifics of which could be associated with various other groups according to the contents of each article.

Discussion

The cluster analysis shows topical connections by journal. Following the convention of naming groupings after the top-ranked member, it is easy to see in Figure 5 that "sociological" articles will find their best fit with HHS (pink), "psychological" articles with HoP (green), and "political" articles with JHBS (yellow). To get a sense of the specific interests of each of the primary journals, we can then perform a similar operation as before, but following the cluster analysis.

JHBS covers nine categories, accounting for 19.8% of the network. HoP covers 12 categories, accounting for 32.2% of the network. And HHS covers 21 categories, accounting for 48% of the network. These percentages suggest different degrees of topical focus at each journal. But a second grouping can also be made, referring to the external meanings of categories, to give a more precise assessment. This can then serve as guidance to authors.

These are quite different reflections of the same basic set of interests. It's also clear that the three journals seek to publish very different kinds of articles. Thus, for example, we might expect that historical discussions of psychological theory or psychological findings (or significant people and their labs) would be more likely to find a home in HoP. Institutional histories and examinations of the internal politics of the discipline would be more likely to be accepted at JHBS. And discussions of control and power with implications for individuals and their mental health, which we might more simply refer to as issues of governmentality and subjectivity, in HHS.

This could be interpreted to mean that the primary journals have in a sense institutionalized the turn toward a "polycentric" approach to the doing of the History of Psychology and together reflect a polycentric historiography (cf. Danziger, 1991, 2006). But it could also be interpreted less charitably. Indeed, one might wonder about the strength of the disciplinarity represented: only one of the three primary journals in the History of Psychology shows significant interest in Psychology as an explicit subject. So while the primary journals are where the History of Psychology finds its main outlets, the analyses presented here suggest that the activities reflected in them could also reflect other disciplinary concerns (cf. Weidman, 2016).
General Discussion

There has never been a comprehensive, quantitative, wide-angle look at the current state of History of Psychology as if it were a discipline in its own right (cf. Capshew, 2014; Hilgard, Leary, & McGuire, 1991). Methods now exist to examine similarities in language use in a corpus of text, and these have been used to examine the early history of psychological publishing (e.g., Green et al., 2015a, 2015b). But the present approach to the quantitative examination of possibilities in the use of language, of a discipline's discourses, arose within Psychology itself (Burman et al., 2015, 2018).

Still, this earlier work did not look at how possible meanings are reflected in actual use. The contribution of this article is thus an extension of those efforts into this new area, using a new data source. The novelty of the source then also enabled other innovations, such as the use of known categories to identify interests at the level of journals according to the discourses they represent (rather than by the stated intent of the editors). And thus we gain new insight into how the History of Psychology is actually done by those who do it.

That said, however, the investigation has also led me to reflect on issues that I had not previously considered. For example, what if the History of Psychology is not a discipline, but an interdiscipline? (A coming together of different groups with relatable interests and having a plurality of disciplinary allegiances, norms, and values.) This is potentially very unstable, as a configuration, and that instability is not conducive to placing students in a supply chain of talent that leads from the start of undergraduate training through to a tenured professorship. It is also not especially well suited to serving the training needs of the interdiscipline's largest audience (viz., psychologists with history requirements under accreditation). But the notion itself is, at least, testable.

Performing a further modularity analysis using the external meanings of the categories from Study 3 shows that the primary journals do indeed cluster together when considered relative to those meanings. We can therefore say that, relative to these primary interests (Psychology, Social Control and Intervention, Money and Power, People and Places, Health, and the Historiography of Science; see Figure 6),21 these three journals are indeed the primary publishing venues for historical scholarship as it is done in Psychology (or relating to psychology). But this is an emergence from the bottom up, not a disciplining from the top down. One is therefore also led to wonder, in consequence, if a different administrative approach might be required at the level of the governing disciplinary institutions, such as graduate programs and scholarly associations,22 if the collective doing represented in these journals is to thrive.
Conclusion

There are a relatively small number of ways of contributing to the primary journals in the History of Psychology, but looking outward toward the periphery shows a much more diverse and open field than I expected. I have attempted to represent these interests by including, in the supplemental bibliography, some of the articles that struck me as the most interesting at different levels through the circle of relations, but those choices undoubtedly reflect my tastes. Because I did not simply follow raw citation counts, and tried to find a balance between impact and recency (subject also to limits of space), there is room for debate. My hope, though, is that the list will be taken as an invitation to explore: every journal identified has a lot more to offer that would be of lasting interest to this community.

By no means, in other words, should the list be taken as exhaustive. Some of what's missing, too, is the result of coverage gaps between the different products in the JCR family of databases. It is certainly known that materials published in omitted sources continue to be cited within the interdiscipline (e.g., Brush, 1974; in the supplemental bibliography). But further research is required to reconcile the different sources in such a way as to reflect the same level of control. Indeed, the intermingling apparent in the raw citation data leads one to question why, aside from commercial considerations, there are multiple database products for JIFs at all: journals in the Social Sciences cite journals in the Natural Sciences and vice versa, so why are these useful metrics omitted from those journals in the other's database? In light of this structural problem, one might think it would be simpler to replace the JCR with a more transparent source of information in future research. Unfortunately, however, there aren't many alternatives.

The cited references made accessible through PsycNET afford an interesting possibility for recent material, especially following the launch of the PsycINFO Data Solutions service (http://www.apa.org/pubs/psycinfodatasolutions/). What this would lose in giving up the JCR's categories, it would then make up for in access to the American Psychological Association's [APA] own controlled vocabulary (see Burman et al., 2015). But the resulting analyses would then have access only to articles published in areas the APA considers to be close enough to Psychology to merit inclusion in the database. Studies would therefore be blind in a different way. Every choice has its consequence.

Finally, but perhaps most importantly, I need to reiterate that the History of Psychology is not a journals-only discipline. Certain large-scale arguments can only be made by reflecting on a series of smaller demonstrations ("microhistories"), and the book is still the best tool for this job. The major limitation of examining journal-publishing is then that we are blind, from the outset, to that very important aspect of the doing. Here, though, that weakness has been turned to a strength: there is no reason to believe that journal-publishing and book-publishing would diverge substantially in the interests guiding their doings, so the perspective derived by leveraging one can help to remedy our blindness of the other.
In short, there are discoveries here, to be sure, but the lessons are incomplete. Still, it could be worse. At least the guidance is positive: where the maps err, they err on the side of conservatism. Thus, when something has been identified that would never have previously been considered, we can trust that we have learned something new. We just need to go looking for what was missed, from where, and why (see Burman, 2018; Green, 2016; Pettit, 2016). Onward!

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Notes

1. There are some notable exceptions, but I am reluctant to provide a list out of a concern that it would be considered exclusive. An easy way of keeping track of the leading lights is to see who has received a major award, and Officer or Fellow status where applicable, from one or more of the primary societies concerned with the History of Psychology: the Society for the History of Psychology (American Psychological Association [APA] Division 26), Cheiron, the European Society for the History of the Human Sciences (ESHHS), and the Forum for the History of Human Science (an interest group of the History of Science Society). Another way is to look at journal editorial boards.

2. Graham Richards is the best known user of capitalization to distinguish between stuff and doing: little-p "psychology" and big-P "Psychology" (Richards, 1987, p. 204; 1996, pp. 1-2). That is my intent here too: to reflect an institutionalization of the History of Psychology that is separately examinable from the history of psychology as subject matter, and which may have different disciplinary and national styles in the same way that we are coming to accept differences between national psychologies (see, for example, Burman, 2015). However, the split can also be understood more powerfully, in both cases, as "the content" and "the form" (Burman, 2016a, 2016b; also Ratcliff & Burman, 2017). Thus, here my focus is on the form.

3. James Testa, Vice President of Editorial Development and Publisher Relations for Thomson Reuters, confirms that a certain amount of journal-level self-citation is expected. But this number is surprisingly low, 15% (Testa, 2016), given the rates observed here: 25.7% for the History of Psychology as a whole, with wide variation. This is potentially problematic. The higher self-citation rate, which could simply be the result of the small number of primary journals where specialist material is published, may then expose the discipline to greater scrutiny. As Testa (2016) explains, "Significant deviation from this normal rate . . . prompts an examination by Editorial Development to determine if excessive self-citations result in an artificial inflation of the impact factor" (p. 9). It is therefore plausible that higher rates would lead to a stricter enforcement of anti-inflation policies, all else being equal, and thus also the inappropriate rejection of potentially inflationary articles, such as substantive scholarly commentaries, from inclusion in the Journal Impact Factor (JIF) database. Due to strict limits on space here, however, this issue will be examined separately.

4. On Eigenvector Centrality, see Bonacich (1972, 1987, 2007).
On PageRank as an assessment of importance, see Page, Brin, Motwani, and Winograd (1998) and Gleich (2015). An historical overview is provided by Franceschet (2011).

5. The two figures are effectively identical in their relational geometries, even though the physical placement of individual nodes may be slightly different (e.g., the reversal in position between Isis and Theory & Psychology). Between the two figures, PageRank correlates at r = .99 and Eigenvector Centrality correlates at r = 1. Within each figure, PageRank and Eigenvector Centrality correlate at r = .90 for citations and at r = .88 for number of years cited. Further examination by means of the Kolmogorov-Smirnov test, for which my colleague Laura Bringmann has my gratitude, also showed no significant difference when comparing the two networks by PageRank (p = .0971). Thus, my preference is for PageRank in this application. It is also the more theoretically appropriate metric for this application: it takes into account the overall influence of the network, whereas Eigenvector Centrality focuses on the influence of local links.

6. It's often useful to try to manipulate the nodes while the algorithm is running to see if there's any effect on the network geometry. Generally speaking, though, more-central nodes are more trustworthy in their relative positioning.

7. The cutoff here is somewhat arbitrary: the result of my attempt to balance between four metrics. Following PageRank of citations, however, the list could continue. Indeed, continuing until the distance in PageRank from HoP is greater than one standard deviation adds another 13 journals: Theory, Culture, & Society; British Journal of Psychiatry; American Political Science Review; History of Psychiatry; Sociological Review; BioSocieties; American Economic Review; Economy and Society; History of Science; Sociology; Sociological Theory; British Journal of Sociology; and Social Forces. Yet it seems unreasonable to suggest that a discipline could be comprised of 25 journals, especially when most of these others only rarely discuss matters of relevance to it. In short, it was my judgment to make the cut at Theory & Psychology. The question of where and how to draw boundaries is then dealt with in another way in Study 2.

8. The connection from this work to the analysis of citation patterns becomes most interesting when citations are treated as a form of currency, and thus the mapping of such relations serves as a way to examine a scholarly discipline as a kind of knowledge economy (see, especially, Granovetter, 2005).

9. By contrast, a pair with an absent tie may merely nod at each other while passing in the street or when buying a newspaper. They are known to each other, but the effects are negligible. In such cases, Granovetter (1973) explains, even the knowledge of each other's names can be insufficient to claim a relationship. Although a connection can be shown empirically, the effects of an absent tie only border on real significance.

Figure 1. Network of citations. Note. Self-citations are shown as a horizontal bar to the right of the journal node. HoP = History of Psychology; JHBS = Journal of the History of the Behavioral Sciences; HHS = History of the Human Sciences.

Figure 2. Network of years in which citations were made. Note. HoP = History of Psychology; JHBS = Journal of the History of the Behavioral Sciences; HHS = History of the Human Sciences.
Figure 3. The collective, organized doing called History of Psychology according to its journal-to-journal citations (2009-2015).

Figure 6. The dominant interests of the History of Psychology, leading the three primary journals to cluster together relative to the external meanings of those interests. Note. Color-coding provided by a modularity analysis conducted of the unfiltered network.
Asteroid family ages

A new family classification, based on a catalog of proper elements with $\sim 384,000$ numbered asteroids and on new methods is available. For the $45$ dynamical families with $>250$ members identified in this classification, we present an attempt to obtain statistically significant ages: we succeeded in computing ages for $37$ collisional families. We used a rigorous method, including a least squares fit of the two sides of a V-shape plot in the proper semimajor axis, inverse diameter plane to determine the corresponding slopes, an advanced error model for the uncertainties of asteroid diameters, an iterative outlier rejection scheme and quality control. The best available Yarkovsky measurement was used to estimate a calibration of the Yarkovsky effect for each family. The results are presented separately for the families originated in fragmentation or cratering events, for the young, compact families and for the truncated, one-sided families. For all the computed ages the corresponding uncertainties are provided. We found 2 cases where two separate dynamical families form together a single V-shape with compatible slopes, thus indicating a single collisional event. We have also found 3 examples of dynamical families containing multiple collisional families, plus a dubious case. We have found 2 cases of families containing a conspicuous subfamily, such that it is possible to measure the slope of a distinct V-shape, thus the age of the secondary collision. We also provide data on the central gaps appearing in some families. The ages computed in this paper are obtained with a single and uniform methodology, thus the ages of different families can be compared, providing a first example of collisional chronology of the asteroid main belt.

Introduction

One of the main purposes for collecting large datasets on asteroid families is to constrain their ages, that is the epoch of the impact event generating a collisional family. A collisional family does not always coincide with the dynamical family detected by density contrast in the proper elements space. More complicated cases occur, such as a dynamical family to be decomposed in two collisional families, or the opposite case in which a collisional family is split in two density contrast regions by some dynamical instability.

Although other methods are possible, currently the most precise method to constrain the age of a collisional family (for ages older than ∼10 My) exploits non-gravitational perturbations, mostly the Yarkovsky effect (Vokrouhlický et al., 2000). These effects generate secular perturbations in the proper elements of an asteroid which are affected not just by the position in phase space, but also by the Area/Mass ratio, which is inversely proportional to the asteroid diameter D.
Thus, the main requirements are to have a list of family members with a wide range of values in D, enough to detect the differential effect in the secular drift of the proper elements affecting the shape of the family, and to have a large enough membership to obtain statistically significant results.

Recently Milani et al. (2014) have published a new family classification by using a large catalog of proper elements (with > 330,000 numbered asteroids) and with a classification method improved with respect to past methods. This method is an extension of the Hierarchical Clustering Method (HCM) (Zappalà et al., 1990), with special provisions to be more efficient in including large numbers of small objects, while escaping the phenomenon of chaining. Moreover, the new method includes a feature allowing to (almost) automatically update the classification when new asteroids are numbered and their proper elements have been computed. This has already been applied to extend the classification to a source catalog with ∼384,000 proper elements, obtaining a total of ∼97,400 family members. In this paper we are going to use the classification of Milani et al. (2014), as updated by Knežević et al. (2014); the data are presently available on AstDyS.1

This updated classification has 21 dynamical families with > 1,000 members and another 24 with > 250 members. The goal of this paper can be simply stated as to obtain statistically significant age constraints for the majority of these 45 families. Computing the ages for all would not be a realistic goal because there are several difficulties. Some families have a very complex structure, for which it is difficult to formulate a model, even with more than one collision: these cases have required or need dedicated studies. Some families are affected by particular dynamical conditions, such as orbital resonances with the planets, which result in more complex secular perturbations: these shall be the subject of continuing work. The results for families with only a moderate number of members (such as 250-300) might have a low statistical significance.

The age estimation includes several sources of uncertainty which cannot be ignored. The first source appears in the formal accuracy of the least squares fit used in our family shape estimation methods. The uncertainty depends upon the noise resulting mostly from the inaccuracy of the estimation of D from the absolute magnitude H. The second source of error occurs in the conversion of the inverse slope of the family boundaries into age, requiring a Yarkovsky calibration: this is fundamentally a relative uncertainty, and in most cases it represents the largest source of uncertainty in the inferred ages. In Sec. 4.1 we give an estimate of this uncertainty between 20% and 30%. As a result of the current large relative uncertainty of the calibration, we expect that this part of the work will soon be improved, thanks to the availability of new data. Thus the main results of this paper are the inverse slopes, because these are derived by using a consistent methodology and based upon a large and comparatively accurate data set. Still, we believe we have made significant progress with respect to the previous state of the art by estimating 37 collisional family ages, in many cases providing the first rigorous age estimate, and in all cases providing an estimated standard deviation. The work can continue to try and extend the estimation to the cases which we have found challenging.
Since this paper summarizes a complex data processing, with output needed to fully document our procedures but too large, we decided to include only the minimum information required to support our analysis and results. Supplementary material, including both tables and plots, is available from the web site http://hamilton.dm.unipi.it/astdys2/fam ages/.

Least squares fit of the V-shape

Asteroids formed by the same collisional event take the form of a V in the (proper a, 1/D) plane. The computation of the family ages can be performed by using these V-shape plots if the family is old enough and the Yarkovsky effect dominates the spread of proper a, as explained in (Milani et al., 2014, Sec. 5.2). The key idea is to compute the diameter D from the absolute magnitude H, assuming a common geometric albedo p_v for all the members of the family. The common geometric albedo is the average value of the known WISE albedos (Wright et al., 2010; Mainzer et al., 2011) for the asteroids in the family. Then we use the least squares method to fit the data with two straight lines, one for the low proper a (IN side) and the other for the high proper a (OUT side), as in Milani et al. (2014), with an improved outlier rejection procedure; see (Carpino et al., 2003) and Sec. 2.4.

Selection of the Fit Region

Most families are bounded on one side or on both sides by resonances. Almost all these resonances are strong enough to eject most of the family members that fell into them onto unstable orbits. In these cases the sides of the V are cut by vertical lines, that is by values of a which correspond to the border of the resonance. For each family we have selected the fit region taking into account the resonances at the family boundaries. The fit of the slope has to be done for values of 1/D below the intersection of one of the sides of the V affected by the resonance and the resonance border value of proper a. In Table 1 we report the values for a and D, and the cause of each selection.

The cause of each cut in proper a is a mean motion resonance, in most cases a 2-body resonance with Jupiter, in a few cases either a 2-body resonance with Mars or a 3-body resonance with Jupiter and Saturn. When no resonance with this role has been identified, we use the label FB (for Family Box) to indicate that the family ends where the HCM procedure no longer detects a significant density contrast (with respect to the local background). This is affected by the depletion of the proper elements catalog due to the completeness limit of the surveys: the family may actually contain many smaller asteroids beyond the box limits, but they have not been discovered yet. On the contrary, when the family range in proper a is delimited by strong resonances, the family members captured in them can be transported far in proper e (and to a lesser extent in proper sin I) to the point of not being recognizable as members; over longer time spans, they can be transported to planet-crossing orbits and removed from the main belt altogether.

The tables in this paper are sorted in the same way: there are four parts, dedicated to families of the types fragmentation, cratering, young, and one-sided; inside each group the families are sorted by decreasing number of members. In some cases the tables have been split in four sub-tables, one for each type. In two cases we have already defined the fit region in such a way that we can include two families in a single V-shape.
This family join is justified later, in Section 3, by showing that the two dynamical families can be generated by a single collision. This applies to the join of 10955 with 19466 and to the join of 163 with 5026. Note that the join of two families, justified by the possibility to fit them together in a single V-shape with a common age, is conceptually different from the merge of two families due to intersections, discussed in (Knežević et al., 2014); however, the practical consequences are the same, namely one family is included in another one and disappears from the list of families.

For one-sided families we are also indicating the "cause" of the missing side. E.g., for 2076 the lack of the IN side of the V-shape is due to the 7/2 resonance; on the other hand, the dynamical family 883 could be the continuation of 2076 at proper a lower than that of the resonance. However, the V-shape which would be obtained by this join would have two very different slopes, thus it can be excluded that they are the same collisional family.

For most families the "cause" of the delimitation in proper a, in the sense above, can be clearly identified. However, some ambiguous cases remain: e.g., for family 1128 the outer boundary could be due to the 3-body resonance 3-1-1 (the three integer coefficients apply to the mean motions of Jupiter, Saturn and the asteroid, respectively); for family 3 the inner boundary could be due to 4-3-1. For family 3330 a 3-body resonance (not identified) at a = 3.129 could be the cause of the inner boundary. For the one-sided family 3827 we do not know the cause of the missing OUT side, although we suspect it has something to do with (1) Ceres, given that the proper a of Ceres is very close to the upper limit of the family box. The family of (3395) Jitka is a subfamily of the dynamical family 847. The family of (15124) 2000 EZ39 is a subfamily of the dynamical family 569. With 3/1? we are indicating 2 cases (480, 15) in which the families could be delimited on the IN side by the 3/1 resonance (also 170 and 1658, in which the 3/1 could be the cause of the missing IN side), but the lower bound on proper a appears too far from the Kirkwood gap. This is a problem which needs to be investigated.

Binning and fit of the slopes

Next we divide the 1/D axis into bins, as in Figures 1 and 2. The partition is done in such a way that each bin contains roughly the same number of members. The following points explain the main features of the method used to create the bins:

1. the maximum number of bins N is selected for each family, depending upon the number of members of the family;
2. the maximum value of the standard deviation of the number of members in each bin is decided depending upon the number of members of the family;
3. the region between 0 and the maximum value of 1/D is divided in N bins;
4. the difference between the number of members in two consecutive bins is computed:
4.1 if the difference is less than the standard deviation, the bins are left as they are;
4.2 if the difference is greater than the standard deviation, the first bin is divided into smaller bins and then the same procedure is applied to the new bins.

This procedure is completely automatic, and it is the same both for the inner and the outer side of a V-shape. In the example of the Figures, namely the family of (20) Massalia, on the IN side there are 84 bins with a mean of 19 members in each, with a STD of this number of 13. On the OUT side there are 82 bins with mean 19 and STD 11. In the case of the low a side we then select the minimum value of proper a and the corresponding 1/D in each bin, as in Fig. 1. For the other side we select the maximum value of the proper semimajor axis and the corresponding 1/D, as in Fig. 2. These are the data to be fit to determine the slopes of the V-shapes: thus it is important to have enough bins to properly cover the range in proper a. A sketch of this procedure is given below.
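As an illustration, the binning and the one-sided fit can be sketched in Python as follows. This is a minimal sketch working on plain numpy arrays: the function names, the safety cap on the number of splitting passes, and the use of np.polyfit are our illustrative choices, not the actual pipeline (which also applies the error model and the outlier rejection described in the next sections).

```python
import numpy as np

def adaptive_bin_edges(inv_d, n_bins, count_std):
    """Start from n_bins equal bins on [0, max(1/D)]; while two
    consecutive bins differ in population by more than count_std,
    split the first bin of the offending pair (steps 1-4.2 above)."""
    edges = list(np.linspace(0.0, inv_d.max(), n_bins + 1))
    for _ in range(10 * n_bins):  # safety cap, illustrative only
        counts, _ = np.histogram(inv_d, bins=edges)
        bad = [i for i in range(len(counts) - 1)
               if abs(int(counts[i]) - int(counts[i + 1])) > count_std]
        if not bad:
            break
        i = bad[0]
        edges.insert(i + 1, 0.5 * (edges[i] + edges[i + 1]))
    return np.asarray(edges)

def fit_vshape_side(a, inv_d, edges, side):
    """In each bin take the extreme proper a (min on the IN side, max on
    the OUT side) and fit a = a_c + (1/S) * (1/D) by least squares; the
    fitted slope is the inverse slope 1/S used for the age estimate."""
    pick = np.argmin if side == "IN" else np.argmax
    x, y = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (inv_d >= lo) & (inv_d < hi)
        if sel.any():
            k = pick(a[sel])
            x.append(inv_d[sel][k])
            y.append(a[sel][k])
    inverse_slope, a_center = np.polyfit(x, y, 1)
    return inverse_slope, a_center
```

Fitting proper a against 1/D (rather than the converse) means the fitted slope is directly the inverse slope, i.e. the accumulated drift ∆(a) at 1/D = 1 km⁻¹ used later for the age estimate.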
Error Model and Weights

The least squares fit, especially if it includes an outlier rejection procedure, requires the existence of an error model for the values to be fit. Until now there are no error models for the absolute magnitude and the albedo which are available for a large enough catalog of asteroids. We have therefore built a simple but realistic error model for 1/D as computed from the absolute magnitude H, using the formula

D = (1329/√p_v) × 10^(−H/5) km,

by combining the effect of two terms in the error budget: the error in the absolute magnitude, with STD σ_H, and the one in the geometric albedo, with STD σ_pv. The derivatives of 1/D with respect to these two quantities are

∂(1/D)/∂H = (ln 10/5) (1/D),  ∂(1/D)/∂p_v = (1/(2 p_v)) (1/D),

then the combined error has STD

σ_(1/D) = (1/D) √[(ln 10/5)² σ_H² + σ_pv²/(4 p_v²)].

To compute this error model we need to select three values: 1) the common geometric albedo p_v for all the family members, 2) the dispersion with respect to this common albedo, σ_pv, 3) the uncertainty in the absolute magnitude, σ_H. For the first two, we select all the "significant" WISE albedos, that is the values of the albedos greater than 3 times their standard deviations (with S/N > 3). Then we cut the tails of this distribution (see Figure 3): p_v is the mean and σ_pv is the standard deviation of the values of the albedo without the tails. For the third value σ_H we use the same for all the families, and the chosen value is 0.3; see the discussion in [Sec. 2.2] and in (Pravec et al., 2012).
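In code, this error model is a direct propagation of σ_H and σ_pv. The following sketch (plain numpy; the default value of σ_pv is only a placeholder for the per-family dispersion described above) produces the σ values whose inverse squares serve as fit weights:

```python
import numpy as np

LN10_OVER_5 = np.log(10.0) / 5.0

def inv_diameter(H, p_v):
    """1/D in km^-1 from D = (1329 / sqrt(p_v)) * 10**(-H / 5) km."""
    return np.sqrt(p_v) / 1329.0 * 10.0 ** (H / 5.0)

def sigma_inv_diameter(H, p_v, sigma_H=0.3, sigma_pv=0.05):
    """STD of 1/D combining the two error-budget terms in quadrature:
    d(1/D)/dH = (ln10/5)(1/D) and d(1/D)/dp_v = (1/(2 p_v))(1/D).
    sigma_pv = 0.05 is a placeholder; the paper uses the dispersion of
    the significant WISE albedos of each family."""
    inv_d = inv_diameter(H, p_v)
    return inv_d * np.hypot(LN10_OVER_5 * sigma_H, sigma_pv / (2.0 * p_v))

# The weights of the least squares fit are then w = 1 / sigma**2.
```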
The histograms such as Figure 3 are available for all the families listed in Table 2 at the Supplementary material web site. In Table 2 we show the albedo value of the namesake asteroid, with its uncertainty and the appropriate reference: W for WISE data, I for IRAS, S for (Shepard et al., 2008), and A for AKARI. In some cases albedo data are not available. Columns 5 and 6 contain the value of the albedo used for the cut of the tails, and the last two columns are the mean albedo and the standard deviation.

(93) Minerva and (293) Brasilia are interlopers in the dynamical families for which they are namesake, as shown by albedo data outside of the family range. Indeed, in the following of this paper we are going to speak of the family 1272 (Gefion) instead of 93, and of the family 1521 (Seinajoki) instead of 293; both are obtained by removing interlopers selected because of albedo data, and the namesake is the lowest numbered member after removing the interlopers. For many families we have proceeded in the same way, that is removing interlopers clearly indicated by an albedo discordance. The list of these interlopers for each family is in the Supplementary material. In some cases we have joined two dynamical families for the purpose of mean albedo computation: 2076 includes 298, 163 includes 5026, 10955 includes 19466.2 Family 847 includes the subfamily 3395: the same mean albedo was used for both, although (847) has albedo 0.147 ± 0.01 and (3395) 0.313 ± 0.05, which are on opposite sides of the mean. Also 569 in Table 2 includes the subfamily 15124.

The family of (434) Hungaria is a difficult case: some WISE data exist for its family members, but they are of especially poor quality. Thus we have used for all of them the albedo derived from radar data (Shepard et al., 2008), and assumed a quite large dispersion (0.1).

Outlier Rejection and Quality Control

The algorithm for differential corrections used for the computation of the slopes includes an automatic outlier rejection scheme, as in (Carpino et al., 2003). Both the use of an explicit error model for the observations and the fully automatic outlier rejection procedure are implemented in the free software OrbFit3 and are used for the orbit determination of the asteroids included in the NEODyS and AstDyS information systems.4 Thus, although the application of these methods to the computation of family ages is new, this is a very well established procedure with which we have a lot of experience.

In practice, outlier rejection is performed in an iterative way. At each iteration, the program computes the residuals of all the observations, their expected covariance and the corresponding χ² value. If we can assume that the observation errors have a normal distribution, to mark an observation as an outlier we can compare the χ² value of the post-fit residual with a threshold value χ²_rej: the observation is discarded if χ²_i > χ²_rej. At each iteration it is also necessary to check if a given observation, that we have previously marked as an outlier, should be recovered. Therefore, the program selects an outlier to be recovered if for the non-fitted residual χ²_i < χ²_rec. The current values for χ²_rej and χ²_rec are 10 and 9, respectively. During each iteration of the linear regression we compute the residuals, the outliers, the RMS of the weighted residuals and the kurtosis of the same weighted residuals. Our method converges if there is an iteration without additional outliers. All these data are reported in Table 1 of the Supplementary material. Besides the automatic outlier rejections, some interlopers have been manually removed when there was specific evidence that they do not belong to the collisional family, e.g., based upon WISE data: also these manual rejections are detailed in the Supplementary material.
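The reject/recover loop can be sketched as follows; this is a schematic rendering of the logic of (Carpino et al., 2003) with the thresholds quoted above, not the actual OrbFit implementation, and the weighted line fit via np.polyfit is our simplification:

```python
import numpy as np

CHI2_REJ, CHI2_REC = 10.0, 9.0

def robust_line_fit(x, y, sigma, max_iter=100):
    """Weighted straight-line fit with iterative outlier rejection:
    reject fitted points with chi2 > CHI2_REJ, recover marked outliers
    whose chi2 has dropped below CHI2_REC, and stop at the first
    iteration that changes nothing."""
    use = np.ones(len(x), dtype=bool)
    coef = np.polyfit(x, y, 1, w=1.0 / sigma)
    for _ in range(max_iter):
        chi2 = ((y - np.polyval(coef, x)) / sigma) ** 2
        new_use = np.where(use, chi2 <= CHI2_REJ,  # keep or reject fitted points
                           chi2 < CHI2_REC)        # recover former outliers
        if np.array_equal(new_use, use):
            break
        use = new_use
        coef = np.polyfit(x[use], y[use], 1, w=1.0 / sigma[use])
    return coef, use
```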
Fragmentation Families

The results of the fit for the slopes of the V-shape are described in Table 3 for the families of the fragmentation type. To define fragmentation families, we have used the (admittedly conventional) definition that the volume of the family without the largest member has to be more than 12% of the total. This computation has been done after removing the interlopers (by physical properties) and the outliers (removed in the fit), and is based on D computed with the mean albedo p_v. Comments for some of the cases are given below.

• For family 158 (Koronis) the values of the inverse slope 1/S on the two sides are consistent, that is the ratio is within a standard deviation from 1: this indicates that we are measuring the age of a single event. The well known subfamily of (832) Karin, with a recent age, does not affect the slopes.

• Family 24 (Themis) has the well known subfamily of (656) Beagle near the center of the V-shape, thus it does not affect the slopes. The values IN and OUT are not the same, but the difference has very low statistical significance. The low accuracy of the IN slope determination is due to the fact that the 11/5 resonance cuts the V-shape too close to the center, sharply reducing the useful range in D.

• For family 847 (Agnia) we have estimated also the slopes for the subfamily 3395. 847 has discordant slope values on the two sides.

• Family 1726 (Hoffmeister) has an especially complicated dynamics on the IN side, due to both the nonlinear secular resonance g + s − g6 − s6 and the proximity with (1) Ceres; see the discussion in [Sec. 4.1]. However, the results on the two slopes are perfectly consistent: this is in agreement with what was claimed by Delisle and Laskar (2012), namely that the Yarkovsky effect prevails over the chaotic effects induced by close approaches (also by the 1-1 resonance) with Ceres, in the range of sizes which is relevant for the fit.

• For the family 480 (Hansa) the slope for the IN side has lower quality, probably due to the 3/1 resonance. It is a marginal fragmentation, with 14% of the total volume excluding (480) Hansa itself.

• Family 808 (Merxia) is a fragmentation with a dominant largest member (64% in volume), thus (808) must not be included in the fit.

• For the family 3330 (Gantrisch) it has been difficult to compute a slope for the IN side, because of the irregular shape of the low a border resulting in few data to be fit.

• Family 10955 (Harig) can be joined with family 19466: in this way two one-sided families form a single V-shape; this join is confirmed by the two slopes being consistent. Thus one collisional family is obtained from two dynamical families. This merge was already suggested in (Milani et al., 2014)[Sec. 4.3.2], based on the family box overlap (by 40%).

• Family 1521 (Seinajoki) appears to have two discordant slopes: in the projection (a, sin I) a bimodality appears in the family shape. We draw from this the conclusion that there are two collisional families, the one on the IN side being older.

• The family 569 (Misa) is a marginal fragmentation (fragments account for 19% of the total volume). The ratio of the IN and OUT slopes is not significantly different from 1, mostly because of the low accuracy of the IN value. (15124) 2000 EZ39 appears to be the largest fragment of a fragmentation subfamily inside the family 569: the inverse slopes are significantly lower, indicating an age younger by a factor 2.19 ± 0.78 with respect to 569 (based upon the OUT values).

Cratering Families

The results of the fit for the slopes of the V-shape are described in Table 4 for the families of the cratering type, defined by a volume of the family without the largest member < 12% of the total. Comments for some of the cases are given below.

• Family 4 (Vesta) has two discordant slopes on the IN and OUT sides. As already suggested in (Milani et al., 2014)[Sec. 7.2], this should be interpreted as the effect of two distinct collisional families, with significantly different ages. The estimated ratio of the slopes provides a significant estimate of the ratio of the ages, because the Yarkovsky calibration is common to the two subfamilies, corresponding to two craters on Vesta.

• Family 15 (Eunomia) has a subfamily which determines the OUT slope; the ratio of the slopes gives a good estimate of the ratio of the ages, because of the common calibration. The interpretation as two collisional families, proposed in (Milani et al., 2014)[Sec. 7.4], is thus confirmed.

• Family 10 (Hygiea) has a shape (especially in the proper (a, e) projection) from which we could suspect two collisional events, but the IN and OUT slopes, not just consistent but very close, suggest a single collision.

• For family 3 (Juno) the IN and OUT slopes are discordant, but due to the low relative accuracy of the slopes the difference is marginally significant.
The number density as a function of proper a is asymmetric, more dense on the OUT side.

• Family 163 (Erigone) can be joined with 5026 (Martes), with (163) as parent body for both (marginally within the cratering definition, fragments forming 11% of the total volume). This is confirmed by similar albedo (dark in a region dominated by brighter asteroids) and by very consistent slopes of the IN side (formed by family 163) and of the OUT side (formed by 5026); see Figure 4. There is a very prominent gap in the center, which explains why we have found no intersections; it should be due to the YORP effect, see Section 5.2. Again, one collisional family is obtained from two dynamical families.

Young Families

We define as young families those with an estimated age of < 100 My; thus the inverse slopes are much lower than those of the previous tables. These can be both fragmentations and craterings. The results of the fit are described in Table 5. These families have a comparatively low number of members, but because they also have a small range of proper a values a significant slope fit is possible. In particular we have introduced the three last families in the table; among them, the family of (1547) is known to be very young (Nesvorný et al., 2003) and has been included to test the applicability of the V-shape method to recent families (see Sec. 4.2.3).

One Side

The one-sided families are those for which we cannot identify one of the two sides of the V-shape. The results of the fit are described in Table 6. The families of this type can be due to fragmentations and craterings: in most cases there is no dominant largest fragment, and they might have had parent bodies which disappeared in the resonance that also wiped out one of the sides, thus we do not really know.

• The family 170 (Maria) has a possible subfamily at low proper a (no effect on the OUT slope). There is no dominant largest fragment, thus it could be either a fragmentation or a cratering, in the latter case with the parent body removed by the 3/1 resonance.

• For the family 1272 (Gefion) there is no dominant largest fragment, thus the same argument applies, with possible parent body removal by the 5/2 resonance.

• For family 2076 (Levin) the possibility of merging with families 298 (Baptistina) and 883 has been discussed in [Sec. 4.1]. Joining Baptistina does not change the slopes; joining 883 would result in a two-sided V-shape, with a gap due to the 7/2 resonance in between; however, the two slopes would be very different. All three dynamical families (for which we already have some intersections) could be considered as a single complex dynamical family, but still they would belong to different collisional families with different ages. The slope (thus the age) we have computed belongs to the event generating only the 2076 family. There are not enough significant physical data on the members of these families,5 not even on the comparatively large (298), to help us in disentangling this complex case.

• In family 1658, (1658) Innes is the largest fragment but it is not dominant in size, thus we cannot distinguish between a fragmentation and a cratering with the parent body removed by the 3/1 resonance.

• (375) Ursula is an outlier in the fit for the IN slope of 375. This can have two interpretations. Either (375) is the largest fragment of a marginal fragmentation (fragments are 23% of total volume), in which case it is correct not to include it in the slope fit, or (375) is an interloper and the family could have had a parent body which later disappeared in the 2/1 resonance.
Unfortunately, it is difficult to use albedo data to help on this, because there is no albedo contrast with the background.

Yarkovsky Calibrations

The method we use to convert the inverse slopes from the V-shape fit into family ages has been established in [Sec. 5.2], and consists in finding a Yarkovsky calibration, which is the value of the Yarkovsky-driven secular drift da/dt for a hypothetical family member of size D = 1 km and with spin axis obliquity (with respect to the normal to the orbital plane) 0° for the OUT side and 180° for the IN side. Since the inverse slope is the change ∆(a) accumulated over the family age by a family member with unit 1/D, the age is just ∆(t) = ∆(a)/(da/dt).

The question is how to produce the Yarkovsky calibration. As discussed in [Sec. 5.2.6], this can be done in different ways depending upon which data are available. Unfortunately for main belt asteroids there are too few data to compute any calibration: indeed, a measured da/dt is available for not even one main belt object. The solution we have used was to extrapolate from the data available for Near Earth Asteroids. The best estimate available for da/dt is the one of asteroid (101955) Bennu, with S/N ≃ 200 (Chesley et al., 2014). By suitable modeling of the Yarkovsky effect, using the available thermal properties measurements, the density of Bennu has been estimated as ρ_Bennu = 1.26 ± 0.07 g/cm³. Bennu is a B-type asteroid, thus it is possible to compute its porosity by comparison with the very large asteroid (704) Interamnia, which is of the same taxonomic type and has a reasonably well determined bulk density (Carry, 2012). In Table 7 we list the data on benchmark large asteroids with known taxonomy and density. For the other taxonomic classes we estimate the density at D = 1 km by assuming the same porosity as Bennu and the same composition as the largest asteroid of the same taxonomic class. Thus in the Table the density at D = 1 km for the B class is the one of Bennu from (Chesley et al., 2014); the ones for the other classes are obtained by scaling.

Once an estimate of the density ρ is available, the scaling formula can be written as

da/dt = (da/dt)_Bennu × [√(a_Bennu) (1 − e²_Bennu)] / [√a (1 − e²)] × (D_Bennu/D) × (ρ_Bennu/ρ) × [cos(φ)/cos(φ_Bennu)] × [(1 − A)/(1 − A_Bennu)],

where D = 1 km used in this scaling formula is not the diameter of an actual asteroid, but the reference value corresponding to the inverse slope; we also assume cos(φ) = ±1, depending upon the IN/OUT side. The additional terms which we would like to have in the scaling formula are thermal properties, such as thermal inertia or thermal conductivity: the problem is that these data are not available. To replace the missing thermal parameters with another scaling law would not give a reliable result, also because of the strong nonlinearity of the Yarkovsky effect as a function of the conductivity, as shown in (Vokrouhlický et al., 2000)[Figure 1].
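Numerically, the calibration and the conversion ∆(t) = ∆(a)/(da/dt) can be sketched as follows. This is a hedged illustration: the Bennu reference values below are approximate placeholders quoting round numbers, to be replaced by the exact values from Chesley et al. (2014) and Tables 7-8, and the function names are ours.

```python
import numpy as np

# Bennu reference values: approximate placeholders, to be replaced by
# the exact numbers from Chesley et al. (2014) and Table 7.
BENNU = dict(dadt=-19.0e-4,        # au/My, measured secular drift
             a=1.126, e=0.204,     # au, orbital elements
             D=0.49,               # km, diameter
             rho=1.26,             # g/cm^3, density
             one_minus_A=0.98,     # 1 - Bond albedo (approximate)
             cos_phi=-1.0)         # retrograde spin

def yarko_calibration(a, e, rho, one_minus_A, cos_phi, D=1.0):
    """da/dt (au/My) for a hypothetical member of diameter D km with
    cos_phi = +1 (OUT side) or -1 (IN side), scaled from Bennu with the
    formula reconstructed above."""
    b = BENNU
    return (b["dadt"]
            * np.sqrt(b["a"]) * (1.0 - b["e"] ** 2)
            / (np.sqrt(a) * (1.0 - e ** 2))
            * (b["D"] / D) * (b["rho"] / rho)
            * (cos_phi / b["cos_phi"])
            * (one_minus_A / b["one_minus_A"]))

def family_age_My(inverse_slope, dadt):
    """Age in My from Delta(t) = Delta(a)/(da/dt); inverse slope and
    calibration refer to the same side, so the ratio is positive."""
    return inverse_slope / dadt
```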
We are not claiming this is the best possible calibration for each family. However, for generating a homogeneous set of family ages, we have to use a uniform method for all. To improve the calibration (thus to decrease the uncertainty of the age estimate) for a specific family is certainly possible, but requires a dedicated effort in both acquiring observational data and modeling. E.g., the Yarkovsky effect could be measured from the orbit determination of a family member (going to be possible with data from the astrometric mission GAIA), thermal properties could be directly measured with powerful infrared telescopes, and densities can be derived for binaries from the orbit of the satellite. A recent opportunity is the near-Earth asteroid (357439), recently observed during a close approach to Earth: radar observations made it possible to confirm that it has a satellite and to measure its diameter, while infrared observations allowed this asteroid to be assigned to the taxonomic class V. When all the data are analyzed and published, we expect to have for (357439) an estimated density (from the satellite orbit and the volume, both from the radar data). This could provide a Yarkovsky calibration, specifically for the Vesta families, significantly better than the one of this paper. This implies that the main results of this paper are the inverse slopes, from which the ages can continue to be improved as better calibration data become available.

Table 8: Data for the Yarkovsky calibration: family number and name, proper semimajor axis a and eccentricity e for the inner and the outer side, 1 − A, density value ρ at 1 km, taxonomic type, a flag with values m (measured), a (assumed), g (guessed), and the relative standard deviation of the calibration.

In Table 8 we summarize the data used to compute the calibration. The eccentricity used in the calibration is selected, separately for the IN and OUT side, as an approximate average of the values of proper eccentricity for the family members with proper semimajor axis close to the limit. It is clear that the extrapolation from Near Earth to main belt asteroids introduces a model uncertainty, which is not the same in all cases. If a family has a well determined taxonomic type, which corresponds to one of the benchmark asteroids, our computation of the calibration is based on actual data and we assign to this case a comparatively low relative calibration STD of 0.2; these cases are labeled with the code "m". We have also estimated the Bond albedo A, which is used in the scaling, from the mean geometric albedo p_v by WISE. For subfamilies 3395 (inside 847) and 15124 (inside 569) we have assumed the same taxonomy as the larger family. Then there are cases in which the taxonomic class is similar, but not identical, to the one of the benchmark: (1726) is of type Cb, (668) of type Ch in the SMASSII classification, both assimilated to a generic C type; (808) is Sq, (1272) is Sl in SMASSII, (1658) is AS in the Tholen classification, all assimilated to a generic S type. These are labeled with the code "a" and we have assigned a relative STD of 0.25. Finally we have 7 cases in which we do not have taxonomic data at all, but just used the mean WISE albedo of Table 2 to guess a simplistic classification into a C vs. S complex. These are labeled "g" and have a relative STD of 0.3. Thus these are the worst cases from the point of view of age uncertainty, but they are the easiest to improve by observations.

Ages and their Uncertainties

The results on the ages are presented in Tables 9-12, each containing the Yarkovsky calibration, computed with the data of Table 8, the estimated age and three measures of the age uncertainty. The first uncertainty is the standard deviation of the inverse slope, as output from the least squares fit, divided by the calibration. The second is the age uncertainty due to the calibration uncertainty from the last column of Table 8: this relative uncertainty is multiplied by the estimated age. The third is the standard deviation of the age, obtained by combining quadratically the STD from the fit with the STD from the calibration.

The first uncertainty is useful when comparing ages which can use the same calibration, such as ages from the IN and from the OUT side (as shown in the last two columns of Tables 3-6); this can be applied also to the cases of subfamilies. The third uncertainty is applicable whenever the absolute age has to be used, as in the case in which the ages of two different families, with independent calibration errors, are to be compared.
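The bookkeeping for these three uncertainty measures is a one-liner per term; a minimal sketch (function and argument names are ours):

```python
import numpy as np

def age_uncertainties(age, sigma_inv_slope, dadt, rel_sigma_cal):
    """The three measures quoted in Tables 9-12: the STD from the fit
    (sigma of the inverse slope divided by the calibration), the STD
    from the calibration (relative calibration STD times the age), and
    their quadratic combination."""
    s_fit = abs(sigma_inv_slope / dadt)
    s_cal = rel_sigma_cal * age
    return s_fit, s_cal, np.hypot(s_fit, s_cal)
```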
Among the figures not included in this paper but available at the Supplementary material site are all the V-shape plots, which can be useful to better appreciate the robustness of our conclusions.

In this Section we also comment on ages for the same families found in the scientific literature, with the warning that for some families there are multiple estimates, including discordant ones, in some cases published by the same authors at different times. Thus we think it is important to have a source of ages computed with a uniform and well documented procedure, such as this paper. Compilations of ages, such as (Brož et al., 2013), are useful for consultation, but have the limitation of mixing results obtained with very different methods, sometimes even with methods not specified. We use the terminology consistent when one nominal value is within the STD of the other, compatible when the difference of the nominal values is less than the sum of the two STDs, and discordant otherwise.

Ages of fragmentation families

The age results are in Table 9; comments on specific families follow.

Table 9: Age estimation for the fragmentation families: family number and name, da/dt, age estimation, uncertainty of the age due to the fit, uncertainty of the age due to the calibration, and total uncertainty of the age estimation.

158 (Koronis): the present estimate increases somewhat the result we reported in [Table 10] of 1500 My for the OUT side (the result for the IN side was considered of lower quality), but within the fit uncertainty. Now the results from the two sides are not just consistent but very close, and the fit uncertainty has slightly improved (in Table 10 of the previous paper the calibration uncertainty was not included). The earliest estimates in the literature were just upper bounds of ≤ 2 Gy (Marzari et al., 1995; Chapman et al., 1996), followed by (Greenberg et al., 1996; Farinella et al., 1996) who give ∼2 Gy; Brož et al. (2013) give 2.5 ± 1 Gy.

10955 (Harig), including 19466: a well determined slope, consistent between the two sides, thus confirming the join. The absolute age is of limited accuracy because of the lack of physical observations. No previous estimates were found in the literature.

1521 (Seinajoki): has two significantly different ages, younger for the OUT side. This is an additional case of a dynamical family containing two collisional families. The estimate of 50 ± 40 My found in the literature is compatible with our estimate for the OUT side.

1128 (Astrid): has a perfect agreement on the two sides, which appears as a coincidence since the uncertainty is much higher. Nesvorný et al. (2005) give 100 ± 50 My, which is consistent, our estimate being more precise.

845 (Naema): has a good agreement on the two sides. Nesvorný et al. (2005) give 100 ± 50 My, which is compatible, our estimate being more precise.

Ages of cratering families

The age results are in Table 10; comments on each family follow.
4 (Vesta): the idea that Vesta might have suffered two large impacts generating two families [Sec. 7.3] is quite natural, given that cratering does not decrease the collisional cross section, and has been proposed long ago (Farinella et al., 1996). The new error model and outlier rejection procedure have reduced the fit uncertainty, especially for the OUT side, thus the ratio of the values on the two sides has increased its level of significance (see Table 4). The good agreement of the age from the IN side with the cratering age of the Rheasilvia basin, 1 Gy according to Marchi et al. (2012), is very interesting. Only a rough lower bound age of ∼2 Gy is available for the Veneneia basin, because of the disruption due to the impact forming Rheasilvia (O'Brien et al., 2014). Thus our age estimate from the OUT side is an independent constraint on the age of Veneneia.

15 (Eunomia): in [Table 10] the difference in the slopes for the two sides was much smaller and the fit uncertainty for the OUT side much larger, thus the existence of two separate ages was proposed as possible. The improved results provide a ratio very significantly different from 1, thus the existence of two collisional families inside the single dynamical family 15 is now supported by high S/N evidence. Nesvorný et al. (2005) give 2.5 ± 0.5 Gy as the age for the entire family, which is compatible with our IN side age.

20 (Massalia): our new results are very similar to the ones of our previous paper as well as consistent with (Vokrouhlický et al., 2006b), giving as most likely an age between 150 and 200 My. On the contrary, (Nesvorný et al., 2003) give 300 ± 100 My, which is marginally compatible.

10 (Hygiea): the interesting point is that this dynamical family appears to have a single age, a non-trivial result since the family has a bimodal shape in the proper (a, e) projection, and (10) has almost the same impact cross section as (4) Vesta. In the literature we found only one low-accuracy estimate, 2 ± 1 Gy, which is consistent.

31 (Euphrosine): this high proper sin I family is crossed by many resonances; nevertheless the age can be estimated. In the literature, we found only the upper bound < 1.5 Gy.

3 (Juno): the two ages IN and OUT are not consistent but only compatible; more data are needed to assess the possibility of multiple collisions. In the literature we found only an upper bound < 700 My.

163 (Erigone): another very good example of a join of two dynamical families, 163 and 5026, into a collisional family with all the properties expected, including age estimates consistent (within half of a STD) and a lower number density in a central strip. (Vokrouhlický et al., 2006b) give an age of 280 ± 112 My, which is higher but consistent; (Bottke et al., 2015), by a different method, give an age of 170 +25/−30 My, which is lower but consistent with the IN side. From the figures we can deduce that in both papers their family 163 also includes our 5026.

Ages of young families

The age results are in Table 11. We are interested in finding a lower limit for the ages we can compute with the V-shape method. For most of these asteroids there are in the literature only either upper bounds or low relative accuracy estimates of the ages. In order of estimated age:

1547 (Nele): for this family Brož et al. (2013) give an age < 40 My; Nesvorný et al. (2003) give a constraint ≤ 5 My on the age of the Iannini cluster, which they identified as composed of 18 members not including (1547).
Our estimate (for a family with 152 − 3 = 149 members, including (4652) Iannini) is higher, but such a young age could be too much affected by the effect of the initial velocity field, which is apparent in the anti-correlation between proper a and e. From this example we conclude that probably 15 My is too young to be an accurate estimate by the V-shape method; this family should be dated by a method using also the evolution of the angles ϖ, Ω.

3815 (König): we have a precise estimate; in the literature we have found only an upper bound < 100 My.

606 (Brangane): also a precise estimate, in good agreement with the 50 ± 40 My found in the literature. We do not have a ground truth to assess the systematic error due to contamination from the initial velocity spread, which for these ages may not be negligible.6

396 (Aeolia): also a precise estimate, consistent with the upper bound < 100 My found in the literature.

18405 (1993 FY12): Brož et al. (2013) give an age < 200 My. Our estimate is precise and not just consistent, but the same on the two sides. For this range of ages, around 100 My, the initial velocity field should not matter.

From these examples we can conclude that the V-shape method is applicable to young families with ages below 100 My, but there is some lower age limit t_min such that younger ages are inaccurately estimated from the V-shape. The cases we have analyzed suggest that t_min > 15 My, but we do not have enough information to set an upper bound for t_min.

Ages of one-sided families

The age results are in Table 12; these ages are based upon the assumption that only one side of the family V-shape is preserved. Of course, if this were not the case, ages younger by roughly a factor 2 would be obtained. For each case, comments on the justification of the one-side assumption are given below.

170 (Maria): the very strong 3/1 resonance with Jupiter makes it impossible for asteroids of the IN side of the family to have survived in the main belt; moreover, the shape of the family in the (a, 1/D) plane is unequivocally one-sided. This is an ancient family, and our age estimate is compatible with the 3 ± 1 Gy given in the literature, but we have significantly decreased the estimate, to the point that this cannot be an "LHB" family, as had been suggested.

1272 (Gefion): the very strong 5/2 resonance with Jupiter makes it impossible for most asteroids of the OUT side of the family to have survived in the main belt. Thus there is no OUT side in the V-shape.7 Nesvorný et al. (2005) give an age of 1.2 ± 0.4 Gy, in good agreement with ours, while a one-sided model shown in the literature [Figure 1] gives a discordant age of 480 ± 50 My.

2076 (Levin): as discussed in Section 3, this could be just a component of a complex family, possibly including 298 and 883. The OUT slope, thus the age we have estimated, refers to the event generating 2076, while 298 and 883 have too few members for a reliable age. In the literature there are ages for the family of (298) Baptistina: e.g., Bottke et al. (2007) give a discordant age of 160 +30/−20 My, but they refer to a two-sided V-shape including our 883, with an enormous number of outliers.

3827 (Zdenekhorsky): the family shape is obviously asymmetric, with much fewer members on the OUT side.8 This prevents a statistically significant determination of the OUT slope. The family is not abruptly truncated, possibly because the effect of (1) Ceres is weaker than the one of the main resonances with Jupiter.

1658 (Innes): the shape of the family in the (a, 1/D) plane is clearly one-sided.
The family ends on the IN side a bit too far from the 3/1 resonance, thus the dynamics of the depletion on the IN side remains to be investigated.

375 (Ursula): the very strong 2/1 resonance with Jupiter makes it impossible for most asteroids of the OUT side of the family to have survived in the main belt. This prevents a statistically significant determination of the OUT slope. With an age estimated at ∼3.5 ± 1 Gy, this family could be the oldest for which we have an age. Brož et al. (2013) give the upper bound < 3.5 Gy.

Conclusions and future work

In this paper we have computed the ages of 37 collisional families.9 The members of these collisional families belong to 34 dynamical families, including 30 of those with > 250 members. Moreover, we have computed uncertainties based on a well defined error model: the standard deviations for the ages are quite large in many cases, but still the signal to noise ratio is significantly > 1.

Main results

In Figure 5 we have placed the families on the horizontal axis in the same order used in the Tables, separated into four categories.10 On the vertical axis (in a logarithmic scale) we have marked the estimated age with a 1 STD error bar. To avoid overcrowding of the Figure, for the families with compatible ages from the IN and OUT side we have used the average (weighted with the inverse square of the STD) as the nominal value, with an error bar σ = √(σ²_IN + σ²_OUT)/2. If the two ages are incompatible we have plotted the two estimates with the corresponding bars.11

We have also used an informal terminology by which families are rated by their age: primordial with age > 3.7 Gy, ancient with age between 1 and 3.7 Gy, old with age between 0.1 and 1 Gy; finally the adjective young, as used previously, is for ages < 0.1 Gy. By looking at Figure 5 it is apparent that we have been quite successful in computing ages for old families, and we have significant results for both young and ancient families, while we have little, if any, evidence for primordial families. This should not be rated as a surprise: already Brož et al. (2013), while specifically searching for primordial families, found a very short list of candidates, out of which we are showing 4, 10, 15, 158 and 170 to be ancient, but not primordial. From our results, only two families could be primordial, 24 and 375, although they are more likely to be just ancient. Thus we agree with the conclusion by Vokrouhlický et al. (2010) that most of the primordial families, which undoubtedly have existed, have been depleted of members to the point of not being recognized by a statistically significant number density contrast: our results indicate that this conclusion applies not only to the Cybele region (beyond the 2/1 resonance) but to the entire main belt.

Figure 5 also shows that our results allow many statistically significant absolute age comparisons between different families. Although the results should be improved, especially by obtaining more accurate Yarkovsky calibrations, this can be the beginning of a real asteroid belt chronology. The large compilations of family ages, such as Nesvorný et al. (2005) and Brož et al. (2013), are very useful to confirm that our results are reasonable.

10. To locate these families in the asteroid belt, the best way is to use the graphic visualizer of asteroid families provided by the AstDyS site at http://hamilton.dm.unipi.it/astdys2/Plot/

11. For 847 we have used the IN age and STD, as discussed in Section 3.
The large compilations of family ages, such as Nesvorný et al. (2005) and Brož et al. (2013), are very useful to confirm that our results are reasonable. When available, the uncertainties reported in these compilations are generally larger; often only upper or lower bounds are given. However, the literature analyzed in Section 4.2 shows that results obtained with different methods, even by the same authors, can often be discordant. Thus the comparison of ages for different families should not be done with the ages listed in a compilation, but only with a list of ages computed with a single consistent method, including a single consistent calibration scheme, as in this paper.

In the previous paper, Milani et al. (2014), we introduced the distinction between dynamical and collisional families; out of the 5 dynamical families we analyzed as examples, we found 3 cases in which a dynamical family corresponds to at least 2 collisional ones. In this paper we report the results of a systematic survey of the largest (by number of members) dynamical families, monitoring whether the 1-to-1 correspondence with collisional families does or does not apply.

We have found two examples, for which we use the term family join, in which two separate dynamical families together form a single V-shape, with consistent slopes, thus indicating a single collisional event: this applies to families 10955 and 19466, and to 163 and 5026. Note that this is distinct from a family merge, which can arise when two families, as a result of adding new members with recently computed proper elements, acquire some members in common.

We have also found at least three examples of dynamical families containing multiple collisional families: 4, 15 and 1521. For these we have obtained discordant slopes from the IN and the OUT sides of the V-shape, resulting in distinct ages; see Figure 5. We have found a dubious case, family 3, and there are several other cases already either known or suspected.

Finally, we have found two cases of families containing a conspicuous subfamily, with a sharp number density contrast, such that it is possible to measure the slope of a distinct V-shape for the subfamily, and thus the age of the secondary collision: the subfamily 3395 of 847, and 15124 of 569. There are several cases of subfamilies with a separate collisional age already reported in the literature, but they are mostly from recent (< 10 My of age) collisions: we have identified subfamilies with ages of ∼ 100 My.

From the above discussion, we think a new paradigm emerges: whenever a family age computation is performed, the question of the minimum number of collisional events capable of generating the observed distribution of family members in the classification space has to be analyzed. This also needs to take into account other families in the neighborhood (in the classification space). In our case, the classification space is the 3-dimensional proper elements space, because we use dynamical families, but note that the same argument also applies to other classifications made in different spaces, such as ones containing also physical observation data: separate collisional families may well have the same composition.

Open problems

On other issues we have accumulated data useful to constrain the evolution of asteroid families, but we do not have a full model. An example is the already known fact that many families have a central gap, in the sense of a bimodal number frequency distribution of members as a function of proper a.
The interpretation of this gap as a consequence of the interaction between the YORP and Yarkovsky effects, as proposed in Vokrouhlický et al. (2006b), is plausible and widely accepted, but a model capable of predicting the timescales of this evolution is not available. We have observed the presence and depth of the gap for all the families having, in our best estimate, an age < 600 My.

• Ages between 10 and 100 My: the gap does not occur in the youngest family, 1547, nor in the one near the upper limit of 100 My, that is, 396, but it occurs in 18405, which has an age similar to 396, and in the two with ages ∼ 50 My, 3815 and 606.

• Ages between 100 and 200 My: the gap occurs consistently in families such as 3395, 15124, 1128 and 845, and is less deep in 20.

• Ages between 400 and 600 My: 10955 has a gap and 668 does not.

• Ages > 600 My: among the ancient families, only 158 and maybe 31 show some small dip in density at the center.

These results do not contradict the interpretation that YORP moves the rotation axes towards the spin-up/spin-down position, but takes quite some time to achieve a strong bimodality which gradually empties the gap. Over longer time scales, spin axis randomization can reverse the process. However, our set of examples above shows that the time scales for such processes are not uniform, but may change substantially from family to family.

Another open problem results from the fact that several families on the outer edge of the 3/1 resonance gap appear to have a boundary close to, but not at, the Kirkwood gap. This happens on the IN side of families 480 and 15; there are also families 170 and 1658, which are one-sided because of the missing IN side, with the family not touching the gap. This might require a dedicated study to find a plausible explanation.

Family ages left to be computed

Of the dynamical families in the current classification, there are 11 with > 300 members for which we have not yet computed a satisfactory age. The reasons are as follows.

• There are five complex families: 135, known to have at least two collisional families with incompatible physical properties, difficult to disentangle (see, e.g., [Figure 10]); 221, complex both for its dynamical evolution (Vokrouhlický et al., 2006c) and as a suspect of multiple collisions; 145, which appears to have at least 2 ages; 25, corresponding to a stable region surrounded by secular resonances, which could contain many collisional families; and 179, a cratering family which is difficult to interpret.

• There are another four families strongly affected in their shape in proper elements space by resonances: 5, 110 and 283 by secular resonances, and 1911 inside the 3/2 resonance.

• Two others: 490, well known to be of recent age (Nesvorný et al., 2003; Tsiganis et al., 2007), and 1040, at large proper sin I and also quite large e; both are strongly affected by 3-body resonances.

We are convinced that for many of these it will be possible to estimate the age, but this might require ad hoc methods, different from case to case. In this paper we have included all the ages which we have so far been able to estimate by a uniform method. Other families with a marginal number of members for the V-shape fit (between 100 and 300 in the current classification) could become suitable as new proper elements are computed and the classification is automatically updated, especially in the zones where the number density is low, such as the high-I region and the Cybele region beyond the 2/1 resonance.
Effects of medical history and clinical factors on serum lipase activity and ultrasonographic evidence of pancreatitis: Analysis of 234 dogs

Abstract

Background: Lipase measurements and ultrasonographic (US) evidence of pancreatitis correlate poorly.

Objectives: Identify explanations for discrepant lipase and pancreatic US results.

Animals: Two hundred and thirty-four dogs with gastrointestinal signs.

Methods: A retrospective study was conducted, in which lipase activity and US were performed within 30 hours. Medical history, clinical examination results, lipase activity, and US results were recorded.

Results: Lipase and US results were weakly correlated (r_s = .25, P < .001). At both evaluated time cut-offs, median lipase activities were significantly higher with shorter durations of clinical signs before presentation (≤2 days, 334 U/L; >2 days, 118 U/L; P = .03; ≤7 days, 334 U/L; >7 days, 99 U/L; P = .004), but US was not significantly more frequently positive. For both cut-offs (>216/≤216 U/L, >355/≤355 U/L; reference range, 24-108 U/L), median disease duration was significantly shorter (3 vs 4 days) with higher lipase activities. Previous pancreatitis episodes were significantly associated with a US diagnosis of pancreatitis (P = .04), but median lipase activities were not significantly higher (386 U/L vs 153 U/L; P = .06) in these dogs. Pancreatic US was significantly more often positive when the request contained "suspicion of pancreatitis" (P < .001) or "increased lipase" (P = .01). Only changes in pancreatic morphology, echogenicity, and peripancreatic mesentery were significantly associated with a positive US diagnosis, and these were also associated with significantly higher lipase activities.

Conclusions and Clinical Importance: Duration of clinical signs before presentation affects laboratory and US evidence of pancreatitis differently. Previous pancreatitis episodes and the information given to radiologists influence US results. These findings can be helpful for future studies on pancreatitis in dogs.

Assessments of the diagnostic utility of PLI and lipase activity versus a standardized histopathologic evaluation, similar to what has been published in cats, 16 are lacking in dogs. Another diagnostic cornerstone of a clinical pancreatitis diagnosis is US. Whenever US results have been compared to the laboratory surrogate gold standard lipase (PLI or lipase activity), a clear discrepancy was found between the two modalities, with poor agreement and correlation between the tests. 14,17-20 Causes for this discrepancy have not been investigated. We assume that the duration of clinical signs before presentation plays a role, because circulating lipase very likely reflects the current state, whereas recognizable US changes might lag behind, depending on when in the course of pancreatitis the patient is presented. Also, previous episodes of pancreatitis may have caused remnant pancreatic lesions that are still detectable ultrasonographically and can be mistaken for an active process. 14 Therefore, we aimed to find explanations for the discrepancy between pancreatic US and lipase measurements. Our hypotheses were: (a) duration of clinical signs before presentation differently influences lipase and US results, and (b) previous episodes of pancreatitis and the information given to radiologists influence pancreatic US results.

Cases were excluded if the US report did not mention the pancreas (Figure 1).
Because prednisolone can increase PLI 21 as well as lipase activity 22 in dogs, and PLI is often increased in dogs with hyperadrenocorticism without clinical evidence of pancreatitis, 23 pretreatment with corticosteroids or a diagnosis of hyperadrenocorticism was an additional exclusion criterion. Concurrent azotemia was not an exclusion criterion, because neither experimentally induced acute kidney injury 24 nor chronic renal failure 25 has been shown to have significant effects on lipase activity and PLI, and the correlation of lipase activity with serum creatinine concentration was poor in a recent study. 13

Medical history and clinicopathological variables

The presence of the following clinical signs was recorded: general clinical demeanor at presentation as judged by the attending clinician after taking the history and examining the dog (normal vs diminished), vomiting, hematemesis, diarrhea, hematochezia, anorexia, painful abdomen, and obesity, as well as the clinical examination results. Duration of clinical signs was recorded exactly when the number of days was known. If owners stated that clinical signs had been present for 1 to 2 days, the dogs were grouped into ≤2/>2 days for calculation. All dogs with clinical signs for at least 7 days were grouped into ≤7/>7 days for calculations.

Lipase activity and PLI concentration

Lipase activity (reference range, 24-108 U/L) is included in the routine serum biochemistry panel and was measured using an in-house assay (LIPC, Roche on Cobas Integra 800, Roche Diagnostics, Rotkreuz, Switzerland). 14 Similar to the interpretation of PLI concentrations, we had originally created a preliminary equivocal zone of 109-216 U/L that we considered a questionable range. 14 Pancreatic lipase immunoreactivity (reference range, 0-200 μg/L) was measured by IDEXX Laboratories (Diavet IDEXX, Switzerland) at the clinician's discretion.

Pancreatic US diagnosis and US variables

Ultrasonography was performed either by a board-certified radiologist or by a resident under supervision. Reports were written immediately after the examination. The results were taken as written in the original radiologists' reports and reviewed for each patient. The US diagnoses were divided into two groups using the verbatim description and diagnosis of the radiologist's report, namely (a) normal pancreas (i.e., no US changes in the pancreas and the pancreas noted as normal) and (b) pancreatitis, when the diagnosis in the radiology report was either pancreatitis or suspicion of pancreatitis. Eight descriptive imaging terms relating to the pancreas and gastrointestinal tract were also recorded.

Statistical analyses

Spearman's rank correlation coefficients (r_s) of lipase activity with PLI and with the US results were determined. Agreement between lipase and US results was assessed using Cohen's kappa coefficient (κ). Linear regression analysis was used to calculate the goodness-of-fit (R²) of truncated lipase activities with PLI results. Because PLI is reported only up to 1500 μg/L by the external laboratory, lipase activities > 1500 U/L were likewise truncated at 1500 U/L for regression analysis. Kruskal-Wallis and Mann-Whitney U-tests were applied to compare lipase activities between US categories. Associations among the pancreatic US diagnosis and laboratory as well as US variables were assessed using chi-squared tests. For secondary and additional endpoints, exploratory data analysis was performed and P-values were used in an exploratory context. 26 Cramér's V and Hedges' g were used as measures of effect size for categorical and metric variables. All tests were performed 2-tailed using a 5% level of significance (P = .05). All statistical analyses were performed using SPSS version 25 (IBM Inc).
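As an illustration of this statistical workflow, here is a minimal Python sketch using standard SciPy and scikit-learn routines; the arrays below are invented placeholders, not data from this study.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

lipase = np.array([80.0, 150.0, 400.0, 2100.0, 95.0])    # U/L
pli    = np.array([120.0, 210.0, 500.0, 1500.0, 140.0])  # ug/L

# Spearman's rank correlation between the two lipase assays
rs, p = spearmanr(lipase, pli)

# Truncate both assays at 1500 before linear regression, mirroring the
# external laboratory's reporting limit for PLI
x = np.minimum(lipase, 1500.0)
y = np.minimum(pli, 1500.0)
slope, intercept = np.polyfit(x, y, 1)

# Agreement between dichotomized lipase (>216 U/L) and US diagnoses
lipase_pos = (lipase > 216).astype(int)
us_pos = np.array([0, 0, 1, 1, 0])  # hypothetical US results
kappa = cohen_kappa_score(lipase_pos, us_pos)
print(rs, p, slope, intercept, kappa)
```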
Dogs

A total of 362 client-owned dogs were initially identified that presented with ≥1 of the predefined clinical signs and had lipase activity measurements performed. Subsequently, 101 dogs were excluded because US was performed > 30 hours after blood analysis or because they had received corticosteroid treatment before presentation. A further 27 cases were excluded because the pancreas was not mentioned in the US report, leaving 234 dogs for analysis (Figure 1). One hundred and thirty-one dogs were male and 103 were female.

Truncation of lipase activity results at 1500 U/L yielded r_s = .916 (P < .001). Linear regression analyses indicated that the lipase activity cut-offs corresponding to the PLI cut-offs ≤200/>400 μg/L were higher than previously suggested. 14

Correlation of medical history and clinical signs with lipase activity and the pancreatic US diagnosis

Previous pancreatitis episodes correlated significantly with a US diagnosis of pancreatitis, but not with lipase activity (Table 1). Median lipase activity was significantly higher in dogs that had more acute clinical signs (calculated for ≤2 days or ≤7 days before presentation) compared to dogs that had more prolonged clinical signs (P = .03 and .004, respectively). No significant difference was found between the duration of clinical signs and the pancreatic US diagnosis (Table 2).

Correlation of individual US variables with lipase activity and the pancreatic US diagnosis

History for the radiologist

When the radiology request contained "suspicion of pancreatitis," the US diagnosis was significantly more often pancreatitis (P < .001), whereas lipase activities were not significantly higher (P = .06). When the radiology request contained "increased lipase," lipase activity was significantly higher (P < .001) and the US diagnosis was significantly more often positive (P = .01) (Table 3). Chi-squared statistics on the significance of radiologist bias (information on "suspicion of pancreatitis" or "increased lipase") for the individual pancreatic US variables are given in Table 4.

Pancreatic morphology

Lipase activity was significantly higher when an enlarged pancreas and rounded pancreatic contours were recorded (P < .001). Both variables were also significantly associated with a final US diagnosis of pancreatitis (P < .001).

Pancreas visualization

Visualization of the pancreas correlated significantly with the pancreatic US diagnosis but not with lipase activity (Table 3). Comments on the right and left lobe, respectively, were available in 16.3% and 10.7% of reports; the body was specifically mentioned in only 6.9% of cases. Lipase activity was significantly higher when visualization of the right and left lobes of the pancreas was mentioned.

Pancreatic echogenicity

Lipase activity was significantly lower in dogs with normal pancreatic echogenicity (P < .001), and significantly higher in those with a hypoechoic (P < .04) or hypo- and mixed-echoic (P < .01) pancreas.
A normal echogenicity was significantly associated with a normal pancreas on US, whereas all recorded changes in echogenicity (hypoechoic, mixed-echoic, and hyperechoic pancreatic parenchyma) were significantly associated with a US diagnosis of pancreatitis (P < .001; Table 3).

Gastrointestinal tract involvement

Few gastrointestinal variables correlated with lipase activity or the pancreatic US diagnosis (Table 3). Dogs with aperistalsis of the small intestine had significantly higher lipase activity (P = .03), whereas a corrugated duodenum was significantly associated with a US diagnosis of pancreatitis (P = .03).

Surrounding mesentery and peritoneal effusion

Dogs with hyperechoic mesentery and peritoneal effusion had significantly higher lipase activities (P = .04 and .02, respectively). Both variables were also significantly associated with a positive US diagnosis (P < .001 and .02, respectively; Table 3).

Correlation of individual US variables and clinical variables

Associations between individual US variables and clinical signs can be found in Table S1.

DISCUSSION

We aimed to find explanations for the weak correlation between laboratory and US evidence of pancreatitis in dogs. 14,17-20 Clinical signs, lipase measurement as the surrogate laboratory gold standard, and pancreatic US have never been correlated to detect information that could optimize a clinical diagnosis of pancreatitis. Improving our ability to clinically diagnose pancreatitis is important because pancreatic biopsy is highly invasive, focal lesions can be missed, and therapeutic consequences are limited. 3

Lipase activity correlated strongly with PLI concentration, similar to previous reports. 7,8,10,12-15 Regression analysis between the two assays identified higher lipase activity cut-offs than initially estimated. 14 The basis for the currently used PLI cut-offs (≤200 μg/L / >400 μg/L) is unknown. Healthy dogs can have concentrations up to 279 μg/L, 27 and concentrations > 200 μg/L can be found in clinically healthy dogs, with results up to 516.2 μg/L. 28 The reasoning for the suggested equivocal zone was that it is virtually impossible to rule out transient mild pancreatitis in clinically normal dogs. 29 Probably, some safety margin similar to the commonly used threshold of 3 × the upper limit of the reference range for lipase activity was built in. 13,30 Understandably, the use of these cut-offs has implications for comparisons with other tests. We initially performed all analyses with both lipase assays, assuming that attending clinicians did not request PLI concentrations whenever lipase activity was already markedly increased. 31 However, plotting the data indicated no discernible pattern (Figure S1).

Data in humans suggest that repeated episodes of acute pancreatitis are needed to damage the pancreas until morphological changes are detectable. 34 Only 0.3% of patients with a first episode of AP (n = 983), but 32% (n = 58) of those with a fifth episode of acute recurrent pancreatitis, have computed tomography, magnetic resonance imaging, US, or endoscopic US-based morphological changes in the pancreas. 34 Such data are lacking in veterinary medicine, but our results give impetus to considering the comment "previous episode of pancreatitis" when interpreting pancreatic imaging results in dogs.

The effect of time on lipase results is visible when considering the duration of clinical signs before presentation. Dogs had a significantly shorter duration of clinical signs when lipase activity was > 216 U/L or > 355 U/L.
Similarly, lipase activity was significantly higher at both time cut-offs (≤2 d/≤7 d) when dogs were more acutely sick. Duration of clinical signs was not significantly associated with a US diagnosis of pancreatitis (Table 2). The effect of the duration of clinical signs before presentation on lipase activity and pancreatic US has not been assessed previously. Emerging evidence suggests that the duration of clinical signs before presentation does have an impact. In a recent study, dogs had repeated US examinations every 24 hours after admission. At presentation, 24/37 dogs (65%) had US findings suggestive of AP, whereas 10 dogs (27%) became positive on US examination within 2 days after hospitalization. 20 Similarly, the weak but significant relationship between PLI concentrations and a US pancreatic severity score found at baseline evaluation was lost when analyzed again for 12 dogs with repeated testing (days not specified). 19 This finding can be viewed as further indirect evidence that serum lipase and US results change at different rates.

Lipase activity was significantly higher for 4 clinical signs recorded as present in the medical records, whereas 3 clinical signs were significantly associated with a US diagnosis of pancreatitis (Table 1). Only the clinical signs "diminished general demeanor" and abdominal pain had significant associations with both lipase activity and US results. Single clinical signs have never been compared with lipase results in dogs. Pancreatic lipase immunoreactivity and C-reactive protein correlated moderately (r_s = .42) with a clinical activity index in 13 dogs with pancreatitis. 35 However, this correlation was based on all time points from presentation until discharge, and no temporal association could be inferred from that study. 35

The classical US abnormalities associated with pancreatitis include variable degrees of a hypoechoic (hypo- and mixed-echoic), enlarged pancreas with rounded edges, and surrounding hyperechoic mesentery with or without adjacent free fluid. 36 We found significantly higher lipase activities and significantly more US diagnoses of pancreatitis for these 5 abnormalities (Table 3). Ultrasonography is heavily dependent on operator skill and experience. 40-42 However, even when images were recently re-analyzed in a blinded fashion on the basis of more standardized US variables, the correlation with concurrently measured PLI was still poor. 19 We found that visualization of the pancreas was significantly correlated with the US diagnosis, but it remained unclear exactly how much of the pancreas was seen. We could not determine whether normal parts were simply not mentioned, or whether all parts were examined but only abnormalities were mentioned. In a recent retrospective study (n = 293), correlations between clinical signs and the affected pancreatic lobes were found. 37 In that study, abdominal pain, vomiting, and diarrhea were significantly more commonly identified in diffuse pancreatitis, whereas anorexia was more prevalent in right-sided and diffuse pancreatitis. 37 In our study, abdominal pain, diminished clinical demeanor, and anorexia were significantly more common with a US diagnosis of pancreatitis (Table 1). Exact prospective recording of the severity of clinical signs, as well as standardized pancreatic US reporting, will help delineate associations between clinical presentations and US findings.
Changes in pancreatic echogenicity and morphology were significantly different when the radiology request contained "suspicion of pancreatitis" or "increased lipase/actual lipase result" (Table 4). A suspicion of pancreatitis may have influenced the ultrasonographers, resulting in a more focused search in the pancreatic region and greater weight assigned to subtle US findings. Significantly fewer dogs had normal pancreatic echogenicity when "suspicion of pancreatitis" or "increased lipase" was written on the request form. There were no significant associations between the information given to radiologists and the US gastrointestinal tract findings, suggesting that the medical history indeed had an effect on the interpretation of pancreatic US findings. Similar possible biases from radiology request forms have been reported in cats undergoing imaging for pancreatitis, but the numbers were too low to be significant. 41 When the radiology request contained "suspicion of pancreatitis," the US diagnosis was significantly more frequently positive, while lipase activity was not different.

Although there was no significant correlation of aperistalsis of the small intestine with a US diagnosis of pancreatitis, this finding correlated significantly with lipase activity, possibly reflecting more acute stages of pancreatitis. 42 We do not believe opioid analgesics interfered with our results, because metamizole is our first-line drug in dogs with visceral pain.

A hyperechoic mesentery and peritoneal effusion surrounding the pancreas are highly suggestive of AP 44,45 and correlated significantly with both lipase activity and the US diagnosis in our study. Mesenteric echogenicity was also significantly correlated with a clinical diagnosis of pancreatitis, but not with PLI, in dogs presenting with gastrointestinal clinical signs. 19 We found no associations between the presence of a hyperechoic mesentery or peritoneal effusion and the information given to radiologists on the request forms ("suspicion of pancreatitis," "increased lipase"), suggesting that these US findings are more robust and less prone to variable interpretation.

Our study had some limitations. We could not determine the frequency of clinical signs, because all clinical signs were recorded as present when mentioned in the records, regardless of severity and frequency. Another limitation was that the US examinations were carried out by multiple radiologists, and reporting was not standardized. When parts of the pancreas were not mentioned, we did not know whether they were normal and thus not mentioned, or not seen. The fact that almost all visualization variables (left limb, right limb, body) correlated with lipase results and the final US diagnosis makes it very likely that radiologists mentioned those parts when they felt they were abnormal. We purposely relied on the US reports and did not re-evaluate saved static US images or loops. It is our experience that saved US images often do not fully reflect all changes seen during the examination, but represent only excerpts. Similar experiences were reported in a multi-institutional study on pancreatitis in dogs. 5 Also, the evaluation of static ultrasound images is not free from drawbacks, because radiologists can differ markedly in their assessment of archived images. 46 We would like to emphasize that our results refer to the LIPC Roche DGGR-lipase assay. 47 There are now several DGGR-based assays on the market with different reference ranges, and therefore our results cannot necessarily be applied to other assays.
In conclusion, we believe that the duration of clinical signs before presentation, previous episodes of pancreatitis, and the information given to radiologists all play a role in the discrepancy between laboratory and US evidence of pancreatitis, and these factors should be considered in future studies so as to better assess relationships between laboratory and US findings. 34 Serial assessments of both lipase and US would be ideal to explore how the factor time affects the results of both tests.

ACKNOWLEDGMENT

No funding was received for this study.
Category Mismatches in Coordination Vindicated

Bruening and Al Khalaf (2020) deny the possibility of coordination of unlike categories. They use three mechanisms to reanalyze such coordination as involving same categories: conjunction reduction, supercategories, and empty heads. We show that their proposal leaves many cases of unlike category coordination unaccounted for, and we point out various methodological, technical, and empirical problems that it faces. We conclude that the so-called Law of the Coordination of Likes is a myth. Instead, all conjuncts must satisfy any external restrictions on the syntactic position they occupy. Such restrictions may be rigid, resulting in categorial sameness, but when they are underspecified or disjunctive, category "mismatches" may arise.

Introduction

The view that only the same grammatical categories may be conjoined (e.g., Chomsky 1957:36), elevated to the status of a universal law (Williams 1981:sec. 2), has been repeatedly questioned (e.g., Sag et al. 1985, Bayer 1996). At present, a more frequent view, concisely expressed in the following quotation from The Cambridge Grammar of the English Language (CGEL), seems to be that any constituents may be coordinated, as long as each is licensed in the syntactic position occupied by the coordinate structure:

(1) If (and only if) in a given syntactic construction a constituent X can be replaced without change of function by a constituent Y, then it can also be replaced by a coordination of X and Y. (Huddleston and Pullum 2002:1323) 1

Any apparent "sameness" requirements result from the fact that each conjunct must satisfy the constraints imposed on the syntactic position occupied by the coordinate structure. These constraints may be rigid, resulting in the sameness of categories of all conjuncts. However, when such constraints are underspecified or disjunctive, each conjunct may satisfy them in a different way, leading to category mismatches.

We wish to thank the following people for their comments on previous versions of this article: Bob Borsley, Mary Dalrymple, John J. Lowe, Joan Maling, Ora Matushansky, Geoff Pullum, Eric Reuland, and the anonymous reviewers of Linguistic Inquiry. (The usual disclaimers apply.) Agnieszka Patejuk gratefully acknowledges the Mobilność Plus mobility grant awarded by the Polish Ministry of Science and Higher Education.

Footnote 1: (1) is a variant of the so-called Wasow's Generalization: "If a coordinate structure occurs in some position in a syntactic representation, each of its conjuncts must have syntactic feature values that would allow it individually to occur in that position" (Pullum and Zwicky 1986:752-753, (4)).

Bruening and Al Khalaf (B&AK) (2020) deny the possibility of coordination of unlike categories. To reanalyze category mismatches in coordination as involving the same categories, they use three mechanisms: conjunction reduction (CR), supercategories (SCs), and empty heads (EHs). B&AK use CR (coordination of larger constituents and subsequent ellipsis) for coordination of arguments with modifiers, as in (2a), where the coordination of an NP (meat) and a PP (at restaurants) is claimed to actually involve two VPs, as shown in (2b), contrary to what the placement of neither . . . nor . . . might suggest.

(2) a. I eat neither meat nor at restaurants. (Zhang 2009:187, (7.24c))
b. [structure under the CR analysis: coordination of two VPs, with the repeated verb elided]

B&AK use supercategories (SCs) for predicative and modifier positions: Pred (inspired by PredP; Bowers 1993) for predicates, as in (3), and Mod (inspired by ModP; Rubin 2003) for modifiers, as in (4).
Such predicative or modifier constituents have complex categories consisting of an SC and the usual basic category (NP, AP, etc.), for example, Pred:NP or Pred:AP. In such cases, the identity of the SCs is sufficient for coordination to be licensed.

(3) a. Pat is a Republican and proud of it. (Sag et al. 1985:117, (2b))
(4) a. We walked slowly and with great care. (Sag et al. 1985:140, (57))

B&AK use empty heads (EHs) in subcategorization violation examples such as (5a), where one conjunct is a CP, even though the verb subcategorizes for the preposition on followed by an NP (see (5b)), and not a CP (see (5c)). On B&AK's analysis in (6), N is a phonetically and semantically empty nominal head, converting a CP into an NP.

(5) a. You can depend on my assistant and that he will be on time. (Sag et al. 1985:165, (124b))
b. You can depend on my assistant.
c. *You can depend (on) that he will be on time.

In sections 2-3, we show that both strategies, SC and EH, face numerous empirical, technical, and methodological problems. Though these problems suffice to invalidate B&AK's proposal, in section 4 we further refute B&AK's empirical arguments against unlike category coordination and present new data supporting the existence of coordination of unlike categories, in accordance with the CGEL quotation in (1). While we follow B&AK in relying on data from English, similar arguments could be made on the basis of other languages. 2

Supercategories

Consider (7)-(8) (B&AK 2020:25, (85) and (84), respectively); (8) represents the coordination in (7a), and the representation of (7b) would be analogous. The "C:NP/AP" index on became indicates that this verb c-selects an NP or an AP. This requirement is satisfied in (7a), as each of the base categories within the complex category Pred:{NP,AP} is either an NP or an AP, but not in (7b), because of the violating base category PP. So, for the purpose of categorial selectional restrictions, base categories do count as syntactic categories. By contrast, if SCs are present, base categories do not count as syntactic categories for the purpose of the same-category coordination schema in (9) (B&AK 2020:24, (82)); for example, Pred:NP and Pred:AP in (8) are taken to be the same category α in (9).

Technical Problems: Complexity, Vagueness, and Inconsistency

The deceptively simple schema in (9) hides the underlying complexity of B&AK's analysis. It faithfully reflects only the situation where the same simple categories are coordinated. In the case of SCs, as in (8), it must instead be interpreted as follows: 3 (a) the SCs of all constituents apart from Coord must be the same (see Pred in (8)); (b) the complete complex categories of the sister of Coord and its mother must be the same (see Pred:AP); (c) the set of base categories within the complex category of the coordination contains exactly the base categories of its daughters (see {NP,AP}). Unfortunately, it is not clear what theoretical mechanism makes it possible to collect base categories into sets, nor is it clear what theoretical properties complex categories such as Pred:{NP,AP} have. The theoretical vagueness surrounding complex categories is striking, given that the proposed mechanisms are completely new and crucial for B&AK's claim that there are no categorial mismatches in coordination. To make the intended mechanics concrete, the sketch below models our reading of them.
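The following toy Python sketch encodes one possible reading of the complex-category machinery; the encoding (a pair of a supercategory and a set of base categories) and all function names are ours, not B&AK's.

```python
# A toy encoding of complex categories such as Pred:{NP,AP}, as a pair
# (supercategory, frozenset of base categories).

def coordinate(*cats):
    """Coordinate complex categories per our reading of schema (9):
    the supercategories must be identical; the mother collects the
    union of the daughters' base categories."""
    supers = {sc for sc, _ in cats}
    if len(supers) != 1:
        raise ValueError("crash: unlike supercategories")
    bases = frozenset().union(*(b for _, b in cats))
    return (supers.pop(), bases)

def c_selects(allowed, cat):
    """Categorial selection inspects the base categories: every base
    category of the complex category must be in the allowed set."""
    _, bases = cat
    return bases <= allowed

np_pred = ("Pred", frozenset({"NP"}))
ap_pred = ("Pred", frozenset({"AP"}))
pp_pred = ("Pred", frozenset({"PP"}))

coord = coordinate(np_pred, ap_pred)                    # Pred:{NP,AP}
print(c_selects({"NP", "AP"}, coord))                   # True, cf. (7a)
print(c_selects({"NP", "AP"}, coordinate(np_pred, pp_pred)))  # False, cf. (7b)
```

Note how, in this reconstruction, `c_selects` inspects base categories while `coordinate` ignores them; this is exactly the asymmetry discussed next.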
Also, the fact that base categories within such complex categories do count as syntactic categories for the purpose of the verb's categorial selectional restrictions, but at the same time do not count as syntactic categories for the purpose of the claim that coordination involves the same categories, reveals an internal conceptual inconsistency.

Empirical Problem: Semantically Specified Arguments

Let us consider some attested examples 4 (from the English Web 2015 corpus 5 and Google, some simplified) involving the verbs treat (in (10)-(11)), word (in (12)-(13)), and behave (in (14)-(17)). All these verbs take an argument expressing manner. In all three cases, it is clear that the relevant dependent is an argument, not a modifier: it is obligatory; that is, without it the sentence becomes ungrammatical or the verb changes its meaning. 6 While the argument/modifier distinction is notoriously murky, it is generally accepted that obligatory dependents are arguments. 7 This manner argument may bear various syntactic categories: not just AdvP (e.g., individually), but also at least PP (e.g., with respect) and NP (this way). As the above examples show, manner phrases of different categories may be coordinated in these argument positions.

How could B&AK account for such examples? The EH strategy (postulating an empty nominal head converting a CP into an NP, discussed in section 3) is unavailable, as such manner arguments are not canonically nominal and, besides, there are no CPs in these examples that could be analyzed as NPs. 8 The CR strategy also fails here; for example, the hypothetical input to ellipsis in the case of (14) would be flawed. So the only possibility left is to use the SC strategy. However, these manner arguments are not predicates, nor are they modifiers. Nonetheless, given that the functional projection MannerP has been postulated in the literature (e.g., in Scott 2002:104 and Alexeyenko 2012), one might, slightly modifying a statement in B&AK 2020:10, "propose that there was something right about the MannerP analysis" and introduce a new supercategory, Manner. 10

Similarly, predicates like reside take obligatory locative arguments, including NP and PP arguments, as in the pseudocleft examples in (18)-(19). Given that a coordination of such locative NP and PP arguments may form a pivot in pseudoclefts, as in (20), this is a genuine case of coordination of unlike categories by B&AK's standards, one that is not covered by their analysis, unless yet another SC mimicking a functional projection (e.g., LocP in Kim 2019:chap. 4) is assumed. The same argument can be made on the basis of predicates that select for durative arguments, such as last, as in the attested examples in (21)-(25). Again, it is possible to construct corresponding pseudocleft sentences (so that CR is not applicable) and to reverse the order of conjuncts (so that EH is not applicable). And again, B&AK's approach could be "rescued" by postulating yet another SC inspired by a functional projection (e.g., DurativeP in Kratzer 2004:412).

Methodological Problem: Unfalsifiability

A methodological problem with the SC strategy is that, once SCs loosely inspired by functional projections are generally admitted, the claim that only same categories may be coordinated becomes unfalsifiable. The reason is this.
While, as we endeavor to demonstrate in this article, there is no requirement that only same categories may be coordinated, conjuncts are "same" by virtue of occupying the same syntactic position: they bear the same grammatical function, the same semantic role, or, in some constructions, at least the same information-structural status. Given the multitude of functional projections proposed since the 1980s, there is a good chance that for any grammatical, semantic, or pragmatic property that unlike category conjuncts can share, there exists a corresponding functional projection. If so, another "supercategory" may be postulated, loosely inspired by that functional projection, which "explains" the "apparent" coordination of unlike categories. Hence, unless the applicability of this strategy is limited in a principled way, B&AK's claim that there are no categorial mismatches in coordination becomes unfalsifiable and, as such, is of limited scientific value (Popper 1935). 11 For this reason, in what follows we assume that the SC strategy is limited to Pred and Mod. But then (10)-(17) and (20)-(25) constitute genuine counterexamples to B&AK's analysis.

Empirical Problem: Modifier and Argument

Consider the verbs die and reside. Die takes only one argument (the subject), and any locative phrase is an optional modifier, so, for B&AK, in Rome in (26) has the complex category Mod:PP. By contrast, reside takes two obligatory arguments (*St. Peter did reside is ungrammatical), so in Rome in (27) is an argument and bears the plain category PP, without any SC. Such examples provide another kind of empirical counterargument against SCs.

Empirical Problem: Coordination of Unlike Supercategories

Consider (29) (B&AK 2020:11, (35b)), which involves coordination of two predicative modifiers. B&AK mark the conjuncts with the SC Mod, adding that they could perhaps be marked with the SC Pred "in place of or in addition to" Mod. This is another place where B&AK are vague about the exact properties of one of the two main mechanisms, SCs and EHs, that they invoke to claim that there are no category mismatches in coordination: it is left undecided whether the SC of predicative modifiers is Mod (as in (29)), Pred, or {Mod,Pred}. The last possibility seems most intuitive (the other two seem arbitrary), but it faces empirical problems. Consider example (30), involving coordination of two modifiers.

(30) Reluctantly and embarrassed, the white officer released the Black man . . . (Theodore Kirkland, Spirit and Soul: Odyssey of a Black Man in America, 339)

The first modifier, reluctantly, is an unambiguous adverb and cannot predicate of the subject. By contrast, the other modifier, the adjective embarrassed, is predicative. 12 Hence, on the most intuitive interpretation of B&AK's SC mechanism, the relevant constituent in (30) has the structure in (31), with the conjuncts bearing Mod and {Mod,Pred}, respectively. However, (30) should then be ungrammatical, because the two conjuncts in (31) bear different supercategories, Mod and {Mod,Pred}, violating the schema in (9). 14

Footnote 12: It is uncontroversial that embarrassed may act as a predicative adjective, as it may occur with verbs such as become, seem, look, and appear. Other predicative adjectives may also be coordinated with adverbs, as in (i).
(i) Reluctantly, and full of tears, I threw in the towel and got a cab . . . (http://endduchenne.co.uk/london2cambridge/)
Similarly, one of the two arbitrary possibilities mentioned by B&AK, that of assigning just Pred to predicative modifiers, would also lead to coordination of unlike SCs, as illustrated in (32). Only the second arbitrary possibility, that of assigning just Mod to predicative modifiers, leads to a grammatical structure (obeying the schema in (9)), shown in (33). However, nothing in B&AK's proposal guarantees that (30) has the structure in (33) rather than (31) or (32); another assumption is needed to ensure this. 15

Theoretical Weakness: Lack of Independent Motivation

The final problem with the SC strategy is its lack of independent motivation. When proposing the SCs Pred and Mod, B&AK refer to Bowers 1993 and Rubin 2003, respectively. However, the SCs Pred and Mod have little in common with the original functional projections PrP (henceforth, PredP) and ModP, and arguments for those functional categories do not automatically carry over to the similarly named supercategories. In fact, some of the original empirical arguments for PredP and ModP can be interpreted as arguments against the SCs Pred and Mod. In particular, both functional heads, though usually phonetically empty, were argued to have lexical realizations in some constructions in some languages (see Bowers 2001:sec. 1.6 on Pred and Rubin 2003:sec. 3 and references there on Mod). If so, the original functional projections PredP and ModP may be properly (lexically) larger than the embedded predicates or modifiers of category NP, PP, AP, and so on. This should be contrasted with the supercategories Pred and Mod, which are coextensive with the underlying NPs, PPs, APs, and so on. Also, as made clear in the extensive critique of PredP in Matushansky 2019, the original theoretical arguments for this functional projection are void in current versions of mainstream generative grammar; on the contrary, theoretical arguments may be constructed against the usefulness of PredP in contemporary linguistic theory. Similarly, a critique of the original motivation for ModP may be found in Song 2020:sec. 3. Hence, the original functional projections PredP and ModP provide neither empirical nor theoretical motivation for the SCs Pred and Mod proposed by B&AK. Since B&AK do not adduce any independent motivation for these SCs, we conclude that such SCs are a completely new mechanism, motivated solely by the use to which B&AK put them: to work around unlike category coordination.

Empty Heads

The second strategy used by B&AK to avoid unlike category coordination is to assume two EHs whose effect is to "convert" one category into another: a null N converting (within syntax proper) CPs into NPs, and a null Adv (present only in the lexicon, apparently inactive in syntax proper) converting adjectives into adverbs. The EH strategy is invoked in the analysis of unlike category coordination of arguments, where the argument farther from the head violates this head's selectional restrictions, that is, for situations schematically shown in (34). B&AK provide (5a), repeated here as (35), as an example of (34a), and (36) as an instance of (34b). In both examples, the CP is reanalyzed as an NP headed by the semantically and phonetically empty N (cf. (6)).

Methodological and Empirical Problem: Subcategorization Violations

The main methodological problem with this part of B&AK's argumentation is that it is limited to, and draws far-reaching conclusions from, the very narrow range of data related to subcategorization violations, a phenomenon that "has nothing to do with coordination per se" (Bayer 1996:585n7).
But, even focusing on unlike category coordination in nonpredicative argument positions, for which the EH strategy was designed, the vast majority of cases involve coordination of unlike category arguments that do satisfy selectional restrictions and that may occur in any order within coordinate structures (subject to general restrictions such as the weight of conjuncts). One case in point is arguments expressing manner, location, or duration, discussed in section 2.2. It is also easy to find examples of coordination of NP and CP arguments that are similar to (35) but do not violate any subcategorization requirements: for example, arguments of convey (see (37)), mean (see (38)), understand (see (39)), suggest (see (40)), and show (see (41)). Crucially, what speaks against the EH analysis, and thus makes such sentences genuine counterexamples to B&AK's analysis, is the possibility of changing the order of conjuncts, as illustrated in (43). Many more examples involving coordination of categorially unlike arguments are provided in sections 3.2 and 4.

Empirical Problem: Order of Conjuncts

B&AK's analysis predicts that whenever coordination of an NP and a CP is possible, and it cannot be accounted for via CR or SCs, only one order of conjuncts is possible, with the "true" NP closer to the selecting head (see section 3.5 for technical details). For example, while (44) is claimed to be acceptable, (45), with the reversed order of conjuncts, is claimed not to be. We agree that (45) is less acceptable, but we claim that it is still fully grammatical. The diminished acceptability is a matter of the relative weights of the two conjuncts. For example, Sag et al. (1985:167n34) cite examples such as (46)-(47). Sag et al. (1985:167n34) note that their theory (just like B&AK's account) would predict (47) to be grammatical only under the ellipsis (CR) analysis, which would in turn predict the impossibility of topicalization of (47) (in contrast to (46)). They construct topicalized versions of (46)-(47), mark the latter with one question mark, and ask readers to "assess for themselves the accuracy of this prediction." However, it is well known that, "outside of some very well-rehearsed examples such as Beans, I like" (Davies and …), judgments about topicalization are not straightforward; (48)-(49) are both acceptable and, if (49) seems a little more awkward, this is expected given that it is syntactically more ambiguous and so more difficult to process. 17 In summary, contrary to B&AK's claim, any order of NP and CP conjuncts within the propositional argument of remember is possible. Combined with the pseudocleft facts in (48)-(49) and with the lack of appropriate supercategories in this case, this means that none of B&AK's strategies is available. That is, verbs such as remember, which select for an NP or a CP (or a coordination thereof), contradict B&AK's analysis.

Empirical Problem: Overgeneration

Probably the starkest empirical problem that this part of B&AK's analysis faces is overgeneration. The analysis predicts that any predicate that combines with an NP will also combine with the coordination of an NP and a CP, even if it does not combine with a CP directly. That is, every such predicate behaves like depend (on) in (5). This prediction is wrong: verbs such as withdraw and strengthen select for an NP that may express a proposition, and yet this NP cannot be coordinated with a CP, as shown in the following examples:

(50) {He withdrew / This strengthens} {this claim / the claim that Homer is a genius}.
(52) *{He withdrew / This strengthens} this claim and that Homer is a genius.

This is a known issue, pointed out in Bayer's (1996:585-586) critique of Sag et al. 1985, which makes the same wrong prediction.
Bayer writes:

Even allowing for semantic restrictions, this prediction is incorrect. The preposition despite, for example, permits NP complements which denote facts or propositions, but not [CP] complements, and conjuncts containing [CP] are disallowed as well.
(53) b. Despite the fact that all the musicians quit, Michael signed the contract.
c. *Despite that all the musicians quit, Michael signed the contract.
d. *Despite LaToya's intransigence and that all the musicians quit, Michael signed the contract.
If we require the complement of despite to be an NP, and reject any attempts to compromise this requirement, the ungrammaticality of [(53d)] follows immediately.

While B&AK refer to Bayer 1996, they do not address this problem. We see no way of accounting for such examples within B&AK's set of assumptions.

Methodological Problem: Multiple Nominalizing Empty Heads and Unfalsifiability

As mentioned earlier, the nominal EH crucial for B&AK's account is semantically empty; it cannot bear any s-features, so it cannot head an argument that is semantically selected. However, in footnote 27 B&AK also admit the existence of other, semantically contentful, nominal EHs. One such EH should be responsible for nominalizing question CPs; since these may occur as objects of prepositions, including the object of (depend) on (see (54)), the EH nominalizing such question CPs cannot be semantically empty.

(54) The price and the quality depend on how desperate you are. (English Web 2015)

This semantically contentful EH would be the second null head responsible for the coordination of NPs and CPs, namely, for cases involving question CPs, as in (55) (B&AK 2020:20n24). 18

(55) It's amazing how tall he is and the things he can do. (Munn 1993:119, (3.24a))

In footnote 25, B&AK assume that "CPs can occur in subject position, but they must be NPs with a null N head when they do." In this context, consider (56). B&AK's unacceptability marking of (56) is misleading. In footnote 7, they say that "[i]n an informal poll of approximately seven speakers, two had the pattern of judgments described here," while five accepted (56). If so, is the nominalizing EH at work in (56) in the language of the five speakers who accept it the same as the EH at work in (57) in the language of the two speakers who accept (57) but not (56)? B&AK seem to assume (in the same footnote) that these are the same EHs, that is, that there is just one nominal null head able to convert a CP[that] into an NP. But, given that this null head is semantically empty, this means that such subjects cannot be semantically selected; in particular, they cannot be specified as [-animate] or [-sentient]. This is counterintuitive and hence should be carefully justified; B&AK do not provide such a justification. The alternative is that the five speakers (the majority) accepting (56) have another, semantically contentful, nominalizing EH. But then, given that this EH behaves differently from the EH that nominalizes question CPs (question CPs, but not declarative CPs, may be immediate objects of prepositions), this would be yet another (third) EH crucial in B&AK's attempt to eliminate unlike category coordination, one that is not constrained by the various properties that B&AK assume, not correlated with short answers, and so on. This would take us one step further down the slippery slope toward the possibility of postulating "category converting" EHs at will, that is, toward unfalsifiability.
Technical Problems: Complexity, Vagueness, and Inconsistency

In their analysis, B&AK assume that trees are built from left to right rather than from the bottom up. For example, there is a derivational stage of (35) where a partial tree for you can depend on is constructed, and another stage, corresponding to you can depend on my assistant, with only a partial representation of the coordinate structure (see B&AK 2020:26). While we find this part of the proposal unobjectionable and quite intuitive from the perspective of analysis (but not synthesis), B&AK make a number of nonstandard and vague assumptions about features, resulting in a rather complex analysis. First, features are divided into syntactic and semantic. The nominal EH at work in (35)-(36) may bear syntactic features (number, gender, etc.), but not semantic features (animacy, sentience, etc.). Second, when a coordinate structure is built, the features of particular conjuncts (it is not clear whether only semantic features or all features 19) are collected into a stack, rather than a set. At any stage of the derivation, the root of the coordinate structure contains the current stack. Third, the lack of semantic features on the EH does not mean that no features are added to the stack; rather, it means that a special element (feature?) "-" is added. Fourth, semantic feature checking "must take place as soon as it can" (B&AK 2020:27) and, if checking fails at this vague point, the derivation crashes.

Let us see how this analysis is intended to work. First, consider example (35) (You can depend on my assistant and that he will be on time). The preposition on (or perhaps the combination depend on; B&AK are not clear on this) syntactically selects an NP and has semantic features to check. According to B&AK (2020:26), semantic features are checked when the coordinate structure contains the first conjunct: at this point, the root of this structure contains the stack ⟨S⟩, 20 and the (verb plus) preposition checks its semantic features; see (58). When the second conjunct, headed by the semantically contentless empty N, is merged, the root contains the stack ⟨S, -⟩ (assuming that the top of the stack is on the right). At this stage, the preposition sees the lack of semantic features (-), but this is not an issue because its semantic features have already been checked; see (59). If the order of the conjuncts were different, that is, if the clausal NP were the first conjunct, then at the crucial point the stack would be ⟨-⟩, and checking would fail; see (60). The fact that the stack would change to ⟨-, S⟩ once the whole coordinate structure is built does not matter, because the derivation has already crashed; see (61). In (36) (That she got third place and her injury in the final round notwithstanding . . . ), when the left-to-right derivation reaches the postposition notwithstanding, the coordinate structure is fully built and its root contains the stack ⟨-, S⟩. As S is the top of the stack, the postposition can check its s-features. If the order of conjuncts were reversed, the stack at that point would be ⟨S, -⟩, and the derivation would crash. The sketch below simulates this checking procedure as we understand it.
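The following toy Python sketch is our reconstruction of the stack-based s-feature checking; the function name, the list encoding of the stack, and the check_when switch are ours, not B&AK's. "-" marks a conjunct headed by the semantically empty N, which contributes no s-features.

```python
def derive(selector_sfeature, conjuncts, check_when="first"):
    """Simulate merging conjuncts left to right into the root stack.

    check_when="first": a preposition checks s-features as soon as it
    can, i.e., once the first conjunct is merged (cf. (58)-(61)).
    check_when="last": a postposition such as notwithstanding checks
    only when the whole coordinate structure is built (cf. (36)).
    Returns True if checking succeeds, False if the derivation crashes.
    """
    stack = []
    for i, feats in enumerate(conjuncts):
        stack.append(feats)
        if check_when == "first" and i == 0:
            return stack[-1] == selector_sfeature
    return stack[-1] == selector_sfeature  # inspect the top of the stack

print(derive("S", ["S", "-"]))                     # (35): converges
print(derive("S", ["-", "S"]))                     # (60): crashes
print(derive("S", ["-", "S"], check_when="last"))  # (36): converges
print(derive("S", ["S", "-"], check_when="last"))  # reversed (36): crashes
```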
For this analysis to work, it is crucial which parts of the structure are built exactly when. For example, assuming that a single (i.e., connected) partial tree is present at each stage, 21 a skeletal coordinate structure is built for (35) at the stage of you can depend on my, when only a part of the first conjunct is constructed. Presumably, this is the earliest stage when the s-features of the selector may, and thus must, be checked. But are the semantic features of the first conjunct already in the root stack at that stage, even though the source of such features, the noun, is not present yet? It would seem that at that point the stack at the root should still be empty, so the derivation should crash. Unfortunately, the presentation of B&AK's analysis is too vague to decide this matter.

However, it is relatively clear that "s-feature checking at the earliest opportunity" leads to inconsistency, given that B&AK tie their analysis of coordination to short answers. Consider the dialogue in (62), with the short answer That he will be on time. On B&AK's analysis, (62) is acceptable because the selector is elided before PF, so the fact that its s-features have not been checked by then does not matter. But, according to B&AK's set of assumptions, unchecked s-features lead to a crash not at PF, but much earlier: when the selector has the first opportunity to check its s-features and fails to do so. Clearly, in the case of (62), this opportunity arises when the CP is merged into the tree, before ellipsis takes place. But, given that this CP is really an NP headed by a semantically contentless EH, that is, given that the stack of this CP is ⟨-⟩, the selector cannot check its s-features, so the derivation crashes. This means that B&AK's analysis does not account for subcategorization violations in short answers, despite their claims. On the other hand, if s-feature checking could wait until PF, there is no reason why (35) with the order of conjuncts reversed is unacceptable: s-feature checking could wait until the full coordinate structure is built, with the resulting stack ⟨-, S⟩. In short, there is a conflict between B&AK's analysis of coordination and their analysis of short answers: the two phenomena that they strive to account for in a uniform manner.

Non-ly Adverbs

B&AK extend the EH analysis to cases such as The Once and Future King, The Now and Future Caliphate, and A Soon and Distant Christmas (B&AK 2020:14-15, (44a), (48b), (47c)). The first example receives the analysis in (63). This analysis is based on the assumption that, like -ly adverbs (e.g., crucially), which are composed of an adjective (e.g., crucial) and the Adv head -ly, non-ly adverbs such as once also contain an adjective and an Adv head, but this head is semantically and phonetically empty (see the empty Adv in (63)), so it may be elided, as shown in (63). On this analysis, all non-ly adverbs should pattern with once, now, and soon. However, this prediction is false: as shown in (64)-(68), many non-ly adverbs behave differently. B&AK's analysis also predicts a strong correlation between coordination and displacement: (69) is supposed to show that non-ly adverbs, even though they apparently cannot occur immediately prenominally (we will refute this claim of B&AK forthwith), are acceptable as nominal modifiers when displaced (while -ly adverbs can never be understood as nominal modifiers, even when displaced).

(69) a. *I was expecting a soon visit.
b. How soon a visit are you expecting?
c. I wasn't expecting that soon a visit.
d. A visit so soon would be wonderful.
(B&AK 2020:31, (96))

However, this presumed correlation breaks down in the case of other non-ly adverbs, such as here and there, which cannot occur prenominally and cannot be coordinated with an adjective (see (64)-(65)), yet may occur postnominally, as in (70). B&AK's analysis is also based on the incorrect assumption that once, now, soon, and so on, cannot occur immediately prenominally. Attested counterexamples abound, such as (72). These empirical problems are fatal for the part of B&AK's analysis that is concerned with non-ly adverbs.

But their analysis is also based on a number of nonstandard assumptions, in addition to those concerning the nominal EH(s). The first such assumption is that adverbs such as once, now, and soon are prefabricated syntactic trees projected from Adv in the lexicon. Second, Adv is assumed to be active only within the lexical entries of non-ly adverbs. That is, it does not occur in the lexicon on its own; it is not active in syntax proper because, if it were, it could turn any adjective into an adverb so that any adjective could occur in strictly adverbial positions. This distinguishes Adv from the empty N, which operates only in syntax proper. Third, as shown in (63), ellipsis does not just make parts of the structure phonetically unrealized; instead, it nonmonotonically alters the structure already built, so that now the remaining constituent [Adj once], rather than [Adv [Adj once] Adv], is an immediate constituent of N′. Fourth, B&AK posit a special constraint, (76) (their (99)), which must be checked only at PF, as it is violated in (63) before ellipsis applies. 22

22 Note that this constraint would also be satisfied by the ellipsis of the first [N′ king] alone in (63), as the remaining N′ would then have the structure [N′ Adv], which would not violate (76). But then a similar analysis, with ellipsis of [N′ king] alone, would license any Adv constituents under N′, including -ly adverbs, so the analysis would incorrectly predict the grammaticality of, say, *the formerly and future king. A simple way to repair this aspect of B&AK's analysis is to reformulate (76) by saying that an Adv cannot be an immediate constituent of the N′ (regardless of the presence of other immediate constituents).

(76) *[N′ Adv N′]

Fifth, B&AK must assume that the ellipsis of [N′ king] may extend to the Adv head only because it is semantically and phonetically empty. Otherwise, the same analysis would be available for -ly adverbs, whose head is not phonetically empty. In brief, B&AK's analysis of constructions such as the once and future king is based on wrong empirical generalizations and makes wrong empirical predictions, besides making controversial and insufficiently justified assumptions. Hence, it does not provide independent evidence for an analysis of unlike category coordination in terms of EHs.

Empirical Arguments against Coordination of Unlike Categories?

In sections 2 and 3, we refuted B&AK's analysis on empirical, technical, and methodological grounds. In this section, we provide further arguments for what we consider to be the standard view, summarized in (1), the quotation from CGEL, and refute what may be interpreted as B&AK's arguments against this standard view. B&AK never actually refer to this standard view. Instead, they provide arguments against a superficially similar claim, namely, that it should be sufficient for a selecting element to permit a coordination of X and Y if it permits X and Y separately (B&AK 2020:9, 18-19).
This putative claim significantly differs from that of CGEL: it lacks the key requirement that X and Y have the same function. Without this requirement, the claim considered by B&AK is obviously false. For example, as shown in (77), while give may combine with a theme and a goal, these two arguments cannot be coordinated, even if they have the same categories, simply because they bear different functions. Nevertheless, some of the examples provided by B&AK are more subtle and might be interpreted as potential counterexamples to the CGEL position, so it is important to show that they do not contradict the view expressed in (1). The complete list of such counterexamples (see B&AK's (64)) is given in (78).

(78) b. She lost the match to an underdog.

Examples (78c-g) are of a different nature: as confirmed by general and valence dictionaries, they involve two different meanings of the verbs speak, agree, meet, fight, and hear, so an attempt to coordinate their arguments results in zeugma. For example, in the case of speak in (78c), an example important for B&AK as it is cited twice in their article (their (25) and (64b)), A Valency Dictionary of English distinguishes four general senses of this verb, with speak nonsense exemplifying sense A and speak with Sarah, sense C (Herbst et al. 2004:790-792); relevant senses of speak are also distinguished by online valence dictionaries such as VerbNet, FrameNet, and PropBank (all accessible at https://uvi.colorado.edu/uvi_search) and by general dictionaries (e.g., meanings 12-13 and 3 in https://www.dictionary.com/). 23 When meanings expressed by two homophonous predicates are sufficiently close, some speakers may assume the existence of just one predicate, so examples of the kind B&AK consider to be ungrammatical may be found in corpora. This is true of hear (see (79)), but also fight (see (80)).

23 It seems that some speakers of English have yet another, more idiomatic meaning of speak (not recorded in the dictionaries we consulted), which allows for both nonsense and a PP[with] argument. (i) Whereas it informs when we speak nonsense with someone we love, we can imply that speaking nonsense with someone we do not love has no point. (Google) In such cases, nonsense and PP[with] have different functions, so their coordination is ruled out for the same reason as in the case of (78a-b).

24 The relevant entry in Herbst et al. 2004:78 assumes that the NP and the CP have the same function, but also that the PP[in] realizes a different function. Corpus examples below contradict this latter view.

(84) There's a comedic element to Kelvin, but the audience also has to believe [[NP his sincerity] and [CP that he really loves Kacie]].

Example (81) involves the same kind of unlike category coordination as (78h), and yet it is fully acceptable. Similarly, (82) has the same structure as (78i), and it is spotless. The reversed order of PP and CP conjuncts is exemplified in (83). Finally, apart from the coordination of an NP and a PP or a CP and a PP, (84) illustrates the third possibility, that is, coordination of an NP and a CP.

B&AK (2020:19) admit that some of the examples in (78) may be acceptable to some speakers, but only with special intonation and interpretation suggesting ellipsis (i.e., the CR strategy). For example, (78b) may have the following structure (cf. B&AK 2020:19, (65)). We agree that, to the extent that (78b) may be made acceptable, it is an instance of ellipsis with special intonation, as shown in (85).
However, examples (81)-(84) are not amenable to such an interpretation: the intonation observed in (85) is absent there, and the input to ellipsis of the kind indicated in (85) is ungrammatical, as demonstrated in (81′). Such examples provide a new argument against the movement theory of control (Hornstein 1999), based on the fact that, on that theory, control into a single conjunct would violate Ross's (1967: sec. 4.2) Coordinate Structure Constraint (specifically, its "element constraint"; Grosu 1973), so all these examples should be ungrammatical. 25

Conclusion

While Bruening and Al Khalaf (2020) employ three different strategies to deal with what they consider to be only apparent unlike category coordination, their proposal still leaves many different cases of such coordination unaccounted for. These include predicates such as behave, reside, and last, which impose mainly semantic restrictions on their arguments, but also such run-of-the-mill verbs as believe, hope, teach, and so on. In the discussion of B&AK's analysis, we also pointed out a number of methodological, technical, and empirical problems, which we consider to be fatal for their proposal. We conclude that the Law of the Coordination of Likes, as it is sometimes called, is a myth. Coordination does not impose any such constraint; rather, all conjuncts must satisfy any external restrictions on the syntactic position they occupy. In some cases such restrictions are rigid, resulting in categorial sameness; in other cases they are underspecified or disjunctive, resulting in category "mismatches."
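The alternative view defended in this conclusion can likewise be made concrete. The toy model below is ours alone, and the lexical entries are hypothetical placeholders: each selector imposes an external restriction, possibly disjunctive, on its position, and a coordination is licensed just in case every conjunct satisfies it.

```python
# A toy model (ours, not CGEL's or B&AK's) of the view defended in the
# conclusion: a selecting head imposes an external restriction on its
# position, possibly disjunctive, and a coordination is licensed iff
# every conjunct satisfies that restriction.

RESTRICTIONS = {                    # hypothetical lexical entries
    "believe": {"NP", "CP"},        # disjunctive restriction
    "devour":  {"NP"},              # rigid restriction -> categorial sameness
}

def licensed(selector, conjunct_categories):
    allowed = RESTRICTIONS[selector]
    return all(cat in allowed for cat in conjunct_categories)

print(licensed("believe", ["NP", "CP"]))  # True: "mismatching" conjuncts OK
print(licensed("devour",  ["NP", "CP"]))  # False: CP fails the restriction
```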
2021-05-23T18:26:53.332Z
2021-07-16T00:00:00.000
{ "year": 2023, "sha1": "7fa6ada7f74b5cc13b7bf8b628360fde1ea9bdf2", "oa_license": "CCBY", "oa_url": "https://direct.mit.edu/ling/article-pdf/doi/10.1162/ling_a_00438/1987364/ling_a_00438.pdf", "oa_status": "HYBRID", "pdf_src": "MIT", "pdf_hash": "47bfe2e40f5f303e83b0ec1ceba68107d51a448e", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Sociology" ] }
220680280
pes2o/s2orc
v3-fos-license
Immunotherapy in older patients with non-small cell lung cancer: Young International Society of Geriatric Oncology position paper

Immunotherapy with checkpoint inhibitors against programmed cell death receptor (PD-1) and programmed cell death ligand (PD-L1) has been implemented in the treatment pathway of patients with non-small cell lung cancer (NSCLC) from locally advanced disease to the metastatic setting. This approach has resulted in improved survival and a more favourable toxicity profile when compared with chemotherapy. Following the successful introduction of single-agent immunotherapy, current clinical trials are focusing on combination treatments with chemotherapy or radiotherapy or even other immunotherapeutic agents. However, most of the data available from these trials are derived from, and therefore might be more applicable to, younger and fitter patients rather than older and often frail real-world lung cancer patients. This article provides a detailed review of these immunotherapy agents with a focus on the data available regarding older NSCLC patients and makes recommendations to fill evidence gaps in this patient population.

BACKGROUND

More than half of all patients with non-small cell lung cancer (NSCLC) are aged above 70 years, and almost 10% are 80 years or older. 1 Multi-organ age-related decline can alter drug pharmacokinetics and increase the risk of complications of locoregional and systemic treatments. 2,3 This risk is also influenced by the increasing burden of comorbidities and polypharmacy, which increase the risk of adverse events and also impact survival. 4,5 Moreover, quality of life (QoL) and functional endpoints are not well represented in clinical trials and should be considered at least as relevant as overall survival (OS). 6,7

Chronological age alone provides relatively little information regarding the tolerance of older patients to cancer treatments. A comprehensive geriatric assessment (CGA), a multidisciplinary diagnostic and treatment process, can fill this knowledge gap and inform treatment decisions by identifying medical, psychosocial and functional limitations of older adults and facilitating a co-ordinated plan to maximise overall health in the context of ageing. 8 In older cancer patients, the use of a CGA is associated with a number of benefits: 9,10 the prediction of complications and side effects from treatment; estimation of survival; aiding patients, clinicians and family members in treatment decisions; detection of problems neglected by routine history and physical examination in the initial evaluation and of new problems during follow-up care; improvement of mental health, well-being and pain control; and highlighting areas for potential intervention. Geriatric assessments have also been found to show prognostic value specifically in NSCLC patients. 11,12 Furthermore, models based upon geriatric assessments have been developed to predict the risk of chemotherapy toxicity in older adults and better inform decision making. 13,14 However, these assessments can be time-consuming and are not practical for all patients, and screening tools, such as the G8, the Flemish version of the Triage Risk Screening Tool and the Vulnerable Elders Survey-13, have therefore been validated to identify those requiring a CGA. 15 Appropriately selected older NSCLC patients have been shown to derive a similar survival benefit to that of their younger counterparts in the curative setting. 16,17
Nonetheless, the underrepresentation of older adults in the clinical trials defining the current standard of care limits the applicability of such results to the population seen in routine practice. 7,18 In the palliative setting where chemotherapy is indicated, decision-making should not be dictated by age alone. 19-21 Single-agent chemotherapy can improve OS in older patients without adversely impacting QoL compared with best supportive care alone; 22-24 data are controversial regarding the benefit of combination chemotherapy in this age group, particularly in those who are more frail. 21,25

Tyrosine kinase inhibitors (TKIs), such as those targeting the epidermal growth factor receptor (EGFR), anaplastic lymphoma kinase (ALK) or ROS-1, are the treatment of choice for oncogene-addicted NSCLC patients, on the basis of the superiority of these agents in survival outcomes and their mild toxicity profile. Although TKIs are often a good match for older patients, oncogene-addicted patients constitute only a small subset of NSCLC, and older patients might still be at a higher risk of toxicity. Immune checkpoint inhibitors, designed to revitalise antitumour immune responses, have revolutionised the management of a number of malignancies, including NSCLC; this type of immunotherapy also represents a potentially appropriate treatment option for older patients. Below, we outline the mechanism of action of immunotherapy and its adverse events before reviewing the data supporting the use of immunotherapy, alone or in combination, in patients with NSCLC, with a particular focus on older patients, in an effort to address the issue of whether age influences the efficacy and toxicity of this approach. We also discuss the potential impact of the ageing process on the immune system and, hence, on the efficacy of immunotherapy.

Mechanism of action of immune checkpoint inhibitors

Strict regulation of the immune system is crucial for allowing the co-ordinated clearance of infected or malignant cells while sparing normal cells. In addition, mechanisms to downregulate the immune response are important to prevent immune overreactivity once a pathogenic insult has been cleared and in cases where cells different from self are encountered in a physiological setting, such as in gamete formation or in the developing foetus. 26 Evading immunosurveillance is one of the hallmarks of cancer: cancer cells hijack the key regulatory mechanisms of the immune system, such as checkpoint pathways, to enable their survival. 27 Immune checkpoint pathways operate during homoeostasis to control the duration and extent of immune responses and prevent autoimmunity, but tumour cells have developed the ability to activate inhibitory checkpoints on T cells to avoid being recognised and destroyed. The importance of inhibitory checkpoint signals on T cells in immune evasion led to the development of two classes of inhibitory monoclonal antibody, which are now standard treatment options for a number of malignancies including NSCLC: those that block the interaction between cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) on the T cell and B7 on the antigen-presenting or tumour cell, an interaction that inhibits T-cell priming and activation; and those that block the interaction between programmed death receptor-ligand 1 (PD-L1) on the tumour and programmed death receptor 1 (PD-1) on the T cell, an interaction that inhibits recognition of the tumour cells by T cells and subsequent tumour cell lysis. 28
Ongoing research is investigating the role of multiple targets in thoracic malignancies, including other stimulatory/inhibitory receptors involved in T-cell checkpoints and the use of novel agents in combination with currently licensed agents. 26,29

Treatment-related adverse events

Immunotherapy is associated with a unique spectrum of treatment-related adverse events (TRAEs), also known as immune-related adverse events. These include dermatological, gastrointestinal, hepatic, endocrine and other less common inflammatory events arising from general immunologic enhancement. 30 Older patients often have an increased risk of TRAEs with cancer treatments in general due to a decreased organ reserve, comorbidities and polypharmacy. In the case of immunotherapy, an aged immune system may, in principle, play an additional important role in determining the risk of TRAEs.

IMMUNOSENESCENCE

Older age correlates with a decline in organ function, 31 including the composition and function of the immune system: its cells, the microenvironment in which they operate and the cytokines modulating their proliferation and activity. 32 This decline might, in principle, result in an altered efficacy and safety profile of immunotherapy agents in the older cancer patient. The remodelling of the immune system associated with the ageing process is called immunosenescence 32 and involves a number of changes that can be associated with a decrease in immune surveillance in both the adaptive and the innate immune system. In older patients, this reduced surveillance manifests clinically as an increased risk of developing viral and bacterial infections and reactivation of latent infections, such as varicella zoster virus and cytomegalovirus (CMV). 33,34 Chemotaxis, phagocytosis and cytotoxicity are impaired, as are the mechanisms of antigen presentation by macrophages and dendritic cells. 35 The responsiveness of T cells to pathogens decreases with age and involves a reduced ability to move to lymph nodes, lower proliferation in response to antigens and cytokines and reduced cytokine release. These changes result in the loss of the costimulatory protein CD28, particularly in CD8+ lymphocytes. 36 CD8+CD28- lymphocytes downregulate responses (suppressor effect) via CD4+ cells and dendritic cells, and are often clonally expanded, thereby reducing the numbers of both naïve and central memory T cells. The impact of recurrent infections, in particular CMV infections, on naïve T cells is deemed to be a key contributor to these changes. 37 Interestingly, CD8+CD28- lymphocytes gain other functions, showing increased cytotoxicity mediated by enzymes usually found in natural killer cells. 38

Immunotherapy toxicity may occur as a process of autoimmunity. Although higher levels of autoantibodies are seen in older patients, it is still unclear whether this change translates into an increased risk of side effects from immunotherapy agents. 39 Additionally, it has been suggested that older adults also have higher levels of myeloid-derived suppressor cells and regulatory T (Treg) cells, 40,41 which are key mediators of immune evasion and resistance to checkpoint inhibitors. Older age is associated with higher levels of systemic inflammation, with increased levels of pro-inflammatory cytokines such as interleukin (IL)-6 and acute-phase proteins such as C-reactive protein (CRP), a phenomenon often called 'inflammaging'. 42
While high levels of IL-6 in the tumour microenvironment are associated with resistance to checkpoint inhibitors, 26,43 more research is needed on the implications of inflammaging for the outcomes of immunotherapy. 32 Finally, age also influences the interaction between the microbiome and the immune system. Animal models and clinical series suggest that changes in the microbiome influence the efficacy of checkpoint inhibition; 44 consequently, the decline in microbiota diversity associated with ageing might negatively influence immune checkpoint inhibitors. 45

SINGLE-AGENT IMMUNOTHERAPY

As immunotherapy with immune checkpoint inhibitors started revolutionising the treatment of NSCLC, the first step was the development of monotherapy agents.

Pembrolizumab

This anti-PD-1 monoclonal antibody was the first checkpoint inhibitor agent to be investigated for the management of patients with advanced NSCLC. The Phase 3, randomised KEYNOTE-010 trial investigated the use of pembrolizumab versus docetaxel in pretreated patients with PD-L1 expression on at least 1% of tumour cells. 46 The median OS was 10.4 versus 8.5 months, favouring pembrolizumab (hazard ratio [HR] 0.71, 95% confidence interval [CI] 0.58-0.88; P = 0.0008), and higher levels of PD-L1 expression on tumour cells were associated with better outcomes (HR 0.54, 95% CI 0.38-0.77; P = 0.0002 in the PD-L1 > 50% subgroup). In this setting, the improvement in median OS was 13% smaller for patients aged ≥65 years (Table 1), but there was only a small proportion of patients in that upper age cohort, which limits any conclusions. No age-specific data on toxicity are available from these three trials, but the overall incidence of TRAEs of grades 3-5 varied between 13 and 31% with pembrolizumab versus 35 and 53% with chemotherapy. 45,47,48 A 2019 pooled analysis of the abovementioned Phase 3 trials focused on the efficacy and safety in patients aged 75 years or above and confirmed an OS benefit of pembrolizumab (tumour PD-L1 expression of either ≥1 or ≥50%) versus chemotherapy, with a favourable toxicity profile, similar to that of their younger counterparts. 49,50

Nivolumab

The anti-PD-1 monoclonal antibody nivolumab was first evaluated in two Phase 3 trials in patients who had previously been treated with platinum doublet chemotherapy. The CHECKMATE-017 and CHECKMATE-057 trials randomised patients regardless of PD-L1 expression to nivolumab versus docetaxel for squamous and non-squamous NSCLC subtypes, respectively. 51,52 Several pooled analyses of both trials with increasing follow-up periods have been published; the 5-year pooled analysis represents the longest survival follow-up with immunotherapy for randomised Phase 3 trials in patients with advanced NSCLC. 53-56 This latest analysis confirmed the long-term OS benefit of nivolumab (HR 0.68, 95% CI 0.59-0.78), with an OS rate at 5 years of 13% versus 3% with docetaxel. 55 In the subgroup analysis, the benefit of nivolumab for patients aged 75 years or above was not clearly established considering the small number of patients within this age group in both trials (Table 1). The use of nivolumab as monotherapy had an incidence of TRAEs of grade 3-5 of 10% in the nivolumab pooled analysis compared with 55% for docetaxel. In the CHECKMATE-026 trial, nivolumab was compared with the standard-of-care first-line platinum-based chemotherapy for patients with PD-L1 expression ≥1%. 57
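As an aside for readers who prefer absolute effect sizes, the 5-year OS rates quoted above for the nivolumab pooled analysis (13% versus 3% with docetaxel) can be converted into an absolute risk reduction and a number needed to treat. The arithmetic below is illustrative only; neither figure is reported in the trials themselves.

```python
# Illustrative arithmetic (not from the paper): converting the quoted
# 5-year OS rates into an absolute risk reduction (ARR) and a number
# needed to treat (NNT). Rounding convention is an assumption of this sketch.
os_nivolumab, os_docetaxel = 0.13, 0.03
arr = os_nivolumab - os_docetaxel           # absolute difference in 5-year OS
nnt = 1 / arr                               # patients treated per extra 5-year survivor
print(f"ARR = {arr:.0%}, NNT = {nnt:.0f}")  # ARR = 10%, NNT = 10
```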
This trial was negative regarding progression-free survival (PFS), which was its primary endpoint. The Phase 2 CHECKMATE-171 trial evaluated the safety of nivolumab in a European population of pretreated patients with squamous NSCLC 58 and reported an incidence of grade 3-4 TRAEs for those aged ≥70 years of 14%, compared with 12% across the study population. Similarly, the Phase 3b/4 CHECKMATE-153 trial assessed the safety profile of nivolumab in North America and reported an incidence of grade 3-4 TRAEs of 12% for those aged ≥70 years compared with 11% for younger patients. 59

Atezolizumab

This anti-PD-L1 monoclonal antibody was explored as monotherapy versus docetaxel in the Phase 3 OAK trial in pretreated NSCLC patients regardless of their PD-L1 expression. 60 The median OS was 13.8 months on atezolizumab compared with 9.6 months on docetaxel (HR 0.73, 95% CI 0.62-0.87; P = 0.0003). In the subgroup analysis, older patients (≥65 years) had an additional 14% reduction in the risk of death compared with younger patients (Table 1). No age-specific safety data are available, although the incidence of grade 3-5 TRAEs was 15% for atezolizumab versus 43% with docetaxel. Moreover, the use of atezolizumab delayed the time to deterioration in physical function in the study population (HR 0.75, 95% CI 0.58-0.98). 61 Considering that the lung cancer population is predominantly older, with 44% of cases in the UK occurring in patients aged 75 and older, a benefit on physical function is of great clinical significance. 62 Data on the use of single-agent atezolizumab in the first-line setting from the IMpower110 (NCT02409342) and IMpower111 (NCT02409355) trials are awaited.

Durvalumab

The Phase 3 MYSTIC trial investigated durvalumab versus platinum-based chemotherapy versus the combination of durvalumab and tremelimumab, a monoclonal anti-CTLA-4 antibody, in the first-line setting. 63 In the subgroup of patients with PD-L1 expression ≥25% (the primary analysis subgroup), the median OS for durvalumab versus chemotherapy was 16.3 versus 12.9 months, respectively (HR 0.76, 97.5% CI 0.56-1.02; P = 0.036); although statistical significance was not achieved, this constitutes a clinically meaningful improvement in OS for durvalumab versus chemotherapy. A more meaningful benefit for older patients (65 years or older) was observed, with an HR of 0.66 (97.5% CI 0.45-0.95) favouring durvalumab over chemotherapy. 64 When comparing durvalumab plus tremelimumab with chemotherapy, the median OS was 11.9 versus 12.9 months (HR 0.85, 98.8% CI 0.61-1.17; P = 0.202), with no benefit in any age group. 63,64 With regard to safety, the incidence of TRAEs of grades 3-5 was 15% with durvalumab versus 35% with chemotherapy. No age-group analyses of TRAEs were carried out.

CHEMOIMMUNOTHERAPY

Chemotherapy coadministered with immunotherapy is a more recent development in the management of patients with advanced NSCLC. A number of reasons exist for potentially better outcomes on a combination. Cytotoxic cell death might create additional antigens that are recognised by the immune system. 65 In addition, chemotherapeutic agents can reduce the number of suppressive cells, such as myeloid-derived suppressor cells and Treg cells, that would otherwise limit the efficacy of the immunotherapeutic agents. 66 Furthermore, by reducing the tumour bulk, cytotoxic agents allow T-lymphocytes to infiltrate the tumour and permit the recovery of an exhausted immune response. 67
Pembrolizumab combinations

In KEYNOTE-189, 68 a Phase 3 double-blind, randomised placebo-controlled trial of patients with metastatic non-squamous NSCLC and any level of tumour PD-L1 expression, first-line pembrolizumab plus platinum-based chemotherapy (cisplatin or carboplatin) and pemetrexed was superior to platinum-based chemotherapy and pemetrexed in terms of OS (overall HR 0.49, 95% CI 0.38-0.64) and PFS (overall HR 0.52, 95% CI 0.43-0.64). Median OS in the chemoimmunotherapy arm was 22.0 months, versus 10.7 months for the standard chemotherapy arm (HR 0.56, 95% CI 0.45-0.70; P < 0.01). 69 In subgroup analyses by age (Table 2), the OS benefit extended to patients of 65 years and over (HR 0.64, 95% CI 0.43-0.95). 68 No subgroup analyses by age were conducted for PFS or for any toxicity outcomes.

In IMpower150, 74 an open-label Phase 3 randomised trial of patients with metastatic non-squamous NSCLC and any level of tumour cell PD-L1 expression (including patients with EGFR or ALK genetic alterations), a combination of carboplatin, paclitaxel, bevacizumab plus atezolizumab was superior to carboplatin, paclitaxel and bevacizumab with respect to OS (overall HR 0.78, 95% CI 0.64-0.96) and PFS (overall HR 0.62, 95% CI 0.52-0.74). In a comparison of age groups, PFS favoured the chemoimmunotherapy arm in patients below the age of 65 years (HR 0.65) and in those aged 65-74 years (HR 0.52). Among patients aged 75-84 years (9% of patients), the HR for PFS was 0.78 but was not statistically significant. The results comparing an additional third arm consisting of carboplatin and paclitaxel plus atezolizumab have not yet been reported. Overall, grade ≥3 TRAEs were reported in 58.5% of patients in the chemoimmunotherapy arm and 50% in the chemotherapy arm.

In the IMpower130 trial, atezolizumab was also studied in combination with carboplatin and nab-paclitaxel compared with chemotherapy alone in patients with metastatic non-squamous NSCLC. 75 PFS (HR 0.65, 95% CI 0.54-0.77) and OS (HR 0.80, 95% CI 0.65-0.99) were improved in the chemoimmunotherapy arm in the intention-to-treat population. When analysed by age group, the PFS benefit of the chemoimmunotherapy arm remained and was similar among younger and older patients (age <65 years PFS HR 0.64, 95% CI 0.50-0.82; age >65 years PFS HR 0.64, 95% CI 0.50-0.82). By contrast, the OS benefit of the chemoimmunotherapy arm was no longer statistically significant when stratified by age group (age <65 years OS HR 0.79, 95% CI 0.58-1.08; age >65 years OS HR 0.78, 95% CI 0.58-1.05). Grade ≥3 TRAEs occurred in 75% of patients in the chemoimmunotherapy arm and 60% in the chemotherapy arm.

Atezolizumab was also studied in patients with metastatic non-squamous NSCLC in combination with carboplatin or cisplatin plus pemetrexed versus chemotherapy alone in the IMpower132 trial. 76 PFS (HR 0.60, 95% CI 0.49-0.72) was improved in the chemoimmunotherapy arm compared with the chemotherapy arm, and this improvement was confirmed in age-group analyses as well. For patients with metastatic squamous NSCLC, the open-label Phase 3 randomised trial IMpower131 77 demonstrated a PFS benefit for first-line atezolizumab plus carboplatin and nab-paclitaxel versus carboplatin and nab-paclitaxel alone (HR 0.71, 95% CI 0.60-0.85).
The final OS data showed no benefit for the intent-to-treat population (HR 0.88, 95% CI 0.73-1.05; P = 0.158), but on secondary analysis of those with high PD-L1 expression (≥50% PD-L1 expression on tumour cells or ≥10% expression on tumour-infiltrating immune cells) there was an apparent benefit in OS (HR 0.48, 95% CI 0.29-0.81). 78 In subgroup analyses by age, available only for PFS, a benefit in all three age groups was demonstrated (age <65 years HR 0.77, 95% CI 0.61-0.99; age 65-74 years HR 0.66, 95% CI 0.51-0.87; age 75-84 years HR 0.51, 95% CI 0.30-0.84). TRAEs of grade 3 and above occurred in 69% of patients in the chemoimmunotherapy arm compared with 58% in the chemotherapy arm.

COMBINATIONS OF IMMUNOTHERAPY

Combining different immunotherapy agents that target different checkpoints in T cells is the most recent development in the field of advanced NSCLC. CHECKMATE-227 is a complex randomised Phase 3 trial divided into two parts for the first-line treatment of patients with advanced NSCLC, primarily exploring the combination of nivolumab plus ipilimumab versus standard platinum-based chemotherapy. The first part, which has been published, had two independent primary endpoints: PFS with nivolumab plus ipilimumab versus chemotherapy in patients with a high tumour mutational burden (≥10 mutations per megabase); 79 and OS with nivolumab plus ipilimumab versus chemotherapy in patients with a tumour PD-L1 expression level of 1% or more. 80 For other hierarchical endpoints, the trial included a group with PD-L1 expression below 1% and also treatment arms with nivolumab or nivolumab plus chemotherapy. Focusing on the published data for the primary OS endpoint in the case of PD-L1 expression levels ≥1%, the nivolumab plus ipilimumab combination was superior to the chemotherapy arm (17.1 versus 14.9 months; HR 0.79, 95% CI 0.65-0.96; P = 0.007). In the subgroup analysis, the benefit for the group aged 65-74 years was not clear when compared with younger patients, with an HR of 0.91 (0.70-1.19) versus an HR of 0.70 (0.55-0.89), respectively. Similarly, the group aged 75 years or more did not seem to benefit, although this was a small group comprising only 81 patients. With regard to toxicity, grade ≥3 TRAEs were reported in 32.8% of patients in the nivolumab plus ipilimumab arm and 36% in the chemotherapy arm, with more serious adverse events occurring in the immunotherapy arm (24.5% versus 13.9%).

CHECKMATE-817 is a Phase 3b/4 trial primarily exploring the safety (grade 3-5 TRAEs) of a flat dose of nivolumab combined with ipilimumab (standard weight-based dose) in the first-line treatment of advanced NSCLC. The trial included two cohorts: a standard cohort of 391 patients with a performance status (PS) of 0-1, and a smaller 'special populations' cohort of 198 patients comprising those with a PS of 2, or of 0-1 plus other factors that might have excluded them from other clinical studies of immunotherapy agents (asymptomatic untreated brain metastasis, hepatic impairment, renal impairment or human immunodeficiency virus). 81,82
In the main cohort (those with PS 0-1), 15% were aged 75 years or above, whilst 22% of the PS 2 group within the 'special populations' cohort (comprising 139 patients) were aged 75 years or above. The incidence of grade 3-4 TRAEs was 35% and 26%, respectively, favouring the older, more frail group, with no difference in treatment-related death. Moreover, the overall safety data for nivolumab flat-dosing were identical to those for the weight-based dosing modality.

RADIOIMMUNOTHERAPY

Between 30 and 50% of patients diagnosed with NSCLC receive radiotherapy in the early- or late-stage disease setting and, as such, radiotherapy is a valuable treatment modality. Radiotherapy is known to induce immune and inflammatory changes that can prime the tumour microenvironment to initiate an immune response. Moreover, this initial immune priming can be augmented systemically by combining radiotherapy with immunotherapies to elicit an abscopal response. Radiation-induced immunogenic cell death induces the release of tumour antigens, dendritic cell maturation, augmentation of T-cell priming, upregulation of MHC-I and PD-L1 expression, and upregulation of the levels of cytokines and chemokines. 83-87 Consequently, interest in combining radiotherapy with immunotherapy agents to improve antitumour immunity and responses has increased.

Clinical evidence on the combination of thoracic radiotherapy and immunotherapy for patients with NSCLC is lacking. However, some data are available on their sequential use. 88 A secondary analysis of KEYNOTE-001, a Phase 1 study that included patients with metastatic NSCLC, showed that those who previously received radiotherapy had a significantly longer PFS and OS than non-irradiated patients when subsequently treated with pembrolizumab. 88 However, these patients also experienced a greater degree of pulmonary toxicity compared with non-irradiated patients. Although increasing age was associated with improved PFS in this model on univariate analysis, it no longer reached significance on multivariate analysis, which may be linked with the presence of clinical confounding factors.

The efficacy of durvalumab in the setting of unresectable stage III NSCLC following concurrent chemoradiotherapy was explored in the PACIFIC trial. 89 Durvalumab significantly prolonged the median OS compared with placebo (HR 0.68, 99.7% CI 0.47-0.997; P = 0.0025). Nevertheless, the OS benefit was less clear for older patients (Table 2). In this trial, the incidence of pneumonitis of any grade was higher in the durvalumab arm than in the placebo arm (34% versus 25%), although the rates of grade 3-4 pneumonitis were similar (4% versus 3%). An exploratory analysis investigated the efficacy of durvalumab in patients who developed pneumonitis, and the survival outcomes were similar to those of the intent-to-treat population. 90 Although radiation pneumonitis becomes more common with age, 91 this does not appear to be the case for immunotherapy pneumonitis. 92

DISCUSSION

Immune checkpoint inhibition therapy targeting PD-1/PD-L1 has changed the treatment landscape for advanced NSCLC. Although, due to the ageing immune system (immunosenescence), there is a concern that older patients might be at risk of lower efficacy and/or increased adverse effects with these agents, limited subgroup analyses from pivotal clinical trials indicate that older patients might gain the same benefit from immunotherapy as younger patients, with an acceptable toxicity profile.
However, methodologically and conceptually, results from the pivotal Phase 3 trials cannot yet be generalised to older patient populations. These trials only included patients with a PS of 0-1; consequently, the evidence in vulnerable/frail patients remains very limited. The median age at trial enrolment was about 10 years younger than the median age of NSCLC diagnosis in Western countries. The subgroup analyses on older patients were conducted post hoc, and the trials were not powered for age-group comparisons. Data on patients >75 years old are lacking, and any available data in this group are conflicting, potentially reflecting the small sample size of these elderly cohorts, or poorer tolerance of therapy and additional comorbidities within this population. Moreover, the pivotal Phase 3 trials did not include a CGA or geriatric screening at baseline as suggested by current guidelines. 93 Prospective studies in a real-world population that incorporate geriatric assessments into their design are therefore required, as is the case in the ELDERS study. This observational cohort study of 140 patients (with NSCLC or melanoma) was designed with the primary aim of assessing safety, but also to investigate QoL during immunotherapy in younger and older patients (cut-off at 70 years). The PS value was not part of the selection criteria, and the study integrated geriatric screening and assessments for subsequent exploratory analysis. An interim analysis of the first 32 patients with NSCLC reported no significant differences in toxicity between the age groups in a real-world population where 30% of patients were PS 2 and 46% of the older patients failed the geriatric screening (using the G8 tool). 94 The final results of this study are expected in late 2020.

It is still not entirely clear whether the poorer PS and increased incidence of comorbidities that can be associated with older age predict more toxicity and/or less efficacy with immunotherapy. In two large cohort studies of nivolumab, patients with PS 2 had similar adverse events compared with PS 0-1 but poorer outcomes in terms of OS. 95,96 However, these results were not reproduced in the PePS2 study assessing pembrolizumab in a PS 2 population, in which the response and OS appeared similar to previous reports in patients with PS 0-1. 97 Additionally, the CHECKMATE-171 trial, which included elderly and PS 2 patients, reported no differences in terms of toxicity and OS between the overall population and the elderly subgroup. 58 Real-world data derived from the Italian expanded access programme (EAP) for nivolumab in pretreated patients also suggested a similar OS across age groups and, although toxicities were not analysed separately, their overall incidence was similar to data derived from randomised trials. 98-100 Further data from an Italian multicentre retrospective study of patients >75 years old treated with anti-PD-1 agents (either nivolumab or pembrolizumab) were also consistent with previous registration trials in terms of toxicity profile and efficacy. 101 Therefore, there is currently no need to exclude patients with reasonable performance status from treatment with immunotherapy on the basis of age alone.

Unlike chemotherapy, the duration of treatment with immunotherapy is long, and patients can receive treatment for many months or even years. However, the impact of long-term treatment on patient fitness and comorbidities is unclear.
In patients experiencing immune-related adverse effects, steroids are recommended, often at high doses and for prolonged courses. 102 The impact on older patients of managing these effects is not clear but might be as problematical as immunotherapy treatment itself, as long-term steroid use can influence muscle bulk, bone strength, glucose tolerance and immune function.

The combination of immunotherapy and chemotherapy is now integrated as a standard of care in the first-line treatment setting of NSCLC. 103 Although such combinations result in improved response rates, PFS and OS (regardless of PD-L1 status) compared with chemotherapy only, the toxicity rates are higher. Although sequential chemotherapy and immunotherapy might therefore seem a better option for older patients to reduce toxicity, appropriately selected older patients might benefit in some cases from combination strategies. It is therefore imperative to adequately assess older patients within this treatment scenario for frailty or pre-frailty status in order to avoid over- or undertreatment and to determine which patients will be able to tolerate the combination. Older patients are more prone to experience chemotherapy toxicity and are more likely to discontinue chemotherapy as a result; 104 determining the aetiology of a given toxicity and managing it appropriately can be even more challenging when combining chemotherapy and immunotherapy. The major concern is that toxicities cause a functional decline that results in a loss of independence and a poorer QoL.

In conclusion, there is an essential need to generate data addressing the use of immunotherapy in the older population as a whole, including in vulnerable and pre-frail patients. 105 These data should include functional measures of frailty such as the G8, with a formal CGA in patients identified as vulnerable. Endpoints should not only be based around survival or response to treatment shown by imaging but should also include patient-reported outcomes such as maintaining QoL, which might be a more relevant goal in older patients. In addition, a further consideration is the potential impact of immunosenescence on immunotherapy. To this end, various biological markers and tests for immunosenescence could be incorporated into clinical trials to help determine whether the changes in the immune system associated with ageing have any impact on treatment efficacy and/or toxicity. Such assays include an assessment of T-cell phenotype, including the presence of circulating Treg cells and CD8+/CD28- T cells and the response to antigen challenge using ELISpot; the presence of autoantibodies; the presence of inflammatory markers, including the neutrophil to lymphocyte ratio and levels of CRP and IL-6; and an assessment of the stool microbiome for Firmicutes and Bacteroides species. Conducting such trials, however, can be difficult due to the heterogeneity of this population and the complex clinical variables. In addition, pharmaceutical companies might be less interested in focusing their studies on older patients or those with comorbidities, where higher rates of adverse events are often encountered. In this regard, a good methodological compromise might be to design Phase 2 studies focusing on such patient populations, or to include specific preplanned subgroup analyses on older patients in pivotal randomised trials.
Additionally, functional endpoints and patient-reported outcomes for older individuals could be included as exploratory or secondary endpoints in registration trials. Lastly, real-world data are an invaluable, readily available resource and should be collected and shared to help inform decision making when discussing treatment in these patient groups.
2020-07-22T15:01:46.755Z
2020-07-22T00:00:00.000
{ "year": 2020, "sha1": "452043d5d9858796a026f63f1d2da10f791bf418", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41416-020-0986-4", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b854261b994583742173a987282e3a5007ed5e19", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
237545895
pes2o/s2orc
v3-fos-license
Effects of L-carnitine combined with pancreatic kininogenase on thioredoxin 2, thioredoxin reductase 1, and sperm quality in patients with oligoasthenospermia

Background
To study the effects of L-carnitine (LC) combined with pancreatic kininogenase on thioredoxin 2 (Trx2), thioredoxin reductase 1 (TrxR1), and sperm quality in patients with oligoasthenospermia.

Methods
A total of 300 male infertility patients with oligoasthenospermia who were treated in the andrology clinic of our hospital from December 2019 to December 2020 were randomly divided into an LC group and a combined treatment group, and 50 males with normal semen were selected as a control group. The computer-assisted semen analysis system (CASA) was used to detect the total number, vitality, and forward motility of the sperm before and after treatment, and sperm morphology was detected by the Diff-Quik method of the sperm staining kit. The sperm chromatin dispersion (SCD) method was used to detect sperm DNA fragments, and Western blot was used to detect the protein expression of Trx2 and TrxR1.

Results
There were no significant differences between the two treatment groups in sperm density, motility rate, forward motile sperm rate, or DNA fragmentation rate before treatment (P>0.05). However, after 1 month of treatment, the sperm density, motility rate, and forward motile sperm rate were all higher than before treatment (P<0.05), while the DNA fragmentation rate was lower than before treatment. At the same time, each semen index in the combination group was higher than that in the LC group (P<0.05), and the total effective rate in the combination group was significantly higher than in the LC group (P<0.01). Before treatment, the expression of Trx2 protein in oligoasthenospermia patients was significantly increased relative to men with normal semen (P<0.05), while the expression of TrxR1 protein was significantly decreased (P<0.05). After 3 months of treatment, the expression of Trx2 protein was significantly decreased (P<0.05), while the expression of TrxR1 protein was significantly increased (P<0.05).

Conclusions
The results suggest Trx2 and TrxR1 may be candidate protein markers for oligoasthenospermia. LC combined with pancreatic kininogenase in the treatment of male oligoasthenospermia can effectively promote sperm maturation, enhance sperm motility, and improve semen quality, and thus has high application value.

Introduction
Infertility seriously affects human reproductive health and family harmony, and brings a heavy burden to society. The incidence of infertility is increasing year by year, and among married couples it is as high as 10-15%, of which male factors account for 50% (1,2). Epidemiological studies suggest oligoasthenospermia is an important cause of male infertility. According to the 5th edition of the WHO semen-parameter standards, oligozoospermia is indicated when sperm density is less than 15×10⁶/mL; asthenospermia is indicated when the proportion of sperm with fast forward motility is less than 32%; and when both sperm density and fast forward motility are lower than normal, oligoasthenospermia is indicated (3). In recent years, related studies have found that high concentrations of reactive oxygen species (ROS), produced by inflammatory cells, germ cells, and abnormal sperm cells during growth and metabolism, often damage cellular DNA, lipids, and proteins and reduce sperm concentration and vitality, which is related to the occurrence of oligoasthenospermia (4).
Continuously high levels of ROS can cause oxidative stress on cells and mitochondrial dysfunction (5). L-carnitine (LC) is the only carrier for the transport of long-chain fatty acids into the mitochondrial inner membrane for β-oxidation, promoting oxidation reactions and providing energy for cells (6). LC also regulates testicular supporting cells, removes oxygen free radicals in seminal plasma, reduces sperm aggregation, and reduces spermatogenic cell apoptosis (6,7). At present, many hospitals regard LC as the preferred drug for the treatment of male infertility syndrome (8). The human body has a high tolerance to LC, and most patients can achieve stable absorption of the drug by direct oral administration. Pancreatic kininogenase is a proteolytic enzyme with its highest content in the pancreas. In the human genitalia, pancreatic kininogenase can improve sperm quality, balance the content of components in the epididymis, and eliminate bacteria, all of which facilitate normal erectile function and regulate male sexual function (9). This study aimed to observe and analyze the effects of LC, pancreatic kininogenase, and their combined application on thioredoxin 2 (Trx2), thioredoxin reductase 1 (TrxR1), and sperm quality in oligoasthenospermia patients through a clinical randomized controlled study. Evaluating the efficacy and safety of the combined application of LC and pancreatic kininogenase in the treatment of oligoasthenospermia patients will establish a clinical reference for drug therapy in male patients with oligoasthenospermia. We present the following article in accordance with the STROBE reporting checklist (available at https://dx.doi.org/10.21037/tau-21-680).

Research subjects

A total of 300 male infertility patients with oligoasthenospermia who were admitted to the andrology clinic of our hospital from December 2019 to December 2020 were included in this study. Inclusion criteria: (I) the couple lived together for more than 1 year, had a normal sex life without contraception, and female fertility test results were normal; (II) patients stopped all spermatogenic drug treatments for ≥12 weeks before the start of the study; and (III) following the fifth edition of the WHO standard, the male was diagnosed with oligozoospermia (sperm concentration <15×10⁶/mL) or asthenospermia (forward motility <32%) when examined more than twice (these cut-offs are restated in the code sketch below). Exclusion criteria: (I) patients with significant systemic disease, endocrine disease, reproductive tract infection, or varicocele; and (II) patients with a history of cryptorchidism, orchitis, mumps in late adolescence, anti-sperm antibodies, or other known etiologies of male infertility.

All procedures performed in this study involving human participants were in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by Ma'anshan Maternal and Child Health Hospital (No. 2019084). Each participant was informed in detail about the content of the study and their rights, and participated voluntarily. All enrolled patients signed an informed consent form.

Patients were randomly divided into an LC group and a combined treatment group, each composed of 150 patients. Patients in the LC group ranged from 22 to 34 years of age, with an average age of 27.85±5.18 years, while those in the combined treatment group ranged from 20 to 35 years, with an average age of 28.49±5.46 years.
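The diagnostic cut-offs used in inclusion criterion (III), quoted from the WHO 5th-edition standard, can be stated compactly as a small classifier. This is a minimal sketch: the function name, argument units, and return labels are ours, not part of the study protocol.

```python
# A minimal sketch (not part of the study protocol) of the WHO 5th-edition
# cut-offs quoted in criterion (III): density < 15 (x10^6/mL) ->
# oligozoospermia; forward (progressive) motility < 32% -> asthenospermia;
# both -> oligoasthenospermia. Names and labels are illustrative.

def classify_semen(density_millions_per_ml, forward_motility_pct):
    oligo = density_millions_per_ml < 15
    astheno = forward_motility_pct < 32
    if oligo and astheno:
        return "oligoasthenospermia"
    if oligo:
        return "oligozoospermia"
    if astheno:
        return "asthenospermia"
    return "within the quoted reference limits"

print(classify_semen(10, 25))  # -> oligoasthenospermia
print(classify_semen(20, 45))  # -> within the quoted reference limits
```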
A further 50 males with normal semen were selected as a control group, from which men with reproductive tract infections, sexually transmitted diseases, blocked vas deferens, tumors, trauma surgery, and diabetes were excluded.

Treatment methods

The LC group was given LC oral solution (10 mL: 1 g), 10 mL/time, three times per day, taken with meals (Shenyang First Pharmaceutical Co., Ltd., Northeast Pharmaceutical Group). In the combined treatment group, a pancreatic kininogenase enteric-coated tablet was added, taken as 120 U, three times per day.

Test indexes

Semen samples were collected after 3-7 days of abstinence before treatment, and 1 and 3 months after treatment, and all samples were stored in an incubator at 37 ℃. After liquefaction, the sperm count, motility, and forward motility were detected by the computer-assisted semen analysis system (CASA) in accordance with the testing standards of the Human Semen Examination and Processing Laboratory Manual formulated by the WHO. The sperm motility rate and forward motility rate were calculated as follows: sperm motility rate = motile sperm (forward motile sperm + non-forward motile sperm)/total sperm count × 100%; forward motile sperm rate = forward motile sperm count/total sperm count × 100%. Sperm morphology detection was performed using the Diff-Quik method described in the instructions of the sperm staining kit (Shenzhen Boride Biotechnology Co., Ltd.), and sperm DNA fragments were detected by the sperm chromatin dispersion (SCD) method. Kits were obtained from Anke Biotech Co., Ltd. The sperm DNA fragmentation rate = (halo-free sperm number/total sperm number) × 100%.

Clinical efficacy

According to the relevant criteria (8), cure was defined as pregnancy occurring normally during treatment or within 6 months after the treatment, with semen parameters completely restored to normal; markedly effective was defined as sperm density ≥15×10⁶/mL, forward motile sperm ≥32%, and total sperm motility ≥40%, but pregnancy did not occur; effective was defined when at least one of sperm density, motility, or forward motility improved >30% compared with before treatment; and non-effective when there were no changes in semen examination parameters or even deterioration. The total effective rate = (cure + markedly effective + effective) cases/total cases × 100% (see the code sketch below).

Western blot

Sperm in both groups were extracted with RIPA lysate, and the protein concentration was determined. A 5× loading buffer was then added, and the mixture was boiled for 10 min to denature the proteins. According to the molecular weight of the target protein, the separating gel and stacking gel were prepared, and the sample was loaded to start electrophoresis at 80 V. After 30 minutes, once the sample had run into the separating gel, the voltage was changed to 120 V for 90 minutes. The PVDF membrane activated by methanol was used for transfer for 2 hours, then blocked with 5% skim milk for 2 hours, after which it was cut according to the size of the target protein band and incubated in rabbit anti-human Trx2 (1:300) and TrxR1 (1:300) antibodies at 4 ℃ overnight. On the second day, the membrane was removed and washed with TBST three times, and the secondary antibody (1:6,000) was used for incubation at room temperature for 2 hours. The membrane was again washed with TBST three times. Solutions A and B from the ECL detection kit were mixed at a 1:1 ratio and dropped onto the PVDF membrane, which was then observed with a gel imager.
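The rate definitions given in the Test indexes and Clinical efficacy subsections above translate directly into code. The following is a minimal sketch using our own variable names; the counts in the example calls are invented for illustration and are not study data.

```python
# Direct transcription of the rate formulas stated above; variable names
# are ours. Counts are per-sample sperm counts from the CASA/SCD readouts.

def semen_rates(total, forward_motile, nonforward_motile, halo_free):
    motile = forward_motile + nonforward_motile
    motility_rate = motile / total * 100            # sperm motility rate (%)
    forward_rate = forward_motile / total * 100     # forward motile sperm rate (%)
    dna_frag_rate = halo_free / total * 100         # DNA fragmentation rate (%)
    return motility_rate, forward_rate, dna_frag_rate

def total_effective_rate(cured, markedly_effective, effective, total_cases):
    # total effective rate (%) = (cure + markedly effective + effective)/total x 100
    return (cured + markedly_effective + effective) / total_cases * 100

print(semen_rates(total=200, forward_motile=50, nonforward_motile=30, halo_free=40))
print(total_effective_rate(20, 60, 40, 150))  # 80.0 (invented counts)
```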
Statistical method

SPSS 20.0 statistical software was used, and the measurement data were represented as x̄±s and compared by t test. The chi-square test was used for count data (%); if the theoretical frequency was <1, Fisher's exact test was used, and the rank sum test was used for ranked data. P<0.05 indicated a statistically significant difference.

Changes of parameters and indexes in routine semen examination

All patients completed the study. There were no significant differences between the two treatment groups in sperm density, motility rate, forward motile sperm rate, or DNA fragmentation rate before treatment (P>0.05). However, after 1 month of treatment, the sperm density, motility rate, and forward motile sperm rate were all higher than those before treatment (P<0.05), while the DNA fragmentation rate was lower than that before treatment. At the same time, the semen parameters in the combination group were higher than those in the LC group (P<0.05). After 3 months of treatment, semen parameters improved significantly, as shown in Table 1 and Figure 1. The morphology of the sperm is shown in Figure 2.

Clinical efficacy

The clinical efficacy of patients after 1 month and 3 months of treatment was recorded. The results showed that the markedly effective rate in the combined treatment group was significantly higher than that in the LC group (P<0.05), and the total effective rate in the combined treatment group was significantly higher than that in the LC group (P<0.01), as shown in Table 2.

Expression of Trx2 and TrxR1 in sperm

Western blot results showed that, before treatment, compared with men with normal semen, the expression of Trx2 protein in oligoasthenospermia patients was significantly increased (P<0.05), while the expression of TrxR1 protein was significantly decreased (P<0.05). After 1 month of treatment, the fluorescence intensity of Trx2 protein was decreased and that of TrxR1 protein was increased, but there was no significant difference between the LC group and the combined treatment group (P>0.05). After 3 months of treatment, the fluorescence intensity of Trx2 protein was significantly down-regulated compared with before treatment (P<0.05), while that of TrxR1 protein was significantly up-regulated (P<0.05), as shown in Table 3 and Figures 3 and 4.

Discussion

Male oligoasthenospermia is one of the main causes of male infertility. In recent years, due to environmental deterioration and lifestyle changes, the incidence of this disease has increased, which has a great impact on male physical and mental health and family harmony (10). How to treat oligoasthenospermia effectively has become a hot spot in clinical research. At present, drug therapy is the main treatment and, as there are many clinically relevant medications available, the choice of drug regimen is important. Treatment can be divided into traditional Chinese medicine and western medicine. Traditional Chinese medicine relies on its holistic view and syndrome differentiation system, and treats patients according to syndrome and cause. It has certain advantages in treating oligoasthenospermia and a notable curative effect, but the effect is not ideal for severe oligoasthenospermia caused by genetic factors. Western medicine treatment of oligoasthenospermia mainly targets the underlying etiology; the drugs most widely used clinically are L-carnitine, vitamin E, and hormonal agents (androgens, gonadotropins, anti-estrogen drugs).
The clinical efficacy of these drugs alone is often not significant, and in multi-drug combination therapy the clinical use of hormonal drugs is limited by their uncertain efficacy and many adverse reactions. At present, many studies focus on the integration of Chinese and western medicine; this study focused on western medicine.

LC in the human body derives mainly from dietary intake and from biosynthesis in the brain, liver, and kidney. LC is mainly distributed in blood and tissues, among which the epididymis has the highest concentration, although it is not synthesized there (11). Studies have shown that as sperm move from the head to the tail of the epididymis, LC can reduce the lipid content of the sperm membrane and alter the ratios of saturated fatty acids, unsaturated fatty acids, cholesterol, and congealed fat, causing changes in membrane composition and structure and maintaining the fluidity of the sperm membrane (7). Other reports have shown that LC has an antioxidant effect, can inhibit the production of reactive oxygen species, avoid sperm oxidative damage, and effectively maintain the normal physiological function of sperm (12,13). LC is one of the preferred drugs for the treatment of male infertility, and has a certain curative effect on oligozoospermia, asthenospermia, and teratozoospermia.

Pancreatic kininogenase, also known as pancreatic kallikrein or kallikrein, is an important component of the kinin system. As a proteolytic enzyme, it is commonly found in the pancreas, submandibular glands, and saliva, with the highest content in the pancreas. The use of pancreatic kininogenase can increase protein content in the epididymis, clear blood vessels, and improve the quality of sperm (14), and studies have also confirmed that it regulates genital function. After pancreatic kininogenase enters the genital organs, its pharmacological effect appears quickly and reaches every part of the genital tract, restoring vitality and improving sexual function (15). Pancreatic kininogenase has made an important contribution to the treatment of male diseases (14,16). It is mainly used for the treatment of varicocele (17), either alone or in combination with other drugs, and it can also be used in the treatment of male sexual dysfunction caused by diabetes.

In this study, LC and pancreatic kininogenase were used to treat male patients with oligoasthenospermia. The results showed that the total effective rate of patients in the combined treatment group was significantly higher than that of the LC group. Sperm density, sperm motility, forward motility, the degree of DNA fragmentation, and the sperm malformation rate after combined treatment were all better than those in the LC group, suggesting that LC and pancreatic kininogenase have a positive synergistic effect and can improve semen quality and enhance the therapeutic effect through different mechanisms.

The Trx/TrxR system exists in human sperm and plays an important role in the defense against sperm oxidative stress. Trx/TrxR reduces antioxidant proteins using NADPH, and these in turn convert superoxide and hydrogen peroxide into H2O, thereby eliminating ROS and avoiding sperm damage (18,19). Many studies have confirmed that under oxidative stress, Trx in serum or plasma is significantly increased, making it a marker of oxidative stress (20).
This study found that Trx2 in patients with oligoasthenospermia was significantly up-regulated compared with men with normal semen. This suggests that in the sperm of patients with oligoasthenospermia, as an important anti-oxidative-damage molecule, the compensatory increase in Trx2 expression can reduce the degree of oxidative stress to a certain extent. After treatment, the expression of Trx2 was down-regulated, and the down-regulation was more obvious in the combined treatment group than in the LC group. TrxR1 is considered an important enzyme controlling cellular redox status, antioxidant defense, and cellular redox regulation (21). Studies have shown that in asthenospermia, the content of TrxR is decreased, the content of sperm ROS is increased, sperm apoptosis is increased, and the number of immature sperm is increased (22). In this study, TrxR1 was significantly down-regulated in oligoasthenospermia patients compared with men with normal semen, and the mechanism may be related to the increased oxidative stress and apoptosis of sperm caused by its low expression. After treatment, the expression of TrxR1 was up-regulated, and the up-regulation was more obvious in the combined treatment group than in the LC group.

In conclusion, Trx2 and TrxR1 are abnormally expressed in oligoasthenospermia patients, which suggests they play an important role in its occurrence and development, and they may be potential protein markers for the disease. However, the specific mechanism remains unknown and requires further study. LC combined with pancreatic kininogenase in the treatment of male oligoasthenospermia can effectively promote sperm maturation, enhance sperm motility, and improve semen quality, and thus has high application value.

Acknowledgments

Funding: None.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All procedures performed in this study involving human participants were in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by Ma'anshan Maternal and Child Health Hospital (No. 2019084). Each participant was informed in detail about the content of the study and their rights, and participated voluntarily. All enrolled patients signed an informed consent form.

Footnote

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
Ferritin Assembly in Enterocytes of Drosophila melanogaster

Ferritins are protein nanocages that accumulate inside their cavity thousands of oxidized iron atoms bound to oxygen and phosphates. Both characteristic types of eukaryotic ferritin subunits are present in secreted insect ferritins, but here dimers between the Ferritin 1 Heavy Chain Homolog (Fer1HCH) and the Ferritin 2 Light Chain Homolog (Fer2LCH) are further stabilized by disulfide bridges in the 24-subunit complex. We addressed ferritin assembly and iron loading in vivo using novel transgenic strains of Drosophila melanogaster. We concentrated on the intestine, where the ferritin induction process can be controlled experimentally by dietary iron manipulation. We showed that the expression pattern of Fer2LCH-Gal4 lines recapitulated the iron-dependent endogenous expression of the ferritin subunits and used these lines to drive expression from UAS-mCherry-Fer2LCH transgenes. We found that the Gal4-mediated induction of mCherry-Fer2LCH subunits was too slow to effectively introduce them into newly formed ferritin complexes. Endogenous Fer2LCH and Fer1HCH assembled and stored excess dietary iron instead. In contrast, when flies were genetically manipulated to co-express Fer2LCH and mCherry-Fer2LCH simultaneously, both subunits were incorporated with Fer1HCH in iron-loaded ferritin complexes. Our study provides fresh evidence that, in insects, ferritin assembly and iron loading in vivo are tightly regulated.

Introduction

With over one million insect species on earth [1], there can be no simple generalized description of the iron storage strategies they employ [2-11]. Nevertheless, insect ferritins are widely recognized as the key protein complexes involved in the biological handling of excess cytosolic ferrous iron [12-16]. In particular, the study of Drosophila melanogaster ferritins has informed the field of insect iron physiology (reviewed in [17-19]). With the exception of the testis-specific mitochondrial ferritin [20], most cell types of Drosophila melanogaster involved in iron storage accumulate ferritin in their endomembrane system [21-25]. Subcellular localization within the vesicular system comes with distinct evolutionary adaptations for the insect ferritins. First, the Fer1HCH and Fer2LCH subunits have N-terminal signal peptides that direct them to the endoplasmic reticulum [26,27]. Second, Fer1HCH and Fer2LCH are cross-linked to each other by disulfide bonds, giving rise to a highly organized symmetrical arrangement of 12 Fer1HCH and 12 Fer2LCH subunits in the assembled ferritin complex [28]. Third, the Fer1HCH and Fer2LCH genes, being chromosomal neighbors, share common enhancers (they are transcriptionally co-regulated) and also show post-transcriptional co-regulation to ensure the provision of roughly equal amounts of subunits [16,21,29].
These regulatory relationships are conserved in other insects besides Drosophila melanogaster [30,31]. Fourth, iron loading into ferritin critically depends on transport from the cytosol to the endoplasmic reticulum, a function likely mediated by the zinc-regulated and iron-regulated transporter 13 (Zip13) [25]. Fifth, the two subunits Fer1HCH and Fer2LCH have been detected in distinct vesicular compartments at the initial stages of the ferritin biosynthetic process, one hour post-feeding on iron-containing media, suggesting that subunit-specific trafficking and post-translational modifications may precede the formation of the ferritin complex [21]. A recent complementary effort in mosquito cells is likely to provide independent information on the ferritin assembly and secretion processes [32].

Despite the differences in the subcellular accumulation of ferritin, in the cytosol of vertebrates [33], in the chloroplasts of plants [34], and in the secretory pathway of many insect ferritins (for insects with cytosolic ferritins see [4] and also the ferritin sequences of Rhodnius prolixus [35]), strong evolutionary links exist between ferritins from prokaryotes and archaea to eukaryotes [36-40]. In particular, the mechanism of iron mineralization in assembled ferritins is highly conserved [38-40]. Ferritin assembly is generally thought to occur spontaneously, aided by the high stability of the ferritin subunit dimers [41-45]. Recently, self-assembly of ferritin was shown to be required for achieving ferroxidase catalytic activity [46]. Given that ferritins isolated from different mammalian tissues show differences in the ratios of their two subunit types, the regulation of ferritin assembly in vivo requires further investigation [47-53].

The Drosophila intestine is highly compartmentalized, with small groups of enterocytes and adjacent enteroendocrine cells specializing in different functions [54-59], including metal storage and detoxification [17,18,60-64]. The larval anterior midgut provides an ideal epithelium in which to observe the ferritin biosynthetic process because it contains large enterocytes, which do not normally express ferritin but readily induce its expression upon iron treatment [2,16,21,22,65-67]. The Fer1HCH G188 allele, which splices the green fluorescent protein (GFP) into the endogenous Fer1HCH gene and assembles GFP-Fer1HCH subunits into iron-loaded ferritin complexes, was previously used together with Fer2LCH-specific antibodies to detect both subunits in larval intestines [21]. In the iron region, Fer2LCH subunits fully co-localized with GFP-Fer1HCH, i.e., there were no vesicles in which the subunits could be seen separately [21]. These vesicles represent a specialized Golgi compartment, packed with assembled, iron-loaded ferritin [2,3]. In the anterior midgut, ferritin assembly had not occurred 1 h after the transfer of larvae to an iron-rich diet, but was complete by 4 h [21]. Accordingly, 1 h after the transfer, Fer2LCH was readily detectable in a separate vesicular compartment from GFP-Fer1HCH, whereas 4 h after the transfer only vesicles containing both subunits were detected in anterior midgut cells of Fer1HCH G188/+ larvae [21]. These observations led to a model whereby individual ferritin subunits are modified in separate vesicular compartments prior to assembly of the ferritin complex.
The present study was undertaken to further test the hypothesis of a regulated ferritin assembly process involving separate vesicular compartments, by using fluorescent-protein-based imaging to allow the simultaneous visualization of Fer1HCH and Fer2LCH subunits in the larval intestine.

Results and Discussion

To visualize the ferritin assembly process in vivo, a UAS-mCherry-Fer2LCH construct was designed. The mCherry fluorescent protein was inserted at the N-terminus of the Fer2LCH gene, immediately after the predicted cleavage site associated with the signal peptide that targets Fer2LCH to the endoplasmic reticulum [27]. To express mCherry-Fer2LCH in an iron-inducible manner in the larval anterior midgut, a Fer2LCH-Gal4 driver was generated by transposition [68] of the P{GawB} element [69] into Fer2LCH EP1059 [10]. Both the parental EP and the new Gal4 lines were homozygous lethal, because normal Fer2LCH gene function was interrupted by the insertions. In contrast, Fer2LCH-Gal4, UAS-Fer2LCH recombinants were homozygous viable, indicating that the new driver could express heterologous Fer2LCH where it was required during development. Fer2LCH-Gal4, UAS-mCherry-Fer2LCH flies were not homozygous viable, consistent with previous observations that ferritin consisting solely of GFP-Fer1HCH and Fer2LCH subunits was not functional [10,21]. It was still possible, however, to form functional ferritin complexes if GFP-Fer1HCH was present together with Fer1HCH and Fer2LCH [21], which provided a rationale for working with UAS-mCherry-Fer2LCH in the presence of endogenous Fer2LCH. Two further Fer2LCH-Gal4 lines became available from the Kyoto stock center [70], and all three lines gave identical intestinal expression. Ferritin is also expressed in the brain [10,24,71-75]. Images obtained from the brains indicated some differences between the three Fer2LCH-Gal4 lines, but these results are not presented here.

Ferritin Gal4 Driver Lines Recapitulate Iron-Dependent Induction in the Anterior Midgut

To test whether the Fer2LCH-Gal4 lines recapitulated the endogenous ferritin expression pattern in larvae [22] and, in particular, the iron-dependent inducible expression in the anterior midgut, they were crossed to flies carrying a recombinant Fer1HCH G188, UAS-stinger-RFP chromosome. Simultaneous monitoring of cytoplasmic green fluorescence from the endogenous GFP-Fer1HCH protein trap and nuclear red fluorescence from cells expressing Fer2LCH-Gal4 was possible in the progeny of this cross. Under iron-limiting conditions, defined by the addition of 200 µM Bathophenanthroline Sulfate (BPS; an effective iron chelator [15,20,76]) to the standard yeast- and molasses-based diet [77], Fer2LCH NP4763-Gal4 was expressed strongly in the iron region enterocytes, but also in cells posterior to this region (Figure 1a). Under dietary iron supplementation (1 mM Ferric Ammonium Citrate; FAC), the driver was clearly induced in the anterior midgut cells, in each and every cell that also expressed GFP-Fer1HCH from the endogenous gene promoter (Figure 1b). Expression in the iron region and in cells posterior to it remained. The same results were obtained with another driver, Fer2LCH NP2602-Gal4 (Figure 1c,d). Thus, in the anterior midgut region, the Fer2LCH-Gal4 lines recapitulated the well-established, iron-dependent ferritin expression pattern.
mCherry-Tagged Fer2LCH Subunit Expression Driven by Fer2LCH-Gal4 in the Intestine

The intestines of 3rd instar larvae of the Fer2LCH-Gal4, UAS-mCherry-Fer2LCH/Fer1HCH G188 genotype raised on diets containing 200 µM BPS were examined first (Figure 2a,b). Notably, mCherry-Fer2LCH was also readily detectable in the anterior midgut, where GFP-Fer1HCH was absent (Figure 2c). One possible explanation would be that mCherry-Fer2LCH is more stable than stinger-RFP in these cells and the fluorescence reflects an earlier or lower-level induction of the Fer2LCH-Gal4 driver; alternatively, secreted mCherry-Fer2LCH is taken up by these cells, as has been shown to be the case for the nephrocyte-like garland cells [10].

When the intestines were dissected from larvae grown on diets supplemented with 1 mM FAC, both mCherry-Fer2LCH and GFP-Fer1HCH were detected in the anterior midgut region, but, curiously, mCherry-Fer2LCH appeared to be absent from the cells posterior to the iron region and only accumulated in the iron region enterocytes in the middle midgut (Figure 2d). This raised the question of whether mCherry-Fer2LCH was being secreted to the hemolymph or to its neighboring iron-region cells or, less intuitively, whether it was being degraded despite the presence of dietary iron. The absence of mCherry-Fer2LCH is consistent with the known fact that these cells posterior to the iron region do not accumulate assembled, iron-loaded ferritin [22].

The reasons that would explain the differences in some cell types between the presence of reporter gene expression and mCherry-Fer2LCH accumulation are not understood; however, these observations suggest that active transport of the ferritin subunits may be implicated in the assembly of functional ferritin complexes in vivo. Further evidence in support of this notion came from the altered accumulation of GFP-Fer1HCH (arising from Fer1HCH G188/+) when the secretory pathway was blocked in embryos by means of a lethal mutation in Sec23 [10]. Nevertheless, Fer2LCH-Gal4, UAS-mCherry-Fer2LCH/Fer1HCH G188 larvae grown in 1 mM FAC accumulated mCherry-Fer2LCH in the same cell types where GFP-Fer1HCH was present (Figure 2e,f), suggesting that some aspects of the expected intestinal response to dietary iron were being reported faithfully with these tools.

Inspection of the iron region in the Fer2LCH-Gal4, UAS-mCherry-Fer2LCH/Fer1HCH G188 larvae revealed some abnormally large vesicular compartments, reminiscent of autophagosomes [78,79], where red and green fluorescence was readily observable. These compartments were substantially larger in intestines from larvae grown on 1 mM FAC food (compare Figure 2b-e), and they appeared to be present in the posterior half of the iron region. These larger compartments (autophagosomes) were not readily observable in intestines dissected from Fer1HCH G188/+ larvae, and we therefore considered that they indicated a cellular stress imposed in the presence of mCherry-Fer2LCH and iron. The autophagosomes are a likely response to endoplasmic reticulum stress [80-82]. Moreover, it is possible that the fluorescent proteins are more resistant to degradation in this environment than their attached subunits [83], so the fact that mCherry and GFP signals are abundant suggests that both ferritin subunits had reached these compartments, but whether they were assembled, present as single subunits, or degraded remains unclear.

Figure 2. Intestines were dissected, mounted in Vectashield with DAPI and imaged by confocal microscopy. Green fluorescence is from GFP-Fer1HCH; red fluorescence from mCherry-Fer2LCH; cyan fluorescence from DAPI. (a) Using the 10× objective, GFP-Fer1HCH is readily observed only in the iron region (IR) as previously described. In contrast, mCherry-Fer2LCH is detected both in the iron region and in cells posterior to the iron region (PIR), recapitulating the expression pattern seen in Figure 1a, but it is also readily observable in the anterior midgut (AM); (b) Closer view of the iron region using the 40× objective (anterior is to the left); (c) and of the anterior midgut: only mCherry-Fer2LCH was detected here; (d) Larvae of the genotype Fer1HCH G188/Fer2LCH NP4763, UAS-mCherry-Fer2LCH were grown on a diet supplemented with 1 mM FAC. There is a visible induction of GFP-Fer1HCH and mCherry-Fer2LCH in the anterior midgut. In the majority of larvae observed (n > 10) the cells posterior to the iron region no longer express mCherry-Fer2LCH when raised on an iron-rich diet; (e) Closer view of the iron region; stars mark abnormally large vesicular compartments, which may represent an autophagic response in some cells of the larvae grown on food supplemented with 1 mM FAC; (f) Closer view of the anterior midgut region.
Subcellular Distribution of GFP-Fer1HCH and mCherry-Fer2LCH in Iron Region and Anterior Midgut Enterocytes

The cells that had no signs of autophagosome formation were imaged at a higher magnification (using a 63× objective and 2× optical zoom at the confocal) to detect the subcellular localization of the ferritin subunits in enterocytes of Fer2LCH-Gal4, UAS-mCherry-Fer2LCH/Fer1HCH G188 larval intestines, raised on a diet supplemented with 1 mM FAC. Initial focus was on the iron region enterocytes (Figure 3a), where a perfect co-localization between mCherry-Fer2LCH and GFP-Fer1HCH had been expected [21]. In contrast to our expectations, only a limited number of vesicles containing both tagged ferritin subunits were visible, and these were almost exclusively in the perinuclear region of cells.
Further to the periphery, mCherry-Fer2LCH and GFP-Fer1HCH could be clearly detected in distinct vesicular compartments. Judging by morphological criteria and relative abundance, GFP-Fer1HCH was present in the Golgi-like vesicles that specialize in iron storage in these cells, whereas mCherry-Fer2LCH accumulated in a less abundant type of vesicle, which is normally devoid of ferritin (compare to Figure 6C in [21]). This distribution called into question whether the mCherry-Fer2LCH subunits were being properly incorporated into the ferritin complexes of these cells.

Upon imaging the anterior midgut, co-localization within cells between GFP-Fer1HCH and mCherry-Fer2LCH was rare. A typical enterocyte in the anterior midgut is depicted (Figure 3b). Despite the ferritin induction as a response to iron, these cells accumulate mCherry-Fer2LCH and GFP-Fer1HCH in separate compartments. These results suggested that the mCherry-Fer2LCH subunits were not being incorporated into functional ferritin complexes. To directly observe the assembled ferritin complexes and the loading of iron into them, protein extracts from fly genotypes expressing GFP-Fer1HCH or mCherry-Fer2LCH were run under non-reducing SDS-PAGE, and the gels were stained for protein or iron, respectively.

Iron Loading in Ferritins with GFP-Fer1HCH Subunits Only Occurs When They Are Expressed from Fer1HCH G188 But not from Fer2LCH-Gal4, UAS-GFP-Fer1HCH Flies

Wild type ferritin and ferritin with a varying number of GFP-Fer1HCH subunits attached to the assembled complex (of 12 Fer2LCH : x Fer1HCH : y GFP-Fer1HCH subunits, where x + y = 12) have been previously analyzed by non-reducing SDS-PAGE and radioactive iron incorporation assays [21]. Ferritin iron is sufficiently concentrated as to also be readily observable with a simple incubation with potassium ferrocyanide under acid conditions (Prussian blue stain), and ferritin is the dominant abundant high-molecular-weight protein observed with Coomassie blue staining in extracts from adult flies analyzed in this manner [15,16,84]. Hence the first two lanes in Figure 4 represent the wild type control (with a prominent ferritin band representing the complex of 12 Fer1HCH and 12 Fer2LCH subunits) and the GFP-tagged ferritin from Fer1HCH G188/+, where wild type ferritin complexes are absent and new higher molecular weight complexes appear (representing increasing numbers of GFP-Fer1HCH subunits incorporated). Iron is accumulated in these Fer1HCH G188/+ -specific ferritins.
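The ladder of tagged-ferritin bands can be rationalized with a toy calculation: if each of the 12 Fer1HCH positions in a cage were filled independently by a GFP-Fer1HCH subunit with probability p (an assumed, not measured, parameter; p = 0.5 is a naive guess for Fer1HCH G188/+ heterozygotes), the number y of tagged subunits per complex would follow a binomial distribution:

```python
from math import comb

def tagged_subunit_distribution(p_tagged: float, positions: int = 12):
    """P(y tagged subunits) for a cage with `positions` Fer1HCH slots,
    assuming independent, random incorporation (illustrative model only)."""
    return {y: comb(positions, y) * p_tagged**y * (1 - p_tagged)**(positions - y)
            for y in range(positions + 1)}

# Rough guess: half of the Fer1HCH pool is GFP-tagged in heterozygotes.
dist = tagged_subunit_distribution(0.5)
for y, prob in dist.items():
    if prob > 0.01:
        print(f"12 Fer2LCH : {12 - y} Fer1HCH : {y} GFP-Fer1HCH -> {prob:.1%}")
```

Under this assumption, complexes with y = 0 (fully untagged cages) would be vanishingly rare (about 0.02% at p = 0.5), which is at least consistent with the absence of native iron-loaded ferritin in Fer1HCH G188/+ homogenates noted in the Figure 4 legend below.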
Figure 4. (a) Coomassie blue staining of whole-fly homogenates resolved by non-reducing SDS-PAGE, as described in [15]. Higher molecular weight bands (indicated by three asterisks) represent assembled ferritin complexes with an increasing number of fluorescent protein subunits attached [21]; (b) Prussian blue staining to reveal iron-loaded ferritin molecules. Note that no native iron-loaded ferritin is detected in samples from Fer1HCH G188/+ fly homogenates, suggesting that in this genotype the ferritin assembly process efficiently combines GFP-Fer1HCH subunits with its endogenous Fer1HCH and Fer2LCH counterparts. In contrast, when mCherry-Fer2LCH subunit (lanes 3 and 5) or GFP-Fer1HCH subunit (lane 4) expression is driven by Fer2LCH-Gal4, only ferritin composed of wild type subunits is iron-loaded.

When the Fer2LCH-Gal4, UAS-mCherry-Fer2LCH chromosome was tested (over a balancer chromosome, i.e., in conditions where one copy of Fer2LCH was unaffected and both copies of Fer1HCH were present), higher molecular weight ferritin complexes appeared in the protein stains of the gels, albeit in lower abundance compared to the Fer1HCH G188/+ genotype (Figure 4a), suggesting that assembled ferritin complexes were present. However, the most abundant species was the wild type ferritin. Importantly, it was only in wild type ferritin that iron could be detected in these flies (Figure 4b). These results were consistent with some limited ferritin complex formation (i.e., see Figure 3a) and with a more general conclusion that most functional (i.e., iron-loaded) ferritin in these animals had not incorporated the mCherry-Fer2LCH subunit.

One remaining concern was whether the attachment of mCherry to Fer2LCH was the main reason behind these phenomena, for example by affecting the process of iron loading into ferritin. To test this idea, UAS-GFP-Fer1HCH transgenic flies were generated, whereby GFP was attached at exactly the same position as it is found in the Fer1HCH G188 protein trap allele, and crossed to Fer2LCH-Gal4. It was reasoned that the presence of a few GFP-Fer1HCH subunits in the assembled ferritin complex should not inhibit iron loading, given the positive control (i.e., the Fer1HCH G188/+ genotype). Nevertheless, the Fer2LCH-Gal4, UAS-GFP-Fer1HCH flies were unable to produce detectable quantities of iron-loaded ferritin complexes containing GFP-Fer1HCH subunits.
This genotype accumulated iron in ferritin complexes consisting exclusively of endogenous Fer1HCH and Fer2LCH subunits (Figure 4). To explain these observations, the hypothesis that the timing of GFP-Fer1HCH subunit expression determines whether ferritin iron loading occurs in GFP-Fer1HCH-containing ferritin complexes was proposed and tested.

A Model for Ferritin Biosynthesis in Anterior Midgut Enterocytes

The proposal is that cellular iron entry induces both ferritin subunits in a pulse, i.e., Fer1HCH and Fer2LCH mRNAs are produced in a coordinated, non-continuous manner, and that following their translation they are first processed separately, but then assembled rapidly, first as heterodimers [28], then into the complex that receives the excess iron (Figure 5a). Zip13 is required for the iron-loading step [25].
In addition, the presence of a ferritin subunit in the absence of its partner is not sufficient for complex formation. Indeed, previous studies have shown that heterozygous mutants (or RNA interference [16]) in either Fer1HCH or Fer2LCH produce half the amount of ferritin [21]. Similarly, overexpression experiments suggest that both ferritin subunits need to be induced to achieve a demonstrable increase in ferritin accumulation [21,24]. The recent discovery in the dipteran fly Bactrocera dorsalis of an alternatively spliced intron in Fer2LCH that leads to the insertion of a premature stop codon revealed a further aspect of the co-regulation of the two ferritin subunits, connecting transcriptional to post-transcriptional control [31].

Figure 5. (a) Schematic representation of an anterior midgut enterocyte from Fer1HCH G188/+ larvae at one hour post-feeding on 1 mM FAC. Iron has been sensed by an unknown mechanism in the cytosol, ferritin transcription has been induced (the transcription factors involved have not been experimentally determined [13,29]), and two types of vesicles have formed: one containing Fer2LCH subunits only and another containing Fer1HCH and GFP-Fer1HCH subunits. These vesicles will soon give rise to assembled, iron-loaded ferritin in a single type of Golgi vesicle (see [21] for evidence). The ZIP13 transporter is implicated in iron transport to the vesicles [25]. Question marks above the red arrows indicate that these processes are poorly understood; (b) Similar representation from Fer2LCH-Gal4, UAS-mCherry-Fer2LCH larvae. Again, iron has been sensed in the cytosol, ferritin transcription has been induced and two types of vesicles have formed: one containing Fer2LCH subunits only and the other containing Fer1HCH subunits only. There has also been synthesis of the transcription factor Gal4, which will move into the nucleus. When ferritin assembly and iron loading take place, there is no mCherry-Fer2LCH present. This model implies that approximately one hour later, when mCherry-Fer2LCH is synthesized through the action of the Gal4-UAS system (red dotted arrows), there will either be no remaining Fer1HCH-containing vesicles with which to co-assemble, or the iron loading process on assembled ferritin will have finished.
The model further implies feedback inhibition of ferritin synthesis, resulting in a coordinated pulse of expression of both genes encoding the ferritin subunits upon cellular iron entry.

An interpretation of the experimental results is depicted in Figure 5. According to our hypothesis, the reason for not seeing significant ferritin complex formation incorporating GFP-Fer1HCH or mCherry-Fer2LCH subunits when driven with the Gal4-UAS system is that they are produced too late in the timeframe of events that follow cellular iron entry. In other words, at the time endogenous Fer1HCH and Fer2LCH are being produced and processed, Fer2LCH-Gal4 has induced the Gal4 transcription factor, but Gal4-induced transcription has not yet occurred (red dotted arrow in Figure 5b). At a later stage, when UAS-mCherry-Fer2LCH is expressed and translated, there are few Fer1HCH subunits available to form the ferritin complex; hence mCherry-Fer2LCH accumulates in a separate vesicle. The time delay described here is inherent in the mode of action of the Gal4-UAS system [85], a drawback previously recognized and one that led to the development of protein-trap systems [86-88]. Our model also accounts for the observation that Fer2LCH-Gal4, UAS-Fer2LCH flies are homozygous viable (the homozygous Fer2LCH-Gal4 driver is lethal because the P-element insertion interrupts endogenous Fer2LCH function). In homozygous Fer2LCH-Gal4, UAS-Fer2LCH flies there will be no endogenous Fer2LCH subunits to complex with Fer1HCH at the time of cellular iron entry; therefore, recently made Fer1HCH will not be used up, and the temporal delay is accommodated in this situation.
Evidence that mCherry-Fer2LCH Is Incorporated in Iron-Loaded Assembled Ferritin Complexes When Co-Expressed Simultaneously with Fer2LCH

To test the proposed model, Fer2LCH-Gal4, UAS-Fer2LCH was crossed to Fer2LCH-Gal4, UAS-mCherry-Fer2LCH, reasoning that in this way there would be no endogenous Fer2LCH expression (due to the Gal4 insertions), but Fer2LCH expressed from the UAS transgene would rescue and would be expressed at the same time as mCherry-Fer2LCH. Non-reducing SDS-PAGE of whole-fly homogenates (from flies raised on 1 mM FAC) was performed, and the gels were treated with Coomassie and Prussian blue stains (Figure 6). As predicted by the model, iron loading in ferritins assembled with mCherry-Fer2LCH was observed in the new genotype.

These results confirm that the mCherry-Fer2LCH subunit can in principle assemble with the Fer1HCH and Fer2LCH subunits, giving rise to functional ferritin molecules. For these mCherry-tagged assembled ferritins to be iron-loaded, simultaneous expression of the mCherry-Fer2LCH and Fer2LCH subunits is required (Figure 7). Nevertheless, iron loading was clearly lower compared to the native ferritins. The same holds for GFP-Fer1HCH-containing ferritins [15,21].
Why this is the case is not presently understood, but the bulky tags may affect the folding of the subunits, resulting in diminished ferroxidase activity of the complex, or may interfere with iron delivery to ferritin.

New Tools Are Required for the in Vivo Imaging of Ferritin Assembly in the Drosophila Intestine

Cellular iron sensing is not yet understood in Drosophila, beyond the post-transcriptional Iron Regulatory Protein-Element paradigm [89]. A genetic screen designed to uncover the transcription factors involved in iron-induced transcription failed to reveal any, possibly because it only screened homozygous viable mutants [13]. The experiments presented here support the notion that ferritin assembly is a highly regulated process; however, more investigations are required to unravel the full sequence of events following cellular iron entry into the enterocytes of the anterior midgut.
Generating a Fer1HCH G188, Fer2LCH-Gal4 recombinant chromosome is an obvious yet challenging objective, as the two genes are direct chromosomal neighbors [29]. It would also be helpful to obtain a fly strain expressing mCherry-Fer2LCH directly from the Fer2LCH promoter to support future studies. In this respect, the GFP protein trap line Fer2LCH CPTI100064 [87] does not accumulate GFP-Fer2LCH in the intestines (data not shown). Our efforts to employ the P[acman] BAC libraries [90] to rescue ferritin deficiency mutants [10,15] were stalled by inefficient transgenesis of the 154,003 base pairs of the R22M06 BAC clone that includes the Fer1HCH, Fer2LCH genomic locus. Genetic engineering techniques in Drosophila are evolving at an incredible pace, and a strategy for generating mCherry knock-in alleles in Fer2LCH using Clustered Regularly Interspaced Short Palindromic Repeat associated technology can be considered [91-93]. Alternatively, the use of bisarsenic fluorescent probes, activated upon cage assembly, might be adopted by site-directed mutagenesis of Fer1HCH and Fer2LCH to generate optimal bisarsenic binding pockets and visualize the process in vivo [94,95]. This latter strategy would come with the advantage of avoiding steric complications arising from the presence of the GFP and mCherry protein tags on the outside of the ferritin cage.

Materials and Methods

Wild type flies used in this study were collected in Tannes, Italy [8]. The Fer1HCH G188 allele has been characterized previously [10,15,21,22]. The Gal4 drivers Fer2LCH NP2602 and Fer2LCH NP4763 [70] were obtained from the Kyoto Stock Center (#104255 and #113517, respectively). Fer2LCH 21BGal4 was generated by transposition [68] of the P{GawB} element [69] into Fer2LCH EP1059 and has been used before [10]. The tagged ferritin constructs UAS-mCherry-Fer2LCH and UAS-GFP-Fer1HCH were generated in the pCasper-UAST vector [69] by inserting mCherry and GFP, respectively, at the N-terminal regions of the open reading frames of Fer2LCH and Fer1HCH, immediately following the predicted cleavage sites of the endoplasmic reticulum targeting sequences. GFP was inserted following aspartic acid 22 of Fer1HCH, and mCherry following cysteine 23 of Fer2LCH.

The diet used in all experiments was based on yeast and molasses [77]. The addition of 200 µM BPS (final concentration) decreases ferritin and iron in the flies, whereas the addition of 1 mM FAC increases total body iron content and induces ferritin [10,15,21,22]. 3rd instar crawling larvae were selected immediately after the end of their feeding phase, as they initiated wandering away from the fly food to the sides of the plastic vials in which they were reared. The larval cuticle was broken open and the internal organs were exposed but not dissected out; instead, the samples were incubated in freshly prepared 4% paraformaldehyde and kept at 4 °C for 12 h. The next day, the freshly prepared 4% paraformaldehyde was replaced for 2 h at room temperature, followed by three washes with phosphate buffered saline for 20 min each. Dissections were performed directly in PBS for Drosophila (Cold Spring Harbor Protocols), and the intestines were removed and mounted in Vectashield mounting medium containing DAPI. Imaging was performed on a Leica TCS SP8 confocal system coupled to a DMI6000 inverted microscope (Wetzlar, Germany). Non-reducing SDS-PAGE was performed on 6% acrylamide gels, followed by Coomassie and Prussian blue stains, as described previously [15,84].
It is noted that the ferritin complex runs at higher apparent molecular weights in 8% and 10% acrylamide gels, but the resolution of the tagged ferritin complexes is less evident there.

Conclusions

Here, we described Fer2LCH-Gal4 lines, which are iron-responsive in the anterior midgut region. These were used to drive UAS-mCherry-Fer2LCH and UAS-GFP-Fer1HCH. Ferritin complexes containing the mCherry-Fer2LCH or GFP-Fer1HCH subunits induced in this way were, however, iron poor, and iron was stored instead in ferritin complexes composed exclusively of the endogenous Fer1HCH and Fer2LCH subunits. This situation contrasts with what is observed when GFP is directly spliced into the endogenous Fer1HCH transcript, as is the case in the Fer1HCH G188/+ genotype, where no ferritin complexes composed exclusively of Fer1HCH and Fer2LCH subunits were detected and iron was loaded instead into ferritin complexes assembling from GFP-Fer1HCH, endogenous Fer1HCH and endogenous Fer2LCH subunits. From these findings, we conclude that the temporal delay inherent in the production of the Gal4 transcription factor and its movement to the nucleus to activate upstream sequences and produce tagged ferritin subunits impedes their incorporation into functional assembled ferritin complexes. We support this conclusion by showing that flies co-assemble iron-loaded mCherry-tagged ferritin complexes when expression of mCherry-Fer2LCH is concurrent with that of Fer2LCH. Thus, ferritin assembly is a highly organized, temporally regulated cellular process in Drosophila. Further experiments using alternative strategies are required to uncover the mechanistic details of insect ferritin assembly as it occurs in vivo.

Acknowledgments: and for his comments on the manuscript. We also thank Nicanor Gonzalez-Morales, Christoph Metzendorf and four anonymous reviewers for their insightful suggestions on previous drafts of the manuscript. CONACYT supported Abraham Rosas-Arellano for this work with a national post-doctoral fellowship. Bertrand Mollereau was supported by a grant "Equipe" from the Fondation pour la Recherche Médicale. Hermann Steller is an investigator of the Howard Hughes Medical Institute. Funding from CONACYT project #179835 to Fanis Missirlis also contributed to this paper.

Author Contributions: Abraham Rosas-Arellano prepared the samples for confocal microscopy and performed the imaging. Johana Vásquez-Procopio performed the SDS-PAGE experiments. Alexis Gambis, Hermann Steller and Bertrand Mollereau generated the UAS-mCherry-Fer2LCH and UAS-GFP-Fer1HCH transgenic flies. Liisa M. Blowes generated the Gal4, UAS recombinant lines. Fanis Missirlis directed this project and wrote the first draft of the manuscript. All authors contributed to the final version of the paper.

Conflicts of Interest: The authors declare no conflict of interest.
An Accelerated Convergence Algorithm for Sparse-View CT Image Reconstruction

In order to reduce radiation dose during CT scanning, sparse sampling is an effective way. Although the TV-based iterative reconstruction algorithm is a breakthrough for solving the problem of sparse-view CT image reconstruction, its applicability is still limited by a huge computational burden. It is therefore necessary to study acceleration methods for TV-based iterative algorithms. This paper shows that the FISTA acceleration method is not suitable for the POCS-TV algorithm; meanwhile, an improved acceleration method, IFISTA, is proposed to accelerate the convergence rate of POCS-TV. Numerical experiments show that the convergence rate of POCS-TV-IFISTA is about 35% faster than POCS-TV.

Introduction

At present, computed tomography (CT) has been widely used in clinical diagnosis; however, X-ray radiation dose is a potential risk for diseases such as cancer [1]. In order to reduce radiation dose during CT scanning, sparse sampling is an effective way, but it also destroys the completeness of the projection data [2]. For sparse-view CT image reconstruction, conventional analytic algorithms such as FBP suffer from serious artifacts because of the incompleteness of the projections; iterative algorithms like ART and SART can obtain better reconstruction quality than FBP, but the resulting blurred images still have little practical value [3]. In recent years, more and more studies have shown that the TV-based iterative reconstruction algorithm is a breakthrough for reconstructing high-quality CT images from incomplete projection data [4,5,6]. For TV-based algorithms, sparse-view CT image reconstruction is a constrained optimization problem as follows:

min_x ||x||_TV  subject to  Mx = p,  x ≥ 0   (1)

||x||_TV = Σ_{s,t} √((x_{s,t} − x_{s−1,t})² + (x_{s,t} − x_{s,t−1})²)   (2)

In formula (1), x represents the discrete image vector, ||x||_TV represents the TV norm of x, whose definition is given by formula (2), M represents the system matrix and p represents the measured projection data. Among the algorithms proposed for this problem [7,8], POCS-TV is one of the most classical. The POCS-TV algorithm alternately minimizes the TV norm (TV-step) and imposes data consistency and positivity constraints (POCS-step), iteratively, to find the minimum-TV solution satisfying all constraints. However, its applicability is limited because it needs too many iterations to reach convergence. Therefore, it is necessary to study methods for accelerating the convergence of POCS-TV. Some studies show that convergence can be accelerated effectively by adding an appropriate prediction step, called FISTA, to the iterative updating process of CT image reconstruction [9]. However, in our study, we found that FISTA cannot steadily accelerate the POCS-TV algorithm; there will be undesirable divergence after a few iterations. After studying the divergence process of POCS-TV-FISTA, we propose an improved FISTA acceleration method (IFISTA), and numerical experiments prove that our IFISTA method can steadily accelerate the POCS-TV algorithm.

FISTA acceleration method and its defects

The traditional image updating method only uses the current iteration result; the key idea of FISTA is to use a linear combination of the current and previous iteration results to update the image. When FISTA is used to accelerate the POCS-TV algorithm, formulas (3a)-(3c) are simply added after the TV-step; in their standard form these read

t_{k+1} = (1 + √(1 + 4 t_k²)) / 2,  with t_0 = 1   (3a)
y^(k+1) = x^(k) + ((t_k − 1) / t_{k+1}) (x^(k) − x^(k−1))   (3b)
x^(k) ← y^(k+1)   (3c)

The flow chart of the POCS-TV algorithm can be found in Appendix A. In Appendix B, we give a schematic diagram of the FISTA acceleration method. Theoretically, the appropriate linear combination of x^(k) and x^(k−1), expressed in y^(k+1), is closer to the feasible region, so the iterative process may be accelerated.
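As a concrete illustration of the momentum step above, here is a minimal sketch in Python/NumPy; the function name and the flattened-array representation of the image are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def fista_step(x_curr, x_prev, t):
    """Standard FISTA momentum update, formulas (3a)-(3c).

    x_curr, x_prev : image estimates from the current and previous
                     iterations, as 1-D NumPy arrays (t starts at 1.0).
    Returns the extrapolated image y and the updated parameter t_{k+1}.
    """
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0       # (3a)
    y = x_curr + ((t - 1.0) / t_next) * (x_curr - x_prev)   # (3b)
    return y, t_next                                        # y replaces x, (3c)
```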
Unfortunately, as shown in Figure 1, when POCS-TV-FISTA is used for reconstruction, error starts to accumulate slowly after a few iterations, eventually leading to unexpected divergence. This phenomenon illustrates that the FISTA method cannot steadily accelerate the POCS-TV algorithm. For the two algorithms, the parameters we used are shown in Table 1. We use the relative error (RE) defined in formula (4) to quantitatively evaluate the quality and convergence rate of the reconstructed image,

RE = ||x^(k) − x*||₂ / ||x*||₂   (4)

where x* denotes the reference phantom image. Obviously, the smaller the RE is, the faster the algorithm converges.

IFISTA acceleration method

We speculate that the reason why POCS-TV-FISTA tends to diverge is that the contribution of the FISTA-step is too large, which destroys the harmony of "inward projection" and "outward motion". According to Figures 4(c) and 4(d), it can be seen that if the contribution of the FISTA-step is greater than the TV-step, the corresponding iteration will be the turning point of divergence. Therefore, a good idea to improve the FISTA acceleration method is: 1) in each iteration, constrain the contribution of the FISTA-step to be less than the TV-step; 2) based on 1), increase the contribution of the FISTA-step appropriately to accelerate iterative convergence. The improved FISTA method is called IFISTA. We give the pseudo-code of the POCS-TV-IFISTA algorithm in Appendix A; we now explain the details. We introduce a new parameter γ to restrict the multiplier factor (t_k − 1)/t_{k+1} in formula (3b). In general, the new multiplier factor γ(t_k − 1)/t_{k+1} not only keeps the feature that the IFISTA-step's contribution increases with the number of iterations, but also ensures that it is less than the TV-step. Specifically, to implement idea 1) above, at the divergence turning point where the IFISTA-step's contribution reaches or exceeds that of the TV-step, we reduce the multiplier in time through the new parameter γ to avoid iterative divergence (pseudo-code line 15). To implement idea 2) above, it is important to note that, because the POCS-TV algorithm guarantees convergence, as long as the gain of the IFISTA-step's contribution between two adjacent iterations is less than that of the POCS-TV process, the POCS-TV-IFISTA algorithm as a whole still tends to converge. So γ can be appropriately increased to accelerate the convergence rate (pseudo-code lines 19-20), and vice versa (pseudo-code lines 21-22). After a series of experiments, we find that when the multiplier used to change γ is related to the relative variation of the POCS-TV process between two adjacent iterations, the algorithm has better robustness. As for the parameter selection of the IFISTA acceleration method: we set the initial value of γ as 1.0, and we also suggest that the two control constants, here denoted a and b, be taken as small positive values. Only a small a can reduce the IFISTA-step's contribution to an acceptable level in time at the divergence turning point, and a small b can ensure that the change of γ is more stable. In this paper, after a lot of experiments, we set a = 0.1 and b = 0.09.

Numerical experiment

In this section, we study the performance of ART, POCS-TV, and POCS-TV-IFISTA by numerical experiments. Without loss of generality, we choose a fan-beam configuration to obtain the projection data. In order to simulate the sparse sampling situation, the Shepp-Logan head phantom is used to collect 20 projections uniformly from 0 to 180 degrees. Both the noise-free projections and the projections with 0.2% Gaussian noise are reconstructed, and the results are shown in Figure 2. The iteration number for all algorithms in this experiment is 200, which ensures that each algorithm reaches convergence.
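Because the paper's symbols for the restriction parameter and control constants were lost in extraction, the following Python sketch is only a plausible reading of the IFISTA update consistent with the description above; in particular, the multiplicative shrink/grow rule for gamma stands in for the exact pseudo-code and is an assumption.

```python
import numpy as np

def ifista_step(x_curr, x_prev, t, gamma, d_tv, a=0.1, b=0.09):
    """One IFISTA momentum update with the contribution restriction.

    d_tv  : norm of the change produced by the preceding TV-step.
    gamma : restriction parameter, initialised to 1.0.
    a, b  : small positive control constants (values from the paper;
            the exact update rule using them is a reconstruction).
    """
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    step = gamma * (t - 1.0) / t_next * (x_curr - x_prev)
    if np.linalg.norm(step) >= d_tv:      # divergence turning point
        gamma *= a                        # shrink the IFISTA contribution in time
        step = gamma * (t - 1.0) / t_next * (x_curr - x_prev)
    else:
        gamma *= 1.0 + b                  # cautiously grow the contribution
    return x_curr + step, t_next, gamma
```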
For POCS-TV-IFISTA, except for the newly introduced parameters that we have explained above, the settings of the other parameters are the same as for POCS-TV-FISTA, displayed in Table 1. For POCS-TV, its parameter settings have also been displayed in Table 1. ART has only one parameter, the relaxation factor, set to 1.0. In Figures 2(a) and 2(d), due to the insufficiency of the projection data, the quality of the image reconstructed by ART is very low, including very serious noise and artifacts. In Figures 2(b) and 2(e), thanks to the use of sparse prior information, the image reconstructed by POCS-TV is of good quality, and even the three tissues at the bottom of the image are clearly visible. From Figures 2(c) and 2(f), it can be determined that the reconstruction quality of POCS-TV-IFISTA is not inferior to POCS-TV. In order to further compare the reconstruction quality, we show the relative error curves of the images obtained with the different reconstruction algorithms at different iteration numbers in Figure 3. As shown in Figure 3, it is clear that the relative error of POCS-TV-IFISTA is less than that of POCS-TV at the same iteration numbers, which illustrates that the IFISTA method can indeed accelerate the convergence of POCS-TV in the case of sparse sampling.

Conclusion

As shown in Figure 3, for sparse-view CT image reconstruction, POCS-TV-IFISTA only needs about 130 iterations to achieve the same reconstruction quality as 200 iterations of POCS-TV, so its convergence rate is about 35% faster than POCS-TV. There is no doubt that the IFISTA acceleration method can serve as a reference for reducing the computational burden and promoting the adoption of iterative reconstruction algorithms.
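For clarity, the quoted speed-up follows directly from the two iteration counts:

\[
\frac{200 - 130}{200} = 0.35 = 35\%.
\]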
On the Atmospheric Correction of Antarctic Airborne Hyperspectral Data

The first airborne hyperspectral campaign in the Antarctic Peninsula region was carried out by the British Antarctic Survey and partners in February 2011. This paper presents an insight into the applicability of currently available radiative transfer modelling and atmospheric correction techniques for processing airborne hyperspectral data in this unique coastal Antarctic environment. Results from the Atmospheric and Topographic Correction version 4 (ATCOR-4) package reveal absolute reflectance values somewhat in line with laboratory measured spectra, with Root Mean Square Error (RMSE) values of 5% in the visible near infrared (0.4–1 μm) and 8% in the shortwave infrared (1–2.5 μm). Residual noise remains present due to the absorption by atmospheric gases and aerosols, but certain parts of the spectrum match laboratory measured features very well. This study demonstrates that commercially available packages for carrying out atmospheric correction are capable of correcting airborne hyperspectral data in the challenging environment present in Antarctica. However, it is anticipated that future results from atmospheric correction could be improved by measuring in situ atmospheric data to generate atmospheric profiles and aerosol models, or with the use of multiple ground targets for calibration and validation.

Introduction

Antarctica is a unique and geographically remote environment. Field campaigns in the region encounter numerous challenges including the harsh polar climate, steep topography, and high infrastructure costs. Additionally, field campaigns are often limited in terms of spatial and temporal resolution, and particularly, the topographical challenges presented in the Antarctic mean that many areas remain inaccessible. For example, despite more than 50 years of geological mapping on the Antarctic Peninsula, there are still large gaps in coverage, owing to the difficulties in undertaking geological mapping in such an environment [1]. Hyperspectral imaging may provide a solution to overcome the difficulties associated with field mapping in the Antarctic.

Hyperspectral sensors acquire data from a contiguous spectrum over a defined wavelength interval, which makes it possible to identify surface materials by their characteristic reflectance or emittance spectrum, and can yield information on features such as abundance and composition, including ion substitution in minerals [2,3]. It is possible to produce maps of mineral composition and abundance from hyperspectral imagery without rigorous ground truth measurements, due to the development of spectral reflectance libraries (e.g., [4]). A variety of software packages are capable of applying advanced image processing algorithms to hyperspectral imagery using such spectral reflectance libraries, thus allowing the end-user to produce mineral maps with relative ease. A comprehensive review of geologic remote sensing, including the use of hyperspectral data, is given by van der Meer et al. [5].
The reflectance spectrum of a material can, in principle, be recovered from the observed radiance spectrum over regions in which the illumination is non-zero [6]. The reflectance spectrum is independent of the illumination and provides the best opportunity to identify materials by comparison with reference libraries [6]. In the case of solar illumination (i.e., irradiance from the sun), many environmental and atmospheric effects complicate the process of deconvolving reflectance spectra from the measured radiance, and complex radiative transfer models are required [6]. These models usually simulate the incoming solar irradiance, subsequent atmospheric effects and the final at-sensor radiance; the effects of the intervening atmosphere on the solar irradiation can then be accounted for and reflectance spectra derived. The accurate removal of atmospheric absorption and scattering is required to produce measures of surface reflectance; a process known as atmospheric correction [7]. Atmospheric correction is a common preprocessing step, and use of an appropriate, thorough correction is of great significance for interpretation of hyperspectral imagery and any subsequent processing such as classification [8]. A variety of tools exist to perform atmospheric correction, with radiative transfer models now mature enough to be used as a routine part of hyperspectral image processing; a comprehensive review of atmospheric correction techniques, including techniques based on radiative transfer, is presented in [7].

In the early 1990s, the Atmosphere Removal Algorithm (ATREM) [9] was developed to employ radiative transfer equations and produce atmospherically corrected spectral data. Since the development of ATREM, several packages have also been developed for atmospherically correcting multi- and hyperspectral data, including High-accuracy ATmospheric Correction for Hyperspectral Data (HATCH) [10], Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) [11,12] and a series of Atmospheric and Topographic Correction (ATCOR) codes [13,14].

The series of ATCOR codes have been continually updated and developed throughout the 1990s and 2000s, with the latest ATCOR-4 release using a large database containing results of radiative transfer calculations based on the MODTRAN-5 [15] radiative transfer model. Additional techniques for correcting adjacency effects, 3D code for correction of topographic effects and bidirectional reflectance distribution function (BRDF) effects, in addition to haze and low cirrus cloud removal, are included [14].

The unique atmospheric conditions present in Antarctica combined with the first known hyperspectral data acquisition afford the opportunity to assess the applicability of standard radiative transfer modelling and atmospheric correction techniques for deriving surface reflectance. Previous studies that have carried out atmospheric correction in Antarctica have used multispectral airborne [16] and multispectral satellite data [1], applying radiative transfer modelling techniques to produce reflectance data. However, atmospheric correction of airborne hyperspectral data has not been investigated (due to the previous unavailability of airborne hyperspectral data). This study presents initial results from an investigation into the applicability of the MODTRAN-5 [15] radiative transfer model and the ATCOR-4 atmospheric correction package [14] for producing atmospherically corrected airborne hyperspectral data in the unique Antarctic environment.
Study Area

Rothera Point (Figure 1) was surveyed in February 2011, using the ITRES (ITRES Research Ltd., 110, 3553-31st Street NW, Calgary, AB, T2L 2K7, Canada) CASI-1500 and SASI-600 instruments acquiring data in the visible near-infrared (VNIR; 0.4–1.0 µm) and shortwave infrared (SWIR; 1–2.5 µm) portions of the electromagnetic spectrum. Sensor information is presented in Table 1. The acquisition system hardware and other equipment were installed into a British Antarctic Survey (BAS) DeHavilland Twin Otter aircraft. The imagers were installed onto a single mounting plate for concurrent imaging. This arrangement allowed for uniform recording of all aspects of aircraft motion relative to the two imagers with respect to the Inertial Measurement Unit (IMU). The Instrument Control Units (ICUs) were installed at the fore section of the aircraft. Six flight lines were required to acquire hyperspectral data of the study area, and during the acquisition of the imagery, three large (6 m × 6 m) calibration targets were placed within the study area: white, grey and black targets, provided by the Natural Environment Research Council (NERC) Field Spectroscopy Facility [17]. The spectral reflectance measurements of these calibration targets were acquired using an Analytical Spectral Devices (ASD) FieldSpec® Pro, which records continuous spectra across the 350–2500 nm wavelength region; the spectral resolution of the instrument was 3 nm at 700 nm, 10 nm at 1400 nm, and 12 nm at 2100 nm. Reflectance spectra were acquired in "White Reference" mode using a white Spectralon® panel as the reference target, measured at a nadir viewing angle with illumination provided by a tungsten halogen lamp at a 45° angle. Figure 2 shows a colour composite image of Rothera Point showing the calibration targets.

Data Preprocessing

Standard preprocessing of hyperspectral data was carried out by ITRES to produce georeferenced and radiometrically corrected imagery. There are two major steps, Radiometric Correction and Geometric Correction, which were both carried out by ITRES' proprietary tools. In the first step, radiometric and spectral calibration coefficients are applied to convert the raw digital numbers into spectral radiance values. Geometric correction utilizes measurements from the IMU and GPS to create a georeferenced mosaic image.

Radiometric Correction

The raw data are digitized at 14-bit resolution and are recorded as digital numbers (DN). The radiometric processing converted these digital numbers into spectral radiance values based upon calibration coefficient files, which were generated during laboratory calibration of the sensors. Due to the extreme operating conditions during acquisition (very cold temperatures), the image sensors were pushed to their limits and some anomalies were apparent in the image data, particularly in the SASI images. The SASI instrument's operating conditions were significantly different to the calibration conditions in the laboratory, hence scaling and spectral resampling adjustments, ranging from −5% to +10%, were made to the calibration files to compensate for these environment effects and minimise the anomalies introduced as a result of the operating conditions.
Geometric Correction

After radiometric correction, the data were geometrically calibrated. The ITRES proprietary geometric correction software utilised the navigation solution, bundle adjustment parameters and Digital Elevation Models (DEMs) to produce georeferenced radiance image files for each flight line. In addition, flight lines were combined into an image mosaic of the area. The nearest neighbour algorithm was used to populate the image pixels so that radiometric integrity of the pixels could be preserved. At the image mosaicking stage, a minimised nadir angle approach was implemented such that the spectra of the pixel with the smallest off-nadir angle from overlapping adjacent flight lines were written to the final mosaic image.

Radiance Offset

The spectral range of the CASI and SASI data (Table 1) has an approximate 100 nm overlap, between 950 nm and 1055.5 nm. Preliminary investigations revealed an offset in radiance values within this overlap range. In the overlap range, CASI radiance values were found to be larger than the corresponding SASI radiance, with a trend of increasing radiance offset with increasing wavelength. This radiance offset is present in the radiometrically calibrated data. Several factors are likely to have produced the radiance offset.

The first and most probable contributing factor is second order light contributions. The CASI sensor's diffraction grating produces a second order diffraction spectrum, whose blue end overlaps with the red-near infrared (NIR) end of the first order spectrum. Illumination conditions at the time of acquisition may have allowed this effect to lead to additive background signal at the red-NIR end. The second contributing factor could be the reduced calibration accuracy in the NIR end of the spectrum, as the CASI sensor is less sensitive at the longest wavelengths.

Thirdly, preliminary investigations also revealed a systematic underestimation of radiance values in the SWIR (from the SASI instrument). This is attributed to the conditions during acquisition. The instruments were operating in an unpressurised aircraft, with temperatures significantly outside the normal operational range; the SASI instrument was as much as 20 °C (68 °F) outside its normal operating range. These conditions meant there was a noticeable degradation in the response of the sensor, and hence the measured at-sensor radiance was lower in the SWIR data.

Atmospheric Correction

The Antarctic has a distinct atmosphere, dominated by cold temperatures and unusual light conditions [18]. The atmosphere has stable stratification in the boundary layer, and is pristine, dry and isolated from the rest of the world's atmosphere by the polar vortex and the Southern Ocean [18]. To produce atmospherically corrected data, ATCOR-4 was used.
The at-sensor signal consists of three main components, as shown in Figure 3: scattered or path radiance L1, reflected radiance from the pixel under consideration L2, and radiation reflected from the neighbourhood into the viewing direction (adjacency effect) L3. Component L2 is the only component that contains information on the surface properties of the pixel under consideration; therefore atmospheric correction aims to remove the L1 and L3 components. The atmospheric correction has to be performed iteratively to derive surface reflectance, ρ, for each pixel in the image data. The implementation is described in detail in Chapters 2 and 10 of [19] and in [14]. The MODTRAN-5 [15] radiative transfer model is used to generate Look-Up Tables (LUTs) that are used by ATCOR-4 to aid in the calculation of the Ln terms and the subsequent derivation of surface reflectance, ρ. The LUTs utilised by ATCOR-4 require significant computational effort to compute and are based on the "Mid-Latitude Summer" (MLS) profile [20]. LUTs are calculated using MODTRAN-5 with the scaled discrete ordinate radiative transfer (DISORT) option in regions where scattering is dominant and the more accurate correlated-k option in regions where absorption is dominant ([19], p. 154). ATCOR-4's LUTs were generated using the MLS profile and fixed water vapour contents of 0.4, 1, 2, 2.9, and 4 g/cm² (rather than the water vapour defined in the standard MLS model [20]). Water vapour is the main parameter that produces differences in radiance values; the other parameters are mostly stable [21]. This can be confirmed by simulating radiance values with each of the atmospheric profiles from [20] and fixed water vapour values in MODTRAN (cf. Figure 4). Full details of the algorithms applied by ATCOR-4 are given in [14,19]. The major processing phases of ATCOR-4 are outlined in Figure 5, "Preprocessing" and "Atmospheric Correction". Standard input parameters (location, date and time) were used (Table 2). The other user-selectable parameters are the visibility, choice of aerosol model and choice of water vapour LUT. The visibility is measured hourly by the meteorologists at Rothera Point and the observation closest to data acquisition time was used. The maritime aerosol, interpolated to the flying height, was selected. Coincident meteorology data measured from a radiosonde launch at Rothera Point are shown in Figure 6, indicating the atmospheric conditions close to the time of image acquisition. Additionally, as implied by the measured visibility (60 km), the operators' notes indicated that the "flight conditions were clear and calm, with blue skies and a few scattered clouds". With regards to water vapour, the LUT with a water vapour value of 2.0 g/cm² was selected. However, the choice of water vapour value is not significant because during processing water vapour is recalculated per pixel; both the CASI-1500 and SASI-600 sensors have bands that lie within water vapour regions, and thus the water vapour can be calculated from the image data (see Chapter 10.4.3 in [19] for further details on the water vapour retrieval algorithm).
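To make the inversion concrete, here is a heavily simplified Python sketch of the kind of per-pixel calculation such a correction performs, assuming a Lambertian surface and flat terrain and ignoring the adjacency term L3 and ATCOR-4's iterative refinement; all names are illustrative, not the package's API.

```python
import numpy as np

def surface_reflectance(L_sensor, L_path, E_ground, tau_up, d=1.0):
    """First-pass surface reflectance from at-sensor radiance.

    L_sensor : at-sensor radiance for the pixel (per band)
    L_path   : path radiance L1 taken from the radiative transfer LUT
    E_ground : global (direct + diffuse) solar flux on the ground
    tau_up   : ground-to-sensor atmospheric transmittance
    d        : Earth-Sun distance in astronomical units
    """
    return np.pi * d**2 * (L_sensor - L_path) / (tau_up * E_ground)
```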
Results

The atmospheric correction results for each of the calibrated targets are presented in Figure 7. For the CASI data, there is a systematic overestimation in reflectance data for the grey and black targets. Between ∼0.86 µm and 1.1 µm there is a noticeable increase in reflectance (A) for the CASI data. This significant increase at the red-NIR end of the spectral range is likely a result of the second order light contribution from the blue end of the spectrum, resulting in additive background signal in the red-NIR end, therefore causing an increase in reflectance values. The SASI data show a large systematic underestimation for the white target, but less so for the grey and black targets, where results are close to or within the ±2% error margins. Numerous artefacts remain in the CASI and SASI data, most likely as a result of atmospheric gases and aerosols. The residual effects of water vapour (H₂O) are most noticeable; interpolation is carried out during the ATCOR-4 processing chain across areas of H₂O absorption from 1.0 µm to 1.2 µm (B), leaving a peak at 1.25 µm (C) and a secondary area of interpolation from ∼1.3 µm to 1.5 µm (D). A double peak is present at 1.6 µm due to CO₂ absorption (E). The absence of data between 1.7 µm and 2.1 µm (G) is due to the absorption by H₂O and CO₂, which reduces radiance transmission to almost zero in this portion of the spectrum. There are also some other minor artefacts, such as a double peak at 0.5 µm as a result of O₃ and a small peak at ∼0.85 µm, likely a result of O₂. In spite of these imperfections, two closely matched absorption features in the targets are found, the first at ∼1.65 µm (F) and the second at ∼2.2 µm (H).

Accuracies, Errors and Uncertainties

As discussed in Section 2, the instruments used for data acquisition were flown in an unpressurised BAS DeHavilland Twin Otter aircraft. This meant the instruments were subject to extreme changes in temperature between data acquisition and storage of the instruments in between flights, along with very cold operating conditions during data acquisition itself (up to 20 °C (68 °F) outside of the instrument's normal operating range). It was identified during the preprocessing of the data (Section 3.1) that the SASI instrument particularly suffered as a result of the heating and cooling cycles and the cold operating conditions it underwent during the data collection campaign in the Antarctic. As a result, during the radiometric correction of the SASI data (Section 3.1.1), larger adjustments were made to the calibration parameters to correct for the operating conditions in the Antarctic. This introduced some uncertainty in the data, which was observed in the raw data (Section 3.1.3) and is manifested in the atmospheric correction results (Section 4 and Figure 7), where the SASI data show a systematic underestimation in reflectance values. Following atmospheric correction, Root Mean Square Error (RMSE) values were calculated and are annotated on Figure 7.
RMSE values were calculated from the ATCOR-4 results with respect to the laboratory measured spectra using Equation (1):

RMSE = √( (1/n) Σ_{i=1}^{n} (Ŷᵢ − Yᵢ)² )   (1)

where Ŷᵢ represents the ith predicted reflectance value (as calculated by ATCOR-4) and Yᵢ represents the ith laboratory measured reflectance value, with n = 72 for CASI and n = 71 for SASI. Whilst the SASI sensor measures 100 bands (Table 1), the actual number of usable bands is reduced to 71 (hence n = 71), following the removal of the bands between 1.7 µm and 2.1 µm (G), which are severely affected by the absorption by H₂O and CO₂.

The atmospheric correction approach presented here is subject to uncertainties introduced through the application of the MODTRAN-5 standard atmospheric profiles and aerosol models [20,22]. Namely, these climatologically developed profiles are assumed to represent the true atmospheric conditions at the time of data acquisition. Radiosonde measurements were acquired from the same day (Figure 6), but there was a 4 hour difference between the radiosonde launch (11:38 UTC) and data acquisition (∼15:30 UTC). The variability and local scale differences in the atmosphere (e.g., [23]) mean that, even over this relatively short time scale, applying the measured atmospheric parameters from the radiosonde launch would not necessarily be any more valid than applying the atmospheric profile from the MODTRAN-5 model; both methods are applying a profile with the assumption that it represents the true atmosphere at the time of data acquisition, thereby introducing uncertainty in the results.

These uncertainties could be removed by measuring the actual in situ atmospheric conditions using other instruments simultaneously whilst acquiring the image data. However, in this study, as is the case in most other studies applying similar techniques, simultaneous atmospheric measurements are unavailable. Despite making assumptions about atmospheric profiles and introducing uncertainties, the radiative transfer model and atmospheric correction approach has been applied successfully. As long as appropriate error metrics are calculated (e.g., RMSE) and the data are carefully applied in additional processing (e.g., spectral mapping), these uncertainties can be managed and minimised throughout the entire processing chain.
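A direct transcription of Equation (1), assuming the ATCOR-4 and laboratory spectra have already been resampled onto the same n usable bands:

```python
import numpy as np

def rmse(predicted, measured):
    """Root Mean Square Error between predicted (ATCOR-4) and laboratory
    reflectance spectra, as in Equation (1); inputs are 1-D arrays."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.sqrt(np.mean((predicted - measured) ** 2))
```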
Following the application of the atmospheric correction processing chain, the results showed that workable reflectance data is obtainable.This is obtainable in spite of limited concurrent atmospheric and aerosol measurements combined with assumptions about aerosol model parameters (for example, the maritime aerosol model was selected based on qualitative interpretations of the Antarctic environment), which is an often typical scenario.As there are no aerosol measurements collected at Rothera, the maritime aerosol model [22] was selected based on the qualitative assessment of the atmospheric conditions and the assumption of a dominance of sea salt aerosols in the coastal Antarctic environment (e.g., compare [24]).For the lower reflectance targets (<20%), the results from the both VNIR (CASI-1500) and SWIR (SASI-600) sensors fell within the expected ±2% margins; however, this is likely an artefact of the low signal-to-noise ratio (<15:1) for low reflectance targets.The higher reflectance target (the white target) shows clear discrepancies, with absolute reflectance values differing by as much as 30%.The white target is perhaps more representative of the overall performance of the atmospheric correction due to its higher signal-to-noise ratio. Despite the discrepancies between absolute reflectance values, absorption features for the white target (e.g., 1.65 µm and 2.25 µm) are clearly discerned; similar absorption features in the lower reflectance (grey and black) targets also correlate well between the laboratory-measured and atmospherically corrected reflectance data.There is still residual noise manifested as small peaks and spikes in the reflectance data, which could complicate post-processing procedures, particularly those that rely on relative differences between peaks and troughs in spectra.The residual noise manifested in peaks and spikes are most likely due to the unavailability of an Antarctic-specific atmospheric profile and aerosol model, due the lack of adequate in situ measurements.Additionally, portions of the spectrum that are strongly affected by water vapour (e.g., the interpolation from 1.3 µm to 1.5 µm, and the lack of data between 1.7 µm and 2.1 µm) prove difficult to characterise; a finding that supports the conclusions of Zibordi and Maracci [16] who noted that uncertainties in calculating water vapour optical thickness could lead to "very significant error" ( [16], p. 20). These results suggest that commercially available atmospheric correction packages are flexible enough to produce working reflectance data in Antarctica.Performance is poorer with higher reflectance targets, though results fall within the expected error margins for lower reflectance targets.The ability to discriminate absorption features suggests that the atmospheric correction process would produce reflectance data capable of being applied in mapping techniques using absorption features (e.g., continuum removal).Particular care would have to be given when working with absolute reflectance values. 
It is recommended that, given the availability of a greater number of ground targets (>3), a hybrid approach of radiative transfer modelling followed by the Empirical Line Method (ELM) [25] be applied for potentially improved results; for example, Tuominen and Lipping [26] reported a reduction in Root Mean Square Error (RMSE) from 6.8% to 1.8% when combining the hybrid approach of radiative transfer modelling through ATCOR-4 and the ELM, compared with radiative transfer modelling alone. Therefore, following the conclusions of Tuominen and Lipping [26], it can be seen that even in complex atmospheres where model-based correction methods may struggle, more accurate results can be produced using combined correction methods compared with model- or empirical-based methods alone. It was also noted that even in situations where there is a limited number of spectral ground truth measurements, a hybrid approach can improve atmospheric correction accuracy over the whole acquisition area [26].

This approach was successfully applied to Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) VNIR/SWIR data by Haselwimmer et al. [1], who utilised the hybrid approach combining the FLAASH radiative transfer model [11,12] and an empirical correction. Haselwimmer et al. [1] utilised spectral measurements of the runway at Rothera Point, assuming that the runway is a Pseudo-Invariant Feature (PIF) [27,28]. PIFs are large uniform targets whose spectral reflectance is assumed not to have changed over time [28]. In cases where appropriate PIFs have been identified and measured, they can provide a suitable ground truth feature for calibration or validation. PIFs can be used either as a substitute for or in conjunction with calibrated targets (such as those used in this study) to provide enough targets (>3) for the hybrid approach of radiative transfer modelling followed by empirical line correction. The identification of suitable PIFs in the Antarctic generally, and particularly in the regions where the airborne hyperspectral data were acquired, remains an area of on-going investigation. If a suitable number of targets are identified, the need for deploying calibrated targets during image acquisition may be negated, as PIFs may allow for both calibration and validation of atmospheric correction methods.

It must also be noted that sensor calibration still remains challenging in this environment and these issues are manifested in the subsequent atmospheric correction process. Particularly notable effects of this can be observed in Figure 7, such as the offset between VNIR and SWIR (CASI and SASI) sensor values in the overlapping region (0.95 µm to 1.05 µm), as well as the significant underestimation of reflectance for the white target in the SWIR (SASI) data.

Future studies should consider the influence of radiative transfer models' standard atmospheric profiles [20] and aerosol model types [22], with a view to measuring in situ atmospheric data while simultaneously acquiring hyperspectral data; this would aid in the generation of atmospheric profiles and aerosol models that serve as inputs during the atmospheric correction process and reduce the level of uncertainty when assumed profiles are used. Such atmospheric data could also lead to the development of a generic "Antarctic" atmospheric profile and aerosol model, which may prove useful for future data acquisition (where measuring in situ atmospheric data is not possible).
Conclusions

This study has presented results from atmospheric correction of airborne hyperspectral data in Antarctica. The findings are significant as they represent (a) the first known acquisition and preprocessing of airborne hyperspectral data in Antarctica, and (b) the first assessment of atmospheric correction techniques applied to airborne hyperspectral data in Antarctica. The atmospheric correction technique utilised a radiative transfer model (MODTRAN-5) [15] in the Atmospheric and Topographic Correction version 4 package (ATCOR-4) [14].

Two sensors, imaging the visible near-infrared (VNIR; 0.4–1.0 µm) and shortwave infrared (SWIR; 1–2.5 µm), were deployed during the data acquisition. During the radiometric correction (preprocessing) of the data it was found that, as a result of the extreme temperature variations during the data collection, the SWIR sensor had decreased sensitivity, resulting in lower measured radiance values and a systematic underestimation in reflectance values following atmospheric correction. The results from atmospheric correction revealed that obtaining surface reflectance of airborne hyperspectral data in the Antarctic is possible without in situ measurements of atmospheric parameters; reflectance data had maximal Root Mean Square Error (RMSE) values of 5% in the VNIR and 8% in the SWIR. However, residual noise remains present in the reflectance data as a result of using standard atmospheric profiles and aerosol models during the atmospheric correction process.

For future campaigns in Antarctica, it is recommended that instruments be sufficiently tested and calibrated to operate successfully in cold environments, with particular attention given to imagers operating in the SWIR. During acquisition it is recommended that (a) in situ atmospheric data be measured simultaneously whilst acquiring hyperspectral data to produce robust atmospheric and aerosol profiles that can be applied during the atmospheric correction process, and (b) ground truth data, such as calibrated targets or pseudo-invariant features, be present to allow for the validation of atmospheric correction results, as well as calibration of reflectance data using empirical correction techniques (if a sufficient number of ground targets allow, i.e., >3).

Figure 1. Location maps showing the context within Antarctica (A); the location of Adelaide Island within the Antarctic Peninsula (B) and the location of Rothera Point in the context of Adelaide Island (C; black dot).

Figure 2. CASI colour composite image mosaic of Rothera Point following radiometric and geometric correction, with inset showing the three calibration targets. Bands shown: Red: 650.2 nm, Green: 554.6 nm, Blue: 439.6 nm.

Figure 3. Schematic of the three solar radiation components in flat terrain and the pixel under consideration (ρ). Scattered or path radiance L1, reflected radiance L2, and radiation reflected from the local neighbourhood (adjacency effect) L3.

Figure 7. Atmospheric correction results for the VNIR (CASI; blue) and SWIR (SASI; red) data (±2% error estimates are shaded grey) and laboratory spectra (LAB; black), for the three calibrated targets: white (1); grey (2); and black (3). Labels are discussed in the text. Root Mean Square Error (RMSE) values are shown for each target.
Diverticulitis of the appendix—case report and literature review

ABSTRACT Appendiceal diverticulitis is a rare diagnosis most often mistaken for acute appendicitis. A 72-year-old man presented with a transfixing abdominal pain of 48 hours' duration. Appendicitis was diagnosed on computed tomography scan, but a neoplasm could not be excluded. A laparoscopic hemicolectomy was performed after a surgical consensus, considering the neoplastic appearance of the lesion and its anatomical features. Histopathology finally revealed an appendiceal diverticulitis. Appendiceal diverticulum is a rare condition. Most will lead to an appendiceal diverticulitis, which presents similarly to appendicitis. The perforation rate and mortality rate are much higher in appendiceal diverticulitis than in appendicitis. Furthermore, appendiceal diverticular disease is strongly associated with neoplasms, especially mucinous neoplasms and thus pseudomyxoma peritonei. Considering the high complication rate and malignant association, an appendicectomy in case of an appendiceal diverticulitis or of an incidental finding of appendiceal diverticulosis should be recommended to the patient.

INTRODUCTION

Diverticulitis of the appendix is a rare diagnosis most often mistaken for acute appendicitis due to its similar presentation. In fewer cases, diverticulitis of the appendix can be interpreted as a neoplastic lesion of the appendix. We hereby present such a case with a literature review of the subject.

CASE REPORT

A 72-year-old man was referred to the emergency room (ER) by his physician for a 48-hour abdominal pain. The pain was located in the left upper quadrant and was characterized as initially transfixing, but had mostly subsided by the time the patient consulted at the ER. The patient's medical record was significant for iatrogenic hypothyroidism, a right inguinal herniorrhaphy and two negative colonoscopies. The physical examination showed a soft abdomen and no fever. The total white blood cell count was 13.0 × 10⁹. A computed tomography (CT) scan was performed and revealed an enlarged, fluid-filled appendix with surrounding fat stranding compatible with an acute appendicitis, although a mucocele could not be excluded (see Fig. 1). A diagnostic laparoscopy was performed and revealed a whitish granulomatous appendix and a thickened caecum with chronic-like peritoneal adherences. Because of the neoplastic suspicion and anatomic features, a right hemicolectomy with intracorporeal anastomosis was performed after reaching a surgical consensus. The pathology report confirmed a secondary appendicitis on multiple inflamed appendiceal diverticula (see Fig. 2). No neoplasm was identified. The patient was discharged without complication.

DISCUSSION

Appendiceal diverticula are classified as congenital or acquired. Congenital diverticula are true diverticula and are exceptional, as only approximately 50 cases have been reported [1]. Congenital diverticula are found on the antimesenteric line of the appendix and are thought to be associated with developmental or congenital anomalies such as trisomy 13 or 15 [2]. Acquired diverticula are pseudodiverticula and are an uncommon finding, with a prevalence of 1.4% according to a study of 50,000 autopsies [2]. They are most commonly seen in older adults (>30 years old) and in men (M:F ratio 1.8:1) [3]. Incidence is higher in patients with cystic fibrosis, reaching 14% [2].
They usually present as multiple small diverticula (2–5 mm) on the distal third of the appendix and are mostly found on its mesenteric line. This supports the theory that they are caused by an elevated pressure in the appendix resulting in herniations of the mucosa through the vascular hiatuses of the appendix. The pressure rises from an obstruction caused by a fecalith, an adhesion or a tumor. A chronic obstruction results in an appendiceal diverticulitis [2,4]. Furthermore, the active submucosal lymphoid tissues make the appendix prone to inflammation episodes, thereby weakening its muscle layer [2]. These episodes could explain why there is no association between appendiceal diverticulosis and sigmoid diverticulosis [3]. Appendiceal diverticulosis is commonly asymptomatic. It is estimated that two-thirds will evolve into an acute or a chronic diverticulitis. Thus, the presentation varies from a mild right iliac fossa pain extending over the years to an episodic intense pain. In the case of an acute presentation, the pain does not usually start around the periumbilical region and is not as commonly associated with gastrointestinal symptoms as appendicitis. Fever and leukocytosis are common findings [2]. Appendiceal diverticulitis is usually a perioperative or a pathologic diagnosis. CT can be useful to distinguish appendicitis from appendiceal diverticulitis but has a 50% false positive rate [2]. It is still a rare radiological finding, as less than 7% of appendiceal diverticulitis cases are diagnosed on CT [3]. The Lipton classification divides appendiceal diverticulosis and diverticulitis into four subtypes [5]: 1. Acute diverticulitis; 2. Appendicitis with acute diverticulitis; 3. Appendicitis with diverticulum; 4. Appendix with diverticulum. The most frequent complication of appendiceal diverticular disease is perforation. The perforation rate of appendicitis is estimated at 6.6%, while it rises to 27% when associated with a diverticulum. An acute diverticulitis has a perforation rate of 66%, explaining the 30-fold higher mortality of appendiceal diverticulitis compared to appendicitis. Other reported complications are abscess or cyst formation, peritonitis, massive hemorrhage and vesicoappendiceal fistula. Congenital appendiceal diverticula are less associated with complications [2]. Appendiceal diverticular disease is associated with appendiceal neoplasms. The incidence of appendiceal neoplasms is 1.28% and rises to 26.94% in the presence of diverticula. More than half of the neoplasms are low grade mucinous neoplasms (LGMN). It is unclear whether the LGMN causes the elevated pressure resulting in diverticula or whether it develops within the diverticula and weakens their walls. Because of its strong association with LGMN, appendiceal diverticular disease is thought to be associated with pseudomyxoma peritonei [2,3]. The presence of serosal and/or mesoappendiceal mucin should raise concerns of a neoplastic process [6].

CONCLUSION

Appendiceal diverticular disease is an uncommon diagnosis that can easily be overlooked. Because of its strong association with appendiceal neoplasms, a thorough examination of the appendix should be done after each appendicectomy, as well as a careful inspection of the abdominal cavity. Considering the high complication rate, mortality rate and malignant association, an urgent appendicectomy in case of an appendiceal diverticulitis and an elective appendicectomy in case of an incidental finding of appendiceal diverticulosis should be recommended to the patient.
If an appendicectomy cannot be done safely, a laparoscopic right hemicolectomy may be considered after discussion with the patient.

AUTHORS' CONTRIBUTIONS

Kristopher Bujold-Pitre is a medical student and wrote the paper. Dr Olivier Mailloux is the Chief of Surgery at CISSS Côte-Nord and a clinical instructor at Université Laval. He was the operating surgeon and reviewed the paper.
INTERNET BANKING AND ITS IMPACT ON THE SERVICE QUALITY OF BANKS IN PUNJAB

Changes in the IT sector constantly influence the performance of the banking sector across the world. The emergence of internet banking has changed the way banks offer products and services to their customers. In order to survive in a rapidly changing technological environment, banks are required to adapt to such changes and to maintain and improve the services which they offer to their customers in order to attain customer satisfaction. The term quality now covers not only products but also services. This paper deals with internet banking operations and how they affect the service quality of banks in Punjab. The research is largely qualitative in nature, but a quantitative approach is also used to substantiate facts and figures. The research is descriptive as well as explanatory. In order to arrive at the sample size, a non-probability sampling method has been used. For the primary data collection, a structured questionnaire was used to record the responses of the respondents. Secondary data were collected from annual reports and other published literature of the banks. In order to test the impact of internet banking on the service quality of banks, a seven-dimension service quality model is used, comprising reliability, assurance, responsiveness, empathy, tangibility, security and communication. Within these seven dimensions, 37 variables are covered. For the data analysis, the statistical package SPSS 20 is used, and descriptive statistics are used to analyse the data. The research shows that all the dimensions included in the study have a positive impact on the service quality of banks providing internet banking services to their customers in Punjab. Recommendations for improving service quality and customer satisfaction are also discussed.

INTRODUCTION:

The customer is the king of the market because, in the modern era, the customer is rational. A customer has become very particular about the quality of the products and services which he purchases or intends to purchase from the market. The growing awareness of customers has made them more quality conscious. Due to this, each and every organization is required to provide high-quality products and services which meet the customers' expectations. As far as the banking industry is concerned, it belongs to the service sector, and there is considerable pressure on the banking industry to provide high-quality services to its customers. This obligation is the same for public sector banks, private sector banks and foreign banks if they want to survive in the market. Quality is a dynamic state associated with products, services, people, processes and the environment that meets or exceeds customers' expectations, needs or desires (De Jager H. J., Nieuwenhuis, 2005). The difference between customers' perceived service and their expected service is defined as service quality (Parasuraman, Zeithaml et al., 1988). In the banking industry this concept is gaining popularity because competitive products and services are now offered by different banks in order to retain existing customers and to enhance the customer base. The service quality of banks has also been greatly affected by the introduction of internet banking services.
Banks are now trying to increase their customer base by delivering banking services with the help of the internet rather than by means of other distribution media. But while delivering services with the help of the internet, the banking sector also faces several challenges, as banks have to design and provide their internet-based services in line with the service expectations of their customers. Today, customers want to access internet banking services because they are convenient and time saving. But this is not the end: customers' expectations also involve secure transactional websites, easily navigable websites, protection of online personal information, diversification of internet-based services, credibility, access to a variety of services, and communication by the bank to the customer after internet banking services have been used, such as an SMS alert service (Hassan, 2012).

REVIEW OF LITERATURE

The various empirical studies undertaken by researchers on the impact of internet banking on the service quality of banks were summarised in a table listing, for each study, the country and sample size and the outcomes of the study; among the reported outcomes, customer value perceptions depend on cost-benefit analysis, competition and customers' expectations.

OBJECTIVES OF THE STUDY

The study has been undertaken with the following objectives:
- To study the impact of internet banking on the service quality of banks in Punjab.
- To make suggestions for improving the quality of services of banks working in Punjab.
- To identify the degree of importance attached to the various dimensions of service quality under study, and to their variables, viz. reliability, assurance, responsiveness, empathy, tangibility, security and communication.

HYPOTHESIS DEVELOPMENT

The following null hypotheses have been framed for the research:
- H0: Internet banking has no significant impact on the service quality of banks in Punjab.
- H0: There is no significant difference between customer perceptions and expectations regarding the service quality dimensions under study.

LIMITATIONS OF THE STUDY

The present study is based upon the results of a survey conducted on only 53 respondents. The results of the study are subject to the limitations of sample size, regional territory, and the psychological, financial and emotional characteristics of the surveyed population. Due to such limitations, the study cannot be generalized.

Data Collection

This research is based upon primary as well as secondary data. In order to arrive at the sample size, a non-probability sampling method has been used. For the primary data collection, a structured, pre-tested questionnaire was used to record the responses of the respondents. Data were collected from 50 respondents for the purpose of determining the impact of internet banking on the service quality of banks in Punjab. All items were measured by responses on a Five-Point Likert Scale indicating agreement/relevance with statements, ranging from 1 = Strongly agree/Completely relevant to 5 = Strongly disagree/Completely irrelevant. The sources used for secondary data collection include research papers, articles, websites of banks, and data published by the RBI.

Data Analysis

The reliability of the data has been tested through Cronbach's alpha, and the data have further been analysed through descriptive statistics. The analysis of the primary data was carried out using the Statistical Package for the Social Sciences (SPSS) 20.
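Although the paper uses SPSS 20, the reliability statistic it reports is easy to reproduce; the following Python sketch (a stand-in for the SPSS procedure, with an illustrative function name) computes Cronbach's alpha from a respondents-by-items matrix of Likert scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_respondents x n_items) matrix of
    Likert-scale responses, e.g. 50 respondents x 37 variables."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```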
Table 8 shows that reliability is an effective dimension for studying the impact of internet banking on the service quality of banks in Punjab. Ten variables are included in this dimension to study the responses of internet banking users. Among these, the last variable has the maximum mean of 4.520, which shows that users are receiving exactly the services their banks promised them. In the table the variables are shown in ascending order of mean, so that the variable contributing most to this dimension appears last and the variable with the smallest contribution appears first. The variable with the least favourable response relates to the fact that banks are not fully able to perform services accurately at the user's very first attempt. Even so, the variables studied under this dimension gave positive results, showing that reliability has a significant impact on the service quality of banks offering internet banking to their customers (valid N = 50). Table 9 shows that users agreed that assurance is a factor that affects the service quality of banks. The variable with the highest mean in this dimension, 4.420, indicates that banks make every possible effort to make their customers feel safe in their internet transactions. Five variables are covered in this dimension. This is the dimension containing the lowest-mean variable of all the dimensions: the variable at the top of the table has a mean of 3.96, indicating that the fewest respondents agreed with the statement that bank employees are polite towards them. The respondents confirmed that employees are competent to answer customers' questions, instil confidence in them, and provide feedback when required (valid N = 50). Table 10 shows that six variables are included in the responsiveness dimension to study the behaviour of internet banking users. The variable with the highest mean, 4.18, indicates that most respondents agreed that the websites of their respective banks contain answers to frequently asked questions (FAQs). The lowest mean value in this dimension, 4.10, shows that respondents responded positively to the statement that they are kept informed by their banks about when services will be performed. It is also worth highlighting that this is the dimension whose highest-mean variable is the lowest among the highest means of all seven dimensions (valid N = 50). Table 11 deals with the empathy dimension and its four variables. The variable stating that employees deal with customers in a caring manner has the highest mean, 4.24, and the variable stating that individual attention is given to customers has the lowest mean, 4.08, in this dimension. The other findings for this dimension are that bank employees have the customers' best interests at heart and understand customers' problems (valid N = 50).
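Tables 8–11 all use the same layout: per-item means sorted in ascending order. A minimal pandas sketch of that tabulation is given below; the item names and responses are hypothetical, and the responses are assumed to be coded so that higher values mean stronger agreement (the reported means of roughly 4.0–4.5 suggest the scale was reverse-coded relative to the 1 = strongly agree labelling given under Data Collection).

```python
import pandas as pd

# Hypothetical item-level responses for one dimension, coded so that
# higher values indicate stronger agreement.
responses = pd.DataFrame({
    "accurate_first_time": [4, 3, 4, 4, 3],
    "services_as_promised": [5, 5, 4, 5, 4],
    "error_free_records":   [4, 4, 5, 4, 4],
})

# Mean per item, sorted ascending - the layout used in Tables 8-11,
# where the weakest item appears first and the strongest last.
item_means = responses.mean().sort_values()
print(item_means.round(3))
```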
Table 12 shows that tangibility also affects the service quality of banks. Most respondents agree that bank location and banking hours are important factors affecting the service quality of banks. Internet banking users also confirm that banks provide visually appealing material and facilities associated with the service. The neat and clean appearance of employees and modern-looking equipment also have a positive impact on the service quality of banks offering internet banking to their customers (valid N = 50). Table 13 shows that security has a positive impact on the service quality of banks providing internet banking services. The last variable of the table, with the highest mean value of 4.280, shows that customers are confident that the services provided to them were performed in a secure manner. Respondents also confirm that all their files and banking records relating to transactions are kept safely by the banks, and customers are confident about the management of their personal information held by the bank. RESULTS AND DISCUSSIONS 1. Most users of internet banking fall within the age group of 20–40. 2. The results show that the income of the respondents is independent of their usage of internet banking. 27. More than 80% of respondents agree that their bank has modern-looking equipment and visually appealing facilities. 28. 90% of respondents agree that bank employees have a neat, professional appearance. 29. Most respondents are satisfied with the business hours (24/7 in the case of internet banking) and with the bank location, and they also agree that there is visually appealing material associated with the service. 30. As far as the security factor is concerned, 44% of respondents strongly agree and 40% agree that they are confident that services were provided in a secure manner, while 16% are neutral. 31. 80% of respondents feel secure and confident about the management of customers' personal information held by the bank; 20% are neither satisfied nor dissatisfied. 32. In total, 88% of respondents are satisfied that all customer files regarding banking records and transactions are kept safely; 16% are neutral and 2% are dissatisfied. 33. Most respondents agree that the bank offers multiple languages online and ensures that customers are informed online in a language they can understand. 34. 72% of respondents strongly agree and 28% agree that internet banking makes communication easier and work more efficient. CONCLUDING REMARKS The research shows that all the dimensions included in the study have a positive impact on the service quality of banks providing internet banking services to their customers in Punjab. Communication has a very strong role in improving the service quality of banks providing internet banking services. In order of importance, communication ranks first, followed by reliability, tangibility, assurance, security, empathy and responsiveness. The study concludes that, contrary to earlier research, security is not a dimension that negatively affects the service quality of banks. According to this research, tangibility and assurance are the factors with the least positive impact on the service quality of banks providing internet banking services to their customers.
The gap prevailing between the perceived quality and the expected quality of the tangibility and assurance dimensions can be minimized to some extent if banks ensure that their employees are always polite to customers, and banks need to implement this from the grass-roots level of their organisation structure. Banks also need to provide material on their websites in the form of guidance tips that will help users enjoy internet banking services without any difficulty.
Patient Perspectives on Telepsychiatry on the Inpatient Psychiatric Unit During the COVID-19 Pandemic Hospitals have eliminated many in-person interactions and established new protocols to stem the spread of COVID-19. Inpatient psychiatric units face unique challenges, as patients cannot be isolated in their rooms and are at times unable to practice social distancing measures. Many institutions have experimented with providing some psychiatric services remotely to reduce the number of people physically present on the wards and decrease the risk of disease transmission. This case report presents 2 patient perspectives on receiving psychiatric care via videoconferencing while on the inpatient unit of a large academic tertiary care hospital. One patient identified some benefits to virtual treatment while the second found the experience impersonal; both were satisfied with the overall quality of care they received and were stable 2 weeks after discharge. These cases demonstrate that effective care can be provided remotely even to severely ill psychiatric patients who require hospitalization. Introduction Hospitals around the country are working to limit the number of in-person contacts between patients and providers to prevent the further spread of COVID-19. Inpatient psychiatric units face unique challenges when implementing mitigation strategies: many patients cannot be isolated in their rooms for safety reasons while others cannot reliably wear safety equipment or follow social distancing precautions due to their psychiatric illness. To decrease exposures and the risk of disease transmission, many inpatient units are adjusting staffing levels and considering alternative care delivery models to protect patients and providers. Telepsychiatry-the use of phone and videoconferencing technologies to provide psychiatric services remotely-is used in a wide variety of settings, including correctional systems, the Department of Veterans Affairs and private practice (1). A growing body of evidence supports telepsychiatry as a means of providing mental health services, including surveys indicating high patient satisfaction (2,3). Many institutions are now experimenting with new virtual mental health care delivery models (4). However, there are far less data regarding implementation (5) or patient satisfaction (6) when telepsychiatry is utilized on inpatient units. The American Psychiatric Association has lamented the lack of broader adoption of telepsychiatry services for inpatient treatment (7); the coronavirus pandemic has made broader adoption of this method of mental health care delivery a safety necessity. This article attempts to capture the experiences of 2 patients who were under our care on the inpatient unit of a large urban academic tertiary care hospital during the early stages of the outbreak. Both patients were treated initially by psychiatrists in person but were subsequently treated remotely when the institution temporarily transitioned to a telepsychiatry model. Their cases illustrate different degrees of patient satisfaction and demonstrate that some psychiatric treatment can be effectively provided remotely even to the most acutely ill patients. Description For this case report, Ms D and Ms N were interviewed 12 and 17 days, respectively, after they were discharged from the hospital. Both patients consented to have their interviews recorded and included in this report, provided their details were anonymized. 
A transcript of these interviews, edited for length and clarity, can be found in the online supplement. Ms D Ms D is a 26-year-old recently engaged college graduate who had previously been diagnosed with bipolar disorder and was brought to the hospital by her family for erratic, bizarre behavior-she became convinced that her fiancé was stealing from her, thought her mother's eyes had turned black, and briefly ran away from home. She had recently been hospitalized at another psychiatric facility for 7 days, but within 24 hours of discharge her symptoms returned. At the urging of her family, she presented to our hospital for further treatment. Over the course of admission, she was treated with risperidone, which was increased to 4 mg nightly, and clonazepam 0.5 mg, which was stopped prior to discharge. With these medication changes, Ms D's sleep normalized and her paranoia and delusions resolved. Although her thought process became linear, she continued to have deficits in abstract thinking and had difficulty appreciating the nature of her illness. Her diagnosis was amended to schizoaffective disorder as she had experienced delusions for more than 2 weeks in the absence of a prominent mood episode. She was discharged home with plans to enroll in a virtual intensive outpatient program. Ms N Ms N is a 33-year-old married nurse with recurrent major depressive disorder and borderline personality disorder who was referred to the hospital from her partial hospitalization program for depressed mood and suicidal thoughts. Ms N had been hospitalized more than a dozen times previously with similar presentations, most recently in December 2019. On admission, she was on a complex regimen that included 8 psychoactive medications. During admission, Ms N was treated with electroconvulsive therapy, and her medication regimen was consolidated to venlafaxine 150 mg nightly, lithium 600 mg nightly, asenapine 20 mg nightly, trazodone 100 mg nightly, and lorazepam 1 mg twice daily. With treatment, Ms N's mood brightened and her suicidal thoughts became less intense. She was discharged to a residential program. Transition to Telepsychiatry On March 23, 2020, the inpatient psychiatrists on our unit began conducting interviews remotely, first over telephone and then via videoconferencing using iPads. Nursing staff remained on the unit to observe patients, dispense medications, and lead group therapy sessions. One psychiatrist remained in the hospital or at the nearby outpatient clinic to perform in-person evaluations when necessary. During the first week of this transition, Ms D and Ms N were treated by an attending and a resident psychiatrist who had admitted them in person. On March 30, these physicians rotated off service, and Ms D and Ms N were treated until discharge by a different attending psychiatrist and resident psychiatrist exclusively via videoconference. Ms D's Perspective on Telepsychiatry When the inpatient unit transitioned to telepsychiatry, Ms D worried that her doctors would be less capable of accurately evaluating her over the phone. She felt that her body language and hand gestures were crucial to understanding her mood and personality, and she feared that with telepsychiatry, physicians would not pick up on this type of communication. When her first treatment team rotated off the service, Ms D did not completely trust that her new team would be able to make an accurate diagnosis or sound treatment decisions based solely on videoconferencing interviews. 
Ms D had the impression that during her virtual interviews she did not have the doctors' full attention. She felt the interactions were rushed, and that time was not left for adequate back-and-forth conversation. Overall, she found the experience dehumanizing. She found that the nurses' physical presence on the unit helped her with this transition and she felt that her ability to work with the in-person staff that remained was an important element of her recovery. Reflecting back on her 2 psychiatric admissions-the first where all treatment was provided in person, the second where some treatment was provided remotely-Ms D felt that her care during her second hospitalization had been better. Despite her criticisms of video conferencing, Ms D felt the psychiatry team had made good decisions regarding her medication regimen and that her hospitalization had changed her life for the better. Ms N's Perspective on Telepsychiatry Ms N did not find the switch to a telepsychiatry model disruptive. As a nurse, she felt the shift was reasonable and would decrease the risk of disease transmission to patients and staff. Even though she thought videoconferencing was less personal, she did not feel it made a significant difference in her care. When her treatment team changed, Ms N found working with her new doctors challenging, but she attributed this to personal struggles and difficulty trusting new people rather than to the videoconferencing technology itself. She was comfortable with treatment adjustments made by her new team and assumed that psychiatrists were able to make good decisions by working collaboratively with nursing staff. Ms N was satisfied with telepsychiatry on the inpatient unit and noted some benefits of remote treatment. Specifically, she reported that when multiple psychiatrists had physically entered her room as a team in the past, she found the experience intimidating. Telepsychiatry was less uncomfortable. Lessons Learned Two patients on an inpatient psychiatric service were effectively treated when psychopharmacological management was provided remotely. Some patients may find videoconferencing interviews impersonal, while others may find it less intimidating than in-person evaluations performed by large teams. Providers should consider spending additional time during remote interviews, as patients may experience videoconferencing as abbreviated compared to those performed in person. Conclusion In this case report, we presented 2 patients' impressions of having received portions of their treatment remotely while hospitalized on an inpatient psychiatric unit. Neither experienced setbacks during hospitalization, indicating that virtual follow-up did not adversely impact their treatment. Although these cases support the hypothesis that telepsychiatry can be safely utilized in this setting, our results should be interpreted with caution. Ms D's symptoms had markedly improved prior to the transition and we likely would not have attempted to use videoconferencing during her initial evaluation when she was more floridly psychotic. When patients were disorganized, agitated, and aggressive, psychiatric evaluations were still performed in person. Both patients found virtual visits acceptable in part because they continued to have positive face-to-face interactions with nurses who remained on the inpatient unit; without highquality in-person nursing care, the results may have been different. 
Finally, the data presented are from 2 patients; many more cases will need to be reported before drawing firm conclusions. Future research should focus on evaluating outcomes in psychotic and nonpsychotic patients to assess and improve their experience with telepsychiatry. Ms N was satisfied with her experience, while Ms D felt that her interactions with doctors over videoconferencing were rushed and less personal than her in-person evaluations. That these 2 patients were effectively and safely treated remotely and were stable 2 weeks after discharge provides reasons for optimism that further refining remote care delivery methods can lead to good outcomes and patient satisfaction in the setting of the current pandemic. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Supplemental Material Supplemental material for this article is available online.
UHPLC–QTOF–MS Metabolic Profiling of Marchantia polymorpha and Evaluation of Its Hepatoprotective Activity Using Paracetamol-Induced Liver Injury in Mice Marchantia species were traditionally used to treat liver failure. Marchantia polymorpha chloroform extract showed marked, dose-dependent hepatoprotective activity against paracetamol-induced extensive liver damage in mice. At a dose of 500 mg/kg (MP-500), it reduced aspartate transaminase by 49.44%, alanine transaminase by 44.11%, and alkaline phosphatase by 24.4%, with a significant elevation in total proteins of 58.69% relative to the diseased group. It produced significant reductions in total bilirubin, total cholesterol, triglycerides, low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), total lipids, and the cholesterol to high-density lipoprotein ratio (CH/HDL) of 53.42, 30.14, 35.02, 45.79, 34.74, 41.45, and 49.52%, respectively, together with a 37.69% increase in HDL relative to the diseased group. It also elevated superoxide dismutase by 28.09% and glutathione peroxidase by 81.83%, and reduced lipid peroxidation by 17.95%, compared with the paracetamol-only treated group. These results were further supported by histopathological examination, which showed normal liver architecture and a normal sinusoidal gap. Metabolic profiling by ultrahigh-performance liquid chromatography coupled with a quadrupole time-of-flight mass spectrometer (UHPLC–QTOF/MS) led to the tentative identification of 28 compounds belonging to phenols, quinolones, phenylpropanoids, acylaminosugars, terpenoids, lipids, and fatty acids, to which the activity was attributed. Four compounds were detected in the negative ionization mode, namely neoacrimarine J, marchantin A, chitobiose, and phellodensin F, while the rest were detected in the positive mode. It can thus be concluded that this plant could serve as a valuable candidate for the treatment of hepatotoxicity, further consolidating its traditional use. ■ INTRODUCTION The liver is the body's largest organ and performs a number of crucial tasks, including the metabolism of proteins, carbohydrates, and fats; detoxification; the production and secretion of several enzymes; and the production of bilirubin. 1,2 Numerous infections, medications, long-term diabetes, alcohol, and many poisons can act upon this organ, resulting in its deterioration with the concomitant appearance of liver necrosis and cirrhosis. 3 A large number of naturally occurring herbal products have shown prominent effects on the liver, despite the fact that many of their bioactive constituents are still unknown. They have frequently been employed for the treatment of liver disorders owing to their effectiveness, fewer adverse effects, and lower cost compared with synthetic agents. They exert their effect on the liver through the prevention and scavenging of free radicals by virtue of their antioxidant potential, which also facilitates the prevention of infections and degenerative disorders. 4 Bryophytes are a group of lower green land plants without well-developed vascular systems, in which the dominant generation is the leafy gametophyte (haploid generation); their sporophyte (diploid generation) is spore-bearing and often remains attached to the gametophyte for its whole life cycle. Bryophytes generally grow in small, inconspicuous places and are frequently disregarded by human beings.
5−7 Hence, bryophytes are less explored than vascular plants and remain under-examined in many respects, particularly regarding their medicinal importance. Marchantia polymorpha L. is a thalloid liverwort of the class Hepaticae, green to brown or purple in color, with hexangular markings on ramified branches about 10 cm long and up to 2 cm wide. It matures on moist soil, damp rocks, stream banks, puddles, and peat bogs. Its underside is covered by many root-like rhizoids, and it gives rise to reproductive structures known as gametophores. Female gametophores consist of a stalk with rays containing archegonia, which give rise to ova; male gametophores have a flat disc bearing antheridia, which develop sperm. It has been observed that people of the Himalayan areas use a mixture of ashes made from M. polymorpha and Marchantia palmata plants, blended with honey and a small quantity of fat, to heal cuts, burns, and other skin injuries. 7−9 Besides, in numerous classical Greek references and medical documents, Marchantia species were used to cure open wounds, prevent bacterial infections, treat inflamed or painful external wounds, act as snake antivenom, and treat liver failure. 10 Phytoconstituents present in M. polymorpha include volatile terpenoid metabolites such as thujopsene and β-chamigrene, in addition to aromatic compounds comprising bibenzyls and bisbibenzyls. 11,12 Regarding its biological activity, M. polymorpha exhibits potent antifungal, antibacterial, anti-inflammatory, antiviral, and anticancer potential, mainly attributable to its metabolites marchantin A, marchantin B, neomarchantin A, riccardin H, and perrottetin E. 13 Moreover, a recent in vitro study of an M. polymorpha L. extract revealed significant antioxidant and tyrosinase-inhibition potential. 14 In addition, endophytes isolated from M. polymorpha showed antiviral and anticancer potential owing to their volatile cyclic dipeptides. 15 A search of the current literature found nothing regarding the hepatoprotective potential of M. polymorpha L. or its mechanisms of action. Meanwhile, the search for alternative therapies for hepatic disorders with minimal side effects, particularly those derived from natural sources, is considered essential worldwide. Thus, the current study aimed to comprehensively validate the hepatoprotective potential of the chloroform extract of the whole M. polymorpha L. plant using paracetamol (PCM)-induced liver injury in mice, with subsequent measurement of the levels of oxidative stress markers and liver biomarkers, further supported by histopathological studies. In addition, metabolic profiling of the bioactive chloroform extract of M. polymorpha was performed using ultrahigh-performance liquid chromatography coupled with a quadrupole time-of-flight mass spectrometer (UPLC/MS) to correlate the bioactivity with the prevailing secondary metabolites and to consolidate the folk use of bryophytic plant species such as M. polymorpha L. as hepatoprotective agents. ■ MATERIALS AND METHODS Plant Material. Whole M. polymorpha L. plants were obtained from the Khanspure and Nathigalli areas (in the northern region of Pakistan). The plant specimen was identified and authenticated by Dr. Zaheer-ur-Khan, a plant taxonomist from the Botany Department, Government College University Lahore.
The plant specimen was deposited in the herbarium of the Department of Pharmacy, University of Central Punjab, Lahore, Pakistan, with voucher number Cog-010 for further reference. The plant material was carefully dried, garbled, pulverized, and placed in a glass container. Crude Plant Extract Preparation. The ground plant material (4 kg) was macerated successively with three solvents, namely n-hexane, chloroform, and methanol. Eight liters of each solvent were used consecutively, and the plant material was kept in each solvent for 7 days. Each solvent extract was percolated thoroughly through muslin cloth, followed by filtration through Whatman No. 1 filter paper; the filtration was repeated two or three times to maximize the yield. All the solvent extracts were concentrated using a rotary evaporator under reduced pressure. 16 The hexane extract was dark orange, the chloroform extract was dark brown, and the methanol extract was greenish-black in color. Each extract was further dried in an oven at 37 °C to obtain a semisolid extract, then labeled and preserved in an airtight container at 25−30 °C. Chemicals and Drugs. Silymarin was purchased from a local pharmacy, and PCM powder was obtained from Pacific Pharma Limited, Lahore, Pakistan. The kits for the assay of serum enzymes were provided by Sigma-Aldrich. The doses of the chloroform extract (250 and 500 mg/kg) were prepared in 10% Tween 20 solution, while PCM (250 mg/kg) and silymarin (50 mg/kg) were prepared in distilled water (DW) for administration. UPLC/MS Metabolic Profiling of the Chloroform Extract of M. polymorpha L. Metabolic profiling of the chloroform extract of M. polymorpha L. was performed using UPLC coupled with a Q-TOF mass spectrometer (UPLC/MS). An Agilent 6520 Accurate-Mass Q-TOF mass spectrometer (MS) with dual ESI sources and an Agilent 1290 Infinity LC (UHPLC) system were employed. The column was an Agilent Zorbax Eclipse XDB-C18, narrow bore, 2.1 × 150 mm, 3.5 μm (P/N: 930990-902). The column and autosampler temperatures were kept at 25 and 4 °C, respectively. The flow rate was 0.5 mL/min; the mobile phases were 0.1% formic acid in water and 0.1% formic acid in acetonitrile; and the injection volume was 1 μL. The run lasted 25 min with a 5 min recovery period. An electrospray ion source was employed in both negative and positive modes, and full-scan MS analysis was performed over the m/z 100−1000 range. Nitrogen was supplied at flow rates of 25 and 600 L/h for nebulizing and drying, respectively. The drying gas temperature was 350 °C, the fragmentor voltage was calibrated to 125 V, and the analysis was conducted at a capillary voltage of 3500 V. Agilent MassHunter Qualitative Analysis B.05.00 was used to process the data (method: Metabolomics3 2017-00004.m). Compounds were identified by searching the METLIN AM PCDL-N-170502.cdb database with a 5 ppm match tolerance; the adduct ions considered were +H, +Na, +NH4, and −H.
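To make the database search step above concrete, the sketch below shows the arithmetic behind adduct matching at a 5 ppm tolerance. This is an illustrative Python reconstruction, not the MassHunter/METLIN implementation, and the example neutral mass and observed m/z are hypothetical.

```python
# Standard mass shifts for singly charged adducts (charge-carrier mass
# corrected for the electron, rounded to 6 decimals).
ADDUCTS = {
    "[M+H]+":   1.007276,
    "[M+Na]+": 22.989221,
    "[M+NH4]+": 18.033823,
    "[M-H]-":  -1.007276,
}

def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

def match(observed_mz: float, neutral_mass: float, tol_ppm: float = 5.0):
    """Return the adducts whose predicted m/z lies within tol_ppm."""
    hits = []
    for name, shift in ADDUCTS.items():
        theo = neutral_mass + shift  # singly charged ions only
        err = ppm_error(observed_mz, theo)
        if abs(err) <= tol_ppm:
            hits.append((name, round(err, 2)))
    return hits

# Hypothetical example: a neutral compound of monoisotopic mass 426.1831
# observed at m/z 425.1760 in negative mode matches [M-H]- at ~0.4 ppm.
print(match(425.1760, 426.1831))
```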
In Vivo Hepatoprotective Evaluation of M. polymorpha L. Chloroform Extract. Experimental Animals. Studies were conducted on Swiss albino male mice weighing 25−30 g. The animals were housed in Plexiglas cages (47 × 34 × 18 cm) in the Research Laboratory of Pharmacology and Physiology, Faculty of Pharmacy, University of the Punjab, Lahore, Pakistan. The laboratory temperature was maintained at 26 ± 2 °C and the humidity at 50−55%, with a 12 h light−dark cycle. All the animals were acclimatized to the experimental conditions for 7 days and fed standard animal food and water. The standard diet was prepared according to institutional guidelines and was composed of 20% fat (5% sunflower oil + 14.5% cottonseed oil + 0.5% linoleic acid), 16% protein, calcium (5%), amino acid mixture (40%), vitamin mixture (1%), minerals (7%), and casein (11%). All the experimental protocols were approved by the Pharmacy Animal Ethics Committee (approval no. 2101) of the Faculty of Pharmacy, University of the Punjab, Lahore, Pakistan. Experimental Protocol. Mice were divided into five groups of equal size. Group I served as control and received DW only. Group II was the diseased control group and was orally administered 250 mg/kg of PCM. Group III served as the standard group and was orally administered 50 mg/kg silymarin in addition to 250 mg/kg of PCM. Groups IV and V were the extract groups, in which the animals were orally administered 250 and 500 mg/kg of M. polymorpha chloroform extract (MP-250 and MP-500, respectively) in addition to 250 mg/kg of PCM, for 14 days. 17 After 15 days of therapy, mice were anesthetized with ketamine + xylazine and blood was drawn by heart puncture for biochemical evaluation. Blood was centrifuged at 4000 rpm for 15 min at room temperature to obtain the serum. Biochemical and Oxidative Stress Marker Evaluation. All liver function tests, including aspartate transaminase (AST), alanine transaminase (ALT), alkaline phosphatase (ALP), and total proteins (TP), as well as lipid profile parameters comprising total bilirubin (TB), total cholesterol (TC), triglycerides (TGs), low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), total lipids (TL), cholesterol to high-density lipoprotein ratio (CH/HDL), and HDL, were evaluated from serum as previously reported by Parvez et al. 18 For antioxidant investigations, 10% liver homogenate was employed, and the oxidative stress parameters superoxide dismutase (SOD), lipid peroxidation (LPO), and glutathione peroxidase (GPx) were assessed using the methods previously employed by Rajkapoor et al. 19 Histopathological Evaluation. Liver specimens were stored in 10% formalin solution, then dehydrated, cleaned, and embedded in paraffin blocks. Paraffin sections were cut and stained with hematoxylin and eosin for histological examination. 20 Statistical Analysis. Results are represented as mean ± SD (n = 6). Two-way ANOVA followed by a post hoc Dunnett test was performed using GraphPad Prism (San Diego, CA, USA) software. A p-value of less than 0.05 was considered statistically significant.
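As a rough open-source analogue of the GraphPad workflow just described, the sketch below runs a one-way ANOVA (the variant cited in the figure captions) followed by Dunnett's test against the diseased control. It uses SciPy's `dunnett`, which requires SciPy ≥ 1.11, and entirely hypothetical marker values; it is not the analysis actually performed in the paper.

```python
import numpy as np
from scipy import stats  # scipy >= 1.11 for stats.dunnett

# Hypothetical ALT values (U/L) per group, n = 5, for illustration only.
rng = np.random.default_rng(0)
pcm       = rng.normal(120, 8, 5)  # diseased control (PCM only)
silymarin = rng.normal(60, 6, 5)
mp_250    = rng.normal(70, 6, 5)
mp_500    = rng.normal(62, 6, 5)

# One-way ANOVA across treatment groups, then Dunnett's multiple
# comparison of each treatment against the PCM diseased control.
f_stat, p_anova = stats.f_oneway(pcm, silymarin, mp_250, mp_500)
dunnett = stats.dunnett(silymarin, mp_250, mp_500, control=pcm)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
for name, p in zip(["silymarin", "MP-250", "MP-500"], dunnett.pvalue):
    print(f"{name} vs PCM: p = {p:.4f}")
```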
■ RESULTS Metabolic profiling of the chloroform extract was performed in both positive and negative ionization modes. This led to the tentative identification of 28 compounds, four of which were identified in the negative ionization mode and the rest in the positive ionization mode, as illustrated in Tables 1 and 2. These compounds belong to various classes, including benzoic acids and their derivatives, phenols, oligothiophenes, quinolones, phenylpropanoids, acylaminosugars, terpenoids, lipids, and fatty acids. Tentative assignment of the detected metabolites was performed by comparing the masses of the metabolites detected in the positive and negative ionization modes with previously reported data (references are given in Tables 1 and 2), together with public online databases such as PubChem and MassBank. A scheme showing the chemical structures of the compounds identified in the chloroform extract of M. polymorpha L. is presented in Figure 1. In Vivo Hepatoprotective Evaluation of M. polymorpha L. Chloroform Extract. Effect on Liver Stress Markers. Paracetamol induced extensive liver damage in mice, evidenced by pronounced elevations in ALT, AST, and ALP levels of 144.6, 82.57, and 55.22%, respectively, with a concomitant 24.68% reduction in TP compared with the control group. In contrast, administration of MP-250 and MP-500 resulted in significant amelioration of the extensive liver damage, as revealed by reductions in ALT of 44.05 and 49.49%, in AST of 35.98 and 44.07%, and in ALP of 18.93 and 23.08%, respectively, in addition to significant elevations in TP of 45.21 and 54.47%, respectively, relative to the diseased group that received paracetamol only. In this respect they approach silymarin, which showed 51.49, 52.85, and 24.90% reductions in ALT, AST, and ALP levels, respectively, in addition to a 44.83% elevation in TP (Figure 2); the corresponding effects on the lipid profile parameters are shown in Figures 3 and 4. Effect on Oxidative Stress Markers. In addition, oral administration of paracetamol triggered pronounced oxidative stress in the diseased group, expressed as declines in the endogenous antioxidants SOD and GPx of 31.17 and 58.07%, respectively, with a concomitant 40.74% elevation in LPO. In contrast, administration of silymarin, MP-250, and MP-500 elevated SOD by 33.56, 22.01, and 28.95%, respectively, with concomitant increases in GPx of 114.14, 76.11, and 82.42%, respectively, compared with the diseased group. They also reduced LPO by 22.39, 11.59, and 14.26%, respectively, compared with the paracetamol-only treated group (Table 3). Histopathological Examination. M. polymorpha extracts elicited a pronounced amelioration of the liver histology, as presented in Figure 5. In the histopathological studies, the control group showed normal hepatocytes (white arrow) and normal sinusoids (blue arrow), whereas paracetamol-intoxicated mice revealed significant vascular degeneration and centrilobular necrosis of hepatocytes, in addition to hepatocyte ballooning and degeneration (white arrow) together with abnormal sinusoid architecture (blue arrow). Compared with the control, administration of various doses of M. polymorpha extract led to only mild degenerative alterations in hepatocytes and sinusoids. The MP-250-treated group showed normal hepatocyte architecture (white arrow) and mild shrinkage of the sinusoids (blue arrow), whereas MP-500 produced better amelioration of the liver histology, manifested by normal hepatocyte architecture (white arrow) and a normal sinusoidal gap (blue arrow). Meanwhile, oral administration of silymarin, the standard hepatoprotective agent, showed preserved hepatocyte architecture (white arrow) and mild changes in the sinusoids (blue arrow) (Figure 5).
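All the percent changes quoted in these results follow the same arithmetic: the change in a treated group's mean relative to the diseased (PCM-only) group's mean. A minimal sketch with hypothetical group means:

```python
def percent_change(treated_mean: float, diseased_mean: float) -> float:
    """Change relative to the diseased (PCM-only) group mean, in percent.
    Negative values are reductions, positive values elevations."""
    return (treated_mean - diseased_mean) / diseased_mean * 100.0

# Hypothetical group means for ALT (U/L), chosen only to illustrate
# the arithmetic behind figures such as "reduction of ALT by 49.49%".
alt_diseased = 120.0  # PCM only
alt_mp500 = 60.6      # PCM + MP-500

print(f"MP-500 vs diseased: {percent_change(alt_mp500, alt_diseased):+.2f}%")
# -> MP-500 vs diseased: -49.50%
```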
■ DISCUSSION Ethnopharmacological investigations have shown that many plants possess hepatoprotective properties and have traditionally been employed in various regions of the world to treat liver ailments. 21,22 The current research assessed the hepatoprotective potential of M. polymorpha, further consolidating its traditional usage. The antioxidant components existing in plants, such as phenols and phenolic diterpenes, are mainly responsible for their pharmacological properties. 23,24 Metabolic profiling performed using UHPLC−MS revealed that the plant is rich in secondary metabolites with antioxidant potential, to which its hepatoprotective effect is attributed. Paracetamol (acetaminophen), a popular analgesic and antipyretic medication, is well known to cause severe liver injury in both experimental animals and humans. 25 Paracetamol-induced hepatotoxicity is used as a reliable model for screening hepatoprotective drugs. The liver is the primary site of paracetamol metabolism, and the kidneys are responsible for excreting it after its conjugation with glucuronide and sulfate. 26 A portion of acetaminophen is metabolized via the cytochrome P450 pathway to N-acetyl-p-benzoquinone imine (NAPQI), a highly toxic metabolite that is normally conjugated with glutathione and eliminated in the urine. Acetaminophen intoxication depletes glutathione reserves, resulting in the buildup of NAPQI, mitochondrial malfunction, and the emergence of acute hepatic necrosis. 26 The toxic metabolite NAPQI can alkylate and oxidize intracellular GSH, which causes hepatic GSH depletion. Increased LPO is then caused by the abstraction of hydrogen from polyunsaturated fatty acids, which ultimately causes liver damage at higher paracetamol doses. 27 Reactive metabolites can cause early cell stress in a variety of ways, such as by diminishing glutathione (GSH) levels or by binding to enzymes, lipids, nucleic acids, and other cell components. Studies have shown that compounds that affect P450 activity can prevent the liver damage caused by PCM. 28 Monitoring of enzyme levels such as AST and ALT is frequently used in the assessment of liver damage caused by acetaminophen. These enzymes are released into the circulation upon necrosis or membrane injury and can therefore be detected in the serum. Hepatocyte mitochondria are the primary location of AST, whereas ALT is a better measure for identifying liver damage since it is more specific to the liver; in addition, damage to liver cells is also linked to serum ALP and bilirubin levels. 29 Administration of acetaminophen significantly increased the levels of the enzymes AST, ALT, ALP, and GGTP, as well as TB, and decreased TP relative to the control group. The increased levels of the blood marker enzymes AST, ALT, and ALP and of bilirubin were counteracted by co-administering the chloroform extract of the plant under investigation, in a dose-dependent manner. The extract may have the ability to stabilize membranes and prevent the leakage of intracellular enzymes, which would otherwise lead to higher serum enzyme levels in acetaminophen-induced liver injury. This is consistent with the widely held view that repair of the hepatic parenchyma and regeneration of hepatocytes cause serum transaminase levels to revert to normal. 30−32 The toxic metabolite NAPQI frequently causes cell death, organ damage, and covalent alteration of cellular target proteins.
33 Effective regulation of ALP, bilirubin, and TP levels suggests that the secretory function of the hepatic cells was improved. 33 The effectiveness of any hepatoprotective medication depends on its ability either to mitigate the negative effects of a hepatotoxin or to restore the normal physiology of the liver after it has been disrupted. Both silymarin (50 mg/kg) and the plant extract (250 and 500 mg/kg) reduced the acetaminophen-induced elevation of enzyme levels in the test groups, indicating preservation of the structural integrity of the hepatocyte cell membrane or regeneration of damaged liver cells. The rise in hepatic LPO caused by acetaminophen indicates increased lipid peroxidation causing tissue damage and a failure of the antioxidant defense mechanism to prevent the formation of excess free radicals. These alterations were significantly reversed by M. polymorpha treatment, highlighting that the antioxidant action of M. polymorpha is most likely the basis of its hepatoprotective effects. 34 A reduction in superoxide dismutase (SOD) activity is a sensitive indicator of hepatocellular injury and is the most sensitive enzymatic signal in liver injury. SOD is reported to be among the most crucial enzymes in the body's enzymatic antioxidant defense system: it scavenges the superoxide anion to produce hydrogen peroxide, thereby lessening the radical's harmful effects. The considerable increase in hepatic SOD activity produced by M. polymorpha therefore decreases reactive free radical-induced oxidative damage in the liver. 35 Glutathione, one of the most prevalent tripeptides, is a nonenzymatic biological antioxidant in the liver. It keeps membrane protein thiols intact and eliminates free radical species such as hydrogen peroxide and superoxide radicals, 36 and it also serves as a GPx substrate. In mice treated with acetaminophen, a lower level of GSH is connected with increased LPO. The levels of GPx and GST were significantly (P < 0.05) and dose-dependently elevated after administration of M. polymorpha extract. 36 Additionally, the hepatoprotective efficacy of M. polymorpha extract was supported by the histological findings, which greatly contributed to counteracting the damaged liver architecture. Acetaminophen caused extensive vascular degenerative alterations and centrilobular necrosis in hepatocytes, whereas administration of different doses of the chloroform extract of M. polymorpha caused only minor degenerative alterations and no centrilobular necrosis, showing that the extract was effective at protecting the liver. The investigated plant extract has hepatoprotective and antioxidant properties that regulate cellular permeability and stability and decrease oxidative stress. Numerous studies have shown that certain flavonoids, triterpenoids, and steroids have antioxidant characteristics that protect the liver. 37 Major compounds identified in the M. polymorpha extract using UHPLC−MS, such as marchantin A, 38 phellodensin F, 39 2,2,4,4-tetramethyl-6-(1-oxopropyl)-1,3,5-cyclohexanetrione, and emmotin A, 40 could play a major role in its hepatoprotective activity. Marchantin A has previously been reported to possess potent anti-inflammatory activity, which would undoubtedly ameliorate hepatic inflammation and necrosis.
41 Moreover, chitobiose and phellodensin F have shown potent antioxidant activity via free radical scavenging, which is ultimately reflected in their ability to ameliorate liver damage. 39,42 Besides, eugenitin, a phenolic metabolite, has previously been reported to possess notable antioxidant and anti-inflammatory activity, which in turn could contribute to the liver-protective activity. 43 Thus, it can be concluded that M. polymorpha shows marked hepatoprotective activity in a dose-dependent manner, further consolidating its traditional use. ■ CONCLUSIONS According to the current findings, M. polymorpha showed marked hepatoprotective activity, counteracting hepatocellular injury in paracetamol-treated mice in a dose-dependent manner. This was evidenced by the amelioration of the liver stress markers AST, ALT, ALP, and TP, in addition to the normalization of antioxidant parameters such as LPO, SOD, and GPx. It was also accompanied by adjustment of lipid profile parameters such as TB, TC, TG, LDL, VLDL, TL, and CH/HDL and elevation of HDL, which was further supported by the histopathological examination of the dissected liver sections. Furthermore, UHPLC−QTOF−MS metabolic profiling of the chloroform extract of M. polymorpha L. led to the tentative identification of 28 compounds belonging to various classes, including phenols, quinolones, phenylpropanoids, acylaminosugars, terpenoids, lipids, and fatty acids, to which the hepatoprotective activity of M. polymorpha is attributed. Thus, it can be concluded that this plant could serve as a valuable candidate for the treatment of hepatotoxicity, further consolidating its traditional use. However, further preclinical studies are highly recommended to ascertain the obtained results.
Sandwich Technique in Primary Teeth: A Review Background and Aim: The sandwich technique is a restorative method in which the lost dentin is replaced with glass ionomer (GI) cement and the lost enamel is replaced with composite resin. Various modifications of this technique have been introduced in order to increase the longevity of the restoration. The aim of this review article was therefore to assess the use of the sandwich technique in primary teeth. Materials and Methods: After an initial screening of potentially relevant articles through an electronic search of journals indexed in PubMed Central, Science Direct, Wiley Online Library, Springer and Google Scholar, articles on sandwich restorations in primary teeth were included. Results: The literature suggests that the sandwich technique is successfully practiced in carious lesions in permanent teeth; however, very few studies have been done on primary teeth. Conclusion: With the advent of newer resin cements and bonding agents, the sandwich technique has been much simplified. However, not enough clinical studies on the sandwich technique and its modifications in primary teeth are available in the literature. More studies need to be conducted in primary teeth using this restorative technique. Introduction The sandwich technique was first introduced by Wilson and McLean in the late 1970s and 1980s, wherein glass ionomer (GI) cement was used to replace the lost dentin, followed by the placement of a composite restoration to replace the lost enamel [1]. The concept of the sandwich restoration is based on the principle of biomimesis, defined by Bugliarello as "the attempt to imitate features of living systems". It means that it is better to replace the lost natural tooth structure with materials that best replicate the biological essence of the lost tissues [2]. Composite resins have long been used in restorative procedures in which they are directly bonded to the enamel: the enamel is etched and conditioned, followed by infiltration and polymerization of a resin material. This type of restoration can be retained in the oral cavity for a long time. After placement of a composite restoration, various external factors come into play, such as masticatory forces, occlusal stress, and thermal and hydrodynamic effects, which lead to microleakage and the ingress of bacteria, along with internal factors such as enzymatic degradation of the collagen matrix and resin leaching [3,4]. This results in postoperative sensitivity. Hence, placement of a GI base under the composite is considered a smarter option. It offers the advantages of establishing a reliable, gap-free chemical bond to dentin and a micromechanical bond to composite resin. It protects the pulp tissue from irritation, has a fluoride-releasing property with a cariostatic effect, and helps reduce the bulk of composite resin, which leads to less polymerization shrinkage [5]. According to Croll and Cavanaugh [6], the only disadvantage of the sandwich restoration is that it is a time-consuming technique; however, the advantages of this type of restoration outweigh this disadvantage. With recent advances in GI cements and bonding agents, the complexity of the sandwich restoration technique can be reduced. An extensive search of the literature did not reveal any comprehensive reviews on this topic. The following review of the literature presents the different techniques and modifications of sandwich restoration carried out mainly in primary teeth.
Review of Literature Sandwich restorations were further categorized into 2 types by Wilson and McLean in 1977 [1], namely the open and closed sandwich techniques. The closed sandwich technique involves placement of GI cement at the base of the proximal box, not extending to the cavosurface margin. After the GI sets, the cavity is etched with phosphoric acid, followed by application of a dentin bonding agent; composite material is then placed as the final restoration. The GI is enclosed within the preparation and not exposed to the outer surface. The open sandwich technique involves application of a GI restoration at the base of a proximal cavity up to the level of the dentinoenamel junction. Composite resin is then placed over it, leaving a portion of the GI exposed to the oral cavity. The main benefit of the open sandwich technique is that the exposed GI helps buffer the changes that occur in the presence of an acidic pH, and hence it is the more commonly used technique [7]. Reid et al. [8] assessed the microleakage and gap size at the GI−composite resin interface in sandwich restorations in primary teeth. Microleakage scores were highest for the closed sandwich group when the cavosurface margin was placed on either dentin or cementum, and lowest for the open sandwich group when the cavosurface margin was placed on enamel. However, clinical failures were seen with the open sandwich technique, mainly because of continuous loss of GI material from the cervical margins of proximal restorations. This was due to two main factors, namely (I) the moisture sensitivity of GI at the time of placement and (II) crazing and cracking due to early setting and dehydration. Hence, newer resin-modified glass ionomer (RMGI) cements came into play. RMGI has been shown to have a higher bond strength than conventional GI [9,10]. The resin component in RMGI supplements the chemical bond that GI achieves with the tooth structure through micromechanical bonding. This double-bonding mechanism helps in longer retention and in achieving a good marginal seal. According to Pereira et al. [11], the better sealing produced by RMGI is the result of resin tag formation into the dentinal tubules, together with the ion exchange process that occurs at the dentin/RMGI interface; an additional reason is the presence of 2-hydroxyethyl methacrylate (HEMA) in RMGI. A major advantage of using RMGI is that the material is polymerized upon light activation. Carvalho et al. [12] and Davidson [13] suggest that RMGI could help change the configuration factor of a material to obtain a more favorable internal structure, minimizing polymerization shrinkage. Some authors believe that the relative flexibility of RMGI helps reduce the stress produced in the restoration; the stiffness of the composite after curing is also reduced, preventing bond failure [14]. Many other materials, such as flowable composite, flowable compomer, and various bonding agents, have been evaluated as lining agents under composite resin. Hagge et al. [15] […]; another study [17] compared the bond strength and fracture modes of 40 extracted primary molars restored with packable composite resin, RMGI, an RMGI/packable composite resin sandwich restoration, or an RMGI/packable composite sandwich restoration with K-14 bonding agent. No statistically significant difference was seen among these 4 types of restorations in terms of bond strength or fracture mode.
Cannon [18] evaluated the efficacy of open sandwich restorations in a clinical pediatric setting by comparing sandwich restorations with amalgam restorations, and concluded that the open sandwich technique can be used in pediatric dental practice with a good success rate. Atieh [19] evaluated the clinical performance and sustainability of stainless steel crown restorations and an RMGI-modified open sandwich technique in 186 primary molars, and concluded that the modified open sandwich restoration is an appropriate alternative to the stainless steel crown for multi-surface restorations, especially where esthetics is a concern. Bona et al. [20] evaluated the sealing ability of conventional GI and RMGI used for sandwich restorations in 40 restorations in primary molars and examined the effect of acid etching of both materials on microleakage at the GI−composite resin interface. The results suggested that acid etching of the GI before placing the composite resin did not significantly improve the sealing capacity of sandwich restorations, and that RMGI was more effective in preventing microleakage at the GI−composite−dentin interface. Fragkou et al. [21] evaluated the tensile bond strength of composite resin and RMGI in open sandwich restorations in vitro using tensile strength and strain tests, and concluded that the use of a bonding agent improved the tensile bond strength of the restorations. Discussion According to the literature, sandwich restorations with RMGI show good clinical success, and advances in materials have kept this technique relevant and usable even today. In a study by Kleverlaan et al. [22], the mechanical properties and compressive strength of GIs cured by various techniques (chemically cured, ultrasonically activated, or heat cured) were compared. The results showed that the mechanical properties of GIs improved significantly with ultrasound or heat curing. Ultrasonically cured GI showed increased hardness, decreased softness of the top surface layer, and negligible creep soon after placement, suggesting that the curing process may be accelerated immediately after ultrasonic activation. Fourie and Smit [23] evaluated the effect of thermocycling, cervical position, and different materials (GI set with ultrasound, conventional GI, light-cured GI, and RMGI) on the cervical microleakage of 200 proximal open-sandwich restorations in permanent molars. The results suggested that ultrasonically cured GI showed the least microleakage when the cervical margins of proximal restorations were placed in dentin. Variations of the open sandwich technique: Pinheiro et al. [24] introduced a newer alternative to the sandwich technique, namely the simultaneous activation technique (SAT). In SAT, a glass ionomer cement is placed, followed immediately by a bonding agent that is light cured before placement of the composite resin; the requirement for the conventional GI to set, or the RMGI to be light cured, before placement of the bonding agent and composite restoration is thereby eliminated. In their study, bond strength and microleakage were evaluated for SAT and the conventional sandwich technique, and no statistically significant difference was found between the two. It was concluded that SAT is a less complex, quicker, and feasible alternative for bonding GI cements to composite resins in primary molars [24].
Knight [25] described two variations of the open-sandwich restoration technique: the composite resin co-cure technique and the GI cement co-cure technique. The composite resin co-cure technique involves etching of enamel and dentin, followed by placing a thin layer of RMGI and curing it. A second layer of RMGI is then applied, immediately followed by the composite resin, and the two are cured together. The first layer of RMGI seals the cavity, while the second layer reduces the polymerization stress of the composite resin during curing. For cavities deeper than 2 mm, another layer of RMGI can be added to reduce stress between composite resin layers. The GI cement co-cure technique involves placement of conventional GI after etching of the cavity. The GI is placed into the proximal box and as a base extending to the dentinoenamel junction or just short of the cavo-margin. A layer of RMGI is immediately placed over it, extending to the outer margin of the preparation. Composite resin is then placed as the final restoration and immediately cured. The composite resin is cured and undergoes polymerization shrinkage before the RMGI bond has set, resulting in a stress-free bond to tooth structure at the outer cavity margin. The RMGI chemically bonds the composite resin to the GI. The composite resin curing is exothermic, which in turn heats the conventional GI and triggers a cascade setting reaction of the GI within 20-40 seconds. According to a recent meta-analysis by Ortiz-Ruiz et al. [26] of the success rates of different proximal tooth-colored restorations in primary molars after a 24-month follow-up, RMGI was the most effective restorative material, followed by RMGI placed beneath composite resin (the sandwich technique). However, only one study of the sandwich technique met the inclusion criteria, so it was concluded that more studies are required to assess the success of sandwich restorations in primary teeth (Figure 1).

Conclusion

The sandwich technique was introduced over 40 years ago. It is commonly practiced in permanent teeth; however, very few studies have been carried out on primary teeth. Hence, more clinical studies are required using the sandwich technique and its modifications as a restorative protocol in primary teeth.
2022-10-14T15:04:52.102Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "d0bbda93e81cc16ee5a5df97d8c2714cf6425f32", "oa_license": "CCBY", "oa_url": "http://jrdms.dentaliau.ac.ir/files/site1/user_files_d1a2ed/jasminwinnier-A-10-970-1-4280921.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d56e38f24635798e230ab9319c1f55879e86beec", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
237091656
pes2o/s2orc
v3-fos-license
Central spin dynamics and relaxation of antiferromagnetic order in a central-spin–XXZ-chain system

Using an equations-of-motion method based on analytical representations of spin-operator matrix elements in the XX chain, we obtain exact long-time dynamics of a composite system consisting of a spin-S central spin and an XXZ chain, with the two interacting via inhomogeneous XXZ-type hyperfine coupling. Three types of initial bath states, namely, the Néel state, the ground state, and the spin coherent state, are considered. We study the reduced dynamics of both the central spin and the XXZ bath. For the Néel state, we find that strong hyperfine couplings slow down the initial decay but facilitate the long-time relaxation of the antiferromagnetic order. Moreover, for fixed hyperfine coupling a larger S leads to a faster initial decay of the antiferromagnetic order. We then study the purity dynamics of an S = 1 central spin coupled to an XXZ chain prepared in the ground state. The time-dependent purity is found to reach its highest values at the critical point. We finally study the polarization dynamics of the central spin homogeneously coupled to a bath prepared in the spin coherent state. Under the resonant condition, the polarization dynamics for S > 1/2 exhibits collapse-revival behavior with fine structures. However, the collapse-revival phenomena are found to be fragile with respect to anisotropic intrabath coupling.

I. INTRODUCTION

The study of the real-time dynamics of a composite system made up of a central spin and a coupled quantum spin bath is important for understanding many physical phenomena in condensed matter and statistical physics [1]. In particular, the Gaudin model [2] and its variants, which describe a central spin coupled to spin baths without intrabath coupling, play an important role in quantum decoherence [3-10], quantum information [11,12], quantum metrology [13], and even mathematical physics [14-19]. The dynamics of these noninteracting central spin models has been widely studied by many theoretical methods, including techniques based on Bethe ansatz solutions [4,9,10,17,20], quantum master equations [5-7], the density matrix renormalization group method [8], and so on. In the special case of homogeneous hyperfine coupling, the polarization dynamics of the S = 1/2 central spin even admits analytical solutions [12,13]. Because of the existence of integrability or an extensive set of conserved quantities, the above-mentioned approaches can often deal with noninteracting baths having a large number of spins. However, including the intrabath coupling among bath spins generally makes the evaluation of the dynamics difficult even for intermediate-size baths, mainly due to the induced breakdown of integrability. In spite of the technical difficulties, it is nevertheless interesting and important to take into account the effect of environmental self-interaction on the central spin dynamics, as demonstrated in an early work using a fully connected spin–spin-bath model [21]. Recently, several theoretical works have appeared in which the decoherence of a qubit coupled to interacting quantum spin chains is investigated [22-24]. Among these, Wu et al. studied the decoherence of a qubit coupled to an XX chain via XX- [22] and XXZ-type [23] hyperfine couplings using an equations-of-motion method and a Chebyshev expansion technique.
Based on the Bethe ansatz solution of the XXX chain, Lu et al. obtained the exact decoherence and polarization dynamics of a qubit coupled homogeneously to an XXX bath [24]. Nevertheless, the quantum dynamics of a spin-S central spin interacting inhomogeneously with an XXZ chain remains unexplored. In passing, we mention that the reduced dynamics of a qubit locally coupled to a free-end XXZ chain was studied in Ref. [25] using the time-dependent density matrix renormalization group method. As a paradigmatic spin model exhibiting strong correlations, the spin-1/2 XXZ chain has served as an ideal testbed for studying nonequilibrium quantum dynamics. Barmettler et al. studied the relaxation of antiferromagnetic order in an XXZ chain prepared in the Néel state [26]. It was found that the antiferromagnetic order experiences oscillatory or nonoscillatory relaxation depending on the anisotropy parameter, and that the relaxation time reaches its minimum at the critical point. The same dynamical protocol was later used to study dynamical quantum phase transitions in the XXZ chain [27]. The influence of a small quantum system, and the frustration it induces, on strongly correlated systems is another long-studied topic [28,29]. Richter and Voigt studied the static properties of a composite spin system named the "frustrated Heisenberg star" [29], which is made up of a central spin and a homogeneously coupled XXX ring. The competition between the intrabath coupling and the hyperfine coupling was found to result in interesting behavior of the ground-state spin correlations. It is therefore desirable and interesting to study the effect of the interplay between the two types of interactions on the internal dynamics of the interacting spin bath. In this work, we obtain exact quantum dynamics of a composite system consisting of a spin-S central spin and a coupled periodic XXZ chain. The two parts interact with each other through the usual XXZ-type inhomogeneous hyperfine coupling [2]. Due to the presence of the intrabath interaction, theoretical methods such as those based on the Bethe ansatz solution are no longer applicable. Here, we employ an equations-of-motion approach [22,23] to treat the time evolution of the whole system. The usefulness of the method lies in the fact that each bath spin couples locally to the central spin, while the matrix elements of local bath operators in the diagonal basis of the XX chain admit analytical expressions [30]. Using the conservation of the total magnetization, we explicitly write out the equations of motion for the time-dependent amplitudes in the XX-chain basis, where the coefficients are associated with the spin-operator matrix elements. By numerically solving the equations of motion in each magnetization sector, we are able to calculate the dynamics of the composite system prepared in a generic initial state. We consider three types of initial states for the XXZ bath, i.e., the Néel state, the ground state of the XXZ ring, and the spin coherent state. The Néel state is one of the two degenerate ground states of the antiferromagnetic XXZ chain in the large-anisotropy limit and has been realized with high fidelity in cold atom systems [31]. It has also been used to investigate the relaxation of antiferromagnetic order in the XXZ chain [26] and to probe the decoherence dynamics of a qubit coupled to spin baths [9,23].
In our setup, we assume that the composite system is prepared in a separable pure state, so that the dynamical protocol can be regarded as a simultaneous quench of both the anisotropy parameter (from infinity to a finite value) and the hyperfine couplings (from zero to finite values). We study both the central-spin decoherence and the relaxation of the staggered magnetization in the XXZ bath after such a quench. It is found that the intrabath coupling has little effect on the short-time dynamics of the decoherence factor, but can change the long-time coherence significantly. The central spin also has a great influence on the relaxation of the antiferromagnetic order within the bath. We observe that strong hyperfine coupling can slow down the short-time decay but facilitate the long-time relaxation of the staggered magnetization. In addition, increasing the quantum number S of the central spin at fixed hyperfine coupling strength accelerates the initial decay of the staggered magnetization. The central spin dynamics depends not only on the hyperfine coupling strength but also on the internal phase of the XXZ bath. Our second choice for the initial bath state is the ground state of the XXZ chain. In this case, we focus on the purity dynamics of an S = 1 central spin. We find that in the strong hyperfine coupling regime the time-dependent purity acquires its highest values when the bath is prepared in the ground state at the critical point. We finally study the polarization dynamics of a central spin homogeneously coupled to an XXZ chain in the spin coherent state. For an XXX bath and S = 1/2, we recover the results of Ref. [13]. For S > 1/2, we find that the polarization dynamics still exhibits collapse-revival behavior under the resonant condition. However, the collapse-revival phenomena are destroyed once anisotropic intrabath coupling is introduced. The rest of the paper is organized as follows. In Sec. II we introduce the central-spin–XXZ-chain model and provide details of the equations-of-motion approach. In Sec. III we present the numerical results for the three types of bath initial states. Conclusions are drawn in Sec. IV.

A. Hamiltonian

We consider an interacting central spin model described by the Hamiltonian (see Fig. 1)

H = H_S + H_B + H_SB.  (1)

The system part, H_S = ω S^z + λ (S^z)², describes a central spin S = (S^x, S^y, S^z) of size S ≥ 1/2, where ω is the Larmor frequency due to the applied magnetic field; the λ term is the single-ion anisotropy of the central spin. The spin bath takes the form of a spin-1/2 XXZ chain, H_B = H_XY + H_Z, with

H_XY = J Σ_{j=1}^{N} (S^x_j S^x_{j+1} + S^y_j S^y_{j+1}),  H_Z = J' Σ_{j=1}^{N} S^z_j S^z_{j+1},

where S_j = (S^x_j, S^y_j, S^z_j) is the spin-1/2 operator for the jth bath spin; we have separated the bath Hamiltonian into the in-plane component H_XY and the Ising component H_Z. For simplicity, we assume that N is even and impose periodic boundary conditions. We set J > 0; the sign of J' essentially determines the quantum phase of H_B [32]. The XXZ-type hyperfine coupling between the central spin and the bath reads

H_SB = Σ_{j=1}^{N} [g_j (S^x S^x_j + S^y S^y_j) + g'_j S^z S^z_j],

where {g_j} and {g'_j} are, respectively, the in-plane and Ising parts of the (inhomogeneous) exchange interaction constants. It is usually the case that g'_j/g_j = Λ for all j, where Λ measures the anisotropy of the system-bath coupling. In the case of J = J' = 0, the bath becomes noninteracting and we recover the usual Gaudin model, which admits Bethe ansatz solutions under certain conditions [2,16,18,19]. Let L ≡ Σ_{j=1}^{N} S_j be the collective angular momentum operator of the spin bath; it can easily be checked that the total magnetization M̂ = S^z + L^z is conserved.
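The conserved total magnetization is what makes the numerics tractable, and the sector bookkeeping is easy to verify directly. The sketch below is our own Python illustration, not code from the paper: sector_dimensions counts the dimension of each conserved-α block (α = s_z + n, introduced below), reproducing for S = 1/2 and N = 16 the 2 × 24310 = 48620-dimensional space relevant to the Néel state quoted in Sec. III A; the last lines then demonstrate sector-wise unitary propagation using a random Hermitian matrix as a stand-in for the true block Hamiltonian.

```python
import numpy as np
from math import comb
from scipy.linalg import expm

def sector_dimensions(S, N):
    """Return {alpha: dim} for a spin-S central spin and N bath spins,
    where alpha = s_z + n labels the conserved sectors."""
    dims = {}
    for k in range(int(round(2 * S)) + 1):   # s_z = -S, ..., S
        s_z = -S + k
        for n in range(N + 1):               # number of fermionic excitations
            alpha = s_z + n
            dims[alpha] = dims.get(alpha, 0) + comb(N, n)
    return dims

dims = sector_dimensions(S=0.5, N=16)
print(sum(dims.values()))     # (2S+1)*2**16 = 131072 states in total
# The Neel state has n = N/2 = 8 excitations, so only alpha = 8 -/+ 1/2
# are populated; each sector has dimension C(16,7) + C(16,8) = 24310:
print(dims[7.5], dims[8.5])   # 24310 24310  ->  2 x 24310 = 48620

# Within one sector the amplitude vector evolves unitarily,
# A(t) = exp(-i H_alpha t) A(0).  Random Hermitian stand-in for the demo:
rng = np.random.default_rng(1)
D = 300                        # stand-in dimension (the real one is 24310)
X = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H_alpha = (X + X.conj().T) / 2
A = np.zeros(D, dtype=complex); A[0] = 1.0
A_t = expm(-1j * H_alpha * 1.0) @ A
print(abs(np.linalg.norm(A_t) - 1.0))   # norm conserved to machine precision
```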
The angular momentum of the central spin, S², is also conserved. However, the total angular momentum of the spin bath, L², is not conserved unless J = J' and {g_j} and {g'_j} are both homogeneous [23]. Below, the eigenvalues of M̂, S^z, and L^z will be denoted M, s_z, and l_z, respectively. The total magnetization M can take the following 2S + N + 1 possible values:

M = -(S + N/2), -(S + N/2) + 1, ..., S + N/2.

The structure of the states in an individual M-subspace depends on whether S < N/2 or S ≥ N/2. In this paper, we focus on the case S < N/2 (see Appendix A for details). To get a universal short-time dynamics for different numbers of bath spins, we introduce the energy scale ω_fluc, which is associated with the fluctuation of the Overhauser field [33].

B. Method: spin-operator matrix elements

To numerically simulate the real-time dynamics of the composite system, we use the representation in which the noninteracting Hamiltonian H_0 = H_S + H_XY is diagonal. This is motivated by the fact that the matrix elements of each term in the remaining part of the Hamiltonian, H_1 = H - H_0, can be expressed in this representation in terms of the so-called spin-operator matrix elements for the XX chain [30]. The eigenbasis of H_0 is spanned by the (2S + 1)2^N states {|s_z⟩|η_n⟩}. Here, |η_n⟩ is an eigenstate of H_XY having n fermionic excitations labeled by the tuple η_n = (η_1, ..., η_n) (with the convention 1 ≤ η_1 < ... < η_n ≤ N) with respect to the vacuum state |0⟩ = |↓...↓⟩ [30]. The corresponding eigenenergy E_{η_n} depends on the parity of n through the allowed wave numbers, with σ_n = 1 (even n) or σ_n = -1 (odd n). For later convenience, we also introduce α = s_z + n = M + N/2, which is also conserved and can take values from α = -S to α = S + N. As we will see, the equations of motion of the system in the basis {|s_z⟩|η_n⟩} involve the matrix elements of the local bath spin operators between XX-chain eigenstates. For the homogeneous XX ring described by H_XY, it is shown in Ref. [30] that F_{j;η_{n+1},χ_n} ≡ ⟨χ_n|S^-_j|η_{n+1}⟩ admits a simple factorized form, involving the momentum transfer Δ_{η_{n+1},χ_n} between |η_{n+1}⟩ and |χ_n⟩ and a function h_{η_{n+1},χ_n} of the momenta [34]. From Eq. (10), we immediately find that the summed matrix element F_{η_{n+1},χ_n}({g_j}) is proportional to g̃_{Δ_{η_{n+1},χ_n}}, the Fourier transform of {g_j}. As a byproduct, the matrix elements of the staggered magnetization, which measures the antiferromagnetic order in the XXZ chain with J'/J > 0, can be obtained by setting g'_j = (1/N) e^{iπj} in Eq. (15). Finally, the matrix elements Ḡ_{χ_n,χ'_n} can also be calculated from Eq. (10). The advantage of using the eigenbasis of the XX chain now becomes clear: the system-bath coupling constants enter the matrix elements F_{η_{n+1},χ_n}({g_j}) and G_{χ_n,χ'_n}({g'_j}) only through the Fourier transforms g̃_{Δ_{η_{n+1},χ_n}} and g̃'*_{Δ_{χ_n,χ'_n}}. The main task is to calculate the function h_{η_{n+1},χ_n} given by Eq. (12). Moreover, the matrix elements Ḡ_{χ_n,χ'_n} given by Eq. (20) also provide an alternative way to diagonalize the XXZ chain in a basis in which H_XY is diagonal (in contrast, the Ising term H_Z is diagonal in the real-space basis formed by the Ising configurations).

C. Initial states, time-evolved states, and equations of motion

We assume a separable initial state for the whole system, |ψ(0)⟩ = |φ^(S)⟩ ⊗ |φ^(B)⟩, where |φ^(S)⟩ is a general pure state of the central spin and |φ^(B)⟩ is a pure state of the spin bath, which can generally be written as a linear combination of component states having fixed numbers of fermionic excitations,

|φ^(B)⟩ = Σ_{n=0}^{N} Σ_{η_n} b_{η_n} |η_n⟩,  with Σ_{n=0}^{N} Σ_{η_n} |b_{η_n}|² = 1.
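To make the two computational ingredients of Sec. II B concrete, the sketch below evaluates parity-dependent free-fermion eigenenergies E_{η_n} and the Fourier transform g̃ through which the coupling profiles enter the matrix elements. This is our own illustration using the standard Jordan-Wigner conventions for the XX ring (antiperiodic momenta for even fermion number, periodic for odd, dispersion J cos k), which may differ in detail from the paper's Eqs. (7)-(14); the exponential coupling profile anticipates Eq. (28) below.

```python
import numpy as np

def momenta(N, n):
    """Allowed wave numbers for n fermions on an N-site XX ring
    (standard Jordan-Wigner conventions, assumed here)."""
    m = np.arange(N)
    if n % 2 == 0:
        return 2.0 * np.pi * (m + 0.5) / N    # sigma_n = +1 sector
    return 2.0 * np.pi * m / N                # sigma_n = -1 sector

def eigenenergy(eta, N, J=1.0):
    """E_{eta_n} for occupied momentum labels eta = (eta_1,...,eta_n), 1-based."""
    k = momenta(N, len(eta))
    return float(sum(J * np.cos(k[i - 1]) for i in eta))

def g_tilde(g, q):
    """Fourier transform of the site couplings {g_j}, j = 1..N, at momentum q."""
    j = np.arange(1, len(g) + 1)
    return np.sum(np.asarray(g) * np.exp(1j * q * j))

N = 16
g = (1.0 / N) * np.exp(-(np.arange(1, N + 1) - 1) / N)  # profile of Eq. (28), g = 1
print(eigenenergy((1, 2, 3), N))       # a 3-excitation eigenenergy
print(g_tilde(g, 2 * np.pi / N))       # coupling component entering F
print(abs(g_tilde(g, np.pi)))          # momentum-pi component (cf. m_s)
```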
Since the time evolution occurs in each sector with fixed α, the most general form of the time-evolved state is a superposition of the basis states |s_z⟩|η_n⟩ with s_z + n = α, carrying time-dependent amplitudes A^{i,α}_{α-n,η_n}(t) with initial conditions A^{i,α}_{α-n,η_n}(0) = a_{α-n} b_{η_n}, i = I, II, III. The amplitudes in each sector obey a Schrödinger equation governed by an effective Hamiltonian matrix; for category I, H^{I,α} is a D_{I,α} × D_{I,α} matrix whose structure is shown in Fig. 2. A similar analysis can be made for categories II and III. To obtain the time-evolved state |ψ(t)⟩, we need only simulate the time evolution of each amplitude vector A^{i,α}, governed by H^{i,α}, in each subspace with fixed α.

III. RESULTS

In this work, we mainly consider three types of initial states for the spin bath, i.e., the Néel state |AF⟩ = |↓↑...↓↑⟩, the ground state |G_XXZ⟩ of the XXZ chain, and the spin coherent state |Ω⟩.

A. The Néel state |AF⟩

The Néel state |AF⟩ = |↓↑...↓↑⟩ is one of the two degenerate ground states of the XXZ chain in the Ising limit J'/J → ∞. It has been used to detect the relaxation of antiferromagnetic order in the XXZ chain after a quantum quench [26], to probe the decoherence dynamics of a qubit coupled to both noninteracting [9] and interacting [23] spin baths, and so on. We use the equations-of-motion approach described in the preceding section to calculate the decoherence dynamics of a qubit coupled to an XXZ bath with N = 16 sites. The initial state of the central spin is chosen as |φ^(S)⟩ = (1/√2)(|↑⟩ + |↓⟩), and that of the spin bath as |φ^(B)⟩ = |AF⟩. We use the inhomogeneous hyperfine coupling [9]

g_j = (g/N) e^{-(j-1)/N},  (28)

which corresponds to a Gaussian wave function in a two-dimensional quantum dot [35]. Unless otherwise specified, we always use the inhomogeneous coupling given by Eq. (28), where the parameter g sets the overall energy scale through its relation to ω_fluc [23]. Note that the Fourier transform of g_j, a geometric sum, has a simple closed form. It is obvious that |AF⟩ lives in the manifold with excitation number n = N/2, so that α takes two possible values, α = (N ± 1)/2, and the time-evolved state is thus of type II (we assume 2S < N/2), as can be seen from Eq. (25). The dimension of the relevant Hilbert space is 2 × 24310 = 48620, and the simulation can be performed on a personal workstation. Figure 3(a) shows the time evolution of the decoherence factor |r(t)| = |⟨S^+(t)⟩/⟨S^+(0)⟩| [36] for both a noninteracting bath (J/ω_fluc = J'/ω_fluc = 0) and an XXX chain (J'/J = 1). In the former case, the result is fully consistent with that obtained by the Chebyshev expansion technique for N = 16 [23]. The latter case can be viewed as a simultaneous quench of both the anisotropy parameter and the hyperfine coupling: J'/J = ∞ → 1 and g_j = g'_j = 0 → (g/N) e^{-(j-1)/N}. The short-time dynamics of |r(t)| appears to be independent of the value of J/ω_fluc. However, |r(t)| starts to exhibit oscillations at long times and acquires a lower value when J/ω_fluc becomes finite, demonstrating the role played by intrabath interactions in the central-spin decoherence. Figure 3(b) shows the corresponding dynamics of the staggered magnetization ⟨m_s(t)⟩. The relaxation of antiferromagnetic order after a sudden quench of the anisotropy parameter J'/J in a pure XXZ chain was thoroughly studied in Ref. [26] for large systems using the infinite-size time-evolving block decimation algorithm. It was found that the relaxation time is minimal for a quench to the isotropic point J'/J = 1, which is the critical point separating the Luttinger liquid phase (J'/J < 1) and the gapped antiferromagnetic phase (J'/J > 1). The green dash-dotted curve in Fig. 3(b) shows the result for N = 16 when the central spin and the XXZ chain are decoupled.
It can be seen that ⟨m_s(t)⟩ relaxes to a nearly zero value around Jt ≈ 7.5 [inset of Fig. 3(b)], indicating the occurrence of the relaxation. The revivals appearing at long times are due to the finite-size effect. The blue dashed curve in Fig. 3(b) shows ⟨m_s(t)⟩ for J/ω_fluc = J'/ω_fluc = 1. It can be seen that at short times ⟨m_s(t)⟩ behaves similarly to the result without system-bath coupling (compare the green dash-dotted curve). In fact, the largest hyperfine interaction is 2g/N ≈ 0.37 ω_fluc for N = 16, indicating that the system lies in the weak system-bath coupling regime. The red solid curve in Fig. 3(b) shows the result for a strong hyperfine coupling (J/ω_fluc = J'/ω_fluc = 0.1). Interestingly, we find that ⟨m_s(t)⟩ drops more smoothly in the initial stage of the evolution. Moreover, there is a long-period collapse of ⟨m_s(t)⟩ in the time window ω_fluc t ∈ (47, 63). For a noninteracting bath with J = J' = 0, the staggered magnetization dynamics is driven solely by the hyperfine coupling, and ⟨m_s(t)⟩ reaches a minimum at ω_fluc t ≈ 24 (black dotted curve). These observations show that strong coupling to the central spin can suppress both the short-time decay and the long-time oscillations of the antiferromagnetic order within the bath, even for systems of intermediate size. To see the effect of the value of S on the staggered magnetization dynamics, we plot in Fig. 4 ⟨m_s(t)⟩ for S = 1/2, 1, 3/2, and 2 at a strong hyperfine coupling (J/ω_fluc = J'/ω_fluc = 0.1). It can be seen that an increase in S tends to accelerate the initial decay of ⟨m_s(t)⟩, due to the increase in the number of bath states involved in the composite dynamics.

B. The ground state |G_XXZ⟩

It is known that for J'/J > -1 the ground state of H_B is nondegenerate and possesses magnetization l_z = 0, while for J'/J < -1 the bath is ferromagnetic and has two degenerate ground states, namely, the two fully polarized states |↑...↑⟩ and |↓...↓⟩ [37]. Below we focus on the case J'/J > 0, so that the initial bath state |φ^(B)⟩ = |G_XXZ⟩ is unique. Recently, the purity dynamics of a qubit coupled to a bosonic bath was studied, and a long-time recovery of the purity under low-temperature and weak system-bath coupling conditions was observed [38]. It is therefore interesting to study the purity dynamics of a central spin coupled to an interacting spin bath at zero temperature. This protocol can be considered a sudden quench of the hyperfine coupling strength: at t = 0⁻ the system lies in a separable eigenstate associated with g_j = g'_j = 0 for all j, and the coupling constants are then suddenly switched to the finite values given by Eq. (28). In this subsection, we mainly focus on the case S = 1, for which the reduced density matrix ρ_S of the central spin can be expanded in terms of the spin-1 operators S^α and their quadratic combinations, with coefficients given by the expectation values a^α = ⟨S^α⟩ (α = x, y, z), q^{αα} = ⟨(S^α)²⟩ (α = x, y), and q^{αβ} = ⟨S^α S^β + S^β S^α⟩ (αβ = xy, yz, zx) [39]. The purity of the central spin is defined as P(t) = Tr[ρ_S(t)²]. Figure 5 shows P(t) after a sudden quench into the strong hyperfine coupling regime with J/ω_fluc = 0.1 for an XXZ chain with N = 14 sites. The results for various values of J'/J are shown to illustrate the influence of the different quantum phases of the XXZ chain on the purity dynamics of the central spin. In the limit J' = 0, the XXZ chain reduces to the XX chain, whose ground state for J > 0 is simply the fermionic Fock state |η_7⟩ = |1, 2, 3, 4, 12, 13, 14⟩.
In this case, the purity drops rapidly at short times and gradually approaches its minimal value ~1/3 at long times, after some oscillation in the intermediate-time regime (solid black curve). The overall profile of P(t) is lifted as J'/J increases from 0 to 1 within the gapless phase. Remarkably, we observe that P(t) acquires the highest values after a quench from the ground state at the critical point J'/J = 1 (red curve), beyond which its magnitude drops as J'/J increases further into the gapped phase. In particular, in the large-J'/J limit P(t) drops more abruptly at the beginning and approaches a steady value close to the minimal value 1/3, indicating that the central spin is approximately in a maximally mixed state at long times. These dynamical behaviors of the purity indicate that not only the system-bath coupling but also the internal phases of the XXZ bath have a significant influence on the central spin dynamics. Figure 6 shows the purity dynamics after a sudden quench into the weak hyperfine coupling regime with J/ω_fluc = 1, for which P(t) exhibits a less regular dependence on the parameter J'/J. Nevertheless, the initial drop of P(t) is still slowest for J'/J = 1. In addition, P(t) experiences nearly perfect periodic recoveries for large enough J'/J. These behaviors seem qualitatively consistent with those for a bosonic bath [38].

C. Spin coherent state

The polarization dynamics of a qubit coupled to a spin bath prepared in the spin coherent state has been studied in several prior works [11,12,17,22]. In this subsection, we choose the initial state |ψ(0)⟩ = |φ^(S)⟩ ⊗ |Ω⟩, where the spin coherent state of the spin bath is defined as [40]

|Ω⟩ = Σ_{n=0}^{N} Q_n |N/2, n - N/2⟩,  with Q_n = z^n √(C^n_N) / (1 + |z|²)^{N/2}  and  z = cot(θ/2) e^{-iφ}.

Here, |N/2, n - N/2⟩ is the Dicke state belonging to l = N/2 with magnetization l_z = n - N/2. The parameter α therefore takes N + 1 values, α = S, S + 1, ..., S + N. As a result, all three types of states in Eq. (25) are involved. The Dicke state can be expanded in terms of the fermionic states as shown in Refs. [12,22]. Below we focus on the polarization dynamics ⟨S^z(t)⟩ = ⟨ψ(0)|e^{iHt} S^z e^{-iHt}|ψ(0)⟩ of the central spin. Let us first look at the special case of homogeneous hyperfine couplings, i.e., g_j = g and g'_j = g' for all j. In this case, the Hamiltonian H becomes H_hom = H_S + H_B + g (S^x L^x + S^y L^y) + g' S^z L^z. For J = J' and S = 1/2, H_hom reduces to the model studied in Ref. [24], which conserves the total angular momentum L² of the bath. If one further sets J = 0, then H_hom reduces to the qubit–big-spin model studied in Ref. [13]. The dynamics of such a model can be solved analytically using either a recurrence method [13] or an interaction-picture method [12]. From [J Σ_{j=1}^{N} S_j · S_{j+1}, L^α] = 0 (α = x, y, z) and the fact that |Ω⟩ is a global rotation of the fully polarized state, we have J Σ_{j=1}^{N} S_j · S_{j+1} |Ω⟩ = (NJ/4) |Ω⟩; in other words, the spin coherent state |Ω⟩ is an eigenstate of J Σ_{j=1}^{N} S_j · S_{j+1} with eigenvalue NJ/4. Therefore, the dynamics generated by H_hom is independent of the value of J at the isotropic point J = J', where J Σ_{j=1}^{N} S_j · S_{j+1} commutes with the Hamiltonian H_hom. The top panel of Fig. 7 shows the polarization dynamics ⟨S^z(t)⟩/S of an S = 1/2 central spin for J = J' and under the resonant condition ω = g = g' [13]. It can be seen that the polarization exhibits the so-called collapse-revival behavior, with revival peaks occurring at gt ≈ mNπ (m ∈ Z), recovering the analytical results presented in Ref. [13]. The middle and bottom panels of Fig. 7 show ⟨S^z(t)⟩/S for S = 1 and S = 3/2, respectively.
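Before discussing the fine structures, we note that the coherent-state amplitudes entering these simulations are straightforward to generate and sanity-check numerically. The sketch below is our own illustration, using the standard spin-coherent-state normalization (which may differ in phase conventions from the paper's definition of Q_n); it verifies the normalization and the expectation value ⟨L^z⟩ = (N/2) cos θ.

```python
import numpy as np
from math import comb

# Amplitudes Q_n of the spin coherent state |Omega> in the Dicke basis
# |N/2, n - N/2>, assuming Q_n = z^n sqrt(C(N,n)) / (1+|z|^2)^(N/2)
# with z = cot(theta/2) exp(-i*phi)  (standard convention, assumed here).
N, theta, phi = 16, 0.7, 0.3
z = (np.cos(theta / 2) / np.sin(theta / 2)) * np.exp(-1j * phi)
Q = np.array([z**n * np.sqrt(comb(N, n)) for n in range(N + 1)])
Q /= (1 + abs(z) ** 2) ** (N / 2)

print(np.sum(np.abs(Q) ** 2))                 # normalization -> 1.0
lz = np.arange(N + 1) - N / 2                 # magnetization l_z = n - N/2
print(np.sum(np.abs(Q) ** 2 * lz))            # <L^z> from the amplitudes...
print(N / 2 * np.cos(theta))                  # ...agrees with (N/2) cos(theta)
```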
The polarization still shows collapses and revivals during the evolution, but with rich fine structures. For example, the initial revival region appears to show 2S discrete sub-peaks before the first collapse occurs. These structures reappear after the regular revival region consisting of 2S + 1 packets. Our formalism allows us to calculate the polarization dynamics in the presence of the intrabath interaction. Figure 8 shows ⟨S^z(t)⟩/S for various pairs (J/g, J'/g). It can be seen that the collapse-revival behavior is generally destroyed by the intrabath coupling, although for (J, J')/g = (1, 0.8) and (1, 1.2) there is some evidence of collapse at short times (middle column of Fig. 8), since these parameters are close to the isotropic point J'/J = 1. Note that (J' - J) Σ_{j=1}^{N} S^z_j S^z_{j+1} does not commute with the remaining part of H_hom; the dynamics thus depends not only on J' - J but also on J for J ≠ J' (right column of Fig. 8). In fact, since the term (J' - J) Σ_{j=1}^{N} S^z_j S^z_{j+1} breaks the conservation of L², the time-evolved state runs out of the l = N/2 subspace, making the collapse-revival phenomena fragile with respect to anisotropic intrabath coupling.

IV. CONCLUSIONS AND DISCUSSIONS

In this work, we obtain the exact dynamics of a composite system made up of a spin-S central spin and a coupled XXZ ring. The two parts interact with each other through inhomogeneous XXZ-type hyperfine coupling. We use the analytical representations of local spin-operator matrix elements in the XX chain to write out the equations of motion of the time-dependent amplitudes in each sector with fixed total magnetization. By solving these equations of motion for three types of initial bath states, i.e., the Néel state, the ground state of the XXZ chain, and the spin coherent state, we investigate the reduced dynamics of both the central spin and the spin bath. Under the Néel bath initial condition, we first simulate the decoherence dynamics of a spin-1/2 central spin inhomogeneously coupled to a noninteracting bath with N = 16 sites and obtain results consistent with those from the Chebyshev expansion technique [23], demonstrating the validity of the equations-of-motion method. Turning on the nearest-neighbor coupling within the bath, we find that the intrabath coupling has a significant effect on the central spin decoherence. On the other hand, the central spin also alters the dynamical behavior of the antiferromagnetic order measured by the staggered magnetization. We find that in the strong hyperfine coupling regime the short-time decay of the staggered magnetization is slowed down, while the long-time oscillations are suppressed, which facilitates the relaxation of the antiferromagnetic order. For fixed hyperfine couplings, we also find that an increase in S tends to accelerate the initial decay of the staggered magnetization. We then study the purity dynamics of an S = 1 central spin coupled to an XXZ chain prepared in its ground state. It is found that in the strong hyperfine coupling regime the purity reaches its highest values at the critical point of the XXZ chain. Finally, we study the polarization dynamics of a spin-S central spin homogeneously coupled to an XXZ chain in the spin coherent state. Under the resonant condition [13] and for S > 1/2, we observe collapse-revival behavior with fine structures. Including anisotropic intrabath coupling generally destroys the collapse-revival phenomena due to the breakdown of angular momentum conservation for the bath.
Our work implies not only that the intrabath coupling can have a significant influence on the central spin dynamics, but also that the central spin can affect the internal dynamics of the interacting spin bath. The results and theoretical method presented in this work may stimulate further investigations of interacting central spin models.
2021-08-17T01:16:19.433Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "77c27c5002f919157c42377bb2074d7d5bcf0662", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "77c27c5002f919157c42377bb2074d7d5bcf0662", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
255205768
pes2o/s2orc
v3-fos-license
Functional characterization and immunogenicity of a novel vaccine candidate against tick-borne encephalitis virus based on Leishmania-derived virus-like particles

decrease in TBEV infection among humans. Although a few vaccines against TBEV based on inactivated viruses are available for humans, vaccination is not mandatory in most of the affected countries due to high costs. Moreover, there is still no vaccine for veterinary use. Here, we present a characterization and immunogenicity study of a new potential TBEV vaccine based on virus-like particles (VLPs) produced in Leishmania tarentolae cells. VLPs, which mimic native viral particles but do not contain genetic material, show good immunogenic potential. For the first time, we show that the protozoan L. tarentolae expression system can be successfully used for the highly efficient production of TBEV virus-like particles. We confirm that the TBEV recombinant structural proteins (prM/M and E) from VLPs are strongly recognized by neutralizing antibodies in in vitro analyses. VLPs in combination with AddaVax adjuvant were therefore used in immunization studies in a mouse model. The VLPs proved to be highly immunogenic and induced the production of high levels of neutralizing antibodies. In a challenge experiment, immunization with VLPs provided full protection from lethal TBE in mice. We therefore suggest that Leishmania-derived VLPs may be a good candidate for a safe alternative human vaccine with high production efficiency. Moreover, this potential vaccine candidate may constitute a low-cost option for veterinary use.

Introduction

Infectious diseases remain the leading cause of morbidity and mortality in humans and animals worldwide. Respiratory viral infections and arboviral infections represent the major categories of emerging viral infections globally. Flaviviruses are vector-borne, positive-sense RNA viruses that can emerge unexpectedly in human populations and cause serious, medically important diseases. Tick-borne encephalitis virus (TBEV), an important representative of this group, can cause a disorder of the central nervous system that may lead to serious medical complications, including meningitis and meningoencephalitis (Dumpis et al., 1999). The main route of TBEV transmission is tick bites; however, other routes, such as the consumption of unpasteurized milk and milk products from infected animals such as goats, cows and sheep, remain important (Růžek et al., 2010).

The geographical range of TBEV, in the past confined to East Asia and Eastern Europe, is expanding quickly; at present, the virus is detected in almost all of Europe (Yoshii, 2019; Mansbridge et al., 2022). Recently, TBEV has also been reported in North Africa (Khamassi Khbou et al., 2020; Fares et al., 2021). The incidence of TBE has increased by over 400% during the past 20 years in Europe, making tick-borne encephalitis (TBE) the second most serious disease transmitted by ticks (Donoso Mantke et al., 2011). According to World Health Organization data, 10,000-12,000 tick-borne encephalitis cases are reported each year (World Health Organization, 2017).
TBEV is a small, enveloped, single-stranded RNA virus with a positive-polarity RNA genome of approximately 11 kb (Růžek et al., 2019). The viral RNA contains a single open reading frame (ORF), which is translated into a large polyprotein cleaved co- and post-translationally by cellular and viral proteases to yield three structural proteins (E, C and M) and seven nonstructural proteins involved in the replication cycle of the virus within a cell (Barrows et al., 2018). Two viral proteins (glycoprotein E and the small membrane protein M) play a major role in viral entry into target cells. Envelope glycoprotein E, the most exposed structural element of the virion, participates in the assembly of infectious particles and plays a role in viral entry, since it mediates interactions with specific cell surface receptors and induces fusion between the viral envelope and the host cell membrane. It is composed of three structural domains and a transmembrane domain required for anchoring the protein in a lipid membrane. Domain I contains an N-glycosylation site, and the fusion-loop peptide is located in domain II (Lattová et al., 2020). Domains I and II together are responsible for E protein dimerization. The immunoglobulin-like domain III is the most likely candidate for interactions with cellular receptors. It has also been shown that during infection, most neutralizing antibodies are directed against domain III of glycoprotein E (Zhang et al., 2017). The prM/M glycoprotein is a small membrane protein that, during maturation of viral particles, is cleaved into the pr peptide and the M protein present in mature virions. One N-glycosylation site is present in the pr fragment. The exact role of the prM protein in flaviviruses has not been fully determined, but it is believed to be a chaperone-like protein assisting in the proper folding of glycoprotein E. This protein is also required for pH-dependent rearrangements during virion maturation and for protection from premature fusion with cellular membranes (Roby et al., 2015).

Despite numerous research strategies, there is currently no licensed therapeutic agent available for the treatment of TBEV infections. Patients diagnosed with TBE are usually treated to alleviate the symptoms. As there are no treatment procedures available, it is important to search for innovative prevention methods and potential therapies. Vaccination is the most effective means of disease prevention. Five vaccines against TBE based on inactivated virus are currently on the market; in the EU, two vaccines are marketed: FSME-Immun® by Pfizer and the German Encepur® by Novartis. Both vaccines are based on formaldehyde-inactivated whole virus particles of the European subtype. Although these vaccines are safe and highly effective, some drawbacks exist: the vaccination schedule requires three doses to stimulate the development of a protective antibody response, and booster vaccinations are required every 3-5 years to maintain protective immunity, especially in the elderly population; vaccine failures even after a complete series of vaccine doses have been reported. Moreover, the production of inactivated vaccines carries the inherent risk of handling large quantities of potentially highly pathogenic virus (Lehrer and Holbrook, 2011). Due to the high costs and the multiple doses required, vaccination coverage in humans remains low in several endemic countries.
Currently produced human TBEV vaccines are not approved for veterinary use, and production costs limit their potential use for immunization of animals. A candidate vaccine for veterinary use has been developed, but it is also based on inactivated TBEV and has not yet been approved for clinical use (Salát et al., 2018). Given all the drawbacks of existing vaccines, there is an urgent need for the improvement of existing TBEV vaccines and the introduction of new, inexpensive vaccines that would be widely available for humans and could also be used for vaccination of animals, to cut off routes of transmission and reduce the number of virus reservoirs in the environment. The intensive efforts of many laboratories concentrate mainly on recombinant vaccines such as DNA vaccines or virus-like particles (VLPs) (Růžek et al., 2019; Barrett et al., 2003).

Virus-like particles based on recombinant proteins structurally very similar to the natural virions may provide alternative, specific antigens for vaccination purposes. Biological carriers in the form of virus-like particles are an innovative approach to vaccine construction due to their morphological, biophysical and antigenic properties, which are almost identical to those of natural virions, as well as their lack of genetic material (Lua et al., 2014). VLPs are spontaneously produced during flavivirus infection or may be produced in various expression systems as an alternative to authentic antigens, eliminating biosafety problems (Russell et al., 1980). As there is no need to work with the virus, VLPs are also much safer to produce than inactivated vaccines. Some VLP-based vaccines against hepatitis B virus and human papilloma virus have been approved by the FDA for use in humans (Lua et al., 2014; Fuenmayor et al., 2017). VLPs based on the prM and E proteins of TBEV are immunogenic and can potentially be used as vaccine antigens (Heinz et al., 1995).

The main eukaryotic platforms for the production of recombinant proteins are mammalian, insect and yeast expression systems. Here, we propose a new TBE vaccine candidate based on virus-like particles produced in the unconventional Leishmania tarentolae expression system. This system has previously been used successfully for the production of various proteins, especially those that require post-translational modifications such as glycosylation (Aparecida et al., 2019). For the first time, we show that an L. tarentolae expression system can be used to produce TBEV VLPs with high production efficiency. The system yields recombinant TBEV VLPs with mammalian-type N-glycosylation patterns.

The vaccine was tested in mice, and we demonstrated its safety and efficacy. The produced VLPs elicited good titers of neutralizing antibodies, making them a good candidate for a safe alternative human vaccine with low cost and high production efficiency. Moreover, this potential vaccine candidate may represent a low-cost option for veterinary use, to protect susceptible animals from symptomatic TBE or to vaccinate small ruminants to prevent milk-borne TBEV infections in humans.

Plasmids

The construction of the genes used for the production of recombinant proteins is summarized in Fig.
1. The sequences of the TBEV structural prM and E proteins (Neudoerfl strain) were separated by the sequence of the P2A self-cleavage peptide to provide efficient separation of the prM and E proteins. Additionally, a linker of 3 amino acids was added adjacent to the P2A sequence to reduce steric hindrance in the protein structure. The construct was obtained by gene synthesis using L. tarentolae-adapted codons (GeneArt, Thermo Fisher Scientific). Synthesized genes were ligated into the SalI and NotI restriction sites of the pLEXSY_I-blecherry3 vector (Jena Bioscience).

For the production of antigens used for assessing antibody titers in post-immunization sera in HEK293T cells, plasmids coding for the full-length prM-E proteins and for the E protein without a transmembrane domain were used. The prM-E construct was used to obtain mammalian-derived TBEV VLPs, as these proteins, when expressed together in mammalian cells, form such particles.

L. tarentolae cultivation and protein expression

Recombinant prMP2AE proteins were expressed in L. tarentolae cells using the inducible LEXSY expression system according to the manufacturer's guidelines (Jena Bioscience). Briefly, the plasmid was introduced into cells by electroporation to obtain a stable cell line. Transfected cells were subjected to polyclonal selection with bleomycin (100 μg/mL). Recombinant cell lines were cultured in selective medium with hemin at 26 °C under aerated conditions, protected from light. For recombinant protein expression, cells were induced by adding tetracycline (15 μg/mL) and grown in agitated culture for 72 h.

SDS-PAGE and western blotting

Analysis of protein expression and purification was conducted using SDS-PAGE. Samples were run under reducing or nonreducing conditions on 10-20% gradient Tris-glycine gels in Tris-glycine SDS running buffer. After electrophoresis, the gel was used for either Coomassie staining or western blotting. Coomassie staining was performed using Imperial™ Protein Stain (Thermo Fisher Scientific). For western blotting, the proteins were transferred onto PVDF membranes by wet overnight transfer in buffer containing 25 mM Tris-Base and 150 mM glycine. After the membrane was blocked with 5% nonfat milk in TBS-T (TBS buffer with 0.1% Tween-20 (v/v)), proteins were detected with a specific monoclonal anti-Flavivirus group antigen antibody (4G2) (Absolute Antibody) (1:2000 dilution), the monoclonal anti-TBEV E protein antibody 19/1786 kindly provided by Professor Matthias Niedrig (1:1000 dilution), or in-house produced polyclonal rabbit anti-prM serum (1:1000 dilution), followed by anti-mouse or anti-rabbit HRP-conjugated secondary antibodies (Santa Cruz Biotechnology) (diluted 1:3000). Blots were developed using the SuperSignal™ West Pico Plus substrate system (Thermo Fisher Scientific) and the Alliance™ Q9-Series Chemidoc system (UVITEC).

Ultracentrifugation in a sucrose density gradient

The medium from induced cells was collected and ultracentrifuged through a 20% (w/w) sucrose cushion in TNE buffer (10 mM Tris-HCl, 150 mM NaCl, 2 mM EDTA, pH 7.4) for 3 h at 130,000×g. The supernatant was removed, and the pellet was resuspended overnight in PBS with protease inhibitors. Subsequently, samples were treated with or without 1% Triton X-100 on ice for 1 h, overlaid on a 20-60% (w/w) sucrose gradient in TNE buffer, and ultracentrifuged for 16 h at 135,000×g. A total of 7 fractions were collected and analyzed by western blotting and Coomassie staining as described above.
Analysis of N-glycosylation

N-glycosylation was analyzed with PNGase F (Thermo Fisher Scientific). A sample of purified VLPs was divided into two equal portions, which were incubated under denaturing conditions: one portion was treated with PNGase F for 16 h at 37 °C, while the other served as an undigested control, also incubated for 16 h at 37 °C. After digestion, the samples were analyzed by mobility shift assay with western blotting as described above.

ELISAs for VLP characterization

ELISA plates were coated overnight at 4 °C with purified VLPs at 5 μg/mL in PBS buffer at pH 7.4. The plates were then blocked with 3% BSA (w/v) in PBS-T (PBS buffer with 0.05% Tween-20 (v/v)) for 2 h at RT. Three different primary antibodies were used: the mouse monoclonal anti-Flavivirus group antigen antibody (4G2) (Absolute Antibody), the mouse monoclonal neutralizing antibody 19/1786, and polyclonal rabbit anti-prM serum, at dilutions from 1:100 to 1:500,000. The antibodies were diluted in 0.3% BSA in PBS-T, and the plates were incubated for 1 h at RT. Primary antibodies were detected with anti-mouse or anti-rabbit HRP-conjugated secondary antibodies (Santa Cruz Biotechnology) diluted 1:1500 in 0.3% BSA in PBS-T. The reaction was visualized with TMB Substrate Solution (Thermo Fisher Scientific). After the reaction was stopped with 0.5 M H2SO4, the signal intensity was measured at 450 nm with a plate reader (Tecan).

Electron microscopy and immunogold labeling

For visualization of particles, fractions from density gradient ultracentrifugation were diluted 1:10 in PBS and deposited on carbon-coated 200-mesh nickel grids. Negative staining was performed with 2% uranyl acetate. For immunogold labeling, grid-deposited particles were blocked with Blocking Solution for Goat Gold Conjugates (Aurion). Grids were washed three times with incubation buffer (PBS buffer with 0.1% BSA-c (Aurion)) and incubated with primary 4G2 or 19/1786 antibodies diluted 1:40 in incubation buffer for 1 h at RT. Following six washes with incubation buffer, labeling was performed with goat anti-mouse IgG conjugated with 6 nm gold particles (Aurion) diluted 1:40 in incubation buffer for 1 h at RT; the grids were then washed again and fixed with 4% paraformaldehyde. After washing, the grids were stained with 2% uranyl acetate. Samples were analyzed using a Tecnai G2 Spirit BioTWIN transmission electron microscope (FEI) (Faculty of Biology, University of Gdansk, Gdansk, Poland).

Nanoparticle tracking analysis

Size distribution and concentration analyses were carried out using an NS300 NanoSight NTA instrument (Malvern Panalytical). Samples were prepared by dilution with sterile PBS buffer to a concentration of 0.1 mg/mL and were measured with five 60 s tracking repetitions. Data were analyzed using NTA 3.4 software (Malvern Panalytical).
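For readers unfamiliar with NTA outputs, the conversion from a measured particle concentration to a total yield (as reported later in the Results) is simple arithmetic. The sketch below is our own illustration with made-up numbers, not values from this study.

```python
# Minimal sketch (made-up numbers): NTA reports particles per mL of the
# diluted sample; scaling back by the dilution factor and multiplying by
# the stock volume gives the total particle yield of a preparation.
conc_diluted = 2.0e9     # particles/mL read from NTA (hypothetical)
dilution = 50            # fold-dilution applied before the measurement
volume_ml = 1.0          # volume of the purified VLP stock, in mL

total_particles = conc_diluted * dilution * volume_ml
print(f"estimated yield: {total_particles:.2e} particles")  # 1.00e+11 here
```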
Immunization protocol

Groups of 6 female BALB/c mice, 6-8 weeks of age, were immunized subcutaneously with a mixture of antigen and adjuvant. Mice were immunized with 10 μg of antigen in sterile PBS buffer on Days 0, 14 and 28. The total protein content of the VLP antigen used for immunization was quantified using the Quick Start™ Bradford Protein Assay (Bio-Rad). AddaVax (InvivoGen) was used as the adjuvant and was mixed with the antigen at a 1:1 (v/v) ratio directly before injection. For the first dose, animals received 200 μL of the antigen-adjuvant mixture administered at two injection sites, 100 μL per site (10 μg of protein in 100 μL of PBS + 100 μL of AddaVax, divided into two 100 μL portions). For the second and third doses, 100 μL of the antigen-adjuvant mixture was administered at a single injection site (10 μg of protein in 50 μL of PBS + 50 μL of AddaVax). Mice used as negative controls were immunized with adjuvant and PBS buffer only. On Day 42, the mice were sacrificed, and sera were collected for analysis of the immunological response. All animal experiments were conducted by an accredited facility (Tri-City Academic Laboratory Animal Centre, Medical University of Gdansk, Gdansk, Poland) in accordance with current guidelines for animal experimentation. The protocols were approved by the Local Committee on the Ethics of Animal Experiments of the University of Science and Technology in Bydgoszcz (Permit Number: 17/2020). All surgeries were performed under isoflurane anesthesia, and all efforts were made to minimize suffering.

Preparation of antigens for mouse sera titration

HEK293T cells were transfected with plasmids coding for the full-length prM-E proteins or the E protein and cultivated for 72 h. The cells and medium were then collected for analysis and for protein or VLP purification. E protein was purified from the cell lysate on Ni-NTA resin: cells were lysed in buffer containing 300 mM NaCl, 0.5% Triton X-100, 5% glycerol, and 10 mM imidazole, pH 8, and sonicated, and the lysate was purified on HisPur Ni-NTA Spin Columns (Thermo Fisher Scientific) according to the manufacturer's instructions. Culture medium from cells transfected with the prM-E construct was used for VLP purification; the VLPs were purified by ultracentrifugation as described above.

Analysis of mouse serum antibody titers by ELISA

Collected mouse sera were divided into two groups. The antibody response against TBEV was measured by ELISA using 10 μg/mL mammalian cell-derived TBEV VLPs and 15 μg/mL mammalian cell-derived TBEV E protein as antigens. After overnight coating, the plates were blocked for 2 h with 3% BSA (w/v) in PBS-T. Serially diluted mouse sera were added to the plates and incubated for 2 h. The binding of serum antibodies to the recombinant proteins was detected with goat anti-mouse HRP-conjugated secondary antibodies (Santa Cruz Biotechnology) (dilution 1:1500) and TMB Substrate Solution (Thermo Fisher Scientific). After the reaction was stopped with 0.5 M H2SO4, the signal intensity was measured at 450 nm with a plate reader (Tecan).

The titers of anti-TBEV antibodies were also analyzed with the commercial IMMUNOZYM FSME IgG All-Species kit (Progen GmbH), in which IgG antibodies in the sera of immunized mice were quantified according to the manufacturer's guidelines. This test allows the determination of specific IgG antibodies against TBEV in Vienna Units (VIEU/mL) based on a standard curve and reaction with inactivated virus.
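Quantification against a standard curve, as in the Vienna Unit assay above, amounts to interpolating sample readings on a fitted calibration function. As a generic illustration of the procedure (our own sketch with made-up numbers, not the IMMUNOZYM kit's actual algorithm), a four-parameter logistic (4PL) model is the usual choice for ELISA optical density versus concentration:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: lower asymptote, d: upper asymptote, c: inflection point, b: slope
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard-curve data (VIEU/mL vs OD450); values are made up.
std_conc = np.array([5, 10, 25, 50, 100, 200.0])
std_od = np.array([0.12, 0.22, 0.48, 0.85, 1.40, 1.95])
popt, _ = curve_fit(four_pl, std_conc, std_od,
                    p0=[0.05, 1.0, 50.0, 2.2], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL curve within the dynamic range of the assay."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.9, *popt))   # sample OD -> estimated VIEU/mL
```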
Viruses

For the virus neutralization assay and the challenge experiment, we used the TBEV strain Hypr (a Czech prototype strain originally isolated in Czechoslovakia in 1953 from the blood of a 10-year-old child infected with TBEV), passaged five times in the brains of suckling mice and once in porcine stable kidney (PS) cells before its use in the present study. The virus was provided by the Collection of Arboviruses, Biology Centre of the Czech Academy of Sciences (https://arboviruscollection.bcco.cz).

Virus neutralization assay

Sera were heat-inactivated (56 °C for 30 min) and diluted 1:4 in Leibowitz L-15 medium (Sigma-Aldrich) with 3% fetal bovine serum, 100 U/mL penicillin, 100 μg/mL streptomycin, and 1% glutamine (Sigma-Aldrich). Subsequently, 2-fold serial dilutions of the samples in L-15 medium (50 μL/well) were incubated with 10³ PFU/mL of TBEV strain Hypr (50 μL/well) in 96-well plates for 90 min at 37 °C. The virus dose was adjusted to produce a near-confluent cytopathic effect with 90-95% cytolysis. Porcine kidney stable (PS) cells were then added (3 × 10⁴ cells in 100 μL per well). After 5 days of incubation, the cytopathic effect was examined using an inverted microscope (Olympus). The highest serum dilution that inhibited the cytopathic effect of the virus was taken as the endpoint titer. Samples with a titer of 1:20 or higher were considered positive for the presence of anti-TBEV neutralizing antibodies. The data represent the mean values from two independent experiments performed in duplicate.

Challenge experiment

Ten female BALB/c mice, 6 weeks of age (Envigo), were immunized according to the immunization protocol described above with a mixture of antigen and adjuvant. Another ten mice injected with adjuvant only served as the control group. To evaluate the protective effect of vaccination, all immunized and control mice were infected intraperitoneally with TBEV (10³ PFU per mouse, strain Hypr) 18 days after the third dose. The morbidity and survival of the infected mice were evaluated daily during a four-week experimental period. Mice were euthanized when severe signs of TBE neuroinfection occurred. The challenge experiment was performed in accordance with Czech law and guidelines for the use of laboratory animals. The protocol was approved by the Departmental Expert Committee for the Approval of Projects of Experiments on Animals of the Ministry of Agriculture of the Czech Republic and the Committee on the Ethics of Animal Experimentation at the Veterinary Research Institute (Approval No. 26674/2020-MZE-18134).

Statistical analysis and graphic design

Statistical analyses were performed using GraphPad Prism 9.3.1 software. The graphic design was prepared with BioRender.

Expression and characterization of Leishmania-derived TBEV VLPs

TBEV prM and E proteins were previously shown to form virus-like particles when expressed together in eukaryotic cells (Allison et al., 1995; Schalich et al., 1996). In the present study, the prMP2AE construct (Fig. 1) based on both proteins was used to produce TBEV VLPs in the L. tarentolae expression system. The sequences of the prM and E genes were cloned into the pLEXSY_I-blecherry3 vector. The original signal sequence of the prM protein was replaced with the one from the pLEXSY_I-blecherry3 vector, the signal peptide of the LMSAP1 phosphatase from L.
mexicana, which is naturally secreted from cells. This substitution was made to increase the amount of protein secreted into the culture medium and to provide proper post-translational processing (Wiese et al., 1995). The sequences of the prM and E protein genes were separated by the P2A peptide sequence. The P2A peptide from porcine teschovirus-1 was added to facilitate the separation of the proteins and subsequent VLP formation (Fig. S1). The genetic sequence of the P2A peptide was introduced after the prM protein gene, followed by the signal sequence (ss) (the second transmembrane domain of the prM protein) and the E protein gene. The P2A sequence is preceded by a short, 3-amino-acid linker to avoid steric hindrance (Kim et al., 2011). The genetic sequence of the whole prMP2AE construct was codon-optimized for the L. tarentolae expression system.

The expression of recombinant proteins was carried out in cell cultures of recombinant protozoa using an inducible stable cell line of L. tarentolae (Kushnir et al., 2005). Production was performed for 72 h after tetracycline induction. Protein expression in the cell extract and culture medium was confirmed by immunoblotting with specific antibodies (Fig. 2). Both the prM/M and E proteins were detected in cell extracts at high levels. These proteins were also secreted in substantial amounts into the culture medium, which was chosen for further analyses. The molecular mass of the E protein was determined to be approximately 50 kDa. Two forms of prM/M were detected at approximately 17-20 kDa. The uncleaved prM protein has a theoretical mass of ~26 kDa, the pr fragment of ~17 kDa, and the mature M protein of approximately 10 kDa. At least one of the detected forms may correspond to the pr fragment or the uncleaved prM protein; in particular, we did not observe any band that could correspond to the M protein. This finding may indicate that the pr fragment is not fully cleaved from the M protein and that the pr fragment as well as the intact prM protein may be present. As the prM protein has 11 sites with high phosphorylation potential according to prediction with NetPhos-3.1 software (DTU Health Tech), the presence of two bands may also be attributed to phosphorylated and unphosphorylated forms of this protein.

To confirm that the recombinant proteins form VLPs in the culture medium, we conducted further analyses. The formation of higher-density structures was first confirmed by ultracentrifugation of VLPs from the culture medium in a sucrose density gradient. Seven fractions were harvested and analyzed by immunoblotting. Both proteins were detected in fractions with approximately 36-44% sucrose (Fig. 3a). According to Schalich et al. (1996), the buoyant density of TBEV VLPs is approximately 1.14 g/cm³, which is in agreement with our results, as a sucrose density of 36% (w/w) corresponds to approximately 1.15 g/cm³. Furthermore, we analyzed the detergent sensitivity of the obtained VLPs. VLPs were treated with the strong nonionic detergent Triton X-100 and again ultracentrifuged in a sucrose density gradient (Fig. 3b). After treatment, the majority of both proteins were detected in fractions with a lower sucrose density and/or did not efficiently enter the gradient. As Triton breaks down higher-order protein and membrane structures, these results indicate that complex, enveloped particles are being formed. Coomassie staining of the collected fractions showed that ultracentrifugation allowed VLP purification (Fig.
The fractions with the highest concentrations of VLPs were combined, and the protein concentration was determined by the Bradford method. The efficiency of VLP production was approximately 7-10 mg per 1 L of Leishmania culture.

To finally confirm VLP formation, we performed transmission electron microscopy analysis. The analyzed samples contained spherical particles with a diameter of approximately 50-60 nm (Fig. 3d). Additionally, the quality of the obtained particles was verified with immunogold labeling. Two specific monoclonal antibodies against the E protein were used: a 4G2 antibody against the fusion loop epitope and a 19/1786 neutralizing antibody that binds to the conformational epitope between the DI-DIII domains of the E protein (Füzik et al., 2018). Both antibodies reacted with VLPs, suggesting that these epitopes are properly exposed on the surface of the produced particles.

Moreover, nanoparticle tracking analysis (NTA) was performed to assess the size distribution and concentration of purified VLPs. The analysis showed that the population of particles was homogeneous in size (Fig. 3e). The mean hydrodynamic diameter was calculated to be 159.5 ± 2.0 nm. Since the same analysis performed on purified medium from wild-type L. tarentolae showed only the presence of much smaller particles (Fig. S2), this suggests that the analyzed particles are indeed VLPs. The estimated number of VLPs purified from 1 L of culture was calculated to be 9.1 × 10¹⁰. NTA also allowed the stability assessment of VLPs. The analysis was carried out on two samples, one freshly purified and one stored at 4 °C for 18 months after purification. There was only a slight change in particle distribution between the samples, which may suggest that the VLPs can be successfully stored for long periods of time. This was also confirmed by ELISA, western blotting and Coomassie staining, which did not show differences between the freshly purified sample and the one stored at 4 °C for 18 months after purification (Fig. S3).

Purified VLPs were subjected to further functional analyses. The antigenic properties of the VLPs were assessed by ELISAs using the same antibodies as in immunogold labeling: the specific monoclonal antibodies 4G2 and 19/1786 as well as anti-prM polyclonal serum (Fig. 4a). ELISAs clearly indicated that the produced VLPs are specifically and strongly recognized by the 19/1786 antibody. As 19/1786 is a neutralizing antibody, the strong binding with VLPs may suggest the proper conformation of the E glycoprotein. The detection with anti-prM serum was not as efficient in the ELISA test. Strong recognition by this serum in the previous Western blot assay may suggest that the produced and purified VLPs are a mixture of mature and only partially mature particles. The produced VLPs were also weakly recognized by the 4G2 antibody; thus, we believe that the fusion loop is covered by the prM protein or hidden in the produced VLPs.

Furthermore, the N-glycosylation of the prM/M and E proteins present on VLPs was analyzed by treatment with endoglycosidase PNGase F (Fig. 4b).
Both the E and prM (pr fragment) proteins possess one N-glycosylation site. In both cases, a shift in molecular mass was observed by western blots after enzyme treatment, which proved that both proteins are glycosylated. Moreover, both forms of the prM/M protein were affected by PNGase F treatment, which confirmed that the detected prM/M proteins were in the form of pr fragments or uncleaved prM proteins. Nevertheless, taken together, the efficient expression and the data from the functional analyses suggest that the VLPs may have high potential as immunogens. Therefore, VLPs purified from the cell culture medium were used for immunization studies in an animal model.

Immunogenicity of Leishmania-derived TBEV VLPs

For determination of the immunogenicity of TBEV VLPs, a group of BALB/c mice were immunized subcutaneously with 3 doses of 10 μg of VLPs in combination with an adjuvant on Days 0, 14 and 28. AddaVax, a squalene-based oil-in-water nanoemulsion, was used as the adjuvant to improve the immunogenic response. AddaVax is an analog of the MF59 adjuvant licensed for human use in Europe. Blood samples were taken before each immunization and 14 days after the last vaccination (on Day 42). Mice in the control group were immunized following the same schedule but with PBS buffer in the presence of AddaVax adjuvant only. The animals did not show any side effects during vaccination. Sera were pooled, and the humoral response elicited by immunization was analyzed by determining specific antibody titers using an ELISA test (Fig. 5). VLPs (Fig. 5a) and E protein (Fig. 5b) produced in mammalian cells were used as antigens for titration. The obtained results confirmed that full immunization with the produced VLPs resulted in high antibody titers, reaching 1 × 10⁵. Analyses of antibody levels after each immunization showed that the titer in the experimental group began to grow after the second immunization, while the level of antibodies in sera from the control group did not show significant changes. Similar levels of antibodies for these two antigens may suggest that most of them are directed against the E protein. The commercially available test based on inactivated virus allowed estimation of antibody levels in the sera of vaccinated mice in Vienna Units (Fig. 5c). In the experimental group, the specific antibody concentration was approximately 55 VIEU/mL, while in the control group it was equal to approximately 5 VIEU/mL. The level of antibodies after the final immunization was significantly higher than in the control animals in all tests.
Subsequently, the neutralizing potential of postimmunization sera was analyzed. The experiment was conducted with the TBEV Hypr strain. The sera from the experimental group were able to neutralize the virus up to a dilution of 1:160. The serum from the control group did not show any neutralizing potential. Finally, a challenge experiment with a lethal dose of TBEV was performed to verify whether immunization with the prepared vaccine would protect animals from the development of TBE (Fig. 6a). All vaccinated and infected mice survived until the end of the experiment at 28 days post-infection (Fig. 6b) and did not show any symptoms of TBE (Fig. 6c). In contrast, mice from the control group started to develop symptoms on the sixth day post-infection and had to be euthanized by the eleventh day after infection. Therefore, we can conclude that the VLPs produced in L. tarentolae were highly immunogenic, causing effective production of neutralizing antibodies and providing protection against a lethal dose of TBEV, as confirmed in the challenge experiment.
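The endpoint logic used in the neutralization assay (and in the ELISA titrations, where the criterion is an absorbance above the mean background plus two standard deviations) reduces to a "last dilution that still meets the criterion" rule. A minimal sketch with hypothetical well readouts follows; the function and the data are ours, for illustration only, and do not reproduce any measurement from this study.

def endpoint_titer(dilution_factors, meets_criterion):
    """Highest (most dilute) serum dilution that still meets the assay
    criterion, e.g. full inhibition of the cytopathic effect."""
    passing = [d for d, ok in zip(dilution_factors, meets_criterion) if ok]
    return max(passing) if passing else None

dilutions = [20, 40, 80, 160, 320, 640]                  # two-fold series
cpe_inhibited = [True, True, True, True, False, False]   # hypothetical wells

titer = endpoint_titer(dilutions, cpe_inhibited)
print(f"neutralization titer: 1:{titer}")                # -> 1:160
print("positive" if titer is not None and titer >= 20 else "negative")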
Discussion

Despite available vaccines, TBEV is still a major concern in many European and Asian countries (Bogovic, 2015). This report is the first to evaluate the production of TBEV VLPs in the L. tarentolae expression system, which provides post-translational processing, including glycosylation, very similar to that of mammalian cells. Only a few viral antigens have previously been successfully produced in the L. tarentolae system (Breton et al., 2007; Baechlein et al., 2013; Pion et al., 2014; Grzyb et al., 2016; Czarnota et al., 2016; Fischer et al., 2016). L. tarentolae cultures can be easily scaled up; therefore, they are good candidates for industrial-scale production. The use of this expression system could also lead to significantly lower vaccine production costs than are currently incurred for inactivated vaccines. Production costs in the L. tarentolae system are lower than in mammalian cells, owing to the lower cost of the culture medium as well as the less demanding requirements of the cell cultures. The purification procedure can also be carried out more easily and at a lower cost because the culturing medium for L. tarentolae has fewer components than the medium for mammalian cells. All these, in turn, could directly translate into greater availability of the TBEV vaccine. Additionally, it may lead to the development of a cheap veterinary vaccine, as immunization of animals can be a way to reduce viral reservoirs and reduce transmission to humans (Salát and Růžek, 2020).

In our study, the introduction of additional elements (e.g., the signal sequence from L. mexicana and the P2A peptide) in addition to the sequences of the TBEV structural proteins made it possible to obtain a very high yield of recombinant particles. Although some studies have shown that the addition of the L. mexicana signal peptide can impair the production of some recombinant proteins (Breton et al., 2007; Pion et al., 2014), our results confirmed the data obtained by Wiese et al. (1995) and Grzyb et al. (2016), which indicated that proteins fused with this signal sequence are efficiently produced and successfully secreted into the culture medium. Codons were optimized to obtain the highest possible production efficiency (Breton et al., 2007). Moreover, to our knowledge, the P2A peptide was successfully used for the first time to facilitate the separation of the TBEV prM and E proteins. All these factors allowed not only a high expression efficiency but also a very high secretion of particles into the culture medium, which translates into the ease of their purification using a one-step purification process (Figs. 2 and 3a, c). VLP formation was further confirmed by electron microscopy analysis using immunogold labeling with two different antibodies recognizing the E protein (Fig. 3d). The high level of recognition of the particles by the neutralizing 19/1786 antibody suggests that important epitopes are correctly exposed and can therefore induce a strong immunological response (Füzik et al., 2018).
L. tarentolae has been shown to be the first single-cell organism able to produce biantennary N-glycans similar to those in higher eukaryotic organisms, lacking only sialylation (Breitling et al., 2002). Both the TBEV prM and E proteins have one N-glycosylation site. The role of these glycans is still not fully understood, but it has been shown that the removal of N-glycans from the E protein reduces the infectivity of viral particles (Yoshii et al., 2013). Therefore, it is important that the glycosylation pattern is also maintained in the vaccine candidate antigen so that the immune response induced by the vaccine provides the highest possible level of protection from the native virus. It has also been shown that glycosylation may be important for TBEV VLP secretion (Goto et al., 2005). In this study, the glycosylation profile of the proteins composing the VLPs was also examined. We confirmed that both the prM and E proteins were fully glycosylated, as shown by PNGase F treatment (Fig. 4b).

The strong immunogenic potential of Leishmania-derived TBEV VLPs was confirmed by immunization of mice. Our study showed that immunization with VLPs results in high antibody titers measured by ELISAs with heterologous antigens derived from mammalian cells (Fig. 5), and the sera from immunized animals had strong neutralizing properties against the virus. Moreover, immunization with VLPs protected mice from developing any TBE symptoms in experimental infection with a lethal dose of TBEV (Fig. 6). Without a doubt, the high safety profile and strong immunogenic potential of the vaccine antigen characterized in this study call for further investigation of these promising observations.

To our knowledge, this is the first study undertaken to prove that the production of flaviviral VLPs is possible in a system based on the protozoan L. tarentolae. We have also shown that the particles produced in this system have strong immunogenic properties and are a good candidate for a cost-efficient and highly effective TBEV vaccine. Further studies, such as the analysis of protection from other virus subtypes and safety studies, need to be conducted to confirm the high potential of this vaccine antigen. However, based on the present study, we believe that the VLPs described in this report may be good candidates for the production of TBEV vaccines on an industrial scale.

Fig. 1. Schematic illustration of the amino acid sequence of the prMP2AE construct used for the construction of the L. tarentolae stable cell line. ssL: signal sequence of the LMSAP1 phosphatase from L. mexicana, region 1-23 in the amino acid sequence of this protein (GenBank accession number: CAA87090.1); prM: premembrane protein of TBEV, region 114-281 in the amino acid sequence of the Neudoerfl strain polyprotein (GenBank accession number: AAA86870.1); GSG: linker sequence; P2A: self-cleavage peptide from swine Teschovirus-1, region 979-997 in the amino acid sequence of the polyprotein of this virus (GenBank accession number: NP_653143.1); ss: signal sequence for the E protein of TBEV from the transmembrane domains of the prM protein, region 212-281 in the amino acid sequence of the Neudoerfl strain polyprotein; E: envelope protein of TBEV, region 282-776 in the amino acid sequence of the Neudoerfl strain polyprotein.

Fig. 2. Analysis of the production of prM and E proteins by L. tarentolae. Western blot analysis of prM and E protein production in cell extract and medium with 4G2 anti-E mAbs (top) and L24 anti-prM serum (bottom) under nonreducing conditions. Cell extract and medium from wild-type L. tarentolae (wt) were used as a negative control.
Fig. 3. Analysis of virus-like particle formation. a Western blot analysis of fractions collected after ultracentrifugation in a sucrose density gradient (0-60% sucrose/TNE) with 4G2 anti-E Abs and L24 anti-prM serum under nonreducing conditions. b Western blot analysis of fractions collected after ultracentrifugation in a sucrose density gradient (0-60% sucrose/TNE) with 4G2 anti-E Abs (top) and L24 anti-prM serum (bottom) under nonreducing conditions. Prior to ultracentrifugation, the sample was treated with Triton X-100. c Coomassie staining of fractions collected after ultracentrifugation in a sucrose density gradient (0-60% sucrose/TNE) of samples not treated with Triton X-100, under reducing conditions. M: molecular marker. d Transmission electron micrographs of virus-like particles. After ultracentrifugation in a sucrose density gradient, the particles were negatively contrasted with 2% uranyl acetate and analyzed with a transmission electron microscope (left). For confirmation of the specificity of the observed particles, immunogold labeling with 4G2 (middle) and 19/1786 (right) anti-E mAbs and secondary Abs conjugated with 6 nm gold particles was performed. Scale bar: 100 nm. e Analysis of the size distribution and quantification of freshly purified VLPs (left) and VLPs after 18 months of storage at 4 °C. Histograms show the average size distribution of the measured particles. Numbers in blue indicate the size of particles from each peak, and the red surface corresponds to the standard deviation values. Numbers on the y-axis are the results from samples with a concentration of 0.1 mg/mL VLPs.

Fig. 4. Functional analysis of purified VLPs. a Recognition of particles with specific antibodies. The 4G2 and 19/1786 anti-E mAbs and L24 anti-prM serum were used. The plate was coated with 5 μg/mL VLP protein. For each antibody, the mean from two independent experiments performed in triplicate is shown. Error bars indicate standard deviations. b Western blot analysis of prM and E protein mobility shifts after PNGase F treatment under reducing and denaturing conditions. The 19/1786 anti-E mAbs (top) and L24 anti-prM serum (bottom) were used.

Fig. 5. Analysis of the humoral response after immunization with Leishmania-derived VLPs in BALB/c mice. VLPs + AddaVax refers to a group immunized with antigen, and PBS + AddaVax is a control group vaccinated only with adjuvant. a The results show antibody titers in mouse sera before every immunization and 14 days after the last immunization. Mammalian-derived VLPs (10 μg/mL) were used as an antigen. Baseline was established as the serum antibody level prior to the vaccination. P values were calculated using the multiple t-test (****, P < 0.05). b Titers of anti-E antibodies in sera of immunized mice 14 days after the last immunization. The plate was coated with 15 μg/mL purified E protein. The P value was calculated using the unpaired t-test (*, P < 0.05). Antibody titers were calculated as the highest serum dilution for which the absorbance value was higher than the mean background value plus two standard deviations (a, b). c The concentration of anti-TBEV antibodies in mouse sera 14 days after the last immunization, based on the standard curve. The P value was calculated using the unpaired t-test (****, P < 0.0001). The data represent the values from three independent experiments performed in duplicate, and error bars indicate standard deviations.

Fig. 6. Efficacy of the vaccine candidate in challenge experiments. a Experimental protocol. Mice were immunized with three doses of the vaccine candidate (VLPs + AddaVax) two weeks apart. A control group was injected with the adjuvant only (PBS + AddaVax). Eighteen days after injection of the third dose, the mice were challenged with authentic TBEV. Morbidity and survival were assessed during a four-week experimental period. (Figure created with Servier Medical Art, available at www.servier.com). b Kaplan-Meier survival curve. The P value was calculated using the Mantel-Cox test (***, P < 0.001). c Histograms show disease progression in the control group receiving adjuvant alone (left) and in the group immunized with the vaccine candidate (right).
2022-12-29T16:11:24.875Z
2022-12-27T00:00:00.000
{ "year": 2022, "sha1": "5f9d49a61d5fc6e110693b6b8154bf0fe76cf4f4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.antiviral.2022.105511", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "ec49cf83a691246973f30c818b82e7b866f20a52", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
1616601
pes2o/s2orc
v3-fos-license
Generalized Background-Field Method

The graphical method discussed previously can be used to create new gauges not reachable by the path-integral formalism. By this means a new gauge is designed for more efficient two-loop QCD calculations. It is related to but simpler than the ordinary background-field gauge, in that even the triple-gluon vertices for internal lines contain only four terms, not the usual six. This reduction simplifies the calculation in spite of the necessity to include other vertices for compensation. Like the ordinary background-field gauge, this generalized background-field gauge also preserves gauge invariance of the external particles. As a check of the result and an illustration of the reduction in labour, an explicit calculation of the two-loop QCD $\beta$-function is carried out in this new gauge. It results in a saving of 45% of computation compared to the ordinary background-field gauge.

Introduction

Physical processes in QCD are gauge independent but unfortunately individual Feynman diagrams are not. For that reason calculations may be greatly simplified by the choice of a convenient gauge, so as to minimize the presence of gauge-dependent terms in the intermediate steps. The background-field (BF) gauge [1] is one such gauge, partly because of its gauge-invariant property with respect to the external lines. The pinching technique [2,3] used to simplify calculations is also known to be related to this gauge [4,5]. The purpose of this paper is to discuss a graphical method for designing other convenient gauges.

We start by pointing out the advantage and the flexibility of the graphical method over the conventional path-integral or operator technique. The gluon propagator $g_{\mu\nu}/p^2$ will be used throughout, thus by a gauge choice we just mean the choice of vertices in making calculations. The BF vertices are different from the ordinary vertices in that the triple-gluon (3g) vertex makes a distinction between internal and external gluon lines, with the latter indicated graphically by an arrow (see Fig. 7(a) of the Appendix) and analytically possessing only four (see eq. (9) of the Appendix) rather than the usual six (eq. (17) and Fig. 7(i)) terms; both structures are sketched explicitly at the end of this Introduction. The Gervais-Neveu gauge [6] would be another possible gauge choice in this sense.

In the usual approach, gauge choice is implemented by a gauge-fixing term in the path integral. In the BF gauge, for example, this term for the quantized Yang-Mills field $Q$ is given by $\partial\cdot Q + g[A, Q]$, with $A$ being an external classical Yang-Mills potential. The presence of $A$ is the reason why external lines play a special role in the BF gauge. In a previous publication [5] we have demonstrated how this and other gauge choices can be obtained in a graphical method, which we will summarize and extend in Sec. 2. Similar techniques have also been employed elsewhere [7,8]. Essentially, in the graphical language, the fundamental difference between one gauge and another lies in their 3g vertices, which differ from one another by a combination of gradient terms. To compensate for this difference, changes will have to be made in other vertices as well, changes that can be computed using the graphical method. With this technique it is possible to make different changes on different vertices, each leading to a different compensation. In contrast, a gauge-fixing term in the path-integral or the operator formalism does not possess this flexibility; whatever changes are made to one vertex must be made on all other identical vertices.
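For orientation, the two vertex structures at issue can be written out in a common convention (Lorentz parts only, all momenta flowing into the vertex, with the arrowed external line carrying $(p_3, \gamma)$); the normalization and sign conventions of Ref. [5] may differ from this sketch:

$$V^{(6)}_{\alpha\beta\gamma}(p_1, p_2, p_3) = g_{\alpha\beta}(p_1 - p_2)_\gamma + g_{\beta\gamma}(p_2 - p_3)_\alpha + g_{\gamma\alpha}(p_3 - p_1)_\beta ,$$

$$V^{(4)}_{\alpha\beta\gamma}(p_1, p_2, p_3) = g_{\alpha\beta}(p_1 - p_2)_\gamma + 2\, g_{\beta\gamma}(p_2)_\alpha - 2\, g_{\gamma\alpha}(p_1)_\beta ,$$

where momentum conservation $p_1 + p_2 + p_3 = 0$ has been used in $V^{(4)}$ to eliminate $p_3$; counting the momentum terms gives six and four, respectively.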
Hence we can design newer and simpler gauges using the graphical method that cannot be obtained using the path-integral method. The generalized background-field (GBF) gauge discussed later is such a gauge, and we are also considering another one that is related to the Gervais-Neveu gauge [9]. This paper is organized as follows. In Sec. 2, we review and extend the graphical procedure for creating new gauges [5]. The GBF gauge will be defined in Sec. 3, and the computation of the two-loop β-function using this new gauge is presented in Sec. 4, with a conclusion in Sec. 5.

Graphical Procedure for the Creation of New Gauges

To discuss gauge invariance graphically, it is convenient to use the Chan-Paton color factors [10] and color-oriented diagrams [11,12]. In the absence of quarks, the Chan-Paton color factors are given by the traces of products of the color matrices $T^a$, and the products of such traces. In the presence of quarks, ordered products of the color matrices $T^a$ also enter. Diagrammatic rules can be designed to compute the spacetime amplitudes for each of these color factors, by using color-oriented diagrams. The propagators of the color-oriented diagrams are the ordinary propagators; their vertices are different but can be derived from the vertices of Feynman diagrams. For one thing, color factors are no longer present in the vertices of the color-oriented diagrams. For another, the clockwise orientation of the lines emerging from each vertex is fixed in the color-oriented diagrams (hence the name). These color-oriented vertices are given in eqs. (2.1) to (2.4) as well as Fig. 1 of Ref. [9] in the Feynman gauge. In what follows, when we talk about diagrams we mean the color-oriented diagrams, and when we talk about vertices we always refer to these color-oriented vertices.

The graphical rules for gauge transformation of color-oriented diagrams have been discussed before [5]. Using these rules we can create new gauges not reachable in the path-integral formalism. By new gauges in this paper we shall mean new vertex factors; the propagators used here will always be the usual Feynman-gauge propagators. To create a new gauge B from an existing gauge A, we start by subtracting appropriate combinations of gradient terms from the triple-gluon (3g) vertices $\Gamma_{\alpha\beta\gamma}(p_1, p_2, p_3)$ of gauge A, viz., terms proportional to $(p_1)_\alpha$, $(p_2)_\beta$ and $(p_3)_\gamma$. Such a gradient term will be denoted graphically by a cross (×) on the appropriate gluon line; a worked example of this decomposition is given at the end of this section. To maintain gauge invariance and the same physical scattering amplitudes, other vertices must also be altered and/or created to compensate for this change. Using graphical methods [5] they can be computed in the following way. A gradient term on a 3g vertex becomes a divergence on the subsequent vertex. To find out the effect of this change on the 3g vertex, we need to know the divergence of every vertex possessing a gluon line in the original gauge A. If A consists of the vertices of the ordinary Feynman gauge given by eqs. (2.1) to (2.4) and Fig. 1 of Ref. [5], then these divergences are given by eqs. (2.5) to (2.8) and expressed graphically in Figs. 2 to 5 of that paper. If A consists of the vertices of the background-field (BF) gauge given by eqs. (2.1) to (2.4), as well as (4.1) to (4.7) of Ref. [5], and graphically Figs. 1 and 18 of that paper, then these divergences can be similarly computed. The BF vertices are repeated here in the Appendix, eqs. (7) to (17), and graphically in Figs. 7(a) to (k).
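Using the vertex forms written out after the Introduction, the decomposition just described can be made explicit. With $p_1 + p_2 + p_3 = 0$,

$$V^{(6)}_{\alpha\beta\gamma} - V^{(4)}_{\alpha\beta\gamma} = g_{\beta\gamma}(p_2 - p_3 - 2p_2)_\alpha + g_{\gamma\alpha}(p_3 - p_1 + 2p_1)_\beta = g_{\beta\gamma}\,(p_1)_\alpha - g_{\gamma\alpha}\,(p_2)_\beta ,$$

which is precisely a combination of gradient terms proportional to $(p_1)_\alpha$ and $(p_2)_\beta$, i.e., the terms marked by crosses in the graphical rules. (This is a convention-dependent sketch consistent with the forms given above; the normalizations of Ref. [5] may differ.)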
The result of the divergence computation can be found in Figs. 8 to 13. The next step is to combine the right-hand sides of the divergence relations from different diagrams. On account of local gauge invariance, many of these terms add up to cancel one another. Starting from the Feynman gauge, these cancellation relations can be found in Figs. 7 to 16 of Ref. [5]. Starting from the BF gauge, similar relations can be worked out and they are shown in Figs. 14 to 17 in the Appendix of this paper.

This general scheme works in QED as well as QCD. What makes them different is the presence of ghost lines in the latter. Among other things, it gives rise to the presence of propagating diagrams in the divergence relations of 3g and possibly 4g vertices. These are diagrams in which two of the gluon lines are replaced by ghost lines (we call them wandering ghost lines), and the divergence 'cross' at the beginning of a line is moved to the end of the other line, as in Figs. 3(d) and 3(e) of Ref. [9] for the Feynman gauge, and Figs. 8(d) and 8(e) of this paper for the BF gauge. Via these diagrams, the divergence 'cross' propagates along the diagram, dragging behind it the wandering ghost line. It is the presence of these diagrams that makes the Slavnov-Taylor identity in QCD different from the Ward-Takahashi identity in QED. If the cross propagates in a closed loop to return to its original position, then local gauge compensation will be upset, thus resulting in additional terms or vertices with two ghost lines. For example, if gauge A is Feynman and gauge B is BF, then this change is given by Fig. 20 of Ref. [5].

We started with the gradient change of a single 3g vertex in gauge A for every diagram, as described above, and considered the local cancellations and the new ghost vertices thus created for gauge B. Having converted one vertex this way, we are now ready to convert a second 3g vertex from gauge A to gauge B. Unless the second vertex is adjacent to the first, the same argument holds and the net change of the second vertex is identical to that of the first. If they are adjacent, additional changes may occur because the first 3g and ghost vertices are already in gauge B, while the second vertex is still in gauge A. Mixing A and B, local gauge cancellation will generally not occur, thus producing yet other new terms or vertices from this mismatch. In getting from the Feynman gauge to the BF gauge in Ref. [5], for example, this is how the new 4g vertices are obtained through Figs. 37 and 38. We can continue this way to change all other vertices one after another from gauge A to gauge B. In principle, merging three or more adjacent vertices may produce newer vertices still, but this does not happen when we go from the Feynman to the BF gauge, nor from the BF gauge to the GBF gauge, as will be discussed in the next section.

Generalized Background-Field (GBF) Gauge

In this section we follow the outline of the last section to convert the BF gauge to a new gauge which we shall refer to as the GBF gauge. In the BF gauge, the 3g vertices involving an external gluon are different from those without it. The former have four terms, compared with six terms for the latter. The former is shown in Fig. 7(a) of the Appendix, in which the external line is represented by an arrow. The latter is just the usual 3g vertex shown in Fig. 7(i). The aim of the GBF gauge is to convert all 3g vertices to the former type with only four terms, so as to reduce the number of terms present and simplify the algebra of the calculations.
It is true that new vertices and diagrams will have to be produced to compensate for this change, but the overall saving turns out to be still substantial. In this paper, we confine ourselves to two-loop diagrams with an arbitrary number of external lines. In fact, we will carry out the explicit calculation only for two external lines, but the technique can easily be generalized to an arbitrary number of external lines. With this restriction there are at most two internal 3g vertices in the BF gauge that need to be converted into the arrowed type. We will choose the arrows to appear on both ends of the 'middle propagator' as shown in Figs. 6(a) and 6(b) of the next section.

As before, let us make this change first on one of the two internal vertices. The change is identical to what happens when we convert from the Feynman to the BF gauge, so nothing new will be produced. Now we convert the second internal 3g vertex to the arrowed form. Via a series of propagating diagrams, the 'cross' can return to this vertex via two possible routes. Either it comes back via a line without an arrow, or it returns via the arrowed line. The former is identical to what happened before so it produces nothing new. The latter could not happen previously because the arrowed line in the BF gauge is always an external line. Now in the GBF gauge, this new situation produces a new vertex shown in Fig. 1. In addition, adjacent interaction may now take place when the ghost line returns in a loop, and this produces further changes given by Figs. 2 to 4. The vertices obtained this way for the GBF gauge are summarized in Fig. 5 and eqs. (1)-(6), e.g., $[5\mathrm{d}] = g^2 g_{\alpha\gamma}$.

β-Function at the Two-Loop Level

As an illustration, and a check of these new vertex rules, we will calculate the QCD two-loop β-function in the GBF gauge. This β-function has previously been computed in the Feynman and the BF gauges [13], with considerable savings shown when computed in the BF gauge. We will now show that a further saving of 45% is possible when computed in the GBF gauge. We choose this example for illustration because it is the simplest at the two-loop level and it can be computed analytically. The disadvantage of this example is that it is not an on-shell process, so the full-fledged simplification of the GBF gauge will not be revealed. Off-shellness gives rise to some extra diagrams, which would be absent in an on-shell process. But even so, the saving is still considerable.

The diagrams in the GBF gauge are shown in Fig. 6. There are 26 basic diagrams to be found in Fig. 6(a-z), 12 extra diagrams to be found in Fig. 6(e1-e12), and 3 gauge-fixed renormalization insertion diagrams to be found in Fig. 6(i1, i2, i3). The extra diagrams involve wandering ghost lines sliding to an external end. They would be absent if these external lines were on-shell. The result of the calculation is summarized in Table I. We can compare this with the calculation in the BF gauge [13]. Although we have more diagrams in the GBF gauge, the total number of terms to be computed is smaller. In the GBF gauge, there are 728 terms in total, while in the BF gauge there are 1320 terms. Therefore, 45% of the computational labour is saved by using the GBF gauge. Using Mathematica to compute, we need 150 seconds for the GBF gauge, and 260 for the BF gauge. For an on-shell process, because of the absence of the extra diagrams, the saving of the GBF gauge will be greater and can be expected to be approximately 50 percent.
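The quoted figures can be checked directly from these counts:

$$1 - \frac{728}{1320} \approx 0.45, \qquad 1 - \frac{150}{260} \approx 0.42,$$

i.e., a 45% reduction in the number of terms, with a comparable reduction in computer time.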
Conclusion

We have demonstrated the power of the graphical rules in making individual gauge changes on vertices. An operator method or a path-integral method must treat all the vertices in the same way, so this individual flexibility is lost. This method is illustrated by the creation of the GBF gauge from the BF gauge. The GBF gauge maintains the gauge-invariant property of the ordinary BF gauge with respect to the external lines, and preserves the simple Ward-Takahashi identity when divergences are taken on them. It also contains fewer terms than the BF gauge in actual calculations. The saving for the two-loop QCD β-function we gave is 45%, and more can be expected for on-shell processes. The graphical method is not limited to this example or this gauge. We can, for example, change the internal 3g vertices into the 3g vertices of the Gervais-Neveu gauge [6], with even greater saving [9].

Acknowledgements

This research was supported in part by the Natural Sciences and Engineering Research Council of Canada and by the Québec Department of Education. Y.J.F. acknowledges the support of the Carl Reinhardt Major Foundation.

A Divergence and cancellation relations in the BF gauge

The color-oriented vertices of QCD in the BF gauge are summarized in Fig. 7. We use wavy lines for gluons and dotted lines for ghosts. The arrowed line is an external line. All propagators in this paper are chosen to be in the Feynman gauge, so we have $-1/p^2$ and $g_{\alpha\beta}/p^2$ for ghosts and gluons respectively. Analytically, the vertices shown in Fig. 7 are associated with the vertex factors of eqs. (7) to (17). The divergence and cancellation relations for the BF gauge are summarized below.

A.1 Divergence relations

We list below the possibilities when a cross is put on a gluon line without an arrow for any of the vertices found in Fig. 7.
2014-10-01T00:00:00.000Z
1997-06-04T00:00:00.000
{ "year": 1997, "sha1": "15c7b5bf1d221a4edbdf659f3235a1f924ebf547", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/9706248", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "15c7b5bf1d221a4edbdf659f3235a1f924ebf547", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
247157427
pes2o/s2orc
v3-fos-license
The physiological and clinical importance of cardiorespiratory fitness in people with abdominal aortic aneurysm

New Findings

What is the topic of this review? This review focuses on the physiological impact of abdominal aortic aneurysm (AAA) on cardiorespiratory fitness and the negative consequences of low fitness on clinical outcomes in AAA. We also discuss the efficacy of exercise training for improving cardiorespiratory fitness in AAA.

What advances does it highlight? We demonstrate the negative impact of low fitness on disease progression and clinical outcomes in AAA. We highlight potential mechanistic determinants of low fitness in AAA and present evidence that exercise training can be an effective treatment strategy for improving cardiorespiratory fitness, postoperative mortality and disease progression.

Abstract

An abdominal aortic aneurysm (AAA) is an abnormal enlargement of the aorta, below the level of the renal arteries, where the aorta diameter increases by >50%. As an aneurysm increases in size, there is a progressive increase in the risk of rupture, which ranges from 25 to 40% for aneurysms >5.5 cm in diameter. People with AAA are also at a heightened risk of cardiovascular events and associated mortality. Cardiorespiratory fitness is impaired in people with AAA and is associated with poor (postoperative) clinical outcomes, including increased length of hospital stay and postoperative mortality after open surgical or endovascular AAA repair. Although cardiorespiratory fitness is a well-recognized prognostic marker of cardiovascular health and mortality, it is not assessed routinely, nor is it included in current clinical practice guidelines for the management of people with AAA. In this review, we discuss the physiological impact of AAA on cardiorespiratory fitness, in addition to the consequences of low cardiorespiratory fitness on clinical outcomes in people with AAA. Finally, we summarize current evidence for the effect of exercise training interventions on cardiorespiratory fitness in people with AAA, including the associated improvements in postoperative mortality, AAA growth and cardiovascular risk.

INTRODUCTION

Abdominal aortic aneurysm (AAA) is characterized by an abnormal progressive dilatation of the aorta, below the level of the renal arteries, surpassing the normal diameter of the aorta by >50% (Upchurch & Schaub, 2006). The burden of AAA is significant, with a reported global prevalence of ∼6% and a mortality rate that accounts for ∼2% of all annual deaths in males aged >60 years (Ashton et al., 2002). Development of AAA is regarded as a local manifestation of a systemic inflammatory disease, whereby gradual degeneration of the aortic wall leads to weakening, enlargement and, ultimately, a high risk of rupture (Brady et al., 2004). Aneurysm rupture is often life-threatening, and the associated mortality rate surpasses 90% (Van 't Veer et al., 2008). To date, treatment options for AAA consist of open surgical or endovascular aneurysm repair, which are generally available only for patients with a large AAA (>5.5 cm diameter).
People with a small AAA (<5.5 cm) typically undergo regular imaging surveillance, and there are no viable treatment options (Wanhainen et al., 2018). Beyond the risk of AAA rupture, common causes of mortality among people with AAA are the postoperative mortality associated with AAA repair (Eslami et al., 2017) and cardiovascular disease-related mortality (Bath et al., 2017). Of particular note, the prevalence of cardiovascular disease and associated events (e.g., ischaemic heart disease ∼45%, myocardial infarction ∼27% and stroke ∼14%) is very high in people with AAA and has been reported to increase by ∼3% year-on-year after AAA diagnosis (Bath et al., 2017).

Cardiorespiratory fitness, measured as the maximal capacity to take up and utilize oxygen, relies on the health and coordinated responses of various physiological systems and organs. Among the general population, there is a strong association between cardiorespiratory fitness and the risk of morbidity and mortality, particularly that associated with cardiovascular disease (Lee et al., 2010). Current evidence demonstrates that cardiorespiratory fitness is impaired in people with AAA, relative to those without AAA (Rose et al., 2018) and to age-related normative reference values (Ferguson, 2014). Moreover, the association between cardiorespiratory fitness and cardiovascular-related risk (Kodama et al., 2009) and factors related to AAA growth and rupture, such as increased arterial stiffness and endothelial dysfunction (Montero, 2015), raises the possibility that cardiorespiratory fitness might be a viable marker or determinant of clinical outcomes in people with AAA. In this review, we explore the premise that the inclusion of cardiorespiratory fitness as a treatment target has the potential to mitigate cardiovascular risk, aneurysm progression and the morbidity and mortality associated with AAA. The aim of the review is to discuss the physiological impact of AAA on cardiorespiratory fitness, the consequences of low cardiorespiratory fitness for clinical outcomes, and the evidence for exercise training as a strategy to improve fitness and outcomes in people with AAA.

The assessment of cardiorespiratory fitness

Cardiorespiratory fitness reflects the capacity of the body to take up and utilize oxygen. It is dependent on the synergistic function of key organ systems, particularly the respiratory, cardiovascular and muscle-metabolic systems, to deliver oxygen from the ambient air to the mitochondria in the working skeletal muscles (Lee et al., 2010). Oxygen consumption is described by the Fick equation, where oxygen utilization (V̇O2) = cardiac output × arteriovenous oxygen difference (Levine, 2008).
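A worked example makes the scale of these quantities concrete; the numbers are round, illustrative values rather than data from any study reviewed here. For a near-maximal cardiac output of 20 L min⁻¹ and an arteriovenous oxygen difference of 150 mL of O₂ per litre of blood,

$$\dot{V}\mathrm{O}_2 = \dot{Q} \times (C_a\mathrm{O}_2 - C_{\bar v}\mathrm{O}_2) = 20\ \mathrm{L\,min^{-1}} \times 150\ \mathrm{mL\,L^{-1}} = 3.0\ \mathrm{L\,min^{-1}},$$

which for an 80 kg individual corresponds to 3000/80 ≈ 37.5 ml kg⁻¹ min⁻¹. Halving either the central factor (cardiac output) or the peripheral factor (oxygen extraction) halves V̇O2, which is why impairments on either side of the Fick equation depress cardiorespiratory fitness.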
These parameters provide an insight into the physiological determinants of oxygen consumption, whereby cardiac output is primarily dependent on central factors, including heart rate, stroke volume and aortic function, and the arteriovenous oxygen difference depends largely on peripheral factors, such as peripheral blood flow, blood oxygen-carrying capacity, capillary supply, mitochondrial volume and density, and the matching of oxygen perfusion and diffusion between the capillaries and the mitochondria (Del Torto et al., 2017).

Cardiopulmonary exercise testing (CPET) is used to assess functional capacity and cardiorespiratory fitness (Albouaini et al., 2007). During CPET, expired ventilatory gases are collected and analysed while the test participant undertakes incremental exercise to their maximal effort (i.e., the point at which they are not volitionally able to sustain the exercise load and continue). The maximal rate of oxygen uptake during exercise (V̇O2max) is considered to be the gold-standard measure of cardiorespiratory fitness. The V̇O2max is commonly defined as a plateau in oxygen consumption for a sustained period (e.g., 30-60 s) during maximal incremental exercise. However, given that this plateau in oxygen consumption is often not observed, V̇O2peak (i.e., the highest rate of oxygen consumption during a test) is commonly used as a measure of cardiorespiratory fitness. Cardiopulmonary exercise testing with expired gas analysis also enables an assessment of gas exchange thresholds (GETs), including the ventilatory threshold (VT), where expired ventilation increases disproportionately to the increase in V̇O2. The VT provides a submaximal measure of functional capacity, occurring at ∼45-65% of the V̇O2peak (Sato et al., 1989). Although its estimation can be subjective (e.g., using the V-slope, ventilatory equivalent or excess carbon dioxide methods), it has been shown to be interpreted reliably between clinicians (Vainshelboim et al., 2017). Exercise beyond the VT is associated with metabolic acidosis, hyperventilation and a reduced capacity to perform work; therefore, its assessment is useful in clinical populations in whom a maximal CPET might be contraindicated (Ferguson, 2014).
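To illustrate how the V-slope method can be operationalized, the sketch below fits two line segments to synthetic V̇CO2 versus V̇O2 ramp data and takes the error-minimizing breakpoint with an increased upper slope as the VT. This is a minimal sketch with invented data and one of several possible implementations, not the algorithm used in any study cited here.

import numpy as np

def v_slope_threshold(vo2, vco2):
    """Estimate VT (L/min) as the breakpoint of a two-segment fit of
    VCO2 on VO2, requiring the upper slope to exceed the lower slope."""
    best_err, best_vo2 = np.inf, None
    for i in range(3, len(vo2) - 3):          # candidate breakpoints
        lo = np.polyfit(vo2[:i], vco2[:i], 1)
        hi = np.polyfit(vo2[i:], vco2[i:], 1)
        err = (np.sum((np.polyval(lo, vo2[:i]) - vco2[:i]) ** 2)
               + np.sum((np.polyval(hi, vo2[i:]) - vco2[i:]) ** 2))
        if hi[0] > lo[0] and err < best_err:  # slope must increase at VT
            best_err, best_vo2 = err, vo2[i]
    return best_vo2

# Synthetic ramp-test data: VCO2/VO2 slope ~1.0 below VT, ~1.3 above it.
vo2 = np.linspace(0.5, 2.5, 40)
vt_true = 1.5
vco2 = np.where(vo2 < vt_true, vo2, vt_true + 1.3 * (vo2 - vt_true))
vco2 = vco2 + np.random.default_rng(1).normal(0.0, 0.02, vo2.size)

print(f"estimated VT = {v_slope_threshold(vo2, vco2):.2f} L/min")  # ~1.5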
Cardiorespiratory fitness in people with AAA

Cardiorespiratory fitness has been demonstrated to be impaired in people with AAA. A recent large retrospective study reported a mean reduction of 13.6 ml kg⁻¹ min⁻¹ [95% confidence interval (CI) 12.0-15.2, P < 0.001] in V̇O2peak in people with AAA (n = 124) compared with apparently healthy age-matched individuals (n = 108) (Rose et al., 2018). In support of this finding, a recent comparative study reported that people with a small AAA (<5.5 cm) demonstrate significantly lower cardiorespiratory fitness (n = 22, V̇O2peak 19.0 ± 3.5 ml kg⁻¹ min⁻¹) when compared with those without an AAA (n = 22, V̇O2peak 24.5 ± 2.8 ml kg⁻¹ min⁻¹, P ≤ 0.001) (Perissiou et al., 2019). These findings from comparative studies suggest there is at least a ∼25% deficit in cardiorespiratory fitness in people with AAA. Although to date these are the only two studies to compare cardiorespiratory fitness directly between people with and without AAA, there have been 17 studies that have reported estimates of cardiorespiratory fitness in patients with AAA. These studies are summarized in Table 1 and include cross-sectional and exercise training investigations in people with a small or large AAA.

Across the 2,259 study participants with a small or large AAA (aged 69-76 years), these studies report V̇O2peak means ranging between 13.3 and 20.0 ml kg⁻¹ min⁻¹ and a VT range of 9.4-12.5 ml kg⁻¹ min⁻¹. According to age-related normative data, the V̇O2peak and VT of people with AAA are categorized as 'very poor' and within the lowest (25th) percentile of the general population (Ferguson, 2014; Vainshelboim et al., 2020). Importantly, current evidence demonstrates that a V̇O2peak < 15 ml kg⁻¹ min⁻¹ and a VT < 10 ml kg⁻¹ min⁻¹ are associated with reduced functional capacity and severe cardiovascular risk (Kodama et al., 2009). The reported mean V̇O2peak in the eight studies that included people with a large AAA also falls within this 'very poor' category (n = 1,859; V̇O2peak 13.3-17.5 ml kg⁻¹ min⁻¹; Table 1B).

The potential impact of AAA on oxygen delivery and utilization

Although there have been no direct investigations to understand the impact of AAA on the physiological determinants of cardiorespiratory fitness, impairments in cardiorespiratory fitness can broadly be explained by limitations in factors associated with oxygen delivery and/or oxygen utilization (Burtscher, 2013). A primary physiological determinant of cardiorespiratory fitness is the ability of the blood and the vasculature to carry oxygen efficiently from the heart to the periphery, in order to meet the oxygen requirements of working muscles (Levine, 2008). Chronic systemic inflammation, a primary determinant of AAA (Dale et al., 2015), plays a key role in the formation of vascular lesions and remodelling, which consequently leads to endothelial dysfunction and increased arterial stiffness, both markers of arterial wall damage (Castellon & Bogdanova, 2016) and main characteristics of AAA (Kadoglou et al., 2012; Siasos et al., 2015). Importantly, vascular endothelial dysfunction and elevated arterial stiffness are factors known to impact blood flow and oxygen delivery directly (Kadoglou et al., 2012; Siasos et al., 2015). We recently demonstrated that aortic stiffness and endothelial dysfunction are associated with lower cardiorespiratory fitness (V̇O2peak) in people with a small AAA (Bailey et al., 2017; Perissiou et al., 2019). There is evidence that increased arterial stiffness is directly associated with impaired muscle oxygenation during exercise in hypertensive patients (Dipla et al., 2017). In addition, endothelial dysfunction is widely associated with hypoperfusion of the regional vasculature, including limb and muscle blood flow during exercise (Vallet, 2002). Likewise, at the microvasculature, endothelial dysfunction and a disturbed production of nitric oxide derivatives are associated with impaired capillary blood flow and altered oxyhaemoglobin binding (Iankovskaia & Zinchuk, 2007), which potentially limit oxygen delivery to working muscles.

Impaired function and structure of the aorta are also associated with deterioration in aortic Windkessel function (Belz, 1995). The Windkessel effect dampens the phasic systolic surges in blood flow produced by ventricular ejection into a smoother, more continuous outflow to the peripheral vessels. Interestingly, Swillens et al. (2008) demonstrated in computer-constructed models of AAA that the aneurysm itself is responsible for a deterioration in Windkessel wave reflection, leading to an impairment in cardiac output and reduced blood flow to the periphery. Reduced blood flow is commonly reported at the site of aortic aneurysms (White & Dalman, 2008), and Suh et al. (2011) demonstrated a reduction in aneurysmal blood flow during cycling exercise. This has been interrogated further with three-dimensional computer models of large AAA, in which, during rest and exercise conditions, there is recirculation of blood within the aneurysm, which contributes to reduced blood distribution to the periphery throughout the cardiac cycle (Varshney et al., 2020).
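The Windkessel effect described above can be illustrated with a minimal two-element model, in which aortic pressure P obeys dP/dt = Q_in(t)/C − P/(RC). The sketch below uses round, illustrative parameter values, not measurements from people with AAA, to show that a lower compliance C (a stiffer or remodelled aorta) produces larger pressure swings for the same pulsatile inflow, that is, less smoothing of the flow delivered to the periphery.

import math

def pulse_pressure(C, R=1.0, T=0.86, dt=1e-4, beats=10):
    """Steady-state peak-to-peak aortic pressure (mmHg) for compliance C
    (mL/mmHg), peripheral resistance R (mmHg s/mL), heart period T (s)."""
    P = 80.0
    n = int(beats * T / dt)
    p_min, p_max = float("inf"), float("-inf")
    for i in range(n):
        t = (i * dt) % T
        # half-sine inflow during systole (first 30% of the beat), in mL/s
        q_in = 400.0 * math.sin(math.pi * t / (0.3 * T)) if t < 0.3 * T else 0.0
        P += dt * (q_in / C - P / (R * C))    # explicit Euler step
        if i > n // 2:                        # skip the initial transient
            p_min, p_max = min(p_min, P), max(p_max, P)
    return p_max - p_min

for C in (1.5, 0.5):  # compliant vs. stiff aorta
    print(f"C = {C} mL/mmHg -> pulse pressure ~ {pulse_pressure(C):.0f} mmHg")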
TABLE 1 Cardiorespiratory fitness in people with a small (A) or large (B) abdominal aortic aneurysm. Note. Data were derived from studies that assessed cardiorespiratory fitness in people with an abdominal aortic aneurysm (AAA) using the measures peak oxygen consumption (V̇O2peak) and/or ventilatory threshold (VT). Data that are extracted from exercise training studies include separate sets of data for the exercise training group and the comparator group (usual care, control). The large study by Carlisle et al. (2015) reported separate sets of means for each of the four hospital sites.

These impairments in vascular function and haemodynamics are potentially compounded by an altered blood oxygen-carrying capacity in people with AAA. Specifically, Zhang et al. (2012) retrospectively reviewed haemoglobin levels in 255 people with AAA and reported a high prevalence of anaemia (34.5%), and that the haemoglobin concentration was independently and inversely associated with aneurysm diameter.
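The quantitative link between haemoglobin and convective oxygen delivery follows from the standard arterial oxygen content equation (the numbers below are illustrative textbook values, not data from Zhang et al., 2012):

$$C_a\mathrm{O}_2 \approx (1.34 \times \mathrm{Hb} \times S_a\mathrm{O}_2) + (0.003 \times P_a\mathrm{O}_2).$$

With $S_a\mathrm{O}_2 = 0.98$ and $P_a\mathrm{O}_2 = 100$ mmHg, a haemoglobin of 15 g dL⁻¹ gives $C_a\mathrm{O}_2 \approx 20$ mL dL⁻¹, whereas an anaemic value of 10 g dL⁻¹ gives only ≈13 mL dL⁻¹: a roughly one-third reduction in oxygen-carrying capacity at any given cardiac output.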
Oxygen extraction and the efficient utilization of oxygen by the mitochondria are fundamental determinants of cardiorespiratory fitness (Jacobs & Lundby, 2013). To date, there have been no direct investigations of muscle oxygen utilization in people with AAA. However, several studies have established that mitochondrial dysfunction is evident in the smooth muscle of the aneurysm wall. It has also been reported that there is differential expression of a number of genes associated with mitochondrial function and oxidative phosphorylation within the aneurysm wall (Yuan et al., 2015). The reduced mitochondrial function might also be accompanied by increased glycolysis and increased lactate production (Prado-Garcia et al., 2020). Indeed, Tsuruda et al. (2012) reported increased glycolytic activity in aneurysmal mouse models, and Modrego et al. (2012) showed in vitro that lactate content is elevated in AAA compared with control participants. Chronic inflammation, which characterizes AAA (Dale et al., 2015), is known to contribute further to a hypoxic microenvironment within tissues, a phenomenon known as inflammatory hypoxia (Biddlestone et al., 2015). Indeed, studies have reported that AAAs demonstrate inflammation-induced tissue hypoxia and attenuated oxygen diffusion (Blassova et al., 2019). In addition, systemic chronic inflammation has been shown to favour a pro-oxidant microenvironment in people with AAA (Meital et al., 2020), a state that is associated with impairments in muscle oxygen utilization and exercise capacity (König et al., 2001). Indeed, Menteşe et al. (2016) reported that, compared with control subjects, individuals with AAA demonstrate elevated oxidative stress levels with no change in antioxidant capacity (the capacity to neutralize molecules with high redox potential). Morphometric analyses of muscle biopsy samples from the tibialis anterior muscle show a predominance of atrophic type I muscle fibres in people with AAA (Albani et al., 2000). Interestingly, current evidence suggests that type I muscle fibre atrophy is a consequence of injury induced by reactive oxygen species and is associated with impaired oxygen utilization (Bonaldo & Sandri, 2013). Hence, we could speculate that muscle atrophy contributes indirectly to impaired muscle oxygen utilization in people with AAA. The studies presented here provide only indirect evidence of the potential impact of AAA on muscle oxygen utilization. There is a need for future studies directly to assess the determinants of muscle oxygen utilization (e.g., mitochondrial volume and function, capillary supply, aerobic enzyme activities and muscle oxygen extraction) at rest and during exercise in people with AAA.

2.4 The potential impact of AAA co-morbidities on the cardiorespiratory fitness of people with AAA

There are several co-morbidities commonly observed in people with AAA that might contribute to their impairment in cardiorespiratory fitness. Coronary artery disease (CAD) is one of the most prevalent co-morbidities, with 25-37% of people with AAA reported also to have a diagnosis of CAD (Van Kuijk et al., 2009). There is a well-established impairment in cardiorespiratory fitness associated with CAD (Gander et al., 2015), because myocardial ischaemia associated with the stenosis of coronary arteries leads to a reduction in cardiac output and therefore limits oxygen delivery to the working skeletal muscles. Likewise, peripheral arterial disease, which is present in ∼20% of people with AAA (Kent et al., 2010), is associated with low levels of cardiorespiratory fitness (Hou et al., 2002). Peripheral arterial disease is characterized by impaired blood flow to the muscles of the lower limbs. There is also evidence of skeletal muscle changes, including alterations in capillary supply, mitochondrial density and function, and muscle fibre morphology and metabolism, all of which potentially contribute to impaired oxygen extraction and utilization (Baum et al., 2016; Hamburg & Creager, 2017). These skeletal muscle changes can also be exacerbated by the presence of type 2 diabetes, which is diagnosed in ∼15% of people with AAA (De Rango et al., 2014; Green et al., 2007). Furthermore, diabetes potentially limits the efficient use of glucose as a substrate during exercise, which is associated with impaired oxygen economy (Bauer et al., 2007) and a reduction in cardiorespiratory fitness (Nesti et al., 2020). Finally, people with AAA commonly present with impaired pulmonary function, and chronic obstructive pulmonary disease is reported in up to ∼28% of AAA patients (Lederle et al., 2015). Chronic obstructive pulmonary disease primarily causes a diffusion limitation at the lungs and has been shown to limit the capacity for oxygen delivery to peripheral tissues and working muscles (Broxterman et al., 2020; Nakamura et al., 2004).

Sections 2.3 and 2.4 outline the systemic and local cardiovascular alterations that occur with AAA, in addition to several common co-morbidities, that are likely to contribute to the impaired cardiorespiratory fitness in these individuals. A theoretical overview of the association between these mechanisms is depicted in Figure 1. Although this provides a plausible basis for understanding the limits associated with AAA, studies that interrogate the physiological mechanisms of cardiorespiratory fitness in people with AAA are lacking, and there is a need for further research in this area.
CARDIORESPIRATORY FITNESS AS A PREDICTOR OF CLINICAL OUTCOMES IN PEOPLE WITH ABDOMINAL AORTIC ANEURYSM UNDERGOING OPEN SURGICAL AND ENDOVASCULAR REPAIR

Open surgical or endovascular repair are currently the only recognized effective treatments for AAA to prevent rupture and aneurysm-related mortality, and for this reason, repair is generally reserved for those with a large AAA (>5.5 cm in diameter) and for those in whom the risk of rupture is greatest (Locham et al., 2017). Abdominal aortic aneurysm repair places considerable metabolic demands on patients during the repair procedure and the short-term (i.e., 3 months) postoperative period (Salartash et al., 2001). This is thought to be attributable to a strong inflammatory response that leads to an increase in basal oxygen demand of ∼110-170 ml min⁻¹ during the postoperative period (Older et al., 1999). The increased energy requirements are reported to be necessary for wound healing and the resolution of inflammation and are associated with significant elevations in ventilation and cardiac activity (Davies & Wilson, 2004). Failure of the cardiorespiratory system to meet these increased metabolic requirements is suggested to contribute to intra- and postoperative complications and mortality in AAA (Struthers et al., 2008). Several studies have established that an impairment in preoperative cardiorespiratory fitness is closely associated with the risk of death in the short-term (≤3 months) period after open AAA repair (Table 2A).

TABLE 2 Note. Data are from published studies that assessed the risk using the hazard ratio (the relative risk of an event happening at a specific time) or odds ratio (the odds of an event in one group relative to another).

FIGURE 1 Theoretical overview of the mechanisms through which AAA may impair cardiorespiratory fitness. (a1) Deterioration of aortic Windkessel function impairs cardiac output and reduces blood flow to the periphery (Kadoglou et al., 2012; Swillens et al., 2008). Furthermore, as AAA diameter increases, the recirculating fluid in the aneurysmal site leads to further disruption of blood distribution to the periphery (Suh et al., 2011; Varshney et al., 2020). (a2) Endothelial dysfunction and disturbed production of NO derivatives contribute to a reduced oxygen-carrying capacity by the blood, leading to reduced blood flow to the periphery (Iankovskaia & Zinchuk, 2007). (b) People with AAA demonstrate increased systemic oxidative stress (Menteşe et al., 2016) and a predominance of atrophic type I muscle fibres (Albani et al., 2000), factors associated with determinants of oxygen utilization, such as mitochondrial dysfunction (Handy & Loscalzo, 2012), reduced oxidative phosphorylation and ATP synthase and increased lactate production (Bonaldo & Sandri, 2013). (c) Abdominal aortic aneurysms are characterized by co-morbidities that create an ischaemic environment in the central (CAD) and peripheral (PAD) circulatory system (Kent et al., 2010) and affect oxygen distribution by the lungs (COPD) (Lederle et al., 2015) and oxygen utilization by the muscles (T2DM) (De Rango et al., 2014). Abbreviations: AAA, abdominal aortic aneurysm; BF, blood flow; CAD, coronary artery disease; COPD, chronic obstructive pulmonary disease; Hb, haemoglobin; NO, nitric oxide; PAD, peripheral arterial disease; T2DM, type 2 diabetes mellitus. The figure was created with BioRender.com.

People with a VT ≤ 11 ml kg⁻¹ min⁻¹ demonstrated a 9.9% higher 30-day mortality rate after open repair compared with people who achieved a VT ≥ 11 ml kg⁻¹ min⁻¹. Finally, Barakat et al. (2015) reported that different measures of fitness are associated with specific perioperative complications. A significant relationship was demonstrated between a VT ≤ 10.2 ml kg⁻¹ min⁻¹ and cardiac complications, and between a ventilatory equivalent for carbon dioxide ≥ 42 (V̇E/V̇CO2, a GET variable associated with elevated pulmonary pressures) and respiratory complications (Barakat et al., 2015). These results highlight the clinical importance and impact of cardiorespiratory fitness on the short-term postoperative mortality and morbidity of people with AAA after open surgical repair.
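To make the risk measures reported in Table 2 concrete, the sketch below computes an absolute risk difference and an odds ratio from a hypothetical 2 × 2 table dichotomized at a VT cut-off; the counts are invented to reproduce an absolute difference of roughly the 9.9% cited above and are not data from any of the studies reviewed here.

def risk_stats(deaths_low, n_low, deaths_high, n_high):
    """Absolute risk difference and odds ratio for 30-day mortality,
    comparing a low-fitness group with a high-fitness group."""
    risk_low, risk_high = deaths_low / n_low, deaths_high / n_high
    odds_low = deaths_low / (n_low - deaths_low)
    odds_high = deaths_high / (n_high - deaths_high)
    return risk_low - risk_high, odds_low / odds_high

ard, odds_ratio = risk_stats(deaths_low=13, n_low=100,   # VT <= 11 ml/kg/min
                             deaths_high=3, n_high=100)  # VT > 11 ml/kg/min
print(f"absolute risk difference = {ard:.1%}, odds ratio = {odds_ratio:.1f}")
# -> absolute risk difference = 10.0%, odds ratio = 4.8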
A significant relationship was demonstrated between a VT ≤ 10.2 ml kg−1 min−1 and cardiac complications, and between a ventilatory equivalent for carbon dioxide ≥ 42 (V̇E/V̇CO2, a GET variable associated with elevated pulmonary pressures) and respiratory complications (Barakat et al., 2015). These results highlight the clinical importance and impact of cardiorespiratory fitness on the short-term postoperative mortality and morbidity of people with AAA after open surgical repair. Endovascular AAA repair (EVAR) was originally developed as a lower-risk, less invasive procedure that would also accommodate people who were considered physically ineligible (unfit) for open surgical repair (Parodi et al., 1991). It is associated with a significantly lower rate of aneurysm-related mortality than no repair (Greenhalgh, 2005). Indeed, studies to date have also reported that EVAR demonstrates significantly reduced short-term postoperative mortality (Goodyear et al., 2013; Hartley et al., 2012) and morbidity (Prentis et al., 2012) in unfit (VT ≤ 11 ml kg−1 min−1) people with AAA compared with open repair. However, it seems likely that the early benefit of EVAR with respect to short-term postoperative mortality is abolished in the long term, owing largely to fatal endograft leaks and ruptures (Patel et al., 2016). Indeed, a committee of clinicians appointed by the UK National Institute for Health and Care Excellence recently published a scientific report recommending the use of open repair over EVAR (Bradbury et al., 2021). Interestingly, evidence demonstrates that, besides endograft ruptures, cardiorespiratory fitness is one of the main determinants of the increased long-term postoperative mortality observed after EVAR, as reported by Straw et al.

To date, several studies have demonstrated that cardiorespiratory fitness is associated with long-term postoperative mortality regardless of the AAA repair modality used (Table 2B). Specifically, a VT ≤ 10.2 ml kg−1 min−1 and a V̇O2peak ≤ 15 ml kg−1 min−1 were found successfully to predict 3-year postoperative survival (Grant et al., 2015) and length of hospital stay (Prentis et al., 2012), regardless of whether EVAR or open repair was used. Importantly, the use of a combination of cardiorespiratory fitness and GET variables, including V̇O2peak, VT and V̇E/V̇CO2, might strengthen the prediction of mortality following AAA repair and help to assess the risk versus benefit before AAA repair (Grant et al., 2015). Finally, in a large multicentre study (n = 1,096), Carlisle et al. (2015) used presurgical values of V̇O2peak, VT and V̇E/V̇CO2 to predict postoperative survival. Future studies should address the need for more homogeneous fitness data that will aid in the development of universal clinical thresholds to aid in the management of people with AAA.

THE BENEFICIAL EFFECTS OF EXERCISE TRAINING IN PEOPLE WITH ABDOMINAL AORTIC ANEURYSM

Over the last decade, there has been growing interest in the use of exercise training (therapy) as an adjunct treatment for both surgical (large AAA) and non-surgical (small AAA) management of people with AAA. This is based on the many benefits that improved cardiorespiratory fitness seems to have on postoperative outcomes (as highlighted in Section 3) and in reducing cardiovascular-related mortality (Kodama et al., 2009). This section provides a detailed review of evidence of the effect of exercise training on cardiorespiratory fitness, postoperative outcomes, cardiovascular health parameters and disease progression in people with AAA.
The effect of exercise training on cardiorespiratory fitness

Several studies have reported improvements in cardiorespiratory fitness after short-term (6-12 weeks) exercise training in patients with a small AAA (Table 3). All studies used aerobic or combined exercise (aerobic plus resistance exercise) at a moderate intensity, with a frequency of two to three in-hospital exercise sessions per week. No exercise-induced adverse events have been reported in any of the studies to date. Overall, short-term exercise interventions were able to evoke significant increases in VT [change (Δ) ranging from 1.1 to 3.0 ml kg−1 min−1] (Kothmann et al., 2009; Tew et al., 2012) and V̇O2peak (Δ1.2-1.7 ml kg−1 min−1) (Lima et al., 2018; Tew et al., 2012) in patients with a small AAA (<5.5 cm) when compared with a usual care group. Importantly, most of the studies reported that the improvements in cardiorespiratory fitness met the criteria for a minimum clinically important difference (i.e., 0.5 × SD of the reported change in V̇O2peak or VT) (Lima et al., 2018; Tew et al., 2012). Conversely, findings from studies of short-term exercise in people with a large AAA are more variable. Tew et al. (2017) reported no significant increase in cardiorespiratory fitness after 4 weeks of high-intensity interval aerobic exercise in people with a large AAA (>5.5 cm). The authors reported that only 63% of the study cohort was considered adherent to the exercise intervention and that during the intervention period the exercise intensity occasionally had to be reduced for the majority of the cohort (∼74%) owing to triggered exercise safety criteria, which might have resulted in limited exercise progression. In contrast, Barakat et al. (2016) reported significant increments in V̇O2peak (Δ1.6 ml kg−1 min−1) and VT (Δ1.9 ml kg−1 min−1) after 6 weeks of combined moderate-intensity exercise training in people with a large AAA (>5.5 cm) compared with the usual care group. Importantly, it was also reported that people with a large AAA randomized to the control group demonstrated a decrease of 1.2 ml kg−1 min−1 in their V̇O2peak during the 6-week period, indicating that exercise training potentially mitigates a deterioration in cardiorespiratory fitness over time. Overall, studies to date indicate that exercise training might induce significant improvements in cardiorespiratory fitness in people with a large AAA, but it might be that longer-duration moderate-intensity training is preferential and more feasible in this population. The effect of long-term exercise training (>12 weeks) on cardiorespiratory fitness in people with AAA has been assessed in only one study to date.

[Table 3: Summary of studies investigating the effect of exercise training in patients with a small or large abdominal aortic aneurysm. Abbreviations: AAA, abdominal aortic aneurysm; CR, cardiac rehabilitation; hs-CRP, high-sensitivity C-reactive protein; LAP, lipid accumulation product; SBP, systolic blood pressure; VT, ventilatory threshold; V̇O2peak, peak oxygen consumption. a Change in the exercise group is reported compared with the usual care group (P < 0.05). b Change in the exercise group is reported compared with baseline.]

Myers et al. (2014) assessed the effect of a long-term combined training programme (≤3 years follow-up) on cardiorespiratory fitness in people with a small AAA (<5.5 cm).
The results demonstrated significant increases in V̇O2peak in the exercise group at the 3-month (Δ0.9 ml kg−1 min−1) and 1-year (Δ1.3 ml kg−1 min−1) evaluations. Although the V̇O2peak remained stable in the exercise group at the 2- and 3-year evaluations, the authors reported a significant decrease for the usual care group (second year, Δ−1.6 ml kg−1 min−1; third year, Δ−2.3 ml kg−1 min−1). A potential limitation of their study was the use of a home-based exercise intervention; however, current evidence in general clinical populations supports the use of home-based programmes compared with supervised in-centre programmes (Anderson et al., 2017). Importantly, these results demonstrate that despite advanced age (72 ± 7 years) and multiple co-morbidities (CAD, peripheral arterial disease and type 2 diabetes), training for ≤3 years was well tolerated and feasible in patients with a small AAA.

The effect of exercise training on postoperative outcomes

Exercise training-induced improvements in cardiorespiratory fitness have been associated with favourable postoperative outcomes in people undergoing AAA repair (Barakat et al., 2016). The study by Barakat et al. (2016) was the first to demonstrate that an increase of 1.6 ml kg−1 min−1 in V̇O2peak and 1.9 ml kg−1 min−1 in VT in the exercise group was associated with a lower rate of postoperative complications (cardiac 8.1%, pulmonary 11.3% and renal 6.5%) when compared with the usual care group who underwent open surgery alone. Likewise, Hayashi et al. (2016) reported that increased levels of preoperative self-reported physical activity were associated with early ambulation and reduced length of hospital stay after AAA repair. Importantly, the authors also reported that individuals who engaged in exercise at the earlier stages of the disease had superior postoperative outcomes (reduced mortality and length of hospital stay) compared with those who became physically active at a later stage. Conversely, Tew et al. (2017) reported no impact on postoperative mortality after 4 weeks of high-intensity interval training in people with a large AAA. It is important to note that this was not a full-scale trial (the authors characterized it as an external pilot trial) and that no significant increases in cardiorespiratory fitness were reported after the exercise intervention. To date, these are the only studies that have assessed the effect of an exercise intervention on postoperative clinical outcomes in people with AAA. A recent meta-analysis (Wee & Choong, 2020) and a Cochrane review (Fenton et al., 2021) assessed the impact of preoperative exercise training for people with AAA. Both reported that preoperative exercise training appears to be beneficial for people with AAA; however, owing to methodological heterogeneity among studies, it remains premature to conclude that exercise training as a preoperative intervention improves postoperative outcomes.

4.3 The effects of exercise training on cardiovascular parameters and aneurysm progression

Exercise training-induced increases in cardiorespiratory fitness are accompanied by a cardiovascular health benefit in people with AAA. Tew et al. (2012) reported a decrease of 10 mmHg in systolic blood pressure in the exercise group after the completion of a short-term (12 weeks) exercise intervention in people with a small AAA. In addition, the authors reported a corresponding decrease in high-sensitivity C-reactive protein in the exercise group that was deemed clinically important.
With these changes, the risk stratification of the exercise group changed from 'moderate' to 'low'. Likewise, Nakayama et al. (2018) reported that a reduction in high-sensitivity C-reactive protein, observed in people with a small AAA who underwent cardiac rehabilitation, was associated with slower aneurysm growth. Recently, Niebauer et al. (2021) reported a sub-analysis of data stemming from the AAA Stop Trial (Myers et al., 2014). The authors reported a significant reduction in systolic blood pressure and in lipid accumulation product (a biomarker of atherosclerosis) in people with AAA after a year of exercise training compared with a usual care group. These results are promising, given that exercise-induced reductions in chronic inflammation are associated with corresponding improvements in endothelial function, blood flow and cardiorespiratory fitness in other chronic diseases, such as type 2 diabetes mellitus (Okada et al., 2010) and CAD (Cwikiel et al., 2018). Interestingly, it was recently demonstrated that even a single bout of exercise is able transiently to improve the cardiovascular profile of people with a small AAA by reducing aortic stiffness (Perissiou et al., 2019) and inflammation and improving endothelial function (Bailey et al., 2017). These parameters have all been associated with cardiovascular risk and aneurysm progression. It is apparent from present evidence that exercise can favourably influence markers of cardiovascular risk and aneurysm progression in patients with AAA; however, larger-scale clinical trials are needed in order to establish exercise as an adjunct treatment modality for addressing cardiovascular risk in this population.

Risks and concerns of exercise in people with AAA

To date, the available studies report that only a low percentage of the AAA population is engaged in regular physical activity (Hayashi et al., 2016), with a recent meta-analysis associating physical inactivity with the risk of AAA development (Aune et al., 2020). Concerns regarding the risks of exercise in people with AAA have been expressed in the past, although these are mostly based on opinion or medical caution rather than empirical evidence. Available data indicate that exercise is safe in people with AAA. Of a total of 294 patients who underwent exercise training in the nine available studies that were reviewed, only one adverse event (cardiac arrest) was reported in the exercise group, and it was neither aneurysm nor exercise related (Kothmann et al., 2009). A recent meta-analysis that assessed the safety of exercise training in patients with AAA reported a cardiovascular event rate of 0.8% (Kato et al., 2019), which is markedly lower than the 1.5% reported for healthy older individuals without AAA (Goodrich et al., 2007). Further to this, it was reported that exercise training did not increase aneurysm diameter in patients with a small AAA. Importantly, our recent study reported a similar acute haemodynamic response to moderate- and higher-intensity exercise between people with a small AAA and older individuals without an AAA (Perissiou et al., 2019), suggesting that exercise can be undertaken safely to improve cardiorespiratory fitness and cardiovascular health in this population.

COMPETING INTERESTS

None declared.

AUTHOR CONTRIBUTIONS

All authors contributed to the intellectual content of the manuscript; M.P., C.D.A. and T.G.B. conceived and planned the work; M.P. drafted the manuscript; all authors revised the manuscript and provided critical input to specific sections.
All authors read and approved the final manuscript and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All persons designated as authors qualify for authorship, and all those who qualify for authorship are listed.
2022-03-01T06:23:10.492Z
2022-02-28T00:00:00.000
{ "year": 2022, "sha1": "12028771a8af32e66a913c8d103eefcc309b839a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "e95563689cf6f3ca93717523d6bb23fd22b8a798", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257252298
pes2o/s2orc
v3-fos-license
ULK1 Depletion Protects Mice from Diethylnitrosamine-Induced Hepatocarcinogenesis by Promoting Apoptosis and Inhibiting Autophagy

Purpose: The uncoordinated-51 like kinase 1 (ULK1) is an important serine/threonine protein kinase involved in autophagy, especially in its initiation stage. Previous studies have shown that ULK1 could serve as a prognostic marker for poor progression-free survival and as a therapeutic target in hepatocellular carcinoma (HCC) treated with sorafenib; however, its role during hepatocarcinogenesis remains to be elucidated.

Methods: CCK8 and colony formation assays were used to assess cell growth. Western blotting was performed to determine protein expression levels. Data from public databases were downloaded to analyze ULK1 expression at the mRNA level and to predict survival time. RNA-seq was conducted to reveal the disturbed gene expression profile orchestrated by ULK1 depletion. A diethylnitrosamine (DEN)-induced HCC mouse model was used to uncover the role of ULK1 in hepatocarcinogenesis.

Results: ULK1 was up-regulated in liver cancer tissues and cell lines, and knockdown of ULK1 promoted apoptosis and suppressed proliferation of liver cancer cells. In in vivo experiments, Ulk1 depletion attenuated starvation-induced autophagy in mouse liver, reduced DEN-induced hepatic tumor number and size, and prevented tumor progression. Further, RNA-seq analysis revealed a close relationship between Ulk1 and immunity, with significant changes in gene sets enriched in the interleukin and interferon pathways.

Conclusion: ULK1 deficiency prevented hepatocarcinogenesis and inhibited hepatic tumor growth, and ULK1 might be a molecular target for the prevention and treatment of HCC.

Introduction

Hepatocellular carcinoma (HCC) accounts for nearly 85% of primary liver cancer, which is the sixth most prevalent and the third most lethal malignancy. 1 Despite increased global vaccine coverage, which has substantially decreased hepatitis virus infection, the global burden from HCC remains challenging, probably owing to the increasing population with obesity and diabetes mellitus. 2 Most HCC cases are diagnosed at an advanced stage with limited therapeutic options, and the 5-year survival rate has been very low (~24.3%). 3 Thus, it is of great importance to better understand the underlying molecular mechanisms of hepatocarcinogenesis, which will help improve current strategies for the prevention, early diagnosis, and treatment of HCC. HCC typically arises from a sequence of liver injury, chronic inflammation, fibrosis, and cirrhosis, which can be caused by hepatitis virus infection, aflatoxin, alcohol consumption, obesity, and type II diabetes mellitus. 4 During these processes, multiple genes/pathways, including Ras/PI3K/mTOR, Wnt/β-catenin, and TP53, are involved, 5 many of which have been identified and considered as potential therapeutic targets. Nonetheless, more hepatocarcinogenesis-related genes remain to be identified. The uncoordinated-51 like kinase 1 (ULK1) is the mammalian orthologue of yeast Atg1, a serine/threonine protein kinase that is important for autophagy. 6 There are five mammalian ULK homologues (ULK1, ULK2, ULK3, ULK4, and STK36 (serine/threonine kinase 36)), among which ULK1 and ULK2 are reported to be involved in conventional autophagy signaling, while ULK3 participates in stress-induced autophagy.
7 ULK1 functions in a complex with autophagy-related 13 (ATG13), focal adhesion kinase family-interacting protein of 200 kDa (FIP200), and ATG101 to initiate autophagy in response to upstream signals such as mTORC1 and AMPK. 8 Though ULK2 has a higher degree of homology with ULK1 than the others, it is ULK1, not ULK2, that acts as the predominant isoform in inducing autophagy. 9 Besides its canonical role in autophagy, ULK1 also participates in other physiological processes. For example, ULK1 sustains glucose metabolic fluxes by directly phosphorylating key glycolytic enzymes during deprivation of amino acids and growth factors, 10 and promotes cell death via regulating the activity of PARP1 under oxidative stress. 11 Accumulating evidence has also uncovered the relationship between ULK1 and cancer; however, the role of ULK1 in cancer remains to be carefully examined, as it can either promote or suppress tumor growth depending on the type of cancer investigated. For example, ULK1 was significantly down-regulated in breast cancer, 12 and ULK1 inhibited breast cancer metastasis. 13 On the other hand, ULK1 inhibition could suppress cell growth in lung cancer, colon cancer, and ovarian cancer. [14][15][16] For HCC, it has been reported that ULK1 was overexpressed in clinical samples, silencing ULK1 inhibited liver cancer cell growth while increasing the therapeutic effects of sorafenib, 17 and it was suggested that ULK1 could be used as a potential prognostic biomarker for HCC. 18 However, the exact role and function of ULK1 in hepatocarcinogenesis remain to be elucidated. Therefore, in the current study, we first examined the expression of ULK1 in liver cancer cell lines and human HCC samples, as well as in clinical tissues from public databases. Then, using an Ulk1-knockout (Ulk1KO) mouse strain, the development of tumors was evaluated in the diethylnitrosamine (DEN)-induced HCC model. As reported here, we found that ULK1 was overexpressed in liver cancer cell lines and clinical samples, and that Ulk1 deficiency inhibited DEN-induced hepatocarcinogenesis, probably through the induction of apoptosis along with the inhibition of autophagy.

Database Search

A gene expression profile of 424 LIHC (liver hepatocellular carcinoma) patients was downloaded from the UCSC Xena datahub (https://gdc-hub.s3.us-east-1.amazonaws.com/download/TCGA-LIHC.htseq_fpkm.tsv.gz). Then, normalized mRNA expression of ULK1 in liver cancer and normal liver tissues was analyzed with GraphPad Prism 7.0 (GraphPad Prism Software, La Jolla, CA). Comparison of ULK1 expression among cancer cell lines was performed using the online tool Cancer Cell Line Encyclopedia (https://sites.broadinstitute.org/ccle/). Tumor tissues from patients diagnosed with HCC accompanied by hepatitis B virus infection and paired adjacent non-tumor tissues were used to detect ULK1 protein expression in liver cancer.

Cell Lines

The human HCC cell lines Huh-7, Hep3B, and HCCLM3, as well as the normal human liver cell line HL7702, were obtained from The Cell Bank of the Type Culture Collection of the Chinese Academy of Sciences (Shanghai, China). The cells were maintained in DMEM medium (Cienry, China) supplemented with 10% fetal bovine serum (Gibco, USA) at 37°C with 5% CO2 in a humid atmosphere.

The Ulk1KO mouse strain was originally generated as previously described 19 and was a generous gift from Dr. Toshifumi Tomoda (University of Toronto) by way of Dr. Hanming Shen (National University of Singapore).
The use of the Ulk1KO mouse strain was approved by the Ethics Committee of Laboratory Animal Care and Welfare, Zhejiang University School of Medicine. Both Ulk1KO and wild-type male mice were divided randomly into control (n=5 (WT); n=7 (Ulk1KO)) and DEN groups (n=7 (WT); n=12 (Ulk1KO)), respectively. Mice in the DEN groups were injected intraperitoneally with 25 mg/kg body weight of DEN (Sigma-Aldrich, Cat# N0756) at the age of 2 weeks, while mice in the control groups received an equal volume of saline, and all were allowed to grow for 30 weeks. All experiments were performed under the Guide for the Care and Use of Laboratory Animals (The National Academy Press, 2011) and approved by both the Ethics Committee of Laboratory Animal Care and Welfare, Zhejiang University School of Medicine and the Animal Committee of Zhejiang Chinese Medical University.

Genotyping of Ulk1KO Mice

Mice were genotyped using tail DNA and PCR as described below: an initial denaturation at 94°C for 3 min; followed by 40 cycles of denaturation at 94°C for 40 s, annealing at 61°C for 40 s, and extension at 72°C for 40 s; ending with a final extension at 72°C for 5 min. A common primer (5'-CCT TCC CAT GCA GGC AAC ATA TAA GC-3') and a wild-type-specific primer (5'-AAG CAC GAC CTG GAG GTG GC-3') amplified a 500 bp fragment from wild-type (WT) mice. The same common primer and a mutation-specific primer (5'-AGT TCG AGT TCT CTC GCA AGG AC-3') amplified a 340 bp fragment from Ulk1KO mice. The PCR products were subjected to agarose gel electrophoresis or Sanger sequencing to identify the genotypes of the mice (Figure S1).

Liver Preparation

Mice were killed by anesthesia (pentobarbital sodium, 150 mg/kg (i.p.)) after overnight starvation. Intact livers were removed, weighed, and photographed with a measuring scale. Sections from the left lateral lobe were fixed in 10% formalin for hematoxylin-eosin (HE) staining, and the remaining tissue was snap frozen in liquid nitrogen and then stored at −80°C for further analysis. For in vivo detection of autophagy in mouse liver, WT and Ulk1KO mice were injected intraperitoneally with 50 mg/kg body weight of chloroquine or an equal volume of saline, and the mice were then subjected to fasting for 24 h. Finally, the mice were killed as mentioned above, and the liver tissues were obtained for Western blotting to determine autophagic markers.

Analysis of Liver Tumor

All visible nodules on the liver surface were counted for each mouse. Tumor diameters per mouse were measured by vernier caliper or calibrated software according to HE staining. Liver histopathologic lesions were classified according to the standardized and internationally accepted nomenclature for the classification of rodent tumors 20 by two experienced pathologists independently and in a blinded fashion.

Western Blot Analysis

Cells were lysed in RIPA buffer (Beyotime) for 30 min on ice. Cell lysates were then centrifuged (14,000 g for 10 min at 4°C), and the protein concentration of the supernatant was determined with a BCA protein assay kit (Beyotime). Cell lysates were separated by SDS-PAGE and electro-blotted onto a PVDF membrane (Millipore). In detail, 70 μg of cell lysates was separated on an 8% acrylamide gel to detect ULK1 expression in liver tissues and liver cancer cells.

Cell Proliferation and Colony Formation Assay

The HCC cell line Huh-7 was treated with siULK1-1, siULK1-2, or a scramble control; 12 h later, the cells were digested and seeded (4000 cells/well) into 96-well plates.
The cells were allowed to grow for 24, 48, 72, and 96 h. Then, 10 μL of CCK-8 (Beyotime) solution suspended in 100 μL of 1% FBS DMEM medium was added to each well for 1 h at 37°C, and the absorbance was measured at 450 nm. The colony formation assay was carried out in 6 cm dishes. Briefly, cells were transfected with siULK1-1, siULK1-2, or a scramble control, and, 24 h post-transfection, 1000 cells per well were seeded and allowed to grow for 10 days; the medium was replaced every three days. The cells were then washed twice with PBS, stained with a solution of 0.5% crystal violet and 70% ethanol, washed with PBS twice, and dried. Clusters consisting of at least 50 cells were defined as colonies 21 and were counted. Each assay was conducted in triplicate, and three separate assays were performed.

RNA-Sequencing and Data Analysis

Liver tissues from WT and Ulk1KO mice were obtained for RNA-seq. RNA isolation and sequencing were performed by Novogene Inc. under standard procedures as we previously reported. 22 In brief, RNA from liver tissue was used to construct a strand-specific library and was subjected to sequencing on the Illumina 4000 platform with a pair-end 150 base pair sequencing scheme, aiming for a minimum of 20 M reads per sample. Genes showing differential expression with FDR < 0.05 and a fold change (FC) > 1.5 were defined as differentially expressed genes. The raw data of the RNA-seq have been deposited in the Sequence Read Archive (SRA) database under accession number PRJNA907497.

Statistical Analysis

All values are expressed as mean ± SEM. A two-tailed Student's t-test was used to compare means between two groups. Statistical analysis was performed using GraphPad Prism 7.0. A P-value of less than 0.05 was considered statistically significant.

ULK1 Was Up-Regulated in HCC Tissue and Cell Lines

To confirm whether ULK1 is overexpressed in HCC patients, as previously reported, we retrieved RNA-seq data from UCSC for further analysis, and the results showed that ULK1 was significantly up-regulated in HCC tissues (Figure 1A). However, ULK1 expression was not correlated with the stage of HCC (Figure 1B). Immunoblotting was further performed to detect ULK1 protein levels in human liver tissues, and ULK1 protein was also expressed at a higher level in liver tumor tissues compared with non-tumor tissues (Figure 1C). In addition, the expression level of ULK1 in cancer cell lines was examined using data from the Cancer Cell Line Encyclopedia (CCLE), and liver cancer cell lines including Huh-1, Huh-7, JHH2, JHH7, and SNU761 showed relatively high levels of ULK1 expression (Figure 1D). All these data indicated a close relationship between ULK1 and cancer. To further elucidate the clinicopathological relevance of ULK1 in liver cancer, Kaplan-Meier survival analysis was performed with an online tool (http://kmplot.com/analysis/). Unexpectedly, the results showed that higher ULK1 expression predicted better overall survival (OS, P = 0.013) and disease-specific survival (DSS), but was negatively correlated with relapse-free survival (RFS) (Figure S2).
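A minimal sketch of the public-data comparison described above, not the authors' actual code: it loads the Xena FPKM matrix named in the Database Search section (assuming a local copy), splits samples into tumor and normal by TCGA barcode convention, and runs the two-tailed t-test mentioned under Statistical Analysis. The Ensembl identifier for ULK1 should be verified against the file's annotation.

```python
import pandas as pd
from scipy import stats

# Assumed local copy of the file named in the Database Search section.
expr = pd.read_csv("TCGA-LIHC.htseq_fpkm.tsv.gz", sep="\t", index_col=0)

# ULK1 Ensembl gene id (version suffix varies by release; verify before use).
ulk1 = expr.loc[expr.index.str.startswith("ENSG00000177169")].iloc[0]

# TCGA barcodes encode sample type at positions 14-15:
# "01" = primary tumor, "11" = solid tissue normal.
sample_type = ulk1.index.str[13:15]
tumor = ulk1[sample_type == "01"]
normal = ulk1[sample_type == "11"]

t, p = stats.ttest_ind(tumor, normal, equal_var=False)
print(f"tumor n={len(tumor)}, normal n={len(normal)}, t={t:.2f}, p={p:.3g}")
```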
Depletion of ULK1 Inhibited Cell Growth and Induced Apoptosis in Liver Cancer Cells

To further clarify the function of ULK1 in liver cancer cells, we also examined the expression of ULK1 in several HCC cell lines, among which Huh-7 showed the highest expression and was chosen for subsequent experiments (Figure 2A). ULK1 expression was then silenced in Huh-7 by siRNA, and the knockdown efficiency was confirmed for both siRNAs by Western blotting (Figure 2B). Results from the CCK8 assay demonstrated that ULK1 depletion inhibited liver cancer cell growth (Figure 2C). Similar results were also obtained from microscopic observation and the colony-formation assay (Figure 2D, G, and H). Furthermore, Western blotting was used to assess the expression of apoptosis-related proteins, and the results showed that ULK1 deficiency increased the expression of pro-apoptotic proteins while decreasing the expression of anti-apoptotic proteins (Figure 2E and F).

Ulk1 Deficiency in Mice Liver Disturbed the Expression of Genes Involved in Immune Response

To uncover the role of Ulk1 at the molecular level, liver tissue from Ulk1KO mice and WT mice was subjected to RNA-seq. Principal component analysis showed that the two groups were well separated from each other, while samples within each group shared much similarity, although some variance remained among the WT mice (Figure S3A). Surprisingly, only 43 genes were identified as differentially expressed genes (DEGs) in the liver of Ulk1KO mice versus wild-type (WT) mice (Figure 3A). Among these DEGs, 15 genes, including Slc34a2, Cry1, Leap2, and Nmi, were down-regulated, while genes such as Gucd1, Rangap1, and Fosl2 were up-regulated in Ulk1KO mouse liver compared with WT mice. Gene set enrichment analysis (GSEA) was performed, and the gene sets associated with up-regulated genes in Ulk1KO mice were related to interleukin 2 (IL2)-STAT5 (signal transducers and activators of transcription 5) and mitotic spindle signaling (Figure 3B, Figure S3B), while the gene sets with lower expression were involved in the interferon (IFN)-α and IFN-γ responses (Figure 3C, Figure S3C).

Ulk1 Depletion Inhibited Starvation-Induced Autophagy in Mice Liver

Considering the vital role of ULK1 in autophagy, especially in the initiation stage, we evaluated the effects of ULK1 depletion on autophagy in vivo. Fasting is known to induce autophagy, and results from immunoblotting showed that fasting did induce the conversion of LC3-I to LC3-II in WT mouse liver tissue, indicating the occurrence of autophagy. Moreover, the autophagy inhibitor chloroquine (CQ) blocked the autophagic flux, as demonstrated by the further accumulation of LC3-II in the fasting plus chloroquine double-treatment group (Figure 4A). In contrast, in Ulk1KO mice, chloroquine did not lead to further accumulation of LC3-II under the fasting condition when compared with fasting treatment alone (Figure 4B). The ratio of LC3-II/LC3-I also showed that fasting in WT mice promoted conversion of LC3-I to LC3-II, while no significant difference was found in Ulk1KO mice (Figure 4C). Taken together, these results suggested that starvation-induced autophagy was impaired in Ulk1KO mouse liver.

DEN-Induced Hepatocellular Carcinoma Was Suppressed in Ulk1KO Mice

To determine the role of Ulk1 in hepatocarcinogenesis, two-week-old male Ulk1KO mice and WT mice were subjected to 25 mg/kg DEN injection as we previously reported. Thirty weeks after DEN administration, WT mice developed apparent large tumors on the liver surface, while only sporadic and much smaller nodules could be found on the liver surface of Ulk1KO mice based on macroscopic observation. Correspondingly, histopathological examination using HE staining further identified that HCC arose in WT mice, whereas most Ulk1KO mice presented only characteristics of lower-grade foci. As expected, no nodules were found in the control groups for either WT or Ulk1KO mice (Figure 5A).
Visible tumor nodules at the liver surface were counted for each mouse, and Ulk1KO mice developed significantly fewer hepatic tumors than the WT ones (Figure 5B). Nonetheless, no statistically significant difference in tumor incidence was found between WT and Ulk1KO mice, although one Ulk1KO mouse was classified as having no tumor formation (Figure 5C). According to the international nomenclature for the classification of rodent tumors, liver histopathologic lesions were classified as foci, hyperplasia, hepatocellular adenoma (HCA), and HCC. Based on this classification, by the end of 30 weeks of DEN treatment, ten (83.3%) Ulk1KO mice had developed foci, one (8.3%) had developed hyperplasia, and one (8.3%) showed no tumor, whereas most WT mice had progressed to hyperplasia (42.9%) and the more aggressive HCA (28.6%) and HCC (14.3%) (Figure 5D). The sizes of the tumors were also measured, and it was clear that Ulk1KO mice had much smaller liver tumors than WT mice (Figure 5E and F).

Discussion

As an essential kinase for the initiation of autophagy, the function of ULK1 has been well studied, and its role in various types of cancers has also been investigated. For HCC, using 55 paired patient samples, Xu et al found that ULK1 expression was higher in HCC tissue than in adjacent tissue, although the difference did not reach statistical significance; on the other hand, higher ULK1 expression was associated with tumor size and worse survival time. 18 Wu et al examined the expression of ULK1 in 156 HCC patients using tissue-microarray-based immunohistochemistry and found that it was highly expressed in 53.2% of the specimens; however, ULK1 expression was not related to any clinicopathological indicators. Further analysis revealed that ULK1 was not associated with 5-year OS, but was an independent prognostic biomarker for PFS. 23 These contradicting results prompted us to evaluate the role of ULK1 in HCC in depth. By analyzing the TCGA database, we found that ULK1 was overexpressed in HCC samples (Figure 1). However, by KM plot analysis, we uncovered a paradoxical role of ULK1 in predicting the survival of HCC patients: high-level ULK1 expression was positively associated with better OS and DSS, but negatively correlated with RFS (Figure S2). These results indicate that ULK1 alone may not be a good biomarker for predicting the prognosis of HCC. Indeed, Wu et al demonstrated that combining the autophagic biomarkers ULK1 and LC3B was better at predicting prognosis than using either individually. Although the above results cast doubt on the prognostic value of ULK1, its function in liver cancer growth is relatively clear. For example, it was shown that silencing ULK1 inhibited liver cancer cell growth and proliferation, and deletion of ULK1 abrogated tumor growth in a xenograft mouse model. 17 In this study, our results also confirmed that silencing ULK1 led to the inhibition of liver cancer cell growth and colony formation; in addition, the expression of anti-apoptotic proteins was down-regulated, indicating that apoptosis might be responsible for the decreased cell growth (Figure 2). Compared with their normal counterparts, tumor cells always exhibit aberrant proliferation. 24 To fulfill the rapid biosynthetic demands associated with proliferation, cancer cells usually promote autophagy as a self-catabolic process for recycling engulfed cargo, allowing them to survive and proliferate under metabolically unfavorable conditions. 25 Therefore, the loss of ULK1 could cause decreased autophagy, which might lead to decreased proliferation of cancer cells.
Autophagy is regarded as a double-edged sword in cancer, since it can suppress tumor formation by maintaining the homeostasis of normal cells or promote tumor progression by enabling cancer cells to survive under stress. 26 For instance, on one hand, monoallelic deletion of Becn1 in mice resulted in diminished but intact autophagy, and the mice developed spontaneous hepatocellular carcinomas. 27 On the other hand, Atg7 deficiency dramatically changed the nature of lung tumors driven by BrafV600E from adenomas to benign oncocytomas. 28 These results indicated that autophagy is important for the suppression of spontaneous tumorigenesis but is required for tumor progression to malignancy. In our study, it was first found that knockout of Ulk1 impaired starvation-induced autophagy but not basal autophagy (Figure 4). Then, in the hepatocarcinogenesis experiment, no spontaneous tumor formation was found in Ulk1KO mice, but DEN-induced primary liver tumor formation was inhibited (Figure 5). These results indicated that Ulk1 deficiency did not promote tumor formation but suppressed tumor progression in mice, suggesting that the role of ULK1 in tumors differs from that of other autophagy-related genes such as beclin1, ATG5, and ATG7. Several studies have indicated a close relationship between ULK1 and immunity. 29,30 In our study, GSEA results revealed that IFN-α and IFN-γ signaling were down-regulated. This is inconsistent with the conclusions of previous studies, in which ULK1 was a key mediator of type I IFN signals and was required for IFN-γ-inducible antiviral responses. 31,32 It has been reported that IFN-α can either stimulate immune cells to eliminate cancer cells or enable cancer cells to escape immune clearance through immune exhaustion caused by prolonged stimulation. 33 IFN-γ can either promote melanoma development or inhibit breast cancer by reducing cancer stem cells. 34,35 However, the exact role that IFN-α and IFN-γ signaling played in DEN-induced hepatocarcinogenesis remains to be clarified. Conversely, GSEA results showed that the IL-2-STAT5 and mitotic spindle pathways were up-regulated in Ulk1KO mice. STAT5 is a critical downstream mediator of IL-2 signaling and affects immune function in many aspects. 36 It can act as a tumor suppressor by affecting STAT3 signaling and as a tumor promoter under other circumstances. 37 Interestingly, it was reported that IFN-α could inhibit IL-2 signal transduction in T-lymphocytes, 38 thus the activation of the IL-2-STAT5 pathway might be a consequence of the down-regulation of IFN-α. Still, clearer evidence is required to verify the relationships between the different immune signaling pathways, as well as their impacts on cancer development. In summary, by using the DEN-induced HCC model, we expanded the understanding of ULK1 function in cancer by uncovering that ULK1 deficiency suppressed primary HCC. Clues from RNA-sequencing of Ulk1KO mice pointed toward an altered immune status in cancer development. Taken together, we believe that ULK1 could be a potential target for HCC prevention and treatment.

Funding

This work was supported in part by grants from the National Natural Science Foundation of China (Nos. 31971138 and 32270186 to J.Y.) and the Zhejiang Provincial Natural Science Foundation of China (No. LQ20H160013 to T.D.).

Disclosure

The authors declare that they have no conflicts of interest.
2023-03-01T16:17:41.001Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "5c2da8aea4f642ac9caa74b59639e7c8648043c2", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=87775", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6ad49ed1e8189abd79f8193cc7372b966b910854", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [] }
33348619
pes2o/s2orc
v3-fos-license
A Framework for Optimal Matching for Causal Inference

We propose a novel framework for matching estimators for causal effects from observational data that is based on minimizing the dual norm of estimation error when expressed as an operator. We show that many popular matching estimators can be expressed as optimal in this framework, including nearest-neighbor matching, coarsened exact matching, and mean-matched sampling. This reveals their motivation and aptness as structural priors formulated by embedding the effect in a particular functional space. This also gives rise to a range of new, kernel-based matching estimators that arise when one embeds the effect in a reproducing kernel Hilbert space. Depending on the case, these estimators can be found using either quadratic optimization or integer optimization. We show that estimators based on universal kernels are universally consistent without model specification. In empirical results using both synthetic and real data, the new, kernel-based estimators outperform all standard causal estimators in estimation error.

Introduction

Compared to controlled experiments, observational studies are uniquely characterized by a lack of control over membership in the treatment and control groups. While in controlled experimentation randomization ensures comparability and hence unbiased and consistent estimation of effects, in observational studies valid inference about a causal effect of treatment requires adjusting the groups so that they become comparable. Comparable, for the purpose of causal inference, means as similar as possible in some observed covariates. The covariates constitute the relevant information known about each observational subject and, as long as these covariates account for any confounding between the effects of treatment and the effects of self-selection, making the groups comparable with respect to these makes the groups comparable for the purpose of causal inference. Matching is one of the most popular ways to achieve this comparability [7,22,32]. In matching, we sample a subset from the groups to get samples that are more similar to one another than the original samples. More generally, we may re-weight the original sample, where weights that are integer multiples correspond to (multi)subsets. For example, in nearest-neighbor matching (NNM) [21], one composes a matched sample out of pairs of treated and control subjects so that the total pairwise distance between covariate vectors is small or even minimal, mimicking a randomized matched-pair experiment [12]. If we allow subjects to be paired with replacement, we can have a sample with duplicates, resulting in weights that correspond to a multisubset rather than a regular subset. In coarsened exact matching (CEM) [15], one coarsens the covariates to create strata and re-weights the samples so that they have equal frequency in each stratum, mimicking a randomized block experiment [8], which results in general weights that may not correspond to taking a subset of the data. We focus on these and similar matching estimators that balance the covariates themselves rather than imputed propensity scores, as in propensity score matching (PSM) [23]. Matching on covariates addresses imbalance and not just confounding [18,17]. Nonetheless, we include PSM in numerical experiments. In this paper, we develop a novel and encompassing framework for estimators that balance the covariates via matching (in the broader re-weighting sense).
There are many different such estimators and each addresses imbalance differently. Our framework teases out how a particular notion of imbalance corresponds to a notion of structure. By decomposing the error of matching estimators, we formulate the error of the estimator as an operator on the conditional expectation function of outcomes given covariates. This conditional expectation function is unknown (or else there would be no need to conduct the study), and when one considers what the worst-case error may be over a space of possible such functions, one recovers the dual norm of the error if the space is a Banach space. The dual norm of the error is an observable quantity, expressed only in terms of the given data. We term any estimator that chooses matched subsamples by minimizing this quantity error-dual-norm minimizing (EDNM). A surprising result is that a great variety of standard methods used in the practice of causal inference are all EDNM. This observation leads us to consider new methods that are EDNM. Using reproducing kernel Hilbert spaces (RKHS) to express structure, we obtain a new class of kernel-based matching estimators for causal effects. These have desirable properties like consistency and perform exceptionally well in practice. All proofs are given in the supplement.

Set Up.

We begin by describing the set up. We consider an observational study with $n$ subjects, indexed $i = 1, \dots, n$. We let this order be arbitrary so that the subjects are exchangeable (later, we consider subjects comprising an iid process). Of these, $n_1$ received a treatment whose effect is of interest (denoted by $T_i = 1$) and $n_0$ received a control treatment against which we want to compare (denoted by $T_i = 0$). Let $T_0 = \{i : T_i = 0\}$ and $T_1 = \{i : T_i = 1\}$ be the sets of subjects that received control and treatment, respectively. We let $T = (T_1, \dots, T_n)$. Using Neyman-Rubin potential outcome notation [29], we let $Y_i(0), Y_i(1)$ be the (real-valued) potential outcomes for subject $i$. We observe the outcome for the treatment to which subject $i$ was exposed, $Y_i = Y_i(T_i)$, and $Y_i(1 - T_i)$ represents the unobserved, counterfactual outcome we would have observed if subject $i$ were exposed to the opposite treatment; $Y_i(1 - T_i)$ is missing data. Throughout the paper, for these to be well defined, we assume that the stable unit treatment value assumption (SUTVA) holds [26]. Let $X_i$, taking values in some $\mathcal{X}$, be the side covariates that we observe for subject $i$. Let $X = (X_1, \dots, X_n)$ denote the collection of all baseline covariates of all $n$ subjects, which constitutes part of the observed data. The space $\mathcal{X}$ is general; assumptions about it will be specified as necessary. As an example, it can be composed of real-valued vectors $\mathcal{X} \subseteq \mathbb{R}^d$ that include both discrete (dummy) and continuous variables. We denote by $\mathrm{TE}_i = Y_i(1) - Y_i(0)$ the unobservable causal treatment effect for subject $i$. The primary quantity of interest for estimation is the sample average (causal) treatment effect on the treated sample: $\mathrm{SATT} = \frac{1}{n_1} \sum_{i \in T_1} \mathrm{TE}_i$. We consider estimators for SATT based on matching in the form of re-weighting. We restrict attention to honest weights that depend only on the observed $X, T$ and not on any observed outcome data. (If we used outcome data, one might complain that we are mining for an effect that is not there.)
In particular, we consider the choice of a function $W = W(X, T)$ that produces a weight $W_i \in \mathbb{R}_+$ for each subject $i$, leading to the estimator $\hat\tau_W = \sum_{i \in T_1} W_i Y_i - \sum_{i \in T_0} W_i Y_i$. Because we are estimating SATT and we in fact observe $Y_i(1)$ for each $i \in T_1$, we always set $W_i = 1/n_1$ for $i \in T_1$, leading to estimators of the form $\hat\tau_W = \frac{1}{n_1}\sum_{i \in T_1} Y_i - \sum_{i \in T_0} W_i Y_i$. We also always assume $\sum_{i \in T_0} W_i = 1$. We let $\mathcal{W} = \mathcal{W}_0 \times \mathcal{W}_1$ denote the space of allowable weights, where $\mathcal{W}_0$ and $\mathcal{W}_1$ are the spaces of weights for the control and treated sample, respectively. We require that $\mathcal{W}_0 \subseteq \{W_{T_0} \in \mathbb{R}_+^{T_0} : \sum_{i \in T_0} W_i = 1\}$ and that $\mathcal{W}_1 = \{(1/n_1, \dots, 1/n_1)\}$. If all weights in $\mathcal{W}_0$ are rational with a fixed denominator, then $\hat\tau_W$ corresponds to constructing a (multi-)set from the control subjects to match the treated sample. We note some special cases of $\mathcal{W}_0$ that correspond to a variety of existing classes of estimators for SATT:

- Probability (convex combination) weights: $\mathcal{W}_0^{\text{probability}} = \{W_{T_0} \in \mathbb{R}_+^{T_0} : \sum_{i \in T_0} W_i = 1\}$;
- Multisubsets of cardinality $n_0'$ (with replacement): $\mathcal{W}_0^{n_0'\text{-multisubset}} = \mathcal{W}_0^{\text{probability}} \cap \{0, 1/n_0', 2/n_0', \dots\}^{T_0}$.

A standing assumption in this paper, essential for causal inference from observational data, is that of weak ignorability in expectation.

Assumption 1. For each $t = 0, 1$ and $i = 1, \dots, n$, conditioned on $X_i$, $Y_i(t)$ is mean-independent of $T_i$ and each value of $T_i$ is possible. That is, for each $t = 0, 1$ and $i = 1, \dots, n$, $\mathbb{E}[Y_i(t) \mid X_i, T_i] = \mathbb{E}[Y_i(t) \mid X_i]$ and $0 < \mathbb{P}(T_i = 1 \mid X_i) < 1$.

Ignorability, also known as unconfoundedness, means that we have the right covariates needed to separate the effect of the treatment itself from the effect of self-selection [23]. The form of ignorability we use is termed "weak" because it need only apply for each $t = 0, 1$ separately, and it is termed "in expectation" because only mean-independence, rather than full stochastic independence, is assumed.

EDNM

Decomposing the Error.

Denote the conditional expectation of the control potential outcome given the covariates by $f_0(x) = \mathbb{E}[Y_i(0) \mid X_i = x]$. The nonrandom function $f_0$ does not depend on $i$ due to exchangeability. By iterated expectation, the residual $\epsilon_i = Y_i(0) - f_0(X_i)$ has mean 0, is mean-independent of $X_i$, and is uncorrelated with any function of $X_i$. By conditioning on $X_i$, we can decompose the error of the estimator into two terms: error that can be controlled by matching on $X_i$, namely $E(W; f_0)$ where $E(W; f) = \frac{1}{n_1}\sum_{i \in T_1} f(X_i) - \sum_{i \in T_0} W_i f(X_i)$, and the orthogonal residual error, which cannot be controlled by $X_i$ but which disappears in expectation due to ignorability.

The Dual Norm of the Error.

The target of matching for causal inference is to eliminate error in comparing the treatment and control samples. Theorem 1 provides an explicit form of the controllable error in terms of the observed covariates $X$. However, it involves the unknown function $f_0 : \mathcal{X} \to \mathbb{R}$. As alluded to in Sec. 1, we consider matching schemes that guard against any possible such function by minimizing the worst-case error over the unit ball of a Banach space. A normed vector space is a Banach space if the corresponding metric space is complete (see [19], and [24], Ch. 10 for more on Banach spaces). Let $V$ denote the vector space of all functions $\mathcal{X} \to \mathbb{R}$ under the usual pointwise addition and scaling. Let $\mathcal{F} \subseteq V$ be a subspace of functions against which we wish to guard. Endow this space with a semi-norm $\|\cdot\| : \mathcal{F} \to \mathbb{R}$ (a semi-norm can assign zero magnitude to nonzero vectors). For $f \notin \mathcal{F}$, let us write $\|f\| = \infty$. Thus, the assumption that $f_0 \in \mathcal{F}$ is encapsulated by $\|f_0\| < \infty$.
Given only that $\|f_0\| < \infty$, we will consider matching schemes that choose $W$ to minimize the worst-case error,
$\max_{\|f\| \le \|f_0\|} |E(W; f)| = \|f_0\| \max_{\|f\| \le 1} E(W; f)$,
where the equality holds because $E(W; \alpha f) = \alpha E(W; f)$ is degree-1 homogeneous and $\|\alpha f\| = |\alpha| \|f\|$ is degree-1 positively homogeneous and symmetric. Clearly, it only matters that $\|f_0\| < \infty$, and the particular finite value of it does not change which $W$ minimizes the above. In light of this, we define the worst-case error as $E(W; \mathcal{F}) = \max_{\|f\| \le 1} E(W; f)$. We assume $(\mathcal{F}, \|\cdot\|)$ satisfies conditions guaranteeing that, modulo constant functions, it is a Banach space on which evaluation differences $f \mapsto f(x) - f(x')$ are continuous. Since $E(W, f)$ is also linear in $f$, these assumptions imply that, for each $W$, the operator $E(W, \cdot)$ is in the continuous dual space of $\mathcal{F}$. Hence, $E(W; \mathcal{F})$ is precisely the dual norm of the error, where the dual norm of a continuous linear operator $A$ on a Banach space with norm $\|\cdot\|$ is $\|A\|_* = \sup_{\|u\| \le 1} A(u)$. This also guarantees that $E(W; \mathcal{F})$ is finite and well defined. Let $E_{\min}(\mathcal{F}) = \min_{W \in \mathcal{W}} E(W; \mathcal{F})$ be the optimal value. Clearly, if a matching method $W(T, X)$ is EDNM with $(\mathcal{F}, \|\cdot\|)$ and $\mathcal{W}$, then the error of $\hat\tau_W$ is bounded by $|E(W; f_0)| \le \|f_0\| E_{\min}(\mathcal{F})$.

Existing Methods as EDNM

Surprisingly, many methods for causal inference that are standard in practice are also in fact EDNM. On the one hand, this interpretation gets at the core of the structural motivations behind many of these methods (e.g., "if you believe the conditional expectation is Lipschitz and nothing more, then you should pairwise match") and allows one to choose a method appropriate to one's beliefs about problem structure. On the other hand, these results provide motivation that EDNM is the right framework in which to think about matching for causal inference, and this motivates us to consider new EDNM methods in Sec. 2.2.

Nearest-Neighbor Matching.

NNM is by far the most common matching method. In NNM, each treated subject is paired with one control subject so that the sum of pairwise distances is minimized, as measured by some distance metric $\delta(x, x')$ on $\mathcal{X}$ [21]. Usually, the Mahalanobis metric is used, $\delta(x, x') = ((x - x')^T \hat\Sigma^{-1} (x - x'))^{1/2}$, where $\hat\Sigma$ is the pooled sample covariance matrix. NNM can be done either without replacement (each control subject used at most once; aka one-to-one) or with replacement (control subjects may be reused; aka many-to-one). The estimate of SATT is the average of pairwise differences of outcomes. This estimator is exactly $\hat\tau_W$ where the weight on control subject $i$ is $1/n_1$ times the number of times subject $i$ was matched, i.e., the matched control sample is the (multi-)set of control subjects that got matched to treated subjects. NNM is EDNM, as we show next, when $\|f\|$ is taken to be the Lipschitz constant of $f$ with respect to $\delta$. Note that even if the weights are not restricted to be multiples of $1/n_1$, the optimal unrestricted weights will end up being multiples of $1/n_1$ regardless. That is, the optimal general-form weighting is optimal subset matching for Lipschitz functions. Note that $(\mathcal{F}, \|\cdot\|)$ is not a Banach space; in particular, constant functions have zero Lipschitz constant. However, as required, $\mathcal{F}/\mathbb{R}$ is a Banach space and evaluation differences are continuous because they are bounded by the magnitude. Algorithmically, NNM with replacement amounts to finding the control subject of minimal distance to each treated subject in a greedy manner. NNM without replacement amounts to minimum-sum-of-distances bipartite matching with unbalanced parts, which is easily solved by the Ford-Fulkerson algorithm [9]. A close cousin is caliper matching, whereby we only match subjects that are within a distance $\delta_0$ of one another. This method is also EDNM.
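A minimal sketch of one-to-one NNM as just described: minimum-total-distance bipartite matching of treated to control subjects on pairwise Mahalanobis distances. Where the text cites Ford-Fulkerson, this sketch instead uses SciPy's rectangular linear-sum-assignment solver, which solves the same unbalanced min-cost matching problem; it assumes $n_1 \le n_0$.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def nnm_without_replacement(X_treated, X_control):
    """Return, for each treated unit, the index of its matched control unit.

    Assumes there are at least as many control units as treated units.
    """
    # Pooled sample covariance, as is conventional for the Mahalanobis metric.
    pooled = np.cov(np.vstack([X_treated, X_control]).T)
    D = cdist(X_treated, X_control, metric="mahalanobis",
              VI=np.linalg.pinv(pooled))
    # Rectangular assignment handles the unbalanced parts directly.
    rows, cols = linear_sum_assignment(D)
    return cols  # cols[k] is the control unit matched to treated unit rows[k]
```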
Coarsened Exact Matching.

CEM [15] is a matching method whereby one coarsens the covariates into a few ($M$) strata via a coarsening function $C : \mathcal{X} \to \{1, \dots, M\}$, and then matches exactly within each stratum. For example, if there are 5 treated subjects and 3 control subjects in a given stratum, then each of the 3 control subjects is given weight proportional to 5/3, whereas if there were 0 treated subjects the weights would be 0. The case of a stratum containing only treated subjects is not allowed (no extrapolation); [16] suggests that in this case one not estimate SATT. Under Assumption 1, this case happens with vanishing probability. CEM is also EDNM, assuming no extrapolation.

Mean-Matched Sampling.

Often, practitioners evaluate the quality of a matched control sample by measuring the Mahalanobis distance between the matched control sample and the treated sample,
$M_V(W) = \big( (\bar X_1 - \bar X_W)^T V (\bar X_1 - \bar X_W) \big)^{1/2}$, where $\bar X_1 = \frac{1}{n_1}\sum_{i \in T_1} X_i$ and $\bar X_W = \sum_{i \in T_0} W_i X_i$,
where $\mathcal{X} \subseteq \mathbb{R}^d$ and $V$ is some positive semidefinite matrix, usually taken to be $V = \hat\Sigma_0^\dagger$, the inverse sample covariance matrix of $X$ over $T = 0$. This distance is a rotated 2-norm between the sample means. Mean-matched sampling methods find a matched control sample of prescribed size so as to reduce this distance [10,25], and optimal mean-matched sampling (OMMS) fully minimizes this distance and is EDNM. Since the relevant $\mathcal{F}$ is finite-dimensional, the space $(\mathcal{F}, \|\cdot\|)$ is always a Banach space and evaluations (and hence their differences) are always continuous. See Thms. 5.33 and 5.35 of [14].

Kernel Matching

In the previous section we saw that a variety of standard methods for causal inference are EDNM. Each was recovered using a different form of structure on the conditional expectations of outcomes. In this section we develop a range of new EDNM methods based on kernels and their corresponding reproducing kernel Hilbert spaces (RKHS). Kernels are standard in machine learning (ML) as ways to generalize the structure of learned conditional expectation functions, like classifiers or regressors [27]. Kernels also have many applications in statistics, such as in independence testing [3,11,34] and goodness-of-fit testing [11]. The same way kernels are used to generalize the structure of learned functions in ML, we can use these to generalize the structure of $f_0$. This will lead to new methods for causal inference that are potentially very powerful. A Hilbert space is an inner-product space such that the norm induced by the inner product, $\|f\|^2 = \langle f, f \rangle$, yields a Banach space. An RKHS $\mathcal{F}$ is a Hilbert space of functions for which, for every $x \in \mathcal{X}$, the map $f \mapsto f(x)$ is a continuous mapping [3]. Continuity and the Riesz representation theorem imply that for each $x \in \mathcal{X}$ there is a representer $K(x, \cdot) \in \mathcal{F}$ with $f(x) = \langle f, K(x, \cdot) \rangle$; the positive semidefinite function $K(x, x')$ is the kernel of $\mathcal{F}$. Examples include the polynomial kernel $K_\sigma(x, x') = (1 + x^T x'/\sigma^2)^p$, whose RKHS spans the finite-dimensional space of all polynomials of degree up to $p$; the exponential kernel $K_\sigma(x, x') = e^{x^T x'/\sigma^2}$, the infinite-dimensional limit of the polynomial kernel; and the Gaussian kernel $K_\sigma(x, x') = e^{-\|x - x'\|_2^2/\sigma^2}$.
R and ✏ > 0, there exists f 2 F in the corresponding RKHS such that sup x2X |f (x) g(x)|  ✏. Note that any RKHS F satisfies Assumptions 2 and 3. As such it gives rise to EDNM matching methods. Theorem 6. Let F be an RKHS with kernel K. Let K be the Gram matrix on X. Then, The above theorem makes clear that kernel matching with a linear kernel K V (x, x 0 ) = x T V x 0 is exactly equivalent to mean-matched sampling, i.e., it leads to E(W ; F) = M V (W ). Moreover, if W T0 2 {0, 1/n 0 0 } T0 then E(W ; F) is exactly the kernel maximum mean discrepancy (MMD) statistic between the treated sample and the matched control sample. Kernel MMD is a common test statistic in two-sample goodness-of-fit testing [11,28]. We can interpret minimizing this discrepancy as trying to make the two samples appear to come from the exact same distribution. Next, we review the various possible methods this can give rise to. In the following, we let k 0 = K T0T1 e n1 /n 1 . Kernel Matching with Probability Weights. For probability weights, we can formulate a linearlyconstrainted convex-quadratic optimization problem to find the optimal weights: This problem can be solved in polynomial time with interior point methods [4] and is amenable to solution with o↵-the-shelf solvers like Gurobi. Kernel Multisubset Matching. For matching with replacement, we can formulate a linear-integerconstrainted convex-quadratic optimization problem to find the optimal weights: where we used the change of variables W 0 = n 0 0 W T0 . This problem is NP-hard (reducible to number partitioning for rank(K T0T0 ) = 1), but it is also amenable to solution by o↵-the-shelf integer programming solvers like Gurobi. Kernel Subset Matching. For matching without replacement, we can formulate a linear-integer-constrainted convex-quadratic optimization problem to find the optimal weights: Again, the problem is generally "hard" but can be solved in practice using o↵-the-shelf integer programming solvers. Consistency Next, we express conditions for EDNM estimators to have error converging to zero. Definition 3. A Banach space is said to be B-convex if there exists N 2 N and ⌘ < N such that for every g 1 , . . . , g N with kg i k  1 8i there exists a choice of signs so that k±g 1 ± · · · ± g N k  ⌘. It is easy to verify that all the Banach spaces so far considered are B-convex. All Hilbert spaces and all finitedimensional Banach spaces are B-convex [19,Ch. 9]. and either (a) F is B-convex and ⌫ = 2 or (b) F is a Hilbert space and ⌫ = 1. Then, F W ! 0 almost surely. Clearly, one way to satisfy condition (4) is to have f 0 2 F, i.e., to make the correct structural assumption. But, it is su cient that f 0 is close to F. Both universal RKHSs (for kernel matching) and the space of Lipschitz functions (for NNM) are dense in continuous functions (i.e., they reside in their closure) in the sense of condition (4). Empirical Results In this section, we study empirically the comparative e ciency of various causal estimators, including our new kernel estimators. First, we consider a simple synthetic observational study that allows us to investigate the interaction between underlying structure and matching method used. Second, we consider an observational study based on a dataset compiled by [13] from the Infant Health and Development Program [5]. Fictitious Study. Consider the following fictitious observational study with one treatment and control. Subjects are drawn at random from a population. 
Consistency
Next, we express conditions for EDNM estimators to have error converging to zero.

Definition 3. A Banach space is said to be B-convex if there exist N ∈ ℕ and η < N such that for every g₁, . . . , g_N with ‖g_i‖ ≤ 1 for all i there exists a choice of signs so that ‖±g₁ ± · · · ± g_N‖ ≤ η.

It is easy to verify that all the Banach spaces so far considered are B-convex. All Hilbert spaces and all finite-dimensional Banach spaces are B-convex [19, Ch. 9]. Suppose that f₀ can be approximated within F in the sense of condition (4), and that either (a) F is B-convex and ν = 2 or (b) F is a Hilbert space and ν = 1. Then E(W; F) → 0 almost surely. Clearly, one way to satisfy condition (4) is to have f₀ ∈ F, i.e., to make the correct structural assumption. But it is sufficient that f₀ be close to F. Both universal RKHSs (for kernel matching) and the space of Lipschitz functions (for NNM) are dense in continuous functions (i.e., these reside in their closure) in the sense of condition (4).

Empirical Results
In this section, we study empirically the comparative efficiency of various causal estimators, including our new kernel estimators. First, we consider a simple synthetic observational study that allows us to investigate the interaction between the underlying structure and the matching method used. Second, we consider an observational study based on a dataset compiled by [13] from the Infant Health and Development Program [5].

Fictitious Study. Consider the following fictitious observational study with one treatment and one control. Subjects are drawn at random from a population. For each subject we observe a two-dimensional vector of covariates X_i ∈ ℝ². In the population, these are distributed uniformly on [−1, 1]². Each subject has either received treatment or control and we observe T_i. In the population, T_i is distributed as Bernoulli with probability 0.8/(1 + √2 ‖X_i‖₂), which ranges from 0.27 to 0.8. The potential outcomes are Y_i(0) = f₀(X_i) + ε₀ᵢ and Y_i(1) = f₁(X_i) + ε₁ᵢ, where ε_{ti} is independent noise. We focus on the case of small residual noise (variance not explained by X_i) so as to tease out the comparative efficiency in matching X (if the residual noise is big, any method that only matches on X will do badly). We let f₁ be any function whatsoever. We consider a variety of possible cases for f₀, among them:
- sinusoidal: f₀(x) = sin(π(x₁ + x₂)) + cos(π(x₁ − x₂)).
For each n = 10, 20, . . . , 300, we produce 100 replicates. For each, we consider a variety of estimators:
- No matching: we take the whole control sample to be the matched sample (W_i = 1/n₀);
- One-to-one: we match n₁ control subjects using NNM without replacement, i.e., using optimal bipartite matching on the matrix of pairwise Mahalanobis distances between treated and control subjects;
- CEM: we find the largest b such that coarsening each of the covariates into b even bins {[−1, −1 + 2b⁻¹), . . . , [1 − 2b⁻¹, 1]} leaves no box (product of two bins) that contains only treated subjects; then we perform exact matching within each box;
- Mahal. means: we match n₁ control subjects with replacement to minimize the Mahalanobis distance between the means of the two samples (the same as matching n₁ control subjects with replacement using kernel matching with the linear kernel);
- PSM: we match n₁ control subjects using propensity score matching, fitting a logistic regression to impute propensity scores and doing optimal bipartite matching on the imputed scores;
- Quad kernel weight: we use kernel matching with probability weights and the quadratic kernel;
- Exp kernel weight: we use kernel matching with probability weights and the exponential kernel;
- Gauss kernel weight: we use kernel matching with probability weights and the Gaussian kernel;
- Exp kernel match: we match n₁ control subjects with replacement using kernel matching with the exponential kernel; and
- Gauss kernel match: we match n₁ control subjects with replacement using kernel matching with the Gaussian kernel.
We let γ = 1 for all kernels. We use Gurobi v6.5 (www.gurobi.com) to solve all quadratic and integer optimization problems. For each estimator, we compute τ̂_W − SATT. Then, we measure the RMSE over the 100 replicates, RMSE = (Ê₁₀₀[(τ̂_W − SATT)²])^{1/2}.
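A sketch of this data-generating process (our own code, for the sinusoidal case; the noise scale 0.1 is an assumption borrowed from the IHDP setup below, as the text here only says the residual noise is small):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    """Draw one replicate of the fictitious study (sinusoidal f0)."""
    X = rng.uniform(-1, 1, size=(n, 2))
    p = 0.8 / (1 + np.sqrt(2) * np.linalg.norm(X, axis=1))  # in [0.27, 0.8]
    T = rng.random(n) < p
    f0 = np.sin(np.pi * X.sum(1)) + np.cos(np.pi * (X[:, 0] - X[:, 1]))
    Y0 = f0 + rng.normal(scale=0.1, size=n)   # small residual noise (assumed scale)
    return X, T, Y0

X, T, Y0 = simulate(200)
print(T.mean())   # fraction treated
```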
We plot the results in Figs. 1(a-d). Note the log scale. The results clearly show the power of our approach. In each case, every one of our exponential- or Gaussian-kernel-based estimators outperforms the standard causal estimators by an order of magnitude (base 10). The advantage is particularly noticeable in smaller samples (notice the initial sharp drop in most plots). Indeed, it can be difficult to find a good control pair for every treated subject in small samples, and similarly it can be difficult to have a fine enough coarsening of the data without creating a stratum that only has treated subjects. At the same time, by optimizing the mismatch as characterized by the dual norm of the error, one can achieve small mismatch even with small samples (in agreement with the observation made by [17] about multi-objective partitioning). Another observation is that matching based on parametric models can be fragile. This can be seen here for PSM, which is based on a misspecified logistic model, and also for estimators that match on X itself. We also see that mean-matched sampling does very poorly in every example, even doing worse than no matching. Indeed, matching the means only makes sense if the effect is purely linear. A linear model assumption is very fragile, and even small violations can trip up mean-matched sampling. Similarly, matching per the quadratic kernel depends on an assumption of a quadratic effect. Indeed, the estimator based on the quadratic kernel does the best of all estimators when the effect is quadratic (panel b). However, unlike a linear model, a quadratic model is generally more robust, as quadratics can better approximate a wider range of functions. Accordingly, we see that the estimator based on the quadratic kernel has reasonable performance even when the effect is not quadratic (panels a and c), while extreme violations trip it up (panel d). Overall, the universal kernels (exponential and Gaussian) seem to do the best by far. They appear to provide a good balance between generality of model and efficiency of balancing. They are general enough that we can ensure consistency even if the true effect is not in the corresponding RKHS. And fully optimizing mismatch, as measured by the dual norm of the error in their RKHS, can lead to a small objective value even for moderate n.

Infant Health and Development Program (IHDP). IHDP was a randomized experiment intended to measure the effect of a program consisting of child care and home visits from a trained provider on early child development [5], as measured through cognitive test scores. The data from this study were used by [13] to evaluate causal estimators, where each study subject is one child. We use a similar setup to evaluate the matching estimators above. There are 985 children in the dataset, of which 377 received the treatment of interest. We consider the same d = 25 covariates X_i (6 continuous and 19 binary) used by [13] and normalize these in the same manner. The covariates include physical measurements of the child at birth such as weight, mother behavior during pregnancy such as smoking, and mother characteristics at the time of birth such as marital status and education. We let Y_i(0) be generated in the same way as the nonlinear response of [13], using ε₀ᵢ ∼ N(0, 0.1). Following [13], we prune the data to simulate an observational setting, but we consider a somewhat different pruning procedure. First, we sample β_Treat uniformly at random from {−1, 1}^d and assign the score Q_i = β_Treatᵀ X_i + ‖X_i‖₂ + ν_i to each subject, where ν_i ∼ N(0, 1) is a randomly and independently drawn standard normal random variable. Then, we prune away the half of the treated sample with the largest scores Q_i and also prune away the half of the control sample with the smallest scores Q_i, leaving 492 subjects (with the same proportion of treated to control). Finally, we consider subsampling n subjects at random from the pruned pool of 492 subjects. For each n = 10, 20, . . . , 450, we produce 100 replicates of the data, compute an estimate for SATT using each of the matching estimators listed above, and measure the RMSE over the 100 replicates.
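A sketch of this pruning procedure (our own code on synthetic covariates; the exact rounding of the "halves" is our choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def prune(X, T):
    """Simulate an observational setting: drop the half of the treated
    with the largest scores Q and the half of controls with the smallest."""
    beta = rng.choice([-1.0, 1.0], size=X.shape[1])
    Q = X @ beta + np.linalg.norm(X, axis=1) + rng.normal(size=len(X))
    keep = np.ones(len(X), dtype=bool)
    t_idx, c_idx = np.where(T)[0], np.where(~T)[0]
    keep[t_idx[np.argsort(Q[t_idx])[len(t_idx) // 2:]]] = False  # largest-Q treated
    keep[c_idx[np.argsort(Q[c_idx])[:len(c_idx) // 2]]] = False  # smallest-Q controls
    return keep

X = rng.normal(size=(985, 25))
T = np.zeros(985, dtype=bool)
T[:377] = True
print(prune(X, T).sum())   # 492 subjects remain, as in the paper
```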
For CEM, the dimension of the data prohibits coarsening every covariate (coarsening each of the 25 covariates into just two levels would result in over 33 million strata, and the probability that a stratum containing a treated unit would also contain a control unit would be vanishing), and therefore we consider sampling just 3 dimensions at random and coarsening each into values above and below the mean. For PSM, we omit all replicates wherein the control and treated populations are perfectly separated by a hyperplane (necessarily, any sample with n ≤ d + 1), which means that the logistic regression fit is undefined. We let γ = √d/2 = 2.5 for all kernels. We plot the results in Fig. 1(e). Again, the results indicate a significant improvement due to kernel matching, which leads to an RMSE that is nearly an order of magnitude smaller than most other methods across the board. The various non-linear kernel matching methods are very similar in performance, with the quadratic kernel slightly edging out the rest. The difference between matching with general probability weights or with a multisubset is nearly indistinguishable. Mean-matched sampling (linear kernel matching) performs less well than non-linear kernel matching but better than other methods, indicating a strong linear component that is still not prevalent enough to ignore the non-linear remnant. Other estimators based on matching covariates, such as NNM and CEM, perform badly in this example due to the increased dimension of the covariates. As the number of covariates increases, it becomes difficult to find units that are adequately similar on all dimensions, making the resulting one-to-one matching poor. The intuition extends to CEM, where it is impossible to match exactly on even very coarsely coarsened covariates due to their dimension, necessitating that we choose only a few covariates to match on, making the resulting match poor when considering all covariates. In comparison, matching the overall samples globally, as in kernel matching, instead of locally at the unit or coarsened-stratum level, allows us to achieve much better balance while addressing estimation error directly. The failure of one-to-one matching to find good pairwise matches in the presence of moderate to high dimensions is cited by [33] as a reason to favor PSM. In this particular example, PSM (which is often not well-defined for n ≤ 100) does better than one-to-one matching and CEM but worse than kernel matching (including the linear kernel). The latter observation can be justified by noting that PSM simulates a control covariate sample drawn from the treated population, mimicking a completely randomized experiment [18], whereas matching on the covariates, and doing so in a global manner, mimics a well-balanced controlled experiment [17].

Conclusion
We presented a novel framework for matching estimators for causal inference from observational data. The framework is based on minimizing the dual norm of the error operator with respect to a space of possible conditional expectation functions. Many existing methods common in practice appear to fit this framework. We developed new, kernel-based estimators using the framework and showed they satisfy consistency. Our new estimators prove exceedingly successful in comparative empirical studies of matching estimators.

Supplementary Material: Proofs
Proof of Theorem 1. Let us write SATT = (1/n₁) Σ_{i∈T₁} Y_i − (1/n₁) Σ_{i∈T₁} (f₀(X_i) + ε_i).
It is then clear that SATT differs from τ̂_W only in the second term, whose conditional expectation vanishes: the first equality is by the definition of ε_i and the fact that W_i = W_i(X, T), and the second is by Assumption 1.

Proof of Theorem 2. Let D be the distance matrix D_{ii′} = δ(X_i, X_{i′}). For this choice of (F, ‖·‖), by linear optimization duality we obtain a problem that describes a min-cost network flow with sources T₁ with inputs 1, sinks T₀ with outputs W_i, edges between every two nodes with costs D_{ii′}, and without capacities. Consider any source i ∈ T₁, any sink i′ ∈ T₀, and any path i, i₁, . . . , i_m, i′. By the triangle inequality, D_{ii′} ≤ D_{ii₁} + D_{i₁i₂} + · · · + D_{i_m i′}. Therefore, as there are no capacities, it is always preferable to send the flow from the sources to the sinks along the direct edges from T₁ to T₀; that is, we can eliminate all other edges. In the case of matching with replacement and W ∈ W₀^{probability}, using the transformation W′_i = n₁W_i, we obtain a min-cost network flow problem with sources T₁ with inputs 1; nodes T₀ with no exogenous flow; one sink with output n₁; edges from each i ∈ T₁ to each i′ ∈ T₀ with flow variable S_{ii′}, cost D_{ii′}, and without capacity; and edges from each i ∈ T₀ to the sink with flow variable W′_i and without cost or capacity. Because all data are integer, the optimal solution W′ = n₁W is integer [1]. Hence, since W₀^{n₁-multisubset} ⊆ ℤ/n₁, the solution is the same when we restrict to W ∈ W₀^{n₁-multisubset}. This solution (in terms of W′) is equal to sending the whole input 1 from each source in T₁ to the node in T₀ with the smallest distance and from there routing this flow to the sink, which corresponds exactly to one-to-one matching with replacement. In the case of no replacement and W ∈ W₀^{n₁⁻¹-bounded}, using the transformation W′_i = n₁W_i, we obtain the same min-cost network flow problem except that the edges from each i ∈ T₀ to the sink have a capacity of 1. Because all data are integer, the optimal solution in S and W′ = n₁W is integer [1]. Hence, since W₀^{n₁-subset} ⊆ ℤ/n₁, the solution is the same when we restrict to W ∈ W₀^{n₁-subset}. The optimal S_{ii′} is integer and so, by Σ_{i′∈T₀} S_{ii′} = 1, for each i ∈ T₁ there is exactly one i′ ∈ T₀ with S_{ii′} = 1 and all others are zero. S_{ii′} = 1 denotes matching i with i′. The optimal W′_i is integral and so, by W′_i ≤ 1, W′_i ∈ {0, 1}. Hence, for each i ∈ T₀, Σ_{i′∈T₁} S_{i′i} ∈ {0, 1}, so we use node i at most once. The cost of S is exactly the sum of pairwise distances in the match. Hence, the optimal solution corresponds exactly to one-to-one matching without replacement.

That is, the worst-case f assigns ±1 to each partition in order to make the difference of values in that partition nonnegative. Then clearly the optimal choice of W ∈ ℝ^{T₀} is to make each of these absolute values equal zero. This happens exactly when, for each i ∈ T₀, W_i is proportional to (the number of treated subjects in the same partition as i)/(the number of control subjects in the same partition as i), where 0/0 = 0, and we never encounter dividing a positive integer by 0 due to the no-extrapolation assumption. Because the weight is nonnegative, the solution is unchanged when restricting to nonnegative weights.

Proof of Theorem 5. By duality of norms, E(W; F) equals the stated sample discrepancy; the optimal W minimizes this discrepancy over subsamples from control of the allowable size.

Proof of Theorem 6. We have E²(W; F) = Σ_{i,j} (−1)^{T_i+T_j} W̃_i W̃_j K_{ij}, with W̃_i = 1/n₁ for treated i and W̃_i = W_i otherwise, which when written in block form gives rise to the result.
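The block-form identity in the proof of Theorem 6 is easy to check numerically; a small self-contained verification (our own, with a Gaussian kernel on toy data):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 2))
T = np.array([True] * 4 + [False] * 8)
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq)                              # Gram matrix, gamma = 1
W = rng.dirichlet(np.ones(8))                # control probability weights

# Full signed quadratic form: sum_ij (-1)^(Ti+Tj) Wi Wj Kij,
# with treated "weights" fixed at 1/n1.
w_full = np.where(T, 1 / T.sum(), 0.0)
w_full[~T] = W
sgn = np.where(T, 1.0, -1.0)
E2_full = (sgn * w_full) @ K @ (sgn * w_full)

# Block form: W^T K00 W - 2 k0^T W + e^T K11 e / n1^2
K00, K11, K01 = K[~T][:, ~T], K[T][:, T], K[~T][:, T]
k0 = K01.mean(axis=1)
E2_block = W @ K00 @ W - 2 * k0 @ W + K11.mean()
print(np.isclose(E2_full, E2_block))         # True
```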
2017-02-28T14:59:57.000Z
2016-06-16T00:00:00.000
{ "year": 2017, "sha1": "2da489ddf9f7ad96cab4e67f9b2800f1cb30d3cc", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2da489ddf9f7ad96cab4e67f9b2800f1cb30d3cc", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
261426658
pes2o/s2orc
v3-fos-license
Lever Arm Compensation of Autonomous Underwater Vehicle for Fast Transfer Alignment

Transfer alignment is used to initialize a SINS (Strapdown Inertial Navigation System) in motion. Lever-arm effect compensation is studied for an AUV (Autonomous Underwater Vehicle) before it is launched from the mother ship. The AUV is equipped with a SINS, a Doppler Velocity Log, a depth sensor and other navigation sensors. The lever arm causes a large error in the transfer alignment between the master inertial navigation system and the slave inertial navigation system, especially on big ships. This paper presents a novel method that can effectively estimate and compensate for the flexural lever arm between the master inertial navigation system mounted on the mother ship and the slave inertial navigation system equipped on the AUV. The nonlinear measurement equation of the angular rate is derived based on three successive rotations of the body frame of the master inertial navigation system. A nonlinear filter is utilized as the estimator for its capability of non-linear approximation. Observability analysis was conducted on the SINS state vector based on the singular value decomposition (SVD) method. The state equation of the SINS was adopted as the system state equation. Simulation experiments were conducted, and the results showed that the proposed method can estimate the flexural lever arm more accurately; the precision of transfer alignment was improved and the alignment time was shortened accordingly.

Introduction
Transfer alignment is the process of initializing the position, velocity and attitude of a slave INS using the data supplied by another INS, known as the master inertial navigation system [John and Leondes (1972)]. As the initial attitude errors cause the navigation errors to increase much more rapidly than the initial velocity and position errors [Cheng, Wang and Liu (2014)], the relative attitude error of the slave INS with respect to the master INS is a major error source for position error growth after launching inertially guided weapons [Zhu and Cheng (2013)]. A digital filter method was applied to compensating the lever arm effect [Xu and Wan (1994)].

O_b XYZ stands for the body frame, where O_b is the swaying center of the ship, also known as the gravitational center of the body. The position of the center of gravity is usually calculated according to the general design of the load distribution; it is assumed fixed and coinciding with the installation position of the master inertial navigation system. The accelerometers of the slave inertial navigation system are mounted at the fixed point p in the body frame.
The position vector of the origin of the carrier coordinate system is R₀, the position vector of p with respect to the inertial frame origin is R_p, and the position vector of p with respect to the carrier frame origin is r_p. They obey

R_p = R₀ + r_p.

Differentiating with respect to time and applying the Coriolis rule for vector differentiation in a rotating frame gives

Ṙ_p = Ṙ₀ + ṙ_p|_b + ω × r_p,

where ω is the angular rate of the carrier frame and ṙ_p|_b is the linear velocity of p relative to the carrier frame. Differentiating once more, the linear acceleration of p with respect to the inertial frame can be expressed as

R̈_p = R̈₀ + r̈_p|_b + 2ω × ṙ_p|_b + ω̇ × r_p + ω × (ω × r_p).

The carrier structure is treated as rigid in the study of the lever arm effect; other methods are needed to compensate the error caused by flexible deformation. Since p is fixed in the carrier frame, ṙ_p|_b = r̈_p|_b = 0. Ideally the installation point would be at the swing center of the carrier, so that r_p = 0 and no lever arm effect would exist; the master inertial navigation system is installed at the carrier swing center, but the slave inertial navigation system cannot satisfy this requirement in transfer alignment, so the lever arm effect cannot be neglected in real applications. The last two components above, ω̇ × r_p and ω × (ω × r_p), are caused by the lever arm effect: they are sensed by the slave inertial navigation system but not by the master inertial navigation system. Denoting the lever arm acceleration by δf, and supposing the vehicle does not move linearly (R̈₀ = 0), the standard equation of the lever arm effect error simplifies to

δf = ω̇ × r_p + ω × (ω × r_p).

The lever arm effect acceleration can be expressed in the navigation frame as δf^n = C_b^n δf^b, where C_b^n is the attitude (strapdown) matrix.
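A minimal numerical sketch of the lever-arm acceleration formula above (our own code; the 12° amplitude and 10 s period are the rolling-motion values used later in the simulations, while the 5 m lever arm along the y-axis is an assumed value, since the paper elides the actual lever arm length):

```python
import numpy as np

def lever_arm_accel(omega, omega_dot, r):
    """Lever-arm acceleration sensed at point p but not at the swing
    center: delta_f = omega_dot x r + omega x (omega x r)."""
    return np.cross(omega_dot, r) + np.cross(omega, np.cross(omega, r))

# Rolling motion theta = A*sin(2*pi*t/T) about the x-axis.
A, T, t = np.deg2rad(12.0), 10.0, 2.5
w = 2 * np.pi / T
omega = np.array([A * w * np.cos(w * t), 0.0, 0.0])        # angular rate
omega_dot = np.array([-A * w**2 * np.sin(w * t), 0.0, 0.0])  # angular acceleration
r = np.array([0.0, 5.0, 0.0])                              # assumed 5 m lever arm

print(lever_arm_accel(omega, omega_dot, r))   # disturbance acceleration, m/s^2
```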
Observability analysis
Before the AUV enters the water, transfer alignment is conducted prior to the integrated navigation period. Transfer alignment uses the position, velocity and attitude information from the master inertial navigation system installed on the big ship. Observability analysis was conducted on the SINS state vector based on the SVD method. The measurement information is the vehicle velocity (north velocity and east velocity) and the velocity from the master inertial navigation system. The motion process of initial alignment on a moving base can be designed as follows: the gyro drift is estimated under three-axis swaying and linear acceleration; the accelerometer bias is estimated under linear acceleration and turning; the angle error can be estimated under constant-velocity navigation. The observability degrees of the state variables under three-axis swaying with velocity, heading angle error and position matching are presented in Tab. 1. From Tab. 1, the eastern velocity error, northern velocity error, up heading angle error, longitude error, latitude error, eastern gyro drift, northern gyro drift and up gyro drift are high in observability degree, and the filter effects are good. The eastern accelerometer bias and northern accelerometer bias are low in observability degree, and the filter effects are not good. From the analysis above, the velocity error can be well estimated with velocity matching, while the heading angle and position errors cannot. The heading angle error can be better estimated with velocity and heading matching, while the position error still cannot. The heading angle error, velocity error and position error can all be well estimated with velocity, heading and position matching.

Revised lever-arm effect compensation method
When the ship is under mooring conditions, the linear acceleration and the linear velocity are zero, and the error equations of the inertial system can be reduced accordingly. Here n′ is the navigation frame, b is the body frame, δV is the body velocity error, φ is the vehicle attitude error, ω_ie^{n′} is the earth rotation rate in the navigation frame, and C is the strapdown matrix. In the strapdown inertial navigation system, a Kalman filter is applied to the estimation of the lever arm length. The accelerometer bias and the gyro drift are augmented as states in the Kalman filter, and the system state equation is constructed from the reduced error model.

Simulation and results
In the simulation experiments, the performance of each transfer alignment algorithm is evaluated. The ship speed is about 30 knots, and the angular motions of the ship are generated by the sinusoidal swing model given below. Suppose the initial longitude is 118°, the initial latitude is 32°, and the initial height is 0. The initial longitude error is 5″, the initial latitude error is 5″, and the initial height error is 5″. The initial eastern velocity is 10 m/s, the initial northern velocity is 10 m/s, the initial eastern velocity error is 0.1 m/s, and the initial northern velocity error is 0.1 m/s. The initial heading H₀ is 45°, the initial pitching angle is 0°, and the initial rolling angle is 0°. The initial heading, pitching and rolling angle errors are 10″, 10″ and 30″ respectively. The accelerometer constant drift and random drift are 100 μg. The gyro constant drift and random drift are 0.1°/h. The swing amplitudes of heading, pitching and rolling are 14°, 9° and 12° respectively. The swing periods of heading, pitching and rolling are 6 s, 8 s and 10 s respectively. The ship swing model under wind and tide in the mooring condition is, for each axis, θ(t) = θ_m sin(2πt/T + φ₀), where θ_m is the swing amplitude, T the swing period and φ₀ the initial phase. The lever arm length is [ ].

Conclusions
This paper presents an efficient transfer alignment approach for the navigation system of a big ship on a swaying base. A novel algorithm associated with the transfer alignment is employed to obtain the mathematical platform for the navigation system. Considering the environmental disturbances and the sensor drift as the main error sources, a nonlinear filter approach is applied to the system to reduce the lever arm effect on the acceleration measurement. Observability analysis is conducted for different vehicle motions. Simulation experiments were conducted, and the results showed that the novel method is able to improve the rapidity and precision of transfer alignment, overcoming the lever arm effect and the disturbances existing between the master inertial navigation system and the slave inertial navigation system in the initial alignment of big ship navigation.
2021-10-19T17:39:33.913Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "d30981b665177484453956b9b8cf24a7a171a808", "oa_license": "CCBY", "oa_url": "https://doi.org/10.32604/cmc.2019.03739", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f4e082d350e1c2681688d06e4c3f04dc3f279167", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
119426352
pes2o/s2orc
v3-fos-license
Four variations on Theoretical Physics by Ettore Majorana

An account is given of some topical unpublished work by Ettore Majorana, revealing his very deep intuition and skilfulness in Theoretical Physics. The relevance of the largely unknown results obtained by him is pointed out as well.

Introduction
Probably the highest appraisal of the work of Ettore Majorana was expressed by the Nobel laureate Enrico Fermi on several occasions [1], but such opinions could appear as overstatements or unjustified (especially coming from a great physicist such as Fermi) when compared with the sparse (known) Majorana scientific production: just 9 published papers. Today, however, the name of Majorana is widely known to the nuclear and subnuclear physics community: the Majorana neutrino, Majorana-Heisenberg exchange forces, and so on are, in fact, widely used concepts. In this paper we focus on the less-known (or completely unknown) work of this scientist, aiming to shed some light on the peculiar abilities of Majorana that were well recognized by Fermi and his coworkers. The wide unpublished scientific production by Majorana is testified by a large number of papers [2], almost all deposited at the Domus Galilaeana in Pisa; those known, in Italian, as "Volumetti" have recently been collected and translated in a book [3], and we refer the interested reader to this book for further study. Here we have chosen to discuss only four topics dealt with by Majorana in different areas of Physics, just to give a sample of his very deep intuitions and skilfulness, together with the relevance of the results obtained.

We start with a discussion of a peculiar approach to Quantum Mechanics, as deduced from a manuscript [4] which probably corresponds to the text for a seminar delivered at the University of Naples in 1938, where Majorana lectured on Theoretical Physics [5]. Some passages of that manuscript reveal a physical interpretation of Quantum Mechanics which anticipates by several years the Feynman approach in terms of path integrals, independently of the underlying mathematical formulation. The main topic of that dissertation was the application of Quantum Mechanics to the theory of molecular bonding, but the present scientific interest in it centers more on the interpretation given by Majorana of some topics of the then-novel Quantum Theory (namely, the concept of quantum state) and on the direct application of this theory to a particular case (namely, molecular bonding). It not only discloses a peculiar cleverness of the author in treating a pivotal topic of the new Mechanics but, keeping in mind that it was written in 1938, also reveals an advance of at least ten years in the use made of that topic.

In the second topic, we report on a more applied subject, discussing an original method that leads to a semi-analytical series solution of the Thomas-Fermi equation, with appropriate boundary conditions, in terms of only one quadrature [6]. This was developed by Majorana in 1928, just when he was starting to collaborate (still as a University student) with the Fermi group in Rome, and it reveals an outstanding ability to solve very involved mathematical problems in an interesting and clear way. The whole work performed on the Thomas-Fermi model is contained in some spare sheets, and was diligently reported by the author himself in his notebooks [3].
From these, the considerable contribution made by Majorana to the development of the statistical model is evident [7]; he anticipated, in many respects, important results reached later by leading specialists. But the major finding by Majorana was his solution (or, rather, methods of solution) of the Thomas-Fermi equation, which remained completely unknown, until recent times, to the Physics community, which was unaware that the non-linear differential equation relevant for atoms and other systems could even be solved semi-analytically. The method proposed by Majorana can also be extended to an entire class of particular differential equations [8].

Afterwards we discuss a subject that was repeatedly studied by Majorana in his research notebooks: namely, a formulation of Electrodynamics in terms of the electric and magnetic fields, rather than the potentials, which is suitable for a quantum generalization in complete analogy with the Dirac theory [9] [10]. This problem was already addressed in 1931 by Oppenheimer [11], who only supposed the analogy of the photon case with that described by Dirac, whereas Majorana explicitly deduced a Dirac-like equation for the photon, thus building up the presumed analogy.

Finally, we report on another topic particularly loved by Majorana after the appearance (at the end of 1928) of the seminal book by Hermann Weyl [12], namely Group Theory and its application to physical problems. As testified by the large number of unpublished manuscript pages by the Italian physicist, the Weyl approach greatly influenced the scientific thought and work of Majorana [13]. In fact, when Majorana became aware of the great relevance of Weyl's application of Group Theory to Quantum Mechanics, he immediately seized on the Weyl method and developed it in many applications. In one of his notebooks [3] we find, for example, a preliminary study of what would become one of the most important (published) papers by Majorana, on a generalization of the Dirac equation to particles with arbitrary spin [14]. In particular, in 1932 Majorana obtained the infinite-dimensional unitary representations of the Lorentz group that would be re-discovered by Wigner in his 1939 and 1948 works [15]; the entire theory was re-invented by Soviet mathematicians (in particular Gel'fand and collaborators) in a series of articles from 1948 to 1958 [16] and finally applied by physicists years later. What is presented here is, necessarily, a very short account of what Majorana really did in his few years of work (about ten years) but, we hope, it serves in the centennial year at least to convey the very relevant role played by him in the advancement of Physics.

Path-Integral approach to Quantum Mechanics
The usual quantum-mechanical description of a given system is strongly centered on the role played by the hamiltonian H of the system and, as a consequence, the time variable itself plays a key role in this description. Such a dissymmetry between space and time variables is obviously not satisfactory in the light of the postulates of the Theory of Relativity. This was first realized in 1932 by Dirac [17], who put forward the idea of reformulating the whole of Quantum Mechanics in terms of lagrangians rather than hamiltonians.
The starting point in Dirac's thought is that of exploiting an analogy, holding at the quantum level, with the Hamilton principal function S of Classical Mechanics, thus writing the transition amplitude from one space-time point to another as an (imaginary) exponential of S. However, the original Dirac formulation was not free from some unjustified assumptions, leading also to wrong results, and the correct mathematical formulation and physical interpretation came only in the forties with the work of Feynman [18]. In practice, in the Feynman approach to Quantum Mechanics, the transition amplitude between an initial and a final state can be expressed as a sum of the factor e^{iS[q]/ℏ} over all the paths q with fixed end-points, not just those corresponding to classical dynamical trajectories, for which the action is stationary.

In 1938 Majorana was appointed full professor of Theoretical Physics at the University of Naples, where he probably delivered a general lecture mentioning his particular viewpoint on some basic concepts of Quantum Mechanics (see Ref. [4]). Fortunately enough, we have some papers written by him on this subject, and a few crucial points anticipating the Feynman approach to Quantum Mechanics will be discussed in the following. We first note, however, that such papers contain nothing of the mathematical apparatus of that peculiar approach to Quantum Mechanics, although the presence of its physical foundations is quite evident. This is particularly impressive if we take into account that, in the known historical path, the interpretation of the formalism only followed the mathematical development of the formalism itself.

The starting point for Majorana is the search for a meaningful and clear formulation of the concept of quantum state; and, obviously, in 1938 the dispute with the conceptions of the Old Quantum Theory was still open.

According to the Heisenberg theory, a quantum state corresponds not to a strangely privileged solution of the classical equations but rather to a set of solutions which differ for the initial conditions and even for the energy, i.e. what is meant as precisely defined energy for the quantum state corresponds to a sort of average over the infinite classical orbits belonging to that state. Thus the quantum states come to be the minimal statistical sets of classical motions, slightly different from each other, accessible to the observations. These minimal statistical sets cannot be further partitioned due to the uncertainty principle, introduced by Heisenberg himself, which forbids the precise simultaneous measurement of the position and the velocity of a particle, that is the determination of its orbit.

Let us note that the "solutions which differ for the initial conditions" correspond, in the Feynman language of 1948, precisely to the different integration paths. In fact, the different initial conditions are, in any case, always referred to the same initial time (t_a), while the determined quantum state corresponds to a fixed end time (t_b). The issue of "slightly different classical motions" (the emphasis is given by Majorana himself), as specified by Heisenberg's uncertainty principle and mentioned just afterwards, is thus evidently related to that of the sufficiently wide integration region required in the Feynman path-integral formula for quantum (rather than classical) systems. In this respect, such a mathematical point is intimately related to a fundamental physical principle.
The crucial point in the Feynman formulation of Quantum Mechanics is, as is well known, to consider not only the paths corresponding to classical trajectories, but all the possible paths joining the initial point with the final one. In the Majorana manuscript, after a discussion of an interesting example concerning the harmonic oscillator, the author points out:

Obviously the correspondence between quantum states and sets of classical solutions is only approximate, since the equations describing the quantum dynamics are in general independent of the corresponding classical equations, but denote a real modification of the mechanical laws, as well as a constraint on the feasibility of a given observation; however it is better founded than the representation of the quantum states in terms of quantized orbits, and can be usefully employed in qualitative studies.

And, in a later passage, it is more explicitly stated that the wave function "corresponds in Quantum Mechanics to any possible state of the electron". Such a statement, which only superficially could be read in the common acceptation that all the information on the physical system is contained in the wave function, should instead be considered in the meaning given by Feynman, in line with the comprehensive discussion made by Majorana of the concept of state.

Finally we point out that, in the Majorana analysis, a key role is played by the symmetry properties of the physical system:

Under given assumptions, that are verified in the very simple problems which we will consider, we can say that every quantum state possesses all the symmetry properties of the constraints of the system.

The relationship with the path-integral formulation is made as follows. In discussing a given atomic system, Majorana points out how from one quantum state S of the system we can obtain another one S′ by means of a symmetry operation. However, differently from what happens in Classical Mechanics for the single solutions of the dynamical equations, in general it is no longer true that S′ will be distinct from S:

We can realize this easily by representing S′ with a set of classical solutions, as seen above; it then suffices that S includes, for any given solution, even the other one obtained from that solution by applying a symmetry property of the motions of the systems, in order that S′ results to be identical to S.

This passage is particularly intriguing if we observe that the issue of redundant counting in the integration measure of gauge theories, leading to infinite expressions for the transition amplitudes, was raised (and solved) only long after the Feynman paper.

Solution of the Thomas-Fermi equation
The main idea of the Thomas-Fermi atomic model is that of considering the electrons around the nucleus as a gas of particles, obeying the Pauli exclusion principle, at the absolute zero of temperature. The limiting case of the Fermi statistics for strong degeneracy applies to such a gas. In this approximation, the potential V inside a given atom of charge number Z at a distance r from the nucleus may be written as

V = (Ze/r) φ(r/µ),     (1)

where φ is the Thomas-Fermi screening function and µ a suitable length scale. With the change of variable r = µx, the Thomas-Fermi function φ satisfies the following non-linear differential equation (for φ > 0):

φ″ = φ^{3/2} / √x     (3)

(a prime denotes differentiation with respect to x), with the boundary conditions

φ(0) = 1,   φ(x → ∞) → 0.     (4)

The Fermi equation (3) is a universal equation which depends neither on Z nor on physical constants (h, m, e). Its solution gives, from Eq.
(1), as noted by Fermi himself, a screened Coulomb potential which at any point is equal to that produced by an effective charge Ze φ(r/µ). As was immediately realized, by virtue of the independence of Eq. (3) of Z, the method gives an effective potential which can be easily adapted to describe any atom with a suitable scaling factor, according to Eq. (5). The problem of the theoretical calculation of observable atomic properties is thus solved, in the Thomas-Fermi approximation, in terms of the function φ(x) introduced in Eq. (1) and satisfying the Fermi differential equation (3).

By using standard but involved mathematical tools, in his paper [19] Thomas obtained an exact, "singular" solution of his differential equation, φ(x) = 144/x³ (Eq. (6)), satisfying only the second condition in (4). This was later (in 1930) considered by Sommerfeld [25] as an approximation of the function φ(x) for large x (and is indeed known as the "Sommerfeld solution" of the Fermi equation), and Sommerfeld himself obtained corrections to the above quantity in order to better approximate the function φ(x) for not extremely large values of x. Until recent times it had been believed that the solution of this equation satisfying both the appropriate boundary conditions in (4) cannot be expressed in closed form, and some effort was made, starting from Thomas [19], Fermi [20], [21] and others, to achieve the numerical integration of the differential equation. However, we now know [6], [7] that Majorana in 1927-8 found a semi-analytical solution of the Thomas-Fermi equation by applying a novel exact method [8].

Before proceeding, we will indulge here in an anecdote reported by Rasetti [22], Segrè [23] and Amaldi [24]. According to the last author, "Fermi gave a broad outline of the model and showed some reprints of his recent works on the subject to Majorana, in particular the table showing the numerical values of the so-called Fermi universal potential. Majorana listened with interest and, after having asked for some explanations, left without giving any indication of his thoughts or intentions. The next day, towards the end of the morning, he again came into Fermi's office and asked him without more ado to draw him the table which he had seen for few moments the day before. Holding this table in his hand, he took from his pocket a piece of paper on which he had worked out a similar table at home in the last twenty-four hours, transforming, as far as Segrè remembers, the second-order Thomas-Fermi non-linear differential equation into a Riccati equation, which he had then integrated numerically."

The whole work performed by Majorana on the solution of the Fermi equation is contained in some spare sheets conserved at the Domus Galilaeana in Pisa, and was diligently reported by the author himself in his notebooks [3]. The reduction of the Fermi equation to an Abel equation (rather than a Riccati one, as misremembered by Segrè) proceeds as follows. Let us adopt a change of variables, from (x, φ) to (t, u), where the formulas relating the two sets of variables have to be determined in order to satisfy, if possible, both the boundary conditions (4). The function φ in Eq. (6) has the correct behavior for large x, but the wrong one near x = 0, so we may modify the functional form of φ to take into account the first condition in (4). An obvious modification is φ = (144/x³) f(x), with f(x) a suitable function which vanishes for x → 0 in order to account for φ(x = 0) = 1.
The simplest choice for f(x) is a polynomial in the novel variable t, as was also considered later, in a similar way, by Sommerfeld [25]. The Majorana choice (Eq. (7)) has t → 1 as x → 0. From Eq. (7) we can then obtain the first relation linking t to x and φ. The second one, involving the dependent variable u, is that typical of homogeneous differential equations (like the Fermi equation) for reducing the order of the equation, i.e. exponentiation with an integral of u(t); the transformation relations are collected in Eqs. (8). Substitution into Eq. (3) leads to an Abel equation for u(t). Note that both the boundary conditions in (4) are automatically satisfied by the relations (8). We have reported the derivation of this Abel equation for completeness; for the actual solution, Majorana employed a second change of variables, in which the point x = 0 corresponds to t = 0. In order to obtain again a first-order differential equation for u(t), the transformation equation for the variable u involves φ and its first derivative; Majorana then introduced the formulas collected in Eqs. (12). By taking the t-derivative of the last equation in (12) and inserting Eq. (3) into it, one obtains an intermediate equation; using Eqs. (12) to eliminate x^{1/2} and φ′², and expressing the quantity ẋφ^{1/3} in terms of t and u by means of the first equation in (12) (and its t-derivative), after some algebra the final result is a first-order differential equation for u(t), Eq. (15). The obtained equation is again non-linear but, differently from the original Fermi equation (3), it is first-order in the novel variable t, and its degree of non-linearity is lower than that of Eq. (3). The boundary conditions for u(t), Eqs. (16), follow easily from the second equation in (12) and from requiring that for x → ∞ the Sommerfeld solution (Eq. (11) with t = 1) be recovered. Here we have denoted by φ′₀ = φ′(x = 0) the initial slope of the Thomas-Fermi function φ(x), which for a neutral atom is approximately equal to −1.588. The solution of Eq. (15) was achieved by Majorana in terms of a series expansion in powers of the variable τ = 1 − t,

u(t) = Σ_{n=0}^∞ a_n τⁿ.     (17)

Substitution of Eq. (17) (with the conditions in Eq. (16)) into Eq. (15) results in an iterative formula for the coefficients a_n (for details see Ref. [6]). It is remarkable that the series expansion in Eq. (17) is uniformly convergent in the interval [0, 1] for τ, since the series Σ_{n=0}^∞ a_n of the coefficients converges. Majorana was aware [3] of the fact that the series in Eq. (17) exhibits geometric convergence, with a_n/a_{n−1} ∼ 4/5 for n → ∞. Given the function u(t), we now have to look for the Thomas-Fermi function φ(x). This was obtained in a parametric form, x = x(t), φ = φ(t), in terms of the parameter t already introduced in Eq. (12), by writing φ(t) through an auxiliary function w(t) as in Eq. (19); with this choice, φ(t = 0) = 1 and the first condition in (4) is automatically satisfied. The auxiliary function w(t) is determined in terms of u(t) by substituting Eq. (19) into Eq. (12). As a result, the parametric solution of Eq. (3) with boundary conditions (4) takes the form of Eqs. (20)-(21). Remarkably, the Majorana solution of the Thomas-Fermi equation is obtained with only one quadrature and gives easily computable numerical values for the electrostatic potential inside atoms. By taking into account only 10 terms in the series expansion for u(t), such numerical values approximate the exact solution of the Thomas-Fermi equation with a relative error of the order of 0.1%.
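Majorana's series formulas are not reproduced in full here, but the initial slope quoted above is easy to verify with a standard shooting method; a small sketch (our own code, not Majorana's method):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def tf_rhs(x, y):
    """Thomas-Fermi equation phi'' = phi^(3/2)/sqrt(x) as a first-order
    system; phi is clipped at 0 so the RHS stays real past a zero crossing."""
    phi, dphi = y
    return [dphi, max(phi, 0.0) ** 1.5 / np.sqrt(x)]

def phi_at_end(slope, x_end=30.0):
    sol = solve_ivp(tf_rhs, (1e-9, x_end), [1.0, slope], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

# Shoot on the initial slope phi'(0) so that phi -> 0 at large x:
# too steep and phi crosses zero; too shallow and phi grows.
slope0 = brentq(phi_at_end, -1.6, -1.55, xtol=1e-8)
print(slope0)   # close to -1.588, the value quoted in the text
```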
The intriguing property of the Majorana derivation of the solution of the Thomas-Fermi equation is that his method can be easily generalized and applied to a large class of particular differential equations, as discussed in [8]. Several generalizations of the Thomas-Fermi method for atoms were proposed as early as 1928 by Majorana but, the physics community being unaware of his unpublished works, they were taken up only many years later. Indeed, in Sect. 16 of Volumetto II [3], Majorana studied the problem of an atom in a weak external electric field E, i.e. atomic polarizability, and obtained an expression for the electric dipole moment of a (neutral or arbitrarily ionized) atom. Furthermore, he also started to consider the application of the statistical method to molecules, rather than single atoms, studying the case of a diatomic molecule with identical nuclei (see Sect. 12 of Volumetto II [3]). The effective potential in the molecule was cast in a form involving a function α, with V₁ and V₂ being the potentials generated by each of the two atoms; the function α must obey a differential equation following from that for V (k being a suitable constant), with appropriate boundary conditions, discussed in [3]. Majorana also gave a general method to determine V when the equipotential surfaces are approximately known (see Sect. 12 of Volumetto III [3]). In fact, writing the approximate expression for the equipotential surfaces as functions of a parameter p, he deduced an equation from which it is possible to determine V(p) once the boundary conditions are assigned. The particular case of a diatomic molecule with identical nuclei was, again, considered by Majorana, using elliptic coordinates, in order to illustrate his original method [3]. Finally, our author also considered the second approximation for the potential inside the atom, beyond the Thomas-Fermi one, with a generalization of the statistical model of neutral atoms to those ionized n times, including the case n = 0 (see Sect. 15 of Volumetto II [3]). As recently pointed out, the approach used by Majorana to this end is rather similar to that now adopted in the renormalization of physical quantities in modern gauge theories [26].

Majorana formulation of Electrodynamics
In 1931, in his "note on light quanta and the electromagnetic field" [11], Oppenheimer developed an alternative model of the theory of Quantum Electrodynamics, starting from an analogy with the Dirac theory of the electron. Such a formulation was particularly dear to Majorana, who studied it in some of his unpublished notebooks [9]. Majorana's original idea was that if the Maxwell theory of electromagnetism has to be viewed as the wave mechanics of the photon, then it must be possible to write the Maxwell equations as a Dirac-like equation for a probability quantum wave ψ, this wave function being expressible by means of the physical E, B fields. This can indeed be realized by introducing the quantity

ψ = E − iB,

since ψ*·ψ = E² + B² is directly proportional to the probability density function for a photon.¹ In terms of ψ, the Maxwell equations in vacuum then read (in units with c = 1)

∇·ψ = 0,   ∂ψ/∂t = i ∇×ψ.

¹ If we have a beam of n equal photons, each of them with energy ε (given by the Planck relation), since ½(E² + B²) is the energy density of the electromagnetic field, then ψ*·ψ/(2ε) gives the number density of photons and hence, upon normalization, the probability density.
The probabilistic interpretation is indeed possible given the "continuity equation" (Poynting theorem) are respectively the energy and momentum density of the electromagnetic field. It is interesting to observe that, differently from Oppenheimer, who started from a mere, presumed analogy with the electron case, Majorana built on analytically the analogy with the Dirac theory, at a dynamical level, by deducing the Dirac-like equation for the photon from the Maxwell equations with the introduction of a complex wave field. As noted by Giannetto in Ref. [10], the Majorana formulation is algebraically equivalent to the standard one of Quantum Electrodynamics and, in addition, also some relevant problems concerning the negative energy states, that induced Oppenheimer to abandon his model, may be elegantly solved by using the method envisaged in a later work [27], thus giving further physical insight into Majorana theory. Lorentz group and its applications The important role of symmetries in Quantum Mechanics was established in the third decade of the XX century, when it was discovered the special relationships concerning systems of identical particles, reflection and rotational symmetry or translation invariance. Very soon it was discovered that the systematic theory of symmetry resulted to be just a part of the mathematical theory of groups, as pointed out, for example, in the reference book by H. Weil [12]). A particularly intriguing example is that of the Lorentz group which, as well known, underlies the Theory of Relativity, and its representations are especially relevant for the Dirac equation in Relativistic Quantum Mechanics. In the mentioned book, however, although the correspondence between the Dirac equation and the Lorentz transformations is pointed out, the group properties of this connection are not highlighted. Moreover, only a particular kind of such representations are considered (those related to the two-dimensional representations of the group of rotations, according to Pauli), but an exhaustive study of this subject was still lacking at that time. The situation changes [13] quite sensibly with several (unpublished) papers by Majorana [3], where he gives a detailed deduction of the relationship between the representations of the Lorentz group and the matrices of the (special) unitary group in two dimensions, and a strict connection with the Dirac equation is always taken into account. Moreover the explicit form of the transformations of every bilinear in the spinor field Ψ is reported. For example, Majorana obtains that some of such bilinears behave as the 4-position vector (ct, x, y, z) or as the components of the rank-2 electromagnetic tensor (E, H) under Lorentz transformations, according to the following rules: where α x , α y , α z , β are Dirac matrices. But, probably, the most important result achieved by Majorana on this subject is his discussion of infinite-dimensional unitary representations of the Lorentz group, giving also an explicit form for them. Note that such representations were independently discovered by Wigner in 1939 and1948 [15] and were thoroughly studied only in the years 1948-1958 [16]. Lucky enough, we are able to reconstruct the reasoning which led Majorana to discuss the infinite-dimensional representations. In Sec. 8 of Volumetto V we read [3]: from which he deduces the general commutation relations satisfied by the S and T operators acting on generic (even infinite) tensors or spinors: etc. 
Lorentz group and its applications
The important role of symmetries in Quantum Mechanics was established in the third decade of the XX century, when the special relationships concerning systems of identical particles, reflection and rotational symmetry, and translation invariance were discovered. Very soon it was realized that the systematic theory of symmetry is just a part of the mathematical theory of groups, as pointed out, for example, in the reference book by H. Weyl [12]. A particularly intriguing example is that of the Lorentz group which, as is well known, underlies the Theory of Relativity, and whose representations are especially relevant for the Dirac equation of Relativistic Quantum Mechanics. In the mentioned book, however, although the correspondence between the Dirac equation and the Lorentz transformations is pointed out, the group properties of this connection are not highlighted. Moreover, only a particular kind of such representations is considered (those related to the two-dimensional representations of the group of rotations, according to Pauli), and an exhaustive study of this subject was still lacking at that time. The situation changed [13] quite significantly with several (unpublished) papers by Majorana [3], where he gives a detailed deduction of the relationship between the representations of the Lorentz group and the matrices of the (special) unitary group in two dimensions, with a strict connection to the Dirac equation always kept in view. Moreover, the explicit form of the transformations of every bilinear in the spinor field Ψ is reported. For example, Majorana shows that some such bilinears behave as the 4-position vector (ct, x, y, z) or as the components of the rank-2 electromagnetic tensor (E, H) under Lorentz transformations, according to explicit rules involving the Dirac matrices α_x, α_y, α_z, β.

But probably the most important result achieved by Majorana on this subject is his discussion of infinite-dimensional unitary representations of the Lorentz group, for which he also gives an explicit form. Note that such representations were independently discovered by Wigner in 1939 and 1948 [15] and were thoroughly studied only in the years 1948-1958 [16]. Luckily enough, we are able to reconstruct the reasoning which led Majorana to discuss the infinite-dimensional representations. In Sect. 8 of Volumetto V [3], starting from the behaviour under infinitesimal Lorentz transformations, he deduces the general commutation relations satisfied by the rotation and boost generators S and T acting on generic (even infinite) tensors or spinors:

[S_x, S_y] = i S_z,   [T_x, T_y] = −i S_z,   [S_x, T_y] = i T_z,   [S_x, T_x] = 0,

etc. Next he introduces the matrices

a = ½ (S + i T),   b = ½ (S − i T),

which are Hermitian for unitary representations (and vice versa), and which obey the commutation relations [a_x, a_y] = i a_z, [b_x, b_y] = i b_z, [a_i, b_j] = 0, etc. The quantities on which a and b act are infinite tensors or spinors (for integer or half-integer j, respectively) in the given representation, so that Majorana effectively constructs, for the first time, infinite-dimensional representations of the Lorentz group. In [14] the author also picks out a physical realization of the matrices a and b for Dirac particles with energy operator H. The Majorana equation for particles with arbitrary spin then has the same form as the Dirac equation, but with different (and infinite-dimensional) matrices α and β, whose elements are given in Eqs. (43). The rest energy of the particles thus described has the form

M = m / (s + ½)

and depends on the spin s of the particle. We stress here that the scientific community of that time was convinced that only equations of motion for spin-0 (Klein-Gordon equation) and spin-1/2 (Dirac equation) particles could be written down. The importance of the Majorana work was first realized by van der Waerden [29] but, unfortunately, the paper remained unnoticed until recent times.
2019-04-14T03:15:15.035Z
2006-04-07T00:00:00.000
{ "year": 2006, "sha1": "ca4a5e727a7d3f383811d786eb162851e7dc11a4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b2eb82c7c2fc78f38df6fdd7301a25dfb8714842", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
12012375
pes2o/s2orc
v3-fos-license
Quantile Forecasting of Wind Power Using Variability Indices

Wind power forecasting techniques have received substantial attention recently due to the increasing penetration of wind energy in national power systems. While the initial focus has been on point forecasts, the need to quantify forecast uncertainty and communicate the risk of extreme ramp events has led to an interest in producing probabilistic forecasts. Using four years of wind power data from three wind farms in Denmark, we develop quantile regression models to generate short-term probabilistic forecasts from 15 min up to six hours ahead. More specifically, we investigate the potential of using various variability indices as explanatory variables in order to include the influence of changing weather regimes. These indices are extracted from the same wind power series and optimized specifically for each quantile. The forecasting performance of this approach is compared with that of appropriate benchmark models. Our results demonstrate that variability indices can increase the overall skill of the forecasts and that the level of improvement depends on the specific quantile.

Introduction
Wind power is one of the fastest growing renewable energy sources (Barton and Infield [1]). According to the European Wind Energy Association (EWEA), the wind industry has had an average annual growth of 15.6% over the last 17 years (1995-2011). In 2011, 9616 MW of wind energy capacity was installed in the EU, making a total of 93957 MW, which is sufficient to supply 6.3% of the European Union's electricity. These figures represent 21.4% of new power capacity, showing that wind energy continues to be a popular source of energy.

However, due to the large variability of wind speed caused by the unpredictable and dynamic nature of the earth's atmosphere, there are many fluctuations in wind power production. This inherent variability of wind speed is the main cause of the uncertainty observed in wind power generation. Recently, scientists have been directly or indirectly attempting to model this uncertainty and produce improved forecasts of wind power production.

According to Boyle [2], the most important application of wind power forecasting is to reduce the need for balancing energy and reserve power, which are needed to optimize power plant scheduling. Moreover, wind power forecasts are used for grid operation and grid security evaluation. For maintenance and repair reasons, the grid operator needs to know current and future values of wind power for each grid area or grid connection point. Wind power forecasts are also required for small regions and individual wind farms.

The length of the relevant forecast horizon usually depends on the required application. For example, in order to schedule power generation (grid management), forecast horizons of several hours are usually sufficient, but for maintenance planning forecast horizons of several days or weeks are needed [3].
Since there is no efficient way to store wind energy, wind power production decreases to zero if the wind speed drops below a certain level known as the "cut-in speed". On the other hand, excessively strong winds can cause serious damage to the wind turbines, and hence they are automatically shut down at the "disconnection speed", leading to an abrupt decline in power generation. In addition, the wind power generated is limited by the capacity of each turbine. Therefore, it is important to produce accurate wind power forecasts to enable the efficient operation of wind turbines and the reliable integration of wind power into the national grid.

The literature on wind power forecasting starts with the work of Brown et al. [4], where autoregressive processes were used to model and simulate the wind speed, and the wind power was then estimated by applying suitable transformations to the wind speed values. Most of the early literature focuses on producing wind power point forecasts, directly, or indirectly in the sense that the focus is on modelling the wind speed and then transforming the forecasts through a power curve [5,6]. The approach of modelling the wind speed series is found to be quite useful because in many situations researchers do not have access to wind power data due to its commercial sensitivity. This approach has the advantage that the wind speed time series is much smoother than the corresponding wind power time series. An obvious disadvantage is that, since the shape of the power curve may vary with the time of year and with different environmental conditions, it is much more difficult to model this type of behaviour.

Recent research has focused on producing probabilistic or density forecasts, because point forecast methods are not able to quantify the uncertainty related to the prediction. Point forecasts usually inform us about the conditional expectation of wind power production, given information up to the current time and the estimated model parameters. Only a fully probabilistic framework gives us the opportunity to model the uncertainty related to the prediction and to avoid the intrinsic uncertainty involved in a calibrated point-forecasting model. Up to now, the number of studies on multi-step quantile/density forecasting has been relatively small compared with point forecasting.

Moeanaddin and Tong [7] estimated densities using recursive numerical methods, which are quite computationally intensive. Gneiting et al. [8] introduced regime-switching space-time (RST) models which identify forecast regimes at a wind energy site and fit a conditional predictive model for each regime. The RST models were applied to 2-h-ahead forecasts of hourly average wind speed near the Stateline wind energy center in the U.S. Pacific Northwest. One of the most recent regime-based approaches is the one used by Trombe et al. [9], who propose a general model formulation based on a statistical approach and historical wind power measurements only. The model they propose is an extension of Markov-Switching Autoregressive (MSAR) models with Generalized Autoregressive Conditional Heteroscedastic (GARCH) errors in each regime, to cope with the heteroscedasticity.
Pinson [10], by introducing and applying a generalised logistic transformation, managed to produce ten-minute-ahead density forecasts at the Horns Rev wind farm in Denmark. Pinson and Kariniotakis [11] described a generic method for providing prediction intervals of wind power generation, and Sideratos and Hatziargyriou [12] proposed a novel methodology to produce probabilistic wind power forecasts using radial basis function neural networks. Taylor et al. [6] used statistical time series models and weather ensemble predictions to produce density forecasts for five wind farms in the United Kingdom. This is a relatively new approach for wind power forecasting that uses ensemble forecasts produced from numerical weather prediction (NWP) methods [6,13]. Moreover, Lau and McSharry [14] produced multi-step density forecasts for the aggregated wind power series in Ireland, using ARIMA-GARCH processes and exponential smoothing models. Jeon and Taylor [15] modelled the inherent uncertainty in wind speed and direction using a bivariate VARMA-GARCH model and then modelled the stochastic relationship of wind power to wind speed using conditional kernel density (CKD) estimation. This is a rather promising semi-non-parametric model but unfortunately cannot be used as a benchmark in this article because we aim to make predictions using only wind power data.

The quantile regression method [16] has been extensively used to produce wind power quantile forecasts, using a variety of explanatory variables among which are wind speed, wind direction, temperature and atmospheric pressure. Recent literature includes papers by Bremnes [17], Nielsen et al. [18], and Moller et al. [19]. More specifically, Bremnes [17] produced wind power probabilistic forecasts for a wind farm in Norway, using a local quantile regression model. The predictors used for the local quantile regression were outputs from a NWP model (HIRLAM10), with lead times from 24 to 47 h. Nielsen et al. [18] used an existing wind power forecasting system (Zephyr/WPPT) and showed how the analysis of the forecast error can be used to build a model for the quantiles of the forecast error. The explanatory variables used in their quantile regression model include meteorological forecasts of air density, friction velocity, wind speed and direction from a NWP model (DMI-HIRLAM). Moreover, Moller et al. [19] presented a time-adaptive quantile regression algorithm (based on the simplex algorithm) which manages to outperform a static quantile regression model on a data set with wind power production. In addition, Pritchard [20] discussed ways of formulating quantile-type models for forecasting variations in wind power within a few hours. Such models can predict quantiles of the conditional distribution of the wind power available at some future time using information presently available.

Davy et al. [21] proposed a new variability index that is designed to detect rapid fluctuations of wind speed or power that are sustained for a length of time, and used it as an explanatory variable in the quantile regression model they constructed. Bossavy et al. [22] extracted two new indices that are able to recognize and predict ramp events (a ramp event is defined as a large change in the power production of a wind farm or a collection of wind farms over a short period of time)
in the wind power series, and used them to produce quantile estimates with the quantile regression forest method as their basic forecasting system. Finally, Gneiting [23] studied the behaviour of quantiles as optimal predictors and illustrated the relevance of decision theoretic guidance in the transition from a predictive distribution to a point forecast, using the Bank of England density forecasts of United Kingdom inflation rates and probabilistic predictions of wind energy resources in the Pacific Northwest.

The purpose of this article is not to develop models that can compete with the commercially available models that focus on forecast horizons greater than six hours (and use NWPs). This is also the main reason we chose a very short forecast horizon (six hours), since it has been shown that statistical time series models may outperform sophisticated meteorological forecasts for short lead times within six hours [24]. In fact, NWPs are not even available (for some regions) for lead times shorter than three hours. So, as mentioned above, our choice of such a short forecast horizon is particularly useful for the assessment of grid security and operation. We would like to investigate the extent to which the use of quantile regression models with endogenous explanatory variables can improve the forecasting performance of probabilistic benchmarks such as persistence and climatology.

In this article we use wind power series from three wind farms in Denmark to produce very short-term quantile forecasts, from 15 min up to six hours ahead. In order to produce quantile forecasts, we will use a linear quantile regression model, with explanatory variables extracted from the same wind power time series. Modelling the wind power series directly is preferable to a method based on wind speed forecasts because we avoid the uncertainty involved in transforming wind speed forecasts back to wind power forecasts using the power curve. The fact that we use only endogenous explanatory variables is also a very important practical consideration that we have taken on board to ensure that our model can be applied to all wind farms. Power system operators require an approach that can forecast a wide range of sites, where a collection of different wind farm owners implies that the only variable they are guaranteed to have access to is the wind power generation over time.

Four new variability indices will be produced (extracted from the original wind power time series), which serve to capture the volatile nature of the wind power series. These indices, together with some lagged versions of the wind power series, will be used as explanatory variables in the quantile regression model. As for any regression model, we need predictions (point forecasts) for the future values of the explanatory variables in order to produce future quantile estimates. To produce these predictions we will use time series models that are able to model both the mean and the variance of the underlying series.
The motivation behind the chosen model structure is based on understanding the way that the underlying weather variability can affect the conditional predictive density of the wind power generation. We would like to keep the model structure as simple as possible and therefore assume that the probability of observing a value of wind power below a certain level can be written as a function of some local mean plus the local variability involved in observing the specific wind power value. A linear combination of recently observed wind power values seems to be the easiest way to identify a function that can forecast the expected value of a specific quantile, given recent information. It is worth noticing that the model may be linear in parameters, but nonlinearity is attained through the explanatory variables themselves, and especially through the variability indices. In addition, the variability indices can capture the underlying weather variability, and hence help to improve the probabilistic forecasts given a certain weather regime.

The three Danish wind farms were chosen according to their monthly wind power capacity and standard deviation. We choose one high, one low, and one average variability wind farm, in order to better understand the ability of each model to produce probabilistic forecasts under different circumstances.

The indices used will be independently optimized for each of the three wind farms, using a one-fold cross validation technique. In fact, two different optimizations will take place for each wind farm: the first one will aim to minimize the Check Function Score (defined in Section 4.2) produced by a 1-step ahead quantile regression forecast, for each of 19 different quantiles. The second one will aim to minimize the averaged Check Function Score, produced by taking the average over all 24 predicted lead times (equal to six hours), for each quantile. The final forecast results will be compared with those of some widely used benchmark models (persistence distribution and unconditional distribution).

The remainder of the article is organised as follows. In Section 2 we introduce the wind power data, and the new variability indices are derived in Section 3. Section 4 presents the methodology behind the various models and explains ways to evaluate the resulting quantile forecasts. In Section 5 we present the four competing quantile regression models and optimize their quantile forecast performance on the in-sample testing set. In Section 6 the out-of-sample quantile and density forecast performance of the competing quantile regression models is assessed, and Section 7 concludes the article.
Wind Power Data

We use wind power data recorded at three wind farms in Denmark, summarized in Table 1. These wind farms were chosen to have different amounts of wind power variability, to be located in different geographical regions (the 446 wind farms in Denmark are assigned to 15 different geographical regions, but no further information about the actual locations of the wind farms is disclosed), and to have the smallest percentage of missing values among all available wind farms. The percentage of missing values (mostly isolated points) is found to be less than 0.025% for all three wind farms, and missing values were imputed using linear interpolation. For such a small percentage of missing values, the smoothing effect caused by using linear interpolation to impute the missing values is practically negligible. Our data sets contain wind power measurements recorded every 15 min for four years, from 1 January 2007 to 31 December 2010. The data of each wind farm is bounded between zero and the maximum capacity of the wind farm. The zero value is attained in the case of excessively strong wind, where the turbines shut down in order to prevent damage, or in the case of very weak wind (below the cut-in wind speed, usually 3-4 ms −1 according to Pinson [10]). In order to facilitate comparisons between data sets of different capacities, we normalize the wind power data of each wind farm by dividing by the total (rated) capacity, which is constant over the four-year period. Hence, the data is bounded within the interval [0,1].

We dissect the data of each farm into an in-sample set of exactly two years (2007 and 2008) for model training and calibration, and an out-of-sample testing set (the remaining two years) for out-of-sample testing and model evaluation. The in-sample set is dissected again into two sub-sets, a training set and a testing set. For the in-sample training set we use the first 1.5 years and for the in-sample testing set the remaining half year. This way, we can use a one-fold cross validation technique to optimize the indices introduced in Section 3, and test the performance of our final chosen model using the out-of-sample testing set.

The time series plots for the year 2010, together with the monthly mean power output and standard deviation, are shown in Figure 1. The monthly mean power output and monthly standard deviation were generated by taking the mean and standard deviation of wind power, respectively, for each month over the entire four-year period. As we observe, the three wind farms have different wind power variability. More specifically, the first and last wind farms of Figure 1 have the lowest and highest wind power variability for all four years (from all the available wind farms in Denmark), without having any significant changes in the capacity from year to year (wind power variability may change from year to year by the addition of new turbines or the removal, maybe for maintenance, of existing ones). The second wind farm of Figure 1 was chosen to have an average (medium) wind power variability compared with the other two farms, but again without any significant changes in the monthly capacity from year to year.
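As a minimal sketch of this preparation step (in Python with pandas; the file name, column name, and capacity value are hypothetical, and the date-based split simply mirrors the description above):

```python
import pandas as pd

# Hypothetical input: a CSV with 15 min timestamps and raw power in kW.
df = pd.read_csv("farm.csv", parse_dates=["timestamp"], index_col="timestamp")

# Impute the rare, mostly isolated missing values by linear interpolation.
power = df["power_kw"].interpolate(method="linear")

# Normalize by the (constant) rated capacity so the series lies in [0, 1].
RATED_CAPACITY_KW = 10_000.0  # hypothetical rated capacity of the farm
y = (power / RATED_CAPACITY_KW).clip(0.0, 1.0)

# In-sample: 2007-2008 (1.5 y training + 0.5 y testing); out-of-sample: 2009-2010.
train = y["2007-01-01":"2008-06-30"]
insample_test = y["2008-07-01":"2008-12-31"]
out_of_sample = y["2009-01-01":"2010-12-31"]
```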
Indices of Wind Power Variability

Davy et al. [21] proposed a variability index that is designed to detect rapid fluctuations of wind speed or power that are sustained for a length of time. They defined this variability index as the standard deviation of a band-limited signal in a moving window, and they constructed such an index for a wind speed time series. This variability index depends on four parameters: the order of the filter (an integer greater than one), the upper and lower frequencies of the extracted signal, and the width of the moving window. We would like to use such an index as an explanatory variable in our quantile regression, but a proper optimization of it is too computationally expensive because of the number of parameters involved.

Figure 1. Time series plots of normalized power data for the three chosen Danish wind farms, for the year 2010. Please note that the point on the time axis labelled Jan refers to 00:00 on 1 January, and similarly for every month.

Instead, we propose a parsimonious variability index which depends on only two parameters, (m, n) where m, n ∈ ℕ₀\{1}, and is constructed as follows. Firstly we smooth our original wind power series, y_t, using an averaging window of size m, in order to obtain the smoothed wind power series

r̄_t = (1/m) Σ_{i=t−m+1}^{t} y_i, (1)

for t ≥ m. Note that this series behaves in a fully retrospective way, in the sense that each point of the series depends only on the historical values of the original series. Since the smoothed series is m − 1 points shorter than the original series, we set r̄_t = r̄_m, for t = 1, 2, ..., m − 1. Finally, the new variability index is just the standard deviation of the extracted smoothed wind power series in a moving window of width n. So, if r̄_t is a given point of the smoothed series, we define the new index as

SD_t = sqrt( (1/(n−1)) Σ_{i=t−n+1}^{t} (r̄_i − μ_t)² ), with μ_t = (1/n) Σ_{i=t−n+1}^{t} r̄_i, (2)

for t ≥ n. Again, we impute the first n − 1 points of the series by setting SD_t = SD_n, for t = 1, 2, ..., n − 1. This index can be optimized much more easily than the one proposed by Davy et al. [21], since it has only two parameters: the smoothing parameter m, and the variability parameter n.

By similar reasoning, we create another three variability indices. We create the smoothed wind power series, r̄_t, as defined by Equation (1), and then instead of finding the standard deviation we find the sample interquartile range (IQR), and the 5% and 95% sample quantiles of the smoothed series over a moving variability window (different for each series) of width n.

There are many different ways to define the quantiles of a sample. We use the definition recommended by Hyndman and Fan [25], presented as follows. Let R_t = {r̄_{t−n+1}, ..., r̄_{t−1}, r̄_t} for t ≥ n > 1, denote the order statistics of R_t as {r_(1), ..., r_(n)}, and let Q_{R_t}(p) denote the sample p-quantile of R_t with proportion p ∈ (0, 1). We calculate Q_{R_t}(p) (for a chosen proportion p) by firstly plotting r_(k) against p_k, where p_k = (k − 1/3)/(n + 1/3) and k = 1, ..., n. This plot is called a quantile plot and p_k a plotting position. Then, we use linear interpolation of (p_k, r_(k)) to get the solution (p, Q_{R_t}(p)) for a chosen 0 < p < 1. Therefore, the three new indices can be defined as

IQR_t = Q_{R_t}(0.75) − Q_{R_t}(0.25), (3)
Q05_t = Q_{R_t}(0.05), (4)
Q95_t = Q_{R_t}(0.95), (5)

for t ≥ n. We also impute their values for t = 1, ..., n − 1 in a similar way as we did for the SD index. An example of the construction of these variability wind power indices is shown in Figure 2.
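The four indices are straightforward to compute with rolling windows. The sketch below is our illustration, not the authors' code: it assumes the reconstructed definitions above and uses numpy's "median_unbiased" quantile method (numpy ≥ 1.22), which implements the Hyndman-Fan plotting position p_k = (k − 1/3)/(n + 1/3).

```python
import numpy as np
import pandas as pd

def variability_indices(y: pd.Series, m: int, n: int) -> pd.DataFrame:
    # Trailing (fully retrospective) mean of width m; m = 0 means no smoothing.
    r = y.rolling(window=m).mean() if m >= 2 else y.copy()
    r = r.fillna(r.dropna().iloc[0])  # impute the first m-1 points, as in the paper

    roll = r.rolling(window=n)
    q = lambda p: roll.apply(
        lambda w: np.quantile(w, p, method="median_unbiased"), raw=True)

    out = pd.DataFrame({
        "SD": roll.std(),            # sample standard deviation, Equation (2)
        "IQR": q(0.75) - q(0.25),    # Equation (3)
        "Q05": q(0.05),              # Equation (4)
        "Q95": q(0.95),              # Equation (5)
    })
    # Back-fill the first n-1 points with the first defined value (SD_n, etc.).
    return out.bfill()
```

For the optimization of Section 5, this function would simply be evaluated over the grid of (m, n) pairs.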
A first observation is that the IQR and SD indices behave similarly, but the IQR index has higher peaks than the SD index, and hence gives more emphasis to the high variability regions of the wind power series. Moreover, the Q05 and Q95 indices also behave quite similarly, capturing the two tails of the wind power distribution over a predefined window. These indices will be properly optimized and will be used, together with some lagged values of the original power series, as explanatory variables in the quantile regression introduced in the next section. It is worth mentioning that the choice of firstly smoothing the wind power series is made in order to take into consideration the fact that any noise may hide or alter the pattern of the underlying weather regime we wish to capture. By choosing m = 0 we do not remove any of the underlying noise, and hence we assume that the weather variability is fully captured by using the original wind power time series.

Quantile Regression, Forecasting, and Evaluation Methodology

In order for the paper to be self-contained, we include the theory of linear quantile regression in Section 4.1. In Section 4.2 we introduce the methodology we will use to evaluate the produced quantile and density forecasts.

Quantile Regression

Given a random variable, y_t, and a strictly increasing continuous CDF, F_t(y), the α_i-quantile, q_t^{(α_i)}(y), with proportion α_i ∈ [0, 1] is defined as the value for which the probability of obtaining values of y_t below q_t^{(α_i)}(y) is equal to α_i, that is,

F_t( q_t^{(α_i)}(y) ) = α_i. (6)

Note that the notation y_t is used for denoting both the stochastic state of the random variable at time t = 1, 2, ..., T, and the measured value at that time for a training set of size T.

Quantile regression, introduced by Koenker and Bassett [16], models q_t^{(α_i)} as a linear combination of some given explanatory variables (also called regressors or predictors). So, the α_i-quantile is modelled as:

q_t^{(α_i)} = γ_0(α_i) + Σ_{j=1}^{p} γ_j(α_i) x_{t,j}, (7)

where the γ_j are unknown coefficients depending on α_i, and the x_{t,j} are the p known explanatory variables. In quantile regression, a regression coefficient estimates the change in a specified quantile of the response variable produced by a one unit change in the corresponding explanatory variable.

We define the quantile loss function [16], also known as the check function, for a given proportion α_i ∈ [0, 1] as:

ρ_{α_i}(u) = α_i u if u ≥ 0, and (α_i − 1) u if u < 0, (8)

where u is a given value. Then, the sample α_i-quantile can be calculated by minimizing Σ_{t=1}^{T} ρ_{α_i}(y_t − q) with respect to q. Hence, we can estimate the unknown coefficients by replacing q with the right-hand side of Equation (7):

γ̂(α_i) = argmin_γ Σ_{t=1}^{T} ρ_{α_i}( y_t − γ_0(α_i) − Σ_{j=1}^{p} γ_j(α_i) x_{t,j} ), (9)

where γ̂(α_i) is a vector containing the estimated coefficients. Usually, these estimates are calculated using linear programming techniques as in Koenker and D'Orey [26].

In this article we will use quantile regression to forecast the values of quantiles with nominal proportions α_i = {0.05, 0.10, ..., 0.95}, for forecast horizons k = 1, 2, ..., 24, measured in time steps of 15 min. We denote the forecast for the quantile with nominal proportion α_i, issued at time t for forecast time t + k, by q̂^{(α_i)}_{t+k|t}(y). In order to produce these forecasts, we use Equation (7) and the estimated coefficients, γ̂(α_i):

q̂^{(α_i)}_{t+k|t} = γ̂_0(α_i) + Σ_{j=1}^{p} γ̂_j(α_i) x̂_{t+k|t,j}, (10)

where x̂_{t+k|t,j} for j = 1, ..., p denote the forecasts of the explanatory variables x_{t,j}, issued at time t with lead time t + k.
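As an illustration of the estimation step, the following sketch fits one quantile regression per nominal proportion with statsmodels' QuantReg; the design matrix (three wind power lags plus one variability index, mirroring the models of Section 5) and all variable names are our assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def pinball(u: np.ndarray, alpha: float) -> np.ndarray:
    # Check (quantile loss) function rho_alpha(u) of Equation (8).
    return np.where(u >= 0, alpha * u, (alpha - 1.0) * u)

def fit_quantile_models(y: pd.Series, vindex: pd.Series, alphas):
    # Three lags of the power series plus one variability index as regressors.
    X = pd.DataFrame({
        "lag1": y.shift(1), "lag2": y.shift(2), "lag3": y.shift(3),
        "vindex": vindex,
    }).dropna()
    X = sm.add_constant(X)
    target = y.loc[X.index]
    # One coefficient vector gamma(alpha_i) per nominal proportion, Equation (9).
    return {a: QuantReg(target, X).fit(q=a) for a in alphas}

# Example usage:
# alphas = np.round(np.arange(0.05, 1.0, 0.05), 2)
# models = fit_quantile_models(y, sd_index, alphas)
```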
The random variable y_t will represent the normalized wind power time series, (y_t), and the explanatory variables will be represented by time series, (x_{t,j}), extracted from the normalized wind power series. In order to produce the forecasts, x̂_{t+k|t,j}, we will fit suitable time series models to the variables (x_{t,j}), and then predict from these models up to t + k values ahead.

It is worth mentioning that by producing quantile forecasts using quantile regression, we may end up with some quantile forecasts crossing each other. This is not a very common phenomenon for so few quantile forecasts (19 in our case), but monitoring its occurrence is very important. In our analysis, whenever this phenomenon happens (it occurs very rarely because we fit the models to a large amount of data) we simply shift the crossing quantile forecasts in order to keep F̂_{t+k}( q̂^{(α_i)}_{t+k|t} ) = α_i, for α_i = {0.05, 0.10, ..., 0.95}, a strictly increasing function.

Quantile and Density Forecast Evaluation

The evaluation of the quantile forecasts, for each quantile α_i = {0.05, 0.10, ..., 0.95}, will be undertaken using the quantile loss function, also known as the check function [3,27], which is used to define a specific quantile of the distribution and was defined in Section 4.1, Equation (8). Given a testing set of size N, we can estimate a particular quantile, q̂^{(α_i)}, with proportion α_i, using

q̂^{(α_i)} = argmin_q Σ_{t=1}^{N} ρ_{α_i}(y_t − q), (11)

and therefore we can evaluate a series of quantile forecasts, q̂^{(α_i)}_{t+k|t}, issued at time t with lead time t + k and nominal proportion α_i, using:

QL(k, α_i) = (1/N) Σ_t ρ_{α_i}( y_{t+k} − q̂^{(α_i)}_{t+k|t} ). (12)

This is the average over the whole testing set of the check function score, ρ_{α_i}(y_{t+k} − q̂^{(α_i)}_{t+k|t}), for the quantile α_i, for a k-step ahead prediction. From now on we will call this function the Check Function (CF), and its score the Check Function Score (CFS).

Using the different quantile forecasts we can also reconstruct the whole forecasted probability/cumulative distribution. We use the Continuous Ranked Probability Score (CRPS) in order to evaluate the density forecasts for each forecast horizon:

crps( F̂_{t+k|t}, y_{t+k} ) = ∫ ( F̂_{t+k|t}(y) − 1_{y ≥ y_{t+k}} )² dy. (13)

The crps [28] is computed by taking the integral of the Brier scores for the associated probability forecasts at all real-valued thresholds, where F̂_{t+k|t}(y) corresponds to the CDF forecast, and y_{t+k} to the corresponding verification. 1_{y ≥ y_{t+k}} is an indicator function that equals one if y ≥ y_{t+k} and zero otherwise. The quantile score, QS_{α_i} [29], is defined by

QS_{α_i} = 2 ρ_{α_i}( y_{t+k} − q̂^{(α_i)}_{t+k|t} ), (14)

and the crps can be written as the integral of the quantile scores over all proportions,

crps( F̂_{t+k|t}, y_{t+k} ) = ∫_0^1 QS_α dα. (15)

Hence, the average of these crps values over each forecast-verification pair gives the CRPS for each forecast horizon k:

CRPS(k) = (1/N) Σ_t crps( F̂_{t+k|t}, y_{t+k} ), (16)

which can be approximated using the 19 estimated quantiles as

CRPS(k) ≈ Δα Σ_{i=1}^{19} 2 QL(k, α_i), with Δα = 0.05, (17)

where QL(k, α_i) is the CF defined in Equation (12). Representation (17) is useful to produce a rough estimate of the in-sample CRPS for each forecast horizon, using the CFS for each quantile. This is a rather poor approximation of the CRPS, because the number of quantiles used in this article (19 quantiles) is not large enough to produce an accurate approximation of the integral in Equation (15).

In order to find the out-of-sample CRPS for each k, we will use the following alternative representation of the crps, introduced by Gneiting and Raftery [29]:

crps( F̂_{t+k|t}, y_{t+k} ) = E|X − y_{t+k}| − (1/2) E|X − X′|, (18)

where X and X′ are independent copies of a random variable with CDF F̂_{t+k|t}. This representation is particularly useful when F̂ is represented by a sample, as in our case. Then, the CRPS for each forecast horizon k is given by Equation (16).
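The sample-based representation (18) is simple to implement. In the sketch below (our illustration), the predictive distribution is represented by the 19 quantile forecasts treated as an equally weighted sample; sorting them also applies the re-ordering fix for crossing quantiles mentioned above.

```python
import numpy as np

def crps_from_quantiles(q_forecasts: np.ndarray, obs: float) -> float:
    # Treat the 19 quantile forecasts as an equally weighted sample from the
    # predictive distribution; sorting also removes any quantile crossing.
    x = np.sort(np.asarray(q_forecasts, dtype=float))
    term1 = np.mean(np.abs(x - obs))                        # E|X - y|
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))  # (1/2) E|X - X'|
    return term1 - term2

# Averaging crps_from_quantiles over all forecast-verification pairs for a
# given lead time k gives CRPS(k) as in Equation (16).
```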
Moreover, it will be necessary to quantify the gain/loss of some forecasting models with respect to a chosen reference model. Following McSharry et al. [3], this gain, denoted as an improvement with respect to the considered reference forecast system, is called a Skill Score and is defined as:

SkillScore(k) = ( SCORE_ref(k) − SCORE(k) ) / SCORE_ref(k) × 100%, (19)

where k is the lead time of the forecast and SCORE is the evaluation criterion score (such as the CRPS or CFS). By using the above definition we can also introduce the Average Skill Score. This is just the Skill Score with the scores of the competing and reference models averaged over all forecast horizons. It is defined as:

AverageSkillScore = ( (1/24) Σ_k SCORE_ref(k) − (1/24) Σ_k SCORE(k) ) / ( (1/24) Σ_k SCORE_ref(k) ) × 100%. (20)

So, when we are talking about a Score, the lower the value the better the performance; but when we are talking about a Skill Score (or Average Skill Score), the higher the value the better, since we are comparing the candidate model to the reference model. Please note that the reference model will be different each time, and chosen according to the comparison we wish to make.

In order to formally rank and statistically justify any possible difference in the CRPS and CFS of the competing models with respect to the reference models, we will use the Amisano and Giacomini test [30] of equal forecast performance. This test is based on the statistic

t_{N,k} = √N d̄_k / σ̂_{N,k}, (21)

where N is the out-of-sample size, σ̂_{N,k} is a consistent estimator of the standard deviation of the score differentials, and

d̄_k = (1/N) Σ_t [ S( q̂_{t+k|t}, y_{t+k} ) − S_ref( q̂_{t+k|t}, y_{t+k} ) ]. (22)

The functions S and S_ref represent the before-averaging scores (such as the crps of Equations (13) or (18), and the check function score defined just after Equation (12)) of the competing and reference models, respectively. Assuming suitable regularity conditions, according to Amisano and Giacomini [30], the statistic t_{N,k} is asymptotically standard normal under the null hypothesis of zero expected score differentials. Small p-values of this test provide evidence that the difference in the forecast performance of the two forecasting models (given a specific evaluation score) is statistically significant.
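A minimal sketch of Equations (19)-(22) follows; estimating the standard deviation of the score differentials by the plain i.i.d. sample estimator is a simplifying assumption (Amisano and Giacomini allow more general, heteroskedasticity-and-autocorrelation-consistent estimators).

```python
import numpy as np
from scipy import stats

def skill_score(score: float, score_ref: float) -> float:
    # Equation (19): positive values mean the candidate beats the reference.
    return 100.0 * (score_ref - score) / score_ref

def amisano_giacomini(s: np.ndarray, s_ref: np.ndarray):
    # s, s_ref: per-observation scores (crps or check function score) of the
    # competing and reference models for one lead time k.
    d = s - s_ref                                  # score differentials
    t = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)  # Equations (21)-(22)
    p_value = 2.0 * stats.norm.sf(abs(t))          # asymptotically N(0,1) under H0
    return t, p_value
```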
Optimization of the Variability Indices

In this section we will introduce four different quantile regression models and, using one-fold cross validation, try to optimize their probabilistic forecasting performance. Our main goal is to evaluate whether or not the four variability indices (introduced in Section 3) can help to provide trustworthy quantile forecasts of wind power when used as explanatory variables in the quantile regression model (7). For this purpose, we have to find the optimal set of parameters (m, n) of these indices, which provides the best quantile forecast performance for each individual quantile. We do that using the following procedure.

For each index, we sample different combinations of parameters from the range m, n = {0, 8, 16, ..., 192}, in order to produce 625 different realizations of each index, for each wind farm. A preliminary analysis showed that creating a moving window larger than 192 time-points wide (2880 min, i.e., 2 days) did not increase the performance of the indices.

Then, for each set of parameters, we fit the following four different quantile regression models on the in-sample training set (of each wind farm), for each of the 19 quantiles α_i = {0.05, 0.1, ..., 0.95}:

SD model: q_t = γ_0 + γ_1 y_{t−1} + γ_2 y_{t−2} + γ_3 y_{t−3} + γ_4 SD_t,
IQR model: q_t = γ_0 + γ_1 y_{t−1} + γ_2 y_{t−2} + γ_3 y_{t−3} + γ_4 IQR_t,
Q05 model: q_t = γ_0 + γ_1 y_{t−1} + γ_2 y_{t−2} + γ_3 y_{t−3} + γ_4 Q05_t,
Q95 model: q_t = γ_0 + γ_1 y_{t−1} + γ_2 y_{t−2} + γ_3 y_{t−3} + γ_4 Q95_t, (23)

where q_t ≡ q_t^{(α_i)} is defined in Equation (6), γ_j ≡ γ_j(α_i) are the regression coefficients, and the y_{t−j} are lagged wind power values. The choice of the number of wind power series lags used as explanatory variables was made by considering the AIC (a prediction-based criterion according to Akaike [31]) of quantile regression models with different numbers of lags as explanatory variables. We also investigated the improvement obtained by adding to the right-hand side of Equation (23) a combination of variability indices. Due to collinearity effects, the SD and IQR indices cannot coexist in the same equation. Any other combination of the variability indices did not provide a reduction in the AIC for more than 14 out of the 19 quantile regression equations, at any of the three wind farm sites. Hence, we examined the effect that each individual variability index provides when included as an explanatory variable in the quantile regression equations, as defined by Equation (6).

Moreover, we also considered adding to the right-hand sides of Equation (23) a trigonometric function (also introduced in Equation (24) below) which uses two pairs of harmonics to regress the wind power quantile, q_t, on the 15 min time step of the day. The addition of this function, which is used to model the diurnal component of each quantile of the wind power production at each wind farm, was not found to provide a reduction in the AIC for 17 out of the 19 quantile regression equations, at any of the three wind farm sites. Hence, in order to obtain parsimonious models we excluded these functions from the final models. Nevertheless, we must acknowledge the fact that a diurnal effect may be relevant and very important for wind farms in other locations or countries.

The models in Equation (23) are regression models, and hence, in order to predict their responses, q̂_t^{(α_i)}, we need predictions for their explanatory variables. These are just lagged versions of the original wind power series, and the different variability indices. All of these explanatory variables have similar characteristics as they result from the original wind power series. The lagged versions of the wind power series are certainly non-stationary, and all 4 × 625 different realizations of the variability indices (for every wind farm), even though they can be much smoother (for large values of m, n) than the original wind power series, are also non-stationary.

The predictions (point forecasts) of the explanatory variables are produced using ARIMA and ARIMA (in mean)-GARCH (in variance) models. By modelling the mean of the series using an ARIMA model, we allow for its non-stationary nature, and by modelling the variance using a GARCH process we allow for its heteroskedastic nature. Due to the fact that the wind power series (and the resulting variability indices) is bounded and does not follow any known parametric distribution, one may argue that an ARIMA or an ARIMA-GARCH model may not be appropriate. A modified ARIMA/ARIMA-GARCH model with limiter (this version limits the forecasts to be bounded between two specific values, zero and one in our case), as proposed by Chen et al.
[32], is used to deal with the problem of the data being bounded. Moreover, the empirical density of the differenced series is close to a Student's t-distribution density. Hence, we fit an ARMA/ARMA-GARCH model to the transformed (differenced) series, (w_t), or differenced variability index, assuming those data come from a Student's t-distribution whose parameters are estimated for each series. We incorporate this distributional assumption by assuming the resulting residual series (white noise) follows a Student's t-distribution.

The next step is to produce point forecasts from 15 min up to 6 h ahead (k = 1, 2, ..., 24), from each point of the in-sample testing set, by fitting ARIMA(1, 1, 1) models to each realization of the four variability indices of the above regressions. Our choice of the ARIMA(1, 1, 1) model may seem unappealing and arbitrary, but was made mainly for simplicity after exploring the forecast performances of various time series models. Choosing the best ARIMA-GARCH model (according to the AIC) for each of the 625 different realizations of each index (for each wind farm) is extremely computationally expensive, and hence we have to make some simplifications in order to make our optimization process computationally feasible. An ARIMA(1, 1, 1) is able to capture the non-stationary nature of the indices and avoid overfitting at the same time. In order to assess the goodness of the fits, we use the Ljung-Box test, and restrict our selection to the fits that do not reject the null hypothesis of this test (so the corresponding residuals are consistent with white noise).

Modelling the variance of the indices using ARCH/GARCH models (in combination with an ARIMA model for the mean) does not provide a consistent and significant improvement in the RMSE of the point forecasts (we used the Root Mean Square Error to evaluate the point forecast performance of various time series models). This is mainly because of the very small forecast horizon we have, and hence it suffices to use a simple ARIMA model with limiter. In order to produce point forecasts of the lagged wind power series, model selection using the AIC (results are the same using the BIC) identified an ARIMA(0, 1, 2)-GARCH(1, 1) model for the low variability farm, an ARIMA(1, 1, 3)-GARCH(1, 1) for the medium variability farm, and an ARIMA(2, 1, 1)-GARCH(1, 1) for the high variability farm. These models have the ability to capture the heteroskedastic effects that the wind power series exhibit, taking into account the non-linear nature of the variations. Also, these forecasts are calculated only once for all different realizations of the quantile regression models, and hence there is no need in this case to sacrifice the (small) accuracy gain for simplicity and computational efficiency. Table 2 shows the selected time series models for each wind farm and the two tests that assess their fit.

After producing quantile forecasts for 24 different forecast horizons, we evaluate them (i) using the CFS of only the first step ahead forecasts; and (ii) using the CFS averaged over all forecast horizons. The results confirm our expectation of better forecast performance for the models with small (smoothing and variability) moving windows. We repeat the above procedure by restricting the range of our parameters even more for each variability index, and sample every different combination of parameters from the range m, n = {0, 1, 2, ..., 50}.
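Before turning to the optimization results, here is a sketch of the ARIMA(1,1,1)-with-limiter point forecasts for one variability index; implementing the limiter as a simple clip of the forecasts to [0, 1] is our reading of the modification, not necessarily the exact scheme of Chen et al. [32].

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def forecast_index(series, steps: int = 24):
    # ARIMA(1,1,1) for the (non-stationary) variability index; the differencing
    # order d = 1 handles the non-stationarity while avoiding overfitting.
    res = ARIMA(series, order=(1, 1, 1)).fit()
    fc = res.forecast(steps=steps)
    # Limiter: keep the point forecasts inside the physical bounds of the series.
    return np.clip(fc, 0.0, 1.0)
```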
We end up with distinct sets of parameters (for each model and wind farm) that minimize the averaged and 1-step ahead CFS of each different quantile. The CFS minimization results are shown in Tables 3-6. In general, we cannot distinguish any particular parameter pattern, but there are some features that are worth mentioning. For all the models, it is more common to have the smoothing window width (m) smaller than the variability window width (n), especially for quantiles less than or equal to the median. This pattern changes for the upper quantiles (larger than the median), where we do not observe a clear pattern. Also, on average, the parameters for the averaged over 24-steps ahead optimization are smaller than the corresponding ones of the 1-step ahead optimization.

Out-of-Sample Forecast Performance Results

In this section, after fitting the four optimized models of Equation (23) to the whole two-year in-sample learning set (for each farm), we will produce quantile forecasts from 15 min up to six hours ahead from each point of the out-of-sample forecasting set, and assess their forecast performance using the CFS and the CRPS. In short, the CFS will be used to assess the skill of individual quantile forecasts, and the CRPS to assess the skill of density forecasts (produced by using all 19 quantile forecasts).

In order to facilitate the comparison of forecast performance across different models, we will introduce two widely used probabilistic benchmarks:

• Persistence distribution: It is defined as the distribution of the last n observations. The persistence benchmark is independently optimized (by estimating n) for each wind farm, using the same optimization methods as for the variability indices: 1-step ahead CFS minimization, and averaged over 24-steps CFS minimization. So, when the persistence is optimized using one of the two CFS minimization methods, different values of n are chosen to forecast each quantile.
• Unconditional distribution: We construct this benchmark by using all the past observations of the time series. This benchmark assumes that the time ordering of the observations is not relevant when attempting to predict the distribution of the response. It is also referred to as climatology.

The third benchmark used in this article is the quantile regression model with only the three lags of the wind power series as explanatory variables. This benchmark will help us to identify the gain in forecast performance acquired by using the four variability indices, and is referred to as the 3-lagged series benchmark.

Predictive distributions are often taken to be Gaussian even though the wind power series is bounded and non-negative. Moreover, in our record of wind power measurements we have values of exactly 0 and 1, and hence the predictive distributions may require point masses at 0 and 1. A convenient way to embed this property is through the use of cut-off normal predictive distributions, as used by Sanso and Guenni [33], Allcroft and Glasbey [34], Gneiting et al. [35] and Pinson [10]. The fourth benchmark of this article uses a cut-off normal predictive density, N_{0,1}(µ_{t+k|t}, σ²_{t+k|t}), and a diurnal trend component fitted to the three wind power series. The parameters µ_{t+k|t} and σ_{t+k|t} > 0 for k = 1, ..., 24 are called the location parameter (or predictive centre) and scale parameter (or predictive spread) of the cut-off normal density with point masses at 0 and 1.
Please note that a truncated normal predictive distribution (with cut-offs at 0 and 1) has also been considered, with results very similar to, but worse than, those of the cut-off normal predictive distribution benchmark.

The procedure used to construct the fourth benchmark of this article (also described in Gneiting et al. [35] and Gneiting et al. [8]) is as follows. At each of the three sites we firstly fit a trigonometric function with two pairs of harmonics,

y_t = a_0 + Σ_{h=1}^{2} [ a_h sin(2πh d(t)/96) + b_h cos(2πh d(t)/96) ], (24)

where y_t represents the normalised wind power for each farm at time t, and d(t) is a repeating function that numbers the time variable (in 15 min steps) from 1 to 96 within each day. We then remove the ordinary least squares (OLS) fit from each wind power series and use the resulting residual series, denoted by r_t, to determine the predictive centre and predictive spread of the cut-off normal predictive distribution. More specifically, we introduce the following linear autoregressive system,

r_t = φ_1 r_{t−1} + ... + φ_p r_{t−p} + ε_t, (25)

and use this to determine the forecasts r̂_{t+k|t} in a straightforward way, for each k = 1, ..., 24 (from 15 min up to 6 h ahead). Then, the predictive centre of the cut-off normal distribution is modelled as

µ_{t+k|t} = ŷ_{t+k|t} + r̂_{t+k|t}, (26)

where ŷ_{t+k|t} is the forecast issued at time t with forecast horizon k for the fitted diurnal trend of Equation (24).

Finally, in order to model the predictive spread we introduce, following Gneiting et al. [8], the volatility function at time t, v_t, defined as the standard deviation of the residual series over a short trailing window. This benchmark thus allows for conditional heteroskedasticity by modelling v_t and setting the predictive spread as the forecast of v_t issued at time t for forecast time t + k: σ_{t+k|t} = v̂_{t+k|t}.

These four benchmarks will be used as the reference models mentioned in Section 4.2. In the following tables we will present the evaluation results of the four models, for each evaluation criterion and optimization type. As the relative performances of the methods are similar for each of the three locations, following Taylor et al. [6], we present the results averaged over the three wind farms. Moreover, we present only the Skill and Average Skill Scores of each evaluation criterion, as we are particularly interested in quantifying and statistically testing (using the Amisano-Giacomini test [30]) the relative increase in forecast performance of the four competing models with respect to the four benchmarks (reference models).

Out-of-Sample Model Comparison and Evaluation-Quantile Forecasting

In this subsection we compare the out-of-sample forecast performance of the competing models for each quantile and model optimization method. We have a total of 19 quantile forecasts for each model and for two different optimization methods. Please note that, in order to avoid presenting any unnecessary information, we summarise the results in the forthcoming tables by including results for 11 out of the 19 quantiles (the 0.05, 0.10, 0.20, ..., 0.80, 0.90, 0.95 quantiles). Firstly, we present the results obtained using the 1-step ahead CFS optimization, followed by the results obtained using the averaged over 24-steps CFS optimization. For both optimization methods, the scores will be averaged over the three wind farms because the relative performance of the models is similar across the wind farms.
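To make the benchmarks concrete, the following sketch produces the 19 quantile forecasts from the persistence, climatology, and cut-off normal benchmarks; the window length n and the (µ, σ) inputs are assumed to come from the optimizations and the model of Equations (24)-(26) described above.

```python
import numpy as np
from scipy import stats

ALPHAS = np.round(np.arange(0.05, 1.0, 0.05), 2)  # the 19 nominal proportions

def persistence_quantiles(y: np.ndarray, n: int, alphas=ALPHAS):
    # Empirical quantiles of the last n observations (n tuned by CFS minimization).
    return np.quantile(y[-n:], alphas, method="median_unbiased")

def climatology_quantiles(y: np.ndarray, alphas=ALPHAS):
    # Empirical quantiles of the full history; time ordering is ignored.
    return np.quantile(y, alphas, method="median_unbiased")

def cutoff_normal_quantiles(mu: float, sigma: float, alphas=ALPHAS):
    # Quantiles of a normal censored at 0 and 1 (point masses at the bounds)
    # are simply the clipped quantiles of the underlying normal.
    return np.clip(stats.norm.ppf(alphas, loc=mu, scale=sigma), 0.0, 1.0)
```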
Quantile Forecasting: 1-Step Ahead CFS Optimization

Since the models in this subsection are optimized using a 1-step ahead CFS optimization method, it makes sense to present results for the first lead time only, for each quantile and for each model. Table 7 shows the Skill CFS (as defined by Equation (19)) of the best performing model among the four competing ones and its percentage gain/loss with respect to the four reference (benchmark) models, for each quantile. Moreover, the asterisks next to the scores indicate the level of statistical significance (obtained using the Amisano-Giacomini test of Section 4.2) of the corresponding gain/loss in performance with respect to the four reference models.

Table 7. The best performing model among the four competing ones, and its performance gain/loss with respect to the four reference (benchmark) models, for each quantile. Reference models: 3-lagged series (column 3), Cut-off normal (column 4), Persistence (column 5) and Climatology (column 6). These results are outcomes from a 1-step ahead CFS optimization, and we use the CFS only for the first predicted step. The asterisks indicate the statistical significance of the gain/loss according to the Amisano and Giacomini test, with the following significance codes for the p-value of the test: ***: p ≤ 0.01, **: 0.01 < p ≤ 0.05, *: 0.05 < p ≤ 0.1.

A general observation is that for almost all quantiles (except the 0.50-0.60 quantiles, whose scores have negative signs), the best forecast performance is achieved by one of the four competing models and not by the four benchmarks. The 0.05-0.10 and 0.90-0.95 quantiles form the two tails of the predictive density, and represent the rare events (such as ramps and cut-offs) of a wind power series. As we observe from this table, both tails of the predictive density are quite well captured by the Q05 and Q95 models. Out of the four competing models, the lower tail of the predictive density is better predicted by the Q95 model and the upper by the Q05 model, but given the structure of the two variability indices used in these models, we might intuitively expect the opposite to happen.

This phenomenon can be explained by having a look at Figure 3. Figure 3(a) shows the probability density function (PDF) of the medium variability wind farm, together with the function values when the normalized wind power is equal to zero and one. Figure 3(b) shows an example of a wind power curve as presented by McSharry et al.
[3]. On this plot we mark the "cut-in speed" (w_1), the "nominal speed" (w_2) and the "disconnection speed" (w_3). So, for very low wind speeds (<w_1) the wind power production is almost zero; for wind speeds greater than w_2 but less than w_3 the normalized wind power production is equal to one; and for wind speeds greater than w_3 the turbines shut down in order to prevent damage, and hence the wind power production falls again to zero. By combining these two plots, we can plot a rough estimate of the normalized wind power PDF versus the wind speed. We expect the 0.95 quantile of the unconditional density (not to be confused with the predictive density) to be close to the nominal (normalized) wind power value of one. But the produced wind power is driven by the actual wind speed at any given time, and hence falling from the nominal wind power production (one) to zero can happen unexpectedly if we exceed the disconnection speed w_3 (Figure 3(b)); exceeding w_3 can happen without warning, given that we do not have any information about the wind speed at any given time. This results in a sudden jump from the 0.95 quantile to the 0.05 quantile of the unconditional probability density, and is represented by the lower tail of the predictive density. Hence, the Q95 index, which captures this sort of event, can provide some extra information about the lower tail of the predictive distribution that the 3-lagged series and the Q05 index do not describe. Similarly, if the wind speed falls back below w_3, we suddenly jump from the 0.05 to the 0.95 quantile of the unconditional density, and these kinds of rare events (jumping from low to high values) are represented by the upper tail of the predictive density. Therefore, the Q05 index can provide some extra information about the upper tail of the conditional predictive distribution.

In addition, Table 7 shows that the strongest benchmark for all quantiles is the 3-lagged series model. The biggest and statistically significant improvement with respect to this benchmark is achieved near the tails of the predictive density, and decays as we move towards the median. This has important practical implications because it is exactly these extreme fluctuations that are of interest to transmission system operators (TSOs). More specifically, we get a performance gain of up to 4.01% (for the 0.05 quantile, achieved by the Q95 model), which is certainly not negligible. Unfortunately, the performance gain obtained by using one of the competing models with respect to this benchmark in order to forecast the quantiles 0.30-0.70 is negligible (statistically insignificant), and is in the range of −0.21% to 0.81%. Furthermore, there is no gain for the quantiles close to the median of the predictive density (0.50-0.60).

Since the 3-lagged series model is our strongest benchmark and the performance gain with respect to it is only worth mentioning near the tails of the predictive density, it makes sense to focus on the performance gain achieved with respect to the last two benchmarks only for quantiles near the tails. Table 7 shows that the increase in forecast performance with respect to the cut-off normal benchmark is at least 78%, which shows that the cut-off normal is not capturing the tails of the predictive distribution as well as our competing models.
Moreover, we get more than a 53.25% increase in forecast performance with respect to the persistence benchmark when we use one of the four competing models. At the tails, where the Q05 and Q95 models are more suitable, we have a gain with respect to the persistence benchmark of up to 54.82%. By using the climatology benchmark as the reference model, we observe that the maximum performance gain at the tails goes up to 86.46% (for the 0.90 quantile, achieved by the Q05 model), and in general the Q05 and Q95 models manage to maintain the performance gain (at the tails) above 65.63%.

Quantile Forecasting: Averaged over 24-Steps CFS Optimization

This subsection has a similar structure to the preceding one, but now we present the results for the models which minimize the averaged (over six hours) CFS. Because the models are optimized on their forecast behaviour over all 24 forecast horizons, it makes sense to present results with the scores averaged over the 24 horizons, for each of the 11 selected quantiles.

Table 8 is analogous to Table 7, but here we provide the averaged over 24-steps CFS optimization results. It presents the Average Skill CFS (instead of the Skill CFS) as defined by Equation (20). Once more, a general observation is that for almost all quantiles (except the first two and the median), the best forecast performance is achieved by our competing models, and the strongest benchmark is the 3-lagged series model.

Table 8. The best performing model among the four competing ones, and its performance gain/loss with respect to the four reference (benchmark) models, for each quantile. Reference models: 3-lagged series (column 3), Cut-off normal (column 4), Persistence (column 5) and Climatology (column 6). These results are outcomes from an averaged over 24-steps CFS optimization, and the CFS are also averaged over all 24 forecast horizons. The asterisks indicate the statistical significance of the gain/loss according to the Amisano and Giacomini test, with the following significance codes for the p-value of the test: ***: p ≤ 0.01, **: 0.01 < p ≤ 0.05, *: 0.05 < p ≤ 0.1.

Table 8 also shows that the lower tail of the predictive density is quite poorly captured by our competing models, and the last two benchmarks (persistence, climatology) perform much better there than any other model. On the other hand, for all the other quantiles, the SD and IQR models have quite similar performances and manage to outperform all the benchmarks. Moreover, the performance gain obtained by using one of the four competing models to forecast the quantiles near the median (0.50-0.60) is statistically negligible or does not exist. A final general observation is that, as mentioned for the previous optimization method, the 0.05 quantile is better predicted by the Q95 model and the 0.95 quantile by the Q05 model.

By using one of the SD or IQR models (which perform almost identically) we get a performance gain with respect to the 3-lagged series benchmark of up to 5.95% (for the 0.20 quantile, achieved by the IQR model), which is statistically significant with a p-value less than 0.001. In addition, all the competing models outperform the cut-off normal model by at least 5.27% and attain the maximum increase in forecast performance near the tails of the predictive density (up to 43.82%, achieved by the Q95 model for the 0.05 quantile).
Table 8 also shows that we have up to a 20.47% (for the 0.40 quantile, achieved by the IQR model) increase in forecast performance with respect to the persistence benchmark. The SD and IQR models maintain the gain over the persistence benchmark above 8.29% for all quantiles larger than 0.10. By using the climatology benchmark as the reference model, we observe that the maximum performance gain goes up to 54.67% (for the 0.70 quantile, achieved by the IQR model), and in general the SD and IQR models can maintain the percentage performance gain with respect to the climatology benchmark above 11.94% for all quantiles larger than 0.10.

Out-of-Sample Model Comparison and Evaluation-Density Forecasting

In this subsection, we evaluate the out-of-sample density forecast performance of the competing models, for each optimization method. We use the quantile forecasts obtained from each optimization method to reconstruct the whole predictive density, and assess its skill using the Skill CRPS or the Average Skill CRPS. Firstly, we present the results obtained using the 1-step ahead CFS optimization, followed by the results obtained using the averaged over 24-steps CFS optimization. Moreover, because the relative performance of the models is similar across the wind farms, the scores are averaged over the three wind farms.

Density Forecasting: 1-Step Ahead CFS Optimization

In this subsection, the models' forecast performance is optimized for only the first predicted step, so it makes sense to focus (initially) on the first lead time and present the out-of-sample Skill CRPS for the first step ahead.

Table 9 presents the out-of-sample Skill CRPS (%) for the 1-step ahead CFS optimized models, together with significance codes for the Amisano-Giacomini test of equal forecast performance. This table shows that the best benchmark model is the 3-lagged series. That was expected, because this benchmark was also the strongest one (for most quantiles) when we looked at the quantile forecast results for the same optimization method (Section 6.1.1). The SD and IQR models behave almost identically and manage to outperform all the other benchmarks. The SD model performs slightly better than the IQR model, and manages to outperform the 3-lagged series model by 1%, the cut-off normal model by 1.48%, the persistence benchmark by 58.38% and the climatology benchmark by 84.23%.
Table 10 shows the best performing model among the four competing ones, and its performance gain/loss with respect to the four reference (benchmark) models, for a collection of forecast horizons. For simplicity, we present the results for seven of the 24 forecast horizons. The SD model outperforms the 3-lagged series for the first 16 forecast horizons (except for the second one), where the improvements in forecast performance are also statistically significant at a 90% significance level. For the second forecast horizon we get the maximum forecast performance gain over the 3-lagged series model (equal to 1.96%), achieved by the IQR model. The SD model also manages to outperform the cut-off normal benchmark for all forecast horizons, with all improvements in forecast performance being statistically significant at a 99% significance level. When the persistence and climatology benchmarks are used as reference models, Table 10 shows that the gain in forecast performance from using the SD model is at least 5.87% and 8.24%, respectively. Moreover, the noted density forecast improvements are statistically significant for all forecast horizons, at a 99% level of significance.

In addition to the above results, we carried out a marginal calibration analysis and investigated how the CRPS evolves conditional on certain wind power levels. More specifically, Table 11 presents the marginal Skill CRPS (%) conditional on the normalized wind power being ≤0.20 or ≥0.80, for a collection of seven forecast horizons. We choose to focus on these specific wind power levels because they form the two tails of the unconditional wind power density (not to be confused with the predictive density).

Table 11. The best performing model (according to the Marginal Skill CRPS) among the four competing ones, and its performance gain/loss with respect to the four reference (benchmark) models, for forecast horizon k (measured in 15 min steps). Reference models: 3-lagged series (column 3), Cut-off normal (column 4), Persistence (column 5) and Climatology (column 6). These results are outcomes from a 1-step ahead CFS optimization. The asterisks indicate the statistical significance of the gain/loss according to the Amisano and Giacomini test, with the following significance codes for the p-value of the test: ***: p ≤ 0.01, **: 0.01 < p ≤ 0.05, *: 0.05 < p ≤ 0.1.

Given that the normalized wind power is less than or equal to 0.20, the IQR seems to be the best performing model for all except the first forecast horizon (where the Q05 model performs better). For small forecast horizons we observe statistically significant improvements over all competing models. These improvements (with the exception of the cut-off normal benchmark) get smaller as we move to larger forecast horizons, which is perfectly reasonable because the results are the outcome of a 1-step ahead optimization. Conditioning on power levels which belong to the upper tail of the unconditional wind power density, we observe that the IQR and SD models seem to provide the largest performance gain according to the CRPS. These two models outperform all the benchmarks, with improvements that are also statistically significant for all forecast horizons, at a 99% level of significance.
Density Forecasting: Averaged over 24-Steps CFS Optimization

Now we would like to assess the out-of-sample density forecast performance of the four competing models, for the averaged over 24-steps CFS optimization method. Our assessment criterion will be the out-of-sample Skill CRPS or Average Skill CRPS.

Initially, it makes sense to have a look at the out-of-sample Average Skill CRPS (Equation (20)) with the four benchmarks as reference models (Table 12). The IQR model outperforms the 3-lagged series benchmark by 2.45%, a considerably larger improvement than for the 1-step ahead results given in Table 9. This model also outperforms the cut-off normal benchmark by 16.13%, the persistence benchmark by 12.77% and the climatology benchmark by 39.15%. Moreover, the density forecast performance of the SD model is quite close to that of the IQR model. Since our optimization considers all 24 forecast horizons, it is interesting to investigate how the four competing models perform in producing density forecasts for each forecast horizon, k, from 15 min up to 6 h ahead. As for the 1-step ahead optimization case, we present the results for only a collection of seven out of the 24 forecast horizons. The best competing model, together with the performance gain obtained for each forecast horizon k with respect to the four benchmarks, can be found in Table 13. Clearly, the best performing benchmark is the 3-lagged series model, and the IQR is the best performing model out of the four competing ones.

The competing models' performances are disappointing for the first lead time, where the 3-lagged series benchmark offers a performance gain (of at least 1.57%) with respect to these models. On the other hand, for predictions more than 30 min ahead (the second predicted step), Table 13 shows that the IQR model manages to maintain the gain in density forecast performance with respect to the 3-lagged series model above 2.14%, with a recorded maximum of 3.59% (achieved at the fourth predicted step). Moreover, all the scores (except the first two) produce p-values which give strong evidence to reject the null hypothesis of equal forecast performance between the competing and reference models. Hence, the observed gain in forecast performance is statistically significant at a 99% significance level. The gain in forecast performance with respect to the cut-off normal model is at least 4.96% (excluding the first lead time) and attains a maximum of 17.18% for the 24th predicted step.

Table 13. The best performing model among the four competing ones, and its performance gain/loss with respect to the four reference (benchmark) models, for forecast horizon k (measured in 15 min steps). Reference models: 3-lagged series (column 3), Cut-off normal (column 4), Persistence (column 5) and Climatology (column 6). These results are outcomes from an averaged over 24-steps CFS optimization. The asterisks indicate the statistical significance of the gain/loss according to the Amisano and Giacomini test, with the following significance codes for the p-value of the test: ***: p ≤ 0.01, **: 0.01 < p ≤ 0.05, *: 0.05 < p ≤ 0.1.
If we consider the persistence benchmark as the reference model (column 5 of Table 13), we note that the Skill CRPS of the best model starts at 59.08% (Q95 model) and then decays to approximately meet the performance of the persistence benchmark at the last forecast horizon. When the climatology benchmark is used as the reference model (column 6 of Table 13), we again observe a decay of the skill scores, with the performance gain remaining above 9.85% for all forecast horizons (for the IQR model).

From the results presented we conclude that this optimization method produces models (mainly the IQR model) that can substantially outperform the density forecast performance of the widely used benchmarks (persistence, climatology) and of fully parametric models such as the cut-off normal benchmark. Moreover, including a variability index such as the IQR considerably improves the performance (by up to 3.59%) of a quantile regression model which uses only autoregressive terms as explanatory variables (the 3-lagged series benchmark).

Finally, as for the 1-step ahead optimization case, we present some marginal calibration analysis results by investigating how the CRPS evolves conditional on certain wind power levels. Table 14 presents the marginal Skill CRPS (%) conditional on the normalized wind power being ≤0.20 or ≥0.80, for a collection of seven forecast horizons.

Given that the normalized wind power is less than or equal to 0.20, the IQR model outperforms all the benchmarks for forecast horizons larger than two steps ahead (except the last forecast horizon for the persistence and climatology benchmarks). The Q05 seems to be the best performing model for the first two steps ahead, but still cannot outperform the 3-lagged series benchmark for the first step ahead.

The second part of this table shows that, given normalized power levels greater than or equal to 0.80, the SD model is the overall best model among all the others. It manages to outperform all the benchmarks for all forecast horizons, with improvements that are also statistically significant at a 99% level of significance.

Table 14. The best performing model (according to the Marginal Skill CRPS) among the four competing ones, and its performance gain/loss with respect to the four reference (benchmark) models, for forecast horizon k (measured in 15 min steps). Reference models: 3-lagged series (column 3), Cut-off normal (column 4), Persistence (column 5) and Climatology (column 6). These results are outcomes from an averaged over 24-steps CFS optimization. The asterisks indicate the statistical significance of the gain/loss according to the Amisano and Giacomini test, with the following significance codes for the p-value of the test: ***: p ≤ 0.01, **: 0.01 < p ≤ 0.05, *: 0.05 < p ≤ 0.1.

Conclusions

In this paper we showed how to produce wind power quantile and density forecasts, for lead times from 15 min up to six hours ahead, using three different univariate wind power series. This was achieved by introducing innovative variability indices, which are able to capture the volatile behaviour of the wind power series.

We used linear (in parameters) quantile regression as our main tool for producing quantile forecasts for 19 different quantiles, with three lagged versions of the wind power series as the main explanatory variables. Four models were proposed, each one having as a fourth explanatory variable one of the four extracted variability indices.
In order for the results to be robust across sites, we used data from three wind farms in Denmark, each one chosen to have a different level of wind power variability (low, medium and high). We investigated four years of wind power data, with a 15 min resolution, for each wind farm. The first two years were used for estimating the parameters of the models, and the final two years for out-of-sample forecast evaluation.

All four quantile regression models were optimized using the in-sample training data set, in order to find their specific set of index parameters, (m, n), which minimizes (i) the first lead time CFS and (ii) the Average CFS over all forecast horizons, for each individual quantile (a sketch of this search is given after the first block of summary points below).

Our main goal was to evaluate how well these models performed compared with the cut-off normal, persistence and unconditional distribution (climatology) probabilistic benchmarks. It is worth mentioning that persistence is a strong yet simple benchmark for very short forecast horizons, and it was optimized using the same cost (optimization) functions as the four regression models. The use of a cut-off normal benchmark provided a good comparison between a fully parametric model (the cut-off normal model) and the non-parametric quantile regression models used in this article.

The fourth and strongest benchmark used was a quantile regression model with three lags of the original series as explanatory variables. The comparison of the competing models with this benchmark provides evidence of how useful our extracted variability indices are for forecasting wind power production. The individual (out-of-sample) quantile forecasts were evaluated using the Skill or Average Skill CFS for direct comparison between the competing models and the benchmarks. The density forecasts of the models were evaluated using the Skill or Average Skill CRPS.

In the following we summarize the quantile and density forecast results found using the two different types of model optimization:

Quantile forecasting: 1-step ahead CFS optimization

• The best competing models are the Q05 and Q95 models, which outperform our best benchmark (3-lagged series) by a maximum of 3.44% (0.95 quantile) and 4.01% (0.05 quantile), respectively.
• The largest gain in performance with respect to the best benchmark is noticed when forecasting the quantiles which form the tails of the conditional predictive density. In addition, the Q05 model performs better for the upper tail, and the Q95 model for the lower tail.
• The best quantile regression models for each forecast horizon manage to maintain the performance gain with respect to the cut-off normal, persistence and climatology benchmarks above 65.73%, 53.25% and 65.63%, respectively.
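As referenced above, the (m, n) optimization can be sketched as a simple grid search. The candidate grids below are purely illustrative (the paper's actual search ranges are not restated in this section), and the in-sample pinball loss is used here only as a stand-in for the paper's CFS criterion; `loss_for` is a hypothetical hook that would fit the quantile model of the previous sketch.

```python
import numpy as np

def pinball(y, q, tau):
    """Mean pinball loss; a stand-in for the CFS cost function."""
    d = np.asarray(y) - np.asarray(q)
    return float(np.mean(np.where(d >= 0, tau * d, (tau - 1.0) * d)))

def grid_search_mn(power, tau, m_grid, n_grid, loss_for):
    """Return the (m, n) pair minimizing an in-sample loss for one quantile.
    loss_for(power, tau, m, n) should fit the model of the previous sketch
    and return its training pinball loss."""
    scores = {(m, n): loss_for(power, tau, m, n)
              for m in m_grid for n in n_grid}
    return min(scores, key=scores.get)

# Illustrative candidate grids for the index parameters
m_grid = [10, 20, 30, 60, 90]
n_grid = [0, 10, 30]
```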
Quantile forecasting: Averaged over 24-steps CFS optimization

• The SD and IQR models have the best quantile forecast performance, with similar CFS. They manage to maintain the performance gain with respect to the best benchmark (3-lagged series) above 1.99% for 11 out of 19 quantiles. The maximum Skill CFS is 5.95%, achieved by the IQR model for the 0.20 quantile.
• The SD and IQR models maintain the performance gain with respect to the cut-off normal, persistence and climatology benchmarks above 5.25%, 12.86% and 21.00%, respectively, for 15 out of 19 quantiles. The performance gain from using one of the two quantile models over the persistence and climatology benchmarks is much lower (or does not exist) for predicting the tails (0.05, 0.10, 0.90, 0.95 quantiles) than for predicting the quantiles close to the median of the conditional density.

Density forecasting: 1-step ahead CFS optimization

• The best competing model is the SD model, which has almost equal density forecast performance with the IQR model. It manages to outperform the best benchmark (3-lagged series) by 1.00% for the first lead time, an improvement which is statistically significant at the 95% confidence level. All four competing models manage to outperform the cut-off normal, persistence and climatology benchmarks by at least 0.69%, 58.04% and 84.11%, respectively, for the first lead time.
• Across all 24 forecast horizons, the average gain in forecast performance using the SD or IQR model with respect to the best benchmark is statistically significant (at the 90% confidence level) for the first 16 forecast horizons. Moreover, these two models manage to outperform the cut-off normal, persistence and climatology benchmarks by at least 1.48%, 5.87% and 8.24%, respectively.

Density forecasting: Averaged over 24-steps CFS optimization

• The IQR model is the best competing model, and manages to outperform the best benchmark (3-lagged series) by, on average (over all forecast horizons), 2.45%. It also outperforms the cut-off normal, persistence and climatology benchmarks by, on average, 16.13%, 12.77% and 40.96%, respectively.
• Across all 24 forecast horizons (excluding the first two lead times), the IQR model manages to maintain a performance gain over the best benchmark of more than 2.14%. Moreover, the noted improvements in density forecast performance are statistically significant for 22 out of 24 forecast horizons, at the 99% confidence level.

Figure 2. Wind power time series plot of the low variability farm, together with the four variability indices (Q05 and Q95 on the upper plot; SD and IQR on the lower plot). The parameters are chosen to be the same for all indices to facilitate comparison (m = 30 and n = 30).

Figure 3. (a) Histogram of normalized wind power data; (b) deterministic power curve; (c) probability density function of normalized wind power versus wind speed. The equations used to reproduce (b) and (c) were taken from McSharry et al. [3].

Table 1. The Danish wind farms used in this study.

Table 2. Best fitted models for the three wind power time series according to the AIC, with Ljung-Box and LM test p-values.

Table 3. 1-step and averaged over 24-steps CFS optimization results for the SD model of Equation (23).

Table 4. 1-step and averaged over 24-steps CFS optimization results for the IQR model of Equation (23).
Table 5. 1-step and averaged over 24-steps CFS optimization results for the Q05 model of Equation (23).

Table 9. Out-of-sample Skill CRPS (%) (averaged over all wind farms) for the 1-step ahead CFS optimized models. The scores are for the first lead time only. The asterisks indicate the statistical significance of the gain/loss according to the Amisano and Giacomini test with the following significance codes for the p-value of the test: ***: p ≤ 0.01, **: 0.01 < p ≤ 0.05, *: 0.05 < p ≤ 0.1.

Table 10. The best performing model among the four competing ones, and its performance gain/loss with respect to the four reference (benchmark) models, for forecast horizon k (measured in 15 min steps). Reference models: 3-lagged series (column 3), Cut-off normal (column 4), Persistence (column 5) and Climatology (column 6). These results are outcomes from a 1-step ahead CFS optimization. Significance codes as in Table 9.

Marginal Skill CRPS (%) conditional on the normalized wind power being ≤ 0.20; columns: k, best model, 3-lagged series, cut-off normal, persistence, climatology.

Table 12. Out-of-sample Average Skill CRPS (%) (also averaged over all wind farms) for the averaged over 24-steps CFS optimized models. Significance codes as in Table 9.
Synthesis of a novel 89Zr-labeled HER2 affibody and its application study in tumor PET imaging

Background
Human epidermal growth factor receptor-2 (HER2) is an essential biomarker for tumor treatment. Affibodies are ideal vectors for preparing HER2-specific probes because of their high affinity, rapid clearance from normal tissues, etc. Zirconium-89 is a PET imaging isotope with a long half-life, suitable for monitoring biological processes over extended periods. In this study, a novel 89Zr-labeled HER2 affibody, [89Zr]Zr-DFO-MAL-Cys-MZHER2, was synthesized, and its imaging characteristics were assessed.

Results
The precursor, DFO-MAL-Cys-MZHER2, was obtained with a yield of nearly 50%. The radiochemical yield of [89Zr]Zr-DFO-MAL-Cys-MZHER2 was 90.2 ± 1.9%, and the radiochemical purity was higher than 95%. The total synthesis time was only 30 min. The probe was stable in PBS and serum. The tracer accumulated in HER2-overexpressing human ovarian cancer SKOV-3 cells. In vivo studies in tumor-bearing mice showed that the probe was highly retained in SKOV-3 xenografts even at 48 h. The tumors were visualized with good contrast to normal tissues. ROI analysis revealed that the average uptake values in the tumor were greater than 5% IA/g during 48 h postinjection. By contrast, the counterparts in MCF-7 tumors remained at low levels (~1% IA/g). The outcome was consistent with the immunohistochemical analysis and ex vivo autoradiography. The probe cleared quickly from the normal organs except the kidneys and was mainly excreted through the urinary system.

Conclusion
The novel HER2 affibody for PET imaging was easily prepared with satisfactory labeling yield and radiochemical purity. [89Zr]Zr-DFO-MAL-Cys-MZHER2 is a potential candidate for detecting HER2 expression and may play specific roles in clinical cancer theranostics.

Introduction
Targeting of a biomarker in cancers with specific agents is a promising strategy in the management of malignancies [1]. Human epidermal growth factor receptor type 2 (HER2) is a 185-kDa transmembrane protein and belongs to the family of receptor tyrosine kinases. It is involved in the signal transduction pathways regulating cell motility and proliferation. HER2 is an essential clinical tumor biomarker since it is overexpressed in various solid tumors, including ovarian, gastric, and breast cancers. Moreover, abundant expression of the receptor is associated with aggressiveness, recurrence, and reduced survival [2-4]. Monoclonal antibodies, including trastuzumab and pertuzumab, have been used for therapy of HER2-positive cancers [5-7]. Current selection of patients for targeted therapy depends mainly on the status of HER2, determined by biopsy using immunohistochemistry or fluorescence in situ hybridization [8]. However, this invasive method may not be reliable because HER2 expression is heterogeneous within tumors and varies during the progress of the disease [9]. Almost 20% of the outcomes were inaccurate [10]. Noninvasive molecular imaging techniques such as single photon emission computed tomography (SPECT) and positron emission tomography (PET) provide a reliable method for repeatedly investigating the distribution of the receptor in the whole body [11,12]. Compared with the former, PET has higher image resolution and quality. PET scanners can sensitively detect gamma radiation from the positron decay of nuclides (e.g., [11C] and [18F]).
Due to its high sensitivity, PET imaging with trace amounts of radiotracers (10⁻⁶–10⁻⁸ grams) can accurately measure molecular targets in the living body without perturbing the biological system. PET with specific probes benefits disease diagnosis and the monitoring of therapeutic response [13-15]. Radiolabeled antibodies have shown promise in identifying the presence of HER2 in tumors [16-18]. For example, [89Zr]Zr-trastuzumab PET/CT detected unsuspected HER2-positive metastases in patients with HER2-negative primary breast cancer [19]. It also found lesions in patients with metastatic HER2-positive esophagogastric cancer [20]. However, optimal images with favorable contrast could only be acquired several days (3-5 days) after administration of the antibodies [19].

Affibodies are engineered small proteins derived from the IgG-binding staphylococcal protein A and can be used as alternative ligands towards HER2. They are ideal compounds for recognizing desired targets owing to ease of chemical synthesis, quick tumor accumulation, rapid blood clearance, etc. [21,22]. The ZHER2:342 affibody binds specifically to HER2, and its derivatives have been labeled with PET nuclides ([18F], [68Ga], etc.) [23-25]. An 18F-labeled ZHER2:342 analog, [18F]F-FBEM-ZHER2:342, showed specific binding towards HER2-positive tumors and could benefit detection of receptor status in response to therapeutic interventions [23,26]. The 68Ga-labeled HER2 affibody 68Ga-ABY-025 discriminated HER2-positive from HER2-negative metastatic breast tumors in sixteen patients; after PET/CT scans with the tracer, treatment plans were changed in three patients [27,28].

Besides the above short half-life radionuclides, few HER2 affibodies labeled with other PET isotopes have been reported. Zirconium-89 is an attractive, commercially available PET radionuclide with a long half-life (T1/2 = 78.4 h), which allows imaging of biological processes at late time points [29]. Meanwhile, attachment of zirconium-89 to the desferrioxamine (DFO) chelator coupled to a bioactive substance (such as a protein or peptide) can be achieved under mild conditions with excellent stability [30]. Besides, 89Zr-labeled compounds are also ideal surrogates for the corresponding therapeutic 90Y- or 177Lu-labeled radiopharmaceuticals for calculating dosimetry and planning therapy programs in preclinical or clinical studies [31].

Previous studies found that 18F- and 68Ga-labeled modified HER2 affibodies (e.g., [68Ga]Ga-NOTA-Cys-MZHER2) showed satisfactory specific tumor uptake and favorable tumor-to-muscle and tumor-to-blood ratios during 4 h after injection [32,33]. Due to the short half-lives, further evaluation of the affibody's properties beyond 12 h was difficult. To better assess the characteristics of the modified HER2 affibody, the molecule was first coupled with a maleimide derivative of desferrioxamine, MAL-DFO, and the resulting compound was then radiolabeled with zirconium-89. The efficiency of the resulting probe, [89Zr]Zr-DFO-MAL-Cys-MZHER2, was also investigated in tumor models.

Methods
Cys-ZHER2:342 and Cys-MZHER2 were purchased from Apeptide Co., Ltd. (Shanghai, China). MAL-DFO was purchased from Macrocyclics (Dallas, TX). [89Zr]Zr-oxalate solution was supplied by Cyclotron VU (Netherlands). The human ovarian cancer cell line SKOV-3 and the breast cancer cell line MCF-7 were obtained from the Cell Bank of Shanghai Institutes for Biological Sciences. Female Balb/c nude mice were purchased from SLAC Laboratory Animal Co., Ltd., China.
Analytical and preparative high-performance liquid chromatography (HPLC) were carried out according to the literature [32,33]. Radio thin-layer chromatography (TLC) was performed on silica gel impregnated glass fiber sheets and analyzed with a BioScan scanner. Sodium citrate solutions (0.1 M) were used as the solvent system. Mass spectra were acquired on a Waters LC-MS system (Waters, Milford, MA).

Preparation of [89Zr]Zr-DFO-MAL-Cys-MZHER2
The DFO-conjugated affibody, DFO-MAL-Cys-MZHER2 (200 μg, 25 nmol), was dissolved in 30 μL deionized water and incubated with 185 MBq [89Zr]Zr-oxalate in 1 mL 2 M Na2CO3 solution (pH = 4) for 20 min at room temperature (Fig. 2). After dilution with 10 mL deionized water, the complex was purified on a Varian BOND ELUT C18 column. After washing the column with another 10 mL of deionized water, the product was eluted with 0.3 mL of 10 mM HCl in ethanol. The solution was diluted with 10 mL saline and passed through a 0.2-μm Millipore filter into a sterile vial. Radio-HPLC and TLC were used for quality control.

In vitro stability
Aliquots of [89Zr]Zr-DFO-MAL-Cys-MZHER2 solution were incubated with 1 mL human serum or PBS for 48 h at 37°C. At preselected time points, the radiochemical purity was analyzed by TLC.

Cell lines
Cells were cultured in RPMI-1640 medium supplemented with 10% (v/v) heat-inactivated fetal bovine serum and grown as a monolayer at 37°C in a humidified atmosphere containing 5% CO2.

Cell uptake studies
Uptake studies of [89Zr]Zr-DFO-MAL-Cys-MZHER2 in SKOV-3 cells were performed according to the described method [32]. Cells (1 × 10⁶/well) were incubated at 37°C for various times with 37 kBq of labeled affibody in 0.5 mL serum-free DMEM medium. The nonspecific binding of the tracer was determined by coincubation with 5 μM Cys-ZHER2:342. After washing with chilled PBS, the cell pellets were collected by centrifugation and measured using a γ-counter (PerkinElmer). Cell uptake was expressed as the percentage of the added activity (%AA/10⁶ cells) after decay correction.

Animal model
All animal experiments were performed according to the national guidelines and approved by the Ethics Committee of Jiangsu Institute of Nuclear Medicine. Tumor models were established by subcutaneously implanting 5 × 10⁶ SKOV-3 or MCF-7 tumor cells suspended in 0.2 mL PBS into the shoulder region of mice. When tumor sizes reached 100-300 mm³, the mice were used for the following experiments.

MicroPET imaging
PET imaging was performed on a microPET scanner (Siemens Inc.). After anesthesia with isoflurane, the tumor-bearing mice were placed in the center of the scanner and injected intravenously with 3.7 MBq [89Zr]Zr-DFO-MAL-Cys-MZHER2, in the presence or absence of excess Cys-MZHER2:342 (10 mg/kg body weight), via the lateral tail vein. Static 10-min PET scans were acquired at selected times after tracer injection. Quantitative analysis was performed using the reported methods [32,33].

Biodistribution
Mice were injected with 0.74 MBq of the tracer through the tail vein and sacrificed at 1, 4, 8, 18, 24, 48, and 72 h after administration, respectively. For the blocking study, four mice were coinjected with an excess of Cys-ZHER2:342 (10 mg/kg body weight) and killed at 1 h after administration. Tumor and normal tissues of interest were harvested and weighed. The radioactivity in each tissue was measured in the γ-counter and expressed as a percentage of the injected activity per gram of tissue (%IA/g).
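The decay correction and %IA/g normalization used above are simple arithmetic based on the 78.4 h half-life of zirconium-89 quoted earlier. A minimal sketch follows; the function names and the example numbers are illustrative and do not reproduce the study's actual counting workflow.

```python
import math

T_HALF_ZR89_H = 78.4  # zirconium-89 half-life in hours, as given in the text

def decay_correct(counts, elapsed_h, t_half_h=T_HALF_ZR89_H):
    """Correct a measured activity back to the time of injection."""
    return counts * math.exp(math.log(2) * elapsed_h / t_half_h)

def percent_ia_per_gram(tissue_counts, tissue_mass_g, injected_counts, elapsed_h):
    """%IA/g: decay-corrected tissue activity as a percentage of the injected
    activity, normalized by tissue mass (the standard definition)."""
    corrected = decay_correct(tissue_counts, elapsed_h)
    return 100.0 * corrected / injected_counts / tissue_mass_g

# Example: a 0.25 g tumor counted 24 h after injection (illustrative numbers)
print(percent_ia_per_gram(tissue_counts=5.0e4, tissue_mass_g=0.25,
                          injected_counts=1.0e6, elapsed_h=24.0))
```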
Autoradiography and histology
After microPET imaging, the tumors were harvested and sectioned into 5-μm-thick slices at -80°C. Ex vivo autoradiography was conducted using a previously described method [34]. To determine the intratumoral distribution of the tracer, the slices were placed on a phosphor imaging plate for 1 h. The phosphor imaging plates were read with a plate reader, and quantitative analysis was carried out using the OptiQuant software. After radioactive decay, the slices were used for routine HE staining and HER2 analysis by immunohistochemistry. The procedures followed the published literature [33]. An epifluorescence microscope (Olympus, X81, Japan) was used to acquire the corresponding images.

Statistical analysis
Statistical analyses were performed using GraphPad Prism. Data were analyzed using the unpaired, 2-tailed Student t test. Differences at the 95% confidence level (p < 0.05) were considered statistically significant.

Results

Chemistry
The DFO-conjugated affibody was readily prepared with a yield of 50%. The chemical purity of the compound was greater than 90% as determined by analytical HPLC.

Stability studies in vitro
[89Zr]Zr-DFO-MAL-Cys-MZHER2 was stable during the investigated periods (Fig. 4). No free [89Zr]Zr-oxalate was found after incubation of the tracer in PBS or serum for 2 days at 37°C.

Cell uptake
Cell uptake studies are shown in Fig. 5. The probe quickly accumulated in SKOV-3 cells and reached a plateau of 10.23 ± 0.94%AA/10⁶ cells at 30 min of incubation. By contrast, the uptake levels were significantly decreased in the presence of excess unlabeled Cys-ZHER2:342 at the same time points (2.35 ± 0.43%AA/10⁶ cells). Imaging also showed that uptake in the liver was low, with the highest values of nearly 2% IA/g at 1 h after injection. Accumulated radioactivity was found in the kidneys, suggesting that the affibody is mostly excreted through the renal system and urinary tract.

Biodistribution studies
The biodistribution data of the 89Zr-labeled affibody in tumor-bearing mice are presented in Table 1. Consistent with the PET imaging, the radioactivity concentration in SKOV-3 tumors was higher than that in MCF-7 tumors and all healthy organs except the kidneys. Accumulation in SKOV-3 tumors was 11.27 ± 1.55% IA/g at 1 h after administration and remained at 5.80 ± 0.75% IA/g at 48 h postinjection. A rapid washout of radioactivity was noted from receptor-negative tissues except the kidney. The tumor-to-blood and tumor-to-muscle uptake ratios increased from 8.38 ± 3.73 and 17.80 ± 3.08 at 1 h postinjection to 198.00 ± 12.77 and 393.50 ± 38.18 at 72 h postinjection in mice bearing SKOV-3 tumors, respectively.

Ex vivo autoradiography and histology
Autoradiography studies showed that more radioactivity accumulated in the periphery of the tumors than in the internal tissues (Fig. 8). The ratio of radioactive intensity between the two regions was determined to be 5.03 ± 0.69. The pathological analysis confirmed that the peripheral tumor tissue grew vigorously and overexpressed the HER2 receptor. By contrast, the internal tumor tissue grew slowly or was necrotic, with low levels of HER2. These results were consistent with the autoradiography findings.

Discussion
Desferrioxamine is a hexadentate chelator used for treating iron overload. It consists of three hydroxamate groups and forms a thermodynamically stable complex with zirconium [35].
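The group comparisons above (e.g., SKOV-3 versus MCF-7 uptake) rest on the unpaired two-tailed Student t test. A minimal sketch of such a comparison follows; the uptake values are illustrative stand-ins for the Table 1 data, and scipy substitutes here for the GraphPad Prism software actually used.

```python
import numpy as np
from scipy import stats

# Illustrative %IA/g values; the real group data are in Table 1
skov3 = np.array([11.3, 10.1, 12.4, 11.0])   # HER2-positive tumors
mcf7  = np.array([1.1, 0.9, 1.3, 1.0])       # HER2-negative tumors

t, p = stats.ttest_ind(skov3, mcf7)          # unpaired, two-tailed by default
print(f"t = {t:.2f}, p = {p:.4g}, significant at p < 0.05: {p < 0.05}")
```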
For site-specific labeling, desferrioxamine-maleimide (MAL-DFO) was successfully introduced into MZHER2 by conjugating the thiol group of the cysteine residue with the maleimide. Zirconium-89 was attached to DFO-MAL-Cys-MZHER2 under mild conditions in nearly quantitative yield. The radiochemical purity was satisfactory, as determined by both HPLC and TLC. No radiolysis was detected in PBS or serum during in vitro incubation for up to 48 h. The stability was similar to that of other [89Zr]Zr-labeled compounds. This means that the 89Zr-labeled affibody could be prepared at least 1 day before preclinical or clinical PET studies. The contrast might originate from the structure of the different chelating agents coupled to the peptides.

The uptake in SKOV-3 xenografts was about 10-fold higher than that in MCF-7 tumors at every time point. Compared with the blood pool and most healthy tissues, the retention of the probe in SKOV-3 tumors remained stable over time. Even at 48 h postinjection, 40% of the initially accumulated radioactivity was still observed in the SKOV-3 xenografts, and the image contrast was favorable. The tumor-to-blood and tumor-to-muscle uptake ratios after administration of [89Zr]Zr-DFO-MAL-Cys-MZHER2 are nearly 200 and 400 at 72 h postinjection, respectively. These results are comparable to those reported for 18F- or 68Ga-radiolabeled HER2 affibodies at their latest time points (at most, 6 h) [26,32,36,37]. This implies that the tracer might be beneficial for monitoring the status of HER2 in tumors with favorable image contrast over long periods of at least 72 h.

The radioactivity distributed in the tumor showed significant heterogeneity by ex vivo immunohistochemistry and autoradiography. Abundant HER2 was expressed in the periphery of the cancer, and the corresponding radio signal was strong. By contrast, weak radioactivity was detected in the internal necrotic tissues. Receptor specificity was also confirmed by the decrease in tumor uptake after coinjection of excess HER2 affibody. This implies that the targeting properties of [89Zr]Zr-DFO-MAL-Cys-MZHER2 are consistent with the performance of 18F- or 68Ga-labeled MZHER2 affibodies [32,33].

The uptake values in the liver are comparable to those of the 18F- or 68Ga-labeled affibodies (~2% IA/g at 1 h p.i., decreasing to ~1% IA/g at 4 h p.i.). This suggests that modification with a hydrophilic linker was effective in reducing the abdominal background. Similar to other 89Zr-labeled affibodies such as [89Zr]Zr-DFO-ZEGFR:2377, higher radioactivity was found in the kidney after blocking. The detailed mechanism has not been clarified; the scavenger receptor systems recycling proteins from the urine might mediate the reabsorption of radiometal-labeled peptides in the kidney [38]. Despite this, high renal uptake may not prevent visualization of tumors near the organ. For example, metastases in the adrenal gland were visualized after administration of the 111In-labeled HER2 affibody [111In]In-ABY-025 [39]. Low uptake (< 2% IA/g) was observed in the bone, suggesting that the in vivo stability of [89Zr]Zr-DFO-MAL-Cys-MZHER2 is good, since free 89Zr accumulates irreversibly in mineralized bone.
Evaluation of the influence of slag heaps on the state of the urban residential area

The paper is concerned with the analysis of heavy metal content in soils depending on the distance from the considered source of pollution in Magnitogorsk and its suburbs. It was found that the main heavy metals polluting the soils are zinc, lead and copper. The main objective of this research work is to analyse the dynamics of heavy metal content in soil from 2014 to 2017, taking into account the environmental protection measures taken within the framework of the environmental program of the PJSC MMK. The slag heap of the III order, located in the north of the city on the left-bank valley side of the river Ural, was considered as the source of pollution. The research group calculated the following characteristic values: the total pollution index of soils and the ecotoxicological index of chemical pollution of soils with pollutants of different hazard classes.

Introduction
Residential areas located in the vicinity of large industrial centers face strong man-made impact due to, among other things, the huge amount of solid wastes formed by:
- industrial enterprises emitting gaseous, liquid and solid wastes, including chemical and radioactive ones, in the process of operation or in emergency situations;
- the urban environment emitting wastes of housing maintenance and utilities, wastes from vehicles, storm wastewater, snowpack, etc.;
- the living environment emitting liquid and solid wastes.

The following changes can be predicted for the near future:
- the world population will continue to increase steadily and in 2050 will be approximately 9 billion people;
- per capita GDP will increase by 2-4% per annum on average;
- the amount of specific industrial waste emissions into the environment will to a great extent depend on the way wastes are screened and recycled.

Thus, in the short term, the only way to reduce the amount of wastes is to cut down the specific value of wastes per unit of GDP and to make efficient use of natural resources by means of waste recycling [1].

Magnitogorsk is one of the largest industrial centers of ferrous metallurgy, with a great number of industrial enterprises located around the city. The PJSC "Magnitogorsk Iron and Steel Works" has developed a special ecological program to achieve its objectives in the field of environmental protection. In accordance with this program, 48 technical arrangements were carried out in 2017 to reduce and prevent negative effects on the environment. The actual costs of fulfilling the PJSC MMK ecological program totaled 4777.0 million roubles in 2017. Total emissions of pollutants decreased from 219.1 to 199.3 thousand tons from 2014 to 2017; the unit value of pollutant emissions per 1 ton of metal product decreased from 18.80 to 17.58 kg/t. The amount of wastes used by the PJSC MMK as secondary material resources in the sintering mixture in the ore dressing process was 2.35 million tons; 11.4 million tons of slag was recycled [2].

The accumulated wastes of the metallurgical industry in the form of slag are considered hazardous sources of soil pollution with heavy metals, and the degree of impact of these sources on the environment increases annually due to processes taking place within them, such as dissolution, migration, interchange, oxidation and deoxidation.
The main purpose of this research work is to analyse the dynamics of heavy metal content in soil, taking into account the measures taken within the framework of the ecological program of the PJSC MMK. At present, the sites where wastes are disposed of are not designed for the storage of metal-containing wastes; as a result, they can turn into sources of repeated emission of pollutants and sources of industrial polymetal anomalies. The large amount of accumulated wastes of the metallurgical industry has resulted both in pollution of the atmosphere with industrial pollutants and in accumulation of pollutants in soil. At the same time, this soil acts both as an accumulator of polluting substances and as the initial link in the migration of toxic agents along the surface trophic chains; it also has certain transforming properties in relation to many pollutants and can serve as an indicator of the ecological state of the area [3-6].

The slag heap of the III order, located in the north of the city on the left-bank valley side of the river Ural, was considered as the source of pollution [1]. The territory of the Narovchatsky state farm, Agapovsky district, located at a distance of 40 km from the city, was considered as the reference point.

Experiment description
The pollution of the soil adjacent to the investigated industrial source of pollution with heavy metals was estimated by comparing the actual results of the research work with the admissible concentration limits and with approximate permissible concentrations (table 1), as well as with the corresponding parameters at the reference site. Pollution of soil was evaluated by the summary (total) index of pollution (Zc), which, consistent with the variables defined below, takes the standard form

Zc = Σ (Ki / Kф) − (n − 1),

where Zc = the total index of pollution with heavy metals; n = the number of the totaled elements; Ki = the content of a certain element in soil, mg/kg; Kф = the local geochemical background value of a certain element: for Cu it is 30 mg/kg; for Zn it is 70 mg/kg; for Pb it is 10 mg/kg; for Mn it is 40000 mg/kg; for Cd it is 0.2 mg/kg; for Ni it is 1 mg/kg; for Co it is 1 mg/kg.

The content of heavy metals in soil was investigated on the territory exposed to the industrial impact of the slag heap of the III order. Comparison of the average values of the total content of the investigated metals in soil from 2014 to 2017 made it possible to arrange them into the following decreasing series: Mn > Zn > Cu > Pb > Ni > Co > Cd.

The results of the investigations carried out in 2014 led to the conclusion that the total content of zinc and lead in the investigated soil of all sample plots exceeded the admissible concentration limit, while in 2017 only the total zinc content exceeded it. In 2014 the soil was contaminated with cadmium within a radius of 200 m. Pollution with copper within 1.5 km was still present in the results of 2017 as well. The content of nickel and cobalt in this soil did not exceed the permissible rates either in the investigations of 2014 or in those of 2017. It should be noted that in 2014 the total content of cadmium and manganese in the soil of the listed sample plots was at a maximum near the source of pollution and gradually decreased with increasing distance from the source. At the same time, the total content of cadmium exceeded the admissible concentration limit by 1.3 times, while for zinc this figure was exceeded by 5.2 times.
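A minimal sketch of the Zc computation follows, using the background values Kф quoted above. Whether all elements or only those with concentration coefficients above 1 enter the sum varies between applications of this index; summing all supplied elements, as below, is an assumption of this sketch, and the sample concentrations are hypothetical.

```python
# Local geochemical background values Kf (mg/kg), as quoted in the text
BACKGROUND_MG_KG = {'Cu': 30, 'Zn': 70, 'Pb': 10, 'Mn': 40000,
                    'Cd': 0.2, 'Ni': 1, 'Co': 1}

def total_pollution_index(content_mg_kg):
    """Zc = sum(Ki/Kf) - (n - 1) over the n elements supplied."""
    coeffs = [c / BACKGROUND_MG_KG[e] for e, c in content_mg_kg.items()]
    return sum(coeffs) - (len(coeffs) - 1)

# Hypothetical sample: Zn at 5.2x and Cd at 1.3x the background level
print(total_pollution_index({'Zn': 5.2 * 70, 'Cd': 1.3 * 0.2}))  # -> 5.5
```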
In 2017 pollution with cadmium was not detected, while the excess over the admissible concentration limit for zinc was 4.71 times, for manganese 6.7 times and for copper 3.2 times. In 2014-2017 no clear relationship was found between the content of the other investigated metals and the distance from the source of pollution. In 2014 the highest value for Cu was found at a distance of 500 m (up to 4 admissible concentration limits), and for Zn and Pb at 1500 m (7 and 5.7 admissible concentration limits, respectively). In 2017 the content of nickel and cobalt in this soil did not exceed the permissible rates, and the highest value for nickel was found at a distance of 5000 m.

The calculated values of the total pollution index are given in table 2. The analysis of the calculation results showed that in 2014 the degree of soil pollution in the area exposed to the impact of the slag heap of the III order made it possible to classify the soil located within 0.5 km of it as highly hazardous, while in all other cases it could be referred to the admissible category; in 2017, the soil located within 0.2 km of the slag heap was considered highly hazardous, while in all other cases it was considered admissible.

The most significant characteristic of the ecological state of the territory is the ecotoxicological index (EC) of soil quality, which is calculated as the sum of the ratios between the pollutant concentrations and their admissible limit values [7]:

EC = K1 + ... + Kn,

where Ki = the concentration coefficient of a pollutant in the soil, calculated as the ratio between the average concentration of the element in soil and its admissible concentration limit in accordance with GN 2.

In the process of the investigation, the heavy metals were assigned to the following hazard classes: hazard class I included zinc, lead and cadmium; hazard class II included nickel, copper and cobalt; hazard class III included manganese. The total evaluation of the ecological situation of the region, aimed at defining environmental emergency zones and zones of ecological catastrophe, is given in the procedure. The criteria of the ecological state of soil are given in table 3. For 2014 it was found that:
- the total ecotoxicological index of soil for metals of hazard class II is within the range of 1.073 to 4.224, which makes it possible to refer the investigated area to the zones with a critical ecological situation;
- the ecotoxicological index of soil for metals of hazard class III is within the range of 0.902 to 8.537, which makes it possible to refer the investigated area to the zones with a critical ecological situation.

Thus, it was found that the territory of the Narovchatsky state farm, Agapovsky district, located at a distance of 40 km from the city, can be referred to the zones with a relatively satisfactory ecological situation (EC < 1) with respect to the investigated heavy metals of hazard class III.
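A sketch of the EC computation and the subsequent zone classification follows. The EC definition matches the formula above; the admissible concentration limits (MPC) and the zone cut-offs, however, are purely illustrative placeholders, since the study's actual limits (its table 1) and zone criteria (its table 3) are not reproduced in this text — only EC < 1 ("relatively satisfactory") is stated explicitly, and the cut-offs evidently differ between hazard classes.

```python
HAZARD_CLASS = {'Zn': 1, 'Pb': 1, 'Cd': 1,
                'Ni': 2, 'Cu': 2, 'Co': 2,
                'Mn': 3}

# Admissible concentration limits (mg/kg): placeholder values only
MPC_MG_KG = {'Zn': 100.0, 'Pb': 32.0, 'Cd': 1.0,
             'Ni': 40.0, 'Cu': 55.0, 'Co': 20.0, 'Mn': 1500.0}

def ecotoxicological_index(content_mg_kg, hazard_class):
    """EC = K1 + ... + Kn with Ki = Ci / MPCi, summed over the metals
    belonging to one hazard class, as defined in the text."""
    return sum(c / MPC_MG_KG[e] for e, c in content_mg_kg.items()
               if HAZARD_CLASS[e] == hazard_class)

# Placeholder lower bounds per hazard class: (critical, emergency, disaster)
ZONE_BOUNDS = {1: (1.0, 1.5, 6.0), 2: (1.0, 4.5, 9.0), 3: (1.0, 9.0, 18.0)}

def ecological_zone(ec, hazard_class):
    crit, emergency, disaster = ZONE_BOUNDS[hazard_class]
    if ec < crit:
        return 'relatively satisfactory'
    if ec < emergency:
        return 'critical'
    if ec < disaster:
        return 'environmental emergency'
    return 'ecological disaster'

print(ecological_zone(9.615, 1))   # -> 'ecological disaster' (2017 class I maximum)
print(ecological_zone(3.498, 2))   # -> 'critical' (2017 class II maximum)
```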
In 2017 the research group found that:
- the total ecotoxicological index of soil for metals of hazard class I is within the range of 1.508 to 9.615, which makes it possible to refer the investigated area to the zones with an environmental emergency situation and, in the vicinity of the source of pollution, to the zones of ecological disaster;
- the total ecotoxicological index of soil for metals of hazard class II is within the range of 0.847 to 3.498, which makes it possible to refer the investigated area to the zones with a critical ecological situation;
- the ecotoxicological index of soil for metals of hazard class III is within the range of 0.710 to 6.690, which makes it possible to refer the investigated area to the zones with a critical ecological situation.

It was found that the territory of the Narovchatsky state farm, Agapovsky district, located at a distance of 40 km from the city, can be referred to the zones with a relatively satisfactory ecological situation (EC < 1). Calculations show that, with respect to the investigated heavy metals of hazard classes II and III, this territory can likewise be referred to the zones with a relatively satisfactory ecological situation (EC < 1).

Thus, in our research work the following two indexes were calculated: the total index of soil pollution and the ecotoxicological index of chemical contamination of soil with pollutants of different hazard classes. The analysis of the obtained data showed that the territory located at a distance of 1500 m from the source of pollution is not considered hazardous according to the total pollution index in 2017. However, the calculated value of the ecotoxicological index of chemical contamination of soil with pollutants of hazard class I showed that, by the characteristics of the soil state, the same territory is referred to the zones of ecological disaster. Calculation of the total pollution index alone therefore cannot provide a complete evaluation of the dynamics of the heavy metal content in soil in the light of the measures taken within the framework of the ecological program of the PJSC MMK. In accordance with the procedural guidelines 2.1.7.730-99, "Sanitary assessment of soil quality for populated areas", the assessment of the degree of chemical contamination of soil must be made according to the indexes calculated from combined geochemical and geohygienic surveys of the city environment with active sources of pollution.

Thus, as the distance from the source of pollution increases, the concentration of heavy metals in soil decreases significantly, which clearly shows the contribution of the slag heap to the pollution of soil in the residential areas. Soil is the connecting link between the components of the biosphere as well as a biogeochemical barrier absorbing heavy metals and thereby cleaning the natural environment (atmosphere and hydrosphere) of them [10]. When exposed to wind erosion and transport erosion, it can turn into a source of secondary pollution once the threshold value of heavy metal concentration is exceeded and the self-cleaning ability is lost.

Conclusion
The chemical analysis carried out by the research group showed spatial non-uniformity of the distribution of heavy metals in soil. No clear relationship was found between the indexes of soil quality.
Calculations showed that at the start of the research work in 2014, the territory of Magnitogorsk and its suburbs was referred to the zones of environmental emergency and, in the vicinity of the source of pollution, to the zones of ecological disaster. At present, taking into account the measures taken within the framework of the ecological program of the PJSC MMK, only the territory within 200 m of the source of pollution is considered a zone of environmental emergency; moreover, one can see an obvious decrease in the values of the calculated indexes. Thus, it is very important to calculate both the total index of soil pollution and the ecotoxicological index of chemical pollution of soil with pollutants, taking into account their hazard class. These calculated values make it possible to conclude that the content of heavy metals in soil has decreased in view of the ecological measures taken.
Outcomes of Sacral Nerve Stimulation For Faecal Incontinence in Northern Ireland

Background
Sacral nerve root stimulation (SNS) is an effective and developing therapy for faecal incontinence, a debilitating condition that can result in social and personal incapacitation.

Objectives
The objectives of this study are to assess the morbidity of the procedure and the improvement in incontinence scores and Quality of Life (QoL) following SNS.

Materials and methods
Patients were identified from the Northern Ireland regional SNS service from 2006 to 2012. The numbers of patients who had temporary placement and permanent placement were collated. Pre- and postoperative assessment of the severity of incontinence and QoL was performed using the Cleveland Clinic Incontinence Score (CCIS) and Short Form-36 (SF-36), respectively. Statistical analysis was undertaken using the Wilcoxon signed rank test. Morbidity was assessed by retrospective review of patient records.

Results
Seventy-five patients were considered for trial of a temporary SNS. Sixty-one proceeded to insertion of a temporary SNS and, of these, 40 elected to have a permanent SNS. There was a significant reduction in the Cleveland Clinic Incontinence Score from a median of 14 pre-SNS to 9 post-SNS (p=0.008). There was a significant improvement in Role Physical (p=0.017), General Health (p=0.02), Vitality (p=0.043), Social Functioning (p=0.004), Role Emotional (p=0.007), Mental Health (p=0.013) and the Mental Health Summary (p=0.003). However, this was not reflected in the Bodily Pain and Physical Functioning domains.

Conclusion
Permanent sacral nerve stimulation is effective and results in significant improvement of faecal incontinence scores and quality of life.

INTRODUCTION
Up to 1.4 percent of the population aged over 40 years in the United Kingdom is affected by major faecal incontinence,1 a debilitating condition associated with a high level of physical and social disability. Prevalence increases with age, and incontinence is reported in 7% of otherwise healthy adults over 65 years of age.2

The aetiology of faecal incontinence is multifactorial, with obstetric trauma one of the commonest causes. Other causes include sphincter damage secondary to perineal surgery for perianal fistulas and haemorrhoidectomy, idiopathic degeneration of the sphincter muscles, and neurological conditions such as pudendal nerve neuropathy, multiple sclerosis, diabetes mellitus, traumatic spinal cord injuries and congenital anorectal malformations.

The symptoms of faecal incontinence can be helped by changes in lifestyle and dietary habits. In particular, use of bulking and anti-diarrhoeal agents and biofeedback can help improve symptoms in a significant proportion of patients. When conservative measures fail to bring about improvement, however, surgical options can be considered. Sphincter repair, graciloplasty, artificial anal sphincter, conventional and dynamic gluteoplasty, antegrade continence enema procedures and colonic conduit formation are well investigated surgical alternatives, but the long-term results are not promising. Failure of these treatment options often results in patients considering a permanent colostomy. Sacral Nerve root Stimulation (SNS) was first developed in 1979 and used as a treatment for faecal incontinence in 1995.
It is now established as a safe procedure that offers a unique opportunity to select appropriate patients through a temporary trial prior to permanent implant placement, and it is an effective alternative therapeutic option in addition to the conventional procedures outlined.3-5 In this study, we report the Northern Ireland experience with SNS in the management of patients with faecal incontinence and assess incontinence scores and QoL following permanent implant placement. The complications encountered as a consequence of the procedure are also reported.

PATIENTS AND METHODS:
All patients aged 18-75 years who presented to clinic with one or more episodes of faecal incontinence per week and who had failed conservative treatment were selected for temporary external stimulator placement. Endo-anal ultrasound, ano-rectal manometry and pudendal nerve terminal motor latencies were performed preoperatively, with manometry repeated postoperatively. Patients with a greater than 50% reduction of incontinence score at 2 weeks follow-up were selected for permanent implant placement.

Data were collected retrospectively from patient records. The Cleveland Clinic Incontinence Score (CCIS) was used to quantify the severity of incontinence and was assessed at 6 weeks and 12 months of post-operative follow-up. SF-36 questionnaires were completed retrospectively to compare the preoperative quality of life (QoL) with that at 6 weeks following surgery. Statistical analysis was undertaken using the Wilcoxon signed rank test. Post-procedure morbidity was assessed by retrospective review of patient records.

Temporary and permanent procedures were carried out with the patient in a prone jack-knife position under general anaesthesia. The temporary wire was placed in the S3 and S4 foramina, and the foramen that gave the maximum perianal spasm and toe flexion when the temporary wire was stimulated was used for the two weeks of the test. Electrodes for the permanent implant were placed in the same foramina to duplicate the response achieved during the test period. A Medtronic (Model No. 3023) stimulator [pulse width: 210 μs, frequency: 14 Hz] was inserted in a subcutaneous pocket created above the iliac bone. One dose of prophylactic antibiotic was administered at induction of anaesthesia. Before discharge, patients were counselled by the senior author and the stimulator programmed to the amplitude just below the threshold for individual patient sensation. Patients were reviewed at the clinic at 6 weeks, 3 months and one year following the procedure by the senior author. Severity of incontinence and QoL were assessed using the CCIS and SF-36v2 forms, respectively. Patients were sent postal questionnaires with a postal and telephone reminder at 4 weeks.

RESULTS:
75 patients presenting to the colorectal clinic between 2006 and 2012 were identified as having been assessed as suitable for consideration of a sacral nerve stimulator. 70 (93.3%) of these patients were female. The major indication for assessment was faecal incontinence (72 patients, 96%). This was mostly urge incontinence or urge and passive incontinence (49.3%).
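The Wilcoxon signed rank comparison of paired pre- and post-SNS CCIS values described above can be sketched as follows. The scores are illustrative stand-ins (the study's medians were 14 and 9), and scipy substitutes here for whatever statistics package was actually used.

```python
import numpy as np
from scipy import stats

# Paired pre/post-SNS CCIS values for the same patients (illustrative data)
pre  = np.array([14, 16, 12, 18, 15, 13, 14, 17])
post = np.array([ 9, 10,  8, 14,  9,  7, 10, 12])

stat, p = stats.wilcoxon(pre, post)   # paired, two-sided by default
print(f"W = {stat}, p = {p:.4f}")
```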
Preoperative Assessment
61 of the 75 patients were selected as appropriate for a trial of temporary implant placement; the remaining 14 either declined the procedure, did not have true faecal incontinence, or had not tried all conservative measures. Of these 61, 60 were female, of whom 70% had had at least one previous pregnancy. 64.2% had required perineal intervention during delivery, which included perineal tear, forceps delivery or episiotomy. 57.3% of the initial 75 patients considered for temporary placement of SNS had previously undergone perianal surgery, including anal sphincter repair, haemorrhoidectomy and anal pull-through (Table 1). The median age of patients was 42 years (range: 22-76 years). Patients were discharged on the same day following temporary wire placement and the following morning after placement of the permanent implant.

All patients had either ultrasound scan (USS) or magnetic resonance imaging (MRI) assessment of their anal canal. 61.7% of the patients who proceeded to a temporary wire had either a defect, scar or thinning of their anal sphincter, with the rest having no abnormality on imaging. Thirty-nine patients had a pudendal nerve assessment prior to temporary SNS assessment. This demonstrated bilateral delay in 33.3% of patients, right-sided delay in 12.8%, left-sided delay in 5.1%, and a normal result in 48.7%.

Temporary SNS
A temporary SNS was placed in 61 patients. Of these, 40 patients (65.6%) reported an improvement in their Cleveland Clinic Incontinence Score of greater than 50%, and all of these patients proceeded to permanent SNS implant placement. There was no morbidity from the procedure itself; however, some technical failures were reported, with two patients having wire failure due to wire dislodgement and one patient suffering battery failure, giving a total complication rate of 4.9%.

Cleveland Clinic Incontinence Scores: Permanent SNS
In the patients who proceeded to permanent implant placement there was a significant reduction in the Cleveland Clinic Incontinence Score, from a median of 14 pre-SNS to 9 post-SNS (p=0.008). There was no difference in improvement between the 6 week and 12 month follow-up, and at their most recent follow-up 78% of patients reported continued improvement from their baseline symptoms prior to placement of the SNS.

Morbidity:
In 6 patients there was an initial suspicion of infection. Five of these patients were given antibiotics for erythema around the wound, and 1 of these patients had wound breakdown. A further patient was found to have a sterile abscess. Ten patients (25%) initially reported pain at the site of the permanent implant; however, in 6 of these cases it resolved with reprogramming or spontaneously, and 4 had persistent pain requiring analgesics for more than six weeks.

Technical issues & Follow up:
The device required reprogramming in 62.5% of cases; however, this was usually performed at an outpatient appointment. Reprogramming by a Medtronic representative was required in 10% of cases. Repositioning of the SNS was required in three patients, including one case where the stimulator had to be replaced due to infection following wound breakdown. There was one episode of wire failure in this cohort and one episode of battery failure after the device had been in place for over five years.

DISCUSSION
Faecal incontinence is a debilitating condition associated with significant stigmatisation and embarrassment.
Difficulty in travelling, working and maintaining interpersonal relationships frequently results in the patient suffering from social isolation, depression and a reduced quality of life. This has substantial economic implications for individuals, family members and the healthcare system.2 Community costs were measured at €2169 per year in the Netherlands in 2005 and $4110 per year in the US in 2012.6,7

Conservative treatment is effective in more than half of patients, but more intensive treatment is required in a proportion of them.8 Various studies have reported short-term success rates varying from 33 to 100%9 with sphincter repair procedures, such as post-anal repair, perineal reefing and overlapping sphincteroplasty, although the results worsened with increasing length of follow-up. Total pelvic floor repair, which combines anterior sphincter plication with levatorplasty and post-anal repair, is reported to be a viable option when compared to post-anal repair or levatorplasty alone for idiopathic incontinence.10 Other procedures, such as neo-sphincter procedures, graciloplasty (stimulated or non-stimulated) and artificial bowel sphincter insertion, are technically demanding with high initial costs.8 Dynamic graciloplasty is associated with morbidity and mortality rates of 0 to 13% and 0.14 to 2.08%, respectively.11 Artificial bowel sphincter insertion has success rates of 70-88%, with morbidity rates as high as 33%12 and explantation rates of up to 40%.13 Stoma formation has the associated costs of hospitalisation and maintenance.

SNS continues to develop as a therapy for faecal incontinence.14 It was initially used for the treatment of urinary urge incontinence and non-obstructive urinary retention.15 These patients observed a simultaneous improvement in bowel symptoms, and its use was consequently investigated extensively in the treatment of faecal incontinence and constipation. Matzel et al were the first to report its use in faecal incontinence in 1995.5

The mode of action of SNS remains unknown. The clinical effect may be due to voluntary somatic, afferent sensory and efferent autonomic motor stimulation achieved by sacral nerve root stimulation.16 In addition, the pelvic part of the sympathetic chain and the large myelinated alpha motor neurones that innervate the external anal sphincter and levator ani muscles are also stimulated. The resulting neuromodulation probably results in a change in sphincter function, hindgut function or a combination of these, leading to improved continence.17 There is no evidence as yet to suggest why some patients do not gain sufficient benefit to warrant permanent implantation.

In our series, 40 of the 61 patients (65.6%) had marked improvement in incontinence scores with temporary wire placement and went on to permanent implant placement. Three of the remaining patients in our series opted for a permanent colostomy. Jarrett et al (2004), in a systematic review of the published literature, found that 56% of 266 patients proceeded to permanent implant.17-20 This shows that our rate was within the previously reported range, and the differences in conversion may reflect variation in selection of patients and willingness to offer something to people with a very debilitating condition.
Various authors report improved continence scores and quality of life, but using different scales of measurement (Wexner score, Cleveland Clinic Incontinence Score; SF-36, American Society of Colon and Rectal Surgeons questionnaire and Royal London Hospital questionnaire), perhaps due to the unavailability of a single validated scoring system to assess faecal incontinence.14,21,22 This can make direct comparison between studies quite difficult. Our study showed that, overall, there was a significant reduction in the Cleveland Clinic Incontinence Score from a median of 14 to 9 (p=0.008). This compares favourably with other studies, which show a similar reduction in CCIS from a range of 12-18 to a range of 1-10.14 The number of patients in these studies is very variable, as is the length of follow-up, which could be as short as 6 months, making valid comparison difficult.14 It is noted that the extent of improvement in these studies varies considerably, and it is unclear whether there is a bigger improvement when starting from a higher or lower baseline; however, they are all statistically significant. In keeping with our results, several studies have shown significant improvement in quality of life with effective SNS, and specifically a long-term sustained clinical benefit in 80% of patients at 7 years.23,24 It was pleasing to see that there was very little tailing off of improvement amongst our cohort.

In our study, two patients had no change in the CCIS at 6 weeks follow-up. One of them had associated proctitis of unknown aetiology that might have contributed to persistent symptoms. The incontinence score in this patient was 20 both preoperatively and at 6 weeks follow-up. This is reflected in the Physical Functioning, General Health and Vitality subscores of the SF-36, which remained the same postoperatively. The other patient, with a migrated electrode, had no improvement in incontinence scores at 6 weeks. Interestingly, all the subscores of the SF-36 remained the same postoperatively except for Social Functioning (35 vs. 29.6). However, the incontinence scores improved from 10 to 8 after the electrode was reprogrammed. Another patient had a painful serous collection around the implant, for which the implant was replaced on the opposite side. Other adverse events reported in the literature include implant-related pain due to the lead running subcutaneously over the iliac crest to the abdominally placed generator, pain over the generator when it was set as the anode, unspecified pain, infection of the implant and superficial wound dehiscence.17 By placing the implant in the upper outer quadrant of the buttock on the patient's dominant side, the stimulator is not felt when sitting down and there is decreased lead-associated pain. The tined lead electrodes, although more expensive, inhibit axial movement of the lead and probably reduce migration rates.25

Nearly half of all patients experience loss of efficacy at some point. 62.5% of patients required reprogramming on at least one occasion, with 10% requiring a Medtronic representative to assist with reprogramming for either symptom control or discomfort. Alternative stimulator settings at a higher frequency, if tested, could restore treatment efficacy in patients experiencing loss of efficacy.26 When the stimulators have been in place for some time, battery failure is not uncommon and may require exchange of the pulse generator, seen in the original cohort at a rate of 89% at an average of 7.4 years.27
This study reports our early experience with sacral nerve stimulation. Limitations of this study include a small patient population and a limited follow-up period of 12 months. Although the success rates are good at 12 months, longer-term efficacy needs further evaluation. Furthermore, this procedure was subject to limitations imposed by the purchasing commissioners in Northern Ireland. Following this review of SNS results, it is planned to make the procedure available more widely. CONCLUSIONS: This study has shown that the use of SNS for faecal incontinence results in significant improvement in incontinence and quality of life scores. Patient selection based on the improvement in continence with minimally invasive temporary wire stimulation is effective at predicting those who will benefit over the medium term. There are relatively low rates of morbidity associated with the procedure.
Magnitude of Episiotomy and Associated Factors among Mothers Who Give Birth in Arba Minch General Hospital, Southern Ethiopia: Observation-Based Cross-Sectional Study Background Episiotomy is the most common obstetric procedure, performed when the clinical circumstances place the patient at a high risk of high-degree laceration. However, episiotomy should be done with judicious indication to lower perineal laceration with fewer complications. Despite its adverse effects, the magnitude of episiotomy is increasing due to different factors. Therefore, this study is aimed at determining the recent magnitude of episiotomy and at identifying associated factors among women who gave birth in Arba Minch General Hospital, Southern Ethiopia. Methods An institution-based cross-sectional study was conducted from December 15, 2018, to January 30, 2019. A systematic random sampling technique was used to select study participants. A semistructured questionnaire was used to collect data. This was supplemented with a review of the labor and delivery records. Binary and multivariable logistic regression analyses were performed to identify factors associated with the magnitude of episiotomy. A P value ≤ 0.05 was used to declare statistical significance. Results The magnitude of episiotomy was found to be 272 (68.0%) with 95%CI = 64.0-72.5. Women who attended secondary education [AOR = 10.24, 95%CI = 2.81-37.34], women who attended college and above [AOR = 4.61, 95%CI = 1.27-16.71], birth weight ≥ 3000 g [AOR = 4.84, 95%CI = 2.66-8.82], primiparity [AOR = 4.13, 95%CI = 2.40-7.12], housewife occupation [AOR = 3.43, 95%CI = 1.20-9.98], being married [AOR = 2.86, 95%CI = 1.40-5.84], and body mass index < 25 kg/m2 [AOR = 2.85, 95%CI = 1.50-5.44] were independent variables found to have a significant association with episiotomy. Conclusion The magnitude of episiotomy was 68.0%, which is higher than the rate recommended by the WHO (10%). The study participants' occupational status, marital status, educational status, parity, birth weight, and BMI were significantly associated with the magnitude of episiotomy in the study area. Therefore, to reduce the rate of episiotomy, it is better to provide periodic training for birth attendants regarding the indications for episiotomy. Background An episiotomy is a widely used obstetric intervention performed by the birth attendant to enlarge the birth canal as the fetal head descends, thereby minimizing the risk of severe tears during childbirth [1,2]. The American College of Obstetricians and Gynecologists (ACOG) and the International Federation of Gynecology and Obstetrics (FIGO) recommend that episiotomy be done with judicious indication to lower perineal laceration with fewer complications [3,4]. Existing evidence also supports the recommendation to restrict episiotomy use [5]. Findings from studies conducted in different parts of the world show that episiotomy increases the risk of third- and fourth-degree perineal lacerations, which have short- and long-term complications for mothers [6][7][8]. A study conducted in Taiwan indicated that episiotomy increased the number of women who had pain at the first, second, and sixth weeks postpartum, as well as urinary incontinence [9]. Most of the consequences of episiotomy affect the parturient, greatly impacting her quality of life and leaving her with an unpleasant childbirth memory [10].
Besides, findings show that perineal tears and pelvic floor morbidity can be increased among women receiving episiotomy [11,12]. Episiotomy use is associated with a higher incidence of perineal pain in the immediate postpartum period, predisposing women to psychological morbidity and stress urinary incontinence at 6 weeks postpartum. Despite these adverse effects, the magnitude of episiotomy is increasing due to different factors [13]. Findings from studies conducted in India and China show that the magnitude of episiotomy remains high, ranging from 60% to 80% [13][14][15]. It also continues to be high in developing countries [1,16,17]. Findings from studies conducted in different parts of Ethiopia revealed that the magnitude of episiotomy exceeds 30%, and the practice was reported to be up to 2.3-fold higher in rural parts of Ethiopia [15,17]. The rate of episiotomy practice reported was significantly higher than recommended, and it has been noted that perineal repair without analgesia needs to be revised and a less painful method advocated [18]. Moreover, a national health facility report in Ethiopia indicated that episiotomy alone had caused 9% and 8% of primary postpartum hemorrhage and maternal sepsis, respectively [19]. The magnitude of episiotomy practice varies according to the obstetric procedure, maternal and fetal conditions, type of birth attendant, and the level of education and years of experience of the birth attendant. Therefore, this study is aimed at determining the recent magnitude of episiotomy and at identifying associated factors among women who gave birth in Arba Minch General Hospital, Gamo Zone, southern Ethiopia, which may help to reduce adverse consequences to the mother. Moreover, the findings of this study may help clinicians to make informed decisions about episiotomy-related clinical practice, thereby achieving the best pregnancy outcome. Study Area, Period, and Design. An institution-based, cross-sectional study was conducted from December 15, 2018, to January 30, 2019, in Arba Minch General Hospital. Arba Minch is the administrative town of the Gamo Zone, located approximately 500 km south of Addis Ababa, the country's capital city. It consists of 11 kebeles (the smallest administrative unit) with a total population of 112,724. There are 26,265 women of reproductive age residing in the town, of whom 4428 were pregnant and 3261 gave birth at the facility level. There is one general hospital, two health centers, and 17 primary and 14 medium private clinics in the town. Arba Minch General Hospital is a teaching center in different disciplines and specialty areas for medicine and health science students. Source and Study Population. Mothers who gave birth in Arba Minch General Hospital during the specified study period were eligible for the study. However, mothers who underwent destructive delivery were excluded from this study. Sample Size Determination and Sampling Procedure. The sample size was estimated using the single population proportion formula, considering the following assumptions: the magnitude of episiotomy (P = 41.4%) from a study conducted in public health institutions of Axum Town, Northern Ethiopia [20]; a 95% confidence level; a 5% margin of error; and a 10% nonresponse rate. As a result, the calculated sample size for this study was 410. To select study participants, the systematic random sampling technique was employed.
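As a quick arithmetic check, the sketch below reproduces the sample size calculation from the assumptions stated above, using the standard single population proportion formula the authors cite; it is an illustration only.

```python
# Single population proportion sample size with the stated assumptions:
# P = 41.4%, 95% confidence (z = 1.96), 5% margin of error, 10% nonresponse.
import math

p, z, d = 0.414, 1.96, 0.05
n = math.ceil(z**2 * p * (1 - p) / d**2)  # base sample size -> 373
n_total = round(n * 1.10)                 # add 10% for nonresponse -> 410
print(n, n_total)                         # 373 410
```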
The sampling process was stopped when the required sample size was met. The sampling interval was determined based on the monthly average number of deliveries. According to the 2018 hospital report, the average monthly number of deliveries was 1315 (i.e., the kth value was 1315/410 = 3.2, rounded to 3). The first mother was picked by the lottery method; thereafter, every third mother was selected according to the order of admission to labor until the final sample size was fulfilled. Data Collection Method. Data were collected using a pretested, interviewer-administered, semistructured questionnaire. This tool was developed from similar studies conducted in different parts of the world [17,[20][21][22][23][24][25][26][27]. The questionnaire was initially developed in English, translated into Amharic (the local language), then translated back into English and rechecked by a third person to ensure its correctness and consistency. The interview questionnaire consisted of four key sections: sociodemographic characteristics of participants, labor- and delivery-related factors, and maternal- and fetal-related variables. Fetal gestation and weight were collected from the maternal follow-up sheet. Seven BSc midwives who had experience in data collection were selected as data collectors, and three MSc midwifery students supervised the data collection. All data collectors were responsible for observing respondents from the commencement of the active first stage of labor to the occurrence of the outcome. Secondary data extraction then followed, and the interview was conducted once the patient had stable vital signs and her medical condition had been confirmed as fit for interview by a duty physician. The interview was conducted in a place where the mother's privacy and comfort could be maintained. All interviews were conducted in the local language. The data collectors were supervised by the MSc midwifery students based on hospital ward rotation. The data collectors were scheduled in every shift to collect data from the admission of the parturient to the end of the first two hours after childbirth, through interview, observation, and delivery records. The data collectors used a unique numeric identifier to track each mother's card during data extraction; they were instructed to put this code on each questionnaire's front page. Each day during the data collection phase, between 5:00 and 5:30 pm, we held a regular meeting to discuss challenges, and the researcher was notified right away so that a solution could be put in place for the next day. The interviews lasted 10 to 15 minutes per participant. All study subjects consented to be interviewed. Data Quality Management. To control data quality, the questionnaire was pretested on 5% of the sample size among women who gave birth in Gidole Hospital. Minor amendments to consistency, coherence, and skip patterns were made after the pretest. Besides, both the data collectors and the supervisors were given one day of training by the researcher on how to complete the questionnaires, interview puerperal patients, and extract data from the delivery registration. During the data collection phase, the supervisor checked the completeness of the questionnaires each day. 2.6. Data Analysis. The collected data were coded, entered, and cleaned using Epi Info version 7.2.0 software. They were then exported to SPSS version 20. Descriptive statistics were carried out and summarized by tables, frequencies, graphs, and means.
An association between the magnitude of episiotomy and potential factors was examined using binary logistic regression. Odds ratios and confidence intervals were calculated to determine the strength of the associations. Variables with P ≤ 0.25 in the bivariable analysis, together with biologically plausible variables, were candidates for the multivariable logistic regression analysis. We checked for multicollinearity among the explanatory variables using the variance inflation factor (VIF); variables with a VIF greater than 10 were dropped from the candidate variables to be fitted into the final model. Goodness-of-fit was assessed using the Hosmer-Lemeshow test. Models with a nonsignificant Pearson chi-square test but a significant omnibus test were considered adequate for the multivariable analysis. Variables with a P value ≤ 0.05 in the multivariable logistic regression model were considered statistically significant. Finally, the significance of the association between receiving an episiotomy and the independent variables was reported with the corresponding 95% CI. Sociodemographic Characteristics of the Study Subjects. Out of 410 mothers who were expected to participate, 400 mothers participated in this study, giving a 97.6% response rate. Almost half, 193 (48.3%), of the study subjects were less than 27 years of age (SD = 3.9). Of all study subjects, more than 85% were married. Most study subjects, 255 (63.8%), lived in an urban area. More than 29% of the study subjects were either government employees or self-employed. About 26.8% of the study participants had completed at least a college education (Table 1). Labor- and Delivery-Related Characteristics. In this study, about 207 (51.8%) respondents had given birth during the night time. More than 39% of the respondents had given birth assisted by a vacuum extractor. Regarding duration of labor, 224 (56%) laboring mothers stayed more than 7 hours in the first stage of labor. However, 222 (55.5%) of laboring mothers took less than 2 hours to deliver their neonates after commencement of the second stage. Two hundred thirty-four (58.5%) of the study subjects gave birth assisted by midwife professionals (Table 2). Maternal-Related Characteristics. Of the multiparous women in this study, 91 (48.4%) had a history of a previous episiotomy. Among mothers who had a pregnancy above 28 completed weeks, 60 (31.9%) had a history of a previous breech delivery. Among the total respondents, the majority, 325 (81.3%), had undergone female genital mutilation. About 212 (56%) of the respondents had given birth for the first time. Approximately fifty-two percent of the respondents were found to have a body mass index above the optimal range. In 103 (25.8%) mothers, a previous history of chronic illness was reported (Table 3). Fetal-Related Characteristics. Of all deliveries, 298 (74.5%) occurred at 37 or more completed weeks of gestation. The majority, 363 (90.8%), of the reported fetal presentations were cephalic. During the current study, the fetal condition was described by the presence of clear amniotic fluid in 308 (77%) of the respondents. More than half of the delivered neonates weighed less than 3300 g (IQR ± 2000) (Table 4). 3.5. Magnitude of Episiotomy. The findings of this study revealed that the magnitude of episiotomy was 272 (68.0%) with 95%CI = 64.0-72.5.
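To make the screening-then-adjustment workflow from the Data Analysis subsection concrete, here is a minimal sketch in Python. The dataset and variable names are hypothetical placeholders, not the study's data; only the stated thresholds (P ≤ 0.25 for screening, VIF > 10 for exclusion) are taken from the text.

```python
# Bivariable screening at P <= 0.25, a VIF check, then a multivariable
# logistic model reported as adjusted odds ratios (AOR) with 95% CIs.
# 'df' is synthetic random data standing in for the study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
predictors = ["primipara", "married", "secondary_edu", "bmi_under_25"]
df = pd.DataFrame({c: rng.integers(0, 2, 400) for c in predictors})
# outcome loosely driven by two predictors so the screening step has signal
df["episiotomy"] = ((df["primipara"] + df["married"]
                     + rng.normal(0, 1, 400)) > 1).astype(int)

def fit_logit(cols):
    X = sm.add_constant(df[cols].astype(float))
    return sm.Logit(df["episiotomy"].astype(float), X).fit(disp=0)

# 1) bivariable screening at P <= 0.25
candidates = [c for c in predictors if fit_logit([c]).pvalues[c] <= 0.25]

# 2) drop candidates with VIF > 10 (multicollinearity check)
X = sm.add_constant(df[candidates].astype(float))
keep = [c for i, c in enumerate(candidates, 1)
        if variance_inflation_factor(X.values, i) <= 10]

# 3) multivariable model; exponentiate coefficients to get AORs
res = fit_logit(keep)
table = pd.DataFrame({"AOR": np.exp(res.params),
                      "CI_low": np.exp(res.conf_int()[0]),
                      "CI_high": np.exp(res.conf_int()[1])})
print(table.drop("const"))
```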
In this study, the main reason for performing the episiotomy procedure was fear of spontaneous perineal laceration, accounting for 55.8%, followed by soft tissue dystocia at 15.8% (Figure 1). Variables with P ≤ 0.25 in the bivariable analysis, together with biologically plausible variables, were candidates for the multivariable logistic regression analysis. Variables such as marital status, maternal age, parity, educational status, occupation, birth attendant, duration of second-stage labor, residence, gestational age, time of delivery, history of FGM, estimated fetal weight, history of chronic illness, and body mass index were fitted in the final multivariable model. Occupation, BMI, birth weight, parity, marital status, and educational status were significantly associated with the outcome variable in the final multivariable analysis. In this study, women who were housewives were more than 3.4 times more likely to be incised during delivery than student mothers [AOR = 3.4, 95%CI = 1.2-9.9]. Married mothers were 2.9 times more likely to be incised during delivery than unmarried mothers [AOR = 2.9, 95%CI = 1.4-5.9]. Maternal educational status was another independent variable found to have a statistically significant association with an increased magnitude of episiotomy: mothers who attended secondary education [AOR = 10.2, 95%CI = 2.8-37.3] and mothers who attended college and above [AOR = 4.6, 95%CI = 1.3-16.7] were more likely to be incised than mothers who did not attend formal education. On the other hand, primiparous mothers were four times more likely to undergo an episiotomy procedure than multiparous mothers [AOR = 4.1, 95%CI = 2.4-7.1]. Mothers who gave birth to a neonate weighing 3300 g or more were 4.8 times more likely to undergo an episiotomy procedure during delivery than those whose newborns weighed below 3300 g [AOR = 4.8, 95%CI = 2.7-8.8]. Another explanatory variable with a significant association with the magnitude of episiotomy in this study was maternal body mass index (BMI): mothers whose BMI was <25 kg/m2 were nearly three times more likely to undergo an episiotomy procedure during delivery than mothers whose BMI was ≥25 kg/m2 [AOR = 2.9, 95%CI = 1.5-5.4] (Table 5). Discussion The primary objectives of this study were to determine the magnitude of episiotomy and to identify associated factors among women who give birth in Arba Minch General Hospital. While more than half of the women who gave birth underwent episiotomy during delivery, the study participants' occupational status, marital status, educational status, parity, birth weight, and BMI were significantly associated with the magnitude of episiotomy in the study area. The magnitude of episiotomy was 68.0% (95%CI = 64.0-72.5). This finding is lower than in studies conducted in Uganda (73%) [28] and northern Nigeria (89.3%) [29]. However, it is higher than findings among Vietnamese-born women in Australia (29.9%) [27]; in Nigeria (21%) [30]; Brazil, 29.1% [22]; Iran, 41.5% [31]; Kano, Nigeria, 41.4% [32]; Nepal, 22% [33]; East African women in Australia, 30% [34]; Eastern Nigeria, 45% [23]; Mizan Aman, 30.6% [17]; Addis Ababa, 40.2% [16]; Shire, 41.4% [20]; and Jimma, 25% [15].
This difference might be due to differences in the timing of the studies, the study settings, and the characteristics of the study populations. The high prevalence may be due to the characteristics of the study participants, since Arba Minch General Hospital is a referral center for three catchment zones. Most women who labor in this hospital are high risk and are often referred with complications; this may increase the use of episiotomy to shorten the second stage of labor. The high magnitude of episiotomy might also be associated with the experience of birth attendants, which may suggest a need for more restrictive use at the study center. To reduce the episiotomy rate, perineal massage, the use of certain birthing positions (e.g., hands and knees), and labor support are suggested for birth attendants [35]. Furthermore, the high magnitude may be associated with operative delivery: in our study, we did not exclude mothers whose delivery was by vacuum or forceps, and evidence shows that operative delivery increases the rate of episiotomy [36]. Besides, the finding of this study suggests the need for training of professionals in the practice of episiotomy to lower its magnitude. The findings of this study also revealed that women who were housewives and women who were married were more likely to be incised during delivery. This finding is supported by evidence from a study conducted in Mizan Aman Hospital [17]. The educational status of women was also one of the factors with a statistically significant association with an increased magnitude of episiotomy: women who had attended secondary education and mothers who had attended college and above were more likely to be incised during delivery than mothers who did not attend formal education. This finding is supported by a study conducted in Iran [31]. This may be because education allows women to obtain information about, and become aware of, the procedure; this, in turn, may lead study subjects to develop fears that influence obstetric caregivers to perform episiotomy [37]. This finding also suggests that health professionals should apply the WHO recommendation without being influenced by mothers' preferences. The parity of respondents was also one of the risk factors for episiotomy: primiparous women were more likely to have an episiotomy than multiparous women. This finding is supported by studies conducted in Brazil [22], France [8], East African migrants in Australia [34], Taiwan [38], Iran [31], Vietnamese-born women in Australia [27], and Jimma [15]. This might be because primiparous women are prone to a tight perineum, which is one indication for episiotomy, and because the old recommendation of routine episiotomy in primiparous women might still influence many health professionals' indication of this procedure for these women [20]. The birth weight of the newborn had a significant statistical association with episiotomy: mothers whose newborns weighed at or above the median of 3300 g were 4.8 times more likely to have an episiotomy during parturition. This is in agreement with studies conducted in Israel [39], Nigeria [25], Austria [40], Spain [41], Thailand [42], the USA [43], and Japan [44]. However, no association was reported in studies conducted in Jimma [15] and Iran [31].
This finding suggests that clinicians tend to perform an episiotomy when they assume the fetal weight is high. Indeed, the higher the estimated fetal weight, the greater the predisposition to perineal trauma unless the provider performs a judicious episiotomy in time. Evidence indicates that one of the main reasons why clinicians perform episiotomy is fear of a perineal tear [37]. Training should be given to birth attendants on the indications for episiotomy; this may reduce their fear. Also, those who fear a perineal tear should consult an experienced birth attendant early. In the current study, women whose body mass index was less than 25 kg/m2 were nearly 3 times more likely to have an episiotomy. This is supported by studies conducted in the UK [45], New Mexico [46], and the USA [47]. The risk of episiotomy was lower in women who had an increased BMI [48]. An increased BMI at enrolment was associated with a reduced incidence of minor perineal trauma at delivery [49]. Obese women were less likely to use tobacco, were more likely to have their labor augmented or induced with oxytocin, and had shorter second stages than women who were not obese [46]. This finding suggests that health professionals should closely follow obese women. Limitation of the Study As this study was conducted exclusively in the hospital, the findings cannot be generalized to all women who attend labor in Ethiopia. Besides, there may be social desirability bias, since we collected the data using the interviewer-administered technique. Cause-and-effect relationships could not be determined due to the cross-sectional design of this study. Thus, we strongly recommend further study using a design better suited to ascertaining causal relationships. Conclusion In conclusion, this study found that more than half of the study participants had an episiotomy. Gravidity, occupational status, marital status, educational status, birth weight of the neonate, and BMI of the women were significantly associated with the magnitude of episiotomy. Therefore, different stakeholders working on maternal health programs should address these factors to reduce the magnitude of episiotomy. Furthermore, it is better to direct episiotomy-restricting interventions at birth attendants. Moreover, clinicians and any responsible body should critically follow the work done in the hospital.
SEDATION IN THE INTENSIVE CARE UNIT Summary Introduction. Sedation is the reduction of irritability or agitation by the use of certain drugs, mostly to facilitate therapeutic or diagnostic procedures. Scales for evaluation of the depth of sedation. The Riker Sedation-Agitation Scale and the Richmond Agitation-Sedation Scale are the most commonly used scales. Drugs. Sedation is generally produced using medications from the groups of opioids, benzodiazepines, intravenous and inhalation general anesthetic agents, neuroleptics, phenothiazines, α-agonists and barbiturates. Adverse effects of sedatives. Sedation is often associated with hypotension, prolonged mechanical ventilation and longer time on respiratory support, a higher frequency of delirium, immunosuppression, deep vein thrombosis and an increased risk of nosocomial pneumonia, all of which lead to prolonged recovery time. Conclusion. The sedatives currently used in intensive care units are widely used, but they have limitations. The goal is to achieve the desired level of sedation with as few side effects as possible. Introduction Sedation is the reduction of irritability or agitation by the use of certain drugs, mostly to facilitate therapeutic or diagnostic procedures. The first drug used as a hypnotic was chloral hydrate. Although it was synthesized in 1832, it was not analyzed as a hypnotic until 1869, by the Berlin chemist Oscar Liebreich. Chloral hydrate substituted for morphine very quickly due to its simple and practical use: it showed its effect without the application of injections, which made it rather suitable for home use [1]. This drug still has its place in the sedation of pediatric patients. Nitrous oxide, "laughing gas", was first used as an anesthetic in 1844 by Horace Wells, an American dentist. He first tried the effect of the gas on himself, with the help of a colleague, during the extraction of a wisdom tooth. Afterwards, Wells applied nitrous oxide to his patients, using an animal bladder and a wooden tube, which he put into the patient's mouth while the nose was blocked. He performed successful operations over a period of one month. The first public demonstration was unfortunately unsuccessful, since the gas was not applied properly.
Wells was declared a fraud and gave up dentistry. In 1848, disappointed, he committed suicide using chloroform [2]. In the period between the 1920s and the mid-1950s, barbiturates were practically the only drugs used both as sedatives and as hypnotics. Barbiturates were synthesized in 1864 by Adolf von Baeyer, although the synthetic process was developed and perfected by the French chemist Edouard Grimaux in 1879. He facilitated the further development of barbiturate derivatives, which were widely applied [3]. As far as contemporary sedation is concerned, benzodiazepines (midazolam), dexmedetomidine and opioids are most often used [4]. Morphine still occupies a significant place in therapy, due to its analgesic and mildly euphoric effects and its cost-effectiveness. Chloral hydrate is used for sedation in pediatrics, whereas in developed countries nitrous oxide is the drug of choice, due to its practical use by mask, its mild euphoric effect and its optimal analgesic effect. In dental practice, nitrous oxide is most often used in combination with oxygen [5,6]. The development of intensive care units dates back to the time when artificial ventilation was established using rudimentary machines which could not synchronize with the patient's respiratory efforts. The consequence was deep sedation, up until the point when the patient was able to breathe without the help of a respirator. In recent decades, microprocessor-controlled ventilators, which synchronize with the patient's respiratory effort, together with new short-acting sedatives and analgesics, have significantly changed this approach. Today, intensive care is part of a multidisciplinary approach involving a large team which participates in treating critically ill patients. As far as the sedation of patients is concerned, the choice of analgesics and sedatives is important, taking into consideration potential drug allergies, organ dysfunction (especially of the liver and kidneys), the need for a rapid onset and/or offset of the drug-induced effect, the expected duration of therapy, as well as the primary response to the therapy. Analgesics and sedatives are used according to patients' needs, using the smallest effective dosage. The accumulation of drugs and their metabolites is taken into consideration, as well as the adverse effects to which the application of these drugs may lead, particularly in critically ill patients. The manner of drug administration, continuous or intermittent, is also planned [7]. Scales for evaluation of the depth of sedation The assessment of the depth of sedation implies the use of various scales. The Riker Sedation-Agitation Scale (SAS) (Table 1) and the Richmond Agitation-Sedation Scale (RASS) (Table 2) are the most commonly used. These scales are also part of the protocols for the assessment of delirium in the intensive care unit (ICU), such as the Confusion Assessment Method for the ICU (CAM-ICU) and the Intensive Care Delirium Screening Checklist (ICDSC). Scales for assessment of the sedation depth are used in order to achieve the optimal level of sedation. If that is not achieved and the patient is agitated, this leads to poor synchronization between the patient and the ventilator and consequently to insufficient ventilation.
A possibility of delirium must be considered, as well as involuntary removal of electrodes and catheters and the development of post-traumatic stress. On the other hand, an excessive level of sedation leads to unnecessarily prolonged mechanical ventilation, which can be accompanied by complications such as ventilator-associated pneumonia or other lung damage, neuromuscular dysfunction, diaphragm dysfunction and numerous other injuries. For these reasons, it is very important to find the right balance and to establish the appropriate level of sedation in the ICU [8]. Drugs Numerous types of drugs are used for sedation. They include opioids, benzodiazepines, intravenous and inhalation general anesthetic agents, neuroleptics, phenothiazines, α-agonists and barbiturates. On the one hand, these drugs are used to help the patient, while on the other they have potentially harmful and adverse effects. Therefore, doctors in the ICU have to be well acquainted with all the characteristics of these medications in order to provide the patient with the most adequate care [9]. Sedatives are the drugs most often used in the ICU; however, there is no ideal sedative. The properties of an ideal sedative would include: sedative, analgesic and anxiolytic effects; minimal cardiovascular and respiratory side effects; rapid onset and offset of effect; no adverse effects on kidney and liver function; inactive metabolites; no interactions with other drugs; and cost-effectiveness. Since no single agent has all these properties, a large number of drugs and their combinations are in use. There are no defined sedation regimens, so the choice of suitable drugs is made according to the patient's individual needs, characteristics and clinical symptoms [10]. Intravenous anesthetic agents Propofol. Propofol is an intravenous anesthetic agent which, in subanesthetic doses, has sedative, hypnotic, anxiolytic and amnestic effects, but no analgesic effect. It has a wide range of advantages, including anticonvulsant and antiemetic effects, and it decreases intracranial pressure [11,12]. The most important side effect of propofol is hypotension, due to peripheral vasodilation and negative inotropic and chronotropic effects. It is a lipid emulsion; therefore its intravenous application is painful. Other side effects include dose-dependent respiratory depression and hyperlipidemia. Propofol infusion syndrome is a rare but very serious drug reaction, characterized by progressive heart dysfunction, severe metabolic acidosis, hyperkalemia, hyperlipidemia, acute renal insufficiency and rhabdomyolysis. Hemodialysis or hemofiltration is recommended for the elimination of propofol and its toxic metabolites [13]. Benzodiazepines Benzodiazepines are the most frequently used drugs for sedation of patients with severe illnesses or injuries. They lead to sedation, anxiolysis or hypnosis, depending on the number of receptors activated. The anxiolytic effect is mediated by binding to so-called benzodiazepine receptors, located in the limbic system, which activates the inhibitory transmitter gamma-aminobutyric acid (GABA), affecting nearby neurons (serotonergic, dopaminergic, cholinergic, noradrenergic and others) via GABA receptors.
There are GABAa, GABAb and GABAc receptors; benzodiazepines act via GABAa receptors [14]. Generally, they enhance the affinity of GABA for its receptor, which makes it easier for chloride channels to open, leading to rapid hyperpolarization. This explains their sedative and hypnotic effect [15]. They do not produce general anesthesia, but can induce respiratory and cardiovascular depression. They are bound to plasma proteins and are not eliminated by dialysis. Midazolam. Midazolam is the most commonly administered short-acting, water-soluble benzodiazepine; it becomes liposoluble in the blood, rapidly crosses the hematoencephalic barrier and enters the central nervous system. Midazolam is suitable for sedation in the ICU because it can be titrated to a desired level of sedation, produces anterograde amnesia which does not affect previously learned information, offers respiratory and cardiovascular stability, and has a specific antagonist [16]. Anterograde amnesia develops almost immediately after intravenous administration and usually persists for 20-40 minutes after a single dose. It significantly influences the patients' stay in the ICU, since they do not remember unpleasant experiences. The elimination half-life varies greatly, and offset is unpredictable due to prolonged distribution. This is exactly why unpredictable awakening and extended extubation times can occur if it is administered for more than 72 hours. In comparison to propofol, midazolam induces a lower frequency of hypotension, but a greater variation in recovery time after the cessation of drug administration [17]. The antagonist is flumazenil (Anexate), which can neutralize the effect of benzodiazepines. Lorazepam. Lorazepam is a long-acting benzodiazepine with relatively low liposolubility and a relatively slow onset of action. Due to these features, it is not a good choice for rapid agitation control. It has a long elimination half-life (10-30 hours); if administered by continuous intravenous infusion it accumulates and sedation is prolonged, which is why it is more appropriate for bolus administration. The solutions used for the preparation of lorazepam may lead to hyperosmolarity, lactic acidosis and renal tubular acidosis if administered alongside the drug for an extended period or at a higher dose. If a higher dose is taken orally, it may cause diarrhea [18,19]. Other benzodiazepines. Diazepam is not commonly used for sedation of patients in the ICU. It can be administered intravenously; however, continuous administration should be avoided due to its long elimination half-life of 30-60 hours. It may lead to renal dysfunction [20]. Diazepam is also used in pediatrics, especially when administered rectally. Barbiturates Thiopentone. Barbiturates are still occasionally used in the ICU. Deep sedation with thiopentone can be used for burst suppression in status epilepticus, although today propofol is more often used. Also, continuous infusion can be used to induce a so-called "barbiturate coma" in severe trauma of the central nervous system (CNS), with the aim of decreasing cerebral metabolism. Thiopentone has an immunosuppressive effect at certain doses. Literature data indicate an influence on the serum potassium level during thiopentone-induced coma; it is necessary to monitor the serum potassium level in these cases in order to avoid additional complications [21]. Alpha2 agonists Dexmedetomidine.
Dexmedetomidine is an alpha2 agonist of the newer generation, with sedative, sympatholytic and anxiolytic properties. It shows greater affinity for alpha2 receptors than clonidine, due to which it has more pronounced sedative effects. Sedation with alpha2 agonists differs from sedation with other sedatives: patients can be awakened readily and their cognitive performance on psychometric tests is usually preserved. This is exactly why patients are more communicative and cooperate better than with other types of sedation. Dexmedetomidine reduces the postoperative vomiting reflex and enables better tolerance of the endotracheal tube in comparison to other sedatives [22]. Bolus administration has an important influence on the cardiovascular system. Initially it leads to peripheral vasoconstriction, which induces hypertension and reflex bradycardia, and later to central effects manifested as vasodilation, hypotension and bradycardia. Cases of arrhythmia and sinus arrest have also been recorded. For these reasons, bolus administration of dexmedetomidine is not recommended; caution and monitoring are necessary during administration. Dexmedetomidine allows the anesthesiologist to rapidly awaken the patient, who tolerates the endotracheal tube well without respiratory depression, which makes it an ideal sedative [23]. Clonidine. Clonidine is also from the group of alpha2 agonists; it reduces blood pressure and lowers the heart rate by reducing sympathetic stimulation. Although it was initially used as an antihypertensive drug, it has not found its expected application in the field of cardiology. Clonidine provides sedation with minimal respiratory depression and has analgesic properties at larger doses, with scarce opioid effects. It also decreases cerebral blood flow and cerebral oxygen consumption [24]. There are some data showing that sedative doses of clonidine reduce the rapid eye movement sleep phase in healthy volunteers. It is often used as a second-choice drug, with good effects in controlling the tachycardia and hypertension which occur as a consequence of sedation. The effects of the drug are significant in controlling delirium and the abstinence syndromes of opioids, benzodiazepines, alcohol and nicotine. It is excreted via the kidneys, 40-60% unchanged, and around 40% of the drug is metabolized into inactive metabolites [25,26]. Opioids Opioids such as morphine, fentanyl, sufentanil, alfentanil and remifentanil represent the basis of pain therapy in the ICU. They are agonists of the µ receptors of the CNS, which leads to analgesia and sedation, but also to respiratory depression, nausea, constipation, urinary retention and occasional confusion and bewilderment. The choice of opioid depends on the desired onset and duration of the drug's effects. It is important to pay attention to their solubility in fatty tissue, since continuous infusion may lead to accumulation and consequently a prolonged duration of drug effects. Doses are titrated according to the patient's individual needs [7]. Morphine. Morphine is still considered a very potent and very frequently used opioid analgesic. It causes depression of the respiratory, vasomotor and cough centers, but on the other hand it stimulates the vomiting center. It causes a decrease in basal metabolism and consequently a decrease in body temperature.
It also causes bradycardia, miosis, and increased intraocular and intracranial pressure [27]. Morphine is metabolized in the liver into morphine-6-glucuronide, which is eliminated much more slowly than morphine itself and crosses the blood-brain barrier more slowly, which, as an aftereffect, prolongs its impact. The analgesic effect is the most important characteristic of morphine. In a dose-dependent manner it raises the pain threshold, and it also changes the emotional reaction to pain and causes general sedation [15]. Euphoria occurs in approximately half of patients, whereas in some patients dysphoria is possible as well [15]. Due to its positive characteristics, as well as its efficiency, morphine still represents "the gold standard" in the postoperative period. Fentanyl. Fentanyl is a synthetic opioid which is 100 times more potent than morphine. It has, above all, a wide application in the treatment of intraoperative pain. In the case of prolonged infusion, accumulation occurs, which should be borne in mind [28]. Sufentanil. Sufentanil is the opioid with the most powerful analgesic effect; it is 500-1000 times more potent than morphine. It is suitable for sedation, since at mild dosages it does not compromise hemodynamic stability. It is metabolized in the liver; the metabolites are inactive and are eliminated through the kidneys. Alfentanil. Alfentanil is an analogue of fentanyl with approximately 1/10 of fentanyl's potency, but it is a short-acting opioid used in a single dosage. It is extensively metabolized in the liver and has a small volume of distribution. A small fraction is excreted unchanged, whereas the greater part is eliminated in the form of metabolites through the urine [7]. Remifentanil. Remifentanil is a popular opioid analgesic of the newer generation whose metabolism does not depend on liver function. Studies show a higher quality of sedation, good hypnotic effects and a shorter time to extubation. When using this medicine, it is very important to know its characteristics: bolus application is unnecessary and potentially hazardous due to bradycardia and hypotension [29]. Ketamine Ketamine is an antagonist of N-methyl-D-aspartate receptors. It can be used for the induction and maintenance of anesthesia, as well as for sedation in the ICU. It causes a condition which, due to its symptoms, is known as "dissociative anesthesia". In some respects it might be an ideal sedative, since it has both sedative and analgesic effects. It is also significant that it preserves cardiovascular stability and causes bronchodilation. However, due to its association with hallucinations, its use as a sole agent in the ICU is not recommended. Ketamine is useful for patient comfort during painful procedures within the scope of intensive care, especially in pediatrics (punctures, drainages) and during the dressing of burns. It is also useful in trauma patients, for maintaining the tone and reflexes of the respiratory musculature and for preserving hemodynamic stability. It is frequently used in prehospital conditions, as well as a supplement to opioids in the control of postoperative pain. Ketamine was traditionally contraindicated in cases of increased intracranial pressure; however, contemporary attitudes have changed, since, where there is a risk of hemodynamic instability, ketamine might be a very useful medication.
It is also used in very severe bronchospasm, although its bronchodilator effect is rather small; inhalation anesthetics and propofol are more efficient in this respect [30,31]. Inhalation anesthetics Of the inhalation anesthetics, isoflurane, sevoflurane and desflurane are most frequently used for sedation. According to some studies, isoflurane has provided efficient, safe sedation for up to 96 hours, with quicker awakening than midazolam and similar awakening to propofol, although with an increased number of patients with delirium. Isoflurane is also a powerful bronchodilator and has a significant role in the therapy of status asthmaticus [32]. Desflurane has also shown faster awakening after short-term postoperative sedation (<12 h), as well as quicker mental recovery compared to propofol. There are special systems for the application of inhalation anesthetics (sevoflurane) in the ICU. Antipsychotics (tranquilizers) Neuroleptics are used in the treatment of agitation caused by hyperactive delirium; options include haloperidol and oral antipsychotics such as chlorpromazine, olanzapine and risperidone. Haloperidol is most frequently used since it can be applied intravenously, often as a preventive measure, while paying attention to adverse effects on the cardiovascular system. Patients should be monitored for arrhythmias, such as torsade de pointes, and it should be applied with caution in patients with a prolonged QT interval. Dosing is performed according to the individual needs of the patient. Contemporary guidelines for the control of delirium recommend short-term application of haloperidol or olanzapine, with dexmedetomidine recommended for prevention [33]. Non-opioid analgesics Nonsteroidal anti-inflammatory drugs (NSAIDs) are used as a supplement to opioids in the therapy of pain in certain patients in the ICU. They must be used with caution, since they might cause kidney damage and erosion of the gastric mucosa due to their inhibition of prostaglandin and prostacyclin production. They are also associated with a higher risk of myocardial infarction, heart failure and stroke [34]. Adverse effects of sedatives Prolonged sedation is an intervention whose side effects are often underestimated. It causes hypotension and decreased perfusion, prolongs mechanical ventilation and, in the worst case, leads to the need for tracheostomy. Apart from this, prolonged sedation causes prolonged respiratory support, a higher frequency of delirium, immunosuppression, deep vein thrombosis and an increased risk of nosocomial pneumonia, all of which lead to a prolonged recovery time [35]. On the other hand, insufficient sedation causes general discomfort, as well as hypertension, tachycardia, hypercatabolism, increased oxygen consumption, atelectasis, infection and psychological trauma [36]. Consequently, doctors in the ICU must be very familiar with the medicines applied in therapy in order to achieve the desired effects. As stated above, an ideal sedative does not exist; every medicine has certain side effects, and it is the task of every doctor to estimate whether the application of a medicine brings more benefit than harm to the patient.
Sedation and functions of the Central Nervous System A great number of clinical studies have analyzed the relationship between the use of benzodiazepines and the deterioration of CNS functions, especially in severely ill patients (critically ill, surgical, trauma and burn patients) treated in the ICU for long periods of time. Data on the effects of opioids differ a great deal. Sedation with dexmedetomidine, in comparison with benzodiazepines, decreases the likelihood and the duration of CNS dysfunction. The ABCDE strategy, which stands for Awakening and Breathing trials (AB), Choice of sedation (C), Delirium monitoring and management (D), and Early Exercise (E), may decrease the incidence of acute and prolonged dysfunction of the CNS [37]. Similarly, using the bispectral index (BIS) for monitoring the depth of sedation makes it possible to establish the level of sedation: the numerical expression of the impact of sedatives on the CNS (values above 60-80) may correctly represent the level of consciousness, i.e. the level of wakefulness [38]. Early mobilization Contemporary studies show that early mobilization has a significant impact on patients' functional outcomes, safety, and the length of stay in the ICU. Early physical therapy significantly reduces the incidence of delirium in the ICU. In the same manner, a protocol of early mobilization decreases the use of sedatives and analgesics and supports enhanced recovery after surgery. In patients on mechanical ventilation, every day without sedation, alongside physical therapy, significantly improves functional status and shortens the time spent in the ICU [39,40]. Conclusion Sedation is a very significant issue in the management of critically ill patients. Consultation among doctors, with the manner of sedation tailored to the specific and individual characteristics of the patient, provides safe and adequate treatment. The currently available sedatives used in intensive care units are acceptable and widely used, but they also have limitations. Instead of searching for an ideal sedative for critically ill patients, their application should be based on the principles of the pharmacology and pharmacokinetics of the medicines. By establishing the aims of sedation according to the individual characteristics and current condition of the patient, it is possible to provide a rational treatment strategy for each patient in the intensive care unit. In the same way, early mobilization of patients who are still in the intensive care unit reduces the occurrence of intensive care unit delirium and consequently reduces the use of sedatives and analgesics, thus contributing to enhanced recovery and a shorter stay in the intensive care unit.
Craniovertebral Junction Anomalies: Changing Paradigms, Shifting Perceptions: Where Are We and Where Are We Going? The craniovertebral junction, especially with bony anomalies like basilar invagination (BI) and atlantoaxial dislocation (AAD), has often been considered the last bastion of the spine surgeon. Some tread with caution; others decide to veer clearly away from it. Perhaps the single most important factor contributing towards this thinking has been the inability of physicians to look at this pathology from a different window. The treatment paradigm has always been directed towards (1) classifying these pathologies as reducible or irreducible, based on flexion/extension digital X-rays; (2) treating irreducible pathologies with transoral excision of the dens followed by posterior fixation; and (3) treating reducible pathologies with wiring techniques. For several decades, these techniques became so standard that it was felt that no other treatment was possible for these complex pathologies. Let's explore how these myths were systematically exploded. Irreducible AAD & BI Cannot Be Reduced: The concept that AAD and BI cannot be reduced is fast disappearing. Following the pioneering work of Goel et al., 1,2 it is now clear that nothing is truly 'irreducible' as long as there is no bone fusion (either ventral or dorsal). It is thus best to reduce and realign the deformity intraoperatively. C1-2 fusion has also been demonstrated to be the best option for achieving a stable, short-segment fixation. The world over, this technique is now accepted. The surgical technique should of course be performed with a certain degree of caution, as, if one is not careful, it can lead to uncomfortable blood loss. It is also mostly advised in cases where the C1 is not fused with the occiput (see below). Distraction Is the Only Movement That May Be Achieved With Spacers; No Other Motion Is Possible: Goel et al. 2 have shown that distraction is eminently possible and can correct BI. However, it was often seen that there were other challenges to overcome. Some of them include: (1) both BI and AAD resulting in severe superior and posterior tilt of the dens, causing severe cord compression; (2) C1-2 joints becoming completely vertical; (3) an anomalous vertebral artery positioned directly over the joint. All of the above were situations where the C1 arch was generally fused with the occiput.
These challenges were addressed by the author with the development of the technique of distraction, compression, extension and reduction (DCER), along with all its modifications (joint remodeling, extra-articular distraction, and vertebral artery mobilization). The author also described the concept of pseudo-joints and how they may be used for reduction and realignment. We also introduced the concept of 3-axis reduction, where C2 could be aligned through reduction in multiple axes of motion. [3][4][5][6][7][8] The Only Treatment for Chiari Malformation Is Foramen Magnum Decompression With or Without Duroplasty: Behari et al. 9 in their impressive review of the literature have shown that significant improvement (up to 70%) may be achieved with foramen magnum decompression, both with and without duroplasty. 10 They also stressed the low mortality of this procedure (<1%). This is supported by ample objective imaging evidence. However, Behari et al. 9 concluded from their extensive review that fixation is necessary in cases of Chiari associated with AAD and BI. Goel et al. 11,12 have recently suggested that C1-2 fixation should be performed for all cases of Chiari malformation. He proposed that tonsillar herniation is secondary (and a protective and compensatory mechanism) to the subtle or gross instability which occurs in all cases of Chiari malformation. Thus, an instrumented fixation without foramen magnum decompression is very effective for all Chiari malformations. While Dr. Goel is quite positive about this method of treatment, it is yet to gain universal acceptance as a standard mode of treatment for Chiari. Well-designed, randomized, double-blinded studies would be necessary to establish the role of such a treatment modality. But the fact remains that the actual cause of this ambiguous pathology is elusive. Is Occipital Purchase Justified: Goel 13 has been skeptical about occipital fixation. This was also well supported by Sanjay Behari 9 in his article, who mentioned that occipital fixation is known to lead to a greater incidence of biomechanical instability (thin bone, longer lever of fixation). 14 Behari also pointed out that occipitocervical fusion usually excludes the C1 arch. However, he also cautioned that occipital fixation may still be required in the presence of significant bleeding from the paravertebral venous plexus; a very high BI; condylar hypoplasia and an occipitalised atlas, where the occipital condyle and the lateral mass of the atlas are fused on either side; gross C1-2 rotation or vertical C1-2 joints with unilateral C1 or C2 facet hypoplasia; as well as subaxial scoliosis, where insertion of C1-2 screws may endanger the neuraxis or the ipsilateral vertebral artery. Chandra et al. [3][4][5][6][7][8] in his earlier articles used occipital fixation in all cases of his described technique DCER, both for the reasons mentioned above and for the advantage of maintaining a long lever arm, which is one of the fundamental bases of DCER. He also showed that in vertical joints, posterior-alone fixation may be done with the technique of extra-articular distraction. The Dens Always Dislocates Backward: Another fundamental concept of AAD has been that the dens always dislocates posteriorly. Goel 13 has shown that it is not the dens that dislocates but the joints. When the C1 joint is dislocated posteriorly, AAD is produced (type I). When the C1 joint is dislocated forwards over C2 (type II), there may be no AAD, but the joint is still unstable, hence the spine has to be fixed.
He added a third interesting category, type III, stating that even in cases where there is no joint dislocation, but the patient has Chiari and 'loose joints' are observed at surgery, this is enough to diagnose bony instability and the patient should undergo bony fusion. What is interesting is that both the type II and III examples shown by Dr. Goel had severe platybasia, which was not commented upon. This thinking is of course a shift of paradigm, and more studies will be required to justify this hypothesis. Can the Concept of C1-2 Instability Be Extended to the Subaxial Cervical Spine: Goel et al. 13 finally end their article stating that ossified posterior longitudinal ligament and even Hirayama's disease occur because of subaxial joint instability, and that a multiple long-segment joint screw fixation without laminectomy is enough to treat the pathology. Finally, I will conclude by saying that "Change is the only thing that is constant." The past decade has seen a shift in the paradigm of treatment of the craniovertebral junction. Some changes, like all changes, have evoked intense criticism and some appreciation. But nevertheless, one cannot ignore them. Only time will tell. I wish the reader a happy reading of all these articles.
2019-07-03T13:05:23.077Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "0c1ddbe316f52c6cd44175e1fd579c60db22eee5", "oa_license": "CCBYNC", "oa_url": "http://www.e-neurospine.org/upload/pdf/ns-19edi-004.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0c1ddbe316f52c6cd44175e1fd579c60db22eee5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
148041
pes2o/s2orc
v3-fos-license
Host Specificity in the Honeybee Parasitic Mite, Varroa spp. in Apis mellifera and Apis cerana The ectoparasitic mite Varroa destructor is a major global threat to the Western honeybee Apis mellifera. This mite was originally a parasite of A. cerana in Asia but managed to spill over into colonies of A. mellifera which had been introduced to this continent for honey production. To date, only two almost clonal types of V. destructor from Korea and Japan have been detected in A. mellifera colonies. However, since both A. mellifera and A. cerana colonies are kept in close proximity throughout Asia, not only new spill overs but also spill backs of highly virulent types may be possible, with unpredictable consequences for both honeybee species. We studied the dispersal and hybridisation potential of Varroa from sympatric colonies of the two hosts in Northern Vietnam and the Philippines using mitochondrial and microsatellite DNA markers. We found a very distinct mtDNA haplotype equally invading both A. mellifera and A. cerana in the Philippines. In contrast, we observed a complete reproductive isolation of various Vietnamese Varroa populations in A. mellifera and A. cerana colonies even if kept in the same apiaries. In light of this variance in host specificity, the adaptation of the mite to its hosts seems to have generated much more genetic diversity than previously recognised, and the Varroa species complex may include substantial cryptic speciation. Introduction The Western honeybee Apis mellifera, originally native to Europe, Africa, and the Middle East, has been repeatedly introduced in almost all regions of the world due to its importance for apiculture [1]. Introductions into Eastern Asia have been ongoing for over a century with most drastic negative consequences for global beekeeping [2]. Following its introduction, A. mellifera came into contact with a broad range of parasites and pathogens infecting the native Asian honeybees. We found reproducing mites in both drone and worker cells. The sampling was conducted in the apiary of the University of Los Banos (Philippines) and in Dien Bien and Son La (Vietnam) in 2013. In these locations, both honeybee host species were kept in sympatry on the same or on adjacent apiaries, within honeybee flight distance range (<1000 m). Additionally, mites were collected in 2013 on the island of Cat Ba (Vietnam), a natural reserve where only A. cerana occurred. Finally, dead mites were sampled from boards placed at the bottom of three Western honeybee colonies in Lipa city, Philippines in 2015. All mites were directly transferred into 99% ethanol and stored at -20°C shortly after sampling. The DNA of individual mites was extracted using a standard Phenol-Chloroform protocol [15]. The quality and amount of each DNA extract was determined using a Nanodrop spectrophotometer (Thermo Fisher Scientific Inc., Wilmington, USA). All in all, 372 mites were analysed (263 from 20 A. mellifera colonies and 109 from 14 A. cerana colonies) (Table 1). Mitochondrial DNA analysis A 950 bp fragment of the mitochondrial Cytochrome Oxidase I gene (coxI) was amplified and sequenced using the coxI primer set (10KbCOIF1 and 6,5KbCOIR, [14]) for three mites per host species and location. The resulting sequences were trimmed using the software BIOEDIT [16] and subsequently aligned with the software MEGA V. 6.0 [17]. These fragments were compared with the NCBI database using the NCBI-BLAST tool [18] to infer which of the previously described V. destructor haplotypes best matched our samples.
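The haplotype comparison underlying these analyses reduces to pairwise distances over aligned sequences. The following Python sketch of the uncorrected p-distance is an illustration only: the study itself used MEGA and NCBI-BLAST, and the sequences below are hypothetical placeholders.

```python
# Sketch: pairwise p-distance between two aligned sequence fragments.
# Illustration only; not the MEGA implementation used in the study.
def p_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of differing, non-gap sites between two aligned sequences."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs) if pairs else 0.0

hap_x = "ATGCTAGGATCC"   # hypothetical aligned coxI fragments
hap_y = "ATGTTAGGATCC"
print(f"divergence: {100 * p_distance(hap_x, hap_y):.2f}%")
```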
The amount of divergence between all distinct haplotypes generated in this study was calculated using the software MEGA V 6.0 [17]. In parallel, a maximum likelihood tree was built using the same software. Finally, a median-joining network was constructed using the software NETWORK v. 4.6.1.2 [19]. Microsatellite DNA analyses All Varroa were genotyped at six polymorphic microsatellite DNA loci (VD112, VD125, VD152 from [20], and VJ275, VJ292 and VJ295 from [13]) using the Fragment Profiler software V. 1.2 of the MEGABACE DNA Analysis System (GE Healthcare Life Science, Buckinghamshire, England). The number of alleles (NA), allelic richness (R) and the observed heterozygosity (H O ) were estimated for each sampling location and host species using the software Fstat V. 2.9.3 [21]. Hardy-Weinberg equilibrium tests were performed within samples for each marker using the former software. A Principal Component Analysis was conducted on the overall microsatellite data using the R package Adegenet [22] to identify the main genetic clusters among the different mite samples based on the individual mites' genotypes. The fixation indexes (F ST ) between and within the main clusters provided by the PCA analysis were estimated using Fstat V. 2.9.3 [21]. In addition, Jost's population differentiation index (D, [23]) was estimated using the software SMOGD [24] between all locations and honeybee host species. Finally, AMOVAs were performed using the microsatellite data to identify the relevant level of Varroa genetic structuring within the PCA clusters (between locations, between colonies within locations and within colonies) using the software Arlequin V. 3.5.1.3 [25]. Mitochondrial DNA analysis All sequences generated in this study were registered in the NCBI database under accession numbers KR528378 to KR528387 (S1 Table). Origins of the mites from Vietnam. The mtDNA sequences of the Vietnamese samples clearly segregated according to host species. Both haplotypes were however highly similar (Fig 1 and S1 Fig). The sequences of the Vietnamese mites sampled on A. mellifera colonies were all identical to the previously described Korean AmK1-1 haplotype (accession GQ379056; [14]). The mites from A. cerana colonies showed more variability both within and between locations (Fig 1, Table 2). The previously described haplotype AcV1-1 (accession GQ379061; [14]) matched the haplotype of mites sampled on A. cerana in Dien Bien and Son La. Our samples from Cat Ba were close to the AcC1-1 haplotype from Guangdong province, in Southern China (accession GQ379065; [14]). The haplotypes in Vietnam were also similar to the Japanese haplotype (accession GQ379074.1; [14]), with only five substitutions separating it from the haplotype of mites sampled at Son La. This was in the same range as the variance found within the Vietnamese samples, where the haplotype of the mites sampled at Cat Ba differed by five substitutions from the Son La haplotype. The distances within the Vietnamese haplotypes were not significantly larger than those separating the Korean and the Japanese haplotypes (t test, p > 0.05) but were significantly larger than those within the haplotypes sampled in the Philippines (t test, p < 0.001) (Table 2). Origins of the mites from the Philippines. The coxI sequences we obtained from the mites sampled in A. cerana and A. mellifera pupae cells in Los Banos were all identical to the Luzon 1 sp. (accession AF106894.1, [7]).
In that location, the mites we sampled all shared the identical native mite haplotype irrespective of host species, unlike in Lipa city where all mites sampled in A. mellifera colonies were of the ubiquitous Korean AmK1-1 haplotype (accession number GQ379105.1; [14]). Differences between the mites from Vietnam and Los Banos. The divergence levels between the mites we sampled in Los Banos and in Vietnam were high, with the sequences differing on average by 3.50% ± 1.30 SD and 3.60% ± 1.30 SD from the A. mellifera and A. cerana mites from Vietnam, respectively (Table 2). Microsatellite marker analysis The microsatellite markers used in this study were highly polymorphic, with an average of 18.83 ± 3.33 alleles (Table 3). None of the six markers was in Hardy-Weinberg equilibrium, due to a lack of heterozygotes, which is expected as a result of obligate brother-sister mating and inbreeding in the Varroa life cycle. Principal Component analysis. Based on the microsatellite data of all mites, the first two components obtained with the PCA together explained 38.32% of the genetic variation in our samples (first component: 30.67% and second component: 7.45%, Fig 2). When the individual mite genotypes were plotted on these two main axes, three distinct clusters could be observed, matching the three different haplotypes described in the mitochondrial DNA analysis. The first one consisted of the mites sampled in A. cerana colonies in the three populations in Vietnam ("Vietnamese cluster"). A second cluster included the Varroa collected in A. mellifera in Vietnam and Lipa city ("Korean cluster"). Finally, the third cluster comprised the parasites from the colonies of A. mellifera and A. cerana from Los Banos ("Philippine cluster"). Comparison of the Genetic Diversity between the different Varroa types. The mites belonging to the Vietnamese type had a significantly higher allelic richness (14.13 ± 2.04) compared to the Varroa mites of the Korean cluster (2.84 ± 1.39) and the Philippine cluster (R = 3.63 ± 0.91, t test: p = 0.001) (Table 1). The overall heterozygosity in the mites was low. Genetic structuring of the Varroa haplotypes. The genetic differentiation within the clusters was low and non-significant for the mites sampled in A. cerana and A. mellifera colonies in Los Banos (F ST = 0.052, p > 0.05) (Table 4). However, moderate and significant F ST values were obtained when comparing the different sampling locations where the Korean cluster was found (F ST = 0.172, p < 0.05) and the different sampling locations where the Vietnamese cluster was found (F ST = 0.205, p < 0.05). Much higher and highly significant F ST values were found when comparing the three clusters (Table 4). The AMOVA revealed that the geographic location had a highly significant importance for the mites of the Vietnamese cluster (15.03%, p < 0.001) and of the Korean cluster (15.12%, p < 0.01) (Table 5). Notably, the genetic distance in the Korean cluster was lower within the two Vietnamese populations (D = 0.008) than between the mites from Lipa city and the two Vietnamese populations (D = 0.023) (S2 Table). The amount of genetic diversity varied significantly within locations for both the Vietnamese and the Korean clusters (16.72% and 15.85%, respectively, p < 0.001). Finally, the largest source of variation in these two groups resulted from the differences among mites within colonies (68.24% for the Vietnamese and 69.02% for the Korean clusters, p < 0.001).
For the mites sampled in Los Banos, only this latter level was significant (80.47%, p < 0.05), but not the differences among hosts or among colonies within host. Hybrid detection. We found no evidence of direct hybridization between mites of the two host species. Only eight Varroa mites in the whole dataset carried alleles that were also found in mites sampled in the alternative host species. These individuals were exclusively found in A. cerana colonies in Vietnam and carried one or two alleles that were otherwise specific to the Korean haplotype in A. mellifera colonies. However, none of these mites were direct hybrids, as all other microsatellite loci had private alleles of the Vietnamese haplotype. Moreover, these few shared alleles were found in the homozygous state, suggesting that they were independent homoplastic alleles of the same length as in the mites sampled from A. mellifera, and not a result of hybridisation. Discussion In this study, we found that the Varroa mite shows different patterns of host specificity between A. cerana and A. mellifera in the Philippines and in Vietnam. Whereas the native Philippine mite was found in colonies of both host species in Los Banos, we found strong host specificity and complete reproductive isolation between the Varroa types parasitizing A. cerana and A. mellifera colonies in Vietnam. The Korean haplotype was only found in A. mellifera colonies but never in any A. cerana colony we sampled in Vietnam or the Philippines. Origins and diversity of the Varroa mites from Vietnam Our findings support the suggestion of Fuchs et al. [26], who reported an almost complete host specificity of the two Varroa lineages in Northern Vietnam, suggesting sympatry of two host-specific Varroa types that do not hybridize. Despite only a minute divergence of the Vietnamese coxI haplotype from the Japanese and also the Korean V. destructor haplotype, our nuclear DNA analyses suggest a complete genetic isolation of the mites from the different host species. Not only did we not detect any indication of hybridization, we also failed to sample mites typical for the one host species in colonies of the other host species in Vietnam. Hence the Korean haplotype found in A. mellifera, which may have switched hosts only about 60 years ago [4], is not able to infect the different populations of its original host species established in Northern Vietnam. Our results show that the arms race between Varroa and its hosts has led to the evolution of very specialized mites. Although the underlying mechanisms of this coevolution are not well understood, previous work suggests that the mite is able to mimic the cuticular hydrocarbons of its host [27][28] to avoid the hygienic behaviour of the honeybees [29]. Even though this trait appears to be plastic [30], the Korean haplotype is apparently not able to overcome the defenses of the populations of A. cerana found in Vietnam. The levels of genetic diversity and genetic structuring among the three sampling locations in Vietnam were higher in the mites sampled in A. cerana colonies than between the two locations where we sampled in A. mellifera colonies. This matches reports of the global spread of very few, genetically almost identical V. destructor lineages in A. mellifera [13]. Origins and diversity of the Varroa mites from the Philippines The archipelago of the Philippines accommodates distinct and diverse A. cerana host populations that show haplotype variation at the subspecies level compared to mainland Asian populations [31][32].
This clearly sets the stage for independent coevolution between mites and hosts and may explain the large genetic differences between mainland Asian and Philippine mites previously described by Anderson and Trueman [7]. In that study, Varroa from three provinces of the Philippines were analyzed: A. cerana colonies were sampled in two provinces on the island of Luzon, Batangas (the adjacent province located south of our sampling location) and San Fernando (located about 100 km to the north of our sampling location), and at a third location in Mindanao, a different island. Each of these three sampling locations hosted a distinct mite haplotype in A. cerana colonies, grouping apart from the rest of the V. destructor sequences coming from A. cerana mites sampled in other Asian countries. However, the mites sampled from A. mellifera colonies in the Philippines all shared the Korean haplotype, suggesting a separation of native and introduced mite populations, as we observed in Vietnam in this study. We also found that the sequences of the mites we sampled in A. cerana in the Philippines differed significantly from the rest of the haplotypes previously described [14]. However, contrary to Anderson and Trueman [7], we also found the Luzon 1 haplotype in the Philippine A. mellifera colonies in Los Banos. In addition to the lack of mitochondrial DNA variability in Los Banos, we also failed to detect any level of subpopulation structuring in this Varroa population. In fact, the microsatellite markers we analyzed suggested that the mites readily infect both host species. The presence of the native Philippine type in A. mellifera shows that more types than previously thought may be able to infect both Apis species. A clearer picture on the Varroa genetic and functional diversity By coupling both mitochondrial and nuclear DNA approaches, we were able to infer the origin of the mites, but also to detect more functional mechanisms, such as the hybridization potential of different Varroa types in their native and non-native ranges. The substantial differences between the genetic diversity and infestation abilities of the mites sampled in Los Banos and in Vietnam may have far-reaching consequences for our understanding of the host-parasite biology of honeybees and Varroa. The mites we sampled in Los Banos show all the genetic prerequisites to qualify as a novel Varroa species. Although Anderson and Trueman [7] did not find sufficient morphological distinctiveness (based on body size) to completely separate this "Luzon haplotype 1" from the other Varroa species, we found further evidence that the mites from Los Banos differ markedly from the four other previously described Varroa species. Both the mitochondrial DNA sequence divergence and the ability to parasitize both A. cerana and A. mellifera render this Varroa type a potential novel species. In addition, the Varroa types we found in Vietnam also appear to segregate as if they were distinct species. Although the mitochondrial haplotypes were rather similar to the Japanese haplotype of V. destructor, the microsatellite DNA markers showed a complete separation of these two mite groups even when kept on the same apiary. Thus, despite the fact that the number of substitutions is well below the 2% coxI divergence level typically considered as separating two species [33], there is no indication of any hybridization of nuclear markers.
Given the Biological Species Concept of reproductive isolation between two sympatric groups [34][35], the mites with the Korean and the Vietnamese haplotypes could also be considered as two distinct species. Solignac et al. [13] estimated the divergence time between the Korean and the Japanese Varroa types at between 5,000 and 15,000 years ago. Assuming a constant mutation rate and population size, we can estimate the divergence time among the various haplotypes found in our study. Four substitutions in the coxI sequence separate the Japanese from the Korean haplotype [14], resulting in an estimated 1,250 to 3,750 years of divergence per substitution. Similarly, the divergence time of the two types found in Vietnam (which differ from the Japanese type by between five and eight substitutions) would be between 10,000 and 30,000 years. Considering the short generation time of the Varroa mite [3], this is a rather long speciation period. This could explain why the ubiquitous Korean haplotype can no longer parasitize A. cerana in Vietnam in spite of ongoing spill overs and spill backs in Korea and in Japan [14]. Following this reasoning, 50,000 to 150,000 years would separate the mites from the Philippines and the two Vietnamese haplotypes. This falls within the Pleistocene, during which A. cerana may have arrived in the archipelago of the Philippines [31][32]. The ancestor of the Varroa mite found in the Philippines nowadays may have evolved in allopatry from the mainland populations since then. Conclusions We here provide an example of how host-parasite coevolution can rapidly lead to speciation within a short time span. The initial extreme selection on the Korean mites after the host switch allowed only a very limited number of individuals to reproduce in the novel host species less than a century ago. Subsequently, the constant brother-sister mating of Varroa has led to almost clonal population-specific mite types, which have differentiated considerably with time as they were taken away from their native region into allopatry. These extreme characteristics of the mite set the stage for the potential alloxenic speciation [36] of highly specific and most virulent types. Keeping the introduced Western honeybee alongside the native Asian species is a most unfortunate example of transhumance having devastating consequences by promoting the global spread of parasites and associated viruses [8,[37][38][39]. Since apiculture has facilitated the global transmission of Varroa, selection will inevitably favor the most virulent types in A. mellifera, as seen for the global spread of the Korean haplotype. Yet, given the tremendous increase of A. mellifera beekeeping in Asia and the wide diversity of Varroa in the native A. cerana populations, it seems possible that more mite types might switch to the Western honeybee. At the same time, the spill back of virulent Varroa strains from A. mellifera to A. cerana may also become a potential risk for the apiculture of these two economically and ecologically crucial species. Although some Varroa types are apparently strongly specific (Vietnam), others are more generalist (Philippines). If those generalist mites were to spread to mainland Asia, they would likely invade both A. mellifera and A. cerana colonies.
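The arithmetic behind the divergence-time estimates above can be made explicit. A minimal Python sketch, assuming the constant-rate calibration from Solignac et al. [13] (four substitutions per 5,000 to 15,000 years); the numbers are those quoted in the text, used here purely for illustration:

```python
# Sketch of the divergence-time arithmetic, assuming a constant
# substitution rate calibrated on the 4-substitution, 5,000-15,000 year
# Korea-Japan split of Solignac et al. [13].
YEARS_PER_SUBSTITUTION = (5_000 / 4, 15_000 / 4)  # 1,250 to 3,750 years

def divergence_window(n_substitutions: int) -> tuple[float, float]:
    """Rough divergence-time window for a given substitution count."""
    low, high = YEARS_PER_SUBSTITUTION
    return n_substitutions * low, n_substitutions * high

# The Vietnamese types differ from the Japanese haplotype by 5-8 substitutions.
for n in (5, 8):
    low, high = divergence_window(n)
    print(f"{n} substitutions: ~{low:,.0f} to {high:,.0f} years")
```

Scaling the same window to the much larger divergence of the Philippine mites gives the 50,000 to 150,000 year figure quoted above.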
2017-07-08T20:30:21.673Z
2015-08-06T00:00:00.000
{ "year": 2015, "sha1": "93e6a14024a23dec45ec63031cec42fad2373d8a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0135103&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "93e6a14024a23dec45ec63031cec42fad2373d8a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
21675794
pes2o/s2orc
v3-fos-license
The Power of Genetic Algorithms: what remains of the pMSSM? Genetic Algorithms (GAs) are explored as a tool for probing new physics with high dimensionality. We study the 19-dimensional pMSSM, including experimental constraints from all sources and assessing the consistency of potential signals of new physics. We show that GAs excel at making a fast and accurate diagnosis of the cross-compatibility of a set of experimental constraints in such high dimensional models. In the case of the pMSSM, it is found that only ${\cal O}(10^4)$ model evaluations are required to obtain a best fit point in agreement with much more costly MCMC scans. This efficiency allows higher dimensional models to be falsified, and patterns in the spectrum identified, orders of magnitude more quickly. As examples of falsification, we consider the muon anomalous magnetic moment, and the Galactic Centre gamma-ray excess observed by Fermi-LAT, which could in principle be explained in terms of neutralino dark matter. We show that both observables cannot be explained within the pMSSM, and that they provide the leading contribution to the total goodness of the fit, with $\chi^2_{\delta a_\mu^{\mathrm{SUSY}}}\approx12$ and $\chi^2_{\rm GCE}\approx 155$, respectively. I. INTRODUCTION Experimental constraints on supersymmetry continue to make the simplest realisations of the Minimal Supersymmetric Standard Model (MSSM) less credible. One is forced to consider less constrained alternatives such as the pMSSM [1]. This is the most general version of the R-parity conserving MSSM under the assumption of CP conservation, Minimal Flavour Violation, and degenerate first and second generation sfermion masses 1 . It has a multi-dimensional parameter space (23 dimensions in total, consisting of 19 fundamental parameters and 4 nuisance parameters). Analysis of such high-dimensionality models becomes very difficult. The traditional technique of "slice-and-scan" that suffices for, for example, the Constrained MSSM (CMSSM) is entirely infeasible. Typically one uses Monte-Carlo and nested sampling approaches as in Refs. [2][3][4][5]. It is probably fair to say that, even if analysis can be made feasible by these methods, it is not always clear what one should conclude from the results. Suppose for instance that upon scanning a 23D cube of the parameter space of the pMSSM one found that in every 2-dimensional slice the allowed region occupies the inside of a circle that just touches the edges of the cube. This "allowed ball" would appear to almost fill the 23D cube inside which it just fits, and yet it would actually occupy only 0.4% of the volume. This is the infamous "large dimensionality problem": taking slices of a high dimensional object inevitably gives a very misleading impression of its structure. On a more practical level, how can one attempt to falsify a model such as the pMSSM, when superficially it seems that virtually any set of observables could be accommodated somewhere in the parameter space? And compounding the problem associated with the multi-modality of variables is the multi-modality of observables. If several suitable areas of parameter space are discovered, do they represent a single cluster or several disjoint favoured regions? Do they give a prediction for the spectrum? Which observables have most influence over the favoured regions? All of these issues suggest the use of heuristic search and visualisation techniques. In this paper we consider the effectiveness of Genetic Algorithms (GAs) in assessing and analysing the pMSSM.
GAs seek optimal solutions by evolving a population of models in the search-space which, by means of a suitable definition of "fitness", is transformed into a fitness landscape [6][7][8][9][10][11]. In the case of models such as the pMSSM the optimisation in question is of course to find the minimum overall χ 2 , whose inverse can therefore serve directly as a measure of the fitness. There are several advantages of GAs that this study will highlight. The first is simply the extreme efficiency of such techniques versus traditional scanning techniques, or even more sophisticated Bayesian Inference techniques, such as that employed by MultiNest [12][13][14]. Indeed, compared to the latter, GAs can find a best fit point orders of magnitude more quickly, because the number of models that need to be built is considerably smaller 2 . As a practical demonstration, we show that with this approach it is easily possible to exclude the pMSSM, and to identify the main culprit that apparently cannot be reconciled with experiment in any of the parameter space, namely (g − 2) µ . It becomes clear that a GA can efficiently find regions of parameter space in which the χ 2 contributions of all other observables are reasonable, with (g − 2) µ standing out as the dominant contribution. In the present case only ∼ 10 4 models need to be evaluated in order to reach this conclusion 3 . While they are not exhaustive in the usual sense, GAs do probe the entire search-space, albeit in a highly non-linear way [6]. Therefore, one can now be confident that the pMSSM does not have any remaining regions of parameter space that harbour better solutions for (g − 2) µ . However, no other observable is particularly problematic to fit. Another advantage of GAs arises from the fact that they are a dynamical process. It has been argued that whether a problem is "GA-hard" or "GA-easy" depends on the "fitness-distance correlation" in the parameter space [15,16]. Problems that are GA-hard (or that are not tackled well) resemble "needle-in-a-haystack" problems, in which all incorrect solutions are equally bad and one has as much chance of landing on the correct solution as when performing a random scan. In this context, it is important that the GA is performed so that a "fitness landscape" is established as a function of continuous parameters such as χ 2 . Any hard experimental exclusions are essentially step-functions in the fitness landscape that can locally weaken the fitness-distance correlation. This fitness-distance correlation is made manifest by the flow of the population as it evolves in the GA. By observing this flow over successive generations, one sees the pull of various observables. Naturally those that are well constrained experimentally within the parameter space, for example soft-terms such as A t that govern the Higgs mass, exert a strong pull (through the contribution to χ 2 ), and the population evolves rapidly towards suitable values. Conversely, the limited precision in the measured Higgs couplings leads to less focused values for e.g. tan β. In the SUSY context, this can be thought of as a measure of the fine-tuning in the theory. Such flows can incidentally be understood by taking slices of the space of observables, where it becomes clear if a particular observable is becoming significantly focussed.
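The fitness-distance correlation invoked here can be quantified directly: it is simply the correlation, over a sample of points, between fitness and distance to the optimum. A minimal Python sketch on a toy 23-dimensional landscape (illustration only; the landscape and sample are placeholders, and statistics.correlation requires Python 3.10+):

```python
import random
import statistics as stats

def fitness_distance_correlation(points, fitness, distance):
    """Pearson correlation between fitness and distance to the optimum.
    A strongly negative value indicates a 'GA-easy' landscape."""
    f = [fitness(p) for p in points]
    d = [distance(p) for p in points]
    return stats.correlation(f, d)

# Toy landscape: optimum at the origin of a 23D cube, fitness = 1/(1+chi2).
chi2 = lambda p: sum(x * x for x in p)
sample = [[random.uniform(-1, 1) for _ in range(23)] for _ in range(500)]
print(fitness_distance_correlation(
    sample,
    fitness=lambda p: 1.0 / (1.0 + chi2(p)),
    distance=lambda p: chi2(p) ** 0.5))
```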
The efficiency of GAs in this context compared to other techniques suggests that the problem of optimising χ 2 for a multi-modal model such as the pMSSM is very "GA-easy": the fitness-distance correlation (by virtue of χ 2 ) is very good. A final advantage of GAs lies in their end product, which (by construction) is a large population of models focussed around those regions of parameter space that are the most interesting given the current constraints. This provides a natural tool with which new observables can be tested. The example we will consider here is the Fermi-LAT Galactic Centre excess. Given such a new observable, one could of course just fold it into the original study and start from the beginning. But one can also either test the final population to see if it predicts the observed value, or even better add the new observable into the fitness of the final population and continue to evolve it to a new equilibrium. If the new best fit is considerably worse than the old one, then we can conclude that the new observable is in conflict with the model. This is a natural approach to take when new experimental results need to be taken into consideration. In this sense GAs are able to provide a (literally) evolving population of "Snowmass points". The paper is organized as follows. In Section II we briefly review the GA technique, and in Section III we explain how we apply it to the specific case of the pMSSM, with 19 parameters defined at the GUT scale and 4 nuisance parameters. We also discuss there the different experimental constraints that are included in our analysis. The results are presented in Section IV, analysing first the case of the muon anomalous magnetic moment and then the Galactic Centre gamma-ray excess. The conclusions are presented in Section V. Finally, Appendix A contains some complementary plots that illustrate the evolution of the GA in the pMSSM parameter space. We should mention where this work stands in relation to previous studies. In fact in our view the number of studies in the High Energy Physics arena employing GAs is still remarkably small considering the robustness and utility of the technique. It has been used in the model building context in Refs. [17,18]. In those cases the construction of the fitness landscape is more directly related to desirable properties such as a small positive cosmological constant, the number of generations, and so forth. As such one is looking for a small number of "perfect solutions", and the technique becomes more of a "black art". In the model-exclusion/profile-likelihood context it was discussed in Refs. [19][20][21]. The main body of the study conducted here is most closely related to Ref. [21] 4 , but in a much higher dimensionality. II. THE GENETIC ALGORITHM TECHNIQUE We begin by briefly reviewing the GA technique with specific reference to the task at hand (for more pedagogical introductions see Refs. [9,18,20]), namely surveying regions of model parameter space, excluding disfavoured regions and selecting favoured regions of some framework. The physical "observable" we wish to optimise in the parameter space is the overall χ 2 . We shall focus on the particular properties of the PIKAIA 1.2 package which is used here to perform the GA [22][23][24][25]. Any GA is an optimisation based on evolving a population of N pop trial individuals, typically 50-100. Each individual consists of a string of data (the so-called chromosome) that encodes the parameters defining a particular individual.
This encoding can take various forms, and is referred to generically as the individual's genotype. In this case, it is simply all the input parameters collected together in one long string of data. The entries in the chromosome are called alleles. Often a binary encoding is preferred as it can work with smaller populations; however PIKAIA 1.2 uses a decimal encoding. It is convenient to also introduce the notion of uniformly sized small groups of alleles, called genes, that each encode a single physical parameter, for example a soft-mass squared. The population is initially chosen with random genomes for the N pop individuals, and then the algorithm consists of repeated application of the following three basic elements: Selection: Individuals are first selected from the population to make "breeding pairs". If the population size is preserved (the usual scheme) then there will be N pop breeding pairs, and the average individual will be selected for breeding twice. The first step in this process is to assign to each individual a fitness based on its physical properties (the phenotype). In the present case, the phenotype is the collection of all the experimental observables of interest, for example Higgs masses, decay widths, and so forth. The fitness is a single function of all these variables whose theoretical maximum value corresponds to the perfect individual. In this study, the fitness function is taken to be 1/χ 2 (typically the convergence to solutions is quite independent of this function). This step is usually the most case-dependent and time-intensive part of the whole procedure, because it is where the physics is bolted on. Once fitnesses have been assigned to the entire population, breeding pairs are formed by selecting individuals based on their fitness (with fitter individuals obviously being selected more often). Typically the fittest individual may breed a few times more than the average, but it is important that less fit individuals are allowed to mate. The selection process may take many different forms, such as roulette-wheel, rank-weighting, tournament selection, and so on 5 . Breeding/Cross-over: A new population of individuals is formed by splicing together the chromosomes of the two individuals in each breeding pair. Again there are many different ways to do this, but a typical choice (uniform cross-over) might be to cut the chromosomes at two random points along their length and swap the middle sections. PIKAIA 1.2 uses both one- and two-point cross-over in roughly equal proportions to reduce end-point biasing. Mutation: With only the two previous elements, one would already observe convergence of the population around good solutions over generations. However, the real power of GAs comes from the third element, which is mutation. This is the feature which is chiefly responsible for the orders-of-magnitude gain in efficiency over a simple Monte-Carlo. Once a new generation is formed, a small fraction (usually around a percent) of the alleles have their values flipped at random. This prevents stagnation in the population, where the entire population clusters around a local maximum in the fitness when there are better solutions globally. It is important to understand that mutation is not just an improvement to the convergence, but is absolutely integral to the entire process. Depending on the problem and the structure of the fitness landscape, the net effect is a dramatic increase in the overall rate of convergence.
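To make these three elements concrete, the following is a minimal, self-contained Python sketch of the generic scheme just described: roulette-wheel selection, one-point cross-over, flip mutation, and elitism, together with a sketch of the "creep" variant of mutation discussed just below. It is an illustration only, not PIKAIA's actual implementation; the toy fitness function, population size and chromosome length are placeholders.

```python
import random

CHROM_LEN, POP_SIZE, N_GEN = 23, 100, 300  # placeholder sizes

def fitness(chrom):
    # Placeholder landscape. In the physics application this would be
    # 1/chi^2 after building and testing the model for this chromosome.
    return 1.0 / (1.0 + sum((g - 5) ** 2 for g in chrom))

def select_pair(pop, fits):
    # Roulette-wheel selection: probability proportional to fitness.
    return random.choices(pop, weights=fits, k=2)

def crossover(a, b):
    # One-point cross-over (PIKAIA mixes one- and two-point variants).
    cut = random.randrange(1, CHROM_LEN)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def flip_mutate(chrom, rate=0.01):
    # Flip mutation on a decimal encoding: each allele is a digit 0-9.
    return [random.randrange(10) if random.random() < rate else g
            for g in chrom]

def creep_mutate(chrom, pos):
    # Creep variant ("carrying the 1", described below): add +1 at digit
    # `pos` and propagate the carry leftwards, e.g. ...0999 -> ...1000.
    new, carry, i = chrom[:], 1, pos
    while carry and i >= 0:
        carry, new[i] = divmod(new[i] + carry, 10)
        i -= 1
    return new

pop = [[random.randrange(10) for _ in range(CHROM_LEN)]
       for _ in range(POP_SIZE)]
for generation in range(N_GEN):
    fits = [fitness(c) for c in pop]
    elite = max(pop, key=fitness)      # elitist selection:
    children = [elite]                 # the best individual is copied through
    while len(children) < POP_SIZE:
        a, b = select_pair(pop, fits)
        for child in crossover(a, b):
            children.append(flip_mutate(child))
    pop = children[:POP_SIZE]
print("best fitness:", max(map(fitness, pop)))
```

Copying the current elite through unmutated is what makes the maximum fitness monotonically increasing, a point returned to below.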
This gain can be seen in practice by optimising the mutation rate. One of the innovations of PIKAIA 1.2 in this respect is its use of creep mutation in order to overcome so-called Hamming walls, which occur when the population is close to an optimum solution in terms of phenotype, but far away in terms of Hamming distance: for example the number 0.999 versus 1.000 requires a change in all 4 digits, but this very large change in genotype produces a very small change in phenotype. In short, creep-mutation "carries the 1" if a "9" is mutated by adding +1. As this kind of mutation results in small moves in physical parameter space, PIKAIA 1.2 invokes creep-mutation and one-point mutation with equal probability. This modification is also expected to mitigate somewhat the drawbacks of using decimal instead of binary encoding. And then the process repeats. We should add that, so that the maximum fitness is monotonically increasing, it is common at this point to copy the fittest individual from the last generation into the new one and to kill the least fit new individual, known as elitist selection. The particular parameters used for this study are shown in Table I. In summary, a GA incorporates and balances competing forces. Selection and breeding tend to produce convergence around local maxima in the fitness landscape, drawing the population in over generations. On the other hand the effect of mutation is to push the population away from local maxima (on average), so that as a whole it can explore the entire parameter space. The power of GAs then is in their ability to keep performing regardless of the dimensionality of the physical parameter space, which can even be as large as the chromosome itself (as was the case in Ref. [18]), and in their ability to be sensitive to the entire landscape, but simultaneously respond to and converge on interesting regions. Note that there are many other practical elements, such as fitness "crowding penalties" and "niching", that we do not discuss (or use). They are covered in the literature (see Refs. [9,10]) along with the underlying reasons for the effectiveness of GAs, such as the Schema theorem. III. APPLICATION TO THE PMSSM We now turn to the object of study, which is the phenomenological MSSM (pMSSM), with its 19 fundamental parameters. Here we define them at the GUT scale and take sign(µ) = 1. (In its usual definition the pMSSM takes parameters at the weak scale, however as a GA is not frequentist there is essentially no difference except for the effect of running on flavour degeneracy and consequently flavour changing observables. These effects are expected to be negligible for this study given that experimental constraints ultimately favour very large soft-terms. Note that δa µ will be important, but precise first/second generation degeneracy would have little bearing on it.) As well as these parameters, we include four additional parameters to account for the SM parameters with the largest uncertainties that could have an impact on the final theoretical predictions. These nuisance parameters, listed in Table II, are the electromagnetic coupling constant evaluated at the Z-boson pole mass, the strong coupling constant at M Z , the pole mass of the top quark, and the pole mass of the bottom quark. Hence, there is a 23-dimensional parameter space, whose range of variation is listed in Table III. We restrict the study to positive gaugino masses, due to convergence issues in the selected SUSY spectrum calculator which occurred when negative gaugino masses were present 7 .
In order to evaluate the fitness as a function of the initial parameters, the pMSSM predictions were implemented in a joint likelihood comprising the following experimental constraints: • Electroweak precision observables (EWPOs): i.e. Z pole observables and M W . The theoretical prediction for the W boson pole mass M W was calculated with SOFTSUSY 4.1.0 [27], and the effective electroweak mixing angle for leptons sin 2 θ lept eff with FeynHiggs 2.13.0 [28][29][30][31]. The SM contributions to the total decay width of the Z boson Γ Z and the Z invisible width Γ inv Z were computed with ZFITTER 6.42 [32,33] and those of the MSSM with micrOMEGAs 4.3.2 [34]. L EWPO , Eq. (1), contains a Gaussian probability distribution function for each of these quantities, with central values and experimental and theoretical uncertainties added in quadrature (see Table IV): ln L EWPO = −(1/2) Σ i (O th i − O exp i ) 2 / (σ 2 i,exp + σ 2 i,th ). (1) • Flavour observables from B physics: BR(B 0 s → µ + µ − ), BR(B → X s γ) and BR(Bu → τ ν)/BR(Bu → τ ν)SM . Theoretical predictions were calculated with micrOMEGAs. As in the previous case, L B , Eq. (2), includes a Gaussian likelihood of the same form for every B observable, with mean values and uncertainties given in Table IV. • Constraints from the Higgs sector: L Higgs accounts for the likelihood of the model predictions for the Higgs masses, branching ratios, production cross sections and total decay widths of the Higgs sector computed with FeynHiggs 2.13.0. These predictions were tested against exclusion bounds from Higgs searches at the LEP, Tevatron and LHC experiments using HiggsBounds 4.3.1 [35,36] and HiggsSignals 1.4.0 [37]. L Higgs also includes a Gaussian likelihood around the central value of the Higgs mass; the experimental and theoretical uncertainties considered here can be found in Table IV: ln L Higgs = ln L m h 0 + ln L Higgs sector . (3) • LEP bounds on chargino and slepton masses: mχ± 1 , mẽ R , mμ R , mτ 1 and sneutrino mass constraints are incorporated in L LEP . Using the generic limits implemented in micrOMEGAs [38], smeared step-function likelihoods were constructed for each of them, at 95% CL, as in Ref. [39]. • LHC results on SUSY searches: These were incorporated using SModelS 1. • Dark matter relic abundance: the predicted neutralino relic density Ωχ0 1 h 2 enters the joint likelihood as ln L ΩDMh 2 in Eq. (5) below. Now, let us describe the pMSSM-GA implementation. As mentioned in Section II, the fitness function was chosen to be the inverse of the chi-squared (as of course the GA seeks to maximise the fitness). In detail, for each model the input parameters were first evolved from the GUT scale down to the electroweak (EW) scale to compute the SUSY spectrum, branching ratios and decay widths using SOFTSUSY. Then, the Higgs sector was evaluated with FeynHiggs. Next, the DM relic abundance and the aforementioned observables were calculated as previously outlined. These data constitute the phenotype of each individual. Finally, the predictions were combined into a likelihood as in Eq. (5) to compute a total chi-squared and hence the fitness. On a practical level, the evaluation of the fitness function of each individual in a given population, which as mentioned above is by far the most computationally intensive step of a GA, is of course independent for each individual, providing inherent parallelism and an opportunity to improve the performance of the heuristic search. To take advantage of this, we used the public parallel version of PIKAIA 1.2 [64], which implements the Message Passing Interface (MPI) for a more efficient exploration of parameter space. Every package for the calculation of physical observables was modified accordingly and properly interfaced to PIKAIA to avoid data loss and disruption.
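As a hedged illustration of how such Gaussian constraints combine into the total chi-squared (and hence the fitness 1/χ 2 ), consider the following Python sketch. The central values and uncertainties are hypothetical placeholders, not the values used in the analysis, and the correlated form anticipates the GCE fit of Eq. (7) below.

```python
import numpy as np

def chi2_gaussian(pred, obs, sigma_exp, sigma_th):
    """One Gaussian term: experimental and theoretical errors in quadrature."""
    return (pred - obs) ** 2 / (sigma_exp ** 2 + sigma_th ** 2)

def chi2_correlated(pred, obs, cov):
    """Correlated-bin form, r^T Sigma^-1 r, as in the GCE fit of Eq. (7)."""
    r = np.asarray(pred, float) - np.asarray(obs, float)
    return float(r @ np.linalg.solve(cov, r))

# Hypothetical observables (placeholder numbers only):
total_chi2 = (
    chi2_gaussian(pred=124.8, obs=125.1, sigma_exp=0.2, sigma_th=2.0)     # m_h
    + chi2_gaussian(pred=80.37, obs=80.39, sigma_exp=0.02, sigma_th=0.01)  # M_W
)
fitness = 1.0 / total_chi2  # the quantity the GA maximises
```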
The number of individuals in a population, N pop , was fixed to be 100. We explored a wide range of possibilities for the number of generations N gen , and determined that for N gen > 300 there was no significant improvement in the minimum χ 2 . In other words, N gen = 300 generations, and hence only N pop × N gen = 3 × 10 4 evaluations of the fitness function, were sufficient to achieve a good convergence of the total χ 2 . (The number of times a model has to be evaluated is one of the best indicators of the overall efficiency gain: as mentioned earlier a useful point of comparison is the most rudimentary approach, namely a flat scan with just 2 points in each of the 23 dimensions, which would require 10 7 evaluations.) The complete set of selected GA parameters is shown in Table I. Overall we performed 10 runs of this pMSSM-GA implementation, varying only the initial seed of the random number generator. The results did not change significantly between runs, or for longer runs. A. Muon Anomalous Magnetic Moment The measured muon anomalous magnetic moment [65] shows a 3.5σ deviation from the SM value, which could potentially be explained by supersymmetric contributions. The value of δa SUSY µ for the MSSM was computed with micrOMEGAs, and the latest experimental average from Ref. [26] (see Table IV) was used in a Gaussian probability distribution function, L δa SUSY µ . Thus, the joint likelihood function reads ln L Joint = ln L EWPO + ln L B + ln L Higgs + ln L LEP + ln L LHC + ln L ΩDMh 2 + ln L δa SUSY µ . (5) B. The Galactic Center Excess For the later treatment of the Galactic Center Excess (GCE), we incorporated it into the joint likelihood as ln L Joint = ln L EWPO + ln L B + ln L Higgs + ln L LEP + ln L LHC + ln L ΩDMh 2 + ln L GCE . (6) Note that here we no longer take into account the likelihood from δa SUSY µ . To evaluate χ 2 GCE , the procedure outlined in Ref. [66] was followed. That is, we convoluted the differential photon spectrum of a given point of the parameter space with the energy resolution of the LAT instrument. We used the P8REP-SOURCE-V6 total (front and back) resolution of the reconstructed incoming photon energy as a function of the energy for normally incident photons. Then χ 2 GCE was calculated as follows [67]: χ 2 GCE = Σ ij (dN/dE i − dN̄/dE i (θ)) (Σ −1 ) ij (dN/dE j − dN̄/dE j (θ)), (7) where Σ ij is the covariance matrix containing the statistical errors and the diffuse model and residual systematics obtained in Ref. [68] using the reprocessed Fermi-LAT Pass 8 data from 6.5 yr of observations. dN/dE i (dN̄/dE i (θ)) stands for the measured (predicted) flux in the ith energy bin. The measured flux corresponds to the GCE spectrum from Ref. [69], derived using the Sample Model (see Section 2.2 of Ref. [69] for a complete description of this model). The vector θ refers to the pMSSM parameters that determine the predicted photon flux. IV. RESULTS A. Muon Anomalous Magnetic Moment In Fig. 1, we represent the evolution of the minimum χ 2 (associated with the maximum fitness) as a function of the generation number for each of the ten runs. As already mentioned, the maximum fitness is a monotonically increasing function (due to the elitism), which results in a monotonically decreasing χ 2 . The evolution proceeds rapidly during the first iterations and stabilises after approximately 100 generations, with no apparent differences among the various runs. The goodness of the best-fit point for each run is shown in Table V, where we also include the contribution from each observable. The total χ 2 is of order χ 2 ≈ 16 for the ten runs.
The greatest contribution always comes from the muon anomalous magnetic moment (χ 2 δa SUSY µ ≈ 12), while the predictions for the other observables are in good agreement with the experimental results. For example, the combination of Higgs observables leads to χ 2 HiggsSignals ≈ 1.2. The fit to the invisible Z-width, χ 2 Γ Z , is consistent with the SM prediction. There is an evident tension between the muon anomalous magnetic moment and the rest of the observables. A good fit to the latter is only possible at the expense of a very small supersymmetric contribution to a µ . Table VI shows the corresponding values of the observables for these best fit points, where we can observe that the resulting δa SUSY µ is always two orders of magnitude smaller than the observed δa SUSY µ = (26.8 +6.3 −4.3 ) × 10 −10 . The tension between the observed value of the Higgs mass and the muon anomalous magnetic moment is well documented in the literature (see e.g. Ref. [70]). The top plot of Fig. 2 shows the resulting SUSY spectrum for the particular case of run 3. The colour code is a visual aid to illustrate the evolution of the GA towards a final result. Blue corresponds to early generations, green to late ones, and the final generation, 300, is shown in yellow. The same colour map will be used throughout all the plots in this paper. Note that it is entirely expected that there will still be unfit individuals in the population exhibiting a large χ 2 . For this reason, a useful approach is to collate the best fit points from all the different runs. The bottom plot of Fig. 2 includes the information from all the ten runs, together with the corresponding best fit points. For convenience, these are also listed in Table VIII. As the population evolves, one can observe clustering around certain solutions. Whereas the best fit points seem to favour specific ranges of masses for the lightest neutralino and chargino, they appear more spread in the squark and slepton sector. A pattern emerges where mχ0 1 ≈ mχ± 1 ≈ 2 TeV, the squark masses are generally above 6 TeV (except for the lightest stop, for which mt 1 ≈ 2 − 3 TeV), and slepton masses show a wide range of variation, 2 − 10 TeV. For completeness, the pMSSM input parameters (19 soft supersymmetry-breaking terms and four nuisance parameters) for the best fit points of each run are listed in Table VII (cf. Fig. 2). Let us discuss the results in more detail. We will use run 3 as an example, but the results for other runs are qualitatively similar. First, it is clear that the dark matter relic density is one of the main drivers of the evolution of the fitness function, as we can see from the right panel of Fig. 3, which shows the correlation between the total χ 2 and χ 2 Ωχ0 1 h 2 . This is due to the high precision of the observed value of the dark matter relic abundance, but also to the fact that the relic density of the neutralino is in general very large. In order to reproduce the observed value, resonant annihilation (generally through the pseudoscalar Higgs, when 2mχ0 1 ≈ m A 0 ) or coannihilation with the next-to-lightest supersymmetric particle (NLSP) is required [71]. The flexible structure of the pMSSM allows for various forms of coannihilation, where the NLSP can be the lightest stau [72,73], the lightest stop [74][75][76], or electroweakinos (such as the second lightest neutralino or the lightest chargino) [77][78][79][80][81].
The latter can occur in the so-called focus point region, where both the neutralino and chargino are 1 TeV Higgsino-like particles [82,83] or 2 − 3 TeV wino-like particles [84]. The choice of non-universal soft parameters at the GUT scale [85] facilitates obtaining these various solutions, contrary to more constrained scenarios such as the CMSSM. The values of the wino soft mass parameter at the GUT scale, M 2 ≈ 2.5 TeV (see Table VII), and the hierarchy of the gaugino masses M 2 < M 3 < M 1 ensure that the lightest neutralino and the lightest chargino are both wino-like and have very similar masses (degenerate to order 1%). This facilitates coannihilation effects, without introducing a large fine-tuning in the dark matter sector [86], and is the clearest characteristic of all the runs. The composition of the lightest neutralino is shown in Fig. 4, clearly showing that the last generation corresponds to a wino-like neutralino, with a subleading Higgsino component. The GUT values of the gaugino mass parameters are represented in Fig. 14. This feature occurs for all the runs. It is well known, from previous studies in non-universal SUSY models [82,83], that a wino-like neutralino can have the correct relic abundance for a range of masses around 2 − 3 TeV. The final generations of all the runs cluster around the observed value of the dark matter relic abundance, as the left panel of Fig. 3 shows. Satisfying the dark matter relic density while fulfilling all the other experimental constraints requires in general a careful choice of the initial parameters, only possible in narrow bands of the parameter space. Finding these solutions in scans of the parameter space is therefore very costly, and it is here that the GA excels, by the population quickly condensing on the relevant subspaces. It is indeed remarkable how easily these are obtained by a GA, requiring a relatively small number of generations. As we mentioned, each of the runs required approximately 10 4 model evaluations. Refs. [21,87] concluded that evolutionary algorithms can outperform Bayesian inference tools even in relatively low dimensional models such as the CMSSM, but we find here that in broader models such as the pMSSM they become orders of magnitude more efficient. It is indeed interesting to compare these results in more detail with the previous GA scans performed in the context of the Constrained version of the MSSM (CMSSM) [21], which only contains five free parameters and in which gaugino masses are assumed to be universal at the GUT scale. In that case, after applying the corresponding RGEs, one obtains M 2 > M 1 at low energy. Thus, the lightest neutralino cannot be wino-like, and instead, the best fit point is obtained for Higgsino-like neutralinos (with an approximate mass of 1 TeV). A wino-like neutralino is not particularly easy to find through direct detection techniques (as the elastic scattering cross section with nuclei is generally dominated by Higgs exchange diagrams which are enhanced by the Higgsino component). In Fig. 5, we show the predicted contribution to the spin-independent (SI) and spin-dependent (SD) scattering cross section for all the different runs, and in Table IX we include the values obtained for the best fit points. Note that these plots only include points with Ωχ0 1 h 2 ≤ Ω DM h 2 + 1σ: solutions with Ωχ0 1 h 2 < Ω DM h 2 have been weighted by ξ = min[1, Ωχ0 1 h 2 /Ω DM h 2 ], as indicated in each panel.
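A minimal sketch of this rescaling follows; the reference value for Ω DM h 2 is an assumed, Planck-like placeholder. Under-abundant neutralinos are assigned a fractional local density ξ, so direct-detection rates scale with ξ and halo annihilation rates with ξ 2 .

```python
# Sketch of the relic-density weighting described above; the reference
# value for Omega_DM h^2 is an assumed placeholder, not the paper's input.
OMEGA_DM_H2 = 0.12

def xi(omega_h2: float) -> float:
    """Fraction of the local DM density attributed to the neutralino."""
    return min(1.0, omega_h2 / OMEGA_DM_H2)

def sigma_si_effective(sigma_si: float, omega_h2: float) -> float:
    return xi(omega_h2) * sigma_si       # direct detection: one power of xi

def sigmav_effective(sigmav: float, omega_h2: float) -> float:
    return xi(omega_h2) ** 2 * sigmav    # halo annihilation: xi squared
```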
It is interesting to observe that all the best fit points are nicely grouped around the same solution, with σ SI χ 0 1 p ≈ 10 −11 pb and mχ0 1 ≈ 2 TeV. This is just below the projected sensitivity of LZ and potentially within the reach of the planned Darwin experiment. Notice, however, that it is extremely close to the region where the background due to coherent neutrino scattering becomes important. The spin-dependent contribution is negligible for these points. Regarding indirect detection, the predicted thermally averaged annihilation cross section at zero velocity is also shown in this table. It is of the order of σv 0 ≈ 10 −26 cm 3 s −1 , just within the reach of the future CTA [88], as we can see in the lower panel of Fig. 5. FIG. 5. Top: the solid violet lines represent the leading constraint on SI and SD interactions from XENON1T [89] and LUX [90], respectively. The dashed and dot-dashed lines correspond to the sensitivity projections for LUX-ZEPLIN (LZ) [91] and DARWIN [92]. As a reference, we also show the irreducible neutrino background for a xenon target in yellow for SI (proton) and SD (neutron) cross sections [93]. Bottom: thermally averaged neutralino annihilation cross section in the Galactic halo, ξ 2 σv 0 , as a function of the lightest neutralino mass. The upper bound on σv 0 for the W + W − annihilation channel derived from an analysis of 15 dwarf spheroidal (dSph) galaxies using the Fermi-LAT Pass 8 reprocessed data set [94] is depicted in violet. The dashed line corresponds to the expected sensitivity of CTA for the same annihilation channel [88]. The Higgs sector is of course another important source of constraints. The Higgs boson mass is properly recovered and, as Fig. 6 shows, it is an important influence on the evolution of the likelihood. The final population of models is grouped around the observed value. Meanwhile, the resulting χ 2 HiggsSignals is always smaller than 2, which shows that the values of the Higgs couplings are never too far from the observed experimental values (compatible with the SM Higgs) and are thus not relevant in minimising the total χ 2 . In general, the predicted Higgs mass in the pMSSM is below the observed value, and in order to maximise the one-loop contributions the stop trilinear coupling has to lead to maximal LR mixing in the stop mass matrix [95][96][97][98]. The GUT values of these quantities for the best fit points (Table VII) are such that this relation is fulfilled at low energy. This pushes A t to large values, whereas tan β ≈ 20 is favoured, as we can see in Fig. 8. The best fit points feature typical values of the µ parameter in the range of 5 − 6 TeV, which leads to an EW fine-tuning of the order of thousands [99]. FIG. 6. χ 2 vs. Higgs mass. As a reference, we show the m h 0 mean value (solid black line) and the 1σ (grey) and 2σ (light grey) regions; see Table IV for the exact values. Figure 9 contains the fit to the EW observables. We can see that both M W and sin 2 θ lept eff have an important influence on the fitness function. All these observables are properly reproduced in the final generation of points. The Z boson invisible width is due only to decay into neutrinos, as the neutralino mass is in general very large, and is therefore compatible with that of the SM. Finally, the goodness of the fit to the muon anomalous magnetic moment is shown in Fig. 10.
It is evident from this plot that the observed value of $\delta a_\mu^{\rm SUSY}$ is not properly reproduced and that $\delta a_\mu^{\rm SUSY} \lesssim 10^{-10}$ throughout the whole evolution (which is almost equivalent to having just the SM contribution). The supersymmetric contribution to this observable is very small, thus resulting in a $3\sigma$ discrepancy with respect to the observed value, and $\chi^2_{\delta a_\mu^{\rm SUSY}} \approx 12$ for all the points. As we can see in the right-hand plot of Fig. 10, $\delta a_\mu^{\rm SUSY}$ has no impact on the GA evolution and $\chi^2_{\delta a_\mu^{\rm SUSY}}$ does not vary through the different generations.⁹ This may seem counterintuitive but is in fact a general expectation: attempting to fit this observable would degrade the fitness of the population much more than ignoring it altogether. Consequently, the fitness of the entire population is degraded equally by including it in the likelihood, but the relative fitness (which is what determines the evolution) is relatively unaffected; a numerical illustration of this point is sketched at the end of this subsection. We conclude that this observable simply cannot be fit within the model without severely degrading the $\chi^2$. These results evidence the well-known tension between the muon anomalous magnetic moment and the rest of the observables. Whereas the former requires a light spectrum (in particular, light sleptons and neutralinos or charginos), LHC bounds and the value of the Higgs mass favour much heavier supersymmetric particles. There are very many LHC constraints, and in fact calculating them is the most costly part of determining the likelihood, and hence the fitness, of a particular model. In contrast, other observables, such as BR($B_s^0 \to \mu^+\mu^-$), are properly recovered and in fact contribute to minimising the total $\chi^2$, as shown in the left-hand panel of Fig. 11 (BR($B \to X_s\gamma$) and BR($B_u \to \tau\nu$)/BR($B_u \to \tau\nu$)$_{\rm SM}$, shown in the middle and right panels of Fig. 11, are in general in very good agreement with the experimental results). Although we have used run 3 as an example, it should be pointed out that the other runs produce similar results.

⁹ Notice that, as the $\mu$-term is taken to be positive, $\delta a_\mu^{\rm SUSY}$ is positive and always adds to the SM contribution. Thus the fit is always marginally better than the SM discrepancy with the observed experimental value.

As a second example, let us consider fitting the observed GCE in the context of the pMSSM. In order to reproduce the measured gamma-ray spectrum, a small range of values of the dark matter pair-annihilation cross section is required, around $\langle\sigma v\rangle_0 \approx 10^{-26}\,{\rm cm^3\,s^{-1}}$, which is roughly consistent with the value expected to yield the correct relic abundance. Interestingly, this leads to a lower bound on $\Omega_{\tilde\chi_1^0} h^2$, which in the previous example was not constrained from below. We can observe in Fig. 15 an increase of the global $\chi^2$ for points where the neutralino relic density is too small. The requirement of fitting the GCE is consistent with recovering the correct relic abundance as well, which contributes to the clustering of solutions. In Fig. 16 we can see how the GCE contributes to the total $\chi^2$. We can identify two types of behaviour: points on the vertical branch correspond to those in which the annihilation cross section is too small and the neutralino relic abundance is too large, whereas points along the horizontal branch are those where the annihilation cross section is too large (and thus the relic density is too small).
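The statement that a uniform $\chi^2$ penalty leaves the evolution unaffected can be checked directly. In the sketch below (hypothetical numbers; k-way tournament selection as one common GA choice), adding the same constant offset to every individual's $\chi^2$, as an unfittable observable such as $\delta a_\mu^{\rm SUSY}$ effectively does, never changes the winner of any tournament.

```python
import numpy as np

rng = np.random.default_rng(1)
chi2 = rng.uniform(50.0, 150.0, size=100)   # hypothetical population chi^2 values
OFFSET = 12.0                                # constant penalty from an unfittable observable

# Draw the same set of 3-way tournaments for both populations.
draws = np.array([rng.choice(len(chi2), size=3, replace=False)
                  for _ in range(200_000)])


def winners(values, draws):
    """Winner of each tournament = the member with the smallest chi^2."""
    return draws[np.arange(len(draws)), np.argmin(values[draws], axis=1)]


w0 = winners(chi2, draws)
w1 = winners(chi2 + OFFSET, draws)
print(np.array_equal(w0, w1))  # True: a uniform chi^2 shift cannot change any winner
```

Since the argmin of $\chi^2$ plus a constant equals the argmin of $\chi^2$, selection (and hence the evolution) depends only on fitness differences within the population, exactly as argued above.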
The $\chi^2$ for the best fit points is shown in Table X, together with the contribution of each individual observable. The total $\chi^2 \approx 160$ is quite large in this example, but the main contribution is due solely to the fit to the GCE, whereas the rest of the observables are properly fit. It is illustrative to compare this table with Table V, which shows that the goodness of the fit to all observables is similar and that there is only one outlier (the fit to either $\delta a_\mu^{\rm SUSY}$ or the GCE). Likewise, the best fit to the different observables is shown in Table XI. We have included in this table the mass of the DM candidate (the neutralino) and its annihilation cross section in the halo. We can observe that, although the annihilation cross section is of the right order of magnitude, the neutralino mass is approximately 2.2 TeV, too heavy compared to the best fit to the GCE, which requires DM masses of the order of 100 GeV or below, depending on the leading annihilation channel (see for example Ref. [67]). This is the main reason for the high value of $\chi^2_{\rm GCE}$.

As a consequence, the input parameters for the best fit points, shown in Tab. XII, are indistinguishable from those obtained in the previous section, and the same holds for the low-energy supersymmetric spectrum of Tab. XIII. The spectrum for all the generations is shown in Fig. 17, in which the best fit points (red lines) show more clustering than in the previous section (Fig. 2). Once more, the rest of the observables drive the evolution of the GA and we are left with a heavy SUSY spectrum, featuring wino-like 2.2 TeV neutralinos/charginos (see Fig. 18 for the neutralino composition), a heavy coloured sector, and slepton masses varying in the range of 3-10 TeV. As we already observed in the previous section, the GA has singled out one observable (the GCE) which cannot be fit. As in the previous section, the Higgs mass contributes to the GA evolution (see Fig. 19). Finally, Fig. 20 shows the predictions for direct and indirect dark matter detection. The results for direct detection are very similar to those of the previous section (Fig. 5), with neutralinos marginally within the sensitivity of future multi-ton xenon and argon experiments. The plot of the annihilation cross section in Fig. 20 still shows the best fit point with heavy neutralinos and $\langle\sigma v\rangle_0 \approx 10^{-26}\,{\rm cm^3\,s^{-1}}$. As mentioned above, this is far from the preferred region that would explain the GCE in terms of DM with masses of order 100 GeV.

V. CONCLUSIONS

In this article, we have investigated the use of Genetic Algorithms (GAs) to study the cross-compatibility of experimental constraints in high-dimensional models. We have focused on the pMSSM, which features 19 input parameters (soft supersymmetry-breaking terms) defined at the GUT scale and 4 nuisance parameters (the electromagnetic coupling constant evaluated at the Z-boson pole mass, the strong coupling constant at $M_Z$, the pole mass of the top quark, and the pole mass of the bottom quark), for a total of 23 parameters. GAs seem to be extremely effective in finding a best fit point that minimises the total $\chi^2$. With only $10^4$ model evaluations, solutions could be found that are consistent with results that employ MCMC scans to probe the whole parameter space, which require many more model evaluations. The GA leads to a final population of models with a roughly 2 TeV wino-like neutralino, which has the correct relic abundance due to coannihilations with a quasi-degenerate chargino.
The resulting SUSY spectrum is shown in Fig. 2 (Table VIII) and Fig. 17 (Table XIII). The coloured sector is predicted to be heavy, $m_{\tilde g} > 5$ TeV, except for the lightest stop, for which $m_{\tilde t_1} \approx 2.3$ TeV. We find that the pMSSM does not give a clear prediction for the slepton sector, and the masses span a wide range, $m_{\tilde\tau_1} \sim 2.3$-8 TeV. The neutralino relic abundance and the Higgs mass are the most important constraints driving the GA evolution.

We also demonstrated how one can deal with potential signals of new physics, by considering the muon anomalous magnetic moment (which shows a large deviation with respect to the SM value) and the Fermi-LAT excess in the gamma-ray spectrum from the Galactic Centre (which can be interpreted as a hint of DM pair-annihilation). A GA proves to be an excellent tool for assessing the compatibility of these observations with all the other experimental constraints, including LHC and LEP bounds on SUSY masses and on the Higgs sector, the Planck measurement of the DM relic abundance, and constraints on low-energy observables. Moreover, it also provides a good diagnosis of which observables are problematic. In both of these examples, the main contribution to the final $\chi^2$ was due to either the muon anomalous magnetic moment, $\chi^2_{\delta a_\mu^{\rm SUSY}} \approx 12$, or the Galactic Centre excess, $\chi^2_{\rm GCE} \approx 155$, whereas the fit to all the other observables was good. This is an indication that the pMSSM, despite its large number of free parameters, cannot successfully accommodate these potential hints of new physics. (A compromise could in principle have been possible, in which they were fit reasonably well by sacrificing $\chi^2$ elsewhere, but this turned out to be impossible.)

In our view, GAs offer a superior approach to probing BSM physics, especially in an era when the underlying principles are less clear but there are nevertheless definite hints of new physics. The technique we discussed here could, for example, be easily applied to the most general form of the MSSM with its 124 parameters, as well as to more general Higgs sectors, with no obvious impediment. Compared to other more conventional techniques, GAs are able (by sacrificing a little statistical rigour) to divine patterns of interesting models and assess their consistency exceedingly quickly.
Scaling limit of a generalized contact process

We derive macroscopic equations for a generalized contact process that is inspired by a neuronal integrate-and-fire model on the lattice $\mathbb{Z}^d$. The states at each lattice site can take values in $0,\ldots,k$. These can be interpreted as neuronal membrane potential, with the state $k$ corresponding to a firing threshold. In the terminology of contact processes, which we shall use in this paper, the state $k$ corresponds to the individual being infectious (all other states are noninfectious). In order to reach the firing threshold, or to become infectious, the site must progress sequentially from $0$ to $k$. The rate at which it climbs is determined by other neurons at state $k$, coupled to it through a Kac-type potential of range $\gamma^{-1}$. The hydrodynamic equations are obtained in the limit $\gamma\rightarrow 0$. Extensions of the microscopic model to include excitatory and inhibitory neuron types, as well as other biophysical mechanisms, are also considered.

Introduction

The derivation of macroscopic deterministic time evolution equations from underlying microscopic dynamics is one of the central problems of non-equilibrium statistical mechanics. This micro-to-macro transition is a very difficult mathematical problem with only limited progress so far [14,3,11]. This can be overcome to some extent when the underlying microscopic dynamics is stochastic with very strong ergodic properties. Examples are the time evolution of the stochastic Ising model via Glauber or Kawasaki dynamics, for which macroscopic equations have been rigorously derived in a space-time scaling limit [8,2,7]. These equations are of the mean field type, using long-range Kac-type interactions on the microscopic scale.

In this note we derive macroscopic equations presented and partially solved in [1]. The microscopic model system described here is inspired by neuronal integrate-and-fire models [6]. In the simple version of this model the membrane voltage increases until it reaches a maximum threshold value, at which time it fires (spikes). When it fires, that neuron's membrane voltage gets reset to its minimum value. At the same time, other neurons connected to it whose potential is below threshold increase their potential at a rate depending on the strength of their connectivity to the neuron which has just spiked. In the macroscopic equations considered in [1] we discretized the values which the membrane potential can take, restricting it to the integer set $\{0, 1, \ldots, k\}$. When and only when a neuron is in state $k$, its maximum value, it causes other neurons connected to it with potential $j < k$ to transit to the next level $j+1$. Independently, neurons with potential value $k$ spike and assume the value $0$. The neurons in the microscopic model live on the $d$-dimensional lattice $\mathbb{Z}^d$ with spacing $\gamma$, and their interaction is given by a Kac-type function $J(\gamma|x-y|)$ [8]. In the limit $\gamma \to 0$ one obtains the macroscopic equations.

It turns out that for $k=1$ the model is equivalent to the well-known contact process, with the state $j=0$ corresponding to the healthy state and the state $j=1$ to the infected one [12,13]. For $k>1$ the model can be thought of as a generalized contact process with only the state $j=k$ being infectious. In terms of neural models the case $k=1$ corresponds to the stochastic Wilson-Cowan model [9,15], which is a popular simplified model of neural systems.
Setting $k>1$ introduces inactive states which behave like subthreshold neuron potentials and leads to more complicated behavior. The analysis in this note will be done entirely in the context of the generalized contact process. We consider an extension of the classical contact process where the state of an individual is described by a potential $U$: when $U=0$ the individual is healthy, when $0<U<k$ it is sick but not contagious, and when $U=k$ it is both sick and contagious (in the classical contact process $k=1$). Infections are long range and described by a Kac potential with range $\gamma^{-1}$. We study the system in the macroscopic limit $\gamma\to 0$.

In more realistic models there are two types of neurons: excitatory ones, which act as those described above, and inhibitory ones [10,6]. The latter also have a threshold for firing, but instead of increasing the potentials of other neurons when firing they decrease them. This, as well as other generalizations, can be incorporated in the microscopic model studied here. They lead to more complicated macroscopic equations, which we are currently exploring; their derivation uses the same formalism as the derivation given here. These will be discussed briefly at the end of this note.

The outline of the rest of the paper is as follows. In Section 2 we give a precise definition of the microscopic model and present the hydrodynamic limit equations. In Sections 3-7 we prove the hydrodynamic limit. In Section 8 we describe some generalizations of the model.

The model

The macroscopic region. The macroscopic region $\Omega$ is a torus in $\mathbb{R}^d$ of side $L$; for simplicity $L$ is a large positive integer.

The microscopic region. Let $\gamma = 2^{-n_1}$, $n_1$ a positive integer. The microscopic region is the torus $\Omega_\gamma$.

The Kac potential. Dynamics is defined in terms of the Kac potential $J_\gamma(x,y) = a_\gamma \gamma^d J(\gamma x, \gamma y)$, $x, y \in \Omega_\gamma$. Here $a_\gamma$ is the normalization coefficient which makes $J_\gamma(x,\cdot)$ a probability; $J(r,r')$ is a smooth, non-negative, symmetric probability kernel with finite range $R$, so that $\gamma^{-1}R$ is the range of the interaction $J_\gamma(x,y)$. The macroscopic limit is defined by letting $\gamma\to 0$.

The time evolution. Time evolution is described by a jump Markov process with two types of jumps, related to infection and recovery. The individual at site $x$ with $U(x)=k$ recovers at rate 1, and the potential after recovery becomes $U(x)=0$. Moreover, the individual at site $x$ with $U(x)=k$ infects the one at site $y$, if $U(y)<k$, at rate $\lambda^* J_\gamma(x,y)$, and the effect of the infection is that $U(y)\to U(y)+1$. We denote by $U_t(x)$, $x\in\Omega_\gamma$, the potentials at time $t$ and by $P^\gamma$ the law of this process in $\Omega_\gamma$.

Definition 2.1. The initial condition. For any fixed $\gamma$ the potentials $U_0(x)$, $x\in\Omega_\gamma$, at time 0 are mutually independent, with $P(U_0(x)=i) = \rho_0(\gamma x, i)$, where $\rho_0(r,i)$, $i = 0,\ldots,k$, is such that $\sum_i \rho_0(r,i) = 1$ for any $r\in\Omega$. The variables $\rho_0(r,i)$ are the initial densities.

To prove this limiting behavior we first prove the hydrodynamic limit for a modified dynamics, called the auxiliary process, with a Kac potential $A_\gamma$ which is a coarse-grained version of $J_\gamma$. We then obtain in the limit $\gamma\to 0$ a macroscopic equation with a kernel $A^\xi$ which is a coarse-grained version of $J$, see Theorem 3.2. In the limit $\xi\to 0$ we get (2.2)-(2.4), see Theorem 3.3.
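Before introducing the auxiliary process, it may help to see what the limit equations look like in the spatially homogeneous case. The sketch below integrates the mean-field system implied by the stated rates (recovery at rate 1 from state $k$ to $0$; promotion $i \to i+1$ at rate $\lambda^*$ times the fraction of sites in state $k$). This is an illustrative reading of the structure of (2.2)-(2.4), not a quote of them, and the values $\lambda^* = 2.5$ and $k = 4$ are arbitrary choices.

```python
import numpy as np

K = 4            # threshold state (arbitrary illustrative choice)
LAM = 2.5        # infection rate lambda*
DT = 1e-3
STEPS = 20_000

rho = np.full(K + 1, 1.0 / (K + 1))   # uniform initial densities over {0, ..., k}


def drift(rho):
    """Mean-field rates implied by the microscopic dynamics (homogeneous case)."""
    infect = LAM * rho[K]             # rate at which any sub-threshold site is promoted
    d = np.zeros_like(rho)
    d[0] = rho[K] - infect * rho[0]               # recoveries in, promotions 0 -> 1 out
    for i in range(1, K):
        d[i] = infect * (rho[i - 1] - rho[i])     # promotions in and out
    d[K] = infect * rho[K - 1] - rho[K]           # promotions in, recovery out
    return d


for _ in range(STEPS):                # forward Euler time stepping
    rho = rho + DT * drift(rho)

print("long-time densities ~", np.round(rho, 4), " sum =", round(rho.sum(), 6))
```

Note that the drift terms telescope, so the total density is conserved, mirroring the fact that the microscopic dynamics only moves sites between states.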
The auxiliary process

The auxiliary process is defined as the previous one but with a piecewise constant kernel $A_\gamma(x,y)$ in the place of $J_\gamma(x,y)$. In order to define $A_\gamma$ we need the following definition.

Definition 3.1. The basic partition. The basic partition of $\Omega$ is denoted by $\pi^\xi$, $\xi = L2^{-n_3}$; its atoms $C^\xi \in \pi^\xi$ are cubes of side $\xi$, and $C^\xi(r)$, $r\in\Omega$, denotes the atom which contains $r$. The microscopic basic partition $\pi^{\gamma,\xi}$ is made by the corresponding atoms $C^{\gamma,\xi}$. We say that two atoms $C^{\gamma,\xi}$ and $D^{\gamma,\xi}$ of the basic partition interact with each other if there are $x\in C^{\gamma,\xi}$ and $y\in D^{\gamma,\xi}$ such that $J_\gamma(x,y)>0$. To simplify the notation we will drop the superscript $(\gamma,\xi)$ from the cubes $C^{\gamma,\xi}$ unless confusion may arise.

The new piecewise constant kernel. In the new process the rate at which $x$ infects $y\ne x$ is $\lambda^* A_\gamma(x,y)$. The new kernel $A_\gamma(x,y)$ is defined by averaging $J_\gamma(x,y)$ over the atoms of the basic partition $\pi^{\gamma,\xi}$, more precisely
$$A_\gamma(x,y) = \frac{1}{N^2}\sum_{x'\in C^{\gamma,\xi}(x)}\ \sum_{y'\in C^{\gamma,\xi}(y)} J_\gamma(x',y'),$$
where $J_\gamma(x,x)=0$ and $N = |C^{\gamma,\xi}|$.

Properties of $A_\gamma$. For any $x$, $A_\gamma(x,\cdot)$ is a probability.

We denote by $P^{A_\gamma,\gamma}$ the law of the auxiliary process with initial conditions unchanged, see Definition 2.1, and by $\mathcal P^{A_\gamma,\gamma}$ the corresponding law of the density variables. The following theorem will be proved in Section 6.

Theorem 3.2. Fix $T>0$ and $t\in[0,T]$; we denote by $\mathcal P^{A_\gamma,\gamma}_t$ the restriction of $\mathcal P^{A_\gamma,\gamma}$ to time $t$. In analogy with (3.2) we define the kernel $A^\xi(r,r')$, $r,r'\in\Omega$, as the corresponding coarse graining of $J$. Then $\mathcal P^{A_\gamma,\gamma}_t$ converges as $\gamma\to 0$ to a probability $P^{A^\xi}_t$ which is supported by $\varphi^\xi(r,i;t)$, $r\in\Omega$, the solution at time $t$ of the coarse-grained equations for $1\le i\le k$, together with the analogous equation for $i=0$, the initial condition being
$$\varphi^\xi(r,i;0) = \frac{1}{|C^\xi(r)|}\int_{C^\xi(r)} \rho_0(r',i)\,dr',$$
with $\rho_0$ as in Definition 2.1. Observe that $\varphi^\xi(r,i;t)$, $r\in\Omega$, is constant on the cubes $C^\xi$.

Convergence of the space-time joint distribution of the densities will be proved in Section 7, together with Theorem 3.3.

Sketch of the proof of Theorem 3.2

In the theory of the hydrodynamic limit for stochastic interacting particle systems a typical procedure is to use the martingale decomposition for the variables of interest, see for instance the book [11]. Applied to our case we have
$$v^{\gamma,\xi}_{i,t}(x) = v^{\gamma,\xi}_{i,0}(x) + \int_0^t L_\gamma v^{\gamma,\xi}_{i,s}(x)\,ds + M^{\gamma,\xi}_{i,t}(x),$$
where $L_\gamma$ is the generator of the process and $M^{\gamma,\xi}_{i,t}(x)$ is a martingale. $M^{\gamma,\xi}_{i,t}(x)$ is a "fluctuation term", and one can often prove that in the hydrodynamic limit $N\to\infty$ it vanishes with probability going to 1. The hardest problem is to control $L_\gamma v^{\gamma,\xi}_{i,t}(x)$, whose explicit expression for $1\le i\le k-1$ is given in (3.11). By compactness $v^{\gamma,\xi}_{i,t}(x)$ converges (by subsequences) weakly in probability to some limit density, but the problem is that in (3.11) the functions $v^{\gamma,\xi}$ appear quadratically, and in general the weak limit of a product is not the product of the weak limits of the factors. To close the equations one then needs to prove a factorization property for the $v^{\gamma,\xi}_{i,t}$, i.e. propagation of chaos or local equilibrium.

We overcome this difficulty by using the same method as in [2] and [5]. We discretize time, see Section 4: we use a mesh $\delta$ which will vanish after taking the limit $\gamma\to 0$, and study the process in the generic time interval $[n\delta,(n+1)\delta]$, with $n\le\delta^{-1}T$, having conditioned on the values of the potential $U_t(x)$ at time $t=n\delta$. The crucial point is to prove the probability estimates stated in Theorem 5.1 and in Theorem 5.4. We use a graphical representation of the process where we represent by an arrow $(x,y)$ the infection of the individual at $y$ due to the individual at $x$; the recovery of an individual at $x$ is described by a "marked point". The collection of arrows and marked points defines a natural graph structure, see the paragraph "A graph structure" in the next section.
To reconstruct the true process we introduce time variables $t(x,y)$ and $t(x)$: $t(x,y)$ is a finite sequence of times $t_m(x,y)$, and $t(x)$ a finite sequence of times $t_m(x)$. The $t(x,y)$ and $t(x)$ are mutually independent Poisson processes with intensity $\lambda^* A_\gamma(x,y)$ and 1, respectively. The above graph structure is realized by drawing an arrow $(x,y)$ at each time $t\in t(x,y)$ and a marked point at $x$ at each $t\in t(x)$. Knowledge of all $t(x,y)$ and $t(x)$ allows one to reconstruct the true process, see the paragraph "A realization of the process: the clock process" in Section 4. However, to know whether at $t\in t(x,y)$ there is an infection we need to know all the rings $t(x',y')$ and $t(x')$ occurring at times $s\le t$, as well as the values of the initial potentials.

The analysis of the graph structure of arrows and marked points, ignoring the times when they are drawn, is quite simple because the variables $t(x,y)$ and $t(x)$ are mutually independent. The first crucial point is that an arrow $(x,y)$ corresponds to an infection if at the initial time $U(x)=k$ and $U(y)=i$, $i<k$, provided that the cluster containing $(x,y)$ is made only of the arrow $(x,y)$; see Lemma 4.1 and the paragraph "A graph structure" in the next section for the definition of clusters. An analogous property holds for marked points. Thus when clusters have only one element the time when the event occurs is not relevant. The second crucial point is that clusters with more than one element are probabilistically negligible. An estimate is proved in Corollary 4.4. As argued after (5.8), this is good enough for clusters with at least 3 elements; for clusters with only two elements we have a more refined argument, proved in Lemma 5.3. The crucial step in the proof of Corollary 4.4 is a reduction to a branching process, which is studied in Appendix A.

4. Time discretization and a realization of the process

Time discretization. We discretize time with mesh $\delta = 2^{-n_2}$, $n_2\ge 1$. We fix $\delta$ and a time interval $[n\delta,(n+1)\delta]$; for a while we will study the process in such a time interval, having conditioned on the values $U_{n\delta}$ of the potentials at time $n\delta$. By choosing $\delta$ small enough the process becomes considerably simpler, and we will exploit the following realization of the process.

A realization of the process: the clock process. We attach to any ordered pair $(x,y)$, $x\ne y$, independent clocks, called $(x,y)$-clocks, which ring at exponential rate $\lambda^* A_\gamma(x,y)$. The clocks start at time $n\delta$ and are stopped at time $(n+1)\delta$ (recall that we are studying the process restricted to the time interval $[n\delta,(n+1)\delta]$). We denote by $t(x,y)$ the times when the $(x,y)$-clock rings. We introduce also $x$-clocks, which ring at rate 1, $t(x)$ being the times when the $x$-clock rings. All the above clocks are independent of each other. The true process is recovered as follows: if the $(x,y)$-clock rings and at the time of the ring $U(y)<k$ and $U(x)=k$, then $U(y)\to U(y)+1$; moreover, if the $x$-clock rings at a time when $U(x)=k$, then $U(x)\to 0$. All these rings are effective, while the other rings, where the above conditions are not fulfilled, are ineffective: the potentials are unchanged and those rings can be ignored. However, it is a very complicated task to understand whether a clock ring is or is not effective; it depends on all the clock rings $\{t(x,y); t(x)\}$. As already mentioned, it is convenient to introduce a graph structure.

A graph structure. When the $(x,y)$-clock rings we draw an oriented arrow $(x,y)$; when the $x$-clock rings we draw a marked point at $x$.
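Before turning to the combinatorics of clusters, note that the clock realization just described translates directly into a simulation. The following is a minimal sketch on a small periodic lattice with a nearest-neighbour stand-in for the Kac kernel (all sizes and rates are hypothetical illustration choices): every ordered pair carries an exponential infection clock of rate $\lambda^* A(x,y)$ and every site a recovery clock of rate 1, and only rings satisfying the stated conditions are effective.

```python
import numpy as np

rng = np.random.default_rng(2)

L, K, LAM, T_END = 50, 3, 2.0, 20.0   # toy 1-d torus, threshold k, rate, horizon
U = rng.integers(0, K + 1, size=L)    # initial potentials in {0, ..., k}


def sample_neighbour(x):
    """Stand-in for A_gamma(x, .): uniform over the 4 nearest sites on the torus."""
    return (x + rng.choice([-2, -1, 1, 2])) % L


t = 0.0
# Every site carries infection clocks of total rate lambda* (A(x,.) sums to 1)
# plus one recovery clock of rate 1; ineffective rings are simply ignored.
total_rate = L * (LAM + 1.0)
while True:
    t += rng.exponential(1.0 / total_rate)
    if t > T_END:
        break
    x = rng.integers(L)
    if rng.random() < LAM / (LAM + 1.0):      # an (x, y)-clock rang
        y = sample_neighbour(x)
        if U[x] == K and U[y] < K:            # effective ring: infection
            U[y] += 1
    else:                                      # the x-clock rang
        if U[x] == K:                          # effective ring: recovery
            U[x] = 0

print("final fraction in state k:", np.mean(U == K))
```

Because the clock rates do not depend on the state, the total ring rate is constant, which is what makes this rejection-style realization (rings drawn first, effectiveness checked afterwards) equivalent to the process itself.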
Two arrows are connected if they have a point in common; a marked point is connected to an arrow if it is one of the two points of the arrow. Clusters are the maximal connected sets of marked points and arrows. Notice that a same arrow may appear several times in a cluster, as may a same marked point. We denote by $C_1$ the clusters made of a single element, i.e. either a marked point or an arrow; $C_j$ are the clusters with $j$ elements. We will see that if the time mesh $\delta$ is small the relevant clusters are the single clusters $C_1$. In such a case we have:

Lemma 4.1. If the cluster containing the arrow $(x,y)$ is $C_1 = (x,y)$ and $U_{n\delta}(x)=k$, $U_{n\delta}(y)=i$, $i<k$, then the $(x,y)$-ring is effective; the analogous statement holds for a cluster made of a single marked point.

Proof. The potentials $U_{n\delta}(x)$ and $U_{n\delta}(y)$ can only change when the $(x,y)$-clock rings, because $(x,y)\in C_1$ and neither the other arrows nor the marked points are connected to $(x,y)$. Then, by the assumption $U_{n\delta}(x)=k$ and $U_{n\delta}(y)=i$, the $(x,y)$-ring is effective, hence the statement of the lemma. The case $C_1 = x$ is proved similarly.

Definition 4.2. In the sequel we denote by $c$ constants which do not depend on $N$, $\xi$ and $\delta$.

Theorem 4.3. For any $a\in(0,1)$ and any $\epsilon>0$ such that $1-a-2\epsilon>0$, there is a constant $c$ so that for any $\gamma$ and $\delta$ small enough the following holds. Let $(x,y)$ be an arrow; then for any two atoms $C$ and $D$ of the basic partition the bound (4.1) holds.

We will use the following consequence of Theorem 4.3:

Corollary 4.4. Let $C$ and $D$ be two atoms of the basic partition; then for any $j^*\ge 1$ the bound (4.3) holds, for $\epsilon$ as in Theorem 4.3, and analogously (4.4).

Proof. We take $a>0$; using Theorem 4.3 and then the Markov inequality we obtain (4.3). The proof of (4.4) is similar and omitted.

For simplicity, in the sequel we write $P^\gamma$ instead of $P^{A_\gamma,\gamma}$.

Definition 4.5. We denote by $\kappa_{x,y,i}(n)$, $i<k$, the number of effective $(x,y)$-rings in the time interval $[n\delta,(n+1)\delta)$, namely those such that when the clock rings $U(x)=k$ and $U(y)=i$; we denote by $\kappa_x(n)$ the number of effective $x$-rings, namely the times $t$ in $t(x)$ when $U_t(x)=k$. We then define for two cubes $C$ and $D$ the counters $M_{C,D;i}$ and $M_D$, obtained by summing the effective rings over the cubes. Since in the following $n$ is fixed we drop the dependence on $n$ in (4.6). Recalling (3.5) for notation, we obtain the corresponding identities (4.7)-(4.9) for $0\le i\le k$.

5. Probability estimates

The aim is now to get estimates on $M_D$ and $M_{C,D;i}$, $i<k$, see Definition 4.5. These are in general very complicated functions on the space of the clock rings $\{t(x,y); t(x)\}$; we shall see, however, that only cases with few rings are important, the others giving a small contribution. This is the crucial point in the proof of the following theorem, Theorem 5.1, which concerns the number of events where an individual in a cube $C$ with potential $k$ infects an individual in the cube $D$ which has potential $i<k$.

Probability estimates on $M_{C,D;i}$. Recall that we have fixed a time interval $[n\delta,(n+1)\delta]$; we will not make the dependence on this interval explicit unless confusion may arise.

Theorem 5.1. There are $\theta>0$, $a\in(\tfrac12,1)$, $b\in(\tfrac13,\tfrac23 a)$ and a constant $c$ so that the stated bound holds for all $i<k$, all $x$ such that $U_{n\delta}(x)=k$ and all $y$ such that $U_{n\delta}(y)=i$. Notice that the conditions $a\in(\tfrac12,1)$ and $b\in(\tfrac13,\tfrac23 a)$ imply that $2a-3b>0$.

The proof will be obtained in several steps. The first step is to reduce to cases where $|t(x,y)|=1$. To this end, recalling Definition 4.5, we consider two cubes $C$ and $D$ and for $i<k$ decompose $M_{C,D;i}$ accordingly (see (5.6)).

Lemma 5.2. There is a constant $c$ (independent of $N$ and $\delta$) so that (5.3) holds for any $\epsilon>0$.

Proof. Recalling the definition, bounding and using (3.4), the Markov inequality then gives (5.3).

By Corollary 4.4 we then have the corresponding cluster estimate. We will eventually need to iterate the estimate over all the time intervals $[\delta n,\delta(n+1)]$, i.e.
$\delta^{-1}$ times, so that we want $\delta^{-1}\delta^{jb}$ and $\delta^{-1}\delta^{j(a-b)+1-a}$ to vanish when $\delta\to 0$. When applied to the case $j=2$, the above requires that $b>1/2$ and also that $b<a/2<1/2$, so that for $j=2$ the conditions cannot be fulfilled. The analysis of $T_{C,D}$ requires a more refined estimate, which is the content of the next lemma.

Lemma 5.3. The bound (5.10) holds, where in the last two terms, besides the arrow $(x,y)$, the single points $x$ and $y$ are marked points, and in the previous terms $z$ is any point different from $x$ and $y$.

Proof. Since the estimates are similar, for simplicity we just examine the case with two arrows, $(x,y)$ and $(y,z)$. We denote by $\eta_{x,y,z}\in\{0,1\}$ the indicator of this set, thus
$$\eta_{x,y,z} = \mathbf 1_{|t(x,y)|=1}\,\mathbf 1_{|t(y,z)|=1}\,\mathbf 1_{C_2=\{(x,y),(y,z)\}}.$$
We first compute the expectation; recalling that the clocks are independent (see the paragraph "A realization of the process: the clock process") and using (3.4), we get
$$\sum_{x\in C}\sum_{y\in D}\sum_z e^{-\lambda^*\delta(A_\gamma(x,y)+A_\gamma(y,z))}\,(\lambda^*\delta)^2\, NA_\gamma(x,y)\, NA_\gamma(y,z) \le c\,\xi^d\delta^2.$$
We next compute the variance; using independence, (5.10) then follows from the Chebyshev inequality.

Proof of Theorem 5.1. We fix $\bar x$ and $\bar y$ as in the hypothesis of the theorem and we call $C = C(\bar x)$ and $D = D(\bar y)$. We write the decomposition (recall (5.6)). The term $R_{C,D;i}$ has been treated in Lemma 5.2 and the one with $T_{C,D}$ is estimated in (5.8) for $j=3$ and in Lemma 5.3 for $j=2$. We first compute $E_\gamma[S_N]$; from (5.4) and by (3.4), $NA_\gamma(x,D)\le c\lambda^*\xi^d$, so that the right-hand side of (5.13) is bounded by $\delta\lambda^* c\,\xi^d$. Since the clocks are independent, given any $\theta>0$, the Chebyshev inequality together with (5.14) and (5.15) yields the theorem.

Probability estimates of $M_D$. The analysis of $M_D$, defined in (4.6), is very similar to the one we did for $M_{C,D;i}$ and is sketched below. The analogue of Theorem 5.1 is:

Theorem 5.4. There are $\theta>0$, $a$ and $b$ as in Theorem 5.1, and a constant $c$ so that the stated bound holds for all $x$ such that $U_{n\delta}(x)=k$.

Proceeding as in the proof of Lemma 5.2 we have (proof omitted):

Lemma 5.5. For all $\epsilon>0$ the analogous bound holds.

Analogously to (5.5), we write the corresponding decomposition. The following lemma is a consequence of Corollary 4.4 and an argument very similar to that of Lemma 5.3:

Lemma 5.6. For any $a\in(\tfrac12,1)$ and $b\in(\tfrac13,\tfrac23 a)$ the stated bound holds.

Proof. We only examine the case $\{x,(x,y)\}$, the other being similar. Using that the clocks are independent and (3.4), we compute the expectation; we next compute the variance, and, using independence, the lemma follows from the Chebyshev inequality.

Proof of Theorem 5.4. We fix $\bar x$ as in the hypothesis of the theorem and we call $D = D(\bar x)$. Recalling (5.21), we write the decomposition: the term $R_D$ has been treated in Lemma 5.5 and the one with $T_D$ in Lemma 5.6. Since the clocks are independent and $1-e^{-\delta}\le c\,\delta$, given any $\theta>0$, using (5.27) we get the theorem.

The aim in this section is to study the time-continuum limit of the densities when first $N = (\gamma^{-1}\xi)^d\to\infty$ and then $\delta\to 0$. To this end we study the process in the time interval $[0,T]$ and introduce the good set $G_\gamma$ defined via the events in (6.1) and (6.2).

Proof. Observe that the sets in the curly brackets in (6.1) and (6.2) are constant on the cubes; the lemma then follows from Theorems 5.1 and 5.4.

We first take $N\to\infty$ and then $\delta\to 0$; thus the probability of $G_\gamma$ is as close to 1 as we want if for any $\delta$ small enough we take $N$ sufficiently large. To underline the dependence of $v$ on $\gamma$ we write below $v^\gamma_{i,n\delta}(y)$ and rewrite (3.5) accordingly. Recalling (4.7), (4.8) and (4.9), on $G_\gamma$ we have, for any $n\le\delta^{-1}T$ and $i\in\{0,1,\ldots,k\}$, the corresponding recursions, having used that
$$\sum_y A_\gamma(y,x)\,v^\gamma_{k;n\delta}(y) = \sum_C NA_\gamma(y_C,x)\,v^\gamma_{k;n\delta}(y_C),$$
with $y_C$ any point in $C$.
Define $u^\gamma(x,i;n\delta)$, $i\in\{0,1,\ldots,k\}$, $n\delta\le T$, as the solution of the discretized equations. Observe that since the initial datum is constant on the cubes $C$ of the partition, $u^\gamma(x,i;n\delta)$ is also constant on the cubes for any $n$. We may thus call $u_i(C,n) = u^\gamma(x,i;n\delta)$, with $x$ any point in $C$.

Proof. Let $C$ be a cube of the basic partition, and let
$$K_n(C) = \sum_x \lambda^* A_\gamma(x,y)\,u^\gamma(x,k;n\delta), \qquad y\in C, \qquad (6.11)$$
observing that the right-hand side does not depend on which $y$ we take in $C$.

The upper bound. Let $i$ be such that $\Theta(n+1) = u_i(C,n+1)$ and $j$ such that $\Theta(n) = u_j(C,n)$. By (6.12), bounding $u_{i-1}(C,n)\le u_j(C,n)$ and dropping the last term $-\delta u_i(C,n)\mathbf 1_{i=k}\le 0$, we get (6.14). It follows by induction from (6.14) that $K_n(C)\le\lambda^*$ for all $n$. Thus the right-hand side of (6.14) is negative for $\delta$ small enough, and therefore $\Theta(n)\le\Theta(0)\le\epsilon$, which proves the upper bound.

Proof of (6.15). Recalling the definitions of $i$ and $j$ from (6.12), we bound from below $u_{i-1}(C,n)\ge u_j(C,n)$. We also write the last term as $[(u_i(C,n)-u_j(C,n)) + u_j(C,n)]\,\delta$, having bounded $\mathbf 1_{i=k}\le 1$. For $\delta$ small enough the curly-bracket term is positive and (6.15) is proved.

Proof. There is $c$ such that the claimed bound holds, and therefore by (6.8) the proof of the lemma is concluded.

7. Stronger version of Theorems 3.2 and 3.3

In this section we prove Theorem 3.3 together with a stronger version of Theorem 3.2. We introduce some new notation and definitions besides those in the previous sections. We fix a time $t = n_0\delta_0$ for some $n_0$ and $\delta_0$. Since we consider the parameter $\delta$ to be of the form $2^{-n_2}$ with $n_2\in\mathbb N$, for any $\delta<\delta_0$ there is an $m$ so that $t = m\delta$. By an abuse of notation we call $U(r)$, $r = \gamma x$, the potential $U(x)$; $U_t(r)$ is the potential at time $t$. So far we have studied the one-body correlation functions. In the next theorem we study the many-body space-time correlations, namely the law $P^{\gamma,\delta,\xi}_v$ of the finite-dimensional distribution $v = (v_{C_\ell,i_\ell;t_\ell})$, $\ell = 1,2,\ldots,m$, in the limit first $\gamma\to 0$, then $\delta\to 0$ and finally $\xi\to 0$.

Theorem 7.1. With the above notation, $P^{\gamma,\delta,\xi}_v$ converges in the above limits to a probability $P_v$ which is supported by $w = (w_1,\ldots,w_m)$ determined by $\rho$, the solution of (2.2)-(2.4).

Positive, real-valued times. Even though the set $\mathcal T$ of dyadic times is dense in $[0,T]$, it sounds non-physical to restrict times to $\mathcal T$. The problem can be fixed easily using a variable time mesh. To explain the idea we refer first to the simpler case of a single time $t\in[0,T]$, as in Theorem 3.2. Suppose $t\notin\mathcal T$. We then consider a mesh $\delta\in\{2^{-n}t,\ n\in\mathbb N\}$ and, similarly, a second mesh $\delta'\in\{2^{-n}(T-t),\ n\in\mathbb N\}$. We can then use the proof of Theorem 3.2 in $[0,t]$, where the mesh is $\delta$, and again the proof of Theorem 3.2 in $[t,T]$ with the mesh $\delta'$. The extension to the case of Theorem 3.3 is similar. We have $m$ times $0<t_1<\cdots<t_m<T$; we then consider meshes $\delta_1\in\{2^{-n}t_1,\ n\in\mathbb N\},\ \ldots,\ \delta_{m+1}\in\{2^{-n}(T-t_m),\ n\in\mathbb N\}$, and use the proof of Theorem 3.2 in each one of the above time intervals.

8. Extensions

In this section we study the macroscopic limit of other infection/recovery models.

8.1. Additional recovery jumps. Here we consider the case where also individuals with potential $i<k$ may recover, i.e. the individual at site $x$ with $U(x)=i$ recovers at rate $\lambda_i$ and the potential after recovery becomes $U(x)=0$. The macroscopic equations are modified accordingly, with the analogous equation for $i=0$. The proof is similar to the proof of Theorem 7.1 and is omitted.

8.2. The excitatory-inhibitory network model.
Referring to a neural network, here we consider excitatory and inhibitory neurons; both kinds of neurons have a potential in $\{0,\ldots,k\}$. When an excitatory neuron with potential $k$ fires, the potentials of all the other neurons with potential $<k$ increase by 1. Similarly, when an inhibitory neuron with potential $k$ fires, the potentials of all the other neurons with potential $>0$ and $<k$ decrease by 1. The rates of firing are $\lambda^*_j J_\gamma(x,y)$, with $j=1$ for excitatory and $j=2$ for inhibitory neurons. Besides this, neurons with potential $k$ decay at rate 1 to the state with potential 0. For this model we derive the corresponding macroscopic equations. We denote by $\rho_1(r,i;t)$ the limit macroscopic density at position $r$ and time $t$ of the excitatory neurons with potential $i$, and by $\rho_2(r,i;t)$ the limit density of the inhibitory neurons with potential $i$. Convergence is in the sense of the finite-dimensional distributions, as in Theorem 7.1.

8.3. General microscopic model. Place at each site $x\in\mathbb Z^d$ a finite-state, continuous-time Markov chain $U(x,t)$ with state space $S = \{0,\ldots,k\}$. For any pair of states $i$ and $j$ there is an intrinsic transition rate from $i$ to $j$, denoted $g^{ij}_\gamma(x)$, dependent on the scaling parameter $\gamma$ and the location $x$. Any other site $y$ in any state $U(y,t) = l\in\{0,\ldots,k\}$ has an additive effect $\lambda_{ijl}J^{ijl}_\gamma(y,x)$ on the transition rates at $x$ from $i$ to $j$. Then, together, for any $i\ne j\in S$, the time-dependent transition rate from $i$ to $j$, denoted $q_{ij}(x,t)$, is given by
$$q_{ij}(x,t) = g^{ij}_\gamma(x) + \sum_{y\ne x}\sum_{l\in S}\lambda_{ijl}\,J^{ijl}_\gamma(y,x)\,\mathbf 1_{U(y,t)=l},$$
where $\lambda_{ijl}\ge 0$, $g^{ij}_\gamma(x) = g^{ij}(\gamma x)$, and $J^{ijl}_\gamma(x,y) = \gamma^d J^{ijl}(\gamma x,\gamma y)$. The functions $g^{ij}(x)$ and $J^{ijl}(x,y)$ describe the intrinsic transition rates and the site-to-site interactions in the scaling limit. We take the assumptions on $J^{ijl}$ to be the same as before, and we assume $g^{ij}$ to be continuous and bounded over $x$.

The scaling limit. In the scaling limit $\gamma\to 0$, the state of the system is described by local state distributions $v(x,t) = (v_1(x,t),\ldots,v_N(x,t))$ which vary continuously in $x\in\mathbb R^d$ and evolve in time according to macroscopic equations.

Obtaining the original, generalized contact process. The original, generalized contact process can be recovered by choosing particular $g^{ij}$ and $J^{ijl}$. Let $g^{k0} = 1$ and all other $g^{ij} = 0$ for $i\ne j$. Next, for $0\le i<k$, let $\lambda_{i,i+1,k} = \lambda^*$ and all other $\lambda_{ijl} = 0$. Let $J^{i,i+1,k}$ be defined as for the generalized contact process. Then the microscopic and macroscopic equations are the same as for the generalized contact process.

Adding new features. The E-I neuron model from subsection 8.2 can be recovered by letting $S = \{0,\ldots,k\}\times\{0,\ldots,k\}$, where the first coordinate corresponds to the E-neuron voltage and the second coordinate to the I-neuron voltage at a site. Excitatory interactions correspond to setting the terms $\lambda_{(i,j),(i+1,j),(k,l)}$ and $\lambda_{(i,j),(i,j+1),(k,l)}$, as well as the associated $J$ terms, and inhibitory interactions correspond to setting the terms $\lambda_{(i,j),(i-1,j),(l,k)}$ and $\lambda_{(i,j),(i,j-1),(l,k)}$, as well as their associated $J$ terms. An external drive can be added by making the terms $g^{(i,j),(i+1,j)}$ and $g^{(i,j),(i,j+1)}$ nonzero. A neuronal leak could be modeled, at least approximately, by making $g^{(i,j),(i-1,j)}$ and $g^{(i,j),(i,j-1)}$ nonzero. The proofs of all these extensions follow the same steps as in the proof of Theorem 7.1, with new clock processes associated to $g^{ij}_\gamma(x)$ and $\lambda_{ijl}J^{ijl}_\gamma(y,x)$. As before, all these clocks are independent and we can repeat the arguments used in Sections 3-7.
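The additive structure of the transition rates makes them easy to assemble numerically. Below is a small sketch (hypothetical lattice size, kernel and couplings) that builds the instantaneous rate matrix $q_{ij}(x,t)$ of the general model from an intrinsic part $g$ plus the interaction sum over sites in state $l$, following the decomposition just described; the contact-process couplings are used as the example choice.

```python
import numpy as np

rng = np.random.default_rng(3)

L, K = 30, 3                         # toy 1-d lattice, states {0, ..., K}
S = K + 1
U = rng.integers(0, S, size=L)       # current configuration

g = np.zeros((S, S))
g[K, 0] = 1.0                        # intrinsic rates: only the k -> 0 decay
lam = np.zeros((S, S, S))            # lam[i, j, l]: effect of a site in state l
for i in range(K):                   # contact-process choice: state-K sites
    lam[i, i + 1, K] = 2.0           # push i -> i+1 at rate lambda* = 2


def j_kernel(y, x):
    """Hypothetical finite-range interaction kernel (uniform over 2 neighbours)."""
    d = min(abs(x - y), L - abs(x - y))
    return 0.5 if d == 1 else 0.0


def rates_at(x, U):
    """q_{ij}(x) = g_{ij} + sum_{y != x} sum_l lam[i, j, l] * J(y, x) * 1{U(y) = l}."""
    q = g.copy()
    for y in range(L):
        if y != x:
            q += lam[:, :, U[y]] * j_kernel(y, x)
    return q


print(np.round(rates_at(0, U), 3))   # transition-rate matrix at site 0
```

In this form, adding the E-I coordinates, an external drive or a leak is purely a matter of enlarging the state space and populating more entries of `g` and `lam`, exactly as in the "Adding new features" paragraph above.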
Appendix A. Clusters and branching processes

In this appendix we prove Theorem 4.3. For completeness we first recall some definitions and notation. A configuration $\{m(z,z'), m(z'')\}$ is the set of arrows and marked points with their multiplicities, which are described respectively by integer-valued functions $m(z,z')$ and $m(z'')$: $m(z,z') = 0$ if the arrow $(z,z')$ is absent and $m(z'') = 0$ if $z''$ is not a marked point. The probability of a configuration is then a product of independent Poisson weights, where $P = P^{A_\gamma,\gamma}$ and $A(z,z') = A_\gamma(z,z')$.

Let $V = V(C)$ be the set of arrows in $C$; then a cluster $C$ is a maximal connected set of arrows plus the specification of the multiplicity $m(z,z')$ of the arrows and the multiplicity $m(z'')$ of the points $z''\in T_V$, where $T_V$ is the union of starting points and endpoints of the arrows in $V$. Maximality means that there is no arrow starting from $T^c_V$ and ending at $T_V$.

With reference to (4.1), we fix an arrow $(x,y)$ and write the sum over the clusters $C\ni(x,y)$ weighted by $\delta^{-aj}$, $j\ge 1$, where $|C|$ is the number of elements in $C$. The purpose is to prove (A.6), which then implies (4.1). We perform the sum over $C\ni(x,y)$ by first summing over the multiplicities. Since
$$A(z,z')\le c\,\frac{\xi^d}{N},\quad \bigl|e^{\lambda^*\delta^{1-a}A(z,z')}-1\bigr|\le c\lambda^*\delta^{1-a}\frac{\xi^d}{N},\quad e^{-\delta}e^{\delta^{1-a}}\le 1+c\delta^{1-a},\quad e^{-\lambda^*\delta A(z,z')}\le 1, \qquad {\rm (A.8)}$$
and splitting $\delta^{1-a} = \delta^{1-a-2\epsilon}\delta^{2\epsilon}$, with $\epsilon>0$ such that $1-a-2\epsilon>0$, we get the bound with $c' > c(1+c\delta^{1-a})$. The first factor is present because there is at least one arrow (namely $(x,y)$) which starts from $x$. We will prove that the term multiplying $\delta^{1-a-2\epsilon}$ is bounded by $c/N$, and thus prove (A.6).

The proof will exploit the branching structure of $V$. We call $x$ the root of the branching, $(x,z^1_1),\ldots,(x,z^1_{n_1})$, with $z^1_1 = y$, the arrows which start from the root $x$ ($n_1\ge 1$ because $V\ni(x,y)$), and $z^1_1,\ldots,z^1_{n_1}$ the nodes of the first generation. From each node $z^1_i$ of the first generation new arrows may or may not start: if no arrow starts from any of the nodes $z^1_i$ then the branching ends; otherwise we call $z^2_1,\ldots,z^2_{n_2}$ the nodes which are the endpoints of the new arrows: these are the nodes of the second generation. Notice that there may be arrows which go back to $x$; in that case those arrows will not produce descendants, because these are already included among the arrows of the first generation. Analogously, we call $\{z^i_1,\ldots,z^i_{n_i}\}$ the endpoints of the arrows starting from nodes of the $(i-1)$-th generation. The branching ends when no arrow starts from the nodes of the last generation. In terms of the branching, the configurations are described by the following parameters:

• $k\ge 1$ is the number of generations,
• $z^i_j$ are the positions of the nodes,
• $R^i_j$ is the number of arrows which start from $z^i_j$.

Writing (A.9) in terms of the branching, by (A.10), and since each $z^i_j$ is the endpoint of an arrow and may therefore take at most $N$ values, the sum over the $(z^i_j)$ is bounded by $N^{n_1-1+n_2+\cdots+n_k}$ (recall that $z^1_1 = y$). We first estimate the sum over the $(R^{k-1}_j)$ which satisfy (A.10): summing the factors $(\delta^{2\epsilon})^m$ we get the bound
$$(1+c\delta^{2\epsilon})^{n_{k-1}}, \qquad {\rm (A.11)}$$
and, writing $\delta^{2\epsilon} = \delta^\epsilon\delta^\epsilon$ and using (A.10),
$$\delta^{\epsilon n_{k-1}}(1+c\delta^\epsilon)^{n_{k-2}}. \qquad {\rm (A.12)}$$
The other sums over the $(R^i_j)$ with $i<k-2$ are estimated as in (A.12), getting the two expressions
$$[c'\lambda^*\xi^d]^{n_1+\cdots+n_k}\,[\delta^\epsilon(1+c\delta^\epsilon)]^{n_1+\cdots+n_{k-1}}, \qquad [c'\lambda^*\xi^d\delta^{\epsilon/2}]^{n_1+\cdots+n_k}\,[\delta^{\epsilon/2}(1+c\delta^\epsilon)]^{n_1+\cdots+n_{k-1}},$$
where for $\delta$ small enough we have bounded $c'\lambda^*\xi^d\delta^{\epsilon/2}<1$. Thus (A.6) follows from (A.13).

The proof of (4.2) reduces to that of (4.1) when we write
$${\rm l.h.s.\ of\ (4.2)} = \frac1N\sum_x\sum_{m(x)\ge 1} e^{-\delta}\frac{\delta^{m(x)}}{m(x)!} + \frac1N\sum_{x,y} S_{x,y}, \qquad {\rm (A.14)}$$
where $S_{x,y}\equiv S$ as in (A.4), having made explicit its dependence on $x$ and $y$. The first term in (A.14) is the contribution of the clusters consisting only of the marked point $x$; the other clusters give rise to the second term in (A.14). (4.2) then follows from (A.14) and (A.6).
A computational analysis of the molecular mechanisms underlying the effects of ibuprofen and dibutyl phthalate on gene expression in fish

The impact of emerging pollutants such as ibuprofen and dibutyl phthalate on aquatic species is a growing concern, and the need for proper assessment and evaluation of these toxicants is imperative. The objective of this study was to examine the toxicogenomic impacts of ibuprofen and dibutyl phthalate on Clarias gariepinus, a widely distributed African catfish species. Results showed that exposure to the test compounds caused significant changes in gene expression, including upregulation of growth hormone, interleukin, melatonin receptor, 17β-hydroxysteroid dehydrogenase, heat shock protein, and doublesex and mab-3 related transcription factor genes. On the other hand, expression of forkhead box protein L2 and cytochrome P450 was downregulated, revealing a potential to induce female-to-male sex reversal. The binding affinities and hydrophobic interactions of the test compounds with the reference gene products were also studied, showing that ibuprofen had the lowest binding energies and the highest affinity for the docked targets. Both compounds revealed mutual molecular interactions with amino acid residues within the catalytic cavities of the docked targets. These results provide new insights into the toxic effects of ibuprofen and dibutyl phthalate on Clarias gariepinus, contributing to a better understanding of the environmental impact of these pollutants.

Introduction

Micropollutants, also known as emerging contaminants (ECs), have been studied in diverse water bodies, including surface water, drinking water, subsurface water, and effluent/wastewater. Among these micropollutants are everyday household chemicals and industrial additives. Emerging contaminants are synthetic or natural chemicals or microbes that are not typically regulated in the environment, yet have the potential to enter the environment and create known or suspected detrimental ecological or human health consequences [1]. Some examples of emerging contaminants include plasticizers, plastic additives, personal care products, pharmaceutically active compounds, surfactants, brominated flame retardants, and perfluorinated compounds [2]. Plasticizers and pharmaceuticals, in particular, pose a considerable environmental risk due to their global annual output totalling several kilotonnes [3,4].
Plasticizers are low-volatility, colorless chemicals that impart flexibility, elasticity and durability to polymers by lowering their elastic modulus, melt viscosity and glass transition temperature. Phthalate plasticizers (PAEs) are deployed in numerous consumer goods and personal care products, representing almost 80 % of the global polyvinyl chloride plasticizer market [5]. More than 3 million tonnes of phthalate derivatives are utilized annually in various industries throughout the world [6]. Phthalates are prone to leaching out of products due to their lack of covalent bonding to the polymer, thus posing a risk of environmental contamination. The principal cause of PAE contamination in aquatic ecosystems is the incursion of diverse industrial and commercial effluents containing artificially created PAEs. Most PAEs entering water have low volatility and migrate easily due to their high Kow (n-octanol-water partition coefficient) and low vapour pressure. Among the large variety of PAEs commonly detected in the aquatic environment, DBP is one of the most prevalent in environmental samples, at levels ranging between ng/L and μg/L [7]. In Nigerian freshwaters during the rainy season, Fatoki and Ogunfowoka [8] reported DBP levels exceeding 500 mg/L. The high prevalence of DBP in Nigerian waters makes it a priority pollutant for study [9].

Pharmaceutical chemicals are increasingly used in human and veterinary medicine, including agriculture and aquaculture [10]. Non-steroidal anti-inflammatory drugs (NSAIDs), which relieve pain and inflammation in humans and animals, are one such class. NSAIDs inhibit prostaglandin synthesis and release from arachidonic acid by non-selectively inhibiting cyclooxygenase (COX) enzymes, including the COX-1 and COX-2 isoforms [11]. Ibuprofen (IBU; (±)-2-(p-isobutylphenyl)propionic acid) is the third most commonly prescribed and sold over-the-counter NSAID [12]. It is included in the Essential Drugs List 2010 compiled by the World Health Organization (WHO). IBU is a major drug detected in aquatic ecosystems due to its high consumption and excretion rate (~70-80 % of the therapeutic dose) [11]. IBU has been found in surface waters near sewage effluent releases in Nigeria at 0.02-79.45 μg/L, and in Brazil, Canada, China, Germany, Italy, Sweden, Britain, and the US at 0.0002-5.04 μg/L [13-15]. According to Zur et al. [16], IBU was found in receiving waters at mean concentrations of 0.98-67 μg/L in Canada, 1.0-67 μg/L in Greece, <15-414 μg/L in Korea, and 5.0-280 μg/L in Taiwan.

The ubiquitous presence of these emerging contaminants (ECs) has become a serious issue of ecological concern, as they may be damaging to freshwater resources [17]. Although EC concentrations in freshwater are minimal, their strong biological activity may pose a substantial risk to non-target species at different levels of the ecological hierarchy, resulting in varied toxic impacts. These compounds have a tendency to intercalate with specific proteins in the body, disrupting their functioning. Furthermore, they can induce endocrine disruption and oxidative stress by increasing free radical concentrations, thus causing oxidation of key biomolecules, which could result in loss of function [2]. According to Bommarito et al.
[18], these modifications can have functional cellular ramifications, influencing health outcomes later in life. In addition to epigenetic changes, several of these substances can directly modify the genetic sequence of the DNA and cause mutations. These modifications can have far-reaching consequences in aquatic ecosystems: they influence the adaptation and evolution of aquatic species, affecting population dynamics, responses to environmental stressors, and ecological interactions.

The use of toxicogenomics as a tool to evaluate environmental stress markers has gained traction in recent years, partly due to the increase in efficiency of the technology. According to the National Research Council (NRC) of the USA, toxicogenomics is defined as the application of genomic technologies such as genetics, genome sequence analysis, gene expression profiling, proteomics, metabolomics and related approaches to study the adverse biological effects of exogenous agents of environmental and pharmaceutical chemicals on human health and the environment [19]. One major goal of toxicogenomics is to detect the relationships between changes in global gene expression and toxicological endpoints.

Xu et al. [20] reported that after adult zebrafish were exposed to DBP, aberrant transcription levels of genes related to T/B cells were found, indicating the activation of specific immunity. DEHP, another characteristic PAE pollutant, can significantly alter the phagocytic index of juvenile yellow catfish [21] and alter the expression of inflammatory markers in yellow catfish larvae [22,23]. Male Murray rainbowfish exposed to DnBP demonstrated an increased proportion of spermatogonia in the testes, as well as elevation of the oestrogen receptor and choriogenin genes in the liver, induction of brain aromatase activity, and an increase in plasma vitellogenin levels [24]. Three-spined stickleback (Gasterosteus aculeatus) exposed to naproxen and diclofenac showed induced expression of the hepatic c7 gene, which is part of the innate immune system [25]. Hong et al. [26] found elevated levels of cyp1A, p53, and vtg in Japanese medaka (Oryzias latipes) at a very low diclofenac concentration (1 μg/L). Long-term exposure to naproxen at environmentally relevant concentrations was reported to have disrupted both the triiodothyronine (T3) and thyroxine (T4) hormones via significant downregulation of the dio1, dio2, nis, nkx2.1, pax8, tg, tpo, trβ and ttr genes.

Fish, as representatives of aquatic organisms, are susceptible to the influx of pollutants; thus, they can serve as suitable models for ecological risk assessment studies. Studies have demonstrated that fish possess conserved human pharmacological targets (receptors, enzymes); therefore, compounds designed to penetrate biological membranes would likewise affect fish carrying the same targets at comparable concentrations [27]. Clarias gariepinus, also known as the African sharptooth catfish, was chosen as the model organism for this study because it is the most cultivated fish in the region, widely available, native to Africa, a common source of protein in poor economies, and found in other tropical regions of the world [28]. Its year-round availability, vast environmental distribution, and relative ease of acclimatization to laboratory conditions make this species a desirable model for ecotoxicological research.
Numerous toxicogenomic studies have examined the impact of phthalates and non-steroidal anti-inflammatory drugs (NSAIDs) on freshwater fish [23,25]. To the best of the authors' knowledge, however, there are very few toxicogenomic studies on the impact of dibutyl phthalate (DBP) and ibuprofen (IBU) on a tropical fish. The high prevalence of these emerging contaminants in freshwater sources in Nigeria calls for research to evaluate the risk they portend for fish and other non-target aquatic organisms [2]. Therefore, this study seeks to evaluate the endocrine disruptive effects and molecular interference of DBP and IBU in Clarias gariepinus on selected genes encoding growth hormone (GH), interleukin (IL-1β), melatonin receptor (MEL1C), 17β-hydroxysteroid dehydrogenase (HSD17B2), heat shock protein (HSP70), doublesex and mab-3 related transcription factor 1 (DMRT1), forkhead box L2 (FOXL2), and cytochrome P450 family 11 subfamily A member 1 (CYP11A1). These genes are of particular interest due to their critical roles in various biological pathways, including growth, development, metabolism, immune response, sleep-wake cycle regulation, biosynthesis and metabolism of estrogens, cellular stress response, and sex differentiation, and they have been implicated in various pathological conditions [29-32]. We hypothesize that exposure to DBP and IBU will induce significant changes in gene expression profiles in aquatic organisms, reflecting the disruption of cellular processes and the activation of stress response pathways. A thorough analysis of these molecular alterations, binding poses, and binding sites of the contaminants in fish would help identify sensitive biomarkers associated with DBP and IBU exposure. This, in turn, would contribute to the development of effective monitoring strategies for these pollutants in aquatic environments.

Test chemical

One gramme of standard analytical ibuprofen (CAS number 15687-27-1, technical grade, 99 % purity) and analytical n-butyl phthalate (CAS number 84-74-2, technical grade, 99 % purity) in an ampoule (soluble liquid) were purchased from Sigma-Aldrich. A stock solution of 1 g of ibuprofen in 1 L of distilled water was prepared by dispersing the compound using a bath sonicator for 8 h, while n-butyl phthalate (ampoule) was dissolved in 1 L of water; the stocks were then stored at room temperature.
Experimental exposure

One hundred and eighty post-juvenile Clarias gariepinus were obtained from a commercial fish farm in Akure, Ondo State, Nigeria, with a mean weight of 103-117 g and a total length of 21.50 ± 0.3 cm. To alleviate stress, the organisms were conveyed to the laboratory in well-aerated glass tanks filled with water from the point of collection. They were acclimatized for 10 days and kept in a laboratory setting (temperature 30 ± 2 °C) with a 12-h light/dark cycle. Throughout the experiment, the fish were fed twice daily with a Coppens formulated diet (44 % crude protein), and the water in the holding tanks was changed every 24 h to eliminate the accumulation of waste metabolites. A static-renewal acute toxicity bioassay was performed in accordance with the Organisation for Economic Co-operation and Development (OECD) criteria for fish toxicity testing [33] and was authorized by the local ethics commission (CERAD/REC/11/19/011). The test organisms, comprising 10 fish each in three replicates, were randomly divided into five exposure groups and a control group. The division was done without regard for sex. Each group was then randomly assigned to three replicate tests of ten fish each in 10-L glass aquaria. Predetermined concentrations of IBU (0, 3.0, 3.5, 4.0, 4.5, and 5.0 mg/L) and DBP (0, 5, 10, 20, 25, and 30 mg/L) were measured from the stock solutions and added to the bioassay tanks; the mixture was carefully stirred with a glass rod to ensure even distribution of the chemical and allowed to stand for 30 min before the test organisms were randomly introduced. The sublethal experiment was based on environmentally relevant values extrapolated from 1/100th and 1/1000th of the 96-h lethal concentration (LC50) for IBU (3.8 mg/L) and DBP (22.44 mg/L). The test media, including the untreated control, were renewed at the same concentrations every 24 h. On days 15 and 30, at the end of the exposure periods, test organisms were recovered, anaesthetized with tricaine methanesulfonate (MS-222) and dissected to obtain the liver tissue required for molecular studies.

Quantitative real-time PCR

Total RNA was extracted from fish liver tissues using TRI reagent (Zymo Research, USA). DNA impurities were removed by DNase I (ThermoFisher Scientific) treatment following the manufacturer's methodology. The extracted RNA was then reconstituted in nuclease-free water and quantified by measuring the absorbance at 260 nm with a Hitachi U-1900 spectrophotometer [34].
One microgram of the extracted RNA was used in the reverse transcription reaction to synthesize complementary DNA (cDNA) with the ProtoScript II First Strand cDNA Synthesis Kit (New England Biolabs) under the following conditions: 65 °C for 5 min, 42 °C for 1 h, and 70 °C for 5 min. OneTaq® 2X Master Mix (New England Biolabs) was used to perform PCR amplification, which was subsequently run on a Labgene thermocycler. The primers were designed in-house and synthesized by Inqaba Biotec (Hatfield, South Africa). PCR conditions were as follows: initial denaturation at 95 °C for 5 min; 30 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 1 min; and a final extension step at 72 °C for 5 min. The relative amount of cDNA was subsequently quantified using ImageJ software, and gene expression was normalized to β-actin as the housekeeping gene. The forward and reverse primer sequences of the following genes, namely growth hormone (GH), interleukin (IL-1β), melatonin receptor (MEL1C), 17β-hydroxysteroid dehydrogenase (HSD17B2), heat shock protein (HSP70), doublesex and mab-3 related transcription factor 1 (DMRT1), forkhead box L2 (FOXL2), and cytochrome P450 family 11 subfamily A member 1 (CYP11A1), are presented in Table 1.

Statistics

The densitometric data were subjected to an analysis of variance (ANOVA) using GraphPad Prism version 7 at a 5 % (P < 0.05) level of significance. Prior to conducting ANOVA, the assumptions of normality and homogeneity of variances were assessed using the Shapiro-Wilk test and Levene's test, respectively. Both assumptions were met (Shapiro-Wilk test, p > 0.05; Levene's test, p > 0.05). The Tukey test was applied to further separate the means, with Bonferroni correction applied to account for multiple comparisons (a sketch of this workflow is given below).

AutoDock Vina was used for the docking investigations of the compounds with proteins from Clarias gariepinus. An expansive search grid was used in this application to accommodate the ligand molecules. Table 2 displays the box spacing and coordinates used for the blind docking. A homology model of each protein was used to dock the chemicals, and the poses with the best scores were chosen. Optimal docking positions were theorized to represent the most stable configuration of each chemical upon protein binding.

Gene expression studies

The results of the densitometric analysis of the reference genes are shown in Fig. 1a-f. Data are expressed as mean ± SEM (n = 6); ***p < 0.05 denotes the significance of the data of the exposed groups when compared with the control group.
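As a concrete illustration of the statistical workflow described in the Statistics subsection, the sketch below runs the normality and homogeneity checks followed by one-way ANOVA and Tukey's HSD on hypothetical densitometric data. The group labels and values are made up, the Bonferroni adjustment mentioned in the text would be applied analogously, and the actual analysis was performed in GraphPad Prism.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
# Hypothetical normalized band intensities (gene / beta-actin) for 4 groups, n = 6.
groups = {
    "control":  rng.normal(1.00, 0.08, 6),
    "IBU_low":  rng.normal(1.25, 0.08, 6),
    "IBU_high": rng.normal(1.60, 0.08, 6),
    "DBP_high": rng.normal(1.45, 0.08, 6),
}

# Assumption checks: Shapiro-Wilk per group, Levene across groups.
for name, vals in groups.items():
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(vals).pvalue, 3))
print("Levene p =", round(stats.levene(*groups.values()).pvalue, 3))

# One-way ANOVA followed by Tukey's HSD post hoc test.
f, p = stats.f_oneway(*groups.values())
print("ANOVA: F =", round(f, 2), "p =", round(p, 5))

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```

The same pattern (assumption checks first, omnibus test, then pairwise comparisons) applies to each gene's densitometric data in turn.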
Molecular docking

Using AutoDock Vina, both test compounds were docked into the active sites of GH, IL-1β, MEL1C, HSD17B2, HSP70, DMRT1, FOXL2, and CYP11A1. The binding affinities and the hydrophobic interactions of the test compounds with the reference gene products are presented in Table 3 and Figs. 2-9 (A-D). The top poses of the test compounds with higher binding interactions were found lying deep within the binding cavities of IL-1β, MEL1C, HSD17B2, HSP70, FOXL2, and CYP11A1, and at the surface of the binding sites of GH and DMRT1, all showing significant interactions and favorable interaction energies ranging from −4.2 kcal/mol to −6.2 kcal/mol (Table 3 and Figs. 2-9 (A-D)). Among the tested compounds, IBU exhibited the lowest binding energies, ranging between −5.2 and −6.8 kcal/mol, and the highest affinity for the docked gene products. The binding potential of IBU was further strengthened by hydrogen bonds and hydrophobic interactions. In contrast, the binding energies of DBP ranged between −4.2 and −6.1 kcal/mol (see Fig. 3).

Table 2: The blind-docking box spacing and coordinates for the ligands.

Hydrogen bonds with amino acid residues within the catalytic cavities (ILE66A, ILE191A, VAL239A for HSD17B2; LYS33A for HSP70; LYS45A, TYR82A, TRP89A for FOXL2) further anchored the docked compounds.

Discussion

Growth hormone (GH) is a polypeptide hormone secreted by somatotrophs in vertebrates and invertebrates. It plays a crucial role in regulating somatic growth and in maintaining protein, carbohydrate, lipid, and mineral metabolism [35]. In this study, the overexpression of the GH gene suggests rapid growth, which is accompanied by increased metabolic rates and, consequently, elevated aerobic respiration [36]. Reactive oxygen species (ROS), byproducts of oxidative metabolism and cell signaling molecules, are generated in the mitochondria. Several studies have linked elevated ROS levels to rapid growth [37]. When ROS production exceeds the cellular antioxidant capacity, oxidative stress can ensue, damaging macromolecules such as DNA, proteins, and lipids. This phenomenon has been implicated in cellular senescence [38]. The free radical theory of aging, proposed by Harman [39], suggests that aging is driven by the accumulation of unrepaired somatic damage caused by free radicals. Therefore, excessive upregulation of growth hormone could lead to accelerated aging and a shortened lifespan [40]. To compensate for the increased metabolic demand induced by GH, organisms experience an elevation in feeding motivation and appetite to meet their energy deficit [41]. This enhanced foraging activity may increase their willingness to risk exposure to predators while feeding. Jonsson et al. [42] observed that growth hormone-treated trout resumed feeding earlier, consumed more food than control trout, and foraged closer to the water surface, making them more susceptible to aerial predation.
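Ranking poses by their Vina scores, as described above, is easily automated. The sketch below assumes the usual AutoDock Vina output convention, in which each pose in the output PDBQT file carries a "REMARK VINA RESULT" line with the affinity in kcal/mol; the file names are hypothetical.

```python
import re
from pathlib import Path

def best_affinity(pdbqt_path):
    """Return the most favorable (lowest) Vina affinity in an output PDBQT.

    Assumes the usual Vina convention: each MODEL block contains a line
    'REMARK VINA RESULT:  <affinity>  <rmsd l.b.>  <rmsd u.b.>'.
    """
    pattern = re.compile(r"REMARK VINA RESULT:\s+(-?\d+\.\d+)")
    scores = [float(m.group(1))
              for m in pattern.finditer(Path(pdbqt_path).read_text())]
    return min(scores) if scores else None

# Hypothetical output files, one per receptor-ligand pair.
for receptor in ["GH", "IL1B", "MEL1C", "HSD17B2"]:
    for ligand in ["IBU", "DBP"]:
        path = f"docked_{ligand}_{receptor}_out.pdbqt"
        if Path(path).exists():
            print(receptor, ligand, best_affinity(path), "kcal/mol")
```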
IL-1β, a prominent member of the interleukin cytokine family, holds the distinction of being the first characterized interleukin. It plays a central role in mediating the body's response to tissue injury, microbial invasion, inflammation, and immunological reactions in fish [43]. Numerous studies have firmly established IL-1β as a potent proinflammatory cytokine that modulates immunological and inflammatory responses to infection, injury, and immunological challenges in animals [44]. While the tissue expression profile of IL-1β in healthy fish varies significantly across teleost species, the observed upregulation of IL-1β in exposed organisms strongly suggests the induction of inflammatory responses triggered by the immunological challenge posed by the toxicants [43]. This notion is further supported by studies documenting elevated IL-1β expression in the gill, spleen, and head kidney of Cynoglossus semilaevis [45], the spleen of rainbow trout [46], and the head kidney and gill of Larimichthys crocea [43] upon exposure to microbial infection. These observations collectively point to the central role of IL-1β in mediating the acute inflammatory response of this fish species.

Melatonin receptors (MEL1C), apart from their well-established role in regulating circadian rhythms in animals, have emerged as potent agents in regulating oxidative damage in fish and other vertebrates. Melatonin and its metabolites, including N1-acetyl-N2-formyl-5-methoxykynuramine, 6-hydroxymelatonin (6-OHM), and N1-acetyl-5-methoxykynuramine, have demonstrated free radical scavenging capabilities [47]. A study by Vázquez et al. [48] revealed the potential of endogenously synthesized or exogenously added melatonin to counteract stress-related diseases by modulating various transcription factors through its receptor protein. The upregulation of MEL1C in exposed organisms could be attributed to the hormone's ability to mitigate cellular damage caused by oxidative stress, a direct consequence of elevated GH levels. Moniruzzaman et al. [49] demonstrated that melatonin treatment effectively restored GSH levels in H2O2-stressed fish hepatocytes to those of control fish hepatocytes via the Erk/Akt/NF-κB pathway. This underscores melatonin's potential in alleviating liver dysfunction and mitigating oxidative stress [49]. Another plausible explanation for the upregulation of MEL1C could be linked to its GH-modulating capacity. Falcon et al. [50] demonstrated the efficacy of melatonin in modulating the secretion of GH and prolactin in Oncorhynchus mykiss (female rainbow trout). The upregulation of melatonin could therefore be a compensatory mechanism to counteract the elevation of GH in the test organisms.

17β-hydroxysteroid dehydrogenases (17β-HSDs) belong to the short-chain dehydrogenase/reductase (SDR) protein superfamily and play a crucial role in the final stages of steroid biosynthesis in both lower and higher vertebrates [51]. Among the various 17β-HSDs, type 2 (17β-HSD2) stands out as the primary enzyme responsible for inactivating estrogens and androgens [52]. Their biological function is believed to be maintaining a delicate balance of active and inactive androgens and estrogens in target tissues, while also inactivating these hormones in nontarget tissues [51]. The significant expression of 17β-HSD2 observed in this study suggests the onset of early pubertal development, which typically occurs between two and six months of age in C.
gariepinus [53]. The upregulation of 17β-HSD2 is essential in both male and female fish for maintaining the balance between the release of estradiol and estrone, and this upregulation persists until maturity before being downregulated in tissues [51]. Puberty in fish has been associated with adverse effects on growth, feed utilization, health, and welfare, due to the redirection of resources and energy away from somatic growth and maintenance towards gonad growth, gamete production, and reproductive behavior [54]. Therefore, the observed increase in growth hormone in this study may not necessarily translate into somatic growth that is favorable in aquacultural practice.

Heat shock proteins (HSPs) are a group of highly conserved proteins found in a wide range of organisms, from bacteria to mammals. Their expression is regulated by the Akt signaling pathway [55], and they play a critical role in maintaining cellular homeostasis. HSPs act as molecular chaperones, detecting and binding to misfolded or damaged proteins and guiding them towards repair or proper folding [49]. This protective mechanism is crucial for cells to withstand environmental stressors such as chemical exposure, heat shock, and UV or γ-irradiation [56]. The significant upregulation of HSP70 observed in this study suggests enhanced cellular defense against apoptosis, a programmed cell death mechanism triggered by cellular stress [57]. Additionally, the upregulation of HSPs may serve as a cellular response to combat oxidative stress, a condition caused by an overabundance of reactive oxygen species (ROS). HSPs can modulate glutathione metabolism, maintaining glutathione in its reduced state, a crucial factor in neutralizing ROS and protecting cells from oxidative damage [58].

Table 3: Binding affinity, H-bonding and hydrophobic interactions of dibutyl phthalate and ibuprofen with the growth hormone (GH), interleukin (IL-1β), melatonin receptor (MEL1C), 17β-hydroxysteroid dehydrogenase (HSD17B2), heat shock protein (HSP70), doublesex and mab-3 related transcription factor 1 (DMRT1), forkhead box L2 (FOXL2), and cytochrome P450 family 11 subfamily A member 1 (CYP11A1) of C. gariepinus.

The notable upregulation of HSP70 in fish exposed to toxicants has been consistently observed in various studies. For instance, red cherry shrimp (Neocaridina denticulata) exposed to diethyl phthalate (DEP), dipropyl phthalate (DPrP), and diphenyl phthalate (DPP) exhibited increased HSP70 expression [59], as did common carp (Cyprinus carpio) exposed to DBP [60]. This suggests that the upregulation of the HSP70 gene in response to environmental pollutants may be a general adaptive mechanism that enhances the ability of organisms to survive in polluted environments.

The DMRT1 gene plays a pivotal role in regulating sex determination and/or gonadal sex differentiation across metazoan animals. In non-mammalian vertebrates, DMRT1 is sometimes located on sex chromosomes and directly influences sex determination [61]. Previous studies cited by Amaury and Manfred [62] have consistently demonstrated male-restricted expression of DMRT1 in various fish species, including zebrafish (Danio rerio) [63], Nile tilapia (Oreochromis niloticus) [64], North African catfish (Clarias gariepinus) [65], and southern catfish (Silurus meridionalis) [66]. The significant expression of the DMRT1 gene observed in this study strongly suggests a robust male-biased sex expression pattern.
Forkhead box protein L2 (FOXL2), a critical member of the Fox gene family, plays a pivotal role in ovarian differentiation and oogenesis in vertebrates [67]. Previous studies have firmly established that FOXL2 modulates the expression of aromatase, encoded by the cyp19a1a gene [68], which is the critical enzyme responsible for synthesizing estrogen, specifically estradiol-17β (E2), in XX gonads in a female-specific manner in tilapia. Pannetier et al. [69] further reinforced the ovarian differentiation preference of FOXL2 by demonstrating the co-localization of FOXL2 and aromatase in the somatic cells of developing XX gonads at both the mRNA and protein levels. The significant downregulation of FOXL2 observed in fish exposed to IBU and DBP can be attributed to the antagonistic roles of DMRT1 and FOXL2 in fish sex differentiation [70]. Wang et al. [71] reported that DMRT1 suppresses the female pathway by repressing aromatase gene transcription and estrogen synthesis in XY gonads. In Nile tilapia (O. niloticus), the antagonistic roles of FOXL2 and DMRT1 in sex differentiation, mediated through the modulation of aromatase gene (cyp19a1a) expression and estrogen synthesis, were further validated in vivo using transcription activator-like effector nucleases (TALENs) [70]. The overexpression of DMRT1 observed in this study could also result in female-to-male sex reversal. Li et al. [72] reported that disruption of cyp19a1a and FOXL2 expression led to female-to-male sex reversal in tilapia. Previous investigations by Zhang et al. [70,73] revealed that transgenic overexpression of DMRT1 in XX tilapia led to an inhibition of cyp19a1a expression, decreased E2 levels, and eventually resulted in sex reversal. This aligns with observations in other vertebrate groups where masculinization has been induced via inhibition of endogenous estrogen synthesis using an aromatase inhibitor [74-77]. Interestingly, the sperm of sex-reversed males retained the ability to fertilize eggs from wild-type females, showing no significant difference in fertilization rate compared to wild-type males [70]. This finding highlights the robustness of male reproductive function in the context of sex reversal induced by hormonal disruption.

CYP11A1, also known as cytochrome P450 side-chain cleavage (P450scc), is a mitochondrial monooxygenase that plays a crucial role in the initial step of steroidogenesis, catalyzing the conversion of cholesterol to pregnenolone, the precursor of all steroid hormones [78]. Beyond its involvement in steroid hormone biosynthesis, CYP11A1 also exhibits a broader role in cell physiology, influencing cell proliferation, differentiation, and apoptosis [79]. Notably, purified CYP11A1 and/or steroidogenic mitochondria containing CYP11A1 can act on a range of steroids and secosteroids beyond cholesterol to produce novel CYP11A1-derived secosteroids and Δ7-steroids with antiproliferative, pro-differentiation, and anti-inflammatory properties [80]. The significant downregulation of CYP11A1 may indicate the potential of the test chemicals to impair the reproductive processes of C. gariepinus and to reduce egg numbers, hatching, and survival rates. This finding aligns with previous studies, such as the work of Yang et al., who reported that bisphenol B can disrupt steroid hormone homeostasis in zebrafish [81].

Hydrogen bonds play a crucial role in determining the specificity of ligand binding [82]. The hydrogen bond interaction of IBU with the GH, HSP70 and DMRT1 of C.
gariepinus was enhanced by one (ALA147), one (ASP31A) and two (SER43A, ASN53A) amino acid residues, respectively, at their active sites. This indicates that IBU has a high potential to bind strongly and form very stable complexes at the target sites of these gene products, thus increasing the duration and persistence of its toxic action in exposed and vulnerable aquatic animals. Although DBP had a lower affinity for MEL1C, it has a greater chance of provoking a toxic action at its preferred active site owing to its strong hydrogen bonding with four amino acid residues. DBP's bond formation with MEL1C was further intensified by hydrophobic interactions with the PHE50A, GLN52A, VAL62A, VAL63A, HIS66A and PHE67A amino acid residues. The molecular interactions shared by both compounds (TYR98A, ILE132A, PRO150A for GH; PHE50A, VAL62A for MEL1C; LYS33A for HSP70) give credence to the general promiscuity of proteins in interacting with multiple distinct ligands [83]. Gao and Skolnick [84] suggested that binding pockets with similar shapes may have a diverse composition of amino acids, consequently generating various physicochemical environments favoured by chemically different ligands, e.g., homologs with modified substrate specificities. They further suggested that, for large pockets, some small-molecule ligands may bind to at least partially different regions of the pockets, and these ligands need not have similar chemical properties. The fact that IBU and DBP share similar binding pockets may further give insight into their potential joint toxicity, since they co-exist in the aquatic environment with a high risk of causing adverse effects, as observed in this study. Binding-site competition between compounds in mixtures usually results in an additive, synergistic or antagonistic effect. The relatively stronger affinity of IBU for the active sites gives the compound some advantage in binding more tightly and in specifically targeting the protein.

IBU in water posed a severe ecological risk of adverse effects in fish in both rivers, an implication of the significant number of drug-target orthologs present in fish [85]. Gunnarsson et al. [85] reported that 90 % of all human drug targets had orthologues in zebrafish (D. rerio), 64 % had orthologues in the water flea (D. pulex), and 34 % in the green alga (Chlamydomonas reinhardtii). DBP and DEHP in water also posed a severe ecological risk of adverse effects in the algae, Daphnia and fish populations of both rivers. If their influx into the environment is not checked, the delicate balance of the ecosystem could be breached, leading to a catastrophic loss of biodiversity.

Conclusion

The toxicological evaluation in this study demonstrated that IBU and DBP are toxicogenomic, with the potential to act as both upregulators and inhibitors of various physiological processes. The test compounds revealed the potential to induce stress, disrupt the fish sleep cycle, and induce the synthesis of pro-inflammatory cytokines in fish, which is contrary to their mode of action in humans, where they inhibit pro-inflammatory precursors such as the prostaglandins. Furthermore, exposure to the test compounds may distort sex differentiation by inducing female-to-male sex reversal, resulting in a male-biased fish population. This phenomenon is capable of upsetting the delicate balance of the ecosystem.
To stem the decline of aquatic life, it is advised that more thorough monitoring campaigns be carried out in freshwater bodies, particularly in regions with heavy anthropogenic activity.

Fig. 2. Binding poses and binding sites of dibutyl phthalate and ibuprofen with C. gariepinus growth hormone (panels A & C); molecular interaction of dibutyl phthalate and ibuprofen with amino acid residues within the binding pocket of the protein structures (panels B & D).

Fig. 3. Binding poses and binding sites of dibutyl phthalate and ibuprofen with C. gariepinus interleukin-1beta (panels A & C); molecular interaction of dibutyl phthalate and ibuprofen with amino acid residues within the binding pocket of the protein structures (panels B & D).

Fig. 4. Binding poses and binding sites of dibutyl phthalate and ibuprofen with C. gariepinus melatonin receptor (panels A & C); molecular interaction of dibutyl phthalate and ibuprofen with amino acid residues within the binding pocket of the protein structures (panels B & D).

Fig. 5. Binding poses and binding sites of dibutyl phthalate and ibuprofen with C. gariepinus 17β-hydroxysteroid dehydrogenase (panels A & C); molecular interaction of dibutyl phthalate and ibuprofen with amino acid residues within the binding pocket of the protein structures (panels B & D).

Fig. 6. Binding poses and binding sites of dibutyl phthalate and ibuprofen with C. gariepinus heat shock protein 70 (panels A & C); molecular interaction of dibutyl phthalate and ibuprofen with amino acid residues within the binding pocket of the protein structure (panels B & D).

Fig. 7. Binding poses and binding sites of dibutyl phthalate and ibuprofen with C. gariepinus doublesex and mab-3 related transcription factor 1 (panels A & C); molecular interaction of dibutyl phthalate and ibuprofen with amino acid residues within the binding pocket of the protein structures (panels B & D).

Fig. 8. Binding poses and binding site of dibutyl phthalate and ibuprofen with C. gariepinus forkhead box L2 (panels A & C); molecular interaction of dibutyl phthalate and ibuprofen with amino acid residues within the binding pocket of the protein structure (panels B & D).

Fig. 9. Binding poses and binding site of dibutyl phthalate and ibuprofen with C. gariepinus cytochrome P450 11A1 (panels A & C); molecular interaction of dibutyl phthalate with amino acid residues within the binding pocket of the protein structure (panels B & D).

Table 1: Forward and reverse primer sequences used in the study.
2024-05-26T15:18:06.547Z
2024-05-23T00:00:00.000
{ "year": 2024, "sha1": "9150d0054720bd1e08ff2e255fe4611358ac5663", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844024079118/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7fc1636addc84f46b5830971bce60112aa20af4b", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
221340671
pes2o/s2orc
v3-fos-license
Conformal Regge theory at finite boost

The Operator Product Expansion is a useful tool for representing correlation functions. In this note we extend conformal Regge theory to provide an exact OPE representation of Lorentzian four-point correlators in conformal field theory, valid even away from the Regge limit. The representation extends the convergence of the OPE by rewriting it as a double integral over continuous spins and dimensions, and features a novel "Regge block". We test the formula in the conformal fishnet theory, where exact results involving nontrivial Regge trajectories are available.

Introduction

The interactions between highly boosted objects are a topic of longstanding interest in relativistic field theory. On the one hand, due to time dilation effects, the Regge limit (large boost with fixed impact parameter) provides an instantaneous snapshot of essentially frozen objects. On the other hand, since probes move near the lightcone, observables in this limit are intrinsically dynamical and are strongly constrained by relativistic causality. Many systems in the Regge limit exhibit a transient regime where interactions grow as a function of boost, before saturating as required by the quantum mechanical conservation of probability. Regge theory quantifies this growth by the spin of effective excitations. A famous example is the rising hadronic cross-sections attributed to so-called Pomeron exchanges. Regge theory applies as well to highly boosted correlators in conformal field theories [1-3]. In strongly coupled, holographic CFTs, the dominant effective excitation is nothing but the bulk graviton. Its exchange grows as fast as allowed by the bound on chaos [4], making the consistency constraints mentioned above particularly stringent. Indeed, the fact that gravity grows with boost restricts its very structure at all energies [5]; more generally, growing amplitudes must satisfy positivity properties related to the Average Null Energy Condition [6].

In many studies of the Regge limit, it is often sufficient to consider only the leading term at large boost (in the intermediate growth regime). However, there may be situations where subleading effects are important. A simple example would be to study effects from photons in addition to gravitons. Another example would be saturation. Finally, there may be theories where interactions do not grow, warranting precision studies. It was recently argued that the critical three-dimensional O(N) and Ising models are of this type, with Regge intercept less than unity (j* < 1), leading to transparent scattering at large boost [7,8]. In general, the Regge limit in conformal theories probes intermediate operators of large scaling dimension [9-11]. In transparent theories one might thus hope to use Regge theory, with the exchange of a few dominant trajectories, to precisely bound the heavy spectrum, which could improve the convergence of bootstrap calculations.

The goal of this paper is to extend the formulas of conformal Regge theory so as to retain the exact energy dependence of four-point correlators. Generally, the OPE in a conformal field theory converges whenever two local operators act on the vacuum, no matter where they are inserted in spacetime (see [12] for a review). The physical picture of effective Reggeized particles, however, arises from the OPE between an initial and a final state of a scattering process, often called the "t-channel", and the OPE in such channels diverges.
As reviewed below, this divergence occurs after Euclidean correlation functions are analytically continued to a "Regge sheet". Our main result is an exact resummation of the OPE which converges on the Regge sheet. Since correlators on the Euclidean sheet are well understood, we can state the result in terms of a difference or discontinuity, eq. (1.1). The salient feature, familiar from (conformal) Regge theory [2,3], is that a discrete sum over spins has been replaced by an integral. The power of Regge's idea is that this enlarges the radius of convergence of the OPE. We expect eq. (1.1) to converge anywhere on the Regge sheet.

The novel feature of eq. (1.1), in comparison with earlier work, is the "Regge block" $R^{(a,b)}_{\Delta,J}(z,\bar z)$, defined in eq. (3.17) below, which accounts for subleading power corrections. Perhaps surprisingly, the Regge block is not simply the conformal block that one might have guessed from the leading-power formulas. The Regge block can be defined as the unique solution to the conformal Casimir equations with a certain vanishing discontinuity. This combination turns out to cancel certain spurious poles, and we find that it neatly packages terms which might otherwise have been split in other treatments.

Starting from eq. (1.1), concrete formulas for order-by-order asymptotic expansions in a given model can be obtained, as detailed in eq. (3.26). These formulas will be tested in the conformal fishnet model [13], a recently proposed limit of N = 4 SYM that retains only scalar fields but remains integrable, at the cost of sacrificing unitarity. The OPE data corresponding to certain four-point correlators is known exactly, and the correlators can be expanded in the coupling in terms of known special functions (harmonic polylogarithms). Using this expansion, we analytically continue correlators to the Lorentzian regime and compare their high-energy behaviour with eq. (1.1).

This paper is organized as follows. In section 2 we review the kinematics of the Regge limit and the required analytic continuation, and we review the fishnet model. Section 3 derives our exact formula for the Regge limit, after reviewing the analogous manipulations in the S-matrix context. We also obtain a formula for the double-discontinuity and confirm that it inverts the "Lorentzian inversion formula". In section 4 we test these formulas for various correlators in the fishnet model. Section 5 presents our brief conclusions.

Review of conformal Regge kinematics

A conformal four-point correlator in Minkowski space $M^{d-1,1}$ can be expressed, eq. (2.1), as a kinematical prefactor times a function $\mathcal G(z,\bar z)$, where the exponents appearing in the prefactor are combinations of the operators' scaling dimensions, and the conformal cross-ratios $z, \bar z$ are related to the coordinates $\{x_i\}$ by

$$ z\bar z = \frac{x_{12}^2\, x_{34}^2}{x_{13}^2\, x_{24}^2}\,, \qquad (1-z)(1-\bar z) = \frac{x_{14}^2\, x_{23}^2}{x_{13}^2\, x_{24}^2}\,. \qquad (2.2) $$

The Regge limit of the correlation function is attained by applying large and opposite boosts to the pairs (12) and (34), sending the operators to infinity along the lightcone, eq. (2.3); here the vectors are written in lightcone coordinates $x^\pm$. In the kinematics considered in this paper, both separations $(x_4 - x_1)$ and $(x_2 - x_3)$ are timelike, while all other separations remain spacelike. To evaluate the Regge limit, the Lorentzian correlator must be obtained from the Euclidean theory described above. It is calculated by analytically continuing the theory from the region where $\bar z = z^*$, namely by rotating $\bar z$ around the branch point at $\bar z = 1$ while keeping z fixed [3]. The scattering process and analytic continuation are illustrated in figure 1.
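As a concrete aid to the kinematics above, the following minimal script computes $z$ and $\bar z$ from four Minkowski points by solving $u = z\bar z$ and $v = (1-z)(1-\bar z)$; the mostly-plus signature and the helper names are our choices, not the paper's.

```python
import numpy as np

def interval(v):
    """Minkowski interval, mostly-plus signature: x^2 = -t^2 + |x_vec|^2."""
    return -v[0]**2 + np.dot(v[1:], v[1:])

def cross_ratios(x1, x2, x3, x4):
    """Return (z, zbar) solving u = z*zbar, v = (1-z)(1-zbar)."""
    d = lambda a, b: interval(a - b)
    u = d(x1, x2) * d(x3, x4) / (d(x1, x3) * d(x2, x4))
    v = d(x1, x4) * d(x2, x3) / (d(x1, x3) * d(x2, x4))
    s = 1 + u - v                   # z + zbar
    r = np.sqrt(s**2 - 4*u + 0j)    # z - zbar (up to ordering)
    return (s + r) / 2, (s - r) / 2

# Euclidean-regime example (all points at t = 0, so all separations spacelike):
pts = [np.array(p, float) for p in
       [(0, 0, 0), (0, 1, 0), (0, 1, 1), (0, -0.2, 1)]]
z, zb = cross_ratios(*pts)
print(z, zb)  # a complex-conjugate pair, as expected when zbar = z*
```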
To understand the continuation path a little more explicitly, we recall that for Lorentzian correlators, timelike distances acquire a small imaginary part, $x_{23}^2 \to -|x_{23}|^2 \pm i0$, which is positive if the operators are time-ordered and negative otherwise. The second cross-ratio in eq. (2.2) thus accumulates a phase $e^{2\pi i}$, which is indeed what happens along the path. By further defining $\sigma^2 = z\bar z$ and $w^2 = z/\bar z$, we see that the Regge limit corresponds to $\sigma \to 0$ with w fixed. In analogy with the QFT Regge limit, we have the identification with the Mandelstam variables $\sigma \sim 1/s$ and $w \sim t$ of s-channel scattering.

In our chosen kinematics (with the pairs (1,4) and (2,3) in separate Rindler wedges) we have access to four operator orderings. Two are equivalent, and rather trivial: if one pair is time-ordered and the other anti-time-ordered, the continuation phases cancel out and the path does not leave the Euclidean sheet. All novel Lorentzian information is contained in commutators, or discontinuities, of which we can define two natural ones, $\mathrm{Disc}_{14}$ and $\mathrm{Disc}_{23}$, eq. (2.4); the different phases originate from the prefactor in eq. (2.1). These two discontinuities contain effectively the same information, and the fourth independent operator ordering can be reached by complex conjugation.

Review of conformal fishnet theory

Conformal fishnet theory is a recently proposed integrable theory in d = 4 that is neither a gauge theory nor supersymmetric [13]. A chief interest of this theory is the fact that very few Feynman diagrams contribute to any given process, often a unique diagram at each loop order (or at each order in the 't Hooft 1/N expansion). In this way, integrability of the theory allows for the calculation of certain Feynman diagrams which have so far been incalculable by standard methods. The theory contains two complex (matrix-valued) scalar fields, and its simplification comes at the price of unitarity: the basic four-point interaction includes a term $\mathrm{Tr}(Y^\dagger X^\dagger Y X)$ but not its complex conjugate. Non-unitarity means that certain formulas below will contain unusual factors of the imaginary number i, but there otherwise appear to be no obstructions to resumming perturbation theory and discussing finite-coupling correlators.

An example of a class of diagrams which has been resummed to all orders in the planar limit is the set of "fishnet" diagrams drawn in figure 2. They describe the "zero-magnon" correlator (trace implied), eq. (2.5), which was computed exactly in the u-channel in [14] (eqs. (3.12) and (A.6) there) as a sum over conformal blocks, eq. (2.6), where the scaling dimension is parametrized as $\Delta = 2 + i\nu$ and the normalization coefficient, eq. (2.8), is taken from [14].

Figure 2. An example of a zero-magnon u-channel fishnet ladder diagram evaluated in the computation of $G_u(z,\bar z)$. The dashed/solid four-point interaction sites have coupling $\xi^2$, so this diagram contributes at order $\xi^{12}$. The only other diagrams are of this form, but with any number of "rungs".

The conformal block $G^{(a,b)}_{\Delta,J}(z,\bar z)$ is a combination of hypergeometric functions in d = 4; see appendix A. For this calculation we have $\Delta_1 = \Delta_2 = \Delta_3 = \Delta_4 = 1$ and hence $a = b = 0$. The t-channel ladders are given by the same expression (2.6) without the $(-1)^J$. We also require $\xi^2$ to have a small negative imaginary part for the ν integration to be well-defined [14].
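The weak-coupling coefficients discussed next are built from polylogarithms. As a numerical illustration, the sketch below evaluates the classic one-loop ladder (box) function in its standard dilogarithm form, which we assume matches the order-$\xi^2$ data only up to overall normalization; the essential point is the single-valuedness check.

```python
import mpmath as mp

mp.mp.dps = 20

def ladder1(z, zb):
    """One-loop ladder (box) function in the standard dilogarithm form."""
    return (2*mp.polylog(2, z) - 2*mp.polylog(2, zb)
            + mp.log(z*zb) * mp.log((1 - z)/(1 - zb))) / (z - zb)

# Euclidean single-valuedness: for zb = z* the result is real (up to noise),
# even though each dilogarithm separately has an imaginary part.
z = mp.mpc(0.3, 0.4)
print(ladder1(z, mp.conj(z)))   # imaginary part ~ 1e-20
```

Single-valued combinations of this kind are exactly the harmonic polylogarithms referred to below.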
Eq. (2.6) can be related to the usual operator product expansion by noticing that, since the conformal blocks decay exponentially as $\mathrm{Im}(\nu) \to -\infty$, we can close the integration contour in the lower half-plane and apply Cauchy's residue theorem. The physical poles of the integrand occur when ν solves eq. (2.9); all other poles are spurious and cancel in pairs [14]. The spectrum of this correlator thus consists of exactly two Regge trajectories: only two operators contribute for each spin.

This result is valid for any finite coupling ξ, and in particular to all orders in perturbation theory, where the correlator is expanded in powers of the coupling, eq. (2.10), with coefficient functions $G^{(n)}(z,\bar z)$. By analyzing the series expansions in small z and $\bar z$, the authors of [14] found that the $G^{(n)}$'s are combinations of single-valued harmonic polylogarithms (HPLs), a basis of special iterated integrals. Several useful properties and definitions of these functions are reviewed in appendix B. For example, the order-$\xi^2$ contribution, eq. (2.13), can be written explicitly in terms of ordinary dilogarithms (see eq. (B.4)), where $\mathrm{Li}_2$ is the dilogarithm function. We verified the formulas provided in ref. [14] up to six loops (order $\xi^{12}$) and order $\sigma^4$.

The functions $G^{(L)}$ provide a "data mine" against which we can precisely test conformal Regge theory. Regge theory allows us to resum the OPE beyond its radius of convergence in cross-ratio space and to evaluate the correlator in the Regge limit via eq. (1.1). Our first goal will be to check that this agrees, order by order in the coupling and power by power in σ, with the analytic continuation of the $G^{(L)}$'s. A schematic of the calculation is provided in figure 3. We also applied this technique to the "one-magnon" four-point function, which has a structure very similar to the zero-magnon case and is reviewed in section 4.3. The Regge limit at leading power has previously been studied in ref. [15] and was extended to other fishnet correlators in [16,17].

Conformal Regge theory with exact energy dependence

The extension of the s-channel OPE to the Regge limit described in section 2.1 was obtained in the seminal paper [3]. This is nontrivial since the sum over spins diverges in the Regge regime. The solution is to rewrite the sum as an integral via the so-called Sommerfeld-Watson transform. Our contribution here will be to extend the formulas of [3] to an exact expression (see eqs. (3.24)-(3.26) below) which can be used to obtain arbitrary subleading powers of $z, \bar z$. As we will see, a new sort of term then appears. In this section we keep the spacetime dimension and the external operator dimensions generic.

Figure 3. A sketch of the processes evaluated in the following sections (resummed correlator, analytic continuation, Regge limit). We analyze the Sommerfeld-Watson resummation in generic conformal theories and demonstrate that the diagram commutes in the fishnet model.

Sommerfeld-Watson resummation in S-matrix theory

We begin by reviewing the classic resummation of SO(d) spherical harmonics, which will give us intuition about what we should, and should not, expect (see also [18,19]). Consider a function of one angle expanded in partial waves,

$$ F(x) = \sum_{J \geq 0} a_J\, C_J(x), \qquad (3.1) $$

where the SO(d) spherical harmonics $C_J$ are defined in eq. (A.5). We use a normalization which trivializes the Regge limit: $\lim_{x\to\infty} C_J(x) \to (2x)^J$. We will borrow nomenclature from S-matrix theory, where in d spacetime dimensions one would use SO(d-1) partial waves with $\cos\theta = 1 + 2t/s$ (say, for massless scattering), and the coefficients would depend on the center-of-mass energy s.
Regge theory aims to use such s-channel partial waves to study the large-t, fixed-s Regge limit; the s-dependence will play no role in our discussion. As reviewed below, in S-matrix applications the partial waves $a_J$ are the sum of a part which is analytic in spin and a part which alternates with spin,

$$ a_J = a^t_J + (-1)^J a^u_J, \qquad (3.2) $$

where each of $a^{t,u}_J$ is analytic and polynomially bounded in a half-plane $\mathrm{Re}(J) > j_*$. These are associated with t- and u-channel cuts, representing singularities at positive and negative x, respectively. Many references use instead even and odd combinations, $a^t_J \pm a^u_J$.

To rewrite the sum (3.1) as an integral, we need to think about the analytic properties of the spherical harmonics $C_J(x)$. These are entire functions of J (except for gamma function poles at negative J) which generally have a "u-channel" cut for $x \in (-\infty, -1]$. In fact we have two natural functions, $C_J(\pm x)$. They are related by an overall sign $(-1)^J$ when the spin is an integer, but generally they are distinct. The Sommerfeld-Watson transform, eq. (3.3), pairs $a^t_J$ with the function with a t-cut, and $a^u_J$ with the function with a u-cut, where the contour C encircles clockwise the poles of $1/\sin(\pi J)$ with $J \geq 0$; see figure 4. Since the residue of $1/\sin(\pi J)$ is proportional to $(-1)^J$, it is easy to verify that the integral reproduces the sum in eq. (3.1). The subtractions are a polynomial in x, accounting for the possibility that for a finite number of spins the analytic continuation of $a^t_J$ may not agree with the coefficient in eq. (3.1).

On the contour C, the integral (3.3) converges when the original sum does, i.e. when $|\cos\theta|$ is not too large. To gain anything from this trick one must deform the contour to a vertical line, eq. (3.4). It will be convenient to center it on the fixed line of the Weyl reflection $J \to 2-d-J$, namely $\mathrm{Re}(J) = (2-d)/2$. The contour should remain to the right of all singularities of the $a^{t,u}_J$. In Euclidean kinematics, $C_J(\cos\theta) \sim e^{\pm i\theta J}$ at large imaginary J and the integral converges (possibly as a distribution) as long as $\theta \in [0, \pi]$. In Lorentzian kinematics with $|x| \gg 1$, $C_J(x) \sim (2x)^J$ and we retain convergence on this contour as long as $\arg(x) \in [0, \pi]$. We stress that, given $a^t_J$ and $a^u_J$, eq. (3.4) is an exact representation of the function F(x).

Let us comment on the meaning of the coefficients $a^{t,u}_J$ in eq. (3.4). In general, they are analytic functions in some half-plane $\mathrm{Re}(J) \geq j_*$ which may not include the vertical line. In drawing the second contour in figure 4 we assumed that singularities occur at finite Im J, so that there are no obstructions to reaching the vertical line at large imaginary J. This seems physically reasonable since large spin is often a semi-classical limit. The same comment will apply below in CFT.

A typical application of eq. (3.4) is to obtain large-x asymptotics. The intermediate steps are subtle if one is interested in subleading terms, but since the answer is surprisingly simple, it is worth going through the steps (following appendix A of [19]). The basic idea is to split $C_J(x)$ into two parts which decay in the left and right J half-planes respectively,

$$ C_J(x) = C^{\mathrm{pure}}_J(x) + C^{\mathrm{pure}}_{2-d-J}(x), \qquad (3.5) $$

where $C^{\mathrm{pure}}_J(x)$ satisfies the same Casimir equation but contains a single tower of terms as $x \to \infty$. We should thus deform the integration contour to the left for the $C^{\mathrm{pure}}_J$ terms, and to the right for $C^{\mathrm{pure}}_{2-d-J}$. In principle, one has to include the following types of singularities:

1. Physical left poles or cuts from $a^{t,u}_J$.
2. Spurious poles of $C^{\mathrm{pure}}_J$ in the J plane.
3. Spurious poles of the reflected $C^{\mathrm{pure}}_{2-d-J}$.
4. Poles of $1/\sin(\pi J)$ at negative integers.
5. Poles of $1/\sin(\pi J)$ at non-negative integers.

The surprise, remarkably, is that poles 2-5 all cancel out, and only the physical singularities of $a^{t,u}_J$ contribute!
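The contour form (3.4) can be checked numerically in a toy model. The sketch below verifies with mpmath that a convergent partial-wave sum equals its Sommerfeld-Watson integral along a vertical line; the coefficient $a(J) = 1/\Gamma(J+2)$ is our toy choice, not taken from the paper, and we take x < 0 so that $(-x)^J$ is single-valued on the contour.

```python
import mpmath as mp

mp.mp.dps = 30

def a(J):
    # Toy partial-wave coefficient: analytic and decaying for large Re(J).
    return 1 / mp.gamma(J + 2)

x = mp.mpf("-0.3")  # x < 0, so (-x)**J is unambiguous along the contour

# Direct sum over integer spins: sum_{n>=0} x^n / (n+1)! = (e^x - 1)/x
direct = mp.nsum(lambda n: a(n) * x**n, [0, mp.inf])

# Sommerfeld-Watson form: S = -(1/(2*pi*i)) Int dJ  pi/sin(pi J) a(J) (-x)^J
# along Re(J) = c with -1 < c < 0; closing the contour to the right picks up
# the poles of 1/sin(pi J) at J = 0, 1, 2, ... and reproduces the sum.
c = mp.mpf("-0.5")
integral = mp.quad(lambda t: mp.pi / mp.sin(mp.pi * (c + 1j*t))
                   * a(c + 1j*t) * (-x)**(c + 1j*t), [-mp.inf, mp.inf])
sw = (-1 / (2*mp.pi)) * integral

print(direct)    # 0.86393...
print(sw.real)   # agrees to high precision
```

The same mechanics, with conformal blocks in place of $C_J$, underlies eqs. (3.22)-(3.24) below.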
In brief, poles 2-3 are related by a Weyl reflection and cancel in pairs; poles 4 are generically absent due to a cancellation between t- and u-channel coefficients; and poles 5 are absent due to the $1/\Gamma(-J)$ in eq. (3.5). The cancellations are detailed in appendix C and can be readily understood using a concrete formula for the $a^{t,u}_J$, known as the Froissart-Gribov formula. The upshot is that only physical singularities (as defined precisely in the appendix) contribute. Considering, for notational simplicity, the case in which these consist of discrete poles at $J = j_n$, and taking x to be positive and above the real-axis cut, the result is eq. (3.7). This is a fundamental result of Regge theory. Since each $C^{\mathrm{pure}}_{j_n}(x)$ contains a single power tower, the sum gives an asymptotic expansion in 1/t. The phases of the two terms in eq. (3.7) are simply those of $(-t-i0)^J$ and $(-u-i0)^J$, respectively. The remarkable feature of eq. (3.7) is that, to correctly reproduce the amplitude to any desired order in the 1/t expansion of F, it suffices to replace $C_J$ by $C^{\mathrm{pure}}_J$ in the Sommerfeld-Watson formula (3.4) and to ignore the spurious singularities of $1/\sin(\pi J)$ and $a^{t,u}_J$.

Analytic continuation to the Lorentzian regime and Regge block

The spectral representation of correlation functions is the starting point for Regge analysis in conformal theories. It allows one to write the correlation function as an integral over continuous dimensions, eq. (3.8), where the exchanged operator's scaling dimension is parametrized as $\Delta = d/2 + i\nu$, with ν a complex number. The meromorphic function $c(J,\Delta)$ contains the OPE coefficient data of a particular theory, and has poles at the locations of local operators. The "non-normalizable" modes account for operators with $\Delta < d/2$ (which includes, notably, the identity) [20]. The conformal partial waves $F_{J,\Delta}$ are a sum of a conformal block and its shadow, eq. (3.9) [3,21,22], with coefficients that are products of gamma functions, eq. (3.10).

The spectral representation (3.8) involves a discrete sum over spins, analogous to eq. (3.1). To reach the Lorentzian regime, we must first replace the sum by an integral, and then analytically continue $\bar z$ counterclockwise around 1. This process has been discussed many times, but we found an unexpected twist: the first step enjoys some freedom, because one can add to $F_{\Delta,J}$ terms which vanish for integer spin. We find that the next steps are greatly simplified, especially at subleading powers, by making such an improvement.

This discussion will be somewhat technical. Let us recall the defining properties of $F_{\Delta,J}$: it satisfies the same Casimir equation as $G_{\Delta,J}$, and it is Euclidean single-valued (meaning it has no branch cut when $\bar z = z^*$). The problem with eq. (3.9) is that this property does not hold for non-integer spin; the combination is then not natural in any sense! In fact, no combination of G's can satisfy Euclidean single-valuedness for non-integer J, because it is violated in the $z, \bar z \to 0$ limit, eq. (3.11): there the blocks reduce to the twist prefactor $(z\bar z)^{(\Delta-J)/2}$ times a last factor which is Euclidean single-valued only when J is an integer.
These solve the same Casimir equation, are regular at z = z, and do not have contain singular powers (zz) −J/2 at positive J. Using the method detailed shortly, we find that the natural non-integer spin version of eq. (3.9) contains a third term: 12) where s is a product of sines which will often reoccur: . For the moment, we remark only that the second line of eq. (3.12) manifestly vanishes for integer J ≥ 0, due to 1/Γ(−J), so F good reduces to F in that case. Also, trigonometric identities can be used to show that the definition is invariant under the symmetry (a, b) → (−a, −b). To our knowledge, the function in eq. (3.12) is new. It would be interesting to interpret it in the language of shadow representation, light transforms or integrability [21,23,24], and also to compare with the function called G in ref. [25]. Our method to analytically continue F (a,b)good ∆,J (z, z) to the Regge sheet, following the path in figure 1, is the same method that we used to find the coefficients in eq. (3.12). We first decompose each block G (a,b) ∆,J into pure power solutions according to where each g pure contains a single tower of terms in the limit (3.11). This decomposition is identical to that used for spherical harmonics in eq. (3.5). Contrary to G, the g pure 's are not symmetrical in (z, z). They are however easy to analytically continue around z = 1: since z is held fixed during the continuation, the exponent of z cannot change [3,11,22]: (z, z) leaves us with eight g pure 's with various complicated coefficients. Now the crux is that a combination of blocks F (z, z) is Euclidean single-valued around (1, 1) if and only if the g pure 's can be re-packaged into G's. The reason is that we can reach the Regge sheet by rotating z, z counter-clockwise starting from the region z, z > 1, where JHEP05(2021)059 0 1 z z Figure 6. Rotation of z,z counterclockwise from z,z > 1. Euclidean single-valued functions are symmetrical: figure 6). Since we can reach the Regge sheet by continuing z, z symmetrically from a region where the correlator is symmetrical, it follows that the continuation of a singlevalued correlator is also symmetrical: F (z, z ) = F (z , z) (and nonsingular at z = z). This property ensures that it is a sum of G's. This property fails for non-integer spins for the combination F . This is how we determined eq. (3.12). The coefficients of the four resulting G's contain a part that is essentially the original F (a,b)good ∆,J . We thus subtract those off and record the discontinuity: which is given in terms of a new "Regge block": (3.17) Here we defined the following product of Γ-functions: . (3.18) Importantly, the continuation (3.16) is exact even for noninteger J. A simple defining property of R (a,b) ∆,J is that, being a discontinuity of blocks, its other discontinuity vanishes: We find that eq. Sommerfeld-Watson transformation With the analytic continuation of blocks worked out, one can try to evaluate the continued correlation function following the path figure 1 and discontinuity: (3.20) After some inspection, one finds that this expression makes no sense: the Regge block R ∆,J scales as σ 1−J as σ → 0 in the Regge limit, so the sum diverges. Just as for the S-matrix Regge limit, the solution is to step back and rewrite the sum as an integral before analytically continuing the cross-ratios to take the discontinuity. This requires first promoting the spin J to a complex variable and the partial wave coefficient c(J, ∆) to analytic functions of J. 
In the S-matrix case, this possibility was first observed by Regge and was soon proved in generality by Froissart and Gribov; the analogous result in CFT was proved recently [20,22,23]. In general, this works for $J > j_*$, where it is known that $j_* \leq 1$ in a unitary theory. The partial waves form not one but two analytic functions of spin,

$$ c(\Delta, J) = c^t(\Delta, J) + (-1)^J c^u(\Delta, J), \qquad (3.21) $$

where each term is nicely behaved (power-law bounded) at large imaginary J. Regge's idea allows us to express the sum over integer spins as an integral in the complex plane, eq. (3.22), where the contour C envelopes the positive real j axis, as illustrated in figure 7. Once this contour is in place, we can drag it around the complex plane to obtain a form in which analytic continuation is possible. The general technique is known as the Sommerfeld-Watson transform.

In the contour deformation of figure 7 we may encounter poles from the coefficients $c^{t,u}(\Delta,J)$, as well as possible spurious poles from F. Such spurious poles were discussed in [2]. However, we find that these are absent when using the block $F^{\mathrm{good}}$, for a simple reason: as the unique Casimir eigenfunction satisfying certain regularity conditions, $F^{\mathrm{good}}_{\Delta,J}$ is automatically analytic for $\mathrm{Re}(J) > -\frac{d-2}{2}$ when Δ is along the principal series $\mathrm{Re}(\Delta) = \frac{d}{2}$. We have also verified the cancellation of poles explicitly, using residue formulas from [26]. We can thus write eq. (3.22) with a vertical contour, eq. (3.23). On this contour we are now allowed to analytically continue to the Regge sheet. In particular, we can take the discontinuity directly under the integration sign to get the Regge block of eq. (3.17); this yields eq. (3.24). We note that the sign of the phase $e^{i\pi J}$ is opposite in coordinate space compared to momentum space; the sign is forced on us by the phase the block acquires during the continuation.

We then deform the J-contour to the left for the first term, and to the right for the rest. Similarly to section 3.1, we find the following types of poles:

1. Physical left poles or cuts from $c^{t,u}(\Delta, J)$.
2. Spurious poles of the pure-power solutions.
3. Spurious poles of their Weyl reflections.
4. Poles of $1/\sin(\pi J)$ at negative integers.
5. Poles of $1/\sin(\pi J)$ at non-negative integers.

Poles of types 2-4 cancel by the same two mechanisms discussed above. Namely, types 2-3 cancel among Weyl-reflected pairs $J \to 2-J-d$ (see figure 8), by the mechanism detailed in eq. (C.5); the crux of the argument is that the Regge block $R^{(a,b)}$ is free of spurious poles. Type-4 poles multiply an explicit zero in the Lorentzian inversion formula, and so are generically absent in the sense discussed in the S-matrix case. Poles of type 5, however, do not cancel in the CFT case and must be retained. The result is the following asymptotic expansion in the Regge limit, including subleading powers: eq. (3.26). Its first line could have been easily guessed and is as in S-matrix Regge theory (see eq. (3.7)). The second line is a new contribution which, to our knowledge, has not been discussed explicitly before; it is important at subleading orders. The block $F_{J,\Delta}$ appearing there is defined similarly to eq. (3.9), with $\kappa \to \hat\kappa$ from eq. (3.18).

Eqs. (3.24) and (3.26) constitute the main results of this paper: an exact expression for correlators in Regge kinematics, and a corresponding all-order asymptotic expansion in the Regge limit. The latter will be confronted in the next section with explicit expressions in the fishnet model.

Formula for double-discontinuity: recovering Lorentzian inversion

As a first test of eq. (3.24), we will now verify that it is consistent with the Lorentzian inversion formula.
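For orientation, we will use the standard definition of the double discontinuity from the inversion-formula literature, which we take to coincide with eq. (3.29) below (a and b are the usual combinations of the external dimensions):

$$
\mathrm{dDisc}\,\mathcal{G}(z,\bar z) \;=\; \cos\!\big(\pi(a+b)\big)\,\mathcal{G}_{\mathrm{Eucl}}(z,\bar z)\;-\;\tfrac{1}{2}\,e^{i\pi(a+b)}\,\mathcal{G}^{\circlearrowleft}(z,\bar z)\;-\;\tfrac{1}{2}\,e^{-i\pi(a+b)}\,\mathcal{G}^{\circlearrowright}(z,\bar z)\,,
$$

where $\mathcal{G}^{\circlearrowleft}$ and $\mathcal{G}^{\circlearrowright}$ denote the continuations of $\bar z$ counterclockwise and clockwise around $\bar z = 1$.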
The Lorentzian inversion formula, eq. (3.28), extracts the OPE data from the double discontinuity, integrated against a measure µ; the double-discontinuity is defined in eq. (3.29) in terms of the single discontinuities of eq. (2.4), with $\overline{\mathrm{Disc}}$ denoting the opposite analytic continuation, $i \to -i$.

On the other hand, we just obtained an exact formula, (3.24), for the discontinuity of the correlator. One might think that the double discontinuity should vanish, since $\mathrm{dDisc} \propto \mathrm{Disc}_{23}\,\mathrm{Disc}_{14}$, which vanishes for any block; however, as stressed below eq. (3.24), the phase $e^{i\pi J}$ is only valid for the counterclockwise path. It is easy to see from the second form of dDisc that the dDisc is just the imaginary part of that phase, so that the $c^u$ term and the sine denominator simply cancel out, yielding eq. (3.30). This is the main result of this subsection. A similar formula was used recently in a paper involving one of the authors [8], but using only the $G^{(a,b)}_{1-J,1-\Delta}(z,\bar z)$ part of the Regge block (3.17); this was valid since that reference considered only the leading power. In contrast, eq. (3.30) is an exact representation. Following the method above eq. (3.26), eq. (3.30) can be used to obtain asymptotic expansions in the Regge limit. The difference between the formula with the R block and with the G block is a function for which the J contour can be deformed to the right, and whose purpose is to cancel the type-2 spurious poles on the left; thus eq. (3.30) with the R block yields the same expansion while remaining free of spurious poles.

As a check, it is tempting to view eq. (3.30) as the "forward" version of the Lorentzian inversion formula, with the G and R blocks dual to each other. This requires the pairing of eq. (3.31) to act as an orthogonality relation of sorts. In appendix D we compute the integral exactly in d = 2 and d = 4, using the fact that it factorizes into one-dimensional pairings, which we could compute exactly using the Casimir equation satisfied by the blocks. We find that in both dimensions the pairing is given by a single formula, eq. (3.32), which realizes the appropriate relation between the block and its shadow. It would be interesting to compute eq. (3.32) in other spacetime dimensions. Eq. (3.32) cannot be quite the full answer when $d \neq 2, 4$, since in those cases it does not transform correctly under either the Δ or the J shadow transformations.

Plugging the Regge limit of eq. (3.30) into the Lorentzian inversion formula, eq. (3.28), the pairing should in principle recover the OPE data, eq. (3.34). We are reduced to a single integral. Note that the two terms in the parenthesis cancel out when $\Delta' = \Delta$, so there are no singularities along the integration contour. To perform the integral, we notice that the top line is devoid of singularities in the right half-plane $\mathrm{Re}\,\Delta' > d/2$, since the coefficient $c^t(\Delta', J')$ is analytic between there and the unitarity bound, and the twist $\Delta' - J' = \Delta - J$ is held constant (and below the unitarity bound for sufficiently large J) during the integration. Similarly, using the shadow relation between $c^t(\Delta', J')$ and $c^t(d-\Delta', J')$, the second line is devoid of poles in the left half-plane in d = 2, 4. Starting from a contour slightly to the left, and deforming the contours in the two lines to the right and to the left, respectively, we thus pick up a single pole from the top line, recovering the expected OPE data.

We now turn to the fishnet correlators. One option is to analytically continue the known perturbative expressions to the Lorentzian regime directly; a nice technique for determining this continuation, based on properties of the HPL functions, is outlined in appendix B. On the other hand, the correlator (2.6) is already written in the spectral decomposition required for the analytic continuation of the conformal blocks.
The only challenge after applying the continuation is the navigation of the complex j and ν planes as the integration contours are deformed.

The zero-magnon correlator: u-channel ladders

The relevant equations for the zero-magnon correlator were outlined in section 2.2. We reintroduce the shadow block into (2.6) by exploiting the shadow symmetry and compare with equation (3.8). This allows us to extract all the OPE data, which can be inserted directly into equation (3.26) for the discontinuity. Since the u- and t-channel ladders were computed separately, we treat them separately in this section as well. The u-channel ladder contributes only to the $c^u$ part of eq. (3.26), due to the $(-1)^J$ factor of eq. (2.6). To compute the discontinuity of the u-channel ladders (eq. (2.6)), we first focus on the modified block $G^{(0,0)}_{1-\Delta,1-j}$ which enters eq. (3.26). As prescribed, we isolate the physical poles in the j-plane from the four solutions of eq. (2.9), which are labelled $J_i$ for $i = 1, \ldots, 4$ and illustrated in figure 9. These solutions correspond to the $j_n(\Delta)$'s discussed in section 3.3. After evaluating the j-residues in eq. (3.26) we are left with only the ν integral of eq. (4.2), in which factors from the analytic continuation and normalization have been absorbed; as before, $\Delta = 2 + i\nu$.

At lowest order in σ, only the trajectories with the positive square root, $J_1$ and $J_2$, will contribute, since $G_{1-\Delta,1-j} \sim \sigma^{1-j}$. The integral over ν can be evaluated by residues, though the pole and branch structure is significantly more complicated than in the Euclidean case. At subleading powers, all trajectories contribute. We expand the integrands of (4.2) to the desired order in σ (recall $\sigma^2 = z\bar z$) so that the ν integration becomes manageable. The initial contours for all trajectories run along the real ν-axis, and for $J_1$ and $J_2$ they pass below and above the poles at $\nu = -2\xi^2$ and $\nu = 2\xi^2$, respectively, since $\mathrm{Im}(\xi^2) < 0$. The contours are then deformed as illustrated in figure 10. Each of the integrands has a branch cut running between the two poles, where the $J_1$ and $J_2$ sheets intersect; these cuts cancel perfectly when the two integrands are added. The first step in the integration is to drag both contours to the right, picking up a residue at $\nu = 2\xi^2$. In the Regge limit we strive to decrease J (to make the integrand as small as possible), and so we must drag the $J_1$ contour back across its branch cut and onto the $J_2$ sheet, as drawn in panel 3 of figure 10. Since the labellings no longer refer to the original solutions, we relabel the integrands as $J_L$ on the left and $J_R$ on the right. The residues at this step contribute to $\mathrm{Disc}_{14} G_u(z,\bar z)$ at order σ and higher.

We also find that, within an $O(\xi^4)$ radius of each of the branch points and poles shown in figure 10, there are poles from the cosecant function in the ν-plane that must be included in the calculation. They are not shown in the plots because we generally expand in small ξ, at which point the cosecant poles coincide with the plotted solutions. Later in the calculation the poles from the cosecant become independent and must then be treated separately.

At leading power, the correlator is thus saturated by the contribution from the branch cut shown in the third column of figure 10, ranging over $\nu \in [-2\xi^2, 2\xi^2]$. This phenomenon was observed in refs. [15,16]. At subleading powers, we will obtain similar contributions from other intersections, as we now see.

Figure 11. The movement of contours along the $J_L$ trajectory past the branch cut around $\nu = i$.
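The bookkeeping of cosecant poles can be illustrated with a toy trajectory. The shape below is our stand-in, chosen only to reproduce the branch points quoted in the text ($\nu = \pm 2\xi^2$ at the J = 0 intersection and $\nu \approx \pm i$ near J = -1); the true $J_i(\nu)$ are fixed by eq. (2.9).

```python
import mpmath as mp

xi2 = mp.mpf("0.05")  # toy value of the coupling xi^2, for illustration only

# Toy pair of trajectories J(nu) = +/- sqrt(4*xi2^2 - nu^2), standing in for
# the solutions J_i(nu) of eq. (2.9).  The factor 1/sin(pi*J) in eq. (3.26)
# contributes a pole wherever a trajectory passes through integer spin:
for n in (0, -1, -2):
    nu = mp.sqrt(4*xi2**2 - n**2)   # solves J(nu) = n
    print(f"J = {n}: nu = +/- {nu}")
# J =  0 -> nu = +/- 2*xi^2   (endpoints of the leading branch cut)
# J = -1 -> nu ~ +/- i        (the subleading branch cut near nu = +/- i)
```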
As the contour is dragged further to the left along the $J_3$ solution, it picks up residues of the cosecant function whenever $J_3(\nu)$ is an integer. The next feature that the $J_L$ contour encounters is a branch cut running from $\nu = -i - 2\xi^2 - O(\xi^4)$ to $\nu = -i + 2\xi^2 + O(\xi^4)$. This branch is analogous to the first, but for the $J_2$ and $J_3$ intersection. The contribution at this location is the contour around the branch cut, as illustrated in figure 11; since $J = -1$ around these branches, these residues contribute to $\mathrm{Disc}_{14} G_u(z,\bar z)$ at order $\sigma^2$. The $J_L$ contour can now be dragged to imaginary infinity in the ν-plane, with the only obstructions being poles of the cosecant at integer j. The first of these is at $J_3 = -2$ and hence contributes at $\sigma^3$. Due to the $\nu \leftrightarrow -\nu$ symmetry of the integrand, the computation for the right-moving contour is equivalent up to signs from contour orientations.

Also at order $\sigma^3$, we must consider the $J_3$ and $J_4$ trajectories at $J = -2$ in the same way as the $J = 0$ intersection. This contour deformation is illustrated in figure 12. Additional contributions to the correlation function at $\sigma^4$ come from the poles of the cosecant function along the $J_3$ and $J_4$ trajectories, arising as all the contours are pulled to more negative J. These additional points are plotted in figure 13.

To summarize, the ν integration contour was deformed along the Regge trajectories while collecting contours around the poles and branch cuts, starting at the J = 0 intersection. We can now present the various contour-integration results up to $\sigma^3$ and to the first two orders of $\xi^2$. The corresponding locations on the ν-contours are indicated in figure 13. We computed these terms, along with additional residues at the poles of the cosecant function, up to orders $(\sigma^4, \xi^8)$ and $(\sigma^3, \xi^{12})$. We found perfect agreement with the direct HPL continuations; to illustrate the nontrivial interplay between the contributions, we record explicit formulas at lower order, the final piece being the $F^{(0,0)}_{J,\Delta}(z,\bar z)$ contribution. The sum of these expressions, A through E, is then found to equal the $\mathrm{Disc}_{14}$ of the fishnet correlation function given in equation (2.10), after the analytic continuation of harmonic polylogarithms detailed in appendix B (eq. (4.10)).

t-channel ladders and their double discontinuity

The Regge limit can also be considered for the t-channel ladders, for which the Euclidean OPE is given by the same expression as eq. (2.6) but without the overall $(-1)^J$ factor. The t-channel data is interesting because it is the only contributor to $\mathrm{dDisc}\, G(z,\bar z)$ (see eq. (3.30)). We checked that the Sommerfeld-Watson calculation matched the direct HPL continuation up to order $\sigma^2$ and $\xi^8$, this time including the $e^{i\pi J}$ factor in eq. (3.26). At this level, we would have detected any errors in the formula for the analytic continuation that would not have been visible in the u-channel case. The t-channel HPL continuations were obtained from the u-channel results by substituting $z \to z/(z-1)$ and $\bar z \to \bar z/(\bar z - 1)$, with appropriate phases.

In section 3 we presented equation (3.30) for the double discontinuity in terms of a double integral of the OPE data and the Regge block over spin and scaling dimension, which we also checked explicitly using the t-channel ladders of the fishnet model. The calculations proceed almost identically to those for $\mathrm{Disc}_{14}$; however, the cosecant function in spin is removed, along with several constant coefficients from the definitions.
For the $G_{1-\Delta,1-J}$ block, the analytic structure and ν integration follow the contour deformations drawn in figures 10, 11 and 12. The contributions from the poles at $\nu = \pm 2\xi^2$ vanish, and therefore the double discontinuity contains only terms at even powers of $\xi^2$. Moreover, there are no longer poles from a cosecant contributing at orders $\sigma^3$ and higher. In reference to figure 13, only terms from locations A-C contribute. As before, the remaining shadow blocks are irrelevant for the calculation. We verified that equation (3.30) was correct for the zero-magnon four-point function by comparing the direct integration of eq. (3.30) to the analytic continuations of the HPL functions appearing in the first line of (3.29). We computed the double discontinuity to orders $(\sigma^4, \xi^{12})$ in both ways and found perfect agreement, obtaining a closed-form expression for the double discontinuity in the Regge limit.

The one-magnon correlator

We can follow the main steps of the zero-magnon case to compute the Regge limit of the one-magnon four-point function. The interest is that the external operators now have unequal scaling dimensions, namely $\Delta_1 = \Delta_4 = 2$ and $\Delta_2 = \Delta_3 = 1$, which allows us to further verify the equations of section 3 ($a$ and $b$ vanished in the zero-magnon case and are now non-zero, $a = b = -1/2$). Interestingly, the Regge trajectories are significantly simpler! Moreover, when computing contributions to the Regge limit, there is only one branch cut to worry about (which contributes at leading order), and the only additional features leading to subleading corrections are the poles of the cosecant function attached to the $G_{1-J,1-\Delta}$ block and the sum over spins in the $F_{\Delta,J}$ block. Since the formulae for the analytic continuations are relevant at the leading order ($\sigma^2$ in this case) and first subleading order (now $\sigma^3$), we compute $\mathrm{Disc}_{14}$ of the one-magnon correlator only to order $\sigma^3$ and $\xi^6$. We again find agreement with known fishnet data once analytically continued to the Lorentzian regime and evaluated at high energy.

We first review the physics of the Euclidean one-magnon four-point function derived in [14] and sketched in figure 14. The correlator is given by an expression of the same form as in the zero-magnon case (again, with the trace implied); the sum over conformal blocks is slightly modified, with a new set of energy eigenvalues (eq. (4.14)).

Figure 14. An example of a fishnet ladder diagram evaluated in the computation of $G(z,\bar{z})$ with one-magnon operators. The four-point interaction sites have coupling $\xi^2$, so this diagram contributes at order $\xi^4$; the shading of the propagator indicates how it "winds". The points $x_1$ and $x_2$ "source" the propagator and account for the modified scaling dimensions of the operators inserted at these points (see [14] for additional diagrams and discussion).

A novelty is that $(-1)^J$ occurs both in the numerator and the denominator of eq. (4.13); as remarked in [14], its appearance in $H$ is necessary to cancel the spurious poles of the blocks against the spurious poles of the normalization coefficient $C$, which in this case involves the external operator dimensions [14] (eq. (4.15)). Just as in the zero-magnon case, the correlator can be expanded in the coupling (eq. (4.16)); we verified the expansions from [14] to order $\sigma^4$ and $\xi^8$.

We now wish to analytically continue our correlator (4.13) to the Lorentzian kinematic regime. To make contact with eq.
(3.21), one can work out a decomposition in which $c_{\mathrm{even}}$ and $c_{\mathrm{odd}}$ represent the OPE data of the correlator (4.13) with even and odd spin, respectively, that is, with $(-1)^J$ set to $\pm 1$. Unlike the zero-magnon case, both channels contribute to the discontinuity $\mathrm{Disc}_{14}\,G_1$. The Regge trajectories were computed by solving for the physical poles of the correlation function and are plotted in figure 15. We denote them $J^{\mathrm{even}}_{\pm}(\nu,\xi) = -1 \pm \sqrt{-\nu^2 + 4\xi^2}$ and $J^{\mathrm{odd}}_{\pm}(\nu,\xi) = -1 \pm \sqrt{-\nu^2 - 4\xi^2}$.

We can now plug our OPE data into our main equation (3.26), in which our Regge trajectories take the place of the $j_n(\nu)$'s, with the OPE data collected into the coefficients of eq. (4.21). The $e^{i\pi J}$ factor clearly distinguishes the t- and u-channel data.

The ν integration proceeds very much like the zero-magnon case. Since the Regge intercept is now $J = -1$ (see figure 15), the leading-order physics comes at order $\sigma^{1-J_{\max}} = \sigma^2$. The analytic structure at $J = -1$ is similar to the zero-magnon case, except that instead of poles at the ends of the branch cuts, we find only branch points. Thus the contribution at leading order comes only from the contour around the branch cut. For the $J^{\mathrm{even}}$ trajectories, this branch runs from $-2\xi^2$ to $2\xi^2$ and the integration contours are deformed similarly to figure 10. For the $J^{\mathrm{odd}}$ trajectories, the branch runs from $-2i\xi^2$ to $2i\xi^2$ and the contours look more like those in figure 11. The movement of these contours is plotted in figure 16.

At subleading orders in σ we have to take into account the poles of the cosecant function. Only the integrands involving the $J^{\mathrm{even}}$ solutions contribute at this order. Additional subleading terms come from the block whose integration contour wasn't deformed. As in the zero-magnon case, we simply have to add a coefficient to the integrand of the Euclidean case to compute the $F^{(-1/2,-1/2)}_{J,\Delta}$ contribution. We can now present the results of the one-magnon calculations at order $\sigma^3$ and the first two orders of ξ. We checked to orders $(\sigma^3, \xi^6)$ that these contributions matched the direct HPL continuations.

Conclusion

This paper extended the formalism of Regge theory applied to four-point correlation functions in conformal field theories. Our main result, eq. (3.24), provides an exact expression for the resummed OPE in a Lorentzian spacetime, which can be expanded at high energies according to eq. (3.26) to compute subleading power corrections in a given model. At leading power, the formula reproduces existing work from the conformal bootstrap literature. The key new ingredient is the Regge block $R$ defined in eq. (3.17), which makes it possible to deal seamlessly with subleading powers. We also obtained an exact representation for the expectation value of a double commutator, eq. (3.30).

The second goal of this paper was to check eq. (3.26) explicitly in conformal fishnet theory, a treasure trove of data. We found perfect agreement to high orders in the energy and the coupling in both the zero- and one-magnon four-point functions. As mentioned in the introduction, we expect this formula to be useful in situations that require going beyond the single-exchange approximation, such as those involving saturation, or for precision studies in theories where forward scattering is asymptotically transparent.

B Analytic continuation of harmonic polylogarithms

In these continuation formulas, the $C_{\{a_i\}}$'s denote constant contour integrals starting at the z-plane origin and looping counterclockwise around $z = 1$. These are illustrated in figure 17 and can be decomposed into integrals from $z = 0 \to z = 1$, around a contour at $z = 1$, and then from $z = 1 \to z = 0$.
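Before turning to the worked example that follows, the middle piece of this decomposition, the loop around $z = 1$, can be checked numerically for the weight-one HPL letter $1/(1-z)$. One counterclockwise loop returns $-2\pi i$, consistent with the $(-2\pi i)^n/n!$ rule for all-1 indices quoted below (a quick mpmath sketch):

```python
import mpmath as mp

# One counterclockwise loop around z = 1 of the HPL letter 1/(1 - z).
# Parametrize z(t) = 1 + r*exp(i*t), so dz = i*r*exp(i*t) dt.
r = mp.mpf("0.5")
loop = mp.quad(
    lambda t: (1 / (1 - (1 + r * mp.exp(1j * t)))) * (1j * r * mp.exp(1j * t)),
    [0, 2 * mp.pi],
)
print(loop)         # approximately -2*pi*i
print(-2j * mp.pi)  # the n = 1 instance of (-2*pi*i)**n / n!
```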
For example, in one such relation the MZV appears with its binary indices flipped (e.g., flipping $\mathrm{MZV}_{1,0,1}$ gives $\mathrm{MZV}_{0,1,0}$). The additional $(-1)^3$ term comes from a change of variables in the integral, and the $2\pi i$ comes from the contour integral. In general, these $C_{\{a_i\}}$ constants are calculated by summing over all divisions of the HPL integrals into the three regions of integration. The middle contour region returns $(-2\pi i)^n/n!$ if all the indices are 1's, and zero otherwise. The example above was calculated by considering the integration regions of $C_{1,0,1}$,

$C_{1,0,1} = C_{101||} + C_{10|1|} + C_{1|01|} + C_{10||1} + C_{|101|} + C_{1|0|1} + C_{1||01}$,

where the $|$'s denote the separated regions of integration, red indicates terms that vanish, and blue indicates terms that cancel against each other. The $|$ in the equation above symbolically denotes the division of the integral rather than an absolute value. By analytically continuing the HPL functions of z in the fishnet correlator expansion, this continuation technique provides results equivalent to the $\log(1-z)$ replacement while drastically reducing computation time.

C Froissart-Gribov formula and cancellation of spurious poles

In section 3.1, we claimed that all the spurious poles cancel against each other in the Sommerfeld-Watson resummation for flat-space scattering. In this appendix, we provide justification. To understand the necessary cancellations, we need a concrete expression for the coefficients $a^{t,u}_J$, known as the Froissart-Gribov formula. In brief (this is reviewed in [18,19], see also [22,28]), we may use the orthogonality of spherical harmonics to write partial wave coefficients (for integer J) as an integral over $[-1,1]$ against the polynomial solution $C_J$, which is equal to an integral over the discontinuity of the nonpolynomial solution $C^{\mathrm{pure}}$ introduced in eq. (3.5). This allows the contour to be deformed to pick up the discontinuities of $F(x)$, yielding the Froissart-Gribov integral (C.3). Here $\widetilde{C}_J(x)$ is just the hypergeometric function in eq. (A.5) (without Γ-factors), the contour on the second line encircles the cut of $C^{\mathrm{pure}}_{2-d-J}(x)$ counterclockwise, and on the third line we assumed that the singularities of $F(x)$ consist of a right cut for $x > x_0$ (and a left cut for $x < -x_0$, which gives $a^u_J$ in the decomposition (3.2)). A technical comment: the Froissart-Gribov integral (C.3) is valid for J large enough that we can ignore arcs at infinity, i.e. $\mathrm{Re}(J) > j_*$ if $F \sim x^{j_*}$. To the left of that, the analytic continuation of $a^t_J$ need not agree with the coefficients entering eq. (3.1), whence the subtraction terms in eq. (3.3).

We can now explain the two mechanisms responsible for spurious pole cancellation. We begin with the cancellation of the type-2 and type-3 poles defined above eq. (3.7). The Froissart-Gribov integral (C.3) produces singularities for two reasons: "physical" singularities from divergences of the integral, and "spurious" poles due to the integrand itself. It is helpful to denote the two solutions of the Gegenbauer equation, for a given value of J, as "small" and "large" depending on whether they vanish or grow as $x \to \infty$. Large solutions can have poles with residue proportional to the small solution. In the right half-plane the small solution is $C^{\mathrm{pure}}_{2-d-J}$, but the roles get exchanged when $\mathrm{Re}(J) = -\frac{d-2}{2}$, and so it acquires spurious left poles there. These are the spurious left-poles of $a_J$ called type-2 above eq. (3.7).
On the other hand, the right-poles of type-3 come from the combination $(P_J(x) - C^{\mathrm{pure}}_J(x))$, for which we deform the contour to the right in deriving eq. (3.7). Since $P_J$ is pole-free on the right, this combination has the same spurious poles as the large solution $-C^{\mathrm{pure}}_J(x)$. Thus, in the contour deformation argument leading to eq. (3.7), we see that all spurious poles come from the large solutions, with residues proportional to the corresponding small solutions.
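The "small"/"large" dichotomy can be made concrete in the Legendre case, the simplest relative of the Gegenbauer equation (an analogy, not the general-$d$ solutions themselves): $P_J$ grows at large $x$ while $Q_J$ decays, so $Q_J$ plays the role of the small solution on the right. A quick numerical look with mpmath:

```python
import mpmath as mp

# Legendre functions as a toy model of the "large"/"small" dichotomy:
# P_J(x) ~ x**J grows at large x, while Q_J(x) ~ x**(-J-1) decays.
# (type=3 selects the branch of Q that is real on the interval (1, inf).)
for x in (5, 50, 500):
    print(x, mp.legenp(2, 0, x), mp.legenq(2, 0, x, type=3))
```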
New Information in Naturalistic Data Is Also Signalled by Pitch Movement: An Analysis from Monolingual English/Spanish and Bilingual Spanish Speakers

Index: 1. Introduction. 1.1. General Introduction. 1.2. Goals and Characteristics of the Present Paper. 2. Methods. 2.1. Subjects. 2.2. Speech Samples

In communication, speakers and listeners need ways to highlight certain information and relegate other information to the background. They also need to keep track of what information they (think they) have already communicated to the listener, and of the listeners' (supposed) knowledge of topics and referents. This knowledge and its layout in the utterance is commonly referred to as information structure, i.e., the degree to which propositions and referents are given or new. All languages have 'chosen' different ways to encode such information structure, for instance by modifying the pitch or intensity of the vocal signal or the order of words in a sentence. In this study, we assess whether the use of pitch to signal new information holds in typologically different languages such as English and Spanish by analyzing three population groups: monolingual California English speakers, bilingual speakers of English and Spanish from California (Chicano Spanish), and monolingual Mexican Spanish speakers from Mexico City. Our study goes beyond previous work in several respects. First, most current work is based on sentences just read or elicited in response to highly standardized and often somewhat artificial stimuli whose generalizability to more naturalistic settings may be questionable. We opted instead to use semi-directed interviews whose more naturalistic setting provides data with a higher degree of authenticity. Second, in order to deal with the resulting higher degree of noise in the data as well as the inherent multifactoriality of the data, we use state-of-the-art statistical methods to explore our data, namely generalized linear mixed-effects modeling, to accommodate speaker- and lexically-specific variability. Despite the noisy data, we find that contour tones including H+L or L+H sequences signal new information, and that items encoding new information also exhibit proportionally longer stressed vowels than those encoding given information. We also find cross-dialectal variation between monolingual Mexican Spanish speakers on the one hand and monolingual English speakers and Chicanos on the other: Mexican Spanish speakers modify pitch contours less than monolingual English speakers, whereas the English patterns affect even the Spanish pronunciation of early bilinguals. Our findings, therefore, corroborate Gussenhoven's theory (2002) that some aspects of intonation are shared cross-linguistically (longer vowel length & higher pitch for new info), whereas others are encoded language-specifically and vary even across dialects (pitch excursion & the packaging of information structure).

INTRODUCTION

It is probably uncontroversial to assume that communication between individuals is perhaps the primary use to which language is put. To that end, speakers and listeners require ways to refer to propositions, referents, actions, and states of affairs, as well as to direct their attention to them, and ways to express their attitudes and emotions towards all these. Directing attention to referents etc. also requires speakers to keep track of and manipulate the knowledge/inferrability of topics and attentional states towards propositions, referents, etc. This knowledge/attentional state, i.e.
the degree to which propositions, referents, etc. are (assumed by the speaker to be) given or new, or in various intermediate states such as inferrable (e.g., not mentioned directly before, but inferrable from the discourse or the linguistic or non-linguistic context of the utterance), is commonly referred to as information structure. All languages have ways to encode information structure, for instance by modifying the pitch or intensity of the vocal signal. Since some of the meanings attributed to these physical ways of encoding information are also shared by other animals' means of communication, some correlates are deduced to be universal. For instance, what Ohala (1983) and Gussenhoven (2002) call the frequency code holds that higher pitch signals non-dominance because it correlates with smaller production organs; conversely, threatening calls are usually delivered at the lowest possible frequencies (indicating bigger body size). The other two factors that can have a universal 'interpretation' are the effort code, i.e. more effort in the movement of the larynx can avoid undershooting of targets, and therefore more muscular effort implies imparting importance to the information delivered; and the production code, whereby the physical correlates of intonation must interact with breathing, and therefore the beginnings of utterances are usually more energetic than their ends because they must correspond with exhalation phases. These physical factors can, however, also be grammaticalized in human language, and this gives rise to an interaction between universal interpretations of these acoustic correlates and language-specific ones.

Moreover, in human language, there may be different, purely grammatical ways of encoding such information for hearers (cf. Haviland & Clark 1974, Chafe 1976, Prince 1981), including lexical, morphological, syntactic, or word-ordering means (Féry 2007, Gussenhoven 2007), as well as the intonational means that are at the core of the present paper. Intonation, i.e., the non-lexical variation of spoken tones as manifested in a multi-layered complex of modulation of pitch (F0), intensity, and vowel duration, is manipulated by speakers and their speech communities and, as
part of the phonological system of a language, can become grammaticalized in the sense of developing conventionalized (intonational-)form-function pairings (Gussenhoven 2002). One such function is concerned with signaling information structure. Crucially, the indicators of different information-structural states typically also have functions unrelated to the packaging of information in an utterance (Féry 2007:162), such as the marking of sociolinguistically relevant information (Warren & Daly 2000, Daly & Warren 2001; Clopper & Smiljanic 2011), and are subject to constraints imposed on them by their physical expression (cf., e.g., Ohala 1983, Cruttenden 1997, Gussenhoven 2007). This means that the different functions of information-structural devices and their encoding give rise to potentially complex interactions with other grammatical or semantic components and their physical correlates, whose interpretation is language-/dialect-/variety-specific (cf. Gussenhoven 2002, 2007; Arvaniti & Garding 2007): different varieties can attribute a different semantic interpretation to the same tunes (sequences of H(igh) and L(ow) pitch on the different syllables of the intonation units) regardless of information-structural packaging. For instance, in peninsular Spanish, Italian, and English, a neutral statement ends in a L tone indicating the finality of the utterance and hence the boundary of the intonation unit (Ladd 1996, Martínez Celdrán & Fernández Planas 2003:185, D'Imperio et al. 2005), but in Mexican Spanish a neutral statement is more likely to end in a circumflex tone (Butragueño 2004).

GENERAL INTRODUCTION

A further complexity is caused by the scope of information structure, which is necessarily laid out in a multi-word, multi-sentence domain, since the speaker and listener can only keep track of whether an item belongs to given or new information over a textual chunk comprising at least several utterances. This means that processing information from a read text or spoken discourse requires a complex cognitive engagement of memory and domain-general attention strategies in order to extract meaning from a string of separate word units while they are being assembled into larger, multi-word constituents, and while the listener keeps track of the most important components of the conversation. This complex effort in processing linguistic information has been termed unification, an activity central to the language faculty recruiting frontal lobe structures, such as the left inferior frontal gyrus (Hagoort 2005).

Despite the complex interactions to which the different language-specific, grammatical and pragmatic functions of intonation give rise, some studies have underlined the cognitive importance of the prosodic markings of information structure for sentence processing (Cowles et al. 2007, Wang et al. 2009, van Leeuwen et al. 2014). The above-mentioned process of unification has been shown to be particularly sensitive to the encoding of information structure: ERP studies showed, for instance, that an N400 effect is obtained when an unexpected word is found in a reading task after a focusing device, such as clefting in English (Cowles et al. 2007, Wang et al. 2009), but also in studies of auditory processing of language (van Leeuwen et al.
2014:65 and references cited therein). These neurolinguistic studies show that whatever is mentioned in previous discourse/textual context creates expectations as to the information-structural status of a specific item that follows, and that more processing resources (as measured through ERPs) are required if there is a mismatch between the salience of the information presented and the expected way in which it is supposed to be encoded through pitch manipulation (van Leeuwen et al. 2014).

There is a considerable amount of literature on intonation in English (starting from Pierrehumbert 1980, Pierrehumbert & Hirschberg 1990, Ladd 1996, Gussenhoven 2004, Arvaniti & Garding 2007 and literature cited therein), but most of it concerns the meaning of intonational tunes and the realization of different types of focalization; less has been published on acoustic correlates of information-structural packaging. Moreover, there are different layers of emphasis that can be influenced by pitch: words both in English and Spanish have lexical stress, but an added level of prominence is provided by phrasal emphasis, or 'pitch accents,' i.e. pitch modifications on the phrasal or intonational unit level (Ladd 1996, Gussenhoven 2004). While pitch accents may be used as focusing devices or to mark prosodic boundaries, we are only interested in their information-structural use. Baumann (2005) found that, while a H pitch accent correlates with new information and deaccentuation (L) with given information (as in Pierrehumbert & Hirschberg 1990), items that were neither completely new nor completely given, a status which arguably covers most items in discourse, were marked by contour tones, i.e. sequences of H+L pitch movements 1. In this paper, therefore, we focus on the role that is played by these ever so common contour tones, and we provide further evidence for the use of pitch movement as an acoustic correlate of information structure in spoken discourse. We address typological questions by focusing on the distinctions between two languages that are supposed to privilege different means to set off new information from given information: English privileges pitch changes (Reinhart 1981, Pierrehumbert and Hirschberg 1990, Cruttenden 1997), whereas Spanish is supposed to prefer the manipulation of syntactic structures and word order (Zubizarreta 1998, Zubizarreta & Nava 2011). We also analyzed the speech of a group of Spanish-English early bilinguals speaking Spanish in order to assess the effects of bilingualism on the encoding of information structure in the Spanish of these speakers.

GOALS AND CHARACTERISTICS OF THE PRESENT PAPER

The present paper has two main goals. First, and as already mentioned briefly above, we explore (i) how speakers of two languages that are known to mark information structure differently behave, namely monolingual English (argued to use pitch movement) and monolingual Spanish (argued to use syntax and constituent order), and (ii) how bilingual speakers' Spanish compares to the monolingual speakers' (lack of) use of pitch movement. That is, we focus on how intonation, i.e. the suprasegmental melody of language and its acoustic correlates, affects the encoding of the information-structural status of items in discourse. Specifically, the main hypotheses we explore are the following: 1. New information is generally signaled by pitch movement on the relevant word; 2.
monolingual English speakers use pitch excursion more than monolingual Spanish speakers to signal new information (the latter may not do it at all); 3. balanced early bilingual speakers speaking Spanish may be influenced by English and use pitch excursion to signal new information more than their monolingual Spanish counterparts.

Second, as mentioned briefly above, we are also trying to advance the study of intonational correlates of information structure in two methodological ways: (i) by using much more naturalistic data than most prior work has, and (ii) by using more statistically sophisticated methods than has been customary in this area of research. With regard to these two methodological goals, it is necessary to bear in mind that a vast majority of studies in this area use constructed stimuli or passages, typically in reading or auditory tasks. Specifically, information-structural states can be simulated and/or targeted with manipulations of pitch and syntactic structure (this has been done in many existing studies on various languages) 2, as exemplified also by Daly & Warren (2001:88) or Röhr & Baumann (2010). However, such experimental designs expose speakers to overall unrepresentative stimuli: unrepresentative in the sense that the range of stimuli/situations that speakers/subjects are exposed to is by design (i) limited in various ways compared to the richness of naturalistic situations and (ii) characterized by (typically balanced) probability distributions that do not represent the typically skewed and Zipfian distributions of natural data.

Given these considerations, we decided to use a corpus of semi-directed interviews in different dialects of English and Spanish collected by the Phonetics Lab of the Spanish and Portuguese Department at the University of California, Santa Barbara (see Section 2 for details). A frequent counterargument to the use of (such more) naturalistic speech is that it is supposed to provide less robust data sets (Butragueño 2004, Clopper & Smiljanic 2011). However, not only can the same be true of the supposedly less noisy experimental conditions (see, for instance, the constructed sentences read by Röhr & Baumann's participants, which produced rather noisy data (2010:4)), but we are also using statistical methods that are well-suited to handle the kinds of interrelated and potentially noisy data that arise from (more) naturalistic samples. This in turn allows us to work with speech samples that are more attuned to regular language use (again, see Section 2 for details) as well as cover data from a larger number of speakers.

The remainder of the paper is structured as follows: Section 2 explains how our data were gathered, the characteristics of the participants, and how the data were annotated and analyzed both acoustically and statistically. Section 3 explains the results obtained by our statistical multifactorial analysis of the factors correlated with pitch movement. In the final section, Section 4, we discuss the results, the conclusions, and future research developments.

METHODS

In this section, we discuss how our data were gathered, prepared for analysis, and then analyzed using corpus-linguistic and statistical tools. Specifically, Section 2.1 outlines how the materials for analysis were gathered, Section 2.2 outlines the type of speech samples obtained, and Section 2.3 the statistical analysis with which we explore them.
SUBJECTS

Data from three different groups of subjects were culled for the present study. The three groups were all composed of 10 subjects each 3; all subjects were students at a major university of the area where they were interviewed; five female and five male participants were interviewed per group, with ages ranging between 20 and 25, and similar linguistic, socio-economic, cultural, and ethnic backgrounds within each group. They were asked to provide a minimum of biographical data that would ensure the correct ascription of the speaker to the relevant group, while maintaining the anonymous character of the data gathered in the interviews. Such biographical data allowed the researchers to establish whether the students were monolingual or bilingual (language spoken at home, languages spoken by the parents/caregivers, place of birth, and number of years of residence in California or Mexico City, respectively). The speakers were split into three rather homogeneous groups: a group of monolingual Southern California English speakers and one of monolingual Spanish speakers, who had never resided abroad for a period of more than 4 weeks and were born and raised either in Southern California or in Mexico City by monolingual parents/caregivers of English and Spanish, respectively. The third group was also composed of undergraduate university students born and raised in Southern California, but raised in Spanish-speaking households and encountering English either from birth, because both Spanish and English were spoken in the household, or as soon as they entered the US school system, in any case before the age of 8. Bilingual subjects were fluent in both English and Spanish at the time of recording.

Recordings of monolingual English and bilingual Spanish-English speakers from Southern California were made in the phonetics lab of the Spanish and Portuguese Dept. at UCSB, using a Gretch-Ken Industries professional sound booth (anechoic chamber with NIC rating of 34) with a Shure SM86 condenser vocal microphone, connected directly to an M-Audio Fast Track Pro interface, feeding into a computer with the program Audacity (http://audacity.sourceforge.net). Recordings in Mexico City were carried out in a silent room at U.N.A.M. university's main campus in Mexico City, using a portable MacBook computer, an M-Audio Microtrack 24/96 professional digital recorder with a dual electret microphone, and GarageBand software.

The participants reported having no known hearing or speech impediments, and they were all asked in writing to agree to the recordings in accordance with Human Subjects regulations both in the U.S.A. and abroad. Participation was entirely voluntary and unpaid. No distinction was detected in the extent to which individual participants or the different groups engaged in the tasks they were requested to perform.

SPEECH SAMPLES AND THEIR ANNOTATION

Speech samples culled from participating speakers were obtained with semi-directed interviews lasting between 10 and 20 minutes each. The participants were given free rein to tell anecdotes after receiving the same set of prompts. These included items such as 'Tell me about the scariest moment of your life,' 'Tell me what you remember about the first day of school/university,' 'Tell me the plot of your favorite movie,' 'Who was your favorite teacher in high school and why?,' 'What was your favorite subject in high school and why?'
etc. This allowed participants to speak in fully fledged utterances without interruptions, unless these were self-imposed pauses, in as naturalistic a way as possible according to their own speech patterns and rhythms while being recorded.

Although information status can be broken down into a more complex hierarchy than just the distinction of new vs. given, to simplify matters and to make sure we obtained a sufficient number of tokens from the naturalistic interviews, in this study we applied only this binary distinction to nouns in declarative sentences; questions and other types of syntactic frames where pitch could be used for different semantic purposes (e.g. narrow or contrastive focus) were excluded from the sample. The speech samples were analyzed manually using PRAAT software (Boersma & Weenink 2014); pitch was normalized visually where the program provided spurious values due to creakiness, or excluded where creakiness impeded measurements. The resulting 1043 data points were then annotated with regard to the following variables, which had proven useful in a pilot study (Miglio, Gries, & Harris 2014):

− PITCHMOVEMENT, the binary dependent variable: no (the annotated word exhibits no pitch movement/excursion over the word according to the rater's visual and aural perception) vs. yes (the annotated word exhibits pitch movement/excursion);
− SPEAKERTYPE: monoengl (for utterances by monolingual English speakers) vs. monospan (for utterances by monolingual Spanish speakers) vs. bispan (for Spanish utterances by bilingual speakers of Spanish and English);
− GIVENNESS: no (the referent of the word whose pitch movement was annotated was mentioned in the discourse for the first time) vs. yes (the referent of the word whose pitch movement was annotated was mentioned before in the discourse);
− PHRASEFINALITY: no (the annotated syllable is not in a phrase-final position) vs. yes (the annotated syllable is in a phrase-final position);
− SEX: the sex of the speaker, female vs. male;
− DURATION: the natural log of the duration of the stressed vowel in milliseconds;
− INTENSITY: the average intensity of the stressed vowel in decibels.

In addition, we also noted the specific speaker from whose speech the token was sampled as well as the specific word whose PITCHMOVEMENT level was studied, in order to include those as random effects in the regression model.

STATISTICAL EVALUATION

We then explored the degree to which the above predictors can predict whether speakers will employ pitch movement in their utterances by using generalized linear mixed-effects modeling (GLMEM). This kind of model has several attractive characteristics for the present study. First, it allows the researcher to study several predictors' effects as well as their interactions at the same time. That is to say, one avoids the potential risk of monofactorial studies, i.e. studies in which only one predictor is studied at a time, namely that (i) the studied predictor may be significant but only because it is correlated with another one, or (ii) the studied predictor might not have the same (significant) effect in all parts of the data (e.g., GIVENNESS may not have the same effect on PITCHMOVEMENT for all speaker types).
A second big advantage is that this kind of modeling approach makes it possible to ensure that statistical assumptions of standard regression modeling are not violated. In our data, as in most linguistic data sets in fact, every speaker contributes more than one data point, which means that the assumption that all data points are completely independent of each other is violated. The GLMEM approach, on the other hand, allows us to include in the analysis individual speakers' preferences to (not) use pitch movement, as well as account for the possibility that particular lexical items are more likely to come with a (dis)preference for pitch movement.

We undertook a model selection process in which we first fit a regression model that in Miglio, Gries, & Harris's (2014) pilot study had proven useful to distinguish uses of pitch movement in a part of the present data. In that model, we modeled PITCHMOVEMENT as a function of SPEAKERTYPE, GIVENNESS, DURATION, and the interaction SPEAKERTYPE:GIVENNESS, with varying intercepts for both speakers and lexical items. We then considered adding potential two-way interactions of fixed effects (using an exploratory significance level of p=0.1) and varying slopes for GIVENNESS to the regression model to achieve the best possible model fit while at the same time following Occam's razor.

OVERALL RESULTS AND MAIN EFFECTS

In this section, we discuss the results of the model selection process. That process was concluded quickly because only one additional predictor had to be added to the final model of Miglio, Gries, & Harris (2014), the interaction GIVENNESS:PHRASEFINALITY; no other fixed-effect predictors nor the varying slopes for GIVENNESS improved the model significantly.

The final model makes for an intermediately good fit: it achieves a classification accuracy of 72.1%, which, compared to the baseline of correct random choices of 51.2%, is highly significantly better (p binomial test < 10^-40); this degree of accuracy yielded a C-value of 0.78. However, the extreme variability of observational data also results in comparatively low amounts of explained variability: R² marginal (i.e. the 'correlation coefficient' that quantifies the effect of the fixed-effect predictors) is a mere 0.164; R² conditional (i.e. the 'correlation coefficient' that quantifies the effect of both fixed-effect predictors and the random effects) is also just 0.249. Overdispersion and collinearity did not pose any problems: p overdispersion > 0.98 and all VIFs < 2.85.

Table 1. Coefficients of the final mixed-effects regression model

As Figure 1 indicates, there is a clear effect such that the longer the duration of the stressed vowel of the word analyzed, the higher the predicted probability of pitch movement, an effect that is attested across all speaker types and givenness levels.

Figure 1. The effect of DURATION (logged) on the predicted probability of pitch movement (regression line with 95%-confidence band)

Figure 2 shows that female speakers make more use of pitch movement than men, again regardless of speaker types and givenness levels.

Figure 2. The effect of SEX on the predicted probability of pitch movement (with 95%-confidence intervals)

Figure 3 reflects the overall effect that GIVENNESS has on pitch movement: new information is more marked with pitch movement than given information. This effect is qualified in an interaction, however, which is why we revisit it again below.

Figure 3. The effect of GIVENNESS on the predicted probability of pitch movement (with 95%-confidence intervals)
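To make the modeling pipeline concrete, here is a minimal Python sketch under stated assumptions (the original analysis was run with standard mixed-effects software, and none of the code, data, or helper names below are the authors'): it builds a simulated table in the paper's variable scheme, fits a binomial mixed model with varying intercepts for speakers and words via statsmodels' variational-Bayes routine, and reproduces the reported accuracy-versus-baseline binomial test.

```python
import numpy as np
import pandas as pd
from scipy.stats import binomtest
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 600  # toy corpus size; the real data set has 1043 tokens

# Simulated annotated table in the paper's variable scheme (values invented).
data = pd.DataFrame({
    "SPEAKERTYPE": rng.choice(["monoengl", "monospan", "bispan"], n),
    "GIVENNESS": rng.choice(["no", "yes"], n),
    "PHRASEFINALITY": rng.choice(["no", "yes"], n),
    "SEX": rng.choice(["female", "male"], n),
    "DURATION": rng.normal(4.7, 0.4, n),              # log vowel duration (ms)
    "SPEAKER": rng.choice([f"s{i:02d}" for i in range(29)], n),
    "WORD": rng.choice([f"w{i:03d}" for i in range(120)], n),
})
# Binary outcome loosely shaped like the reported effects.
logit = (-3.5 + 0.8 * (data["GIVENNESS"] == "no") + 0.6 * data["DURATION"]
         + 0.4 * (data["SEX"] == "female"))
p = 1.0 / (1.0 + np.exp(-logit))
data["PITCHMOVEMENT"] = rng.binomial(1, p.to_numpy())

# Mixed binomial model: the paper's fixed effects, plus varying intercepts for
# SPEAKER and WORD (a variational-Bayes stand-in for a frequentist GLMEM).
model = BinomialBayesMixedGLM.from_formula(
    "PITCHMOVEMENT ~ SPEAKERTYPE * GIVENNESS + DURATION"
    " + GIVENNESS * PHRASEFINALITY + SEX",
    vc_formulas={"speaker": "0 + C(SPEAKER)", "word": "0 + C(WORD)"},
    data=data,
)
result = model.fit_vb()
print(result.summary())

# Reported accuracy check: 72.1% correct on 1043 tokens vs. a 51.2% baseline
# of correct random choices; the binomial p-value is < 1e-40, as in the paper.
print(binomtest(round(0.721 * 1043), n=1043, p=0.512, alternative="greater"))
```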
A similar situation arises with the effect of PHRASEFINALITY in Figure 4: its overall effect is that utterance-final phrases exhibit more pitch movement than non-final ones, but PHRASEFINALITY participates in a significant interaction with GIVENNESS and will thus be analyzed in more detail below.

Figure 4. The effect of PHRASEFINALITY on the predicted probability of pitch movement (with 95%-confidence intervals)

The final main effect to be discussed briefly is that of SPEAKERTYPE in Figure 5. As the planned contrasts in Table 1 indicate, the main findings are (i) that the speaker types form a cline from monolingual English speakers via bilingual Spanish/English speakers to monolingual Spanish speakers and (ii) that the bilingual speakers do not differ from the two kinds of monolingual speakers combined, but the monolingual English speakers use pitch movement significantly less than the monolingual Spanish speakers. However, this effect, too, will have to be revisited given the significant interaction with GIVENNESS that it participates in.

Figure 5. The effect of SPEAKERTYPE on the predicted probability of pitch movement (with 95%-confidence intervals)

INTERACTION EFFECTS

In addition to the above main effects, we also obtained two significant two-way interactions in the data, which qualify three of the above main-effects results.

Figure 6 represents the first of these two relevant interactions to be discussed here: SPEAKERTYPE:GIVENNESS, which qualifies the main effects of the predictors involved in it; both panels show the same results but perspectivized differently. While we saw above how GIVENNESS (given → new) results in a strong overall increase in the probability of pitch movement, we now see that this effect is different for the different speaker types. The left panel shows clearly that, for monolingual Spanish speakers at the top, the contrast of GIVENNESS has the least effect (resulting in a just about significant but still small adjusted pitch-movement probability difference of 10.7%). However, for both the monolingual English speakers and the bilinguals, the difference that GIVENNESS makes is much more pronounced: for the monolingual English speakers in the middle, GIVENNESS results in a highly significant pitch-movement probability difference of nearly 26%; for the bilingual Spanish speakers at the bottom, the difference is an even greater (and more significant) 30.5%.

Figure 6. The effect of SPEAKERTYPE:GIVENNESS on the predicted probability of pitch movement (with 95%-confidence intervals)

The right panel, on the other hand, makes it very obvious that the above results for SPEAKERTYPE shown in Figure 5 still hold, but only when the information embodied by the word analyzed is given: at the bottom of the right panel we again find the cline from monolingual English speakers via bilingual Spanish/English speakers to monolingual Spanish speakers; the top of the right panel shows that, with new information, the bilingual speakers no longer fall between the two monolingual speaker groups, because of the big difference in pitch movement in response to GIVENNESS.
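For readers translating between the log-odds scale on which such models are fit and the percentage differences quoted here, a minimal sketch (the coefficients below are made up for illustration, not the fitted values):

```python
from scipy.special import expit  # inverse-logit: maps log-odds to probability

# A "predicted-probability difference" is read off a logistic model by mapping
# the log-odds for each condition to the probability scale and subtracting.
# Hypothetical log-odds: -0.9 for given information, 0.4 for new information.
p_given, p_new = expit(-0.9), expit(0.4)
print(p_new - p_given)  # ~0.31, comparable in size to the ~30% reported above
```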
Finally, Figure 7 represents the final predictor, the interaction PHRASEFINALITY:GIVENNESS. Again, we saw above how both GIVENNESS (given → new) and PHRASEFINALITY (no → yes) result in a strong overall increase in the probability of pitch movement, but now we also find that neither effect is uniform: each is connected to where the word under scrutiny occurs (phrase-finally or non-phrase-finally) and to whether the referent of the word in question is given or new. When the word being analyzed is phrase-final, pitch movement is more likely overall, but the different levels of GIVENNESS make a very significant but smaller difference (16.6%); however, when the word is not phrase-final, pitch movement is less likely overall, but the different levels of GIVENNESS make a highly significant, much larger difference (25.2%).

Figure 7. The effect of PHRASEFINALITY:GIVENNESS on the predicted probability of pitch movement (with 95%-confidence intervals)

As a result of the analysis, we also obtained the regression model's adjustments for the lexical items whose pitch levels we measured as well as for all the speakers in our data. Space does not permit a more systematic exploration of these, but it is instructive to note that the adjustments made for the speakers are larger than those for the lexical items, which makes sense given that one would not expect words to have default pitch-movement characteristics associated with them, whereas it is easily conceivable that speakers differ more consistently in their use of pitch excursion. In addition, it is this aspect of the model that allows us to model each speaker's baseline tendency to use pitch movement separately and, therefore, to get results for all other predictors that are not tainted by the fact that all data points of a speaker may be characterized by idiosyncrasies.

On a final and more methodological note, in addition to the mixed-effects model discussed above, we also fit a standard binary logistic regression (BLR) model in order to compare both the classification accuracies and the coefficients of the model predictors. Figure 8 reveals the dangers of not using the right kind of regression modeling. All coefficients of both models are shown on the x-axis and the percentage by which the BLR model misestimates the coefficients is shown on the y-axis. On average, the coefficients of the binary logistic regression are off by 12.4%, but one coefficient, for one contrast of the main effect of SPEAKERTYPE, is off by more than 50%. Correspondingly, the overall classification accuracy of the BLR is about 10% worse than that of the more sophisticated mixed-effects model, which would also generalize better when applied to new speakers' data. Finally, the standard BLR approach would also suggest including another predictor in the model, SPEAKERTYPE:DURATION, whereas the mixed-effects model recognizes that this effect is better considered to consist of speaker idiosyncrasies rather than what speakers of the different types share. Thus, the mixed-effects modeling approach not only makes the predictors it flags as significant more precise and generalizable, it also protects researchers against falsely accepting effects as significant.
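The coefficient misestimation reported here can be reproduced in miniature: simulating binary responses with strong per-speaker baselines and then fitting a pooled logistic regression that ignores the grouping yields a slope attenuated relative to the true conditional effect (a toy simulation under assumed parameters, not the paper's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_speakers, n_obs, true_beta = 30, 200, 1.0

# Per-speaker random intercepts: idiosyncratic baseline pitch-movement rates.
speaker_fx = rng.normal(0.0, 1.5, size=n_speakers)
x = rng.normal(size=(n_speakers, n_obs))
logits = true_beta * x + speaker_fx[:, None]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

# Pooled logistic regression that ignores the speaker grouping: the slope
# estimate is systematically attenuated toward zero relative to true_beta.
pooled = sm.Logit(y.ravel(), sm.add_constant(x.ravel())).fit(disp=0)
print(pooled.params[1])  # typically ~0.75-0.85 instead of 1.0
```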
DISCUSSION AND CONCLUDING REMARKS

Given the results laid out in the previous section, the discussion will focus on three different aspects of the analysis: one related to the acoustic correlates of intonation in marking information structure (Section 4.1), one on the interaction between dialectal/linguistic variation and the encoding of information structure (Section 4.2), and finally one on gender and intonation (Section 4.3).

ACOUSTIC CORRELATES

In our data, as manifested in Figure 1, we found a clear correlation between stressed-vowel duration and pitch movement: the predicted probability of a raising or lowering of pitch was higher on longer vowels, across speaker types and regardless of information structure. This result confirms that segments with a particular structure, in this case longer vowels, are more likely targets for contour tones, i.e. complex tones made up of two separate targets, either a H+L sequence or a L+H sequence. This is unsurprising, since from both an articulatory and a perceptual point of view, longer duration provides more time to produce separate movements of the muscles and cartilages of the larynx in order to modify F0, as well as more time to perceive them as separate targets. Thus, this finding confirms what previous literature has remarked about the suitability of certain segments to bear tones; especially for complex tones such as those exhibiting more than one target (i.e. those with pitch movement), longer vowels are better suited than shorter ones, since "contour tone bearing ability is […] crucially dependent on duration" (Zhang 2001:33).

One of our initial hypotheses was that new information would be marked by a complex pitch movement (based on the high frequency of H*+L sequences found in Baumann's (2005) study). As mentioned in the introduction, while English tends to use pitch modification to signal new information (Cruttenden 1997, Vallduví 1992), Spanish is supposed to use its more flexible word order for the same purpose (Suñer 1982, Zubizarreta 1998, Zubizarreta & Nava 2011 and literature cited therein). However, as we can already see from Figure 3, which embodies a main effect found in the overall data, we noticed that in naturalistic speech a contour tone is in fact more likely to mark new information than given/old information for all three speaker groups, i.e. regardless of language or dialect spoken. This is remarkable in and of itself, since at least some of the Romance languages, the ones that Vallduví (1992) terms 'non-plastic,' such as Spanish and Italian, are supposed to manipulate word order rather than use pitch modulations for this purpose. Yet even the final main effect (of SPEAKERTYPE) seems to contradict the predictions found in the literature 4, since monolingual Mexican Spanish speakers are shown to be most likely to use pitch movement, compared to bilingual and monolingual English speakers.
As mentioned in the methodology section, we controlled for pitch accents unrelated to information structure by eliminating utterances with narrow and contrastive focus from the data. Another area where pitch accents are likely to appear unrelated to information structure is at the end of the utterance, since they can mark boundary tones in different languages such as English and German (Baumann 2005:3). In fact, we do find an interaction between the prosodic packaging of information structure in our data and phrase-final position, as shown in Figure 7. When the word analyzed is in phrase-final position, it is more likely to show pitch movement overall, and the fact that this did not also interact with SPEAKERTYPE shows that this effect is found for all three speaker groups, i.e. regardless of language spoken. This is, in a sense, not surprising, given that both in English and Spanish the main pitch accent in an utterance (also called phrasal or nuclear stress) can fall on the rightmost content word, i.e. towards the right boundary of the sentence; in Spanish this is strictly enforced, whereas English has more positions where nuclear stress can fall (Zubizarreta & Nava 2011:652). However, as mentioned in Section 3 above, what is important here is that there is an interaction with the expression of information structure: we see in fact that pitch movement is a much more important resource to mark information as new (as opposed to given) when the word is not in utterance-final position, where the givenness difference amounts to a 25.2% difference in the predicted probability of pitch movement, as opposed to the smaller corresponding difference due to GIVENNESS (16.6%) when the word is in final position. Since pitch movement is also used across languages to mark boundary tones, this is an important finding, because our naturalistic data show that pitch movement is a discriminating factor especially when the word is not found in utterance-final position. This also confirms that our study corroborates hypotheses that different acoustic correlates of intonation are used for different purposes in language (Gussenhoven 2002, Gordon and Nafi 2012): we show that pitch movement is a correlate of information-structural marking at least across the languages and varieties we analyzed here.
INTERACTION BETWEEN INFORMATION STRUCTURE AND DIALECTAL/LINGUISTIC VARIATION

As we have seen above, the role of GIVENNESS for PITCHMOVEMENT is qualified by interactions, as shown for instance in Figure 6. The left panel specifically corroborates the existing literature in showing that, while monolingual Spanish speakers do use pitch movement, they exhibit the smallest difference in the likelihood of using pitch movement to distinguish given and new information in comparison with the other two groups. For monolingual Spanish speakers, in fact, that difference only accounts for approximately 11% of predicted probability, while for the other two groups (English monolinguals and English-Spanish bilinguals), the distinction made through pitch movement accounts for 26% and 30.5%, respectively. This is compatible with the view of Spanish, as embodied by the monolingual population, as a language that has other mechanisms at its disposal, such as word order, to foreground new information, and therefore uses pitch sparingly for this purpose. English, on the other hand, has a more fixed word order, and therefore uses pitch movement more in order to distinguish between new and given information in spoken discourse. The right-hand panel in Figure 6 is also revealing insofar as it still clearly shows the main effect whereby new information is characterized by pitch movement for all SPEAKERTYPE groups (top part of the right panel); moreover, for given information (bottom part of the right panel), we find the bilinguals neatly nested between the monolingual English and the monolingual Spanish speakers, as expected in our third hypothesis above, showing that English bilingualism does influence Chicano linguistic behavior even when they are speaking Spanish. However, in using pitch movement to encode new information, the bilingual group is no longer wedged between the monolingual Spanish and monolingual English groups (top part of the right panel), but goes 'over the top' in exploiting intonation for information-structural purposes, showing that Chicanos use pitch movement more than either monolingual group in signaling new information. This shows that intonational packaging of information structure is a language-specific area of linguistic behavior, and one that is not easily mastered natively even by early bilinguals. Such difficulties are also corroborated by L2 studies: Zubizarreta & Nava (2011:667), for instance, find that in grammatically comparable contexts 5, native speakers of Spanish find it hard to acquire English pitch modulations that encode information structure in broad-focus contexts, but not necessarily those that encode contrastive focus.

Although Chicano Spanish is a peculiar variety of Spanish in the sense that it is a contact variety spoken by early bilinguals, our findings confirm that research cannot avoid distinguishing among different dialectal varieties, especially for languages that have many millions of speakers scattered across vast surfaces of the globe, such as English and Spanish. This is the kind of criticism that Arvaniti & Garding (2007:5) level at many intonation studies, namely that a language such as English is "treated as a homogenous language when in fact in most cases the research involved speakers of quite distinct varieties." 6
The same could certainly be said for Spanish (cf. Prieto & Roseano's 2010 careful distinction of different varieties of Spanish), where research exploring the acoustic correlates of intonation is generally relatively scarce; as far as we know, in fact, no study has been published on acoustic correlates of intonation and information structure in Spanish before this one. However, what has been published on general intonation in different dialects of Spanish points to considerable distinctions in the semantic interpretation of prosodic cues depending on the dialect analyzed (Butragueño 2004, and the articles collected in Prieto & Roseano 2010). We do not wish to maintain, therefore, that Mexican Spanish is representative of the use of intonation in encoding information structure for all, or even just for any other, non-contact variety of Spanish, such as, say, Iberian Spanish. We chose Mexican Spanish because the bilingual Chicano speakers from California are most likely to speak a dialect of Spanish closely related to Mexican Spanish (Parodi 2011).

GENDER AND INTONATION

Finally, our naturalistic speech also provides new data to corroborate previous findings in the literature related to gender and pitch movement, as shown in Figure 2. Females, regardless of language spoken and of information-structure packaging, are always more likely than males to use pitch modulations. This finding is in tune with Daly & Warren's findings (2001:92, also Warren and Daly 2000) that women use more dynamic pitch than males in their New Zealand English study. They found that this was especially true of their story-telling task 7 (rather than the read sentences), and this may well be why we also find it in our data, since a semi-directed interview can be considered akin to a story-telling task, where participants relate anecdotes from their past. There are still relatively few studies on acoustic correlates of discourse and gender identity (see Clopper & Smiljanic 2011:238 and literature cited therein) that go beyond evolutionary observations (Ohala 1983, Gussenhoven 2002). Our findings corroborate what has often been considered a stereotype, which has nonetheless been hard to substantiate with actual data: i.e. that women's speech exhibits more swooping pitch changes, a truism no doubt related to women expressing their emotions more patently than men. Some early studies had failed to produce actual data proving that there was any truth in the stereotype (Henton 1989, 1995), whereas Daly & Warren (2001:85) did find more pitch dynamism in (New Zealand) female speech compared to the prosodically 'flatter' speech of men, and the experiments discussed by Gussenhoven (2002, section 3.1) also show that there is some widespread expectation of wider pitch ranges in female than in male speech. Whatever the sociolinguistic interpretation of a more dynamic pitch use in female speech may be, we do find that women are more likely to use contour tones, i.e.
pitch movement, regardless of language variety or information-structural concerns. Our study, therefore, also contributes new data for the study of prosody as a sociolinguistic marker of gender identity. Impressionistically, however, we can say that, from the simple coding of the data, many women are simply more engaged story-tellers, carefully evaluating the character of the information they communicate and imparting the value they themselves attribute to it, using pitch dynamism as a performative device to alert the listener to the importance of the various parts of the utterance. This evaluation of female pitch dynamism, while admittedly impressionistic, seems, however, to accord well with some findings discussed by Gussenhoven for a Bantu language (2002, section 3.1), where a compressed pitch range indicates withdrawal of information.

IMPLICATIONS AND WHERE TO GO FROM HERE

There is still a lot to be done in researching intonation, both regarding acoustic correlates of information structure and regarding the interpretation of different tunes in various languages and dialects, particularly in Spanish. What has been published on general intonation in different dialects of Spanish points to considerable distinctions in the semantic interpretation of prosodic cues depending on the dialect analyzed (Butragueño 2004, and the articles collected in Prieto & Roseano 2010), which makes instrumental studies of the prosodic characteristics of different Spanish dialects such as ours all the more timely.

With the use of sophisticated statistical modeling such as that used in this paper, it is possible to use naturalistic data, rather than ad-hoc read sentences or artificial stimuli, in order to study intonation and its various acoustic correlates. This type of study has wide-reaching implications not only for phonetics and phonology, but also for the study of the effects of language dominance and education on the speech of early bilinguals, for the sociolinguistic analysis of gender identity, for different textual and discourse registers, and for performativity in language.

The study of bilinguals in this paper yielded interesting and unexpected conclusions as to the use of pitch movement in the Spanish of Chicano speakers; an analysis of their English is what we will carry out next, in order to compare their use of intonation between both languages and to compare their English intonation to that of their monolingual counterparts. The analysis of further dialects of Spanish in relation to information-structural packaging also promises to yield interesting results, and we have a corpus of central Iberian Spanish semi-directed interviews that we intend to analyze for this purpose.

Finally, a further study of correlates of stress and intonation, such as pitch range, intensity, and duration, in different dialects of English and Spanish would certainly provide much-needed materials and analysis to improve our understanding of universal and language-specific phonetic features overall.

NOTES

1. Pitch contours or 'tunes,' i.e. sequences of H+L, L+H relative pitch frequencies, are referred to in this paper as 'pitch movement.' Flat tones, H or L, are sometimes subsumed under the 'lack of pitch movement.'
2. Dutch: van Leeuwen (2014); Dutch and Italian: Swerts et al.
(2002); German: Röhr & Baumann (2005); for intonation in Spanish see Butragueño (2004), Herrera & Butragueño (2003), and Prieto & Roseano (2010).

3. One monolingual English student had to be excluded because of technical problems with the recording.

4. However, some authors do talk about these tendencies in non-categorical ways: "Germanic languages and, to a lesser extent, Romance languages use pitch accents to mark focused parts of sentences" (Gussenhoven 2002, section 3.3).

5. They explored the use of prosody in wide focus clauses in signalling the distinction between sentences that distinguish between topic and comment and eventive, topicless clauses in native Spanish speakers learning English as L2.

6. It should also be pointed out that most studies of intonation use a small number of speakers from which to analyze data, despite the fact that interspeaker variability is well known to be problematic. Studies such as Arvaniti & Garding (2007) for English use 13 speakers; the studies they mention in their article vary from an undefined number of speakers, to two, to five (2007: 5); Sluijter & van Heuven (1996) use 6 speakers and Clopper & Smiljanic (2011) use 10 speakers, all considerably smaller numbers than those analyzed in our study, which takes data from 29 speakers (10 for monolingual Spanish, 10 for bilingual Spanish, and 9 for monolingual southern California English).

7. Although they call it story-telling in Table 1 (p. 92), it is really a story-reading task, as explained in the section on materials on page 90.
Presumptive meningoencephalitis secondary to extension of otitis media/interna caused by Streptococcus equi subspecies zooepidemicus in a cat

A 5-year-old castrated male domestic longhair cat was presented with neurological signs consistent with a central vestibular lesion and left Horner's syndrome. Computed tomography images revealed hyperattenuating, moderately contrast-enhancing material within the left tympanic bulla, most consistent with left otitis media/interna. Marked neutrophilic pleocytosis was identified on cerebrospinal fluid analysis. Streptococcus equi subspecies zooepidemicus (SEZ) was isolated from the cerebrospinal fluid. Intracranial extension of otitis media/interna is relatively infrequent in small animals. There are no reports of otitis media/interna caused by SEZ in dogs or cats. This is the first report of otitis media/interna and presumptive secondary meningoencephalitis caused by SEZ in a cat.

A 5-year-old castrated male domestic longhair cat was presented to the Ohio State University Veterinary Medical Center for evaluation of a 12-h history of falling over. The cat lived exclusively indoors and was current on vaccines. Physical and neurological examination revealed dull mentation, a left-sided head tilt and vestibular ataxia, characterized by falling to the left when walking. Cranial nerve examination revealed miosis, ptosis, enophthalmos and third eyelid protrusion of the left eye (OS), consistent with complete Horner's syndrome. There was also ventral strabismus OS, non-positional rotary nystagmus of both eyes with fast phase to the right, and absent physiologic nystagmus when turning the head to the left. The remainder of the cranial nerve examination was unremarkable. Postural reactions were mildly delayed in the right thoracic and pelvic limbs. Spinal reflexes and cutaneous trunci were normal. Spinal palpation revealed mild lumbar discomfort. Based on these findings, a multifocal neurolocalization was suspected. A left vestibular lesion with both peripheral and central involvement was considered likely. The presence of concurrent Horner's syndrome OS and left-sided vestibular signs suggested involvement of the left middle/inner ear, and the presence of dull mentation was supportive of central disease. Also, extension across midline with involvement of the right side of the brainstem was considered possible, due to the presence of right-sided postural reaction deficits.
Differential diagnoses included otitis media/interna (OMI), nasopharyngeal polyp, neoplasia, infectious diseases (toxoplasmosis, cryptococcosis, feline infectious peritonitis (FIP), bacterial) and cerebrovascular event. A complete blood count showed leukocytosis (15.8 × 10⁹/l, reference interval (RI) 4.0–14.5 × 10⁹/l), mature neutrophilia (13.1 × 10⁹/l, RI 3–9.2 × 10⁹/l) and monocytosis (1.3 × 10⁹/l, RI 0–0.5 × 10⁹/l), consistent with an inflammatory leukogram. A biochemical profile revealed increased total protein (7.9 g/dl, RI 5.6–7.6 g/dl) due to hyperglobulinemia (4.8 g/dl, RI 3.1–4.1 g/dl). Enzyme-linked immunosorbent assays (ELISAs) (Idexx, Westbrook, ME, USA) for feline leukemia virus antigen and antibody against feline immunodeficiency virus were negative. Non-invasive Doppler blood pressure and thoracic radiographs were normal. Ophthalmologic examination revealed a normal fundus. Otoscopic examination revealed intact, translucent tympanic membranes bilaterally. Cytology of the external ear canal was normal. The cat was anesthetized for computed tomography (CT) (GE Lightspeed Ultra) of the brain and cerebrospinal fluid (CSF) collection. Anesthesia consisted of intramuscular dexmedetomidine (20 µg/kg), induction with intravenous propofol (2 mg/kg) and maintenance with inhaled isoflurane and oxygen using mechanical ventilation. The CT study consisted of 1.3 mm contiguous transverse acquisitions, pre- and post-contrast administration (iohexol, Omnipaque 240 mg/ml; dose: 2 ml/kg IV). Hyperattenuating material was noted completely filling the left tympanic bulla, with mild contrast enhancement after iohexol administration (Fig 1). The left retropharyngeal lymph node was mildly enlarged (Fig 1C). No brain parenchyma abnormalities were noted. However, CT has inherent limitations when imaging soft tissues, so the lack of brain parenchyma abnormalities in this case may have been related to the limitations of this imaging modality. 1 The differentials considered were left OMI or a polyp, with neoplasia considered less likely based on the presence of only mild contrast enhancement and no lytic lesions. CSF was collected from the cerebellomedullary cistern. The fluid was colorless and slightly hazy, with a total protein of 19.9 mg/dl (RI < 25 mg/dl), a white blood cell (WBC) count of 1368 cells/µl (RI < 5 cells/µl) and a red blood cell (RBC) count of 99/µl (RI < 5 cells/µl). Cytology revealed 61% non-degenerate neutrophils, 27% large mononuclear cells and 12% lymphocytes (Fig 2). The large mononuclear cells were vacuolated and interpreted as reactive. The lymphocytes were small and well differentiated. No evidence of hemosiderin, erythrophagia, etiologic agents or neoplastic cells was seen. The findings were consistent with a neutrophilic pleocytosis with mild blood contamination. Based on the combination of the CT images and CSF results, bacterial meningoencephalitis secondary to extension of OMI was considered the most likely presumptive diagnosis. Other differentials included cryptococcosis, viral infection (feline infectious peritonitis) and toxoplasmosis. Cerebrospinal fluid was cultured on trypticase soy agar with 5% sheep's blood (TSAII, Becton Dickinson, NJ, USA) and incubated at 35 °C in 5% CO2; reduced thioglycolate broth (Becton Dickinson, NJ, USA) was inoculated to recover fastidious organisms and anaerobes. A polymerase chain reaction (PCR) was performed for Toxoplasma gondii, feline coronavirus and feline leukemia virus.
Streptococcus equi subspecies zooepidemicus (SEZ) was isolated from the CSF in high numbers in pure culture. The organism was speciated using Lancefield grouping (Streptocard, Becton Dickinson, NJ, USA) and conventional biochemicals (API-20 STREP System, bioMérieux, MO, USA). PCR results were negative. Cryptococcus species antigen enzyme immunoassay was negative. The cat recovered uneventfully from anesthesia. Treatment was initiated with ampicillin–sulbactam (30 mg/kg IV q 8 h, Unasyn; Pfizer), enrofloxacin (5 mg/kg IV q 24 h, Baytril; Bayer), dexamethasone sodium phosphate (0.15 mg/kg IV q 24 h for 2 days) and famotidine (0.5 mg/kg PO q 12 h), pending culture results. The day after presentation the cat developed severe hypersensitivity to light, touch and sound, and self-inflicted multiple bite wounds to his limbs. The cat was started on a dexmedetomidine constant rate infusion (3 µg/kg/h IV) for sedation, which was slowly weaned off over the next 8 h. The intravenous catheter was removed on the third day of hospitalization because of poor patient tolerance. After obtaining the culture results, the cat was started on trimethoprim–sulfamethoxazole (TMS–SMZ, 15 mg/kg PO q 12 h) as treatment for the meningoencephalitis, and amoxicillin–clavulanic acid (62.5 mg PO q 12 h, Clavamox; Pfizer) to prevent infection from the self-inflicted bite wounds. The neurologic status of the patient improved gradually. A left ventral bulla osteotomy was performed 5 days after presentation. A large amount of purulent material was removed from the bulla. Histopathology revealed marked suppurative and lymphoplasmacytic otitis media with no signs of a polyp. No etiologic agents were noted but the inflammation was suggestive of a chronic bacterial infection. Aerobic, anaerobic and Mycoplasma species cultures of the material removed from the bulla were negative. A second cerebellomedullary cistern CSF sample was obtained at the time of surgery. The fluid was colorless and clear, with a total protein of 8.0 mg/dl (RI < 25 mg/dl), a WBC count of 7 cells/µl (RI < 5 cells/µl) and an RBC count of 3/µl (RI < 5 cells/µl). Cytology showed 2% non-degenerate neutrophils, 3% large mononuclear cells and 95% lymphocytes. The results of the second CSF sample showed marked improvement (7 versus 1368 WBC/µl) in the magnitude of the pleocytosis. The cat recovered uneventfully from surgery. A severe left Horner's syndrome was noted postoperatively, which, along with the rest of the neurological signs, gradually improved and resolved over the following weeks. The total duration of TMS–SMZ therapy was 8 weeks. Upon last contact with the owners 8 months after diagnosis, the cat remained neurologically normal. Central nervous system (CNS) complications of OMI have been recognized in animals, although they are considered uncommon. 1–3 In people, the incidence of these complications has decreased with the wider availability of antibiotics; however, they are still associated with mortality rates ranging from 5 to 31%. 4–7 The case reported here made a full recovery. A variety of organisms have been isolated from the few feline cases of intracranial extension of OMI reported to date, including Pasteurella multocida, Escherichia coli, Enterococcus species, Staphylococcus aureus, Mycoplasma species, and Streptococcus canis. 1 Streptococcus equi subsp zooepidemicus is considered a commensal organism of the mucous membranes and skin of various animals, notably horses.
8–11 It frequently acts opportunistically in horses, causing respiratory infections, wound infections, endometritis, and abortion. 8,10 This bacterium is not regarded as a component of the commensal flora of either dogs or cats. 10,12 Over the last few years, SEZ has been reported as an emerging pathogen in dogs, associated with severe hemorrhagic pneumonia in shelter dogs. 9,13 Only recently, two reports have documented infections caused by SEZ in cats. 10,11 One report described an outbreak of respiratory disease in a cattery. 11 Four of the cats necropsied showed signs of pyogranulomatous meningoencephalitis. 11 The other report described two cases of rhinitis and meningitis caused by SEZ in two cats housed in separate shelters. 10 Neither of the two cats nor their attendants had any known exposure to horses. 10 In our case, no exposure to horses or farm animals was identified upon questioning the owner. As there were no clinical signs or history of otitis externa, it is likely that the route of infection into the middle/inner ear was via the oral mucosa and/or the nasopharynx. The negative bacterial culture from the tympanic bulla was likely due to the 5 days of antimicrobial therapy given to the cat between the original CSF collection and the bulla osteotomy. Infection with SEZ is a rare cause of meningitis in humans, with only 22 cases reported so far. 14,15 The majority of these cases were caused by contact with animals (mostly horses) or ingestion of unpasteurized dairy products. The reported mortality rate was 24%. 14 In this case, antimicrobial therapy using TMS–SMZ was elected. This is a bactericidal drug that penetrates both normal and inflamed meninges and achieves therapeutic levels in the CSF. 16 Two doses of intravenous dexamethasone were also administered, starting with the first dose of antimicrobials. In spite of the controversy regarding the use of steroids in bacterial meningitis, 1,2,17 we elected to use them in our patient following the most current recommendations for the treatment of acute bacterial meningitis in people. 17 A recent meta-analysis, which reviewed 24 randomized controlled trials of corticosteroid use for acute bacterial meningitis in people, revealed a lower rate of short-term neurologic sequelae and a trend toward lower mortality in corticosteroid-treated adults in high-income countries. 17 To the authors' knowledge, this is the first report of OMI and secondary meningoencephalitis caused by SEZ in a cat. Clinicians should be aware of the rare zoonotic disease potential of this agent, which seems to be an emerging pathogen in small animal companion species.
microRNA-mediated noise processing in cells: a fight or a game?

In the past decades microRNAs (miRNAs) have attracted much attention from researchers at the interface between the life and theoretical sciences for their involvement in post-transcriptional regulation and related diseases. Thanks to increasingly sophisticated experimental techniques, the role of miRNAs as "noise processing units" has been further elucidated, and two main modes of miRNA noise control have emerged from combinations of theoretical and experimental studies. While miRNAs were initially thought to buffer gene expression noise, it has recently been suggested that they could also increase the cell-to-cell variability of their targets. In this Mini Review, we focus on the role of miRNAs in noise processing and on the inference of the parameters defined by the related theoretical modelling.

Introduction

To carry out all vital functions, cells must express proteins with high precision in both timing and protein numbers. Protein production, i.e. gene expression, results from complex interactions among a high variety of molecules, including transcription factors, genes, short and long RNAs, and ribosomes. Due to the inherent stochasticity of chemical reactions, gene expression is naturally highly noisy, leading to a wide range of possible values of produced proteins. Contrary to expectations, Poissonian distributions are not the standard experimental outcome for most genes; larger fluctuations in the number of transcripts are instead observed. Stem cells, among others, show clear examples of this. Analysis of the expression variability landscape in pluripotent stem cells (PSCs) indeed shows that several gene transcripts display lognormal or bimodal distributions across the population [1]. In spite of the apparent uniformity of PSCs, a cell population can even contain rare subpopulations expressing markers of different cell lineages. Analysis of gene expression data collected upon perturbation of single PSCs allowed the identification of the main variability axes: the key genes that generate this heterogeneity turned out to be the main pluripotency transcription factors (PFs), which are considered to play a primary role in maintaining the pluripotency state of a cell. Indeed, the key PFs were observed to fluctuate in a reciprocally correlated manner throughout the population, and the regulatory relationships amongst them were shown to adopt different configurations depending on the cell state. Similarly, mouse embryonic stem cells (mESCs) appear to be heterogeneous in their gene expression profile as well [2]. This heterogeneity may indicate either reversible fluctuations or already ongoing differentiation processes. In fact, during differentiation, gene-expression correlations displayed important changes due to PFs switching off in an alternating way, thus allowing the appearance of novel cell states. Gene-expression variability might therefore play an essential role in fundamental biological processes such as cell fate decision. Large efforts in the past few years have been dedicated to identifying the mechanisms that generate these fluctuations. Stochasticity comes as an inherent feature of small-number probabilistic phenomena, thus the interactions between small amounts of molecules, such as the reactions underlying gene expression, are intrinsically noisy. Besides, mechanisms of large-fluctuation generation can also be attributed to large variations in the state of gene-specific promoters, acting in a switch mode [3].
When in the on-state, the promoters lead to bursts in gene expression [4], consequently increasing the variance of the final protein product. Nevertheless, gene-expression noise can also arise from factors external to the gene that indirectly affect its function. Cell-to-cell variability such as fluctuations in the environment (e.g., thermal fluctuations), ribosome abundance and ATP availability are further sources of noise. Given this background, noise in gene expression is normally classified into intrinsic noise, due to the inherent stochasticity of transcription, translation and decay processes, and extrinsic noise, due to any external fluctuation that indirectly leads to expression variations [5]. A natural quantitative measure of gene-expression noise is the size of protein fluctuations compared to their mean amount [5]: if $P(t)$ is the protein concentration at time $t$, the noise $\eta(t)$ is given by

$$\eta^2(t) = \frac{\langle P(t)^2 \rangle - \langle P(t) \rangle^2}{\langle P(t) \rangle^2} \qquad (1)$$

that is, the variance divided by the squared mean of the number of protein molecules per cell. By considering the expression variability of a particular gene across a cell population, Swain and colleagues suggested how noise can be mathematically decomposed into intrinsic and extrinsic contributions [5]. They showed that the total noise is given by the sum in quadrature of the intrinsic and extrinsic components, that is

$$\eta_{tot} = \sqrt{\eta_{int}^2 + \eta_{ext}^2} \qquad (2)$$

Different works have proposed analytical expressions for $\eta_{int}$ and $\eta_{ext}$ that depend on the sources of fluctuations and on the available measurable quantities [5-7]. On the experimental side, the issue of how intrinsic and extrinsic noise contributions can be discriminated was initially addressed by Elowitz and co-workers [8]. By considering two identical gene copies present in the same cell, they measured their protein products simultaneously. Because the gene copies are both exposed to the same intracellular environment, one can assume that deviations between them are solely due to intrinsic factors. Thus, by tagging the two genes with distinguishable fluorescent probes and measuring the average deviation between the two protein amounts over a cell population, intrinsic noise can be quantified, and extrinsic noise then follows from Eq. (2) [5]. To date, this simple dual-reporter framework has been pursued by a number of studies aimed at measuring noise in gene expression both in vivo [9,10] and within in silico simulations [11]. Beyond the identification of gene-expression noise sources, increasing research effort has been committed to understanding how cells process these fluctuations to achieve the high precision required for life maintenance. Recently, compartmentalisation by phase separation in cells [12] has been found to play a role in decreasing noise at the protein level [13]: the formation of protein aggregates increases the local number of interacting molecules, thereby decreasing the noise level. Yet, most importantly, variability has been shown to be buffered by cells thanks to elementary gene regulatory pathways in which molecules play together in tuning gene expression, i.e. network motifs. Therein, the signs of the regulatory interactions shape the target gene's response in a way that decreases the variance of the final protein outcome [14,15,4]. Through a number of stages, these molecular interactions are able to transform noisy signals into precise outputs [16].
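To make the dual-reporter scheme concrete, here is a minimal Python sketch of the decomposition in Eqs. (1)-(2). The data are simulated (a shared lognormal extrinsic factor plus independent Poisson intrinsic fluctuations for two reporters), and all numbers are illustrative; the estimators in the comments are those of the dual-reporter method of Elowitz and co-workers [8].

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate dual-reporter data: an extrinsic factor shared by both gene
# copies, plus independent intrinsic (Poisson) fluctuations in each copy.
n_cells = 10_000
extrinsic = rng.lognormal(mean=0.0, sigma=0.3, size=n_cells)  # shared cell state
mean_expr = 50.0
g = rng.poisson(mean_expr * extrinsic)  # reporter 1 (e.g. a ZsGreen-like probe)
r = rng.poisson(mean_expr * extrinsic)  # reporter 2 (e.g. an mCherry-like probe)

# Dual-reporter estimators:
#   eta_int^2 = <(g - r)^2> / (2 <g><r>)      (independent deviations)
#   eta_ext^2 = (<g r> - <g><r>) / (<g><r>)   (covariance from the shared state)
eta_int2 = np.mean((g - r) ** 2) / (2.0 * g.mean() * r.mean())
eta_ext2 = (np.mean(g * r) - g.mean() * r.mean()) / (g.mean() * r.mean())
eta_tot2 = eta_int2 + eta_ext2  # sum in quadrature, Eq. (2)

print(f"eta_int = {np.sqrt(eta_int2):.3f}")
print(f"eta_ext = {np.sqrt(eta_ext2):.3f}")
print(f"eta_tot = {np.sqrt(eta_tot2):.3f}")
```

In this toy setting the intrinsic component shrinks as the mean expression grows, while the extrinsic component stays pinned to the variability of the shared factor, which is the qualitative behaviour exploited by the studies discussed below.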
For example, a network where a TF both enhances the expression of a gene and simultaneously activates a repressor of the same gene, i.e. a type of feed-forward loop, has been shown to act as a noise buffer [15]. In these loops, short non-coding RNAs called microRNAs (miRNAs) have often been found to mediate the repressive path by targeting the transcript mRNA and preventing its translation [15,17] (a toy model of this motif is sketched after this section's opening). In fact, miRNAs have been found to be involved in several types of regulatory pathways [18], and their functions appear to be tightly related to noise processing [19]. All these noise-managing mechanisms are the result of the evolutionary process, during which cells have been selected according to an unknown fitness landscape. It is commonly believed that the minima of this landscape correspond to biological structures aimed at decreasing the noise level in gene expression [20,21], in order to increase individuals' robustness against fluctuations. Yet, some studies have pointed at the opposite possibility [22], i.e., the positive selection of highly noisy genes for a variability advantage. A recent study based on a combination of theory and experiments [23] showed that selective pressure might even increase expression noise, and that the positively selected genes with elevated noise are also those highly regulated by transcription factors. Along the same idea that cells do not necessarily buffer noise, a recent work showed that the introduction of extrinsic noise in microRNA-mediated regulatory networks, i.e., increased variability in gene expression, can instead favour cell differentiation [24-27]. These recent studies raise the possibility that cells do not only buffer noise but rather take advantage of stochasticity to optimise specific needs, e.g., cell-to-cell variability, protein number precision, information flow [28,29], etc. Therefore, the initial question about what mechanisms lead to noise buffering in cells should instead be changed into: what are the mechanisms that allow cells to optimise their interplay with noise?

The role of miRNAs in noise processing

MiRNAs are short (~22 nt long) non-coding RNAs that work as post-transcriptional regulators by establishing and maintaining gene expression patterns [30-32]. They are encoded in nearly 1% of the genome of nematodes, flies and mammals [30], and they are implicated in the regulation of a variety of processes, such as the timing of developmental events, cell differentiation, proliferation and apoptosis [33], as well as tumorigenesis and host-pathogen interactions [34]. To carry out their regulatory roles, miRNAs bind to their mRNA targets through base pairing, with the degree of pairing complementarity determining whether the target will undergo translational repression or increased mRNA degradation. The pairing occurs thanks to miRNA loading into RISCs, complexes involving Ago proteins that guide miRNAs to cognate mRNAs. MiRNA-dependent regulation is combinatorial: a typical miRNA has many targets and every target is regulated by many miRNAs [35].
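As a toy illustration of the feed-forward motif mentioned above (a TF activating both a target and a miRNA that represses it), the following Python sketch integrates a minimal deterministic model. It assumes, for simplicity, that the miRNA acts catalytically (it is not consumed when it degrades its target), and every rate constant is an arbitrary placeholder rather than a measured value; the point is only that the steady-state protein output becomes nearly insensitive to the TF dosage when the miRNA arm is strong.

```python
# Toy miRNA-mediated incoherent feed-forward loop (all parameters illustrative):
#   TF (level x) -> mRNA m:    dm/dt  = k_m*x  - d_m*m - k_on*m*mu
#   TF (level x) -> miRNA mu:  dmu/dt = k_mu*x - d_mu*mu   (miRNA recycled)
#   protein from free mRNA:    dp/dt  = k_p*m  - d_p*p
def steady_state_protein(x, k_m=1.0, k_mu=1.0, k_on=10.0,
                         d_m=1.0, d_mu=1.0, k_p=1.0, d_p=1.0,
                         dt=0.01, t_end=200.0):
    m = mu = p = 0.0
    for _ in range(int(t_end / dt)):  # simple Euler integration to steady state
        dm = k_m * x - d_m * m - k_on * m * mu
        dmu = k_mu * x - d_mu * mu
        dp = k_p * m - d_p * p
        m, mu, p = m + dm * dt, mu + dmu * dt, p + dp * dt
    return p

for x in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"TF level {x:>4}: steady-state protein = {steady_state_protein(x):.4f}")
```

Doubling the TF level from 4 to 8 changes the output by only a few percent, whereas without the miRNA arm it would double: this partial dosage compensation is the essence of the buffering behaviour discussed above.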
Although overrepresented in gene regulatory networks [36,19,37], miRNAs exert a mild repressive role on most of their targets, with a typical fold repression smaller than two [38]. Thanks to increasingly sophisticated experimental techniques, i.e. microfluidic devices, deep RNA-sequencing and single-cell transcriptome data, the role of miRNAs as noise-processing units has been further elucidated, and diverse modes of microRNA-mediated noise control have emerged from combinations of theoretical and experimental studies. The first and older idea sees miRNAs playing a pivotal role in gene regulatory networks by reducing fluctuations in protein expression, thus conferring stability to the gene-expression network [19,39-41]. Indeed, gene expression may gain precision, thereby stabilising the identity of individual cells, through miRNA-mediated noise filtering [42]. The way miRNAs act as noise buffers is through network motifs [43]. As mentioned, interaction networks where transcription factors control the expression of the miRNAs as well as of their targets (miRNA-mediated feed-forward loops) are efficient in maintaining a desired expression level despite changes in gene dosage or fluctuations at the level of the master transcription factor. One of these examples is the Incoherent Feed-Forward Loop (IFFL), a type of circuit where a regulator TF both directly favours and indirectly inhibits the expression of a target gene, the latter through activation of a miRNA [15]. Theoretical modelling proved a useful tool for formulating predictions on the IFFL's behaviour [15,44,45], and these were soon experimentally confirmed [17,46]. Also regulatory modules where a miRNA and a TF mutually inhibit one another, i.e. toggle switches, have been shown to be capable of maintaining stable gene expression [19]. Siciliano and co-workers verified this ability by building a synthetic miRNA-mediated toggle switch [47]. They showed that such a circuit is able to generate two different protein states, with the miRNA controlling the switch: in the absence of miRNAs, the cell randomly switches from one state to the other. However, noise appears to be endogenously controlled not only in a static way. Transcription of regulators often occurs in a fashion that alternates bursts of mRNA production with silent intervals, rather than at a constant rate of transcript accumulation [48]. A more recent investigation into the performance of regulatory elements [49] shows that static control of protein noise is not always stable. In fact, transcriptional bursting appears to be an ingredient that hampers noise reduction in feed-forward loops. An instance of this is given by the lin-4 miRNA involved in an iFFL: its pulsatile transcription allows an important developmental factor to be isolated from upstream fluctuations [17]. Although noise reduction seemed a hallmark of miRNA action, recent developments suggest that miRNAs may have a different effect on protein expression noise depending on the protein expression level [50]. One of the first pieces of evidence in this direction is provided by the work of Schmiedel and co-workers [9], who investigated the role of miRNAs in gene-expression noise by combining mathematical modelling and single-cell reporter assays. The authors created a bidirectional plasmid reporter, depicted in Fig. 1a, encoding two fluorescent versions of the same protein, ZsGreen and mCherry.
The first protein is unregulated, thus its amount represents a proxy for transcriptional activity, whereas mCherry is equipped with miRNA binding sites in its 3'UTR. Since the two proteins are transcribed together, this system allows a quantitative comparison between miRNA-regulated and unregulated gene products. In order to test the effect of endogenous miRNAs, this dual reporter was transfected into mESCs and single-cell fluorescence was measured upon providing mCherry with one or multiple miR-20 binding sites. Expression fluctuations in the unregulated and regulated cases were compared at similar transcriptional activity, by binning cells according to their reporter expression level. The results suggested that miRNA-mediated effects on noise depend on protein expression intensity: in cells with low reporter expression, mCherry noise was reduced with respect to the unregulated case, whereas in cells with high expression noise was increased. Moreover, the steepness of the transition between the two regimes increased with the target complementarity and the number of miRNA binding sites. A theoretical model describing transcription, translation and miRNA-mediated regulation was compared to the experimental data. With the noise decomposed as suggested by Swain et al. [5], the model predicted different effects of miRNA regulation on the intrinsic and extrinsic components: intrinsic noise is reduced with respect to the unregulated case, with a reduction that depends on the miRNA-mediated fold repression $r$; specifically, at equal expression levels, $\eta_{int}$ is lowered by a factor $\sqrt{r}$. As suggested by Ebert et al. [40], this reduction stems from the reduced protein translation due to miRNA regulation and thus from the transcription speed-up required to achieve the same expression level as in the unregulated case. The $\eta_{int}$ reduction was confirmed experimentally by measuring the products of two identical gene copies, one unregulated and the other equipped with miRNA binding sites. The results suggest that the reduction of intrinsic noise is an inherent feature of miRNA-mediated regulation and, more generally, of post-transcriptional regulation. Bearing in mind that the overall noise increases at high expression levels while $\eta_{int}$ is reduced, $\eta_{ext}$ must undergo an increase upon miRNA-mediated regulation. Extrinsic noise was modelled as $\eta_{ext} = \tilde{\eta}_{\mu}\,\phi$, where $\tilde{\eta}_{\mu}$ is the miRNA pool noise and $\phi$ is the strength of repression. As expected, $\tilde{\eta}_{\mu}$ plays a decisive role in determining the amount of extrinsic noise: the more variable the miRNA pool, the higher $\eta_{ext}$. Also, different miRNAs display different pool noise levels. $\tilde{\eta}_{\mu}$ estimates were similar among different constructs with the same miRNA binding sites, and they appeared to depend on miRNA repression strength. In fact, miRNA pool noise tends to decrease for highly repressive miRNAs. Moreover, the measured $\tilde{\eta}_{\mu}$ values were lower when the same miRNA was transcribed from multiple independent gene copies, suggesting that uncorrelated fluctuations in miRNA transcription average out. In addition, protein expression noise appears to be reduced if miRNA-mediated regulation is combinatorial. Reporters with multiple miRNA binding sites displayed lower noise values than those regulated by a single miRNA. As mentioned, this reduction is further enhanced if the different miRNA pools are transcribed in an uncorrelated way. Since endogenous targets often contain several imperfect miRNA binding sites, the authors tested this scenario by providing mCherry with multiple unrelated sites.
This resulted in higher fold repression as compared to a non-combinatorially regulated case. Thus combinatorial regulation by miRNAs might reduce noise through independent fluctuations compensating each other. Consistent with the previous predictions, the overall noise was reduced, except when mCherry levels were high, and the dependence of $\eta_{int}$ on $\sqrt{r}$ was confirmed. Therefore, although miRNA action displays opposing effects on intrinsic and extrinsic noise levels depending on the protein's expression level, the combination knocks down the overall noise at low expression and amplifies it at high expression with respect to unregulated protein production. However, according to [51,52], miRNAs mostly target lowly expressed genes, that is, they preferentially regulate those genes for which noise is most reduced upon miRNA-mediated regulation. Thus Schmiedel's findings suggest that endogenous combinatorial regulation by miRNAs reduces $\eta_{tot}$ despite the additional extrinsic noise due to the variability of the miRNA pool. Interestingly, Zare and co-workers [53] conducted a systematic analysis of the distribution of miRNA binding sites throughout the mouse genome, and showed that such sites are found significantly more often within genes encoding fundamental regulatory proteins, especially those with high intrinsic transcriptional noise.

Fig. 1. (a) The dual-reporter circuit of [9]: a bidirectional plasmid encoding two copies of a gene, one of which contains a number N of miRNA binding sites in its 3'UTR. Gene transcript levels are quantified through fluorescence measurements. The unregulated gene, measured through ZsGreen intensity, can be considered a proxy for transcriptional activity. The miRNA-regulated gene, measured through mCherry intensity, can be compared to the unregulated one in order to quantify the effects of miRNA-mediated repression. When the target gene transcript is sequestered by the miRNA as described in (d), the fluorescence of the miRNA-regulated gene (mCherry) can be taken as a proxy for the amount of free target transcript. The qualitative plot on the right represents the amount of mCherry as a function of ZsGreen. The cyan line represents the case where both ZsGreen and mCherry are devoid of miRNA binding sites (N = 0), while the orange line qualitatively describes a case with N ≠ 0. The first scenario results in a linear relationship between ZsGreen and mCherry amounts. By contrast, in the second scenario the miRNA-mediated target sequestration generates a threshold behaviour. Adapted from [9]. (b) Schematic representation of the bimodal distributions obtained when combining threshold-like response and noise. The grey shadowed region around the threshold identifies a transcription-rate range for which the target may be bimodal in the case of pure intrinsic noise (upper right panel) or extrinsic noise in the miRNA pool (lower right panel). With intrinsic noise only, a high miRNA-target interaction strength is necessary for a bimodal target (red line), while with extrinsic noise bimodality is present also for mild interactions (blue line). (c) Schematic example of a miRNA-target regulatory network with the associated threshold-like behaviour. All miRNAs act as repressors of all targets but with different strengths of interaction, represented by the different thicknesses of the links. Adapted from [58]. (d) Theoretical circuit representing the interaction of a miRNA and one of its targets. The target is transcribed from gene t into mRNA transcript T.
T can be degraded, translated into the protein P (which can be degraded as well) or sequestered by the miRNA. The miRNA is transcribed from gene µ. The corresponding miRNA transcript µ can either be degraded or form a complex with its target mRNA T.

These results appear to fit well with Schmiedel's idea of miRNAs reducing intrinsic noise while increasing extrinsic noise. Ultimately, Schmiedel's work suggests an additional role for miRNAs in noise processing, potentially related to their presence in those biological processes that take advantage of gene expression variability, such as cell differentiation. MiRNAs have indeed been found to be largely involved in cell fate decision contexts, as reviewed in [54]. Their role in differentiation has been investigated in the aforementioned work on PSCs [1] and in [2]. Kumar et al. showed that miRNA knockdown, with respect to standard culture conditions, results in gene expression changes similar to those observed when culturing cells in conditions that inhibit differentiation. However, miRNA-knockdown cells seem to be more heterogeneous than the latter, consistent with a role of miRNAs in buffering gene expression noise, and the authors suggest that this higher heterogeneity might be due to cells partly committing to the ground state. By profiling miRNA expression in PSCs, the authors highlighted the presence of two main miRNA groups: the ES-cell-specific cell-cycle-regulating miRNAs (ESCC), well known for being highly expressed in PSCs, and the let-7 miRNA family. By experimentally testing the exclusive and simultaneous expression of the two miRNA groups and observing how gene expression was affected, they suggested that ESCC miRNAs can drive PSCs into a transition state where they are likely to differentiate, whereas let-7 alone appears able to repress a set of pluripotency genes, effectively leading to differentiation. Klein and co-workers showed that the intrinsic dimensionality of gene expression in pluripotent cells decreases after differentiation. A key role of miRNAs in mediating cell commitment to developmentally more advanced states by governing this fine-tuning is therefore suggested. Finally, Garg and Sharp proposed that miRNAs may not only control cell-to-cell heterogeneity, but also generate it [55]. Their suggestion is that miRNAs could enhance the variability of the PFs through noise in the miRNA pool, in agreement with Schmiedel's idea. This noise could be transmitted to PFs through the regulatory network, and PFs could in turn determine the miRNA expression profile, thus maintaining the cell in an established phenotypic condition. This idea would also be consistent with the several observations that miRNA profiles identify cell states. Interestingly, miRNAs and their targets mostly interact via titration, with the target responding in a threshold-linear fashion upon induction of its transcription rate or of the miRNA amount [56-59]. The presence of a threshold-like behaviour defines two main regimes: a "repressed" (low-target) regime, in which the miRNA amount overcomes that of the target, most of the target molecules are bound by miRNAs and the target is effectively repressed; and an "unrepressed" (high-target) regime, in which targets overcome miRNAs and there are enough free target molecules to be translated [57], see Fig. 1b. Around the threshold between the two regimes, where miRNAs and targets are highly coupled, the system is sensitive to fluctuations.
This means that a fluctuation at the level of a miRNA or a target can propagate to other targets or miRNAs [58,60] (a phenomenon called retroactivity [61]), and the closer the system is to the threshold, the stronger the retroactivity [24]. Since the steepness of the threshold between the two regimes depends on the interaction strength between miRNA and target, if the steepness is high (i.e. the interaction is strong), small intrinsic fluctuations may induce single cells to sample the two regimes, thus giving bimodal distributions of the target at the population level. This said, the presence of extrinsic noise, such as that in the miRNA pool, facilitates this sampling. Indeed, the broader the noise, i.e. the broader the miRNA distribution, the easier it is to have miRNA values such that the target is in the repressed regime in one cell and in the unrepressed regime in another. Bimodal distributions may then appear even if the interaction strength is mild [25,26]. Such a scenario may be valid not only for a one-miRNA/one-target system but also when multiple miRNAs and targets are interacting. In this situation, indeed, it is still possible to define a threshold around which all the targets and miRNAs are coupled, with the strength of these couplings determined by the particular interaction strengths [58], see Fig. 1c. The net effect is that several targets can simultaneously display bimodal distributions, thereby allowing the emergence of multiple phenotypic configurations, each defined by a combination of target states. This scenario, driven by competition for miRNA binding, has been extensively studied from a theoretical point of view [58,60,25] and verified in ad hoc in vitro experiments involving two targets of the same shared miRNA [24]. However, its extension to endogenous situations is still debated, with experimental reports that dispute the plausibility of competition at endogenous expression levels of miRNAs and targets [62-64]. It is known that cell types are identified by a small set of miRNAs that dominates the total miRNA pool (master miRNAs) [65,66]. Suzuki et al. [67] analyzed the relationships between master miRNAs and regulatory regions called super-enhancers (SEs), known for controlling cell identity. SEs were found to be connected to a few highly abundant miRNAs, which turned out to be the previously identified master miRNAs. Moreover, SEs were observed to widely shape miRNA expression. Since the interplay between master miRNAs and SEs appears to identify the cell state, it is suggested that these miRNAs play a fundamental role in transitions between cell states, i.e. differentiation processes, and that SEs might act as noise generators by enhancing the miRNA pool noise, thereby favouring the emergence of bimodal phenotypes. These results suggest that miRNAs could increase the cell-to-cell variability of their targets. If the targets are key developmental factors, this variability can be the trigger of cell state transitions [27]. These experimental works, along with theoretical modelling, have shown that quantitative investigation is crucial for understanding the impact of miRNAs in managing noise. Yet, current studies are often limited by difficulties in exactly quantifying the molecular interactions between miRNAs and their targets. These limitations and the recent advances in this field are discussed in the next section.
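Before moving to inference, the threshold-linear titration behaviour invoked throughout this section can be made concrete with a minimal steady-state calculation. The Python sketch below solves a stoichiometric miRNA-target titration model in the spirit of refs. [56-58], in which both molecules are consumed upon binding; all rate constants are illustrative placeholders rather than measured values.

```python
import numpy as np

# Steady state of stoichiometric miRNA-target titration:
#   dT/dt = k_T  - d_T*T  - k_on*T*M
#   dM/dt = k_mu - d_mu*M - k_on*T*M
# Eliminating M = k_mu / (d_mu + k_on*T) at steady state gives a quadratic in T:
#   k_on*d_T*T^2 + (d_T*d_mu + k_on*(k_mu - k_T))*T - k_T*d_mu = 0
def free_target(k_T, k_mu=50.0, d_T=1.0, d_mu=1.0, k_on=100.0):
    a = k_on * d_T
    b = d_T * d_mu + k_on * (k_mu - k_T)
    c = -k_T * d_mu
    return (-b + np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)  # positive root

for k_T in (10, 30, 48, 52, 70, 90):
    print(f"target transcription rate {k_T:>3}: free target = {free_target(k_T):7.3f}")
```

Sweeping the target transcription rate across the miRNA production rate (here 50) reproduces the two regimes of Fig. 1b: the free target stays close to zero below the threshold and grows roughly linearly above it, with the sharpness of the crossover set by the binding constant.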
Quantitative inference on miRNA-target interactions

In order to understand how miRNAs deal with gene expression noise, the combination of experiments with theoretical modelling of miRNA-target interactions provides an essential tool. Consistent parameter estimates allow precise quantitative predictions of expression variability. Theoretical modelling of biological interactions is often built on network theory. Therein, the gene-expression machinery is described as a set of nodes, representing molecules such as miRNAs, mRNAs and proteins, connected by links representing the interactions amongst them, see Fig. 1d. Clearly, this kind of approach normally requires a high number of coarse-grained parameters to be defined. In fact, a long-standing question related to the parameters modulating miRNA-mediated processes is to what extent their values influence the processes' outcomes, a question that rests on parameter estimation. Precise quantitative estimates would indeed improve both the algorithms aimed at predicting the target genes of specific miRNAs and the understanding of the biological mechanisms underlying experimental observations, thereby increasing the predictive power of theoretical models. In the last few years, mathematical modelling of miRNA-target interactions has greatly focused on the aforementioned interaction networks involving miRNAs, TFs and target genes, as reviewed in [68,18]. Laurenti and colleagues used a network-theory approach to study theoretical interaction circuits aimed at reducing noise, that is, circuits that act as molecular filters [16]. These networks are constituted by simple combined biological interactions, such as the co-expression of two species that subsequently bind together, i.e. the iFFL. The authors' suggestion, which follows what was pointed out by Riba and co-workers a few years before [45], is that these molecular filters are pervasive in gene expression and that miRNAs participate in such modules. In fact, a few miRNA-mediated noise-reducing networks have been proven to be overrepresented in mammalian genomes by Tsang et al. [69]. An important example in the framework of miRNA-mediated network motifs was brought forward by Lai and colleagues [70]. Therein, the role of miRNA-mediated regulatory circuits in fine-tuning gene expression by buffering noise was elucidated by theoretical means such as Ordinary Differential Equation (ODE) modelling. For instance, the endogenous feedback loop formed by E2F1 and the miR-17-92 family was shown to display bistability, that is, the E2F1/miR-17-92 system can only switch between "ON/OFF" and "OFF/ON" states. Only a critical drop in the upstream E2F1-inducing signal can cause the system to switch to the opposite state, thus this type of network is inherently robust to fluctuations [71]. However, this kind of circuit is also known to be importantly involved in transitions between cell types. For instance, the same loop has been found to regulate the epithelial-to-mesenchymal transition, where the two phenotypes correspond to the two bistable states mentioned above. Indeed, the miRNA level determines into which state the system will collapse, thus increased noise in miRNA expression could act as a trigger for the transition. In fact, when involved in feedback loops, miRNAs have been shown to increase variability if this is required for achieving cell differentiation or cell state changes [19].
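The following toy Python sketch shows how a miRNA can gate such a bistable switch; it is loosely inspired by, and not a reproduction of, the E2F1/miR-17-92 loop above. A protein activates its own transcription through a Hill term while the miRNA, treated here as a fixed external input, adds to the protein's effective degradation; every parameter value is arbitrary and chosen only to place the system in a bistable regime.

```python
# Toy miRNA-gated bistable switch (all parameters arbitrary and illustrative):
#   dp/dt = alpha + beta*p^2/(K^2 + p^2) - (delta + k_mu*mu)*p
def settle(p0, mu, alpha=0.02, beta=1.0, K=0.5, delta=1.0, k_mu=1.0,
           dt=0.01, t_end=100.0):
    p = p0
    for _ in range(int(t_end / dt)):  # Euler integration to a fixed point
        dp = alpha + beta * p ** 2 / (K ** 2 + p ** 2) - (delta + k_mu * mu) * p
        p += dp * dt
    return p

for mu in (0.0, 0.3):
    low = settle(p0=0.0, mu=mu)   # start in the OFF state
    high = settle(p0=1.0, mu=mu)  # start in the ON state
    print(f"miRNA level {mu}: steady states from low/high start = {low:.3f} / {high:.3f}")
```

With the miRNA absent, the two initial conditions settle into distinct low and high states (bistability); raising the miRNA level collapses the high state, so a fluctuation in the miRNA pool is enough to decide which state a cell ends up in, echoing the trigger role discussed above.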
The iFFL's noise-buffering properties have also been widely studied in a number of theoretical and experimental works [44,15,46,49]. The results have highlighted its ability to adapt to transient signal changes, that is, the target gene's expression level displays little susceptibility to upstream fluctuations over a wide parameter range. Nevertheless, many quantitative aspects of these interactions, i.e. how noise is affected by the parameter regime, are still not fully understood. Carignano and co-workers addressed this issue by searching for the iFFL parameter regimes where noise is most efficiently buffered [72]. They demonstrated that if extrinsic noise is static, miRNA-mediated translational inhibition rejects noise over a broader parameter range than protein-decay amplification. As for dynamic extrinsic noise, a special case of the iFFL where the target gene and the miRNA are transcribed together was shown to either reduce or amplify product variability, depending on the relationship between the timescale of extrinsic fluctuations and that of mRNA and miRNA degradation. In general, the characteristic timescales of miRNA biogenesis, action and decay are of course crucial in determining a network's outcome. Since miRNA expression does not require protein synthesis, miRNAs were generally viewed as fast regulators of gene expression compared to transcription factors [73,74]. However, by combining theoretical modelling with miRNA induction and transfection datasets, Hausser and co-workers showed that the timescale of miRNA-mediated regulation is slower than expected [32]. Indeed, miRNAs only function as part of complexes with Argonaute (Ago) proteins [75], with the concentration of miRNA-Ago complexes usually considered constant [76]. Thus, the commonly observed small changes in protein levels seem to be due both to delays in miRNA loading into Ago proteins and to slow protein decay. Huge effort has also been spent in quantifying the strength of miRNA-target interactions, which represents a pivotal quantity for miRNA-target prediction algorithms [77] and an important parameter in theoretical models [58,60]. Bearing in mind the miRNA-induced linear-threshold target behaviour mentioned above, the affinity of a miRNA for its target determines the steepness of the threshold, and thus the susceptibility of the target to fluctuations in the amount of miRNAs or in the amount of other endogenous targets competing for the same miRNAs. Wu and colleagues [78] worked on mutations in the miRNA-binding mRNA sequences: they quantified the binding-energy change of each of 67,159 different mutations. Dealing with 21 cancer types, they showed that the higher the loss of binding strength, the more strongly expressed were the cancer-related genes. As mentioned, miRNA-target affinity determines the extent to which a miRNA affects mRNA translation compared to its degradation. Thus these results suggest that poor mRNA degradation may be a determinant factor in cancer. With a series of seminal papers, Zavolan's and van Nimwegen's groups moved as well in the direction of uncovering miRNA-target interaction strengths [77,79,59]. It is worth discussing this work a little more deeply in order to exemplify how a quantitative study of miRNA-target interactions involving theoretical and experimental tools can be performed. The authors first defined a model-based method to infer perfectly and imperfectly complementary miRNA targets, i.e. canonical and non-canonical sites, from Argonaute 2 cross-linking and immunoprecipitation data [77].
The model (MIRZA) includes parameters related to base pairs, loops in the sequences and position-dependent energy constraints imposed by Argonaute proteins. With these parameters, MIRZA computes the energy of a miRNA-mRNA hybrid, which allows calculating the frequencies with which RISCs bind each miRNA in a pool of different miRNAs. Parameters are then inferred from Ago-CLIP data collected in HEK293 cells by maximising the binding probabilities of the mRNA fragments observed in the samples. The inference procedure is performed by calculating a "target quality" $R(m|\mu)$ that gives the affinity of each miRNA $\mu$ for each mRNA fragment $m$, which can also be read as the fraction of fragment $m$ among target sites bound to miRNA $\mu$. This quantity is obtained by summing over all possible hybrid structures that $m$ can form when binding $\mu$. The fraction of time that $m$ is bound to a RISC loaded with miRNA $\mu$ is proportional to $R(m|\mu)\,p_\mu$, where $p_\mu$ is the total fraction of $\mu$-loaded RISC bound to mRNA. These fractions, called miRNA priors, are inferred from each CLIP dataset. The total probability of fragment $m$ being bound by a miRNA is $R(m) = \sum_\mu R(m|\mu)\,p_\mu$, and the total likelihood of a dataset is $R(D) = \prod_i R(m_i)$. The fitted parameters capture several already known features of the miRNA-mRNA bond. For instance, positions 2-7 of the binding sites, commonly known as the seed region, make the largest contribution to the energy, and multiple other position-dependent predictions on nucleotides emerge. By applying the model with the fitted parameters, it is possible to predict which miRNA $\mu$ is most likely to bind each fragment $m$, and even the structure of the most likely miRNA-mRNA hybrid. What has emerged from such predictions is that non-canonical sites are bound to a larger extent by miRNAs more loaded into RISCs, that is, miRNAs with higher $p_\mu$. In other words, $p_\mu$ correlates positively with the expression of $\mu$. Thus lowly expressed miRNAs target sites with high affinity, whereas highly expressed miRNAs also target low-affinity sites. To test the effectiveness of the predicted sites, mRNA fold changes estimated by MIRZA were validated by comparing them to those measured upon miRNA transfection. MIRZA predicts the existence of many functional non-canonical sites that had not previously been found by other miRNA-target interaction models. Moreover, these sites are found to be evolutionarily conserved, as their presence is significantly larger than expected by chance. The authors suggest that MIRZA could be further improved by adding conservation information to the model. In a subsequent work, MIRZA was used to quantify the strength of miRNA-target interactions [79], and the results showed that the computationally predicted binding energies strongly correlate with the energies estimated from biochemical measurements of Michaelis-Menten constants. Single-cell RNA-seq analysis then opened the way to inferring parameters describing the response of even hundreds of miRNA targets, and to verifying predictions that were previously only possible in a theoretical framework. A great example of such quantitative estimation is the work of Rzepiela and co-workers [59]. There, the authors inferred the sensitivity of individual targets to miRNA regulation from their expression in cells with varying miRNA levels.
The results showed that the response of miRNA targets to miRNA induction is hierarchical: the targets of a miRNA can be ordered in a hierarchy based on the miRNA concentrations at which they respond within the endogenous context of all the other miRNAs and targets in the cell. Specifically, the few targets with higher Michaelis-Menten constants displayed higher sensitivity to changes in the miRNA amount. Moreover, the responses followed behaviours that had been theoretically predicted in [58,60]. Quantifying the target response to miRNA-mediated regulation can make a decisive contribution to the comprehension of miRNA-mediated noise. Indeed, as shown by Schmiedel and colleagues [9], intrinsic noise is related to fold repression. Thus quantitative estimates of target sensitivity to miRNA induction, such as those by Rzepiela et al. [59], can be of great value for the understanding of such variability. Also the inference of miRNA-target interaction strengths can be used to improve predictions of miRNA-mediated gene expression noise, as shown in a recent work [24]. It is indeed observed that, in the sole presence of intrinsic noise with combinatorial miRNA-target interactions, an important parameter governing cell-to-cell variability is the interaction strength, the latter being proportional to the number of miRNA binding sites. For instance, target bimodality is achieved only for high values of the interaction strength. A subsequent theoretical study by Del Giudice et al. [25] investigated the relationship between extrinsic noise, target-response bimodality and miRNA-target affinity. The results suggested that if extrinsic noise is added to the system, target bimodality appears also when the miRNA-target interaction strength is small, with the size of the bimodality range again dependent on this parameter. Another interesting quantitative aspect is the extent to which miRNAs affect mRNA decay compared to its translation rate. The general idea was that miRNAs affect the messenger decay rate more than they affect translation [80]. According to this, one would expect changes in mRNA levels and in protein levels to be strongly correlated. Instead, mRNA and protein amount variations seem uncoupled, with the repression of target translation preceding the increase in its degradation rate, and the protein amount typically changing less than that of the mRNA [81,32,80]. However, theory suggests [82] that the observed decorrelation might be partially explained by a delay due to miRNA maturation [32]. Bearing in mind that the way miRNAs preferentially affect protein production depends on miRNA-target affinity, and is thus related to the interaction strength, the estimation of parameters that measure the impact of miRNAs on target degradation versus translation could also play a role in predictions of miRNA-mediated noise.

Conclusions

In this Mini Review, we discussed the literature underlying the recent efforts to understand the role of miRNAs in target noise control. While in the past miRNAs were believed to act mainly as noise buffers, more recent works suggest that extrinsic sources of noise, such as fluctuations within miRNA pools, lead to an increase in target noise, possibly driving the formation of different phenotypes. Thus, altogether, the recent works shed light on the possibility that living systems do not function by only minimising stochasticity, but that they have instead evolved by optimising the possible effects of randomness.
Also, they highlight the importance of interdisciplinary approaches in defining directions for the quantitative identification of the optimisation mechanisms orchestrating life. In this respect, the parallel advancement of inference methods to quantitatively estimate parameters related to miRNA-target interactions from theoretical modelling is of extreme importance. Indeed, parameter estimation is the only way to precisely predict expression variability. In this direction, the studies reviewed in the last section may all be used to improve predictions on miRNA-mediated noise, hopefully paving the way, with a constantly interdisciplinary approach, for model-based therapeutic perspectives and, more generally, for the understanding of the hidden secrets of living systems.
Advances in Opaque PBRs Internally Illuminated by Fiber Optics for Microalgae Production
The production of microalgae in laboratory systems is restricted almost exclusively to tanks of 500 mL to 20 liters made of transparent materials such as glass and plastic, placed under fluorescent lamps on shelves. In this work we developed a laboratory system which produced up to 50 liters of microalgae culture with productivity comparable to the traditional laboratory system, with the potential to increase the production scale and lower the energy consumption per produced volume. The system is built with an opaque plastic tank and illuminated by Plastic Optical Fiber (POF) and LED. The quality of the biomass grown in the LED culture system is comparable to the traditional cultivation, and scale-up is possible without increasing the occupied area, only by increasing the height of the tank. The productivity of the tank under LED for the strain Scenedesmus sp. in phototrophic cultivation reached around 20 mg·L−1·d−1, and continuing studies may increase it further.
Introduction
The illumination of photobioreactors (PBR) for the cultivation of microalgae around the world relies on solar energy. About 90% of world production of microalgae takes place in open tanks, so-called raceways or open ponds. In these the productivity is significant, but because they are open they are more exposed not only to sunlight but also to contamination by various microorganisms, including some that compete with the microalgae for the same nutrients or feed on them, often resulting in a lower biomass [1]. Figure 1 shows different raceways around the world.
Currently, the main uses of microalgae are animal feed, high-value compounds such as carotenes and antioxidants, supplements for human consumption, and biofuels such as biodiesel and bioethanol. The term microalgae refers to their microscopic size; most are suspended in water (planktonic), although they can also be found in the deep ocean, rivers or lakes (benthic) [2] and [3]. Light, together with carbon dioxide (CO2), is essential to photosynthesis in these organisms [4]. The potential use of microalgae has attracted growing interest in high-quality research.
The improvement found to reduce the exposure to contamination is the use of closed tanks of transparent materials such as glass or polycarbonate. The use of transparent material is required for illuminating the microalgae, since light energy is essential for the growth of most of them. The disadvantage of this type of system is the cost of the materials, both for installation and for maintenance. Constant transparency of the material is required to maintain biomass productivity [5]. Figure 2 shows models of closed systems.
To lower the cost of closed vessels, tanks made of opaque materials, such as PVC and polyethylene, are an alternative. In photoautotrophic culture, i.e. where light is required for microbial growth, the use of plastic optical fibers (POF) allows internal illumination of the microalgae culture, eliminating the use of transparent materials. Richmond (2004) mentioned that the alternative use of photobioreactors with optical fibers began in the United States over 30 years ago, in studies of microalgae for the production of hydrogen gas, where light energy was collected by concentrating solar mirrors and then delivered to the PBR. Some studies were developed in opaque tanks with optical fibers [6]-[8]. In the years leading up to 2000, this concept was taken up by a small but relevant research center in Japan, studying microalgae for gas mitigation through the biofixation of carbon dioxide and other greenhouse gases, including hydrogen production; the approach was understood to be technically simple, but uneconomical [4]. Other closed tanks can be found with volumes even higher than the one proposed in this paper, but with more complicated set-up and system assembly. Some examples are described in [9].
Thus, in the present work low cost materials were used, such as reused and readily available tanks, in order to reduce the cost of the tank, and some improvements were proposed in the internal distribution of the fibers, in order to bypass the main critical points reported in the literature for such systems: the difficulty of scale-up, mainly due to high cost, and the low illuminated surface area per volume of culture [7] and [10].
The traditional cultivation of microalgae in the laboratory is done under a special type of fluorescent lamp. Figure 3 shows the spectra of different fluorescent lamps. The light traditionally used for the cultivation of microalgae is the daylight type. This model has a luminous intensity spectrum different from that of the LED source used in the cultivation under LED, which has a higher intensity at wavelengths related to blue (440 - 485 nm). From the perspective of microalgal growth this difference is not a relevant issue, as the microorganism grows with the light energy absorbed in the range of 400 - 700 nm, so what matters most is that the amount of energy be comparable. This work used a cultivation system under Cool White LED, with two super bright LEDs as the light source. Two cultivation systems were tested and compared to the traditional laboratory cultivation system. To distribute the fibers internally, discs with holes for the POF were used. The discs were set one above the other, the fibers were distributed, and a controlled distance was kept between the lighting points. In the scientific literature the productivity of microalgae is associated, among other factors, with the smallest optical path, that is, a shorter distance between the lighting points [11]. Initially the growth potential of the microalgae was tested, and based on these data the scale was increased in a second tank. Afterwards, about half the culture was removed and fresh nutrients were added to continue microalgae production. Furthermore, in order to increase the amount of light delivered into the tank, a reflecting surface on the fiber tip was tested, to verify a possible enhancement of the illumination within the culture through the reflection of residual light at the fiber tip, without changing the light source.
The Photosynthetically Active Radiation (PAR), comprised within the range of 400 - 700 nm, can be understood as the irradiance, or radiant energy flow, of any source of light energy, and is measured in micromol of photons per square meter per second (µmol photons m−2·s−1). This unit is common among biologists, and 4.57 µmol photons m−2·s−1 is equivalent to 1 W·m−2, considering the sun as the light source on sunny days [4] and [12]. The exact ratio depends on the type of light source considered. PAR radiation was measured only at the beginning and at the end of the cultivation, to avoid exposing the mono-algal culture to possible contamination by other microorganisms. The behavior of the light energy flow within a microalgae culture is known to be exponentially decreasing: the first value is a maximum and, as the cell density increases, the amount of available light energy decreases [13].
This work is the first stage of a project on the construction of an internally illuminated PBR using POF and a lighting system based on solar tracking with a Fresnel lens, in order to concentrate solar energy onto the POF bundle. The larger-scale tank will include pH control, which will increase the biomass productivity. The cylindrical tank will hold 1000 liters of culture over a small footprint (1 m2). The literature reports that designs for solar concentration with transmission of light energy through optical fibers started in 1980 with a French research group, but the project stopped due to the high cost of the fiber [14]. Over the years solar concentration has been the subject of interest of different research groups, but in smaller volumes and different types of photobioreactors [7] [15] [16]. The current cost of 1 m of plastic optical fiber is less than 0.90 US$ and can decrease with the number of coils purchased, as soon as the viability of this type of tank opens that possibility.
Microalgae, Culture Medium and Microscopy
The microorganism used in this work was the microalgae strain Scenedesmus sp. SCIB-01, courtesy of the Laboratory of Ecophysiology and Toxicology of Cyanobacteria (LETC), Institute of Biophysics Carlos Chagas Filho, UFRJ. This microalga was collected in Lagoa de Ibirité (20˚01'19"S 44˚03'32"W), Minas Gerais, and isolated by LETC in 2011; classified by the Department of Botany of the National Museum/UFRJ; and preserved in the LETC collection of microalgae cultures in ASM-1 medium. All experiments used ASM-1 culture medium [17]. Direct cell counting was performed using an SC30 optical microscope (Olympus) with a 40× magnification lens.
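To make the unit conversion concrete, a minimal helper is sketched below (the function names are ours; the 4.57 factor is the sunlight value quoted above, and it changes with the source spectrum):

```python
# 1 W m^-2 of PAR corresponds to about 4.57 umol photons m^-2 s^-1 for
# sunlight; the exact factor depends on the light source spectrum.
PHOTON_FLUX_PER_WATT = 4.57  # umol photons m^-2 s^-1 per W m^-2

def watts_to_photon_flux(irradiance_w_m2):
    return irradiance_w_m2 * PHOTON_FLUX_PER_WATT

def photon_flux_to_watts(flux_umol_m2_s):
    return flux_umol_m2_s / PHOTON_FLUX_PER_WATT

print(watts_to_photon_flux(100.0))   # ~457 umol photons m^-2 s^-1
print(photon_flux_to_watts(400.0))   # ~87.5 W m^-2
```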
Plastic Optical Fiber (POF), POF Distribution Discs and Reflective Surfaces
Inside the tank, 126 POF segments were used, each cut to 117 cm length with a 2 mm diameter (Mitsubishi-ESKA™). The fibers were polished on both sides with P600 and P1500 sandpaper, followed by a finishing polish to mirror the surfaces in contact with the culture and with the LED. Inside the tanks, two polycarbonate discs with 29 cm diameter each were used to support and distribute the optical fibers homogeneously. Concentric cuts were made in the discs, similar to the openings of a Ferris wheel, resulting in 18 rods with gaps between them. In each rod, 7 holes were drilled at a distance of 2.5 cm from each other, for a total of 126 holes for the distribution of fibers within the tank. The average speed of the fiber in front of the laser beam, set by the pulling machine, was 2 cm·s−1, and the optical laser power used was 7.5 W. The holes in the upper disc were adjusted to the diameter of the POF, while the lower disc has slightly larger holes for receiving the rivets with the reflecting surfaces, as shown in Figure 4. The maximum distance between fibers was 5 cm at the disc edge, corresponding exactly to the spacing between the rods, and less than 4 mm between the fibers closest to the center of the disc. The discs were attached by three 37 cm long stainless steel screws. The reflective surfaces used were pieces of stainless steel 2.2 cm long by 2.2 mm wide. The reflective surfaces in the tank were fitted, at the lower disc, into 126 rivets with 2.3 mm inside diameter and about 4 cm length, to ensure better "face to face" contact between the reflective surface and the polished POF. Quick-setting glue was applied after placing the fiber. Similarly to the fibers, the stainless steel pieces were polished, in this case with P220, P360 and P600 sandpaper and a mirroring of the surface. The base of each rivet was gripped with locking pliers; tightening the rivet base was needed to prevent the stainless steel pieces from coming out.
Tanks and Apparatus for Aeration
In this work two tanks were used: one reused tank of up to 30 L capacity in polypropylene (PP) already existing in the laboratory (plastic bucket), and a 50 L barrel in polyvinyl chloride (PVC) widely used in laboratories for distilled water storage (reservoir). The plastic bucket diameters are 28, 30 and 34 cm, with a height of 40 cm (bottom to top). The barrel has an internal diameter of about 40 cm and is 1 meter high (Figure 5(a)).
The aeration apparatus was built from 4 stainless steel tube connections. The tubes were individually bent at one end in a bending machine, resulting in four pipes ending in semicircles. Through each tube, compressed air could be injected via four hoses in the systems under LED (Figure 5(b)). The air flow was measured by a flowmeter. The semicircles were made with diameters of 8, 13, 18 and 23 cm and are concentric. The height of the piece was 33 cm. Small holes were made at 45˚ from each other, resulting in 8 holes per semicircle (Figure 6).
LEDs and Support Brackets for the Plastic Optical Fibers
Two super bright LEDs of 50 W with 25 lighting points each were used. The external dimensions of the LED were 51.6 × 56.1 mm with 4.4 mm height (model ZM-J50W6P45-10C5BM, Zeme). Each LED was properly fitted to a heat sink and fan assembly to ensure the integrity of the fibers against possible overheating. The two supports connecting the fibers to the LEDs were made of solid rectangular aluminum blocks with 64 holes, and these were fixed with four screws to the heat sink.
CO2 Laser, Pulling Machine and PAR Meter
For the grooving of the fiber, a CO2 laser (model 48-2, Synrad) with a wavelength of 10.6 µm was used. A convergent ZnSe lens was also used to make the discrete grooves in the fiber. A pulling machine was required to maintain the average speed of the fiber during laser operation and to ensure the thickness of the groove in the fiber. The photosynthetically active radiation (PAR) was measured with a laboratory-scale radiometer (QSL-2100).
Traditional System of Microalgae Cultivation in the Laboratory
The traditional system of cultivation of microalgae in the laboratory consisted of cultivation shelves with 20 W daylight fluorescent lamps (OSRAM) and a transparent polycarbonate gallon of 24 L capacity (Nalgene), with flowmeters for compressed air and carbon dioxide. Four lamps were used. The air intake was controlled by the flowmeter, and the amount of air per cultivated volume was comparable to that of the LED system (Figure 5(b)).
Cell Growth
Cell growth was followed by two traditional assays, direct cell counting by microscopy and dry weight measurement [3]; the periods of light and dark (photoperiod) were 12 hours. Samples were taken at the beginning of each day.
Optical Power Measurement at the POF Tips
The optical power measurements at the ends of the POF were taken before and after grooving with a power meter instrument (model 2931C, Newport), under the following conditions: continuous current, wavelength of 488 nm and automatic evaluation range, with the photodiode detector (model 918D, Newport). As a light source for evaluating the effect before and after grooving the fiber, a simple low-power LED was used. To evaluate the reflective surface, an optical device called a 3 dB beam splitter was used. The 3 dB beam splitter is a simple optical device composed of optical fibers which has the ability to evenly divide the light intensity introduced at the leading end between the two opposite ends. Specially prepared adapters were also used for connecting the 2 mm diameter fiber to the 3 dB beam splitter, and likewise to the detector and the source.
Characterization of the Produced Biomass
The following components were estimated in all the biomass produced, after collection, centrifugation and lyophilization: oils [18], total lipids [19], carbohydrates [20] and proteins [21].
Experimental Procedure
For the assembly of the system under LED, POF pieces were cut and polished and used in both systems tested under the LEDs (126 units). These were then grooved by the laser and connected to the two super bright LEDs through the two aluminum blocks with holes for connecting and supporting the fibers at the LED.
For the preparation of the initial inoculum in experiments n.1, n.2 and n.3, there was first a period of activation of the cells removed from the incubator, and the initial inocula of the microalga Scenedesmus sp. were made in autoclaved ASM-1 under two conditions: traditional and under LED. The experiments were performed sequentially. In n.1 a laboratory conventional cultivation (LCC1) was prepared, with 20 L of culture in a gallon of 24 L capacity of transparent material, exposed to four vertical lamps on the shelf. Furthermore, in a sterilized, reused bucket were placed: the aeration apparatus, the fiber support, the supported optical fibers and 27 L of ASM-1 culture medium. Next, 3 L of inoculum with the microalgae Scenedesmus sp. were introduced (keeping the 1:10 ratio of inoculum to culture medium). Then, to increase the amount of PAR radiation in the cultivation, reflective surfaces were placed at the bases of the optical fibers inside the tank with water. The reflective surface used was polished stainless steel, and the PAR radiation was measured before and after placing the surfaces. This test was performed in ultrapure Milli-Q water. Based on the measured increases in PAR radiation, cultivations 2, 3 and 4 were performed with the reflecting surfaces placed at the tips of the fibers.
In the second experiment (n.2), an attempt was made to reproduce the same volume as n.1 with the possibility of microalgae production by semi-continuous batch. Thus a 30 L cultivation was made in the 50 L barrel with a tap. Because the barrel has a diameter a bit larger than that of the bucket, the culture level was slightly below the full length of the grooves, i.e., the culture was illuminated from the surface instead of the light being delivered internally. Therefore, the light emitted through the grooves of the POF was not put to best use, which caused uneven lighting in the cultivation. For this reason, experiment n.3 was conducted, in which 50 L of microalgae culture were grown in the barrel.
In experiment n.4 there were two batches: from exp. n.3, about 25 L of the barrel culture were removed and 25 L of fresh ASM-1 medium were added. The same was done again, totaling two batches in n.4. In the second batch of exp. n.4 an intermittent problem occurred in the flowmeter, decreasing the air flow.
The compressed air flows injected in the experiments were: 7 mL·min−1 in the traditional cultures (20 L cultivation), 10 mL·min−1 (30 L cultivation) and 16 mL·min−1 in the 50 L barrel, i.e. maintaining an approximate ratio of 0.3 mL of air per liter of culture. This ratio of air to culture volume was defined from the good aeration condition found in n.1 in the bucket under LED, after some unpromising previous attempts in which the microalgae deposited on the tank floor; a more detailed assessment can be carried out in the future. At the end of each cultivation the biomass was collected, centrifuged, frozen and freeze-dried. In the freeze-dried biomass, the contents of total lipids, oils, carbohydrates and proteins were characterized.
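As a quick arithmetic check of the stated air-to-volume ratio (values taken directly from the text):

```python
# Air flow per litre of culture for the three set-ups quoted above.
for flow_ml_min, volume_l in [(7, 20), (10, 30), (16, 50)]:
    print(flow_ml_min / volume_l)   # ~0.35, ~0.33, ~0.32 mL min^-1 L^-1
```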
The monitoring of the experiments was done by direct cell counting by microscopy and by the dry weight of the biomass. In addition, the photosynthetically active radiation (PAR) was assessed within the tank at the beginning and at the end of the cultivations. The measurements were made in different ways in the two cultivation systems. Inside the tanks under LED, the measurements were performed at 18 positions between the rods, recording the values measured in the region closest to the center of the fiber distribution discs and also in the outermost area of the discs. In the transparent gallon of the conventional cultivation, measurements were made in the central region of the tank, in the part of the tank nearest to the shelf lamps, in the least illuminated area opposite, and on both sides of the tank. Furthermore, pH variations were also followed, injecting carbon dioxide gas (99.999%) manually for 20 s at 1 mL·min−1, in the morning and in the afternoon, from experiment n.2 onwards.
Tests were made to verify the possible increase in light reflection by the reflective surfaces before their placement inside the tank. These were performed in the laboratory using the optical power meter coupled to the photodiode detector and the 3 dB beam splitter. A low-intensity LED source was introduced at one of the two equivalent ends, just to compare the intensity of reflected light before and after, and the other equivalent end was connected to the photodiode detector. The light reflected at the main tip of the beam splitter was measured first (background). Then a piece of polished stainless steel was placed against the main tip and the measurement was repeated. For reliable, though qualitative, optical power results, adapters tailored to fit between the beam splitter and all the other items of this experiment were used. The optical power measurements were performed with as little ambient light as possible during both readings.
The electrical currents necessary for the operation of the light sources of both systems, under LED and traditional, were also measured. Thus, the currents used by the super bright LEDs and the ventilation fans coupled to the heat sinks were measured, as well as those of the supply system of the 20 W lamps (ballast and lamps). The voltage available in the culture laboratory was also measured.
The optical powers before and after the groove made in the POF were also measured. For this measurement, three fibers 117 cm long were cut and their ends polished with the polishing abrasives. Grooves were made in two of the three fibers with the laser, and the ends were polished again to avoid any edge.
Equations
The principal quantities used were the specific growth rate and the biomass productivity, as well as the electric power involved in each of the systems tested. The specific growth rate (μ) was determined in the exponential phase of the cell growth curve, according to [22]. Equations (1), (2) and (3) are
μ = (ln X2 − ln X1)/(t2 − t1), (1)
Pb = (X2 − X1)/(t2 − t1), (2)
P = V · i, (3)
where X1 and X2 are the biomass (or cell) densities at times t1 and t2, Pb is the biomass productivity, P is the electric power in Watts, V is the voltage (Volts) and i is the electric current in Amperes.
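A minimal sketch of these quantities in code; the forms of Eqs. (1) and (2) are the standard definitions implied by the text, and the example numbers reproduce the power figures reported in the Results section below:

```python
import math

def specific_growth_rate(x1, x2, t1, t2):
    """Specific growth rate mu (Eq. (1)), from densities x1, x2 at times t1, t2 (days)."""
    return (math.log(x2) - math.log(x1)) / (t2 - t1)

def productivity(x1, x2, t1, t2):
    """Volumetric biomass productivity Pb (Eq. (2)), e.g. in mg L^-1 d^-1."""
    return (x2 - x1) / (t2 - t1)

def electric_power(voltage, current):
    """P = V * i in Watts (Eq. (3))."""
    return voltage * current

print(electric_power(123.4, 1.488))          # ~183.6 W, LED system
# Per-volume comparison behind the quoted ~17.5% electricity saving:
p_led = electric_power(123.4, 1.488) / 50.0  # W per litre, 50 L under LED
p_lcc = 89.0 / 20.0                          # W per litre, 20 L traditional (89 W)
print(1.0 - p_led / p_lcc)                   # ~0.175
```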
Results and Discussion
The optical power values at the ends of the optical fibers before and after grooving are shown in Table 1. The values after grooving decreased by about 50%. This suggests that part of the light is dissipated through the groove toward the cultivation, without excluding the possibility that a small portion of the light energy is lost in the material modified by the heating of the laser beam on the fiber. The conditions under which the grooves were made were at first empirical, taking as the best condition the one in which the light output through the grooves could be perceived visually while the fiber structure remained resistant to breaking, i.e., the condition in which the fibers were least friable and the light through the groove most intense. This information may be useful for the study of an optimized condition.
For the traditional cultures, the number of lamps and the distance between the sources and the culture systems were adjusted so that the amounts of light energy in the paired systems (traditional and LED) were approximately matched. Although the sources are not identical, from the point of view of microalgal growth the difference in light intensity spectra is not significant if the amount of PAR radiation is comparable, which does not prevent individual components of the microalgae from differing in amount. The growth curves of all the experiments are shown in Figure 7. The behavior of the LCC1 and LCC3 curves shows that cell growth in the traditional cultivation is lower compared to growing under LED, i.e. there are more cells in the cultures under LED in both conditions (BCE).
Figure 7(b) shows the impairment of cell growth by the lack of adequate lighting in both tanks, for different reasons: the traditional culture received less illumination in order to match what was present in the culture under LED, which was itself only partially illuminated. Thus, it is clear that deficient lighting directly affects cell growth. The behavior of the 50 L barrel curve was more promising in cell growth than the other curves, showing the highest specific growth rate of all the experiments, but the continuation in batches showed that the growth curves had different performances. The second batch of exp. n.4 was impacted by a lower air intake, which possibly explains the marked difference in behavior (Figures 7(d)(1) and (2)). Biomass production curves together with the cell growth curves are shown in Figure 8, and the behavior is different.
The behavior of the increase in biomass differs from that of cell growth in most experiments. In LCC1 cell growth accompanies biomass production, which is not the case for the plastic bucket, where cell growth is more pronounced than the increase in biomass, i.e. a
greater number of cells than in the traditional cultivation LCC1, but with less weight (Figure 8(a) and Figure 8(b)). Cell growth in LCC2 is not as significant, but the amount of biomass produced is considerably higher, reaching 350 mg·L−1, i.e., the small amount of light led to an increase in the mass of the microorganism (Figure 8(c)). The literature notes that stress, whether nutritional or in light energy, can increase the biomass of microalgae [5]. The biomass growth curve of the 30 L reservoir also increases, but less than in LCC2, despite the unbalanced cell growth (Figure 8(d)). In exp. n.3, under greater illumination than the other experiments, cell growth in the standard culture LCC3 is lower than under LED, but the amount of biomass produced is higher (Figure 8(e) and Figure 8(f)), i.e., more cells were grown under LED but they are lighter. In the two sequential batches of exp. 4 there is significant productivity in the first batch, higher than in the previous simple cultivation and slightly higher than the productivity obtained in LCC3 (Table 3). This shows that the amount of light, together with the higher ratio of inoculum to fresh culture medium, favored biomass production [23]. In exp. n.4, the second batch shows a slow biomass increase compared to the previous batch, quite possibly because the air inlet was impaired for a few hours during the first 2 days of growth, which showed the dependence on effective aeration in addition to the availability of carbon dioxide.
Table 2 shows the maximum and minimum PAR values measured in the different systems, under LED and traditional. The experiment with the same volume as grown in n.1 was not successful, making it necessary to increase the volume. Therefore exp. n.3 was performed, with the production of 50 liters of culture, in which the tank with a tap was assessed. The two experiments in n.4 served to assess the barrel's potential for semi-continuous batch production.
The 1:10 ratio between microalgae inoculum and ASM-1 culture medium had been tested in previous studies and gave a better cost-benefit, since another, more productive ratio tested (1:4) resulted in a longer time to the effective start of cultivation because of the larger amount of initial inoculum [11]. The other experiments followed the same steps of culture preparation, allowing for the difference in the cultivated volumes, except experiment n.4, which was the continuation of experiment n.3. The pH measurements taken during the experiments were between 7.2 and 8.8.
The time considered for the productivity calculation was the period of cultivation, with the exception of the batches in exps. 3 and 4, for which 14, 6 and 8 days of culture, respectively, were considered. In addition to the productivity and specific growth rate measurements, the contents of lipids, oils, carbohydrates and proteins in the freeze-dried biomass were also characterized, and the results are shown in Table 3. The productivity values of the traditional systems LCC1 and LCC2 were slightly different, probably due to the lower amount of PAR radiation in LCC2. Compared with the other traditional cultures, there was about 53% less maximum PAR radiation in LCC2 than in LCC3 (the value is 100 µmol photons m−2·s−1 in LCC1 and 188 µmol photons m−2·s−1 in LCC2), which impacted the productivity results in LCC2. The LCC1 and LCC3 cultures were subjected to different PAR radiation values, which resulted in a productivity slightly higher in LCC3 than in LCC1.
Most of the results obtained for the total lipid content are comparable, but differences were observed, likely due to a non-exhaustive extraction method, i.e., the obtained values represent minima, and the method may be considered semi-quantitative. Despite the lower total lipid content found in exp. 3 for the 50 L reservoir, this difference can be investigated further, because in the continuation of the experiment (exp. 4) the total lipid contents obtained in the two sequential batches were between 14% and 17%, i.e. comparable to the other results. Repetition may therefore clarify the issue.
The Tukey-Kramer procedure [24] suggests that there is significant evidence of differences between the pairs: plastic bucket and LCC2; 30 L reservoir and 50 L reservoir; 50 L reservoir and LCC2; and 50 L reservoir and batch n.1. For the oil content, no sufficient statistical evidence was found to prove differences among the results. The productivity of the tank under LED in the most optimized conditions (n.3) was not comparable to the shelf, but the continued cultivation showed a productivity slightly higher than the traditional cultivation (exp. 4). The characterized contents of total lipids, oils, carbohydrates and protein were also comparable between the paired systems under LED and LCC, except in exp. 3, especially as regards the total lipid content. This indicates that the biomass obtained has similar characteristics.
In the evaluation of the optical reflecting surface, the optical power results showed that the reflection of light at the fiber tip produced a significant increase in illumination. The value obtained before placing the reflector surface was 73.75 ± 0.1 nW, considered the background of the experiment. This amount of energy is inherent to the reflection of light in the optical fiber beam splitter in contact with air at one end, resulting from the difference between the media, plastic optical fiber and air. After the introduction of the reflecting surface, the optical power value changed to 757.9 ± 1.9 nW, more than ten times the background reflection. Thereafter the assembly of the system was carried out using reflective surfaces on the fibers from exp. 2 onwards, even if an increase of the same order of magnitude could not be expected, given that the experiments were carried out under completely different conditions.
The evaluation of the PAR radiation inside the tank upon placing the reflecting surfaces was performed in Milli-Q ultrapure water, and the increase was from 50% to 70%, maintaining the same power of the super bright LEDs (Table 4). After input of the inoculum, the minimum and maximum values ranged from 46 up to 180 µmol photons m−2·s−1. The cell concentration in this condition was 10⁵ cells·mL−1.
The fiber distribution disc was efficient because it favored the aeration of the cultivation and allowed a representative PAR radiation measurement within the culture. The empty spaces between the rods facilitated the movement of the culture in the tank under the injection of compressed air, and reduced the possibility of microalgae deposits inside the tank. Many previous designs had been tested, but the reduction of moving parts favored the movement of the culture in the tank. Even after all the cultivations in experiments n.3 and 4, no significant deposit of microalgae was observed. The maximum distance between the fibers within the disc is at most 5 cm, because this is the optical path mentioned as most promising, or most productive, in many of the articles cited in [25].
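A one-line check of the quoted optical-power gain:

```python
# Reflective-surface figures quoted above (nW).
background_nW, with_reflector_nW = 73.75, 757.9
print(with_reflector_nW / background_nW)   # ~10.3: over ten times the background
```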
The electric current values in the systems under LED and traditional showed a lower energy consumption for the system under LED. The voltage of the laboratory power supply was confirmed at 123.4 Volts, and the currents of the fans plus the LEDs added up to 1.488 A, resulting in 183.62 W of electric power (Equation (3)). In the traditional system (LCC), with four lamps and their ballasts, the total current was 0.720 A at the same voltage, so the resulting electrical power was 89 W. However, as the volumes of the different systems differ, the energy advantage of using LEDs must be computed per unit of cultivated volume: it results in a theoretical 17.5% saving of electricity, considering 50 L under LED and 20 L for the traditional system (LCC).
Conclusions
It is possible to produce microalgae in low-cost opaque tanks, internally illuminated through optical fibers. The quantity of biomass can be comparable to the traditional cultivation in the laboratory, provided that the same amount of light is maintained and the cultivation is started with a quantity of microalgae cells near the exponential portion of the growth curve. The tank under LED has potential for improvement in both cultivated volume and productivity, since there are tanks with a diameter close to the rated one but taller, which would allow an increased cultivated volume. The energy expenditure of the system under LED offers better conditions than the traditional system. For scale-up it is essential to control the pH, since as growth intensifies the amount of carbon dioxide needed also increases. In work carried out in 2014 in the same laboratory [23], this strain showed higher productivity in a smaller volume (about 800 mL) under PAR lighting of 400 µmol photons m−2·s−1, with nutrition and aeration conditions comparable to the experiment conducted in this work. Thus, a greater amount of light in the fiber optic system would probably allow an even higher biomass productivity. The yield obtained previously was 124 mg·L−1·d−1, while the yield obtained in this work was about 20 mg·L−1·d−1. Thus, the LED system has growth potential, with the advantage of increased volume. For that, some improvements are possible, such as a better fit between the lighting points of the LEDs and the ends of the fibers. In this work two LEDs were used, each with twenty-five lighting points. However, the metal brackets that supported the POFs were designed to support the number of fibers necessary for the tank, in order to maintain the optimal intervals. Thus, given the number of lighting points (25 each) and the number of lit fibers (126), the fibers were lit unevenly, as the fibers and lighting points were not "face to face" as desired, causing loss of light right at the LED-POF contact. An improvement in this respect could reduce the number of optical fibers needed, which would decrease the cost.
Further study is also needed on the speed of the pulling machine and on the power delivered by the laser to make the groove in the fiber, aiming at grooves such that the light arriving at the end of the fiber is dispersed into the cultivation and does not reach the other end of the POF, without prejudice to the integrity of the material.
Another possible improvement would be to deposit a thin film on the fiber tip, so that the light that would arrive at the tip is reflected by the deposited metal film and redirected through the grooves; this would promote an increase of lighting without the need to change the source, and would also replace the need for contact between the fiber and a reflective surface at the submerged tip, decreasing light losses. A further evaluation is the photosynthetic efficiency of the system; an evaluation of the system under different light intensities is required [26].
A better fit of the fiber distribution discs to the working tank should also be pursued. In this work the difference between the diameter of the existing discs and that of the tank with a tap (exps. n.2 to 4) was accepted, because the discs had initially been designed for the tested bucket (n.1). Despite the acceptable difference, less than 5 cm from the disc edge to the walls of the tank, a better fit would certainly bring better productivity.
Figure 4. Design of the holes for the optical fibers in the fiber holders, and photo of the lower tank disc with rivets adjusted to the fibers.
Figure 5. (a) Sequence of the tested tanks (under LED) and (b) laboratory conventional cultivation (LCC) and tanks under LED with air intakes and light sources.
Figure 7. Cell growth curves of the different experiments: (a) plastic bucket and LCC1, (b) 30 L reservoir and LCC2, (c) 50 L reservoir and LCC3 and (d) 50 L and batches (1)-(2).
Figure 8. Biomass production curves together with the cell growth curves of the different experiments.
Closer to the center of the tank and around the edge positions, the initial PAR radiation values in the matched systems were comparable. But in exp. n.2 the PAR values were lower in the cultivation under LED, which presented additional PAR radiation at the top of the culture (Table 2) and lower values inside the culture. In the two experiments performed in n.4 the initial PAR values at the top were lower than in n.3, due to the higher number of cells at the start of cultivation, since cell growth began closer to the exponential growth phase, the phase in which growth is fastest. In addition, foaming occurred with the input of fresh medium to continue the cultivation. In exp. 4 the average value at the positions nearest the ends of the support with the fibers was 20 µmol photons m−2·s−1, although in some places the value was only 6 µmol photons m−2·s−1, and nearest the center of the distribution disc it was about 75 µmol photons m−2·s−1; this difference is possibly due to the presence of foam, which makes the means of
Table 1. Results of the optical power measurements.
Table 2. Internal PAR radiation in the different PBRs, traditional and under LED.
Table 3. Characterization of the produced biomass and biomass yields.
Table 4. Conditions before and after the reflective surface on the POF tip. *PC: center of the vessel. **Pb: edge of the reservoir.
Q-branes
Non-topological solitons (Q-balls) are discussed in some stringy settings. Our main result is that the dielectric D-brane system of Myers admits non-abelian Q-ball solutions on their world-volume, in which $N$ D$p$-branes relax to the standard dielectric form outside the Q-ball, but assume a more diffuse configuration at its centre. We also consider how Q-balls behave in the bulk of extra-dimensional theories, or on wrapped branes. We demonstrate that they carry Kaluza-Klein charge and possess a corresponding Kaluza-Klein tower of states just as normal particles do, and we discuss surface energy effects by finding exact Q-ball solutions in models with a specific logarithmic potential.
1 Introduction and background
One of the most interesting features of field theories with conserved charges is the possibility of non-topological solitons, in particular Q-balls [2][3][4][5][6][7]. 1 Q-balls are localized field configurations that are stable simply because they are a more energy-efficient way of holding charge than a collection of asymptotically free quanta. Such objects have been shown to occur very generally in field theory. Indeed the charge Q can be global (the simplest case) or local [3,4], and the corresponding symmetry (which is spontaneously broken in their interior) can be either abelian or non-abelian. For certain types of potential, the energy deficit or binding energy grows with charge, so that Q-balls can in principle be large macroscopic objects, and naturally there has been much interest in their cosmological implications and their impact on scenarios beyond the Standard Model [11,12]. 2 For example, Q-balls have been proposed as dark matter candidates [13,14], in particular in gauge-mediated SUSY breaking models [15][16][17][18]. Experimental searches for Q-balls have also been proposed and carried out. For these, the possible electric charge a Q-ball may carry obviously plays a central role in determining its experimental signature. For instance, neutral Q-balls can be detected by Super-Kamiokande [15,19,20] by probing proton absorption. Conversely, charged Q-balls could be seen directly in detectors such as MACRO [15,21]. Given this interest, it is important to determine the ubiquity of Q-balls in scenarios of physics at the most fundamental scales. In this paper we study Q-lumps in various stringy settings, including configurations with extra dimensions, namely charged bulks, and wrapped branes. Our main result, in section 5, is the explicit construction of stable Q-ball solutions on systems of Dp-branes, in which the scalar fields in the solutions describe their displacements. It is well known that the Dp-branes can be spread over a 2-sphere by turning on a background field, forming so-called dielectric branes [22]. The global minimum of a dielectric brane has a non-commutative form, with the vacuum falling into an N × N irreducible representation of SU(2). However, in this minimum there are additional non-abelian symmetries that can be broken by reducible representations of SU(2). We show that the resulting charges support Q-balls, with the N Dp-branes relaxing to the standard dielectric form outside the Q-ball, but assuming a more complicated dielectric configuration at its centre, in which the 2-sphere itself is diffuse. Remarkably, even in the simplest case the dielectric brane potential has the correct coefficients for the Q-ball configuration to be energetically stable.
As well as presenting this construction we will, as a warm-up exercise, look at a number of additional issues that make Q-balls in extra-dimensional setups a somewhat more complex problem than in 4-dimensional field theory. The first is that generally they will be wrapped on compact dimensions of various sizes. The extent of the Q-ball can therefore be limited, forcing the configurations to be anisotropic. The second issue is that the Q-balls carry a Kaluza-Klein momentum in the extra dimensions which is quantized. Thus one expects to find a tower of Q-balls, corresponding to Kaluza-Klein excitations. In the limit of large compactification, one naturally expects the momentum to become continuous, corresponding to the Q-balls moving freely in the extra dimensions. We shall look at these issues by way of introduction to Q-balls in the following two sections, using a U(1) model in 5 space-time dimensions with one compact space dimension (i.e. corresponding to a Q-ball on a wrapped 4-brane). We first discuss, in section 2, the large volume limit of Q-balls for complex fields carrying both global charge, Q, and Kaluza-Klein momentum of a single compact extra dimension, P₅. The solutions are found to have rather natural O(3) and O(4) symmetric limits depending on the size of the extra dimension. We find that the momentum modes correspond to an infinite set of Kaluza-Klein excitations of the lowest-lying Q-ball; the spectrum has a tower of P₅ momenta, P₅ = Q(n + p)/R with integer n, where Q is the global charge, p is the lowest mode and R is the compactification radius. The states with non-zero n can be thought of as Kaluza-Klein excitations of the lowest mode. If p = 0 the lowest mode is precisely the usual D = 4 Q-ball, albeit possibly constrained by compact extra dimensions, while p ≠ 0 corresponds to giving this state additional mass by the Scherk-Schwarz mechanism [23]. In section 3 we discuss Q-balls in a special logarithmic potential that allows us to reduce the task of finding a Q-ball solution to the canonical one-dimensional problem, showing in detail a Q-ball going from O(4) to O(3) symmetric configurations, and demonstrating the energetic preference of the surface tension term for large radii and more symmetric configurations. Finally, in sections 4 and 5 we discuss the non-abelian Q-ball solutions that are shown generally to exist on dielectric branes.
2 Warm up; large Q-balls in small boxes
In order to see how Q-balls behave with finite dimensions, consider the large charge limit of a Q-ball in 5 space-time dimensions. In this limit one neglects the surface effects. (In the following section we discuss these using a particular logarithmic potential.) The specific set-up is as follows. We shall take a single scalar field in M₄ × S¹. The Minkowski dimensions we call x, and the dimension that is compactified on S¹ we call y, with y and y + 2πR identified. Almost certainly the discussion will hold also for the untwisted sector of orbifolded extra dimensions, and as will become clear the qualitative behaviour would most likely be the same in non-flat compactifications. The action can be written as
S = ∫ d⁴x dy [ ∂_M φ* ∂^M φ − U₅(|φ|) ].
Reparameterization invariance leads to the conserved energy E and momenta P_i, P₅, where i = 1, 2, 3. In addition, assume invariance under a global U(1) transformation, φ → e^{iα} φ, so that there is a conserved charge
Q = i ∫ d³x dy ( φ* ∂_t φ − φ ∂_t φ* ).
By assumption the origin is a global minimum of U₅ and the global U(1) symmetry is unbroken there.
As mentioned in the Introduction, in more general cases the transformation could be that of any compact group, and the Q-ball could be constructed from a local as well as a global symmetry. These extensions will be discussed in more detail later when we come to consider dielectric brane configurations. Since we seek a solution that is localized in the x coordinates, the global minimum in energy must have the U(1) symmetry restored at large radius for any y. Hence it is convenient to separate out the time-dependent U(1) phase, writing
φ = ϕ e^{iθ},
where ϕ and θ are real. The equations of motion now give us two relations: one for the modulus ϕ, and the conservation of the U(1) current, ∂_M(ϕ² ∂^M θ) = 0. By analogy with standard Q-balls we now choose θ to be linear, parameterizing it with α and ω;
θ = α(t + ωy). (2.6)
The equations of motion then require ϕ = ϕ(x, y + ωt), with the profile solving the equation of a field in the shifted potential
Û = U₅ − ½ α²(1 − ω²) ϕ².
The Q-ball solution then corresponds to the usual problem of a real field rolling in the inverted potential −Û, where y and the x_i replace time.
For completeness let us find the same result using the, perhaps more familiar, method which deduces the solution by minimising the energy of a generic field configuration whilst fixing the charges using Lagrange multipliers. That is, we minimise the expression [1]
ε = E + ω′ ( P₅ − P₅[φ] ) + ω ( Q − Q[φ] ) (2.10)
for a given ω and ω′, and then minimise in ω, ω′. Completing the square in the kinetic terms, θ appears in the first integral only through a positive-definite combination; the energy is minimised where these imaginary contributions vanish, which independently determines θ,
θ(y, t) = (ω′/(1 − ω²)) (ωy + t), (2.14)
and in addition, ϕ = ϕ(y + ωt). As one might have expected, the solutions with non-zero P₅ are going to be "lumps" travelling in the y direction with speed ω.
Figure 1. Here the centre of the Q-ball is offset by half a bulk radius. Figure 1e is close to the solution for an infinite radius. Figure 1a is slightly larger than the radius corresponding to the "natural" frequency of oscillation in the upturned potential. For a transverse radius smaller than a certain critical value, the only solution is trivial in the compact direction (i.e. constant in y). The slight squashing in the compactified direction is the Lorentz contraction due to the non-zero P₅.
Extremizing the energy clearly gives the same equation of motion as before if we identify α = ω′/(1 − ω²). The physical interpretation is that the boost factor (squared) 1/(1 − ω²) is a result of both Lorentz contraction and time dilation in the phase factor, which will ultimately feed into the charge Q. We can now proceed to the large and small (in a sense to be defined shortly) limits of the compactification radius, R. In the large R limit the ϕ configuration that minimises ε is approximately the same as that in the decompactified space. The variation of ϕ proceeds as for tunneling in d = 4 Euclidean space dimensions in the potential Û_{ωω′}. In this limit the symmetry of the problem dictates that, for the stationary (ω = 0) Q-ball, we have a fully O(4) symmetric solution, and so the minimum is the action S₄[ϕ] of the bounce solution. Since ϕ is a function of y + ωt, the factor (1 − ω²) includes a Lorentz contraction which squashes the solution in the y direction. Clearly for the O(4) limit to apply, R should be much greater than the radius of this solution (which we shall call r₄).
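Numerically, the "rolling in the inverted potential" problem is usually solved by the standard overshoot/undershoot shooting method. The following is a minimal sketch (our illustration, not from the paper), using for definiteness the thick-wall form Û = µ²ϕ²/2 − Aϕ³ that appears in eq. (3.16) below, with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Shooting for the O(d)-symmetric bounce profile
#   phi'' + (d-1)/r * phi' = dU_hat/dphi,  phi'(0) = 0,  phi(r -> inf) -> 0,
# i.e. a particle rolling in -U_hat with r playing the role of time.
# U_hat = mu^2 phi^2 / 2 - A phi^3; mu = A = 1 and the bracket are
# illustrative choices, d = 3 gives the squeezed O(3) case.
mu, A, d = 1.0, 1.0, 3

def dU(phi):
    return mu**2 * phi - 3.0 * A * phi**2

def profile(phi0, r_max=30.0):
    def rhs(r, s):
        phi, dphi = s
        return [dphi, dU(phi) - (d - 1) / r * dphi]
    return solve_ivp(rhs, (1e-6, r_max), [phi0, 0.0], max_step=0.05).y[0]

lo, hi = 0.4, 5.0                 # undershoot / overshoot bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if np.any(profile(mid) < 0.0):
        hi = mid                  # overshoot: passed through phi = 0
    else:
        lo = mid                  # undershoot: rolled back before reaching 0
print("bounce starting value phi(0) ~", 0.5 * (lo + hi))
```

The same routine with d = 4 gives the O(4)-symmetric isolated profile discussed here, while d = 3 corresponds to the squeezed solutions of the small-radius limit below.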
In the opposite limit, where R < r₄, the (∂_y ϕ)² term makes any significant variation of ϕ in the interval y ∈ [0, 2πR] very costly in energy. In this limit we can therefore take ϕ(x, y) = ϕ(x) and write the energy as in eq. (2.17). Now the variation of ϕ proceeds as for tunneling in d = 3 Euclidean dimensions in the potential Û_{ωω′}, and the minimum energy is the action S₃[ϕ] of the relevant bounce solution. An example of the two limits, which we will be discussing in detail in the following sections, is shown in figure 1.
Consider now the large volume solution, where the field is approximately constant, ϕ₀, inside a 4-volume V₄ (whose form depends on whether we are considering the large or small R limit). In this case we find that Ũ is the potential obtained after applying the Scherk-Schwarz mechanism to the field φ [23]. One can now minimise the energy with respect to V₄, where ϕ₀ is the field value that minimises E. Since the volume is proportional to Q, the point where R becomes relatively small is determined by Q. Note that the potential Û_{ωω′} is the same for the large and small R solutions (from now on we will drop the ωω′ suffix), and actually the energy is independent of R. We can see this in an interesting scaling limit of the thin wall approximation. Without the intervention of P₅, small ω′ would always imply small Q-balls, but because of the simultaneous second limit we have, at the same time, Û = U − O(ω′), thereby maintaining the thin-wall requirement of Û(ϕ₀) ≈ 0. We conclude that in this limit the potential at ϕ₀ only needs to be shifted down by a parametrically small amount in order to develop a Q-ball solution, which nevertheless has large Q. In more physical terms, the squared boost factor 1/(1 − ω²) is able to keep Q and P₅ large even though ω′ is small. In this limit the energy is minimised (eq. (2.27)) at an energetically optimal volume for the Q-ball to occupy, given by the parameters of the theory; but the Q-ball can achieve this minimal volume for any radius of compact dimension R, because there is no surface tension term in the energy. Note that substituting back in, we find that the energy of this configuration is E = P₅ + O(ω′); i.e. the 4-dimensional rest-mass is made up almost entirely of P₅ in this limit, while the 5-dimensional rest-mass is negligible.
Returning to the generic case, we still need to show stability of the Q-ball with respect to decay into a collection of Kaluza-Klein modes. Decay is allowed into Q Kaluza-Klein modes i = 1 . . . Q, with total momentum Σᵢ P₅ᵢ = P₅ and total rest mass Σᵢ √(µ² + P₅ᵢ²), where µ² = ∂²U/∂ϕ². A simple geometric argument shows that this expression is minimised when the P₅ momentum is equally distributed amongst the Kaluza-Klein modes, P₅ᵢ = P₅/Q. Stability therefore requires E ≤ √(Q²µ² + P₅²). This is precisely the condition for a Q-ball to exist in the 4-dimensional theory. Hence the Q-balls with additional integer global Q-charge can be simply understood as the Kaluza-Klein ladder of the lowest-lying Q-ball. 3 This can be trivially seen from eqs. (2.20), (2.22), which give
E²(P₅) = E²(0) + P₅², (2.31)
so that in the thin wall limit the Kaluza-Klein momentum P₅ can be boosted away to leave the rest-mass of the Q-ball in a non-compact volume: the large Q-ball is blind to the compactness of the extra dimension.
3 We should remark that our findings contradict those of ref. [24], which concluded that different stability conditions and types of Q-balls can result. That analysis began with a decomposition of the action into Fourier modes, arriving at an infinite and intractable set of coupled differential equations for the Kaluza-Klein modes. However, the interactions among the different modes must be consistent with the fact that they come from higher-dimensional interactions. Once this constraint is taken into account the stability condition must be as above. There is in effect one and only one kind of Q-ball however one chooses to squash it into extra dimensions, and at least in the large charge limit there are for example no special bounds on the mass per unit charge associated with the finite compactification radius.
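Both the equal-distribution argument and the reduction of the stability condition to the 4-dimensional one can be checked numerically; a small sketch with illustrative values of µ, Q and P₅:

```python
import numpy as np

# For fixed total momentum P5, the rest mass sum_i sqrt(mu^2 + P5i^2) of Q
# free Kaluza-Klein quanta is minimised by distributing P5 equally.
rng = np.random.default_rng(1)
mu, Q, P5 = 1.0, 10, 7.0

def free_mass(p):
    return np.sqrt(mu**2 + p**2).sum()

equal = free_mass(np.full(Q, P5 / Q))
for _ in range(5):
    p = rng.dirichlet(np.ones(Q)) * P5        # random split with sum = P5
    assert free_mass(p) >= equal - 1e-12      # equal split is never beaten

# Equal split gives sqrt(Q^2 mu^2 + P5^2); with E^2(P5) = E^2(0) + P5^2
# the stability condition E(P5) <= sqrt(Q^2 mu^2 + P5^2) reduces to
# E(0) <= Q mu, the usual 4-dimensional Q-ball condition.
print(equal, np.hypot(Q * mu, P5))            # both ~ sqrt(149) = 12.206
```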
The momentum P₅ is naturally decomposed into Kaluza-Klein modes as
P₅ = Q (n + p)/R, (2.32)
where n is an integer parametrising the Kaluza-Klein tower, whilst the non-integer p represents the Scherk-Schwarz phase, with 0 ≤ p < 1. The interpretation of the phase p is that it is the non-integer momentum per unit charge. Similar solutions can be found for a global unitary symmetry, as we shall later see for Q-balls on dielectric branes. In the more general cases we have to replace the phase by a time-dependent unitary rotation, but the rest of the analysis goes through unchanged.
Generalization to d compact dimensions
The treatment above can be straightforwardly extended to multi-dimensional compact flat spaces. Consider a toroidal compactification on an untilted torus with d compact radii R_a, where a = 1 · · · d. Then eq. (2.10) becomes its d-dimensional analogue (2.34), where ω = {ω_a} and P are now d-vectors. The square in the kinetic terms is completed as before. As long as ω · ω < 1, so that the "dual metric" is positive definite, the previous arguments go through unchanged, and the energy is minimised as in the single compact dimension (2.37).
3 An exact solution
In the previous section we saw that Q-balls in the large charge limit are energetically independent of the size of the compactification, and in the thin wall approximation it is only the total volume they occupy in the bulk that matters. In this section, we wish to get some idea of surface effects. We therefore turn to a logarithmic potential for which exact Q-ball solutions can be found in certain limits; continuing with the definition φ = ϕe^{iθ}, the particular U(1) invariant potential of interest is
U₅ = µ²ϕ² ( 1 − log(ϕ²/ϕ₀²) ) + O(ϕⁿ). (3.1)
This potential is particularly interesting for studying surface effects because it admits exact Q-ball solutions whose 'surfaces' constitute the whole Q-ball, whatever the charge. It has found use in a limited number of related works in the past, most recently [25]. The last term in the potential, O(ϕⁿ) (where n is some large power), is added to lift the potential at large field values ϕ ≫ √e ϕ₀, thereby ensuring that it satisfies the requirement that the origin be the global minimum. However, the modified potential for finding the Q-ball can be written
Û = µ²ϕ² ( 1 − log(ϕ²/ϕ̂₀²) ), where ϕ̂₀ = ϕ₀ exp( −α²(1 − ω²)/(4µ²) ).
Even for modest values of α the potential goes negative at field values that are exponentially smaller than the values at which ϕⁿ dominates and, consequently, for the purposes of finding the Q-ball solution, the latter is negligible. Neglecting this term allows one to solve the equations of motion in eq. (2.8) by separation of variables. Of course the analysis regarding the phases of φ goes through as before, but now the solution for its modulus can be written as a product ϕ = X(x) Y(y + ωt), giving separate equations for the two factors. The problem is reduced to the one-dimensional task represented by this last equation.
In the large radius limit this naturally just gives the expected Lorentz-boosted version of the solution in the x_i directions,
ϕ = ϕ̂₀ e² exp[ −µ²( x² + (y + ωt)²/(1 − ω²) )/2 ]. (3.7)
In more general cases it can easily be solved numerically, imposing the boundary condition of periodicity in y → y + 2πR. Note that the typical width of the Q-ball in the y-direction, √(1 − ω²)/µ, has the expected Lorentz contraction. Some examples are shown in figures 1a-e, where the compactification radius is increased from Rµ ≈ √((1 − ω²)/2) to Rµ = 2√((1 − ω²)/2). In figure 1a the value of the radius corresponds to the 'natural' period; that is, Y(y) is oscillating close to Y = 1 in the upturned potential −Û. The oscillation period increases monotonically with the amplitude, so that (uniquely for this potential) there is a hard cut-off below which the solution is completely three dimensional: when Rµ < √((1 − ω²)/2) there can be no solutions except the O(3) symmetric trivial one, Y(y) = 1. As the radius increases so does the amplitude of oscillation, in order to maintain the correct periodicity. Extending the oscillation period (i.e. the compactification radius) significantly forces Y to approach the origin of ϕ. In other words, the solution quickly collapses to the O(4) symmetric one. At the radius Rµ ≈ 2√((1 − ω²)/2) there are two available solutions. One is the isolated O(4) symmetric configuration of figure 1e, and the other is the doubled solution in figures 2a-b. The latter corresponds to Y(y) oscillating twice in the period 2πR. Further expansion of the radius causes the doubled solution to condense into two isolated Q-balls in the bulk. However, the doubled solution of figure 2a is energetically unstable to decay into the single isolated Q-ball with the same charge and P₅. Similarly, two completely isolated Q-balls of this type will coalesce into one large one. 4 Figures 2a,b show two cases of interest, the first being the solution with Y ≈ 1 and the second the isolated solution in eq. (3.7).
We now present the charges, momentum and energy for these different configurations. The general expressions follow from the definitions of section 2 (redefining y + ωt → y), where in the last line we use eq. (3.6) and integrate by parts for this example. This then gives the charge in the squeezed and isolated limits respectively, which determines ξ, while ω can be determined by the equations for P₅; we can then parameterize the energy as in eq. (3.13) in the squeezed and isolated cases respectively, where as before α = ω′/(1 − ω²). Notice that the minimum value for the energy of the isolated Q-ball, i.e. Qµ, is less than that for the squeezed Q-ball, √2 Qµ, indicating that the effect of surface tension is for Q-balls energetically to favour large radius, where they can assume a more symmetric configuration.
To complete this discussion, we should remark that the Q-balls considered in this example are not unstable to decay into free states, despite the fact that E > Qµ. This is because the parameter µ is not the physical mass of any asymptotic quantum at the origin. (Formally the mass at the origin is logarithmically infinite, so there are no asymptotic states there at all.) Indeed, consider the physical system in which Q-balls with such a potential could appear, namely the F- and D-flat directions corresponding to a conserved B − L current in supersymmetry, as was considered for example in refs. [8][9][10]. The one-loop improved tree-level potential of this system would typically be of the form
U = µ²ϕ² ( 1 − log((ϕ² + m²)/ϕ₀²) ), (3.14)
where now ϕ is the scalar denoting the VEV along the flat direction.
The scale √(ϕ² + m²) is the approximate renormalisation scale due to the field ϕ giving a mass to, for example, squarks and sleptons along the flat direction, and the scale m would therefore naturally correspond to the scale of supersymmetry breaking, which in a typical supersymmetric phenomenology would be of order µ itself. Provided ϕ₀ ≫ m, the Q-ball analysis goes through unchanged up to corrections of order O(m²/ϕ₀²), while the mass-squared of the asymptotic states at the origin, µ²(1 + log(ϕ₀²/m²)), is now regulated by the infra-red cut-off m, and is parametrically larger than µ².

The thick wall/small charge approximation

We can also consider more general "small" Q-balls, which would be more appropriate for the Q-balls on dielectric branes we discuss later. Following ref. [5], our task is to minimise the energy for fixed ω and ω′, where in the thick wall limit we keep only the first two terms in an expansion of the potential, Û_{ωω′} = (µ²/2)ϕ² − Aϕ³ + . . . (3.16), with the effective mass-squared given in eq. (3.17). Note that in a 5D theory, µ has mass-dimension 1, but A has mass-dimension 1/2. The bounce action can be related by a simple rescaling to the bounce action for the rescaled potential V_ψ = ½ψ² − ψ³, which can be computed numerically in certain cases [5]. The rescaling is of the form ψ = ϕA/µ², x̂ = µx, ŷ = µ√(1 − ω²) y, so that the typical isolated solution would have O(4) symmetry and width ∼ 1 in the rescaled units, and again we infer squeezed solutions for Rµ√(1 − ω²) < 1. It is not possible to obtain the solution in full generality; however, we can again restrict ourselves to either squeezed (O(3) symmetric) or isolated (O(4) symmetric) solutions, as in ref. [28]. Considering the former for definiteness gives S_ψ = 4.85 and a corresponding energy. Minimising in ω and ω′ (with of course the second relation following from eq. (3.9)), the energy can then be written in closed form. The usual thick-wall solution of [5] has ω′ ≡ 0, and hence E > (2√2/3) Qµ, so the mass cannot be made arbitrarily small with respect to a collection of asymptotic quanta of the same charge. With non-zero P_5 a similar situation obtains, but with non-zero ω′ acting to increase the mass. We conclude that thick-wall Q-balls with ω′ > 1/2 are always unstable to decay.

Background: the dielectric brane potential

We now turn to our particular application of the previous discussion, Q-balls as deformations of stacks of Dp-branes. Let us briefly recap the Lagrangian for this system. As is well known, the massless modes of the open string form a supersymmetric U(1) gauge theory with a vector A_µ, µ = 0, 1, . . . , p, 9 − p "collective coordinate" scalars Φ^i, i = p + 1, . . . , 9, and their fermionic partners. The dynamics of a single Dp-brane is described by the DBI action (4.1), where T_p, µ_p are respectively the tension and the RR charge of the Dp-brane, C^{(n)} is the (n + 1)-form RR potential and φ_d is the string theory dilaton. We denote by [. . .] the pull-back of spacetime tensors to the Dp worldvolume. A collection of N coincident Dp-branes supports a supersymmetric U(N) gauge theory with gauge field A_µ, and scalars Φ^i in the adjoint of U(N). The latter act as the collective coordinates of the branes. The action which describes the dynamics of such a collection of coincident Dp-branes is not completely known.
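The origin of the (2√2/3)Qµ bound is easy to verify numerically. In the ω′ ≡ 0 case the rescaling quoted above implies a thick-wall energy of the standard Kusenko form E(ω) = Qω + S_ψ(µ² − ω²)^{3/2}/A²; this explicit form is our reconstruction from the rescaling, not reproduced from the paper. Minimising over ω on the physical branch (ω > µ/√2) gives E/(Qµ) = (1 + 2x²)/(3x) with x = ω/µ, which is bounded below by 2√2/3 ≈ 0.943 at x = 1/√2:

```python
import numpy as np
from scipy.optimize import brentq

mu, S_psi, A = 1.0, 4.85, 1.0
K = S_psi / A**2

def ratio(Q):
    """E/(Q mu) at the thick-wall stationary point (local minimum of E)."""
    dE = lambda w: Q - 3 * K * w * np.sqrt(mu**2 - w**2)
    # physical branch: root with omega > mu/sqrt(2); exists for Q < 1.5*K*mu^2
    w = brentq(dE, mu / np.sqrt(2), mu * (1 - 1e-12))
    E = Q * w + K * (mu**2 - w**2) ** 1.5
    return E / (Q * mu), w

for Q in [0.5, 2.0, 5.0, 7.2]:
    r, w = ratio(Q)
    print(f"Q={Q:4.1f}: omega/mu = {w:.4f}, E/(Q mu) = {r:.4f}")
print("lower bound 2*sqrt(2)/3 =", 2 * np.sqrt(2) / 3)  # ~0.9428
```

As Q grows toward its thick-wall maximum, 3S_ψµ²/(2A²), the minimiser approaches ω = µ/√2 and the ratio approaches the bound from above, so E can never drop below (2√2/3)Qµ in this regime, as stated.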
For example, replacing the abelian U(1) fields in the action (4.1) by non-abelian ones and taking the symmetrized trace over the gauge group, as was suggested in [29,30], does not capture the full infrared dynamics [31], and in fact additional commutators of the field-strength are needed at sixth order [32]. Some progress can be made, especially for the structure of the Chern-Simons term, the second term in (4.1), in the nonabelian case [22]. By using T-duality arguments, Myers showed that a Dp-brane couples not only to the (p + 1)-form RR potential but also to RR potentials with form degree higher than p + 1 [22]. A collection of N D0-branes, for example, in an electric RR four-form flux develops a dipole moment under the three-form potential. This is a "dielectric" property of the Dp-branes, similar to the dielectric properties of neutral materials in electric fields. Indeed, in general, the Chern-Simons term for N coincident Dp-branes is modified to the form of eq. (4.3) [22], where i_Φ denotes the interior product by Φ^i if the latter is considered as a vector in the transverse space. The existence of these additional couplings in turn modifies the scalar potential of the world-volume theory. In the case of N Dp-branes, for a flat world-volume metric and vanishing RR and B-fields, the DBI action at lowest order in α′ reduces to a form with a scalar potential; then, by turning on an electric (p + 3)-form potential C_{01...pAB}, an additional coupling of the Dp-brane appears, as can be seen from the Chern-Simons term (4.3), so that the total potential becomes the V of eq. (4.6).

Q-balls on dielectric branes

Now let us consider simple Q-ball configurations on such dielectric branes. For definiteness we will take N coincident D3-branes; as the collective coordinate of the D3-branes plays the role of the (non-abelian) internal Q-charge, the Q-lumps will describe the physical displacement of the D-branes within the compact dimensions, with the D3-branes oriented so their internal Neumann dimensions fill space-time. The results extend trivially to other Dp-branes. Generally speaking, non-abelian Q-ball solutions can be found in theories that have scalar fields φ_ab in a real M × M matrix representation of some non-abelian symmetry group [4,6]. We should remark that the latter will turn out to be a subgroup of the U(N) gauge symmetry described by the Dp-branes, so that the result will be gauged Q-balls rather than global ones. As discussed in [3], such objects are subject to a further constraint on their size coming from Coulomb repulsion of the charge, which distributes itself over the surface of what is effectively a superconductor; however, we will work in the small coupling limit, in which this effect is negligible and the solutions are the same as the global ones. In the absence of gauge field VEVs, then, the action to lowest order after appropriate field redefinitions is canonical, where U(φ) is the scalar potential and traces over the M × M matrix indices are implied. Generalising the results of the earlier sections, reparameterization invariance leads to the corresponding conserved energies and momenta, with i = 1, . . . , 3 + d; again traces are implied, and we are assuming canonical kinetic terms. The conserved charges of the non-abelian symmetry are built from the relevant generators T_k. To each charge we can associate a Lagrange multiplier ω_k, so that we must minimise the corresponding expression; completing the square and minimising as before, we proceed exactly as in the abelian case, except that of course now ϕ is also M × M matrix-valued.
Note that we could have found the same result by using the equations of motion, as we did for the U(1) case discussed earlier. So far the discussion applies completely generally for non-abelian Q-balls. Our task now is to find a local minimum of the dielectric potential that preserves such a global non-abelian symmetry, and to determine U(φ) there. To do this, let us turn on a background field, where we take the Φ^A_ab to be three N × N matrix-valued fields transforming under the U(N), with A = 1 . . . 3 and a, b = 1 . . . N. As before, A labels the three arbitrarily chosen extra dimensions in which we turn on the background field. As an ansatz, let the three fields Φ^A fall into an irreducible SU(2) multiplet, where the α^A form an N/M × N/M irreducible representation of SU(2), φ̃(t, x) is an arbitrary M × M real matrix, and β^{−1} = 2πα′ T_p^{1/2} e^{−φ_s/2} is a parameter that ensures canonical kinetic terms for φ̃. Inserting this ansatz into eq. (4.6) and using the BPS condition for the tension and RR charge of the Dp-branes, T_p = µ_p, we find that V becomes a potential for φ̃ alone. The M × M matrix-valued field φ̃(t, x), corresponding to the displacement around the minimum, is precisely our desired non-abelian Q-ball field. Substituting into V gives a potential for it (ignoring a vacuum energy term) in which ϕ(x) is in the adjoint of SO(3) and the T_k (k = 1, 2, 3) are SO(3) generators in the fundamental representation. The potentials U and Û are shown in figure 3. This case was explicitly worked out in [4,6], with the result that the necessary and sufficient condition for the existence of Q-balls [4] is 1 ≤ g²µ²/λ < 9 (4.24). The lower bound is the energetic condition for the existence of the Q-balls (ensuring that the Q-ball will not decay into free mesons), whereas the upper bound is the condition that the cubic coupling is not so large that φ = 0 fails to be the global minimum. For the case at hand, this ratio falls within the allowed window, and thus eq. (4.24) is satisfied. Therefore, dielectric branes support stable Q-balls in their world-volume. Given this, it is interesting to ask what their mass can be. Adopting the small Q-ball approximation, eq. (3.21) gives E > (2√2/3)Qµ, while the squeezed case (which would correspond to dielectric D4-branes wrapped on a dimension of size 2πR) would give Q-balls with a mass less than Qµ = 81πS_ψ ω′R/λ, so the minimum Q-ball mass is proportional to the compactification radius measured in units of the Compton wavelength 1/ω′. As we saw, this number can in principle be less than unity. (Precisely how small it can be depends on the complicated dynamics of the Q-charge exchange, which is beyond the scope of this paper to discuss, but would require further studies along the lines of [25][26][27].) In addition, λ scales as n³ and can therefore be large. We conclude that such fundamental Q-balls could be significantly less massive than the fundamental scale.
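The SU(2) ansatz above is easy to check numerically. In the sketch below, the normalisation [α_A, α_B] = 2i ε_{ABC} α_C (i.e. α = 2J) is the one commonly used in Myers-type fuzzy-sphere constructions and is an assumption here, since the ansatz equations themselves were not reproduced above; the quadratic Casimir of an N-dimensional irrep is then (N² − 1)·1.

```python
import numpy as np

def su2_irrep(N):
    """N-dimensional irrep of SU(2), normalised so [a_A, a_B] = 2i eps_ABC a_C."""
    s = (N - 1) / 2                        # spin of the irrep
    m = np.arange(s, -s - 1, -1)           # weights s, s-1, ..., -s
    # J+ matrix elements sqrt(s(s+1) - m(m+1)) on the first superdiagonal
    jp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), k=1)
    Jx = (jp + jp.T) / 2
    Jy = (jp - jp.T) / (2 * 1j)
    Jz = np.diag(m).astype(complex)
    return [2 * J for J in (Jx, Jy, Jz)]   # a_A = 2 J_A

N = 4
a = su2_irrep(N)
print(np.allclose(a[0] @ a[1] - a[1] @ a[0], 2j * a[2]))          # commutator check
print(np.allclose(sum(x @ x for x in a), (N**2 - 1) * np.eye(N)))  # Casimir check
```

Irreducibility of the α^A is what selects a single fuzzy-sphere vacuum of maximal radius; replacing them with a reducible representation corresponds to splitting the stack into concentric spheres of lower potential depth.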
2015-07-16T13:13:56.000Z
2015-07-16T00:00:00.000
{ "year": 2015, "sha1": "9efd45d5624269426622086d9f4128522b2e4b8e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP11(2015)096.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "477c8a56d6ee260c8ecec2b4382f982b9f1d975c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
226290793
pes2o/s2orc
v3-fos-license
Field-testing of primary health-care indicators, India Abstract Objective To develop a primary health-care monitoring framework and health outcome indicator list, and field-test and triangulate indicators designed to assess health reforms in Kerala, India, 2018–2019. Methods We used a modified Delphi technique to develop a 23-item indicator list to monitor primary health care. We used a multistage cluster random sampling technique to select one district from each of four district clusters, and then select both a family and a primary health centre from each of the four districts. We field-tested and triangulated the indicators using facility data and a population-based household survey. Findings Our data revealed similarities between facility and survey data for some indicators (e.g. low birth weight and pre-check services), but differences for others (e.g. acute diarrhoeal diseases in children younger than 5 years and blood pressure screening). We made four critical observations: (i) data are available at the facility level but in varying formats; (ii) established global indicators may not always be useful in local monitoring; (iii) operational definitions must be refined; and (iv) triangulation and feedback from the field is vital. Conclusion We observe that, while data can be used to develop indices of progress, interpretation of these indicators requires great care. In the attainment of universal health coverage, we consider that our observations of the utility of certain health indicators will provide valuable insights for practitioners and supervisors in the development of a primary health-care monitoring mechanism. Introduction Under the thirteenth general programme of work and the triple billion targets, 1 the World Health Organization (WHO) aims to increase the number of people benefitting from universal health coverage (UHC) by one billion between 2019 and 2023. Central to this effort is the expansion and improvement of primary health-care services. 2,3 Progress in achieving UHC can be analysed using the WHO and World Bank's UHC monitoring framework, 4,5 but this requires adaptation to local contexts to ensure health reforms keep pace with targets. Health programmes in India, 6 as well as the national health policy 7 and flagship Ayushman Bharat scheme, 8 are being evaluated in relation to the aims of UHC; various efforts are currently underway at both a national 9 and state level, notably in Haryana 10 and Tamil Nadu. 11 According to National Sample Survey estimates from 2017-2018, morbidity levels in the southern state of Kerala are reportedly four times the national average with disparities by sex and place of residence. 12 Although the state has made gains in maternal and child health, 13 it must sustain these gains while addressing the substantial and growing burden of hypertension, diabetes 14 and cancer; 15 vaccine-preventable diseases; 16,17 and emerging viral infections such as Nipah virus 18 and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). [19][20][21] Kerala has been subject to unregulated privatization and cost escalation, 22 resulting in persistent inequalities in service access and health attainment between population subgroups. 13 In 2016, the Government of Kerala announced Aardram, a programme of transformation of existing primary health centres to family health centres; 23 with increased staffing, these family health centres provide access to a greater number of services over longer opening hours compared with the original primary health centres. 
Apart from the WHO's monitoring framework, 24 many countries have done UHC and primary health centre monitoring exercises 25,26 alongside independent exercises such as the Primary Health Care Performance Initiative. 27 However, most of these frameworks are intended for global comparison or decision-making at national levels. The argument for tracking health reforms is clear, but such a monitoring process must be specific to Kerala and local decision-making, while also complying with national and global reporting requirements. Periodic household surveys offer population-level data, but are not frequent enough to inform ongoing implementation decisions. Routinely collected and disaggregated health system data are vital, 28 but are often marred by quality issues as well as technological and operational constraints. 29 We began a 5-year implementation research study assessing equity in UHC reforms in January 2018. In our first two phases we aimed to develop a conceptual framework and a health outcome indicator shortlist, followed by validation of these indicators using data from both health facilities and a population-based household survey. We report on the field-testing and triangulation components of this implementation research project, which took place during 2018 and 2019. 30 We reflect on early lessons from the field-testing and triangulation and, drawing broadly from Ostrom's institutional analysis and development framework, 31 we emphasize how monitoring can support learning health systems. 32,33 We also discuss how the monitoring of UHC progress requires a flexible approach that is tailored to the local political economy. [34][35][36]

Study design

We began with a policy scoping exercise for the state of Kerala in 2018.
We then created an 812-indicator longlist from existing primary health-care monitoring inventories, 9,[37][38][39][40] and undertook an extensive data source and mapping exercise, adapting a process previously conducted in the region. 41 We applied a modified Delphi process in two rounds, consulting key health system stakeholders of the state (frontline health workers, primary care doctors, public health experts and policymakers), and obtained a shortlist of 23 indicators (available in the data repository). 42 We then field-tested and triangulated some of the indicators using facility-based data (phases 1 and 2) and a population-based household survey (phase 2).

Phase 1: facility data collection

In phase 1 (December 2018) we selected three family health centres in coastal, hilly and tribal districts (Trivandrum, Idukki and Wayanad, respectively) of the state. We communicated the definitions and logic of the indicators to facility staff, and studied their data-recording methods to synergize our processes with theirs. From these initial steps, we prepared a structured data collection template (available in the data repository) 43 that we provided to the three family health centres.

Phase 2: facility data collection

Based on inputs from phase 1 and a second round of consultations with state-level programme officers, we refined the indicator list. In phase 2 (June-October 2019), we used a multistage random cluster sampling technique to generate data related to the indicators at the population and facility level. We applied principal component analysis using Stata version 12 (StataCorp, College Station, United States of America) to data from the latest National Family Health Survey (2015-2016) 44 to categorize districts into one of four clusters according to health burden and systems performance. Using an open-source list randomizer from random.org, we randomly selected one district from each of the four clusters, and then randomly selected both a primary and a family health centre from each of the four selected districts. The people served by these eight health facilities were the population of interest in our study. We held on-site meetings with the staff of the eight health facilities and provided them with Excel-based templates (Microsoft Corporation, Redmond, United States of America) to input data for the financial year March 2018-April 2019 (data repository). 43 Data were sourced from manual registers maintained at facilities. In addition to off-site coordination, we also provided on-site data-entry support to the health staff, visiting each facility at least four times between May and December 2019. We compiled data from the facilities to obtain annual estimates for all health outcome indicators using Excel.

Phase 2: household survey

Our sample size estimation was based on the proportion of men and women eligible for blood pressure screening under the national primary care noncommunicable disease programme, that is, those aged 30 years or older. We estimated a sample size using routine data reported by the noncommunicable disease division of the Kerala Health and Family Welfare Department (2018-2019), aiming at a precision of 8% at a 95% confidence interval (CI), with a conservative design effect of 2 (i.e. a doubling of the sample). Health facility catchment areas were grouped by wards, also referred to as primary sampling units. Eligible households within a primary sampling unit had at least one member aged 30 years or older.
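For readers unfamiliar with the calculation, the sample size under this design follows the standard formula n = DEFF × z²p(1 − p)/d², where d is the absolute precision and DEFF the design effect. The sketch below illustrates it; note that the routine-data screening proportion used by the study is not quoted in the paper, so p = 0.60 here is an assumed value for illustration only.

```python
from math import ceil

def cluster_sample_size(p, precision, deff=2.0, z=1.96):
    """Minimum individuals to estimate a proportion p to within +/- precision
    at ~95% confidence, inflated by the cluster-design effect (deff)."""
    n_srs = z**2 * p * (1 - p) / precision**2   # simple-random-sampling size
    return ceil(deff * n_srs)

# Illustrative only: p = 0.60 is an assumption, not the paper's input value.
print(cluster_sample_size(p=0.60, precision=0.08))  # -> 289 individuals
```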
Individual written informed consent was sought from each participant before administration of the survey. We employed and trained staff to collect data using hand-held electronic tablets with a bilingual (English and Malayalam) survey application. The survey, conducted during June-October 2019, included questions on sociodemographic parameters, health outcome indicators (e.g. noncommunicable disease risk behaviours and screening; awareness of components of Aardram and family health centre reform) and financial risk protection (e.g. out-of-pocket expenditure). National Family Health Survey (Round IV) state-level weights were applied during analysis. 44

Triangulation of phase 2 data

We compared data on selected indicators using Stata and Excel. Since our focus was on how indicators were being understood and reported across facilities, we did not expect indicators to directly correspond between facilities and households, but only to approximate each other.

Ethics

All components of the study were approved by the Institutional Ethics Committee of the George Institute for Global Health (project numbers 08/2017 and 05/2019).

Results

We obtained data from 11 health facilities in total (seven family health centres and four primary health centres) during phases 1 and 2. During phase 2, we acquired facility data on indicators from eight health facilities (four family, four primary) jointly serving a population of 273 002 (Table 1). The household survey was undertaken in the catchment areas of these facilities, and we acquired data from a representative sample of 13 064 individuals in 3234 households (Table 1). We observed both variations between and uniformity in the indicators from health facilities and the household survey (Table 2). In studying these patterns, we made four key observations (Box 1). First, the method of reporting our indicators varied between facilities, even though all raw data required to calculate selected indicators were present in manual registers. In the case of indicators related to national programmes (e.g. reproductive, child health and tuberculosis-related indicators), data were uploaded directly to national digital portals without any analysis at the facility level; officers responsible for data compilation and analysis exist only at the district level. Feedback from facility staff included requests for adequate training on new or revised reporting systems, and clarification of their role. This situation may improve with the complete digitization of health records under Kerala's e-health programme. Our second observation is that there exist two problems with the globally recommended indicators: (i) manual routine data reporting at the facility level may be inadequate to construct the global indicator precisely; and (ii) globally relevant data may not be considered relevant to the periodicity (monthly) or level (facility) of review. From the facility-level data, the coverage of antenatal care reported by family health centres was 109.9% (2479/2255); in household surveys, full coverage of antenatal care was observed for 90.9% (85/94) of eligible women (Table 2). Here, antenatal care refers to women aged 15-49 years having a live birth in the past year and receiving four or more antenatal check-ups, at least one tetanus toxoid injection, and iron and folic acid tablets or syrup for at least 100 days as the numerator.
The coverage rate is calculated from a denominator of the total number of women aged 15-49 years who had a live birth in the past year, which requires retrospective verification of antenatal coverage. However, in some facilities, the antenatal care coverage indicator was calculated using the previous year's number of deliveries plus 10% as the denominator, and the number of pregnant women who had received antenatal care as the numerator. It was therefore not always clear that the data from any particular individual were included in both the numerator and denominator and, with a target as the denominator, coverage could surpass 100%. Practitioners noted the disconnect between monthly target-based reporting and annual retrospective measurement. Our third observation is that definitions and reporting that reflect actual health-provision patterns require standardization; otherwise, discrepancies will be observed between data sets. For example, the indicator for acute diarrhoeal diseases among children younger than 5 years was 6.7% (912/13 552) according to facility records; however, a prevalence of more than 3 times this percentage (21.6%; 195/900; 95% CI: 18.1-25.2) was reported in the household survey (Table 2). Several chronic care indicators, newly introduced as part of the introduction of family health centres, also showed discrepancies. For instance, the percentage of people screened for blood pressure and blood glucose was 85.9% (5467/6367; 95% CI: 84.5-87.2) and 82.5% (5254/6367), respectively, in the household survey. Our fourth observation is that such triangulation exercises, as well as obtaining feedback from health workers, programme managers and administrators, are vital for accurate assessment of UHC coverage. 45 A major problem reported by staff and officials is that health facility data are usually just a tally of patient visits, which is simple to produce, as opposed to the actual number of (potentially repeat) patients receiving care or services. State officials have been encouraging a move towards electronic health records to generate more precise indicators, but adoption and integration of these will only be possible when the technology itself is better aligned to facility-level process flows, requiring user inputs, investment and time. Other issues raised include: the need for appropriate staff (including temporary contractual staff) training in programme guidelines and reporting requirements; the need for clarity in definitions of treatment (e.g. chronic disease patients may be advised to modify lifestyle factors, which would be missed if treatment monitoring included only those prescribed medication); and the availability of free or subsidised tests relevant to disease control that are reflected in monitoring indicators, particularly for chronic care (e.g. glycated haemoglobin tests for diabetes care 46 ), at the primary health centre level.

Discussion

As already observed in India and other low- and middle-income countries, 29 our results indicate that any approach to improving or monitoring the quality of health care must be adaptable to local methods of data production and reporting, while ensuring that emerging concerns of local staff are considered. Although validity checks are a staple of epidemiological and public health research, such triangulation processes in health systems are infrequent.
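The survey intervals quoted above account for the cluster design. As a worked illustration, a Wald interval inflated by a design effect of 2 approximately reproduces the interval reported for the diarrhoea indicator (195/900); the published interval of 18.1-25.2 will additionally reflect the survey weights, so an exact match is not expected.

```python
from math import sqrt

def wald_ci(x, n, deff=1.0, z=1.96):
    """Wald CI for a proportion, inflated by a survey design effect."""
    p = x / n
    se = sqrt(deff * p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

p, (lo, hi) = wald_ci(195, 900, deff=2.0)
print(f"{100*p:.1f}% (95% CI: {100*lo:.1f}-{100*hi:.1f})")  # 21.7% (17.9-25.5)
```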
The Every Newborn-BIRTH study was a triangulation of maternal and newborn health-care data in low- and middle-income countries, 47 and some smaller-scale primary-care indicator triangulation exercises have been undertaken by India's National Health Systems Resource Centre. 48,49 While there exists a variety of approaches to monitoring primary health-care reforms, 30 we consider the most appropriate to be the generation (and modification, if necessary) of indicators from routine data, and their triangulation with household survey data. 49 Increasingly, routine data are being digitized to improve accessibility and interpretation, as is the case in Kerala. Useful considerations when introducing digital health interventions in low- and middle-income countries are intrinsic programme characteristics, human factors, technical factors, the health-care ecosystem and the broader extrinsic ecosystem. 50 Our observations demonstrate the continuous and complex interplay between these characteristics; the real value of selected indicators may also be determined by how staff understand and interpret them. Our study had several limitations. Our indicator selection using the Delphi method could have undergone additional rounds, but we considered it more important to get the monitoring process underway and reduce the burden on health workers. Some facility-based information could not be acquired due to the additional health department burden of flood relief and Nipah outbreak management in the state. Our household survey sample was the population aged 30 years and older, resulting in undersampling for other indicators being field-tested (e.g. newborn low birth weight). An increase in sample size could allow a more precise estimation of all indicators. Finally, the reference periods for the facility data and the household survey did not directly overlap; a timed sampling should be undertaken in the future to improve the precision of triangulation. Observing the utility of indicators in practice is a key first step in the move towards UHC, requiring investment and commitment. Using indicators, standards and other forms of technology, which are easy to adopt, can be problematic because we amplify certain aspects of the world while reducing others. 51 Our examination of family health centre reforms cautions that, while data can be used to develop indices of progress, interpretation of these indicators requires great care precisely because of the way they are related to powerful decisions around what constitutes success or failure, who will receive recognition or admonition and, ultimately, the legacy of Aardram reforms. We anticipate that our observations will contribute to health-care reforms in low- and middle-income countries, such as the use of field triangulation to enhance the accountability and relevance of global health metrics. 36 If such activities are carried out in constructive partnerships with state stakeholders and do not introduce unfeasible costs to the system, they may contribute to a sustained and reflexive monitoring process along the path to UHC.

Box 1. Four observations from field-testing and triangulating health-care indicators, Kerala, India, 2018-2019

Observation 1: Data are available at the facility level, but in varying formats and platforms meant for different purposes; digitization may improve this situation.

Observation 2: Established global indicators may not be useful or interpreted as intended in a local context, and may need to be adapted.
Observation 3: Operational definitions, thresholds for interpretation and processes of routine data collection must be refined for older indicators and developed for newly introduced indicators.

Observation 4: Triangulation and feedback from the field level, with qualitative input from local actors, remains vital, particularly for chronic diseases.
2020-11-05T09:11:17.541Z
2020-08-27T00:00:00.000
{ "year": 2020, "sha1": "8aa0b81f3b444153c6359c4ca3121280eca4f30c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.2471/blt.19.249565", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aeec885c47b18c93f08968e104ffe98e18d4c9e2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Geography" ] }
86841245
pes2o/s2orc
v3-fos-license
Comparison of RIPASA and Alvarado scoring in the diagnosis of acute appendicitis and validation of RIPASA scoring

One of the most common surgical emergencies worldwide is acute appendicitis, with a prevalence rate of approximately 1 in 7. The incidence is 1.5 and 1.9 per 1000 in the male and female population respectively. A delay in performing an appendicectomy in order to improve the diagnostic accuracy increases the risk of appendicular perforation and sepsis, which in turn increases morbidity and mortality. The opposite is also true, where with reduced diagnostic accuracy, the negative or unnecessary appendicectomy rate is increased, and this is generally reported to be approximately 20%-40%. Although the RIPASA scoring system is more extensive than the Alvarado scoring system, the latter did not contain certain parameters such as age, gender, and duration of symptoms prior to presentation. These parameters are shown to affect the sensitivity and specificity of the Alvarado scoring system in the diagnosis of acute appendicitis. 6 Ultrasonography has comparatively lower specificity, and computerized tomography (CT) helps in confirming the diagnosis; however, it is expensive and sometimes inaccessible. The Alvarado score was assessed for its accuracy in the preoperative diagnosis of acute appendicitis by Kalan, Talbot and Cunliffe in 1994. 4 A high score aids in early diagnosis of acute appendicitis in children and men, whereas for women the false positive rate of appendicitis was high. Chong et al in 2010 conducted a prospective study on patients presenting to the Accident and Emergency department or the surgical wards in RIPAS Hospital, the national hospital of Brunei Darussalam, with right iliac fossa pain. 7 They concluded that the RIPASA scoring system is the more suitable appendicitis scoring system developed for local settings (that is, south-east Asia) and has high sensitivity, specificity and diagnostic accuracy. The purpose of this study is to validate the scoring system in our set-up.

METHODS

The study was conducted in the department of General Surgery and the department of Emergency Medicine at Manipal Hospital, Bangalore. Clearance from the institution's ethical committee was obtained before the commencement of the study. A prospective observational study was conducted in all patients having acute right iliac fossa pain who underwent appendicectomy based on clinical judgment, USG correlation and, in some cases, CT correlation, during the period October 2014 to March 2016. We included those in the age group of 15 to 60 years. Those excluded were pregnant females, patients who presented with a right iliac fossa mass, chronic recurrent right iliac fossa pain, or a previous history of pelvic inflammatory disease. All 75 patients were scored on the basis of the 18 parameters of the RIPASA scoring system (Table 1) and the 8 parameters of the Alvarado scoring system (Table 2). Operative notes and histopathology reports were reviewed and correlated with both scoring systems. The cut-off score for RIPASA was more than or equal to 7.5, and that for the Alvarado score was more than 7. The data collected were then recorded in a study proforma, entered into an Excel worksheet and analysed using statistical software, namely SPSS 23.0, MedCalc 9.0.1 and Systat 12.0; Microsoft Office tools were used to generate graphs and tables. Descriptive and other statistical analyses were carried out in the present study.
Results which are in continuous measurements are presented as Mean ± SD (standard deviation) with range (min-max), and results on categorical measurements are presented as number and percentage (%). Significance of tests was assessed at the 5% level of significance. The chi-square test was used to study the significance of parameters on a categorical scale between two or more groups. Sensitivity, specificity, PPV, NPV and accuracy were computed to find the diagnostic properties of the Alvarado score and the RIPASA score in relation to HPE findings. ROC curve analysis was performed to assess the ability of the Alvarado and RIPASA scores to predict appendicitis.

RESULTS

The mean age of our study population was 29.83±9.69 years. The gender distribution was 53 (70.7%) males and 22 (29.3%) females (Table 3). The percentage distribution of the patients with respect to age group is shown in Table 4. The subjects were scored according to the RIPASA system and were categorized into a high probability group if the score was equal to or more than 7.5, and a low probability group if the score was less than 7.5 (Table 5). Most of the patients (93.3%) scored equal to or more than 7.5. The subjects were also scored according to the Alvarado system and were categorized into a high probability group if the score was equal to or more than 7, and a low probability group if the score was less than 7 (Table 6). According to the Alvarado system, only 53.3% of the study population were categorized as having a high probability of acute appendicitis, as against 93.3% according to the RIPASA system. Patients classified as having a low probability of acute appendicitis were 46.7%, as against 6.7% according to RIPASA. The diagnoses of 75 patients were confirmed by HPE (histopathological examination). 70 patients (93.3%) were confirmed as acute appendicitis. 5 patients turned out to be negative for acute appendicitis on HPE, resulting in a negative appendectomy rate of 6.7% in this study (Table 7). At the optimal cut-off threshold of 7.5 for the RIPASA score, the calculated sensitivity and specificity were 94.1% and 60% respectively for the diagnosis of acute appendicitis, taking the histopathology report as reference. In this study, application of the RIPASA score resulted in a negative appendectomy rate of 2.9% (Table 8). At the optimal cut-off threshold of 7.0 for the Alvarado score, the calculated sensitivity and specificity were 52.9% and 40% respectively, taking the histopathology report as reference. In this study, application of the Alvarado score resulted in a negative appendectomy rate of 7.5% (Table 9). The RIPASA score correctly classified 68 patients with histopathology-confirmed acute appendicitis to the high probability group (RIPASA score ≥7.5). 37 patients with an Alvarado score of more than 7 had acute appendicitis according to the histopathology report. PPV and NPV for the RIPASA score were 97.1 and 60 respectively, compared with 92.5 and 5.7 for the modified Alvarado score. NPV was significantly higher for the RIPASA score compared to the Alvarado score (p<0.001). The diagnostic accuracy was 94.67% for the RIPASA score and 52% for the Alvarado score, a difference of 42.67%, amounting to 32 more patients (of 75) correctly diagnosed by the RIPASA scoring system than by the Alvarado scoring system, with reference to HPE (Table 10).

Figure 1: ROC curve analysis for RIPASA and Alvarado scoring systems.

A ROC (receiver operating characteristic) curve (true positive rate against false positive rate) was plotted for both the RIPASA and Alvarado scoring systems.
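The reported metrics can be reproduced from the implied 2×2 tables (70 HPE-positive and 5 HPE-negative patients). The counts below are reconstructed from the stated classification results rather than copied from the published tables, but they recover the quoted PPV, NPV and accuracy exactly.

```python
def diagnostics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / n,
    }

# Reconstructed counts: RIPASA placed 68/70 true cases in the high-probability
# group; Alvarado placed 37/70. Specificities of 60% and 40% on the 5 negatives
# imply tn = 3 and tn = 2 respectively.
ripasa   = diagnostics(tp=68, fp=2, fn=2, tn=3)   # score >= 7.5
alvarado = diagnostics(tp=37, fp=3, fn=33, tn=2)  # score >= 7
for name, d in (("RIPASA", ripasa), ("Alvarado", alvarado)):
    print(name, {k: f"{100*v:.1f}%" for k, v in d.items()})
# RIPASA:   sens 97.1%, spec 60.0%, PPV 97.1%, NPV 60.0%, accuracy 94.7%
# Alvarado: sens 52.9%, spec 40.0%, PPV 92.5%, NPV  5.7%, accuracy 52.0%
```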
Using the ROC curve, the area under the curve (AUC) for RIPASA was 0.920, which was greater than that for the Alvarado score (0.490). The difference in the AUCs is 0.430 (Figure 1), which is strongly significant (p<0.001).

DISCUSSION

Acute appendicitis is one of the most common surgical emergencies, with a lifetime prevalence rate of approximately one in seven. 1 Despite being a common problem, acute appendicitis remains a difficult diagnosis to establish, particularly among the young, the elderly and females of the reproductive age group, where a host of other genitourinary and gynaecological inflammatory conditions can present with signs and symptoms that are similar to those of acute appendicitis. 8 The differential diagnoses of acute appendicitis are Crohn's disease, ulcerative colitis, renal colic, perforated peptic ulcer, pancreatitis, rectus sheath haematoma, diverticulitis, intestinal obstruction, colonic carcinoma and mesenteric ischaemia in general; ectopic pregnancy, dysmenorrhoea, pelvic inflammatory disease and endometriosis in females; and testicular torsion in males specifically. A delay in performing an appendectomy in order to improve its diagnostic accuracy increases the risk of appendicular perforation and sepsis, which in turn increases morbidity and mortality. The opposite is also true, where with reduced diagnostic accuracy, the negative or unnecessary appendectomy rate is increased, and this is generally reported to be approximately 20%-40%. 4 Several authors considered higher negative appendectomy rates acceptable in order to minimize the incidence of perforation. 9 Diagnostic accuracy can be further improved through the use of USG or computed tomography imaging. However, ultrasonography has some limitations: it may not reveal any abnormality despite the presence of appendicitis, especially early in the disease before the appendix has become significantly distended, and in adults, where larger amounts of fat and bowel gas make visualization of the appendix difficult. Such routine practice of USG and CT may inflate the cost of health care substantially. A recent study has suggested that indiscriminate use of CT imaging may lead to the detection of early low-grade appendicitis and unnecessary appendectomies in cases which would otherwise have resolved spontaneously or with antibiotic therapy. 10 Hence a host of scoring systems was derived in order to diagnose acute appendicitis, the most popular among them being the Alvarado scoring system. This scoring system had very good sensitivity and specificity when applied to a Western population. Subsequently, when this scoring system was applied to oriental populations, it showed relatively lower specificity and sensitivity in diagnosing acute appendicitis. So a new scoring system was devised, called the RIPASA scoring system, which is a more extensive yet simple scoring system consisting of 18 fixed parameters and an additional parameter (NRIC) that is unique to Asian populations. The study was a comparison of the Alvarado scoring system with the RIPASA scoring system. The RIPASA score is superior to the Alvarado score in diagnosing acute appendicitis. Diagnostic accuracy was significantly higher in all age groups using the RIPASA scoring system when it was compared with the Alvarado scoring system.
Using the RIPASA scoring system, 97.1% of patients who actually had acute appendicitis were correctly diagnosed and placed in the high probability group (RIPASA score ≥7.5), compared to only 52.85% when using the Alvarado scoring system on the same population sample. Thus, the Alvarado scoring system failed to diagnose 47.15% of patients with acute appendicitis and wrongly classified them into the low probability group (Alvarado score <7.0), whereas the RIPASA scoring system failed to diagnose only 2.9% of patients with acute appendicitis. Likewise, for patients who were classified in the low-probability group (RIPASA score <7.5 and Alvarado score <7.0), RIPASA scoring again outperformed Alvarado scoring by correctly diagnosing 60% of patients who did not have acute appendicitis, compared to the Alvarado score, which was only able to correctly diagnose 40% (p<0.001). The sensitivity and specificity of the RIPASA scoring system are 97.14% and 60% respectively. The sensitivity and specificity of the Alvarado scoring system are 52.85% and 40%. The positive predictive value of the RIPASA scoring system is 97.14% and the negative predictive value is 60%. The positive predictive value and negative predictive value of the Alvarado scoring system are 92.5% and 5.7% respectively. The diagnostic accuracy of the RIPASA scoring system is 94.67% and that of the Alvarado scoring system is 52%. The above results indicate that the RIPASA scoring system is a better diagnostic tool for the diagnosis of acute appendicitis than the Alvarado scoring system. Our results corroborate well with the study done by Chong et al in 2010. 5,7 They showed a sensitivity of 97.5% and a diagnostic accuracy of 91.8% for the RIPASA scoring system. The difference in diagnostic accuracy of 42.67% between the RIPASA scoring system and the Alvarado scoring system was statistically significant (p<0.001), and the difference in the area under the curve was 0.430, indicating that the RIPASA scoring system is a much better diagnostic tool for the diagnosis of acute appendicitis in the Indian subcontinent. The RIPASA scoring system is a useful, rapid diagnostic tool for diagnosing acute appendicitis, as it requires only the patient's details (age, gender and nationality, which are all available on registration), clinical history (RIF pain, migration to the RIF, anorexia, nausea, vomiting and fever), clinical examination (RIF tenderness, localized guarding, rebound tenderness, Rovsing's sign) and two simple investigations (raised white cell count and negative urinalysis, which is defined as an absence of red and white blood cells, bacteria and nitrates). The RIPASA scoring system can also help us to reduce unnecessary and expensive radiological investigations such as routine CT imaging.

CONCLUSION

From the present study, it is observed that the RIPASA scoring system has higher sensitivity and higher specificity compared to Alvarado scoring. It also has higher diagnostic accuracy, a high positive predictive value and a high negative predictive value; consequently, it has a low negative appendicectomy rate. Therefore, it can be concluded that RIPASA scoring can be used effectively for the better evaluation of acute appendicitis, and it holds promise as an improved, cost-effective means of diagnosis.
2019-03-28T13:33:53.246Z
2019-02-25T00:00:00.000
{ "year": 2019, "sha1": "08278f643b8eaa5041c0589e4354ec30b8f5d2ae", "oa_license": null, "oa_url": "https://www.ijsurgery.com/index.php/isj/article/download/3842/2691", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "84725bc1c4998e19b78d0f54a136535b8f1c279c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258887470
pes2o/s2orc
v3-fos-license
Improving few-shot learning-based protein engineering with evolutionary sampling

Designing novel functional proteins remains a slow and expensive process due to a variety of protein engineering challenges; in particular, the number of protein variants that can be experimentally tested in a given assay pales in comparison to the vastness of the overall sequence space, resulting in low hit rates and expensive wet lab testing cycles. In this paper, we propose a few-shot learning approach to novel protein design that aims to accelerate the expensive wet lab testing cycle and is capable of leveraging a training dataset that is both small and skewed ($\approx 10^5$ datapoints, $<1\%$ positive hits). Our approach is composed of two parts: a semi-supervised transfer learning approach to generate a discrete fitness landscape for a desired protein function, and a novel evolutionary Monte Carlo Markov Chain sampling algorithm to more efficiently explore the fitness landscape. We demonstrate the performance of our approach by experimentally screening predicted high fitness gene activators, resulting in a dramatically improved hit rate compared to existing methods. Our method can be easily adapted to other protein engineering and design problems, particularly where the cost associated with obtaining labeled data is significantly high. We have provided open source code for our method at https://github.com/SuperSecretBioTech/evolutionary_monte_carlo_search.

Introduction

The design and optimization of proteins with specific functionality is a long-sought pursuit in protein engineering. Since proteins are composed of sequences of amino acids which ultimately dictate their structure and function, the protein engineering problem can be reformulated as finding the optimal mapping from an amino acid sequence s of length L to biological function: s → f(s), where we call f the fitness function. Finding the optimum of f can be seen as a high-dimensional discrete combinatorial optimization problem [1]. The enormous size of the protein sequence space (e.g. 20^L possible peptides of length L; ≈3.87 × 10^110 for L = 85) and the presence of sensitive and sporadic high fitness regions in the fitness landscape [2] make novel protein design extremely challenging. The traditional experimental approach involves high-throughput, iterative laboratory methods such as directed evolution [3,4], deep mutational scans [5], and semi-rational design [6]. However, these methods typically require multiple rounds of engineering and analysis, making them tedious, expensive, and time-consuming [7]. Furthermore, the number of variants capable of being tested in even the most advanced laboratories (≈10^5 to 10^6) is minuscule in comparison to the size of the total sequence space; additionally, high-throughput screening can be challenging to implement for some classes of proteins [8]. In the past decade, the application of machine learning methods to protein engineering problems has been massively successful [9]. In this context, machine learning models are trained to learn the sequence-to-function map (also called the fitness function) and then used to propose new sequences that maximize the fitness (thus maximizing predicted function). Typically these are two distinct steps, where the fitness is estimated with a machine learning model and then this sequence-to-function map is used to explore the fitness landscape with methods such as Metropolis-Hastings Monte Carlo Search [8].
In recent years, other methods such as generative models have been proposed to tackle this problem, including deep generative networks [10][11][12], generative adversarial networks [13,14] and diffusion models [15]. In these cases the exploration problem is trivial, as the model produces an embedding in a real, typically low-dimensional, space where sampling is computationally inexpensive. However, generative approaches typically require huge amounts of training data and a large number of positive examples, both to ensure that the model embeddings are meaningful and so that they do not simply memorize positive examples, an issue that has been widely observed to happen in image GANs [16,17]. Given the relatively small number of sequences in our training data and the extreme paucity of positive examples, we anticipated our small and skewed training data would prove insufficient for a generative modeling approach. On the other hand, transfer learning of large protein language models (LPLMs) has shown success in modeling and designing novel proteins with fitness functions trained on small numbers of positive hits [8,18]. While transfer learning and ML-based sequence-to-function mapping are beginning to receive a good deal of attention, model-guided fitness landscape exploration remains an understudied problem in the context of protein engineering [19]. The Metropolis-Hastings Monte Carlo Search (MHMCS) method [19][20][21] is the standard method for the exploration of high-dimensional discrete landscapes, including those generated by machine learning algorithms [8,22,23]; however, MHMCS suffers from an inability to escape deep local optima. Other approaches for sampling the sequence space include gradient-based sampling [2,24] and modified Gibbs sampling [25]. While powerful, these approaches require significant computation near the local neighborhood of the fitness landscape and are therefore too computationally intensive for sequences of any significant length (e.g. gradient-based methods require 19 · L computations and Gibbs requires L computations per iteration). Evolutionary Monte Carlo (EMC) [26,27] is an advanced sampling method that draws inspiration from genetic recombination as well as physics-based MCMC techniques. While EMC has previously been used for a variety of sampling tasks [26,[28][29][30][31], its potential as an exploratory algorithm for protein design remains unexplored. In this paper, we modify EMC into a search tool for exploring the complex fitness landscape of protein sequences capable of gene regulation, which we call EMC Search (EMCS). While EMCS is much less computationally intensive than gradient-based and Gibbs sampling (and only slightly more intensive than MHMCS), we expect it to benefit from faster convergence (due to parallel tempering) and to provide a more comprehensive and efficient exploration of the fitness landscape (by allowing for interpolation at the molecule level between chains). Overall, we propose a design strategy for novel protein sequences using a few-shot transfer learning-based approach. Though our method is generally applicable to a diverse range of problems, we apply it here to the design of small gene activator proteins. We previously [32] performed a high-throughput screen of protein sequences to discover novel gene activators, and identified fewer than 200 sequences which validated as positive hits (resulting in a hit rate of ≈ 0.5%).
The low number of positive examples presents a particular problem for ML-guided engineering because it is difficult to ensure that the fitness function will extrapolate well outside the small neighborhood of the positive examples in the training set. In this study, we demonstrate that EMCS is not only capable of improving the sequence diversity and novelty of designed sequences, but that it dramatically improves the hit rate of the proposed sequences compared to the original high-throughput screen. Additionally, EMCS can be initialized from known hits and still identify candidate sequences that are vastly different from any of the original molecules, while MHMCS has difficulty escaping from the local optima of known hits.

Training Transfer Learning-Based Fitness Models

We previously performed and independently validated a high-throughput screen in which 85 amino acid (85aa) peptides were assayed for their ability to activate a synthetic genetic locus using the dCas-Mini Gene Expression Modulator System (dCasMini-GEMS) [32]. This resulted in the identification of 173 gene activators ("positive hits") from a training set of 34217 protein sequences (0.51% hit rate). Using these data, we sought to train a machine learning model capable of predicting proteins capable of gene activation from sequence alone. Since a data set of peptide sequences is essentially composed of strings of amino acid characters, each peptide sequence needs to be numerically encoded to be used as input to train supervised classification models. We compared OneHot encoding with transfer learning using a 650 million parameter LPLM (ESM-2 model) [33] as input features for two models: an XGBoost model, where we flatten the features by taking the mean, and a CNN model. In our testing phase we found that transfer learning significantly improved prediction by both models (see Supplementary Tables) and that the sequences proposed by each model appeared to capture different features of our training data. Indeed, this is not surprising, since mean-flattening the feature embeddings for XGBoost is equivalent to training on global features of these peptide sequences, while the CNN model is capable of learning local features. We therefore used both models with transfer learning to individually design molecules (Fig 1a), as well as a transfer learning-based ensemble model, in order to leverage both the global and local features learned by the XGBoost and CNN models respectively. Specific model architectures and training details are available in the Supplementary.
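For concreteness, a sketch of the encoding step is given below. The fair-esm calls are the library's standard usage, but the paper does not specify its exact preprocessing, so the pooling details (dropping BOS/EOS, mean over residues, equal-length 85aa batches) should be read as reasonable assumptions rather than the authors' exact pipeline.

```python
import torch
import esm  # pip install fair-esm

# Load the 650M-parameter ESM-2 model (33 transformer layers, 1280-d embeddings)
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
model.eval()
batch_converter = alphabet.get_batch_converter()

def embed(seqs, layer=33):
    """Return (mean-pooled, per-residue) ESM-2 embeddings for equal-length peptides.

    The mean-pooled (B, 1280) matrix is the 'flattened' global feature set for a
    tree model such as XGBoost; the (B, L, 1280) tensor keeps local features for a CNN.
    """
    _, _, tokens = batch_converter([(f"seq{i}", s) for i, s in enumerate(seqs)])
    with torch.no_grad():
        reps = model(tokens, repr_layers=[layer])["representations"][layer]
    per_residue = reps[:, 1:-1, :]   # drop the BOS/EOS positions before pooling
    return per_residue.mean(dim=1), per_residue

mean_emb, res_emb = embed(["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"])
print(mean_emb.shape, res_emb.shape)  # (1, 1280) and (1, 33, 1280)
```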
Metropolis-Hastings Monte Carlo Search (MHMCS)

The MHMCS algorithm operates by proposing a low number of mutations to modify the current molecule and then evaluating the new molecule's fitness; if fitness improves, the proposal is accepted, while if fitness decreases, the proposal is accepted with probability weighted by the ratio of the proposed fitness to the current fitness. The latter possibility ensures that sub-optimal moves can be made, so that the search is capable of escaping from a local optimum, although MHMCS tends to struggle with extremely deep optima.

Evolutionary Monte Carlo Search (EMCS)

Evolutionary Monte Carlo Search (EMCS) extends traditional Metropolis-Hastings Monte Carlo Search (MHMCS) by introducing genetic crossover events in a parallel tempering setup [27,34]. In parallel tempering, multiple MHMCS chains are run simultaneously at different temperatures (referred to as a temperature ladder) and are swapped at two randomly chosen temperatures after a predetermined number of iterations. The primary advantage of parallel tempering is that it allows MHMCS to occur over a larger search radius without sacrificing resolution. EMCS builds upon parallel tempering by adding genetic crossover events (domain swapping through chain interpolation). This allows for an even larger search radius (Fig 1b), while also adding the possibility of aggregation of favorable protein domains, which we hypothesize is critical to exploit the small number of positive hits in our training data. Algorithm 1 details our implementation of EMC as a search tool:

Algorithm 1: Evolutionary Monte Carlo Search (EMCS)

Select N chains of amino acid sequences [0, 1, .., i, .., N], with a corresponding temperature ladder. Set the crossover rate γ such that γ ∈ [0, 1); define the maximum numbers of mutation, crossover, and swap events as µ, α, β; set a target fitness C; and set the minimum and maximum numbers of iterations k_min and k_max. Then repeat:
- Make random point mutations at q loci for each sequence i to yield a new set of proposed sequences denoted by j, where q ∈ {1, . . . , µ} is chosen uniformly at random. Update each sequence by accepting or rejecting each proposed sequence using the Metropolis-Hastings criterion, i.e. with probability min(1, r_mh).
- With probability γ, for each of the α crossover events: let i₁, j₁ be two random sequences corresponding to temperatures T_i, T_j. Pick a random crossover locus between [2, N−1], where N is the length of the peptide. Propose two sequences i₂ and j₂ by crossing over i₁, j₁ at the chosen locus, so that i₂ is identical to sequence i₁ prior to the crossover locus and identical to sequence j₁ after it; similarly, j₂ is identical to sequence j₁ prior to the crossover locus and identical to sequence i₁ after it. For two temperatures T_i and T_j such that T_i ≤ T_j, order i₂, j₂ by fitness and accept the pair with probability min(1, r_c). If accepted, assign i₂, j₂ to the chains at temperatures T_i, T_j respectively.
- For each of the β swap events: select two sequences i and j at chains corresponding to T_i, T_j, such that j = i ± 1, and swap their sequence positions (i → j and j → i) with probability min(1, r_re).
Stop when iterations > k_max, or when f_i ≥ C for any sequence and iterations > k_min.

EMCS is highly versatile and allows for vastly different exploratory behaviors compared to traditional sampling techniques, due to the implementation of a custom temperature ladder, as well as predefined crossover, mutation, and swap rates [34]. These parameters can be tuned for more efficient exploration depending on the specific design problem and the complexity of the discrete high-dimensional fitness landscape. Each primary iteration in EMCS can potentially change the state of the algorithm in one of three ways, namely point mutations, swaps, and crossovers between different temperature chains. The possibility of the acceptance of sub-optimal moves for each of these three classes depends on how we define the acceptance criterion. We use r_mh, the standard Boltzmann Metropolis-Hastings acceptance criterion, for mutation-based moves, which, as described earlier, accepts sub-optimal moves with probability weighted by the ratio of the proposed fitness to the current fitness. For swaps between two consecutive chains, we use r_re, the standard parallel tempering criterion also used in [34]. Using this criterion, any proposed swap in which the higher fitness sequence is proposed to move to the lower temperature chain is accepted. In a swap in which a higher fitness sequence is proposed to move to the higher temperature, the move is accepted with probability inversely proportional to the magnitude of the difference of the temperatures of the two chains, as well as the fitness of the two sequences. Finally, the crossover criterion r_c, also adapted from [34], accepts crossover moves taking into account the difference in fitness between the set of old and new sequences, in addition to the difference of temperatures of the two chains involved in the crossover. For simplicity, we have summarized the behaviour of the crossover criterion in the Supplementary, and we note that, in general, the crossover criterion penalizes an overall decrease in fitness when taking into account both chains.
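A minimal, self-contained sketch of Algorithm 1 follows. The explicit expressions for r_mh, r_re and r_c appear in equations not reproduced above, so the Boltzmann-style log-acceptance ratios used below are assumptions consistent with the qualitative descriptions; the toy fitness function stands in for the trained model's predicted probability.

```python
import math
import random

AA = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq, max_mut):
    """Random point mutations at 1..max_mut loci."""
    s = list(seq)
    for _ in range(random.randint(1, max_mut)):
        s[random.randrange(len(s))] = random.choice(AA)
    return "".join(s)

def emcs(fitness, L=85, n_chains=4, gamma=0.5, max_mut=3,
         iters=10_000, target=0.95):
    """Minimal EMCS sketch: mutation, crossover, and swap moves.

    The acceptance ratios are assumed Boltzmann forms, not the paper's
    exact r_mh / r_c / r_re expressions.
    """
    temps = [0.01 * 2**k for k in range(n_chains)]      # temperature ladder
    pop = ["".join(random.choices(AA, k=L)) for _ in range(n_chains)]
    fit = [fitness(s) for s in pop]
    for _ in range(iters):
        for i in range(n_chains):                        # mutation moves (r_mh)
            cand = mutate(pop[i], max_mut)
            fc = fitness(cand)
            if math.log(random.random()) < (fc - fit[i]) / temps[i]:
                pop[i], fit[i] = cand, fc
        if random.random() < gamma:                      # crossover move (r_c)
            i, j = random.sample(range(n_chains), 2)
            x = random.randrange(1, L - 1)
            c1, c2 = pop[i][:x] + pop[j][x:], pop[j][:x] + pop[i][x:]
            f1, f2 = fitness(c1), fitness(c2)
            logr = (f1 - fit[i]) / temps[i] + (f2 - fit[j]) / temps[j]
            if math.log(random.random()) < logr:
                pop[i], pop[j], fit[i], fit[j] = c1, c2, f1, f2
        i = random.randrange(n_chains - 1)               # swap move (r_re),
        j = i + 1                                        # neighbouring temps
        logr = (1 / temps[i] - 1 / temps[j]) * (fit[j] - fit[i])
        if math.log(random.random()) < logr:
            pop[i], pop[j], fit[i], fit[j] = pop[j], pop[i], fit[j], fit[i]
        if max(fit) >= target:
            break
    return max(zip(fit, pop))

if __name__ == "__main__":
    toy = lambda s: s.count("W") / len(s)   # stand-in for the trained classifier
    best_fit, best_seq = emcs(toy, L=20, iters=2000)
    print(best_fit, best_seq)
```

Note the swap criterion always accepts moves that place the fitter sequence on the colder chain, matching the qualitative description above, and the crossover criterion penalizes a net fitness decrease across both chains.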
Under the parallel tempering criterion r re, any proposed swap in which the higher-fitness sequence is proposed to move to the lower temperature chain is accepted. In a swap in which a higher-fitness sequence is proposed to move to the higher temperature, the move is accepted with probability inversely related to the magnitude of the difference of the temperatures of the two chains, as well as to the fitness of the two sequences. Finally, the crossover criterion r c, also adapted from [34], accepts crossover moves taking into account the difference in fitness between the set of old and new sequences, in addition to the difference of temperatures of the two chains involved in the crossover. For simplicity, we have summarized the behavior of the crossover criterion in the Supplementary, and we note that, in general, the crossover criterion penalizes an overall decrease in fitness when taking into account both chains.

Results

The protein fitness landscape is known to be highly sensitive, multi-peaked, and rugged [1,2], reflecting the possibility that a complete loss of function can arise from a relatively small number of point mutations (e.g. mutations in catalytic domains or mutations that cause misfolding). The complexity of this space presents obvious challenges for efficient exploration. Here we compare how EMCS and MHMCS respectively explore the discrete fitness landscape of 85aa proteins capable of gene activation, and evaluate prediction success rates, sequence diversity, and convergence speeds.

Experimental Screening

For experimental validation, we used EMCS and MHMCS to design novel proteins using all three of our models (XGBoost, CNN, ensemble). Together, EMCS and MHMCS designed 4600 novel sequences that are largely distinct from the sequence space occupied by the original training data (Fig 2), confirming that both model-guided sampling techniques are capable of proposing diverse novel proteins. To ensure that we could accurately identify gene activators in our experimental validation, we also added 300 previously validated negative controls (random sequences) to the library. We then experimentally assayed the peptides for their ability to activate a genetic locus (full details of the experimental design can be found in the Supplementary). In total, we identified 357 positive hits (7.59% hit rate), where a positive hit indicates that the peptide was found to activate a synthetic gene reporter significantly over background fluorescence. In contrast, the initial screen had a hit rate of only 0.51%. If we use the latter number as a proxy for the fraction of naturally occurring 85aa peptide sequences that are capable of gene activation, then our approach increased the baseline hit rate ≈ 15-fold. In fact, the best model-guided sampling technique (ensemble model + EMCS from known hits) increased the hit rate ≈ 45-fold (Table 1) by this metric. Even with initialization from known positive hits, the sequences proposed by EMCS were highly dissimilar from anything in the training set, which suggests that EMCS is capable of escaping deep local optima to efficiently traverse the fitness landscape and identify diverse high-fitness peptides.

Sequence Diversity

To compare sequence proposals between EMCS and MHMCS, we performed an in silico sampling experiment in which we explored the fitness landscape 4000 times with each algorithm using identical and controlled initial conditions (see Supplementary for additional details).
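As a companion to Algorithm 1, the sketch below shows the single-point crossover move and a minimal outer EMCS loop, reusing the hedged helper functions from the previous sketch. The simplified crossover acceptance (a placeholder for the full r c criterion) and the loop bookkeeping are illustrative assumptions rather than the authors' implementation.

```python
import random

def crossover(seq_i, seq_j):
    """Single-point crossover (Algorithm 1): the locus is drawn from [2, N-1],
    where N is the peptide length; the children exchange tails at the locus."""
    assert len(seq_i) == len(seq_j)
    locus = random.randint(2, len(seq_i) - 1)
    child_i = seq_i[:locus] + seq_j[locus:]  # i2: like i1 before, j1 after
    child_j = seq_j[:locus] + seq_i[locus:]  # j2: like j1 before, i1 after
    return child_i, child_j

def emcs(seqs, fitness, temps, gamma=0.5, mu=3, k_min=100, k_max=10_000, C=0.95):
    """Minimal EMCS driver over len(temps) chains (illustrative only)."""
    chains = list(seqs)
    for it in range(1, k_max + 1):
        # 1) Mutation sweep over every chain at its own temperature.
        for c, T in enumerate(temps):
            prop = propose_mutation(chains[c], mu)
            if accept_mutation(fitness(chains[c]), fitness(prop), T):
                chains[c] = prop
        # 2) Occasional crossover between two randomly chosen chains.
        if random.random() < gamma:
            i, j = sorted(random.sample(range(len(temps)), 2))
            ci, cj = crossover(chains[i], chains[j])
            # Placeholder for r_c: accept only crossovers that do not
            # decrease the summed fitness of the two chains involved.
            if fitness(ci) + fitness(cj) >= fitness(chains[i]) + fitness(chains[j]):
                chains[i], chains[j] = ci, cj
        # 3) Swap attempt between two adjacent chains on the ladder.
        i = random.randrange(len(temps) - 1)
        if accept_swap(fitness(chains[i]), fitness(chains[i + 1]),
                       temps[i], temps[i + 1]):
            chains[i], chains[i + 1] = chains[i + 1], chains[i]
        # Stop once any chain clears the fitness threshold C.
        if it > k_min and max(fitness(s) for s in chains) >= C:
            break
    return chains
```

A four-entry ladder such as temps = [0.001, 0.005, 0.02, 0.1] would mirror the four-chain default used in our comparisons, although these specific ladder values are placeholders.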
A unique advantage of EMCS is its ability to identify novel high-fitness sequences even when initialized from sequences that were known positive hits (and thus already in a high-fitness neighborhood). When initialized from known positive hits, the final edit distances of sequences discovered by EMCS are significantly higher than those of sequences discovered by MHMCS using a similar temperature regime (see Supplementary). Consistently, using entropy as a measure of information change, we calculated the average entropy change per iteration of EMCS and MHMCS over 10 7 iterations (Fig. 3a), and we show that the average entropy change per iteration in EMCS is ≈ 3-fold higher (using the default parameters of a 0.5 crossover rate and a total of 4 chains) than that of MHMCS (assuming the same mutation rate). We postulate that the increased proposed sequence diversity and increased entropy per iteration seen with EMCS are due to the genetic crossover steps, whereby functionally beneficial protein domains can be exchanged between known sequences and then further refined via point mutations. Escape from local minima is further encouraged by the incorporation of a temperature ladder, which allows for an increase in the search radius. In contrast, MHMCS is restricted to a single temperature and can only access domains in the fitness function that are accessible via point mutations alone. This hinders the ability of MHMCS to converge on a region corresponding to a diverse sequence when starting from a known positive sequence, because many sub-optimal moves are required to escape from the local optimum of the initial sequence.

Convergence

When initialized at random sequences, EMCS converges 1.25-5x faster than MHMCS (depending on the choice of temperature and crossover rates, as shown in Fig. 3b), likely due to the algorithm's increased versatility over MHMCS. With default parameters, we achieved convergence for 1171 EMCS runs, where we obtained at least one sequence per run with a fitness ≥ 0.95. In addition, due to the inclusion of 4 chains, EMCS yielded an average of 2.322 sequences per run with fitness ≥ 0.5, thereby giving us a total of N = 2720 sequences of fitness ≥ 0.5 for 1171 runs. For MHMCS, chains that started at temperatures greater than 2.5 × 10⁻² had a minimum failure rate of 50% and were dropped from the experiment. When excluding those sequences, we obtained a total of N = 2571 sequences from 2571 runs; 2361 of those sequences had fitness ≥ 0.95. The remaining 210 failed to reach convergence, but still had final fitness ≥ 0.5.

Discussion

In this work, we propose a two-step machine learning and sampling approach for protein engineering problems where training data is limited and positive hits are rare. Our method involves leveraging Large Protein Language Models (LPLMs) with transfer learning to estimate a fitness landscape, and then efficiently sampling the fitness landscape with Evolutionary Monte Carlo Search (EMCS) to propose novel high-fitness protein sequences. As a proof-of-concept, we apply this approach to the problem of designing small gene activators and demonstrate that our method is capable of successfully designing novel and diverse protein sequences with dramatically higher experimental validation rates when compared to a more traditional sampling method (MHMCS) or baseline discovery from high-throughput screening.
The importance of this approach is magnified when taking into account the complexities of the wet lab testing cycle: a single round of screening involves library design, DNA synthesis, plasmid cloning, viral packaging, cell line infection, fluorescence-activated cell sorting (FACS), DNA library preparation, next-generation DNA sequencing, and downstream bioinformatic analysis. Furthermore, in the field of rational protein engineering, multiple rounds of iterative screening are generally required to discover and validate novel proteins with desired functionality. Given the financial, temporal, and technical costs associated with the wet lab testing cycle, there is obvious value in accelerating this process to reduce the experimental burden of protein engineering. The immensity of the protein sequence space, coupled with the computational cost of embedding a protein using LPLMs like ESM-2, called for an efficient sampling algorithm that could escape local optima without compromising resolution. The EMC algorithm is ideally suited to this use case, as the incorporation of a temperature ladder allows for the simultaneous existence of multiple acceptance ratios. Furthermore, the genetic crossover steps allow for more efficient exploration of the fitness landscape, as shown by the sequence diversity and average entropy change per iteration of MHMCS vs. EMCS. We believe that the power of our approach lies in the combination of transfer learning via LPLMs and EMCS. Since LPLMs are trained on an immense number of diverse protein sequences, modern LPLM embeddings implicitly contain a wealth of features describing a protein's biochemical, biophysical, evolutionary, and even 3-dimensional structure information [33]; as such, we reason that LPLM embeddings of novel proposed sequences are capable of capturing the predicted functional consequences of genetic crossovers from EMCS, such that swaps resulting in misfolded or non-active proteins are assigned low fitness and thus not selected by EMCS. Conversely, potential swaps and domains that can act synergistically will be assigned a high fitness by our semi-supervised transfer learning-based model and selected for by EMCS, even if those domains are not evolutionarily related. In contrast, since GANs and diffusion models sample from a low-dimensional latent space and then pass the sample through the model to obtain the proposed sequence, only sequences that are close to the training data in latent space can be designed by these methods; additionally, there is no guarantee that high-synergy domains will be close in the latent space (especially if they are not evolutionarily related), limiting the potential diversity of sequences that can be proposed by generative algorithms trained on limited and skewed training data. We believe our framework has a number of advantages over both prior ML-guided protein design approaches that use traditional sampling techniques and the classic laboratory protein engineering approach. Firstly, assays that screen diverse, natural proteins for peptides of specific function typically have extremely low hit rates, whereas novel sequences proposed by our approach had significantly higher hit rates in the validation experiment.
Additionally, the small number of positive hits in the training data of protein engineering problems inherently limits the accuracy and generalizability of the fitness function; by leveraging information from LPLMs and incorporating multiple positive hits in the proposal of novel sequences through EMCS domain swapping, we believe our approach is capable of attenuating these disadvantages. Finally, though our proof-of-concept involved the design of relatively small proteins, we anticipate that our approach will generalize especially well to protein engineering problems involving larger proteins with multiple well-characterized domains. While we aim to extend our approach to larger proteins, our sampling algorithm will first need to be modified and optimized, as random swaps within larger proteins are increasingly likely to result in low fitness predictions due to the presence of longer conserved domains. The approach described here should be of benefit to the wider scientific community, especially those involved in protein engineering challenges, and has the potential to accelerate the design and testing of novel proteins for a variety of purposes, including therapeutic medicines.
Heat transfer and wall temperature effects in shock wave turbulent boundary layer interactions

Direct numerical simulations are carried out to investigate the effect of the wall temperature on the behavior of oblique shock-wave/turbulent boundary layer interactions at freestream Mach number $2.28$ and shock angle of the wedge generator $\varphi = 8^{\circ}$. Five values of the wall-to-recovery-temperature ratio ($T_w/T_r$) are considered, corresponding to cold, adiabatic and hot wall thermal conditions. We show that the main effect of cooling is to decrease the characteristic scales of the interaction in terms of upstream influence and extent of the separation bubble. The opposite behavior is observed in the case of heating, which produces a marked dilatation of the interaction region. The distribution of the Stanton number shows that a strong amplification of the heat transfer occurs across the interaction, and the maximum values of thermal and dynamic loads are found in the case of a cold wall. The analysis reveals that the fluctuating heat flux exhibits a strongly intermittent behavior, characterized by scattered spots with extremely high values compared to the mean. Furthermore, the analogy between momentum and heat transfer, typical of compressible, wall-bounded, equilibrium turbulent flows, does not hold over most of the interaction domain. The pre-multiplied spectra of the wall heat flux do not show any evidence of the influence of the low-frequency shock motion, and the primary mechanism for the generation of peak heating is found to be linked with the turbulence amplification in the interaction region.

I. INTRODUCTION

In a wide range of high-speed applications in the aerospace industry, shock-wave turbulent boundary layer interactions (SBLI) are crucial for an efficient aerodynamic and thermodynamic design, SBLI being responsible for increased internal machine losses, thermal and structural fatigue due to increased heat transfer rates, substantial modification of the wall-pressure signature, flow unsteadiness, shock/vortex interaction and broadband noise emission. Improving the understanding of these critical features is essential to enhance the capability to predict important quantities like the location and magnitude of peak heating, as well as for the development of effective flow control methods [1]. Most prior scientific work on SBLI, of both experimental [2][3][4][5][6][7] and numerical nature [8][9][10][11][12][13][14], has been aimed at the case of adiabatic wall conditions, and many efforts have been invested in the last decade to characterize the large-scale, low-frequency unsteadiness typically found in the interaction region. This phenomenon can be particularly severe when the shock is strong enough to produce separation of the incoming boundary layer [15]. The influence of wall thermal conditions on the characteristics of SBLI can be considerable, and wall cooling is often advocated as a possible candidate for flow control, strong cooling being capable of [16]: i) shifting the laminar-turbulent boundary layer transition toward higher Reynolds numbers; ii) producing a fuller incoming boundary layer velocity profile; and iii) reducing the thickness of the subsonic layer by decreasing the local speed of sound. Unfortunately, only a few experimental studies have been conducted on this topic, all based on the analysis of mean flow properties.
The effects of heat transfer in turbulent interactions over a compression ramp have been investigated by Spaid and Frishett [17], who performed experiments at freestream Mach number M ∞ = 2.9 by considering a cold (wall-to-recovery-temperature ratio T w /T r = 0.47) and a nearly adiabatic wall (T w /T r = 1.05). Their results showed that the effect of wall cooling, relative to the adiabatic condition, is to increase the incipient separation angle and to decrease the separation distance. Similar conclusions were later reported by Back and Cuffel [18], who considered an oblique shock-wave impinging on a turbulent boundary layer at M ∞ = 3.5 with surface cooling (T w /T r = 0.44). An in-depth experimental analysis of a shock reflection over a strongly heated wall (T w /T r = 2) was carried out by Delery [19], who considered a two-dimensional test arrangement for an upstream Mach number M ∞ = 2.4 and two incident shock wave intensities. The experimental measurements showed that heating the surface greatly increases the extent of the interaction zone, and the separation point moves much farther upstream than under adiabatic conditions. More recently, an investigation of the impact of wall temperature on a M ∞ = 2.3 shock-induced boundary layer separation has been carried out by Jaunet et al. [20] for shock deflection angles ranging from 3.5° to 9.5° under adiabatic (T w /T r = 1) and wall heating conditions (T w /T r = 1.4, 1.9). Their extensive experimental analysis, based on Schlieren visualizations, particle image velocimetry (PIV) and time-resolved hot-wire measurements, highlighted that a hot wall leads to an increase of the interaction length-scales, which is mainly associated with changes of the wall incoming conditions. A slight influence was also observed on the onset of separation, shifted to smaller flow deviations in the heated case. This scale change due to wall thermal conditions also has an effect on the flow unsteadiness, the lower frequencies becoming more and more important as the wall is heated. Measurements of heat transfer in SBLI were first reported by Hayashi et al. [21], who considered a M ∞ = 4 boundary layer developing over an isothermal cold wall (T w /T r ≈ 0.6) interacting with an oblique shock at various incident angles. They observed a complex spatial variation of the heat transfer coefficient, characterized by a rapid increase near the separation point, followed by a sharp reduction within the separation bubble and a further increase in the proximity of the reattachment point. Combined measurements of skin friction and heat transfer have been recently reported by Schülein [22], who considered an impinging shock at M ∞ = 5 and three values of the incident angle. Those results show a strong increase of the heat flux in the separation zone, characterized by a complex non-equilibrium behavior in which the Reynolds analogy between momentum and heat flux is not valid. A relatively large number of direct numerical and large-eddy simulations (DNS/LES) of both compression ramp and impinging shock interactions have appeared over the last decade [8,10,11,23,24]. However, all these studies addressed the case of adiabatic wall conditions and, to our knowledge, no high-fidelity simulations have been carried out to explore the effect of either wall heating or cooling in SBLI. The main objective of the present work is to fill this gap by providing a numerical study on the influence of wall thermal conditions on the behavior of oblique SBLI.
The analysis is based on direct numerical simulations to explore the effect of different wall-to-recovery-temperature ratios. This can be beneficial for the improvement of current turbulence modeling for SBLI, in particular for the computation of the heat transfer, which is the most challenging aspect of these flows: it is well known that numerical predictions based on the solution of the Reynolds-averaged Navier-Stokes equations are rather poor [25,26], with significant differences (up to 100%) among different turbulence models. A careful characterization of how the separation bubble, the skin friction and the heat transfer are affected by the wall thermal conditions is a core objective of this work, and it represents the key stepping-stone towards harnessing wall cooling to stabilize and control SBLI. Conclusions are finally provided in Section 4.

A. Flow solver

We solve the three-dimensional Navier-Stokes equations for a perfect compressible gas with Fourier heat law and Newtonian viscous terms. The molecular viscosity µ is assumed to depend on temperature T through Sutherland's law, and the thermal conductivity is computed as k = c p µ/Pr, the molecular Prandtl number being set to Pr = 0.72. The Navier-Stokes equations are discretized on a Cartesian mesh and solved by means of an in-house finite-difference flow solver, extensively validated for wall-bounded flows and shock boundary layer interactions in the transonic and supersonic regimes [27,28]. The solver incorporates state-of-the-art numerical algorithms, specifically designed to cope with the challenging problems associated with the solution of high-speed turbulent flows, i.e. the need to accurately resolve a wide spectrum of turbulent scales and to capture steep gradients without undesirable numerical oscillations. In the current version of the code the convective terms are discretized by means of a hybrid conservative sixth-order central/fifth-order WENO scheme, with a switch based on the Ducros sensor [29]. To improve numerical stability, the triple splitting of the convective terms [30] is used in a locally conservative implementation [31]. The viscous terms are approximated with sixth-order central differences, after being expanded to Laplacian form to guarantee physical dissipation at the smallest scales resolved by the computational mesh. Time advancement is performed by means of a third-order, low-storage, explicit Runge-Kutta algorithm [32].

B. Flow conditions and computational arrangement

A schematic view of the flow configuration investigated is shown in figure 1. A turbulent boundary layer developing over a flat plate is made to interact with an impinging shock. The computational domain extends for L x × L y × L z = 96 δ in × 11.7 δ in × 5.5 δ in , in the streamwise (x), wall-normal (y) and spanwise (z) directions, δ in being the inflow boundary layer thickness. The oblique shock is introduced in the simulation by locally imposing the inviscid Rankine-Hugoniot jump conditions at the top boundary, so as to mimic the effect of the shock generator; the nominal shock impingement point is x sh = 69.5δ in . Non-reflecting boundary conditions are enforced at the outflow and at the top boundary, away from the incoming shock. A recycling/rescaling procedure is used for turbulence generation at the inflow plane, whereby staggering in the spanwise direction is used to minimize spurious flow periodicity [27].
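As an aside, the Rankine-Hugoniot jump conditions imposed at the top boundary follow directly from the freestream Mach number and the wedge angle. The sketch below is a minimal implementation of the standard θ-β-M relation for the weak-shock branch; it is our own illustration, not part of the solver, and calling oblique_shock(2.28, 8.0) returns the wave angle and the pressure and density jumps for the present configuration.

```python
import math

def oblique_shock(M1, theta_deg, gamma=1.4):
    """Weak oblique shock: solve the theta-beta-M relation for the wave
    angle beta, then evaluate the Rankine-Hugoniot jumps across the shock."""
    theta = math.radians(theta_deg)

    def deflection(beta):
        # Flow deflection produced by a shock at wave angle beta.
        return math.atan(2.0 / math.tan(beta)
                         * (M1 ** 2 * math.sin(beta) ** 2 - 1.0)
                         / (M1 ** 2 * (gamma + math.cos(2.0 * beta)) + 2.0))

    # Scan upward from the Mach angle until the target deflection is reached,
    # then bisect; this selects the weak-shock branch.
    lo = math.asin(1.0 / M1) + 1e-8
    hi = lo
    while deflection(hi) < theta:
        hi += 1e-3
        if hi > math.pi / 2:
            raise ValueError("deflection angle too large: detached shock")
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if deflection(mid) < theta else (lo, mid)
    beta = 0.5 * (lo + hi)

    M1n = M1 * math.sin(beta)  # normal Mach number ahead of the shock
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1n ** 2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1n ** 2 / ((gamma - 1.0) * M1n ** 2 + 2.0)
    return math.degrees(beta), p_ratio, rho_ratio
```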
The recycling station is placed at x rec = 48δ in , sufficiently distant from the inflow to guarantee proper streamwise decorrelation of the boundary layer statistics [33] and to prevent any spurious low-frequency dynamics associated with the recycling procedure. A characteristic wave decomposition is used at the no-slip wall, where perfect reflection of acoustic waves is enforced and the wall temperature is held fixed. The turbulent boundary layer develops under nominal adiabatic conditions up to x T = 54δ in (the wall temperature T w being equal to the recovery temperature T r ), and local cooling/heating is applied for x > x T by specifying the wall-to-recovery-temperature ratio s = T w /T r to the desired value. To avoid a discontinuity in the wall temperature distribution, a smoothed step change is prescribed.

(Table I caption: ϕ is the incidence angle of the shock generator, s = T w /T r the wall-to-recovery-temperature ratio in the interaction zone, L is the interaction length-scale, and L sep is the length of the recirculation bubble. The subscript 0 refers to properties taken upstream of the temperature step change at x 0 = 50δ in . T is the time span used for the computation of the flow statistics.)

Five DNS have been carried out at various values of the wall-to-recovery-temperature ratio, spanning cold (s = 0.5, 0.75), adiabatic (s = 1.0) and hot (s = 1.4, 1.9) walls. These cases are labelled as SBLI-s0.5, SBLI-s0.75, SBLI-s1.0, SBLI-s1.4, SBLI-s1.9, respectively. The flow conditions for the various runs are reported in table I. For all cases, the free-stream Mach number is M ∞ = 2.28, the deflection angle of the wedge shock generator is ϕ = 8 • and the Reynolds number of the incoming boundary layer based on the momentum thickness, evaluated at a reference station upstream of the temperature step change (x 0 = 50δ in ), is Re θ0 ≈ 2500. For reference purposes, two additional simulations have also been carried out, corresponding to DNS of spatially evolving boundary layers (in the absence of the impinging shock) subjected to the same temperature step change as in SBLI-s0.5 and SBLI-s1.9. These two cases are denoted as BL-s0.5 and BL-s1.9, respectively. The domain is discretized with a mesh consisting of 6144 × 448 × 448 grid nodes, uniformly distributed in the spanwise direction. In the streamwise and wall-normal directions stretching functions are employed to better resolve the interaction region and to cluster grid nodes towards the wall. In particular, a hyperbolic sine mapping is applied from the wall y = 0 up to y = 3.5δ in . A uniform mesh spacing is then used above this location, and an abrupt variation of the metrics is avoided by a suitable smoothing of the connection zone. In terms of wall units (based on the friction velocity u τ and viscous length-scale δ v ) evaluated in the undisturbed turbulent boundary layer at x 0 , the streamwise and spanwise spacings are ∆x + = 5.9, ∆z + = 3.1; in the wall-normal direction the spacing ranges from ∆y + = 0.49 at the wall to ∆y + = 6.7 at the edge of the boundary layer. We point out that such mesh spacings are significantly smaller than those usually employed for DNS of SBLI under adiabatic conditions. The motivation is the need to maintain adequate resolution even when strong cooling is applied, which is the most challenging case in terms of spacing requirements, due to the drastic reduction of the viscous length-scale. The simulations have been run on a parallel cluster using 4096 cores, for a total of 7 million CPU hours.
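As a side note, the constitutive relations of Sec. II A and the wall-unit conversion quoted above are simple to reproduce. In the sketch below, the Sutherland reference constants and the specific heat are the conventional values for air, which we assume since the paper does not list them.

```python
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity mu(T) from Sutherland's law (SI units; air
    reference constants assumed)."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

def thermal_conductivity(T, cp=1004.5, Pr=0.72):
    """k = cp * mu / Pr, with the molecular Prandtl number 0.72 as in the text."""
    return cp * sutherland_viscosity(T) / Pr

def spacing_in_wall_units(dx, tau_w, rho_w, mu_w):
    """Convert a grid spacing to wall units: dx+ = dx / delta_v, where
    delta_v = nu_w / u_tau and u_tau = sqrt(tau_w / rho_w)."""
    u_tau = (tau_w / rho_w) ** 0.5
    delta_v = (mu_w / rho_w) / u_tau
    return dx / delta_v
```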
The time span over which the flow statistics have been computed is reported in table I. In the following, the boundary layer thickness in the undisturbed boundary layer at station x 0 is assumed as the reference length for all flow cases (δ 0 = 1.45 δ in ). The results are reported using scaled interaction coordinates x * = (x−x sh )/δ 0 , y * = y/δ 0 . For the sake of notational clarity, the streamwise, wall-normal and spanwise velocity components will hereafter be denoted as u, v, w, respectively, and either the Reynolds ($\phi = \bar{\phi} + \phi'$) or the mass-weighted Favre ($\phi = \tilde{\phi} + \phi''$, with $\tilde{\phi} = \overline{\rho\phi}/\bar{\rho}$) decomposition will be used for the generic variable ϕ. It is worth pointing out that the flow conditions of case SBLI-s1.0 are essentially identical to those of our previous DNS, reported in Pirozzoli and Bernardini [11], based on the experiment by Piponniau et al. [5]. The extensive comparison available in that paper (not repeated here) showed that the global structure of the flow (mean velocities and turbulence velocity fluctuations) predicted by DNS is in very good agreement with that observed in the experiment, provided that the differences in the overall size of the interaction zone are suitably compensated. Indeed, the size of the separation bubble found in the computation is approximately 30% smaller than the experimental one. As later shown by Bermejo-Moreno et al. [34], this difference can be ascribed to the assumption of spanwise periodicity applied in the numerical simulation, which avoids confinement effects from lateral walls that are inevitable in the experiment and are known to cause a substantial increase of the separation bubble size.

(Table II caption: H and H i are the compressible and incompressible shape factors, respectively, computed with the mean velocity u. C f is the skin friction coefficient.)

A. Characterization of the incoming flow

A comparison of the basic velocity statistics of the incoming turbulent boundary layer with reference experiments and numerical simulations is shown in figure 2. The DNS data are taken at the reference station x 0 = 50δ in , which is still in the adiabatic portion of the wall, and where the friction Reynolds number (ratio between the boundary layer thickness and the viscous length-scale) is Re τ ≈ 450. The global properties of the boundary layer at this location are summarized in Table II. As expected, when the van Driest transformation $dU_{VD} = (\rho/\rho_w)^{1/2}\,du$ is applied to account for the variation of the thermodynamic properties through the boundary layer, a collapse with reference low-speed data at comparable Re τ [36] is observed, and the mean velocity profile exhibits the onset of a small region with a nearly logarithmic behavior. The density-scaled Reynolds stresses, reported in figure 2b, highlight close similarities with the incompressible distributions, and a very good agreement is also obtained with reference compressible experiments, except for the wall-normal velocity variance, which is typically underestimated by measurements. The main effect of the temperature step change on the incoming flow can be understood by looking at figure 3, where the temperature-velocity relationship in the boundary layer is reported for simulations BL-s0.5 and BL-s1.9 at various stations along the streamwise direction, from x 0 to the end of the computational domain. (Figure 3 caption: The grey diamonds denote reference experiments with s = 2 by Debiève et al. [37]; the vertical line denotes the impingement shock location for the SBLI simulations.)
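A brief numerical aside: the van Driest transformation quoted above is straightforward to apply to discrete profiles. The sketch below integrates $dU_{VD} = (\rho/\rho_w)^{1/2}\,du$ with the trapezoidal rule; the array-based interface is our assumption, since the paper gives no implementation details.

```python
import numpy as np

def van_driest_transform(u, rho, rho_w):
    """Integrate dU_VD = sqrt(rho / rho_w) du along a wall-normal profile.
    u and rho are 1-D arrays ordered from the wall outward, with u[0] = 0."""
    w = np.sqrt(rho / rho_w)
    du = np.diff(u)
    # Trapezoidal accumulation of the density-weighted velocity increments.
    increments = 0.5 * (w[:-1] + w[1:]) * du
    return np.concatenate(([0.0], np.cumsum(increments)))
```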
The temperature-velocity representation of figure 3 is well suited to describe the adaptation process of the boundary layer to the new thermal conditions at the wall. The shape of the profiles at the various x-stations suggests that the outer region of the boundary layer significantly deviates from the equilibrium Walz solution, and even at the end of the computational domain the recovery process is not yet completed for either the cold or the hot wall case. A similar conclusion was also reported by Debiève et al. [37], who investigated the effect of heating by considering a step change in the wall temperature distribution of a spatially evolving supersonic turbulent boundary layer at freestream Mach number M ∞ = 2.3, wall-to-recovery-temperature ratio s = 2 and Reynolds number based on the momentum thickness at the temperature step change Re θ = 4100. Their data, taken 8 boundary layer thicknesses downstream of the beginning of the heated wall, are also included in figure 3. The close agreement between the experimental measurements and the DNS profile at the corresponding location provides a confirmation of the quality of the present simulations with non-adiabatic wall conditions. A further comparison is shown in figure 4, where the distribution of the total temperature in the boundary layer is shown. The figure shows the rapid growth of the thermal boundary layer starting from the step change position and again highlights a remarkable agreement between the experimental measurements and the DNS data, despite the slightly different nominal conditions in wall temperature and Reynolds number. To highlight the effect of heating/cooling on the heat transfer rate, the spatial distribution of the Stanton number is shown in figure 5, where the origin of the streamwise coordinate is located at the beginning of the step change (x T ). For both cooling and heating, the simulation predicts a rapid decay of the heat transfer coefficient towards values typical of an equilibrium boundary layer, and, in agreement with recent DNS data [38], C h is found to increase when s decreases. In this case the agreement with the experimental data (available for the hot wall) is reasonably good, the computed values being approximately 8% lower than the measurements. These differences might be explained by recalling that in the experiment C h was computed through an iterative procedure based on the theoretical Walz temperature-velocity relationship, which is far from valid past the step change location, as previously seen in figure 4.

B. Effect of wall temperature on SBLI flow fields

To provide an overview of the flow organization and a qualitative perception of the influence of the wall thermal conditions, we report in figure 6 contours of the mean velocity components and of the mean density gradient magnitude for some representative values of s (0.5, 1 and 1.9). The typical topology of SBLI is observed for all flow cases, independently of the wall temperature: i) the incoming turbulent boundary layer thickens within the interaction region and relaxes to a new equilibrium state further downstream; ii) a compression fan develops near the separation point, well upstream of the nominal impinging location; iii) away from the wall the compression waves coalesce to form the principal reflected shock; and iv) the flow turns through an expansion fan towards the wall and reattaches. Snapshots of the instantaneous density field and of its wall-normal derivative (numerical schlieren) in the longitudinal mid-plane are reported in figure 7.
These visualizations bring to light the convoluted structures of the turbulent boundary layer and reveal the complex pattern of waves originating from the interaction with the impinging shock. The step change imposed in the wall temperature distribution is also revealed in figure 7 by the formation of a weak disturbance originating at x * ≈ −9, also visible in the mean density gradient of figure 6.

(FIG. 7 caption: Contours of instantaneous density (left panels) and wall-normal density gradient (right panels) in the longitudinal mid-plane at various wall-to-recovery-temperature ratios, increasing from top to bottom (s = 0.5, 1, 1.9). Sixty-four contour levels are shown in the ranges 0.48 < ρ/ρ ∞ < 2.12 and −1.5 < (dρ/dy)/ρ ∞ < 1.5.)

The main effect of the wall thermal condition is a change in the interaction scales, well highlighted by the mean and instantaneous visualizations, which clearly show that the impinging shock penetrates deeper into the incoming turbulent boundary layer when the wall temperature is reduced. This effect is mainly associated with the displacement of the sonic line (displayed in figure 6) towards (away from) the wall with wall cooling (heating). The interaction length-scale L (see table I), defined as the distance between the nominal incoming shock impingement point and the apparent origin of the reflected shock, is strongly affected by s. Compared to the adiabatic case, L decreases (increases) significantly with wall cooling (heating), in agreement with previous experimental findings for impinging shock and compression ramp configurations [17,20]. A strong amplification of the turbulence kinetic energy $k = \overline{u'_i u'_i}/2$ and of the Reynolds shear stress $\overline{u'v'}$ is found across the interaction region, as revealed by figure 8. For all SBLI cases, a remarkable growth is observed in the first part of the interaction, and the maximum values of both $k$ and $\overline{u'v'}$ are seen to gradually detach from the wall. This behavior is associated with the development of a shear layer at the separation shock and is consistent with previous numerical and experimental findings in supersonic [4,19] and transonic interactions [27]. (Refer to table I for the nomenclature of the DNS data.) To characterize the behavior of turbulence across the interaction, the ratio between the absolute value of the shear stress and the turbulence kinetic energy, known as the structure parameter (Π), is also reported in figure 8. In the upstream region this quantity is approximately constant for all cases, assuming a value typical of a turbulent boundary layer not too far from equilibrium (Π ≈ 0.3). At the beginning of the interaction, independently of s, a rapid decrease is observed and Π attains values in the range 0.1 ÷ 0.15, before gradually recovering the original value. The influence of the wall temperature on the behavior of the structure parameter is found to be marginal, except for the previously mentioned shrinking/expansion effect of the interaction domain. To better quantify the enhancement of turbulence across the interaction, we have computed at each x station the peak values of the Reynolds stress components, reported in figure 9 as a function of the scaled streamwise coordinate. The distributions are strongly influenced by the wall temperature, an increment of s implying an upstream shift of the turbulence amplification location. Furthermore, the intensity of all the Reynolds stress components is seen to increase when the wall is heated, with the exception of $\overline{u'u'}$, whose peak is identical for the various SBLI cases.
The maximum amplification (approximately a factor 4 with respect to the upstream level) is attained by the wall-normal component $\overline{v'v'}$, whose behavior is qualitatively similar to that of $\overline{w'w'}$, whereas the shear stress displays a second maximum immediately past the nominal impingement location. A major effect of cooling/heating is found in the fields of the mean temperature T and of the wall-normal turbulent heat flux $\overline{v'T'}$, displayed in figure 8, where the y-axis has been magnified to better highlight the near-wall behavior. The impinging shock greatly affects both T and $\overline{v'T'}$, leading to a thickening of the thermal boundary layer and to a strong amplification of the turbulent heat flux. However, the specific behavior of the flow significantly depends on the wall thermal condition. In particular, in both the adiabatic and hot wall cases the mean temperature attains its maximum at the wall, and a positive correlation is always found between temperature and wall-normal velocity fluctuations across the interaction region. On the other hand, when surface cooling is applied, a local maximum of the mean temperature within the boundary layer starts to develop (white solid line in figure 8a), which moves far away from the wall at the beginning of the interaction process. In this case, a negative $\overline{v'T'}$ correlation is found close to the wall, and, as observed for a cold spatially evolving boundary layer [38], the crossover position ($\overline{v'T'} = 0$) occurs close to the location of maximum mean temperature.

C. Wall properties in adiabatic and non-adiabatic SBLI

The spatial distribution of the mean skin friction coefficient $C_f = 2\tau_w/(\rho_\infty u_\infty^2)$ at various s is depicted in figure 11(a). For reference purposes, we also report in the figure the skin friction distributions of the cold and hot spatially evolving boundary layers BL-s0.5 and BL-s1.9 (dashed lines). Upstream of the region of shock influence, a collapse of the curves for the same temperature conditions is observed. The temperature step change produces an abrupt variation of the skin friction, characterized by a maximum (minimum) when cooling (heating) the wall. In the absence of the shock, the skin friction distribution gradually relaxes to that of an equilibrium boundary layer, and, in agreement with previous studies [38], C f is increased by wall cooling and decreased by heating. In the presence of the impinging shock, the skin friction exhibits a sharp decrease at the beginning of the interaction, and for all cases mean flow separation is observed. The extent of the recirculation region (L sep ) is reported in table I and plotted in figure 12, where the locations of the separation and reattachment points are also shown. Compared to the adiabatic case, wall cooling results in a significant reduction of L sep (−74% for SBLI-s0.5), whereas heating the wall leads to the opposite effect (+79.8% for SBLI-s1.9). The location of the separation point is most affected by the wall temperature change, whereas the boundary layer reattachment is less influenced by s, being mainly controlled by the nominal (fixed) impinging shock location. A simple extrapolation of the available data yields a value of s = 0.427 for the condition of incipient separation. We observe that, for all cases, the skin friction in the interaction region exhibits the typical W-shape previously observed in both laminar and (adiabatic) turbulent shock boundary layer interactions [11,39], characterized by two minima, which are both affected by s.
In particular, an increase of the wall-to-recovery-temperature ratio produces an upstream displacement of the first minimum, associated with the upstream shift of the separation shock. The location of the second minimum is relatively insensitive to s, but its magnitude decreases when the wall temperature is raised. The major influence of cooling/heating is also apparent from the mean wall pressure p w , whose distribution is reported in figure 11(b). Heating the wall shifts the beginning of the interaction upstream, leading to a smoother pressure rise. The opposite behavior occurs in the case of cooling, which produces a downstream shift of the upstream influence and a steeper variation of p w within the interaction zone. Interestingly, all the curves cross at the same point (x * = −1), close to the nominal impingement location, before gradually relaxing towards the value predicted by the inviscid theory. In the downstream portion, contrary to some experimental observations [19], our data do not show any overshoot with respect to the level of the inviscid-fluid solution. To characterize the heat transfer behavior across the interaction, the spatial distribution of the Stanton number C h is reported in figure 13(a) for all flow cases (BL and SBLI) investigated here. For reference, we also show in figure 13(b) the wall heat flux q w , which, being normalized by the constant factor ρ ∞ u ∞ C p T r , provides a perception of the direction and of the effective amount of heat exchanged at the wall in the various cases. A strong amplification of the heat transfer rate C h is found in the interaction region with respect to the reference cooled/heated boundary layers, with a maximum increase of approximately a factor 2 for the cooled and 1.7 for the heated wall. A complex variation of the Stanton number distribution is observed when varying the wall thermal condition, the curves being characterized by four local extrema. First, C h decreases, attaining a minimum in the proximity of the separation point, followed by a sharp increase in the interaction zone, with the peak achieved at the same point where the skin friction features its local maximum. In the case of the heated wall, characterized by an extended separation, the Stanton number exhibits a curvature change with a second minimum around the reattachment point, and then increases again, attaining a second broad maximum in the downstream relaxation region. In the presence of a cold wall, where the extent of the separation bubble is strongly reduced, the curvature change is still observed, but C h peaks immediately past the reattachment point. These trends are very similar to those reported by Hayashi et al. [21], who explored the effect of the shock strength (by varying the shock generator wedge angle) under the same thermal condition (cold wall). In particular, the Stanton number distribution found in the experiments for strong interactions is here recovered by increasing the wall-to-recovery-temperature ratio. We remark that, although cooling the wall results in a weaker interaction (as far as the separation bubble size is concerned), the reduction of the length-scales in the streamwise and wall-normal directions produces stronger temperature gradients at the wall, thus leading to larger heating rates. Similarly, since the shock penetrates deeper into the boundary layer and the pressure jump imparted by the shock must be sustained in a narrower region, cooling the wall increases the root-mean-square wall pressure p rms , as shown in figure 14.
The location of the maximum values of p rms perfectly matches that of the first peak in the Stanton number distribution, implying that the generation of high thermal loads is likely to be associated with the turbulence amplification in the interaction region. The results on the mean skin friction and the Stanton number reported in the previous figures confirm the experimental observations of Schülein [22], who highlighted that the analogy between momentum and heat transfer, which is well established in equilibrium flows and represents the basis of many simplified physical models, is not valid in the interaction region. This conclusion is not surprising, since even the most advanced and refined forms of the Reynolds analogy [40] are all based on the chief assumption/approximation of a quasi-one-dimensional flow, which clearly fails in the presence of mean flow separation, as in the present SBLI cases. To examine in depth the relationship between momentum and heat transfer, and to better characterize the unsteady behavior of the flow, we show in figure 15 contours of the instantaneous skin friction c f and the instantaneous heat transfer coefficient c h in the wall plane, for the two extreme cases SBLI-s0.5 and SBLI-s1.9. We also provide more quantitative information in figure 16 by reporting their correlation coefficient (R c f c h ) as a function of the streamwise coordinate. Upstream of the interaction, a streaky pattern typical of a zero-pressure-gradient boundary layer is found for c f and c h in both the cold and hot wall cases. This region is characterized by a positive correlation between the fluctuating friction and heat transfer coefficients, especially in the case of cooling (R c f c h = 0.98). This scenario completely changes across the interaction, where patches of instantaneously reversed flow are found, starting from the beginning of the interaction and extending well into the recovery zone. In this region the local Stanton number exhibits a strongly intermittent behavior, characterized by scattered spots with extremely high heat transfer rates, and the correlation coefficient displays a rapid decay, attaining a nearly flat distribution throughout the separation bubble. The relaxation region is characterized by a gradual recovery of the upstream behavior, which is not yet completed at the end of the computational domain. To further characterize the flow unsteadiness and to assess the possible influence of the reflected shock motion on the wall heat flux, we report in figure 17 the pre-multiplied spectra of both the wall pressure and the instantaneous heat flux as a function of the Strouhal number St = f δ 0 /u ∞ and the streamwise position x * . The spectral maps refer to SBLI-s1.9, which is characterized by extended separation and corresponds to the flow case for which the low-frequency shock motion is most evident. The power spectral densities have been computed using the Welch method, subdividing the overall pressure record into 4 segments with 50% overlapping, which are individually Fourier-transformed. The frequency spectra are then obtained by averaging the periodograms of the various segments, which minimizes the variance of the PSD estimator, and by applying a Konno-Ohmachi smoothing filter [41] that ensures a constant bandwidth on a logarithmic scale. The map of the wall pressure signal shows the typical features observed in previous studies [14].
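A minimal sketch of this spectral estimation step is given below, using scipy.signal.welch with the segmenting just described; the Hann window is an assumption, and the Konno-Ohmachi smoothing stage is not reproduced.

```python
import numpy as np
from scipy.signal import welch

def premultiplied_wall_spectrum(p, dt, delta0, u_inf):
    """Pre-multiplied PSD of a wall signal versus Strouhal number
    St = f * delta0 / u_inf, using 4 Welch segments with 50% overlap."""
    nperseg = 2 * len(p) // 5                # yields 4 segments at 50% overlap
    f, pxx = welch(p - np.mean(p), fs=1.0 / dt, window="hann",
                   nperseg=nperseg, noverlap=nperseg // 2)
    St = f * delta0 / u_inf
    return St, f * pxx                       # f * PSD: pre-multiplied spectrum
```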
Upstream of the interaction zone, the wall pressure spectra are bump-shaped, as for canonical wall-bounded flows, with a peak at St ∼ O(1) associated with the energetic turbulent structures of the boundary layer. A similar shape is also found in the downstream relaxation region, although the spectral density is broadened and the peak shifted to lower frequencies owing to the thickening of the boundary layer. A different behavior is observed at the beginning of the interaction region, close to the foot of the reflected shock, where a broad peak appears in the map at low frequencies, centered at St ≈ 0.004, corresponding to a Strouhal number based on the separation length St L = f L sep /u ∞ ≈ 0.025. This secondary peak is the signature of the broadband motion of the reflected shock, which in SBLI with massive separation is known to be mainly driven by a downstream mechanism associated with the dynamics of the separation bubble [5,15]. The power spectral density of the heat transfer coefficient brings to light a completely different picture. In this case no evidence of any low-frequency dynamics is apparent, and most of the energy is contained at intermediate/high frequencies throughout the interaction. In particular, a strong amplification of the heat transfer fluctuations is found close to the separation and reattachment points, with a shift toward intermediate frequencies, classically associated with the shedding of vortical structures in the shear layer that develops in the first part of the interaction [13]. This again suggests that, in the flow cases investigated here, the primary mechanism responsible for the generation of peak heating in the interaction zone is the turbulence amplification associated with the SBLI.

IV. CONCLUSIONS

In the present work the influence of different wall thermal conditions on the properties of impinging shock-wave/turbulent boundary layer interactions is investigated by means of direct numerical simulations at M ∞ = 2.28 and shock angle ϕ = 8 • . Five different values of the wall-to-recovery-temperature ratio are considered, corresponding to cold (s = 0.5, 0.75), adiabatic (s = 1) and hot (s = 1.4, 1.9) walls. The characteristic features of SBLI are observed for all flow cases, but the interaction properties are significantly affected by the wall temperature, and our results confirm the observations of the few experimental data available in the literature. Wall cooling has some beneficial effects on SBLI, leading to a considerable reduction of the interaction scales and of the size of the separation bubble, whereas the opposite holds for wall heating. A complex spatial variation of the Stanton number is found across the interaction, whose structure strongly depends on the wall-to-recovery-temperature ratio. The fluctuating heat flux exhibits a strongly intermittent behavior, characterized by scattered spots with extremely high values compared to the mean, and the analogy between momentum and heat transfer typical of equilibrium boundary layers is no longer valid in the interaction region. The pre-multiplied spectra of the Stanton number do not show any evidence of the influence of the low-frequency shock motion, and the primary mechanism for the generation of peak heating is found to be linked with the turbulence amplification in the interaction region. If the primary objective is to reduce flow separation, our results indicate that wall cooling can be considered an effective means of flow control.
However, since the pressure jump imparted by the shock must be sustained by the boundary layer in a narrower region when the wall temperature decreases, the maximum values of the thermal loads (heat transfer rates) and dynamic loads (root-mean-square wall pressure) are found in the case of the cold wall. We expect that the DNS database developed in this work, whose statistics and raw data are available at http://newton.dima.uniroma1.it/osbli/, will be useful for the high-speed turbulence modeling community, fostering the development of advanced models to improve the prediction of heat transfer in SBLI. Future efforts will be made to extend our database to a wider range of flow conditions, including different Mach numbers and shock strengths.
Alcohol use during pregnancy in post-conflict northern Uganda: pregnant women's experiences and provider perceptions

Background Alcohol use during pregnancy has been associated with several birth defects and developmental disabilities generally known as Fetal Alcohol Spectrum Disorders (FASD). Contextual in-depth understanding of why women drink while pregnant is scarce. For this reason, we explored pregnant women's experiences, knowledge and attitudes, as well as provider perceptions, regarding prenatal alcohol consumption to inform interventions meant to address alcohol-exposed pregnancies in post-conflict settings. Methods In the months of May and June 2019, 30 in-depth interviews were conducted with pregnant mothers who reported maternal alcohol use during pregnancy. In addition, 30 key informant interviews were carried out with health workers providing antenatal care (ANC) services in health facilities in Gulu, Kitgum and Pader districts in Northern Uganda. Data was recorded, transcribed and subjected to thematic content analysis. Results Women reported diverse views regarding maternal alcohol use during pregnancy. Whereas some felt it was favourable, others had misgivings about it. There was marked variability in knowledge on the dangers of drinking during pregnancy. In this study, women reported that they found themselves in alluring situations that predisposed them to drinking alcohol. These included brewing alcohol as a source of livelihood, pregnancy-induced craving for alcohol, and participation in cultural festivities that are characterised by eating and drinking alcohol. Nonetheless, women who consume alcohol during pregnancy were not held in high esteem in the Acholi communities. Various prevention interventions reportedly existed in communities to address alcohol use during pregnancy, including ANC health education, public debates, radio talk shows, community health worker group and individual counselling, and local council by-laws. Conclusions and recommendations Pregnant mothers in post-conflict northern Uganda regard alcohol as a remedy to some of the social, economic and health challenges they face. Hence they continue drinking even during pregnancy because of the existing socio-cultural norms that promote it. The findings of this study demonstrate a need for sensitising the communities in which pregnant women live so they can provide a supportive environment for mothers to abstain from alcohol consumption during pregnancy. Health care providers should ensure pregnant women consistently receive accurate and honest messages on the dangers of drinking during pregnancy so they can make informed decisions.

Background

Prenatal alcohol consumption has been recognized as the cause of several congenital anomalies [1,2]. In spite of this, alcohol consumption is generally accepted in many communities around the world [3].
In Uganda, alcohol is a fundamental part of religious and social ceremonies. A number of studies have documented a higher prevalence of alcohol use among residents in Northern Uganda compared to other regions in the country [4][5][6][7]. This has partly been attributed to the traumatic stress resulting from the two-decade civil war in Northern Uganda [8]. Many people who were affected by the war, including women, resorted to alcohol use to manage depression, while others turned to brewing and selling alcohol as a source of income to fight poverty [8]. There have also been reported increases in levels of alcohol use among women in Uganda for various reasons [4,7]. The most recent Uganda clinical guidelines recommend alcohol history taking and health education of mothers against alcohol drinking during pregnancy. However, screening for alcohol use is not part of routine practice in health care settings in the country. Health workers depend on self-reported information [9,10]. This is subject to underreporting due to social desirability [11]. Knowledge and attitude have been reported to influence behavior in some cases, while other studies have noted that information is not synonymous with understanding [12]. A recent study in Mozambique reported that 43% of women continued to drink after they found out they were pregnant [13], while in another study, in Copenhagen, pregnant women reported a reduction in weekly alcohol consumption during early pregnancy as compared with pre-pregnancy levels. Prevalence of alcohol use in late pregnancy in West Virginia was reported at 8.1% [14], and problem drinking was reported by 21.2% of pregnant women in Lusaka, Zambia [15]. While there is a growing body of literature on the knowledge and attitudes of women on alcohol use during pregnancy, there is a dearth of qualitative investigations that articulate and contextualize the circumstances in which pregnant women find themselves that may influence their decisions to use alcohol. The aim of this paper is to explore pregnant women's attitudes, knowledge and experiences regarding alcohol use during pregnancy, understand perceptions of the general public in the Acholi region on alcohol use during pregnancy, and highlight available alcohol prevention approaches in the community.

Study area and population

This study was conducted in the Acholi sub-region in Northern Uganda, a region scarred by the two-decade protracted conflict between the Lord's Resistance Army and the Uganda Government. For over a decade most of the populace lived in Internally Displaced Persons (IDP) camps, which affected the economic, social and moral fabric of the communities. The Acholi sub-region administratively consists of the districts of Agago, Amuru, Gulu, Kitgum, Lamwo, Nwoya, Pader and Omoro. The districts of Gulu, Kitgum, and Pader were randomly selected for this study. Acholi people constitute 4.4% (1.4 million people) of Uganda's population. Gulu district is located in the central part of Northern Uganda; Pader and Kitgum districts are located to the north-east of Gulu. Agriculture is the mainstay for locals in the three districts. Gulu district was the epicentre of much of the fighting between the Ugandan government army and the Lord's Resistance Army. Kitgum and Pader districts also suffered many deaths and social disruption resulting from the two-decade civil war that plagued the region. Up to 32.5% of people in Northern Uganda live below the national poverty line [16].
The Acholi sub-region has one of the highest levels of domestic violence in the country, with 52.8% of women having experienced physical violence, while 59.9% experienced some form of violence (physical, emotional or sexual) from their current spouse [17]. It also has a fertility rate of 5.5 children per woman. Some 14.4% of persons aged 10 and above in Northern Uganda have never been to school [18]. Also, about 11.4% of children in the sub-region were born weighing less than 2.5 kg, higher than the national average of 9.6% of children born in Uganda with a low birth weight (below 2.5 kg) [17]. Study design, data collection and procedures This was an exploratory qualitative study to elucidate how pregnant women and health workers perceived alcohol use during pregnancy. It was a follow-up to a larger quantitative cross-sectional study that investigated the prevalence of alcohol use among 420 women seeking ANC services in health facilities in Gulu, Kitgum and Pader districts and recorded a 23.5% prevalence of alcohol use among pregnant women in this region [19]. To explore women's perceptions, experiences and knowledge about alcohol use during pregnancy, in-depth interviews were considered the most appropriate method [20]. Pregnant women seeking ANC services who reported alcohol use during the larger quantitative survey and consented to participate in this study were interviewed. Thirty in-depth interviews (IDIs) were conducted with these women by three research assistants with skills in social and health sciences research. Prior to conducting interviews, these research assistants were trained for 2 days. They were university graduates in the fields of humanities, social sciences and gender studies. They were all born and raised in the region and were proficient in the local language (Acholi). This was to ensure culturally sensitive approaches as well as to make it easy to get detailed information on respondents' experiences, perceptions and knowledge about alcohol use during pregnancy. With the help of language experts, the researchers prepared an in-depth interview guide for pregnant mothers and a key informant interview guide for health workers in both Acholi and English. These tools were tested for conceptual equivalence and completeness in data collection with three mothers and three key informants at a health facility in Amuru district. This pilot data was analysed and formed part of the final interviews for this study. Whereas the in-depth interviews delved deeper into the women's experiences, knowledge and perceptions towards alcohol use during pregnancy, 30 key informant interviews (KIIs) were conducted with health workers providing antenatal care services to obtain expert technical information and clarification on drinking patterns, alcohol prevention approaches in the community and challenges faced in fighting the vice. The KIIs also captured community views on maternal drinking during pregnancy and the alcoholic beverages commonly consumed. Table 1 in Appendix 1 shows sample questions from the data collection tools. Key informants were purposively selected based on their role in the provision of antenatal care services. Venues for both in-depth interviews and key informant interviews were carefully selected to minimise distractions and maximise privacy for study participants. Non-participants were not allowed at these venues.
Data management and analysis In-depth and key informant interviews were tape-recorded, transcribed verbatim and translated into English by trained research assistants. Interview transcripts were analysed using thematic content analysis, chosen because of its flexibility [21]. Thematic analysis allowed themes to be generated both deductively by the researchers, based on the literature and previous theory, and inductively from the raw data itself. The authors read through all the transcripts twice and developed a code manual. Data was then exported and systematically open-coded in ATLAS.ti 7 for content analysis. Several codes were generated. Two persons were entrusted with the coding and worked independently to ensure reliability. The coders included the first author, who has a postgraduate degree in population and reproductive health, and another coder who has a postgraduate degree in sociology. Both have vast experience in coding qualitative data for related studies. Disagreements between them, whenever they arose, were resolved through discussions with a third party, after which some themes were collapsed into others and new codes created. When new codes were identified, previous transcripts were reread to determine if the new codes were applicable to the texts. Thus the code manual was continually revised. These codes were then grouped into themes. A final code manual was then produced. Recurrent and emerging themes were identified and organised into meaningful categories and sub-categories [20,22]. Some tentative themes did not have enough data to support them; these were broken down or accommodated in other themes [23]. Each thematic code satisfied Boyatzis's five elements: a conceptually meaningful label, a definition, a description of how to know when the theme occurs, a description of any qualifications or exclusions to the application of the theme, and examples of positively and negatively coded extracts from the data. The team of investigators then met to share findings and agree on the interpretation of themes. More review and refinement was conducted to ensure coherent patterns [24]. Findings are presented using a thematic approach whereby responses from different respondents are integrated under the same theme(s). Illuminating excerpts/quotations are used to illustrate the findings. Ethical considerations The study was approved by the Makerere University School of Public Health faculty Institutional Review Board and the Uganda National Council for Science and Technology (UNCST), Ref SS 4938. Written informed individual consent was sought from participants before interviews commenced. In addition, permission was obtained from the district authorities and local leaders. Confidentiality was observed by the study investigators. We assured study participants of their liberty to freely withhold information if they were uncomfortable giving it. When the recorder was used, the permission of the respondents was sought before tape recording. Results The study explored the perspectives of women about drinking alcohol during pregnancy, their knowledge and their experiences. Using a thematic framework analysis, a number of themes related to the study objectives emerged from the interpretation and analysis of these results. Participant characteristics Women who participated in the study were aged between 19 and 38 years. The mean age was 28. Almost all study participants were married or cohabiting, save for three who had separated from their partners.
Eighteen were primary school dropouts, eight had attended up to secondary school, two had completed tertiary education, and two had never attended school. Most of the women (19) were small-scale farmers, three were alcohol brewers, two were prisoners, another two were in formal employment, and the rest (four) were housewives. This is presented in Table 2 in Appendix 2. The majority (22) of the health workers interviewed were women; only eight were men. Most belonged to the midwifery and nursing professions; only four were medical doctors and three were clinical officers. This is presented in Table 3 in Appendix 3. Types and content of alcohol consumed Commercial beverages Respondents reported consumption of various types of alcohol. They drank bottled beer, wine, and distilled clear spirits referred to as waragi, served in small plastic bags from which they are consumed. Most of these were available in bars and shops in the study areas. As expected, commercial alcohol such as beer and wine was more popular in urban areas. Two-thirds of those who had attained at least secondary education, as well as 70% of those in the highest wealth quintile in the sample, revealed that they consumed commercial alcoholic beverages. The types of beer consumed included Bell lager, Nile Special, Eagle Extra, Club Pilsner and Tusker, but Smirnoff emerged as the most popular drink among women, as it was regarded as a feminine drink. Low-end users consumed cheap sachet waragi. Locally made beverages Respondents reported consumption of homemade brews such as lujutu (fermented and distilled cassava, maize or sorghum) and kwete (fermented maize and/or sorghum). They also consumed munjuti (fermented simsim), arege (made of sorghum and yeast), kasese (made from bananas), uma uma (made of yeast and sorghum flour or cassava flour) and local wine (made of yeast, tea leaves and seeds of marijuana). As anticipated, these homemade brews, being indigenous, are revered and consumed regularly by almost all family members. For individuals who are addicted, this alcohol is readily available as it does not have to be purchased from the market. Women's perceptions on alcohol use during pregnancy Alcohol relieves stress Some women described alcohol use during pregnancy as favourable since they believed it helped them relieve stress resulting from other social challenges such as poverty, divorce and separation, among others. They felt drinking offered them temporary respite from their immediate concerns, such as fending for their children and relatives. Given the high total fertility rate in this region, many families have many children that require feeding, clothing and school fees, among other needs. Some women have been abandoned by their partners and have to take care of the children single-handedly. "It helps relieve stress. Sometimes I may have problems with my husband and I need to take alcohol to relieve myself of stress. For some women, the husbands don't care about them. They don't provide for their families. Some have other women. These things can cause stress," (IDI, Kitgum district). Alcohol is medicinal Some women held the view that pregnancy comes with ailments, some of which may not be treatable by health workers. This theme emerged inductively from the data. They used alcohol to manage some of these pregnancy-related conditions such as nausea, vomiting and abdominal pains.
They revealed that they were not aware of other remedies for these conditions, and some who knew of other options claimed they did not have access to them. "Actually I don't always take alcohol when I am not pregnant but when I have nausea and feel like vomiting like you know what pregnancy does to us, I take it and it stops the vomiting. You see, I work there in the market. I can't be vomiting everywhere or moving to the latrine to vomit all the time, otherwise I will miss or even lose some customers," (IDI, Pregnant Woman, Gulu district). Alcohol cleanses the baby in the womb An unexpected theme which emerged inductively from the data was the perceived benefit of alcohol to the unborn baby. Some respondents reported that alcohol cleanses the unborn baby in the womb. They had been advised by their mothers and grandmothers to take alcohol, especially waragi, whenever pregnant to rid their baby of any toxic and unhealthy substances. This information has been passed on from one generation to another, so waragi is reputed to be a healthy drink for pregnant mothers in these communities, to the extent that they felt disregarding it would be an injustice to their unborn babies. "I take waragi. I just feel like taking it. They say it cleanses the baby in the womb. It cleans the baby. Our mothers, our grandmothers told us that. Every time I am pregnant I have to keep taking waragi for that reason." (IDI, Gulu District). Alcohol use during pregnancy is inconsequential Previous studies have associated maternal drinking during pregnancy with prior pregnancy experiences [25], and we explored this theme. Respondents believed that alcohol use, especially low-level alcohol use, was without consequence. At least 60% of multiparous women in the sample regarded drinking during pregnancy as harmless since they had been drinking during previous pregnancies and had not noticed any negative impact on their offspring. They disbelieved advice from health workers about drinking during pregnancy. Even those who consumed small amounts of alcohol believed it would not affect their pregnancy in any way. They also noted that some local brews, especially those that did not contain marijuana, were less harmful than others. "There are women who are abusing alcohol, especially those who take too much that can affect the baby's growth but us who take a few cups of alcohol I think we are okay. It's those who are taking this local brew with marijuana in it that can harm the baby. I hear it is so strong that even when an adult man takes just one cup of it they begin to stammer in their speech. I have been taking alcohol before and all my children are okay," (IDI, Pregnant Woman, Pader District). Alcohol is shameful Yet other women considered alcohol use during pregnancy deplorable, even as they admitted drinking various types of alcoholic beverages during pregnancy. They said drinking demeaned them and stripped them of their dignity as women, as mothers and as caretakers. "Alcohol can cause shame. Sometime back I used to get embarrassed so I sat down and made a self-evaluation and that is when I said I must reduce my drinking. So I started drinking little and this time drinking from nearby or even home and one good thing is that my husband left drinking so that alone helped me a lot. As a mother you can even fail to cook and do other responsibilities. You may even forget to go for antenatal care," (IDI, Pregnant Woman, Pader District).
What women know about drinking during pregnancy Alcohol can harm mother and baby Prompted by previous research findings that women know that drinking during pregnancy can harm their unborn babies [26][27][28], we explored women's knowledge of alcohol use during pregnancy in this community. Findings revealed that some women possessed some knowledge of the dangers of alcohol use during pregnancy, although it varied from one woman to another. Some respondents had general knowledge that alcohol use during pregnancy could potentially harm the mother and unborn baby in various ways. Many said a woman may have an accident while drunk and end up hurting various body parts of the foetus or of herself. "It can be something bad. One can fall down when drunk and hurt the baby. You never know which part of the baby may get damaged, it could be the limbs and the child is born with deformed limbs. It could be the head and the child is born with a crooked head or dysfunctional brain," (IDI, Pregnant Woman, Kitgum District). Alcohol causes poverty and domestic violence This theme emerged inductively from the data. Mothers associated alcohol consumption with poverty and other social ills such as domestic violence, which they believe affect the wellbeing of the unborn baby and the entire family into which the baby is about to be born. "I highly support that all pregnant women, including me, should stop drinking alcohol because it has a lot of health effects not only on the mother but also on the unborn baby. Secondly, we all know the effect of alcohol financially. It causes a lot of poverty. A family that drinks is always poor and domestic problems are always prevalent. Children from such families don't go to school," (IDI, Pregnant Woman, Pader District). Alcohol can result in poor birth outcomes On the other hand, some respondents exhibited specific, comprehensive knowledge about the dangers of drinking during pregnancy. They said alcohol could cause miscarriage, delayed development, brain damage, low birth weight and deformation of the baby in the womb. "I know that alcohol causes serious health problems both to the mother and the unborn baby. It should be stopped. Alcohol leads to impairment, brain damage, and low thinking and these are all serious problems. The baby may be born very thin and unhealthy," (IDI, Pregnant Woman, Gulu District). Alcohol can cause complications during pregnancy Whereas some respondents had specific knowledge of the dangers of maternal drinking during pregnancy, others had only fragmentary information on its possible dangers. They said alcohol use during pregnancy could result in complications during pregnancy. They further reported that a child would be born with 'a disfigured shape of head' if the mother drank during pregnancy and the child could be 'born abnormal'. Some mentioned that the 'baby may not grow well'. "Children born of women who drink can have a crooked head. Some are born when they are drunk. They can't cry like other children. They are born abnormal. They are not like other children," (IDI, Pregnant Woman, Pader District). No knowledge on alcohol use during pregnancy Some mothers admitted they were not aware of any good or harm that alcohol use during pregnancy might do to the mother or unborn baby. "I don't know. I have no information about it. I don't know whether alcohol poses any threat to the baby or not," (IDI, Pregnant Mother, Gulu District).
Potential for FASD among children born of drinking mothers Some key informants noted that they had observed undesirable characteristics among children born of mothers who imbibe alcohol during pregnancy. From the perspective of health workers, children born of drinking mothers were fatigued at birth, inactive and small for gestational age (SGA). "I once attended to a drank mother. She was always drank whenever she came for antenatal care. She used to have sackets of waragi in her pockets. On delivery her child was small; he looked tired and didn't cry at birth," (KII, Kitgum District). Drinking alcohol as inevitable under certain circumstances Alcohol as a source of livelihood The Acholi region has some of the worst poverty indicators in the country. This theme emerged inductively. Some respondents reported that they resorted to making local brew as a source of income and inevitably found themselves consuming alcohol most of their lives, even during pregnancy, either to taste its tartness or to entertain their clients. Some revealed that they would be willing to abandon the trade given alternative sources of income that do not jeopardise their health. Some reportedly hailed from families that brewed alcohol for a living and had been habituated into drinking alcohol since childhood. These same sentiments were echoed by health workers. "It is not easy to stop alcohol use once one has already started. Some have been drinking alcohol since childhood. Some come from families where brewing alcohol has been a source of income and all family members are engaged in brewing. In that case it is hard to avoid drinking alcohol," (KII, Kitgum District). Comorbid conditions responsible for drinking This theme emerged inductively from the data. Some pregnant women said that their drinking was precipitated by comorbid conditions. Women with long-term illnesses such as Human Immunodeficiency Virus (HIV) infection reported that they were more at risk of drinking during pregnancy than other women. They said that drinking enabled them to socialise and temporarily forget their predicament. Interviews with health workers confirmed that women in HIV care were indeed more vulnerable to drinking alcohol during pregnancy than their counterparts. "Although it is not okay for pregnant women to drink, some situations force them to do so. Even though as pregnant mothers we are advised not to drink alcohol, it is very hard for some of us to comply with the situation. Thinking of swallowing Anti-Retroviral Therapy Drugs (ARVs) daily for the rest of your life, its better you drink and forget about your HIV status and about the kind of life the children you are producing are going to have when both parents are not alive. It also helps me mix with people," (IDI, Pregnant Woman, Kitgum district). Bewitched to drink alcohol Some respondents reported that they did not plan to drink but still found themselves drinking. Some believed they were bewitched or cursed to drink and become a laughing stock in society. "Some say they have been bewitched to take alcohol. They say people who don't wish them well such as neighbours, relatives or even friends could have cast a spell on them to drink endlessly and become a disgrace to society" (KII, Pader District). Craving alcohol This theme emerged inductively from the data. Some of the study participants intimated that they often craved various types of alcoholic beverages during pregnancy.
Astonishingly, some reportedly drank only during pregnancy because of this craving and stopped soon after delivery. "People have their own reasons for drinking. I see from myself most times when I am pregnant, I feel like drinking alcohol most of the time. But alcohol is a problem here. Women drink a lot of waragi and it's a problem. Sometimes we drink because our hearts feel that we should drink, the body demands for it and me, especially, I stay alone without a husband so nobody can stop me from drinking." (IDI, Pregnant Woman, Gulu District). Drinking during social gatherings Other researchers have recorded drinking as part and parcel of culture [29], and we explored this theme. Cultural festivals are deeply ingrained in the Acholi culture. These include funeral rites and marriage ceremonies that take place for half of the year. Other gatherings where drinking is the norm are clan meetings and market days, which are more regular. Respondents reported that during such occasions, drinking alcohol is part of the festivities. Without food and alcohol, many of these festivals would be incomplete. Pregnant women are lured into drinking alcohol during such festivities. Women mentioned that family and friends expected them to fully participate in all the ceremonial activities, including drinking, irrespective of their pregnancy status. "You know attending these cultural festivals is a must. Only children are left home. All daughters-in-law must come and attend whether they are pregnant or not. They do have funeral rites after every festive season. And they prepare alcohol and food. These festivities are usually funeral rites and marriage ceremonies," (KII, Gulu District). "Access is not restricted. They have to fit in. They have to do what others are doing or be seen to be uncooperative. If you are married in a home you have to do what is expected of you in your matrimonial home. Whatever your in-laws are eating or drinking you must also participate. When there are festivals women have to participate in all the activities," (KII, Kitgum District). Drinking pattern Whereas some pregnant women drank singly at home, others reportedly drank in groups in bars with friends, family members or spouses as a means of socialisation. For some, these alcoholic beverages are consumed as a repast. They claimed some of these drinks are nutritious, especially kwete; when they make it, they sometimes do not prepare any other meal, and the dregs are consumed by children. Community views about drinking during pregnancy Drinking is disgraceful In anticipation that drinking mothers would be stigmatised [8], we looked for this theme. Uncharitable, pejorative language was used by many respondents to refer to women who consume alcohol, more so during pregnancy. Words such as 'despised', 'disrespected', 'irresponsible', 'unserious', 'careless', 'useless', 'disgraceful' and 'uncultured' were used to describe these women. In some communities maternal drinking during pregnancy was frowned upon and these women were given derogatory labels. This forced them to drink in hiding, further compounding the problem since they could not access counselling. "A woman who drinks is considered careless. They are actually considered even maybe useless. Imagine mature women with children drinking. It is disgraceful. She is a shame to her family. A shame to the community and a shame to herself," (KII, Kitgum District). Alcohol is a form of socialisation This theme emerged inductively from the data.
In other communities, especially urban settings, maternal drinking during pregnancy was tolerated, particularly if it was not excessive and the women were accompanied by spouses, friends or family members. It was regarded as an opportunity to socialise with other persons and make meaningful friendships. Drinking mothers are emotionally challenged Some respondents sympathised with drinking mothers and regarded them as persons with emotional challenges that were thwarting their lives and those of their unborn babies. Such women were referred for counselling. This is illustrated by the quotation below. "They give them advice to stop drinking, especially when pregnant. They look at it badly. You know when a woman takes alcohol it is not good. They call her and sit her down not to continue alcohol use," (KII, Gulu District). Let drinking mothers be This theme emerged inductively from the data. Some had a laissez-faire attitude towards alcohol use during pregnancy. They felt women should be left to do what they like. Since most alcohol consumed is locally brewed and has no warning labels, they reasoned that drinking mothers should be left to drink as they like, as some brews were in fact nutritious. "They neglect them. They let them be. It's your life. You live it the way you like. People don't care about them that much. Everybody is preoccupied with their own issues. People just ignore them, they just don't give a damn." (KII, Gulu District). Alcohol prevention measures in communities Antenatal care education Many health workers mentioned that as part of routine antenatal care, women are educated about the dos and don'ts during pregnancy, including alcohol use and its dangers. But they also mentioned that some mothers report late for their first antenatal visit, thereby delaying receipt of this information. When asked why they reported late for antenatal care, the women said that they have to wait for their spouses' permission. "We teach women here at the health facility about dangers of drinking during pregnancy, although some report for ANC late. These women start attending antenatal care when they are like four months pregnant so they are still drinking and miss out on education sessions about alcohol use when pregnant." (KII, Kitgum District) Local council by-laws In some of the study areas, the local authorities had come up with measures that sought to regulate drinking hours and prohibit consumption of certain types of alcohol that they considered dangerous. This was more so in Gulu district. According to respondents, bars are only permitted to open from 4:00 p.m. to midnight. Individuals found drinking during working hours were subjected to disciplinary action, which included arrest by police and payment of fines, among other measures. The sale, marketing and consumption of sachet alcohol were banned outright in some communities. Clan meeting rebukes Clan meetings are an intrinsic part of the Acholi culture. Every clan meets regularly and discusses several issues pertaining to the wellbeing of their people. In these meetings, people who are considered excessive drinkers are rebuked. "Here, there are clan meetings every month. Every clan has a meeting. If there is an emergency they call a meeting immediately. You are asked to pay a fine when you do something wrong. They meet monthly and correct whatever is going wrong in the clan e.g. land disputes. Drinking alcohol also falls in there … ." (KII, Gulu District).
Community dialoguing Community dialogue meetings, commonly known as barazas, are held in which various health issues are discussed. Community members freely share their feedback about services provided by health workers. They also have question-and-answer sessions about various issues in the community. Alcohol use and its dangers were mentioned among the issues discussed during these dialogues, which are mostly funded by development partners. Group and individual counselling Community health workers have also been involved in providing both individual and group counselling to women who have been identified as problem drinkers. Some of these women have suffered other consequences as a result of drinking, such as gender-based violence. Community health workers follow up women in their communities regularly and keep counselling them to avoid alcohol. Additional challenges faced by women Lack of male spousal support This theme emerged inductively from the data. Pregnant women reported several other issues that affect them during pregnancy and predispose them to drinking. These include lack of male spousal support. Some had husbands who drink and expect them to join in, even when pregnant. This is elucidated in the following extract: "Because of problems we may have with husbands, we decide to drink to relax a bit and forget those problems. Some husbands are drunkards and want us to drink with them. Some don't provide for us. I am pregnant but he hasn't bought me any maternity dress and he has money to drink. You are a woman, so I think you know men. Sometimes, we harvest our produce and the man grabs the money from us," (IDI, Kitgum district). Maltreatment by health workers This theme emerged inductively from the data. Some respondents reported that some health workers at health facilities were unkind to them. They said they were harshly treated by medical professionals whenever they did not meet their expectations, such as reporting to antenatal care clinics in maternity wear as they had been advised. The women noted that some of these expectations were beyond their reach financially. The mothers revealed that they were forced to keep drinking to cope with the stress. "Pregnant mothers don't have food to eat, and there is no money to buy needs as told by health workers. For instance, I don't have maternity wear and yet it's needed during ANC. Without it, the health workers quarrel with us, which makes us feel bad. We work very hard to get money. Given our condition, getting firewood is a problem for us here and you can't eat what you feel like eating. I have to drink to forget the rude remarks from health workers," (IDI, Pregnant woman, Kitgum District). Abusive relationships In relation to the previous theme, this theme also emerged inductively from the data. Some pregnant mothers reported that they are trapped in abusive relationships. They said they are abused by their spouses emotionally, verbally and sometimes physically. Drinking alcohol was a temporary remedy for this challenge. "For me I drink because my husband is a drunkard and I feel bad when I haven't drank and he is drank. He uses bad language to insult me so drinking helps me not to notice those insults of his. When he drinks and I don't, I take in every insult and it hurts so much," (IDI, Kitgum). Women's views about anti-alcohol services in the community Given that previous studies have noted prevailing strategies to avert maternal alcohol use during pregnancy [30], we explored this theme deductively.
Some women did not recall receiving any information about drinking during pregnancy from any source. Some said they had received education on the dangers of drinking during pregnancy but lacked alternative remedies to treat the pregnancy-related conditions for which they consumed alcohol. Others revealed that they were willing to abandon alcohol brewing and drinking given a supportive environment. Some mothers suggested the need to use mass media to continuously educate women on the dangers of maternal drinking during pregnancy. Respondents also recommended a complete ban on alcohol in the communities to minimise temptations for its use by pregnant mothers. Finally, there was a call for community role models to influence mothers by example. Discussion According to the study findings, alcohol is a socially and economically constructed reality in the Acholi sub-region. Alcohol continues to be an integral part of several ceremonies in the study area, such as marriages, funerals, the naming of children, the resolution of legal disputes and market-day festivities. It is a part of the local value system, and women, being part of the community, find themselves consuming alcohol. This study also reveals that some women believed alcohol use during pregnancy was advantageous. These results are in concordance with findings from a study conducted among healthcare professionals providing antenatal care in Australia, some of whom reportedly recommended alcohol use to pregnant mothers to relieve stress [31]. These results are also in agreement with other previous findings that reported alcohol use during pregnancy as advantageous. In a study on maternal drinking during pregnancy in western Uganda, women reported that drinking waragi, an alcoholic drink, was believed to 'relieve heartburn' and 'make the baby lighter', reflecting their preference for vaginal delivery; others reportedly 'drank beer to make the baby big' [32]. A Ghanaian study surveying pregnant women revealed beliefs that drinking during pregnancy 'reduces stress', 'cleaned' the baby in the womb or acted as an 'appetizer' [26]. In the United Kingdom, a study revealed a public belief that light drinking in pregnancy could enhance a child's intelligence and behaviour [33]. These beliefs may be a result of the inconclusive debate as to whether light or moderate drinking may convey harm to the unborn child's health [34]. Whereas some women in this study believed alcohol use during pregnancy was inconsequential, others believed that it is medicinal and also cleanses the baby in the womb. Potential explanations for these beliefs include the fact that this information was passed on to the mothers from previous generations, but they also represent gaps in health promotion and chronic disease prevention. In this study, women, especially multiparous women, said drinking during pregnancy was harmless. This may be because they had not experienced the perils of drinking during pregnancy. This study confirms previous findings from a qualitative study conducted among pregnant women in the United Kingdom, who reported that their current drinking was influenced by experiences of previous pregnancies [35]. Women reported varied levels of knowledge of the dangers of drinking alcohol during pregnancy. In general, women were unaware or had limited knowledge of the impact of drinking on maternal and infant outcomes, such as fetal alcohol syndrome.
Similarly, studies conducted in other sub-Saharan African countries such as Ghana and eastern Nigeria reveal several myths and misconceptions about the dangers of maternal drinking during pregnancy [26,27]. In this study, women reported that they found themselves in tempting situations that lured them to consume alcohol. These included participation in economic and social activities that predisposed them to drinking during pregnancy. This is similar to another study in a tribal-dominated district in India, where pregnant women reportedly could not restrain themselves from drinking alcohol during pregnancy because it was deeply ingrained in their culture [29]. The comorbidity between alcohol consumption during pregnancy and HIV-positive status reported in this study shows that these mothers experience a significant burden of disease and other social problems. This supports previous research that linked HIV to substance abuse and poor healthcare-seeking behaviours [36,37]. Thus combined interventions addressing alcohol use during pregnancy, HIV/AIDS and the range of other social challenges that may lead to alcohol use are most appropriate. It is also startling that unkind words were used to describe women who consume alcohol during pregnancy, yet these same women were expected to fully participate in cultural festivities such as clan meetings, funeral rites, marriage ceremonies and market/auction days where drinking alcohol is the norm. This supports the view that female drinking is not a common or generally accepted part of African culture, owing to religion and/or women's gender and reproductive roles [29]. Results of a recent study on alcohol and substance abuse in northern Uganda reported that women who were using alcohol and drugs were judged harshly [8]. These findings are in agreement with our results. This is not surprising, since women are viewed as an embodiment of society's moral values in many African societies and are often judged more harshly than men in the event that they indulge in any deviant behaviour. Study strengths and limitations The strengths of this study include the use of qualitative approaches to obtain an in-depth understanding of the phenomenon of alcohol use in this community, particularly among pregnant women. The in-depth interviews provided rich, descriptive data about women's attitudes, knowledge and experiences regarding alcohol use during pregnancy. This was triangulated with information from service providers. This study discusses a topical subject and contributes to the debate about efforts aimed at reducing alcohol use among women and alcohol-exposed pregnancies, such as alcohol screening and brief intervention and choices, more so within primary healthcare systems. However, this study was done in a post-conflict setting and the findings may not be generalizable to other populations. Also, the majority of health workers interviewed were midwives and nurses because this cadre of staff was available at the health facilities during the study visits. It is possible that the views of some specific cadres, including obstetricians, gynaecologists and anaesthetists who provide birthing and pregnancy care, were not adequately represented in this study. Conclusions and recommendations Pregnant mothers in post-conflict northern Uganda regard alcohol as a remedy to some of their social, economic and health challenges. It is woven into Acholi societal customs, and women continue drinking even during pregnancy because of the existing socio-cultural norms that promote it.
The limited knowledge about the dangers of alcohol and favourable attitudes towards maternal drinking could be responsible for this practice. Policy makers at various levels should ensure that mothers are given honest and accurate information on drinking during pregnancy so that they can make informed choices. Both community leaders and immediate family members should be educated on the dangers of maternal drinking so that they can provide a supportive environment for women to abstain from alcohol or drink less during pregnancy while still respecting Acholi culture. Health care providers should also screen ANC mothers for depression, as it may be a proxy measure to identify women at risk of using alcohol, and if need be provide individual counselling, since women drink in response to various social challenges, some of which are pregnancy-related. Women with high levels of alcohol use should be encouraged to cut down on their alcohol use during pregnancy as much as possible, more so if they cannot or are unwilling to abstain. Given the strong beliefs voiced by women in this study and the socio-cultural factors around alcohol use, community strategies should be employed that minimise the risk associated with alcohol use, especially since low-level alcohol use seems to be inconsistently or only weakly associated with negative maternal and infant/child outcomes. Many healthcare providers reported that all women visiting health facilities for antenatal care are educated on the dangers of maternal drinking, yet these women had varying knowledge levels and conflicting views regarding maternal alcohol use during pregnancy. Future studies should investigate in detail the content and consistency of messages shared by health workers about pregnancy and drinking. In-depth interviews: 10 each in Gulu, Kitgum and Pader (30 in total). Over 70% of health workers interviewed were female. Thirty pregnant women who reported alcohol use during pregnancy were interviewed.
2021-11-09T14:40:50.123Z
2021-11-08T00:00:00.000
{ "year": 2021, "sha1": "22243f144273549e81fd657d6289b1f7acee4bc7", "oa_license": "CCBY", "oa_url": "https://substanceabusepolicy.biomedcentral.com/track/pdf/10.1186/s13011-021-00418-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "22243f144273549e81fd657d6289b1f7acee4bc7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
126298858
pes2o/s2orc
v3-fos-license
Countable models of the theories of Baldwin-Shi hypergraphs and their regular types We continue the study of the theories of Baldwin-Shi hypergraphs from $[5]$. Restricting our attention to when the rank $\delta$ is rational valued, we show that each countable model of the theory of a given Baldwin-Shi hypergraph is isomorphic to a generic structure built from some suitable subclass of the original class of finite structures with the inherited notion of strong substructure. We introduce a notion of dimension for a model and show that there is an elementary chain $\{\mathfrak{M}_{\beta}:\beta<\omega+1\}$ of countable models of the theory of a fixed Baldwin-Shi hypergraph with $\mathfrak{M}_{\beta}\preccurlyeq\mathfrak{M}_\gamma$ if and only if the dimension of $\mathfrak{M}_\beta$ is at most the dimension of $\mathfrak{M}_\gamma$ and that each countable model is isomorphic to some $\mathfrak{M}_\beta$. We also study the regular types that appear in these theories and show that the dimension of a model is determined by a particular regular type. Further, drawing on the work of Brody and Laskowski, we use these structures to give an example of a pseudofinite, $\omega$-stable theory with a non-locally modular regular type, answering a question of Pillay in $[9]$. Introduction Fix a finite relational language L where each relation symbol has arity at least 2 and let K L be the class of finite structures where each relation symbol is interpreted irreflexively and symmetrically. Fix a function α : L → (0, 1) ∩ Q. Define a rank function δ : K L → Q by δ(A) = |A| − Σ E∈L α(E) e E (A), where e E (A) is the number of subsets of A on which E holds. Let K α = {A ∈ K L : δ(A′) ≥ 0 for all A′ ⊆ A}. Given A, B ∈ K α , we say that A ≤ B if and only if A ⊆ B and δ(A) ≤ δ(A′) for all A ⊆ A′ ⊆ B. The class (K α , ≤) forms a Fraïssé class, i.e. K α has amalgamation and joint embedding under ≤. In [3], Baldwin and Shi initiated a systematic study of the generic structures constructed from various sub-classes K* ⊆ K α where (K*, ≤) forms a Fraïssé class. In particular they obtained the stability of the theory of the generic for (K α , ≤). We call the generic of (K α , ≤) the Baldwin-Shi hypergraph (for α). We begin in Section 2 by introducing preliminary notions that we will be using throughout this note; its subsections mirror the sections with the same name in [5]. In Section 3 we collect some known results that we will be using throughout the rest of the note. We also use the notion of an essential minimal pair and present Theorem 3.8 (see [5]), which provides the existence of infinitely many essential minimal pairs. This result will play a significant part in the results that follow. In Section 4 we begin by studying the countable models of the theory of the generic, which we denote by S α . We prove that each countable model of S α can be obtained as a generic structure by considering a particular subclass of the class of finite substructures used to construct the Baldwin-Shi hypergraph with the naturally inherited notion of strong substructure. We then use this result, along with a notion of dimension for models, to prove Theorem 4.7, which establishes that the countable spectrum is ℵ₀. In Theorem 4.8 we sharpen this result and show that for countable M, N |= S α , if the dimension of M is at most the dimension of N, then M embeds elementarily into N. Thus the countable models of S α form an elementary chain {M β : β < ω + 1} with M β ≼ M γ for β ≤ γ, and each model of S α is isomorphic to some M β .
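Before continuing the outline, we note that the rank δ and the relation ≤ just defined are finitary and can be computed directly for small structures. The following short Python sketch is our own illustration (the function names and the toy data are not from the paper); it checks the two definitions on a tiny hypergraph with a single binary relation and α(E) = 1/2.

    from fractions import Fraction
    from itertools import chain, combinations

    def delta(A, relations, alpha):
        # delta(A) = |A| - sum over E of alpha(E) * #(subsets of A on which E holds)
        d = Fraction(len(A))
        for E, held in relations.items():   # held: frozensets on which E holds
            d -= alpha[E] * sum(1 for s in held if s <= A)
        return d

    def strong(A, B, relations, alpha):
        # A <= B: A is a subset of B and delta(A) <= delta(A') for all A ⊆ A' ⊆ B
        if not A <= B:
            return False
        extra = B - A
        subsets = chain.from_iterable(combinations(extra, k)
                                      for k in range(len(extra) + 1))
        return all(delta(A, relations, alpha) <= delta(A | set(s), relations, alpha)
                   for s in subsets)

    # Toy example: one binary relation E with alpha(E) = 1/2, holding on {a,b} and {b,c}.
    alpha = {"E": Fraction(1, 2)}
    relations = {"E": {frozenset({"a", "b"}), frozenset({"b", "c"})}}
    print(delta({"a", "b", "c"}, relations, alpha))       # 3 - 2*(1/2) = 2
    print(strong({"a"}, {"a", "b", "c"}, relations, alpha))  # True: delta never drops

Exact rational arithmetic (Fraction) matters here, since the theory is developed for rational-valued δ and the constant c below is the least common multiple of the denominators of the α(E).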
In Section 5 we study the regular types of S α . A key result is Theorem 5.11, which identifies certain types as being regular. In Theorem 5.12 we establish that a certain class of regular types are domination equivalent. We also show that these regular types are non-trivial and that their independent realizations determine the dimension of a model introduced in Section 4. We end the section with Theorem 5.13, which establishes that a large class of types are not regular. In Section 6, drawing on the work of Brody and Laskowski, Hrushovski and Wagner, we observe that certain of these generic structures have pseudofinite, ω-stable theories with non-locally modular regular types. This answers a question of Pillay in [9] on whether all regular types in a pseudofinite stable theory are locally modular. The author wishes to thank Chris Laskowski for all his help and guidance in the preparation of this note. Preliminaries We work throughout with a finite relational language L where each relation symbol E ∈ L is at least binary. Let ar : L → {n : n ∈ ω and n ≥ 2} be the function that takes each relation symbol to its arity. Some general notions We begin with some notation. Notation 2.1. We let K L denote the class of all finite L-structures A (including the empty structure), where each E ∈ L is interpreted symmetrically and irreflexively in A: i.e. A ∈ K L if and only if for every E ∈ L with ar(E) = n and every tuple a from A with A |= E(a), a has no repetitions and A |= E(π(a)) for every permutation π of {0, . . ., n − 1}. By K̄ L we denote the class of L-structures whose finite substructures all lie in K L , i.e. K̄ L = {M : M is an L-structure and A ∈ K L for every finite A ⊆ M}. We now introduce the class K α as a subclass of K L , with the rank δ and the relation ≤ as defined in the introduction. Typically the notion of ≤ is defined on K α × K α . However, we define the concept on the broader class K L × K L . This will allow us to make the exposition significantly simpler. Definition 2.5. By K̄ α we denote the class of all L-structures whose finite substructures are all in K α , i.e. K̄ α = {M : M is an L-structure and A ∈ K α for every finite A ⊆ M}. The notion of strong substructure extends to structures in K̄ L : given M ∈ K̄ L and finite A ⊆ M, we say A ≤ M if and only if A ≤ B for every finite B with A ⊆ B ⊆ M. Definition 2.7. Let n be a positive integer. A set {B i : i < n} of elements of K α is disjoint over A if A ⊆ B i for each i < n and B i ∩ B j = A for i < j < n. If {B i : i < n} is disjoint over A, then D is the free join of {B i : i < n} if the universe of D is ∪{B i : i < n}, B i ⊆ D for all i, and there are no additional relations, i.e. E D = ∪{E Bi : i < n} for all E ∈ L, where E D is the set of subsets of D on which E holds. We denote a free join by ⊕ i<n B i . In the case n = 2 we will use the notation B 0 ⊕ A B 1 . We now turn our attention towards constructing the generic structure for (K α , ≤). We note that ∅ ∈ K α . It is also immediate that δ(∅) = 0 and that K α is closed under substructure. The following is well known (see for example [3]). Fact 2.9. The relation ≤ is reflexive and transitive, and if A ≤ C and A ⊆ B ⊆ C, then A ≤ B. Further, (K α , ≤) is a Fraïssé class, and a generic structure for (K α , ≤) exists and will be called the Baldwin-Shi hypergraph (for (K α , ≤)). Closed sets In this section we generalize the notion of closed set to arbitrary subsets. This will provide us with a useful tool for analyzing the theory of Baldwin-Shi hypergraphs. Further, it is immediate from the definition of a closed set that for any Z ∈ K̄ α , Z is closed in Z and that the intersection of a family of closed sets of Z is again closed. These observations justify the following definition: Definition 2.12. Let Z ∈ K̄ L and X ⊆ Z. The intrinsic closure of X in Z, denoted by icl Z (X), is the smallest set X′ such that X ⊆ X′ ⊆ Z and X′ is closed in Z.
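Since a free join (Definition 2.7) adds no relations beyond those of its factors, its rank decomposes additively over the common part. The following short LaTeX note is our own worked remark, recording this standard computation from the definition of δ:

    % Writing \delta(B/A) := \delta(B) - \delta(A) for A \subseteq B, counting
    % points and relation-sets in a free join gives
    \[
    \delta\Big(\bigoplus_{i<n} B_i\Big) \;=\; \delta(A) \;+\; \sum_{i<n} \delta(B_i/A),
    \qquad\text{so in particular}\qquad
    \delta(B_0 \oplus_A B_1) \;=\; \delta(B_0) + \delta(B_1) - \delta(A).
    \]
    % Each relation-set of the free join lies inside a single B_i, and those
    % inside A are counted once; the identity is used repeatedly below when
    % free joins of copies of a fixed structure over A are taken to adjust
    % \delta(\,\cdot\,/A) in steps of a fixed rational.

This additivity is what makes constructions such as "take sufficiently many isomorphic copies and free-join them" (used throughout Sections 4 and 5) give precise control over the rank.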
An easy argument shows that a finite set is closed in an ambient structure Z ∈ K̄ L if and only if it is strong in Z. Some basic properties of the rank function We start exploring the rank function δ in more detail. The following facts will be very useful throughout and we will often use them without explicitly pointing them out. We gather them here for convenience. Their proofs follow from routine computations and are well known. A collection of known results In this section we provide some key definitions and results. We let c := lcm{q E : E ∈ L}, where α(E) = p E /q E is in reduced form. We begin with some definitions and some notation. Definition 3.1. The theory S α is the smallest set of sentences ensuring that if M |= S α , then 1. M ∈ K̄ α , i.e. every finite substructure of M is in K α 2. For all A ≤ B from K α , every (isomorphic) embedding f : A → M extends to an embedding g : B → M Remark 3.2. We note that S α is a collection of ∀∃-sentences. Given A ∈ K α and a fixed enumeration a of A, we write ∆ A (x) for the atomic diagram of A. Also, for A, B, C ∈ K L with A ⊆ B ⊆ C and fixed enumerations a, b, c respectively, with a an initial segment of b and b an initial segment of c, we let ∆ A,B (x, y) be the atomic diagram of B with the universe of A enumerated first, according to the enumeration a. Definition 3.4. Let A, B ∈ K α and assume A ⊆ B. Let Ψ A,B (x) = ∆ A (x) ∧ ∃y∆ A,B (x, y). Such formulas are collectively called extension formulas (over A). A chain-minimal extension formula is an extension formula Ψ A,B where B is the union of a minimal chain over A. We collect some key results about S α from various sources in the following. Theorem 3.6. 1. Every L-formula is S α -equivalent to a boolean combination of chain-minimal extension formulas (see [5]). 2. The theory S α is complete and is the theory of the generic for (K α , ≤) (see [7] or [5]). 3. The theory S α is ω-stable. 4. Given any M |= S α and X ⊆ M , X is algebraically closed in M if and only if X is intrinsically closed in M (see [3], [11] or [5]). 5. The theory S α has weak elimination of imaginaries, i.e. every complete type over an algebraically closed set in the home sort is stationary (see [3], [5] or [10]). 6. Let M |= S α and A be a finite closed set of M. Suppose that π is a consistent partial type over A. Then (a) if M is ℵ₀-saturated and any realization b of π in M has the property that bA is closed in M, then π has a unique completion to a complete type p over A; and (b) if any realization b of the quantifier-free type of π (over A) has the property δ(b/A) = 0, then π has a unique completion p over A, and further p is isolated by the formula ∆ A,Ab (a, x) (see [5], [3] or [10]). We now define essential minimal pairs. We use them here to study various properties of forking. The following appears in [5]. It will form the backbone of many of the results to follow. Definition 3.7. Let B ∈ K α with δ(B) > 0. We call (B, D), where D ∈ K α and B ⊆ D, an essential minimal pair if (B, D) is a minimal pair and for any proper D′ ⊆ D, δ(D′/D′ ∩ B) ≥ 0. Theorem 3.8. Let A ∈ K α with δ(A) = k/c > 0. We can construct infinitely many non-isomorphic D ∈ K α such that (A, D) is an essential minimal pair with δ(D/A) = −1/c. We immediately obtain the following lemma that will be very useful for proving results about regular types in Section 5. Lemma 3.9. Fix n ≥ 2 and let A, C 1 , . . ., C n ∈ K α be such that A ≤ C i and δ(C i /A) > 0 for each 1 ≤ i ≤ n. Let C = ⊕C i be the free join of the C i over A. Then there are infinitely many essential minimal pairs (C, D) such that δ(D/C) = −1/c. Further, for any such essential minimal pair (C, D) and any Φ 0 ⊊ {1, . . ., n}, C Φ0 , the free join of the C i such that i ∈ Φ 0 , sits strongly in D. Proof. Note that under the given conditions, Theorem 3.8 yields the existence of infinitely many D ∈ K α with (C, D) an essential minimal pair and δ(D/C) = −1/c. Fix such an essential minimal pair (C, D). We claim that for any Φ 0 ⊊ {1, . . ., n}, C Φ0 ≤ D. Note that C ∅ = A, and the claim will follow if we establish this result for even one Φ 0 ≠ ∅, as ≤ is transitive. So assume that Φ 0 ≠ ∅. 4 Countable models of S α Our goal in this section is to study the countable models of S α . We begin by defining a notion of dimension for (countable) models. We then show that this notion of dimension is able to categorize countable models up to both isomorphism and elementary embeddability. Recall that c is the least common multiple of the denominators of the α(E) (in reduced form). We begin with the following technical lemma: We use Theorem 3.8 to create J such that (D, J) is an essential minimal pair with −1/c = δ(J/D). Consider H, the free join of m isomorphic copies J 1 , . . ., J m of J over D.
Clearly H ∈ K̄ L . We claim that H is as required. We now work towards showing that certain countable models of S α can be built as Fraïssé limits of (K k/c , ≤). Lemma 4.4. For any fixed integer k ≥ 0, (K k/c , ≤), where ≤ is inherited from K α , is a Fraïssé class. Further, if B ∈ K α , there exists D ∈ K k/c such that B ⊆ D and δ(D) = k/c. Proof. Fix an integer k ≥ 0 and consider K k/c . Let A, B, C ∈ K k/c . Note that for the purposes of proving amalgamation, we may as well assume B, C are freely joined over A and that A ≤ B, C. Note that δ(B/A) = δ(C/A) = 0. The required statement follows by a simple application of Lemma 4.3 to B ⊕ A C (taking isomorphic copies to obtain freeness as required), the free join of B, C over A, with the resulting H satisfying the required properties. For joint embedding consider ∅ ≤ B, C. In Theorem 4.7 we show that these are, in fact, all of the countable models (up to isomorphism). We now work towards classifying the countable models of S α up to isomorphism using our notion of dimension. Lemma 4.6. Let M |= S α and A ≤ M be finite. Let D ∈ K α be such that A ≤ D. Then dim(M/A) ≥ δ(D/A) if and only if there is some g such that g strongly embeds D into M over A. Proof. The statement that if there is some g such that g strongly embeds D into M over A, then dim(M/A) ≥ δ(D/A) is immediate from the definition. Thus we prove the converse. Let A ≤ M be finite. Let D ∈ K α be such that A ≤ D. So assume that each ϕ i is the negation of a chain-minimal formula. Note that we may split b = b 1 b 2 where b 1 is formed via a minimal chain and Ab 1 ≤ N . As above, it follows that b 1 ∈ M lg(b1) . But as M |= S α , it follows that there exists a b′ 2 ∈ M lg(y)−lg(b1) that is isomorphic to b 2 over Ab 1 . It is now easily seen that b 1 b′ 2 ∈ M lg(y) and N |= ϕ i (a, b 1 b′ 2 ) for each i. Thus N is an elementary extension of M. The rest of the claim follows from Theorem 4.7. 5 Regular Types In Section 5 we turn our attention towards the study of regular types. We fix a monster model 𝔐 of S α . Recall the notions of d(A) and d(B/X) for finite A ⊆ 𝔐 and X ⊆ 𝔐 from Definition 3.5. We begin by extending this notion to a type as follows (see also [2]). Definition 5.1. Let 𝔐 be the monster model of S α and let X be a small subset of 𝔐. Let p ∈ S(X). We let d(p/X) = d(b/X) for some (equivalently any) realization b of p. Now, due to ω-stability and weak elimination of imaginaries (see (3) and (5) of Theorem 3.6), it suffices to restrict our attention to non-algebraic types over finite algebraically closed sets (in the home sort) for the study of regular types. So fix some finite A ≤ 𝔐 (recall that algebraically closed sets are precisely the intrinsically closed ones). In what follows we freely use regular types, orthogonality, modular types etc. and facts about them. The relevant definitions and results can be found in [8]. We begin by identifying certain types satisfying d(p/A) = 0 or d(p/A) = 1/c as regular types in Theorem 5.11. Remark 5.2. Let A ≤ 𝔐 be finite and b be finite such that b ∩ A = ∅. Now let A ⊆ C also be finite. Note that b |⌣ A C if and only if acl(bA) |⌣ acl(A) acl(C). Since S α has finite closures, it follows that acl(bA), acl(C) are both finite. Thus in order to understand non-forking, it suffices to look at types p ∈ S(A) such that x ≠ a ∈ p for all a ∈ A and such that for any b |= p, bA ≤ 𝔐. Note that this information, along with the atomic diagram of some (equivalently any) realization of p, is sufficient to determine p uniquely, as noted in (1) of Theorem 3.6. Also, such a type p is non-algebraic and stationary, as A is algebraically closed. In light of our comments at the beginning of Section 5 and Remark 5.2, it suffices to study basic types over finite sets in order to understand regular types (i.e. we can choose a basic type to represent the required parallelism class). Definition 5.3. Let A ≤ 𝔐 be finite and p ∈ S(A); we say that p is a basic type if x ≠ a ∈ p for all a ∈ A and for some (equivalently any) b |= p, bA ≤ 𝔐. Lemma 5.5. Let A ∈ K α . Then there is B = AC ∈ K α with A ∩ C = ∅, A ≤ B and δ(B/A) = δ(C/A) = 1/c. Proof. Consider the structure given by A* = A ⊕ A 0 where A 0 ∈ K α consists of a single point. Using Theorem 3.8 we can construct a structure D ∈ K α such that δ(D/A*) = −1/c and for all proper D′ ⊆ D, δ(D′/D′ ∩ A*) ≥ 0.
Take sufficiently many isomorphic copies D 1 , . . ., D n of D so that B, the free join of the D i over A*, has the property δ(B/A*) = 1/c − 1. We leave it to the reader to verify that B has the required properties. To further our study of regular types, we begin by studying basic types such that d(p/A) = 0 or 1/c, where A ≤ 𝔐 is finite. The choice to restrict our attention to such types will be justified by Theorem 5.13, where we show any type p with d(p/A) ≥ 2/c cannot be regular. We begin our analysis of types that can be regular by defining nuggets and nugget-like types. Definition 5.7. Let A ≤ 𝔐 be finite. We say that a basic type p ∈ S(A) is nugget-like over A if, given B realizing the quantifier-free type of p over A, B is a k/c-nugget over A for some k ∈ N. We now explore how the behavior of the d function interacts with nugget-like types. The following result is well known (see e.g. Theorem 3.28 of [3] or Lemma 3.13 of [10]). The following is a more restrictive form of Lemma 2.6 of [2]. Lemma 5.9. Let A ≤ 𝔐 be finite and let p ∈ S(A). Suppose that for some k ∈ N, d(p/A) = k/c. Let A ⊆ X ≤ 𝔐. Suppose that q ∈ S(X) extends p. If d(q/X) < d(p/A), then q is a forking extension of p. We now obtain the following fact about the forking of nugget-like types: Lemma 5.10. Let A ≤ 𝔐 be finite and let p ∈ S(A) be nugget-like. Let A ⊆ Y ≤ 𝔐. Let q be an extension of p to Y . Now q is a forking extension of p if and only if d(q/Y ) < d(p/A) or, given b |= q, b ⊆ Y . Proof. If d(q/Y ) < d(p/A), then Lemma 5.9 tells us that q is a forking extension of p. Further, Y is algebraically closed. So if for any b |= q, b ⊆ Y , it follows that q is an algebraic type over Y . Since p is not an algebraic type over A, it follows that q is a forking extension of p. For the converse, assume that q is a forking extension of p and that d(q/Y ) = d(p/A). As q is a forking extension of p, it follows that icl(bA) ∩ icl(Y ) ⊋ icl(A). But icl(A) = A, icl(Y ) = Y and, as b realizes p over A, icl(bA) = bA. The following theorem allows us to identify certain regular types. Further, it establishes that 0-nuggets are, in some sense, orthogonal to almost all other types. Theorem 5.11. Let A ≤ 𝔐 be finite and let p, q ∈ S(A) be distinct and nugget-like. Now if d(p/A) = 0 or d(p/A) = 1/c, then p is regular. Further, if d(p/A) = 0, then p, q are orthogonal to each other. Proof. Under the given conditions p is clearly non-algebraic and stationary. We directly establish that it will be orthogonal to any forking extension of itself. Let A ⊆ X ≤ 𝔐. First assume that d(p/A) = 0. Let p′ be a forking extension of p to X. Let b |= p| X and f |= p′. It follows easily from Proposition 5.8 that d(f /A) ≥ d(f /X). As d(f /A) = 0 and d(f /X) ≥ 0, it now follows that d(f /X) = 0. Similarly we obtain that 0 = d(f /X) = d(f /Xb). We now show that bX ∩ f X = X, which yields b |⌣ X f (see (8) of Theorem 3.6). Towards this end consider X′ = Xb ∩ Xf . If X′ = X, then we obtain b |⌣ X f . So suppose not. Since b |= p| X it follows that b ∩ X = ∅. Thus X′ − X = b ∩ f . Using the fact that both b, f are nuggets over A, if X′ − X ≠ ∅, then it follows that b = f , a contradiction (as f then realizes p| X ). Thus b |⌣ X f . Hence p is regular. So assume that d(p/A) = 1/c. Let p′, b and f be as above. By Lemma 5.10, either d(f /X) < d(p/A) or f ⊆ X. For the second half of the claim, assume that d(p/A) = 0.
Let p′, q′ be the non-forking extensions to X ⊇ A of p, q respectively. Here we may as well assume that X is algebraically closed. Now d(p/A) = d(p′/X) and d(q/A) = d(q′/X) as p′, q′ are non-forking extensions of p, q respectively. Assume that b |= p′ and f |= q′. Arguing as above, we can show that if b ∩ f ≠ ∅, then b = f. But this contradicts p ≠ q. Thus it follows that bX ∩ fX = X. Further, 0 = d(b/X) ≥ d(b/Xf) ≥ 0. Again by (8) of Theorem 3.6, we obtain that b | X f and thus p, q are orthogonal.

The following theorem shows that while there are many regular types with d(p/A) = 1/c, all such types are domination equivalent. Thus, up to domination equivalence, there is only one regular type with d(p/A) = 1/c. This is in contrast to distinct 0-nuggets, any two of which are orthogonal to each other. We also show that 1/c-nuggets are non-trivial and that the number of independent realizations of a 1/c-nugget determines the dimension of a model.

Theorem 5.12. Let A be closed and finite and let p, q ∈ S(A) be distinct and satisfy d(p/A) = d(q/A) = 1/c. Then
1. p, q are non-orthogonal. Hence any two regular types p′, q′ ∈ S(X), where X is closed and d(p′/X) = d(q′/X) = 1/c, are domination equivalent.
2. p is non-trivial.
3. Let A = ∅ and let M |= S α be an elementary submodel of the monster model. The dimension of M is determined by the number of independent realizations of p in M. Thus a single regular type determines the dimension of M.

Proof. Let A be as given. Consider A as a finite structure that lives in K α.

(1): Consider the finite structures AB, AC where B, C realize the quantifier-free types of p, q respectively. Using Lemma 3.9, we can create an essential minimal pair (ABC, D) such that A, AB, AC ≤ D and δ(D/ABC) = −1/c. Let f be a strong embedding of D into M where f is the identity on A. From (6) of Theorem 3.6 and the transitivity of ≤, it follows that f(B) |= p and f(C) |= q. Now from (8) of Theorem 3.6, it follows that f(B) is not independent from f(C) over A, and thus p is non-orthogonal to q. For the second half of the claim, note that given p′, q′ ∈ S(X), there exist A′ finite and closed such that p′ is based and stationary over A′, and B′ such that q′ is based and stationary over B′. Let X′ be the closure of A′B′ and consider p′|X′, q′|X′. Since regularity is parallelism invariant, both p′|X′ and q′|X′ are regular. Arguing as above we see that p′|X′ is non-orthogonal to q′|X′. For regular types, being non-orthogonal is equivalent to being domination equivalent. Since domination equivalence is invariant under parallelism, the result follows.

(2): Consider the finite structure C = ⊕B i, the free join of three copies of the quantifier-free type of p over A. Use Lemma 3.9 to construct a finite structure D ⊇ C such that for all Φ 0 ⊊ {1, 2, 3}, C Φ0 = ⊕B i, the free join of the B i over A with indices in Φ 0, satisfies C Φ0 ≤ D but C ̸≤ D. An argument similar to the one in (1) shows that a strong embedding of D into M over A witnesses the non-triviality of p.

(3): Let M be an elementary submodel of the monster model and assume that A = ∅. Given n ∈ N, consider the finite structure C n that is the free join of n copies of the quantifier-free type of p over ∅. If dim(M) ≥ n/c, by Lemma 4.6, there is a strong embedding of C n into M. It is easily checked that the strong embedding witnesses n independent realizations of p. The rest follows easily.

The following result shows that a broad class of types cannot be regular and justifies the choice to study types p ∈ S(A) with d(p/A) = 0, 1/c in our study of regular types.

Theorem 5.13. Let A be finite and closed in M. Let p ∈ S(A) be a basic type such that d(p/A) ≥ 2/c. Then p is not regular.
Proof. Recall that a regular type has weight 1. We establish the above result by showing that p has pre-weight at least 2 and hence weight at least 2. Our strategy is similar to the one used in Theorem 5.12: we consider A as living inside of K α. We then construct a finite structure G over the finite structure A that we then embed strongly into M over A using saturation. Finally, we argue that the strong embedding witnesses a realization of p and also the fact that the pre-weight of p is at least 2.

Consider A as a finite structure that lives in K α. By Lemma 5.5 we may construct D ∈ K α such that D = AC, A ∩ C = ∅ (as sets), and A ≤ D with δ(D/A) = δ(C/A) = 1/c. Let AB be such that B realizes the quantifier-free type of p over A. Consider the finite structures F i, the free join of AB and an isomorphic copy of D over A. We label the isomorphic copies as AC 1, AC 2, and thus F i = ABC i, the free join of AB and AC i over A.

Remark 6.2. We note that in the above proof, we can replace our choice of 1/2 by any rational in (0, 1) and the requirement that E be binary by any arity.

Remark 6.3. The restriction of α : L → (0, 1) is not necessary. In fact, as long as each E ∈ L is not 2-ary, all of the results in Sections 4 and 5 hold. In the case that each E ∈ L is binary and α(E) = 1 for each E ∈ L, the results of Section 4 and Section 5, barring the non-triviality of a 1/c-nugget, hold for the resulting generics. However, Theorem 3.8 fails in this context, which necessitates the use of lengthy ad hoc arguments that we omit. It should be noted that in this case S α will have trivial independence and thus is an example of an ω-stable pseudofinite theory where all regular types are locally modular.

For a fixed enumeration a of A, we write ∆ A (x) for the atomic diagram of A. Also, for A, B, C ∈ K L with A ⊆ B ⊆ C and fixed enumerations a, b, c respectively, with a an initial segment of b and b an initial segment of c, we let ∆ A,B (x, y) be the atomic diagram of B with the universe of A enumerated first according to the enumeration a.

Definition 3.4. Let A, B ∈ K and assume A ⊆ B. Let Ψ A,B (x) = ∆ A (x) ∧ ∃y∆ A,B (x, y). Such formulas are collectively called extension formulas (over A). A chain minimal extension formula is an extension formula Ψ A,B where B is the union of a minimal chain over A.

(a) If M is ℵ 0 -saturated and any realization b of π in M has the property that bA is closed in M, then π has a unique completion to a complete type p over A. (b) If any realization b of the quantifier-free type of π (over A) has the property δ(b/A) = 0, then π has a unique completion p over A, and further p is isolated by the formula ∆ A,Ab (a, x).

Definition 3.7. Let B ∈ K α with δ(B) > 0. We call D ∈ K α with B ⊆ D an essential minimal pair if (B, D) is a minimal pair and, for any D′ ⊊ D, δ(D′/D′ ∩ B) ≥ 0.

Lemma 4.4. For any fixed integer k ≥ 0, (K k/c, ≤), where ≤ is inherited from K α, is a Fraïssé class. Further, if B ∈ K α, there exists D ∈ K k/c such that B ⊆ D and δ(D) = k/c.

Proof. Fix an integer k ≥ 0 and consider K k/c. Let A, B, C ∈ K k/c. Note that for the purposes of proving amalgamation, we may as well assume B, C are freely joined over A and that A ≤ B, C. Note that δ(B/A) = δ(C/A) = 0. The required statement follows by a simple application of Lemma 4.3 to B ⊕ A C (taking isomorphic copies to obtain freeness as required), the free join of B, C over A, with the resulting H satisfying the required properties. For joint embedding consider ∅ ≤ B, C.
Note that δ(B/∅) = δ(C/∅) = k/c. Apply Lemma 4.3 to B ⊕ ∅ C, the free join of B, C over ∅. Again the resulting H satisfies the required properties. For the second half of the claim, let B ∈ K α. If δ(B) = k/c, then we can simply take D = B. So consider B with δ(B) > k/c. Using Lemma 3.8 recursively, construct a sequence of minimal pairs D i of length c(δ(B) − k/c) such that each D i ∈ K α, B = D 1, and (D i, D i+1) is a minimal pair with δ(D i+1 /D i) = −1/c. Clearly D c(δ(B)−k/c) is as required. The fact that such a sequence exists follows as δ(D i+1) = δ(D i) − 1/c. So assume that δ(B) < k/c. Take B′ = B ⊕ B 0 where B 0 is such that δ(B′) > k/c. Now we are in the above case and we finish.

Theorem 4.5. Let k be a fixed integer with k ≥ 0. Let M k/c be the generic for the Fraïssé class (K k/c, ≤), where ≤ is inherited from K α. Then M k/c |= S α and dim(M k/c) = k/c.

Proof. Fix an integer k ≥ 0. From Lemma 4.4, it follows that (K k/c, ≤), where ≤ is inherited from K α, is a Fraïssé class. Let M k/c be the (K k/c, ≤) generic. Note that given B ∈ K α, there is some D ∈ K k/c such that D ⊇ B by Lemma 4.4. Thus it suffices to show that M k/c satisfies the extension formulas in S α. Let A, B ∈ K α with A ≤ B and assume that A is a finite subset of M k/c. As M k/c is the (K k/c, ≤) generic, there is some C ≤ M k/c with A ⊆ C and δ(C) = k/c. By Fact 2.8, we have that D = B ⊕ C, the free join of B, C over A, is in K α. But now there is some G ∈ K k/c such that D ⊆ G. And as M k/c is the (K k/c, ≤) generic, we can find a strong embedding of G into M k/c over C. Thus it follows that M k/c |= ∀x∃y(∆ A (x) ∧ ∆ A,B (x, y)). Hence it follows that M k/c |= S α. Further, as noted above, given any finite A ⊆ M k/c, there is some C ≤ M k/c with A ⊆ C and δ(C) = k/c. Hence dim(M k/c) = k/c.
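Since every minimal pair in the chain used in the second half of Lemma 4.4 drops the predimension by exactly 1/c, the number of steps needed is a one-line computation. The display below is only a worked restatement of that arithmetic, under the convention D 1 = B (the subscript of the final structure may differ by one from the text's indexing), and assuming, as the k/c values throughout suggest, that δ takes values in (1/c)·Z, so that N is a positive integer whenever δ(B) > k/c:

```latex
\delta(D_{i+1}) = \delta(D_i) - \tfrac{1}{c}
\;\Longrightarrow\;
\delta(D_{1+N}) = \delta(B) - \tfrac{N}{c},
\qquad N := c\bigl(\delta(B) - \tfrac{k}{c}\bigr),
```
```latex
\delta(D_{1+N}) = \delta(B) - \bigl(\delta(B) - \tfrac{k}{c}\bigr) = \tfrac{k}{c}.
```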
2018-07-16T16:33:06.000Z
2018-04-03T00:00:00.000
{ "year": 2018, "sha1": "a16b807b31c3d4dbfeb9c528448cccd8846aaee3", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1804.00932", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f3ab6f2075b0d05eef3c3263893544644feac799", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
244220576
pes2o/s2orc
v3-fos-license
Mucormycosis: A Case Series of Patients Admitted in Non-COVID-19 Intensive Care Unit of a Tertiary Care Center during the Second Wave

Abstract

Rhino-orbital-cerebral mucormycosis is an invasive fungal infection associated with mortality of 25–62%. There has been a surge in the number of cases during this second wave of coronavirus disease-2019 (COVID-19) in India. We report 10 cases of mucormycosis admitted to our adult intensive care unit. After reviewing the patients' information, we found that 60% of patients had received steroids, and most had uncontrolled blood sugar levels. Most patients received treatment with surgical debridement and antifungals, although the mortality rate was as high as 40%. We report two unique cases of renal and gastrointestinal mucormycosis as well. We concluded that poor glycemic control was the primary etiology behind the rise in the number of cases. Our report also stresses the importance of early surgical intervention and suggests further research comparing the efficacy of combination antifungal therapy versus a single antifungal (amphotericin B) to help resource-limited settings in these times of drug crisis.

How to cite this article: Yadav S, Sharma A, Kothari N, Bhatia PK, Goyal S, Goyal A. Mucormycosis: A Case Series of Patients Admitted in Non-COVID-19 Intensive Care Unit of a Tertiary Care Center during the Second Wave. Indian J Crit Care Med 2021;25(10):1193–1196.

Introduction

Mucormycosis is a rare but potentially life-threatening infection caused by a fungus belonging to the Mucoraceae family. Generally, the susceptible population includes patients who have malignancy, are on steroids, are immunocompromised, have uncontrolled diabetes, etc. 1 The annual incidence of mucormycosis is estimated to range from 1.7 cases per 1,000,000 inhabitants in the United States to 140 cases per 1,000,000 in India and Pakistan. 2 There has been a sudden surge in the number of mucormycosis cases during the second wave of coronavirus disease-2019 (COVID-19) in India. 3 The most common sites involved in Mucor infection are the paranasal sinuses and orbit, leading to pain, swelling, numbness, and visual defects. However, renal and gastrointestinal involvement has also been reported, though rarely. 4,5 Here, we present a case series of 10 patients who were admitted to our adult ICU for mucormycosis management.

Case Series

We collected information on the patients admitted to our adult ICU who were either diagnosed with mucormycosis on admission or found to have it later in the course of treatment. The data were recorded over a period of 1 month from May to June; we recorded demographic information, comorbidities, history of COVID-19 infection, dependency on steroids, evidence of mucormycosis, treatment received, duration between symptoms and surgery, length of ICU stay, and outcome (Table 1). The mean age of the study group was 47.5 years, ranging from 31 to 65 years. Forty percent were female and 60% male. Of the 10 patients, 4 had COVID-19 infection, whereas 7 were recently diagnosed or known diabetics. Except for one patient, who was a case of chronic liver disease (CLD), all the patients had uncontrolled sugar as shown by their HbA1c (glycosylated hemoglobin) levels. Including the four patients who were earlier COVID-19 positive, two more patients received steroid therapy; one was a case of rheumatoid arthritis, and the other was given steroids at a local hospital on suspicion of COVID-19, as the radiological picture was that of viral pneumonia.
The mean duration of steroid therapy was 11 days. The most common presenting symptom was facial swelling. The mean duration between COVID symptoms and symptoms of mucormycosis was approximately 21 days. One patient presented with unique complaints of abdominal pain, decreased urination, and burning micturition and was diagnosed with renal mucormycosis on kidney biopsy. One more patient presented with abdominal pain, vomiting, and constipation and was retrospectively found to have abdominal mucormycosis on histopathological examination (HPE). Most patients (8) required surgery and received liposomal amphotericin B either alone or in combination with posaconazole, except for two patients who received posaconazole tablets. Antibiotics were administered in accordance with ICU protocol and culture sensitivity. Surgery could not be done for one patient with CLD who was hemodynamically unstable and for another patient who became brain-dead. Of the 10 patients, 4 succumbed to their illness, while 6 were discharged; hence, the mortality was 40%, which is similar to the figures reported by Mishra et al. 2

Discussion

Rhinocerebral mucormycosis is associated with a mortality rate of 25-60%. 6 During the second wave of the COVID-19 pandemic, which India is going through, there has been a surge in the number of mucormycosis cases, and it has been made a notifiable disease in many states, including Rajasthan. Predisposing factors like diabetes mellitus, diabetic ketoacidosis, malignancy, an immunocompromised state, and steroid therapy can lead to flaring infection by this fungus, owing to a lack of adequate chemotactic response, neutropenia, or monocyte dysfunction. The use of steroid therapy in the treatment of COVID-19, as advocated by the RECOVERY trial, can be one factor for this surge. But multiple unevaluated factors may also have contributed, such as:

• uncontrolled sugars;
• the use of industrial oxygen cylinders during the oxygen crisis; and
• the pro-coagulable state of the patient in COVID-19.

The disease is more common in males, as seen in our case series as well. 7 Of the 10 patients studied, 6 had received steroids, which could be the etiology of immunosuppression, but what is glaring is the uncontrolled sugar in all but 1 patient, which would have made them susceptible to the fungus. In addition, the one patient with normal HbA1c levels was a patient with CLD, which is in itself an immunosuppressed state, thus predisposing to the infection. We also report two unique cases of renal and intestinal mucormycosis. The etiology of such involvement is endothelial invasion of the vessels by the fungus, leading to thrombosis and infarction of the tissue. The most common route of spread for renal mucormycosis is hematogenous, whereas the intestinal one can be due to ingestion or hematogenous spread, or both. 8 In the gastrointestinal tract, the stomach and colon are commonly involved, as seen in our patient who had distal ileal and colonic gangrene. 5 Isolated renal mucormycosis is rare and is generally bilateral, which carries a very poor prognosis. 9 A high index of suspicion and early renal biopsy could help in early diagnosis and surgical intervention. As seen in our patient, early surgical intervention helped avoid a catastrophe. Similarly, intestinal mucormycosis has a high mortality, as it is associated with multi-organ failure, which was seen in our patient too, who succumbed to the illness.
There was early surgical intervention in all patients except two (the CLD patient with hemodynamic instability and the brain-dead patient). The mean duration from symptoms to intervention was 5 days (range, 1-10 days). Early surgical intervention with early initiation of liposomal amphotericin B therapy helped keep mortality to 40%, including the two patients who did not receive surgical intervention. Excluding them, the mortality figure was 20%. One more problem arising now is the shortage of supply of amphotericin B because of the sudden surge in the number of cases. In such cases, an oral/i.v. formulation of posaconazole can be tried. We do not suggest it as first-line treatment, but in case of scarcity and after careful individualization in less critical individuals, posaconazole can be given as a trial. In our study, two of our patients who received posaconazole did well and were discharged from the ICU by day 1 and day 2, respectively. The other option can be a combination of amphotericin B and posaconazole, which has similar efficacy but helps in reducing the dose of amphotericin B. 10

Conclusion

• Early surgical debridement and initiation of antifungal therapy help in reducing mortality.
• Regular monitoring of blood glucose levels during COVID-19 treatment should help prevent mucormycosis infection.
• In a resource-limited setting, a combination of amphotericin B and posaconazole should be considered if amphotericin B is in short supply.
• A high index of suspicion of mucormycosis should be maintained in susceptible patients with symptoms of pyelonephritis and obstruction, especially in the current scenario.
• A monthly audit should be conducted in all ICUs to see the trend of mucormycosis and generate data.

Future Scope

A multicenter randomized controlled trial on the efficacy of the amphotericin B + posaconazole combination in mucormycosis is required. The possible post-COVID-19 window during which an individual is at the highest risk of acquiring mucormycosis infection should be examined. Also, the role of uncontrolled sugars in post-COVID-19 patients and of industrial cylinder usage in the surge of mucormycosis cases needs to be studied.
2021-10-18T17:51:55.150Z
2021-09-30T00:00:00.000
{ "year": 2021, "sha1": "df625fd58880ecfa7d0e8b63ae0d81185bb82625", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5005/jp-journals-10071-23986", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "939ec0c445bd75ece8ff756db3c12ed2a96e118b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221616178
pes2o/s2orc
v3-fos-license
“We only have the one”: Mapping the prevalence of people with high body mass to aid regional emergency management planning in Aotearoa New Zealand

Introduction: People have been left behind in disasters directly associated with their size, shape, and weight and are disproportionately impacted in pandemics. Despite alignment with known vulnerabilities such as poverty, age, and disability, the literature is inaudible on body mass. Emergency managers report little or no information on body mass prevalence. This exploratory study aimed to illustrate population prevalence of high body mass for emergency planning.

Methods: Cross-sectional data from the New Zealand Health Survey were pooled for the years 2013/14–2017/18 (n = 68 053 adults aged ≥15 years). Height and weight were measured and used to calculate body mass index. The prevalence of high body mass was mapped to emergency management boundary shapefiles. The resulting maps were piloted with emergency managers.

Results: Maps highlight the population prevalence of high body mass across emergency management regions, providing a visual tool. A pilot with 14 emergency managers assessed the utility of such mapping. On the basis of the visual information, the tool prompted 12 emergency managers to consider such groups in regional planning and to discuss needs.

Conclusions: Visual mapping is a useful tool to highlight population prevalence of groups likely to be at higher risk in disasters. This is believed to be the first study to map high body mass for the purposes of emergency planning. Future research is required to identify prevalence at a finer geographical scale. More features in the local context, such as physical location features and risk and vulnerability features, could also be included in future research.

High body mass

The prevalence of adults living with high body mass is associated with reported increased risk of a plethora of adverse health outcomes [1]. Aotearoa New Zealand (NZ) is amongst the countries with the highest rates of obesity worldwide [2]. The prevalence of high body mass increases with age [1], with rates peaking in the 55-64 age group in NZ [3]. Obesity is typically defined by a Body Mass Index (BMI) of greater than or equal to 30 kg/m 2 [4]. High body mass, such as weighing ≥150 kg or having a BMI ≥35 kg/m 2 , is associated with mobility limitations [5,6], and this can make moving and evacuation more difficult in emergency situations [7][8][9]. The two highest categories of body mass are the focus of this paper: class II (severe) and class III (extreme) obesity.

In the context of pandemic emergencies, an association between high body mass and severity of illness and risk of death was reported with influenza A (H1N1) 2009, even after adjustment for comorbid conditions known to be a risk, although the exact relationship was not yet fully understood or defined [10][11][12]. Strong associations are currently being reported for obesity and SARS-COV-2 novel coronavirus (COVID-19) [13].

High body mass and disasters

There is a significant gap in research relating to how people with high body mass are considered in disasters, despite accounts that people have been left behind in direct relation to their size, shape, and weight [8,[14][15][16][17]].
While everyone is at risk of harm in a disaster, some people have been identified as at higher risk in relation to their particular circumstances before, during, and following a disaster: this includes, but is not limited to, people from socioeconomically deprived areas, adults with severe mental illness, older people, people with chronic health conditions, gender minorities and people with disabilities [18][19][20][21]. Gray identified that many such populations also intersect with increased prevalence of high body mass and refers to this as 'triple jeopardy' [22]. Of concern, recent research shows that emergency managers, planners and responders (EMs) may underestimate the prevalence of high body mass in their area of responsibility, recalling only those individuals where prior or intensive assistance had been involved, such as movement from home to hospital in relation to health care needs. Several EMs recalled "we only have the one in our area" when the researcher knew this not to be the case from experience, health statistics, and interactions with community members [23]. Some EMs deferred to health agencies, expecting they would notify EMs of any specific needs or priorities. Such assumptions are concerning and present profound implications for those individuals with high body mass who are generally well or who may refrain from interaction with local community and health providers, yet may present very specific needs in the event of a disaster [23].

Geographical Information Systems in emergency management

The popularity of information technology use in emergency management is increasing as EMs rely on varying software to assist with response [24,25]. Research has identified the practicality and effectiveness of using such software, including Geographical Information Systems (GIS), in emergency management. Working with The Red Cross, the National Disaster Management Agency (NaDMA), and local communities in the small Caribbean nation of Grenada, Canevari-Luzardo et al. [26] facilitated mapping household vulnerability and hazards as a method to reduce risk and vulnerability. By using GIS, community members were able to indicate areas that they considered vulnerable and at risk of landslides, hurricanes and flooding. The maps produced were practical and accessible, ensuring their usability by community members. A further example of the effectiveness of GIS use in disaster planning and risk assessment was highlighted in research conducted in Toronto [27] that mapped the social, physical, infrastructure and economic vulnerabilities that may contribute towards higher levels of risk. Their research demonstrated the complex and multiple levels of vulnerability in a given population. The authors argue for the use of GIS in risk assessments in order to produce greater awareness of the multiple risks across a diverse population. Since Hurricane Andrew devastated the southern coast of Florida in 1992, there has been a rapid increase in the use of GIS in US state and federal government, notably the Federal Emergency Management Agency (FEMA), to assist with mitigation, preparation, response and recovery [24]. A crucial component of GIS is to support effective deployment of response resources to critical areas in real-time [24]. In NZ, information systems have not been utilised to their full potential by EMs [25]. Utilising information technology can assist in meeting EMs' needs in identifying high-risk areas and the needs of the population, by facilitating effective preparations and response to a natural hazard event.
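To make the kind of GIS workflow discussed here concrete, the snippet below sketches joining small-area population counts to emergency management boundaries and rendering a prevalence choropleth. It is a minimal illustration written with geopandas; the file names, column names, and counts are hypothetical placeholders rather than the actual NEMA or NZHS data. The join rule used in this study (a population-weighted meshblock centroid falling within a boundary, described under Data visualisation below) corresponds to the `within` predicate used here.

```python
# Hypothetical sketch of a boundary join + prevalence choropleth (geopandas).
import geopandas as gpd
import matplotlib.pyplot as plt

# Emergency management boundary polygons and meshblock centroid points
cdem = gpd.read_file("cdem_boundaries.shp")       # columns: cdem_name, geometry
mesh = gpd.read_file("meshblock_centroids.shp")   # columns: mb_id, n_obese, n_total, geometry

# Both layers must share a coordinate reference system before joining
mesh = mesh.to_crs(cdem.crs)

# Point-in-polygon join: assign each meshblock to the boundary that
# contains its (population-weighted) centroid
joined = gpd.sjoin(mesh, cdem, how="inner", predicate="within")

# Aggregate counts to boundary level and compute prevalence (%)
prev = joined.groupby("cdem_name")[["n_obese", "n_total"]].sum().reset_index()
prev["prevalence_pct"] = 100 * prev["n_obese"] / prev["n_total"]

# Choropleth map of prevalence by emergency management area
cdem.merge(prev, on="cdem_name").plot(column="prevalence_pct", legend=True)
plt.show()
```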
Geographical information and high body mass for emergency planning

Geographic Information Systems (GIS) have the potential to effectively deliver visual data such as the prevalence of individuals with high body mass. Indeed, as information technology has evolved, we have seen an increased frequency of health communication with disease maps [28,29]. We also now live in an increasingly visual society, where most of us see and process images more than we read words [30]. Visual mapping has been shown to improve understanding of hazard information when compared to tables and written material [31][32][33]. Therefore, a map or spatial depiction of where at-risk populations are is a key medium for communicating such information. For example, hazard maps are routinely utilised by scientists to relay information concerning volcanic hazards to many different recipients [34]. Such information, how it is relayed, and how it is interpreted quickly has specific utility during rapidly evolving and potentially major events: "people tend to rely more on their initial impressions and intuitive feelings about hazard and risk than on exhaustive analytical evaluation of hazard and risk information" [35, p. 622]. The value of having simple and clear hazard maps for use in crisis communication has been consistently demonstrated, for instance, within volcanic crises or wildfire events [34]. Thompson [34] highlights the work of Lester [30], Carrasco [36], Domke, Perlmutter and Spratt [37], and Mould and Mandryk [38] to underline the benefit of images over written words in grabbing attention. This has been utilised very successfully by the Centers for Disease Control and Prevention to show the changes in body mass prevalence over time in the USA [39]. However, this has never, to the authors' knowledge, previously been applied to show the prevalence of those with high body mass in relation to disaster management. The failure to incorporate high body mass prevalence in disaster management has produced a significant and important gap in current evidence, since maps can inform decision-making without the barriers of literacy or language. Hobbs and colleagues have extensively applied GIS mapping techniques to describe the spatial and spatiotemporal patterning of health outcomes and environmental exposures [40][41][42]. While consideration is required to ensure confidentiality and correct interpretation of prevalence data at small geographic areas, the development of a mapping resource can help better inform emergency planning [29]. Communicating the risks of natural hazards to EMs and the public is regarded as essential in reducing vulnerability and supporting effective coordination [43,44]. Effective communication of risk is also dependent on how information is received and processed [45]. Good communication between scientists and EMs can mitigate or accentuate risk, in particular for vulnerable individuals and groups [45]. Communicating scientific information to the public is an established area of research [43,[46][47][48]. Effective communication between scientists and EMs enables timely decision making and the coordination of response [49]. Demuth et al. [49] argue that a challenge is that data can be technical and too detailed and are thus difficult for EMs and policy makers to understand. Therefore, only practical and essential information needs to be communicated to decision makers. Poor communication can lead to failure in effectively responding to a disaster event, as was seen during Hurricane Katrina.
The hurricane forecasting was correct, yet senior EMs and policy makers did not engage adequately with the data, thus delaying evacuation and putting people at high risk [44]. A good understanding of data can strengthen preparation and response; presenting information clearly to EMs is essential. Scholars argue that positive disaster response develops by integrating data from multiple agencies and sources, as opposed to the more common hierarchical structures of response [44,50].

This paper describes an exploratory study and the development of a visual mapping resource for EMs to gain better insight into the prevalence of people with high body mass in each Civil Defence Emergency Management (CDEM) area in NZ, providing rigorous and original evidence on an often overlooked topic internationally.

Methods

This study used cross-sectional data that were pooled for the years 2013/14-2017/18 from the New Zealand Health Survey. These data were then applied to the geographical boundary areas of each CDEM area in order to provide a visual representation of high body mass prevalence.

Study setting

The study setting was nationwide data across NZ and involved designated regional boundaries identified by NEMA. Fig. 1 shows the different geographical boundaries of each CDEM area.

Data sources

Cross-sectional data from the New Zealand Health Survey (NZHS) were pooled for the five years of 2013/14-2017/18 (n = 68 053 adults aged ≥15 years). Data on body mass were obtained from the adult NZHS, which provides information on a range of sociodemographic measures, health behaviours, and self-reported health status. As outlined elsewhere [40], the survey uses a multistage sampling method for participants who reside within NZ. Results are then weighted to account for survey design, oversampling, and non-response in height and weight-related questions. Additional details are provided elsewhere [51]. The NZ Ministry of Health assigned area-level summarised spatial data to survey responses based on the geographical area of the respondent and removed participant identifying information before data transfer [51]. This process ensures all data used in analyses are anonymised prior to our use. All de-identified data were password protected and kept in a secure computer facility accessible only to a named researcher [51].

Data were obtained for measured BMI for each participant. All respondents had their height and weight measurements taken by the interviewer at the end of the survey. Height (cm) and weight (kg) of the participant were measured by a trained interviewer. Height was measured with a professional laser meter (Precaster HANS CA770) and weight with electronic weighing scales (Tanita HD-351) [51]. A standardised protocol was followed, and interviewers were re-trained annually and had to pass a recertification assessment to ensure they maintained the required skill levels. BMI was calculated as weight in kg/height in metres 2 . While it is not a direct measure of body fat, previous research has shown that BMI is a useful population-level measure in large epidemiological studies [52]. For the purposes of this study, data were mapped for participants with class II body mass (BMI 35.00-39.99) and class III body mass (BMI ≥40.00). These are standardised categories in obesity statistics and provide comparison across the two highest body mass classifications routinely reported.
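As a concrete illustration of the classification step just described, the sketch below computes BMI from measured height and weight and tallies a survey-weighted prevalence for the two classes mapped in this study. The column names, example values, and weights are invented for illustration; they are not the actual NZHS field names.

```python
# Minimal sketch of BMI classification and survey-weighted prevalence.
import pandas as pd

df = pd.DataFrame({
    "height_m":   [1.62, 1.75, 1.80, 1.58],
    "weight_kg":  [95.0, 70.5, 132.0, 101.0],
    "svy_weight": [1.2, 0.8, 1.0, 1.5],   # design/non-response survey weights
})

# BMI = weight (kg) / height (m)^2
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

def bmi_class(bmi: float) -> str:
    """Standard obesity classes used in this study."""
    if bmi >= 40.0:
        return "class III"
    if bmi >= 35.0:
        return "class II"
    return "other"

df["obesity_class"] = df["bmi"].apply(bmi_class)

# Weighted prevalence (%) of class II and class III high body mass
total_w = df["svy_weight"].sum()
for cls in ("class II", "class III"):
    w = df.loc[df["obesity_class"] == cls, "svy_weight"].sum()
    print(cls, round(100 * w / total_w, 1), "%")
```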
Data visualisation

Shapefiles (a geospatial vector data format for geographic information system software) were provided by the National Emergency Management Agency (NEMA) (formerly the Ministry of Civil Defence and Emergency Management) under Crown Copyright and restricted access for the purpose of academic research. Boundaries were visualised within ArcGIS V10.7. Meshblocks were then joined to each NEMA boundary based on spatial location if the population-weighted centroid of the meshblock was within the NEMA boundary. Meshblocks are defined by Statistics New Zealand as the smallest geographic unit for which statistical data are collected and processed (the 2013 Census comprised 46 629 such units). Data were obtained from NZDep2013, giving a deprivation score for each meshblock in NZ; these were divided into quintiles (quintile one = least deprived) [53]. NZDep2013 includes material and social deprivation dimensions [49]. Data on obesity prevalence contained a meshblock identifier, allowing the spatial join. Regional prevalence of high body mass was then mapped to the nominated area boundaries for Civil Defence to provide a visual planning tool for EMs. Data were weighted to be nationally representative over the five years of pooled data, so that the sum of the weights equalled the average resident population of that time period. It should be noted that CDEM area boundaries are not coterminous with area health authority boundaries in NZ.

Piloting

Piloting of the visual mapping material was conducted with 14 EMs attending the Emergency Management Summer Institute at Massey University, NZ (March 2020, Wellington, NZ). Following an oral presentation of the mapping process and an overview of issues for people with high body mass in previous disasters, paper copies of the prevalence maps were provided to EMs with a paper feedback form. Feedback was provided during the session to the presenting author via completion of the paper-based feedback form and verbal comments.

Results

National profile

The prevalence of high body mass was geographically mapped to CDEM boundary areas (Table 1) and is depicted visually in Figs 2 and 3.

National profile by area-level deprivation

The prevalence of high body mass was split according to CDEM boundary areas and by area-level deprivation quintile (Table 2). Confidentiality is protected as data are only presented when there are at least 30 people in the cell as a denominator. Care should also be taken when interpreting these findings due to low numbers of participants, in particular within the West Coast, Gisborne, Marlborough, Nelson Tasman, Southland, and Northland (Q1 and Q2) areas. Despite the caveat of smaller numbers in some areas, there was a consistent gradient exhibited, with a higher prevalence of high body mass in the most socioeconomically deprived areas (see supplementary materials for further analysis). In Auckland, for instance, where one quarter of the whole NZ population lives, 28.9% were in the most deprived areas compared to 5.9% in the least deprived quintile.

Feedback from EMs

Written feedback from most EMs present welcomed data presented in this way, and this prompted many questions about disaster risk reduction and the needs of people with high body mass. An example of typical written feedback is "the more that I think about it, the more I am thinking that it [considerations for people with high body mass] does involve some serious work" (EM pilot participant number 3).
Two EMs felt the data provided at regional level was illustrative but felt more local-level geographical data was required to be meaningful for emergency management. Many EMs agreed the prevalence maps were "an eye opener" (EM pilot participant number 8), and this prompted about half of the EMs present to seek detailed further information from the presenting author over the session break.

N = the total number of people within each area.

Discussion

Our study developed one of the first visual mapping resources for emergency managers, planners and responders (EMs) to gain better insight into the prevalence of people with high body mass in each emergency management area of NZ. Uniquely, this study combines rigorous, nationally representative, pooled data over five years with measured height and weight. This was in response to underestimations of the levels of people with high body mass by EMs in a recent multimethods study [23], despite recognition of heightened risks associated with disability, long-term conditions, older people and people who are socio-economically deprived. People with high body mass are over-represented in such groups and yet are overlooked in relation to their particular disaster risk reduction needs internationally [8,22,23].

More people with higher body mass

While increased action in childhood obesity prevention efforts is occurring [54], no country's public health obesity strategy appears to have sustained a reduction in population body mass to date [55]. Further, we have seen adults move from high to higher BMI with age, and increasing levels in low- and middle-income countries in addition to high-income countries [1,56]. This has international relevance for disaster risk reduction considerations. For instance, it is estimated that by the year 2025 there will be more women with severe (class II) body mass (BMI 35-39 kg/m 2 ) than women who are underweight [1], and with increasing age this suggests those with class II body mass will move to extreme (class III) body mass over time rather than trend downwards. In terms of numbers of adults likely to have high body mass, the numbers are not insignificant in NZ: those with a BMI of 35-39 (kg/m 2 ) are estimated at around 314 000 adults (8% of the adult population), and those with a BMI greater than 40 (kg/m 2 ) at 181 000 adults (4.6% of the adult population) [3]. When identifying risks and vulnerabilities, EMs need to include the prevalence of high body mass for their area of responsibility to determine any additional considerations for already vulnerable populations.

Depicting emergency management area prevalence of high body mass

Of particular note, Southland area prevalence is highest in the most deprived areas but also notably high in quintile 2 and quintile 3 relative to other areas. Indeed, the social gradient shown across all areas, where a higher prevalence of high body mass exists in the most deprived areas, should be an important consideration for any emergency management service wishing to estimate the prevalence of high body mass for its population. The weighted percentage was also higher in predominantly more rural northerly areas of the North Island of NZ, in areas such as Gisborne, Hawke's Bay and Bay of Plenty. These findings may align with global data from the NCD Risk Factor Collaboration showing that increases in rural BMI are fueling more than 50% of the rise in BMI globally during the last three decades [57].
However, recent data in New Zealand show that semi-urban areas (neither urban nor rural) may have the greatest prevalence of poor health, including higher levels of obesity [58]. Therefore, this is an area that requires further research, particularly given the challenges of working in rural areas for EMs [59].

Policy and practice implications

Importantly for EMs, people with high body mass may require equipment with higher weight and width ratings that may not usually be held in stock in all CDEM areas. Education around the complex causes of obesity, and the contribution of stigma and bias, should be available to EMs. CDEM areas should review equipment utilised in evacuation or temporary shelter for its width and weight capacities and encourage locally designated community emergency centres to consider local need. For example, a lot of health equipment is only rated up to 150 kg [5,7,60], and basic office-type chairs found in local community centres are unlikely to have higher weight ratings. It is also pertinent to note that while many people with high body mass weigh less than 150 kg, wider/larger-sized equipment may still be required to avoid harm caused by pressure injury or trapped skin [61]. Provision of chairs with no armrests, or benches that can accommodate one or two persons, are simple solutions for local centres where displaced people may congregate. Wider rollaway beds can be purchased, although centres should check the weight ratings of items before use. People with high body mass are very concerned about dignity and access to suitably sized clothing in an emergency, worry about toileting requirements in temporary shelter, and have a great fear of falling and embarrassment about needing multiple people to assist them to get up [9,62]. These may also be factors in not being able to take protective actions such as 'drop, cover, hold' in the event of a major earthquake [63]. Personnel will be required to assist a person with high body mass in the case of a fall or reduced-mobility evacuation, and extra time will be required for movement [9,62].

Utility of GIS in emergency planning

Disaster planning requires a good understanding of the geographical dimensions, boundaries, lifelines and important facilities [64]; with advances in health data it is also possible to include key human health geography. The promise of GIS mapping includes the potential to reach a broad array of audiences, including health planners, policymakers, advocacy groups, and an interested public [65]. Although this movement promotes creative means of analysis and identification of at-risk populations for planners and researchers, such accessibility may pose dilemmas relating to labelling populations living in particular geographic locales [40][41][42]. McBride [66] also argues that while maps are considered trusted and useful communication tools, they are also open to interpretation. As GIS is a visual tool, we recognise that mapping intended for a wide audience needs to ensure those with visual impairment are not excluded. Audio map equipment [67] and, more recently, 3D printing technology options are available [68] and need to be considered.

High body mass in focus

People with high body mass have been 'conspicuously invisible' in disaster risk reduction planning [8]. Those engaged in active health or hospital care at the time of an event or specific planning may be known to local agencies, hence the perception "we only have the one in our area" [23].
Whereas many people with high body mass will be going about their usual daily business with little active health care interaction, and are therefore invisible so far as EMs are concerned. Data presented as visual area maps offer a proxy for each area to give an indication of the likely affected population for planning purposes.

A strength of this study is the utilisation of pooled data over five years. In addition, the sample is nationally representative through weighting in the analysis, which also accounts for the missing data in height and weight measurements. Data for height and weight are objectively measured by trained individuals as part of the NZHS, which reduces the bias often found with self-reported data [69]. It is also one of the first times these data have been combined to result in a nationally representative large sample of pooled individuals with measured height and weight. While these are notable strengths, the study is limited in the level of detail it can provide on the maps. For instance, the geographical areas presented are large, and a finer or smaller geographical area may help EMs better target specific areas. Despite this limitation, this study is exploratory in nature and provides an important first step toward mapping high body mass at a finer geographical scale in the future when such data are available.

Conclusion

To the authors' knowledge, this study is believed to be the first of its kind to map the prevalence of high body mass to CDEM regions for the purposes of supporting disaster planning decisions for people with high body mass. While the geographical areas presented are quite large, being able to discuss prevalence with EMs and talk through likely rates in an emergency management area allows for more nuanced discussion around planning considerations for vulnerable populations, involving important stakeholders. When presented with data in an easy-to-use manner, EMs may better consider the needs of their regional population living with high body mass. The visual mapping in this study presents data to EMs to support disaster risk reduction planning for populations likely to be at higher risk in disasters. Future research will test the utility of the visual mapping in this study, and while this study was exploratory in nature, using coarse geographical areas, future research may benefit from exploring finer geographical scales to better pinpoint specific locations that EMs may need to target.

Table 2. The prevalence of severe and extreme obesity by CDEM boundary areas (data are weighted % and [associated weighted 95% confidence intervals]). * In line with the NZHS methodology report, and to ensure the survey data presented are reliable and the respondents' confidentiality is protected, data are only presented when there are at least 30 people in the denominator. This ensures care is taken so no respondent can be identified in the results. + Care should be taken when interpreting findings due to low numbers in this cell and the wide confidence interval. Data represent the number with severe or extreme obesity/total population.

Ethical approval

This study did not require ethical approval. Approval was sought from the National Emergency Management Agency (formerly the Ministry of Civil Defence and Emergency Management) to utilise the shapefiles/boundaries. Pilot testing with EMs was evaluated and judged to be low risk by Massey University; it did not require review by the University's Human Ethics Committee (ref: 4000018662).
Patient consent

Not required.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2020-09-12T13:06:26.459Z
2020-09-12T00:00:00.000
{ "year": 2020, "sha1": "5bd87ed1c9e1608423415e3b4d567338ce0005a4", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7486187", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "6c2928d69de90d567d45224ed2b66a05da63c2b6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Geography" ] }
247839168
pes2o/s2orc
v3-fos-license
Efficiency of antioxidant Avenanthramide-C on high-dose methotrexate-induced ototoxicity in mice

Abstract

Methotrexate (MTX) has been used in treating various types of cancers but can also cause damage to normal organs and cell types. Folinic acid (FA) is a well-known MTX antidote that protects against toxicity caused by the drug and has been used for decades. Since hearing loss caused by MTX treatment is not well studied, herein we aimed to investigate the efficiency of the antioxidant Avenanthramide-C (AVN-C) against high-dose MTX (HDMTX) toxicity in the ear and provide insights into the possible mechanism involved in MTX-induced hearing loss in normal adult C57Bl/6 mice and HEI-OC1 cells. Our results show that the levels of MTX increased in the serum and perilymph 30 minutes after systemic administration. MTX increased hearing thresholds in mice, whereas AVN-C and FA preserved hearing within the normal range. MTX also caused a decrease in wave I amplitude, while AVN-C and FA maintained it at higher levels. MTX considerably damaged the cochlear synapses and neuronal integrity, and both AVN-C and FA rescued the synapses. MTX reduced cell viability and increased the reactive oxygen species (ROS) level in HEI-OC1 cells, but AVN-C and FA reversed these changes. Apoptosis- and ROS-related genes were significantly upregulated in MTX-treated HEI-OC1 cells; however, they were downregulated by AVN-C and FA treatment. We show that MTX can cause severe hearing loss; it can cross the blood–labyrinth barrier and cause damage to the cochlear neurons and outer hair cells (OHCs). The antioxidant AVN-C exerts a strong protective effect against MTX-induced ototoxicity and preserved the inner ear structures (synapses, neurons, and OHCs) from MTX-induced damage. The mechanism of AVN-C against MTX suggests that ROS are involved in HDMTX-induced ototoxicity.

Introduction

Methotrexate (MTX), once known as amethopterin, is a chemotherapeutic agent that also acts as an immunosuppressant [1]. MTX is an antifolate and antimetabolite with immunomodulatory activity against a variety of inflammatory conditions. It inhibits folate metabolism through the suppression of dihydrofolic acid reductase, thereby preventing purine and pyrimidine synthesis and reducing DNA and RNA synthesis [2]. Its pharmacokinetics and potential toxic effects, such as nephrotoxicity and hepatotoxicity, are also well understood [3]. MTX has been effectively and extensively used for treating various types of cancers [4]. MTX induced the production of reactive oxygen species (ROS) in monocytes and cytotoxic T cells, thus reducing monocyte adhesion to the endothelial cells [5]. In patients with acute lymphocytic leukemia, high-dose MTX (HDMTX) caused reversible neurotoxicity in the form of white matter injury [6]. In a study examining the brainstem auditory system in children (2-12 years) with acute lymphoid leukemia who received MTX treatment, 60% of individuals aged 5 years or less showed an auditory deficit [7]. The brainstem auditory-evoked potential assessment, used to evaluate ototoxicity in patients undergoing chemotherapy, revealed that 80% of those tested exhibited some form of change and latency delay, with auditory impairment in the lower brainstem being the most common. In addition, the combination of MTX and other cancer drugs for the treatment of both solid and hematological malignant tumors caused ototoxicity [8].

Avenanthramides (AVNs) are phenolic compounds originally extracted from oat grain (Avena sativa L.)
and have a molecular weight of approximately 300 g/mol. These polyphenols exhibit a high antioxidant capacity and are highly abundant in human food. Several AVNs have already been found to exist in oats; one of the most prevalent types is AVN-C, which has the highest antioxidant activity [9]. AVN-C inhibited the proliferation of vascular smooth muscle cells and increased the production of nitric oxide (NO), resulting in the prevention of atherosclerosis [10]. We have previously demonstrated that AVN-C has a protective effect against noise-induced hearing loss (NIHL) and drug-induced hearing loss (DIHL) [11].

The aims of this study were to investigate the effect of high-dose MTX on normal hearing, provide insights into the possible mechanisms involved in MTX-induced hearing loss, and evaluate the efficacy of the antioxidant AVN-C in the prevention of MTX toxicity.

Animal care, maintenance and drugs

C57Bl/6 background mice (each group comprising 7 mice: 4 males and 3 females) aged 4-6 weeks were used in this study. In the vivarium, standard conditions for sheltering the mice were followed, and adequate food and water were provided. This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of Chonnam National University. The protocol was approved by the Committee on the Ethics of Animal Experiments of Chonnam National University (CNUHIACUC-20027). All surgeries were performed under anesthesia, and efforts were made to minimize suffering. The drugs used in this study were MTX (Cat. M9929; Sigma-Aldrich, St Louis, MO, USA), AVN-C (SL-340; 4-105A NINT Innovation Center, 11421 Saskatchewan Dr, Edmonton, AB, T6G2M9, Canada), and folinic acid calcium salt (FA) (47612, Sigma, St Louis, MO, USA). The following drug dosages were used in this study: MTX 4 mg/kg (high dose), AVN-C 10 mg/kg, and/or FA 7 mg/kg, once each day for 7 days. For the in vivo experiments, the drugs were administered intraperitoneally. In this study, ketamine (100 mg/kg) and xylazine (10 mg/kg) were used as anesthetics. The mice in these experiments were administered 0.3 cc of 0.9% NaCl intraperitoneally 5 hours post-HDMTX treatment, every day beginning on the first day of HDMTX treatment and continuing for 1 week after the HDMTX treatment. No mortality was observed among the treated mice, and the animals were in good health overall. For the in vitro investigations with HEI-OC1 cells, 0.2 μM MTX was combined with 1 μM AVN-C and/or 3 μM FA.

Methotrexate detection in mouse body fluids

The MTX used in this study was dissolved in 0.9% normal saline for both in vitro and in vivo applications. To detect MTX in the mouse serum and perilymph, MTX 4 mg/kg was administered to the subjects intraperitoneally (IP), and controls received normal saline. Under anesthesia, whole blood was extracted directly from the mouse heart and collected in a sterile Eppendorf (EP) tube, which was then left undisturbed for 30 minutes to allow coagulation. After centrifuging at 1500 × g for 10 minutes at 20˚C, the serum was removed quickly and analyzed using liquid chromatography-mass spectrometry (LC-MS/MS, AB SCIEX 4000 Q Trap mass spectrometer, Shimadzu LC 20A System) to detect MTX. To collect perilymph, the mice were anesthetized and their heads were fixed. The subcutaneous fat layer was dissected after skin incision, with gentle removal of the muscles to expose the tympanic bulla periosteum.
By the incremental removal of bony fragments, the bulla was exposed, uncovering the round window niche, which was then gently penetrated using a glass pipette to harvest the perilymph. All retrieved fluid was analyzed using LC-MS/MS to detect MTX. The MS conditions were as follows: Turbo Ion Spray, 500˚C, MRM scan in positive mode, 5500 V, CG 20, GS1 50, and GS 60 spray voltage (MTX m/z 316.169/163.000, loperamide m/z 477.223/266.200). Gemini C18 3.0 μm columns (150 mm × 3.0 mm) fitted with a Gemini C18 guard cartridge (4.0 mm × 2.0 mm), with the column oven at 40˚C and the autosampler at 4˚C, were set as the LC conditions. The mobile phase was ACN:deionized water = 40:60 (V/V) with 0.1% formic acid, and the flow rate was set at 0.3 mL/min. A standard stock solution of 1 mg/mL MTX in 0.9% normal saline, with loperamide, was used.

Auditory brainstem response for assessing animal hearing

One month after drug treatment, we measured the auditory brainstem response (ABR) to click and tone-burst stimuli. The ABR in the left and right ears was evaluated; body temperature was maintained with the help of a heat therapy pump (#TP700, MI, USA). All animals were anesthetized with ketamine (120 mg/kg) diluted with xylazine (10 mg/kg), administered intraperitoneally, and each mouse was placed in an audiometric booth. The following ABR stimulus frequencies were tested: 8, 16, 24, and 32 kHz, as reported previously [12]. TDT's MF1 Multi-Field Magnetic Speaker, optimized for free-field testing of the hearing range of mice, rats, and guinea pigs [13], was used. We tested the stimulus intensity levels at each frequency in decreasing order, from 90 dB down to 20 dB, to determine the visually detected ABR threshold. The stimulus level was calibrated at the ear opening using a custom-made probe tube microphone.

Evaluation of wave I

For each animal, the stimulus intensity was reduced in 20 dB steps from the obtained ABR threshold to identify the lowest intensity at which an ABR wave I was detected. The ABR threshold and the wave I amplitude were determined by analyzing the stacked waveforms using the program R (version 4.0.4, Free Software Foundation's GNU General Public, Austria). The extensive ABR data obtained were stored on floppy disks for later offline analysis of the amplitude and latency of the ABR components. The amplitude of wave I was defined as the difference in magnitude between the first positive peak and the next negative peak at 90 dB SPL.

Scanning electron microscopy of outer hair cells

To shorten the time between death and fixation (typically 2 minutes) at room temperature (RT), the cochlea was rapidly dissected out of the mouse skull surgically (one animal at a time) after the mouse was anesthetized, and a hole was made at the apex. The fixative (500 μL), comprising 4% paraformaldehyde and 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer, was carefully perfused through the open round window, exiting through the hole created at the apical turn. The tissues were then post-fixed overnight at 4˚C on a rotating platform with the same buffer, rinsed three times with distilled water, and decalcified for 2 hours in 5% ethylenediaminetetraacetic acid (EDTA) in 100 mM Tris (pH 7.4). The cochlear coils were cut open and post-fixed at 4˚C in 1% osmium tetroxide for 2 hours. The samples were then dehydrated through sequential ethanol rinses from 50% to absolute ethanol, critical-point dried, mounted on carbon tab support inserts, and sputter-coated with platinum.
Scanning electron microscopy of outer hair cells

To shorten the time between death and fixation (typically 2 minutes) at room temperature (RT), the cochlea was rapidly dissected out of the mouse skull (one animal at a time) under anesthesia, and a hole was made at the apex. The fixative (500 μL), comprising 4% paraformaldehyde and 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer, was carefully perfused through the open round window, exiting through the hole created at the apical turn. The tissues were then post-fixed overnight at 4˚C on a rotating platform in the same buffer, rinsed three times with distilled water, and decalcified for 2 hours in 5% ethylenediaminetetraacetic acid (EDTA) in 100 mM Tris (pH 7.4). The cochlear coils were cut open and post-fixed at 4˚C in 1% osmium tetroxide for 2 hours. The samples were then dehydrated through a graded ethanol series (50% to absolute ethanol), critical-point dried, mounted on carbon tab support inserts, and sputter-coated with platinum. Imaging was performed using a scanning electron microscope (COXEM EM-30AX Plus, Republic of Korea) with a beam energy of 15 kV. The number of outer hair cells per cochlea was quantified. The cochleae were separated into apical, middle, and basal turns, and the hair cells in each turn were counted at a magnification of 500×. For each group, the number of hair cells per 100 μm of cochlear turn length was averaged. A hair cell was considered absent if its stereocilia bundle was missing.

Ribbon synapses and cochlear neuron integrity

After anesthetizing the mice, the cochleae were extracted from the skull, and an opening was made at the distal (apical) turn of the cochlea. Then, 4% paraformaldehyde was perfused through the round window toward the apex, followed by 4 hours of post-fixation at 4˚C under gentle rotation. For bone demineralization, the cochleae were placed in 0.12 mM EDTA for 1 hour at 4˚C. From each cochlea, three small pieces were cut (base, middle, and apex). The tissue samples were immersed in blocking buffer (donkey serum in 0.1% PBS-T, 1:100 dilution) for 1 h at RT and then incubated with primary antibodies at 4˚C overnight. After washing three times with 0.1% PBS-T (30 minutes per wash), the samples were incubated in secondary antibodies for 4 hours at RT and then rinsed three times with 0.1% PBS-T for 30 minutes. The samples were subsequently stained with phalloidin and DAPI for 3 minutes and rinsed once in PBS for 30 min. The samples were mounted on glass slides with Vector mounting solution and analyzed using an LSM 800 laser scanning microscope (Carl Zeiss Microscopy GmbH, Germany). The following antibodies and stains were used at the indicated dilutions: C-terminal binding protein 2 (CtBP2) (1:100, # 612044, BD Transduction Laboratories™), myosin-7a (1:200, # 25-6791, Proteus), anti-neurofilament 200 (NF200) (1:200, #8135, Abcam), phalloidin (PL) (1:1,000, # A12379, Cell Signaling), and DAPI (1:10,000, Invitrogen).

Counting of presynaptic ribbons of inner hair cells

Cochlear turn lengths were determined for each study group. A high-resolution confocal microscope (LSM 800 laser scanning microscope) was used to generate confocal z-stacks of three regions from each cochlea, and the image stacks were imported into image-processing software. Each cochlear turn (apex, middle, and base) contained at least 12 IHCs. For each turn of the cochlea, we chose three 20 μm visual fields in the same way for all groups, demarcated them with a square, and counted the presynaptic ribbons (CtBP2 puncta) found around the IHCs as well as within their nuclei, as previously reported [14,15]. Seven mice from each study group were used to calculate the average number of presynaptic ribbons per IHC.
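The per-IHC averaging described above is simple arithmetic; a minimal sketch follows. All counts in the example are invented for illustration and are not data from the study.

```python
import numpy as np

def ribbons_per_ihc(puncta_counts, ihc_counts):
    """Average CtBP2-positive puncta per inner hair cell.

    puncta_counts : puncta counted in each 20-um visual field
    ihc_counts    : number of IHCs contained in the matching field
    """
    per_field = np.asarray(puncta_counts) / np.asarray(ihc_counts)
    return per_field.mean()

# Hypothetical counts for the three fields of one cochlear turn of one mouse:
print(ribbons_per_ihc([60, 55, 58], [5, 4, 5]))
```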
HEI-OC1 cell culture

The House Ear Institute-Organ of Corti 1 (HEI-OC1) cells used in these experiments were a kind gift from Professor Hun Yi Park of Ajou University Hospital, South Korea, and were cultured under permissive conditions (33˚C). The medium consisted of Dulbecco's Modified Eagle Medium-high glucose (Gibco BRL, Gaithersburg, MD, USA) supplemented with 10% antibiotic-free fetal bovine serum (Gibco BRL) and 50 U/mL gamma interferon (Genzyme, Cambridge, MA, USA). AVN-C was diluted in DMSO and applied at a dose of 1 μM; MTX was dissolved in 0.9% normal saline and applied at a dose of 0.2 μM to induce cytotoxic effects on HEI-OC1 cells. FA was dissolved in water and applied at a dose of 3 μM. AVN-C and FA treatment was provided 3 hours prior to the administration of MTX.

MTT assay for cell viability assessment

Approximately 25 mg of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT; Sigma-Aldrich) was dissolved in 5 mL of PBS to obtain the assay reagent. A total of 20 μL of MTT reagent was added to each well, and the plate was incubated at 37˚C for 30 minutes. Then, 100 μL of DMSO was added to each sample, and the samples were incubated at 37˚C for 4 hours; the optical density was measured at 570 nm using a scanning spectrophotometer (Molecular Devices, SpectraMax ABS Plus #202-6262). The optical density of formazan in the solutions was measured using glass cuvettes in the spectrophotometer. All assays were performed in 96-well plates, the average optical density in control cells was set to 100%, and n = 7.

Measurement of ROS

After drug treatment of HEI-OC1 cells, a Reactive Oxygen Species Detection Assay Kit was used to assess intracellular ROS levels. The cells were incubated at 37˚C in a humidified incubator with 5% CO2 for 30 minutes before they were suspended. This kit uses the cell-permeable reagent 2ʹ,7ʹ-dichlorodihydrofluorescein diacetate (DCFDA), a fluorogenic dye that quantifies the activity of hydroxyl, peroxyl, and other ROS within the cell [16]. HEI-OC1 cells were rinsed once in 1× buffer and resuspended, and fluorescence was recorded at maximum excitation and emission wavelengths of 495 nm and 529 nm, respectively, using a flow cytometer (BD FACSCalibur™, BD Biosciences, San Jose, CA, USA). Changes in ROS levels were expressed as a percentage of control after background subtraction using Kaluza Analysis Software (Beckman Coulter, Inc., Brea, CA, USA).

Data analysis

The data were analyzed using Student's t test or one-way ANOVA with post-hoc Tukey-Kramer comparison tests. All statistical analyses were performed with GraphPad Prism software version 8.0. Results were considered statistically significant for p values of 0.05 or less. The number of repeats used for each experiment is given in the corresponding figure legends.
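The study ran its ANOVA and Tukey-Kramer comparisons in GraphPad Prism. As a minimal, non-authoritative sketch of the same pipeline in Python, the snippet below applies a one-way ANOVA followed by Tukey post-hoc tests to invented threshold data (the group names mirror the study design, but every number is hypothetical):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical click-ABR thresholds (dB SPL), n = 7 mice per group as in the study
groups = {
    "control":   [22, 25, 20, 24, 23, 21, 25],
    "MTX":       [55, 60, 58, 57, 62, 59, 61],
    "AVN-C+MTX": [28, 30, 27, 29, 31, 28, 30],
}

f, p = stats.f_oneway(*groups.values())        # one-way ANOVA across the groups
print(f"F = {f:.2f}, p = {p:.3g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # post-hoc pairwise tests
```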
MTX penetrates the blood-labyrinth barrier after systemic administration

We assessed the presence of MTX in the cochlear fluid after systemic administration to determine whether it has a direct effect on the cochlea. Mice were administered 4 mg/kg of MTX intraperitoneally, and blood was collected at various time intervals (30 minutes and 1, 2, 3, 6, and 8 hours) and analyzed by LC-MS/MS to determine the serum MTX level (Fig 1A). In wild-type (WT) mice, the serum level of MTX peaked at 1,490±2.8 ng/mL 30 minutes after IP treatment and rapidly declined thereafter (Fig 1B). The presence of MTX in the perilymph was evaluated at several time points (1, 2, and 3 hours) after IP injection of MTX in WT mice to determine whether MTX passes the blood-labyrinth barrier (BLB). The MTX level in the perilymph peaked at 93.2±4.2 ng/mL 1 hour after systemic injection, demonstrating that MTX penetrated the BLB (Fig 1C). After 2 and 3 hours, the perilymph MTX level was much lower.

High-dose MTX causes considerable hearing loss in vivo, whereas AVN-C and FA protect hearing against MTX ototoxicity

Our in vivo study showed that HDMTX increases the hearing thresholds of MTX-treated mice. Assessments one month post treatment demonstrated a significant increase in the hearing thresholds for click sounds (***p<0.001; Fig 2C) as well as for tone bursts at all tested frequencies (***p<0.001) in WT mice compared with untreated control mice (Fig 2D). Furthermore, we also examined the efficacy of AVN-C and FA against HDMTX-induced ototoxicity. Administration of AVN-C or FA 1 hour prior to MTX treatment markedly reduced the hearing thresholds for click sounds (21.4±2.4 dB and 28.9±3.9 dB SPL, respectively) in WT mice (Fig 2C) compared with the group that received MTX alone. Moreover, the AVN-C and FA treatments also significantly reduced the hearing thresholds for tone bursts (***p<0.001) at all tested frequencies (Fig 2D) as compared with those in the control mice. Simultaneous administration of AVN-C and FA 30 minutes prior to MTX treatment reduced the hearing thresholds for click sounds (23.6±2.4 dB SPL) (Fig 2C) and for tone bursts (***p<0.001) at all tested frequencies, as compared with the control mice (Fig 2D).

AVN-C protects OHCs from MTX-induced ototoxicity

To characterize the protective effect of AVN-C on auditory hair cells against MTX-induced ototoxicity, we assessed damage to the outer hair cell (OHC) stereocilia. Scanning electron microscopy (SEM) was performed to compare normal cochleae with treated cochleae 1 month after systemic administration of MTX, AVN-C, and FA. Numerous OHCs were lost in the MTX-treated group compared with the other groups (Fig 3A); in the MTX group, the surviving OHC counts were 14±5.8 at the base, 18.9±4.7 at the middle, and 20.4±3.8 at the apex turn of the cochlea (Fig 3B-3D). In the FA+MTX-treated group, the OHC counts were 38±2.3 at the base, 39±4.6 at the middle, and 40±7 at the apex turn (Fig 3A-3D). Meanwhile, AVN-C protected the OHCs at the base (49±2.4 OHCs), middle (49.3±1.9 OHCs), and apex (50.6±2.5 OHCs) turns (Fig 3A-3D). The combination of AVN-C and FA limited the deleterious effect of MTX on OHCs to 42.3±3.7 OHCs at the base turn, 42.9±4.6 at the middle turn, and 43±4.1 at the apex turn (Fig 3A-3D). These OHCs are known to be affected by ototoxic drugs and noise. In the control group receiving the carrier, the OHC counts were: base turn, 43.7±2.4; middle turn, 42.7±4.3; and apex turn, 43.1±5 (Fig 3A-3D). OHCs were counted per 100 μm, and p<0.001 was considered significant for all experimental groups. IHCs appeared normal and well preserved in all groups.

AVN-C treatment inhibits MTX-induced synaptic ribbon damage

Synaptic ribbons in cochlear whole-mount preparations obtained from mice euthanized immediately after the ABR measurements were labelled with RIBEYE/CtBP2 and counted to directly test two hypotheses: (1) AVN-C prevents synaptic ribbon loss, and (2) MTX causes loss of synaptic ribbons of the cochlear IHC bands. Synaptic ribbons were counted in the cochlear regions corresponding to the tested ABR stimulus frequencies. The results showed that MTX damaged and reduced the number of synaptic ribbons of IHCs throughout the cochlea (Fig 4A).

[Fig 2 caption: AVN-C and FA preserve hearing from MTX ototoxicity (ABR). A. Schematic diagram of the drug treatment schedule and timeline; AVN-C and FA were injected intraperitoneally one hour before MTX administration for seven consecutive days. B. ABR wave I amplitude at 90 dB SPL, defined as the magnitude difference between the first positive peak and the next negative peak; MTX treatment decreased the wave I amplitude, whereas AVN-C and FA reversed this decrease (***p<0.001; one-way ANOVA), with AVN-C outperforming both FA and the combined AVN-C and FA treatment. C (click ABR) and D (tone-burst ABR): one month after drug treatment, hearing thresholds increased in the MTX-treated group, whereas AVN-C and FA treatment reduced them (***p<0.001; one-way ANOVA). n = 7 mice per group. https://doi.org/10.1371/journal.pone.0266108.g002]
The CtBP2-positive signal counts per IHC revealed that MTX treatment significantly decreased the number of synaptic ribbons in all three turns of the cochlea in the MTX-only group: apex (6±0.9), middle (5±0.7), and base (4±0.7) (Fig 4B-4D). In contrast, the AVN-C-treated group maintained high numbers of synaptic ribbons at the apex (14 ...).

In addition, to assess the effect of MTX on hearing, we used the R program (version 4.0.4) to extract wave I at 90 dB SPL from the stored ABR raw data of each group of tested mice. The wave I amplitude was severely decreased in the MTX-treated group (38.6±6.2 nanovolts), whereas it was higher in the remaining groups (control: 903.6±10.3; FA+MTX: 687.9±10.8; and AVN-C+MTX: 1,345.1±13.7 nanovolts). The combined treatment with AVN-C and FA 1 hour before MTX brought the wave I amplitude to 1,048.3±10.7 nanovolts (Fig 2B).

AVN-C shelters the spiral ganglion neurons from axonal degeneration

To assess cochlear neurodegeneration across the turns, we stained all cochleae with anti-neurofilament 200 (NF200) and observed the cochlear morphology under a confocal microscope. In the MTX-treated group, degeneration of cochlear neurons was more pronounced in the basal turn of the cochlea than in the middle and apical portions. In the FA+MTX-treated group, the continuity and integrity of the cochlear neurons in the ascending position at the basal turn were disrupted, but the neurons in the middle and apical turns were intact. Notably, administration of the antioxidant AVN-C, as well as the combination of AVN-C and FA, resulted in well-defined neuronal integrity of the cochlea, whereas MTX treatment caused severe neuronal degeneration throughout the cochlear turns (Fig 5B).

AVN-C saves cells from MTX-induced cytotoxicity

The MTT assay, a colorimetric assay that assesses both the metabolic activity of cells and the number of viable cells present, was performed on HEI-OC1 cells treated with MTX, AVN-C, and FA (alone and in combination with AVN-C). The viability of HEI-OC1 cells was evaluated in a concentration- and time-dependent manner. We initially determined the responses of HEI-OC1 cells to different dose regimens of MTX (Fig 6A) and AVN-C (Fig 6B) before selecting the MTX and AVN-C doses to be used along with FA (Fig 6C). After completion of all treatment schedules and the MTT assay, the absorbance of HEI-OC1 cells measured at 570 nm at 24 hours after MTX treatment showed the following cell viabilities: control (100%), FA+MTX (66.2±13.1%), and AVN-C+MTX (86.3±8.2%).
However, MTX significantly reduced the cell viability to 54.2±13.3%, resulting in cell death (Fig 6C).

AVN-C decreases the ROS levels in MTX-induced HEI-OC1 cell ototoxicity

The cells were treated with cell-permeable DCFDA for 30 minutes at 37˚C in 5% CO2, and after all treatment regimens were completed, the cells were processed for FACS analysis to determine ROS production. In HEI-OC1 cells treated with MTX alone, the DCFDA-positive population produced considerably higher ROS levels than the control group (***p<0.001; Fig 7A). Treatment with AVN-C and FA, either alone or in combination, 3 hours prior to the administration of MTX resulted in a significant reduction in ROS formation in HEI-OC1 cells (***p<0.001; Fig 7B).

Quantitative real-time polymerase chain reaction (RT-PCR) was performed to further dissect the downstream signaling pathways of AVN-C, FA, and MTX based on changes in the expression patterns of inflammatory cytokines. MTX upregulated all genes tested, including ROS- and apoptosis-related genes (TNFα, IL1β, IL6, BAX, and HRK); this was a consistent finding when the experiment was repeated. The addition of AVN-C or FA significantly lowered inflammation by decreasing the expression levels of these genes (Fig 8). The experiment was repeated seven times, and p<0.001 was considered significant for all compared groups.

Discussion

Although MTX-induced ototoxicity is not well studied, many studies have reported MTX cytotoxicity in other organs, and an important underlying mechanism of MTX-induced toxicity is related to ROS production [17]. Hence, we evaluated the occurrence of HDMTX-induced ototoxicity and assessed the preventive role of the antioxidant AVN-C against this condition. The level of MTX in the bloodstream rapidly peaked 30 minutes after its administration in experimental mice, but decreased shortly thereafter and was undetectable 8 hours later (Fig 1B), owing to uptake of MTX by the plasma, spleen, liver, gastrointestinal tract, kidney, muscles, skin, and bone marrow [18].

[Fig 5 caption: AVN-C shelters the spiral ganglion neurons from axonal degeneration. A. Whole-mount cochleae stained with anti-neurofilament 200 (NF200), phalloidin (PL), and DAPI; the merged image shows the OHCs. B. MTX disrupted cochlear neurons throughout the cochlear turns, most severely at the basal turn; in the FA+MTX group, the basal-turn neurons showed discontinuity and deformity while the middle and apical turns were preserved, whereas AVN-C alone or combined with FA gave well-defined neuronal integrity. Scale bar, 50 μm. https://doi.org/10.1371/journal.pone.0266108.g005]
The plasma levels of MTX have been evaluated as a function of time in Abcc3 knockout (KO) and WT mice [19]. These mice were all administered a single dose of MTX (10, 50, or 200 mg/kg) as an intravenous (IV) bolus. The KO mice demonstrated considerably better total MTX clearance than the WT mice after 8 hours. This could explain the pharmacokinetics, absorption, and diffusion of MTX in many tissues. In addition, MTX is primarily distributed to the non-fatty tissues of the body after administration and is rapidly transported across the capillaries and cell membranes of the liver, kidney, and skin, allowing tissue-to-plasma concentration equilibrium ratios to be established on a time scale consistent with plasma flow limitation [18]. Meanwhile, we also measured the level of MTX in the perilymph after systemic treatment to check whether it has a direct deleterious effect on the cochlea. The presence of MTX in the cochlea was confirmed by its appearance in the perilymph 1 hour after systemic injection (Fig 1C), which could explain the direct negative effect of MTX on the inner ear components. A previous study [20] demonstrated that MTX administered at therapeutic doses can pass the blood-brain barrier and enter the cerebrospinal fluid. Following intravenous injection, less than 1 mM of MTX was found in the cerebrospinal fluid [21]. Similarly, platinum-based chemotherapeutic agents routinely used in oncology (namely cisplatin, carboplatin, nedaplatin, and oxaliplatin) have diverse ototoxic and neurotoxic effects. When the BLB is disrupted by treatment with diuretics or by noise exposure, drug uptake is increased and the extent of damage is greatly enhanced [22]. Our results showed that MTX crosses the BLB and causes direct damage to the OHCs. We had previously established the biodistribution and bioavailability of AVN-C in the body fluids of experimental mice; AVN-C persisted substantially longer in the perilymph than in the serum before being washed out of the mouse cochlea [11].

A prospective open-label study of 11 patients with treatment-refractory autoimmune hearing loss was previously conducted to assess the efficacy of low-dose MTX as long-term treatment for autoimmune hearing loss. At the start of that study, an improvement in audiometric parameters (Permanent Threshold (PT) by >10 dB or standard deviation (SD) by >15%) was observed for at least one ear. Long-term treatment with low-dose MTX has since proven effective in some patients with hearing loss that is thought to be autoimmune-mediated and resistant to conventional therapies [23].

[Fig 6 caption: HEI-OC1 cells were pretreated for 3 hours with 1 μM AVN-C and 3 μM FA simultaneously before being exposed to 0.2 μM MTX for 24 hours. AVN-C alone or concomitant with FA showed a significant protective effect against MTX cytotoxicity (***p<0.001), while FA showed a moderate effect (**p<0.01; one-way ANOVA). Each group was tested seven times. https://doi.org/10.1371/journal.pone.0266108.g006]

[Fig 7 caption: Intracellular ROS levels were determined by flow cytometry with the DCFDA assay; positive DCFDA percentages were gated as ROS after background subtraction using the Kaluza software. The MTX-only group produced high ROS levels (Fig 7A), as shown in the representative histograms of ROS fluorescence, whereas pretreatment with AVN-C and FA, alone or in combination, reduced ROS levels (Fig 7B) (***p<0.001; one-way ANOVA). All groups were scored seven times.]
Another study showed that intratympanic injection of MTX does not cause any ototoxic effects and concluded that this route can be applied as a safe treatment alternative for autoimmune vestibulocochlear diseases [24]. In our investigation, HDMTX treatment caused DIHL with permanent threshold shifts, both for the click sound and at all evaluated tone-burst frequencies, one month after treatment (Fig 2C and 2D).

Clinically, methotrexate dosages are classified into three groups. The first is low-dose methotrexate, defined as less than 20 mg/m² (LDMTX, ~0.66 mg/kg), which is frequently used for rheumatological disorders such as rheumatoid arthritis. The second category comprises intermediate IV doses ranging from 100-500 mg/m² (3.3-16.6 mg/kg), designated for breast and other solid tumors as well as low-grade leukemias and lymphomas. The final category is high-dose MTX, defined as a dose of at least 500 mg/m² (HDMTX, ≥16.6 mg/kg), which is the mainstay IV chemotherapy for primary central nervous system lymphoma, osteosarcoma, and high-grade leukemias/lymphomas such as non-Hodgkin lymphoma; these regimens are administered over several days [25][26][27] (the relation between the mg/m² and mg/kg figures is illustrated in the sketch at the end of this section). Accordingly, we decided to administer HDMTX to WT mice for seven consecutive days in the present investigation, in line with the dosing employed in the clinical setting. Moreover, there is a lack of information in the literature about the death rate and general health concerns associated with high-dose MTX. In addition, several clinical trials and follow-up studies of drugs such as methotrexate showed that they are beneficial in reducing morbidity measures, but their effect on mortality in people with rheumatoid arthritis remains uncertain [28]. Ambulatory high-dose methotrexate injection has been used as central nervous system prophylaxis in patients with aggressive lymphoma, and it was shown that MTX penetrates cell membranes, especially at doses high enough to cross the blood-brain barrier. MTX is heavily bound to albumin in the plasma circulation (50-80%), which helps explain the potential for severe toxicity associated with high-dose treatment in different organs. Patients whose MTX elimination is delayed are exposed to harmful MTX concentrations for an extended period, which can result in considerable morbidity; such toxic exposures have the potential to cause long-term injury or death [29,30]. LDMTX and HDMTX act through different mechanisms, and in conditions such as renal failure, where many drugs can accumulate in the bloodstream [31], this drug can cause treatment-related illness or even death. It is therefore particularly important to weigh the benefits against the risks.

The treatment protocols used in our investigations (AVN-C, FA, and AVN-C+FA) all provided protection against HDMTX-induced toxicity, and AVN-C alone was efficient and outperformed FA, alone or in combination, with respect to sheltering the synaptic ribbons and neuronal integrity (Figs 4A-4D, 5A and 5B). In young men and women, oat AVN supplementation has been shown to reduce circulating inflammatory cytokines and inhibit the production of chemokines and cell adhesion molecules induced by downhill running [32]. Furthermore, administration of 10 mg/kg of AVN-C has been reported to prevent hearing loss in normal mice exposed to noise and ototoxic drugs (furosemide and kanamycin) [11].
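The mg/m² and mg/kg equivalences quoted in the dose-category paragraph above imply a conversion factor of roughly 30, which the following small sketch makes explicit. The factor is inferred from the figures in the text itself; regulatory guidance uses species-specific surface-area factors, so treat this as illustrative arithmetic rather than a dosing rule.

```python
def mg_per_m2_to_mg_per_kg(dose_m2, km=30.0):
    """Convert a body-surface-area dose to a weight-based dose.

    km = 30 reproduces the equivalences quoted in the text
    (20 mg/m2 -> ~0.66 mg/kg, 500 mg/m2 -> ~16.6 mg/kg); the factor
    is an assumption derived from those figures.
    """
    return dose_m2 / km

for d in (20, 100, 500):
    print(f"{d} mg/m2 -> {mg_per_m2_to_mg_per_kg(d):.2f} mg/kg")
```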
Our findings indicate that MTX causes oxidative stress that damages the inner ear tissues. As evidenced by the ABR and SEM findings, pre-administration of AVN-C to normal mice was effective in preventing or reversing the effects of MTX, with the AVN-C-treated group yielding outcomes similar to those of the carrier-treated control group (Fig 2C and 2D). Since the antioxidant AVN-C had protective effects against MTX toxicity, it can be inferred that MTX ototoxicity involves the generation of ROS. Folinic acid (FA) has been extensively examined and reported as an antidote to MTX, and it was employed in this study as a benchmark for the efficacy of AVN-C. FA has been administered as an adjuvant treatment in studies with HDMTX [33][34][35]; we postulated that this co-administration could be the reason why hearing damage has not previously been observed with HDMTX treatment. Hearing impairment caused by oxidative stress in mice was previously prevented in NIHL and DIHL when the antioxidant AVN-C was administered [11]. However, combination treatment with AVN-C and FA before the administration of HDMTX had no additional effect, implying that AVN-C and FA share the same pathway.

Furthermore, sex differences in hearing loss have significant consequences for knowledge gaps in the translation from non-clinical to clinical settings [36]. We used 4 males and 3 females in our study to check for statistical differences between the sexes, since differences in sex-based requirements between non-clinical and clinical research can limit a comprehensive understanding of sex-based mechanistic variables. Some reports show that women of all ages have superior hearing to men [37]. Such disparities may play a role in understanding and explaining clinically significant sex differences, and they are almost certainly essential for developing successful therapeutic treatment options. However, we found no variation in findings attributable to sex differences among the wild-type mice studied in our investigations.

Ribbon synapses connect the IHCs to the spiral ganglion neurons (SGNs); they are the primary synaptic structures in the sound conduction pathway and play a key role in sound signal transmission [38]. Damage to the ribbon synapses hinders the transmission and conduction of sound to the brain (where it is interpreted as sound), thereby increasing the hearing thresholds and causing hearing loss. In addition, MTX treatment impairs both the central and peripheral nervous systems, with the potential for neurotoxicity in the central auditory nervous system [7]. Our results showed a decline in the number of IHC synaptic ribbons in the MTX-treated group, suggesting that HDMTX has a direct harmful effect on the synaptic ribbons. Additionally, MTX caused the death of several axons of the auditory nerve fibers, which reduced the neural output of the cochlea and impaired the sensitivity and tuning of the auditory nerve fibers. Moreover, the wave I amplitude was significantly decreased, implying reduced firing of electrical impulses from the cochlea to the brain for sound interpretation and, ultimately, hearing loss (Fig 2B). Earlier reports have demonstrated that cisplatin-induced ROS accumulation and aging reduce the number of ribbon synapses in IHCs, resulting in synaptopathy and OHC loss; according to these findings, ROS-induced deterioration of ribbon synapses may be a prelude to HC loss [39].
In mice, noise exposure caused significant reductions in ABR wave I amplitudes as well as loss of cochlear ribbon synapses [40]. ABR amplitudes have been used successfully to identify synaptopathy in listeners, considering that wave I reflects the synchronous firing of many auditory nerve fibers in the spiral ganglion cells [41]. Despite the significant decrease in ABR wave I amplitude in the MTX group owing to oxidative stress, AVN-C treatment substantially increased the wave I amplitude and maintained it at a higher level than that achieved with FA or with the concomitant AVN-C and FA treatment (Fig 2B). The synaptic ribbons were retained and preserved under our treatment regimens (AVN-C, FA, and AVN-C+FA) (Fig 4A-4D). This is reasonable because AVN-C, as a powerful antioxidant capable of reducing ROS levels, protected the synaptic ribbons from the damaging effects of ROS. Meanwhile, FA is widely known as an antidote to MTX and indeed served as such in this study. However, AVN-C was more effective than FA at preserving cochlear nerve fiber axon integrity, since some innervation distortion was observed at the basal turn of FA-treated cochleae (Fig 5B). Following the damage caused by MTX, AVN-C appeared to promote cochlear neuron survival, which may have boosted the firing of electrical impulses from the cochlea to the brain, explaining the improved ABR results compared with those of the MTX-treated mice. Previous work has demonstrated the functional recovery of regenerated synapses in treated animals using round-window delivery of NT3 protein, by evaluating the suprathreshold amplitude of ABR wave I in response to tone pips in the damaged cochlear frequency regions [42]. According to the currently accepted theory of mammalian cochlear mechanics, the fluid in the cochlear scalae interacts with the elastic cochlear partition to produce transversely oscillating displacement waves that travel along the cochlear coil [43]. Previous studies have shown that exogenous neurotrophins delivered directly to the cochlear fluids enhance the survival of cochlear neurons after the HCs are damaged by ototoxic drugs [44].

The antioxidative role of AVN-C was observed in all in vivo studies, as expected from our earlier work [11], and FA was found to block MTX ototoxicity. To better understand the role of ROS in the mechanism of MTX-induced apoptosis and the effects of AVN-C and FA, we pre-treated HEI-OC1 cells with 1 μM AVN-C and 3 μM FA for 3 hours before administering 0.2 μM MTX. After 24 hours of MTX treatment, all cells were examined under a fluorescence microscope. We observed that MTX could cause apoptotic morphological changes such as chromatin condensation, membrane blebbing and shrinkage, and apoptotic body formation (data not shown). DCF, a fluorescent probe commonly used to detect total ROS in cells, was also used. MTX-treated HEI-OC1 cells exhibited an increased DCF-positive population, indicating the induction of ROS production by MTX (Fig 7A). ROS are known to be extremely harmful, triggering oxidative stress through the oxidation of biomolecules and resulting in irreversible cellular damage and cell death [45][46][47][48]. The inflammatory cytokines and apoptosis-related genes (TNF-α, IL1β, IL6, BAX, and HRK) were significantly upregulated in HEI-OC1 cells treated with MTX (Fig 8). The overexpression of BAX in the MTX-treated cells indicates that MTX induces ROS generation via a mitochondria-mediated pathway, leading to an increase in inflammation.
Previous reports have shown that MTX causes potent mitochondrial disruption and apoptosis in HL-60 and Jurkat T cells through the production of ROS [17]. The apoptotic morphological changes were significantly reduced when the cells were pretreated with the ROS scavenger AVN-C and with FA (Fig 7B). The number of healthy cells was much higher in the AVN-C-treated group than in the FA-treated group (Fig 7A and 7B). Even compared with FA (a well-known antidote for MTX toxicity), used either alone or in conjunction with AVN-C, the antioxidant AVN-C was found to be significantly beneficial in lowering ROS production and inflammation, as well as in preventing cytotoxicity in HEI-OC1 cells. AVN-C was previously found to protect HEI-OC1 cells from oxidative stress caused by gentamicin treatment [11]. Here, we demonstrated a way to protect these cells from MTX toxicity as well.

Conclusion

We demonstrated that treatment with MTX causes significant hearing loss. When administered at high doses, it can penetrate the BLB, damaging the synaptic ribbons, cochlear neurons, and OHCs. The antioxidant AVN-C has a remarkable protective effect against HDMTX-induced ototoxicity, and we used FA to benchmark the effect of AVN-C in our investigations. We showed that AVN-C protects the ribbon synapses, cochlear neuron integrity, and OHCs from the harmful effects of MTX and that it improves ABR outcomes. These findings suggest that AVN-C can be utilized for hearing preservation. Our findings emphasize that AVN-C is effective and can protect against the harmful effects of HDMTX, and they suggest that ROS are involved in the occurrence of MTX-induced ototoxicity.
2022-04-01T05:16:03.383Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "48a4309b6d1428b32793094a2c622b9caaf18ac1", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0266108&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "48a4309b6d1428b32793094a2c622b9caaf18ac1", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
55602428
pes2o/s2orc
v3-fos-license
Morphometric Analysis of Karadya Micro Watershed: A Case Study of Mandya District

Water is one of the essential natural resources for the very survival of life, and it is becoming a scarce commodity. It is very important to manage this essential natural resource at the micro-watershed level in order to achieve sustainable development. Morphometric analysis plays a vital role in understanding the hydro-geological behavior of a drainage basin. Remote sensing and Geographical Information System (GIS) techniques are proven, efficient tools for the morphometric analysis of drainage basins throughout the world. Hence, an attempt has been made in this paper to study the morphometric parameters of the Karadya micro watershed using a GIS approach. The study reveals that the terrain exhibits a dendritic drainage pattern, with the highest stream order being the third order. The drainage density of the watershed is 2.65 km⁻¹. The mean bifurcation ratio of the entire basin is 6.73, indicating that the drainage pattern is not much influenced by geological structures. The relief ratio indicates that the discharge capability of the watershed is very high and the groundwater potential is low. Further, the study shows that GIS techniques prove to be a competent tool for morphometric analysis, which helps in the planning and management of watersheds.

Introduction

Water is one of the essential natural resources for the very survival of life on planet earth. This essential natural resource is becoming a scarce commodity for various reasons and needs to be conserved. Conservation of available natural resources through the demarcation of potential zones at the micro-watershed level is a primary necessity for achieving sustainable development. A watershed is an ideal unit calling for a multidisciplinary approach to resource management for ensuring continuous benefits on a sustainable basis. Morphometry is the measurement and mathematical analysis of the configuration of the earth's surface and of the shape and dimension of its landforms [3]. Morphometric analysis of a watershed provides a quantitative description of the drainage system, which is an important aspect of the characterization of watersheds [32]. An understanding of the hydrological behavior of a watershed is important for effective planning and management of land and water resources development. Morphometric analysis of a watershed involves the measurement of linear features, areal aspects, the gradient of the channel network, and the contributing ground slope of the drainage basin [20]. To understand the evolution and behavior of drainage patterns, several methods have been developed, ranging from traditional methods such as field observations and topographic maps to advanced methods such as remote sensing and GIS [14,28]. In traditional methods, it is difficult to examine all drainage networks from field observations because of their extent throughout rough terrain and/or vast areas. Remote sensing coupled with GIS has emerged as a powerful tool in recent years for analyzing drainage morphometry throughout the world. These techniques have been of immense utility for the analysis of morphometric parameters, helping to arrive at cost-effective plans for conservation and development measures for watersheds at the micro level.
Many soft computing techniques have been employed to estimate water consumption under different climatic conditions [7,8,9,10]. In India, studies on morphometric analysis using remote sensing and GIS techniques carried out by [1,2,11,12,13,18,19,20,21,23,24,30] and others have revealed that the results obtained with GIS and remote sensing are reliable and accurate and aid in watershed management. Hence, an attempt has been made to carry out a morphometric analysis of the Karadya micro watershed using GIS techniques for water resource planning, conservation, and management.

Study Area

The Karadya watershed is situated in Mandya district, Karnataka, India, and is geographically located between 76°37'30" and 76°45'E longitude and 12°37'30" and 12°45'N latitude. The study area covers 23.95 km², with a maximum length of 7.34 km and a width of 5.04 km. It attains a maximum elevation of 1065 m and a minimum of 848 m. It has a typical subtropical climate with hot dry summers and cool dry winters. Temperature varies between a minimum of 15°C during December or January and a maximum of 35°C in May or June. Rainfall in the study area is highly erratic, varying between 400 mm and 1200 mm.

Methodology

In this study, a topographic map on a scale of 1:25000 prepared by the Survey of India (SOI), bearing number 57D/10, was used. To delineate the watershed boundary and drainage pattern, the toposheet was georeferenced and digitized. Landsat TM satellite images were used to obtain the drainage pattern of the different orders of the basin. Drainage basin morphometric parameters and stream order characteristics of the area were extracted from the digitized data using Strahler's method of stream ordering, and DEM and slope maps were prepared with the help of remote sensing and GIS techniques (ArcGIS). In the present study, the morphometric parameters covering the linear, areal, and relief aspects of the watershed were computed using the formulae developed by different researchers, as presented in Table 1.

Morphometric Parameters

The morphometric analysis of a drainage basin plays an important role in understanding its geo-hydrological behavior and expresses the prevailing climate, geology, geomorphology, and structural antecedents of the catchment. The values of the various basin characteristics required for calculating the morphometric parameters are discussed briefly below. The morphometric parameters were analyzed in three categories: (1) linear aspects, (2) areal aspects, and (3) relief aspects.

Linear Aspects

The linear aspects of the watershed are presented in Table 2.

Stream Order

The designation of stream order is the first step in the morphometric analysis of a drainage basin; the number of streams gradually decreases as the stream order increases. According to [32], 1st-order streams are those which have no tributaries, and 2nd-order streams are those which have tributaries only of 1st-order streams; where two 2nd-order channels join, a segment of 3rd order is formed, and so on (the recursive rule is sketched in code below). The variation in stream order is due to the physiographic and structural conditions of the region. As per Strahler's method, the study area is a third-order drainage basin, as shown in Figure 2.
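Strahler's ordering rule described above is a simple recursion on the drainage tree; in practice it is computed inside the GIS, but a minimal Python sketch of the rule itself follows. The node names and toy network are illustrative only.

```python
def strahler(children, node):
    """Strahler order of `node` in a drainage tree.

    `children` maps each junction to its upstream tributaries; a stream with
    no tributaries is 1st order, and where two streams of equal order n join,
    a stream of order n + 1 is formed (otherwise the larger order carries on).
    """
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler(children, k) for k in kids), reverse=True)
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# Toy network: four 1st-order streams merging pairwise into a 3rd-order outlet
net = {"outlet": ["b1", "b2"], "b1": ["a1", "a2"], "b2": ["a3", "a4"]}
print(strahler(net, "outlet"))  # -> 3, as for the study watershed
```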
Stream Length

Stream length is one of the most important hydrological features of a basin, indicating variations in surface runoff behavior. Longer stream lengths are generally indicative of flatter gradients. The total stream length is highest for the first order and decreases as the stream order increases. The stream lengths were computed based on the law proposed by Horton, with the help of GIS software. In the present work, the results show that the total stream length is greatest for the first-order streams and decreases with increasing stream order, as shown in Table 2. The overall length of the 84 streams of the watershed is 55.106 km. The total lengths of the first-, second-, and third-order streams are 33.426 km, 13.64 km, and 8.04 km, respectively.

Stream Number

The order-wise total number of stream segments is known as the stream number. A higher stream number indicates lower permeability and infiltration. The number of streams usually increases in geometric progression as the stream order decreases. The results for the study area reveal that the number of first-order streams is 67, accounting for 79.76% of all segments; the number of second-order streams is 16, accounting for 19.04%; and there is only 1 third-order stream, accounting for 1.19%. The results are shown in Figure 3. As per Horton's [6] law, the stream number decreases in geometric progression as the stream order increases.

Mean Stream Length

Mean stream length is a dimensional property revealing the characteristic size of the components of a drainage network and its contributing watershed surfaces [32]. It is directly proportional to the size and topography of the drainage basin. It is obtained by dividing the total stream length of an order by the total number of segments of that order. The mean stream length of any given order is greater than that of the lower order in all watersheds. The mean stream lengths of the study area are 0.50 km for the first order, 0.85 km for the second order, and 8.04 km for the third order.

Stream Length Ratio

The stream length ratio may be defined as the ratio of the mean stream length of one order to that of the next lower order. The mean stream length of a given order is higher than that of the previous order and lower than that of the next successive order. The stream length ratio has important relevance to surface flow and discharge and to the erosional stage of the basin. The stream length ratios of the study area are 1.7 and 9.458, respectively. The increase in the stream length ratio from lower to higher order shows that the study area has reached a mature geomorphic stage.

Bifurcation Ratio

The bifurcation ratio is closely related to the branching pattern of a drainage network. It is the ratio of the number of stream segments of a given order, Nu, to the number of streams of the next higher order, Nu+1 [25]. Horton [6] and Strahler [25] considered the bifurcation ratio an index of relief and dissection. The bifurcation ratio is a dimensionless property indicating the degree of integration prevailing between streams of various orders in a drainage basin, and it generally ranges from 3.0 to 5.0. In this study, the bifurcation ratio varies from 4.19 to 16, and the mean bifurcation ratio for the entire basin is 6.73, which is higher than the usual range of 3.0 to 5.0. The higher value of the mean bifurcation ratio indicates a high structural complexity and low permeability of the terrain [22]. (These linear parameters are recomputed from the reported stream counts and lengths in the sketch below.)
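The linear parameters above follow directly from the reported stream counts and total lengths, as the short check below shows. The inputs are the study's own figures; small differences from the quoted ratios (e.g. 9.43 versus 9.458) come only from the text rounding the mean lengths to two decimals.

```python
# Stream numbers and total stream lengths per order, as reported for the watershed
N = {1: 67, 2: 16, 3: 1}              # stream numbers
L = {1: 33.426, 2: 13.64, 3: 8.04}    # total stream length per order (km)

mean_len = {u: L[u] / N[u] for u in N}                   # mean stream length
RL = {u: mean_len[u] / mean_len[u - 1] for u in (2, 3)}  # stream length ratios
Rb = {u: N[u] / N[u + 1] for u in (1, 2)}                # bifurcation ratios

print(mean_len)  # ~{1: 0.50, 2: 0.85, 3: 8.04}, matching the text
print(RL)        # ~{2: 1.71, 3: 9.43}, versus 1.7 and 9.458 reported
print(Rb)        # {1: 4.19, 2: 16.0}, the quoted range 4.19 to 16
```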
Length of Overland Flow

The length of overland flow is the length over which water flows on the ground before it becomes concentrated into definite stream channels [6]. It is approximately equal to half the reciprocal of the drainage density. This factor depends on the rock type, permeability, climatic regime, vegetation cover, and relief, as well as the duration of erosion [25]. The higher the length of overland flow, the lower the relief, and vice versa. The length of overland flow in the Karadya watershed is 0.19, which may reflect high structural disturbance, low permeability, steep to very steep slopes, and high surface runoff.

Drainage Pattern

The drainage pattern reflects the influence of slope, lithology, and structure on the watershed. The study of the drainage pattern helps in identifying the stage in the cycle of erosion. The drainage pattern of the study area is mainly dendritic (Figure 3), which indicates homogeneity in texture and a lack of structural control.

Areal Aspects

The areal aspect of a watershed of a given order is defined as the total area, projected upon a horizontal plane, contributing overland flow to the channel segment of the given order, including all tributaries of lower order. The area and perimeter of a basin are important parameters in quantitative geomorphology. The area of the basin is defined as the total area projected upon a horizontal plane, and the perimeter is the length of the boundary of the basin. Areal aspects of the drainage basin such as drainage density, drainage texture, stream frequency, form factor, circularity ratio, elongation ratio, shape factor, and compactness coefficient were calculated, and the results are given in Table 3.

Drainage Area

The fundamental unit of virtually all watershed and fluvial investigations is the drainage area. An individual drainage basin is a finite area whose runoff is channeled through a single outlet; it is enclosed within the boundary of the watershed divide. A drainage divide is simply a line on either side of which water flows to different streams. The drainage area measures the average drainage area of streams of each order and increases exponentially with increasing order. The drainage area of the study watershed was found to be 23.95 sq. km.

Drainage Density

Drainage density is a measure of the total length of the stream segments of all orders per unit area. It indirectly indicates the groundwater potential of an area through its relation to surface runoff and permeability. Slope gradient and relative relief are the main morphological factors controlling drainage density. The higher the drainage density, the faster the runoff; values in humid regions vary between 0.55 and 2.09 km/km², with an average of 1.03 km/km² [22]. Low drainage density generally results in areas of highly resistant or permeable subsoil material, whereas high drainage density results from weak or impermeable subsurface material [26,27]. Low drainage density leads to coarse drainage texture, while high drainage density leads to fine drainage texture. The drainage density of the study area is 2.65 km/km², which indicates that the basin has moderately permeable drainage.

Drainage Texture

Drainage texture is the total number of stream segments of all orders per perimeter of the basin (Horton 1945). It depends upon a number of natural factors such as rainfall, vegetation, climate, rock and soil type, infiltration capacity, relief, and stage of development. It is important in geomorphology because it expresses the relative spacing of the drainage lines. Drainage texture has been classified into five classes: very coarse (<2), coarse (2 to 4), moderate (4 to 6), fine (6 to 8), and very fine (>8) (Smith 1950). The drainage texture of the study area is 3.585 and can be categorized as moderate in nature.
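The reported drainage density ties together two other reported values: the length of overland flow above (half the reciprocal of the drainage density) and the constant of channel maintenance discussed in the next subsection (its reciprocal). A two-line check using the study's own drainage density:

```python
Dd = 2.65                # drainage density reported for the watershed (km/km^2)
Lg = 1 / (2 * Dd)        # length of overland flow, ~half the reciprocal of Dd
C = 1 / Dd               # constant of channel maintenance (see next subsection)
print(round(Lg, 2), round(C, 2))   # -> 0.19 and 0.38, matching the reported values
```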
Stream Frequency

The stream frequency, or channel frequency, is the total number of stream segments of all orders per unit area [5]. Stream frequency reflects the texture of the drainage network. The stream frequency of the study area is 3.507, which exhibits a positive correlation with the drainage density of the area, indicating an increase in stream population with increasing drainage density.

Circularity Ratio

The circularity ratio is the ratio of the area of the basin to the area of a circle having the same circumference as the perimeter of the basin [17]. It is dimensionless and expresses the degree of circularity of the basin, which affects stream flow in the watershed. It is influenced by the length and frequency of streams, geological structures, land use/land cover, climate, and the slope of the basin. It is a significant ratio that indicates the dendritic stage of a watershed: low, medium, and high values of the circularity ratio indicate the young, mature, and old stages of the life cycle of the tributary watershed, respectively. The circularity ratio obtained for the study area is 2.20, indicating that the basin topography has reached the maturity stage.

Form Factor

The form factor is the dimensionless ratio of the watershed area to the square of the length of the watershed [5]. This factor indicates the flow intensity of a basin of a defined area. Form factor values vary from 0 (highly elongated) to 1 (perfectly circular); a value greater than 0.78 indicates a perfectly circular basin, while smaller values suggest an elongated basin. The form factor for the study area is 0.69, indicating a more or less circular basin.

Elongation Ratio

The elongation ratio is the ratio between the diameter of the circle with the same area as the drainage basin and the maximum length of the basin [25]. It is a very significant index in the analysis of basin shape and gives an idea of the hydrological character of a drainage basin. Elongation ratio values are grouped into four classes: circular (>0.9), oval (0.9-0.8), less elongated (0.8-0.7), and elongated (<0.7). A circular basin is more efficient in the discharge of runoff than an elongated basin [26,27]. The elongation ratio of the study area is 1.81, which indicates that the watershed is circular in nature and efficient in the discharge of runoff.

Constant of Channel Maintenance

The constant of channel maintenance is the inverse of the drainage density. It indicates the relative size of the landform units in a drainage basin and has a specific genetic connotation [31]. The value of the constant of channel maintenance for the study area is 0.38 sq. km/km, which reflects high structural disturbance, low permeability, steep to very steep slopes, and high surface runoff.

Relief Aspects

The relief aspects of a drainage basin relate to the three-dimensional features of the basin, involving the area, volume, and altitude of the vertical dimension of landforms, for which different morphometric methods are used to analyze terrain characteristics. Relief aspects include the relief, relief ratio, and ruggedness number, which indicate the erosion potential of the processes operating within a drainage basin. The results for the relief aspects of the Karadya watershed are given in Table 4.

Watershed Relief

Watershed relief is the difference in elevation between the lowest and the highest points of the watershed. Relief is an important factor in understanding the denudational characteristics of the basin and plays a significant role in landform development, drainage development, surface and subsurface water flow, permeability, and the erosional properties of the terrain [15]. A high relief value indicates strong gravity-driven water flow, low permeability, and high runoff conditions. The difference in elevation between the remotest point and the discharge point was obtained from the available contour map. The highest elevation of the watershed is 1065 m above mean sea level and the lowest is 848 m above mean sea level; the overall relief calculated for the watershed is therefore 217 m.

Relief Ratio (Rh)

The relief ratio (Rh) is the ratio of the basin relief to the basin length (the horizontal distance along the longest dimension of the basin parallel to the principal drainage line) [25]. It is used to measure the overall steepness of a river basin and is an indicator of the intensity of the erosion processes operating on the slopes of the basin. For the present study, the relief ratio is 0.002957. A high value of the relief ratio is characteristic of hilly regions.

Relative Relief Ratio (Rbh)

The relative relief ratio is an important morphometric variable used for the overall assessment of the morphological characteristics of any topography [4]. [16] suggested calculating the relative relief by dividing the relief ratio by the basin perimeter, and classified it into three categories: (i) low relative relief, from 0 to 100 m; (ii) moderate relative relief, from 100 to 300 m; and (iii) high relative relief, above 300 m. The relative relief ratio of the study area is 9.26, indicating that the study area has low relative relief.

Ruggedness Number

Strahler [31] defined the ruggedness number as the product of the basin relief and the drainage density, where both parameters are expressed in the same unit; it usefully combines slope steepness with slope length. The ruggedness number calculated for the Karadya watershed is 0.575. This low ruggedness value implies that the area is less prone to soil erosion and has intrinsic structural complexity in association with relief and drainage density.
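The relief and ruggedness figures can be reproduced directly from the study's reported elevations and drainage density, as the short check below shows (all inputs are the paper's own values):

```python
# Relief parameters from the study's reported values
h_max, h_min = 1065.0, 848.0            # elevation extremes (m)
relief_m = h_max - h_min                # basin relief: 217 m
Dd = 2.65                               # drainage density (km/km^2), as reported
ruggedness = (relief_m / 1000.0) * Dd   # Strahler: relief (km) x drainage density
print(relief_m, round(ruggedness, 3))   # -> 217.0 and 0.575, matching the text
```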
Conclusions

Quantitative analysis of the morphometric parameters of a watershed is very useful in drainage basin evaluation, water conservation, and natural resource management at the micro level. The morphometric analysis shows that the study area is characterized by a dendritic drainage pattern. Lower-order streams dominate the basin, with the highest stream order being the third order. The higher mean bifurcation ratio of the study area indicates a strong structural control on the drainage pattern of the watershed. The values of the form factor, circularity ratio, and elongation ratio indicate that the Karadya watershed is circular in shape, with high runoff and low groundwater potential. Further, the study concludes that ASTER (DEM) data coupled with GIS techniques is a competent tool for analyzing the morphometric parameters of any terrain for water resource management at the micro level, and can be used by planners and decision makers to develop strategies for sustainable watershed development programs.
2019-04-27T13:09:06.110Z
2018-03-14T00:00:00.000
{ "year": 2018, "sha1": "7512dbf9520f2574c5a45af61ddb445935407a13", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ajrs.20180601.13.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8e0fd4dcae0359977acfb96a697f9d6c66507836", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Environmental Science" ] }
119253043
pes2o/s2orc
v3-fos-license
Recombination within multi-chain contributions in pp scattering

We investigate the evolution of multiple parton chains in proton-proton scattering and show that interactions between different chains may become quite important.

Introduction

In the last years it has become clear that multiple parton interactions play an important role in hadron-hadron collisions at high energies [1,2,3,4,5,6,7,8]. As a first step, these chains are modelled as a collection of single noninteracting chains (Fig. 1); each chain follows the usual partonic DGLAP evolution, i.e. the ladders are in color singlet states, and the momentum transfer across each ladder is set equal to zero. Disregarding final state radiation and working in leading order only, the final state produced by k such chains consists of N = n₁ + n₂ + ... + n_k partons, and the cross section, dσ ∼ |T_{2→N}|² dΩ_N, is described by a sum of squares, without any interference terms.

[Fig. 1 caption: Within each chain, the boxes mark the hard subprocess with the largest transverse momenta (production of dijets).]

The theory of multiparton (higher twist) evolution has been outlined in [9]. To leading order, the evolution is described by the sum over the pairwise interactions between two t-channel partons, and the evolution kernels are given by the nonforward DGLAP splitting functions. Of particular importance is the small-x region, where powers of ln 1/x may compensate (and even overcome) the higher twist suppression. In this region the dominant contributions are given by gluon ladders, and their evolution in ln 1/x is described by the BKP equations [10]. In leading order this evolution is, again, described by the sum over pairwise interactions between t-channel gluons, with evolution kernels given by the nonforward BFKL kernels. At small x, the leading logarithmic approximations of the two approaches (higher twist evolution in the momentum scale, or small-x evolution in ln 1/x) coincide in the so-called 'double logarithmic approximation', which resums powers of (α_s ln 1/x ln p²). For both evolution schemes the t-channel multiparton state is in a color singlet state and, as far as the total cross section is concerned, the total momentum transfer is set equal to zero. However, any subsystem consisting of two t-channel gluons will, in general, carry nonzero color quantum numbers and nonzero momentum transfer. Therefore, describing the evolution of a t-channel state consisting of, say, 2n gluons as the evolution of n noninteracting color singlet ladders with zero momentum transfer represents an approximation whose validity deserves further investigation.

In this note we study, as a first correction beyond the approximation of noninteracting ladders, a particular type of 'interaction' between two ladders, illustrated in Fig. 2. Starting at the proton at the bottom of Fig. 2, we first have two noninteracting color singlet ladders (denoted by the pairs of t-channel gluons (14) and (23)). At rapidity Y we allow for a 'recombination' of the t-channel gluons: from then on we have the two color singlet pairs (13) and (24). In the following we denote this transition as the 'recombination vertex'. It introduces a correlation between the two ladders. However, it is important to note that, in the double log approximation, this kind of interaction between the two ladders still belongs to the leading logarithmic approximation: each momentum integral contributes a factor (α_s ln p² ln 1/x).
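To make the double-log counting explicit, one may recall the standard textbook form of the resummed gluon density in this approximation: with n cells, each contributing one logarithm in 1/x and one in k², the series sums to a modified Bessel function. This expression is added here only for orientation and is not reproduced from the paper itself.

```latex
% Double-log counting: n strongly-ordered ladder cells give
% (abar * ln(1/x) * ln(k^2/Q0^2))^n / (n!)^2, which sums to
\[
  f(x,k^2) \;\sim\; \sum_{n\ge 0}\frac{1}{(n!)^{2}}
  \left(\bar{\alpha}_s \,\ln\frac{1}{x}\,\ln\frac{k^{2}}{Q_0^{2}}\right)^{\!n}
  \;=\; I_{0}\!\left(2\sqrt{\bar{\alpha}_s \,\ln\frac{1}{x}\,\ln\frac{k^{2}}{Q_0^{2}}}\right),
  \qquad \bar{\alpha}_s=\frac{N_c\,\alpha_s}{\pi},
\]
% growing asymptotically as exp( 2 * sqrt( abar * ln(1/x) * ln(k^2/Q0^2) ) ).
```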
It is this kind of interaction between the two ladders which we will study in the following, staying within the double logarithmic approximation of gluon ladders. Particular attention will be given to the possibility that this recombination of two chains takes place in the perturbative region, i.e. in the region of large transverse momenta. Recently an important potential application of such recombination effects has been suggested: in an attempt to explain the ridge effect reported by the CMS group at the LHC [11], it has been suggested [12] that the observed long range rapidity correlation and azimuthal correlation can be explained, within the Color Glass Condensate framework, by a two-chain recombination of the kind discussed in this paper.

Two noninteracting ladders

We begin with the double logarithmic approximation of the two-chain configurations shown in Figs. 1 and 2: we search for regions of integration where each closed momentum loop gives two logarithms, one in the transverse momentum and one in rapidity. We restrict ourselves to gluon ladders which, at small x, are known to give the largest contributions. We parametrize our momenta as in Eq. (1), where p_A, p_B are the large momenta of the incoming protons A and B, resp., the momentum fractions x, y range between 0 and 1, and k denotes the two-dimensional transverse momentum. In the double logarithmic approximation, the BFKL kernel and the splitting function P_gg lead to the same answer (Fig. 3): we can either start from the small-x limit which is described by the BFKL equation and then take the limit of strongly different momentum scales; alternatively, we can begin with the collinear limit where the DGLAP equations apply and then take the limit of small x. For our purposes it will turn out that the approach based upon the BFKL equation is more suitable: it is the region where the logarithms in 1/x are slightly larger than the transverse momentum logarithms where the recombination effects become important. In the region of strongly ordered transverse momenta the color singlet BFKL kernel is approximated by Eq. (2); this approximation will be used in the following. We begin with the two noninteracting ladders shown in Fig. 1. Our main focus is on the integration over the loop momentum q, and we consider a single cell with momentum k′ inside one of the ladders below the produced pairs of jets. This cell is illustrated in Fig. 3. Using (2) for the upper rung (and the corresponding expression for the lower rung), together with the gluon propagators for the vertical lines, one sees that the integration over the transverse momentum k′ is logarithmic only if k′ ≫ q, i.e. q defines the momentum scale Q_0 where the k² evolution along the ladders starts. On the other hand, the range of the integration over the momentum transfer along the ladders is determined by the size of the interaction region and by the correlation length of the initial gluons of the two ladders inside the proton: we denote this effective radius by R, and put Q_0² = 1/R². As a result, the cross section for the production of two pairs of gluon jets (cf. Fig. 1) will be of the form of Eq. (3), where f(x, p²) = xg(x, p²) denotes the gluon density with initial momentum scale Q_0², the factor 1/R² results from the integration over the loop momentum q, and the momentum factors 1/(p_1² p_2²)² represent the two production vertices of the pairs of gluon jets (evaluated in the double logarithmic approximation).
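Since Eq. (3) itself did not survive the text extraction, only its ingredients can be illustrated. The following Python sketch (our illustration, not code from the paper) uses the standard double-logarithmic-approximation form of the gluon density, xg(x, p²) ∝ exp(2√(a ln(1/x) ln(p²/Q_0²))), together with the 1/R² and 1/(p_1² p_2²)² factors named in the text; the coupling a and the starting scale Q_0² are assumed toy values.

```python
# Toy illustration (assumptions, not the paper's Eq. (3)): the standard
# DLA gluon density and the factors named in the text for the two-chain
# cross section. Parameter values are invented for illustration.
import math

a = 0.2       # effective coupling a = alpha_s * Nc / pi (assumed value)
Q02 = 1.0     # starting scale Q_0^2 = 1/R^2, in GeV^2 (assumed value)

def f_dla(x, p2):
    """Toy DLA gluon density f(x, p^2) = x g(x, p^2)."""
    return math.exp(2.0 * math.sqrt(a * math.log(1.0 / x) * math.log(p2 / Q02)))

# strong small-x growth: the powers of ln(1/x) that can compensate
# the higher-twist suppression
for x in (1e-2, 1e-3, 1e-4):
    print(x, f_dla(x, p2=400.0))          # p = 20 GeV jets

# the remaining factors spelled out in the text:
p12 = p22 = 400.0
print(1.0 / (p12 * p22)**2)               # 1/(p1^2 p2^2)^2 from the hard vertices
print(Q02)                                # 1/R^2 from the q loop, since Q_0^2 = 1/R^2
```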
As an important feature of (3) we mention that, as long as interactions between different parton chains are not taken into account, the momentum transfer along a ladder is of the order of the initial momentum scale of the (multi)parton distribution.

Recombinations in a two-ladder configuration

We now turn to the main topic of this paper, a study of the two recombinations shown in Fig. 2. We begin with the recombination vertex below the produced jets; the kinematics are illustrated in Fig. 4. Above the recombination, the two ladders formed by the lines (13) and (24) are in the color singlet configuration; below, the color singlet ladders are formed out of (14) and (23). As a result, each color recombination is accompanied by the color suppression factor 1/(N_c² − 1), Eq. (4). In order to have logarithmic momentum integrals for the momentum loops below the lower two rungs (connecting (14) and (23)), we need the ordering condition (5). Similarly, in order to find logarithmic momentum integrals above the upper two rungs (connecting (13) and (24)), we need condition (6). Finally, inside the k loop of Fig. 4 we need condition (7). Using, for the upper two rungs, the approximations following from (2), and combining them with the propagators for the vertical gluon lines, we obtain Eq. (8). In order to obtain a logarithmic integral in k, we identify two regions of k and q: either |q| ≫ |k| or |q| ≫ |q − k|. For each of the two cases, after averaging over the azimuthal angle, we arrive at the integrals of Eq. (9), which give the desired logarithmic integral in k (or in (q − k)). For the two ladders below the recombination we derive, from the condition (7) and from the second integral in (9), that the upper cutoff is given by q². The lower cutoff is obtained by applying the discussion of section 2.1: the loop momentum l appears only inside the initial condition of the lower proton B, and it is restricted by the effective scale Q_0². The condition (6) implies that the momentum q also defines the lower momentum cutoff for the ladders above the recombination vertex. In order to find the full dependence on q², we need to consider the full diagram in Fig. 2 (see footnote 1). The recombination vertex above the produced pairs of jets is analysed in the same way as the lower one. This leads to an expression similar to (9), i.e. the complete dependence on q is of the form of Eq. (10). The integral in q is dominated by small values. Since the momentum q defines the upper momentum cutoff, both for the two ladders below the lower recombination vertex and for the two ladders above the upper recombination vertex, we conclude that the infrared divergence of the q-integration destroys the ladders above and below the recombination vertices. The recombination vertices, therefore, are absorbed into the nonperturbative initial conditions. As far as the perturbative part is concerned, we are back to the two noninteracting chains of section 2.1. The situation changes if logarithms in rapidity become more important than those in transverse momentum, i.e. within the BFKL approach we move towards small x. In Fig. 2, we replace the rungs by BFKL Green's functions. Re-drawing the diagram in a more suitable way, we arrive at Fig. 5. Let us first reformulate the result which we have just obtained in the double logarithmic approximation. We have shown that, in order to find inside the BFKL Green's function the maximal number of transverse logarithms, the momentum transfer across the Green's function has to be smaller than the transverse momenta of the two gluons entering the Green's function at the low-momentum side. In Fig. 5,
this says that l and l′ have to be small, while for the q-loop we found the integral ∫ dq²/q⁴, which favors small values, too. In order to see how the appearance of large rapidity intervals changes this situation, let us use the integral representation for the forward BFKL Green's function given in Eq. (11), where µ = iν + 1/2, the integration contours in µ and ω run parallel to the imaginary axis, χ(µ, n) is the BFKL eigenvalue function, and we have averaged over the azimuthal angles of k, k′. Furthermore, we have kept only the leading term of the conformal spin, n = 0. From this representation one easily deduces the dependence upon Y − Y′ and ln(k²/k′²): after the integration over ω, the saddle point analysis of the remaining µ-integral shows that, for large ln(k²/k′²), the dominant contributions come from µ = iν + 1/2 ≈ 0 and ω = O(1), whereas for large Y − Y′ one finds µ ≈ 1/2 and small ω = ω_BFKL = 4 N_c ln 2 α_s/π. This observation has important consequences.

[Footnote 1: Otherwise the momentum q will run through the blob corresponding to the proton initial conditions and, like the momentum l, it will be restricted by a low scale Q_0.]

Let us denote the rapidities of the two recombinations by Y′ and Y, resp.; beginning with the BFKL amplitudes near the produced jets, large rapidity intervals favor µ values close to 1/2. To see this in more detail, we insert the integral representations for all the BFKL Green's functions in Fig. 5 and arrive at Eq. (12). Here we have introduced another length scale, R_c²: it results from the additional integrals (as compared to Eq. (3)) d²l and d²l′, which are restricted by the proton radius and by the correlation between the two chains inside the proton. As long as all BFKL amplitudes are in the DGLAP regime, i.e. they are dominated by the logarithms in the transverse momenta, all µ variables are small, and we are in the situation described above: the q integral is dominated by small values, and the transverse momentum logarithms inside those four BFKL amplitudes which are close to the protons are destroyed. If, however, the rapidity intervals become large and the BFKL amplitudes are in the small-x region, the µ values are close to 1/2. As a result, in (12) the overall power of q² may increase and the dominance of the small-q² region disappears. One easily sees that large rapidity intervals near the protons, Y_tot − Y′ and Y, tend to make µ and µ′ large and thus help to increase the overall power of q². Let us see in more detail how this balance works. Defining in (12) the phase function (13), the saddle points are determined from the conditions (14) and (15), which lead to (16) and (17). Similar equations are obtained for the other µ variables. For a systematic analysis one first determines, for fixed Y, Y′ and q², the stationary points of the µ variables, and then finds the dominant values of the rapidities Y, Y′ and of the momentum scale ln q². As we have said before, if in (16) the evolution in rapidity dominates over that in momentum scale, the rhs becomes small; since χ′(µ) vanishes at µ = 1/2, we have Eq. (18). On the other hand, if in (17) the interval in momentum evolution is larger than in rapidity, the rhs is large when µ is close to zero, Eq. (19). The results can be illustrated in terms of evolution paths in Fig. 6. There is an infinite number of paths in the (ln k², y) plane which connect the protons with the produced jets with rapidities Y_1, Y_2 and momenta p_1², p_2². The saddle point analysis determines the most probable path. Two examples are shown in Fig. 6.
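The saddle-point statements above are easy to verify numerically. The sketch below is a minimal check, assuming the standard LO form χ(µ) = 2ψ(1) − ψ(µ) − ψ(1 − µ) for the n = 0 eigenvalue function appearing in Eq. (11): it confirms that χ′(µ) vanishes at µ = 1/2 where χ(1/2) = 4 ln 2 (hence the small ω_BFKL = 4 N_c ln 2 α_s/π), that χ grows large as µ → 0 (the DGLAP side, ω = O(1)), and that χ″(1/2) = 28ζ(3), the combination that reappears in the estimate below.

```python
# Numeric sketch of the LO BFKL eigenvalue function at conformal spin n = 0.
# chi(mu) = 2*psi(1) - psi(mu) - psi(1-mu) is the standard LO form; treating
# it as the chi(mu, n=0) of Eq. (11) is our assumption.
import numpy as np
from scipy.special import digamma, polygamma

def chi(mu):
    """LO BFKL eigenvalue function, conformal spin n = 0."""
    return 2.0 * digamma(1.0) - digamma(mu) - digamma(1.0 - mu)

def chi_prime(mu):
    """First derivative: chi'(mu) = -psi'(mu) + psi'(1 - mu)."""
    return -polygamma(1, mu) + polygamma(1, 1.0 - mu)

print(chi(0.5), 4.0 * np.log(2.0))        # both ~2.7726: chi(1/2) = 4 ln 2
print(chi_prime(0.5))                     # ~0: mu = 1/2 is the rapidity saddle
print(chi(0.01))                          # large: small mu <-> large omega (DGLAP side)
print(-2.0 * polygamma(2, 0.5),           # chi''(1/2) ...
      28.0 * 1.2020569)                   # ... equals 28*zeta(3) ~ 33.66
```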
Let us now consider a few special cases. For simplicity, we start with the symmetric choice Y_1 = Y_2 and p_1² = p_2² = p². In order to get a large q² we search for the situation where µ > µ_i and µ′ > µ′_i. We insert the saddle point values (18), (19) into (13), and first look for the extrema with respect to Y and Y′, that is, for the saddle point of the expression (20) which belongs to the lower part of Fig. 5. To get (20) we have used the value of µ_s in (18) and neglected a weak dependence of µ_s ∼ 1/2 on Y coming from (18). Thus the typical value of Y is given by (21), leading to (22). An analogous expression is obtained for the upper part of the amplitude in Fig. 5, A_up. Assume that the total available rapidity interval Y_tot and the sub-rapidities Y_1, Y_2 are so large that the saddle point position µ_s of (18) is close to 1/2, i.e. 28ζ(3) a Y_1 ≫ ln(q²/Q_0²). We put µ_s = 0.5 in (20), (22), and we see that in (12) the integral over q² takes the form (23). Here we have used the LO BFKL ratio a/χ(µ_s) = a/χ(0.5) = 1/(4 ln 2). The integral (23) has its saddle point at q² ∼ p² exp(−z²) (24), with z = 3/4 + √(9/16 + 0.5 ln 2) ≃ 1.7 and exp(−z²) ≃ 0.055. The corresponding value µ_{1,s} ≃ z/(4 ln 2) ∼ 0.6 is not small enough to justify the approximate estimate (19). We therefore conclude that the competition between the Y and q² dependence leads to a rapidity saddle point (21) somewhere inside the available interval (Y_1, 0), which in its turn leads to a rather large µ_{i,s}, violating the initial inequality µ_i < µ. In general we can say that the ordering µ_{i,s} < µ_s leads to the opposite ordering of the intercepts, χ(µ_s) < χ(µ_{i,s}). Therefore, in (12) the dominant contribution to the Y integral comes from the region of small Y, where the recombination vertex is close to the initial proton and far from the production vertex of the dijets. For a small Y value the anomalous dimension µ cannot be large, and the essential q² values are small as well. This means that we are back to the situation of non-interacting ladders, illustrated in Fig. 6a. Next we turn to the opposite case µ_i > µ, which corresponds to χ(µ_i) < χ(µ). Now the dominant Y value is large and close to the rapidity Y_1 of the high-E_T dijets. However, now the whole anomalous dimension in the q² behaviour, 2(µ + µ′) − µ_1 − µ′_1 − µ_2 − µ′_2 < 0, is negative, and the q² integral is dominated by low q² values. The most interesting possibility is to put the recombination vertices as close as possible to the high-E_T dijet production matrix elements. In this case there is no BFKL or DGLAP evolution in the intervals between the produced pairs of jets and the recombination vertices. That is, in the centre of Fig. 5, we simply delete the four 'BFKL blobs' nearest to the produced jet pairs. Correspondingly, in (12) we eliminate the third line, together with the integrations over µ_1, µ_2, µ′_1, µ′_2. The rapidities Y, Y′ are close to Y_i, and the q² integral takes the form (25), where the saddle point values µ_s and µ′_s follow from the condition (14), Eqs. (26) and (27); their values are taken from (18), Eq. (28). That is, the integral over q² receives its main contribution from q² close to min{p_1², p_2²}. This situation belongs to the evolution path shown in Fig. 6b. Let us finally consider a more realistic situation with Y_1 = Y_2 but p_2 < p_1. Recall that the true argument of the BFKL amplitude is not rapidity but the momentum fraction x; that is, actually we have to take Y = ln(1/x).
When p_1 ≫ p_2 at the same rapidities Y_1 = Y_2, we get in the right ladder the momentum fraction x_2 ≪ x_1. In other words, in this configuration we may put, in Fig. 5, the recombination vertex just into the cell nearest to the left dijet. But then there will be a large ln x (and maybe ln q²) interval for the evolution of the right ladders (between the dijets on the rhs and the two recombination vertices). In other words, in Fig. 5 we delete only the two 'BFKL blobs' on the lhs, below and above the dijet production. Assuming that, in (12), the total rapidity interval Y_tot is very large, we may first perform the rapidity integral, Eq. (29), where for the BFKL blobs on the lhs we have set χ(µ_1) = 0, and for µ we put its asymptotic value µ = 1/2. Now we close the contour of the µ_2 integration around the pole χ(µ_2) − 2χ(µ) = 0: this leads to µ_2 ≃ 0.18. The same result is obtained for µ′_2. Finally, the q² integral takes the form (30), and the major contribution comes from the domain close to the upper limit q² ∼ p_2². A closer look reveals still another detail. In the region of interest, for example in a 14 TeV pp collision at the LHC, we observe in the central region a dijet with p_1 ∼ 20 GeV, corresponding to x ∼ 2p_1/√s ∼ 0.003. For such x values, the anomalous dimension observed at HERA is not so large. For x < 0.01 the behaviour of the structure function F_2(x, q²) can be parametrized as in Eq. (31), with c ≃ const(q²) and λ = 0.048 ± 0.004 [13]. This means that the effective anomalous dimension µ_eff = λ ln(1/x) ∼ 0.28 for x = 3 · 10⁻³. This value is still large enough to provide the convergence of the q² integral (25) in the large-q² domain for the case considered above, where both recombination vertices are just near the dijet production cell. However, it is not evident that the parametrization (31) reflects the behaviour of a single ladder. At moderately large q² the experimentally measured F_2 already includes some absorptive effects which reduce the growth of F_2 with decreasing x, and thus lead to a lower value of λ in comparison with a single-ladder contribution. In other words, the true value of µ_eff which corresponds to a single ladder may be even larger, pushing the characteristic values of q² closer to the (lower) hard scale p_2².
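Before generalizing, the numbers quoted in the two special cases can be reproduced in a few lines. The sketch below checks the saddle point of Eq. (24), reading the garbled extracted expression as z = 3/4 + √(9/16 + 0.5 ln 2) (our reading, which reproduces both quoted values), the resulting µ_{1,s} ≃ z/(4 ln 2), and the LHC kinematics x ∼ 2p_1/√s with µ_eff = λ ln(1/x).

```python
# Numeric check of the quoted saddle-point and kinematic estimates.
import math

# Eq. (24): z = 3/4 + sqrt(9/16 + 0.5*ln 2) (our reading of the extracted text)
z = 0.75 + math.sqrt(9.0 / 16.0 + 0.5 * math.log(2.0))
print(z, math.exp(-z * z))            # ~1.70 and ~0.055, as quoted
print(z / (4.0 * math.log(2.0)))      # mu_{1,s} ~ 0.6

# LHC example: p1 ~ 20 GeV dijet at sqrt(s) = 14 TeV
x = 2.0 * 20.0 / 14000.0
print(x)                              # ~0.003

# Effective anomalous dimension from the HERA F2 fit, F2 ~ c * x^(-lambda)
lam = 0.048
print(lam * math.log(1.0 / x))        # mu_eff ~ 0.28
```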
Generalizations

So far we have discussed the effect of two recombinations inside a two-chain contribution: one recombination on each side of the produced jet pairs. Let us first comment on the case where we have no second recombination vertex above the jet pairs: as far as only one recombination vertex is concerned, the integration over q is logarithmic. However, q also runs through both upper ladders and defines the low momentum scale Q_0² where the evolution starts: a large value of q therefore kills the evolution in the upper ladders, whereas a low value prevents the evolution in the lower ladders. Therefore, a single recombination vertex is suppressed. Next, a comment on the color suppression factor (4). This suppression applies to the case when, as illustrated in Fig. 2, there is evolution above and below the recombination vertex. As we have discussed before, in a preferred situation we have little or no evolution between the recombination vertices and the dijet production vertices. In this case there is no need to reconnect, between the two recombination vertices, the four t-channel gluon lines into color singlet pairs. As a result, the color suppression becomes much weaker.

Next we consider the case of more than two chains, say three chains with three produced pairs of jets. In this case a pair of two recombination vertices can be attributed to any pair of chains, i.e. we have three possibilities. Similarly, for n chains we have n(n − 1)/2 possibilities: these counting factors can easily overcome the color suppression factor in (4). As an example, for n = 4, the overall counting factor is already 3/4, and it exceeds unity for n ≥ 5. Finally, we mention another important possibility, related to final states with rapidity gaps. Besides the recombination illustrated in Fig. 2 there exists another configuration to which our discussion applies; we show this in Fig. 7. Applying our previous discussion, in particular the evolution paths illustrated in Fig. 6, we conclude that the momentum scale at the upper end of the lower rapidity gap, q², will be above Q_0² but not too close to the jet momenta p_1² = p_2²: this allows for 'semihard' diffraction and is in qualitative agreement with inclusive diffraction seen at HERA.

Conclusions and outlook

We have studied the possibility of interactions ('recombinations') between two evolution chains, which describe double parton scattering corrections in high energy hadron collisions. We show that, thanks to the large anomalous dimensions of the parton distributions in the low-x region, such an interaction may occur at small distances within the perturbative domain, provided we consider two recombination vertices (which describe the chain-chain interaction) placed relatively close to the 'hard' matrix elements. In our double leading log analysis of gluon ladder diagrams we find a competition between 'collinear' logarithms (ln p²) and 'energy' logarithms (ln(1/x)). Depending on the ratio between these logarithms, the major contribution to the integrals over the rapidities of the two recombination vertices, Y and Y′, comes either from the region near the protons (Fig. 6a) or from the region close to the jet production vertices (Fig. 6b); i.e. these integrals have no saddle point somewhere in the centre of the available interval. The first case (Fig. 6a) corresponds to two independent ladders which do not communicate with each other and are described by 'double DGLAP' evolution equations. More interesting is the second possibility (Fig. 6b), where the recombination vertices are close to the hard matrix elements and thus are entirely in the perturbative region. These configurations may lead to nontrivial correlations between the secondaries produced in 'double parton scattering' processes. We note that, in Fig. 6b, the rapidities and transverse momenta of the partons inside the recombination vertices need not be very close to the 'hard' matrix element: in a more or less realistic situation the convergence of the integrals in rapidity (Y and Y′) and transverse momentum (q) is rather slow, since they are driven by numerically small powers of 1/x and q². Therefore the particles coming from the recombination vertices may still be separated from those produced via the 'hard' subprocess by relatively large intervals in rapidity (∼ a few units) and in the logarithms of transverse momenta. The interactions between different ladders discussed in this paper also allow for semihard diffractive final states (Fig. 7). In the case of multiple parton interactions with a larger number of evolution chains, the suppression of chain-chain interactions caused by the colour factor 1/(N_c² − 1) may be compensated by the combinatorial factor.
For n chains we have n(n − 1)/2 possibilities. According to naive 'eikonal model' estimates, the mean number of chains in a proton-proton collision at the LHC is ⟨n⟩ ∼ 5. Therefore the expected probability of multi-chain recombinations is not small and may lead both to a noticeable correlation between the secondaries in inclusive processes and to 'semihard' diffractive final states. All results of this paper are based upon the double logarithmic approximation (with fixed α_s). We consider this as a first step towards a more accurate analysis. Within the small-x approach it is possible to go beyond the double logarithmic approximation and to reach single logarithmic accuracy (leading ln 1/x). Also, a more detailed numerical analysis will be needed in order to obtain a more reliable estimate of the importance of the recombination corrections addressed in this paper. Both tasks will be topics of future work. We finally mention an important consequence of our result. In contrast to noninteracting multiparton chains, which often are modelled within the eikonal approximation, corrections due to the recombination of ladder diagrams no longer fit into the eikonal picture. This raises the question of the AGK cutting rules, which provide a crucial theoretical constraint on multiparton corrections. An investigation of this problem is quite important.
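The counting argument above is simple enough to tabulate. Assuming, as stated, a color suppression 1/(N_c² − 1) per chain-chain recombination, the net factor for n chains is [n(n − 1)/2]/(N_c² − 1); the sketch below reproduces the quoted values (3/4 at n = 4, above unity for n ≥ 5).

```python
# Combinatorial enhancement vs. color suppression for n-chain recombination.
# Net factor = (number of chain pairs) * (color suppression), with Nc = 3.
NC = 3
color_suppression = 1.0 / (NC**2 - 1)          # = 1/8

for n in range(2, 7):
    pairs = n * (n - 1) // 2                   # ways to pick the two chains
    print(n, pairs * color_suppression)        # n=4 -> 0.75, n=5 -> 1.25
```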
Participatory Visual Methodologies: Social change, community and policy

Author: SILVIA GARCIA, Assistant Director for Research, IUPUI Office of Community Engagement.

Mitchell, De Lange and Moletsane (2017) discuss the use of participatory visual research (PVR) to give voice to those involved in research and, in particular, to create opportunities for social change. The authors intend to shift the conversation on PVR "towards outcomes and the ever-present question 'What difference does it make?'" (p.3). Drawing on the principles of Rose's (2001) critical visual methodology, which provides an analytical framework for understanding how images become meaningful to audiences, and on the sociology of literature (Escarpit, 1958), which treats literature as a socio-cultural fact, the book presents the use of PVR to reach critical audiences and provide entry points to policy dialogues and eventually to social change. Social change is characterized in different ways: "new conversations and dialogues, altered perspectives of participants to take action, policy debates, and actual policy development" (p.16). The authors foreground the importance of studying how audiences engage with the visual artifacts, and the importance of political listening, defined as the communicative interaction among political actors that enables democratic decisions about how to react to visual artifacts. Reflexivity is an important element of the authors' framework. Reflexivity is key to ensuring participation, engaging participants, audiences and researchers in questioning the purpose, strategies, and takeaways of visual presentations. Reflexivity can be used as a tool to acknowledge unbalanced power relations between researchers, audiences (policy makers) and participants, and may lead to co-construction of meaning. These ideas are used in the book to "theorize the ways in which participatory visual methodologies can be key to leveraging change through community and policy change" (p.193). Both the ways social change is portrayed, and the positioning that researchers, research participants, the community and policy makers take as audiences that reflect on the visual productions, are crucial to understanding how PVR can stimulate social transformations. By engaging communities in deciding what needs to change and how, visual methodologies are expected to increase community agency and the potential for social change. The authors maintain that to facilitate building strategies that evoke responses towards change, it is crucial to start the research process with an idea of the expected change in mind. Reflexivity is central to audiences' engagement.
The authors introduce "speaking back," a method that allows research participants to act as audiences of visual productions, reflect on them and engage in new productions that contest, contradict, or complement the content of previous visual work. The method allows for conversations and discussions among participants, new knowledge creation and participant-driven critique in the context of policy dialogue. The mechanics of exhibiting the participatory visual product is also key for engaging external audiences and research participants. First, involving participants as co-curators of the exhibition -deciding what to show, to whom, and how-opens the doors for adapting exhibitions to the social context where they are displayed, providing opportunities for learning. Second, this engagement provides a space where participants can interact with audiences (community and policy makers). Third, research participants can actively engage in studying the reactions of the audiences and the factors that affect their response, exploring future courses of action for change based on audiences' response to the participatory visual productions. The final three chapters (6 to 8) are dedicated to changes in the mechanics of policy making by 1) including the voices of marginalized populations in the policy dialogue, and 2) engaging policy-makers in policy conversations and reflections on what should be done to address the issues raised. Chapter 7 presents participant-led tools founded in the principles of transformative pedagogy for engaging policy-makers. One of the main takeaways of this chapter is that these practices do not necessarily change the power relations that produce the negative conditions in the first place. The book ends with strategies to track change and demonstrate impact. The authors agree that studying the 'afterlife' of a project -after enough time has passed for policy change to happen-is relevant to understanding social change. An interesting approach is the use of reflexive revisiting. This implies returning to the place where the initial research study was conducted to understand through conversations, interviews and observations the long-term effects of the project and develop explanations of what changed -or not-and why. The main premise of the book is that "participatory visual research holds potential to bring about change" (p.20). However, the main question "what difference does it make?" remains partially unanswered when the aspiration is policy change. Participatory visual research seems effective to change participants' perspectives and dialogues within their network of personal connections. However, its success in reconstructing policy discussions to include alternative voices and discourses and especially in translating dialogues into social action seems inconsistent. Questions should be raised about: Can community agency for social change be effectively created through PVR alone? How can PVR be used to elicit social action after policy-makers are confronted with the visual representations? More importantly, how can PVR contribute to build the relational context for dialogue and collaboration within the community and with policy makers that is important to energize social change? 
In general, the book uses a research perspective that helps to understand the interpretive processes, reactions, and meaningful interactions of the audiences (researchers, research participants, community and policy makers) with the visual artifacts during the production and exhibition of the visual pieces. Yet a discussion of how participatory visual productions create opportunities for interactions and mutual engagements of different groups in co-leading social change is missing. In this sense, the gap between research and practice that the book promises to address remains only partially resolved. One way to address this gap may be, as Shawn (2015) has proposed, to reframe the use of participatory visual research as a transformational process built not only to facilitate democratic participation, but also to grow the agency, relational capital and energy required to sustain community-driven change.
An Empirical Case on the Measurement of China's Regional Low-carbon Development Level

The ecological problems brought by the rapid development of the economy and society have threatened humanity's sustainable development. As the world's largest carbon emitter, China is still at a critical stage of industrialization, and it is urgent to solve the problem of how to realize low-carbon circular development of the economy. This paper constructed an evaluation index system of regional low-carbon development from the three dimensions of development, low carbon, and ecology. Based on the entropy weight-TOPSIS model, the index system was quantitatively analyzed with data from 30 provinces in China in 2018. According to the measurement results, the 30 provinces were clustered into low-carbon development zones, relatively low-carbon development zones, relatively high-carbon development zones, and high-carbon development zones according to their low-carbon ecological development level. The results show that China's overall low-carbon economic development level was not high in 2018, with the south being superior to the north, and the eastern coastal provinces of Guangdong, Jiangsu, Shandong, and Zhejiang being significantly ahead of other regions. Finally, suggestions are given for the different types of low-carbon development areas.

Introduction

In the early 21st century, the U.K. government set out the concept of a "low-carbon economy" in its first energy white paper [1]. The concept has been widely supported and responded to by the United States, Japan, the European Union, and other developed countries. Seen from the process of industrialization, these countries have now completed the historical task of industrialization and urbanization, gone through the stage of development supported by a large consumption of fossil energy, and moved to a new economic era dominated by information services and the financial industry. The Paris Climate Change Conference held in December 2015 and the G20 Hangzhou Summit held in September 2016 shared the same goals of addressing climate change and promoting green, circular, and low-carbon development. The rapid growth of China's economy comes from the rapid growth of extensive investment in various industries. The "three highs" of high investment, high consumption, and high pollution have gradually led to many problems, such as excessive consumption of resources, deterioration of the ecological environment, and low resource allocation efficiency [2]. As a typical developing country, China will remain in the critical middle-to-late stage of industrialization for the next 10 years or even longer; urbanization will accelerate, the economic scale will continue to expand, and the total demand for resources and energy will rise rapidly [3]. For China, low-carbon development is both China's responsibility in mitigating global climate deterioration and a strategic choice for China's economic development, transformation, and upgrading [4]. Against the background of low-carbon development becoming an essential demand of all countries globally, the academic community has begun to pay attention to the theory and practice of low-carbon development and to continuously deepen its research. In 1994, John Elkington proposed the "triple bottom line" of sustainable development: environment, economy and society. At the same time, he pointed out that environment, economy and society are the three significant aspects that must be coordinated for sustainable human development [5].
Since then, many scholars have used socioeconomic and environmental indicators to evaluate low-carbon or sustainable development levels. In 2013, Lynn Price established the China Low Carbon City Evaluation Index system, which focuses on measuring the carbon intensity of economic and energy-related activities [6]. In 2014, Floriana summarized the index systems of sustainable urban development in mainland China, Taiwan and Malaysia; the index system for mainland China includes 4 major indexes (society, environment, economy and resources) and 21 specific indexes [7]. There are many methods to evaluate economic development. At present, a combined approach is used: weights are first assigned to the indexes, followed by a comprehensive evaluation. The index weights are of great significance in the overall evaluation: the greater the weight, the more important the index and the greater its impact. Objective weighting methods and subjective weighting methods are both commonly used. Subjective weighting involves strong subjective arbitrariness; representative methods include the analytic hierarchy process, fuzzy clustering, and the Delphi method [8][9][10]. Objective weighting methods have a strong mathematical foundation and can effectively avoid the subjectivity of weighting by deriving weights from relationships in the initial data; representative methods include the coefficient of variation and the entropy method [11,12]. The entropy weight method has high precision and allows a clear interpretation of the results obtained. In addition, it is adaptable and can be used in any process that requires determining weights. Comprehensive evaluation methods such as the fuzzy comprehensive evaluation method, TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution), and the grey relational method are widely used [13][14][15][16]; they all have their advantages and disadvantages. At present, there are still some limitations in the research on low-carbon development evaluation in China. First, most places in China have begun to transform their economic development mode and develop a low-carbon economy, but the standards for the successful development of a low-carbon economy and the construction of an evaluation index system are not unified. Second, scholars mainly adopt single evaluation methods such as AHP, factor analysis, the coefficient of variation, and entropy, which cannot effectively reflect the importance of indicators while avoiding subjective arbitrariness in the weights. To accurately grasp regional low-carbon development levels and existing problems, and to improve regional low-carbon competitiveness, this paper constructs an evaluation system based on the concept of "development-low-carbon-ecology" and conducts an empirical study of China's regional low-carbon development based on the entropy weight-TOPSIS evaluation model and the raw data of 30 provinces and municipalities in China in 2018.

Evaluation indicators for regional low-carbon development

There are still some problems in existing evaluation index systems for low-carbon development at home and abroad. First, some studies involve too many indicators, which leads to indicators reflecting the advantages and disadvantages of urban low-carbon development offsetting one another, so the guiding effect for low-carbon development is not apparent. Second, index systems with high universality, comparability, and uniformity have been studied less.
Third, some indexes have poor data availability, which directly affects the system's application value. Therefore, based on domestic and foreign scholars' existing research, this paper constructs a regional low-carbon development measurement and evaluation index system: the target layer is the regional low-carbon development level, and the criteria layer is divided into three aspects: development, low carbon, and ecology. Development is measured from the economic development and social progress subsystems; low carbon is measured from the energy and carbon emission subsystems; and ecology is measured from the ecological environment subsystem. "Low-carbon-ecological development" is a composite concept, combining the connotations of "low-carbon development" and "ecological civilization". The ecological city takes the circular economy as its core and emphasizes the symbiosis between the city and the natural environment. Low-carbon cities mainly consider urban construction from the perspective of "carbon reduction" and emphasize urban carbon emission reduction. The concept of "low-carbon ecology" combines the advantages of the two single concepts above, highlighting the fusion of ecology and low carbon, of the social system and the natural system, and of complexity and symbiosis. This composite concept is also more operable in practice, which makes it more conducive to guiding the planning and construction of China's low-carbon development. The evaluation index system should have the following functions. First, monitoring: it should reflect the basic situation of low-carbon city construction on a time scale and form an annual dynamic monitoring database to show the changes in low-carbon city indicators. Second, evaluation: it should not only compare the low-carbon development level of a city with other cities or standards at a given point in time, but also vertically reflect the changes and efforts of the city's own low-carbon construction. Third, creation and planning: it should guide the city in the specific aspects of low-carbon construction work, with the measurement standards serving as the basis for steering the city's low-carbon construction. The specific evaluation index system is shown in Table 1.

Entropy weight method

The entropy weight method is an objective weighting method which eliminates the influence of subjective factors. It uses each evaluation object's index values to construct a judgment matrix; after normalization of the matrix, it calculates the index entropy according to the definition of entropy, and finally calculates the entropy weight of each principal component. Here, entropy is a measure of the degree of disorder of a system: the smaller the information entropy of an index, the greater the information provided by that index and the more significant its role in the comprehensive evaluation. The specific calculation steps of the entropy weight are as follows:

(1) Construct the index judgment matrix. With n objects to be measured and p principal component factors, a standardized matrix R = (f_ij)_{n×p} is constructed from the scores of each principal component.

(2) Calculate the entropy and entropy weight of each principal component. According to the definition, the entropy e_j and entropy weight w_j of the jth principal component factor are

e_j = −(1/ln n) Σ_{i=1}^{n} b_ij ln b_ij, with b_ij = f_ij / Σ_{i=1}^{n} f_ij,   (1)

w_j = (1 − e_j) / Σ_{j=1}^{p} (1 − e_j).   (2)

When b_ij = 0, ln b_ij is meaningless, so the original formula is modified into formula (4).
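A minimal numeric sketch of these two steps is given below. The normalization b_ij = f_ij/Σ_i f_ij and the 1/ln n constant are the standard entropy-weight conventions assumed here (formulas (1)-(4) are only partially legible in the extracted text), and the 4×3 data matrix is invented for illustration.

```python
# Entropy weight method: a minimal sketch with an invented 4x3 data matrix
# (rows = evaluation objects, columns = indicators). The normalization and
# the 1/ln(n) constant are the standard conventions assumed here.
import numpy as np

F = np.array([[0.2, 0.8, 0.5],
              [0.4, 0.6, 0.9],
              [0.9, 0.1, 0.3],
              [0.5, 0.5, 0.7]])
n, p = F.shape

B = F / F.sum(axis=0)                       # b_ij = f_ij / sum_i f_ij

# Entropy e_j; terms with b_ij = 0 are taken as 0 to avoid ln(0)
with np.errstate(divide="ignore", invalid="ignore"):
    terms = np.where(B > 0, B * np.log(B), 0.0)
e = -terms.sum(axis=0) / np.log(n)

w = (1.0 - e) / (1.0 - e).sum()             # entropy weight w_j
print(w, w.sum())                           # weights sum to 1
```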
TOPSIS method

C. L. Hwang and K. Yoon first proposed the TOPSIS method in 1981. It is a standard method in systems engineering for objective decision analysis over a finite set of alternatives. Its core idea is that the optimal alternative should have the smallest distance to the positive ideal solution and the largest distance from the negative ideal solution. The method can rank several measured objects with measurable attributes; the specific steps are given by formulas (5)-(8), where J_1 denotes that the jth indicator is a forward (benefit) indicator and J_2 that it is a backward (cost) indicator. The distances S_i^+ and S_i^- between each province to be evaluated and the positive and negative ideal points are calculated, and the relative closeness C_i between each province and the ideal target is obtained to represent each province's low-carbon sustainable development capacity. The higher the value of C_i, the greater the low-carbon sustainable development capacity of a region; conversely, the smaller the value of C_i, the smaller the low-carbon sustainable development capacity of a region. According to the weights and scores, relative advantage and relative disadvantage indexes can be obtained to determine the type of economic development in each region.

Clustering analysis

Cluster analysis is a mathematical method that identifies the proximity between objects according to certain criteria and groups similar objects into one class. Clustering aims to minimize the differences within classes and maximize the differences between classes. In this paper, k-means clustering is adopted to classify the low-carbon economic development levels of 30 provinces, autonomous regions, and municipalities. The k-means clustering algorithm is as follows:

(1) Initialize the cluster centers. There are three main methods: first, according to the specific problem, select k samples as the initial cluster centers based on experience; second, randomly divide all samples into k classes and take the sample mean of each class as the initial cluster centers; third, use the first k samples as the initial cluster centers.

(2) Iterative clustering. Each sample is placed in the class whose center it is closest to; the sample means are then recalculated and the cluster centers updated. This operation is repeated until all samples are placed in the appropriate class.

(3) Judge whether the clustering is reasonable. The sum-of-squared-errors criterion function is used to judge whether the clustering is reasonable; if not, the classification is modified. This judge-and-modify loop continues until the algorithm terminates.

Using the k-means clustering algorithm, the 30 provinces, autonomous regions, and municipalities can be divided into 4 categories according to low-carbon economic development level, namely low-carbon development zones, relatively low-carbon development zones, relatively high-carbon development zones, and high-carbon development zones.
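Continuing the sketch above, the entropy weights can be fed into TOPSIS scoring and the resulting scores clustered with k-means. The vector normalization and Euclidean distances below are standard TOPSIS conventions assumed here, since formulas (5)-(8) are not legible in the extracted text; the toy data and weights are invented, and the paper's own pipeline (30 provinces, k = 4, SPSS) is only mimicked in miniature.

```python
# TOPSIS scoring and k-means clustering: a minimal sketch continuing the
# entropy-weight example above. Normalization conventions are assumed,
# since formulas (5)-(8) are not legible in the extracted text.
import numpy as np
from sklearn.cluster import KMeans

def topsis(F, w, benefit):
    """Relative closeness C_i for data F, weights w, and a boolean
    'benefit' mask (True = forward/benefit indicator)."""
    V = F / np.sqrt((F**2).sum(axis=0))            # vector-normalize columns
    V = V * w                                      # apply entropy weights
    pos = np.where(benefit, V.max(axis=0), V.min(axis=0))  # positive ideal point
    neg = np.where(benefit, V.min(axis=0), V.max(axis=0))  # negative ideal point
    s_pos = np.sqrt(((V - pos)**2).sum(axis=1))    # S_i^+
    s_neg = np.sqrt(((V - neg)**2).sum(axis=1))    # S_i^-
    return s_neg / (s_pos + s_neg)                 # C_i in (0, 1)

F = np.array([[0.2, 0.8, 0.5],
              [0.4, 0.6, 0.9],
              [0.9, 0.1, 0.3],
              [0.5, 0.5, 0.7]])
w = np.array([0.5, 0.3, 0.2])                      # e.g. entropy weights
C = 100 * topsis(F, w, benefit=np.array([True, True, False]))
print(C)                                           # scores scaled by 100

# Cluster the scores into development-level groups (toy: 2 clusters here;
# the paper uses k = 4 over 30 provinces)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(C.reshape(-1, 1))
print(labels)
```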
Study area and data sources

Considering data availability, this paper's evaluation units are 30 provinces, autonomous regions, and municipalities in China. The Xizang region and the Hong Kong, Macao, and Taiwan regions are not included in the research scope due to the lack of data. The research data mainly come from the China Statistical Yearbook, the China Energy Statistical Yearbook, and the statistical yearbooks of the various provinces and cities, obtained directly or through simple calculation. Carbon emission estimates follow IPCC methods and domestic and foreign literature. This paper uses the relationship between energy consumption and carbon emission coefficients for the accounting, choosing coal, coke, crude oil, gasoline, kerosene, diesel oil, fuel oil, natural gas, and electricity as the fuels of study. Based on the IPCC carbon accounting methods, carbon dioxide emissions are estimated from the detailed consumption values of all fuels (in tons of standard coal) given in the energy statistical yearbooks. Carbon dioxide emissions from fossil fuel consumption were calculated according to the energy-sector benchmark method in the 2006 IPCC Guidelines for National Greenhouse Gas Inventories.

Calculation results

According to formulas (1)-(4), the entropy weights can be calculated, as shown in Table 1. According to the entropy weight results, the development dimension accounts for the largest weight (0.48848), followed by low carbon (0.33466), while the ecological weight is relatively low (0.17686). Among the individual indicators, industrial added value, the unemployment rate, energy consumption per unit of GDP, forest coverage rate, and expenditure on energy conservation and environmental protection carry relatively high weights. According to formulas (5)-(8), the TOPSIS scores were obtained. To make the evaluation more intuitive, the values were multiplied by 100, which does not affect the evaluation results, and the scores were sorted, as shown in Table 2. The highest score was 74.48 for Guangdong, which was also one of China's first low-carbon pilot provinces. The lowest score was 25.07 for Xinjiang. Using SPSS 22.0, k-means clustering was applied to the TOPSIS results, and the clustering results for the low-carbon economic development levels of the 30 provinces and municipalities are shown in Figure 1. According to their low-carbon ecological development level, the 30 provinces, autonomous regions, and municipalities were clustered into low-carbon development zones (purple), relatively low-carbon development zones (blue), relatively high-carbon development zones (yellow), and high-carbon development zones (red).

Conclusions

This paper constructs an evaluation index system of regional low-carbon development level from the three aspects of development, low carbon, and ecology, and makes an empirical analysis based on 30 Chinese provinces in 2018. The results show that China's overall low-carbon economic development level was not high in 2018. Overall, the low-carbon development level of the south is better than that of the north.
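As a concrete footnote to the carbon-accounting step described in the data-sources section, the estimate reduces to a weighted sum of fuel consumption over emission factors. In the sketch below, both the consumption figures and the emission factors are placeholders; real factors should be taken from the 2006 IPCC Guidelines.

```python
# Carbon accounting sketch: emissions = sum over fuels of consumption x factor.
# Consumption figures and emission factors below are placeholders; real
# factors should come from the 2006 IPCC Guidelines.
consumption_tce = {          # fuel consumption in tons of standard coal
    "coal": 1.2e8, "coke": 2.0e7, "crude_oil": 3.5e7,
    "gasoline": 1.0e7, "kerosene": 3.0e6, "diesel": 1.5e7,
    "fuel_oil": 4.0e6, "natural_gas": 2.5e7,
}
emission_factor = {          # t CO2 per t standard coal (placeholder values)
    "coal": 2.64, "coke": 2.85, "crude_oil": 2.08,
    "gasoline": 1.98, "kerosene": 2.02, "diesel": 2.17,
    "fuel_oil": 2.27, "natural_gas": 1.63,
}
total_co2 = sum(consumption_tce[f] * emission_factor[f] for f in consumption_tce)
print(f"estimated CO2 emissions: {total_co2:.3e} t")
```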
TRI microparticles prevent inflammatory arthritis in a collagen-induced arthritis model

Despite recent progress in the treatment of rheumatoid arthritis (RA), many patients still fail to achieve remission or low disease activity. An imbalance between auto-reactive effector T cells (Teff) and regulatory T cells (Treg) may contribute to joint inflammation and damage in RA. Therefore, restoring this balance is a promising approach for the treatment of inflammatory arthritis. Accordingly, our group has previously shown that the combination of TGF-β-releasing microparticles (MP), rapamycin-releasing MP, and IL-2-releasing MP (TRI MP) can effectively increase the ratio of Tregs to Teff in vivo and provide disease protection in several preclinical models. In this study TRI MP was evaluated in the collagen-induced arthritis (CIA) model. Although this formulation has been tested previously in models of destructive inflammation and transplantation, this is the first model of autoimmunity for which this therapy has been applied. In this context, TRI MP effectively reduced arthritis incidence, the severity of arthritis scores, and bone erosion. The proposed mechanism of action includes not only reducing CD4+ T cell proliferation, but also expanding a regulatory population in the periphery soon after TRI MP administration. These changes were reflected in the CD4+ T cell population that infiltrated the paws at the onset of arthritis and were associated with a reduction of immune infiltrate and inflammatory myeloid cells in the paws. TRI MP administration also reduced the titer of collagen antibodies; however, the contribution of this reduced titer to disease protection remains uncertain, since there was no correlation between collagen antibody titer and arthritis score.

Introduction

Rheumatoid arthritis (RA) is an autoimmune disease of chronic joint inflammation affecting 0.5-1% of the population in Western countries, and approximately 1.5 million people in the U.S. [1,2]. RA joint inflammation leads to irreversible damage to cartilage and bone. This can be debilitating for patients, and the corresponding decreased work capacity is the main driver of the estimated $46 billion societal burden of RA in the U.S. [3]. Tremendous progress has been made over the past few decades in optimizing treatment with conventional synthetic disease-modifying antirheumatic drugs (DMARDs), developing biologic DMARDs including TNF-α inhibitors, and the recent introduction of Janus kinase (JAK) inhibitors. However, none of these therapies has been able to achieve low disease activity in even 50% of methotrexate-naive patients, and with each line of further therapy there is a diminishing return of patients who adequately respond [4,5]. It has been suggested that a similar maximum efficacy has been observed across RA drug types because, regardless of the direct target, all of these drugs ultimately act by blocking TNF-α and/or IL-6 [6]. Thus, a substantial population of RA patients remains underserved by existing treatments, and there is a need to develop new treatments with a different mechanism of action. Although there is not a natural spontaneous animal model of RA, collagen-induced arthritis (CIA) is a widely used mouse model that has many similarities to RA. CIA resembles RA in some important histological and radiographic measures, including fibrin deposition, synovial hyperplasia, mononuclear infiltration, and bone erosion [7][8][9].
While collagen II (CII) is the initiating antigen in CIA, the defining antibodies of seropositive RA (rheumatoid factor, i.e. antibodies to self IgG-Fc, and anti-citrullinated protein antibodies, ACPAs) have been detected in CIA, with the latter shown to contribute to disease pathogenesis [10,11]. Both auto-antibodies and T cells contribute to CIA pathogenesis. Anti-CII antibody administration is sufficient to transfer CIA [12], assuming it is of appropriate dose, avidity, and isotype [10]. CD4+ T cells play an important role in the generation of anti-CII antibodies in CIA [13][14][15], and CII- or citrullinated-protein-specific CD4+ T cells can also exacerbate disease by trafficking to the joints and producing inflammatory cytokines [10,12,16]. Together, complement activation by autoantibodies and CD4+ T cell production of IFN-γ and/or IL-17 are thought to lead to recruitment and activation of innate immune cells, which in turn produce TNF-α and IL-1β, leading to tissue swelling and destruction [10,17,18]. The balance between regulatory T cells (Tregs) and auto-reactive effector T cells (Teff) influences arthritis disease progression in both RA and CIA. Tregs restrain Teff from causing damage to healthy tissue during the elimination of pathogens, and also play a critical role in peripheral tolerance by preventing auto-reactive T cells from causing autoimmunity. Tregs have a variety of possible mechanisms to suppress Teff directly, or indirectly through actions on antigen presenting cells (APCs). These mechanisms can either be contact dependent, such as expression of CTLA-4 or other co-inhibitory receptors, or contact independent, such as the production of immunosuppressive cytokines or of adenosine via CD39 and CD73 [19]. Canonical Tregs express the transcription factor FoxP3, and their importance in maintaining self-tolerance is illustrated by Foxp3 mutation, which results in fatal multi-organ autoimmune disease in both mice (scurfy mice) and humans (IPEX syndrome) [20]. However, non-canonical FoxP3− regulatory CD4+ T cells [21,22] and other regulatory populations [23][24][25] have also been identified in a variety of contexts. RA is associated with reduced suppressive ability of Tregs, due to a Treg-intrinsic defect as well as the inflammatory milieu [26][27][28]. In the CIA model, meanwhile, Treg depletion accelerates the onset of disease [29], and cell therapy with collagen-specific Tregs can reverse disease progression [30]. A treatment capable of re-establishing Treg-Teff balance in RA may be able to restore tolerance and protect against disease progression. Polyclonal Treg cell therapy is one approach to achieve this, and while initial trials in several autoimmune and transplant indications have demonstrated safety, efficacy has not yet been proven [31,32]. There are substantial challenges to polyclonal Treg cell therapy, including the cost and complexity of good manufacturing practice (GMP) isolation and cell expansion [33], as well as concerns about potency [33,34], non-specific immunosuppression [35], and Treg instability or plasticity [36]. Approaches to restore Treg-Teff balance that use auto-antigen and/or localized immunomodulatory agents could avoid many of these issues. Several such approaches, using citrullinated peptides [11,37], CII peptide-MHC II complexes [38,39], or liposomes encapsulating antigen and an NF-κB inhibitor [40], have demonstrated success in inflammatory arthritis models, but it remains unclear if these technologies will translate to RA.
In particular, the immune response to antigen is highly dependent on antigen dose and context, including the cytokine milieu and prior exposure [41][42][43]. Previously, we reported the use of polymeric microparticles (MP) which release TGF-β, rapamycin, and IL-2 (TRI MP) [44], so that endogenous antigen can be presented in a tolerance-promoting local immunological microenvironment. This combination was initially chosen due to the role of each of these factors in promoting Treg induction and expansion [44][45][46][47][48]. IL-2 is needed for T cell differentiation/proliferation, and low doses expand Tregs [49]. In addition to TGF-β and rapamycin promoting Treg expansion and naïve T cell differentiation into Tregs (which is in part achieved by effects on APCs), these factors can also directly suppress Teff cell proliferation [50][51][52]. Subcutaneous TRI MP administration at the site of inflammation has previously demonstrated an ability to expand Tregs and limit Teff levels, resulting in disease prevention or therapeutic treatment in several preclinical models [53][54][55]. However, TRI MP has not been previously evaluated in a model of autoimmunity. These studies have also shown that the combination of all three drugs is more effective than any drug alone or pair of two drugs, that TRI MP can confine drug activity to a local area resulting in antigen-specific immunosuppression, and that sustained release of drug from TRI MP is more potent than equivalent unencapsulated doses. Furthermore, injection of microparticles into inflamed joints could be a viable clinical approach, as intra-articular injection of corticosteroid-containing microparticles is already FDA-approved for osteoarthritis pain management [56]. Here we demonstrate the ability of TRI MP to prevent arthritic inflammation and bone erosion of the paws in a CIA model of arthritis. The proposed mechanism of this protective effect involves reduced T cell proliferation and the expansion of a regulatory cell population, which together ultimately resulted in less immune infiltration of the paws. Anti-CII IgG antibodies were also reduced by TRI MP administration, but were not found to contribute to the arthritis prevention provided by this treatment.

Microparticle fabrication

TRI MP were fabricated using an emulsion-solvent evaporation method as previously described [54]. A 5% w/v polymer solution was prepared by dissolving 200 mg of poly(lactic-co-glycolic acid) (PLGA) in 4 mL of dichloromethane (DCM).
TGF-β and IL-2 emulsions were homogenized and stirred on ice. After stirring, MP were collected by centrifugation (200 g, 5 min, 4˚C) and washed 4 times with DI water before lyophilizing for 48 hours.

Total drug loading of MP was assessed as previously described [53,57]. For TGF-β and IL-2, drug was extracted using DCM and PBS with 0.1% sodium dodecyl sulfate (SDS) as a surfactant in a two-phase extraction. 5 mg of MP was dissolved in 500 μL DCM, mixed with 250 μL of PBS + SDS using a vortex mixer, and centrifuged (5,000 g, 10 min, 4˚C) to separate the phases. The aqueous phase was collected (200 μL), and the extraction process was repeated 2 more times, with 250 μL of PBS + SDS collected for the third extraction. TGF-β and IL-2 concentrations were measured using enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's instructions (R&D Systems) and used to calculate drug loading (nanograms of drug per mg of microparticles). For rapamycin, drug was extracted by dissolving MP (5 mg) in acetonitrile (500 μL). Drug concentration, and subsequently drug loading, was calculated by measuring absorbance (278 nm) using a microplate reader (SpectraMax M5, Molecular Devices, Sunnyvale, CA) and comparing values to a standard curve of rapamycin in acetonitrile. MP release kinetics were assessed by suspending 10 mg of MP in 1 mL of release solution, incubating at 37˚C with end-over-end rotation, and collecting samples with solution replacement at the indicated time points. PBS with 1% w/v bovine serum albumin (BSA) was used as release solution for TGF-β and IL-2, and PBS with 0.02% v/v Tween-80 was used as release solution for rapamycin. TGF-β and IL-2 concentrations were assessed by ELISA, and rapamycin concentration was assessed by microplate reader (absorbance 278 nm). These concentrations were then used to calculate cumulative release (ng drug/mg MP).

Mice

Male DBA/1J mice were purchased from The Jackson Laboratory (Bar Harbor, ME) and used at 8-10 weeks of age. A single gender of mice (male) was used due to gender differences in arthritis severity in the CIA model [58,59]. All animal experiments were approved by the Institutional Animal Care and Use Committee at the University of Pittsburgh (Protocol Number: 18103788) and all methods were performed in accordance with the relevant guidelines and regulations. Animal pain and distress were assessed by checking for lethargy, weight loss (20% or more), and a scruffy coat. However, as no mice exhibited these symptoms, euthanasia was never performed prior to experimental endpoints. Mice sacrificed at experimental endpoints were euthanized using carbon dioxide followed by cervical dislocation.

Collagen-induced arthritis (CIA) initiation, treatment, and clinical scoring

CIA was initiated as previously described [11,58]. Mice were immunized subcutaneously (s.c.) at the base of the tail on Day 0 and again on Day 21 with 100 μL of a 1:1 emulsion prepared from 4 mg/mL bovine collagen II (bCII, Chondrex, Redmond, WA) dissolved in 0.1 M acetic acid, and complete Freund's adjuvant (CFA) consisting of incomplete Freund's adjuvant (BD, Franklin Lakes, NJ) and 4 mg/mL of M. tuberculosis H37 RA (BD). Mice were shaved and anesthetized with isoflurane for immunizations and MP treatment to facilitate injection. Mice were injected s.c. with 300 μL of PBS, Blank MP, or TRI MP on each flank above the hind limb on Day 0 and every 4 days through Day 12.
For groups receiving MP, each injection contained 15 mg of TGF-β MP and 5 mg of IL-2 MP (or corresponding Blank MP) suspended in PBS. Injections on Days 0 and 8 also contained 15 mg of rapamycin MP (or corresponding Blank MP). In a pilot prevention study, mice (n = 6 per group) were given daily injections (Day 0-13) on each flank above the hind limb with 100 μL of PBS, TRI Low Dose (2 ng TGF-β, 1 μg rapamycin, and 2 ng IL-2), or TRI High Dose (20 ng TGF-β, 10 μg rapamycin, and 20 ng IL-2) instead of MP. For CIA prevention studies, mice (n = 24 per group for MP, or n = 6 per group for the soluble factor pilot study) were anesthetized and paws were imaged at the indicated time points between Day 26 and Day 40 so that they could be scored by a blinded individual. A clinical scoring system similar to one previously described [11,16] was used. Each paw was scored from 0-4 based on the following scale: 0, no redness or swelling; 1, a single digit swollen; 2, two or more digits swollen, but no footpad/palm or ankle/wrist swelling; 3, two or more digits swollen, and some footpad/palm or ankle/wrist swelling; 4, all digits swollen, and severe footpad/palm and ankle/wrist swelling. The scores for each paw were summed, giving a maximum score of 16 per mouse.

Microcomputed tomography (micro-CT) imaging and analysis

On Day 52-60, mice (n = 12 per group) selected prior to study initiation for imaging were sacrificed and hind paws were fixed in 4% formaldehyde (Thermo Fisher Scientific, Waltham, MA). The endpoint for this experiment was chosen to provide a sufficient duration of paw inflammation for bone erosion to occur [9]. Micro-CT scanning was performed using an Inveon multimodal scanner (Siemens, Washington, D.C.) at 23 μm isotropic voxel size, with 360 projections, voltage of 80 kV, and current of 500 μA. The open source program ITK-SNAP [60] (www.itksnap.org) was used to reconstruct three-dimensional images and to calculate the bone volume within an arbitrary distance of the metatarsophalangeal (MTP) joints (40 voxels, or 920 μm, on either side of the joint), similar to a previously described method [9]. Joint bone volume for each hind paw was calculated by summing the 5 MTP volumes. Surface meshes from the three-dimensional images made in ITK-SNAP were exported and surface area was calculated using the Meshmixer program.

Measurement of CII antibody titer

Between Day 40-42, mice selected prior to study initiation (n = 12 per group) for serum collection were anesthetized with isoflurane and blood was collected via the retro-orbital vein. Serum was obtained by allowing blood to clot for a minimum of 30 minutes, followed by centrifugation (1,000 g, 10 min) and collection of the supernatant. ELISAs were performed as previously described [58]. Briefly, 96-well plates were coated overnight at 4˚C with 5 μg/mL bCII in Tris-HCl (0.05 M)-NaCl (0.2 M) buffer (pH 7.4). Plates were washed with 0.05% v/v Tween-20 in PBS between all steps prior to the use of stop solution. Plates were blocked with 2% w/v BSA for 1 hr, and serum or a monoclonal anti-CII antibody used as standard (clone 2B1.5, Invitrogen, Carlsbad, CA) were serially diluted in 5× steps from 500-fold to ~1.5 × 10^6-fold and added in duplicate for 2 hrs. Horseradish peroxidase (HRP)-conjugated goat anti-mouse-IgG (Invitrogen) at 1 μg/mL or HRP-conjugated goat anti-mouse-IgG2a at 0.25 μg/mL was added for 1 hr, followed by TMB substrate (substrate reagent pack, R&D Systems) for 20 min, and sulfuric acid stop solution (R&D Systems).
Absorbance was measured using a microplate reader (450 nm, subtracting background absorbance at 540 nm). Antibody titer was defined as the dilution corresponding to the half-maximal absorbance in the linear section of the dilution curve [29], which was calculated as the IC50 value using a non-linear four-parameter regression. Normalized titer was calculated by dividing the titer by that of the 2B1.5 antibody standard for a given plate.

Localization of inhibited T cell proliferation

To assess the effects of TRI MP on T cell proliferation and the localization of those effects, mice (n = 6 per group) were immunized with a non-arthritic antigen on one flank and immunized with collagen and TRI MP on the opposite flank. Specifically, mice were immunized s.c. on Day 0 on the left flank with 100 μL of a 1:1 emulsion prepared from 2 mg/mL Keyhole limpet hemocyanin (KLH, Sigma Aldrich) and CFA prepared as described above. Mice were also immunized on the right side on Day 0 by the base of the tail with bCII and given injections of PBS, Blank MP, or TRI MP every 4 days through Day 12 as described above. On Day 15, mice were sacrificed and the left and right iLN were removed and separately ground to single cell suspensions using 70 μm filters. Cells were stained with Fc block, fixable viability dye, and for CD4 and CD25, fixed/permeabilized (FoxP3/Transcription Factor Staining Buffer Set, eBioscience), and then stained for Ki67 (SolA15; eBioscience) and Tbet (O4-06; BD). Counting beads (Thermo Fisher Scientific) were added, then samples were run on a flow cytometer (LSRII, BD) and analyzed using FlowJo (Tree Star) with gates based on isotype and single-color controls.

Statistical analysis

Statistical analyses were performed with GraphPad Prism v7 (San Diego, CA). Data are presented as mean ± SEM and the following cutoffs were used for significance: * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001. For arthritis incidence curves, a log-rank (Mantel-Cox) test was run comparing all curves. Since this was significant, log-rank (Mantel-Cox) tests were performed for each individual comparison, and p values were multiplied by the number of comparisons made (3). For arthritis clinical score curves, a two-way mixed effects ANOVA (for time as a repeated measure and treatment group) was performed, followed by Tukey post-hoc analysis to compare the mean of every group with the mean of every other group at each time point. The ROUT outlier test with the most stringent threshold for outlier removal (Q = 0.1%) was used to remove outliers from the graph of normalized antibody titers. For all plots assessing a correlation with arthritis scores, the Spearman r correlation coefficient was calculated and a two-tailed p value was used to determine the significance of the correlation. All other graphs had 3 treatment groups and were analyzed by one-way ANOVA, followed by Tukey post-hoc analysis in order to compare the mean of every group with the mean of every other group.

TRI MP treatment prevents induction of arthritis

TRI MP morphology, size, and drug release kinetics (S1 Fig) were similar to those previously reported [54]. The dose of MP administered for CIA prevention was chosen based on MP release (S1 Fig) in order to approximate the effect observed in a pilot CIA prevention study using daily local injection of un-encapsulated TRI (S2 Fig). In the MP CIA prevention study, less than 50% of PBS treated mice remained arthritis free by Day 28, and all mice in this group had developed arthritis by Day 36 (Fig 1A).
Blank MP (vehicle control) treated mice had less than 50% of mice remaining arthritis free by Day 30, and 25% of mice remaining arthritis free at the study endpoint (Fig 1A). In comparison to these groups, TRI MP had a significantly improved survival curve (Mantel-Cox, p < 0.0001 and p < 0.05 respectively), with 62.5% of mice remaining arthritis free at the study endpoint (Fig 1A). When the clinical arthritis score was assessed, TRI MP significantly prevented the development of disease relative to both PBS (two-way ANOVA, Tukey post-hoc, p < 0.0001) and Blank MP (two-way ANOVA, Tukey post-hoc for treatment group, p < 0.01) treatment at all timepoints past Day 32 (Fig 1B). These differences were 2-3× in magnitude, with TRI MP treatment resulting in an average arthritis score of 2.5 at Day 40, while PBS and Blank MP treatment led to average arthritis scores of 7.5 and 5.8 respectively (Fig 1B). To demonstrate how TRI MP treatment influenced the number and severity of inflamed paws, results were also presented in terms of the number of affected paws per mouse. Relative to PBS treatment, TRI MP treatment significantly reduced the number of paws per mouse with arthritis (Fig 1C), as well as the number of paws per mouse with severe arthritis (Fig 1D), where severe arthritis (arthritis score ≥ 3 per paw) was defined by the involvement of footpad/ankle swelling. Taken together, these data show that TRI MP was able to significantly inhibit the incidence and severity of arthritis in a prevention model.

Correlation demonstrated between arthritis clinical score and bone erosion

To determine if the reduction in paw inflammation observed with TRI MP administration was associated with less bone erosion, micro-computed tomography (CT) scans were performed on fixed hind paws from mice sacrificed between Day 52 and Day 60. An experimental timeline illustrating the different cohorts of mice (used for different experimental endpoints) can be found in S3 Fig. Visible full-thickness bone erosions could be detected at the MTP joints in some paws with high clinical arthritis scores (Fig 2A). Quantification of the relationship between MTP joint bone volume and arthritis score for an individual paw demonstrated a negative, moderate strength (Spearman r = -0.573), and significant (p < 0.0001) correlation (Fig 2B). Notably, there is some variability in the joint bone volume among paws that had arthritis scores of zero at Day 40. While some of this may be natural variation present in healthy paws (i.e. bone volumes of ~4-5 mm³), some of the lower bone volume measurements in this group may reflect the delayed emergence of arthritis between Day 40 and Day 60 in the corresponding mice. Despite this variability, the moderate strength and significant correlation observed suggest that on average the arthritis score is still a good predictor of bone erosion. When the data are presented by treatment group, the TRI MP group has significantly (one-way ANOVA, Tukey post-hoc, p < 0.001) more joint bone volume than the PBS group (Fig 2C). Depending on the severity, an arthritic bone erosion should theoretically result in a loss of joint bone volume (V) and/or an increase in joint bone surface area (SA) due to the irregular nature of bone erosions. Together this would result in an increased surface area to volume ratio (SA/V); a toy calculation illustrating this is sketched below. As expected, there was a positive, moderate strength (Spearman r = 0.699), and significant (p < 0.0001) correlation between MTP joint bone surface area to volume ratio and arthritis score (Fig 2D).
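To make the SA/V reasoning concrete, here is a toy geometric calculation with purely illustrative numbers (not study data): a joint segment is idealized as a sphere, and an erosion is modelled as a hemispherical pit carved into its surface.

```python
import math

# Toy example (illustrative numbers only): approximate a healthy joint
# segment as a sphere and model an erosion as a hemispherical pit.
R, r = 1.0, 0.3                                    # joint and pit radii (mm)
V0 = 4.0 / 3.0 * math.pi * R**3                    # healthy bone volume
SA0 = 4.0 * math.pi * R**2                         # healthy surface area

V1 = V0 - 2.0 / 3.0 * math.pi * r**3               # pit removes a hemisphere of volume
SA1 = SA0 - math.pi * r**2 + 2.0 * math.pi * r**2  # surface disk removed, pit wall added

print(f"healthy SA/V = {SA0 / V0:.2f} 1/mm, eroded SA/V = {SA1 / V1:.2f} 1/mm")
# Erosion lowers V and raises SA, so SA/V increases (3.00 -> ~3.11 here).
```

Any pit-like defect removes volume while adding interior wall surface, so SA/V rises with erosion; this is why the ratio can be more sensitive to minor erosions than volume alone.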
Likewise, relative to PBS treatment, TRI MP treatment significantly (one-way ANOVA, Tukey post-hoc, p < 0.001) prevented the increased joint bone surface area to volume ratio associated with arthritis (Fig 2E). Together these findings demonstrate that a reduced arthritis score was associated with protection from bone erosion, and on average TRI MP treated mice exhibited less bone erosion.

Auto-antibodies are reduced in mice that are administered TRI MP

To begin to understand the mechanism by which TRI MP is acting, serum taken on Day 40 was used in indirect ELISAs with bCII as the antigen to measure levels of anti-CII IgG antibodies (Ab). Representative serial dilution curves (Fig 3A) show one TRI MP mouse (blue) with a particularly left-shifted curve, and thus reduced anti-CII IgG Ab titer. Ab titer was normalized to the titer of a monoclonal CII Ab included on each plate to account for plate-to-plate variability. A plot of normalized anti-CII IgG Ab titer vs. arthritis score showed a weak (Spearman r = 0.303) and non-significant (p = 0.0817) correlation (Fig 3B). However, TRI MP treatment significantly (one-way ANOVA, Tukey post-hoc, p < 0.05) lowered the average anti-CII IgG Ab titer by approximately 40% relative to PBS treatment (Fig 3C). These results demonstrate that TRI MP significantly reduced the level of an arthritis-causing auto-antibody but did not completely block auto-antibody generation, even in mice that had no signs of arthritis. The lack of correlation between anti-CII IgG Ab titer and arthritis clinical score suggests that the mechanism of TRI MP action is not a reduction of the concentration or affinity of total anti-CII IgG Ab. While the anti-CII IgG level has been associated with CIA disease severity in a few studies [29,39], there is also evidence that the percentage of anti-CII IgG that is of the Th1-associated IgG2a isotype [62], and not the overall IgG level, predicts susceptibility to CIA, since IgG2a is associated with complement system activity [63,64]. Therefore, anti-CII IgG2a Ab titers were also assessed. There was no correlation between anti-CII IgG2a Ab titers and arthritis score and no differences in anti-CII IgG2a Ab titers between treatment groups (S4 Fig).

TRI MP treatment increases a CD4+ T cell population with elevated regulatory markers in the draining lymph node and spleen

To investigate whether regulatory T cells could be playing a role in TRI MP prevention of CIA, mice were immunized with bCII, injected with PBS, Blank MP, or TRI MP every 4 days, and sacrificed on Day 15. While there was not a significant increase in the levels of FoxP3+CD25+ Tregs in the draining lymph node (inguinal, iLN) (Fig 4A and 4B) or spleen (Fig 4A and 4D) of TRI MP treated mice relative to controls, there was a significant increase (one-way ANOVA, Tukey post-hoc, p < 0.01) in FoxP3−CD25+ T cells relative to PBS treated mice in the iLN (Fig 4C) and spleen (Fig 4E). Likewise, no significant increase in FoxP3+ Tregs was observed at a later time point (Day 35) in the pilot study with daily injections of un-encapsulated TRI factors (S2 Fig). Several markers associated with regulatory T cell function were also assessed to evaluate how their expression on the FoxP3−CD25+ population compared to that of conventional CD4+ T cells (FoxP3−CD25−) and Tregs (FoxP3+CD25+), as well as whether TRI MP led to elevated expression of these markers, relative to control treatments, on either the FoxP3−CD25+ or Treg populations.
The analyzed markers included: latency-associated peptide (LAP), part of the latent TGF-β complex; CTLA-4, a checkpoint molecule that blocks CD80/86 co-stimulation; and CD73, an enzyme which degrades AMP to immunosuppressive adenosine. When mice from all treatment groups were pooled together in the analysis, the FoxP3−CD25+ population had significantly (one-way ANOVA, Tukey post-hoc for T cell population, p < 0.01 or p < 0.0001) higher expression of LAP, CTLA-4, and CD73 than the conventional CD4+ T cell (FoxP3−CD25−) population in both the iLN and spleen (Fig 4F-4L). However, while TRI MP treatment resulted in significantly elevated expression of CD73 for the iLN FoxP3−CD25+ population, TRI MP led to trends toward reduction (…).

[Fig 3 caption. TRI MP administration lowers level of anti-collagen II IgG antibodies. A) Representative serial dilution curves, with each curve corresponding to a single mouse: black, PBS treated mouse; red, Blank MP treated mouse; blue, TRI MP treated mouse; green, monoclonal anti-collagen II (CII) antibody (Ab) (clone 2B1.5) used as standard. B) Normalized anti-CII IgG Ab titer versus arthritis score. Ab titer was defined as the dilution corresponding to the half-maximal absorbance in the linear section of the dilution curve, or the IC50 value using a non-linear four-parameter regression. Normalized titer was calculated by dividing the titer by that of the 2B1.5 Ab standard for a given plate. Color coded based on treatment group: black, PBS; red, Blank MP; blue, TRI MP. Spearman correlation coefficient and p value for correlation are indicated. C) Average normalized anti-CII IgG Ab titer by treatment group. n = 12 mice per group, data presented as mean ± SEM, and the following cutoffs were used for significance: * p < 0.05.]

While TRI MP treatment did not increase the levels of conventional FoxP3+ Tregs or increase expression of suppressive markers on these cells, it did increase a population of activated CD4+ T cells (FoxP3−CD25+) that had elevated levels of suppressive markers.

The effects of the dose of TRI MP administered are not localized to the draining lymph node

In order to evaluate the role of TRI MP suppression of T cell proliferation in arthritis protection, as well as the localization of this immunosuppression, mice were immunized with KLH on one flank and immunized with bCII along with PBS, Blank MP, or TRI MP on the other flank. The iLN of the TRI MP treated flank had a trend towards reduced cell numbers, and significantly (one-way ANOVA, Tukey post-hoc, p < 0.01) reduced proliferation of CD4+ T cells (Fig 5A and 5B). There also was an increase in the FoxP3−CD25+ population (Fig 5C), consistent with Fig 4C.

[Fig 4 caption, fragment. F) … and CD73 (right) expression for the isotype control (shaded gray), the FoxP3−CD25− population (black), the FoxP3−CD25+ population (orange), and the FoxP3+CD25+ population (cyan) (from a PBS treated mouse). G-L) Quantification of the percentage of CD4+ T cell populations that are LAP+ (G, J), CTLA-4+ (H, K), or CD73+ (I, L) relative to isotype control. Presented by CD4+ T cell population (FoxP3−CD25−, FoxP3−CD25+, and FoxP3+CD25+) for the iLN (G-I) and spleen (J-L). n = 6 mice per treatment group and n = 18 mice per CD4+ T cell population group, data presented as mean ± SEM, and the following cutoffs were used for significance: ** p < 0.01, *** p < 0.001, **** p < 0.0001. https://doi.org/10.1371/journal.pone.0239396.g004]
However, the contralateral limb in TRI MP treated mice also had the response to immunization suppressed to a similar degree. The contralateral iLN of the TRI MP group had a trend towards reduced cell numbers and reduced proliferation, with significant differences (one-way ANOVA, Tukey post-hoc, p < 0.01) observed relative to the Blank MP group (Fig 5D and 5E). There was also a significant increase (one-way ANOVA, Tukey post-hoc, p < 0.01) in the FoxP3−CD25+ population in the contralateral iLN (Fig 5F). These results suggest that the actions of TRI MP were not localized to the draining LN, as similar levels of reduced cellular proliferation and an increased regulatory population were observed in both the draining and contralateral iLN.

Lower arthritis score associated with less immune infiltrate and inflammatory cytokine in the paws

To assess how TRI MP treatment altered the amount and characteristics of immune infiltrate in the inflamed paws themselves, immune cells were extracted from paws between Day 40-42. Since TRI MP was shown to reduce CD4+ T cell proliferation and expand a FoxP3−CD25+ population expressing suppressive markers in the draining LN (Figs 4 and 5), the accumulation of CD4+ T cells in the paws and the fraction of them that were FoxP3−CD25+ were assessed. A moderate (Spearman r = 0.634) and significant (p < 0.0001) positive correlation was observed between the number of CD4+ T cells in the paws and the arthritis score (Fig 6A). While mice with lower arthritis scores had fewer CD4+ T cells in the paws, a larger percentage of these CD4+ T cells were FoxP3−CD25+ (Fig 6B). Although TRI MP treated mice did not have a significantly different FoxP3−CD25+ cell population relative to PBS and Blank MP controls (Fig 6C), the average for TRI MP was slightly larger, driven by three TRI MP treated mice with arthritis scores of zero and greater than 20% of CD4+ T cells expressing the FoxP3−CD25+ phenotype (Fig 6C). Notably, the percentage of CD4+ T cells expressing FoxP3 was not significantly correlated with arthritis score or significantly increased with TRI MP treatment (S7 Fig). Given the paradigm of auto-antibodies and CD4+ T cells promoting myeloid cell recruitment and expansion in CIA, the size of the overall immune infiltrate in the paws and the levels of monocytes/macrophages and neutrophils were assessed. There was a significant (p < 0.0001) positive correlation between the amount of immune infiltrate in the paws, as defined by CD45 expression, and the arthritis score (Fig 6D). Not only did the number of immune cells present in the paws increase with higher arthritis scores, but the composition of the CD45+ immune population changed as well. The percentages of monocytes/macrophages (CD11b+Ly-6G−Ly-6C+) [65] and neutrophils (CD11b+Ly-6G+) among CD45+ cells were significantly (p = 0.0001 and p = 0.002 respectively) and positively correlated with arthritis score (Fig 6E and 6F). The percentage of monocytes/macrophages and neutrophils in the paws of bCII-immunized mice was also noticeably higher than that of un-immunized mice (S7 Fig). Due to the role of TNF-α in causing paw redness and swelling, myeloid cell expression of TNF-α was also measured. Both the number of TNF-α+ monocytes/macrophages and the number of TNF-α+ neutrophils were significantly (p = 0.001 and p = 0.019) and positively correlated with arthritis score (Fig 6G and 6H).
Taken together, these findings are consistent with a scenario in which mice with low arthritis scores have less CD4+ T cell infiltrate and/or a higher proportion of regulatory FoxP3−CD25+ cells, resulting in reduced infiltrate of myeloid cells and less production of an inflammatory cytokine responsible for paw redness and swelling.

Discussion

New therapeutic approaches to RA are necessary, as a large number of patients do not respond sufficiently to existing treatments. In particular, approaches aiming to maintain or restore Treg-Teff balance are of considerable interest because of the role this balance has in influencing disease progression, both in RA and in the murine model of CIA. We have previously explored microparticle formulations that expand Tregs and limit Teff levels, resulting in disease prevention or therapeutic treatment in several preclinical models. Here we evaluated the ability of TRI MP to prevent the development of arthritis in the CIA model, and explored the mechanism behind disease prevention.

Mice given s.c. TRI MP injections every four days between Day 0-12 following bCII immunization had significantly reduced incidence and severity of arthritis relative to both PBS and Blank MP treated control groups (Fig 1). The protection provided by TRI MP not only served to block tissue swelling, but also prevented bone erosion in the digits relative to PBS, though not relative to Blank MP (Fig 2). While this may in part reflect some protective effect of Blank MP, as discussed below, the lack of significant difference between Blank MP and TRI MP in bone erosion also likely reflects some limitations of the bone erosion assessment, given the robust differences observed between Blank MP and TRI MP in Fig 1. First, although all PBS treated mice developed arthritis, only a small fraction had paws with substantial bone erosions. While this likely reflects the fact that sufficient severity and duration of inflammation must occur to result in bone erosion, it means that differences between treatment groups will be accordingly harder to detect. Secondly, there was a relatively large degree of variability in the bone volume measurements of mice without arthritis relative to the magnitude of bone volume reduction in mice with bone erosions (Fig 2B). The assessment of bone surface area to volume ratio, as opposed to bone volume, resulted in a stronger correlation with arthritis score and a larger trend towards a difference between Blank MP and TRI MP (Fig 2D and 2E). This may have been due to an ability of the surface area to volume ratio to partially mitigate these limitations. For example, minor bone erosions would be expected to be disproportionately detected by surface area, and the surface area to volume ratio may help reduce natural variability in the size of healthy (arthritis score of zero) joints. Taken together, a high bar for detection and high variability may have contributed to the lack of significant difference observed for bone erosions between Blank MP and TRI MP based on the sample size studied. However, a substantial and significant difference was still demonstrated with TRI MP treatment compared to both PBS and Blank MP for the primary endpoints of the CIA model, arthritis incidence and arthritis score (Fig 1A and 1B). The first step in examining the mechanism by which TRI MP achieved these preventative effects was assessing levels of anti-CII auto-antibodies.
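For orientation, the titer values discussed below derive from the four-parameter logistic (4PL) fit described in the Methods. The study's fits were performed in GraphPad Prism, so the following Python sketch, with made-up absorbance values, is only an illustration of the same calculation, not the study's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic: absorbance as a function of dilution factor."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Illustrative dilution series (5x steps from 500-fold) and absorbances;
# real values would be A450 - A540 readings from the plate reader.
dilution = 500.0 * 5.0 ** np.arange(6)        # 500, 2500, ..., ~1.56e6
a450 = np.array([1.90, 1.60, 1.00, 0.45, 0.15, 0.08])

popt, _ = curve_fit(four_pl, dilution, a450, p0=[0.05, 2.0, 5e3, 1.0],
                    maxfev=10000)
titer = popt[2]          # IC50: the dilution giving half-maximal absorbance

# Normalize to the 2B1.5 monoclonal standard fitted on the same plate;
# the standard's titer here is an illustrative value.
titer_standard = 4.2e4
print(f"titer = {titer:.0f}, normalized titer = {titer / titer_standard:.2f}")
```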
While TRI MP significantly lowered titers of anti-CII IgG Ab relative to the PBS control, there were still similar titers in mice who were arthritis free and those who developed severe arthritis (Fig 3). Likewise, no correlation between titers of anti-CII IgG2a and clinical arthritis score was observed (S4 Fig). This suggests that TRI MP was either affecting other aspects of the Ab response, such as epitope spreading, and/or affecting immune cell recruitment/expansion in the paws. The impact of TRI MP administration on the CD4+ T cell population was assessed next, since TRI MP has been proven to influence CD4+ Treg and Teff levels in other disease models [53][54][55], and CD4+ T cells contribute to CIA disease severity independently of helping with Ab production [12]. While TRI MP treatment did not result in increased levels of canonical FoxP3+ Tregs in the draining LN or spleen at the Day 15 time point, it did result in increased levels of a FoxP3−CD25+ T cell population with elevated expression of several suppressive molecules utilized by Tregs, including LAP, CTLA-4, and CD73 (Fig 4). TRI MP also had an anti-proliferative effect, reducing immunization-induced expansion of LN cell numbers and the proliferation of CD4+ T cells in the LN at Day 15 (Fig 5). To understand how these early TRI MP induced changes in the periphery protected against the development of arthritis between Days 26-40, paw immune infiltrate was analyzed at the experimental endpoint. Mice with lower arthritis scores had fewer CD4+ T cells in the paws, but a larger percentage of these cells were FoxP3−CD25+ (Fig 6). Additional correlations showed that lower arthritis scores were associated with reduced overall immune infiltration, reduced myeloid cell representation among the infiltrate, and fewer TNF-α producing myeloid cells (Fig 6). These findings are consistent with a mechanism in which TRI MP decreases arthritis score by limiting immunization-induced expansion of CD4+ T cells directly and/or through an increased regulatory FoxP3−CD25+ population in the periphery, resulting in less CD4+ T cell recruitment and/or the migration of FoxP3−CD25+ T cells to the paws, and in turn less recruitment and activation of myeloid cells to produce arthritis-causing inflammatory cytokines. Although significant correlations in agreement with this mechanism were observed for Fig 6, when the data were presented by treatment group, significant differences were not observed between TRI MP and the other treatment groups (Fig 6 and S7 Fig). This may be because of a lower sample size in this experiment than in the prevention study, less separation between the TRI MP and control group arthritis scores in the cohort used for this experiment, and/or because the phases of paw inflammation are dynamic, with the timing of disease onset varying among mice. While significant correlations with arthritis score cannot definitively prove the order or causality of the proposed mechanism of action for TRI MP, literature on the immunological processes of CIA development supports this sequence of events [10]. While it is possible that TRI MP administration does not directly cause all of the arthritis score associated changes in paw immune infiltrate observed, if TNF-α is directly responsible for the redness and swelling measured in arthritis scores [10,18], then at the very least TRI MP must reduce TNF-α production in order to lower arthritis scores.
Previous studies investigating TRI MP have demonstrated that the combination of all three types of MP in TRI MP was more effective than any single MP factor or any dual combination of factors [53][54][55], so only Blank MP and PBS-alone controls were evaluated here. The Blank MP control exhibits a trend in the same direction as TRI MP in several figures, including figures where Blank MP is significantly different from PBS (Fig 1A and 1C) or Blank MP is not significantly different from TRI MP (Figs 2C, 3C, and 4C). These findings may be due to the immunomodulatory properties of PLGA microparticles themselves. Notably, lactic acid from PLGA MP degradation was previously shown to inhibit dendritic cell maturation, possibly by interfering with NF-κB activation [66]. Furthermore, i.v. injected PLGA NP prevented autoimmunity by causing monocytes/neutrophils that phagocytosed them to traffic to the liver and spleen instead of the site of inflammation [67]. While the average diameter of TRI MP was approximately 15-20 μm (S1 Fig), a size likely too large to be phagocytosed by APCs, there is a relatively broad distribution of MP size, with some small enough to be phagocytosed (albeit these smaller microparticles represent only a small fraction of the overall encapsulated active ingredients). These effects could be more pronounced in this model, relative to past models in which TRI MP has been used, due to a higher dose and frequency of microparticle administration. Despite any protective effects observed with the Blank MP group, the drugs delivered by TRI MP still have a substantial and significant role in reducing arthritis incidence and severity (Fig 1A and 1B).

While TRI MP was hypothesized to increase levels of FoxP3+ Tregs in the CIA model based on experience with TRI MP in most other disease models, a previous TRI MP study also observed increases in a population of FoxP3−CD25+ cells similar to the one observed here, and the regulatory function of this population was demonstrated through a T cell suppression assay. Specifically, in an OVA protein-specific contact hypersensitivity model, TRI MP administration led to a significant increase in the percentage of OVA-specific CD4+ T cells that were FoxP3−CD25+Tbet−, but not a significant increase in the percentage of CD4+ T cells that were FoxP3+ [53]. Because that was an adoptive transfer model involving the use of a congenic (CD45.2) OT-II T cell clone, the percentage of transferred CD4+ T cells expressing FoxP3 was negligible, and over 90% of the (CD45.2+CD4+) CD25+ population in the draining LN of TRI MP treated mice was made up of FoxP3−CD25+Tbet− cells as opposed to FoxP3+CD25+ cells [53]. Thus, when CD45.2+CD4+CD25+ T cells were sorted and shown to inhibit conventional (CD4+CD25−) T cell proliferation in a suppression assay at ratios as low as 1 CD25+ T cell : 8 conventional T cells [53], it was clear that the FoxP3−CD25+Tbet− population had suppressive function. Here we observed increases in a similar population with likely regulatory function that was characterized as FoxP3−CD25+ and had elevated expression of LAP, CTLA-4, and CD73 (Fig 4). However, because of the much higher level of FoxP3+CD25+ Tregs in the CIA model, the FoxP3−CD25+ population accounts for only ~25% of the CD25+ population in the draining LN of TRI MP treated mice (Fig 4).
Therefore, a suppression assay using CD25+ regulatory cells in the CIA model would be unlikely to be informative, due to an inability to distinguish the suppressive contribution of the FoxP3− cells in the presence of a much larger population of suppressive FoxP3+ cells. Although it cannot definitively be claimed that the FoxP3−CD25+ population observed here is not an activated effector population, the increased levels of suppressive markers expressed by this population, its similarity to a verified suppressive population observed using TRI MP in a different disease model, and the observation that TRI MP provided strong CIA protection while inhibiting CD4+ T cell proliferation together provide strong evidence for the regulatory nature of the FoxP3−CD25+ population increased by TRI MP. The reason that an increase in FoxP3+ expression was not observed in this model is unclear, but may have to do with the MP dose used, the MP injection location, the use of CFA as the priming agent, and/or the single initiating antigen in this model as opposed to previous models eliciting more polyclonal responses.

Further developing TRI MP towards clinical use for arthritis will require dose optimization to minimize any non-specific immunosuppression. The use of subcutaneous MP delivery, depending on the drug delivered and its dose, may be able to keep delivery relatively localized to the injection site [68]. This is of particular interest when delivering immunomodulatory or immunosuppressive agents, so that the ability of the immune system to fight pathogens in other tissues is not impaired. A previous study evaluating hind limb allotransplantation found that TRI MP injected in the contralateral limb was not effective in prolonging graft survival relative to TRI MP injected in the transplanted limb, indicating that the immunomodulatory effects of TRI MP were restricted to the local area/antigens [55]. Here, we found that TRI MP administered on one limb reduced proliferation and expanded a regulatory population in the contralateral limb (Fig 5). The reason for this discrepancy may be the larger dose of TRI MP used in this study, and in particular, the dose of rapamycin. Unlike active TGF-β and IL-2, which have serum half-lives of only 2-4 minutes when i.v. injected [69,70], rapamycin has a serum half-life of 6 hours when i.v. injected [71], which may permit greater systemic distribution than the other TRI MP components. Although the TRI MP dosing used in this study was based on a pilot using daily injections of un-encapsulated drugs, and a lower TRI dose provided limited arthritis protection (S2 Fig), it is possible that further optimization of dose and delivery kinetics (using a rapamycin dose between the high and low tested doses, and/or lowering the rapamycin dose while increasing the doses of TGF-β and IL-2) would yield a formulation capable of preventing CIA development without causing systemic immunosuppression. When TRI MP was previously shown to be more effective than a comparable dose of un-encapsulated TRI factors in a different model, both were given at the same frequency (one administration, for a shorter timeline) [53]. While the pilot experiment using the higher dose of un-encapsulated TRI factors for TRI MP dose estimation led to an arthritis score of similar magnitude to TRI MP, the sample size of this group was substantially smaller (n = 6 vs.
n = 24), and the daily delivery of un-encapsulated TRI factors partially mimicked the sustained delivery role of microparticles, which were given less frequently in the later prevention study. A direct comparison of TRI MP to un-encapsulated TRI factors administered with the same frequency should be evaluated in the CIA model after future optimization of TRI factor dose. Pharmacokinetic studies will ultimately be necessary to support TRI MP translation for arthritis or other indications; however, radiolabeled agents may be required given the extremely small amount of cytokines released.

In summary, this study found that TRI MP was able to significantly reduce the incidence, severity, and associated bone erosion of arthritis induced by collagen II immunization. The mechanism of this protective effect involved reduced CD4+ T cell proliferation and an increased regulatory population in the periphery following TRI MP administration; these changes were also reflected in the paws during arthritis onset and were associated with reduced recruitment/expansion of TNF-α producing myeloid cells. The next steps in the development of TRI MP as a therapy for arthritis include identifying optimal dosing to prevent CIA without causing systemic immunosuppression and evaluating the ability of TRI MP to reverse established arthritis for clinical relevance.

[S3 Fig caption, fragment. A) … (Fig 1), at which point half of the mice were sacrificed and used to measure serum auto-antibodies (Fig 3) and to extract immune cells from the paws (Fig 6). The other half of the mice were left until Day 52-60 to allow sufficient time for inflammation to result in bone erosion, and then sacrificed and used for micro-computed tomography (CT) (Fig 2). B) Timeline for measurement of regulatory T cell levels and phenotype in lymphoid tissue. Mice (n = 6 per group) were treated as in A), but sacrificed at Day 15 to assess T cells at a time point close to MP administration, in order to assess regulatory T cell levels and phenotype in the draining inguinal lymph nodes (iLN) and spleen (Fig 4). C) Timeline for localization of inhibited T cell proliferation. Mice (n = 6 per group) were immunized with bCII by the base of the tail on the right side only, and on the left flank an emulsion of CFA and Keyhole limpet hemocyanin (KLH) was given. Mice were treated with PBS or MP as described above, but only on the right flank. T cell responses were assessed for both the draining iLN (right side) and contralateral iLN (left side) relative to MP localization (Fig 5). (TIF)]

[S4 Fig caption. Assessment of anti-collagen II IgG2a antibodies. A) Normalized anti-CII IgG2a Ab titer versus arthritis score. Ab titer was defined as the dilution corresponding to the half-maximal absorbance in the linear section of the dilution curve, or the IC50 value using a non-linear four-parameter regression. Normalized titer was calculated by dividing the titer by that of the 2B1.5 clone Ab standard for a given plate. Color coded based on treatment group: black, PBS; red, Blank MP; blue, TRI MP. Spearman correlation coefficient and p value for correlation are indicated. B) Average normalized anti-CII IgG2a Ab titer by treatment group. n = 12 mice per group, data presented as mean ± SEM.]

[S7 Fig caption, fragments. … [40][41][42]. In two of these (E and F), mice that were not immunized with bCII or treated in any other way are included as an additional control. n = 6-12 mice per group (n = 3 un-immunized), data presented as mean ± SEM. … These include the number of CD4+ T cells (A), the percentage of CD4+ T cells that are FoxP3+ (C), the number of CD45+ immune cells (D), the percentage of CD45+ cells that are monocytes/macrophages (CD11b+Ly-6G−Ly-6C+) (E), the percentage of CD45+ cells that are neutrophils (CD11b+Ly-6G+) (F), the number of TNF-α expressing monocytes/macrophages (G), and the number of TNF-α expressing neutrophils (H). B) Percentage of CD4+ T cells that are FoxP3+ versus arthritis score. Spearman correlation coefficient and p value for correlation are indicated. n = 12 mice per group.]
Application of machine learning in Chinese medicine differentiation of dampness-heat pattern in patients with type 2 diabetes mellitus

Background: China has become the country with the largest number of people with type 2 diabetes mellitus (T2DM), and Chinese medicine (CM) has unique advantages in preventing and treating T2DM, while accurate pattern differentiation is the guarantee for proper treatment.

Objective: The establishment of a CM pattern differentiation model of T2DM is helpful for the pattern diagnosis of the disease. At present, there are few studies on dampness-heat pattern differentiation models of T2DM. Therefore, we establish a machine learning model, hoping to provide an efficient tool for the CM pattern diagnosis of T2DM in the future.

Methods: A total of 1021 effective samples of T2DM patients from ten CM hospitals or clinics were collected by a questionnaire covering patients' demographics and dampness-heat-related symptoms and signs. All information and the diagnosis of the dampness-heat pattern of patients were completed by experienced CM physicians at each visit. We applied six machine learning algorithms (Artificial Neural Network [ANN], K-Nearest Neighbor [KNN], Naïve Bayes [NB], Support Vector Machine [SVM], Extreme Gradient Boosting [XGBoost] and Random Forest [RF]) and compared their performance. We then utilized the Shapley additive explanation (SHAP) method to explain the best-performing model.

Results: The XGBoost model had the highest AUC (0.951, 95% CI 0.925-0.978) among the six models, with the best sensitivity, accuracy, F1 score, and negative predictive value, and excellent specificity, precision, and positive predictive value. The SHAP method based on XGBoost showed that slimy yellow tongue fur was the most important sign in dampness-heat pattern diagnosis. The slippery pulse or rapid-slippery pulse and sticky stool with ungratifying defecation also played an important role in this diagnostic model. Furthermore, the red tongue acted as an important tongue sign for the dampness-heat pattern.

Conclusion: This study constructed a dampness-heat pattern differentiation model of T2DM based on machine learning. The XGBoost model is a tool with the potential to help CM practitioners make quick diagnosis decisions and contribute to the standardization and international application of CM patterns.

Introduction

Nowadays, diabetes is a global epidemic, and its prevalence has accelerated quickly. The International Diabetes Federation (IDF) estimated that more than 537 million people have diabetes, and this number is expected to reach 784 million by 2045 [1]. Especially since the epidemic of COVID-19, several studies have revealed that patients with COVID-19 complicated with diabetes mellitus (DM) have an increased risk of morbidity and mortality [2,3]. The prevalence of diabetes in China has risen substantially in recent years, with research data showing that it reached 12.8% in 2015-2017 [4], making it the country with the largest number of diabetics in the world [1]. Traditional Chinese medicine (TCM) has been used in China for thousands of years to treat and prevent disease and for health care. During this pandemic, the clinical use of TCM in fighting against COVID-19 in China indicated that the integration of TCM into planning for clinical management was worthy of consideration, as recommended by specialists to the WHO [5]. TCM as a treatment for DM has made great progress in recent years, and its effect has been acknowledged [6].
Pattern identification, as the basis for determining treatment, is the core of TCM theory [7]. TCM patterns, also known as ZHENG (证, zhèng) or syndromes, are distinguished by symptoms and signs examined in an individual by four main diagnostic techniques: inspection, auscultation and smell, palpation, and interrogation. A pattern is a comprehensive summary of the cause, location, nature, and development tendency of an illness at a certain stage during its course. It specifies the state of interaction between pathogenic factors and the corresponding reactions of the body [8]. The World Health Organization International Classification of Diseases (ICD-11) [9] has incorporated the TCM pattern as a supplementary chapter. Accordingly, TCM is bound to receive more attention in the future. TCM differs from the conventional diagnostic approach of Western medicine in that it establishes patterns using the four main diagnostic procedures. Figuratively speaking, pattern identification acts as a bridge: it analyzes the findings of the four diagnostic methods and then guides the choice of TCM therapy with acupuncture and herbal formulas in accordance with TCM diagnosis and treatment theory. A correct diagnosis is an essential prerequisite to appropriate treatment [10]. With the release of ICD-11, there is an urgent need to standardize pattern diagnoses. Nonetheless, each of these diagnostic methods requires considerable skill, and it can take beginners many years to understand the complicated relationships between symptoms and patterns, even when learning from distinguished CM veteran doctors [11]. Therefore, it is worthwhile for TCM doctors and scholars to develop an objective and reliable aid for pattern diagnosis.

Machine learning (ML) is a burgeoning field of medicine where computer science and statistics are applied to solve medical problems [12], spurred on by the modernization of TCM, which relies heavily on ML for diagnosing syndromes [11] and for related research on Chinese herbal medicine [13]. Although there have been some exploratory studies [14] and expert consensus [15] on the diagnosis of TCM patterns of type 2 diabetes mellitus (T2DM), there have been few studies addressing the problem of single pattern diagnosis, i.e., the dampness-heat pattern of T2DM. Traditional Chinese medicine is effective for T2DM, but it is difficult to distinguish the syndrome effectively in the clinic. Therefore, our team, based on the dampness-heat-related symptoms/signs obtained by the Delphi method, collected multicenter data. Six machine learning methods were used to explore a new diagnostic method for the dampness-heat pattern of T2DM. Finally, an efficient diagnosis model of the dampness-heat pattern of T2DM based on Extreme Gradient Boosting (XGBoost) was obtained, and the model was successfully interpreted by the Shapley additive explanation (SHAP) method.

Study design and population

The Institutional Ethics Committee (ICE) of the First Affiliated Hospital of Guangdong Pharmaceutical University approved all experimental protocols related to this study (ICE approval ID: 2019-ICE-109) and confirmed that informed consent was obtained. This prospective observational study was conducted at multiple centers. Participants with T2DM who visited one of the ten CM hospitals or clinics completed electronic questionnaires. In this research, we analyzed the same data from these ten sites as in our previous study and merged them without considering the original site.
Patients were included if diagnosed with T2DM according to the diagnostic criteria established by the 2020 Chinese Medical Association Diabetes Branch [16]. Exclusion criteria were: (1) an unwillingness to participate in the study; (2) age younger than 18 years; (3) diseases with severe respiratory symptoms, severe infectious diseases, severe heart diseases, severe liver diseases, or tumors; (4) presence of any complications of diabetes (such as diabetic kidney disease or diabetic coronary artery disease); or (5) pregnancy. A total of 1973 questionnaires were collected from Jun 18, 2021, to Aug 9, 2021. Using the Python package scikit-learn, patients were randomly divided into two groups, a training set (n = 715) and a validation set (n = 306). For preprocessing optimization and hyperparameter tuning, five-fold cross-validation was performed on the training set [Fig. 1].

Patient questionnaire

Patients' demographics and dampness-heat-related symptoms/signs were recorded by an electronic questionnaire designed for use in type 2 diabetes mellitus. All dampness-heat-related items were unanimously selected after a 2-round Delphi study by CM experts with 10-30 years of clinical experience in a previous study [17]. There were 14 dampness-heat-related symptoms/signs in the questionnaire, which mainly consisted of three domains: TCM symptoms, pulse conditions, and tongue pictures; one CM veteran doctor assessed or inquired about the symptoms/signs and recorded their values. Symptoms and signs included heavy body, obesity, heavy sensation of head, sticky and greasy feeling in the mouth, sticky stool with ungratifying defecation, bitter taste in the mouth, halitosis, dry mouth and thirst, deep-colored urine, constipation, slimy yellow tongue fur, thick tongue fur, red tongue, and slippery pulse or rapid-slippery pulse. The TCM symptoms and the tongue and pulse characters were collected by inspecting, listening to the sound and smelling the odors, inquiring, and pulse-taking. The TCM symptoms were described as "none" (0), "mild" (1), "moderate" (2), or "severe" (3), and for the tongue and pulse characters, response options were "present" (1) or "absent" (0). However, there is no standard case definition for the dampness-heat syndrome of T2DM, given the limitations of existing diagnostic tools. The diagnosis of the dampness-heat pattern of patients was completed by experienced CM physicians at each visit. The participants were informed that they could leave an interview at any time, and all recordings would be transcribed confidentially and analyzed anonymously. We were granted a waiver of written informed consent but obtained verbal informed consent from participants before their interview because of a shortage of staff and funds. STARD guidelines were followed in reporting our results. To improve the reliability and response rate of the questionnaire, we adjusted the questionnaire many times to make it as easy as possible for operators and respondents to understand in the Chinese context. At the same time, we gave explanations of TCM terms to facilitate the understanding of operators and respondents. We also did not use ordinary investigators; all surveys were conducted by invited doctors with TCM qualifications. Mainland China has a strict examination and training system for TCM qualification, so the credibility of the four-diagnoses information collected in this study was maximized.
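A minimal sketch of the data partitioning just described (a 70/30 split followed by five-fold cross-validation on the training portion) is given below. The random placeholder matrix stands in for the real questionnaire data, which are not public; variable names and hyperparameters here are illustrative assumptions, not the study's code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

# Placeholder data standing in for the questionnaire matrix: 1021 patients
# x 14 items (10 symptoms graded 0-3, then 4 tongue/pulse signs coded 0/1).
rng = np.random.default_rng(42)
X = np.column_stack([rng.integers(0, 4, size=(1021, 10)),
                     rng.integers(0, 2, size=(1021, 4))])
y = rng.integers(0, 2, size=1021)        # dampness-heat label (placeholder)

# 70/30 split, stratified so both sets keep a comparable case prevalence
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

# Five-fold cross-validation on the training set for model selection
clf = RandomForestClassifier(n_estimators=300, random_state=42)
cv_auc = cross_val_score(clf, X_train, y_train, cv=5, scoring="roc_auc")
print(f"5-fold CV AUC (training set): {cv_auc.mean():.3f} ± {cv_auc.std():.3f}")
```

Stratifying the split on the diagnosis label keeps the prevalence of the dampness-heat pattern comparable between the training and validation sets, which matters when the positive class is a minority.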
Model application

Owing to the serious difficulty of TCM pattern diagnosis, experienced and high-level TCM doctors are often needed for clinical data collection; it is therefore difficult to assemble large samples for TCM research due to the lack of such personnel. Based on this, we believe that traditional machine learning seems able to get better results from small samples compared to deep learning [18]. At the same time, the "black box" problem of deep learning is more challenging than that of traditional machine learning [18]. Besides, the labels and outcomes in this study are binary or multi-class data, and it seems more appropriate to adopt a classification algorithm in the supervised mode [19]. For the above reasons, six machine learning models, including artificial neural network (ANN), K-nearest neighbor (KNN), naive Bayes (NB), support vector machine (SVM), extreme gradient boosting (XGBoost) and random forest (RF), were used to develop models that distinguish the dampness-heat pattern as a binary outcome (presence and absence); a compact instantiation of these six classifiers is sketched below.

Interpretable solutions will be key to machine learning becoming routine in clinical and healthcare practice [20]. The use of interpretable models can effectively reduce bias [21]. RF and XGBoost have unique explanatory properties; both are ensemble algorithms built on decision trees, which can improve the accuracy of a single decision tree to a certain extent [22]. According to a previous study, for classification purposes, the RF and XGBoost classification models performed most optimally with clinical data [23]. In addition, compared with other algorithms, multicollinearity among features does not affect the predictive ability of the decision-tree-based RF and XGBoost models [24,25]. SVM is used for binary classification problems in numerous fields, especially in medicine [26]. The core principle of SVM classification is that vectors are mapped into a higher dimensional space in which there is a maximum-margin hyperplane: two parallel hyperplanes on either side of the separating hyperplane are chosen so as to maximize the distance between them [27]. The algorithm is data-driven and can perform fairly well when the sample size is small in comparison to the number of variables, which is why it is widely used in prognostic studies for tasks related to the automatic classification of diseases [28]. KNN, also called the Reference Sample Plot Method, is another classification technique. The basic principle is to assign the labels of classified data points to the closest unclassified data points [26]; it is a simple classification algorithm with good performance in medical diagnosis [29]. ANNs belong to a subtype of artificial intelligence and have been used in many subspecialties of clinical medicine [30]. An ANN consists of nodes connected by weighted edges in a multilayer architecture, including an input layer, an output layer, and one or more hidden layers [27]. It can help doctors to identify complex TCM patterns, process large amounts of data, and reduce diagnosis time and the possibility of ignoring relevant information [31]. Moreover, ANN-based models usually have optimal accuracy and AUC values [30]. NB is based on the Bayes theorem; this simple yet effective rule calculates the probability of an event based on information gained about that event [26].
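Continuing the placeholder setup from the earlier split sketch, the six classifiers might be instantiated and compared as follows. The hyperparameters are illustrative defaults rather than the tuned values used in the study (which were selected by five-fold cross-validation).

```python
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score
from xgboost import XGBClassifier

# Six candidate models; hyperparameters are illustrative starting points.
models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=7),
    "NB": GaussianNB(),
    "SVM": SVC(kernel="rbf", probability=True, random_state=42),
    "XGBoost": XGBClassifier(n_estimators=300, max_depth=4,
                             learning_rate=0.1, eval_metric="logloss",
                             random_state=42),
    "RF": RandomForestClassifier(n_estimators=300, random_state=42),
}

# Fit on the training set and score on the held-out validation set:
# AUC is the area under the ROC curve, AP the area under the P-R curve.
for name, model in models.items():
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_val)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_val, prob):.3f}, "
          f"AP = {average_precision_score(y_val, prob):.3f}")
```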
Similar to ANN, NB demonstrates robust performance in classification [32].

Statistical analysis

Python (https://www.python.org/; v3.10) was used for statistical description and analysis. Measurement data were expressed as means and standard deviations (medians or quartiles were used for non-normal distributions), while enumeration data were described as frequencies and percentages. Student's t-test, the Chi-square test, or the Mann-Whitney U test was used to assess differences between the two groups according to data type. P < 0.05 was considered statistically significant.

Model building and evaluation mainly included the following steps. First, we imported the electronic questionnaire data into Python, including 14 feature items and one outcome item. Secondly, we randomly divided the data into a training set and a test set at a percentage of 70%/30%. After that, six models were initially built using the scikit-learn package. Then, the optimal parameters of each model were found by five-fold cross-validation on the training data, and these optimal parameters were used to further tune the model. Subsequently, the validation set was used to evaluate model performance via the confusion matrix, receiver operating characteristic (ROC) curve, precision-recall (P-R) curve, area under the ROC curve (AUC), average precision (AP), sensitivity, specificity, accuracy, recall rate, and F1 score. Finally, the Shapley additive explanation (SHAP) method was used to explain individual predictions of the best-performing prediction model in our study by quantifying and ranking the importance of each variable to the diagnosis [33]. The Python package SHAP was used to estimate SHAP values for the trained models and to visualize the results.

Patient characteristics

During the study period, a total of 1973 cases were diagnosed with T2DM in the ten CM hospitals or clinics. Among them, the following 952 cases were excluded: 543 with diabetic kidney disease and 409 with diabetic coronary heart disease. Ultimately, 1021 cases were included in the analysis. A total of 38.1% of the patients had the dampness-heat syndrome, the average age was 57.5 years, 592 (58.0%) of the patients were male, and the median duration of diabetes was six years. Table 1 shows the demographics and items of dampness-heat-related symptoms/signs. No significant differences in age, gender, or any other demographic factor were found between the two groups, but the clinical symptoms of the two groups showed apparent and significant differences.

Model building and evaluation

To create the diagnostic model, the 14 symptoms and signs were used as features for the six machine-learning models. A comparison of the performance of the six machine-learning models is shown in Table 2. The XGBoost model had the optimal AUC (0.951, 95% CI 0.925-0.978), sensitivity, accuracy, average precision, F1 score, and negative predictive value, and excellent specificity and positive predictive value. The KNN and NB models had the lowest AUC values [Fig. 2a]. Based on the diagnosed results, we calculated the P-R curve of the six models and the area under the P-R curve (average precision, AP) to measure the models' AP for the dampness-heat syndrome in T2DM. The XGBoost model also had the highest AP, and the KNN and NB models the lowest [Fig. 2b].

Model performance interpretation

To interpret the best-performing machine-learning model, XGBoost, and identify the variables that were important for pattern differentiation, we used Shapley additive explanations (SHAP).
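A minimal sketch of this SHAP workflow, continuing the placeholder setup above, is shown below. The shap package's TreeExplainer is the standard fast path for tree ensembles such as XGBoost; the feature names listed are the study's 14 items, in an assumed order matching the placeholder matrix.

```python
import shap

feature_names = [
    "heavy body", "obesity", "heavy sensation of head",
    "sticky and greasy in mouth", "sticky stool with ungratifying defecation",
    "bitter taste in mouth", "halitosis", "dry mouth and thirst",
    "deep-colored urine", "constipation", "slimy yellow tongue fur",
    "thick tongue fur", "red tongue", "slippery/rapid-slippery pulse",
]

# Tree ensembles support the fast TreeExplainer; shap_values has one row
# per patient and one column per questionnaire item.
xgb = models["XGBoost"]
explainer = shap.TreeExplainer(xgb)
shap_values = explainer.shap_values(X_train)

# Global view: ranks items by mean |SHAP| (the importance matrix plot)
shap.summary_plot(shap_values, X_train, feature_names=feature_names)

# Local view: force plot for a single patient (cf. Fig. 3b and 3c)
shap.force_plot(explainer.expected_value, shap_values[0], X_train[0],
                feature_names=feature_names, matplotlib=True)
```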
SHAP values for all 1021 patients in the training set are shown in Fig. 3a. SHAP force plots show the profiles of patients with a high or low likelihood of being diagnosed with the dampness-heat pattern. One typical patient in the positive group (diagnosed with the dampness-heat pattern) and one in the negative group (not diagnosed with the dampness-heat pattern) are shown in Fig. 3b and c, with the detailed SHAP values of the most important variables.

Explanation of variables

We used SHAP to find the features that were important for pattern differentiation. The importance matrix graph [Fig. 4a] and SHAP summary graph [Fig. 4b] for the XGBoost model identified how important each variable is for the diagnosis of the dampness-heat pattern. SHAP values greater than zero represented a higher possibility of the dampness-heat pattern in T2DM. The importance matrix plot ranked the variables contributing to the dampness-heat diagnosis from most to least important and showed that slimy yellow tongue fur was the most important sign in dampness-heat pattern diagnosis. The slippery pulse or rapid-slippery pulse and sticky stool with ungratifying defecation also played an important role in this diagnostic model. Furthermore, the red tongue acted as an important tongue sign for the dampness-heat pattern. In addition, other symptoms and signs contributed to the diagnostic model [Fig. 4].

Discussion

This study constructed a CM pattern differentiation model for dampness-heat in patients with type 2 diabetes mellitus based on machine learning and clinical variables from the four main diagnostic procedures. High performance was achieved by all models, with AUCs ranging from 0.922 to 0.951. Compared with the other models, the XGBoost model performed best, with the best AUC (0.951, 95% CI 0.925-0.978), sensitivity, accuracy, average precision, F1 score, and negative predictive value, together with excellent specificity and positive predictive value. The XGBoost model is high-performing and overcomes the shortcomings (long learning times and overfitting problems) of the gradient boosting machine (GBM) that has been used for diagnosis and prediction in multiple clinical scenarios for T2DM [34]. Among all the syndrome, tongue and pulse characteristics from the four main diagnostic procedures, slimy yellow tongue fur was the most important sign in dampness-heat pattern diagnosis, as determined by machine learning in our study. Our study indicates that machine learning algorithms appear to be a feasible and viable enhancement for pattern differentiation in Chinese Medicine clinical practice. The dampness-heat pattern is the most common CM pattern in patients with T2DM and is also a hot spot for combined disease and CM pattern research [35]. To the best of our knowledge, this is the first published study to generate machine-learning algorithms to distinguish the dampness-heat pattern of T2DM. Because the diagnostic criteria for the dampness-heat pattern in T2DM were not standardized in the past, the reported prevalence of the dampness-heat pattern ranged from 13.2% to 58.29% [36,37]; in our study, the percentage of patients with a positive diagnosis was 38.10%, and the accuracy of the XGBoost algorithm on the test set was 91.2%, which implies that the reproducibility of the model is excellent. Our results were consistent with previous findings.
Previous studies have demonstrated the significant role of the XGBoost algorithm in other medical fields, such as electronic medical records and natural language processing for pattern diagnosis [38], development of risk score models [39], and prediction of mortality [40]. Our results confirm the outstanding performance of the XGBoost model in the diagnosis of the CM pattern. Recently, there has been an increase in the application and modelling of machine learning methods in medicine, which provides a viable avenue for constructing pattern differentiation diagnostic models [11]. However, the inability of machine learning users to understand the results of complex machine learning models, which present themselves as black boxes, becomes problematic [12]; such a black box is no easier to scrutinize than a doctor performing pattern differentiation. Nevertheless, pattern differentiation can be modelled as a dimensionality reduction process that deserves further exploration and research from the machine learning perspective [41]. SHAP methods are now commonly used in medical diagnostic or predictive models, especially machine learning models, to interpret variable importance, and they are constructive in understanding the importance of clinical characteristics for disease diagnosis [42,43]. In the present study, to facilitate interpretation of the decision-making process of the XGBoost model, we used the SHAP methodology to explain our diagnostic model [33]. Slimy yellow tongue fur, slippery pulse or rapid-slippery pulse, sticky stool with ungratifying defecation and red tongue are the symptoms most associated with the diagnosis of the dampness-heat pattern. Previous studies have identified slimy yellow tongue fur as one of the characteristic features of the diabetic tongue [44], and it is likewise representative of the heat pattern [45], to which the dampness-heat pattern belongs. Slimy yellow tongue fur, red tongue, and slippery pulse or rapid-slippery pulse have been used as typical signs in the diagnosis of dampness-heat patterns in diabetes in expert consensus [15], and have also demonstrated a strong correlation with the dampness-heat pattern in other diseases [46]. Sticky stool with ungratifying defecation, a typical symptom of the intestinal dampness-heat pattern, has previously been included in several expert consensuses on the diagnosis of CM patterns of digestive system diseases [47], serves as an objective symptom among the evaluation indicators of animal models of the dampness-heat pattern [48], and also played an important role in the present model. Consistent with this, we used SHAP's visualization approach to provide clinical insight, inform clinical pattern differentiation, and highlight the most important symptoms and signs of the diagnostic models. The ICD-11, the new release of the ICD, contains a supplementary chapter on Traditional Medicine Conditions [9]. This chapter describes various types of traditional medicine patterns, including the dampness-heat pattern in the liver-gallbladder, uterus, bladder, liver meridian, spleen system, and others. Although this revision of the ICD added a chapter on TCM, and the WHO made clear that the chapter neither refers to nor endorses any specific form of traditional medical treatment [49], concerns remained about how to provide objective, reliable, reproducible assessment and how to reduce inter-rater variability in diagnostic procedures [50].
The basic methodology of CM practitioners for pattern diagnosis is still primarily based on experience, tacit knowledge and possibly subjective perceptions acquired through rigorous training. This can lead to inconsistent diagnoses, because doctors rely heavily on subjective experience and personal knowledge [11]. In detail, the inconsistency comes from two sources: the identification of symptoms and signs, and pattern differentiation. For a CM practitioner, recognizing signs and symptoms is as basic a diagnostic art as the physical examination in Western medicine and does not require over-reliance on medical diagnostic techniques and tests. A reproducibility study found reasonable to very good agreement on a range of clinical data collected from the diagnostic methods used in a TCM examination, such as inspection, auscultation, and palpation [51]. This means there is a greater need to develop auxiliary tools to improve the accuracy and reproducibility of pattern differentiation, as we have done in this study by applying machine learning algorithms to assist in pattern differentiation. In practice, successful and appropriate treatment requires accurate pattern differentiation based on the signs and symptoms collected. Furthermore, pattern differentiation is critical not only for the clinical consistency and efficacy of different TCM experts but also for the development of TCM standardization. The number of CM pattern differentiation studies using machine learning methodology has recently increased. The majority of these studies have attempted to develop diagnostic models capable of reproducing a CM doctor's diagnosis [52,53]. At the same time, there are studies using machine-learning-driven data mining methods to study patient pattern differentiation [53,54]. Some studies have analyzed only Chinese medicine tongue images so that diabetes can be effectively differentiated [55]. Palpation diagnosis is also a non-invasive and effective method for CM practitioners to check the location and extent of a patient's disease, with the data collected as pulse waveforms [11,56]. These advances mean that CM researchers are facing new big-data challenges as the use of instruments and sensors increases [57]. All patterns are theoretical profiles of symptoms and signs, and each pattern is based on the diagnostic conclusions of the four diagnostic methods of TCM. As newly emerged approaches that recognize potential and useful information in large amounts of data, ML approaches are favored for their inherent advantages in handling big data [13]. This also means that multimodal data will be the future of machine learning applications in CM diagnosis. This study has the following advantages. First, this is the first study to implement machine learning methods to differentiate the dampness-heat pattern of T2DM, and all the applied models showed good differentiation performance. Second, among the models, the XGBoost model performed best. XGBoost is an efficient and scalable machine learning classifier with advantages such as ease of use, ease of parallelization, and high predictive accuracy.
Third, we applied the SHAP method to rank the importance of the included variables and found that slimy yellow tongue fur, slippery pulse or rapid-slippery pulse, sticky stool with ungratifying defecation and red tongue were the most important diagnostic factors for the dampness-heat pattern of T2DM, which compensates to some extent for the machine learning model being an otherwise uninterpretable black box. Fourth, our model can be used by CM beginners as a visual aid to support decision-making or the diagnosis of the dampness-heat pattern of T2DM before they become veteran CM doctors, which means it may accelerate the cultivation of CM talent. Last, the clinical data for pattern differentiation were collected in several provinces in China, which underlines the generalizability of our findings. This study has some limitations. First, the study was conducted with only a single label, the dampness-heat pattern of T2DM. However, a patient may suffer from several diseases at the same time, and one disease can reflect several syndromes. For these complex cases, there is no satisfactory framework for diagnosing multiple coexisting patterns, so we focused on one typical pattern in T2DM. Second, limited by funding and human resources, an independent external validation cohort was not used to verify the stability of our diagnostic model's performance. Notwithstanding, we believe our rigorous methodology generated a robust model for dampness-heat pattern diagnosis based on CM's four diagnostic methods. In the future, in addition to the models from the present study, we will also develop applications and conduct a prospective study to further validate our results.

Conclusion

In conclusion, our study demonstrates the utility of machine learning algorithms trained on datasets from the four CM diagnostic methods to estimate diagnostic accuracy and their potential application in pattern differentiation. The XGBoost model we established as a tool to diagnose the dampness-heat pattern in patients with T2DM may pave the way for CM practitioners to make quick diagnostic decisions. However, this model should be further evaluated in the future, specifically in clinical scenarios.

Data availability statement

Data will be made available on request.

Additional information

No additional information is available for this paper.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2023-02-15T16:03:01.504Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "d1a97c9eba60fab1ae4e2770821b9cb228d835e0", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "96976640e470aeed139b8aab339307db435c8b36", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
250320945
pes2o/s2orc
v3-fos-license
NOTIFICATIONS OF INCIDENTS RELATED TO PATIENT SAFETY IN A SENTINEL UNIVERSITY HOSPITAL

Objective: to analyze the notifications of incidents related to patient safety in a sentinel public university hospital. Method: retrospective, quantitative research conducted in a university hospital located in southern Brazil. It analyzed 760 notifications of incidents that occurred from 2015 to 2017 and were forwarded to the risk management sector of the institution. Data were collected from May to August 2018. Descriptive statistical analysis was performed using the Statistical Package for the Social Sciences version 20.0. Results: the incidents reported were pressure ulcers (64.0%), followed by falls (25.0%), medication errors (9.7%), incorrect patient identification (1.0%) and incidents in surgical procedures (0.3%). Most notifications were made in the morning period, by nursing professionals, and in the adult intensive care unit. The most reported adverse event was related to medication error (50.7%), followed by falls (26.8%). Conclusion: the results of this study contribute to increasing interest in the analysis of incident and adverse event data, and to defining or refining strategies to improve patient safety.

INTRODUCTION

Worldwide, patient safety has become a challenge for health care organizations, because it is considered a component of the quality of care provided to patients and because of its relevance in improving health care (1)(2)(3). In this respect, the theme has gained prominence through the proposal of measures to prevent risks and damage to patients' health. It is up to health professionals to identify these risks and complications during the client's stay, because they are key players in ensuring patient safety (1)(2)(3). Incidents related to health care have occurred with unacceptable frequency and affect clients who seek healthcare facilities for treatment, prevention, diagnosis or rehabilitation. It is necessary to understand the causes and factors that contribute to the emergence of incidents, with or without damage, and also to analyze their consequences and repercussions in order to develop solution and mitigation strategies that prevent their occurrence (3). Safety incidents are defined as events or circumstances that may or may not trigger unnecessary harm to the patient. Those arising from health care can have negative impacts on patients' quality of life and major implications for inpatient mortality and morbidity (2)(3). Incidents that cause harm are called adverse events (AE), which can worsen the patient's condition or lead to disability (3)(4). Adverse events, in particular, can lead to immeasurable harm to the patient and consequences for healthcare institutions (5). In Brazil, the advancement in this area came with the institution of the National Program for Patient Safety (NPPS), through the publication of Ordinance No. 529/2013 and Collegiate Board Resolution (CBR) No. 36/2013, in order to qualify health care and mandate the implementation of Patient Safety Centers (PSCs) in all health facilities in the country (4). One of the PSC's main competencies is to notify technical complaints and incidents linked to healthcare (4,6). In this sense, incident notification systems (INS) were created, which help identify risks, contribute to data collection and analysis, and promote a safety culture. In Brazil, the INS is NOTIVISA, which has received mandatory notifications from sentinel hospitals since 2014 (5).
Thus, hospitals have developed strategies for monitoring incidents and AEs through notifications and analysis of the indicators generated (5). These data allow a detailed evaluation of the actions taken and of whether new measures need to be implemented to reduce the risks to patients, considering the particular practice of each institution (2). It is understood that analyzing the notifications of incidents and AEs may favor the evaluation of causes and effects, which contributes to reducing the occurrence of undesirable events and to qualifying health care through the development and strengthening of an institutional safety culture (7,8). Despite the mandatory reporting of incidents and the implementation and creation of the PSCs, of which there are about 4,000 in Brazil, the number of notifications is low: among the institutions with an implemented PSC, only 1,664 have made at least one notification, making it evident that the implementation may have happened more to comply with the legislation than to incorporate a tool that has the potential to change health care and consolidate a culture of safety (8). Therefore, knowing the incidents related to patient safety can contribute to interventions in health care, in order to make it safer and of higher quality. Thus, the following question arose: What are the characteristics of the notifications of patient safety incidents in a sentinel hospital? The objective of this investigation was to analyze the notifications of incidents related to patient safety in a sentinel public university hospital.

METHOD

A retrospective and descriptive study, with a quantitative approach, carried out in a public university hospital in the interior of the state of Paraná. The institution has 291 beds and has been part of ANVISA's Brazilian Network of Sentinel Hospitals since 2001, and must therefore report technical complaints and incidents, contributing to risk management in health services, in partnership with the PSC and the Hospital Risk Management sector. The study population comprised all incident and adverse event notification forms managed by the Patient Safety Center of the institution under study. All printed forms of incident and adverse event notifications made in the period from 2015 to 2017 were included. Hand hygiene practice and communication notification forms were excluded, as they were in the process of being structured and implemented at the hospital. Data collection took place between May and August 2018, through an electronic instrument developed by the researchers in Google Forms, divided into two sections. The first section collected data regarding patient identification, such as: initials of the patient's name, care record, age (in years), gender (male/female), admission diagnosis, and present morbidities. In this section, information regarding the reason for the incident (pressure ulcer, fall, medication error, failures in patient identification and errors related to surgical procedures), presence of a companion (yes/no), date and period of the occurrence (morning, afternoon and evening), moment of the occurrence (admission, care, during care and not informed), unit of the incident and notifying professional was also filled in. In the second section, data related to the incident were collected.
Thus, this section was subdivided into five subsections, as follows: incidents related to pressure ulcers (PU), falls, medication, patient identification, and surgical procedures. In the presence of a PU, the data collected were: ulcer stage (1, 2, 3, 4, non-stageable), site, external risk factors (shear, friction and humidity), risk factors inherent to the patient, Braden scale assessment, time of PU detection, and prevention measures adopted (change of decubitus every two hours, active mobilization in bed, dressing for ulcer prevention and treatment, viscoelastic mattress, nutritional support, skin hydration, limb elevation, frequent sanitization and others). To list the risk factors and the evaluation of the prevention measures adopted, the notification form allowed the choice of more than one item. When reporting falls, the following variables were collected: type of incident (incident without damage or adverse event), place of the fall, main consequences (no consequences, abrasions and bruises, fracture, bleeding/hemorrhage, small cuts, pain, and/or others), Morse scale assessment, prevention measures, companion at the time of the fall (yes, no) and disclosure (yes, no). For the collection of the variables main consequences and evaluation of the preventive measures adopted, the notification form allowed the description of more than one item. In the presence of a medication-related incident, the following variables were collected: type of incident (incident without harm or adverse event), classification of the adverse event (mild temporary harm, moderate harm, and severe harm), drug category, route of administration (intravenous, oral, subcutaneous) and disclosure (yes, no). Information was also collected regarding the factors that contributed to the incident and the prevention measures, and these variables allowed the description of more than one item. When the notification dealt with an incident related to failures in patient identification, the factors that contributed to the incident were collected. Due to the specificity of the notifications referring to the operating room, the following variables were collected: type of surgery and the problem that occurred. It is noteworthy that the instrument developed by the authors was built based on the notification forms of the institution under study and the data requested in NOTIVISA. Thus, since this is retrospective research, the variables of this study followed those found in the notification forms. It is also noteworthy that these notification forms were developed based on the experience of the PSC professionals of the institution under study. The database was built and organized in Microsoft Office Excel version 2014. The descriptive statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 20.0, with the presentation of relative and absolute frequencies and standard deviations. The research was approved by the Research Ethics Committee of a public university, under Opinion no. 765.995 and CAAE: 34938614.9.0000.5231.

RESULTS

From 2015 to 2017, there were 760 incident notifications. The year 2017 had the highest number of notifications, totaling 609 (80.1%). Regarding the profile of the patients who suffered the incidents, males predominated, with 62.0%. The average age was 57 years (SD: 19 years), ranging from zero days to 97 years. The sectors that most reported incidents are shown in Table 1.
Regarding the period of notification and occurrence of the incidents, the morning predominated, with 45.1%. The professionals who made the most notifications were nurses, with 37.7%, and in 46.3% of the forms the professional was not identified, because this information was optional on the forms. There was a predominance of pressure ulcers (493; 64.0%), followed by falls (183; 25.0%) and medication errors (73; 9.7%); inadequate patient identification accounted for eight notifications (1.0%), and incidents in surgical procedures for only three (0.3%). Table 2 characterizes the PU notifications as to stage, location, and external risk factors. For the collection of external risk factors, one or more factors could be indicated. Besides the external risk factors, the form also contained an item for the description of risk factors inherent to the patient for the occurrence of PU. Among them, 87.0% of the forms were related to immobility and impaired or reduced mobility; 53.5% to nutritional conditions; 47.7% to age; 38.9% to tissue perfusion; 34.0% to systemic conditions; 33.9% to comorbidities; 23.5% to the use of specific medications that can contribute to the development of PU; and 15.6% to body temperature; in 3.4% of the forms this information was not described. Regarding the moment of PU detection by the professional, it occurred: in 57.2% of cases during care delivery, 27.4% upon admission of the patient to the unit, 0.8% in transfers, 0.8% during consultation, and 0.4% at discharge; 11.0% occurred at other times, and 2.4% did not report this datum. Regarding notifications of pressure ulcers (PU), the institution adopts the Braden scale as a form of risk classification and monitoring of PUs. Thus, 43.0% of the forms were related to patients classified as high risk according to this scale, followed by 28.7% classified as moderate and 8.0% as low risk. These data were not filled out in 20.3% of the forms analyzed. The main measures adopted to prevent PU were: change of decubitus every two hours and active mobilization in bed in 80.3% of the reports, dressing for prevention and treatment of the lesion in 82.9%, use of a viscoelastic mattress in 20.5%, improvement in nutritional support in 17.0%, skin hydration in 5.0%, elevation of the limbs with PU in 4.5%, and frequent washing of the patient in 2.4%. These measures could be adopted concomitantly by the team. The notifications regarding falls were also described in this study. Of these notifications, 54.6% of the forms were classified as incidents without harm and 26.8% as adverse events, and in 18.6% the consequences were not informed. Table 3 below characterizes the notifications of falls. At the study institution, the Morse scale is used to assess patients' risk of falling. The fall notifications comprised the following scores: 15.2% moderate risk, 8.8% low risk, and 8.1% high risk; in 67.9% of the forms this information was not filled in. Regarding disclosure involving falls, 27.3% were communicated to the patients' families/companions.
Among the measures taken by health professionals for the prevention of falls, the following stood out: in 72.7% of the forms, guidance to keep the safety rails raised; in 66.6%, the delivery of an informational leaflet and patient orientation; in 54.0%, identification by a wristband with a red clasp, for the purpose of signaling the risk of a fall, together with a fall-risk sign; in 53.5%, guidance to keep the bed in the low position with the wheels locked; in 53.0% of the forms, care to keep the most used belongings and objects within the patients' reach; and in 14.7%, keeping the call bell within the patient's reach. These measures were indicated, and the association of one or more preventive measures was possible. Another aspect evaluated was that 64.5% of the patients had no companions and/or family members at the time of the fall. Regarding incidents involving the use of medications, 50.7% of the notifications were classified as adverse events and 39.7% as incidents without harm, and in 9.6% of the forms this information was not filled out. Among the adverse events, 20.5% were classified as mild temporary harm, 11.0% as severe harm, and 6.8% as moderate harm; in 12.4% of the adverse events the severity was not filled in. Table 4 details the notifications of incidents with medications found in this study. As for preventive practices for incidents with medications, the following stood out: training, 23.3%; double-checking of high-alert medications, 19.2%; improving professional attention during medication preparation and administration, 12.3%; improving communication within the multidisciplinary team, 9.6%; updating prescriptions daily with attention, 8.2%; verification of all prescriptions by nurses on the different shifts, 5.5%; and better medication packaging, 1.4%; 20.5% of the forms did not present this information. Another point addressed in this study refers to the notifications regarding patient identification. In this respect, the main factor that contributed to the occurrence of the incident was the mix-up of patients' names (75.0%), followed by the lack of an identification wristband (12.5%); in 12.5% the information was not presented on the forms. Regarding surgical procedures, only three notifications were made during the data collection period of this study, and they were related to electric scalpel burns (the type of surgery was not reported), associated organ damage (bilateral mastectomy) and a surgical procedure in the wrong place (diabetic foot amputation).

DISCUSSION

The need for notification lies in the accurate evaluation of the factors that may have led to the incident, thus enabling its prevention or minimizing the consequences of adverse events (9). This study noted an increase in incident notifications in 2017. It is assumed that this is because, in that year, the institution intensified professionals' awareness with guidance and training, for example through the "Patient Safety Olympics" held at the institution. According to the literature, from the moment professionals are encouraged and trained, they begin to understand the importance of safety in the quality of care and feel encouraged to make notifications in order to improve the care offered (9).
The professional basis of nursing is to provide quality care to the patient. In this respect, it is noteworthy that this professional category was the one that made the most notifications. This result is associated with the culture of these professionals of performing procedures and inserting tools into their work dynamics for monitoring and following up on these incidents (10). Other points that justify the greater engagement of nurses in notification are the fact that they spend more time with the patient and manage the unit and nursing care, besides being a reference for the multidisciplinary team (10). It is noteworthy that several health professionals do not identify themselves when making the notifications. Although the literature indicates that adverse events occur due to failed systems and not to negligence or a lack of technical training on the part of the professional, feelings of fear and guilt about being involved in an incident still permeate health professionals, reinforcing the punitive culture present in health institutions and contributing to the omission of information (11). Thus, the entire healthcare team must be aware of its important role in the reporting of adverse events. It is therefore necessary to identify the flaws in this process, seeking preventive alternatives instead of punitive ones. Communication between care professionals and managers is necessary, sharing responsibilities and information, and seeking measures to prevent future errors and adverse events (11). Another aspect evaluated in this study is disclosure, which is considered an extremely important process, because it is at this moment that communication occurs between the professional, the patient, and their families. The healthcare professional must provide information about what happened and what conduct and measures were adopted to prevent future incidents. This communication process should involve the care professional, the manager, and the legal department, demonstrating the institution's concern and commitment to patient safety (12). Patient identification is another key aspect for ensuring safety in healthcare institutions. The use of the identification wristband and the identification of the bed are daily and necessary practices, and it is the responsibility of the multidisciplinary team to check them in order to prevent possible incidents (10). In a study conducted by a Department of Biomedical Engineering aiming to analyze the applicability of infusion devices, it was found that 30% of incidents in health care institutions were related to medication errors. The main medication administration errors were related to dosage, administration of non-prescribed drugs, and incorrect timing, and 80% of the recorded incidents were related to intravenous drugs (13)(14); incidents involving medication can therefore compromise the patient's clinical picture and increase their length of stay. It is observed that this information is often omitted, due to fear of punishment and lack of knowledge about what characterizes a medication-related incident (11). This demonstrates the need for professionals involved in patient safety to deepen their knowledge of the subject and implement teaching strategies that can be adopted in the institution where they work. As presented in the literature, the main adverse events in health care institutions are those related to medication (11,13), unlike the data from this study, in which falls and PUs had the highest rates.
In a survey conducted in a general hospital in the countryside of São Paulo, Brazil, the occurrence of PU was the second most reported adverse event in the institution, showing its high rate of occurrence (11). Another study, in the United States, pointed out that about 60,000 patients die each year due to the development of PU, making it a worldwide public health problem (15). The literature shows some risk factors for the development of PU, highlighting impaired or limited physical mobility, sensory loss, tissue perfusion, urinary and fecal incontinence, nutritional deficiency, age, polypharmacy, and changes in the level of consciousness (16). This is in line with the findings of this study, with emphasis on the patient's impaired mobility. Preventive practices such as changing the decubitus every two hours, active mobilization in bed, improving nutritional support, and the use of protective barriers are therefore essential, and it is important that the nursing team pays attention to risk factors and preventive measures, minimizing the occurrence of these events. The development of PU can generate negative consequences for the patient and their quality of life, including increased hospitalization time, emotional damage, worsening of the clinical status, and increased health care costs (16). The identification of the risks of PU is performed mainly by the nurse, through the use of tools such as the Braden scale (17). This action reveals the diligence of the health professional in managing care (11). Falls were another incident addressed in this study, and they are considered a common but impactful problem in healthcare institutions (18). Among the main factors found are those related to age, balance and strength problems, reduced mobility, polypharmacy, psychological problems such as mental confusion, and the patient's own clinical condition. Falls can have serious impacts, since most incidents generate negative consequences, including fractures and increased hospitalization time, and can worsen the patient's clinical condition (5). Thus, fall notifications are an important tool in care management, because they provide indicators that help in understanding how and why these incidents occur in health institutions, in order to assist in prevention measures (5,11). In the analysis of the notification forms, it was possible to observe failures to fill in some data, erasures, missing content, and omission of information. The literature notes that problems with the recording of information are present in health institutions (19). Therefore, it is essential that notification forms be simple, objective and easy to understand. Another important aspect is the need for continuing education of all health professionals, instructing them on the importance of filling in these data for a better analysis of the cause and effect of the incident (8)(9). The limitation of this study was that, because it was a retrospective study with data collected from notification forms, incomplete items made deeper analyses of the theme unfeasible.

CONCLUSION

This study highlights the importance of encouraging health professionals to make notifications, modifying and improving the safety culture for the entire multidisciplinary team, not focusing only on the nursing professional.
The results of this study can help stimulate health services to train professionals and to create objective and clear notification forms, so that all relevant information is covered, because notification is a low-cost, high-impact tool for promoting patient safety. Furthermore, the importance of preventive practices against the occurrence of adverse events is emphasized, so that health professionals can positively influence the quality of care provided to patients.
2022-07-07T15:14:55.331Z
2022-06-24T00:00:00.000
{ "year": 2022, "sha1": "09d93ea0ab7787b62327b13e1b03ce2352d4b8f7", "oa_license": "CCBYNC", "oa_url": "https://periodicos.uem.br/ojs/index.php/CiencCuidSaude/article/download/56674/751375154417", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d87ed69bc5fdef2f9beea17071c8c42a41f6b62f", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
214399237
pes2o/s2orc
v3-fos-license
Some statistical consideration of azimuth and inclination angles determination based on walk-away VSP data in Python

It is common knowledge that proper determination of inclination and azimuth angles is a critical step in the processing and interpretation of walk-away VSP data. Additionally, an in-depth analysis of the uncertainty of these interpreted values requires the introduction of measurement errors. In this contribution, we present a statistical analysis of polarization angles obtained from three-component, multi-depth-level, walk-away VSP data using the Python 3 programming language. Our analysis is presented in the context of different processing sequences and of correlation with local features of the geological medium. We show that the obtained polarization angles and their errors can be strongly affected by the processing sequence and, when determined correctly, can give additional insight into the features of the analyzed medium. Moreover, in some cases, even the presence of saturation can be expressed by variations in the polarization angles. Additionally, we examined the impact of well casing on the interpreted values of the polarization angles.

Introduction

Obtaining the full anisotropy tensor using P-waves only from a walk-away VSP (Vertical Seismic Profiling) survey is not a trivial task, especially when the acquisition was carried out in challenging conditions [1]. A methodology for P-wave-only inversion for local anisotropy estimation has been proposed by Grechka and Mateeva [2]. Statistical analysis, including EDA (Exploratory Data Analysis), of VSP data and of the obtained inclinations and azimuths is a crucial step and is no less important than proper processing of the data. The purpose of this step was to find the patterns in the data and prepare it for cluster analysis, to decide which samples correspond to particular layers differing in acoustic properties. Additionally, it was important to find the correlation between geological layers (based on core analysis) and the calculated polarization angles. The difficulty of the P-wave-only inversion method lies in finding the minimum of a target function using optimization algorithms. Its input must include the inclinations for various offsets with their estimation errors, the velocities obtained from the VSP, and formation velocities from other logs. The analysis presented in this paper allowed for proper preparation of the data for the P-wave-only inversion. Besides the difficulty of the method itself, the acquisition was performed in difficult terrain after heavy rains, and half of the measurements were carried out in the uncased part of the well. Due to the heavy rains, ground conditions varied significantly between sweeps [1]. The main aim of this research was to obtain quantitative information about which processing scheme is best for obtaining reliable polarization angles using the PCA (Principal Component Analysis) method for component rotation; a minimal sketch of such a PCA-based angle estimate is given below. Additionally, the patterns in the data were studied for further data clustering using unsupervised machine learning.

Walkaway VSP acquisition and region characterization

The acquisition was designed by the Department of Fossil Fuels, a part of the Faculty of Geology, Geophysics, and Environmental Protection AGH UST. It was a part of the scientific project GASLUPSEJSM, a part of Blue Gas I project no. BG1/GASLUPSEJSM/13 financed by the National Center of Research and Development (NCBiR), co-financed by Polskie Gornictwo Naftowe i Gazownictwo S.A. and Orlen Upstream. Data were gathered in Wysin village, located in Northern Poland.
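The following is a minimal numpy sketch of PCA-based polarization analysis, not the authors' implementation: the dominant eigenvector of the covariance matrix of a three-component window around the P-wave first break gives the polarization direction, from which inclination and azimuth follow. The arrays `z`, `n`, `e` and the angle conventions (inclination from vertical, azimuth from north, clockwise) are assumptions.

```python
# Hypothetical PCA polarization estimate; z, n, e are assumed 1-D numpy
# arrays holding the windowed vertical, north and east components.
import numpy as np

def polarization_angles(z, n, e):
    data = np.vstack([z, n, e])            # 3 x nsamples component matrix
    cov = np.cov(data)                     # 3 x 3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]     # principal polarization vector
    if v[0] < 0:                           # resolve the eigenvector sign ambiguity
        v = -v
    inclination = np.degrees(np.arccos(abs(v[0])))      # angle from vertical
    azimuth = np.degrees(np.arctan2(v[2], v[1])) % 360  # clockwise from north
    return inclination, azimuth
```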
It is a part of the Baltic-Central Russian Depression Zone tectonic unit called the Peri-Baltic Syneclise. Three wells were drilled: one vertical (Wysin 1) and two horizontal (Wysin 2H and Wysin 3H); see Fig. 1. There were 480 shot points along a 12200-meter profile (parallel to the horizontal wells). The Wysin 1 well is placed at the center of the 3D surface seismic survey. A 96-receiver 3C BSR Array System (Oyo Geospace Company) was used, with 15 m spacing between receivers. This system allows for simultaneous recording of the zone from 2400 to 3825 m MDGL (measured depth from ground level). It allowed for the analysis of 11 lithological complexes of Silurian rocks (determined from well logs), which correspond to 4 macro-complexes separated on the basis of velocity analysis for depth migration. The whole recording system was placed below the highly attenuating Zechstein formation [1]. In terms of lithology, the rocks can be classified as anhydrite, halite, shale, and dolomite. Up to 15 sweeps were performed at each of the 480 shot points, in the frequency range from 6 to 140 Hz.

Processing and rotations

In this section, we introduce basic information about the seismic processing of the data presented in this paper, covering only the minimum needed to understand the aim of the statistical analysis. A full description of each procedure (pre-processing, rotation methods, and matching filter creation, including graphical illustrations) can be found in [1]. Four different processing options were tested to determine which is best for polarization angle determination:
Option 1: rotation of raw data, then vertical stacking for each shot independently, followed by vertical stacking with noise removal afterwards.
Option 2: rotation of vertically stacked data, then noise removal.
Option 3: de-noising, then rotation for each shot separately, then vertical stacking.
Option 4: rotation performed for each shot after noise attenuation and vertical stacking of un-rotated data.
The average error of inclination estimation over the whole receiver range (R1-RN) was then calculated in groups of five receivers (N = 5) according to equation 1:

σ̄(Rk) = (1/N) Σ_{i=k}^{k+N-1} σ(Ri), N = 5, (1)

and errors for the whole measured depth range for every shot point (SP) were estimated according to equation 2:

σ(SP) = (1/NE) Σ σ(Ri), (2)

where the sum runs over the NE receivers with σ(Ri) < 15° for azimuths and σ(Ri) < 5° for inclinations, and σ(SP) is the azimuth or inclination average error for each SP; a numpy sketch of both averages is given below.

Statistical analysis and EDA of walkaway VSP data

The statistical analysis of the polarization angle distributions and the pattern analysis were performed using the Python programming language [3]. Python is an efficient and easy-to-use high-level programming language. There are libraries for reading SEG-Y files (Segpy), a framework for seismology (ObsPy), and support for large-scale processing of the Adaptable Seismic Data Format (ASDF). It is also easy to import data from Excel using the Pandas package, and from SQL databases. This allows for multifactorial, simultaneous analysis of data from different formats (including seismic ones) and efficient visualization. It also gives the opportunity to perform machine learning using all the data together. In this paper, the Seaborn Python library (based on matplotlib) [4] was used for the visualizations (Fig. 2 to Fig. 9). The pattern analysis and the comparison of descriptive statistics between each processing option, the input data, and option 4 with a signal matching filter (SM) applied were carried out. For this purpose, box plots and swarm plots were calculated (Fig. 3).
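Below is a minimal numpy sketch of the two averaging schemes reconstructed above (equations 1 and 2). It is an illustration under stated assumptions, not the authors' code: `sigma` is an assumed 1-D array of per-receiver angle estimation errors σ(Ri) for one shot point, and `threshold` is 15° for azimuths or 5° for inclinations.

```python
# Hypothetical implementation of the group and per-shot-point error averages.
import numpy as np

def group_mean_error(sigma, n=5):
    """Eq. 1: average error over consecutive groups of n receivers."""
    m = len(sigma) // n * n           # drop any incomplete trailing group
    return sigma[:m].reshape(-1, n).mean(axis=1)

def shot_point_error(sigma, threshold):
    """Eq. 2: mean over the NE receivers whose error is below the threshold."""
    good = sigma[sigma < threshold]   # NE qualifying receivers
    return good.mean() if good.size else np.nan
```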
A box plot allows for an analysis similar to a histogram or kernel density function, but in simplified form. The center of the box is the median, and the top and bottom represent the 3rd and 1st quartiles; the whiskers represent the minimum and maximum. The bee swarm plot shows each sample and helps in understanding the structure of the data; it gives insight into the distribution. It is easy to see that the inclinations obtained from option 1 are three-modal, with one mode placed far from the median and close to the minimum. For option 2, the distribution is similar; however, the average inclination value is lower, and the median is closer to the 1st quartile, which is the opposite of option 1. Options 3 and 4 have similar average inclination values, but the values are more concentrated around the median than in options 1 and 2. The most consistent distribution, with no outlier values, was obtained for option 4, which also has a lower average estimation error. Applying the matching filter to the data in option 4 (Option_4 with SM in Fig. 3) gives the narrowest interquartile range and a slightly lower average inclination value. Next, the kernel density estimation using the Gaussian kernel method (KDE) was calculated. The results are shown in Fig. 4. The blue graphs show the KDE of the inclinations for the input data, options 1-4, and option 4 with SM as a function of depth, and the light-brown graphs show the same data plotted against option 4 with SM. It is clearly visible that the data have three or four main concentrations, which can be correlated with the 4 main velocity complexes shown in [1]. The worst processing option (in terms of polarization angle determination) is option 2, which flattens the KDE. The second analysis, of the light-brown KDEs, was done to determine which processing scheme's results are most similar to those obtained using option 4 with SM. The answer is option 3 (Pearson correlation between results = 0.78) and option 4 (Pearson correlation between results = 0.95); however, for option 3 the center of the kernel is placed up-right of the center and is asymmetric. The similarity is greater for lower inclination values. Not only is proper processing of the seismic data critical, but so is the elimination of bad data from the input. In this case, we understand bad data to be values anomalously higher or lower than the surrounding values. Such values can be caused by poor anchoring of a receiver in the well, signal transmission problems, measurement device errors, or other random problems during acquisition in the extreme conditions of a deep well. Fig. 5 shows the histogram of inclinations before (blue) and after (orange) bad-value elimination, with a KDE plot and a fitted Gaussian distribution line. After this process, the distribution is more concentrated around the central value; additionally, the modes of the data distribution are more clearly visible. The distributions for the different shot points are similar in shape; however, for Shot Points 5 and 14 the distribution is slightly different. The last step was to plot the bee swarm plot before (Fig. 6) and after (Fig. 7) bad-value removal. It can be seen that after this procedure the outlier values are removed, and the distribution analysis can then be done with higher accuracy. There is an unexpected rapid change between Shot Point 5 and Shot Point 6; we expected a linear trend in the obtained swarms. Similar behavior can be noticed at Shot Point 15. Thus, the data can be divided into three groups according to the data distribution: the first from Shot Point 1 to 5, the second from 8 to 14, and the third from 15 to 19. A seaborn/scipy sketch of these plots and the correlation computation is given below.
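The following is a hypothetical sketch of the EDA plots and the correlation discussed above, using the same Seaborn library the paper names. It assumes a pandas DataFrame `df` with columns "option", "inclination" and "depth", and two receiver-aligned pandas Series `incl_opt3` and `incl_opt4sm`; these names are illustrative, not from the paper.

```python
# Hypothetical box/swarm plots, bivariate KDE, and Pearson correlation.
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import pearsonr

fig, ax = plt.subplots(figsize=(8, 5))
sns.boxplot(data=df, x="option", y="inclination", ax=ax, color="lightgray")
sns.swarmplot(data=df, x="option", y="inclination", ax=ax, size=2)

# Bivariate Gaussian KDE of inclination vs. depth for one processing option.
sns.kdeplot(data=df[df["option"] == "Option_4_SM"], x="inclination", y="depth")

# Pearson correlation between two processing schemes (aligned series).
r, p = pearsonr(incl_opt3, incl_opt4sm)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
plt.show()
```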
These three main data groups will be used for separate P-wave-only inversion calculations in the future.

Conclusion

Presently, the detailed investigation of seismic anisotropy for shale gas exploration is extremely important [1]. The use of new tools that are not included in standard seismic processing software gives a chance to make the correct decision about a data processing scheme. The power of data analysis and statistical examination, with proper visualization of the data distributions, should not be underestimated when a detailed investigation is performed. This step is even more important when the application of machine learning algorithms is planned. P-wave-only inversion is not a trivial problem to solve, especially when the signal is recorded in the presence of highly attenuating rocks above the receivers and in an uncased well. The use of Python for statistical analysis and visualization allowed for fast and accurate examination of data of different types. This work was supported by the AGH University of Science and Technology as part of the statutory project of the Faculty of Geology, Geophysics and Environmental Protection. Our work was done in the context of the scientific project GASLUPSEJSM, a part of Blue Gas I project no. BG1/GASLUPSEJSM/13 financed by the National Center of Research and Development (NCBiR), co-financed by Polskie Gornictwo Naftowe i Gazownictwo S.A. and Orlen Upstream. The data we used were gathered by the Department of Fossil Fuels, a part of the Faculty of Geology, Geophysics and Environmental Protection AGH UST, under the supervision of prof. Michal Stefaniuk.
2019-11-28T12:06:09.615Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "af0f506aff5e14061ae867788c7bd5a8b818853c", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/59/e3sconf_ag2019_01006.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e92f06edfb89fb25dd50ca4c1cba889c35352453", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Computer Science" ] }
272651790
pes2o/s2orc
v3-fos-license
European Journal of Educational Research

Introduction

The use of computers is becoming more common in today's educational system. This was the case prior to the COVID-19 pandemic, and it remained so during the lockdown period, in order to continue offering online instruction. There are several factors that impact the use of computers by educators, and during the pandemic, the incorporation of technology into the instructional process was, for a significant number of educators, most likely forced and required. In general, educators have a positive attitude toward the use of information and communications technology (ICT) in education (Jimoyiannis & Komis, 2006; Sánchez et al., 2012). This includes educators who are involved in the education of children who have special educational needs (Jimoyiannis & Komis, 2006; Katsarou, 2020; Sánchez et al., 2012; Stankova, Kamenski et al., 2018; Stankova, Mihova et al., 2021). It has been shown that self-efficacy is one of the most important positive factors connected with computer use, particularly during the pandemic in the context of online learning (Hong et al., 2021). Self-efficacy and the way teachers perceive their competence play a significant role in lesson preparation skills and the application of computers. They largely determine the extent to which teachers perceive computers positively as part of learning and successfully apply technology (Kent & Giles, 2017). In addition to self-efficacy, access to technology (Siyam, 2019), self-efficacy beliefs, and ICT training are needed for success (ELDaou, 2016). Age is also relevant to the use of computers, with younger educators usually indicating higher levels of computer use in their work, including those who work with children with special educational needs (Stankova, Tuparova et al., 2021). Despite educators' generally positive orientation toward using computers, there are barriers. For example, regarding the use of computer games in education, common barriers for teachers are the high costs of software products, insufficient technical equipment, and the lack of both special training and specific products that correspond to the school curriculum (Tuparova et al., 2019). Employers in education point to the fear of technology and unwillingness to add extra workload as barriers to the use of computers in the educational process (Yurinova et al., 2021); more experienced professionals point to a lack of time as a serious barrier (Stankova, Tuparova et al., 2021). In online learning, teachers are presented with another challenge: at certain moments, computers are used as the only means to implement the educational process with students, including children with special educational needs (SEN). Here again, digital competence and opportunities to learn and enhance digital skills are key for teachers in helping them adapt to the new environment (König et al., 2020). At the same time, the use of an educational technology solution can have a positive effect on online learning outcomes, whereas the age of the teacher and school infrastructure may not (Dincher & Wagner, 2021). A teacher's confidence in the use of technology corresponds to their ICT self-efficacy (Ninković et al., 2021; Rabaglietti et al., 2021). Teachers' ICT self-efficacy and their communication with students and families can increase along with an increase in their motivation to improve their skills in using technology (Beardsley et al., 2021).
Given that the COVID-19 pandemic has definitively changed the use of technology in education and, for some time, established online education as the only possible option, the factors that influence the use of ICT among general and special education teachers have also undergone changes. We believe that special educators represent a particular group in these studies, since working with children with special needs is more specific, not least because of the frequent difficulties in holding children's attention and presenting the material in an appropriate way. Before online learning became a necessity, years of computer use had an impact on teachers' attitudes towards technology (Teo, 2008), but whether the necessity of using computers as the sole means of instruction can change these attitudes remains unclear. Skills in the application of technology in a distance-learning environment can predict high levels of teachers' self-efficacy (Andreou et al., 2022), but computer use self-efficacy is another factor worth examining. The demographic and work characteristics related to attitudes towards computers and computer use self-efficacy have not been studied much, and they have probably also undergone changes due to the forced use of online learning, both for general and special education teachers. Even before the COVID-19 pandemic, special education teachers were using technology when working with children with special educational needs (Stankova, Tuparova et al., 2021). This is because many children with special educational needs prefer technology, and before the pandemic, specialists worked hard to create tools that support special education teachers in diagnosis and therapy. As the pandemic has changed the situation, an emerging question is whether there are current differences between general and special education teachers in terms of attitudes towards computers and computer use self-efficacy.

Objectives of the Present Study

The aim of our research was to assess the familiarity with and use of ICT among general and special education teachers. Specifically, four research questions were formulated:
1. How familiar are general and special education teachers with ICT?
2. How much do general and special education teachers use ICT?
3. What is the relationship between attitudes towards computers and computer use self-efficacy?
4. Do demographic and work characteristics affect attitudes towards computers and computer use self-efficacy?

Participants

The participants in the study were N = 705 teachers of primary and secondary education. Of these, 535 were general education teachers (76%), while 170 were special education teachers (24%). The participants were approached using a non-random convenience sampling method (Creswell, 2014). Each participant had to be an adult teacher who worked at the primary or secondary level in general or special education. These primary and secondary general and special education teachers were employed in public schools in the greater area of central Athens.
Participants were approached in the school environment through email. The aim of the research was briefly described, and all potential participants were informed that their participation would be anonymous and voluntary, given that none of the information they provided could be used to identify them, and that they could withdraw from the study at any time without the need for an explanation. No deception was used in this research, and there was no physical or psychological risk or harm to the participants, in accordance with the Ethics Committee of the British Psychological Society (2018).

Participants were 88% female and 12% male. Their ages varied, with 53% of teachers aged 26-40 and 39% over 40. Most participants were general education teachers (76%), with 24% working in special education. Most teachers worked in high schools (67%), while the remaining 33% worked in elementary schools. Many participants had less than five years of experience (43%), with 16% having five to 10 years, 18% having 11 to 15 years, and 23% having more than 15 years of work experience (Table 1).

Study Design

The study used a quantitative questionnaire approach (Patten & Newhart, 2018). The instrument used comprised one survey and two questionnaires.

The survey collected demographic and other information for the sample, including gender, age, position, school type, years of experience, and training in ICT. This section also asked the participants to respond to four items pertaining to their use of ICT ('Were you forced to use ICT due to the pandemic?'; 'Does the school environment help you in the use of ICT?'; 'Does the school principal encourage you to use ICT?'; and 'Before the onset of distance education, did you use ICT in your teaching?'). The survey was created by the researchers in order to understand what happened during the lockdowns and under COVID-19, before and after the onset of distance education.

The first questionnaire was the Greek Computer Attitudes Scale (GCAS), developed by Kassotaki and Roussos (2006), which comprises 30 items that assess teachers' views about the use of computers. The questionnaire provides a total computer attitudes score as well as three sub-scores for confidence with computers (15 items), affection/feelings for computers (10 items), and cognitions about computing and computers (five items). The total score and subscores are calculated by adding up all the relevant item responses. Sixteen items are reverse-coded. The questionnaire is rated on a five-point Likert scale ('completely disagree' to 'completely agree'). The original version of the questionnaire, tested on four Greek samples (including teachers), showed adequate internal consistency and test-retest reliability, as well as good concurrent validity (Roussos, 2007).

The second questionnaire was the Greek Computer Self-Efficacy Scale (GCSES), which comprises 29 items that provide a total score for teachers' self-efficacy in using a computer, or the extent to which they feel competent in using a computer and solving simple problems that occur when using one. The items are rated on a five-point Likert scale ('completely disagree' to 'completely agree'). The original Greek version of the questionnaire was found to have acceptable validity and reliability (Kassotaki & Roussos, 2006).
Reliability for the Scales

Concerning the reliability of the scales of the study, a series of Cronbach's alpha tests were performed. The GCSES had a very high reliability of α = 0.97 (Table 2). Cronbach's alpha was then calculated for the GCAS (Table 3). The total scale of computer attitudes had a high reliability of α = 0.93 (30 items). The computer attitudes subscales also had acceptable reliability: the 'confidence with computers' and 'affection towards computers' subscales had α = 0.92 (15 items) and α = 0.83 (10 items), respectively. The subscale of 'cognitions about computing and computers' had lower but acceptable alpha reliability (α = 0.66; five items).

The data were entered and coded into the Statistical Package for the Social Sciences (SPSS), version 25. Descriptive analysis was provided for the demographic and other information of the sample through frequencies and percentages, while descriptive statistics were calculated for the items of the GCAS and GCSES through means and standard deviations. Next, Cronbach's alpha for reliability was computed for the scales and subscales of the questionnaire, and the dimensions of the study were calculated. The means and standard deviations for the computer attitudes and computer self-efficacy of teachers were provided, and the data were tested for normality using the Kolmogorov-Smirnov normality test. Given that the data did not follow the normal distribution, non-parametric Spearman rho correlations were performed between the dimensions of the study. Additionally, Mann-Whitney and Kruskal-Wallis tests were used to examine the potential effect of gender, age, ICT training, and position (general or special education), as categorical independent variables, on the attitudes and self-efficacy of teachers regarding computer use, as continuous dependent variables. Mann-Whitney tests were used for the dichotomous independent variables of gender, position, and training in ICT, while a Kruskal-Wallis test was used for the independent variable of age.
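This reliability and non-parametric testing pipeline can be reproduced with standard scientific Python libraries. The sketch below is illustrative only and is not the authors' SPSS workflow: the file name teachers.csv and the column names (gcses_1 ... gcses_29, gcses_total, ict_training, age_band) are hypothetical placeholders for whatever layout the real data set uses.

```python
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("teachers.csv")                # assumed file layout
gcses_items = df[[f"gcses_{i}" for i in range(1, 30)]]   # 29 GCSES items
print("alpha =", cronbach_alpha(gcses_items))

# Mann-Whitney U for a dichotomous factor (e.g., ICT training yes/no) ...
trained = df[df["ict_training"] == 1]["gcses_total"]
untrained = df[df["ict_training"] == 0]["gcses_total"]
u, p = stats.mannwhitneyu(trained, untrained, alternative="two-sided")

# ... and Kruskal-Wallis across age bands, as in the study design.
groups = [g["gcses_total"].values for _, g in df.groupby("age_band")]
h, p_kw = stats.kruskal(*groups)
```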
Use of ICT

Most of the teachers had undergone training in ICT (61%). A large majority were compelled to use ICT due to the pandemic (89%), and the school environment helped 33% of participants in the use of ICT a lot or very much. Thirty percent were not helped by their school environment. Overall, the school principal encouraged 41% of teachers to use ICT a lot or very much, while 24% of teachers were encouraged a little or were not encouraged at all. Before the onset of distance education, 37% used ICT very often in their teaching, while 36% used it a little or did not use it at all (Table 4). As expected, the percentage of teachers using technology increased during the pandemic, but unfortunately, our expectation that schools would seriously support teachers in this situation was not fully justified. The implication of these results is that teachers probably went through enormous difficulties in trying to introduce technology quickly and forcefully into learning.

Results for the Computer Attitudes Scale

Regarding the computer attitude items, there were no missing values (valid sample N = 705). Using a five-point Likert scale, where 1 = 'completely disagree', 2 = 'disagree', 3 = 'neither agree nor disagree', 4 = 'agree', and 5 = 'completely agree', on average, teachers 'agreed' (means between 3.5 and 4.5) that computers do not scare them at all (mean = 4.13), that they could learn to use any computer software (mean = 3.99), that they feel comfortable when they have to use a computer (mean = 3.90), that they have a lot of self-confidence when it comes to using a computer (mean = 3.89), that they could get good grades in computer courses (mean = 3.85), that they enjoy working with computers (mean = 3.80), that if someone gave them a new computer to look at, they could get some programs to run (mean = 3.79), that anyone can use a computer (mean = 3.79), and that computers are enjoyable (mean = 3.74).

The results showed that although teachers were not afraid of computers, the more complicated the conditions related to the presence of computers, the more their positive responses decreased, reaching the lowest value for the question of whether computers are enjoyable. Evidently, computers are still a challenge, and even if they are needed in their work, teachers approach with some uncertainty the claims that they can use any computer or could work with some programs, even when they are not in a completely familiar situation.

On the other hand, on average, teachers 'disagreed' (means between 1.5 and 2.5) that not many people can use computers (mean = 2.13), that computers are boring (mean = 1.97), that one has to be young to learn how to use a computer (mean = 1.86), that one needs to be 'brainy' in order to work with computers (mean = 1.83), that computers are difficult to understand (mean = 1.81), that they hesitate to use a computer for fear of making mistakes they cannot correct (mean = 1.80), and that they are no good with computers (mean = 1.79). Participants also disagreed with the following statements: that they hope to never reach the point of having to use computers (mean = 1.71), that they are not the type to do well with computers (mean = 1.70), that they need someone experienced nearby when they use a computer (mean = 1.68), that they avoid using a computer whenever they can (mean = 1.60), that they feel hostile towards computers (mean = 1.56), that they hesitate to use a computer in order not to look like a fool (mean = 1.50), and that they get a sinking feeling when they think of using a computer (mean = 1.50).

There is a positive trend among teachers, which shows that they do not think there are serious obstacles to a person working with a computer. Their answers point to the idea that computer mastery is an accessible skill that is not tied to specific conditions or personal characteristics.
The results also show means of more than 3 ('neither agree nor disagree') but less than 3.5 for the following statements: I can do advanced computer work; I could probably teach myself most of the things I need to know about computers; the challenge of using a computer is very appealing to me; when I have a problem with the computer, I will usually solve it on my own; and I like to spend a lot of time using a computer. The results for the following statements fall just between 2 = 'disagree' and 3 = 'neither agree nor disagree': computers fail very frequently; and I do not enjoy talking with others about computers.

Results for the Computer Use Self-Efficacy Scale

Regarding teachers' self-efficacy in using computers, there were no missing values (N = 705). Using a five-point Likert scale, where 1 = 'completely disagree', 2 = 'disagree', 3 = 'neither agree nor disagree', 4 = 'agree', and 5 = 'completely agree', teachers 'completely agreed' (means > 4.5) that they feel they can copy parts of a text to another section of the same text (4.68), search for information on the internet using search engines (4.67), download files from the internet (4.67), compose texts on the computer (4.63), forward emails they have received to other recipients (4.62), format text documents (4.60), download and read email attachments (4.59), move files to a folder on the computer (4.57), and use the spell check provided by word processors (4.52). Means between 4.5 and 3.5 were recorded for the remaining items.

Following these results, we see that teachers do very well with the basic operations carried out on computers and, of course, as the operations become more difficult, their confidence decreases. However, their good general skills can be noted, which are the basis for building additional abilities, if necessary, even if only by self-training.

As a result, the dimension of computer attitudes and its subdimensions and the dimension of computer self-efficacy were calculated. Teachers had positive attitudes toward computers (mean score 115.81); they were confident with computers (mean score 59.41); they felt positive about computers (mean score 39.28); and they had positive thoughts about computing and computers (mean score 19.35). The teachers' self-efficacy regarding computer use was even higher (mean score 120.59). Table 5 presents these findings. Normality tests showed that none of the data followed the normal distribution and, given that finding, a series of non-parametric Spearman correlations were performed between the dimensions and subdimensions of the study (Table 6).
Results showed that computer self-efficacy had high positive correlations with the dimension of computer attitudes, as well as with the attitude subdimensions of confidence and affection, while it had a low positive correlation with cognitions. Furthermore, it is interesting to note that all computer attitude subdimensions had positive, significant correlations, which were low for affection and cognitions, medium for confidence and cognitions, and high for confidence and affection. Computer self-efficacy probably leads to an increase in confidence with computers and thus likely increases the use of computers and teachers' understanding that technology does not require special characteristics, knowledge, or skills that are difficult to achieve. Of course, we consider the fact that the longer teachers use technologies, the more computer self-efficacy would increase, and the pandemic forced relatively long online learning, which in turn led to the need for the application of technology in education.

Effects of Age, Position, and ICT Training on the Dimensions of the Study

Finally, the effects of age, position, and ICT training on the dimensions and subdimensions of the study were examined. The effect of gender was not calculated, since there was a large difference between the number of males (N = 85) and females (N = 620) in the sample.

The means and standard deviations for computer attitudes (total), confidence with computers, affection for computers, cognitions about computing and computers, and computer self-efficacy (total) for both groups - general education teachers and special education teachers - are presented in Table 7. There was a significant effect of position (general/special education teaching) on confidence with computers (Mann-Whitney U = 39525.0, p < 0.05) in the sample. General education teachers in our study (N = 535) had lower confidence with computers than special education teachers (N = 170). This is probably due to the fact that, even before the pandemic, special education teachers paid great attention to technology and its inclusion in their work with children with special educational needs.

Being a general education teacher or a special education teacher did not affect the total dimensions of computer attitudes or computer use self-efficacy, or the computer attitude subdimensions of affection for computers or cognitions about computing and computers (all p > 0.05).
Furthermore, there were significant effects of ICT training in our sample on the dimension of computer attitudes (U = 49362.5, p < 0.001), as well as on the subdimensions of confidence (U = 49812.5, p < 0.001) and affection (U = 49412.5, p < 0.001). ICT training also had a significant effect on the dimension of computer use self-efficacy (U = 53912.5, p < 0.05), even if the difference is small (Table 8). Teachers who had received ICT training had more positive attitudes toward computers, higher confidence in and affection towards computers, and higher computer use self-efficacy than teachers who had not received ICT training.

Finally, we wanted to check whether significant differences exist between the different age groups. Table 9 includes the means for the different age groups for computer attitudes (total), confidence with computers, affection for computers, cognitions about computing and computers, and computer self-efficacy (total), and the differences between the age groups using Kruskal-Wallis tests. Age had a significant effect on all dimensions and subdimensions of our study, specifically on the dimension of attitudes towards computers and on confidence with computers, affection for computers, cognitions about computers, and the dimension of computer use self-efficacy (Table 9).

Non-parametric post-hoc tests indicated that, for the total score of computer attitudes, teachers aged 46-50 and teachers over 50 had less positive attitudes about computers than teachers aged 26-30 (p < 0.05 and p < 0.05, respectively) and teachers aged 41-50 (p < 0.05 and p < 0.05, respectively).

Regarding the subscale of affection for computers, post-hoc tests indicated that teachers aged 46-50 had less affection for computers than teachers aged 31-35 (p < 0.05) and 41-45 (p < 0.05), while participants over 50 also had less affection for computers than participants aged 41-45 (p < 0.05).

Discussion

The present study examined attitudes towards ICT and computer self-efficacy in 705 teachers in primary and secondary education. In general, the teachers held positive views regarding computers and felt confident with, and had positive cognitions and feelings about, computers. The teachers had even higher computer use self-efficacy. Computer use self-efficacy was significantly and highly positively related to computer attitudes and its subscales of confidence and affection. This probably means that teachers generally use computers in their practice and feel prepared to incorporate technology into the educational process. Other authors report similar data: a positive correlation between computer self-efficacy and attitudes towards the use of web-based instruction (Doğru, 2020). A positive relationship also exists between teachers' ICT self-efficacy and the use of ICT in the educational environment (Hatlevik & Hatlevik, 2018).

In our study, computer self-efficacy had a low positive correlation with cognitions about computing and computers. We must note, however, that teachers in most cases use computers only as a tool that supports the educational process.
ICT training, position, and age significantly affected the attitudes and self-efficacy of teachers regarding computer use. Teachers who had had ICT training held more positive views towards computers, had more confidence in and higher affection for computers, and showed more computer use self-efficacy than teachers without ICT training. Thus, technology training emerges as an essential factor to consider. This corresponds to other findings: training in the use of computers increases the inclusion of computers in educational practice and computer self-efficacy (Ikhlas & Dela Rosa, 2023; Krause et al., 2017).

Additionally, general education teachers showed less confidence with computer use than special education teachers (a computer attitudes subdimension), while there was no effect of position (general or special education teaching) on either total computer attitudes and computer use self-efficacy or on affection for computers and cognitions about computing and computers (computer attitudes subdimensions). The difference is probably due to the fact that special education teachers usually work individually and often use computers in their work with children, and many children with special needs enjoy learning activities that incorporate technology.

Finally, age affects attitudes towards computers and their subscales (confidence, affection, and cognitions about computers), as well as computer use self-efficacy. In general, older teachers had less favorable attitudes toward computers, less confidence, less affection, and less positive cognitions regarding computers than younger teachers. Senior teachers also had lower computer use self-efficacy than younger teachers, except for teachers aged up to 25, who had lower computer self-efficacy than teachers aged 26-35 and 41-45, which is probably due to the general lack of confidence among very young teachers. These results correspond to results from other studies, where teachers over 50 years old consider the use of ICT in education less (Admiraal et al., 2017).

Conclusion

Despite the pandemic, technology in the field of education continues to develop, and it currently plays a very significant role in the process of supporting children who have special needs. The question of whether or not general educators and special educators have appropriate training to make effective use of technology is an essential one for the administration of educational institutions. On the other hand, many children and parents have a positive attitude toward the application of technology in education. However, during this process, the relationship between the teacher or special education teacher and the child should not be lost, because it is an important part of children's lives at school. The selection of technology, the acquisition of new abilities to make use of those technologies, and the development of new skills to evaluate their impact all require assistance for educators.

The present study attempts to shed light on the changes in the use of computers by both general and special education teachers following the COVID-19 pandemic, as well as to look for the interrelationships between various factors that could influence the inclusion of computers in the educational process, considering the enormous challenges facing modern professionals who work with children.
Recommendations

The successful inclusion of technologies in the learning process must consider the following factors as important in the educational environment: training and support from the school for general and special education teachers is essential for the development of attitudes towards technologies; emphasizing the benefits of the technologies and the satisfaction of children when working with technologies can increase the interest of teachers in using them; and improving attitudes towards computers and increasing teachers' experience in the use of computers as a useful tool in the educational process can support their computer self-efficacy.

In order to gain more knowledge and insight into the factors that influence the use of computers, it is suggested that additional research be conducted. This would allow the effective use of information and communication technologies in the classroom to be fostered, which would be beneficial for students, teachers, and society.

Limitations

The primary limitation of the study is that only 535 Athens-based educators from the general education cohort and 170 from the special education cohort participated. Another limitation of this study is the lack of prior research on the factors influencing educators' use of computers, particularly in a pandemic situation in which technology was introduced into the educational process for many instructors.

Authorship Contribution Statement

Proedrou: Methodology, data analysis. Stankova: Design and project management. Malagkoniari: Literature review, conceptualization. Mihova: Review-editing and writing, original manuscript preparation. All authors have read and approved the published final version of the article.

Table 2. Cronbach's Alpha Reliability for Computer Use Self-Efficacy Questionnaire
Table 3. Cronbach's Alpha Reliability for Computer Attitudes Questionnaire
Table 4. Teachers' Training in and Use of ICT (N = 705)
Table 5. Mean Scores for the Dimensions of the Study
Table 6. Spearman Rho Correlations Between the Dimensions of the Study
Table 7. Mean Scores for the Dimensions of the Study by Position
Table 8. Comparison Between Means of Teachers Who Received and Did Not Receive ICT Training
Table 9. Mean Scores for the Dimensions of the Study by Age
2022-12-15T16:35:41.140Z
0001-01-01T00:00:00.000
{ "year": 2023, "sha1": "c769bd22f7a7f22d2c47ea7731bc9dad1ea4c676", "oa_license": null, "oa_url": "https://pdf.eu-jer.com/EU-JER_12_1_159.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c769bd22f7a7f22d2c47ea7731bc9dad1ea4c676", "s2fieldsofstudy": [ "Mathematics", "Education" ], "extfieldsofstudy": [] }
257026941
pes2o/s2orc
v3-fos-license
The effects of slit-pore geometry on capacitive properties: a molecular dynamics study

Morad Biagooi 1 & SeyedEhsan nedaaee oskoee 1,2*

Ionic liquids (IL) inside conductive porous media can be used to make electrical energy storage units. Many parameters, such as the shape of the pores and the type of IL, affect the storage performance. In this work, a simple IL model inside two geometrically different slit-pores is simulated and their capacitive properties are measured. The pores were of finite length; one of them was linear and the other had a convex extra space in the center. The molecular dynamics simulations are done for two, qualitatively, low and high molarities. The pores have been simulated for both initially filled and empty conditions. Differential capacitance, induced charge density, and IL dynamics are calculated for all of the systems.

Supercapacitors have gained much attention for energy storage [1][2][3][4]. They can be made of ionic liquids (IL) or electrolytes, such as water-in-salt, between electrodes 5. ILs are a type of molten salt with melting points below 100 °C 6. Supercapacitors are also called electrical double-layer capacitors (EDLC) because of the layered configurations of electrolyte or IL near the electrode surfaces. They are expected to have great power performance, high capacitance and a theoretically unlimited charge-discharge cycle 7. However, because of the complexities of these systems, experiments do not necessarily coincide with theoretical expectations 8.

EDLCs are different from chemical batteries. In these systems, there is no electron transfer between charged particles and the surfaces. If a charged particle or ion gets near to the electrode's surface, there is an increase of induced opposite charges on the surface. This is the mechanism of electrical energy storage. EDLCs have shorter charge-discharge times because of the adsorption-desorption rate of the electrolyte's ions on the electrodes 8. Since electric storage in these devices is related to the attraction-repulsion of the ions to the electrode surfaces, conductive nanoporous media, such as carbide-derived carbon 9, are good candidates to be used as electrodes. Rough electrodes possess a larger amount of surface area and therefore larger capacitance than smooth ones 10. Research shows that the charging process in EDLCs involves many mechanisms and parameters affecting their capacitive performance, such as self-discharge due to redox reactions 11, overscreening and crowding in dense electrolytes 12, electrochemical potential windows (EPW) of the IL 13, and changes in volume of the electrodes 14.
Considering all of the known mechanisms at the same time is not possible, especially in experimental [15][16][17][18] and analytical 11 studies. Because of that, different methods have been invented and used for EDLC simulations, such as molecular dynamics (MD) [19][20][21][22], Monte Carlo 23,24, lattice models 7, density functional theory (DFT) 13,25, mixed DFT with MD and post-Hartree-Fock calculations 6, and machine learning [26][27][28]. In addition, modeling and simplifications in simulations give us some hints about the ongoing microscopic processes in EDLCs.

The geometry of the electrodes is an important factor of supercapacitors that highly affects their performance 22. There are numerous studies on different electrode geometries, such as planar electrodes 20,21, slit-pores 17,23,29, combinations of flat and porous electrodes 30, cylindrical pores 29,31, spherical electrodes 31, carbon nanotube forests 32, mathematically flat electrodes vs atomically structured electrodes 33, atomically rough vs non-rough electrode surfaces 10, and so on.

The equilibrium condition of the pores at zero applied voltage is another effective factor of the supercapacitors. Pores are described as ion-philic and ion-phobic for filled and empty pores, respectively 34. Ion-phobic pores are expected to have faster charging and higher energy storage than ion-philic ones 23. A comparison of initially filled and empty pores has been done by Kondrat and Kornyshev 35 with a mean-field model; they reported that the charging of initially filled pores is diffusive-like while that of the empty pores is a front-like process 8. Some studies stated that by changing the total ion concentration, the ion-phobicity of the pores can be controlled; however, this does not seem to be a general rule 23.

In this work, we focus on the effects of pore geometries. The electrode geometry chosen for this research is asymmetric: one flat electrode and one slit-pore. In order to do that, two different slit nano-pores are designed and simulated with IL inside. The simplest symmetric coarse-grained model of ILs is used 20,21. The electrode walls have no atomic structure and are created from polyhedrons, similar to some other studies 24. The simulations are done at different molarities with initially filled and empty pores. The aim is to see the effects of the geometry on the capacitive properties.

The plan of the paper is as follows: the results are interpreted and discussed in the 'Results and discussion' section. In the 'Methods' section, the simulation geometry, algorithms, models and parameters are explained.

Results and discussion

Neutral thermal configuration. Figures 1 and 2 are snapshots of the system for the linear slit-pore and the convex pore at zero voltage difference. In both cases, when the pore is initially empty, it remains empty during the simulation and ions do not tend to penetrate inside the pores. On the other hand, when the pores are initially occupied, ions tend to remain inside the pore except at its entrance. In this case, there is an empty region at the entrance of the linear slit-pore where ions do not pass. In the case of the convex pore, ions leave the narrow regions on both sides of the convex pore; they remain outside the pore or occupy the convex space and do not leave it during the simulation time. As a simple argument to justify the empty regions, one can refer to the maximum entropy principle.
Being in a narrow slit causes ions to form a pseudo-two-dimensional structure in which ions with opposite charge alternate in each direction and construct a 2D pseudo-lattice. This structure imposes an extra order on the IL system and reduces the entropy. Therefore, to maximize the entropy of the system, ions leave the narrow slit for the convex pore or the outside of the electrode. In the case of the linear slit-pore system (Fig. 1), however, after those ions near the entrance leave, the remaining ions in the slit construct a connected structure in which the electrostatic force plays the role of the binding interaction. This binding interaction is strengthened by the conductive boundary of the slit and makes it hard for ions to leave the slit.

Figure 3 shows the dynamics of the equilibrium runs at zero potential. Systems that are initially filled lose a few particles near the pore entrance, while the empty ones do not gain new ions, as has already been mentioned. In systems with an initially filled convex pore (Sys. 7 & 8), ions mostly accumulate in the convex hole at the center of the pore, and as a result the number of ions in the pore remains constant with time, except for the early stages of the simulation, where ions at the entrance of the pore leave it for the reservoir. In contrast, initially filled linear slit-pores (Sys. 3 & 4) lose their in-pore ions moderately. This is because the system is not at the global maximum of its entropy, owing to the existence of a pseudo-2D lattice of ions in the slit-pore. Leaving this pseudo-lattice and going to the reservoir requires overcoming a potential barrier at the entrance of the pore. Thermal fluctuations, however, help ions to leave this pseudo-lattice, and therefore there is a moderate reduction in the number of ions as a function of time in the linear slit-pores.

A simple justification of the above-mentioned barrier is as follows: an ion belonging to the pseudo-lattice in the pore has to break its links with the other ions to move towards the reservoir. At early stages of the simulation, when the whole pore is filled, ions at the entrance of the pore have links with ions in the reservoir as well as in the pore; therefore, they can easily break their links and go to the reservoir. On the other hand, when enough ions have left the pore entrance, an empty space arises; ions on the edge of the pseudo-lattice feel a surface tension due to the lack of symmetry, and therefore it is hard for them to leave the pore.

IL dynamics. Figures 4 and 5 are snapshots of the high-molarity systems at a finite applied potential difference. The co-ions of the initially filled pores do not leave the pores, while empty pores remain empty of co-ions during the charging process. Indeed, there are three different ways that an ideal supercapacitor charges: counter-ion adsorption, co-ion desorption, and ion exchange. For empty pores, the only method of charging is counter-ion adsorption. For filled pores, however, any of these three methods or any combination of them can be considered. To understand the true charging process, we plot the population of the co-ions in the pore as a function of time during the charging process (inset of Fig. 6). Surprisingly, the number of co-ions remains constant, indicating that the only method that contributes to the charging process is counter-ion adsorption. In the other two methods, the number of co-ions should decrease due to leaving the pore (co-ion desorption) or due to exchange with counter-ions (ion-exchange method).
The main panel of Fig. 6 shows the number of counter-ions entering the pore during the charging process. The plot is on a logarithmic scale. The continuous line is a power-law guide, plotted for the sake of clarity. This figure demonstrates power-law dynamics for the counter-ions. Although we did not perform a power-law fit on each individual curve, it is obvious that the power-law exponent is less than unity, indicating dynamics slower than the ballistic dynamics one would expect for an ion in an electric field. This is due to the existence of interactions between ions, which cause energy dissipation.

Capacitive properties. The capacitance of a linear capacitor is defined by C = q/U, in which U is the potential difference between the electrodes and q is the induced charge. Here C is a function of the capacitor geometry as well as the dielectric material inside. Supercapacitors, on the other hand, are more complicated: there are many different parameters, such as gravimetric and volumetric capacitance, involved in storing the electrical energy. Charge fluctuations play an important role in determining the capacitance of the system and cause the supercapacitor to have a nonlinear response to the applied electrical potential 5. As a measure of this response, one can refer to the differential capacitance, DC = dq/dU. To reduce the finite-size effect in our simulation, we use the mean-square fluctuations in the electrode surface charge density to determine the DC,

DC = (S / k_B T) (⟨σ²⟩ − ⟨σ⟩²),

where S is the electrode internal surface, k_B is the Boltzmann constant, T is the temperature, and σ is the surface charge density. The angle brackets denote an ensemble average at fixed applied voltage U 38. The nonlinear dependence of the DC on the applied voltage U is captured by this relation, since the surface charge density is generally potential dependent in our simulations (a numerical sketch of this estimator is given at the end of this section).

The DC plots (Fig. 7) show a major difference between the different configurations of the system. Depending on the geometry and initial conditions of the different systems, distinct peaks of the DC appear at different voltages. If these peaks are related to saddle points in the free energy, as mentioned by Merlet et al. 38, it means that the free energy is highly affected by the details of the initial and boundary conditions of the system. The DC values are of the same order as in other works 10,23. Furthermore, it is clear from Fig. 7 that convex systems typically have a lower differential capacitance than a simple slit-pore. The overall lowest DC is that of Sys. 8 and the highest is that of Sys. 4, both of which are filled high-molarity systems.

It has been reported 23 that for a slit-pore, a change in molarity does not change the induced charge density (ICD) in equilibrium. Figure 8 qualitatively shows the same result for this work. However, one can see a tiny difference (a maximum of 1 μC/cm²) in the induced charge across systems with different geometries. The lowest ICD values belong to the convex pores with low molarities (Sys. 5 and 7).
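As a rough illustration of the fluctuation estimator used for Fig. 7, the Python sketch below computes the differential capacitance from a trace of induced surface charge density sampled at one fixed voltage. The trace fed to it here is synthetic stand-in data, not output from the paper's simulations.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def differential_capacitance(sigma, area, temp):
    """Fluctuation estimator DC = S*(<sigma^2> - <sigma>^2) / (kB*T).

    sigma : samples of the surface charge density (C/m^2) at fixed voltage
    area  : electrode internal surface S (m^2)
    temp  : temperature (K)
    Returns the differential capacitance per unit area (F/m^2).
    """
    sigma = np.asarray(sigma)
    return area * sigma.var() / (KB * temp)   # var() is <s^2> - <s>^2

# Synthetic stand-in trace: fluctuating charge density around a mean value.
rng = np.random.default_rng(1)
trace = 0.05 + 0.002 * rng.standard_normal(60_000)        # C/m^2
print(differential_capacitance(trace, area=1.0e-17, temp=400.0))
```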
Conclusion

This paper showed how the geometry of two different pores affects their capacitive properties. While the results illustrated that the differential capacitance follows complicated rules in all systems with different initial and geometric conditions, the IL dynamics and induced charge showed approximately the same behavior in all cases. Furthermore, of the three different charging mechanisms, our simulations show that counter-ion adsorption is the only one that contributes. However, more work will be needed to establish a complete theory of pore geometry effects on the charging process and capacitive properties of supercapacitors.

Methods. Details of the simulation method, including the simulation setup, the algorithms and the parameters used for this work, are discussed in this section. The simulation setup is a combination of ions, pore geometries, and FE meshes. The ions interact with each other as well as with the conductive boundaries of the porous media. The simulation method was MD; the systems were simulated with the CAVIAR software package, Version 1.0. Details of the simulation procedure are given in the following subsections.

Ionic liquid. ILs are modeled as simple spherically symmetric objects with a purely repulsive Lennard-Jones (LJ) potential,

U(r) = 4ε[(σ/r)^12 − (σ/r)^6] + U_cut for r < 2^(1/6)σ,

in which r is the distance between the particles and ε and σ are the potential parameters. The cutoff radius is set as 2^(1/6)σ to ensure a repulsive force at any distance, and U_cut is set according to U(2^(1/6)σ; ε, σ) = 0 to remove the discontinuity due to the potential truncation at the cutoff distance 20. The cations/anions carry plus/minus one elementary charge. Since the target of this study is the geometric effects of the electrodes, we can set the same mass for the cations and anions. As for setting the mass value, there are as many choices as there are known ILs, and investigating the effect of different mass values on the results would take long simulations. It is sensible to choose a mass value that is not far from the majority of experiments and IL applications. A well-known IL is BMIM-PF6, which is used in many simulation and experimental studies 20,21,23,33,40,41. The anion of this IL, PF6 (hexafluorophosphate), is one of the most stable IL anions and provides the largest EPWs when paired with conventional organic cations 13. It is also one of three widely used non-coordinating anions. We have therefore chosen the IL mass to be the same as PF6, i.e. 144 u.

Pore geometry. Because we want to study the effect of geometry on the capacitance of the supercapacitor, two distinct geometries with different electrodes are designed. One of them is a simple slit-pore and the other is a slit-pore with a convex space inside. They are called linear and convex pores in the rest of the paper. Figure 9 illustrates both slit-pore geometries at true scale. The geometry is designed with an open-source computer-aided-design software, SALOME 42 Version 8.2.0 (https://www.salome-platform.org/), and exported in the VTK file format (https://www.vtk.org/VTK/img/file-formats.pdf). The VTK format describes the geometry by triangles. The IL particles interact with the triangles using a discrete-element-method (DEM) algorithm 43. This algorithm calculates the distance between the particles and the triangles that describe the walls. Using this distance, an interaction potential between the ILs and the walls can be set. We have chosen Eq. 2 for this interaction. In this work, the walls represent the carbon atoms of the pores, meaning σ_c = 3.37 Å and ε_c = 1 kJ/mol 23. Using the LJ parameters chosen for the IL, the carbon atom parameters, and the Lorentz-Berthelot mixing rules 44, σ_{IL-Wall} = (σ_IL + σ_Wall)/2 and ε_{IL-Wall} = (ε_IL ε_Wall)^{1/2}, the LJ interaction between the ILs and the pore walls is calculated; a plain-code sketch of these expressions is given below.
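The truncated-and-shifted repulsive LJ (WCA-type) potential and the Lorentz-Berthelot combination described above are simple enough to state in a few lines of code. The following Python sketch is a plain reference implementation with the paper's parameter values plugged in; it is not the CAVIAR code itself.

```python
import math

def wca(r, eps, sigma):
    """Purely repulsive, truncated-and-shifted LJ (WCA) potential.

    U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6) + eps   for r < 2**(1/6)*sigma
    U(r) = 0                                            otherwise
    The shift U_cut = eps makes U vanish continuously at the cutoff.
    """
    r_cut = 2.0 ** (1.0 / 6.0) * sigma
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6) + eps

def lorentz_berthelot(eps_i, sig_i, eps_j, sig_j):
    """Mixing rules: geometric mean for eps, arithmetic mean for sigma."""
    return math.sqrt(eps_i * eps_j), 0.5 * (sig_i + sig_j)

# Parameters from the paper: IL (sigma = 5 A, eps = 1 kJ/mol),
# carbon wall (sigma_c = 3.37 A, eps_c = 1 kJ/mol).
eps_iw, sig_iw = lorentz_berthelot(1.0, 5.0, 1.0, 3.37)
print(wca(4.0, eps_iw, sig_iw))   # IL-wall energy in kJ/mol at r = 4 A
```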
This wall force is also purely repulsive, and since the walls are defined as stationary, it acts only on the ILs. Figure 10 is an illustrative slice of the linear slit-pore which demonstrates the different regions of the pore in terms of the interaction of an IL particle with the pore walls.

Figure 9. The complete geometry description of the systems. The dashed lines show the extra space of the convex slit-pore compared to the linear one. The geometry is 30 Å long along the z-axis, with periodic boundary condition. The reservoir is closed in the y-direction. The electrodes are actually the line interfaces between the light and dark areas. The dark gray areas are forbidden zones for the particles, and are shaded to show the electrodes better. This figure is plotted with Inkscape Version 0.91 (https://inkscape.org/).

Figure 10. The LJ force on the IL particles due to the walls of the linear slit-pore, with respect to the y-coordinate. The IL particles have 1.475 Å of freedom between the walls (white area) in which they have no LJ interaction with the walls. The pink area is the start of the LJ cutoff. The gray area indicates that the distance to the wall is less than the average of σ_IL and σ_c. The thick black line shows the position of the metallic boundaries. The complete circle shows the size of the IL particles compared to the pore width. (Plotted with Gnuplot 37.)

Electrostatic algorithms. Electrodes are conductors, so they can be described by surfaces with constant potential. There have been some preliminary efforts to model them as constant-charge surfaces, but the result was not quite realistic 45,46. Some constant-potential methods have been invented for different types of geometries. In addition, some popular methods have been improved for simulating non-flat conductors. As examples of these methods, one can refer to ICC* 47, as well as an unnamed method introduced by Siepmann et al. 48,49. In this work, the constant-potential surfaces are simulated by utilizing the Poisson to Laplace Transformation (PLT) algorithm, which was developed by the authors 43,50. Currently, PLT is only implemented in the CAVIAR software package 43. Instead of defining a discrete surface charge density, the PLT method solves the continuous Laplace equation using the superposition principle. This method uses a finite-element (FE) mesh of the pore geometries 51. The outer parts of the mesh are tagged as the surface meshes of the different electrodes, through which the electric potential difference is applied to the system. To reduce the destructive effect of the finite size, we consider a periodic boundary condition in the z direction. As a well-known method of evaluating the long-range electrostatic potential in systems with periodic boundary conditions, one can refer to the Ewald-sum based methods [52][53][54]. In this work, a 1D Ewald algorithm 55 is chosen for the electrostatic summations and used within the PLT algorithm.
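The core idea behind a constant-potential treatment — solve Laplace's equation with the electrode voltages as Dirichlet boundary conditions, and exploit linearity so the problem need only be solved once per electrode and superposed — can be illustrated with a toy finite-difference relaxation. The sketch below is not the PLT/finite-element implementation in CAVIAR; it is a minimal 2D stand-in for the boundary-value problem being solved.

```python
import numpy as np

def solve_laplace(phi, fixed_mask, n_iter=5000):
    """Jacobi relaxation for Laplace's equation on a 2D grid.

    phi        : initial potential grid; electrode nodes already hold their
                 prescribed voltages
    fixed_mask : True where the potential is fixed (electrode surfaces)
    """
    for _ in range(n_iter):
        # Average of the four neighbours; np.roll wraps, so the roll
        # direction along rows is periodic (cf. the periodic z direction).
        new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(fixed_mask, phi, new)   # re-pin electrode nodes
    return phi

# Toy system: two parallel electrode plates at +1 V and -1 V on a 64x64 grid.
phi = np.zeros((64, 64))
fixed = np.zeros_like(phi, dtype=bool)
phi[:, 0], fixed[:, 0] = +1.0, True
phi[:, -1], fixed[:, -1] = -1.0, True
phi = solve_laplace(phi, fixed)

# By linearity, the solution for any pair of voltages (U1, U2) is a
# superposition of the unit-voltage solutions of each electrode.
```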
Simulation parameters and tools. There are two slit-pores, each of which has been simulated at different, low and high, IL molarities. In addition, the slit-pore space has been set to be initially filled or empty at the starting point of the simulation (see Figs. 4 and 5). These cases make in total 8 different systems, which are summarized in Table 1. The systems have been simulated at 11 different, step-like, voltages: 0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, and 2.5 volts. There is complete symmetry over the charges, so unlike some other works 10,21, the results would not change if the applied potential were reversed. The dielectric constant ε_r is set to 4.0 23. Every simulation runs for about 20 ns to reach equilibrium; then, about 20 ns of extra runs are done for data sampling. Induced charges are sampled every 0.3 ps. The MD process uses a Langevin thermostat at a temperature of 400 K with a friction coefficient of ξ = 10 ps⁻¹. The systems are simulated in LJ reduced units 56. Length is scaled with the ion diameter, x̂ = σ = 5 Å; the mass unit is the mass of the ions, m̂ = 144 g/mol; the energy unit is ε̂ = 1 kJ/mol; and the unit of charge is one electron. Using the above units to make the governing equations dimensionless leads to a time unit equal to t̂ = x̂ (m̂/ε̂)^(1/2) = 6 ps. A time-step of 0.001 in LJ units (6 fs in SI) was used for velocity-Verlet integration. In addition, temperature scales as T̂ = ε̂/k_B = 120.267 K, and voltage as V̂ = ε̂/q = 0.01036 V (these conversions are checked numerically in the sketch below).

Table 1. The systems' initial conditions (I.C.), total molarity, and qualitative molarity. The system No. is used to refer to the systems in the results section. The slit-pore geometry has two types, linear and convex (see Fig. 9). Pore I.C. is the state of the pore, whether it is filled with ILs or empty. Mol. means molarity in the table.

The pore geometries and their meshes are created with the SALOME 42 software. The CAVIAR 43 software package is used for the MD simulations and for post-processing the results. The finite-element calculations in CAVIAR are done using the deal.II library 51. The figures containing IL particles with the geometry are visualized using Ovito 36 and the line plots are made with Gnuplot 37.
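The reduced-unit conversions quoted above (time unit ≈ 6 ps, temperature unit ≈ 120.267 K, voltage unit ≈ 0.01036 V) follow directly from the chosen base units and can be verified numerically:

```python
AVOGADRO = 6.02214076e23
KB = 1.380649e-23            # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19   # elementary charge, C

# Base units chosen in the paper.
sigma = 5.0e-10              # length unit: ion diameter, m
mass = 0.144 / AVOGADRO      # mass unit: 144 g/mol per particle, kg
eps = 1.0e3 / AVOGADRO       # energy unit: 1 kJ/mol per particle, J

tau = sigma * (mass / eps) ** 0.5   # time unit, ~6e-12 s = 6 ps
T_hat = eps / KB                    # temperature unit, ~120.27 K
V_hat = eps / E_CHARGE              # voltage unit, ~0.01036 V

print(f"t unit {tau*1e12:.2f} ps, T unit {T_hat:.3f} K, V unit {V_hat:.5f} V")
```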
2023-02-20T14:42:24.981Z
2020-04-16T00:00:00.000
{ "year": 2020, "sha1": "fec582df57b72a1f7d1972e9294197a8fabbd3ff", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-62943-7.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "fec582df57b72a1f7d1972e9294197a8fabbd3ff", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
15685901
pes2o/s2orc
v3-fos-license
A software product line approach to enhance a meta-scheduler middleware

Software projects in general tend toward more software reuse and componentization in order to reduce the time, cost and resources needed for new products. The need for techniques and tools to organize projects of higher quality in less time is one of the greatest challenges of Software Engineering. The Software Product Line is proposed to organize and systematically assist the serial development of new products in the same domain. In this context, this paper proposes to apply the Software Product Line approach to Distributed Computing Environments. In projects that involve Distributed Environments, each version of the same product can repeatedly generate the same artifacts as the product's characteristics evolve; however, there is a principal architecture with variations of components. The goal of the proposed approach is to analyze the current process and propose a new approach to develop new projects reusing the whole architecture, components and documents, starting with a solid base and creating new products focused on new functionalities. We expect that the application of this approach will support the development of projects in Distributed Computing Environments.

Introduction

A Software Product Line represents a set of systems sharing common functionalities and characteristics that satisfies the needs of a particular market or specific segment [1]. Such a system set can also be named a Product Family [2]. The members of the product family are specific products systematically developed by the Software Product Line, built from what are classified as core assets. The core assets are represented by a variable feature set which reflects late design decisions in the project [1]. The choice and configuration of assets composes a specific product [3].

The Computational Grid approach in general presents a complex and distributed structure involving hardware and software that allows low-cost access to high performance computing resources [4]. In the context of this approach, one finds several elements, such as graphical interfaces to access the environment, middleware, meta-schedulers and so on, which consequently involve several knowledge areas that can derive other sets of elements.

In this sense, it becomes a challenge to systematically organize such elements and architecture through the Software Product Line, aiming at planned systematic reuse and fast customization for the development of products that fulfil other needs and have the same structure.

This paper is organized as follows: Section II presents the concept of the Software Product Line. Section III describes Distributed Computing Environments. Section IV illustrates and focuses on the proposal and related works. Section V discusses the facts and problems found so far. Section VI provides comments and directions for future work.

Software Product Line

The Software Product Line concerns systematic planning and strategically productive reuse, by exploiting the similar features of a product family. Its main goal is to achieve significant reductions for organizations in terms of development and maintenance cost, to reduce time-to-market, to improve quality, productivity and customer satisfaction, and to anticipate the problems encountered [1].
The main difference between conventional Software Engineering and the Software Product Line is the presence of variation in some or all of the software requirements, and the focus. In conventional systems the focus is a single goal, in other words, producing a specific product. While conventional systems development works ad hoc, in other words, contract oriented, Product Line development has a strategic vision of the niche market [5]. The implementation of the Software Product Line approach requires effort on the major artifacts in the development cycle. Table 1 shows the main characteristics of the essential artifacts for a Software Product Line, which involve training and experience with the artifacts and procedures associated with the Product Line.

There are essential activities for a Software Product Line. Such activities are Domain Engineering, which aims to develop the core assets, and Application Engineering, which instantiates the core assets to generate different products, also known as Product Development [1].

Core assets are a set of assets ready to be reused in new product development. The core assets can be software components, project patterns, documents used in development, architecture, schedules and other artifacts that serve as building blocks in the Product Line. Core assets are present in every product line. The architecture is one of these core assets and carries with it the possibilities of line variability. The architecture core asset, in particular, is seen as a key point of a product line, and if it is designed badly it can derail the entire project [2].

The main objective of Product Development is the generation of products based on the core assets of a Product Line. It corresponds to the instantiation of applications, and when new requirements are found that had not been specified, a feedback loop starts between this activity and the core-asset activity (Domain Engineering), in evolutionary cycles.

A very important concept intrinsic to a Software Product Line is variability. To form a software family through a Software Product Line it is necessary to identify the possible changes that may occur in the artifacts produced. The products generated during the product development phase can be differentiated in terms of behavior, quality attributes, physical configurations, scaling factors, among others [2]. Variations are the tangible differences between the products of a Product Line in any artifact, such as architecture, components, interfaces between components and component connections. The variations can be identified in any development phase of the Product Line [7].

To represent the possible variations identified in a particular domain, a variability approach becomes necessary; until now, there has been no consensus about how to identify and represent variability in Product Lines because it is a recent issue [8]. One available approach based on UML was developed and named SMarty (Stereotype-based Management of Variability) [8]. The SMarty approach was developed based on the activities and concepts of variability. SMarty was created to manage variability, consisting of a UML profile called SMartyProfile and a process named SMartyProcess. The main goal of the SMarty approach is to allow the variability of a Product Line to be managed effectively, supporting UML models.
Distributed Computing

Grid computing is characterized by making a variety of distributed resources, including services, devices and applications, available to a wide range of users [9]. Various organizations, both real and virtual (VO), can make different types of resources available under dynamically changing availability constraints and with varying policies for access and use of these resources [10].

As a result of their nature, this variety of services and resources can be incorporated for achieving computational tasks. Specifically, users can access resources, applications and services, submit jobs for execution either via queues or by advance reservation, create combined processes in workflows, and verify the status of jobs or systems. Also, there has been a movement towards integrating grid computing environments with mobile computing [11] [12], creating an interface for users to access the resources and services of a grid from anywhere, at any time.

On the other hand, cluster environments are common configurations used in many organizations to reach high performance computing (HPC) for specific local applications. Multi-clusters, if well orchestrated as a grid environment, can represent a differential computational power for the global execution of several classes of applications, including parallel jobs. A grid environment composed of multi-cluster configurations can be considered within a private organization or across multi-domain organizations. In other words, this metacomputing [13] environment can be employed as an HPC facility inside a specific organization, or among different organizations, creating in both cases the concept of a virtual organization (VO).

One important challenge is how to reach an efficient re-utilization of this environment and coordinate all resources and services.

Proposal and Related Works

With a focus on the Distributed Computing Environment, the present study aims to systematically organize the elements and architecture, aiming at systematic reuse planning and rapid customization for new product development to fulfil other requirements with the same structure. To achieve this objective, this work applies the SMarty process specifically in the project being developed by the LaPesD laboratory at the Federal University of Santa Catarina (UFSC), contributing to the development of techniques to systematize a process for the reuse of the artifacts, architecture and components of Software Product Lines in Distributed Computing Environments.

Among the existing Software Product Line approaches and research, the SMarty approach was chosen due to its ability to model artifacts and its integration with UML.

Currently there are few works in the literature in the area of Distributed Environments. One of the few existing, but of excellent quality, [14] [15] proposes an approach for building a Software Product Line for Grid-Oriented Middleware. That work developed a Software Product Line Architecture to facilitate the development of Grid-Oriented Middleware Systems. In order to validate the proposed PLA, it is instantiated in the construction of a middleware for grid computing.
Another approach, named GISPL (Grid Infrastructure Software Product Line), adopts SPL concepts in the design, optimization and implementation of Grid-Oriented Middleware. Initially, the requirements are defined through an ontology and a specific language. Next, a service-oriented architecture is defined that incorporates the software and hardware artifacts needed for the functioning of the VO (Virtual Organization). The architecture is modeled in the Systems Modeling Language (SysML), which allows specifying the infrastructure using specific models derived from the ontology defined in the previous step. Finally, the infrastructure models are prototyped and optimized [16] [15].

Unlike the presented approaches, the purpose of this study is to apply the approach to the Distributed Computing Environment as a whole, covering the entire domain and instantiating complete distributed applications at the product development stage.

Discussion

The initial motivation to use a Software Product Line approach in the distributed environment, which is often a complex architecture, was based on problems encountered, such as: architectural elements are deployed and then removed several times, because the client does not know exactly what they want from a project. Without using a Software Product Line, such problems cause massive rework.

In the proposed architecture, each element presented (GUI interface, meta-scheduler, schedulers) can be componentized as part of a common architecture in Domain Engineering (DE); then every new product would be analyzed as an instance of this architecture in Application Engineering (AE), with specialization of the non-common parts. For example, ontologies can be applied to describe the existing settings in the available network resources, specializing only one of the architecture components. New elements or components can also be added to the architecture, such as an upper layer replacing the GUI with a mobile application with its own ontology (where this component would become part of the Domain and later be reused in other products).

All these elements presented in the architecture, in any layer, can be carefully planned and developed with reuse and variation points (mandatory, optional and alternative) in mind, aiming at creating a Software Product Line in the Distributed Computing Environment; a sketch of such a configuration check is given below.

We are also studying the possibility of using OSGi [17] on the GUI layer and on the mobile phone. This technology increases the level of componentization, lowering the granularity of each module.
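To make the variation-point idea concrete, the following Python sketch shows one way a product configuration could be validated against mandatory, optional and alternative variation points. It is purely illustrative: the feature names (meta_scheduler, resource_ontology, front_end, etc.) are invented for this example and are not taken from the SMarty profile or the LaPesD project artifacts.

```python
from dataclasses import dataclass, field

@dataclass
class VariationPoint:
    name: str
    kind: str                      # "mandatory" | "optional" | "alternative"
    variants: list = field(default_factory=list)   # for alternatives

# Hypothetical feature model for the distributed-environment architecture.
FEATURE_MODEL = [
    VariationPoint("meta_scheduler", "mandatory"),
    VariationPoint("local_scheduler", "mandatory"),
    VariationPoint("resource_ontology", "optional"),
    VariationPoint("front_end", "alternative", ["desktop_gui", "mobile_app"]),
]

def validate(product: dict) -> list:
    """Return the configuration errors for a candidate product."""
    errors = []
    for vp in FEATURE_MODEL:
        chosen = product.get(vp.name)
        if vp.kind == "mandatory" and not chosen:
            errors.append(f"missing mandatory feature: {vp.name}")
        elif vp.kind == "alternative" and chosen not in vp.variants:
            errors.append(f"{vp.name}: pick exactly one of {vp.variants}")
    return errors

# Application Engineering step: instantiate a mobile-oriented product.
product = {"meta_scheduler": True, "local_scheduler": True,
           "front_end": "mobile_app"}
print(validate(product) or "valid product configuration")
```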
We are now applying the approach in our projects and documenting the results. With the application of the approach, we are able to solve practical problems related to users' indecision about which elements they want in their architecture, adding and removing elements quickly and without major problems.

Figure 1. Domain Engineering in the Distributed Computing Environment.

Table 1. Implementation costs of a Software Product Line [6].
Effects of terlipressin as early treatment for protection of brain in a model of haemorrhagic shock

Introduction: We investigated whether treatment with terlipressin during recovery from hypotension due to haemorrhagic shock (HS) is effective in restoring cerebral perfusion pressure (CPP) and brain tissue markers of water balance, oxidative stress and apoptosis.

Methods: In this randomised controlled study, animals undergoing HS (target mean arterial pressure (MAP) 40 mmHg for 30 minutes) were randomised to receive lactated Ringer's solution (LR group; n = 14; volume equal to three times the volume bled), terlipressin (TERLI group; n = 14; 2-mg bolus), no treatment (HAEMO group; n = 12) or sham treatment (n = 6). CPP, systemic haemodynamics (thermodilution technique) and blood gas analyses were registered at baseline, shock and 5, 30, 60 (T60), 90 and 120 minutes after treatment (T120). After the animals were killed, brain tissue samples were obtained to measure markers of water balance (aquaporin-4 (AQP4) and the Na+-K+-2Cl− co-transporter (NKCC1)), oxidative stress (thiobarbituric acid reactive substances (TBARS) and manganese superoxide dismutase (MnSOD)) and apoptotic damage (Bcl-x and Bax).

Results: Despite the HS-induced decrease in cardiac output (CO) and hyperlactataemia, resuscitation with terlipressin recovered MAP and resulted in restoration of CPP and in cerebral protection, expressed by normalisation of AQP4, NKCC1, TBARS and MnSOD expression and of the Bcl-x/Bax ratio at T60 and T120 compared with sham animals. In the LR group, CO and blood lactate levels were recovered, but CPP and MAP were significantly decreased, and TBARS levels, AQP4, NKCC1 and MnSOD expression and the Bcl-x/Bax ratio were significantly increased at T60 and T120 compared with the sham group.

Conclusions: During recovery from HS-induced hypotension, terlipressin was effective in normalising CPP and cerebral markers of water balance, oxidative damage and apoptosis. The role of this pressor agent in brain perfusion during HS requires further investigation.

Introduction

Haemorrhagic shock is the leading cause of early death in trauma patients [1]. During the pre-hospital period, haemorrhage contributes to death in 33% to 56% of cases, and it is the most common cause of death among those found dead upon the arrival of emergency medical services personnel [2]. Neurological signs such as an altered mental state, which typically includes obtundation, disorientation, confusion, agitation and irritability, cannot be neglected, because cerebral hypoperfusion is a consequence of bleeding-associated hypotension [3][4][5]. In addition, animal studies of haemorrhagic shock have shown that cerebral ischaemia with cell damage begins at the onset of the haemodynamic impairment [6][7][8].

Under hypotensive conditions such as those in haemorrhagic shock, cerebral perfusion pressure (CPP) falls below the lower limit of autoregulation [8], which is detrimental to brain tissue oxygenation [5,7,9]. Cerebral ischaemia has been associated with dysregulation of aquaporin-4 (AQP4) and the Na+-K+-2Cl− co-transporter (NKCC1) in astrocytes [9,10] and of Bcl-2-related apoptotic proteins in neurons [11]. Oxidative stress is implicated in the neuronal apoptosis that occurs in haemorrhagic shock.
It has been shown to accompany increased lipid peroxidation within the brain, as reflected by changes in the levels of thiobarbituric acid reactive substances (TBARS) and in the expression of antioxidant enzymes such as manganese superoxide dismutase (MnSOD) [12].

Standard resuscitation practice for haemorrhagic shock mandates the use of high-volume crystalloids. However, such therapy can result in adverse effects such as interstitial oedema in the gut and cellular oedema in the heart [13], increases in the inflammatory cytokine profile [14] and increased intracranial pressure (ICP) [8]. Crystalloids may also fail to recover CPP and oxygenation within the brain [8,15].

Terlipressin is a synthetic, long-acting (4 to 6 hours) analogue of vasopressin. The structure of terlipressin contains a peptide that represents the natural hormone lysine vasopressin, the innate vasopressin analogue in pigs. Its structure is very similar to human arginine vasopressin, but the synthetic drug is characterised by a more specific V1 agonistic effect (V1:V2 ratio = 2.2:1) compared with arginine vasopressin (V1:V2 ratio = 1:1). Terlipressin has been studied as a vasoactive drug in the management of catecholamine-resistant arterial hypotension in septic shock [16], liver failure [17] and acute gastrointestinal bleeding [18]. The effects of terlipressin consist of vasoconstrictive activity on vascular smooth muscle cells and a pronounced vasoconstriction within the splanchnic circulation, which has been shown to redistribute blood flow to recover perfusion pressure in organs such as the liver, kidney and brain [19,20] and to increase survival rates in animal studies of haemorrhagic shock [14,21]. It has been reported that terlipressin can improve CPP in patients with acute liver failure [17], septic shock [22] and traumatic brain injury with catecholamine-resistant shock [23]. However, the effects of terlipressin on cerebral haemodynamics during early treatment for haemorrhagic shock remain unclear, and no data are available comparing the effects of resuscitation with terlipressin with those of standard fluid.

We hypothesised that early recovery of haemorrhagic shock-induced hypotension with terlipressin could restore CPP and improve oxygenation of the brain. Therefore, the purpose of the present study was to investigate the effects of early administration of terlipressin on CPP and brain tissue oxygen pressure (PbtO2), as well as on the regulation of tissue markers of water balance (that is, AQP4 and NKCC1), oxidative stress (that is, TBARS and MnSOD) and apoptosis (that is, Bax and Bcl-x) within the brain in a porcine model of haemorrhagic shock.

Ethical approval

This prospective randomised experimental study was approved by the Comissão de Ética em Pesquisa of the Hospital das Clínicas at Faculdade de Medicina of Universidade de São Paulo (067/11 and 280/13).

Surgical preparation and monitoring

Female Large White pigs (n = 46) weighing 20 to 30 kg were fasted for 12 hours with free access to water before the experiments. The animals were premedicated with ketamine (5 mg/kg intramuscular) and midazolam (0.25 mg/kg intramuscular), and anaesthesia was induced with propofol (7 mg/kg intravenous). After endotracheal intubation, anaesthesia was maintained with isoflurane vaporised in 40% oxygen, and the pigs were ventilated (Fabius GS Premium; Dräger, Lübeck, Germany) with a tidal volume of 8 ml/kg and a positive end-expiratory pressure of 5 cmH2O.
The respiratory rate was adjusted to maintain normocapnia (partial pressure of carbon dioxide in arterial blood (PaCO2) between 35 and 45 mmHg). Lactated Ringer's solution (4 ml/kg/hr) and pancuronium (0.3 mg/kg/hr) were administered continuously throughout the experiments. Body temperature was maintained at 38°C using a heated mat (Medi-Therm II; Gaymar Industries, Orchard Park, NY, USA).

Both femoral arteries were catheterised, one for measurement of mean arterial pressure (MAP) and the other for withdrawal of blood to induce haemorrhagic shock. Another catheter was inserted into the right femoral vein for later administration of treatments. A 7.5-French pulmonary artery catheter (Swan-Ganz; Edwards Lifesciences, Irvine, CA, USA) was surgically introduced into the right internal jugular vein and advanced under continuous pressure recording into the wedge position. Cardiac output was determined by bolus pulmonary artery thermodilution (Vigilance monitor; Edwards Lifesciences). All catheters and pressure transducers were filled with isotonic saline containing heparin (5 U/ml) and connected to a multiparametric data collection system (IntelliVue MP50 monitor; Philips Healthcare, Best, the Netherlands). The heart rate (HR), right atrial pressure (RAP), mean pulmonary artery pressure (MPAP), pulmonary artery occlusion pressure (PAOP) and central body temperature were also continuously monitored with the IntelliVue MP50 monitor.

The cardiac index (CI) was calculated by normalising cardiac output to body surface area in square metres, estimated with a conversion factor appropriate for pigs (BSA = k × BW^(2/3), where k = 0.09) [24]. The systemic vascular resistance index (SVRI), pulmonary vascular resistance index (PVRI), left ventricular stroke work index (LVSWI), right ventricular stroke work index (RVSWI), stroke volume index (SVI), systemic oxygen delivery index (DO2I), systemic oxygen consumption index (VO2I) and systemic oxygen extraction ratio (O2ER) were calculated using standard formulae [25]. Arterial and mixed venous blood were sampled at each time point for blood gas analysis, including measurement of haemoglobin (Hb), lactate, sodium (Na+) and potassium (K+) levels (ABL 555 blood gas meter; Radiometer, Copenhagen, Denmark).

Two burr holes of 5-mm diameter each were placed over the right and left coronal sutures (12-mm paramedian). In the right hemisphere, an intraparenchymal probe was inserted into the cerebral cortex (15-mm depth) and secured with a single-lumen bolt for measurement of PbtO2 (Neurovent-PTO; RAUMEDIC, Helmbrechts, Germany). On the left side, a fibre-optic probe (Codman ICP EXPRESS Monitoring System; Codman Neuro, Raynham, MA, USA) was inserted epidurally for continuous monitoring of ICP after sealing the cranial window with bone wax. CPP was calculated using the standard formula CPP = MAP − ICP [6] (an illustrative sketch of these derived-quantity calculations is given at the end of the Methods).

Experimental design

Following surgical preparation, animals were allowed to stabilise for 30 minutes before being randomly assigned to one of the following four groups: (1) a sham group (n = 6) consisting of animals that were not subjected to haemorrhagic shock; (2) a HAEMO group (n = 12) that was subjected to haemorrhagic shock and received no treatment; (3) an LR group (n = 14) that was subjected to haemorrhagic shock and treated with LR (volume equal to three times the volume bled); and (4) a TERLI group (n = 14) that was subjected to haemorrhagic shock and treated with terlipressin (2-mg bolus of GLYPRESSIN; Ferring Pharmaceuticals, São Paulo, Brazil).
Randomisation was performed in advance, and the blinded allocation of the pigs among groups was placed in numbered manila envelopes, which were opened consecutively immediately before baseline measurements were registered. Haemorrhagic shock was induced by pressure-controlled bleeding targeting a MAP of 40 mmHg, which was maintained for 30 minutes before treatment. Data were recorded prior to blood removal (baseline), at 30 minutes after achieving the target MAP (shock) and at 5 minutes (T5), 30 minutes (T30) and 60 minutes (T60) after treatment. In a subset of these animals (sham group: n = 3; HAEMO group: n = 9; LR group: n = 9; TERLI group: n = 9), the study was continued for 1 additional hour, allowing data to be registered at 90 minutes (T90) and 120 minutes (T120) post-treatment. At the end of the study, the animals were killed with an overdose of isoflurane and potassium chloride. The intraparenchymal probe was then macroscopically inspected for insertion depth, and cortical samples of the brain were collected, immediately frozen in liquid nitrogen and stored at −80°C for later analysis.

Preparation of cerebral samples for Western blotting assays and thiobarbituric acid reactive substance measurement

The samples were homogenised in an ice-cold solution (200 mM mannitol, 80 mM 4-(2-hydroxyethyl)piperazine-1-ethanesulfonic acid, 41 mM KOH, pH 7.5) containing a protease inhibitor cocktail (Sigma-Aldrich, St Louis, MO, USA) using a POLYTRON PT 10-35 homogenizer (KINEMATICA, Lucerne, Switzerland). The homogenates were centrifuged at 4,000 × g for 30 minutes at 4°C to remove cell debris. Protein concentrations were determined by the Bradford method using a Bio-Rad protein assay kit (Bio-Rad Laboratories, Hercules, CA, USA).

Thiobarbituric acid assay

To assess the levels of TBARS, a 0.2-ml cortical cerebral homogenate sample was diluted in 0.8 ml of distilled water, followed by addition of 1 ml of 17.5% trichloroacetic acid. Then 1 ml of 0.6% thiobarbituric acid, pH 2, was added, and the sample was placed in a boiling-water bath for 15 minutes. After the sample was allowed to cool, 1 ml of 70% trichloroacetic acid was added and the mixture was incubated for 20 minutes. The sample was then centrifuged at 2,000 × g for 15 minutes. The absorbance was recorded at 534 nm using a spectrophotometer, and concentrations were calculated using a molar extinction coefficient of 1.56 × 10⁵ M⁻¹cm⁻¹. The TBARS levels were then normalised to the total protein concentration, and the results are expressed as nanomoles per milligram of protein.

Statistical analysis

Physiological and neuromonitoring parameters and arterial and mixed venous blood gas data were analysed (GraphPad Prism version 5.03 for Windows; GraphPad Software, La Jolla, CA, USA) across groups and time points using two-way analysis of variance (ANOVA), with Tukey's tests for post hoc analysis. Survival was analysed according to the Kaplan-Meier method and compared using Fisher's exact test. The last-observation-carried-forward imputation method was applied throughout the study for the animals that died. Differences in the expression levels of AQP4, NKCC1, Bcl-x, Bax and MnSOD and in TBARS were analysed by one-way ANOVA followed by the Student-Newman-Keuls test, and the results are presented as mean ± standard error. For all analyses, P < 0.05 was considered statistically significant.
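To make the derived quantities above concrete, the following is a minimal Python sketch of the calculations referenced in the Methods: the pig-specific body surface area estimate used for the cardiac index, the CPP formula, and the Beer-Lambert step of the TBARS assay. The function names are ours, all numerical inputs are illustrative rather than study data, and the assay's dilution steps are omitted for clarity:

```python
# Worked sketch (illustrative values only) of the derived quantities
# defined in the Methods.

def cardiac_index(co_l_min, body_weight_kg, k=0.09):
    """Cardiac index (l/min/m2): cardiac output normalised to body
    surface area, with BSA estimated for pigs as k * BW^(2/3)."""
    bsa_m2 = k * body_weight_kg ** (2.0 / 3.0)
    return co_l_min / bsa_m2

def cerebral_perfusion_pressure(map_mmhg, icp_mmhg):
    """CPP = MAP - ICP (mmHg)."""
    return map_mmhg - icp_mmhg

def tbars_nmol_per_mg(absorbance, protein_mg_per_ml, path_cm=1.0,
                      epsilon=1.56e5):
    """TBARS from absorbance at 534 nm via Beer-Lambert:
    concentration (mol/l) = A / (epsilon * path), with epsilon in
    1/(M*cm); the result is normalised to protein for a 1-ml sample.
    Dilution corrections from the assay protocol are omitted."""
    conc_mol_per_l = absorbance / (epsilon * path_cm)
    nmol_per_ml = conc_mol_per_l * 1e9 / 1000.0  # nmol/l -> nmol/ml
    return nmol_per_ml / protein_mg_per_ml

# Illustrative numbers, not study data:
print(cardiac_index(3.5, 25.0))             # ~4.55 l/min/m2 for a 25 kg pig
print(cerebral_perfusion_pressure(90, 12))  # 78 mmHg
print(tbars_nmol_per_mg(0.050, 1.0))        # ~0.32 nmol/mg protein
```

The extinction-coefficient units (M⁻¹cm⁻¹) make the Beer-Lambert division dimensionally consistent, yielding a molar concentration before normalisation to protein.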
Results

The blood volume withdrawn from each group was similar, averaging 60% of the estimated blood volume (HAEMO group: 1,083 ± 124 ml; LR group: 1,162 ± 203 ml; TERLI group: 1,011 ± 215 ml). In the HAEMO group, the number of deaths following haemorrhage was significantly higher at 120 minutes (six deaths, at 41 ± 15 minutes after shock; P = 0.0007). At 120 minutes after shock, one animal from the TERLI group had died (at 80 minutes after shock) (Figure 1).

Figure 1. Percent survival at 120 minutes after haemorrhagic shock (Kaplan-Meier curves). Pigs treated with lactated Ringer's solution (LR) or terlipressin (TERLI) after haemorrhagic shock were compared with non-treated animals (HAEMO) and a sham group that was not subjected to bleeding. *P = 0.0007 indicates a significant difference compared with the sham group.

Haemodynamics

HR was significantly increased in the HAEMO, LR and TERLI groups from the time of shock to T120 compared with the sham group (P < 0.001). From T30 to T120, HR was significantly lower in the LR group than in the HAEMO group (P < 0.05) (Figure 2).

MAP was significantly decreased at shock in all study groups compared with the sham group (P < 0.001). In the HAEMO group, MAP was significantly decreased from T5 to T120 compared with the other groups (P < 0.001). Compared with sham animals, MAP was significantly decreased at T60, T90 and T120 in the LR group (P < 0.001) and at T5 in the TERLI group (P < 0.001). No significant differences in this variable were observed from T30 to T120 between the LR and TERLI groups (Figure 2).

The CI was significantly decreased from T5 to T120 in the HAEMO and TERLI groups compared with the sham group (P < 0.01). At the corresponding time points, the CI was significantly higher in the TERLI group than in the HAEMO group (P < 0.05). In the LR group, CI was higher than in the HAEMO and TERLI groups (P < 0.05) (Figure 2).

RAP was significantly decreased (P < 0.001) and SVRI and PVRI were significantly increased (P < 0.01) from shock to T120 in the HAEMO and TERLI groups compared with the sham group, whereas no significant differences were observed in these variables from T5 to T120 between the LR and sham groups. MPAP was significantly decreased in all groups from T5 to T120 compared with the sham group (P < 0.05) (Figure 2 and Table 1). In all study groups, PAOP was significantly decreased at shock compared with the sham group. In the LR group, PAOP was significantly increased at T5 and T120 compared with the HAEMO group at the corresponding time points (Table 1). LVSWI, RVSWI and SVI were significantly decreased in the LR and TERLI groups compared with the sham group (P < 0.01) and significantly increased in the HAEMO group compared with the sham group (P < 0.01) (Table 1).

Blood gases, oxygenation and electrolytes

pH, HCO3−, BE, SvO2 and DO2I were significantly decreased, and O2ER, arterial lactate and K+ were significantly increased, from shock to T120 in the HAEMO, LR and TERLI groups compared with the sham group (P < 0.05) (Table 2). In the LR and TERLI groups, DO2I was significantly increased from T5 to T120 and from T30 to T120, respectively, compared with the HAEMO group (P < 0.05). In the LR and TERLI groups, VO2I was significantly increased from T5 to T120 compared with the sham group (P < 0.05) and from T30 to T120 compared with the HAEMO group (P < 0.05). The levels of Hb and Na+ were significantly lower from T5 to T120 in the LR group compared with the sham group (P < 0.05).
The ratio of arterial oxygen partial pressure to fractional inspired oxygen and the level of PaCO2 did not change significantly in any group during the study (Figure 2 and Table 2).

Table 1 (excerpt). Effects of haemorrhagic shock on haemodynamics at baseline, shock and at 5, 30, 60, 90 and 120 minutes post-treatment:

RAP (mmHg)   Baseline   Shock    T5       T30      T60      T90      T120
Sham         8 ± 1      9 ± 2    8 ± 1    8 ± 1    9 ± 1    7 ± 1    6 ± 0
HAEMO        7 ± 2      2 ± 1*   2 ± 1*   2 ± 1*   2 ± 1*   2 ± 1*   2 ± 1*

Neuromonitoring

CPP, ICP and PbtO2 were significantly decreased from shock to T60 in the HAEMO group compared with the sham animals (P < 0.05). Both treatment with LR and treatment with TERLI were followed by a significant increase in CPP compared with the HAEMO group (P < 0.01), with CPP recovering to values not significantly different from those of the sham group. The LR group had the largest increase in ICP, observed from T5 to T120 (P < 0.05 versus sham; P < 0.001 versus the HAEMO group; P < 0.001 versus the TERLI group). The TERLI group showed no significant differences in ICP from T30 to T120 compared with the sham group. Treatment with LR and with TERLI recovered PbtO2 to values similar to those in the sham group (Figure 2).

Aquaporin-4 and Na+-K+-2Cl− co-transporter

At 60 minutes after shock, semiquantitative immunoblot analysis revealed a significant increase in the expression of AQP4 in the HAEMO group (179 ± 12% of sham, P = 0.0086), which was not reversed by treatment with LR (196 ± 8% of sham, P = 0.0047) but was fully restored by TERLI (125 ± 6% of sham). In the TERLI group, the expression of AQP4 at 60 minutes was significantly lower than in the HAEMO and LR groups (P = 0.0071). At 120 minutes, a significant upregulation of AQP4 was observed only in the LR group (217 ± 37% of sham, P = 0.0084), which was significantly higher than in the TERLI group (117 ± 19% of sham, P = 0.0169) (Figure 3). No significant increase in the expression of NKCC1 was observed in any group at 60 minutes after shock, but NKCC1 expression was significantly increased at 120 minutes in the HAEMO group (237 ± 47% of sham, P = 0.0234), an increase that was fully restored by treatment with terlipressin (100 ± 1% of sham, P = 0.0270) (Figure 3).

Manganese superoxide dismutase and thiobarbituric acid reactive substances

The levels of TBARS were not significantly different from the sham group in any study group at 60 minutes after shock. However, at 120 minutes after shock, these levels were clearly higher in the HAEMO group (0.38 ± 0.05 nmol/mg of protein; P = 0.0013) and the LR group (0.31 ± 0.10 nmol/mg of protein; P = 0.0167), but not in the TERLI group (0.14 ± 0.01 nmol/mg of protein), compared with the sham group (0.03 ± 0.01 nmol/mg of protein). At 120 minutes after shock, the levels of TBARS in the TERLI group were significantly lower than in the HAEMO group (P < 0.0001) and the LR group (P = 0.0394) (Figure 4). Animals treated with LR had the highest expression of MnSOD at 60 minutes after shock (245 ± 11% of sham, P < 0.0001), whereas no significant changes in MnSOD expression were observed in the other groups at the corresponding time point (HAEMO group: 157 ± 10% of sham; TERLI group: 125 ± 5% of sham). At 120 minutes after shock, the expression of MnSOD was significantly increased in the HAEMO group (237 ± 14% of sham, P = 0.0081), which was not reversed by LR (244 ± 9% of sham, P = 0.0009) but was fully restored by TERLI (105 ± 16% of sham) (Figure 4).
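As a side note, the group-wise survival comparison described in the statistical analysis can be sketched in Python. The counts below use the group sizes reported above (6 of 12 deaths in the HAEMO group versus an assumed 0 of 6 in the sham group); the text does not state which groups entered the reported P = 0.0007 comparison, so this sketch illustrates the method rather than reproducing that value:

```python
# Minimal sketch of a Fisher's exact comparison of deaths at 120 minutes.
# Counts are taken from the Results; the sham group is assumed to have had
# no deaths, and the pairing shown (HAEMO vs sham) is illustrative only.
from scipy.stats import fisher_exact

#            died  survived
haemo = [6, 6]   # HAEMO: 6 deaths among 12 animals
sham = [0, 6]    # sham: 0 deaths among 6 animals (assumed)

odds_ratio, p_value = fisher_exact([haemo, sham])
print(f"HAEMO vs sham deaths at 120 min: P = {p_value:.3f}")
```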
Discussion

Haemorrhagic shock can result in global cerebral hypoxia caused by hypovolaemia and hypotension, and haemodynamic resuscitation must restore CPP in order to prevent ischaemic injury within the brain [6][7][8][26]. The results of the present study indicate that early treatment with terlipressin can recover CPP after haemorrhagic shock and that the underlying mechanisms include regulation of water and Na+ channels, inhibition of oxidative stress and a decrease in apoptotic signalling within the brain. Survival at 120 minutes after haemorrhagic shock was similar among groups, with the exception of the HAEMO group. However, whereas therapy with TERLI provided outcomes superior to LR with regard to measures of cerebral damage, it provided inferior outcomes with regard to systemic and peripheral measures of haemodynamics and tissue perfusion.

As expected, CPP was not preserved at a blood pressure below the cerebral autoregulation threshold [27], and this was followed by a reduction in PbtO2 [8,28,29]. In the non-treated animals, these derangements were associated with upregulation of AQP4 and NKCC1. These proteins play an important role in the formation of cellular oedema in the brain by regulating water and Na+ transport through the sealing junctions of the blood-brain barrier in response to cerebral ischaemia [10][30][31][32]. AQP4 acts by increasing water transport mainly in the pericapillary foot processes of astrocytes [9], and NKCC1 acts by increasing secretion of Na+, Cl− and water through an intact blood-brain barrier into the brain [10]. These findings therefore suggest that cerebral hypoperfusion was followed by ischaemic lesions in the HAEMO group.

Cerebral hypoperfusion might have contributed to an accumulation of lipid peroxidation products, as reflected by the increased levels of TBARS, which might in turn have induced a compensatory increase in the expression of MnSOD [12,33,34]. Oxygen free radicals exert their pathophysiological effects by directly attacking lipids and proteins in biological membranes, which can cause cellular dysfunction and induce apoptotic cell death [35]. Indeed, in the present study, the exposure of the brain to a high level of oxidative stress following haemorrhagic shock was associated with a marked shift in the Bcl-x/Bax ratio, indicating a loss of antiapoptotic ability [36]. Bcl-x and Bax are proteins that play an important role in determining the relative sensitivity of neuronal subpopulations to ischaemia. Accordingly, previous studies have shown that haemorrhagic shock can induce significant oxidative stress in the brain [12,37] and that cerebral ischaemia can decrease the Bcl-x/Bax ratio [38]. Furthermore, other markers of cerebral cellular damage have been described following haemorrhagic shock in other studies, such as an increased level of glycerol in brain tissue [26] and increased plasma levels of S100B [7], findings which support the present results.

The direct vasoconstrictive effect of terlipressin, reflected by increased MAP and SVRI, prevented an improvement in CI but probably allowed redistribution of blood flow towards the cerebral circulation, leading to the restoration of CPP and PbtO2. Some of these effects cannot be differentiated from the common properties of any vasopressor, but it has been suggested that terlipressin can induce a selective vasoconstrictor effect according to the distribution of V1 vasopressin receptors.
This hypothesis is supported by previous studies in which researchers used norepinephrine in models of haemorrhagic shock and observed no improvement in CPP or oxygenation [39,40]. Furthermore, another study showed that vasopressin, the natural terlipressin analogue, resulted in a significantly greater increase in CPP compared with norepinephrine [41]. As in the TERLI group, an increased SVRI was also found in the non-treated animals; however, this was not accompanied by an increase in MAP in the HAEMO group. The increase in MAP could have restored cerebral autoregulation in the animals treated with terlipressin, which might explain the recovery in CPP and PbtO2. Another explanation for the recovery in CPP is the higher PaCO2 compared with the HAEMO group. Because PaCO2 has a linear positive correlation with cerebral blood flow [42], it could account for more cerebral vasodilation and hence better perfusion compared with the non-treated animals. However, it is unclear whether terlipressin also acted directly via V1 vasopressin receptors within the brain [43], which might likewise explain the recovery in ICP and, in turn, CPP. Moreover, the expression levels of both AQP4 and NKCC1 were restored in animals treated with terlipressin, which also supports the finding that cerebral perfusion was recovered.

Despite the increase in SVRI, the fact that lactate, O2ER and SvO2 were not significantly different between groups suggests that terlipressin did not impair peripheral perfusion compared with the HAEMO group. In fact, BE was less negative in the TERLI group. However, compared with treatment with LR, terlipressin resulted in inferior outcomes with regard to haemodynamic measures such as CI, RAP, SVRI and PVRI, which might explain the death of one animal in the TERLI group. Consistent with the present data, an improvement in CPP has also been described in patients with persistent arterial hypotension and acute liver failure [17], traumatic brain injury [23] and septic shock [22] who were successfully treated with terlipressin.

In the present study, systemic hypoperfusion could have accounted for unreleased, and thus undetected, products of oxidative stress within the brain, which could have caused cell damage. However, if that were the case, PbtO2 would not have recovered, unless the mitochondria were incapable of using oxygen, allowing an increased availability of oxygen within the tissue. As mitochondrial function was not assessed in the present study, this explanation remains speculative. Oxidative damage would also probably be associated with inflammation, but this was not supported by a previous study in which researchers found an improved inflammatory cytokine profile in rats treated with terlipressin compared with LR [14]. Nonetheless, the outcome was associated with an increase in the Bcl-x/Bax ratio, suggesting that if any cerebral oxidative injury or ischaemia was present, it probably was not sufficiently severe to trigger apoptotic signalling [36,38] within the brain in terlipressin-treated animals.

Treatment with LR, however, was followed by a discrepancy: the increase in PbtO2 was not accompanied by recovery of CPP. This discrepancy can be attributed to the systemic third-spacing of crystalloids, which would also be consistent with the decline in MAP over the course of 120 minutes [14]. This explanation is supported by the overexpression of AQP4 in the LR group, which might indicate a compensatory mechanism to eliminate excess water within the brain [44].
An increase in brain water content can also explain the increase in ICP, which, in turn, is detrimental to the restoration of CPP. This finding is also in line with the decreased blood Na+ levels in the group treated with LR compared with the sham group. Another hypothesis is that a decrease in blood viscosity after intravascular volume expansion, despite some differences in ICP, could increase cerebral blood flow [45] and thus explain the similar PbtO2 values in the LR and TERLI groups. The LR group had the largest increase in ICP, but whether this induced the overexpression of the brain tissue markers of water balance, oxidative stress and apoptosis is unknown. Regardless of the mechanism(s) underlying the failure of full recovery of CPP in the LR group, the markers of oxidative stress, TBARS and MnSOD, were overexpressed in animals treated with LR. The fluid infusion could have carried overproduced reactive oxygen species (ROS) throughout the tissue, initiating a postischaemic reperfusion injury [36,46,47]. ROS act via inflammatory pathways that can ultimately decrease vascular resistance [35], allowing for an increase in brain volume that could be partly responsible for the unrecovered CPP in the LR group. In fact, researchers in a previous study found an increased proinflammatory cytokine profile and severe hypotension (40 mmHg) in rats treated with LR after haemorrhagic shock [14]. This oxidative stress could have reduced the antiapoptotic trend of the Bcl-x/Bax ratio in the brain during postischaemic reperfusion [36,38]. These alterations in the expression of Bcl-x and Bax may indicate that the mitochondria were dysfunctional, as these proteins are part of the intrinsic, mitochondria-related apoptotic pathway. Therefore, similar to the considerations described above for the terlipressin-treated animals, one hypothesis is that the increased PbtO2 could have been caused by an increased availability of oxygen, because dysfunctional mitochondria are not capable of using the oxygen available in the tissue.

Some limitations of this study should be noted. First is the short observation time, which was chosen because the experiment was designed to determine the early cerebral effects observed during prehospital care rather than long-term functional neurological outcome or correlation with brain histopathology. Second, PbtO2 was measured locally with a probe placed in the cerebral cortex, and therefore global ischaemia caused by a heterogeneous distribution of PbtO2 may have been underestimated with regard to the brain region analysed. Third, cerebral blood flow was not measured, because the purpose of the present study was to evaluate changes in CPP and oxygenation. In addition, the anaesthetics used may be cerebroprotective and may have different effects on ICP and MAP, but these effects were minimised by including a sham group and a non-treated haemorrhagic shock group, both of which were subjected to the same anaesthetic protocol as the treated groups. Furthermore, differences between the native vasopressins of pigs (lysine vasopressin) and humans (arginine vasopressin) might have resulted in a different haemodynamic response to terlipressin. However, these differences do not interfere with the conclusions of our study, because the study was aimed at investigating differences between groups and changes over time rather than at presenting absolute values.
Finally, we used only terlipressin as a vasopressor, and therefore we are unable to report whether different vasopressors would have yielded other results.

Conclusions

In our model, early treatment with terlipressin was effective at restoring CPP and at preventing the dysregulation of water balance and of oxidative and apoptotic markers within the brain following haemorrhagic shock. These results indicate that the role of this pressor agent in brain perfusion during haemorrhagic shock requires further investigation.

Key messages

Early treatment with terlipressin recovered cerebral perfusion pressure and brain tissue oxygen tension after haemorrhagic shock in pigs.

Terlipressin was effective in normalising cerebral markers of water balance, oxidative damage and apoptosis after haemorrhagic shock.

These cerebral improvements were observed for at least 2 hours after haemorrhagic shock in animals treated with terlipressin.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

KKI and DAO conceived the study, designed the trial, obtained research funding, collected, analysed and interpreted the data, and drafted the manuscript and contributed substantially to its revision. ATCS, ESB, LUCC, TRS and MHH collected, analysed and interpreted the data and drafted the manuscript and contributed substantially to its revision. LCA and LMSM conceived the study, designed the trial, obtained research funding, supervised the conduct of the trial and data collection, provided senior advice on study design and statistical analysis, and drafted the manuscript and contributed substantially to its revision. JOCA contributed substantially to the analysis and interpretation of data, provided senior advice on study design and statistical analysis, and drafted the manuscript and contributed substantially to its revision. AD and KJS analysed and interpreted data, provided senior advice on study design and contributed substantially to manuscript revision. JARF supervised the conduct of the trial and data collection, contributed to data analysis and interpretation, provided senior advice on study design and statistical analysis, and drafted the manuscript and contributed substantially to its revision. KKI takes responsibility for the article as a whole. All authors read and approved the final manuscript.